% arXiv:0808.1388
\section{Introduction}\label{intro}
Recent experiments at RHIC have shown that
in high-energy heavy-ion collisions a strongly coupled
medium of deconfined quarks and gluons is
produced \cite{Adams:2005dq}.
This medium is opaque to hard-scattered partons, which lose energy
as they traverse it, modifying their subsequent
fragmentation \cite{Adler:2002xw}.
This fragmentation has been studied using azimuthal
correlations of hadrons with large transverse momentum ($p_{T}$).
Due to the large particle multiplicities observed in heavy-ion collisions, our
current method of measuring jet-like correlations is via di-hadron correlations
\cite{Adams:2006yt}.
To characterise parton energy loss, we would like to measure the
fragment distribution of hadrons in jets. So far, di-hadron correlations
have been used for this purpose, since the large background of soft
particles produced in heavy-ion collisions makes it difficult to
reconstruct jets directly. In these measurements, the transverse
momentum of a trigger particle, $p_{T}^{trig}$, is used as a proxy for the jet
energy, $E_{T}^{jet}$. In this paper, we present a new method, using a cluster of
multiple high-$p_{T}$ hadrons as a trigger. Multi-hadron clusters may provide
a better measure of the jet energy than the leading-particle $p_{T}$.
\section{Experimental Setup}\label{techno}
\begin{figure}[!b]
\centering
\includegraphics[width=1.0\textwidth]{fig-1.eps}
\caption{Background subtracted azimuthal distributions
for di-hadron triggers (left) and multi-hadron triggers (right) for
$12 < p_{T}^{trig} < 15$ GeV/$c$ and
$4.0 < p_{T}^{assoc} < 5.0$ GeV/$c$. A minimum
secondary seed of 3.0 GeV/$c$ is used.}
\label{dNdDphi}
\end{figure}
Approximately 24M Au+Au events at $\sqrt{s_{NN}} = 200$ GeV
are used in this study, taken from the data collected
during the year-4 run at RHIC. The 0--12\% most central
events are selected via STAR's Zero Degree Calorimeters.
Details on the triggering and particle reconstruction are
discussed elsewhere \cite{Ackermann:2002ad}.
\section{Analysis and discussion}
Charged tracks from primary vertices are used to construct
multi-hadron and di-hadron azimuthal distributions.
The tracks are selected within the pseudo-rapidity range
of $|\eta| < 1$. The uncorrelated background is removed
using the zero-yield-at-minimum (ZYAM) method \cite{Ajitanand:2005jj}.
As elliptic flow ($v_{2}$) contributes less than a 1\% modulation of the background
in the ranges selected for $p_{T}^{trig}$ and $p_{T}^{assoc}$,
while the signal-to-background ratio is much larger than 1\%,
the elliptic flow modulation is considered negligible in this analysis.
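With the flow modulation neglected as argued above, the ZYAM subtraction reduces to removing a constant background fixed by the minimum of the $\Delta\phi$ distribution. A minimal sketch (our own illustration, not the STAR analysis code):

```python
import numpy as np

def zyam_subtract(dphi_yield):
    """Zero-yield-at-minimum: subtract a constant background equal to the
    minimum of the azimuthal correlation histogram (flow modulation neglected)."""
    dphi_yield = np.asarray(dphi_yield, dtype=float)
    return dphi_yield - dphi_yield.min()
```

On a toy correlation with a flat pedestal plus near-side and away-side Gaussian peaks, the subtraction leaves only the two peaks.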
\begin{figure}[!b]
\centering
\includegraphics[width=1.0\textwidth]{fig-2.eps}
\caption{Radial distributions of triggers with associated tracks
from the same event (white histograms)
and from different events (hatched histograms).
Panels from left to right show minimum secondary seed
cuts of 2.0, 3.0, and 4.0 GeV/$c$ respectively.}
\label{randomClusters}
\end{figure}
When forming multi-hadron triggers, all
tracks which pass the track quality cuts with $p_{T} > 5.0$ GeV/$c$
are collected as ``primary seeds''. Then within a cone radius
($r = \sqrt{\Delta\phi^{2}+\Delta\eta^{2}}$) of 0.3,
all ``secondary seeds''
which fall above a minimum $p_{T}$ cut are collected. Minimum
secondary seed cuts of 2, 3, and 4
GeV/$c$ have been used for a systematic study.
Next, the sum of the primary and secondary seeds is taken to
be the trigger $p_{T}$. To illustrate, a multi-hadron trigger of 12 GeV/$c$ might be a combination
of a 6 GeV/$c$ primary seed and two secondary seeds of 3 and 3 GeV/$c$ each, while in the
standard di-hadron analysis \cite{Adler:2002tq}, the trigger would be a single hadron
with $p_{T}$ = 12 GeV/$c$.
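The cluster construction just described can be sketched as follows (a minimal illustration with hypothetical track tuples; the thresholds follow the text, and secondary seeds at exactly the cut value are accepted so that the 12 GeV/$c$ example above qualifies):

```python
import math

def cone_r(t1, t2):
    """r = sqrt(dphi^2 + deta^2), with dphi wrapped into (-pi, pi]."""
    dphi = math.atan2(math.sin(t1[1] - t2[1]), math.cos(t1[1] - t2[1]))
    return math.hypot(dphi, t1[2] - t2[2])

def multi_hadron_triggers(tracks, primary_min=5.0, secondary_min=3.0, r_max=0.3):
    """tracks: list of (pt, phi, eta). Returns (trigger_pt, cluster) per primary seed."""
    triggers = []
    for seed in tracks:
        if seed[0] <= primary_min:
            continue  # primary seeds require pt > 5.0 GeV/c
        cluster = [t for t in tracks if t is not seed
                   and t[0] >= secondary_min and cone_r(seed, t) < r_max]
        # the trigger pt is the scalar sum of primary and secondary seed pt
        triggers.append((seed[0] + sum(t[0] for t in cluster), [seed] + cluster))
    return triggers
```

For a 6 GeV/$c$ seed with two nearby 3 GeV/$c$ tracks, this yields a single 12 GeV/$c$ trigger, as in the example above.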
With the multi-hadron triggers defined, azimuthal difference distributions are
calculated between the primary seed in the cone and associated tracks with $p_{T}$ greater than
the minimum secondary seed $p_{T}$ cut. Representative distributions are shown in Figure \ref{dNdDphi}.
For the multi-hadron triggers there is a near-side bias
due to the algorithm, which artificially enhances the yield.
With these distributions, recoil (away-side) yields are extracted and studied for various $p_{T}^{trig}$ bins.
The multi-hadron cluster algorithm admits random combinations:
the clusters contain a combinatorial background in which a seed particle
from a jet is combined with one or more secondary seeds from the
underlying soft event. To study this background,
the radial distributions of primary seeds for two different cases are constructed:
with associated tracks in the same event and with associated tracks in
different events. These distributions are shown in Figure \ref{randomClusters}
with the open histograms showing same event correlations and
the grey filled histograms showing correlations from mixed events, taking the seed track and the
secondary seeds from different events. The background histograms have been scaled to the signal
histograms. The secondary seed $p_{T}$ increases from left to right
with $p_{T} >$ 2.0, 3.0, and 4.0 GeV/$c$ and the signal-to-background
increases from 0.2 to 0.7 to 2.0 respectively.
A radius of 0.3, along with a minimum secondary seed $p_{T}$ cut
greater than 3.0 GeV/$c$, leads to a reasonable signal-to-background ratio for this study.
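The same-event versus mixed-event comparison can be sketched as follows (our own toy construction with hypothetical event tuples; here the background histogram is crudely normalized to the same total counts as the signal, whereas the exact scaling used in the analysis is not specified here):

```python
import numpy as np

def radial_hists(events, bins):
    """Same-event vs mixed-event seed-secondary distance histograms.
    events: list of (seed, secondaries), seed = (phi, eta), secondaries = [(phi, eta), ...]."""
    def r(a, b):
        dphi = np.arctan2(np.sin(a[0] - b[0]), np.cos(a[0] - b[0]))
        return np.hypot(dphi, a[1] - b[1])
    same = [r(seed, t) for seed, tracks in events for t in tracks]
    mixed = [r(events[i][0], t)
             for i in range(len(events)) for j in range(len(events))
             if i != j for t in events[j][1]]
    h_same, _ = np.histogram(same, bins=bins)
    h_mix, _ = np.histogram(mixed, bins=bins)
    scale = h_same.sum() / max(h_mix.sum(), 1)  # crude total-count normalization
    return h_same, h_mix * scale
```

In a toy sample where secondary seeds cluster around their own event's primary seed, the same-event histogram peaks at small $r$ while the mixed-event histogram is broad, mimicking the combinatorial background shape.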
Future work will include background-subtracted yields, calculated with an
estimate of the background trigger yield.
\begin{figure}[!hb]
\centering
\includegraphics[scale=1.0]{fig-3.eps}
\caption{Recoil yield per trigger for three $p_{T}$ bins:
$10 < p_{T}^{trig} < 12$ GeV/$c$ (circles),
$12 < p_{T}^{trig} < 15$ GeV/$c$ (squares),
and $15 < p_{T}^{trig} < 18$ GeV/$c$ (triangles).
Data is presented on the left (Au+Au), Pythia predictions
are presented on the right (p+p).
A minimum secondary seed cut of $p_{T} >$ 3.0 GeV/$c$
is applied.}
\label{away_yields_30}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=1.0]{fig-4.eps}
\caption{Recoil yield per trigger for three $p_{T}$ bins:
$10 < p_{T}^{trig} < 12$ GeV/$c$ (circles),
$12 < p_{T}^{trig} < 15$ GeV/$c$ (squares),
and $15 < p_{T}^{trig} < 18$ GeV/$c$ (triangles).
Data is presented on the left (Au+Au), Pythia predictions
are presented on the right (p+p).
A minimum secondary seed cut of $p_{T} >$ 4.0 GeV/$c$
is applied.}
\label{away_yields_40}
\end{figure}
Figures \ref{away_yields_30} and \ref{away_yields_40}
show recoil (away-side) yields for three $p_{T}$ bins: $10 < p_{T}^{trig} < 12$ GeV/$c$,
$12 < p_{T}^{trig} < 15$ GeV/$c$, and $15 < p_{T}^{trig} < 18$ GeV/$c$ respectively.
Figure \ref{away_yields_30} shows a comparison between
multi-hadron (open symbols) and di-hadron (solid symbols)
triggers with a minimum secondary
seed cut of $3.0$ GeV/$c$ for the data (left panels)
and Pythia (right panels).
The same comparisons are shown in Figure \ref{away_yields_40} but for a higher
minimum secondary seed cut of $4.0$ GeV/$c$.
The associated per-trigger yields for both single-hadron and multi-hadron triggers
in Figures \ref{away_yields_30} and \ref{away_yields_40}
are similar, suggesting the selection of a similar underlying jet-energy distribution by
both methods. The same analysis performed on Pythia events is shown in the right-hand
panels of Figures \ref{away_yields_30} and \ref{away_yields_40}.
In the Pythia events, the di-hadron analysis and multi-hadron triggered analysis also
give similar results, although the per-trigger yields
are generally higher than measured in the data.
\section{Conclusions}\label{concl}
This paper has presented first results on multi-hadron triggers,
investigated as a step toward full jet reconstruction in heavy-ion collisions.
A cone radius of 0.3, coupled with a minimum secondary seed cut greater than 3.0 GeV/$c$, leads
to a reasonable signal-to-background ratio of 0.7.
Moreover, the away-side yields from multi-hadron correlations
and from di-hadron measurements are consistent.
This effect is also reproduced in Pythia simulations.
Further analysis of the Pythia events
to compare the underlying jet energy selections for di-hadron analysis
and multi-hadron triggered analysis is ongoing.
% arXiv:2101.09875
\section{Introduction}
\begin{table}[t]
\small
\caption{
\label{tab:notations}
List of default notations
} \vspace{-4pt}
\begin{minipage}[t]{0.475\linewidth}
\begin{tabular}{ p{0.5cm} p{5.9cm} }
\hline
${\cal M}$ & $d$-dimensional manifold in $\R^D$ \\
$p$ & data sampling density on ${\cal M}$ \\
$\Delta_{\cal M}$ & Laplace-Beltrami operator, also as $\Delta$ \\
$\mu_k$ & population eigenvalue of $-\Delta$ \\
$\psi_k$ & population eigenfunctions of $-\Delta$ \\
$\lambda_k$ & empirical eigenvalue of graph Laplacian\\
$v_k$ & empirical eigenvector of graph Laplacian \\
$\nabla_{\cal M}$ & manifold gradient, also as $\nabla$ \\
${ H}_t$ & manifold heat kernel \\
$Q_t$ & semi-group operator of manifold diffusion, $Q_t = e^{t \Delta}$ \\
$X$ & dataset points used for computing $W$ \\
$N$ & number of samples in $X$ \\
$\epsilon$ & kernel bandwidth parameter \\
$K_\epsilon$ & graph affinity kernel,
$W_{ij} = K_\epsilon(x_i,x_j)$,
$K_\epsilon(x,y)=\epsilon^{-d/2}h( \frac{\| x-y\|^2}{\epsilon})$ \\
$h$ & a function $[0,\infty) \to \R$ \\
$m_0$ & $m_0[h]:=\int_{\R^d} h(|u|^2) du$ \\
$m_2$ & $m_2[h]:= \frac{1}{d}\int_{\R^d} |u|^2 h(|u|^2) du$ \\
$W$ & kernelized graph affinity matrix \\
$D$ & degree matrix of $W$, $D_{ii} = \sum_{j=1}^N W_{ij}$\\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\begin{tabular}{ p{0.5cm} p{7.1cm} }
\hline
$L_{un}$ & un-normalized graph Laplacian \\
$L_{rw}$ & random-walk graph Laplacian \\
$ E_N$ & graph Dirichlet form \\
$ \rho_X$ & function evaluation operator, $\rho_X f = \{ f(x_i) \}_{i=1}^N$ \\
$\tilde{W}$ & density-corrected affinity matrix, $\tilde{W} = D^{-1}W D^{-1}$ \\
$\tilde{D}$ & degree matrix of $\tilde{W}$\\
\hline
\vspace{1pt}
\end{tabular}
%
\begin{tabular}{ p{0.5cm} p{7.1cm} }
\hline
\multicolumn{2}{c}{Asymptotic Notations} \\
\hline
$O(\cdot)$ & $f = O(g)$: $|f| \le C |g|$ in the limit, $C> 0$,
$O_a(\cdot)$ declaring the constant dependence on $a$ \\
$\Theta(\cdot)$ & $f = \Theta(g)$: for $f$, $g \ge 0$, $C_1 g \le f \le C_2 g$ in the limit, $C_1, C_2 >0$ \\
$\sim$ & $f \sim g$ same as $f = \Theta(g)$ \\
$o(\cdot)$ & $f = o(g)$: for $g > 0$, $|f|/g \to 0$ in the limit \\
$\Omega(\cdot)$ & $f = \Omega(g)$: for $f, g > 0$, $f/g \to \infty$ in the limit \\
$\tilde{O}(\cdot)$ & $O(\cdot)$ multiplied by a factor involving a log, defined each time it is used in the text \\
\multicolumn{2}{p{7.6cm}}{
~ When the subscript $_{a}$ is omitted, the constants are absolute.
~ $f = O( g_1, g_2) $ means that $f =O(|g_1|+|g_2| )$.
}\\
\hline
\end{tabular}
\end{minipage}
\end{table}
Graph Laplacian matrices built from data samples are widely used in data analysis and machine learning.
Earlier works include Isomap \cite{balasubramanian2002isomap},
Laplacian Eigenmap \cite{belkin2003laplacian},
Diffusion Map \cite{coifman2006diffusion,talmon2013diffusion},
among others.
Apart from being a widely-used unsupervised learning method for clustering analysis and dimension reduction (see, e.g., the review papers \cite{van2009dimensionality, talmon2013diffusion}),
graph Laplacian methods also drew attention via the application in semi-supervised learning \cite{nadler2009semi,el2016asymptotic,slepcev2019analysis,flores2019algorithms}.
Under the manifold setting,
data samples are assumed to lie on low-dimensional manifolds embedded in a possibly high-dimensional ambient space.
A fundamental problem is convergence of the graph Laplacian matrix to the manifold Laplacian operator in the large sample limit.
The operator point-wise convergence has been intensively studied and established in a series of works \cite{hein2005graphs,hein2006uniform,belkin2007convergence,coifman2006diffusion,singer2006graph},
and extended to variant settings,
such as different kernel normalizations \cite{marshall2019manifold,wormell2020spectral}
and general class of kernels \cite{ting2011analysis,berry2016variable,cheng2020convergence}.
The eigen-convergence,
namely how the empirical eigenvalues and eigenvectors converge to the population eigenvalues and eigenfunctions of the manifold Laplacian,
is a more subtle issue and has been studied in \cite{belkin2007convergence,von2008consistency,burago2013graph,wang2015spectral,singer2016spectral,eldridge2017unperturbed} (among others) and recently in \cite{trillos2020error,calder2019improved,dunson2019diffusion,calder2020lipschitz}.
The current work proves eigen-convergence,
specifically the consistency of eigenvalues and of eigenvectors in 2-norm,
for finitely many low-lying eigenvalues of the graph Laplacian constructed with a Gaussian kernel from i.i.d. sampled manifold data.
The result covers the un-normalized and random-walk graph Laplacian when data density is uniform,
and the density-corrected graph Laplacian (defined below) with non-uniformly sampled data.
For the latter, we also prove new point-wise and Dirichlet form convergence rates as an intermediate result.
We overview the main results in Section \ref{subsec:overview} in the context of literature,
which are also summarized in Table \ref{tab:theory-summary}.
The framework of our work follows the variational principle formulation of eigenvalues using the graph and manifold Dirichlet forms.
A Dirichlet form-based approach to prove graph Laplacian eigen-convergence was first carried out in \cite{burago2013graph}
under a non-probabilistic setting.
\cite{trillos2020error,calder2019improved} extended the approach under the probabilistic setting, where $x_i$ are i.i.d. samples,
using optimal transport techniques.
Our analysis follows the same form-based approach and differs from previous works in the following aspects.
Let $\epsilon$ be the (squared) kernel bandwidth parameter corresponding to diffusion time,
$N$ the number of samples,
and $d$ the intrinsic dimensionality of the manifold.
\vspace{5pt}
$\bullet$ Leveraging the observation in \cite{coifman2006diffusion,singer2006graph}
that the bias error in the point-wise rate of the graph Laplacian can be improved from $O(\sqrt{\epsilon})$ to $O(\epsilon)$ using a $C^2$ kernel function,
we show that
the improved point-wise rate of the Gaussian-kernelized graph Laplacian translates into a better eigen-convergence rate
than is obtained with compactly supported kernels (e.g., an indicator function).
\vspace{5pt}
$\bullet$ We show that the eigenvalue convergence rate
matches the Dirichlet form convergence rate in \cite{cheng2020convergence},
which is better than the point-wise rate.
This leads to faster convergence of eigenvalues than of eigenvectors,
and in particular,
the eigenvalue error rate equals the square of the eigenvector rate
under the regime $\epsilon \sim N^{-1/(d/2+2)}$, both up to a factor of a certain power of $\log N$.
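To spell this out (a derivation sketch, using the form rate and point-wise rate defined in the caption of Table \ref{tab:theory-summary}): when $\epsilon^{d/2+2} = \Theta( \frac{\log N}{N})$,
\begin{align*}
\sqrt{\frac{\log N}{N \epsilon^{d/2}}} &= \sqrt{\epsilon^{2}} = \epsilon = \tilde{O}\big(N^{-\frac{1}{d/2+2}}\big) \quad \text{(form rate, eigenvalues)},\\
\sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} &= \sqrt{\epsilon} = \tilde{O}\big(N^{-\frac{1}{d+4}}\big) \quad \text{(point-wise rate, eigenvectors)},
\end{align*}
so the eigenvalue error rate is the square of the eigenvector error rate, up to $\log N$ factors.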
\vspace{5pt}
$\bullet$ In obtaining the initial crude eigenvalue LB, called Step 1 below,
we develop a short proof using manifold heat kernel to define the ``interpolation mapping'',
which constructs from a vector $v$ a smooth function $f$ on ${\cal M}$.
The manifold variational form of $f$, defined via the heat kernel,
naturally relates to the graph Dirichlet form of $v$ when the graph affinity matrix is constructed using Gaussian kernel.
The approach of heat kernel interpolation for variational principle eigen-convergence analysis,
to the best knowledge of the authors,
has not been explored in the literature.
\vspace{5pt}
Towards the eigen-convergence, our work also recaps and develops several intermediate results under weaker assumptions on the kernel function (i.e., not necessarily Gaussian),
including an improved point-wise convergence rate of the density-corrected graph Laplacian.
The density-corrected graph Laplacian, originally proposed in \cite{coifman2006diffusion},
is an important variant of the kernelized graph Laplacian
where the affinity matrix is $\tilde{W}=D^{-1}WD^{-1}$.
In applications, the data distribution $p$ is often not uniform on the manifold,
and then the standard graph Laplacian with $W$ recovers the Fokker-Planck operator (weighted Laplacian)
with measure $p^2$,
which involves a drift term depending on $\nabla_{\cal M} \log p$.
The density-corrected graph Laplacian, in contrast,
recovers the Laplace-Beltrami operator consistently when $p$ satisfies certain regularity condition,
and thus is useful in many applications.
In this work, we first prove the point-wise convergence and Dirichlet form convergence of the density-corrected graph Laplacian with $\tilde{W}$,
both matching those of the standard graph Laplacian,
and this can be of independent interest.
Then the eigen-consistency result extends to such graph Laplacians (with Gaussian kernel function),
also achieving the same rate as the standard graph Laplacian when $p$ is uniform.
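A minimal numerical sketch of this consistency (our own construction, with the Gaussian kernel, and with deterministic quantile points standing in for i.i.d. samples): on the unit circle with non-uniform density $p(\theta) \propto 1 + 0.5\sin\theta$, the low-lying spectrum of the density-corrected random-walk Laplacian, divided by $\epsilon$, still approximates the Laplace-Beltrami eigenvalues $0, 1, 1, \ldots$ of $S^1$:

```python
import numpy as np

def density_corrected_laplacian(X, eps, d):
    """(I - D~^{-1} W~)/eps with W~ = D^{-1} W D^{-1} and Gaussian-kernel W."""
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = (4*np.pi*eps)**(-d/2) * np.exp(-sq / (4*eps))
    deg = W.sum(axis=1)
    W_t = W / np.outer(deg, deg)                 # density-corrected affinity
    deg_t = W_t.sum(axis=1)
    return (np.eye(len(X)) - W_t / deg_t[:, None]) / eps
```

Without the density correction, the non-uniform sampling would instead bias the spectrum toward the Fokker-Planck operator mentioned above.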
Below, we give an overview of the theoretical results and end the introduction with a further literature review.
In the rest of the paper,
Section \ref{sec:prelim} gives set-up and preliminaries needed in the analysis.
Sections \ref{sec:step0}-\ref{sec:step23} develop the eigen-convergence of standard graph Laplacians,
both the un-normalized and the normalized (random-walk) ones.
Section \ref{sec:density-corrected} extends to density-corrected graph Laplacian,
and
Section \ref{sec:exp} gives numerical results.
We discuss possible extensions in the last section.
{\bf Notations}.
Default and asymptotic notations like $O(\cdot)$, $\Omega(\cdot)$, $\Theta(\cdot)$, are listed in Table \ref{tab:notations}.
In this paper, we treat constants which are determined by $h$, ${\cal M}$, $p$ as absolute ones, including the intrinsic dimension $d$.
We mainly track the number of samples $N$ and the kernel diffusion time parameter $\epsilon$,
and we may emphasize the constant dependence on $p$ or ${\cal M}$ in certain circumstances, using the subscript notation like $O_{{\cal M}}(\cdot)$.
All constant dependence can be tracked in the proof.
\subsection{Overview of main results}\label{subsec:overview}
The current paper inherits the probabilistic manifold data setting, namely,
the dataset $\{x_i\}_{i=1}^N$ consists of i.i.d. samples drawn from a distribution on ${\cal M}$ with density $p$ satisfying the following assumption:
\begin{assumption}[Smooth ${\cal M}$ and $p$]\label{assump:M-p}
(A1) ${\cal M}$ is a $d$-dimensional compact connected
$C^{\infty}$ manifold (without boundary)
isometrically embedded in $\mathbb{R}^{D}$.
(A2)
$p\in C^{\infty}({\cal M})$ and $p$ is uniformly bounded from below and above, that is, $\exists p_{min}, \, p_{max} > 0$ s.t.
\[ 0< p_{min} \le p(x) \le p_{max} < \infty, \quad\forall x\in{\cal M}.
\]
\end{assumption}
\noindent
Suppose ${\cal M}$ is embedded via $\iota$, and when there is no danger of confusion, we use the same notation $x$ to denote $x\in {\cal M}$ and $\iota(x)\in \mathbb{R}^D$.
We have the measure space $({\cal M}, dV)$:
when ${\cal M}$ is orientable, $dV$ is the Riemannian volume form;
otherwise, $dV$ is the measure associated with the local volume form.
The smoothness of $p$ and ${\cal M}$ fulfills many application scenarios, and possible extensions to less regular ${\cal M}$ or $p$ are postponed.
Our analysis first addresses the basic case where $p$ is uniform on ${\cal M}$, i.e., $p = \frac{1}{\mathrm{Vol}({\cal M})}$, a positive constant.
For non-uniform $p$ as in (A2),
we adopt and analyze the density-corrected graph Laplacian
in Section \ref{sec:density-corrected}.
In both cases, the graph Laplacian recovers the Laplace-Beltrami operator $\Delta_{\cal M}$.
Below, we write $\Delta_{\cal M}$ as $\Delta$ and $\nabla_{\cal M}$ as $\nabla$.
Given $N$ data samples,
the {\it graph affinity} matrix $W$ and the {\it degree matrix} $D$ are defined as
\[
W_{ij} = K_{\epsilon}(x_i, x_j),
\quad
D_{ii} = \sum_{j = 1}^N W_{ij}.
\]
$W$ is real symmetric, typically $W_{ij} \ge 0$, and for the kernelized affinity matrix, $W_{ij} = K_\epsilon(x_i,x_j)$ where
\begin{equation}\label{eq:def-K-eps}
K_\epsilon(x,y) : = \epsilon^{-d/2} h(\frac{ \| x- y\|^2 }{\epsilon}),
\end{equation}
for a function $h: [0,\infty) \to \R$.
The parameter $\epsilon > 0$ can be viewed as the ``time'' of the diffusion process.
Some results in literature are written in terms of the parameter $\sqrt{\epsilon} > 0$,
which corresponds to the scale of the local distance $\| x - y \|$ such that $h(\frac{ \| x- y\|^2 }{\epsilon})$ is of $O(1)$ magnitude.
Our results are written with respect to the time parameter $\epsilon$,
which corresponds to the {\it squared} local distance length scale.
Our main result of graph Laplacian eigen-convergence
considers when the kernelized graph affinity is computed with
\begin{equation}\label{eq:def-h-gaussian}
h( \xi) = \frac{1}{( 4 \pi )^{d/2}} e^{- \xi/ 4}, \quad \xi \in [0, \infty),
\end{equation}
we call such $h$ the {\it Gaussian} one.
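As a toy illustration of these definitions (our own sketch, not code from the paper): on the unit circle $S^1$, where the eigenvalues of $-\Delta$ are $0, 1, 1, 4, 4, \ldots$, the low-lying spectrum of $(I - D^{-1}W)/\epsilon$ built with the Gaussian $h$ (for which $m_0[h] = 1$ and $m_2[h] = 2$, so that up to $O(\epsilon)$ bias the limiting factor $m_2/(2 m_0)$ equals one) approximates these values:

```python
import numpy as np

def gaussian_affinity(X, eps, d):
    """W_ij = eps^{-d/2} h(||x_i - x_j||^2/eps) with the Gaussian h,
    i.e. W_ij = (4 pi eps)^{-d/2} exp(-||x_i - x_j||^2 / (4 eps))."""
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    return (4*np.pi*eps)**(-d/2) * np.exp(-sq / (4*eps))

def graph_laplacians(W):
    """Un-normalized L_un = D - W and random-walk L_rw = I - D^{-1} W."""
    deg = W.sum(axis=1)
    L_un = np.diag(deg) - W
    L_rw = np.eye(len(W)) - W / deg[:, None]
    return L_un, L_rw
```

For $N = 200$ grid points and $\epsilon = 0.01$, the first nonzero eigenvalue of $L_{rw}/\epsilon$ is close to $\mu_1 = 1$ with multiplicity two, consistent with the convergence discussed in this paper.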
The Gaussian $h$ belongs to a larger family of differentiable functions:
\begin{assumption}[Differentiable $h$]\label{assump:h-C2-nonnegative}
(C1)
Regularity. $h$ is continuous on $[0,\infty)$, $C^2$ on $(0, \infty)$.
\\
(C2) Decay condition. $\exists a, a_k >0$, s.t., $ |h^{(k)}(\xi )| \leq a_k e^{-a \xi}$ for all $\xi > 0$,
$k=0, 1,2 $.
\\
(C3) {Non-negativity}. $h \ge 0$ on $[0, \infty)$. To exclude the case that $h \equiv 0$, assume $ \| h \|_\infty > 0 $.
\end{assumption}
\begin{table}[t]
\hspace{-10pt}
\begin{centering}
\scriptsize
\caption{
\label{tab:theory-summary}
Summary of theoretical results.
}
\vspace{3pt}
\begin{tabular}{ m{1.6cm} | m{1.5cm} m{1.5cm} | c | m{1.4cm} | m{2.3cm} | m{2.35cm} }
\hline
& \multicolumn{2}{c |}{$p$ uniform} & $p$ non-uniform & \multicolumn{2}{c|}{Needed assumptions } & \multirow{2}{*}{Error bound} \\
\cline{5-6}
& $L_{un}$ with $W$ & $L_{rw}$ with $W$ & $\tilde{L}_{rw}$ with $\tilde{W}$ & on $h$ & on $\epsilon$ ($\epsilon \to 0+$) & \\
\hline
\hspace{-5pt}\scriptsize{Eigenvalue UB} & Prop. \ref{prop:eigvalue-UB} & Prop. \ref{prop:eigvalue-UB-rw} & Prop. \ref{prop:eigvalue-UB-rw-density-correct} & Assump. \ref{assump:h-C2-nonnegative} & $\epsilon^{d/2} = \Omega( \frac{\log N}{N})$ & form rate \\
\hline
\hspace{-5pt}\scriptsize{Crude eigenvalue LB} & Prop. \ref{prop:eigvalue-LB-crude}& Prop. \ref{prop:eigvalue-LB-crude-rw} & Prop. \ref{prop:eigvalue-LB-crude-rw-density-correct}& & & $O(1)$\\
\hspace{-5pt}\scriptsize{Eigenvector convergence} & Prop. \ref{prop:step2} & \centering{-} & \centering{-} & Gaussian & $\epsilon^{d/2+2} > c_K \frac{\log N}{N}$ & point-wise rate\\
\hspace{-5pt}\scriptsize{Eigenvalue convergence} & Prop. \ref{prop:step3} & \centering{-} & \centering{-} & & & form rate\\
\hline
\multirow{4}{1.6cm}
{\hspace{-5pt}\scriptsize{Eigen-convergence}} &\multirow{4}{*}{ Thm. \ref{thm:refined-rates}} & \multirow{4}{*}{ Thm. \ref{thm:refined-rates-rw}} & \multirow{4}{*}{ Thm. \ref{thm:refined-rates-rw-density-correct} }
& \multirow{4}{*}{Gaussian} &$\epsilon^{d/2+3} \sim N^{-1}$ & $\lambda_k$ and $v_k$: $\tilde{O}(N^{-\frac{1}{d/2+3}})$ \\
\cline{6-7}
& & & & & $\epsilon^{d/2+2} =c \frac{\log N}{N}$, $c > c_K$ & $\lambda_k: \tilde{O}(N^{-\frac{1}{d/2+2}})$, $v_k: \tilde{O}(N^{-\frac{1}{d+4}})$ \\
\hline
\hspace{-5pt}Point-wise convergence & \multicolumn{2}{c|}{Thm. \ref{thm:pointwise-rate-C2h} \cite{singer2006graph,cheng2020convergence}$^{*}$} &Thm. \ref{thm:pointwise-rate-dencity-correct}& Assump. \ref{assump:h-C2-nonnegative}
& $\epsilon^{d/2+1} = \Omega( \frac{\log N}{N})$ & point-wise rate\\
\hspace{-5pt}Dirichlet form convergence & \multicolumn{2}{c|}{Thm. \ref{thm:form-rate} \cite{cheng2020convergence}$^{*}$} & Thm. \ref{thm:form-rate-density-correction} & Assump. \ref{assump:h-C2-nonnegative}
& $\epsilon^{d/2} = \Omega( \frac{\log N}{N})$ & form rate\\
\hline
\end{tabular}
\vspace{3pt}
{\scriptsize
In the last column,
the ``form rate'' is $O(\epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2}}})$,
and
the ``point-wise rate'' is $O(\epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}})$.
Convergence of the first $k_{max}$ eigenvalues and eigenvectors is considered, with $k_{max}$ fixed.
``$\lambda_k$:'' denotes the error for eigenvalue convergence, ``$v_k$:'' the error for eigenvector convergence (in 2-norm),
and $\tilde{O}( \cdot)$ indicates a possible additional factor of $(\log N)^\alpha$ for some $\alpha >0$.
In the 2nd (3rd) column, the eigenvector and eigenvalue convergences are proved in Thm. \ref{thm:refined-rates-rw} (Thm. \ref{thm:refined-rates-rw-density-correct})
and are not written as separate propositions.
$^{*}$The point-wise convergence and Dirichlet form convergence results of graph Laplacian with $W$
hold when $p$ satisfies Assump. \ref{assump:M-p}(A2), i.e., when $p$ is not uniform.
The form convergence rate may hold when $h$ is not differentiable, e.g., when $h = {\bf 1}_{[0,1)}$,
c.f. Remark \ref{rk:indicator-h-form-rate}.
}
\end{centering}
\end{table}
\noindent
Several important intermediate results, which can be of independent interest,
only require $h$ to satisfy Assumption \ref{assump:h-C2-nonnegative} or weaker,
including
\begin{itemize}[leftmargin=10pt]
\item[-]
Point-wise convergence of graph Laplacians,
which we call the {\it point-wise rate}.
\item[-]
Convergence of the graph Dirichlet form
$\frac{1}{\epsilon N^2}u^T (D-W) u$
applied to smooth manifold functions, i.e., $u = \{ f(x_i) \}_{i=1}^N$ for $f$ smooth on ${\cal M}$,
which we call the {\it form rate}.
\item[-]
The eigenvalue upper bound (UB), which matches the form rate.
\end{itemize}
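The graph Dirichlet form above can be probed numerically; the following toy check (our own, with the Gaussian $h$, for which $m_2[h] = 2$) evaluates $E_N(\rho_X f)$ for $f = \cos\theta$ on the uniformly sampled unit circle, where the population limit $\frac{m_2}{2}\int_{\cal M} |\nabla f|^2 p^2\, dV$ with $p = \frac{1}{2\pi}$ equals $\frac{1}{4\pi}$:

```python
import numpy as np

def graph_dirichlet_form(X, u, eps, d):
    """E_N(u) = (1/(eps N^2)) u^T (D - W) u, with the Gaussian kernel W."""
    N = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = (4*np.pi*eps)**(-d/2) * np.exp(-sq / (4*eps))
    deg = W.sum(axis=1)
    return (u @ (deg * u) - u @ (W @ u)) / (eps * N**2)
```

For $N = 400$ and $\epsilon = 0.01$, the discrepancy from $\frac{1}{4\pi} \approx 0.0796$ is within the $O(\epsilon)$ bias of the form rate.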
\noindent
A summary of results with needed assumptions is provided in Table \ref{tab:theory-summary}.
The point-wise rate and form rate of the standard graph Laplacian
only require differentiability and a decay condition on $h$, as originally assumed in \cite{coifman2006diffusion},
and do not even need Assumption \ref{assump:h-C2-nonnegative}(C3) (non-negativity).
In the literature,
the point-wise rate of the random-walk graph Laplacian $(I-D^{-1}W)$ with differentiable $h$ satisfying a decay condition was shown to be
$O(\epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}})$
in \cite{singer2006graph}.
The exposition in \cite{singer2006graph} was for Gaussian $h$, but the analysis therein extends directly to general $h$.
The form rate with differentiable $h$ was shown to be $O(\epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2}}})$
in \cite{cheng2020convergence} via a V-statistic analysis.
\cite{cheng2020convergence} also derived point-wise rate for both the random-walk and the un-normalized graph Laplacian $(D-W)$.
The analysis in \cite{cheng2020convergence}
was mainly developed for kernel with adaptive bandwidth,
and higher order regularity of $h$ ($C^4$ instead of $C^2$)
was assumed to handle the complication due to variable kernel bandwidth.
For the fixed-bandwidth kernel as in \eqref{eq:def-K-eps},
the analysis in \cite{cheng2020convergence} can be simplified to proceed under less restrictive conditions of $h$.
We include more details below when quoting these previous results.
Our analysis of the density-corrected graph Laplacian assumes $W_{ij} \ge 0$,
and our main eigen-convergence result needs $h$ to be Gaussian,
thus we include (C3) in Assumption \ref{assump:h-C2-nonnegative} to simplify the exposition.
As shown in Table \ref{tab:theory-summary},
the eigenvalue UB holds for general differentiable $h$,
while the initial crude eigenvalue LB (to be explained below)
and consequently the final eigenvalue and eigenvector convergence rate,
need $h $ to be Gaussian.
This difference between eigenvalue UB and LB analysis
is due to the subtlety of the variational principle approach in analyzing empirical eigenvalues.
To be more specific, by ``projecting'' the population eigenfunctions to vectors in $\R^N$ and using them as ``candidate'' eigenvectors in the variational form,
the form rate directly translates into a rate of eigenvalue UB (for fixed finitely many low-lying eigenvalues).
The eigenvalue LB,
however, is more difficult, as has been pointed out in \cite{burago2013graph}.
In \cite{burago2013graph} and following works taking the variational principle approach,
the LB analysis is by ``interpolating'' the empirical eigenvectors to be functions on ${\cal M}$.
Unlike the population eigenfunctions, which are known to be smooth,
the empirical eigenvectors have few properties that one can exploit,
and any regularity property of these discrete objects is usually non-trivial to obtain \cite{calder2020lipschitz}.
The interpolation mapping in \cite{burago2013graph} first assigns
a point $x_i$ to a Voronoi cell $V_i$,
assuming that $\{x_i\}_i$ forms an $\varepsilon$-net of ${\cal M}$ to begin with (a non-probabilistic setting),
and this maps a vector $u$ to a piece-wise constant function $P^* u $ on ${\cal M}$;
next, $P^* u $ is convolved with a kernel function which is compactly supported on a small geodesic ball,
and this produces ``candidate'' eigenfunctions,
whose manifold differential Dirichlet form is upper bounded by the graph Dirichlet form of $u$, up to an error,
through differential geometry calculations.
Under the probabilistic setting of i.i.d. samples,
\cite{trillos2020error} constructed the mapping $P^*$ using a Wasserstein-$\infty$ optimal transport (OT) map,
where the $\infty$-OT distance between the empirical measure $\frac{1}{N}\sum_i \delta_{x_i}$ and the population measure $p dV$ is bounded
by constructing a Voronoi tessellation of ${\cal M}$ when $d \ge 2$.
This led to an overall eigen-convergence rate of $\tilde{O}( N^{-1/2d})$ in \cite{trillos2020error}
when $h$ is compactly supported and satisfies certain regularity conditions and $d \ge 2$,
the $\tilde{O}$ indicating a possible factor of a certain power of $\log N$.
A typical example is when $h$ is an indicator function $h = {\bf1}_{[0,1)}$,
which is called ``$\varepsilon$-graph'' in computer science literature ($\varepsilon$ corresponds to $\sqrt{\epsilon}$ in our notation).
The approach was extended to $k$NN graphs in \cite{calder2019improved},
where the rate of eigenvalue and $2$-norm eigenvector convergence
was also improved to match the point-wise rate of the $\varepsilon$-graph or $k$NN graph Laplacians,
leading to a rate of $\tilde{O}(N^{- 1/(d + 4)})$ when
${\epsilon}^{d/2+2} = \Omega (\frac{\log N }{N})$.
The same rate was shown for $\infty$-norm consistency of eigenvectors in \cite{calder2020lipschitz},
combined with Lipschitz regularity analysis of empirical eigenvectors using advanced PDE tools.
In the current work,
we take a different approach for the interpolation mapping in the eigenvalue LB analysis, which is based on manifold heat kernels.
Our analysis makes use of the fact that at short time and on small local neighborhoods, the heat kernel ${ H}_t(x,y)$ can be approximated by
\begin{equation}\label{eq:def-Gt}
G_{t} (x,y) :=
\frac{1}{( 4 \pi t )^{d/2}} e^{- \frac{ d_{\cal M}( x,y)^2 } { 4 t }},
\end{equation}
and consequently by $K_t(x,y)$ when $h$ is Gaussian as in \eqref{eq:def-h-gaussian}.
The first approximation $H_t \approx G_t$ follows from classical results on elliptic operators on Riemannian manifolds, c.f. Theorem \ref{thm:Heat-short-time}.
Next, $G_t \approx K_t$ because $K_t$ replaces geodesic distance $d_{\cal M}( x,y)$
with Euclidean distance $\| x-y\|$ in $G_t$,
and the two locally match by $d_{\cal M}( x,y) = \| x- y\| + O(\| x- y\|^3) $.
These estimates allow us to construct interpolated $C^\infty({\cal M})$ functions $I_r [v]$ from a discrete vector $v \in \R^N$ by convolving with the heat kernel at time $r = \frac{\epsilon \delta }{2}$, where $ 0 < \delta < 1$ is a fixed constant determined by the first $K=k_{max}+1$ low-lying population eigenvalues $\mu_k$ of $-\Delta$.
Specifically, $\delta$ is inversely proportional to the smallest eigen-gap among the $\mu_k$ for $k \le K$
(the $\mu_k$ are first assumed to be simple, and the result then generalizes to multiplicity greater than one),
which is an $O(1)$ constant determined by $-\Delta$ and $K$.
Applying the variational principle to the operator $I-Q_t$,
where $Q_t$ is the diffusion semi-group operator whose spectrum is determined by that of $-\Delta$,
allows us to prove an initial eigenvalue LB with error smaller than half of the minimum eigen-gap among the first $K$ eigenvalues,
which is enough for the bootstrap strategy, following \cite{calder2019improved},
to obtain refined eigenvector and eigenvalue consistency rates.
In our case, Step 2 matches the eigenvector 2-norm consistency to the point-wise rate, which is standard.
In Step 3, leveraging the eigenvector consistency proved in Step 2,
we further improve the eigenvalue convergence to match the form rate,
and then the refined eigenvalue LB rate matches the eigenvalue UB rate.
In the process, the first $K$ empirical eigenvalues are upper bounded by $O(1)$,
which follows from the eigenvalue UB proved at the beginning.
As a road map, our eigen-convergence analysis consists of the following four steps,
\begin{itemize}[leftmargin=10pt]
\item[-]
Step 0.
Eigenvalue UB by the Dirichlet form convergence, up to the form rate.
\item[-]
Step 1.
Initial crude eigenvalue LB,
providing eigenvalue error up to the smallest first $K$ eigen-gap.
\item[-]
Step 2. $2$-norm consistency of eigenvectors, up to the point-wise rate.
\item[-]
Step 3. Refined eigenvalue consistency, up to the form rate.
\end{itemize}
\noindent
Step 1 requires $h $ to be non-negative and currently only covers the Gaussian case.
This may be relaxed,
since the proof only uses the approximation property of $h$, namely that $K_\epsilon \approx { H}_\epsilon$.
In this work, we restrict to the Gaussian case for simplicity and the wide use of Gaussian kernels in applications.
\subsection{More related works}
As we adopt a Dirichlet form-based analysis,
the eigen-convergence result in the current paper is of the same type as in previous works using variational principle \cite{burago2013graph, trillos2020error, calder2019improved}.
In particular, the rate concerns the convergence of the first $k_{max}$ low-lying eigenvalues of the Laplacian,
where $k_{max}$ is a {\it fixed} finite integer.
The constants in the big-$O$ notations in the bounds are treated as $O(1)$,
and they depend on $k_{max}$ and these leading eigenvalues and eigenfunctions of the manifold Laplacian.
Such results are useful for applications where leading eigenvectors are the primary focus,
e.g., spectral clustering and dimension-reduced spectral embedding.
An alternative approach is to analyze functional operator consistency
\cite{belkin2007convergence,von2008consistency,singer2016spectral,shi2015convergence},
which may provide different eigen-consistency bounds, e.g.,
$\infty$-norm consistency of eigenvectors using compact embedding of Glivenko-Cantelli function classes \cite{dunson2019diffusion}.
The current work considers noise-less data on ${\cal M}$,
while robustness of graph Laplacian against noise in data is important for applications.
When manifold data vectors are perturbed by noise in the ambient space, \cite{el2016graph} showed that the Gaussian kernel function $h$
has a special property that makes the kernelized graph Laplacian robust to noise (via a modification of the diagonal entries).
More recently, \cite{landa2020doubly} showed that bi-stochastic normalization can make
the Gaussian kernelized graph affinity matrix robust to high dimensional heteroskedastic noise in data.
These results suggest that Gaussian $h$ is a special and useful choice of kernel function for graph Laplacian methods.
Meanwhile, bi-stochastically normalized graph Laplacian has been studied in \cite{marshall2019manifold},
where the point-wise convergence of the kernel integral operator to the manifold operator was proved.
The spectral convergence of bi-stochastically normalized graph Laplacian
for data on hypertorus was recently proved to be $O( N^{-1/(d/2+4) +o(1)})$ in \cite{wormell2020spectral},
and the algorithm uses a symmetric Sinkhorn iteration with accelerated numerical convergence.
The density-corrected affinity kernel matrix $\tilde{W}= D^{-1}WD^{-1}$, which is analyzed in the current work,
is similar to a step in the Sinkhorn matrix-scaling iteration.
It would be interesting to explore the connections to these works and extend our analysis to bi-stochastically normalized graph Laplacians,
which may have better properties of spectral convergence and noise-robustness.
\section{Set-up and preliminaries}\label{sec:prelim}
\subsection{Graph and manifold Laplacians}
We define the following moment constants of function $h$ satisfying Assumption \ref{assump:h-C2-nonnegative},
\[
m_0 [ h ] := \int_{\R^d} h( \| u \|^2) du,
\quad
m_2 [ h ] := \frac{1}{d} \int_{\R^d} \| u \|^2 h( \| u \|^2) du,
\quad
\tilde{m} :=\frac{m_2}{ 2 m_0}.
\]
By (C3), $h \ge 0$ and the case $h \equiv 0$ is excluded, thus $m_0[h], m_2[h] > 0$.
With Gaussian $h$ as in \eqref{eq:def-h-gaussian},
$ m_0 = 1$,
$ m_2 = 2$,
and $\tilde{m} = 1$.
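As a quick numerical sanity check (a minimal sketch, assuming Python with numpy; the radial reduction and grid parameters below are our own illustrative choices), one can verify $m_0 = 1$, $m_2 = 2$ for the Gaussian $h$ by reducing the $\R^d$ integrals to radial ones:

```python
import math
import numpy as np

def h_gaussian(xi, d):
    # Gaussian profile h(xi) = (4*pi)^(-d/2) * exp(-xi/4), so that
    # K_t(x,y) = t^(-d/2) h(|x-y|^2/t) = (4*pi*t)^(-d/2) exp(-|x-y|^2/(4t)).
    return (4 * math.pi) ** (-d / 2) * np.exp(-xi / 4)

def moments(d, r_max=60.0, n=200000):
    # Reduce the R^d integrals to radial ones:
    #   int_{R^d} f(|u|^2) du = S_{d-1} * int_0^inf f(r^2) r^{d-1} dr,
    # where S_{d-1} = 2 pi^{d/2} / Gamma(d/2); midpoint rule in r.
    dr = r_max / n
    r = (np.arange(n) + 0.5) * dr
    S = 2 * math.pi ** (d / 2) / math.gamma(d / 2)
    hv = h_gaussian(r ** 2, d)
    m0 = S * np.sum(hv * r ** (d - 1)) * dr
    m2 = (S / d) * np.sum(r ** 2 * hv * r ** (d - 1)) * dr
    return m0, m2

for d in (1, 2, 3):
    m0, m2 = moments(d)
    print(d, round(m0, 4), round(m2, 4), round(m2 / (2 * m0), 4))
```

The printed values recover $m_0 = 1$, $m_2 = 2$, $\tilde m = 1$ for each $d$.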
Denote $m_2[h]$ and $m_0[h]$ by $m_2$ and $m_0$ for a shorthand notation, and
\begin{itemize}
\item
The un-normalized graph Laplacian $L_{un}$ is defined as
\begin{equation}\label{eq:def-L-un}
L_{un} : = \frac{1}{ \frac{m_2}{2} p \epsilon N} (D-W).
\end{equation}
Note that the standard un-normalized graph Laplacian is usually $D-W$; we divide by the constant $\frac{m_2}{2} p \epsilon N$
so that $L_{un}$ converges to $-\Delta$.
\item
The random-walk graph Laplacian $L_{rw}$ is defined as
\begin{equation}\label{eq:def-L-rw}
L_{rw} : = \frac{1}{ \frac{m_2}{ 2 m_0} \epsilon } (I - D^{-1}W),
\end{equation}
with the constant normalization to ensure convergence to $-\Delta$.
\end{itemize}
\noindent
The matrix $L_{un}$ is real-symmetric,
positive semi-definite (PSD), and the smallest eigenvalue is zero.
Denote the eigenvalues of $L_{un}$ by $\lambda_k$, $k=1, 2, \cdots$, sorted in ascending order, that is,
\[
0 = \lambda_1 (L_{un}) \le \lambda_2 (L_{un}) \le \cdots \le \lambda_N (L_{un}).
\]
The matrix $L_{rw}$ is well-defined when $D_i > 0$ for all $i$,
which holds w.h.p. under the regime $\epsilon^{d/2} =\Omega( \frac{\log N}{N}) $,
cf. Lemma \ref{lemma:Di-concen}.
We always work in the $\epsilon^{d/2} =\Omega( \frac{\log N}{N}) $ regime,
namely the connectivity regime.
Since $D^{-1}W$ is similar to $D^{-1/2}WD^{-1/2}$, which is PSD,
$L_{rw} $ is also real-diagonalizable and has $N$ non-negative real eigenvalues, sorted and denoted as
$
0 = \lambda_1(L_{rw}) \le \lambda_2 (L_{rw}) \le \cdots \le \lambda_N(L_{rw})$.
We also have, by the min-max variational formula for real-symmetric matrices,
\begin{equation*}
\lambda_k(L_{un}) = \min_{ L \subset \R^N, \, dim(L) = k} \sup_{ v \in L, v \neq 0}
\frac{ v^T L_{un} v}{v^T v},
\quad k=1,\cdots, N.
\end{equation*}
We define the {\it graph Dirichlet form} $E_N( u)$ for $u \in \R^N$ as
\begin{equation}\label{eq:def-ENu}
E_N(u) =
\frac{1}{\frac{m_2}{2}}
\frac{1}{\epsilon N^2} u^T( D-W) u
=
\frac{1}{\frac{m_2}{2}}
\frac{1}{2 \epsilon N^2}\sum_{i,j = 1}^N W_{i,j} (u_i - u_j)^2.
\end{equation}
By \eqref{eq:def-L-un},
$E_N(u) = p
\frac{1}{N} u^T L_{un} u$,
and thus
\begin{equation}\label{eq:lambdak-min-max}
\lambda_k (L_{un})
= \min_{ L \subset \R^N, \, dim(L) = k} \sup_{ v \in L, v \neq 0 }
\frac{
E_N(v)}{ p \frac{1}{N} \| v \|^2 },
\quad k =1, \cdots, N.
\end{equation}
Similarly, we have
\begin{equation}\label{eq:lambdak-rw-min-max}
\lambda_k (L_{rw})
= \min_{ L \subset \R^N, \, dim(L) = k} \sup_{ v \in L, v \neq 0}
\frac{
E_N(v)}{ \frac{1}{m_0} \frac{1}{N^2} v^T D v },
\quad k =1, \cdots, N.
\end{equation}
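The normalizations above can be checked numerically. The sketch below (assuming numpy; the unit-circle geometry, $N=500$, and $\epsilon=0.01$ are our own illustrative choices, not from the analysis) builds $W$, $L_{un}$, and $L_{rw}$ for uniform data on $S^1$, verifies the identity $E_N(u) = p \frac{1}{N} u^T L_{un} u$, and checks the non-negativity of the $L_{rw}$ spectrum via the similar symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: N points sampled uniformly on the unit circle
# embedded in R^2, so d = 1 and the density is p = 1/(2*pi) w.r.t. arclength.
N, eps, d = 500, 0.01, 1
p = 1.0 / (2 * np.pi)
theta = rng.uniform(0, 2 * np.pi, N)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Gaussian affinity W_ij = K_eps(x_i,x_j) = (4*pi*eps)^(-d/2) exp(-|x_i-x_j|^2/(4 eps));
# for Gaussian h, m0 = 1 and m2 = 2, so m2/2 = 1 and m2/(2 m0) = 1.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = (4 * np.pi * eps) ** (-d / 2) * np.exp(-sq / (4 * eps))
D = W.sum(axis=1)

L_un = (np.diag(D) - W) / (p * eps * N)   # un-normalized Laplacian, m2/2 = 1
L_rw = (np.eye(N) - W / D[:, None]) / eps  # random-walk Laplacian, m2/(2 m0) = 1

# Graph Dirichlet form from the pairwise sum; it equals p * u^T L_un u / N.
u = rng.standard_normal(N)
E_N = (W * (u[:, None] - u[None, :]) ** 2).sum() / (2 * eps * N ** 2)
quad = p * (u @ L_un @ u) / N
print(abs(E_N - quad) / abs(E_N) < 1e-10)  # True

# L_rw is similar to the symmetric PSD matrix (I - D^{-1/2} W D^{-1/2}) / eps,
# so its spectrum is real and non-negative.
Ls = (np.eye(N) - W / np.sqrt(D[:, None] * D[None, :])) / eps
lam_rw = np.linalg.eigvalsh(Ls)
print(lam_rw[0] > -1e-8)  # True
```

The first check is an exact algebraic identity between \eqref{eq:def-L-un} and \eqref{eq:def-ENu}; the second reflects the similarity argument for $L_{rw}$.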
To introduce notation for the manifold Laplacian,
we define the inner product in $H: =L^2({\cal M} ,dV) $ as
$\langle f, g \rangle : = \int_{\cal M} f(x) g(x) dV(x)$, for $f ,g \in L^2({\cal M} ,dV)$.
We also use $\langle \cdot, \cdot \rangle_q$ to denote the inner product in
$L^2( {\cal M}, q dV)$, where $qdV$ is a general measure on ${\cal M}$ (not necessarily a probability measure),
that is,
$
\langle f, g \rangle_q : = \int_{\cal M} f(x) g(x) q(x) dV(x)$,
for $f ,g \in L^2({\cal M} , qdV)$.
For a smooth connected compact manifold ${\cal M}$,
the (minus) Laplace-Beltrami operator $-\Delta$ has eigen-pairs $\{ \mu_k, \psi_k\}_{k=1}^\infty$,
\[
0 = \mu_1 < \mu_2 \le \cdots \le \mu_k \le \cdots,
\]
\[
-\Delta \psi_k = \mu_k \psi_k,
\quad
\langle \psi_k, \psi_l \rangle = \delta_{k,l},
\quad
\psi_k \in C^\infty({\cal M}),
\quad k, l =1, 2, \cdots.
\]
The second eigenvalue $\mu_2 > 0 $ due to connectivity of ${\cal M}$.
When $\mu_i = \cdots = \mu_{i+l-1} = \mu$ for some eigenvalue $\mu$ of $-\Delta$ having multiplicity $l$,
the eigenfunctions $\psi_i , \cdots, \psi_{i+l-1}$ can be set to be an orthonormal basis of the $l$-dimensional eigenspace associated with $\mu$.
Note that $\psi_k \in C^\infty({\cal M})$ for generic smooth ${\cal M}$.
\subsection{Heat kernel on ${\cal M}$}
We leverage the special property of Gaussian kernel in the ambient space $\R^D$ that it locally approximates the manifold heat kernel on ${\cal M}$.
We start from the notations of manifold heat kernel.
Since $\cal M$ is smooth and compact (without boundary),
the Green's function of the heat equation on ${\cal M}$ exists, namely the heat kernel ${ H}_t(x,y)$ of ${\cal M}$.
We denote the heat diffusion semi-group operator as $Q_t$ which can be formally written as $Q_t = e^{ t \Delta}$, and
\[
Q_t f(x) = \int_{{\cal M}} { H}_t(x,y) f(y) dV(y), \quad \forall f \in L^2({\cal M}, dV).
\]
Since $Q_t$ is a semi-group, we have the reproducing property
\[
\int_{{\cal M}} { H}_t(x,y) { H}_t( y,z) dV(y) = H_{2t} (x,z), \quad \forall x, z \in {\cal M}, \quad \forall t > 0.
\]
Meanwhile, by the probabilistic interpretation,
\[
\int_{{\cal M}} { H}_t(x,y) dV(y) = 1, \quad \forall x\in {\cal M}, \quad \forall t > 0.
\]
Using the eigenvalue and eigenfunctions $\{ \mu_k, \psi_k \}_k$ of $-\Delta$,
the heat kernel has the expansion representation
${ H}_t(x,y) = \sum_{k=1}^\infty e^{-t \mu_k} \psi_k(x) \psi_k(y)$.
We will not use the spectral expansion of ${ H}_t$ in our analysis,
but only that $\psi_k$ are also eigenfunctions of $Q_t$, that is,
\begin{equation}\label{eq:Qt-eigen}
Q_t \psi_k = e^{-t \mu_k } \psi_k, \quad k=1,2,\cdots
\end{equation}
Next, we derive Lemma \ref{lemma:heat},
which characterizes two properties of the heat kernel ${ H}_t$ at sufficiently short time:
First, on a local neighborhood
on ${\cal M}$, $H_t(x,y)$ can be approximated by $K_t(x,y)$ in the leading order,
where $K_t$ is defined as in \eqref{eq:def-K-eps} with Gaussian $h$;
Second, globally on the manifold the heat kernel $H_t(x,y)$ has a sub-Gaussian decay.
These are based on classical results about heat kernel on Riemannian manifolds \cite{li1986parabolic,grigor1997gaussian,rosenberg1997laplacian,grigor2009heat},
summarized in the following theorem.
\begin{theorem}[Heat kernel parametrix and decay \cite{rosenberg1997laplacian,grigor1997gaussian}]
\label{thm:Heat-short-time}
Suppose ${\cal M}$ is as in Assumption \ref{assump:M-p} (A1),
and $m > d/2+2 $ is a positive integer.
Then there are positive constants $t_0 < 1$ and $\delta_0 < \text{inj}({\cal M})$, the injectivity radius of ${\cal M}$,
both depending on ${\cal M}$, such that:
1) Local approximation:
There are positive constants $C_1$, $C_2$ depending on ${\cal M}$,
and $u_0, \cdots, u_m$ $\in C^{\infty}({\cal M} \times {\cal M})$, where $u_0$ satisfies
\[
|u_0(x,y) -1| \le C_1 d_{\cal M}(x,y)^2, \quad \forall y \in {\cal M}, \, d_{\cal M}( y,x) < \delta_0,
\]
and $G_t$ is defined as in \eqref{eq:def-Gt},
such that, when $ t < t_0$, for any $x \in {\cal M}$,
\begin{equation}\label{eq:parametrix-m}
| { H}_t( x,y) - G_t(x,y) \left( \sum_{l=0}^m t^l u_l(x,y) \right) | \le C_2 t^{m-d/2+1},
\quad
\forall y \in {\cal M}, \, d_{\cal M}( y,x) < \delta_0.
\end{equation}
2) Global decay:
There is positive constant $C_3$ depending on ${\cal M}$ such that, when $ t < t_0$,
\begin{equation}\label{eq:H-decay}
{ H}_t( x,y ) \le C_3 t^{-d/2} e^{- \frac{ d_{\cal M}( x,y)^2}{ 5 t}},
\quad
\forall x, y \in {\cal M}.
\end{equation}
\end{theorem}
\noindent
Part 1) is by the classical parametrix construction of the heat kernel on ${\cal M}$, see e.g. Chapter 3 of \cite{rosenberg1997laplacian},
and Part 2) follows from the classical Gaussian upper bound of the heat kernel dating back to the 1960s \cite{aronson1967bounds,grigor2009heat}.
We include a proof of the theorem in Appendix \ref{app:proofs-prelim} for completeness.
The theorem directly gives the following lemma (proof in Appendix \ref{app:proofs-prelim}),
which is useful for our construction of interpolation mapping using heat kernel.
We denote by $B_\delta(x)$ the Euclidean ball in $\R^D$ centered at point $x$ of radius $\delta$.
\begin{lemma}\label{lemma:heat}
Suppose ${\cal M}$ is as in Assumption \ref{assump:M-p} (A1),
and $t \to 0+$. Let $\delta_t := \sqrt{ 6(10 + \frac{d}{2}) t \log{\frac{1}{t}}}$,
and $K_t(x,y)$ be with Gaussian kernel $h$, i.e.,
$K_t(x,y) = (4 \pi t)^{-d/2} e^{ - \|x - y\|^2/4t}$.
Then there is a positive constant $\epsilon_0$ depending on ${\cal M}$
such that, when $t < \epsilon_0$, for any $ x \in {\cal M}$,
\begin{eqnarray}
& { H}_t ( x,y) = K_t( x,y) (1 + {O}( t (\log t^{-1})^2) ) + O(t^3),
\quad
\forall y \in B_{\delta_t}(x) \cap {\cal M},
\label{eq:H-eps-local}\\
& { H}_t(x,y) = O( t^{10}),
\quad \forall y \notin B_{\delta_t}(x) \cap {\cal M},
\label{eq:H-eps-truncate} \\
& { H}_t(x,y) = O( t^{-d/2}),
\quad \forall x, y \in {\cal M}. \label{eq:H-global-boundedness}
\end{eqnarray}
The constants in big-$O$ in all the equations
only depend on ${\cal M}$ and are uniform for all $x$.
\end{lemma}
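Lemma \ref{lemma:heat} can be illustrated on a case where the heat kernel is known in closed form: on the unit circle, $H_t$ is a wrapped Gaussian in the geodesic distance. The sketch below (assuming numpy; the choice $t = 10^{-3}$ and the test distances are illustrative) compares it with the Euclidean Gaussian $K_t$, which uses the chordal distance $\|x-y\| = 2\sin(s/2)$, and checks the unit total mass:

```python
import numpy as np

def H_circle(s, t, n_images=5):
    # Heat kernel of the unit circle as a wrapped Gaussian in the geodesic
    # (arclength) distance s: H_t(s) = sum_n (4 pi t)^(-1/2) exp(-(s+2 pi n)^2/(4t)).
    n = np.arange(-n_images, n_images + 1)
    return ((4 * np.pi * t) ** -0.5
            * np.exp(-(s[:, None] + 2 * np.pi * n[None, :]) ** 2 / (4 * t))).sum(-1)

def K_euclid(s, t):
    # Euclidean Gaussian kernel evaluated at the chordal distance 2 sin(s/2)
    # of the embedding S^1 -> R^2.
    chord = 2 * np.sin(s / 2)
    return (4 * np.pi * t) ** -0.5 * np.exp(-chord ** 2 / (4 * t))

t = 1e-3

# Total mass int_{S^1} H_t(x,y) dV(y) = 1 (midpoint rule on a fine grid).
m = 20000
step = 2 * np.pi / m
s_grid = -np.pi + (np.arange(m) + 0.5) * step
mass = H_circle(s_grid, t).sum() * step
print(round(mass, 6))  # 1.0

# Local agreement H_t ~ K_t for s of order sqrt(t), matching (eq:H-eps-local):
for s in (0.01, 0.05, 0.1):
    sv = np.array([s])
    rel = abs(H_circle(sv, t)[0] / K_euclid(sv, t)[0] - 1.0)
    print(s, rel < 1e-2)
```

Note that the relative discrepancy grows like $d_{\cal M}(x,y)^4 / t$, consistent with $d_{\cal M}(x,y) = \|x-y\| + O(\|x-y\|^3)$, so the agreement is close only well inside the $\delta_t$-ball.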
\section{Eigenvalue upper bound}\label{sec:step0}
In this section, we consider uniform $p$ on ${\cal M}$,
and standard graph Laplacians $L_{un}$ and $L_{rw}$
with the kernelized affinity matrix $W$, $W_{ij} = K_\epsilon(x_i, x_j)$ defined as in \eqref{eq:def-K-eps}.
We show the eigenvalue UB for general differentiable $h$ satisfying Assumption \ref{assump:h-C2-nonnegative}, not necessarily Gaussian.
\subsection{Un-normalized graph Laplacian}
\begin{proposition}[Eigenvalue UB of $L_{un}$]
\label{prop:eigvalue-UB}
Suppose Assumption \ref{assump:M-p}(A1) holds,
$p$ is uniform on ${\cal M}$,
and $h$ satisfies Assumption \ref{assump:h-C2-nonnegative}.
For fixed $K \in \mathbb{N}$,
if as $N \to \infty$, $\epsilon \to 0+ $ and $\epsilon^{d/2} = \Omega( \frac{\log N}{N} ) $, then for sufficiently large $N$,
w.p. $> 1-4 K^2 N^{-10}$,
\[
\lambda_k (L_{un})
\le \mu_k + O(\epsilon , \sqrt{ \frac{\log N}{N \epsilon^{d/2} } } ) ,
\quad k=1,\cdots, K.
\]
\end{proposition}
\noindent
The proposition holds when the population eigenvalues $\mu_k$ have multiplicity greater than one,
as long as they are sorted in ascending order.
The proof is by constructing a $k$-dimensional subspace $L$ in \eqref{eq:lambdak-min-max}
spanned by vectors in $\R^N$ which are produced by evaluating the population eigenfunctions $\psi_k$ at the $N$ data points.
Given $X = \{x_i\}_{i=1}^N$,
define the function evaluation operator $\rho_X$ applied to $f: {\cal M} \to \R$ as
\[
\rho_X: C({\cal M}) \to \R^N, \quad
\rho_X f = (f(x_1), \cdots, f(x_N)).
\]
We will use $u_k = \frac{1}{\sqrt{p}} \rho_X \psi_k$ as ``candidate'' approximate eigenvectors.
To analyze $E_N ( \frac{1}{\sqrt{p}} \rho_X \psi_k)$,
the following result from \cite{cheng2020convergence} shows that it converges to the differential Dirichlet form
\[
p^{-1} \langle \psi_k, (-\Delta) \psi_k \rangle_{p^2} = p \mu_k
\]
with the form rate.
The result is for general smooth $p$ and the weighted Laplacian $\Delta_q$,
defined as
$\Delta_q := \Delta + \frac{\nabla q}{q } \cdot \nabla$
for the measure $qdV$ on ${\cal M}$.
$\Delta_q$ reduces to $\Delta$ when $p$ is uniform.
\begin{theorem}[Theorem 3.5 in \cite{cheng2020convergence}]
\label{thm:form-rate}
Under Assumptions \ref{assump:M-p} and \ref{assump:h-C2-nonnegative},
as $N \to \infty$, $ \epsilon \to 0+$, $ \epsilon^{d/2 } = \Omega( \frac{ \log N}{N})$,
then for any $f \in C^{\infty} ({\cal M})$,
when $N$ is sufficiently large,
w.p. $> 1- 2N^{-10}$,
\[
E_N( \rho_X f )
= \langle f, -\Delta_{p^{2}} f \rangle_{p^{2}}
+ O_{p,f} \left( \epsilon \right)
+ O \left( \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} \int_{\cal M} |\nabla f |^4 p^{2} }\right).
\]
The constant in $O_{p,f}(\cdot)$ depends on the $C^4$ norm of $p$ and $f$ on ${\cal M}$,
and that in $O(\cdot)$ is an absolute one.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:form-rate}]
The proof goes through that of Theorem 3.5 of \cite{cheng2020convergence} in the simplified situation where $\beta = 0$ (no normalization by the estimated density is involved).
Specifically,
the proof uses the concentration of the $V$-statistics $V_{ij}:= \frac{1}{\epsilon} K_\epsilon(x_i, x_j) (f(x_i) -f(x_j))^2$.
The expectation $\mathbb{E} V_{ij}$, for $i \neq j$, equals
$\frac{1}{\epsilon} \int_{{\cal M}} \int_{{\cal M}} K_\epsilon(x,y) (f(x)-f(y))^2 p(x)p(y) dV(x) dV(y)
= m_2[h] \langle f, -\Delta_{p^2} f \rangle_{p^2} + O_{p,f}(\epsilon)$.
Meanwhile, $ | V_{ij}|$ is bounded by $O(\epsilon^{-d/2})$, and the variance of $V_{ij}$ can also be bounded by $O(\epsilon^{-d/2})$
with the constants as in the theorem,
following the calculation in the proof of Theorem 3.5 in \cite{cheng2020convergence}.
The concentration of $\frac{1}{N(N-1)}\sum_{i \neq j} V_{ij}$ at $\mathbb{E} V_{ij}$ then follows by the decoupling of the $V$-statistic,
which gives the high probability bound in the theorem.
Note that the results in \cite{cheng2020convergence} are proved under the assumption that $h$ is $C^4$ rather than $C^2$,
that is, requiring Assumption \ref{assump:h-C2-nonnegative}(C1)(C2) to hold for up to the 4-th derivative of $h$.
This is because the $C^4$ regularity of $h$ is used to handle complications of the adaptive bandwidth in the other analysis in \cite{cheng2020convergence}.
With the fixed bandwidth kernel $K_\epsilon(x,y)$ as defined in \eqref{eq:def-K-eps},
$C^2$ regularity suffices, as originally assumed in \cite{coifman2006diffusion}.
\end{proof}
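The convergence in Theorem \ref{thm:form-rate} can be observed in a Monte Carlo sketch (assuming numpy; the unit circle, the test function $f = \cos\theta$, and the sample sizes are our own illustrative choices). With uniform $p = 1/(2\pi)$ on $S^1$, the limiting value is $\langle f, -\Delta_{p^2} f\rangle_{p^2} = \int |\nabla f|^2 p^2 dV = p^2 \int_0^{2\pi} \sin^2\theta \, d\theta = p^2 \pi$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, eps, d = 2000, 0.01, 1
p = 1.0 / (2 * np.pi)  # uniform density on the unit circle

theta = rng.uniform(0, 2 * np.pi, N)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = (4 * np.pi * eps) ** (-d / 2) * np.exp(-sq / (4 * eps))

# Graph Dirichlet form E_N(rho_X f) for f = cos(theta); m2/2 = 1 for Gaussian h.
f = np.cos(theta)
E_N = (W * (f[:, None] - f[None, :]) ** 2).sum() / (2 * eps * N ** 2)

# Population limit <f, -Delta_{p^2} f>_{p^2} = p^2 * pi.
target = p ** 2 * np.pi
print(round(target, 4), round(E_N, 4))
```

The two printed values should agree up to the $O(\epsilon)$ bias plus the sampling fluctuation of the $V$-statistic.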
\begin{remark}\label{rk:non-nagativity-not-needed}
Since the proof only involves the computation of moments of the $V$-statistic,
it is possible to relax the non-negativity of $h$ in Assumption \ref{assump:h-C2-nonnegative}(C3)
and replace it with certain non-vanishing conditions on $m_0[h]$ and $m_2[h]$,
e.g., as in \cite{coifman2006diffusion} and Assumption A.5 in \cite{cheng2020convergence}.
Since the non-negativity of $W_{ij}$ is used in other places in the paper,
and our eigenvalue LB needs $h$ to be Gaussian,
we adopt the non-negativity of $h$ in Assumption \ref{assump:h-C2-nonnegative} for simplicity.
The $C^4$ regularity of $f$ may also be relaxed, and the constant in $O_{p,f}(\cdot)$ may be improved accordingly.
These extensions are not further pursued here.
\end{remark}
\vspace{3pt}
\begin{remark}\label{rk:indicator-h-form-rate}
When $h = {\bf 1}_{[0,1)}$,
using the same method as in the proof of Lemma 8 in \cite{coifman2006diffusion},
one can verify that (proof in Appendix \ref{app:proofs-step0}), for $i \neq j$,
\begin{equation}\label{eq:bias-error-remark-indicator-h}
\mathbb{E} V_{ij} = m_2 [h] \langle f, -\Delta_{p^2} f \rangle_{p^2} + O_{p,f}(\epsilon),
\quad f \in C^{\infty}({\cal M}).
\end{equation}
The boundedness and variance of $V_{ij}$ are again bounded by $O(\epsilon^{-d/2})$,
and thus the Dirichlet form convergence with
$h = {\bf 1}_{[0,1)}$ has the same rate as in Theorem \ref{thm:form-rate}.
This first implies that the eigenvalue UB also has the same rate,
following the same proof as that of Proposition \ref{prop:eigvalue-UB}.
The final eigen-convergence rate also depends on the point-wise rate of the graph Laplacian,
see more in Remark \ref{rk:eigen-rate-indicator-h}.
\end{remark}
\vspace{10pt}
In Theorem \ref{thm:form-rate} and below,
the $\log N$ factor in the variance error bound
is due to the concentration argument.
Throughout the paper, the classical Bernstein inequality, Lemma \ref{lemma:bern}, is used extensively.
To proceed, recalling the definition of $E_N(u)$ in \eqref{eq:def-ENu},
we define the bi-linear form for $u,v \in \R^N$ as
\[
B_N(u,v) := \frac{1}{4}( E_N(u+v) - E_N(u-v) ) = \frac{1}{m_2/2}\frac{1}{\epsilon N^2} u^T( D-W) v,
\]
which is symmetric, i.e., $B_N(u,v)= B_N(v,u)$,
and $B_N(u,u) = E_N(u)$.
The following lemma characterizes the forms $E_N$ and $B_N$ applied to $\rho_X \psi_k$, proved in Appendix \ref{app:proofs-step0}.
\begin{lemma}\label{lemma:form-rate-psi}
Suppose Assumption \ref{assump:M-p} (A1) holds,
$p$ is uniform on ${\cal M}$,
and $h$ satisfies Assumption \ref{assump:h-C2-nonnegative}.
As $N \to \infty$, $ \epsilon \to 0+$ and $ \epsilon^{d/2 } N = \Omega( \log N)$.
For fixed $K$,
when $N$ is sufficiently large, w.p. $> 1 - 2 K^2 N^{-10}$,
\begin{equation}\label{eq:form-rate-psi}
\begin{split}
E_N( \frac{1}{\sqrt{p}} \rho_X \psi_k)
& = p \mu_k + O(\epsilon ) + O \left( \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} }\right),
\quad k = 1, \cdots, K, \\
B_N( \frac{1}{\sqrt{p}} \rho_X \psi_k, \frac{1}{\sqrt{p}} \rho_X \psi_l)
& = O(\epsilon ) + O \left( \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} }\right),
\quad k \neq l, \, 1 \le k,l \le K.
\end{split}
\end{equation}
\end{lemma}
We also need the vectors $\rho_X \psi_1, \cdots, \rho_X \psi_{K}$ to be linearly independent, so that they span a $K$-dimensional subspace of $\R^N$.
This holds w.h.p. at large $N$, by the following lemma showing the near-isometry of the evaluation mapping $\rho_X$, proved in Appendix \ref{app:proofs-step0}.
\begin{lemma}\label{lemma:rhoX-isometry-whp}
Under Assumptions \ref{assump:M-p} (A1),
$p$ being uniform on ${\cal M}$.
For fixed $K$,
when $N$ is sufficiently large, w.p. $> 1 - 2 K^2 N^{-10}$,
\begin{equation}\label{eq:uk-near-orthonormal}
\begin{split}
\frac{1}{N } \| \frac{1}{\sqrt{p}} \rho_X \psi_k \|^2
& = 1 + O( \sqrt{\frac{\log N}{N}}), \, 1 \le k \le K; \\
\frac{1}{N } ( \frac{1}{\sqrt{p}} \rho_X \psi_k)^T ( \frac{1}{\sqrt{p}} \rho_X \psi_l)
& = O( \sqrt{\frac{\log N}{N}}), \, k\neq l, \, 1 \le k,l \le K.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}[Proof of Proposition \ref{prop:eigvalue-UB}]
For fixed $K$,
consider the intersection of both good events in Lemma \ref{lemma:form-rate-psi}
and \ref{lemma:rhoX-isometry-whp},
which happens w.p. $> 1- 4K^2 N^{-10}$ with large enough $N$.
Let $u_k = \frac{1}{\sqrt{p}} \rho_X \psi_k$,
by \eqref{eq:uk-near-orthonormal},
the set $\{ u_1, \cdots, u_K\}$ is linearly independent.
For any $1 \le k \le K$,
let $L = \text{Span}\{ u_1, \cdots, u_k\}$, then $dim(L) = k$.
By \eqref{eq:lambdak-min-max}, to show the UB of $\lambda_k$ as in the proposition, it suffices to show that
\[
\sup_{ v \in L, \|v\|^2 = N} \frac{1}{p} E_N(v)
\le \mu_k + O(\epsilon ) + O \left( \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} }\right).
\]
For any $v \in L$, $\|v\|^2 = N$,
there are $c_j$, $1 \le j \le k $, such that
$v = \sum_{j=1}^k c_j u_j$.
By \eqref{eq:uk-near-orthonormal},
\[
1 = \frac{1}{N} \|v\|^2
= \sum_{j=1}^k c_j^2 (1 + O( \sqrt{\frac{\log N}{N}} ) )
+ \sum_{j\neq l, j,l =1}^k |c_j | | c_l | O( \sqrt{\frac{\log N}{N}} )
= \| c \|^2 ( 1+ O( K \sqrt{ \frac{\log N}{ N}}) ),
\]
thus $\| c\|^2 = 1 + O( \sqrt{\frac{\log N }{N }})$.
Meanwhile,
$E_N( v) = E_N( \sum_{j=1}^k c_j u_j)
= \sum_{j,l = 1}^k c_j c_l B_N( u_j, u_l)$,
and by \eqref{eq:form-rate-psi},
\begin{align}
E_N( v)
& = \sum_{j=1}^k c_j^2 \left( p \mu_j + O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ) \right)
+ \sum_{j\neq l, j,l=1}^k |c_j | | c_l | O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ) \nonumber \\
& = p \sum_{j=1}^k \mu_j c_j^2 + K \| c \|^2 O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } )
\le \| c \|^2 \left\{ p \mu_k + O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ) \right\},
\label{eq:UB-ENv}
\end{align}
where, since $K$ is a fixed integer, we incorporate it into the big-$O$.
Also, $\mu_k \le \mu_K = O(1)$, and then
\[
\frac{1}{p}E_N( v)
\le \left( 1 + O( \sqrt{\frac{\log N }{N }}) \right)
\left\{ \mu_k + O(\epsilon) + O \left( \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} }\right)
\right\}
= \mu_k + O(\epsilon) + O \left( \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} }\right),
\]
which finishes the proof.
\end{proof}
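Proposition \ref{prop:eigvalue-UB} can be illustrated numerically on a manifold with known spectrum (a sketch assuming numpy; $N = 1500$ and $\epsilon = 0.02$ are illustrative choices). On the unit circle, $-\Delta$ has eigenvalues $\mu = 0, 1, 1, 4, 4, \cdots$, and the low-lying eigenvalues of $L_{un}$ should not exceed these by more than the stated error:

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, d = 1500, 0.02, 1
p = 1.0 / (2 * np.pi)  # uniform density on S^1

# Uniform sample on the unit circle embedded in R^2.
theta = rng.uniform(0, 2 * np.pi, N)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = (4 * np.pi * eps) ** (-d / 2) * np.exp(-sq / (4 * eps))
D = W.sum(axis=1)

# L_un = (D - W) / ((m2/2) p eps N), with m2/2 = 1 for Gaussian h.
L_un = (np.diag(D) - W) / (p * eps * N)
lam = np.sort(np.linalg.eigvalsh(L_un))

# First five population eigenvalues of -Delta on the unit circle.
mu = np.array([0.0, 1.0, 1.0, 4.0, 4.0])
print(np.round(lam[:5], 2))
```

In typical runs, $\lambda_1 = 0$ up to machine precision and $\lambda_2, \lambda_3 \approx 1$, $\lambda_4, \lambda_5 \approx 4$, within the $O(\epsilon, \sqrt{\log N / (N \epsilon^{d/2})})$ error.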
\subsection{Random-walk graph Laplacian}
We first establish the concentration of $D_i$ in the following lemma,
which shows that $D_i >0$ w.h.p., since $\frac{1}{N}D_i$ concentrates at the value $m_0 p > 0$.
Consequently, $\frac{1}{N^2} u^T D u$ also concentrates and the deviation is uniformly bounded for all $u \in \R^N$,
which will be used in analyzing \eqref{eq:lambdak-rw-min-max}.
\begin{lemma}\label{lemma:Di-concen}
Under Assumption \ref{assump:M-p}(A1), $p$ uniform, and Assumption \ref{assump:h-C2-nonnegative}.
Suppose as $N \to \infty$, $ \epsilon \to 0+$ and $ \epsilon^{d/2} = \Omega( \frac{\log N}{N})$.
Then, when $N$ is large enough, w.p. $> 1- 2N^{-9}$,
1) The degree $D_i$ concentrates for all $i$, namely,
\begin{equation}\label{eq:concen-Di}
\frac{1}{N}D_i = m_0 p + O( \epsilon, \sqrt{\frac{\log N}{ N \epsilon^{d/2}}}), \quad \forall i=1, \cdots, N.
\end{equation}
2) The form $\frac{1}{N^2} u^T D u $ concentrates for all $u$, namely,
\begin{equation}\label{eq:concen-uDu}
\frac{1}{N^2} u^T D u
= \frac{1}{N} \|u\|^2 ( m_0 p + O( \epsilon, \sqrt{\frac{\log N}{ N \epsilon^{d/2}}})),
\quad
\forall u \in \R^N.
\end{equation}
The constants in the big-O in \eqref{eq:concen-Di} and \eqref{eq:concen-uDu}
are determined by $({\cal M}, h)$
and uniform for all $i$ and $u$.
\end{lemma}
\noindent
Part 2) immediately follows from Part 1), and the latter is proved by a standard concentration argument for independent sums together with a union bound over $N$ events.
With Lemma \ref{lemma:Di-concen}, the proof of the following proposition is similar to that of Proposition \ref{prop:eigvalue-UB},
and the difference lies in handling the denominator of the Rayleigh quotient in \eqref{eq:lambdak-rw-min-max}.
The proofs of Lemma \ref{lemma:Di-concen} and Proposition \ref{prop:eigvalue-UB-rw} are in Appendix \ref{app:proofs-step0}.
\begin{proposition}[Eigenvalue UB of $L_{rw}$]
\label{prop:eigvalue-UB-rw}
Suppose ${\cal M}$, $p$ uniform, $h$, $K$, $\mu_k$, and $\epsilon$
are under the same condition as in Proposition \ref{prop:eigvalue-UB},
then for sufficiently large $N$,
w.p. $> 1- 2 N^{-9} - 4 K^2 N^{-10}$, $D_i >0$ for all $i$, and
\[
\lambda_k (L_{rw})
\le \mu_k + O(\epsilon , \sqrt{ \frac{\log N}{N \epsilon^{d/2} } } ) ,
\quad k=1,\cdots, K.
\]
\end{proposition}
\section{Eigenvalue crude lower bound in Step 1}\label{sec:step1}
In this section, we prove the $O(1)$ eigenvalue LB of Step 1,
first for $L_{un}$;
the proof for $L_{rw}$ is similar.
We consider for $t > 0$ the operator ${\cal L}_t $ on $H = L^2( {\cal M}, dV )$ defined as
\[
{\cal L}_t := I - Q_t,
\quad
{\cal L}_t f(x) = f(x) - \int_{{\cal M}} { H}_t(x,y) f(y) dV(y),
\quad f \in H.
\]
The semi-group operator $Q_t$ is Hilbert-Schmidt, compact, and has eigenvalues and eigenfunctions as in \eqref{eq:Qt-eigen}.
Thus, the operator ${\cal L}_t$ is self-adjoint and PSD, and has
\[
{\cal L}_t \psi_k = (1-e^{-t \mu_k}) \psi_k, \quad k = 1, 2, \cdots
\]
For any $t>0$, the eigenvalues
$ \{ 1-e^{-t \mu_k} \}_k$
are ascending from 0 and have limit point 1.
We denote $\| f \|^2 = \langle f, f \rangle$ for $f \in H$.
By the variational principle, we have that when $t>0$, for any $k$,
\begin{equation}\label{eq:eig-Lt-minmax}
1-e^{-t \mu_k}
= \inf_{L \subset H, \, dim(L) = k } \sup_{f \in L, \, \|f\|^2 \neq 0}
\frac{ \langle f , {\cal L}_t f \rangle}{\langle f, f \rangle }.
\end{equation}
For the first result, we assume that $\mu_k$ are all of multiplicity 1 for simplicity.
When population eigenvalues have greater than one multiplicity,
the result extends by considering eigenspace rather than eigenvectors in the standard way,
see Remark \ref{rk:multiplicity}.
\subsection{Un-normalized graph Laplacian}
\begin{proposition}[Initial crude eigenvalue LB of $L_{un}$]
\label{prop:eigvalue-LB-crude}
Under Assumption \ref{assump:M-p} (A1),
suppose $p$ is uniform on ${\cal M}$,
and $h$ is Gaussian.
For fixed $k_{max} \in \mathbb{N}$, $K = k_{max}+1$,
suppose $0 = \mu_1 <\cdots < \mu_{K} < \infty$ are all of single multiplicity,
and define
\begin{equation}\label{eq:def-gamma-K}
\gamma_K : = \frac{1}{2} \min_{1 \le k \le k_{max}} (\mu_{k+1} - \mu_k),
\end{equation}
$\gamma_K > 0$ and is a fixed constant.
Then there is a constant $c_K$ determined by ${\cal M}$ and $k_{max}$
(specifically, $c_K = c (\frac{\mu_K}{\gamma_K})^{d/2} \gamma_K^{-2}$, where $c$ is a constant depending on ${\cal M}$),
such that,
if as $N \to \infty$, $\epsilon \to 0+ $,
and $\epsilon^{d/2+2} > c_K \frac{\log N}{N} $, then for sufficiently large $N$,
w.p. $> 1 - 4 K^2 N^{-10} -4 N^{-9}$,
\[
\lambda_k (L_{un})
> \mu_k - \gamma_K,
\quad k=2,\cdots, K.
\]
\end{proposition}
Suppose $\{ \lambda_k, v_k\}_{k=1}^K$ are the eigenvalues and eigenvectors of $L_{un}$. To construct a test function $f_k$ on ${\cal M}$ from the vector $v_k$, we define the
{\it interpolation mapping}
(the terminology ``interpolation'' is inherited from \cite{burago2013graph})
by the heat kernel with diffusion time $r$, $0 < r < \epsilon $, to be determined.
Specifically, define
\[
I_r [ u ](x):= \frac{1}{N} \sum_{j=1}^N u_j { H}_r( x, x_j),
\quad I_r: \R^N \to C^\infty({\cal M}),
\]
and then for any $t > 0$,
\begin{equation}
{ \langle I_r [u] , Q_t I_r [u] \rangle}
= \frac{1}{N^2} \sum_{i,j=1}^N u_i u_j { H}_{2r+t}(x_i, x_j),
\quad
{\langle I_r [u], I_r [u] \rangle}
= \frac{1}{N^2} \sum_{i,j=1}^N u_i u_j { H}_{2r}(x_i, x_j).
\end{equation}
We define the quadratic form
\[
q_s(u) := \frac{1}{N^2} \sum_{i,j=1}^N u_i u_j { H}_{s}(x_i, x_j),
\quad s > 0, \quad u \in \R^N.
\]
We also define $q_s^{(0)}$ and $q_s^{(2)}$ as below, and then for any $u \in \R^N$,
$q_s(u) = q^{(0)}_{s}(u) - q^{(2)}_{s}(u)$, where
\begin{align}
q^{(0)}_{s}(u)
:= \frac{1}{N} \sum_{i=1}^N u_i^2 \left( \frac{1}{N} \sum_{j=1}^N { H}_{s}(x_i, x_j) \right),
\quad
q^{(2)}_{s}(u)
:= \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N { H}_s(x_i,x_j) (u_i - u_j)^2.
\end{align}
We will show that $q^{(0)}_{s}(u) \approx p \frac{1}{N}\|u\|^2$ by concentration of the independent sum $\frac{1}{N} \sum_{j=1}^N { H}_{s}(x_i, x_j)$;
meanwhile, $ q^{(2)}_{s}(u) \ge 0$ by definition, and it is $ O(s)$ when $u$ is an eigenvector with $\|u \|^2 = N$.
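The decomposition $q_s(u) = q^{(0)}_{s}(u) - q^{(2)}_{s}(u)$ is a purely algebraic identity, using only the symmetry of $H_s$. The sketch below verifies it with the exact heat kernel of the unit circle (assuming numpy; $N = 400$ and $s = 5 \times 10^{-3}$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, s = 400, 5e-3
theta = np.sort(rng.uniform(0, 2 * np.pi, N))

# Geodesic distance on S^1 and the wrapped-Gaussian heat kernel matrix H_ij.
ds = np.abs(theta[:, None] - theta[None, :])
ds = np.minimum(ds, 2 * np.pi - ds)
n = np.arange(-3, 4)
H = ((4 * np.pi * s) ** -0.5
     * np.exp(-(ds[..., None] + 2 * np.pi * n) ** 2 / (4 * s))).sum(-1)

u = rng.standard_normal(N)
# q_s(u), q_s^(0)(u), q_s^(2)(u) as defined in the text.
q = (u[:, None] * u[None, :] * H).sum() / N ** 2
q0 = (u ** 2 * H.mean(axis=1)).sum() / N
q2 = (H * (u[:, None] - u[None, :]) ** 2).sum() / (2 * N ** 2)
print(abs(q - (q0 - q2)) < 1e-10 * max(1.0, abs(q)))  # True
```

Expanding the square in $q^{(2)}_{s}$ and using $H_s(x_i,x_j) = H_s(x_j,x_i)$ recovers $q_s = q^{(0)}_{s} - q^{(2)}_{s}$ exactly, which the numerical check confirms up to rounding.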
\begin{lemma}\label{lemma:qs0-concen}
Under Assumptions \ref{assump:M-p} (A1),
$p$ being uniform on ${\cal M}$.
Suppose as $N \to \infty$, $s \to 0+$ and $s^{d/2} = \Omega( \frac{\log N}{N})$.
Then, when $N$ is large enough, w.p. $> 1- 2N^{-9}$,
\[
q^{(0)}_{s}(u) = \frac{1}{N} \|u\|^2 ( p + O_{\cal M}(\sqrt{\frac{\log N}{ N s^{d/2}}})),
\quad
\forall u \in \R^N.
\]
The notation $O_{\cal M}(\cdot)$
indicates that the constant depends on ${\cal M}$ and is uniform for all $u$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:qs0-concen}]
By definition,
$ q^{(0)}_{s}(u) = \frac{1}{N} \sum_{i=1}^N u_i^2 (D_s)_i$,
where $
( D_s)_i := \frac{1}{N} \sum_{j=1}^N { H}_s(x_i, x_j)$,
and $\{ ( D_s)_i \}_{i=1}^N$ are $N$ positive valued random variables.
It suffices to show that with large enough $N$, w.p. indicated in the lemma,
\begin{equation}\label{eq:concen-Ds}
(D_s)_i = p + O_{\cal M}(\sqrt{\frac{\log N}{ N s^{d/2}}}), \quad \forall i=1, \cdots, N.
\end{equation}
This can be proved using a concentration argument similar to that in the proof of Lemma \ref{lemma:Di-concen} 1),
where we use the boundedness of the heat kernel \eqref{eq:H-global-boundedness} in Lemma \ref{lemma:heat}.
The proof of \eqref{eq:concen-Ds} is given in Appendix \ref{app:proofs-step1}.
Note that \eqref{eq:concen-Ds} is a property of the random variables ${ H}_s(x_i, x_j)$ only,
independent of the vector $u$. Thus the threshold of large $N$ in the lemma
and the constant in the big-$O$ depend on ${\cal M}$ and are uniform for all $u$.
\end{proof}
\begin{lemma}\label{lemma:qs2-UB}
Under Assumptions \ref{assump:M-p} ($p$ can be non-uniform),
$h$ being Gaussian,
let $0 < \alpha < 1$ be a fixed constant.
Suppose $\epsilon \to 0 +$ as $N \to \infty$. Then, with sufficiently small $\epsilon$ and for any realization of $X$,
\begin{equation}\label{eq:q2-eps-UB}
0 \le q^{(2)}_{ \epsilon}(u)
= \left( 1 + {O}( \epsilon (\log \frac{1}{ \epsilon})^2) \right) \frac{u^T(D-W) u}{N^2}
+ \frac{\| u \|^2}{N} O(\epsilon^{3}),
\quad
\forall u \in \R^N,
\end{equation}
and
\begin{equation}\label{eq:q2-alphaeps-UB}
0 \le q^{(2)}_{ \alpha \epsilon}(u)
\le 1.1 \alpha^{-d/2} \frac{u^T(D-W) u}{N^2} + \frac{\| u \|^2}{N} O(\epsilon^{3}),
\quad
\forall u \in \R^N.
\end{equation}
The constants in big-$O$ only depend on ${\cal M}$ and are uniform for all $u$ and $\alpha$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:qs2-UB}]
For any $u \in \R^N$,
$q^{(2)}_{ \epsilon}(u) = \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N { H}_\epsilon (x_i,x_j) (u_i - u_j)^2 \ge 0$.
Since $\epsilon = o(1)$, we take $t$ in Lemma \ref{lemma:heat} to be $\epsilon$;
when $\epsilon < \epsilon_0$, the three equations there hold.
By \eqref{eq:H-eps-truncate},
truncating at a Euclidean ball of radius $\delta_\epsilon = \sqrt{ 6(10 + \frac{d}{2}) \epsilon \log{\frac{1}{\epsilon}}}$,
\begin{align*}
q^{(2)}_{ \epsilon}(u)
= \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N { H}_\epsilon (x_i,x_j) {\bf 1}_{\{ x_j \in B_{\delta_\epsilon}(x_i) \}}
(u_i - u_j)^2
+ O(\epsilon^{10}) \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N (u_i - u_j)^2.
\end{align*}
Using that $\frac{1}{N^2} \sum_{i,j=1}^N (u_i - u_j)^2 \le \frac{2}{N} \| u \|^2$,
and applying \eqref{eq:H-eps-local} with the shorthand that $\tilde{O}(\epsilon) $ stands for $ {O}( \epsilon (\log \frac{1}{ \epsilon})^2) $,
\begin{align*}
q^{(2)}_{ \epsilon}(u)
& = \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N
\left( K_\epsilon( x_i,x_j) (1 + \tilde{O}( \epsilon)) + O(\epsilon^3) \right)
{\bf 1}_{\{ x_j \in B_{\delta_\epsilon}(x_i) \}}
(u_i - u_j)^2
+ O(\epsilon^{10}) \frac{\| u \|^2}{N} \\
& = (1 + \tilde{O}( \epsilon) )
\frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N K_\epsilon( x_i,x_j) {\bf 1}_{\{ x_j \in B_{\delta_\epsilon}(x_i) \}} (u_i - u_j)^2
+ O(\epsilon^{3}) \frac{\| u \|^2}{N}.
\end{align*}
By the truncation argument for $K_\epsilon (x_i,x_j)$, we have that
\begin{equation}\label{eq:W-form-truncate}
\frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N K_\epsilon (x_i,x_j) {\bf 1}_{\{ x_j \in B_{\delta_\epsilon}(x_i) \}} (u_i - u_j)^2
= \frac{u^T(D-W)u}{N^2} + \frac{\| u \|^2}{N} O(\epsilon^{10}).
\end{equation}
Putting together, we have
\[
q^{(2)}_{ \epsilon}(u)
= (1 + \tilde{O}( \epsilon) )
\left( \frac{u^T(D-W)u}{N^2} + \frac{\| u \|^2}{N} O(\epsilon^{10})
\right)
+ O(\epsilon^{3}) \frac{\| u \|^2}{N},
\]
which proves \eqref{eq:q2-eps-UB}.
To prove \eqref{eq:q2-alphaeps-UB}, since $\alpha < 1$ is a fixed positive constant, $ 0 < \alpha \epsilon < \epsilon < \epsilon_0$,
we then apply Lemma \ref{lemma:heat} with $t$ therein being $\alpha \epsilon$.
With a truncation at $\delta_{\alpha \epsilon}$-Euclidean ball,
and by \eqref{eq:H-eps-local},
\begin{align*}
q^{(2)}_{\alpha \epsilon}(u)
& = \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N
\left( K_{\alpha \epsilon}( x_i,x_j) (1 + \tilde{O}( \alpha \epsilon)) + O( \alpha^3 \epsilon^3) \right)
{\bf 1}_{\{ x_j \in B_{\delta_{\alpha\epsilon}}(x_i) \}}
(u_i - u_j)^2
+ \frac{\| u \|^2}{N} O(\epsilon^{10}) \\
&=
(1 + \tilde{O}( \epsilon))
\frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N
K_{\alpha \epsilon}( x_i,x_j)
{\bf 1}_{\{ x_j \in B_{\delta_{\alpha\epsilon}}(x_i) \}}
(u_i - u_j)^2
+ \frac{\| u \|^2}{N} O(\epsilon^{3}).
\end{align*}
Suppose $\epsilon$ is sufficiently small such that $1+\tilde{O}(\epsilon)$ is less than 1.1.
Note that
\begin{equation}\label{eq:K-alphaeps-K-eps}
K_{\alpha \epsilon}(x,y) = \frac{1}{(4 \pi \alpha \epsilon)^{d/2}} e^{-\frac{ \| x - y\|^2}{4 \alpha \epsilon}}
\le \frac{1}{\alpha^{d/2}} \frac{1}{(4 \pi \epsilon)^{d/2}} e^{-\frac{ \| x - y \|^2}{4 \epsilon}}
= \alpha^{-d/2} K_{\epsilon} (x,y),
\quad \forall x,y \in {\cal M},
\end{equation}
then, by that ${\bf 1}_{\{ x_j \in B_{\delta_{\alpha\epsilon}}(x_i) \}} \le {\bf 1}_{\{ x_j \in B_{\delta_{\epsilon}}(x_i) \}}$,
and again with \eqref{eq:W-form-truncate},
\begin{align*}
q^{(2)}_{\alpha \epsilon}(u)
& \le
1.1 \frac{1}{2} \frac{1}{N^2} \sum_{i,j=1}^N
\alpha^{-d/2} K_{\epsilon} (x_i,x_j)
{\bf 1}_{\{ x_j \in B_{\delta_{\epsilon}}(x_i) \}}
(u_i - u_j)^2
+ \frac{\| u \|^2}{N} O(\epsilon^{3}) \\
& =
1.1 \alpha^{-d/2}
\left(
\frac{u^T(D-W)u}{N^2} + \frac{\| u \|^2}{N} O(\epsilon^{10})
\right)
+ \frac{\| u \|^2}{N} O(\epsilon^{3}),
\end{align*}
and this proves \eqref{eq:q2-alphaeps-UB}.
\end{proof}
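The kernel comparison \eqref{eq:K-alphaeps-K-eps} used in the proof can be sanity-checked numerically; the following sketch (illustrative only; the ambient dimension $d = 3$, the bandwidth, and the squared distances are arbitrary choices) confirms the pointwise bound for the Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
d, eps, alpha = 3, 0.05, 0.3            # arbitrary illustrative choices

def K(t, r2):
    """Gaussian kernel K_t evaluated at squared distance r2."""
    return np.exp(-r2 / (4.0 * t)) / (4.0 * np.pi * t) ** (d / 2)

r2 = rng.uniform(0.0, 1.0, 1000)        # squared distances ||x - y||^2
lhs = K(alpha * eps, r2)                # K_{alpha * eps}(x, y)
rhs = alpha ** (-d / 2) * K(eps, r2)    # alpha^{-d/2} K_eps(x, y)
print(np.all(lhs <= rhs))               # True: the bound holds pointwise
```

The inequality holds because $\alpha < 1$ makes the exponent in $K_{\alpha\epsilon}$ more negative, while the prefactor is exactly $\alpha^{-d/2}$ times that of $K_\epsilon$.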
\begin{proof}[Proof of Proposition \ref{prop:eigvalue-LB-crude}]
For fixed $k_{max}$, since $\gamma_K < \mu_K$, define
\begin{equation}\label{eq:def-detla-const}
\delta := \frac{0.5 \gamma_K}{\mu_K} < 0.5,
\end{equation}
Then $\delta > 0$ is a fixed constant determined by ${\cal M}$ and $k_{max}$.
For $\epsilon > 0$, let
\[
r: = \frac{\delta \epsilon}{2},
\quad
t = \epsilon - 2r = (1-\delta) \epsilon.
\]
For $L_{un} v_k = \lambda_k v_k$, where $v_k$ are normalized s.t.
\begin{equation}\label{eq:vk-normalize-un}
\frac{1}{N} v_k^T v_l = \delta_{kl}, \quad 1 \le k,l \le N,
\end{equation}
let $f_k = I_r [ v_k ]$, $k=1, \cdots, K$, then $f_k \in C^\infty({\cal M}) \subset H$.
Because $\epsilon^{d/2+2} > c_K \frac{\log N}{N}$, and $\epsilon = o(1)$,
$\epsilon^{d/2} = \Omega( \frac{\log N }{N} )$.
Thus, under the assumption of the current proposition,
the condition needed in Proposition \ref{prop:eigvalue-UB} is satisfied,
and then when $N$ is sufficiently large,
there is an event $E_{UB}$ which happens w.p. $> 1- 4K^2 N^{-10}$, under which
\begin{equation}\label{eq:lambdak-UB-hold}
\lambda_k \le \mu_k + 0.1 \mu_K \le 1.1 \mu_K, \quad 1 \le k \le K.
\end{equation}
We first show that $\{ f_j\}_{j=1}^K$ are linearly independent by considering $\langle f_k, f_l \rangle$.
By definition, for $1 \le k \le K$,
\[
\langle f_k, f_k \rangle = q_{2r}( v_k) = q^{(0)}_{\delta \epsilon }( v_k) - q^{(2)}_{\delta \epsilon }( v_k),
\]
and for $k \neq l$, $1 \le k , l \le K$,
\[
\langle (f_k \pm f_l), (f_k \pm f_l) \rangle = q_{2r}( v_k \pm v_l)
= q^{(0)}_{\delta \epsilon }( v_k \pm v_l ) - q^{(2)}_{\delta \epsilon }( v_k \pm v_l).
\]
Because $s = \delta \epsilon$, under the condition of the proposition, $s$ satisfies the condition in Lemma \ref{lemma:qs0-concen},
and thus,
with sufficiently large $N$,
there is an event $E^{(0)}$ which happens w.p. $> 1- 2N^{-9}$, under which
\[
q^{(0)}_{\delta \epsilon }( v_k)
= p+ O( \sqrt{\frac{\log N}{ N \epsilon^{d/2}}}), \quad 1 \le k \le K;
\quad
q^{(0)}_{\delta \epsilon }( v_k \pm v_l)
= 2p + O( \sqrt{\frac{\log N}{ N \epsilon^{d/2}}}), \quad k \neq l, 1 \le k,l \le K,
\]
where we used that the factor $\delta^{-d/2}$ is a fixed constant.
Meanwhile, applying \eqref{eq:q2-alphaeps-UB} in Lemma \ref{lemma:qs2-UB} where $\alpha = \delta$,
and note that
\[
\frac{v_k^T(D-W) v_k}{N^2} = p \epsilon \lambda_k;
\quad
\frac{ (v_k \pm v_l)^T(D-W) (v_k \pm v_l)}{N^2} = p \epsilon (\lambda_k + \lambda_l),
\quad k \neq l, 1 \le k, l \le K,
\]
we have that
\[
\begin{split}
q^{(2)}_{\delta \epsilon }( v_k)
& =
O( \delta^{-d/2} ) p \epsilon \lambda_k + O(\epsilon^{3}) ,
\quad 1 \le k \le K, \\
q^{(2)}_{\delta \epsilon }( v_k \pm v_l)
&
= O( \delta^{-d/2}) p \epsilon ( \lambda_k + \lambda_l) + 2 O(\epsilon^{3}) ,
\quad k \neq l,
\end{split}
\]
and since $\lambda_k, \, \lambda_l \le 1.1 \mu_K $, which is a fixed constant, and $\delta$ is fixed as well,
we have that
\begin{equation}\label{eq:q2-delta-eps-vkvl}
q^{(2)}_{\delta \epsilon }( v_k) = O( \epsilon), \quad 1 \le k \le K;
\quad
q^{(2)}_{\delta \epsilon }( v_k \pm v_l)= O(\epsilon), \quad k \neq l, \, 1\le k, l \le K.
\end{equation}
Putting together, we have that
\begin{equation}\label{eq:fk-near-iso}
\begin{split}
\langle f_k, f_k \rangle
&= p + O( \sqrt{\frac{\log N}{ N \epsilon^{d/2}}} , \epsilon), \quad 1 \le k \le K, \\
\langle f_k, f_l \rangle
& = \frac{1}{4}( q_{\delta \epsilon}( v_k + v_l) - q_{\delta \epsilon}( v_k - v_l) )
= O( \sqrt{\frac{\log N}{ N \epsilon^{d/2}}} , \epsilon),
\quad k \neq l, \, 1\le k, l \le K.
\end{split}
\end{equation}
This proves linear independence of $\{ f_j\}_{j=1}^K$ when $N$ is large enough,
since $O( \sqrt{\frac{\log N}{ N \epsilon^{d/2}}} , \epsilon) = o(1)$.
We now consider the first $K$ eigenvalues of ${\cal L}_t$, $t = (1-\delta) \epsilon$.
For each $2 \le k \le K$, let $L_k = \text{Span}\{ f_1, \cdots, f_k\}$ be a $k$-dimensional subspace in $H$,
then by \eqref{eq:eig-Lt-minmax},
\begin{equation}\label{eq:eigen-relation-2}
1 - e^{-(1-\delta) \epsilon \mu_k }
\le
\sup_{ f \in L_k, \, \|f\|^2 \neq 0}
\frac{ \langle f , {\cal L}_t f \rangle}{\langle f, f \rangle}
= \sup_{ f \in L_k, \, \|f\|^2 \neq 0}
\frac{ \langle f , f \rangle - \langle f , Q_t f \rangle }{\langle f, f \rangle}.
\end{equation}
For any $f \in L_k$, $ \|f\|^2 \neq 0$, there is $c \in \R^k$, $c \neq 0$, such that $ f = \sum_{j=1}^k c_j f_j $.
Thus
\[
f = \sum_{j=1}^k c_j I_r [v_j]
= I_r [ \sum_{j=1}^k c_j v_j] = I_r [v],
\quad
v : = \sum_{j = 1}^k c_j v_j.
\]
Because $v_j$ are orthogonal, $\| v_j \|^2 = N$,
we have that
\[
\frac{\| v \|^2}{N} = \| c \|^2,
\quad
\frac{ v^T (D-W) v}{ N^2} = \sum_{j=1}^k c_j^2 ( p \epsilon \lambda_j ) \le \lambda_k p \epsilon \| c \|^2.
\]
By definition,
$\langle f , f \rangle = q_{\delta \epsilon}( v)$,
and
$ \langle f , Q_t f \rangle = q_{\epsilon}(v)$.
We first upper bound the numerator of the r.h.s. of \eqref{eq:eigen-relation-2}.
By that $q^{(2)}_{\delta \epsilon}( v) \ge 0$,
\begin{align}
\langle f , f \rangle - \langle f , Q_t f \rangle
& = q_{\delta \epsilon}( v) - q_{\epsilon}(v)
= q^{(0)}_{\delta \epsilon}( v) - q^{(2)}_{\delta \epsilon}( v) - q^{(0)}_{\epsilon}(v) +q^{(2)}_{\epsilon}(v) \nonumber \\
& \le ( q^{(0)}_{\delta \epsilon}( v) - q^{(0)}_{\epsilon}(v) ) + q^{(2)}_{\epsilon}(v).
\label{eq:proof-numerator-1}
\end{align}
We have already obtained the good event $E^{(0)}$ when applying Lemma \ref{lemma:qs0-concen} with $s= \delta \epsilon$.
We apply the lemma again to $s = \epsilon$, which gives that with sufficiently large $N$
there is an event $E^{(1)}$ which happens w.p. $> 1- 2 N^{-9}$,
and then under $E^{(0)} \cap E^{(1)}$,
\begin{equation}\label{eq:q0-delta-eps-E0-E1}
q^{(0)}_{ \delta \epsilon } ( v) = \| c \|^2 ( p + O_{\cal M}( \sqrt{ \delta^{-d/2} \frac{\log N}{N \epsilon^{d/2}} })),
\quad
q^{(0)}_{ \epsilon } ( v) = \| c \|^2 ( p + O_{\cal M}( \sqrt{ \frac{\log N}{N \epsilon^{d/2}} })).
\end{equation}
We track the constant dependence here: the constant in $O_{\cal M}(\cdot)$ in Lemma \ref{lemma:qs0-concen} is only depending on ${\cal M}$ (and not on $K$),
thus we use the notation $O_{{\cal M}}(\cdot)$ in \eqref{eq:q0-delta-eps-E0-E1}
and below to emphasize that the constant is ${\cal M}$-dependent only and independent from $K$.
Then \eqref{eq:q0-delta-eps-E0-E1} gives that
\[
q^{(0)}_{\delta \epsilon}( v) - q^{(0)}_{\epsilon}(v)
= \| c \|^2 \delta^{-d/4} O_{\cal M}(\sqrt{ \frac{\log N}{N \epsilon^{d/2}} }).
\]
The UB of $q^{(2)}_{\epsilon}(v)$ follows from \eqref{eq:q2-eps-UB} in Lemma \ref{lemma:qs2-UB},
with the shorthand that $\tilde{O}(\epsilon) $ stands for $ {O}( \epsilon (\log \frac{1}{ \epsilon})^2) $,
\[
q^{(2)}_{\epsilon}(v) = \frac{ v^T (D-W) v}{ N^2} (1 + \tilde{O}(\epsilon)) + \| c \|^2 O( \epsilon^{3})
\le \epsilon \| c \|^2 ( \lambda_k p (1 + \tilde{O}(\epsilon)) + O( \epsilon^{2}) ).
\]
Thus, \eqref{eq:proof-numerator-1} continues as
\begin{equation}\label{eq:numerator-UB}
\langle f , f \rangle - \langle f , Q_t f \rangle
\le \epsilon \| c \|^2 \left(
\lambda_k p (1 + \tilde{O}(\epsilon)) + O( \epsilon^{2})
+ \delta^{-d/4} O_{\cal M} ( \frac{1}{\epsilon}\sqrt{ \frac{\log N}{N \epsilon^{d/2}} })
\right).
\end{equation}
Next we lower bound the denominator $\langle f, f \rangle$.
Here we use \eqref{eq:q2-alphaeps-UB} in Lemma \ref{lemma:qs2-UB}, which gives that
\[
0 \le q^{(2)}_{\delta \epsilon}( v)
\le \Theta( \delta^{-d/2} ) \frac{ v^T (D-W) v}{ N^2} + \| c \|^2 O(\epsilon^{3})
\le \epsilon \| c \|^2 \left( \lambda_k p \Theta(\delta^{-d/2}) + O(\epsilon^{2}) \right).
\]
Note that we work under the event $E_{UB}$, so that the eigenvalue UB \eqref{eq:lambdak-UB-hold} holds,
and thus
$\lambda_k p \Theta(\delta^{-d/2}) + O(\epsilon^{2}) = O(1)$.
Together with that $\delta$ is a fixed constant, we have that
\[
q^{(2)}_{\delta \epsilon}( v) = \| c \|^2 O( \epsilon ).
\]
Then, again under $E^{(1)}$,
\[
\langle f, f \rangle
= q^{(0)}_{\delta \epsilon}( v) - q^{(2)}_{\delta \epsilon}( v)
= \| c \|^2 \left ( p + O( \sqrt{ \delta^{-d/2} \frac{\log N}{N \epsilon^{d/2}} }) - O( \epsilon ) \right)
\ge \| c \|^2 \left ( p - O( \epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) \right).
\]
Putting together, and by that $\lambda_k \le 1.1 \mu_K$, we have that
\[
\frac{\langle f , f \rangle - \langle f , Q_t f \rangle}{ \langle f, f \rangle}
\le
\frac{ \epsilon \left(
\lambda_k p + \tilde{O}(\epsilon) + \delta^{-d/4}O_{\cal M} ( \frac{1}{\epsilon}\sqrt{ \frac{\log N}{N \epsilon^{d/2}} })
\right)}
{ p - O( \epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) }
\le
\epsilon \left(
\lambda_k + \tilde{O}(\epsilon) + \frac{C}{\epsilon}\sqrt{ \frac{\log N}{N \epsilon^{d/2}} }
\right),
\]
where
$C = c({\cal M}) \delta^{-d/4}$, and $c({\cal M})$ is a constant only depending on ${\cal M}$.
We set
\begin{equation*}
c_K
: = ( \frac{ C}{0.1 \gamma_K })^2
= ( \frac{ c({\cal M}) }{0.1 })^2 \delta^{-d/2} \gamma_K^{-2},
\end{equation*}
and since we assume $\epsilon^{d/2+2} > c_K \frac{\log N}{N}$ in the current proposition,
we have that
$\frac{C}{\epsilon}\sqrt{ \frac{\log N}{N \epsilon^{d/2}}} < 0.1 \gamma_K $.
Then, comparing to l.h.s. of \eqref{eq:eigen-relation-2}, we have that
\[
1 - e^{-(1-\delta) \epsilon \mu_k }
\le
\frac{\langle f , f \rangle - \langle f , Q_t f \rangle }{ \langle f, f \rangle}
\le
\epsilon \left(
\lambda_k + \tilde{O}(\epsilon) + 0.1 \gamma_K
\right).
\]
By the relation that $1- e^{-x} \ge x - x^2$ for any $x \ge 0$,
$1 - e^{-(1-\delta) \epsilon \mu_k } \ge \epsilon (1-\delta) \left( \mu_k - (1-\delta) \epsilon \mu_k^2 \right)$,
and when $\epsilon$ is sufficiently small s.t. $ \epsilon \mu_k^2 \le \epsilon (1.1 \mu_K)^2 < 0.1 \gamma_K$,
\[
1 - e^{-(1-\delta) \epsilon \mu_k }
\ge
\epsilon (1-\delta) \left( \mu_k - 0.1 \gamma_K \right) > 0.
\]
Note that for $k \ge 2 $,
$\mu_k \ge \mu_2 \ge 2 \gamma_K > 0$, because $\mu_1 =0$.
Thus,
when
$\epsilon$ is sufficiently small such that the $\tilde{O}(\epsilon) $ term is less than $0.1 \gamma_K$,
under the good event $E^{(0)} \cap E^{(1)} \cap E_{UB}$, which happens w.p. $> 1- 4K^2 N^{-10} - 4 N^{-9}$, we have that
\[
0 < (1-\delta) ( \mu_k - 0.1 \gamma_K )
\le
\lambda_k + \tilde{O}(\epsilon) + 0.1 \gamma_K
< \lambda_k + 0.2 \gamma_K.
\]
Recall that by definition \eqref{eq:def-detla-const},
$\delta \mu_K = 0.5 \gamma_K$,
then $\delta \mu_k \le \delta \mu_K = 0.5 \gamma_K$,
also $ 0 < \delta < 0.5$.
Re-arranging the terms gives that
$ \mu_k < \lambda_k + 0.8 \gamma_K$.
This can be verified for all $ 2 \le k \le K$;
note that the good events $E^{(0)}$ and $E^{(1)}$ are w.r.t. $X$,
$E_{UB}$ is constructed for fixed $k_{max}$,
and none of them is for a specific $k \le K$.
\end{proof}
\subsection{Random-walk graph Laplacian}
The counterpart result for the random-walk graph Laplacian is the following proposition.
It replaces Proposition \ref{prop:eigvalue-UB} with Proposition \ref{prop:eigvalue-UB-rw} in obtaining the eigenvalue UB in the analysis,
and consequently the high-probability bound differs slightly.
\begin{proposition}[Initial crude eigenvalue LB of $L_{rw}$]
\label{prop:eigvalue-LB-crude-rw}
Under the same conditions and setting
of ${\cal M}$, with $p$ uniform and $h$ Gaussian,
and with $k_{max}$, $\mu_k$, $\epsilon$ the same
as in Proposition \ref{prop:eigvalue-LB-crude}.
Then, for sufficiently large $N$,
w.p. $> 1- 4K^2 N^{-10} - 6N^{-9}$,
$
\lambda_k (L_{rw})
> \mu_k - \gamma_K$,
for $k=2,\cdots, K$.
\end{proposition}
The proof is similar to that of Proposition \ref{prop:eigvalue-LB-crude} and left to Appendix \ref{app:proofs-step1}.
The difference lies in that the empirical eigenvectors $v_k$ are $D$-orthonormal rather than orthonormal,
and the degree concentration Lemma \ref{lemma:Di-concen} is used to relate $\frac{\|v\|^2}{N}$ with $\frac{1}{N^2} v^T D v$
for arbitrary vector $v$.
\section{Steps 2-3 and eigen-convergence}\label{sec:step23}
\begin{figure}[t]
\centering
\includegraphics[height=.14\linewidth]{eig_diag}
\caption{
\scriptsize
Population eigenvalues $\mu_k$ of $-\Delta$,
and empirical eigenvalues $\lambda_k$ of the graph Laplacian matrix $L_N$,
where $L_N$ can be $L_{un}$ or $L_{rw}$.
The positive integer $k_{max}$ is fixed,
and the constant $\gamma_K$ is half of the minimum first-$K$ eigen-gaps, defined as in \eqref{eq:def-gamma-K}.
Eigenvalue UB and initial LB are proved for $k \le K$, which guarantees \eqref{eq:eigen-stay-away}.
The result extends to eigenvalues of multiplicity greater than one by defining $\gamma_K$ as in \eqref{eq:def-gamma-M-multiplicity}.
}
\label{fig:crude}
\end{figure}
In this section, we obtain the eigen-convergence rates of $L_{un}$ and $L_{rw}$ from the initial crude eigenvalue bound in Step 1.
We first carry out Steps 2-3 for $L_{un}$;
the proof for $L_{rw}$ is similar.
\subsection{Step 2: eigenvector consistency}
In Step 1, the crude eigenvalue bounds (the UB already has the final form of the rate; the LB is crude) give that, for fixed $k_{max}$
and at large $N$, each $\lambda_k$ falls into the interval $(\mu_k - \gamma_K, \mu_k + \gamma_K)$,
where $\gamma_K$ is less than half of the smallest of the eigen-gaps $(\mu_2 - \mu_1)$, ..., $ (\mu_{k_{max}+1} - \mu_{k_{max}})$,
as illustrated in Fig. \ref{fig:crude}.
This means that $\lambda_k$ is separated from the neighboring $\mu_{k-1}$ and $\mu_{k+1}$ by an $O(1)$ distance.
This $O(1)$ initial separation is enough for proving eigenvector consistency up to the point-wise rate,
which is a standard argument; see e.g. the proof of Theorem 2.6 part 2) in \cite{calder2019improved}.
Below we provide an informal explanation and then the formal statement in Proposition \ref{prop:step2}, with a proof for completeness.
We first give an illustrative informal derivation.
Take $k=2$ for example: let $L_N = L_{un}$, $L_N u_k = \lambda_k u_k$, and suppose we want to show that $ u_2$ and $\rho_X \psi_2$ are aligned.
Define the residual vector
\[
r_2: = L_N (\rho_X \psi_2) - \rho_X (-\Delta) \psi_2 \in \R^N,
\quad
r_2(i) = \left( L_N (\rho_X \psi_2) \right)(i) - (-\Delta) \psi_2 (x_i);
\]
the point-wise rate gives an $L^\infty$ bound on the residual vector $r_2$, so suppose $\|r_2\|_2 \le \varepsilon \| \rho_X \psi_2 \|_2$. Meanwhile,
for any $l \neq 2$, $1 \le l \le N$, we bound $|\lambda_l - \mu_2|$:
the crude lower bound on the eigenvalue $\lambda_3$ gives that
\[
\lambda_3 > \mu_2 + \gamma_K,
\]
where $\gamma_K > 0$ is an $O(1)$ constant determined by $k_{max}$ and ${\cal M}$.
Because the empirical eigenvalues are sorted, $\lambda_l$ for $l \ge 3$ is also at least $\gamma_K$ away from $\mu_2$;
and $\lambda_1 = 0$ is at distance $\mu_2 \ge 2\gamma_K$ from $\mu_2$.
As a result,
\[
| \lambda_l - \mu_2 | > \gamma_K > 0,
\quad l \neq 2, \quad 1 \le l \le N.
\]
Then we use the relation that for each $l \neq 2$,
$u_l^T r_2 = u_l^T(L_N (\rho_X \psi_2) - \mu_2 \rho_X \psi_2) =
(\lambda_l - \mu_2) u_l^T(\rho_X \psi_2)$,
which gives that
\[
| u_l^T(\rho_X \psi_2 ) | = \frac{ | u_l^T r_2 |}{|\lambda_l - \mu_2| }
\le \frac{ \varepsilon }{ \gamma_K} \| u_l\|_2 \| \rho_X \psi_2 \|_2.
\]
This shows that $\rho_X \psi_2 $ has only $O(\varepsilon)$ alignment with all the eigenvectors other than $u_2$,
and since $\{ u_1, \cdots, u_N \}$ form an orthogonal basis of $\R^N$,
this guarantees a $1-O(\varepsilon)$ alignment between $\rho_X \psi_2 $ and $u_2$.
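The informal residual argument above can be sketched numerically for a generic symmetric matrix in place of $L_N$ (an illustration only; the matrix, perturbation size, and target index $k$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n))
L = (A + A.T) / 2.0                            # generic symmetric stand-in for L_N
lam, U = np.linalg.eigh(L)                     # exact eigenpairs (columns of U)

k = 3                                          # target index, arbitrary
mu = lam[k]
v = U[:, k] + 1e-3 * rng.standard_normal(n)    # approximate eigenvector
v /= np.linalg.norm(v)
r = L @ v - mu * v                             # residual, analogous to r_2

gap = np.min(np.abs(np.delete(lam, k) - mu))   # spectral gap around mu
proj = U.T @ v                                 # coefficients of v in the eigenbasis
bound = np.abs(U.T @ r) / gap                  # |u_l^T r| / gap
others = np.delete(np.arange(n), k)
print(np.all(np.abs(proj[others]) <= bound[others] + 1e-12))   # True
```

The identity $u_l^T r = (\lambda_l - \mu)\, u_l^T v$ makes the per-coordinate bound $|u_l^T v| \le |u_l^T r| / \text{gap}$ hold exactly, which is what the assertion checks.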
To proceed, we use the point-wise rate of the graph Laplacian with $C^2$ kernel $h$, as stated in the next theorem.
The analysis of point-wise convergence was given in \cite{singer2006graph} and \cite{cheng2020convergence}:
The original theorem in \cite{singer2006graph} considers the normalized graph Laplacian
$(I -D^{-1}W)$. The analysis is similar for $(D-W)$ and leads to the same rate,
which was derived in \cite{cheng2020convergence} under the setting of variable kernel bandwidth.
These previous works consider a fixed point $x_0$ on ${\cal M}$,
and since the concentration result has exponentially high probability,
it directly gives the version of uniform error bound at every data point $x_i$,
which is needed here.
\begin{theorem}[\cite{singer2006graph,cheng2020convergence}]\label{thm:pointwise-rate-C2h}
Under Assumptions \ref{assump:M-p} and \ref{assump:h-C2-nonnegative},
if as $N \to \infty$, $\epsilon \to 0+ $,
$\epsilon^{d/2+1} = \Omega( \frac{\log N}{N} ) $,
then for any $f \in C^4({\cal M})$,
1) When $N$ is large enough, w.p. $> 1-4 N^{-9}$,
\[
\frac{1}{\epsilon \frac{m_2}{2 m_0} } \left( (I -D^{-1}W) (\rho_X f) \right) _i
= - \Delta_{p^2} f(x_i) + \varepsilon_i,
\quad
\sup_{1 \le i \le N} |\varepsilon_i| = O(\epsilon ) + O( \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} ).
\]
2) When $N$ is large enough, w.p. $> 1-2N^{-9}$,
\[
\frac{1}{\epsilon \frac{m_2}{2} p(x_i) N } \left( (D-W) (\rho_X f)\right)_i
= - \Delta_{p^2} f(x_i) + \varepsilon_i,
\quad
\sup_{1 \le i \le N} |\varepsilon_i| = O(\epsilon ) + O( \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} ).
\]
The constants in the big-O notations depend on ${\cal M}$, $p$ and the $C^4$ norm of $f$.
\end{theorem}
\noindent
Note that Theorem \ref{thm:pointwise-rate-C2h} holds for non-uniform $p$,
while in our eigen-convergence analysis of graph Laplacian with $W$ in below,
we only use the result when $p$ is uniform.
Meanwhile, similar to Theorem \ref{thm:form-rate},
Assumption \ref{assump:h-C2-nonnegative}(C3) may be relaxed
for Theorem \ref{thm:pointwise-rate-C2h} to hold,
c.f. Remark \ref{rk:non-nagativity-not-needed}.
\begin{proof}[Proof of Theorem \ref{thm:pointwise-rate-C2h}]
Consider the $N$ events on which $\varepsilon_i $ is less than the error bound.
For the $i$-th event, conditioning on $x_i$,
Theorem 3.10 in \cite{cheng2020convergence} can be directly used to show that the event holds
w.p. $> 1-4N^{-10}$ for the case 1) random-walk graph Laplacian.
For the case 2) un-normalized graph Laplacian,
adopting the same technique of Theorem 3.8 in \cite{cheng2020convergence}
proves the same rate as for the fixed-bandwidth kernel
(Theorem 3.10 is the fixed-bandwidth version of Theorem 3.7 therein),
and gives that the event holds
w.p. $> 1-2N^{-10}$.
Specifically, the proof is by showing the concentration of
$\frac{1}{\epsilon N} \sum_{j=1}^N K_\epsilon (x_i, x_j) (f(x_j) - f(x_i))$, which is an independent summation conditioned on $x_i$.
The r.v. $H_j : = \frac{1}{\epsilon}K_\epsilon (x_i, x_j) (f(x_j) - f(x_i))$, $j \neq i$,
has expectation $\mathbb{E} H_j = \frac{m_2}{2} p(x_i) \Delta_{p^2} f(x_i) + O_{f,p}(\epsilon)$,
and $\mathbb{E} H_j^2$ can be shown to be bounded by $\Theta(\epsilon^{-d/2-1})$,
and $|H_j|$ is also bounded by $\Theta(\epsilon^{-d/2-1})$,
following the same calculation as in the proof of Theorem 3.8 \cite{cheng2020convergence}.
This shows that the bias error is $O(\epsilon)$ and the variance error is $O( \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} )$, by the classical Bernstein inequality.
Same as in Theorem \ref{thm:form-rate},
$C^2$ regularity and decay up to 2nd derivative of $h$ are enough here.
Strictly speaking, the analysis in \cite{cheng2020convergence} is for the ``$\frac{1}{N-1}\sum_{j \neq i, j= 1}^N$'' summation
and not the ``$\frac{1}{N}\sum_{j \neq i, j= 1}^N$'' one here.
However,
the difference between $\frac{1}{N-1}$ and $\frac{1}{N}$ only introduces an $O(\frac{1}{N})$ relative error and is of higher order,
same as in the proof of Lemma \ref{lemma:Di-concen},
and the $i =j$ term cancels out in the summation of $(D-W) \rho_X f$.
Then, by the independence of the $x_i$'s, the $i$-th event holds with the same high probability.
The current theorem, in both 1) and 2), follows by a union bound.
\end{proof}
We are ready for Step 2 for the unnormalized graph Laplacian $L_{un} = \frac{1}{\epsilon \frac{m_2}{2} p N }(D-W)$.
Here we consider eigenvectors normalized to have 2-norm 1, i.e.,
$L_{un} u_k = \lambda_k u_k$, $u_k^T u_l = \delta_{kl}$,
and we compare $u_k$ to
\begin{equation}\label{eq:def-phik}
\phi_k : = \frac{1}{\sqrt{p N}} \rho_X \psi_k \in \R^N,
\end{equation}
where $\psi_k$ are population eigenfunctions which are orthonormal in $H=L^2({\cal M}, dV)$, same as above.
\begin{proposition}\label{prop:step2}
Under Assumptions \ref{assump:M-p}(A1),
$p$ being uniform on ${\cal M}$,
and $h$ is Gaussian,
for fixed $k_{max} \in \mathbb{N}$, $K = k_{max}+1$,
assume that the eigenvalues $\mu_k$ for $k \le K$ are all single multiplicity,
and $\gamma_K > 0$ is as defined in \eqref{eq:def-gamma-K}, with the constant $c_K$ as in Proposition \ref{prop:eigvalue-LB-crude}.
If as $N \to \infty$, $\epsilon \to 0+ $,
$\epsilon^{d/2+2} > c_K \frac{\log N}{N} $,
then for sufficiently large $N$,
w.p. $> 1 - 4 K^2 N^{-10} - (2 K+4) N^{-9}$,
there exist scalars $\alpha_k \neq 0$, in fact $|\alpha_k| =1 + o(1)$, such that
\[
\| u_k - \alpha_k \phi_k \|_2 = O(\epsilon , \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} ),
\quad 1 \le k \le k_{max}.
\]
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:step2}]
The proof uses the same approach as that of Theorem 2.6 part 2) in \cite{calder2019improved},
and since our setting is different, we include a proof for completeness.
When $k =1$, we always have
$\lambda_1 = \mu_1 =0$,
$u_1$ is the constant vector $u_1 = \frac{1}{\sqrt{N}} \mathbf{1}_N $,
and $\psi_1 $ is the constant function, and thus $\phi_1 = u_1$ up to a sign.
Under the condition of the current proposition,
the assumptions of Proposition \ref{prop:eigvalue-LB-crude} are satisfied,
and because $\epsilon^{d/2+2} > c_K \frac{\log N}{N} $ implies that $\epsilon^{d/2+1} = \Omega( \frac{\log N}{N} )$,
the assumptions of Theorem \ref{thm:pointwise-rate-C2h} 2) are also satisfied.
We apply Theorem \ref{thm:pointwise-rate-C2h} 2) to the $K$ functions $\psi_1, \cdots, \psi_K$.
By a union bound,
we have that when $N$ is large enough, w.p. $> 1-2K N^{-9}$,
$\| L_{un} \phi_k - \mu_k \phi_k \|_\infty
= \frac{1}{\sqrt{pN}} (O(\epsilon) + O( \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} ))$
for $2 \le k \le K$.
By that $\| v \|_2 \le \sqrt{N} \| v \|_\infty$ for any $ v \in \R^N$,
this gives that there is $\text{Err}_{pt} > 0$,
\begin{equation}\label{eq:pontwise-rate-2norm-bound}
\| L_{un} \phi_k - \mu_k \phi_k \|_2 \le \text{Err}_{pt},
\quad 2 \le k \le K,
\quad \text{Err}_{pt} = O(\epsilon) + O( \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} ).
\end{equation}
The constants in the big-$O$ depend on the first $K$ eigenfunctions and are absolute because $K$ is fixed.
Applying Proposition \ref{prop:eigvalue-LB-crude}, and considering the intersection with its good event,
we have for each $2 \le k \le K$ that $|\mu_k - \lambda_k |< \gamma_K$.
By definition of $\gamma_K$ as in \eqref{eq:def-gamma-K},
\begin{equation}\label{eq:eigen-stay-away}
\min_{1 \le j \le N, \, j \neq k} | \mu_k - \lambda_j | > \gamma_K > 0,
\quad 2 \le k \le k_{max}.
\end{equation}
For each $ k \le k_{max}$, let $S_k = \text{Span}\{ u_k \}$ be the 1-dimensional subspace in $\R^N$, and let $S_k^\perp$ be its orthogonal complement.
We will show that $\|P_{S_k^\perp} \phi_k \|_2$ is small.
By definition,
$P_{S_k^\perp} \mu_k \phi_k = \sum_{j\neq k, j=1}^N \mu_k (u_j^T \phi_k) u_j$,
and meanwhile,
$P_{S_k^\perp} L_{un} \phi_k = \sum_{j\neq k, j=1}^N (u_j^T L_{un} \phi_k) u_j = \sum_{j\neq k, j=1}^N \lambda_j (u_j^T \phi_k) u_j$.
Subtracting the two gives that
$P_{S_k^\perp} ( \mu_k \phi_k - L_{un} \phi_k )
= \sum_{j\neq k, j=1}^N (\mu_k - \lambda_j) (u_j^T \phi_k) u_j$.
By that $u_j$ are orthonormal vectors, and \eqref{eq:eigen-stay-away},
\[
\| P_{S_k^\perp} ( \mu_k \phi_k - L_{un} \phi_k ) \|_2^2
= \sum_{j\neq k, j=1}^N (\mu_k - \lambda_j)^2 (u_j^T \phi_k)^2
\ge \gamma_K^2 \sum_{j\neq k, j=1}^N (u_j^T \phi_k)^2
= \gamma_K^2 \| P_{S_k^\perp} \phi_k \|_2^2.
\]
Then, combined with \eqref{eq:pontwise-rate-2norm-bound}, we have that
$\gamma_K \| P_{S_k^\perp} \phi_k \|_2
\le \| P_{S_k^\perp} ( \mu_k \phi_k - L_{un} \phi_k ) \|_2
\le \| \mu_k \phi_k - L_{un} \phi_k \|_2 \le \text{Err}_{pt}$,
namely,
$\| P_{S_k^\perp} \phi_k \|_2 \le \frac{\text{Err}_{pt}}{\gamma_K } $.
By definition,
$P_{S_k^\perp} \phi_k = \phi_k - (u_k^T \phi_k) u_k$, where $\| u_k \|_2 = 1$.
Note that the $\phi_k $ are unit vectors up to an $O( \sqrt{ \frac{\log N}{N} })$ error:
the good event in Proposition \ref{prop:eigvalue-LB-crude} is contained in that of the eigenvalue UB Proposition \ref{prop:eigvalue-UB},
and specifically in that of Lemma \ref{lemma:rhoX-isometry-whp}.
Thus \eqref{eq:uk-near-orthonormal} holds, which means that
$| \| \phi_k \|^2 - 1 | \le \text{Err}_{norm}$, $1 \le k \le K$,
where
$\text{Err}_{norm}= O( \sqrt{ \frac{\log N}{N} })$.
Then, one can verify that
\begin{equation}\label{eq:uk-phik-align}
| u_k^T \phi_k | = 1 + O( \text{Err}_{norm}, \text{Err}_{pt}^2) = 1+o(1),
\end{equation}
and then we set $\alpha_k = \frac{1}{ u_k^T \phi_k}$,
and have that
\[
\| \alpha_k \phi_k - u_k \|_2
= \frac{ O( \text{Err}_{pt} )}{ |u_k^T \phi_k|}
\le \frac{ O( \text{Err}_{pt} )}{ 1- O(\text{Err}_{norm}, \text{Err}_{pt}^2)}
= O( \text{Err}_{pt} ) (1+ O(\text{Err}_{norm}, \text{Err}_{pt}^2)) = O(\text{Err}_{pt}).
\]
The bound holds for each $k \le k_{max}$.
\end{proof}
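As a numerical illustration of Proposition \ref{prop:step2} (a sketch with ${\cal M} = S^1$, uniform $p = 1/(2\pi)$, and the Gaussian kernel so that $m_2 = 2$; since $\mu = 1$ has multiplicity two on $S^1$, while the proposition assumes single multiplicity, we compare two-dimensional eigenspaces rather than individual eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(5)
N, eps = 1500, 0.05
theta = rng.uniform(0.0, 2.0 * np.pi, N)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

W = np.exp(-(2.0 - 2.0 * (X @ X.T)) / (4.0 * eps)) / np.sqrt(4.0 * np.pi * eps)
p = 1.0 / (2.0 * np.pi)
L_un = (np.diag(W.sum(axis=1)) - W) / (eps * p * N)   # m2 = 2 for this kernel
lam, U = np.linalg.eigh(L_un)

# phi = rho_X psi / sqrt(p N) for the two eigenfunctions of mu = 1:
# psi = cos/sqrt(pi), sin/sqrt(pi), orthonormal in L^2(S^1, dV).
Phi = np.stack([np.cos(theta), np.sin(theta)], axis=1) / np.sqrt(np.pi * p * N)
Uk = U[:, 1:3]                               # empirical eigenvectors for mu = 1
resid = Phi - Uk @ (Uk.T @ Phi)              # component outside span{u_2, u_3}
print(np.linalg.norm(resid, axis=0))         # small: the subspaces nearly align
```

The columns of `Phi` have norm close to 1, so the printed residual norms directly measure the misalignment between the sampled eigenfunctions and the empirical eigenspace.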
\subsection{Step 3: refined eigenvalue LB }
\begin{proposition}\label{prop:step3}
Under the same conditions as in Proposition \ref{prop:step2},
with $k_{max}$ fixed.
Then, for sufficiently large $N$, with the same indicated high probability,
\[
| \mu_k - \lambda_k | =
O \left( \epsilon, \, \sqrt{\frac{\log N}{N \epsilon^{d/2}}} \right),
\quad 1 \le k \le k_{max}.
\]
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:step3}]
We inherit the notations in the proof of Proposition \ref{prop:step2}.
Again $\mu_1 = \lambda_1 =0$.
For $2 \le k \le k_{max}$, note that
\begin{equation}\label{eq:eigen-eqn-step3}
u_k^T( L_{un} \phi_k - \mu_k \phi_k) = (\lambda_k - \mu_k) u_k^T \phi_k,
\end{equation}
and meanwhile, we have shown that
$u_k = \alpha_k \phi_k+ \varepsilon_k$, where
$\alpha_k = 1+o(1)$ and $ \| \varepsilon_k \|_2 = O(\text{Err}_{pt})$.
Thus the l.h.s. of \eqref{eq:eigen-eqn-step3} equals
\[
(\alpha_k \phi_k+ \varepsilon_k)^T( L_{un} \phi_k - \mu_k \phi_k)
= \alpha_k ( \phi_k^T L_{un} \phi_k - \mu_k \| \phi_k \|_2^2) + \varepsilon_k^T ( L_{un} \phi_k - \mu_k \phi_k)
=: \textcircled{1} + \textcircled{2}.
\]
By definition of $\phi_k$,
$\phi_k^T L_{un} \phi_k = \frac{1}{pN} (\rho_X \psi_k)^T L_{un} (\rho_X \psi_k)
= \frac{1}{p^2} E_N( \rho_X \psi_k)$.
The good event in Proposition \ref{prop:step2} is under the good event $E_{UB}$,
under which Lemma \ref{lemma:form-rate-psi} and Lemma \ref{lemma:rhoX-isometry-whp} hold. Then by \eqref{eq:form-rate-psi},
$ E_N( \rho_X \psi_k)
= p^2 \mu_k + O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } )$;
By \eqref{eq:uk-near-orthonormal},
$ \| \phi_k \|^2 = 1 + O( \sqrt{\frac{\log N}{N}})$.
Putting together, and by that $\alpha_k = 1+o(1) = O(1)$,
\[
\textcircled{1} =
\alpha_k (
\phi_k^T L_{un} \phi_k - \mu_k \|\phi_k\|_2^2 )
= O(1)
\left( \mu_k + O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ) - \mu_k( 1 + O( \sqrt{\frac{\log N}{N}})) \right)
= O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ).
\]
Meanwhile, by \eqref{eq:pontwise-rate-2norm-bound},
$\| L_{un} \phi_k - \mu_k \phi_k \|_2 \le \text{Err}_{pt}$,
and then
\[
| \textcircled{2} |
\le \| \varepsilon_k \|_2 \| L_{un} \phi_k - \mu_k \phi_k \|_2 = O(\text{Err}_{pt}^2) .
\]
Because $\epsilon^{d/2+2} > c_K \frac{\log N}{ N} $ for some $c_K > 0$,
$\frac{ \log N}{N \epsilon^{d/2+1}} = \epsilon \frac{ \log N}{N \epsilon^{d/2+2}} < \frac{\epsilon}{ c_K }$, thus
$\text{Err}_{pt} = O( \epsilon + \sqrt{\frac{ \log N}{N \epsilon^{d/2+1}} }) = O( \sqrt{\epsilon})$,
and then
$\textcircled{2} =
O(\text{Err}_{pt}^2) = O(\epsilon)$.
Back to \eqref{eq:eigen-eqn-step3}, we have that
\[
| \lambda_k - \mu_k | | u_k^T \phi_k | = | \textcircled{1} + \textcircled{2} | = O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ) + O(\epsilon),
\]
and by \eqref{eq:uk-phik-align}, $|u_k^T \phi_k| = 1+o(1)$,
thus
$| \lambda_k - \mu_k |
= \frac{ | \textcircled{1} + \textcircled{2} | }{ 1 + o(1) }
= O( | \textcircled{1} + \textcircled{2} | )
= O(\epsilon , \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } )$.
The above holds for all $ k \le k_{max}$.
\end{proof}
\subsection{Eigen-convergence rate}
We are ready to prove the main theorems on eigen-convergence of graph Laplacians,
when $p$ is uniform and the kernel function $h$ is Gaussian.
\begin{theorem}[eigen-convergence of $L_{un}$]
\label{thm:refined-rates}
Under Assumption \ref{assump:M-p} (A1),
$p$ is uniform on ${\cal M}$,
and $h$ is Gaussian.
For $k_{max} \in \mathbb{N}$ fixed,
assume that the eigenvalues $\mu_k$ for $k \le K:= k_{max}+1$ are all single multiplicity,
and let the constant $c_K$ be as in Proposition \ref{prop:eigvalue-LB-crude}.
Consider the first $k_{max}$ eigenvalues and eigenvectors of $L_{un}$,
$L_{un} u_k = \lambda_k u_k$, $u_k^T u_l = \delta_{kl}$,
and the vectors $\phi_k$ are defined as in \eqref{eq:def-phik}.
If as $N \to \infty$, $\epsilon \to 0+ $,
$\epsilon^{d/2+2} > c_K \frac{\log N}{N} $,
then for sufficiently large $N$,
w.p. $> 1 - 4 K^2 N^{-10} - (2 K+4) N^{-9}$,
\[
| \mu_k - \lambda_k | =
O \left( \epsilon, \, \sqrt{\frac{\log N}{N \epsilon^{d/2}}} \right),
\quad
1 \le k \le k_{max},
\]
and there exist scalars $\alpha_k \neq 0$, actually $|\alpha_k| =1 + o(1)$, such that
\[
\| u_k - \alpha_k \phi_k \|_2 = O \left( \epsilon , \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} \right),
\quad 1 \le k \le k_{max}.
\]
In particular, when $\epsilon \sim N^{-{1}/{(d/2+3)}}$,
\[
| \mu_k - \lambda_k | = {O}( N^{- \frac{1}{d/2+3}}),
\quad
\| u_k - \alpha_k \phi_k \|_2 = O( N^{- \frac{1}{d/2+3} } \sqrt{\log N} ),
\quad 1 \le k \le k_{max}.
\]
When $\epsilon = ( c' \frac{\log N}{N} )^{1/(d/2+2)} $, $c' > c_K$,
\[
| \mu_k - \lambda_k | = {O}( (\frac{ \log N }{N}) ^{\frac{1}{d/2+2}}),
\quad
\| u_k - \alpha_k \phi_k \|_2 = {O}( (\frac{ \log N }{N}) ^{\frac{1}{d+4}}),
\quad 1 \le k \le k_{max}.
\]
\end{theorem}
\begin{proof}
Under the condition of the theorem,
the eigenvector and eigenvalue error bounds have been proved in Proposition \ref{prop:step2} and Proposition \ref{prop:step3}.
For the two specific asymptotic scalings of $\epsilon$,
the rate follows from the bounds involving both $\epsilon$ and $N$.
\end{proof}
\begin{remark}\label{rk:eigen-rate-indicator-h}
With the indicator $h= {\bf 1}_{[0,1)}$, the point-wise rate is
$\text{Err}_{pt, ind} = O \left( \sqrt{\epsilon}, \, \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} \right)$;
see \cite{hein2005graphs,belkin2007convergence,singer2006graph,calder2019improved} among others.
While our approach in Step 1 cannot be applied to such $h$,
\cite{calder2019improved} covered this case when $d \ge 2$,
and provided the eigenvalue and eigenvector consistency up to $\text{Err}_{pt, ind}$
when $\epsilon^{d/2+2} = \Omega( \frac{ \log N}{N})$.
The scaling $\epsilon^{d/2+2} = \tilde{\Theta} ( N^{-1} )$ is the optimal one to balance the bias and variance errors in $\text{Err}_{pt, ind}$,
and then it gives the overall error rate as $\tilde{O}(N^{-1/(d+4)})$,
which agrees with the eigen-convergence rate in \cite{calder2019improved}.
Here $\tilde{O}(\cdot)$ and $\tilde{\Theta}(\cdot)$ indicate that the constant is possibly multiplied by a factor of certain power of $\log N$.
Meanwhile, we note that, using the Dirichlet form convergence rate,
the eigenvalue consistency can be improved to be squared:
By Remark \ref{rk:indicator-h-form-rate}, the Dirichlet form convergence with indicator $h$ is
$\text{Err}_{form, ind}= O( \epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2}}})$.
Then, once the initial crude eigenvalue LB is established,
in Step 2,
the eigenvector 2-norm consistency can be shown to be $\text{Err}_{pt, ind}$.
In Step 3,
the eigenvalue consistency for the first $k_{max}$ eigenvalues can be shown to be $O(\text{Err}_{form, ind}, \text{Err}_{pt, ind}^2) =O( \epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2}}}) $.
This would imply the eigenvalue convergence rate of $\tilde{O}(N^{-1/(d/2+2)})$
under the regime where $\epsilon = \tilde{\Theta}(N^{-1/(d/2+2)})$,
where eigenvector consistency remains $\tilde{O}(N^{-1/(d+4)})$,
which is the same as with Gaussian $h$ in Theorem \ref{thm:refined-rates}.
\end{remark}
\vspace{5pt}
\begin{remark}\label{rk:multiplicity}
The result extends when the population eigenvalues $\mu_k$ have multiplicity greater than one.
Suppose we consider $0 = \mu^{(1)} < \mu^{(2)} < \cdots < \mu^{(M)} < \cdots $,
which are distinct eigenvalues, and $\mu^{(m)}$ has multiplicity $l_m \ge 1$.
Then let $k_{max} = \sum_{m=1}^M l_m$, $K = \sum_{m=1}^{M+1} l_m$, $\mu_K = \mu^{(M+1)}$,
and $\{ \mu_k, \psi_k \}_{k=1}^K$ are sorted eigenvalues and associated eigenfunctions.
Step 0. eigenvalue UB holds, since Proposition \ref{prop:eigvalue-UB} does not require single multiplicity.
In Step 1,
the only place in Proposition \ref{prop:eigvalue-LB-crude}
where single multiplicity of $\mu_k$ is used is in the definition of $\gamma_K$.
Then,
by changing to
\begin{equation}\label{eq:def-gamma-M-multiplicity}
\gamma^{(M)} = \frac{1}{2} \min_{1 \le m \le M} (\mu^{(m+1)} - \mu^{(m)}) > 0,
\end{equation}
and defining $\delta = 0.5 \frac{\gamma^{(M)}}{\mu_K}$, where $0< \delta < 0.5 $ is a positive constant depending on ${\cal M}$ and $K$,
Proposition \ref{prop:eigvalue-LB-crude} proves that $| \lambda_k - \mu^{(m)}| <\gamma^{(M)} $ for all
$k \le K$, i.e. $m \le M+1$.
This allows us to extend Step 2, Proposition \ref{prop:step2}, by considering the projection $P_{S^\perp}$,
where the subspace of $\R^N$ is spanned by the eigenvectors whose eigenvalues $\lambda_k$ approach $\mu_k = \mu^{(m)}$,
similar as in the original proof of Theorem 2.6 part 2) in \cite{calder2019improved}.
Specifically,
suppose $\mu_i = \cdots = \mu_{i+l_m-1} = \mu^{(m)}$, $2 \le m \le M$,
let $S^{(m)} = \text{Span} \{ u_i, \cdots, u_{i+l_m-1}\}$, and the index set $I_m:= \{ i, \cdots, i+l_m-1\}$.
For an eigenfunction $\psi_k$ with $k \in I_m$,
so that $\mu_k = \mu^{(m)}$,
one can verify, similarly as in the proof of Proposition \ref{prop:step2},
that
\[
\| P_{(S^{(m)})^\perp} ( \mu_k \phi_k - L_{un} \phi_k ) \|_2^2
= \sum_{j\notin I_m} (\mu_k - \lambda_j)^2 (u_j^T \phi_k )^2
\ge (\gamma^{(M)})^2 \sum_{j\notin I_m} (u_j^T \phi_k)^2
= (\gamma^{(M)})^2 \| P_{(S^{(m)})^\perp} \phi_k \|_2^2,
\]
which gives that
$\| \phi_k - P_{S^{(m)}} \phi_k \|_2
= \| P_{(S^{(m)})^\perp} \phi_k \|_2
\le \frac{1}{\gamma^{(M)}}\text{Err}_{pt}$, for all $k \in I_m$.
Since $\{ \phi_k\}_{k=1}^K$ are nearly orthonormal for large $N$ (Lemma \ref{lemma:rhoX-isometry-whp}),
this proves that there exists an $l_m$-by-$l_m$ orthogonal transform $Q_m$,
and $|\alpha_k| = 1+o(1)$,
such that
$ \| u_k - \alpha_k \phi_k' \|_2
= O(\text{Err}_{pt}) = O(\epsilon , \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} )$, $k \in I_m$,
where
$ \phi_k' = \sum_{j = 1}^{l_m} (Q_m)_{k-i+1,\, j} \, \phi_{i+j-1}$.
This proves consistency of empirical eigenvectors $u_k$ up to the point-wise rate for $k \le k_{max}$.
Finally, Step 3 Proposition \ref{prop:step3} extends by considering \eqref{eq:eigen-eqn-step3} for
$u_k$ and $\phi_k'$,
making use of $ \| u_k - \alpha_k \phi_k' \|_2 = O(\text{Err}_{pt}) $,
the Dirichlet form convergence of $E_N(\rho_X \psi_k)$ (Lemma \ref{lemma:form-rate-psi}),
and that $\{ \phi_k' \}_{k \in I_m}$ is transformed from $\{ \phi_k \}_{k \in I_m}$ by an orthogonal matrix $Q_m$.
\end{remark}
\vspace{5pt}
To address the eigen-convergence of $L_{rw}$, we define the $D/N$-weighted 2-norm as
\[
\| u \|_{\frac{D}{N}}^2 = \frac{1}{N} u^T D u,
\]
and recall that eigenvectors of $L_{rw}$ are $D$-orthogonal.
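As a concrete illustration of this $D$-orthogonality (an illustration only, not part of the analysis; the point cloud and kernel scale below are arbitrary choices), one can diagonalize the symmetric conjugation $D^{-1/2} W D^{-1/2}$ and map its orthonormal eigenvectors back by $D^{-1/2}$, which produces $D$-orthogonal eigenvectors of $D^{-1}W$, and hence of $L_{rw}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.standard_normal((n, 2))                 # arbitrary point cloud
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)                                 # symmetric Gaussian affinity matrix
D = W.sum(1)                                    # degrees

# diagonalize the symmetric conjugation D^{-1/2} W D^{-1/2}
S = W / np.sqrt(np.outer(D, D))
lam_sym, U = np.linalg.eigh(S)                  # U has orthonormal columns
V = U / np.sqrt(D)[:, None]                     # columns v_k = D^{-1/2} u_k are eigenvectors of D^{-1} W

G = V.T * D @ V                                 # V^T diag(D) V: the D-Gram matrix, equals identity
```

The check `G` being the identity is exactly the statement $v_k^T D v_l = \delta_{kl}$ (up to the scalar normalization used in the theorem below).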
The following theorem is the counterpart of Theorem \ref{thm:refined-rates}
for $L_{rw}$, obtaining the same rates.
\begin{theorem}[eigen-convergence of $L_{rw}$]
\label{thm:refined-rates-rw}
Assume the same conditions and setting on ${\cal M}$, $p$ being uniform, $h$ being Gaussian,
and $k_{max}$, $K$, $\mu_k$, $\epsilon$
as in Theorem \ref{thm:refined-rates}.
Consider the first $k_{max}$ eigenvalues and eigenvectors of $L_{rw}$,
$L_{rw} v_k = \lambda_k v_k$,
$ v_k^T D v_l = \delta_{kl} Np$,
i.e. $\| v_k \|_{\frac{D}{N}}^2 = p$,
and the vectors $\phi_k$ defined as in \eqref{eq:def-phik}.
Then, for sufficiently large $N$,
w.p. $> 1 - 4 K^2 N^{-10} - (4 K+ 6) N^{-9}$,
$\| v_k\|_2 = 1+o(1)$,
and the same bound
of $ | \mu_k - \lambda_k |$ and $ \| v_k - \alpha_k \phi_k \|_2$
as in Theorem \ref{thm:refined-rates} hold for $ 1 \le k \le k_{max}$,
with certain scalars $\alpha_k$ satisfying $|\alpha_k| = 1+o(1)$.
\end{theorem}
The extension to the case where $\mu_k$ has multiplicity greater than one is possible, similarly as in Remark \ref{rk:multiplicity}.
The proof for $L_{rw}$ uses almost the same method as for $L_{un}$; the difference is that the $v_k$ are no longer orthonormal but $D$-orthogonal.
This is handled by the fact that $\| u \|_2^2$ and $ \frac{1}{p} \| u\|_{D/N}^2$ agree in relative error up to the form rate, due to the concentration of $D_i/N$ (Lemma \ref{lemma:Di-concen}).
The detailed proof is left to Appendix \ref{app:proofs-step23}.
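A minimal numerical illustration of the eigen-convergence statements (not part of the proofs; for simplicity we use equispaced points on $S^1 \subset \R^2$ as a noiseless surrogate for uniform sampling, and $N$, $\epsilon$ are arbitrary choices): with the Gaussian $h(r) = e^{-r/4}$ one has $\tilde m = m_2/(2m_0) = 1$, and the low-lying eigenvalues of $L_{rw} = \frac{1}{\tilde m \epsilon}(I - D^{-1}W)$ approach the Laplace-Beltrami spectrum $0, 1, 1, 4, 4, \dots$ of the unit circle.

```python
import numpy as np

N, eps = 1000, 0.005
theta = 2 * np.pi * np.arange(N) / N              # equispaced points on the unit circle
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = eps ** (-0.5) * np.exp(-d2 / (4 * eps))       # K_eps(x,y) = eps^{-d/2} h(|x-y|^2/eps), d = 1
D = W.sum(1)

# spectrum of L_rw = (I - D^{-1} W)/eps via the symmetric conjugation D^{-1/2} W D^{-1/2}
S = W / np.sqrt(np.outer(D, D))
lam = np.sort((1.0 - np.linalg.eigvalsh(S)) / eps)
# lam[:5] is close to [0, 1, 1, 4, 4], the spectrum of -Delta on S^1
```

The repeated eigenvalues $1, 1$ and $4, 4$ also show the multiplicity situation discussed in Remark \ref{rk:multiplicity}.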
\section{Density-corrected graph Laplacian}\label{sec:density-corrected}
We consider $p$ as in Assumption \ref{assump:M-p}(A2).
The density-corrected graph Laplacian is defined as \cite{coifman2006diffusion}
\[
\tilde{L}_{rw} = \frac{1}{ \frac{m_2}{ 2 m_0} \epsilon} (I - \tilde{D}^{-1}\tilde{W}),
\quad
\tilde{W}_{ij} = \frac{W_{ij}}{D_i D_j},
\quad
\tilde{D}_{ii} =\sum_{j=1}^N \tilde{W}_{ij},
\]
where $W_{ij} = K_\epsilon(x_i, x_j)$ as before, and $D$ is the degree matrix of $W$.
The density-corrected graph Laplacian recovers the Laplace-Beltrami operator even when $p$ is not uniform.
In this section,
we extend the theory of point-wise convergence,
Dirichlet form convergence,
and eigen-convergence to such graph Laplacian.
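In matrix form, the construction above is two successive normalization passes over the same kernel matrix. A minimal sketch (an illustration only; the sizes, bandwidth, and Gaussian point cloud are arbitrary choices, and $\tilde m = m_2/(2m_0) = 1$ for the Gaussian $h(r) = e^{-r/4}$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, d = 300, 0.05, 2
X = rng.standard_normal((N, d))                        # arbitrary point cloud

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = eps ** (-d / 2) * np.exp(-d2 / (4 * eps))          # W_ij = K_eps(x_i, x_j), Gaussian h
D = W.sum(1)                                           # degrees of W

Wt = W / np.outer(D, D)                                # density correction: Wt_ij = W_ij / (D_i D_j)
Dt = Wt.sum(1)                                         # degrees of Wt
tilde_m = 1.0                                          # m_2/(2 m_0) = 1 for h(r) = e^{-r/4}
L = (np.eye(N) - Wt / Dt[:, None]) / (tilde_m * eps)   # tilde L_rw
```

The corrected kernel $\tilde W$ stays symmetric, and $\tilde D^{-1}\tilde W$ is row-stochastic, so $\tilde L_{rw}$ annihilates constants, matching $\mu_1 = 0$.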
\subsection{Point-wise rate of $\tilde{L}_{rw}$}
This subsection proves Theorem \ref{thm:pointwise-rate-dencity-correct},
which shows that
the point-wise rate of $\tilde{L}_{rw}$ is same as that of $L_{rw}$ without the density-correction.
The result is for general differentiable $h$ satisfying Assumption \ref{assump:h-C2-nonnegative},
which can be of independent interest.
We first establish the counterpart of Lemma \ref{lemma:Di-concen}
about the concentration of all $\frac{1}{N}D_i = \frac{1}{N} \sum_{j=1}^N W_{ij}$ when $p$ is not uniform.
The deviation bound is uniform over $i$ and has a bias error of $O(\epsilon^2)$.
\begin{lemma}\label{lemma:Di-concen-eps2}
Under Assumptions \ref{assump:M-p} and \ref{assump:h-C2-nonnegative},
suppose as $N \to \infty$, $\epsilon \to 0+ $, $\epsilon^{d/2} = \Omega( \frac{\log N}{N} ) $.
Then,
1) When $N$ is large enough, w.p. $> 1- 2 N^{-9}$, $D_i > 0$ for all $i$, so that $\tilde{W}$ is well-defined, and
\begin{equation}\label{eq:degree-D-concen-eps2}
\frac{1}{N} D_i
= m_0 \tilde{p}_\epsilon(x_i) + O( \epsilon^2, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }),
\quad \tilde{p}_\epsilon := p + \tilde{m} \epsilon ( \omega p + \Delta p),
\quad
1 \le i \le N.
\end{equation}
where $\omega \in C^{\infty}({\cal M})$ is determined by manifold extrinsic coordinates,
and $\tilde{m}[h] = \frac{m_2[h]}{2 m_0[h]}$.
2) When $N$ is large enough, w.p. $> 1- 4 N^{-9}$, $\tilde{D}_i > 0$ for all $i$, so that $ \tilde{L}_{rw}$ is well-defined, and
\begin{equation}\label{eq:denominator}
\sum_{j=1}^N W_{i j} \frac{ 1}{ D_j}
= 1+ O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) ,
\quad
1 \le i \le N.
\end{equation}
The constants in big-$O$ in parts 1) and 2) depend on $({\cal M}, p)$, and are uniform for all $ i$.
\end{lemma}
\noindent
The proof is left to Appendix \ref{app:proofs-density-corrected}. The following theorem proves the point-wise rate of $\tilde{L}_{rw}$.
\begin{theorem}\label{thm:pointwise-rate-dencity-correct}
Under Assumptions \ref{assump:M-p} and \ref{assump:h-C2-nonnegative},
if as $N \to \infty$, $\epsilon \to 0+ $,
$\epsilon^{d/2+1} = \Omega( \frac{\log N}{N} ) $,
then for any $f \in C^4({\cal M})$,
when $N$ is large enough, w.p. $> 1- 8 N^{-9}$,
\[
\frac{1}{\epsilon \frac{m_2}{2 m_0} } (I - \tilde{D}^{-1} \tilde{W}) (\rho_X f) (x_i )
= - \Delta f(x_i) + \varepsilon_i,
\quad
\sup_{1 \le i \le N} |\varepsilon_i| = O(\epsilon ) + O( \sqrt{\frac{\log N}{N \epsilon^{d/2+1}}} ).
\]
The constants in the big-O notation depend on ${\cal M}$, $p$ and the $C^4$ norm of $f$.
\end{theorem}
The theorem slightly improves the point-wise convergence rate of $O(\epsilon, \sqrt{\frac{\log N}{N \epsilon^{d/2+2}}})$ in \cite{singer2016spectral}.
It is proved using the same techniques as the analysis of point-wise convergence of $L_{rw}$ in \cite{singer2006graph,cheng2020convergence},
and we include a proof for completeness.
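A numerical illustration of Theorem \ref{thm:pointwise-rate-dencity-correct} (not part of the proof; the warped grid below is a deterministic surrogate for sampling from a non-uniform $p$ on $S^1$, and $N$, $\epsilon$, and the warp are arbitrary choices): applying $\tilde{L}_{rw}$ with the Gaussian $h(r) = e^{-r/4}$, for which $\tilde m = 1$ in $d=1$, to $f = \cos\theta$ returns approximately $-\Delta f = \cos\theta$ despite the non-uniform point density.

```python
import numpy as np

N, eps = 1000, 0.005
s = 2 * np.pi * np.arange(N) / N
theta = s + 0.3 * np.sin(s)                  # monotone warp: non-uniform point density on S^1
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = eps ** (-0.5) * np.exp(-d2 / (4 * eps))  # d = 1, Gaussian h(r) = e^{-r/4}
D = W.sum(1)
Wt = W / np.outer(D, D)                      # density-corrected kernel
Dt = Wt.sum(1)

f = np.cos(theta)
Lf = (f - (Wt @ f) / Dt) / eps               # (1/(tilde_m eps)) (I - Dt^{-1} Wt) f, tilde_m = 1
err = np.max(np.abs(Lf - np.cos(theta)))     # compare with -Delta f = cos(theta); err is small
```

Without the correction $W \mapsto \tilde W$, the drift term induced by $\nabla p$ would contaminate the output at $O(1)$.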
\begin{proof}[Proof of Theorem \ref{thm:pointwise-rate-dencity-correct}]
By definition,
\begin{equation}\label{eq:point-wise-density-correct}
- \frac{1}{\epsilon \frac{m_2}{2 m_0} } (I - \tilde{D}^{-1} \tilde{W}) (\rho_X f) (x_i )
= \frac{1}{\epsilon \frac{m_2}{2 m_0} }
\frac{\sum_{j=1}^N W_{ij} \frac{f(x_j) - f(x_i)}{D_j}}{\sum_{j=1}^N W_{ij} \frac{1}{D_j}}.
\end{equation}
The proof of Lemma \ref{lemma:Di-concen-eps2} has constructed two good events $E_1$ and $E_2$
($E_1$ ensures Part 1); Part 2) assumes both $E_1$ and $E_2$),
such that with large enough $N$, $E_1 \cap E_2$ happens w.p. $>1-4N^{-9}$, under which
$D_i$, $\tilde{D}_i > 0$ for all $i$, $\tilde{W}$ and $\tilde{L}_{rw}$ are well-defined,
and equations
\eqref{eq:degree-D-concen-eps2},
\eqref{eq:degree-D-concen-rel}, and \eqref{eq:denominator} hold.
Equation \eqref{eq:denominator} provides the concentration of the denominator of the r.h.s. of \eqref{eq:point-wise-density-correct}.
We now consider the numerator.
Note that, with sufficiently small $\epsilon$, $\tilde{p}_\epsilon$ is uniformly bounded from below by an $O(1)$ constant $p_{min}'$.
This is because $\omega, p \in C^\infty( {\cal M})$, ${\cal M}$ is compact, then $(\omega p + \Delta p)$ is uniformly bounded,
and meanwhile $p$ is uniformly bounded from below.
Thus, under $E_1$,
\[
\frac{1}{N}\sum_{j=1}^N W_{i j} \frac{ f(x_j )- f(x_{i})}{ \frac{1}{N}D_j}
= \frac{1}{N}\sum_{j=1}^N \frac{ W_{i j} ( f(x_j )- f(x_{i}))}{ m_0 \tilde{p}_\epsilon (x_j) ( 1+ \varepsilon_j)},
\quad \max_{1 \le j \le N} |\varepsilon_j| = O(\epsilon^2, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }),
\]
and, writing $(1+\varepsilon_j)^{-1} = 1 + \varepsilon_j'$, the above equals
\begin{align*}
\frac{1}{N}\sum_{j=1}^N \frac{ W_{i j} ( f(x_j )- f(x_{i}))}{ m_0 \tilde{p}_\epsilon (x_j) } ( 1+ \varepsilon_j')
& = \frac{1}{N}\sum_{j=1}^N \frac{ W_{i j} ( f(x_j )- f(x_{i}))}{ m_0 \tilde{p}_\epsilon (x_j) }
+ \frac{1}{N}\sum_{j=1}^N \frac{ W_{i j} ( f(x_j )- f(x_{i}))}{ m_0 \tilde{p}_\epsilon (x_j) } \varepsilon_j' \\
&
=: \textcircled{1} + \textcircled{2},
\quad \max_{1 \le j \le N} |\varepsilon_j'| = O(\epsilon^2, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} })
\end{align*}
and we analyze the two terms respectively.
To bound $| \textcircled{2}|$, we use $W_{ij} \ge 0$ and again that $\tilde{p}_\epsilon (x) \ge p_{min}' > 0 $ to have
\begin{align*}
| \textcircled{2}|
\le \frac{1}{N} \sum_{j=1}^N \frac{ W_{i j} | f(x_j )- f(x_{i})| }{ m_0 \tilde{p}_\epsilon (x_j) } |\varepsilon_j'|
\le
\frac{ \max_{1 \le j \le N} |\varepsilon_j'| }{m_0 p_{min}'}
\cdot \frac{1}{N} \sum_{j=1}^N { W_{i j} | f(x_j )- f(x_{i})| }.
\end{align*}
We claim that w.p. $> 1- 2N^{-9}$ (we call this good event $E_3$),
\begin{equation}\label{eq:W-abs-diff-f-concen}
\frac{1}{N} \sum_{j=1}^N { W_{i j} | f(x_j )- f(x_{i})| } = O(\sqrt{\epsilon}),
\quad 1 \le i \le N,
\end{equation}
and the proof is given below.
With \eqref{eq:W-abs-diff-f-concen}, under $E_3$, $| \textcircled{2}|$ can be bounded by
\begin{equation}\label{eq:circle2-whp}
| \textcircled{2}| =
(\max_{1 \le j \le N} |\varepsilon_j'| ) O(\sqrt{\epsilon})
=O(\epsilon^2, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) O(\sqrt{\epsilon})
=O(\epsilon^{5/2}, \sqrt{ \frac{\log N}{N \epsilon^{d/2-1}} }).
\end{equation}
The analysis of $\textcircled{1}$ again uses the concentration of an independent sum. Conditioning on $x_i$, consider
\[
\textcircled{1}' =
\frac{1}{ N-1}\sum_{ j \neq i, j=1}^N K_\epsilon(x_i, x_j) \frac{ f(x_j )- f(x_{i}) }{ \tilde{p}_\epsilon (x_j) }
=: \frac{1}{ N-1}\sum_{ j \neq i, j=1}^N Y_j,
\]
and we have
$\textcircled{1} = \frac{1}{m_0}(1-\frac{1}{N}) \textcircled{1}'$.
Due to the uniform lower bound $\tilde{p}_\epsilon \ge p_{min}' > 0$,
$|Y_j|$ is bounded by $L_Y = \Theta(\epsilon^{-d/2})$.
We claim that the expectation (proof in below)
\begin{equation}\label{eq:exp-Yj-ptrate-density-correct-term1}
\mathbb{E} Y_j
= \int_{{\cal M}} K_\epsilon( x_i, y) \frac{ f(y) p(y) }{ \tilde{p}_\epsilon(y)} dV(y)
- f(x_i) \int_{{\cal M}} K_\epsilon( x_i, y)\frac{ p(y) }{ \tilde{p}_\epsilon(y)} dV(y)
= \frac{m_2}{2} \epsilon \Delta f (x_i)
+ O(\epsilon^2).
\end{equation}
The variance of $Y_j$ is bounded by
\begin{align*}
\mathbb{E} Y_j^2
& = \int_{{\cal M}} K_\epsilon(x_i, y)^2 \left( \frac{ f(y)- f(x_{i}) }{ \tilde{p}_\epsilon (y) } \right)^2 p(y) dV(y) \\
& \le \frac{1}{p_{min}'^2}
\int_{{\cal M}} K_\epsilon(x_i, y)^2 \left( f(y)- f(x_{i}) \right)^2 p(y) dV(y)
\le \nu_Y = \Theta_{f,p}( \epsilon^{-d/2+1}),
\end{align*}
which follows the same derivation as in the proof of the point-wise convergence of $L_{rw}$ without density-correction,
c.f. Theorem \ref{thm:pointwise-rate-C2h} 1),
and can be directly verified by a similar calculation as in \eqref{eq:calc-inside-ball}.
We take the large deviation bound at $\Theta( \sqrt{\frac{\log N}{N} \nu_Y } ) \sim (\frac{\log N}{N \epsilon^{d/2-1}})^{1/2}$,
which is of smaller order than $\frac{\nu_Y}{L_Y} = \Theta(\epsilon)$
under the theorem condition that $\epsilon^{d/2+1}=\Omega(\frac{\log N}{N})$.
Thus the classical Bernstein inequality gives that for large enough $N$, w.p. $> 1-2N^{-10}$,
\[
\textcircled{1}' = \mathbb{E} Y_j + O( \sqrt{\frac{\log N}{N} \nu_Y } )
= \frac{m_2}{2} \epsilon \Delta f (x_i)
+ O(\epsilon^2) + O( \sqrt{ \frac{\log N}{N \epsilon^{d/2-1}}} ),
\]
and as a result,
\begin{equation}\label{eq:circle1-whp}
\textcircled{1} = \tilde{m} \epsilon \Delta f (x_i)
+ O(\epsilon^2) + O( \sqrt{ \frac{\log N}{N \epsilon^{d/2-1}}} ).
\end{equation}
By a union bound over the events needed at $N$ points, we have that \eqref{eq:circle1-whp}
holds at all $x_i$ under a good event $E_4$ which happens w.p. $>1-2N^{-9}$.
Putting together, under $E_3$ and $E_4$, by \eqref{eq:circle2-whp} and \eqref{eq:circle1-whp},
at all $x_i$,
\begin{align*}
\frac{1}{\epsilon} \sum_{j=1}^N W_{i j} \frac{ f(x_j )- f(x_{i})}{ D_j}
& = \tilde{m} \Delta f (x_i)
+ O(\epsilon) + O( \sqrt{ \frac{\log N}{N \epsilon^{d/2+1}}} )
+O(\epsilon^{3/2}, \sqrt{ \frac{\log N}{N \epsilon^{d/2+1}} }) \\
& =\tilde{m} \Delta f (x_i)
+ O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2+1}}} ).
\end{align*}
Combined with \eqref{eq:denominator}, under $E_1, E_2, E_3, E_4$,
\begin{align*}
\frac{1}{\epsilon \tilde{m}} \frac{ \sum_{j=1}^N W_{i j} \frac{ f(x_j )- f(x_{i})}{ D_j} }{ \sum_{j=1}^N W_{i j} \frac{ 1}{ D_j} }
& =
\frac{ \Delta f (x_i) + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2+1}}} )}
{1+ O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} })}
= \Delta f (x_i) + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2+1}}} ).
\end{align*}
It remains to establish \eqref{eq:W-abs-diff-f-concen} and \eqref{eq:exp-Yj-ptrate-density-correct-term1}
to finish the proof of the theorem.
\\
\underline{Proof of \eqref{eq:W-abs-diff-f-concen}}:
Define the r.v. $Y_j = W_{i j} | f(x_j )- f(x_{i})|$ and condition on $x_i$; for $j \neq i$,
$\mathbb{E} Y_j = \int_{{\cal M}} K_\epsilon( x_i, y) | f(y) - f(x_i) | p(y) dV(y)$.
Let $\delta_\epsilon = \sqrt{ (\frac{d+10}{a}) \epsilon \log {\frac{1}{\epsilon}}} $,
for any $x\in {\cal M}$,
$K_\epsilon(x,y) = O(\epsilon^{10})$ when $y \notin B_{\delta_\epsilon}(x)$, then
\begin{align*}
& \int_{{\cal M}} K_\epsilon( x, y) | f(y) - f(x) | p(y) dV(y) \\
& = \int_{B_{\delta_{\epsilon}}(x)} K_\epsilon( x, y) | f(y) - f(x) | p(y) dV(y)
+ O(\epsilon^{10}) \| f\|_\infty \|p\|_\infty \\
& \le \int_{B_{\delta_{\epsilon}}(x)} K_\epsilon( x, y) ( \| \nabla f\|_\infty \| y - x\| ) p(y) dV(y)
+ O_{f,p}(\epsilon^{10})\\
&= O_{f,p}(\sqrt{\epsilon}) + O_{f,p}(\epsilon^{10}) = O( \sqrt{ \epsilon}).
\end{align*}
The $O_{f,p}(\sqrt{\epsilon})$ is obtained because $\|p\|_\infty$, $\| \nabla f\|_\infty$ are finite constants, and
\begin{align}
& \frac{1}{\sqrt{\epsilon}} \int_{B_{\delta_{\epsilon}}(x)} K_\epsilon( x, y) \| y - x\| dV(y)
= \int_{B_{\delta_{\epsilon}}(x)} \epsilon^{-d/2} h(\frac{\|x - y\|^2}{\epsilon} )\frac{\| y - x\|}{\sqrt{\epsilon}} dV(y) \nonumber \\
& ~~~
\le \int_{B_{\delta_{\epsilon}}(x)} \epsilon^{-d/2} a_0 e^{-a \frac{\|x-y\|^2}{\epsilon}}\frac{\| y - x\|}{\sqrt{\epsilon}} dV(y) \nonumber \\
& ~~~
\le \int_{ \| u\| < 1.1 \delta_\epsilon, \, u\in \R^d} a_0 e^{- \frac{a}{1.1} \| u\|^2} \frac{ \| u \|}{0.9} (1+ O( \|u \|^2))du
= O(1),
\label{eq:calc-inside-ball}
\end{align}
where $u \in \R^d$ is the projected coordinates in the tangent plane $T_{x}({\cal M})$,
and the comparison of $\|x- y\|_{\R^D}$ to $\| u \|$ (namely $ 0.9 \|x -y\|_{\R^D} < \|u\| < 1.1 \|x -y\|_{\R^D}$)
and the volume comparison (namely $dV(y) = (1+O( \| u\|^2)) du$)
hold when $\delta_\epsilon < \delta_0({\cal M})$ which is a constant depending on ${\cal M}$, see e.g. Lemma A.1 in \cite{cheng2020convergence}.
Meanwhile,
$|Y_j| $ is bounded by $L_Y = \|f\|_\infty \Theta(\epsilon^{-d/2})$,
and the variance of $Y_j$ is bounded by $\mathbb{E} Y_j^2$, which is in turn bounded by $\nu_Y =\Theta(\epsilon^{-d/2 + 1})$,
by a similar calculation as in \eqref{eq:calc-inside-ball}.
We take the large deviation bound at $\Theta( \sqrt{\frac{\log N}{N} \nu_Y } ) \sim (\frac{\log N}{N \epsilon^{d/2-1}})^{1/2}$,
which is of smaller order than $\frac{\nu_Y}{L_Y} = \Theta(\epsilon)$
under the theorem condition that $\epsilon^{d/2+1}=\Omega(\frac{\log N}{N})$.
Thus, for fixed $i$, w.p. $> 1- 2N^{-10}$,
\[
\frac{1}{N-1} \sum_{j \neq i} Y_j = \mathbb{E} Y_j + O( \sqrt{\frac{\log N}{ N \epsilon^{d/2-1}}})
= O(\sqrt{\epsilon}) (1 + o(1)) = O(\sqrt{\epsilon}).
\]
The term $\frac{1}{N} Y_i$ is bounded by $O(\epsilon^{-d/2} N^{-1}) = o(\sqrt{\epsilon})$.
By the same argument using the independence of $x_i$ from $\{ x_j \}_{j \neq i}$
and the union bound over $N$ events, we have proved \eqref{eq:W-abs-diff-f-concen}.
\\
\underline{Proof of \eqref{eq:exp-Yj-ptrate-density-correct-term1}}:
Note that
\[
\frac{p}{\tilde{p}_\epsilon}
= \frac{1}{1+ \epsilon \tilde{m} (\omega + \frac{\Delta p }{p})}
= 1 - \epsilon \tilde{m} (\omega + \frac{\Delta p }{p}) + \epsilon^2 r_\epsilon
=1 - \epsilon r_1 + \epsilon^2 r_\epsilon,
\]
where $r_1 := \tilde{m} (\omega + \frac{\Delta p }{p})$ is a deterministic function, $r_1 \in C^{\infty}({\cal M})$;
$r_\epsilon \in C^{\infty}({\cal M})$, and $\| r_\epsilon\|_\infty = O(1)$ when $\epsilon$ is below some $O(1)$ threshold,
due to the fact that $\| \omega + \frac{\Delta p }{p} \|_\infty = O(1) $.
Then,
\begin{align*}
& \int_{{\cal M}} K_\epsilon( x_i, y) \frac{ f p }{ \tilde{p}_\epsilon}(y) dV(y)
= \int_{{\cal M}} K_\epsilon( x_i, y) f(y)(1 - \epsilon r_1 + \epsilon^2 r_\epsilon)(y) dV(y) \\
& ~~~
= \int_{{\cal M}} K_\epsilon( x_i, y) f(y) dV(y)
- \epsilon \int_{{\cal M}} K_\epsilon( x_i, y) (f r_1) (y) dV(y)
+ \epsilon^2 \int_{{\cal M}} K_\epsilon( x_i, y) (f r_\epsilon)(y)dV(y) \\
& ~~~
= \left( m_0 f(x_i) + \frac{m_2}{2} \epsilon (\omega f +\Delta f) (x_i) + O(\epsilon^2) \right)
- \epsilon \left( m_0 fr_1( x_i) + O(\epsilon) \right) + O(\epsilon^2) \\
& ~~~
= m_0 f(x_i) + \frac{m_2}{2} \epsilon (\omega f +\Delta f - \frac{1}{\tilde{m}} fr_1) (x_i)
+ O(\epsilon^2),
\end{align*}
and taking $f=1$ gives that
\[
\int_{{\cal M}} K_\epsilon( x_i, y) \frac{ p }{ \tilde{p}_\epsilon}(y) dV(y)
= m_0 + \frac{m_2}{2} \epsilon (\omega - \frac{1}{\tilde{m}} r_1) (x_i)
+ O(\epsilon^2).
\]
Putting together and subtracting the two terms in \eqref{eq:exp-Yj-ptrate-density-correct-term1}
proves that
$\mathbb{E} Y_j =
\frac{m_2}{2} \epsilon \Delta f (x_i)
+ O(\epsilon^2)$.
\end{proof}
\subsection{Form rate of density-corrected graph Laplacian}
The graph Dirichlet form of density-corrected graph Laplacian is defined as
\begin{equation}\label{eq:def-tildeENu}
\tilde{E}_N(u):=\frac{1}{ \frac{m_2}{ 2 m_0^2}\epsilon } u^T( \tilde{D} - \tilde{W}) u
= \frac{1}{ \frac{m_2}{ m_0^2}\epsilon } \sum_{i,j=1}^N \tilde{W}_{i,j} (u_i - u_j)^2
= \frac{1}{ \frac{m_2}{ m_0^2}\epsilon } \sum_{i,j=1}^N W_{i,j} \frac{ (u_i - u_j)^2 }{D_i D_j}.
\end{equation}
We establish the counterpart of Theorem \ref{thm:form-rate}, which achieves the same form rate.
The theorem is for general differentiable $h$, which can be of independent interest.
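The equalities in \eqref{eq:def-tildeENu} rest on the standard identity $u^T(\tilde D - \tilde W)u = \frac{1}{2}\sum_{i,j}\tilde W_{ij}(u_i - u_j)^2$ for symmetric $\tilde W$. A direct numerical check of the first and last expressions in the display (all sizes and moment constants below are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps, m0, m2 = 40, 0.1, 1.7, 0.9             # arbitrary placeholder values
A = rng.random((N, N))
W = A + A.T                                     # arbitrary symmetric nonnegative "kernel" matrix
D = W.sum(1)

Wt = W / np.outer(D, D)                         # Wt_ij = W_ij / (D_i D_j)
Dt = np.diag(Wt.sum(1))
u = rng.standard_normal(N)

E_quad = u @ (Dt - Wt) @ u / (m2 / (2 * m0 ** 2) * eps)                      # first form
E_sum = (Wt * (u[:, None] - u[None, :]) ** 2).sum() / (m2 / m0 ** 2 * eps)   # last form
# E_quad and E_sum coincide
```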
\begin{theorem}
\label{thm:form-rate-density-correction}
Under Assumptions \ref{assump:M-p} and \ref{assump:h-C2-nonnegative},
if as $N \to \infty$, $ \epsilon \to 0+$, $ \epsilon^{d/2 } N = \Omega( \log N)$,
then for any $f \in C^{\infty} ({\cal M})$,
when $N$ is sufficiently large,
w.p. $> 1- 2 N^{-9}-2 N^{-10}$,
\[
\tilde{E}_N( \rho_X f )
= \langle f, -\Delta f \rangle
+ O_{p,f} ( \epsilon, \sqrt{ \frac{ \log N }{ N \epsilon^{d/2 }} } ).
\]
\end{theorem}
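A numerical illustration of Theorem \ref{thm:form-rate-density-correction} (not part of the proof; the warped grid is a deterministic surrogate for non-uniform sampling, and all concrete choices are arbitrary): for $f = \cos\theta$ on the unit circle, $\langle f, -\Delta f\rangle = \int_0^{2\pi} (f')^2 d\theta = \pi$, and for the Gaussian $h(r) = e^{-r/4}$ in $d=1$ one has $m_0 = 2\sqrt{\pi}$ and $m_2 = 4\sqrt{\pi}$.

```python
import numpy as np

N, eps = 1000, 0.005
s = 2 * np.pi * np.arange(N) / N
theta = s + 0.3 * np.sin(s)                  # non-uniform point density on S^1
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = eps ** (-0.5) * np.exp(-d2 / (4 * eps))  # d = 1, h(r) = e^{-r/4}
D = W.sum(1)
m0, m2 = 2 * np.sqrt(np.pi), 4 * np.sqrt(np.pi)

f = np.cos(theta)
diff2 = (f[:, None] - f[None, :]) ** 2
E_N = (W * diff2 / np.outer(D, D)).sum() / (m2 / m0 ** 2 * eps)
# E_N approaches <f, -Delta f> = pi, independently of the sampling density
```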
\begin{proof}[Proof of Theorem \ref{thm:form-rate-density-correction}]
By definition \eqref{eq:def-tildeENu},
\[
\tilde{E}_N( \rho_X f )
= \frac{1}{ \frac{m_2}{ m_0^2}\epsilon } \frac{1}{N^2} \sum_{i,j=1}^N W_{i,j} \frac{ (f(x_i) - f(x_j))^2 }{ \frac{D_i}{N} \frac{D_j}{N}}.
\]
The following lemma (proved in Appendix \ref{app:proofs-density-corrected})
makes use of the concentration of $D_i/N$ to reduce the graph Dirichlet form to a V-statistic, up to a relative error at the form rate.
\begin{lemma}\label{lemma:tildeENu-V-stat}
Under the good event in Lemma \ref{lemma:Di-concen-eps2} 1),
\[
\tilde{E}_N( u )
=
\left( \frac{1}{ m_2[h] \epsilon }
\frac{1}{N^2} \sum_{i,j=1}^N W_{i,j} \frac{ (u_i - u_j )^2 }{ p(x_i)p(x_j) } \right)
(1 + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) ),
\quad \forall u \in \R^N,
\]
and the constant in big-$O$ is determined by $({\cal M}, p)$ and uniform for all $u$.
\end{lemma}
We work under the good event in Lemma \ref{lemma:Di-concen-eps2} 1),
which is called $E_1$ and happens w.p. $> 1- 2 N^{-9}$.
Then applying Lemma \ref{lemma:tildeENu-V-stat} with $u = \rho_X f$, we have that
\begin{equation}
\tilde{E}_N( \rho_X f )
=
\left\{ \frac{1}{ m_2 \epsilon }
\frac{1}{N^2} \sum_{i,j=1}^N W_{i,j} \frac{ (f(x_i) - f(x_j))^2 }{ p(x_i)p(x_j) } \right\}
(1 + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) )
=: \textcircled{3} (1 + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) )
\label{eq:form-pf-1}
\end{equation}
The term
$\textcircled{3}$ in \eqref{eq:form-pf-1} equals $\frac{1}{N^2} \sum_{i,j=1}^N V_{i,j} $,
where
$V_{i,j} : = \frac{1}{ m_2 \epsilon } K_\epsilon( x_i, x_j) \frac{ (f(x_i) - f(x_j))^2 }{ p(x_i)p(x_j) }$,
and $ V_{i,i} =0$.
We follow the same approach as in the proof of Theorem 3.5 in \cite{cheng2020convergence} to analyze this V-statistic,
and show that (proof in Appendix \ref{app:proofs-density-corrected})
\begin{equation}\label{eq:proof-form-density-correct-Vstat}
\{\text{ $\textcircled{3}$ in \eqref{eq:form-pf-1} }\}
=
\langle f, - \Delta f \rangle + O_{f,p}(\epsilon, \sqrt{ \frac{\log N }{N \epsilon^{d/2}} }) .
\end{equation}
Back to \eqref{eq:form-pf-1}, we have shown that, under $E_1$ and the good event for \eqref{eq:proof-form-density-correct-Vstat},
\begin{align*}
\tilde{E}_N( \rho_X f )
& = \textcircled{3} (1 + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) )
= \left( \langle f, - \Delta f \rangle + O(\epsilon, \sqrt{ \frac{\log N }{N \epsilon^{d/2}} }) \right)
(1 + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2}} }) ) \\
& = \langle f, - \Delta f \rangle + O(\epsilon, \sqrt{ \frac{\log N }{N \epsilon^{d/2}} }),
\end{align*}
and the constant in big-$O$ depends on ${\cal M}$, $f$ and $p$.
\end{proof}
\subsection{Eigen convergence of $\tilde{L}_{rw}$}
In this subsection, let $\lambda_k$ be eigenvalues of $\tilde{L}_{rw}$ and $v_k$ the associated eigenvectors.
By \eqref{eq:def-tildeENu}, and recalling that $\tilde{m} =\frac{m_2}{ 2 m_0}$,
the analogue of \eqref{eq:lambdak-rw-min-max} is the following
\begin{equation}\label{eq:lambdak-rw-density-correct}
\lambda_k
= \min_{ L \subset \R^N, \, dim(L) = k} \sup_{ v \in L, v \neq 0}
\frac{ \frac{1}{\epsilon \tilde{m} } v^T(\tilde{D}-\tilde{W})v}{ v^T \tilde{D} v }
= \min_{ L \subset \R^N, \, dim(L) = k} \sup_{ v \in L, v \neq 0}
\frac{ \frac{1}{m_0 } \tilde{E}_N(v) }{ v^T \tilde{D} v },
\quad 1 \le k \le N.
\end{equation}
The methodology is the same as before,
with the main difference being the definition of the heat interpolation mapping with weights $p(x_j)$ as in \eqref{eq:def-tilde-Ir}.
This gives rise to the $p$-weighted quadratic form $\tilde{q}_s(u)$ defined in \eqref{eq:def-tilde-qs},
for which we derive the concentration argument for $\tilde{q}^{(0)}_s$ in \eqref{tildeq0-u-E0'}
and the upper bound of $\tilde{q}^{(2)}_s$ in Lemma \ref{lemma:qs2-UB-density-correct}.
The other difference is that the $\tilde{D}$-weighted 2-norm is considered because the eigenvectors are $\tilde{D}$-orthogonal.
All the proofs of the Steps 0-3 are left to Appendix \ref{app:proofs-density-corrected}.
\vspace{5pt}
\noindent
\underline{Step 0}.
We first establish eigenvalue UB based on
Lemma \ref{lemma:Di-concen-eps2} and
the form rate in Theorem \ref{thm:form-rate-density-correction}.
\begin{proposition}[Eigenvalue UB of $\tilde{L}_{rw}$]
\label{prop:eigvalue-UB-rw-density-correct}
Under Assumptions \ref{assump:M-p} and \ref{assump:h-C2-nonnegative},
for fixed $K \in \mathbb{N}$,
suppose $0 = \mu_1 <\cdots < \mu_{K} < \infty$ are all of single multiplicity.
If as $N \to \infty$, $\epsilon \to 0+$, and $\epsilon^{d/2} = \Omega( \frac{\log N}{N} ) $,
then for sufficiently large $N$,
w.p. $> 1- 4 N^{-9} - 4 K^2 N^{-10}$, $\tilde{L}_{rw}$ is well-defined, and
\[
\lambda_k
\le \mu_k + O(\epsilon, \sqrt{ \frac{\log N}{N \epsilon^{d/2} } } ) ,
\quad k=1,\cdots, K.
\]
\end{proposition}
\noindent
\underline{Step 1}.
Crude eigenvalue LB. We prove it using the $p$-weighted interpolation mapping defined as
\begin{equation}\label{eq:def-tilde-Ir}
\tilde{I}_r [u] = \frac{1}{N} \sum_{j=1}^N \frac{u_j}{ p(x_j)} H_r( x, x_j) = I_r [ \tilde{u}],
\quad \tilde{u}_i = u_i/p(x_i).
\end{equation}
Then, as before,
$\langle \tilde{I}_r [u], \tilde{I}_r [u] \rangle = q_{\delta \epsilon} (\tilde{u})$,
and
$\langle \tilde{I}_r [u], Q_t \tilde{I}_r [u] \rangle = q_{ \epsilon} (\tilde{u})$,
where
for $s > 0$,
\begin{equation}\label{eq:def-tilde-qs}
\begin{split}
\tilde{q}_s( u)
& := \frac{1}{N^2} \sum_{i,j=1}^N \frac{ { H}_s(x_i, x_j) }{p(x_i) p(x_j) }u_i u_j
= q_s( \tilde{u})
= \tilde{q}^{(0)}_s(u) - \tilde{q}^{(2)}_s(u), \\
\tilde{q}^{(0)}_s(u)
& := \frac{1}{N} \sum_{i=1}^N u_i^2 \left( \frac{1}{N} \sum_{j=1}^N \frac{ { H}_s(x_i, x_j) }{p(x_i) p(x_j) } \right),
\quad
\tilde{q}^{(2)}_s(u)
:= \frac{1}{2 N^2} \sum_{i,j=1}^N\frac{ { H}_s(x_i, x_j) }{p(x_i) p(x_j) } (u_i - u_j)^2.
\end{split}
\end{equation}
\begin{proposition}[Initial crude eigenvalue LB of $\tilde{L}_{rw}$]
\label{prop:eigvalue-LB-crude-rw-density-correct}
Under Assumption \ref{assump:M-p},
with $h$ Gaussian.
For fixed $k_{max} \in \mathbb{N}$, $K = k_{max}+1$,
and $\mu_k$, $\epsilon$ and $N$ satisfy the same condition as in Proposition \ref{prop:eigvalue-LB-crude},
where the definition of $c_K$ is the same except that $c$ is a constant depending on $({\cal M},p)$.
Then, for sufficiently large $N$, w.p.$> 1- 4K^2 N^{-10} - 8N^{-9}$,
$\lambda_k > \mu_k - \gamma_K$,
for $k=2,\cdots, K$.
\end{proposition}
\noindent
\underline{Steps 2-3}.
We prove eigenvector consistency and refined eigenvalue convergence rate. Define
\begin{equation}\label{eq:def-u-tildeD-norm}
\| u \|_{\tilde{D}}^2 : = \sum_{i=1}^N u_i^2 \tilde{D}_i, \quad \forall u \in \R^N.
\end{equation}
The proof uses the same techniques as before; the difference is in handling the $\tilde{D}$-orthogonality of the eigenvectors
and using the concentration arguments in Lemma \ref{lemma:Di-concen-eps2}.
Same as before, the extension to the case where $\mu_k$ has multiplicity greater than one is possible (Remark \ref{rk:multiplicity}).
\begin{theorem}[eigen-convergence of $\tilde{L}_{rw}$]
\label{thm:refined-rates-rw-density-correct}
Suppose the same conditions and setting on ${\cal M}$ hold, with $p$ uniform and $h$ Gaussian,
and let $k_{max}$, $K$, $\mu_k$, $\epsilon$ be the same
as in Theorem \ref{thm:refined-rates},
where the definition of $c_K$ is the same except that $c$ is a constant depending on $({\cal M},p)$.
Consider the first $k_{max}$ eigenvalues and eigenvectors of $\tilde{L}_{rw}$,
$\tilde{L}_{rw} v_k = \lambda_k v_k$,
and $v_k$ are normalized s.t.
$ N \| v_k\|_{\tilde{D}}^2 =1 $.
Define for $1 \le k\le K$,
\[
\tilde{\phi}_k := \rho_X( \frac{ \psi_k}{ \sqrt{ N } } ).
\]
Then, for sufficiently large $N$,
w.p.$> 1- 4K^2 N^{-10} - ( 4K + 8)N^{-9}$,
$ \| v_k \|_2 = \Theta(1)$, and
the same bounds as in Theorem \ref{thm:refined-rates} hold
for $ | \mu_k - \lambda_k |$ and $ \| v_k - \alpha_k \tilde{\phi}_k \|_2$, for $ 1 \le k \le k_{max}$,
with certain scalars $\alpha_k$ satisfying $|\alpha_k| = 1+o(1)$.
\end{theorem}
\begin{figure}
\hspace{-40pt}
\begin{subfigure}{0.5 \textwidth}
\includegraphics[trim = 40 0 40 0, clip, height=.48\textwidth]{test10_fig22}
\caption{}
\end{subfigure}
\hspace{20pt}
\begin{subfigure}{0.5 \textwidth}
\includegraphics[trim = 40 0 40 0, clip, height=.48\linewidth]{test10_fig23}
\caption{}
\end{subfigure}
\caption{
\scriptsize
Data points are sampled uniformly on $S^1$ embedded in $\R^4$.
(a) The eigenvalue relative error $\text{RelErr}_{\lambda}$,
visualized (in $\log_{10}$) as a field on a grid of ($\log_{10}$) $N$ and $\epsilon$,
$k_{max} = 9$.
The red curve on the left plot indicates the post-selected optimal $\epsilon$ which minimizes the error,
and that minimal error as a function of $N$ is plotted on the right in log-log scale.
(b) Same plot as (a) for eigenvector relative error $\text{RelErr}_{v}$.
The relative errors are defined in \eqref{eq:RelErr-eig}.
The empirical errors are averaged over 500 runs of experiments,
and the log error values are smoothed over the grid for better visualization.
Plots of the raw values are shown in Fig. \ref{fig:S1-Lrw-nosmooth}.
}
\label{fig:S1-Lrw}
\end{figure}
\begin{figure}[t]
\hspace{-40pt}
\begin{subfigure}{0.5 \textwidth}
\includegraphics[trim = 40 0 40 0, clip, height=.48\textwidth]{test9_fig22}
\caption{}
\end{subfigure}
\hspace{20pt}
\begin{subfigure}{0.5 \textwidth}
\includegraphics[trim = 40 0 40 0, clip, height=.48\linewidth]{test9_fig23}
\caption{}
\end{subfigure}
\caption{
\scriptsize
Data points are sampled uniformly on $S^2$ embedded in $\R^3$,
same plots as Fig. \ref{fig:S1-Lrw}.
$k_{max} = 9$,
and the plots of raw values are shown in Fig. \ref{fig:S2-Lrw-nosmooth}.
}
\label{fig:S2-Lrw}
\end{figure}
\section{Numerical experiments}\label{sec:exp}
\subsection{Eigen-convergence of $L_{rw}$}
We test on two simulated datasets, which are uniformly sampled on $S^1$ (embedded in $\R^4$, the formula is in Appendix \ref{app:numerics})
and unit sphere $S^2$ (embedded in $\R^3$).
For both datasets, we compute over an increasing number of samples $N = \{ 562, \cdots, 1584 \}$
and a range of values of $\epsilon $,
where the grid points of both $N$ and $\epsilon$ are evenly spaced in log scale.
For each value of $N$ and $\epsilon$, we generate $N$ data points, construct the kernelized matrix
$W_{ij}=K_\epsilon(x_i, x_j)$ as defined in \eqref{eq:def-K-eps} with Gaussian $h$,
and compute the first 10 eigenvalues $\lambda_k $ and eigenvectors $v_k$ of $L_{rw}$.
The errors are computed by
\begin{equation}\label{eq:RelErr-eig}
\text{RelErr}_{\lambda} = \sum_{k=2}^{k_{max}} \frac{|\lambda_k - \mu_k|}{\mu_k},
\quad
\text{RelErr}_{v} = \sum_{k=2}^{k_{max}} \frac{ \| v_k - \phi_k \|_2}{ \| \phi_k \|_2},
\end{equation}
where $\phi_k $ is as defined by \eqref{eq:def-phik}.
The experiment is repeated for 500 replicas from which the averaged empirical errors are computed.
For the data on $S^1$, $\epsilon = \{ 10^{-2.8}, \cdots, 10^{-4} \}$.
The manifold (in first 3 coordinates) is illustrated in Fig. \ref{fig:S1-tildeLrw-pt}(a) but the density is uniform here.
See more details in Appendix \ref{app:numerics}.
For the data on $S^2$,
$\epsilon = \{ 10^{-0.2}, \cdots, 10^{-1.8} \}$.
These ranges are chosen so that the minimal error over $\epsilon$ for each $N$ is observed, at least for $\text{RelErr}_{\lambda}$.
Note that for $S^1$, the population eigenvalues starting from $\mu_2$ are of multiplicity 2,
and for $S^2$, the multiplicities are 3, 5, $\cdots$.
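The pipeline above can be sketched as follows. This is a minimal sketch under stated assumptions: $S^1$ is embedded in $\R^2$ rather than the paper's $\R^4$ embedding (whose formula is in the appendix), and $W_{ij} = \exp(-\|x_i-x_j\|^2/(4\epsilon))$ is taken as one common convention for the kernelized matrix in \eqref{eq:def-K-eps}.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 500, 1e-2

# Uniform sample on the unit circle; embedded in R^2 here for simplicity
# (the paper embeds S^1 in R^4; see its appendix).
theta = rng.uniform(0.0, 2.0 * np.pi, N)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Gaussian affinity matrix; one common convention for the kernel.
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
W = np.exp(-sq / (4.0 * eps))

# Random-walk graph Laplacian L_rw = I - D^{-1} W.
D = W.sum(axis=1)
P = W / D[:, None]                    # row-stochastic matrix D^{-1} W
L_rw = np.eye(N) - P

# P is similar to the symmetric PSD matrix D^{-1/2} W D^{-1/2}, so the
# eigenvalues of L_rw are real and lie in [0, 1]; the constant vector
# is an eigenvector with eigenvalue 0.
evals = np.sort(np.linalg.eigvals(L_rw).real)
print(evals[:5])
```

The low-lying eigenvalues and eigenvectors of this matrix, after the appropriate $\epsilon$-dependent rescaling, are the quantities compared against $\mu_k$ and $\phi_k$ in \eqref{eq:RelErr-eig}.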
The results are shown in Figures \ref{fig:S1-Lrw} and \ref{fig:S2-Lrw}.
For data on $S^1$,
Fig. \ref{fig:S1-Lrw}(a) shows that $\text{RelErr}_{\lambda}$, as a function of $N$ (with post-selected best $\epsilon$),
decreases at an order of about $N^{- 0.4 }$,
which is consistent with the theoretical bound of $N^{-1/(d/2+2)}$ in Theorem \ref{thm:refined-rates-rw}, since $d=1$ here.
In the left plot of colored field,
the log error values are smoothed over the grid of $N$ and $\epsilon$,
and the best $\epsilon$ scales with $N$ as about $N^{-0.4}$.
The empirical scaling of the optimal $\epsilon$ is harder to observe reliably:
depending on the level of smoothing,
the slope of $\log_{10} \epsilon$ varies between -0.2 and -0.5 (the left plot),
while the slope for best (log) error is always about -0.4 (the right plot).
The result without smoothing is shown in Fig. \ref{fig:S1-Lrw-nosmooth}.
The eigenvector error in Fig. \ref{fig:S1-Lrw}(b) shows an order of about $N^{-0.5}$, which is better than the theoretical prediction.
For the data on $S^2$,
the eigenvalue convergence shows an order of about $N^{-0.33}$,
in agreement with the theoretical rate of $N^{-1/(d/2+2)}$ when $d=2$.
The eigenvector error again shows an order of about $N^{-0.5}$ which is better than theory.
The small error of eigenvector estimation at very large values of $\epsilon$ may be due to the symmetry of the simple manifolds $S^1$ and $S^2$.
In both experiments, the eigenvector estimation prefers a much larger value of $\epsilon$ than the eigenvalue estimation,
which is consistent with the theory.
\begin{figure}[t]
\hspace{-18pt}
\begin{subfigure}{0.24 \textwidth}
\includegraphics[trim = 10 0 10 0, clip, height=.94\textwidth]{test12_fig5}
\caption{}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}{0.24 \textwidth}
\includegraphics[trim = 10 0 10 0, clip, height=.93\linewidth]{test12_fig11}
\caption{}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}{0.24 \textwidth}
\includegraphics[trim = 10 0 10 0, clip, height=.93\linewidth]{test12_fig3}
\caption{}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}{0.249 \textwidth}
\includegraphics[trim = 40 0 10 0, clip, height=.96\textwidth]{test12_fig31}
\caption{}
\end{subfigure}
\caption{
\scriptsize
(a) Random sampled data on $S^1$ embedded in $\R^4$, the first 3 coordinates are shown, and colored by the density.
(b) Density $p$ and the test function $f$ plotted as a function of intrinsic coordinate (arc-length) on $[0,1)$ of $S^1$.
(c) One realization of $\tilde{L}_{rw} (\rho_X f)$ plotted in comparison with the true function of $\rho_X (\Delta f)$.
(d) Log relative error $\log_{10} \text{RelErr}_{pt}$, as defined in \eqref{eq:def-err-pt}, computed over a range of values of $\epsilon$,
averaged over 50 runs of repeated experiments.
The two fitted lines show the approximate scaling of $\text{RelErr}_{pt}$ at small $\epsilon$, where the variance error dominates, and at large $\epsilon$, where the bias error dominates.
}
\label{fig:S1-tildeLrw-pt}
\end{figure}
\begin{figure}
\hspace{-40pt}
\begin{subfigure}{0.5 \textwidth}
\includegraphics[trim = 40 0 40 0, clip, height=.48\textwidth]{test11_fig22}
\caption{}
\end{subfigure}
\hspace{20pt}
\begin{subfigure}{0.5 \textwidth}
\includegraphics[trim = 40 0 40 0, clip, height=.48\linewidth]{test11_fig23}
\caption{}
\end{subfigure}
\caption{
\scriptsize
Same eigenvalue and eigenvector relative error plots as Fig. \ref{fig:S1-Lrw},
where data are non-uniformly sampled on $S^1$ as in Fig. \ref{fig:S1-tildeLrw-pt}(a).
$k_{max} = 9$,
and the plots of raw values are shown in Fig. \ref{fig:S1-tildeLrw-nosmooth}.
}
\label{fig:S1-tildeLrw}
\end{figure}
\subsection{Density-corrected graph Laplacian}\label{subsec:exp-density-correct}
To examine the density-corrected graph Laplacian,
we switch to non-uniform density on $S^1$, illustrated in Fig. \ref{fig:S1-tildeLrw-pt}(a).
We first investigate the point-wise convergence of $-\tilde{L}_{rw} f$ to $\Delta f$
on a test function $f : S^1 \to \R$; see more details in Appendix \ref{app:numerics}.
The error is computed as
\begin{equation}\label{eq:def-err-pt}
\text{RelErr}_{pt} = \frac{ \| -\tilde{L}_{rw} \rho_X f - \rho_X (\Delta f) \|_1 }{ \| \rho_X (\Delta f) \|_1 },
\end{equation}
and the result is shown in Fig. \ref{fig:S1-tildeLrw-pt}.
Theorem \ref{thm:pointwise-rate-dencity-correct}
predicts the bias error to be $O(\epsilon)$
and the variance error to be $O(\epsilon^{-d/4-1/2}) = O(\epsilon^{-3/4})$ since $N$ is fixed,
which agrees with Fig. \ref{fig:S1-tildeLrw-pt}(d).
The results of $\text{RelErr}_{\lambda}$ and $\text{RelErr}_{v}$
are shown in Fig. \ref{fig:S1-tildeLrw}.
The order of convergence with best $\epsilon$ appears to be about $N^{-0.8}$ for both eigenvalue and eigenvector errors,
which is better than those of $L_{rw}$ (when $p$ is uniform) in Fig. \ref{fig:S1-Lrw},
and better than the theoretical prediction in Theorem \ref{thm:refined-rates-rw-density-correct}.
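The density-corrected construction used in this subsection can be sketched as follows. This sketch assumes that $\tilde{L}_{rw}$ follows the standard density-corrected ($\alpha=1$) normalization of Coifman--Lafon, $\tilde{W} = D^{-1} W D^{-1}$ with $\tilde{D}_i = \sum_j \tilde{W}_{ij}$; the sampling density here is an arbitrary illustrative one, not the density specified in the appendix.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 500, 1e-2

# Non-uniform sample on S^1: points concentrate near angle 0.  This is
# an illustrative density only, not the paper's.
theta = np.mod(1.2 * rng.standard_normal(N), 2.0 * np.pi)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
W = np.exp(-sq / (4.0 * eps))

# Density correction (alpha = 1): divide out the kernel density estimate
# on both sides, then apply the random-walk normalization.
D = W.sum(axis=1)
W_tilde = W / np.outer(D, D)
D_tilde = W_tilde.sum(axis=1)
L_tilde = np.eye(N) - W_tilde / D_tilde[:, None]

# As with L_rw, the eigenvalues are real and lie in [0, 1], with the
# constant vector as the eigenvector for eigenvalue 0.
evals = np.sort(np.linalg.eigvals(L_tilde).real)
print(evals[:5])
```

Dividing by the degree on both sides cancels the leading-order density dependence of the kernel sums, which is what lets the corrected operator target $\Delta$ rather than a density-weighted diffusion operator.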
\section{Discussion}
The current result may be extended in several directions.
First, for a manifold with smooth boundary,
the random-walk graph Laplacian recovers the Neumann Laplacian \cite{coifman2006diffusion},
and one can expect to prove spectral convergence in that setting as well, as in \cite{lu2020graph}.
Second, extensions to kernels with variable or adaptive bandwidth \cite{berry2016variable,cheng2020convergence}
and to other normalization schemes, e.g., bi-stochastic normalization \cite{marshall2019manifold,landa2020doubly,wormell2020spectral},
would be important for improving robustness against low sampling density and noise in data,
and possibly the spectral convergence as well.
Related is the problem of spectral convergence to other manifold diffusion operators, e.g., the Fokker-Planck operator, on $L^2({\cal M}, p dV)$.
It would also be interesting to extend the spectral convergence to more general types of kernel function $h$ that are not Gaussian,
or even not symmetric \cite{wu2018think}.
Finally, further investigation is needed to explain the good spectral convergence observed in experiments,
particularly the eigenvector convergence and the faster rate of the density-corrected graph Laplacian.
For the eigenvector convergence,
the current work focuses on the 2-norm consistency,
while the $\infty$-norm consistency as has been derived in \cite{dunson2019diffusion,calder2020lipschitz} is also important to study.
\section*{Acknowledgement}
The authors thank Hau-Tieng Wu for helpful discussion.
Cheng thanks Yiping Lu for helpful discussion on the eigen-convergence problem and the proof.
\bibliographystyle{plain}
\section{Introduction}
This paper is meant as an informal exposition of \cite{HelfRadz}. Its main result is a statement showing that a linear
operator defined in terms of divisibility by primes has small norm. In
this exposition, we will choose to start from one of its main current
applications, namely,
\begin{equation}\label{eq:firstdagger}
\frac{1}{\log x} \sum_{n\leq x} \frac{\lambda(n) \lambda(n+1)}{n} = O\left(
\frac{1}{\sqrt{\log \log x}}\right),
\end{equation} which strengthens results by Tao \cite{MR3569059} and
Tao-Teräväinen \cite{zbMATH07141311}. (Here
\(\lambda(n)\) is the Liouville function, viz., the completely
multiplicative function such that \(\lambda(p)=-1\) for every prime
\(p\).) There are other corollaries, some of them subsuming the above
statement. It is also true that the above statement improves on an existing
bound, whereas the main result is new also in a
qualitative sense. One may thus ask oneself whether it is right to
center the exposition on \eqref{eq:firstdagger}.
All the same, \eqref{eq:firstdagger} is a concrete statement that is obviously
interesting, being a step towards Chowla's conjecture (``logarithmic
Chowla in degree 2''), and so it is a convenient initial goal.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
First, some meta comments. We may contrast two possible ways of writing
a paper --- what may be called the \emph{incremental} and the
\emph{retrospective} approaches.
\begin{itemize}
\tightlist
\item
In the \emph{incremental} approach, we write a paper while we solve a
problem, letting complications and detours accrete. There is much that
can be said against this approach: it is hard to distinguish it from
simply lazy writing; the end product may be unclear; just how one got
to the solution of the problem may be inessential or even misleading.
\item
The \emph{retrospective} approach consists in writing the paper once
the proof is done, from the perspective that one has reached by the
time one has solved the problem.
\end{itemize}
These are of course two extremes. Few people write nothing down while
solving a problem, and the way one followed to reach the solution
generally has some influence on the finished paper. In the case of my
paper with Maksym, what we followed was mainly the retrospective
approach, with some incremental elements, chiefly to handle technical
complications that arose after we had an outline of a
proof. It is tempting to say that that is still too much incrementality,
but, in fact, some of the feedback we have received suggests a drawback
of the retrospective approach that I had not thought of before.
When faced with a result with a lengthy proof, readers tend to come up
with their own ``natural strategy''. So far, so good: active reading is
surely a good thing. What then happens, though, is that readers may see
necessary divergences from their ``natural strategy'' as technical
complications. They may often be correct; however, they may miss why the
``natural strategy'' may not work, or how it leads to the main,
essential difficulty --- the heart of the problem, which they may then
miss for following the complications.
What I will do in this write-up is follow, not an incremental approach,
but rather an idealized view of what the path towards the solution was
or could have been like; a recreated incrementality with the benefit of
hindsight, then, starting from a ``natural strategy'', with an emphasis
on what turns out to be essential.
{\em Notation.} We will use notation that is usual within
analytic number theory. In particular, given two functions $f,g$ on
$\mathbb{R}^+$ or $\mathbb{Z}^+$,
$f(x) = O(g(x))$ means that there exists a constant $C>0$
such that $|f(x)|\leq C g(x)$ for all large enough $x$, and
$f(x) = o(g(x))$ means that $\lim_{x\to \infty} f(x)/g(x) = 0$ (and $g(x)>0$ for $x$ large enough).
By \(O^*(B)\), we will mean ``a quantity whose absolute value is no
larger than \(B\)''; it is a useful bit of notation for error terms.
We define \(\omega(n)\) to be the number of prime divisors of an
integer \(n\).
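As a quick numerical illustration of \eqref{eq:firstdagger} (a sketch only, not part of any proof; the cutoff $x=10^5$ is arbitrary), one can sieve $\lambda(n)$ by counting prime factors with multiplicity and evaluate the logarithmically averaged correlation:

```python
import math

def liouville_up_to(x):
    """lambda(n) for 1 <= n <= x via a smallest-prime-factor sieve."""
    spf = list(range(x + 1))                 # smallest prime factor of each n
    for p in range(2, int(x ** 0.5) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (x - 1)
    for n in range(2, x + 1):
        lam[n] = -lam[n // spf[n]]           # one more prime factor than n / spf(n)
    return lam

x = 10 ** 5
lam = liouville_up_to(x + 1)
S = sum(lam[n] * lam[n + 1] / n for n in range(1, x + 1))
print(S / math.log(x))                       # small, consistent with the bound
```

Of course, $\log \log x$ grows so slowly that no feasible computation can probe the asymptotic rate; the point is only to see the cancellation in the correlation sum.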
\subsection{Initial setup}
Let us set out to reprove Tao's ``logarithmic Chowla'' statement, that
is,
\[
\frac{1}{\log x} \sum_{n\leq x} \frac{\lambda(n) \lambda(n+1)}{n} \to 0
\]
as \(x\to \infty\). Now, Tao's method gives a bound of
\(O(1/(\log \log \log \log x)^{\alpha})\) on the left side (as explained
in \cite{HelfUbis}, with \(\alpha=1/5\)),
while Tao-Teräväinen should yield a bound of
\(O(1/(\log \log \log x)^{\alpha})\) for some \(\alpha>0\). Their work
is based on depleting entropy, or, more precisely, on depleting mutual
information. Our method gives stronger bounds (namely,
\(O(1/\sqrt{\log \log x})\)) and is also ``stronger'' in ways that will
later become apparent. Let us focus, however, simply on giving a
different proof, and welcome whatever might come from it.
The first step will consist of a little manipulation as in Tao, based
on the fact that \(\lambda\) is multiplicative. Let
\(W = \sum_{n\leq x} \lambda(n) \lambda(n+1)/n\). For any prime (or
integer!) \(p\),
\[
\begin{aligned}
\frac{1}{p} W &= \sum_{n\leq x} \frac{\lambda(p n) \lambda(p n + p)}{p n}\\
&= \sum_{n\leq p x:\; p|n} \frac{\lambda(n) \lambda(n+p)}{n} =
\sum_{n\leq x:\; p|n} \frac{\lambda(n) \lambda(n+p)}{n}
+ O\left(\frac{\log p}{p}\right).\end{aligned}
\]
Hence, for any set of primes \(\mathbf{P}\),
\[
\sum_{p\in \mathbf{P}} \sum_{n\leq x:\; p|n} \frac{\lambda(n) \lambda(n+p)}{n} = W \mathscr{L} + O\left(\sum_{p\in \mathbf{P}}
\frac{\log p}{p}\right),
\]
where \(\mathscr{L} = \sum_{p\in \mathbf{P}} 1/p\). If \(H\) is such
that \(p\leq H\) for all \(p\in \mathbf{P}\), then, by the prime number
theorem, \(\sum_{p\in \mathbf{P}} (\log p)/p \ll \log H\). Thus
\[
W = \frac{1}{\mathscr{L}} \sum_{n\leq x} \sum_{p\in \mathbf{P}: p|n}
\frac{\lambda(n) \lambda(n+p)}{n} + O\left(\frac{\log H}{\mathscr{L}}\right).
\]
Assuming \(H = x^{o(1)}\) (so that \(\log H = o(\log x)\)) and
\(\mathscr{L}\geq 1\), and using a little partial summation, we see that, to
prove that $W = o(\log x)$,
it is enough to show that \(S_0 = o(N\mathscr{L})\), where
\[
S_0 = \sum_{N < n\leq 2 N}\sum_{p\in \mathbf{P}: p|n} \lambda(n) \lambda(n+p).
\]
Let us make this sum a little more symmetric. Let
\(\mathbf{N} = \{n\in \mathbb{Z}: N < n\leq 2 N\}\), and define
\[
S = \sum_{n\in \mathbf{N}} \sum_{\sigma = \pm 1} \sum_{\substack{p\in \mathbf{P}: p|n\\ n+\sigma p\in \mathbf{N}}}
\lambda(n) \lambda(n+\sigma p).
\]
Then \(S = 2 S_0 + O(\sum_{p\in \mathbf{P}} 1) = 2 S_0 + O(H)\), and
thus it is enough to prove that
\[
S = o(N \mathscr{L}).
\]
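The symmetrization can be checked directly: since \(p|n\) if and only if \(p|n+p\), each unordered pair \(\{n, n+p\}\) with both endpoints in \(\mathbf{N}\) and \(p|n\) is counted once with \(\sigma=+1\) (from \(n\)) and once with \(\sigma=-1\) (from \(n+p\)), so \(S\) is exactly twice the sum over such pairs. A toy-scale sketch, with an arbitrary illustrative set \(\mathbf{P}\) and \(\lambda\) computed by trial division:

```python
def lam(n):
    """Liouville function via trial division (fine for small n)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

N = 300
P = [7, 11, 13]                      # arbitrary toy set of primes
window = range(N + 1, 2 * N + 1)     # the set {n : N < n <= 2N}

# Symmetric sum S: n in the window, sigma = +-1, p | n, n + sigma p in window.
S = sum(lam(n) * lam(n + s * p)
        for n in window for s in (+1, -1) for p in P
        if n % p == 0 and N < n + s * p <= 2 * N)

# Each pair {n, n+p} with both endpoints in the window and p | n
# contributes twice to S (once from each endpoint), using p|n <=> p|n+p.
edge_sum = sum(lam(n) * lam(n + p)
               for n in window for p in P
               if n % p == 0 and n + p <= 2 * N)

assert S == 2 * edge_sum
print(S, edge_sum)
```

The \(O(H)\) boundary term in \(S = 2S_0 + O(H)\) comes from the pairs whose upper endpoint falls just beyond \(2N\), at most one per prime \(p\).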
\textbf{\emph{Objectives.}} Tao showed that there \emph{exists} a set
\(\mathbf{P}\) of primes (very small compared to \(N\)) such that
\(S = o(N \mathscr{L})\). It is our aim to prove that
\(S = o(N \mathscr{L})\) for \emph{every} set \(\mathbf{P}\) of primes
satisfying some simple conditions. (As we said, we assume \(p\leq H\),
and it is not hard to see that we have to assume
\(\mathscr{L}\to\infty\); it will also be helpful to assume that no
\(p\in \mathbf{P}\) is tiny compared to \(H\).) We will in fact be able
to show that \(S = O(N \sqrt{\mathscr{L}})\), which is essentially
optimal.
\section{A first attempt}
We now set out on our own.
\subsection{Old habits die hard. A reduction}
It is a deep-seated instinct for an analytic
number theorist to apply Cauchy-Schwarz:
\[
\begin{aligned}
S^2 &\leq \left(\sum_{n\in \mathbf{N}} \sum_{\sigma = \pm 1} \sum_{\substack{p\in \mathbf{P}: p|n\\ n+\sigma p\in \mathbf{N}}}
\lambda(n) \lambda(n+\sigma p)\right)^2\\
&\leq N \sum_{n\in \mathbf{N}} \sum_{\sigma_1,\sigma_2 = \pm 1} \sum_{\substack{p_1,p_2\in \mathbf{P}\\ p_i|n,\; n+\sigma_i p_i\in \mathbf{N}}}
\lambda(n+\sigma_1 p_1) \lambda(n+\sigma_2 p_2)\\
&\leq N \sum_{n\in \mathbf{N}}\; \sum_{\sigma_1,\sigma_2 = \pm 1} \sum_{\substack{p_1,p_2\in \mathbf{P}\\ p_1|n, p_2|n+\sigma_1 p_1\\
n+\sigma_1 p_1, n+\sigma_1 p_1 + \sigma_2 p_2 \in \mathbf{N}}} \lambda(n)
\lambda(n+\sigma_1 p_1+\sigma_2 p_2),\end{aligned}
\]
where we are changing variables in the last step.
We iterate, applying Cauchy-Schwarz \(\ell\) times:
\[
S^{2^\ell} \leq N^{2^\ell-1} \sum_{n\in \mathbf{N}}
\sum_{\substack{\sigma_i= \pm 1,\; p_i\in \mathbf{P}\\ \forall 1\leq i\leq 2^\ell: p_i|n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}}} \lambda(n)
\lambda(n+ \sigma_1 p_1 + \dotsc + \sigma_{2^\ell} p_{2^{\ell}}).
\]
We can see \(n+\sigma_1 p_1 + \dotsc + \sigma_{2^\ell} p_{2^\ell}\) as
the outcome of a ``walk'' of length \(2^\ell\).
Suppose for a moment that, for \(k=2^\ell\) large, the number of walks
of length \(k\) from \(n\) to \(m\) is generally about \(\psi(m)\),
where \(\psi\) is a nice continuous function. Then \(S^{2^\ell}\) would
tend to
\[
N^{2^\ell-1} \sum_{n\in \mathbf{N}} \sum_{m} \lambda(n) \lambda(n+m) \psi(m).
\]
The main result (Theorem 1) in Matomäki-Radziwiłł \cite{MR3488742} would then
give us a bound on that double sum. Let us write that bound in the form
\[
\left|\sum_{n\in \mathbf{N}} \sum_{m} \lambda(n) \lambda(n+m) \psi(m)\right|
\leq \text{err}_2 \cdot N \mathscr{L}^{k},
\]
since \(|\psi|_1\) should be about \(\mathscr{L}^k\), and write our
statement on convergence to \(\psi\) in the form
\begin{equation}\label{eq:diffabs}
\sum_{n\in \mathbf{N}} \sum_m \left|\psi(m) -
\sum_{\substack{\sigma_i= \pm 1,\; p_i\in \mathbf{P}\\ \forall 1\leq i\leq 2^\ell: p_i|n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\\ \sigma_1 p_1 + \dotsc +
\sigma_k p_k = m}} 1\right| = \text{err}_1 \cdot N \mathscr{L}^k.
\end{equation}
Then
\[
S\leq (\text{err}_1^{1/k} + \text{err}_2^{1/k})\cdot N \mathscr{L}.
\]
Here already we would seem to have a problem. The ``width'' \(M\) of the
distribution \(\psi\) (meaning its scale) should be
\(\ll \sqrt{k} \cdot \mathbb{E}(p: p\in \mathbf{P})\leq \sqrt{k} H\);
the distribution could be something like a Gaussian at that scale, say.
Now, the bound from \cite{MR3488742} is roughly
of the quality \(\text{err}_2\leq 1/\log M\). One can use intermediate
results in the same paper to obtain a bound on \(\text{err}_2\) roughly
of the form \(1/M^{\delta}\), \(\delta>0\), if we remove some integers
from \(\mathbf{N}\). At any rate, it seems clear that we would need, at the
very least, \(k\) larger than any constant times \(\log H\).
As it turns out, all of that is a non-issue, in that there is a way to avoid
taking the \(k\)th root of \(\text{err}_2\) altogether. Let us make a mental
note, however.
\subsection{Walks of different kinds}
The question now is how large \(\ell\) has to be for the number of walks
of length \(k=2^{\ell}\) from \(n\) to \(n+m\) to approach a continuous
distribution \(\psi(m)\). Consider first the walks
\(n, n+\sigma_1 p_1,\dotsc,n+\sigma_1 p_1 + \dotsb + \sigma_k p_k\) such
that no prime \(p_i\) is repeated. Fix \(\sigma_i\), \(p_i\) and let
\(n\) vary. By the Chinese Remainder Theorem, the number of
\(n\in \mathbf{N}\) such that
\[
p_1|n,\; p_2|n+\sigma_1 p_1,\; \dotsc,\; p_k|n+\sigma_1 p_1 + \dotsc + \sigma_{k-1} p_{k-1}
\]
is almost exactly \(N/p_1 p_2 \dotsb p_k\). In other words, the
probability of that walk being allowed is almost exactly
\(1/p_1 \dotsc p_k\). We may thus guess that \(\psi\) has the same shape
(scaled up by a factor of \(\mathscr{L}^k\)) as the distribution
of the endpoint of a
random walk where each edge of length \(p\) is taken with probability
\(1/p\) (divided by \(\mathscr{L}\), so that the probabilities add up
to \(1\)). That distribution should indeed tend to a continuous
distribution --- namely, a Gaussian --- fairly quickly. Of course, here,
we are just talking about the contribution of walks with distinct edges
\(p_i\) to
\[
\sum_{n\in \mathbf{N}} \sum_m \left(\psi(m) -
\sum_{\substack{\sigma_i= \pm 1,\; p_i\in \mathbf{P}\\ \forall 1\leq i\leq 2^\ell: p_i|n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\\ \sigma_1 p_1 + \dotsc +
\sigma_k p_k = m}} 1\right),
\]
without absolute values, and we do need to take absolute values as in
\eqref{eq:diffabs}. However, we can get essentially what we want by
looking at the variance
\[
\sum_{n\in \mathbf{N}} \sum_m \left(\psi(m) -
\sum_{\substack{\sigma_i= \pm 1,\; p_i\in \mathbf{P}\\ \forall 1\leq i\leq k: p_i|n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\\ \sigma_1 p_1 + \dotsc +
\sigma_k p_k = m}} 1\right)^2,
\]
and considering the contribution to this variance made by closed walks
\[
\begin{aligned}n, &n+\sigma_1 p_1, \dotsc, n+\sigma_1 p_1 + \dotsb + \sigma_k p_k = m,\\
&n+\sigma_1 p_1 + \dotsb + \sigma_k p_k-\sigma_{k+1} p_{k+1},\dotsc ,m-(\sigma_{k+1} p_{k+1} + \dotsc + \sigma_{2 k}
p_{2 k})=n\end{aligned}
\]
with \(p_1,p_2,\dotsc,p_{2 k}\) distinct:
\begin{center}
\begin{tikzpicture}[thick,scale=0.9, every node/.style={transform shape}]
\tikzstyle{membre}= [rectangle]
\tikzstyle{operation}=[->,>=latex]
\tikzstyle{etiquette}=[midway,fill=black!20]
\node[membre] (n) at (0,3) {$n$};
\node[membre] (n1) at (2.5,4) {$n+\sigma_1 p_1$};
\node[membre] (n2) at (6,4.5) {$n+\sigma_1 p_1 + \sigma_2 p_2$};
\node[membre] (aro) at (9.5,4) {$\dots$};
\node[membre] (m) at (12,3) {$n+\sigma_1 p_1 + \dots + \sigma_k p_k$};
\node[membre] (nk1) at (8.5,1) {$n+\sigma_1 p_1 + \dots + \sigma_k p_k - \sigma_{k+1} p_{k+1}$};
\node[membre] (nk2m1) at (2.75,1) {$\dots$};
\draw[operation] (n) to node[midway,above]{$\sigma_1 p_1$} (n1);
\draw[operation] (n1) to node[midway,above] {$\sigma_2 p_2$} (n2);
\draw[operation] (n2) to node[midway,above] {$\sigma_3 p_3$} (aro);
\draw[operation] (aro) to node[midway,above] {$\;\;\sigma_k p_k$} (m);
\draw[operation] (m) to node[midway,below] {$\;\;\;\;\;\;\;\;\;\;\;\;\sigma_{k+1} p_{k+1}$} (nk1);
\draw[operation] (nk1) to node[midway,above] {$\sigma_{k+2} p_{k+2}$} (nk2m1);
\draw[operation] (nk2m1) to node[midway,right] {$\sigma_{2 k} p_{2 k}$} (n);
\end{tikzpicture}
\end{center}
The contribution of these closed walks is almost exactly what we would
obtain from the naïve model we were implicitly considering, viz., a random walk
where each edge \(p_i\) is taken with probability
\(1/(\mathscr{L} p_i)\), and so we should have the same limiting
distribution as in that model.
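The naïve model itself is easy to simulate. In the sketch below (with an arbitrary toy set standing in for \(\mathbf{P}\)), each step is \(\sigma p\) with \(\sigma = \pm 1\) uniform and \(p\) drawn with probability \(1/(\mathscr{L} p)\); one step then has mean \(0\) and second moment \(\sum_p p^2 \cdot 1/(\mathscr{L} p) = \sum_p p/\mathscr{L}\), so the endpoint after \(k\) steps has width about \(\sqrt{k \sum_p p/\mathscr{L}} \leq \sqrt{k} H\).

```python
import math
import random

random.seed(0)

# Arbitrary toy set of primes standing in for P, with H_0 <= p <= H.
primes = [101, 103, 107, 109, 113, 127, 131, 137, 139, 149]
L = sum(1.0 / p for p in primes)              # the quantity script-L
probs = [1.0 / (L * p) for p in primes]       # probabilities, summing to 1

k, trials = 64, 20000

def endpoint():
    """One walk of k steps, each step sigma * p, p taken w.p. 1/(L p)."""
    steps = random.choices(primes, weights=probs, k=k)
    return sum(random.choice((-1, 1)) * p for p in steps)

samples = [endpoint() for _ in range(trials)]
mean = sum(v for v in samples) / trials
std = math.sqrt(sum(v * v for v in samples) / trials - mean ** 2)

# Predicted width: sqrt(k * sum_p p / L).
pred = math.sqrt(k * sum(primes) / L)
print(std, pred)
```

The empirical standard deviation matches the prediction closely, and a histogram of the endpoints is visibly Gaussian at this scale, consistent with the limiting-distribution heuristic above.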
What about walks where some primes \(p_i\) do repeat? At least some of
them may make a large contribution that is not there in our naïve model.
For instance, consider walks of length \(2 k\) that retrace their steps,
so that the \((n+1)\)th step is the \(n\)th step backwards, the
\((n+2)\)th step is the \((n-1)\)th step backwards, etc.:
\[
\begin{aligned}
n, &n+\sigma_1 p_1, \dotsc, n+\sigma_1 p_1 + \dotsb + \sigma_k p_k,\\
&n+\sigma_1 p_1 + \dotsb + \sigma_{k-1} p_{k-1},\dotsc ,n+\sigma_1 p_1, n,\end{aligned}
\]
with
\[
p_1|n,\; p_2|n+\sigma_1 p_1,\; \dotsc,\; p_k|n+\sigma_1 p_1 + \dotsc + \sigma_{k-1} p_{k-1},
\]
\[
p_k|n+\sigma_1 p_1 + \dotsc + \sigma_{k-1} p_{k-1} + \sigma_k p_k,\;
\dotsc,\; p_2|n+\sigma_1 p_1 + \sigma_2 p_2,\; p_1|n+\sigma_1 p_1.
\]
The second row of divisibility conditions here is obviously implied by
the first row. Hence, again by the Chinese Remainder Theorem, the walk
is valid for almost exactly \(N/p_1 p_2 \dotsb p_k\) elements
\(n\in \mathbf{N}\), rather than for \(N/(p_1 p_2 \dotsb p_k)^2\)
elements. The contribution of such walks to
\[
\sum_{n\in \mathbf{N}}
\sum_{\substack{\forall 1\leq i\leq 2 k: \sigma_i= \pm 1,\; p_i\in \mathbf{P}\\ \forall 1\leq i\leq 2 k: p_i|n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\\ \sigma_1 p_1 + \dotsc +
\sigma_{2 k} p_{2 k} = 0}} 1
\]
(which is the interesting part of the variance we wrote down before) is
clearly \(N \mathscr{L}^k\). In order for it not to be of greater order
than what one expects from the limiting distribution, we should have
\(N \mathscr{L}^k \ll N \mathscr{L}^{2 k}/M\), where \(M\), the width of
the distribution, is, as we saw before, very roughly \(\sqrt{k} H\).
Thus, we need \(k\gg (\log H)/(\log \mathscr{L})\).
There are of course other walks that make similar contributions; take,
for instance,
\[
n, n+p_1, n, n-p_3, n-p_3 + p_4, n-p_3, n-p_3+p_6, n-p_3, n
\]
for \(k=3\). These are what we may call \emph{trivial walks}, in the
sense that a word is \emph{trivial} when it reduces to the identity. It
is tempting to say that their number is \(2^k C_k\), where
\(C_k\leq 2^{2 k}\) is the \(k\)th Catalan number (which, among other
things, counts the number of expressions containing \(k\) pairs of
parentheses correctly matched: for example, \(() (())\) would correspond
to the trivial walk above). In fact, the matter becomes more subtle
because some primes may reappear without taking us one step further back
to the origin of the walk; for instance, in the above, we might have
\(p_4=p_1\), and that is a possibility that is not recorded by a simple
pattern of correctly matched parentheses--- yet it must be considered
separately. Here again we make a mental note.
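The matched-parenthesis count is easy to confirm by brute force: the number of sign patterns of length \(2k\) that reduce to the identity in the simplest way (each ``back'' step cancelling the most recent uncancelled ``out'' step) equals the Catalan number \(C_k = \binom{2k}{k}/(k+1)\). The sketch below checks this, ignoring the subtlety of reappearing primes noted above.

```python
from itertools import product
from math import comb

def catalan(k):
    return comb(2 * k, k) // (k + 1)

def trivial_patterns(k):
    """Count +-1 sequences of length 2k (step out / step back) that stay
    nonnegative and return to 0, i.e. correctly matched parentheses."""
    count = 0
    for signs in product((1, -1), repeat=2 * k):
        height, ok = 0, True
        for s in signs:
            height += s
            if height < 0:            # a 'back' step with nothing to cancel
                ok = False
                break
        if ok and height == 0:
            count += 1
    return count

for k in range(1, 7):
    assert trivial_patterns(k) == catalan(k)
print([catalan(k) for k in range(1, 7)])   # [1, 2, 5, 14, 42, 132]
```

Since \(C_k \leq 4^k\), the number of such shapes is exponential in \(k\) but negligible against the \(\mathscr{L}^{2k}\) normalization once \(\mathscr{L}\) is large.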
It is, incidentally, no coincidence that, when we try to draw the
trivial walk above, we produce a tree:
\begin{center}
\begin{tikzpicture}[thick,scale=0.9, every node/.style={transform shape}]
\tikzstyle{membre}= [rectangle]
\tikzstyle{operation}=[->,>=latex]
\tikzstyle{etiquette}=[midway,fill=black!20]
\node[membre] (n) at (0,2) {$n$};
\node[membre] (n1) at (3,3) {$n+p_1$};
\node[membre] (n3) at (3,1) {$n-p_3$};
\node[membre] (n34) at (6,2) {$n-p_3+p_4$};
\node[membre] (n36) at (6,0) {$n-p_3+p_6$};
\draw[operation] (n)--(n1);
\draw[operation] (n)--(n3);
\draw[operation] (n3)--(n34);
\draw[operation] (n3)--(n36);
\end{tikzpicture}
\end{center}
Any trivial walk gives us a tree (or rather a tree traversal) when drawn.
Now let us look at walks that fall into neither of the two classes just
discussed; that is, walks where we do have some repeated primes
\(p_i=p_{i'}\) even after we reduce the walk.
(When we say we
\emph{reduce} a walk, we mean an analogous procedure to that of reducing
a word.)
Then, far from being
independent, the condition
\[
p_i|n + \sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}
\]
either implies or contradicts the condition
\[
p_i=p_{i'}|n+\sigma_1 p_1 + \dotsc + \sigma_{i'-1} p_{i'-1}
\]
for given \(\{(\sigma_i,p_i)\}_i\), depending on whether
\[
p_i|\sigma_i p_i + \sigma_{i+1} p_{i+1} + \dotsc + \sigma_{i'-1} p_{i'-1}.
\]
We may draw another graph, emphasizing the two edges with the same label
\(\pm p_i\):
\begin{center}
\begin{tikzpicture}[thick,scale=0.9, every node/.style={transform shape}]
\tikzstyle{membre}= [rectangle]
\tikzstyle{operation}=[->,>=latex]
\tikzstyle{etiquette}=[midway,fill=black!20]
\node[membre] (nim1) at (0,0) {$n+\dots +\sigma_{i-1} p_{i-1}$};
\node[membre] (ni) at (1,2) {$n+\dots+\sigma_i p_i$};
\node[membre] (nmid) at (5,2.5) {$\dots$};
\node[membre] (nim) at (10,2) {$n+\dots+\sigma_i p_i + \dots + \sigma_{i'-1} p_{i'-1}$};
\node[membre] (nip) at (9,0) {$n+\dots+\sigma_i p_i + \dots +
\sigma_{i'} p_{i'}$};
\draw[operation, ultra thick, densely dashed] (nim1)--(ni) node[midway,left]{$\sigma_i p_i$};
\draw[operation] (ni) to[out=20,in=160] (nmid);
\draw[operation] (nmid) to[out=20,in=160] (nim);
\draw[operation, ultra thick, densely dashed] (nim)--(nip) node[midway,left]{$\sigma_{i'} p_{i'}=\sigma_{i'} p_i$};
\end{tikzpicture}
\end{center}
At this point it becomes convenient to introduce the assumption that
\(p\geq H_0\) for all \(p\in \mathbf{P}\). Then it is clear that, if
\(i'-i>1\) and
\(p_j\ne p_i\) for all \(i < j < i'\), the divisibility condition
\(p_i|\sigma_{i+1} p_{i+1} + \dotsc + \sigma_{i'-1} p_{i'-1}\) may
hold only for a proportion \(\ll 1/H_0\) of all tuples
\((p_{i+1},\dotsc,p_{i'-1})\).
So far, so good, except that it
is not enough to save one factor of \(H_0\), and indeed we should save a
factor of at least \(M\), which is roughly in the scale of \(H\), not
\(H_0\). Obviously, for \(\mathscr{L}\to \infty\) to hold, we need
\(H_0 = H^{o(1)}\), and so we need to save more than any constant number
of factors of \(H_0\).
We have seen three rather different cases. In general, we would like to
have a division of all walks into three classes:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
walks containing enough non-repeated primes \(p_i\) that their
contribution is what one would expect from the hoped-for limiting
distribution;
\item
rare walks, such as, for example, trivial walks;
\item
walks for which there are many independent conditions of the form
\(p_i|n+\sigma_{i+1} p_{i+1} + \dotsc + \sigma_{i'-1} p_{i'-1}\) as
above.
\end{enumerate}
\emph{\textbf{Some initial thoughts on the third case.}} We should think
a little about what we mean or should mean by ``independent''. It is
clear that, if we have several conditions \(p|L_j(p_1,\dotsc,p_{2 k})\),
where the \(L_j\) are linear forms spanning a space of dimension \(D\),
then, in effect, we have only \(D\) distinct conditions. It is also
clear that, while having several primes \(p_i\) divide the same quantity
\(L(p_1,\dotsc,p_{2 k})\) ought to give us more information than just
knowing one prime divides it, that is true only up to a point: if
\(L(p_1,\dotsc,p_{2 k})=0\) (something that we expect to happen about
\(1/(\sqrt{k} H)\) of the time), then every condition of the form
\(p_i|L(p_1,\dotsc,p_{2 k})\) holds trivially.
It is also the case that we should be careful about which primes do the
dividing. Say two indices \(i\), \(i'\) are equivalent if
\(p_i=p_{i'}\). Choose your equivalence relation \(\sim\), and paint the
indices \(i\) in some equivalence classes blue, while painting the
indices \(i\) in the other equivalence classes red. It is not hard to
show, using a little geometry of numbers, that, if
\(p_{i_j}|L_j(p_1,\dotsc,p_{2 k})\) for some blue indices \(i_j\) and
linear forms \(L_j\), \(j\in J\), and the space spanned by the forms
\(L_j\) \emph{considered as formal linear combinations on the variables}
\(x_i\) \emph{for} \(i\) \emph{red} has dimension \(D\), we can gain a factor of
at least \(H_0^D\) or so: the primes \(p_i\) for \(i\) red have to lie
in a lattice of codimension \(D\) and index \(\geq H_0^D\). A
priori, however, it is not clear which primes we should color blue and
which ones red.
We have, at any rate, arrived at what may be called the core of the
problem -- how to classify our walks into three classes as above, and how
to estimate their contribution accordingly.
\section{Graphs, operators and eigenvalues}
It is now time to step back and take a fresh look at the problem.
Matters will become clearer and simpler, but, as we will see, the core
of the problem will remain.
We have been talking about walks. Now, walks are taken in a graph.
Thinking about it for a moment, we see that we have been considering
walks in the graph \(\Gamma\) having \(V=\mathbf{N}\) as its set of
vertices and
\(E=\{\{n,n+p\}: n,n+p\in \mathbf{N},\, p\in \mathbf{P},\, p|n\}\) as its set
of edges. (In other words, we draw an edge between \(n\) and \(n+p\) if
and only if \(p\) divides \(n\).) We also considered random walks in
what we called the ``naïve model''; those are walks in the weighted
graph \(\Gamma'\) having \(\mathbf{N}\) as its set of vertices and an
edge of weight \(1/p\) between any \(n, n+p\in \mathbf{N}\) with
\(p\in \mathbf{P}\), regardless of whether \(p|n\).
\subsection{Adjacency, eigenvalues and expansion}
Questions about walks in a graph \(\Gamma\) are closely tied to the
\emph{adjacency operator} \(\textrm{Ad}_\Gamma\). This is a linear
operator on functions \(f:V\to \mathbb{C}\) taking \(f\) to a function
\(\textrm{Ad}_\Gamma f:V\to \mathbb{C}\) defined as follows: for
\(v\in V\), \[
(\textrm{Ad}_\Gamma f)(v) = \sum_{w: \{v,w\}\in E} f(w).
\] In other words, \(\textrm{Ad}_\Gamma\) replaces the value of \(f\) at
a vertex \(v\) by the sum of its values \(f(w)\) at the neighbors \(w\)
of \(v\). The connection with walks is not hard to see: for instance, it
is very easy to show that, if \(1_v:V\to \mathbb{C}\) is the function
taking the value \(1\) at \(v\) and \(0\) elsewhere, then, for any
\(w\in V\) and any \(k\geq 0\), \(((\textrm{Ad}_\Gamma)^k 1_v)(w)\) is
the number of walks of length \(k\) from \(v\) to \(w\).
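This identity is easy to check numerically; here is a minimal sketch on a toy graph (a 4-cycle, chosen purely for illustration, with the adjacency operator realized as a matrix):

```python
import numpy as np

# adjacency matrix of the 4-cycle on vertices 0-1-2-3-0
Ad = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])

one_v = np.array([1.0, 0.0, 0.0, 0.0])        # the function 1_v for v = 0
walks2 = np.linalg.matrix_power(Ad, 2) @ one_v
# walks2[w] = number of walks of length 2 from 0 to w: two walks return
# to 0 (via 1 or via 3) and two reach 2, so walks2 = [2, 0, 2, 0]
```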
The connection between \(\textrm{Ad}_\Gamma\) and our problem is very
direct, in that it can be stated without reference to random walks. We
want to show that \[
\sum_{n\in \mathbf{N}} \sum_{\sigma = \pm 1} \sum_{\substack{p\in \mathbf{P}: p|n\\ n+\sigma p\in \mathbf{N}}}
\lambda(n) \lambda(n+\sigma p) = o(N \mathscr{L}).
\] That is exactly the same as showing that \[
\langle \lambda, \textrm{Ad}_{\Gamma} \lambda\rangle =
o(\mathscr{L}),
\] where \(\langle \cdot, \cdot\rangle\) is the inner product defined by
\[
\langle f,g\rangle = \frac{1}{N} \sum_{n\in \mathbf{N}} f(n) \overline{g(n)}
\] for \(f,g:V\to \mathbb{C}\).
The behavior of random walks on a graph --- in particular, the limit
distribution of their endpoints --- is closely related to the notion of
\emph{expansion}. A regular graph \(\Gamma\) (that is, a graph where
every vertex has the same degree \(d\)) is said to be an \emph{expander
graph} with parameter \(\epsilon>0\) if, for every eigenvalue \(\gamma\)
of \(\textrm{Ad}_\Gamma\) corresponding to an eigenfunction orthogonal
to constant functions, \[|\gamma|\leq (1-\epsilon) d.\] {(A few basic
remarks may be in order. Since \(\Gamma\) is regular of degree \(d\), a
constant function on \(V\) is automatically an eigenfunction with
eigenvalue \(d\). Now, \(\textrm{Ad}_\Gamma\) is a symmetric operator,
and thus it has full real spectrum: the space of all functions
\(V\to \mathbb{C}\) is spanned by a set of eigenfunctions of
\(\textrm{Ad}_\Gamma\), all orthogonal to each other; the corresponding
eigenvalues are all real, and it is easy to see that all of them are at
most \(d\) in absolute value.)}
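As a quick sanity check of the definition (a toy computation, not part of the argument): the complete graph \(K_5\) is regular of degree \(d=4\), its nontrivial eigenvalues all equal \(-1\), and so it is an expander with parameter \(\epsilon = 3/4\).

```python
import numpy as np

n = 5
Ad = np.ones((n, n)) - np.eye(n)      # adjacency matrix of the complete graph K_5
eigs = np.sort(np.linalg.eigvalsh(Ad))
# the largest eigenvalue is the trivial one, d = n - 1 = 4 (constant
# eigenfunction); every other eigenvalue is -1, so |gamma| <= (1 - 3/4) d
# for all eigenvalues orthogonal to constants
```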
It is clear that we need something both stronger and weaker than
expansion. (We cannot use the definition of expansion
above ``as is'' anyhow,
in that our graph \(\Gamma\) is not regular; its
\emph{average} degree is \(\mathscr{L}\).) We need a stronger bound than
what expansion provides: we want to show, not just that
\(|\langle \lambda, \textrm{Ad}_\Gamma \lambda\rangle|\) is
\(\leq (1-\epsilon) \mathscr{L}\), but that it is \(= o(\mathscr{L})\).
{ There is nothing unrealistically strong here --- in the strongest kind
of expander graph (\emph{Ramanujan graphs}), the absolute value of every
nontrivial eigenvalue is at most \(2\sqrt{d-1}\).}
At the same time, we cannot ask for
\(\langle f, \textrm{Ad}_\Gamma f\rangle/|f|_2^2 = o(\mathscr{L})\) to
hold for every \(f\) orthogonal to constant functions. Take
\(f = 1_{I_1}-1_{I_2}\), where \(I_1\), \(I_2\) are two disjoint
intervals of the same length \(\geq 100 H\), say. Then \(f\) is
orthogonal to constant functions, but \((\textrm{Ad}_\Gamma f)(n)\) is
equal to \(\omega(n) f(n)\), except possibly for those \(n\) that lie
within distance \(\leq H\) of the endpoints of \(I_1\) and \(I_2\). Hence,
\(\langle f,\textrm{Ad}_{\Gamma} f\rangle/|f|_2^2\) will be close to
\(\mathscr{L}\). It follows that \(\textrm{Ad}_\Gamma\) will have at
least one eigenfunction orthogonal to constant functions and with
eigenvalue close to \(\mathscr{L}\); in fact, it will have many.
{(This observation is related to the fact that the endpoint of a short
random walk on \(\Gamma\) \emph{cannot} be approximately
equidistributed, as it is in an expander graph: the edges of \(\Gamma\)
are too short for that. The most we could hope for is what we were
aiming for, namely, that the distribution of the endpoint converges to a
nice distribution, centered at the starting point.)}
We could aim to show that
\(\langle f,\textrm{Ad}_\Gamma f\rangle/|f|_2^2\) is small whenever
\(f\) is approximately orthogonal to approximately locally constant
functions, say. Since the main result in \cite{MR3488742} can be
interpreted as the statement that \(\lambda\) is approximately
orthogonal to such functions, we would then obtain what we wanted to
prove for \(f=\lambda\).
We will find it cleaner to proceed slightly differently. Recall our
weighted graph \(\Gamma'\), which was meant as a naïve model for
\(\Gamma\). It has an adjacency operator \(\textrm{Ad}_{\Gamma'}\) as
well, defined as before. (Since \(\Gamma'\) has weights \(1/p\) on its
edges,
\((\textrm{Ad}_{\Gamma'} f)(n) = \sum_{p\in \mathbf{P}} (f(n+p) + f(n-p))/p\).)
It is not hard to show, using the techniques in \cite{MR3488742}, that
\[\langle \lambda,\textrm{Ad}_{\Gamma'} \lambda\rangle = o(\mathscr{L}).\]
(In fact, what amounts to this statement has already been shown, in
\cite[Lemma 3.4--3.5]{MR3569059}; the main
ingredient is \cite[Thm.~1.3]{MR3435814}, which
applies and generalizes the main theorem in \cite{MR3488742}. Their bound
is a fair deal smaller than \(o(\mathscr{L})\).) We define the operator
\[A = \textrm{Ad}_{\Gamma}-\textrm{Ad}_{\Gamma'}.\] It will then be
enough to show that
\[\langle \lambda,A\lambda\rangle = o(\mathscr{L}),\] as then it will
obviously follow that
\[\langle \lambda,\textrm{Ad}_\Gamma \lambda\rangle =
\langle \lambda, A\lambda \rangle + \langle \lambda, \textrm{Ad}_{\Gamma'} \lambda \rangle = o(\mathscr{L}).\]
It would be natural to guess, and try to prove, that
\(\langle f, Af\rangle = o(\mathscr{L})\) for \emph{all}
\(f:V\to \mathbb{C}\) with \(|f|_2=1\), i.e., that all eigenvalues of
\(A\) are \(o(\mathscr{L})\).
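To make the operator \(A\) concrete, here is a toy computation (tiny \(N\) and a small ad-hoc prime set, chosen purely for illustration) building the matrix of \(A = \textrm{Ad}_{\Gamma}-\textrm{Ad}_{\Gamma'}\) on \(\mathbf{N}=(N,2N]\); note that \(A\) is symmetric, since \(p\mid n\) if and only if \(p\mid n\pm p\).

```python
import numpy as np

N, P = 30, [3, 5, 7]                       # toy parameters
verts = list(range(N + 1, 2 * N + 1))      # the interval (N, 2N]
idx = {n: i for i, n in enumerate(verts)}

A = np.zeros((len(verts), len(verts)))
for n in verts:
    for p in P:
        for m in (n - p, n + p):
            if m in idx:
                # Ad_Gamma contributes 1 exactly when p | n (equivalently,
                # p | m); Ad_Gamma' always contributes 1/p; A is the difference
                A[idx[n], idx[m]] = (1 if n % p == 0 else 0) - 1 / p

eigs = np.linalg.eigvalsh(A)               # real spectrum, since A is symmetric
L = sum(1 / p for p in P)                  # the quantity \mathscr{L} for this toy P
```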
We cannot hope for quite that much. The reason is simple. For any vertex
\(n\), \(\langle A 1_n, A 1_n\rangle\) equals the sum of the squares of
the weights of the edges \(\{n,n'\}\) containing \(n\). That sum equals
\[\sum_{\substack{p\in \mathbf{P}\\ p|n}} \left(1-\frac{1}{p}\right)^2 +
\sum_{\substack{p\in \mathbf{P}\\ p\nmid n}} \frac{1}{p^2},\] which in
turn is greater than \(1/4\) times the number \(\omega_{\mathbf{P}}(n)\)
of divisors of \(n\) in \(\mathbf{P}\). Thus, \(A\) has at least one
eigenvalue greater than \(\sqrt{\omega_{\mathbf{P}}(n)}/2\). Now,
typically, \(n\) has about \(\mathscr{L}\) divisors in \(\mathbf{P}\),
but some integers \(n\) have many more; for some rare \(n\), in fact,
\(\omega_{\mathbf{P}}(n)\) will be greater than \(4 \mathscr{L}^2\), and
so there have to be eigenvalues of \(A\) greater than \(\mathscr{L}\).
It is thus clear that we will have to exclude some integers, i.e., we
will define our vertex set to be some subset
\(\mathscr{X}\subset \mathbf{N}\) with small complement. We will set
ourselves the goal of proving that all of the eigenvalues of the
operator \(A|_\mathscr{X}\) defined by
\[(A|_\mathscr{X})(f) = (A(f|_\mathscr{X}))|_\mathscr{X}\] are
\(o(\mathscr{L})\). (Here \(f|_\mathscr{X}\) is just the function taking
the value \(f(n)\) for \(n\in \mathscr{X}\) and \(0\) for
\(n\not\in \mathscr{X}\).) Then, for \(f=\lambda\), or for any other
\(f\) with \(|f|_\infty\leq 1\),
\[\langle f, A f\rangle = \langle f, (A|_\mathscr{X}) f\rangle +
O\left(\sum_{n\in \mathbf{N}\setminus \mathscr{X}} 2\, (\omega_{\mathbf{P}}(n)+
\mathscr{L})\right),\]
where, if \(\mathbf{N}\setminus \mathscr{X}\) is small enough (as it
will be), it will not be hard to show that the sum within \(O(\cdot)\)
is quite small. We will then be done: obviously
\(\langle f, (A|_\mathscr{X}) f\rangle\) is bounded by the largest
eigenvalue of \(A|_\mathscr{X}\) times \(|f|_2^2\) (which is
\(\leq |f|_\infty^2\leq 1\)), and so we will indeed have
\(\langle f, A f\rangle = o(\mathscr{L})\).
We will in fact be able to prove something stronger: there is a subset
\(\mathscr{X}\subset \mathbf{N}\) with small complement such that all
eigenvalues of \(A|_\mathscr{X}\) are \[O(\sqrt{\mathscr{L}}).\] (This
bound is optimal up to a constant factor.) This is our main theorem.
We hence obtain that
\begin{equation}\label{eq:feast}\langle \lambda, A\lambda\rangle = O(\sqrt{\mathscr{L}}).\end{equation}
From \eqref{eq:feast},
we deduce the bound \begin{equation}\label{eq:olivol}
\frac{1}{\log x} \sum_{n\leq x} \frac{\lambda(n) \lambda(n+1)}{n} = O\left(\frac{1}{\sqrt{\log \log x}}\right)\end{equation}
we stated at the beginning.
More generally, we get
\(\langle f,A f\rangle=O(\sqrt{\mathscr{L}})\) for any \(f\) with
\(|f|_\infty\leq 1\), or for that matter for any \(f\) with
\(|f|_4\leq e^{100 \mathscr{L}}\) and \(|f|_2\leq 1\). We obtain plenty
of consequences besides \eqref{eq:olivol}.
\subsection{Powers, eigenvalues and closed walks}
Now that we know what we want to prove, let us come up with a strategy.
There is a completely standard route towards bounds on eigenvalues of
operators such as \(A\) (or \(A|_{\mathscr{X}}\)), relying on the fact
that the trace is invariant under conjugation. Because of this
invariance, the trace of a power \(A^{2 k}\) is the same whether \(A\)
is written taking a full family of orthogonal eigenvectors as a basis,
or just taking the characteristic functions \(1_n\) as our basis.
Looking at matters the first way, we see that
\[\textrm{Tr} (A|_\mathscr{X})^{2 k} = \sum_{i=1}^N \lambda_i^{2 k},\]
where \(\lambda_1,\lambda_2,\dotsc,\lambda_N\) are the eigenvalues
corresponding to the basis made out of eigenvectors. Looking at matters
the second way, we see that
\(\textrm{Tr} (A|_{\mathscr{X}})^{2 k} = N_{2 k}\), where \(N_{2 k}\) is
the sum over all closed walks of length \(2 k\) of the products of the
weights of the edges in each walk:
\[N_{2 k} = \sum_{n\in \mathscr{X}} \sum_{\substack{p_1,\dotsc,p_{2 k}\in \mathbf{P}\\ \sigma_1,\dotsc,\sigma_{2 k}\in \{-1,1\}\\
\forall 1\leq i\leq 2 k:
n+\sigma_1 p_1 + \dotsc + \sigma_i p_i\in \mathscr{X} \\
\sigma_1 p_1 + \dotsc + \sigma_{2 k} p_{2 k} = 0}}
\prod_{i=1}^{2 k} \left(1_{p_i|n+\sigma_1 p_1+\dotsc+\sigma_{i-1} p_{i-1}} - \frac{1}{p_i}\right)\]
where we adopt the convention \(1_\text{true}=1\), \(1_\text{false}=0\).
Since all eigenvalues are real, it is clear that
\[\lambda_i^{2 k} \leq N_{2 k}\] for every eigenvalue \(\lambda_i\).
Often, and also now, that inequality is not enough in itself for a good
bound on \(\lambda_i\). What is then often done is to show that every
eigenvalue must have multiplicity \(\geq M\), where \(M\) is some large
quantity. Then it follows that, for every eigenvalue \(\gamma\),
\[M \gamma^{2 k} \leq N_{2 k},\] and so
\(|\gamma|\leq (N_{2 k}/M)^{1/2 k}\).
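Both trace computations, and the resulting eigenvalue bound, can be checked numerically on any symmetric matrix; here is a minimal sketch with a random matrix standing in for \(A|_{\mathscr{X}}\) (and \(M=1\)).

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
B = (B + B.T) / 2                          # a symmetric stand-in for A|_X
k = 3

trace_power = np.trace(np.linalg.matrix_power(B, 2 * k))   # Tr B^{2k}
eigs = np.linalg.eigvalsh(B)
sum_of_powers = np.sum(eigs ** (2 * k))    # equals Tr B^{2k}: trace invariance
gamma = np.max(np.abs(eigs))
bound = trace_power ** (1 / (2 * k))       # |gamma| <= (Tr B^{2k})^{1/(2k)}
```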
We do not quite have high multiplicity here (why would we?) but we have
something that is almost as good: if there is one large eigenvalue, then
there are many mutually orthogonal functions \(g_i\) of norm \(1\) with
\(\langle g_i, A g_i\rangle\) large. Then we can bound
\(\textrm{Tr} A^{2 k}\) from below, using these functions \(g_i\) (and
some arbitrary functions orthogonal to them) as our basis, and, since
\(\textrm{Tr} A^{2 k}\) also equals \(N_{2 k}\), we can hope to obtain a
contradiction with an upper bound on \(N_{2 k}\).
For simplicity, let us start by sketching a proof that, if
\(|\langle f, A f\rangle|\) is large (\(\geq \rho \mathscr{L}\), say)
for some \(f\) with \(|f|_\infty\leq 1\), then there are many orthogonal
functions \(g_i\) of norm \(1\) and \(\langle g_i, A g_i\rangle\) large
(with ``large'' meaning \(\geq \rho \mathscr{L}/2\), say). This weaker statement
suffices for our original goal, since we may set \(f\) equal to the
Liouville function \(\lambda\).
Let \(I_1,I_2,\dotsc \subset \mathbb{N}\) be disjoint intervals of
length \(\geq 10 H/\rho\) (say) covering \(\mathbb{N}\). Edges starting
at a vertex \(v\) in \(I_i\) end at another vertex in \(I_i\), unless
\(v\) is close to an endpoint of \(I_i\). Hence,
\(\sum_i |\langle f|_{I_i},A \left(f|_{I_i}\right)\rangle|\) is not much
smaller than \(|\langle f,A f\rangle|\), and then it follows easily that
\(\langle f|_{I_i}, A \left(f|_{I_i}\right)\rangle/|f|_{I_i}|_2^2\) must
be large for many \(i\). Thus, setting \(g_i = f|_{I_i}/|f|_{I_i}|_2\) for
these \(i\), we obtain the desired statement.
To prove truly that \(A\) has no large eigenvalues, we should proceed as
we just did, but assuming only that \(|f|_2\leq 1\), not that
\(|f|_\infty \leq 1\). The basic idea is the same, except that (a)
pigeonholing is a little more delicate, (b) if \(f\) is almost entirely
concentrated in a small subset of \(\mathbf{N}\), then we can extract
only a few mutually orthogonal functions \(g_i\) from it. Recall that we
are anyhow restricting to a set \(\mathscr{X}\subset \mathbf{N}\). A
brief argument suffices to show that we can avoid the problem posed by
(b) simply by making \(\mathscr{X}\) a little smaller (essentially:
deleting the support of such \(g_i\), and then running through the
entire procedure again), while keeping its complement
\(\mathbf{N}\setminus \mathscr{X}\) very small.
In any event: we obtain that, if, for some \(X\subset \mathbf{N}\),
\(\textrm{Tr} (A|_X)^{2 k}\) is not too large (smaller than
\((\rho \mathscr{L}/2)^{2 k} N/H\) or so) then there is a subset
\(\mathscr{X}\subset X\) with \(X\setminus \mathscr{X}\) small such that
every eigenvalue of \(A|_\mathscr{X}\) is small
(\(\leq \rho \mathscr{L}\)). It thus remains to prove that
\(\textrm{Tr} (A|_X)^{2 k}\) is small for some \(X\subset \mathbf{N}\)
with small complement \(\mathbf{N}\setminus X\).
Recall that \(\textrm{Tr} (A|_X)^{2 k} = N_{2 k}\) (with \(N_{2 k}\)
defined as above, except with \(X\) instead of \(\mathscr{X}\)) and that
\(X\) should not include integers \(n\) with many more prime divisors in
\(\mathbf{P}\) than average. Our task is to bound \(N_{2 k}\).
\subsection{A brief look back}
We have come full circle, or rather we have arrived twice at the same
place. We started with a somewhat naïve approach that led us to random
walks. Then we took a step back and analyzed the situation in a way that
turned out to be cleaner; for instance, the problem involving
\(\textrm{err}_2^{1/k}\) vanished. As it happens, that cleaner approach
took us to random walks again. Surely this is a good sign.
It is also encouraging to see signs that other people have thought in
the same direction. The paper by
\href{https://arxiv.org/abs/1509.01545}{Matomäki-Radziwiłł-Tao} on sign
patterns of \(\lambda\) and \(\mu\) is based on the examination of a
graph equivalent to \(\Gamma\); what they show is, in essence, that
\(\Gamma\) is almost everywhere locally connected. Being connected may
be a much weaker property than expansion, but it is a step in the same
direction. As for expansion itself,
\href{https://arxiv.org/abs/1509.05422}{Tao} (§4) comments that ``some
sort of expander graph property'' may hold for that graph (equivalent to
\(\Gamma\)) ``or {[}for{]} some closely related graph''. He goes on to
say:
\begin{quote}
Unfortunately we were unable to establish such an expansion property, as
the edges in the graph {[}\ldots{}{]} do not seem to be either random
enough or structured enough for standard methods of establishing
expansion to work.
\end{quote}
And so we will set about to establish expansion by our methods (standard
or not).
In any event, our initial discussion of random walks is still pertinent. Recall the plan with which we concluded, namely, to divide walks into three kinds: walks with few non-repeated primes, walks imposing many independent divisibility conditions, and rare walks. This plan will shape our approach to bounding $N_{2 k}$ in the next section.
\section{Main part of the proof: counting closed walks}
Let us recapitulate. Let
\(\mathbf{N} = \{n\in \mathbb{Z}: N < n\leq 2 N\}\). We have defined a
linear operator \(A\) on functions \(f:\mathbf{N}\to \mathbb{C}\) as the
difference of the adjacency operators of two graphs \(\Gamma\),
\(\Gamma'\): \[A = \textrm{Ad}_{\Gamma} - \textrm{Ad}_{\Gamma'}.\] We
would like to show that there is a subset \(X\subset \mathbf{N}\) with
small complement \(\mathbf{N}\setminus X\) such that, for some \(k\)
that is not too small, the trace \[\textrm{Tr} (A|_X)^{2 k}\] is
substantially smaller than \(\mathscr{L}^{2 k} N\). Indeed, we will
prove that there is a constant \(C\) such that
\[\textrm{Tr} (A|_X)^{2 k} \leq (C \mathscr{L})^{k} N,\] where
\(\mathscr{L} = \sum_{p\in \mathbf{P}} 1/p\).
Incidentally, when we
say ``\(k\) not too small'', we mean ``\(k\) is larger than \(\log H\)
or so''; we already saw that we stand to lose a factor of \(H^{1/k}\)
when going from (a) a trace bound as above to (b) a bound on
eigenvalues, which is our ultimate goal. If \(k\gg \log H\), then
\(H^{1/k}\) is just a constant.
For comparison: if, as will be the case, we define \(X\) so that every
\(n\in X\) has at most \(K \mathscr{L}\) prime factors, the trivial
bound is \[\textrm{Tr} (A|_X)^{2 k} \leq ((K+1) \mathscr{L})^{2 k} N.\]
We also saw that \(\textrm{Tr} (A|_X)^{2 k}\) can be expressed as a sum
over closed walks, i.e., walks that end where they start:
\[\textrm{Tr} (A|_X)^{2 k} = \sum_{n\in X} \sum_{\substack{p_1,\dotsc,p_{2 k}\in \mathbf{P}\\ \sigma_1,\dotsc,\sigma_{2 k}\in \{-1,1\}\\
\forall 1\leq i\leq 2 k:\;
n+\sigma_1 p_1 + \dotsc + \sigma_i p_i\in X\\
\sigma_1 p_1 + \dotsc + \sigma_{2 k} p_{2 k}=0}}
\prod_{i=1}^{2 k} \left(1_{p_i|n+\sigma_1 p_1+\dotsc+\sigma_{i-1} p_{i-1}} - \frac{1}{p_i}\right).\]
Here the double sum just goes over closed walks of length \(2 k\) in the
weighted graph \(\Gamma - \Gamma'\), which has \(X\) as its set of
vertices and an edge between any two vertices \(n,n'\) whose difference
\(n'-n\) is a prime \(p\) in our set of primes \(\mathbf{P}\); the
weight of the edge is then \(1-1/p\) if \(p|n\), and \(-1/p\) otherwise.
The contribution of a walk equals the product of the weights of its
edges.
\begin{center}
\begin{tikzpicture}[thick, scale=0.85, every node/.style={transform shape}]
\tikzstyle{membre}= [rectangle]
\tikzstyle{operation}=[->,>=latex]
\tikzstyle{etiquette}=[midway,fill=black!20]
\node[membre] (n) at (0,3) {$n$};
\node[membre] (n1) at (2.5,4) {$n_1=n+\sigma_1 p_1$};
\node[membre] (n2) at (6,4.5) {$n_2 = n+\sigma_1 p_1 + \sigma_2 p_2$};
\node[membre] (aro) at (9.5,4) {$\dots$};
\node[membre] (m) at (12,3) {$n_k = n+\sigma_1 p_1 + \dots + \sigma_k p_k$};
\node[membre] (nk1) at (8.5,1) {$n_{k+1} = n+\sigma_1 p_1 + \dots + \sigma_k p_k + \sigma_{k+1} p_{k+1}$};
\node[membre] (nk2m1) at (2.75,1) {$\dots$};
\draw[operation] (n) to (n1);
\draw[operation] (n1) to (n2);
\draw[operation] (n2) to (aro);
\draw[operation] (aro) to(m);
\draw[operation] (m) to (nk1);
\draw[operation] (nk1) to (nk2m1);
\draw[operation] (nk2m1) to (n);
\end{tikzpicture}
\end{center}
\subsection{Cancellation}
It might be nicer to work with an expression with yet simpler weights.
First, though, let us see what gains we can get from cancellation. Let
\(p_1,\dotsc,p_{2 k}\in \mathbf{P}\) and
\(\sigma_1,\dotsc,\sigma_{2 k}\in \{-1,1\}\) be given, and consider the
total contribution of the paths they describe as \(n\) varies in \(X\).
Say there is a \(p_i\) that appears only once, i.e., \(p_j\ne p_i\) for
all \(j\ne i\). The weight of the edge from
\(n_{i-1} = n + \sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\) to
\(n_i = n + \sigma_1 p_1 + \dotsc + \sigma_i p_i\) is \(1-1/p_i\) if
\(p_i|n_{i-1}\) and \(-1/p_i\) otherwise. The weights of all the other edges
depend on the congruence classes \(n \;\textrm{mod}\; p_j\) for all
\(j\ne i\).
Suppose for a moment that \(X = \mathbf{N}\). Then, for \(\vec{p}\),
\(\vec{\sigma}\) fixed, and \(n\) in a given congruence class
\(n \;\textrm{mod}\; p_j\) for every \(j\ne i\) (that is, \(n\) in a
given congruence class \(a +P \mathbb{Z}\) for
\(P = \prod_{p\in \{p_1,\dotsc,p_{i-1},p_{i+1},\dotsc,p_{2 k}\}} p\),
by the Chinese remainder theorem), the probability that \(p_i\) divides
\(n_{i-1}\) is almost exactly \(1/p_i\): the number of \(n\) in
\(\mathbf{N}\) in our congruence class \(\textrm{mod}\; P\) is
\(N/P + O^*(1)\) (that is, no less than \(N/P-1\) and no more than
\(N/P+1\)), and, for such \(n\), again by the Chinese remainder theorem,
\(p_i|n_{i-1}\) if and only if \(n\) lies in a certain congruence class
modulo \(p_i\cdot P\); the number of \(n\) in \(\mathbf{N}\) in that
congruence class is \(N/(p_i P) + O^*(1)\).
Hence, among all \(n\) in \(\mathbf{N} \cap (a+ P \mathbb{Z})\), a
proportion almost exactly \(1/p_i\) have a weight \(1-1/p_i\) on the edge
from \(n_{i-1}\) to \(n_i\), and a proportion almost exactly \(1-1/p_i\)
have a weight \(-1/p_i\) there instead. Since all other weights are fixed,
we obtain practically total cancellation:
\[\frac{1}{p_i} \left(1 - \frac{1}{p_i}\right)
- \left(1 - \frac{1}{p_i}\right) \frac{1}{p_i} = 0.\] In other words, the
contribution of paths where at least one \(p_i\) appears only once is
practically nil. Hence, we can assume that, in our paths, every \(p_i\)
appears at least twice among \(p_1,p_2,\dotsc,p_{2 k}\).
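The cancellation at the heart of this step is elementary and can be seen in isolation: for a prime \(p\) and any shift \(c\), the edge-weight factor \(1_{p\mid n+c} - 1/p\) sums to exactly zero as \(n\) runs over a full residue system modulo \(p\) (a minimal sketch; the choices of \(p\) and \(c\) below are arbitrary).

```python
from fractions import Fraction

p, c = 7, 3                                # arbitrary illustrative choices
total = Fraction(0)
for n in range(p):                         # a full residue system mod p
    # weight 1 - 1/p if p | n + c, and -1/p otherwise
    weight = (Fraction(1) if (n + c) % p == 0 else Fraction(0)) - Fraction(1, p)
    total += weight
# exactly one residue has p | n + c, contributing 1 - 1/p; the other
# p - 1 residues contribute -1/p each, so total vanishes exactly
```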
Of course we do not actually want to set \(X=\mathbf{N}\), and in fact
we cannot, as we have already seen. If \(X\) is well-distributed in
arithmetic progressions, then we should still get cancellation, but it
will not be total -- there will be an error term. Much of the pain here
comes from the fact that we have to exclude numbers with too many prime
factors (meaning: \(> K \mathscr{L}\) prime factors). Suppose for
simplicity that \(X\) is the set of all numbers in \(\mathbf{N}\) with
\(\leq K \mathscr{L}\) prime factors. Recall that all vertices \(n\),
\(n_1 = n+\sigma_1 p_1\),
\(n_2 = n + \sigma_1 p_1 +\sigma_2 p_2\), \dots{} have to be in \(X\); in
particular, \(n_{i-1}\in X\). As a consequence, the likelihood that
\(p_i|n_{i-1}\) is slightly lower than \(1/p_i\): if \(n_{i-1} = p_i m\), then
\(m\) is constrained to have \(\leq K \mathscr{L}-1\) prime factors, and
it is slightly more difficult for \(m\) to satisfy that constraint than
it is for an \(n\in \mathbf{N}\) to have \(\leq K \mathscr{L}\) prime
factors. We do have cancellation, but it is not total, as it is for
\(X=\mathbf{N}\). The techniques involved in estimating how much
cancellation we do have are standard within analytic number theory.
Later, we will also exclude some other integers from \(X\), besides
those having \(> K \mathscr{L}\) prime factors. We will then need to
show that the effect on cancellation is minor. Doing so will require
some arguably new techniques; we will cross that bridge when we come to
it.
To cut a long story short, the effect of cancellation will be, not that
every \(p_i\) appears at least twice among \(p_1,p_2,\dotsc,p_{2 k}\),
but that the number of ``singletons'' (primes that appear only once) is
small. More precisely, a path with \(m\) singletons will have to pay a
penalty of a factor of \(\mathscr{L}^{-m/2}\).
\subsection{Shapes. Geometry of numbers and ranks}
Let us see what we have. Write \(\mathbf{k} = \{1,2,\dotsc,2k\}\). Let
\(\mathbf{l}\) range over all subsets of \(\mathbf{k}\). Here
\(\mathbf{l}\) will be our set of ``lit'' indices, corresponding to the
set of indices \(i\) such that
\(p_i|n+\sigma_1 p_1 +\dotsc + \sigma_{i-1} p_{i-1}\) in the above.
Every ``unlit'' index \(i\) gives us a weight of \(1/p_i\). We define an
equivalence relation \(\sim\) on \(\mathbf{k}\) by letting \(i\sim j\)
if and only if \(p_i=p_j\). Given an equivalence class \([i]\), we
define \(p_{[i]}\) to equal \(p_i\) for any (and hence every)
\(i\in [i]\). If an equivalence class \([i]\) is not completely unlit
(that is, if \([i]\cap \mathbf{l}\ne \emptyset\)), then it gives us a
weight of \(1/p_{[i]}\) (coming from
\(p_i|n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\) for some lit
index \(i\in [i]\)). It is also the case that, when two indices
\(i\sim j\) are both lit, they impose the condition
\[p_{[i]}|\sigma_{i+1} p_{i+1} + \dotsc + \sigma_j p_j,\] coming from
\(p_i|n + \sigma_1 p_1 + \dotsc + \sigma_i p_i\) and
\(p_i=p_j|n+\sigma_1 p_1 + \dotsc + \sigma_j p_j\). Let us write
\(\beta_i\) as shorthand for \(\sigma_1 p_1 + \dotsc + \sigma_i p_i\);
then our condition becomes \[p_{[i]}|\beta_j-\beta_i.\]
Given a walk \(n, n+\sigma_1 p_1, n+\sigma_1 p_1 +\sigma_2 p_2,\dotsc\),
we define its \emph{shape} to be \((\sim, \vec{\sigma})\), where
\(\sim\) is the equivalence relation it induces (as above). In fact, let
us start with shapes, meaning pairs \((\sim, \vec{\sigma})\), where
\(\sim\) is an equivalence relation on \(\{1,2,\dotsc,2 k\}\) and
\(\vec{\sigma}\in \{-1,1\}^{2 k}\). For any given shape, we will bound
the contribution of all walks of that shape. There will be some shapes
for which we will not be successful; we will later treat walks of those
shapes, and show that their contribution is small in some other way.
To rephrase what we said just before: given
\(\mathbf{l}\subset \mathbf{k}\), the contribution of a shape
\((\sim, \vec{\sigma})\) will be at most
\begin{equation}\label{eq:littlestar}\mathscr{L}^{-\frac{|\mathcal{S}(\sim)|}{2}}
\sum_{\substack{\{p_{[i]}\}_{[i]\in \Pi}, p_{[i]}\in \mathbf{P}\\i_1\sim i_2 \wedge (i_1,i_2\in \mathbf{l})\Rightarrow p_{i_1}|\beta_{i_2}-\beta_{i_1}}}
\prod_{i\not\in \mathbf{l}} \frac{1}{p_{[i]}}
\prod_{\substack{[i]\in \Pi\\ [i]\not\subset \mathbf{k}\setminus \mathbf{l}}} \frac{1}{p_{[i]}}
,\end{equation}
where \(\Pi\) is the set of
equivalence classes of \(\sim\) and \(\mathcal{S}(\sim)\) is the set of
singletons of \(\sim\), where a ``singleton'' is an equivalence class
with exactly one element. We write \(|S|\) for the number of elements of
a set \(S\).
What we have to do then is, in essence, bound the number of solutions
\((p_{[i]})_{[i]\in \Pi}\) to a system of divisibility conditions
\begin{equation}\label{eq:ddagger}
p_{[i]}|\sigma_{i+1} p_{[i+1]} + \dotsc + \sigma_j p_{[j]}.
\end{equation}
It would be convenient if the divisors \(p_{[i]}\) were all distinct
from the primes in the sums being divided. Then we could apply directly
the following Lemma, which is really grade-school-level geometry of
numbers.
\begin{lemma}
Let \(\mathbf{M}=(b_{i,j})_{1\leq i,j\leq m}\) be a
non-singular \(m\)-by-\(m\) matrix with integer entries. Assume
\(|b_{i,j}|\leq C\) for all \(1\leq i,j\leq m\). Let
\(\vec{c}\in \mathbb{Z}^m\), and let \(d_1,\dotsc,d_m\geq D\), where
\(D\geq 1\). Let \(N_1,\dotsc,N_m\) be real numbers \(\geq D\). Then the
number of solutions \(\vec{n}\in \mathbb{Z}^m\) to
\[d_i|(\mathbf{M} \vec{n} + \vec{c})_i\;\;\;\;\;\forall 1\leq i\leq m\]
with \(N_i\leq n_i\leq 2 N_i\) is at most
\[\left(\frac{2 C m}{D}\right)^m \prod_{i=1}^m N_i.\]
\end{lemma}
The trivial bound is clearly \(\prod_{i=1}^m (N_i+1)\).
\begin{proof}
Divide the box \(\prod_{i=1}^m [N_i,2 N_i]\) into
\(\leq \prod_{i=1}^m \left(\frac{N_i}{D}+1\right)\leq \left(\frac{2}{D}\right)^m \prod_{i=1}^m N_i\)
\(m\)-dimensional boxes of side \(D\). The image of such a box under the
map \(\vec{n}\mapsto \mathbf{M} \vec{n} + \vec{c}\) is contained in a
box whose edges are open or half-open intervals of length \(C m D\).
Since \(d_i\geq D\), that box contains at most \((C m)^m\) solutions
\(\vec{m}\) to the equations \(d_i|m_i\).
\end{proof}
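The Lemma can be checked by brute force on a tiny instance; here is a sketch with \(m=2\) (all parameters below are arbitrary choices satisfying the hypotheses, with \(C=1\) and \(D=10\) so that the bound \((2Cm/D)^m \prod_i N_i\) actually beats the trivial one).

```python
from itertools import product

M = [[1, 1], [1, 0]]            # non-singular (det = -1), entries bounded by C = 1
c = [5, -4]
d = [10, 11]                    # moduli d_i >= D = 10
Nbox = [15, 20]                 # N_i >= D
C, D, m = 1, 10, 2

count = 0
for n1, n2 in product(range(Nbox[0], 2 * Nbox[0] + 1),
                      range(Nbox[1], 2 * Nbox[1] + 1)):
    v = [M[i][0] * n1 + M[i][1] * n2 + c[i] for i in range(m)]
    if all(v[i] % d[i] == 0 for i in range(m)):
        count += 1

bound = (2 * C * m / D) ** m * Nbox[0] * Nbox[1]   # (4/10)^2 * 300 = 48
# count stays below the Lemma's bound, which here is well below the
# trivial bound of 16 * 21 = 336 lattice points in the box
```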
Of course we \emph{can} make our set of divisors and our set of
variables disjoint: we can choose to color some equivalence classes
\([i]\) blue and some other equivalence classes \([j]\) red, and
consider only those divisibility relations \eqref{eq:ddagger} in which
\([i]\) is colored blue. We fix \(p_{[i]}\) for \([i]\) blue, and in
fact for all non-red \([i]\), and treat the \(p_{[j]}\) with \([j]\) red as our
variables. We can then use the Lemma above to bound the number of values
of \((p_{[j]})_{[j]\; \text{red}}\) that satisfy our divisibility
relations.
To be precise: let \(x_{[j]}\) be a formal variable for each red
equivalence class \([j]\). Define
\begin{equation}\label{eq:cross}
v(i) = \sum_{j < i:\; [j]\;\text{red}} \sigma_j x_{[j]}.\end{equation}
Let \(r\) be the dimension of the space spanned by the differences
\(v(i_2)-v(i_1)\) with \([i_1]=[i_2]\) blue. Then we can select \(r\)
divisibility relations of the form \eqref{eq:ddagger} and \(r\) red
equivalence classes such that the matrix consisting of a row
\((\sum_{j\in [j]} \sigma_j)_{[j]\;\text{red}}\) for each selected
divisibility relation is non-singular. (We are just saying that a matrix of rank
\(r\) has a non-singular \(r\)-by-\(r\) submatrix.) We can then apply
our lemma.
After some book-keeping, we obtain a bound on our sum from \eqref{eq:littlestar}, namely,
\begin{equation}\label{eq:circ}\sum_{\substack{\{p_{[i]}\}_{[i]\in \Pi},\; p_{[i]}\in \mathbf{P}\\i_1\sim i_2 \wedge (i_1,i_2\in \mathbf{l})\Rightarrow p_{i_1}|\beta_{i_2}-\beta_{i_1}}}
\prod_{i\not\in \mathbf{l}} \frac{1}{p_{[i]}}
\prod_{\substack{[i]\in \Pi\\ [i]\not\subset \mathbf{k}\setminus \mathbf{l}}} \frac{1}{p_{[i]}} \leq \frac{1}{H_0^r}
\left(\frac{4 k r \log H}{\mathscr{L} \log 2}\right)^r \mathscr{L}^{|\Pi|}.\end{equation}
Here the important factor is \(1/H_0^r\). We see that we ``win'' if
\(r\) is at least somewhat large. The question is then how to choose
which equivalence classes to color red or blue so as to make the rank
\(r\) large.
\subsection{Ranks and a new graph. Sets with large boundary}
To address this question, let us define a new graph. First, though, let
us define the \emph{reduction} of a shape \((\sim, \sigma)\). A shape
clearly induces a word
\[w = x_{[1]}^{\sigma_1} x_{[2]}^{\sigma_2} \dotsc x_{[2 k]}^{\sigma_{2 k}}.\]
This word can be reduced (if it isn't already), and the resulting
reduced word induces a ``reduced shape'' \((\sim',\sigma')\). If all
representatives of an equivalence class of \((\sim,\sigma)\) disappear
during the reduction, we color that class yellow. It is the non-yellow
classes that we will color red or blue.
We define a graph \(\mathscr{G}_{(\sim,\sigma)}\) to be an undirected
graph having the non-yellow equivalence classes as its vertices, and an
edge between two vertices \(v_1\), \(v_2\) if there are \(i_1\in v_1\),
\(i_2\in v_2\) such that every index
\(j\in \{i_1+1,i_1+2,\dotsc,i_2-1\}\) disappears during the reduction
(in particular, every index lying in a yellow class disappears).
(We define matters in this way, rather than simply reduce the word and
join two vertices \(v_1\), \(v_2\) if there are \(i_1\in v_1\),
\(i_2\in v_2\) such that \(i_2 = i_1+1\), because reducing the word
could create more singletons. At any rate, the idea is that, if there
are only yellow indices between \(i_1\) and \(i_2\), then
\(v(i_1) = v(i_2)\), where \(v(i)\) is defined as in \eqref{eq:cross}. But we are
getting ahead of ourselves.)
Let us see two examples. Let \(k=3\), and let \(\sim\) have equivalence
classes \[\{\{1,4\},\{2,5\},\{3\},\{6\}\},\] with
\(\sigma\in \{-1,1\}^{2 k}\) arbitrary. Then the graph
\(\mathscr{G}_{(\sim,\sigma)}\) is
\begin{center}
\begin{tikzpicture}[thick,scale=0.75]
\tikzstyle{every node}=[circle, draw, fill=white!50,inner sep=0pt, minimum width=4pt]
\draw {
(0:0) node {$\{1,4\}$} -- (0:6) node {$\{2,5\}$}
(0:6) -- (-40:4) node {$\{3\}$}
(0:0) -- (-40:4)
(0:6) -- (-10:10) node {$\{6\}$}
};
\end{tikzpicture}
\end{center}
As for our second example, let \(k = 5\),
\(\vec{\sigma}=(1,-1,1,-1,1,-1,1,1,1,-1)\), and let \(\sim\) have
equivalence classes \(\{\{1,8\},\{2,9\},\{3\},\{4,7\},\{5,6,10\}\}\).
Then the induced word is
\[w = x_{[1]} x_{[2]}^{-1} x_{[3]} x_{[4]}^{-1} x_{[5]} x_{[5]}^{-1} x_{[4]} x_{[1]} x_{[2]} x_{[5]}^{-1},\]
which has reduction
\[w' = x_{[1]} x_{[2]}^{-1} x_{[3]} x_{[1]} x_{[2]} x_{[5]}^{-1}.\]
Hence, the equivalence class \(\{4,7\}\) is colored yellow, and the
graph \(\mathscr{G}_{(\sim,\vec{\sigma})}\) is
\begin{center}
\begin{tikzpicture}[thick,scale=0.75]
\tikzstyle{every node}=[circle, draw, fill=white!50, inner sep=0pt, minimum width=4pt]
\draw {
(0:0) node {$\{1,8\}$} -- (0:6) node {$\{2,9\}$}
(0:6) -- (-40:4) node {$\{3\}$}
(0:0) -- (-40:4)
(0:6) -- (-10:10) node {$\{5,6,10\}$}
(0:0) -- (-10:10)
(-40:4) -- (-10:10)
};
\end{tikzpicture}
\end{center}
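The reduction step is easy to make concrete. The following minimal sketch (in Python, with invented names; it is not taken from the paper) freely reduces the word induced by a shape with a stack and reports which equivalence classes become yellow, reproducing the second example above.

```python
def reduce_shape(classes, sigma):
    """Freely reduce the word induced by a shape.
    classes: dict index -> equivalence class (a frozenset of indices);
    sigma:   dict index -> +1 or -1.
    Returns the reduced word (as (class, sign) pairs) and the set of
    yellow classes, i.e. classes all of whose letters disappear."""
    stack = []
    for i in sorted(classes):
        letter, sign = classes[i], sigma[i]
        if stack and stack[-1] == (letter, -sign):
            stack.pop()                  # adjacent x, x^{-1} cancel
        else:
            stack.append((letter, sign))
    surviving = {letter for letter, _ in stack}
    yellow = {c for c in set(classes.values()) if c not in surviving}
    return stack, yellow

# the second example in the text: k = 5
partition = [{1, 8}, {2, 9}, {3}, {4, 7}, {5, 6, 10}]
classes = {i: frozenset(c) for c in partition for i in c}
sigma = dict(zip(range(1, 11), (1, -1, 1, -1, 1, -1, 1, 1, 1, -1)))
reduced, yellow = reduce_shape(classes, sigma)
```

A single left-to-right stack pass computes the full free reduction, so `reduced` is the word \(w'\) above and `yellow` consists precisely of the class \(\{4,7\}\).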
It is clear from the definition that
\(\mathscr{G}_{(\sim,\vec{\sigma})}\) is always connected: if
\(i_1<i_2<i_3<\dotsc\) are the indices in non-yellow equivalence
classes, then there is an edge from \([i_1]\) to \([i_2]\), an edge from
\([i_2]\) to \([i_3]\), etc.
Given a subset \(V'\) of the set of vertices \(V\) of a graph
\(\mathscr{G}\), write \(\mathscr{G}|_{V'}\) for the restriction of
\(\mathscr{G}\) to \(V'\), i.e., the subgraph of \(\mathscr{G}\) having
\(V'\) as its set of vertices and set of all edges in \(\mathscr{G}\)
between elements of \(V'\) as its set of edges. What happens if we
choose our coloring so that the restriction
\(\mathscr{G}|_{\textbf{blue}}\) to the set of blue vertices (named
\(\textbf{blue}\)) is connected?
\begin{lemma}
Let \((\sim,\sigma)\) be a shape, and let
\(\mathscr{G}=\mathscr{G}_{(\sim,\vec{\sigma})}\) and \(v(i)\) be as
above. Color some non-yellow vertices red and some other non-yellow
vertices blue, in such a way that, for \(\textbf{blue}\) the set of blue
vertices, the restriction \(\mathscr{G}|_\textbf{blue}\) is connected.
Then the space \(V\) spanned by the vectors
\[v(i_2)-v(i_1)\;\;\;\;\;\;\;\;\text{with}\;\;\; [i_1]=[i_2]\; \textbf{blue}\]
equals the space \(W\) spanned by all vectors
\[v(i_2)-v(i_1)\;\;\;\;\;\;\;\;\text{with}\;\;\; [i_1], [i_2]\; \text{both \textbf{blue}}.\]
\end{lemma}
The proof is an exercise, and its idea may be best made clear by an
example.
\begin{proof}[Sketch of proof (or rather, a worked example)]
Say we have three blue equivalence classes, corresponding to
letters \(x\), \(y\), \(z\) in the induced word, and let them be
disposed as follows:
\[\underbrace{\;\;\;\;\;\;\;\;\;\;\;\;\;\;} \textcolor{blue}{x} \underbrace{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;} \textcolor{blue}{z} \underbrace{} \textcolor{blue}{y x} \underbrace{\;\;\;\;\;\;\;\;\;} \textcolor{blue}{z y} \underbrace{}\]
Call the indices of the six letters we have written
\(i_1,i_2,\dotsc,i_6\). Then the space \(V\) in the Lemma is the space
spanned by \[v_{i_4}-v_{i_1},\; v_{i_6}-v_{i_3},\; v_{i_5}-v_{i_2},\]
whereas the space \(W\) in the Lemma is the space spanned by all vectors
\[v_{i_r} - v_{i_{s}},\;\;\;\;\;\;\;\;\;\; 1\leq r,s\leq 6.\] It is
clear that \(V\subset W\), but why is \(W\subset V\)? Why, say, is
\(v_{i_2}-v_{i_1}\) in \(V\)? Well, let us follow a path in
\(\mathscr{G}|_{\textbf{blue}}\) going from \(x\) (the first blue
letter, i.e., the letter at position \(i_1\)) to \(z\) (the second blue
letter, i.e., the letter at position \(i_2\)): there is an edge from
(the equivalence class labeled) \(x\) to (the equivalence class
label-led) \(y\), and an edge from \(y\) to \(z\). So:
\begin{itemize}
\tightlist
\item
  \(v_{i_4}-v_{i_1}\) is in \(V\), because \(i_1\) and \(i_4\) are both
  in the equivalence class \(x\);
\item
  \(v_{i_4}\) equals \(v_{i_3}\), because \(i_4\) and \(i_3\) are
  adjacent (meaning there cannot be red indices between them; note that
  there is an edge from \(x\) to \(y\) precisely because \(i_4\) and
  \(i_3\) are adjacent);
\item
  \(v_{i_6}-v_{i_3}\) is in \(V\), because \(i_6\) and \(i_3\) are both
  in \(y\);
\item
  \(v_{i_5}\) equals \(v_{i_6}\), because \(i_6\) and \(i_5\) are
  adjacent, as is again reflected in the fact that there is an edge from
  \(y\) to \(z\);
\item
  \(v_{i_2}-v_{i_6}\) is in \(V\), because \(i_2\) and \(i_6\) are both
  in \(z\).
\end{itemize}
Hence, \(v_{i_2}-v_{i_1}\in V\), as we have shown by following a path in
\(\mathscr{G}|_{\textbf{blue}}\) from \(x\) to \(z\). The same argument
works in general for any two indices in blue equivalence classes.
\end{proof}
Now we have to bound the rank of \(W\) from below. We first reduce our
word; yellow letters disappear. The most optimistic expectation would be
that the rank of \(W\) equal the number of gaps between blue ``chunks''
indicated by braces in our example from before:
\[\underbrace{\;\;\;\;\;\;\;\;\;\;\;\;\;\;} \textcolor{blue}{x} \underbrace{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;} \textcolor{blue}{z} \underbrace{} \textcolor{blue}{y x} \underbrace{\;\;\;\;\;\;\;\;\;} \textcolor{blue}{z y} \underbrace{}\]
(Chunks may have merged during reduction.) The number of gaps here is
\(4\), considered cyclically (so that the first and last gap become
one). The gaps correspond to \(v(i_2)-v(i_1)\), \(v(i_3)-v(i_2)\),
\(v(i_5)-v(i_4)\) and (lastly, or firstly) \(v(i_1)-v(i_6)\).
Imagine for a moment that each red letter appeared only once. (From now
on, all letters that are neither blue nor yellow will be colored red. In
our reduced word, all letters that are not blue are red.) Then the
optimistic expectation would hold: each of
\(v(i_2)-v(i_1), v(i_5)-v(i_4),\dotsc,v(i_1)-v(i_6)\) would be a
non-trivial formal linear combination of a non-zero number of symbols
\(x_{[j]}\), each appearing only once altogether, and so those
combinations must all be linearly independent.
Of course, we cannot ensure that each red letter will appear only once,
and in fact we are usually treating cases where most of them appear at
least twice (i.e., singletons are rare). Let us see what we can do with
a weaker assumption. What if we assume that each red letter appears at
most \(\kappa\) times?
Let us see an easy linear-algebra lemma.
\begin{lemma}
Let \(A\) be a matrix with \(n\)
rows, satisfying:
\begin{itemize}
\tightlist
\item
every row has at least one non-zero entry,
\item
no column has more than \(\kappa\) non-zero entries.
\end{itemize}
Then the rank of \(A\) is \(\geq n/\kappa\).
\end{lemma}
\begin{proof}
We will construct a finite list \(S\) of columns, starting
with the empty list. At each step, if there is a row \(i\) such that the
\(i\)th entry of every column in \(S\) is \(0\), include at the end of
\(S\) a column whose \(i\)th entry is non-zero. Stop if there is no such
row.
When we stop, we must have \(\kappa\cdot |S|\geq n\), as otherwise there
would still be a row in which no element of \(S\) would have a non-zero
entry. Since, for each column in \(S\), there is a row in which that
column has a non-zero entry and no previous column in \(S\) does, we see
that the columns in \(S\) are linearly independent. Hence
\(\textrm{rank}(A)\geq |S|\geq n/\kappa\).
\end{proof}
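The proof of the lemma is effectively an algorithm, and the certificate of independence is explicit: each chosen column is non-zero in a row where all previously chosen columns vanish. Here is a minimal sketch (names invented; the matrix is a made-up example with at most \(\kappa = 2\) non-zero entries per column).

```python
def greedy_independent_columns(A):
    """Greedy selection from the lemma's proof. A is a list of rows,
    every row assumed to have a non-zero entry. While some row is zero
    in all chosen columns, add a column that is non-zero in that row.
    Returns (column, witness row) pairs."""
    n, m = len(A), len(A[0])
    chosen, covered = [], [False] * n
    while not all(covered):
        i = covered.index(False)            # an "uncovered" row
        j = next(c for c in range(m) if A[i][c] != 0)
        chosen.append((j, i))
        for r in range(n):                  # rows hit by column j
            if A[r][j] != 0:
                covered[r] = True
    return chosen

A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]                             # each column: <= 2 non-zeros
chosen = greedy_independent_columns(A)
```

Here \(n = 4\) and \(\kappa = 2\), so the lemma promises rank \(\geq 2\); the greedy pass in fact finds three independent columns, each with its private witness row.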
Now we see what to do: let \(A\) be a matrix with columns corresponding
to red equivalence classes, and rows corresponding to gaps between blue
chunks, with the entry \(a_{i [j]}\) being the number of times the red
letter corresponding to column \([j]\) appears in the gap corresponding
to row \(i\) (counting appearances as \(x_{[j]}^{-1}\) as negative
appearances). Each column has no more than \(\kappa\) non-zero entries
because, by assumption, each red equivalence class contains at most
\(\kappa\) elements. We still need to show that no or few rows are full
of zeros.
A row is full of zeros iff every letter \(x\) in the corresponding gap
appears an equal number of times as \(x\) and as \(x^{-1}\) (i.e., every
equivalence class has as many representatives \(i\) with \(\sigma_i=1\)
as with \(\sigma_i=-1\) within the gap). Let us call such gaps
\emph{invalid}.
There is a condition that limits how many such gaps there can be while
at the same time ensuring that an equivalence class contains at most
\(\kappa\) elements (or rather, something just as good). Let
\((\sim',\sigma')\) (of length \(2 k'\)) be the reduction of the shape
\((\sim,\sigma)\). Let us say that we see a \emph{revenant} when there
are indices \(i\), \(i'\) such that (a) \(i\sim i'\), and (b) there is a
\(j\not\sim i\) with \(i<j<i'\). (In other words, \(x_{[i]}\) has come
back after going away.) We say that there are \(\kappa\) \emph{disjoint
revenants} if there are
\[1\leq i_1<\jmath_1<i_1'\leq i_2<\jmath_2<i_2'\leq \dotsc \leq
i_\kappa < \jmath_\kappa < i_\kappa'\leq 2 k'\] with \(i_j\sim' i_j'\)
and \(i_j\not\sim' \jmath_j\) for \(1\leq j\leq \kappa\). Thus, for
example, in \[xxz\dotsc x^{-1} y w \dotsc y v \dotsc y,\] we see three
disjoint revenants (with \(x\) at positions \(i_1\) and \(i_1'\), and
\(y\) at positions \(i_2\), \(i_2' = i_3\) and \(i_3'\)).
Let us impose the condition that there cannot be more than \(\kappa\)
disjoint revenants in our walk. (We will be able to assume this
condition by rigging the definition of \(X\) later.) Then it follows
immediately that the appearances of a letter form at most \(\kappa\)
contiguous blocks in the reduced word. Hence, a red letter cannot appear
in more than \(\kappa\) gaps. We also see that there cannot be more than
\(\kappa\) invalid gaps: a gap is a non-empty reduced subword, and, if a
letter \(x\) appears in a reduced, non-trivial word as many times as
\(x\) and as \(x^{-1}\), either the pattern \(x \dotsc x^{-1}\) or the
pattern \(x^{-1} \dotsc x\) appears in the word, with ``\(\dotsc\)''
standing for a non-empty subword consisting of letters that are not
\(x\). Thus, \(> \kappa\) invalid gaps would give us \(>\kappa\)
revenants, all disjoint.
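The claim that a balanced letter in a reduced word must ``come back'' can be checked mechanically: if the signs of the occurrences of \(x\) are not all equal, some two consecutive occurrences have opposite signs, and reducedness forbids them from being adjacent. A small sketch (names invented):

```python
def revenant_witness(word, x):
    """word: a reduced word, as a list of (letter, sign) pairs.
    If the letter x occurs with both signs, return positions (p, q) of
    two consecutive occurrences of x with opposite signs; reducedness
    forces the middle word[p+1:q] to be non-empty, and it is x-free
    by construction. Returns None if all occurrences share one sign."""
    occurrences = [(pos, s) for pos, (l, s) in enumerate(word) if l == x]
    for (p, sp), (q, sq) in zip(occurrences, occurrences[1:]):
        if sp != sq:
            return p, q
    return None

# a balanced reduced word: x y x^{-1} y^{-1}
word = [("x", 1), ("y", 1), ("x", -1), ("y", -1)]
witness = revenant_witness(word, "x")
```

On this word the witness for \(x\) is the pattern \(x\,y\,x^{-1}\), exactly the configuration \(x \dotsc x^{-1}\) with a non-empty middle described above.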
It then follows, by the easy linear-algebra lemma above, that
\[\textrm{dim}(W)\geq \frac{s-\kappa}{\kappa} = \frac{s}{\kappa}-1,\]
where \(s\) is the number of gaps.
The question is then: how do you choose which letters to color blue and
which to color red so that the number \(s\) of gaps is large?
\subsection{Spanning trees and boundaries}
Let us first assume that there are no yellow letters in the non-reduced
word, as that is a somewhat simpler case. Then the number of gaps equals
the number of red letters \(x_i\) such that \(x_{i-1}\) is blue (or
\(x_{2 k}\) is blue, if \(i=1\)). That number is bounded from below by
\[\frac{1}{2} |\partial \textbf{blue}|,\] where
\(\partial \textbf{blue}\) is the set of all red equivalence classes
\([i]\) such that there is a blue equivalence class \([j]\) connected to
\([i]\) by an edge in \(\mathscr{G} = \mathscr{G}_{\sim,\sigma}\)
(meaning that \([j]\) contains an index \(j\) and \([i]\) contains an
index \(i\) such that \(i\) and \(j\) are separated only by yellow
letters; since there are no yellow letters, that means that \(i=j+1\) or
\(i=j-1\) (or one of \(i\), \(j\) is \(1\) and the other one is
\(2 k\))).
The question is then how to choose the set \(\textbf{blue}\) of
equivalence classes to be colored blue in such a way that
\(\partial \textbf{blue}\) is large. Here \(\textbf{blue}\) can be any
set of vertices such that \(\mathscr{G}|_\textbf{blue}\) is connected.
So, in general: given a connected undirected graph \(\mathscr{G}\), how
do we choose a set \(\textbf{blue}\) of vertices so that
\(\mathscr{G}|_\textbf{blue}\) is connected and
\(\partial \textbf{blue}\) is large?
A \emph{spanning tree} of a graph \(\mathscr{G}=(V,E)\) is a subgraph
\((V,E')\) (where \(E'\subset E\)) that is a tree (i.e., has no cycles)
and has the same set of vertices \(V\) as \(\mathscr{G}\). Given a
spanning tree of \(\mathscr{G}\), we can define \(\textbf{blue}\) to be
the set of internal nodes of the spanning tree, that is, the set of
vertices that are not leaves. Then \(\textbf{blue}\) is connected, and
\(\partial \textbf{blue}\) equals the set of leaves. The question is
then: is there a spanning tree of \(\mathscr{G}\) with many leaves?
Here there is a result from graph theory that we can just buy off the
shelf.
\begin{prop}[Kleitman-West, 1991; see also Storer, 1981,
Payan-Tchuente-Xuong, 1984, and Griggs-Kleitman-Shastri, 1989] Let
\(\mathscr{G}\) be a connected graph with \(n\) vertices, all of degree
\(\geq 3\). Then \(\mathscr{G}\) has a spanning tree with \(\geq n/4+2\)
leaves.
\end{prop}
Using this Proposition, we prove:
\begin{cor}
Let \(\mathscr{G}\) be a connected graph such that
\(\geq n\) of its vertices have degree \(\geq 3\). Then \(\mathscr{G}\)
has a spanning tree with \(\geq n/4+2\) leaves.
\end{cor}
We omit the proof of the corollary, as it consists just of less than a
page of casework and standard tricks. Alternatively, we can prove it
from scratch in about a page by modifying Kleitman and West's proof.
(It is clear that some condition on the degrees, as here, is necessary;
a spanning tree of a cycle graph (every one of whose vertices has
degree \(2\)) is a path, which has only two leaves.)
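The blue/red mechanics are easy to run. The sketch below (invented names, not from the paper) builds \emph{some} spanning tree -- a DFS tree, which need not have as many leaves as the proposition above guarantees for a good tree -- colors its internal nodes blue, and checks that the blue set induces a connected subgraph and that every leaf then lies in \(\partial \textbf{blue}\). The example graph is the \(3\)-dimensional cube, where every vertex has degree \(3\).

```python
from collections import deque

def dfs_spanning_tree(adj, root=0):
    """Edge set of a DFS spanning tree of a connected graph,
    given as an adjacency dict."""
    parent, stack, seen = {}, [root], {root}
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                parent[w] = v
                stack.append(w)
    return {(min(v, w), max(v, w)) for w, v in parent.items()}

def connected(vertices, adj):
    """Is the subgraph induced on `vertices` connected?"""
    vertices = set(vertices)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w in vertices and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == vertices

# cube graph: vertices 0..7, edges between bitstrings differing in 1 bit
adj = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}
tree = dfs_spanning_tree(adj)
tree_adj = {v: [] for v in adj}
for v, w in tree:
    tree_adj[v].append(w)
    tree_adj[w].append(v)
blue = {v for v in adj if len(tree_adj[v]) >= 2}   # internal nodes
leaves = set(adj) - blue
```

Internal nodes of a tree always form a subtree, which is why the blue set is automatically connected; the interesting (and harder) part, existence of a tree with \emph{many} leaves, is exactly what the proposition supplies.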
Before we go on to see what we do with shapes \((\sim,\vec{\sigma})\)
such that \(\mathscr{G}_{(\sim,\vec{\sigma})}\) does \emph{not} have
many vertices with degree \(\geq 3\), let us remove the assumption that
there are no yellow letters.
So, let us go back to counting gaps between blue chunks. For any two
distinct non-yellow equivalence classes \([i]\), \([j]\), let us draw an
arrow from \([i]\) to \([j]\) if there are representatives \(i\in [i]\),
\(j\in [j]\) that survive in the reduced word, and such as that all
letters between \(i\) and \(j\) disappear during reduction. (If \(j<i\),
then ``between'' is to be understood cyclically, i.e., the letters
between \(i\) and \(j\) are those coming after \(i\) or before \(j\).)
We draw each arrow only once, that is, we do not draw multiple arrows.
For instance, in our example
\(w = x_{[1]} x_{[2]}^{-1} x_{[3]} x_{[4]}^{-1} x_{[5]} x_{[5]}^{-1} x_{[4]} x_{[1]} x_{[2]} x_{[5]}^{-1}\)
from before,
\begin{center}
\begin{tikzpicture}[thick]
\tikzstyle{vertex}=[circle, draw, fill=white!50, inner sep=0pt, minimum width=4pt]
\node[vertex] (1) at (0:0) {$\{1,8\}$};
\node[vertex] (2) at (0:6) {$\{2,9\}$};
\node[vertex] (3) at (-40:4) {$\{3\}$};
\node[vertex] (5) at (-10:10) {$\{5,6,10\}$};
\draw {
(1) -- (2)
(2) -- (3)
(1) -- (3)
(2) -- (5)
(1) -- (5)
(3) -- (5)
};
\path (1) edge [->, >=latex, bend left=15, color=olive] (2);
\path (2) edge [->, >=latex, bend right=15, color=olive] (3);
\path (3) edge [->, >=latex, bend left=20, color=olive] (1);
\path (2) edge [->, >=latex, bend left=15, color=olive] (5);
\path (5) edge [->, >=latex, bend left=10, color=olive] (1);
\end{tikzpicture}
\end{center}
It is obvious that every vertex has an in-degree of at least \(1\).
For \(S\) a set of vertices, define the \emph{out-boundary}
\(\vec{\partial} S\) to be the set of all vertices \(v\) not in \(S\)
such that there is an arrow going from some element of \(S\) to \(v\).
Then, whether or not there are yellow letters, the number of red gaps
\(\underbrace{}\) in the reduced word is at least
\(|\vec{\partial} \textbf{blue}|\).
\begin{lemma} Let \(G\) be a directed graph such that every vertex has
positive in-degree. Let \(S\) be a subset of the set of vertices of \(G\).
Then there is a subset \(S'\subset S\) with \(|S'|\geq |S|/3\) such
that, for every \(v\in S'\), there is an arrow from some vertex not in
\(S'\) to \(v\).
\end{lemma}
\begin{proof}
The first step is to remove arrows until the in-degree of
every vertex is exactly 1. Then \(G\) is a union of disjoint cycles. If
all vertices in a cycle are contained in \(S\), we number its vertices
in order, starting at an arbitrary vertex, and include in \(S'\) the
second, fourth, etc. elements. If no vertices in a cycle are in \(S\),
we ignore that cycle. If some but not all vertices in a cycle are in
\(S\), the vertices that are in \(S\) fall into disjoint subsets of the
form \(\{v_1,\dotsc, v_r\}\), where there is an arrow from some \(v\) not
in \(S\) to \(v_1\), and an arrow from \(v_i\) to \(v_{i+1}\) for
\(1\leq i\leq r-1\); then we include \(v_1,v_3,\dotsc\) in \(S'\).
\end{proof}
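The case analysis in this proof can be run directly, at least in the situation the sketch describes: the pruned graph is a disjoint union of directed cycles, each of length at least \(2\) (arrows join distinct classes, so self-loops do not occur). The code below (invented names) is an assumption-laden illustration, not the paper's construction.

```python
def select_third(cycles, S):
    """cycles: disjoint directed cycles, each a list of vertices with an
    arrow cyc[t] -> cyc[(t+1) % len(cyc)]; all lengths >= 2.
    Returns S' within S, of size >= |S|/3, such that every v in S' has
    an in-arrow from a vertex outside S'."""
    S_prime = set()
    for cyc in cycles:
        r = len(cyc)
        if all(v in S for v in cyc):
            # whole cycle inside S: take the 2nd, 4th, ... vertices,
            # whose predecessors (1st, 3rd, ...) stay outside S'
            S_prime.update(cyc[t] for t in range(1, r, 2))
            continue
        # otherwise the S-vertices fall into arcs whose first vertex
        # has its predecessor outside S; take every other arc vertex
        for t in range(r):
            if cyc[t] in S and cyc[t - 1] not in S:
                u, take = t, True
                while cyc[u % r] in S:
                    if take:
                        S_prime.add(cyc[u % r])
                    take = not take
                    u += 1
    return S_prime

cycles = [[0, 1, 2], [3, 4, 5, 6]]
S = {0, 1, 2, 4, 5}
S_prime = select_third(cycles, S)
pred = {cyc[t]: cyc[t - 1] for cyc in cycles for t in range(len(cyc))}
```

A full cycle of length \(r\) contributes \(\lfloor r/2\rfloor \geq r/3\) vertices (here \(r\geq 2\) matters), and an arc of length \(m\) contributes \(\lceil m/2\rceil \geq m/3\), which is where the factor \(1/3\) comes from.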
We let \(S\) be the set of leaves of our spanning tree, and define
\(\textbf{red}\) to be the set \(S'\) given by the Lemma;
\(\textbf{blue}\) is the set of all other non-yellow equivalence
classes. Then the number of gaps is \(\geq (n/4+2)/3\), where \(n\) is
the number of vertices of degree \(\geq 3\) in
\(\mathscr{G}_{(\sim,\vec{\sigma})}\). Hence, by our work up to now,
\[\textrm{dim}(W) \geq \frac{1}{\kappa} \frac{\frac{n}{4}+2}{3} -1 \geq \frac{n}{12 \kappa} - 1,\]
and so, if \(n\) is even modestly large, we win by a large margin: we
obtain a factor nearly as small as \(1/H_0^{\frac{n}{12 \kappa}}\) in
\eqref{eq:circ}.
\emph{Note.} Had we been a little more careful, we would have obtained a
bound of \(\dim(W)\geq \frac{n}{50 \log \kappa} - \frac{\kappa}{2}\) or
so. This improvement -- which involves drawing, and considering,
multiple arrows -- would affect mainly the allowable range of \(H_0\) in
the end. We will remind ourselves of the matter later.
\subsection{Shapes with low freedom. Writer-reader arguments.}
The question now is what to do with walks of shapes
\((\sim,\vec{\sigma})\) for which \(\mathscr{G}_{(\sim,\vec{\sigma})}\)
does not have many vertices of degree \(\geq 3\).
Let us first give an argument that is sufficient when the word given by
our walk is already reduced; we will later supplement it with an
additional argument that takes care of the reduction. Let
\(\mathbf{n}\subset \{1,2,\dotsc,2 k\}\) be the set of indices that
survive the reduction. It is enough to define an equivalence relation
\(\sim\) on \(\mathbf{n}\) to define the graph
\(\mathscr{G}_\sim = \mathscr{G}_{\sim,\sigma}\) we have been
considering. (We do not need to specify \(\vec{\sigma}\), as its only
role was to help determine which letters are yellow.) Assume that
\(\mathscr{G}_\sim\) has \(\leq \nu\) vertices of degree \(\geq 3\). Let
\(\kappa\) be, as usual, an upper bound on the number of disjoint
revenants; in particular, for any equivalence class \([i]\), there are
at most \(\kappa\) elements \(i\in [i]\) such that the following element
of \(\mathbf{n}\) is not in \([i]\). We claim that the number of
equivalence classes \(\sim\) on \(\mathbf{n}\) satisfying these two
constraints (given by \(\nu\) and \(\kappa\)) is
\[\leq 5^{|\mathbf{n}|} (2 k)^{(\kappa-1) \nu + 2}.\]
We will prove this bound by showing that we can determine an equivalence
class of this kind by describing it by a string \(\vec{s}\) on \(5\)
letters with indices in \(\mathbf{n}\), together with some additional
information at each of at most \((\kappa-1) \nu+2\) indices. The idea is
that, if an index lies in an equivalence class that is a vertex of
degree \(1\) or \(2\) in \(\mathscr{G}_\sim\), then there are very few
possibilities for the equivalence class in which the next index may
lie, namely, \(1\) or \(2\) possibilities.
We let the index \(i\) go through \(\mathbf{n}\) from left to right. If
\([i]\) is in an equivalence class we have not seen before, we let
\(s_i = *\). Assume otherwise. Let \(i_-\) be the element of
\(\mathbf{n}\) immediately preceding \(i\). If \([i]=[i_-]\), let
\(s_i=0\). If \([i_-]\) is a vertex of degree \(\leq 2\) and \(i\) is in
an equivalence class that we have already seen next to \([i_-]\) (that
is, just before or just after \([i_-]\) in \(\mathbf{n}\)), then we let
\(s_i=1\) or \(s_i=2\) depending on which one of those \(\leq 2\)
equivalence classes we mean (the first one or the second one to appear).
In all remaining cases, we let \(s_i = \cdot\), and specify our
equivalence class explicitly, by giving an index \(j<i\) in the same
equivalence class.
Let us give an example. Let \(k= 8\),
\(\mathbf{n} = \{1,2,\dotsc,2 k\}\). Let our equivalence classes be
\[\{1,7,15\},\{2,16\},\{3,4,5,11\},\{6,10,12,14\},\{8\},\{9,13\}.\] Then
\(s_1=s_2=s_3=s_6=s_8=s_9=*\) and \(s_4=s_5=0\). The vertices of degree
\(3\) are \([1]\) and \([6]\); all other vertices are of degree \(2\).
Hence, \(s_{16}=\cdot\) (since \(16\) follows \(15\), which is in
\([1]\)) and \(s_7=s_{11}=s_{13}=s_{15}=\cdot\) (since these indices
follow \(6, 10, 12, 14\), which are in \([6]\)). It remains to consider
\(i=10,12,14\). In the case \(i=10\), we see that \([9]\) has degree
\(2\), but, when we come to \(10\), we realize that no element of
\([10]\) has been seen next to an element of \([9]\) before: \(8\) is
next to \(9\), but \(8\notin [10]\). Hence, we let \(s_{10}=\cdot\). In
the case \(i=12\), we see that \([12]\) has been seen next to \([11]\)
before: \(5\in [11]\) and \(6\in [12]\). Since \([12]\) was the second
equivalence class other than \([11]\) to appear next to \([11]\) (the
first one was \([2]\): \(2\in [2]\), \(3\in [11]\)), we write
\(s_{12}=2\). The situation for \(i=14\) is analogous, in that \([14]\)
appeared next to \([13]\) before: \(9\in [13]\), \(10\in [14]\), and so,
since \(8\notin [13],[14]\), \(s_{14}=2\).
In summary,
\[\vec{s} = \text{***00*$\cdot$**$\cdot\cdot$2$\cdot$2$\cdot\cdot$},\]
and, in addition to writing \(\vec{s}\), we specify the equivalence
classes of the indices \(i\) with \(s_i=\cdot\) explicitly (\([1]\) for
\(i=7,15\), \([6]\) for \(i=10\), \([3]\) for \(i=11\), \([9]\) for
\(i=13\), \([2]\) for \(i=16\)).
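The writer's procedure can be run as code. The sketch below (function and variable names invented) computes the degrees in \(\mathscr{G}_\sim\) from consecutive-index adjacencies and emits the string \(\vec{s}\), with the character \texttt{.} standing for \(\cdot\); there are no yellow letters here, so every index of \(\{1,\dotsc,2k\}\) takes part.

```python
def encode(classes):
    """The writer's string for an equivalence relation on {1,...,2k}
    with no yellow letters. classes: dict index -> class (frozenset).
    Returns a string over the alphabet {*, 0, 1, 2, .}."""
    idx = sorted(classes)
    # degrees in the graph whose edges join the classes of
    # (cyclically) consecutive indices
    edges = set()
    for a, b in zip(idx, idx[1:] + idx[:1]):
        if classes[a] != classes[b]:
            edges.add(frozenset((classes[a], classes[b])))
    deg = {c: sum(c in e for e in edges) for c in set(classes.values())}

    seen, out = set(), []
    nbrs = {c: [] for c in deg}   # neighbors of c, by first appearance
    for pos, i in enumerate(idx):
        c, prev = classes[i], classes[idx[pos - 1]]
        if c not in seen:
            out.append("*")           # a class we have not seen before
            seen.add(c)
        elif c == prev:
            out.append("0")           # same class as the previous index
        elif deg[prev] <= 2 and c in nbrs[prev]:
            out.append(str(nbrs[prev].index(c) + 1))   # "1" or "2"
        else:
            out.append(".")           # must be specified explicitly
        if pos > 0 and c != prev:     # record the new adjacency
            if c not in nbrs[prev]:
                nbrs[prev].append(c)
            if prev not in nbrs[c]:
                nbrs[c].append(prev)
    return "".join(out)

partition = [{1, 7, 15}, {2, 16}, {3, 4, 5, 11},
             {6, 10, 12, 14}, {8}, {9, 13}]
classes = {i: frozenset(c) for c in partition for i in c}
code = encode(classes)
```

Run on the \(k=8\) example above, this reproduces the string \(\vec{s}\) just displayed, with one \(*\) per equivalence class.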
A reader can now reconstruct our equivalence classes by reading
\(\vec{s}\) from left to right, given that additional information. (Try
it!) We should now count the number of dots \(\cdot\), since that equals
the number of times we have to give additional information. For a class
\([i']\) that is a vertex of degree \(\leq 2\), it can happen at most
once (that is, for at most one element \(i'\) of \([i']\)) that
\(s_i\ne 0,1,2\) for the index \(i\) in \(\mathbf{n}\) right after
\(i'\), unless \(1\in [i']\), in which case it can happen twice.
(Someone who already has a neighbor and will end up with \(\leq 2\)
neighbors in total can meet a new neighbor at most once.) For \([i']\) a
vertex of arbitrary degree, it can happen at most \(\kappa\) times that
\(s_i\ne 0\). Hence, writing \(n_{\leq 2}\) for the number of vertices
of degree \(\leq 2\) and \(n_{\geq 3}\) for the number of vertices of
degree \(\geq 3\), we see that the total number of indices
\(i\in \mathbf{n}\) with \(s_i\in \{*,\cdot\}\) is at most
\(\kappa n_{\geq 3}+ n_{\leq 2} + 1 + 1\), where the last \(+1\) comes
from the first index \(i\) in \(\mathbf{n}\). The number of indices
\(i\) with \(s_i=*\) equals the number of classes, i.e.,
\(n_{\leq 2} + n_{\geq 3}\). Hence, the number of indices \(i\) with
\(s_i = \cdot\) is
\[\leq \kappa n_{\geq 3}+ n_{\leq 2} + 2 - (n_{\geq 3} + n_{\leq 2}) = (\kappa-1) n_{\geq 3} + 2 \leq (\kappa-1) \nu + 2.
\]
Each equivalence class contributes a factor of at most
\(\mathscr{L} = \sum_{p\in \mathbf{P}} \frac{1}{p}\) to our total in
\eqref{eq:littlestar}; singletons (equivalence classes with
one element each) actually contribute \(\sqrt{\mathscr{L}}\), because of
the factor of \(\mathscr{L}^{-\frac{|\mathcal{S}(\sim)|}{2}}\). Recall
that we are saving a factor of almost
\(H_0^{\frac{\nu}{12 \kappa} - 1}\) through \eqref{eq:circ} (let us say
\(H_0^{\nu/24 \kappa}\), to be safe). Thus, forgetting for a moment
about the yellow equivalence classes, we conclude that the contribution
to \eqref{eq:circ} of the equivalence relations \(\sim\) such that
\(G_\sim\) has \(\nu\) vertices of degree \(\geq 3\) is
\[\ll 4^{2 k} 5^{|\mathbf{n}|} (2 k)^2 \left(\frac{(2 k)^{\kappa-1}}{H_0^{1/24 \kappa}}\right)^\nu \mathscr{L}^k,\]
where the factor of \(4^{2 k}\) is there because we also have to specify
\(\vec{\sigma}\in \{-1,1\}^{\{1,\dotsc,2 k\}}\) and
\(\mathbf{l}, \mathbf{n}\subset \{1,2,\dotsc, 2k\}\). Provided that we
set our parameters so that \(H_0^{1/24 \kappa}\geq 2 (2 k)^{\kappa-1}\)
(and it turns out that we may do so, provided that \(\log H_0\) is
larger than \((\log H)^{2/3+\epsilon}\) -- or rather, larger than
\((\log H)^{1/2+\epsilon}\), if we make the improvement through multiple
arrows we mentioned a little while ago), we are done; we have a bound of
size \[\ll \mathscr{L}^k \sum_{\nu=1}^\infty 2^{-\nu}
\ll \mathscr{L}^k,\] which is what we wanted all along.
But wait! What about the part of the word that disappears during
reduction? It is partly described by a string of matched parentheses:
for example, \(x x^{-1} x^{-1} y y^{-1} x\) gives us \(()(())\). (We
also have to specify the exponents \(\sigma_i\) separately.) The
equivalence class of the index of a closing parenthesis is the same as
that of the index of the matching opening parenthesis. Thus, we need
only worry about specifying the equivalence classes of the opening
parentheses. There are \(k-|\mathbf{n}|/2\) of them.
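The parenthesis description of the vanishing part is again a stack computation: the stack holds surviving positions, and each cancellation turns a matched pair into \((\) and \()\). A sketch (invented names), reproducing the \(()(())\) example:

```python
def cancellation_parens(word):
    """Stack-based free reduction of word (a list of (letter, sign)
    pairs), recording the matched pairs that disappear.
    Returns (surviving positions, dict position -> '(' or ')')."""
    stack, parens = [], {}
    for pos, (letter, sign) in enumerate(word):
        if stack and word[stack[-1]] == (letter, -sign):
            parens[stack.pop()] = "("    # matched opening parenthesis
            parens[pos] = ")"            # ... and its closing partner
        else:
            stack.append(pos)
    return stack, parens

# the example in the text: x x^{-1} x^{-1} y y^{-1} x vanishes entirely
word = [("x", 1), ("x", -1), ("x", -1), ("y", 1), ("y", -1), ("x", 1)]
survivors, parens = cancellation_parens(word)
pattern = "".join(parens[p] for p in sorted(parens))
```

The equivalence class of a closing parenthesis is indeed determined by its matching opening one, which is why only the opening parentheses need to be described.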
A naive approach would be to describe each such equivalence class
\([i]\) by specifying the first index \(i\) in it each time it occurs
(except for the first time). The cost of that approach could be about as
large as \(k^{k-|\mathbf{n}|/2}\), which is much too large. It would
seem we are in a pickle. Indeed, we know we would have to be in a
pickle, if we were not using the fact that we are not working in all of
\(\mathbf{N}\), but in a subset \(X\subset \mathbf{N}\) all of whose
elements have \(\leq K \mathscr{L}\) divisors in \(\mathbf{P}\). (If we
worked in all of \(\mathbf{N}\), even trivial walks, which are entirely
yellow, would pose an insurmountable problem.) However, how can we use
\(X\), or the bound \(\leq K \mathscr{L}\), by this point?
The point is that we need not consider all possible \((p_{[i]})\) in
\eqref{eq:littlestar}, but only those tuples that can possibly arise in a walk
\[n, n+\sigma_1 p_1, n+\sigma_1 p_1 + \sigma_2 p_2,\dotsc,
n+\sigma_1 p_1 + \sigma_2 p_2 + \dotsb + \sigma_{2 k} p_{2 k}=n
\] all of whose nodes are in \(X\). Now, if a prime \(p_j\) has appeared
before as \(p_i\) (i.e., \(i<j\) and \(i\sim j\)) and both \(i\) and
\(j\) are ``lit'', that is \(i,j\in \mathbf{l}\), then, as we know,
\(\sigma_{i} p_{i} + \dotsc + \sigma_{j-1} p_{j-1}\) must be divisible
by \(p_i\). (Indices that are not lit do not pose a problem, due to the
factors of the form \(1/p\) that they contribute.) What is more: if
\(i\in \mathbf{l}\), \(i<j\) with
\(p_i|\sigma_{i} p_{i} + \dotsc + \sigma_{j-1} p_{j-1}\), then
\(n + \sigma_1 p_1 + \dotsc + \sigma_{j-1} p_{j-1}\) is \emph{forced} to
be divisible by \(p_i\) (because
\(n+\sigma_1 p_1 + \dotsc + \sigma_{i-1} p_{i-1}\) is divisible by
\(p_i\)). Now, \(n+\sigma_1 p_1 + \dotsc + \sigma_{j-1} p_{j-1}\) has
\(\leq K \mathscr{L}\) divisors. Hence, given \(j\), there are at most
\(K \mathscr{L}\) distinct equivalence classes \([i]\) having at least
one representative \(i<j\), \(i\in \mathbf{l}\) such that
\(p_i|\sigma_{i} p_{i} + \dotsc + \sigma_{j-1} p_{j-1}\). This is a
property in which \(n\) no longer appears.
Now, as we describe \(\sim\) to our reader, when we come to an index of
the one kind that remains problematic -- disappearing in the reduction,
corresponding to an open parenthesis, in an equivalence class that has
been seen before -- we need only specify an equivalence class
\emph{among those \(\leq K \mathscr{L}\) equivalence classes that have
at least one representative \(i<j\), \(i\in \mathbf{l}\) such that}
\(p_i|\sigma_{i} p_{i} + \dotsc + \sigma_{j-1} p_{j-1}\). The reader can
figure out which ones those are, as that is a property given solely by
\(p_1,\dotsc,p_{j-1}\) and \(\sigma_1,\dotsc,\sigma_{j-1}\). We can give
them numbers \(1\) to \(\lfloor K \mathscr{L}\rfloor\) by order of first
appearance, and communicate to the reader the equivalence class we want
by its number, rather than by an index. Thus we incur a factor of only
\(K\mathscr{L}\), not \(2 k\).
In the end, we obtain a total contribution of \[O((K\mathscr{L})^k),\]
which is what we wanted. In other words,
\[\textrm{Tr} (A|_X)^{2 k} \leq O(K \mathscr{L})^{k} N,\] Q.E.D.
Incidentally, in earlier drafts of the paper, we did not have a
``writer'' and a ``reader'', but a mahout and an elephant:
\begin{figure}
\centering
\includegraphics[scale=0.45]{elephant1.png}
\end{figure}
They were unfortunately censored by my coauthor. As this is my
exposition, here they are. The picture might be clearer now -- the
elephant-reader has no idea of \(n\), or of our grand strategy, but it
is an intelligent animal that can follow instructions and is endowed
with a flawless memory (and the ability to test for divisibility,
apparently).
\section{Conclusions}
\begin{main}Let the operator \(A\) be as before, with
\(\mathbf{N}=\{N+1,\dotsc,2 N\}\) and \(H_0,H,N\geq 1\) such that
\(H_0\leq H\) and \(\log H_0 \geq (\log H)^{1/2} (\log \log H)^{2}\).
Let \(\mathbf{P}\subset [H_0,H]\) be a set of primes such that
\(\mathscr{L} = \sum_{p\in \mathbf{P}} 1/p \geq e\) and
\(\log H \leq \sqrt{\frac{\log N}{\mathscr{L}}}\).
Then, for any \(1\leq K\leq \frac{\log N}{\mathscr{L} (\log H)^2}\),
there is a subset \(\mathscr{X}\subset \mathbf{N}\) with
\(|\mathbf{N}\setminus \mathscr{X}|\ll N e^{-K \mathscr{L} \log K} + N/\sqrt{H_0}\)
such that every eigenvalue of \(A|_{\mathscr{X}}\) is
\[O\left(\sqrt{K \mathscr{L}}\right),\] where the implied constants are
absolute. \end{main}
We have sketched a full proof, leaving out one, or rather two, passages
-- namely, the proof that we can take out from \(X\) two kinds of
integers, and still keep \(X\) well-distributed enough in arithmetic
progressions for cancellation to happen when we have too many lone
primes. As we have said before, those two kinds of integers are: (a)
integers \(n\) with \(\geq K \mathscr{L}\) divisors, (b) integers \(n\)
that could give rise to too many disjoint revenants. Here (b) sounds a
little vague, but, if we simply take out from \(X\) the set \(Y_\ell\)
of those integers \(n\) for which there can be a ``premature revenant'',
meaning that there exist \(p\in \mathbf{P}\),
\(p_1,\dotsc,p_l\in \mathbf{P}\) with \(p_i\ne p\) and
\(\sigma\in \{-1,1\}^l\), \(l\leq \ell\), such that
\[p|n,\; p_{1}|n,\; p_{2} | n + \sigma_1 p_1,\; \dotsc,\; p_l|n+\sigma_1 p_1 + \dotsc + \sigma_{l-1} p_{l-1},\; p|n+\sigma_1 p_1 + \dotsc + \sigma_l p_l,\]
then we have ensured that there cannot be more than \(2k/\ell\) disjoint
revenants. (We have not really forgotten about the possibility that some
intermediary indices may not be lit -- those are taken care of by a
different argument.) It is actually not hard to show that \(Y_{\ell}\)
is a fairly small set; what takes work is showing that it is
well-distributed. What we did was develop a new tool -- a combinatorial
sieve for conditions involving composite moduli. While it is somewhat
technical, it may be interesting in its own right, in that it will probably
be useful for attacking other problems. Let us leave it to the appendix.
The main theorem has several immediate corollaries. First of all, we
obtain what we set as our original goal.
\begin{cor} For any \(e<w\leq x\) such that \(w\to\infty\) as
\(x\to \infty\),
\[\frac{1}{\log w} \sum_{\frac{x}{w}\leq n\leq x} \frac{\lambda(n) \lambda(n+1)}{n} = O\left(\frac{1}{\sqrt{\log \log w}}\right).\]
\end{cor}
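As a quick numerical illustration of this corollary (a sanity check at small scales, not evidence for the theorem, whose content is asymptotic), one can tabulate \(\lambda\) by a sieve and compute the logarithmic average. The snippet below is our own, with illustrative values of \(x\) and \(w\):

```python
import math

def liouville_sieve(N):
    """lambda(n) for n = 0..N via a smallest-prime-factor sieve:
    lambda(1) = 1 and lambda(n) = -lambda(n / spf(n))."""
    spf = list(range(N + 1))
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

def log_average(x, w):
    """(1/log w) * sum_{x/w <= n <= x} lambda(n) lambda(n+1) / n."""
    lam = liouville_sieve(x + 1)
    s = sum(lam[n] * lam[n + 1] / n for n in range(max(2, x // w), x + 1))
    return s / math.log(w)

print(log_average(10**5, 10**3))  # small in absolute value
```

At such modest heights the average is already visibly small, though of course this says nothing about the rate \(O(1/\sqrt{\log\log w})\).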
We can also obtain substantially sharper results. A case in point: we can
prove that \(\lambda(n+1)\) averages to zero (with weight \(1/n\) as
above, or ``at almost all scales'') over integers \(\leq N\) having
exactly \(k\) prime factors, where \(k\) is a popular number of prime
factors to have (e.g., \(\lfloor \log \log N\rfloor\), or
\(\lfloor \log \log N\rfloor + 2021\)). To see more such corollaries,
look at the actual paper, or derive your own!
\subsection{Subset of acknowledgments. Bonus track}
I am grateful to many people -- please read the full acknowledgments in
the paper. Here I would like to thank two subsets in particular -- (a)
postdocs and students in Göttingen who patiently attended my online
lectures during the first year of the COVID pandemic, as the proof was
finally gelling, (b) inhabitants of MathOverflow. In (b), one can find,
for example, Fedor Petrov, who pointed us towards Kleitman-West,
besides answering other questions, but you can also find some users who
chose to remain anonymous. Among them was user ``BS.'', who explained
how one of my questions about ranks was related to topology. That
relation has gone well under the surface in the current version, so let
us discuss it here, for our own edification.
Consider a word \(w\) of a special kind -- a word \(w\) where every
letter \(x_1,\dotsc,x_k\) appears twice, once as \(x_i\), once as
\(x_i^{-1}\). For \(1\leq i,j\leq k\), let \(m_{i,j}\) equal \(1\) if
either (a) \(x_i\) appears before \(x_i^{-1}\), and \(x_j\) appears
between them, but \(x_j^{-1}\) does not appear between them, or (b)
\(x_i^{-1}\) appears before \(x_i\), and \(x_j^{-1}\) appears between
them, but \(x_j\) does not. Let \(m_{i,j}=-1\) if either (a) or (b) is
true with \(x_j\) and \(x_j^{-1}\) switched. Let \(m_{i,j}=0\)
otherwise. Then the \(k\)-by-\(k\) matrix \(M=(m_{i,j})\) is
skew-symmetric. As people on MathOverflow kindly showed me (apparently
my education in linear algebra left something to be desired\ldots{}), if
a skew-symmetric matrix \(M\) has rank \(r\), then it has a minor with
disjoint row and column index sets and rank \(\geq r/2\). Since I was
interested precisely in constructing such a minor of high rank (its row
and column index sets \(I\) and \(J\) giving us what we called ``blue'' and ``red'' vertices in the
above), it made sense that I would want to know what the rank \(r\) of
\(M\) might be. In particular, when is \(M\) non-singular?
What BS. showed me is that one can construct a surface \(S\) with
handles corresponding to the word \(w\) in a natural way. (Apparently
this construction is standard, but it was completely unknown to me.) For
instance, for \(w = x_1 x_2 x_1^{-1} x_2^{-1} x_3 x_3^{-1}\), the
surface \(S\) looks as follows:
\begin{center}
\begin{tikzpicture}[scale=0.75]
\colorlet{lightblue}{blue!20!white}
\draw[thick, fill=lightblue] (2,-1) -- (2,3.2) .. controls (2,4) and (2.2,4.8) .. (3,5) .. controls (3.8,5.2) and (5.2,5.2) .. (6,5) .. controls (6.8,4.8) and (7,4) .. (7,3.2) -- (7,-1) -- (6,-1) -- (6,2) .. controls (6,3.2) and (5.8,3.8) .. (5.4,4) .. controls (5,4.2) and (4,4.2) .. (3.6,4) .. controls (3.2,3.8) and (3,3.2) .. (3,0);
\draw[thick, fill=lightblue] (8,-1) -- (8,0) .. controls (8,2) and (8.2,2.8) .. (9,3) .. controls (9.5,3.05) .. (10,3) .. controls (10.8,2.8) and (11,2) .. (11,0) -- (11,-1) -- (10,-1) -- (10,0) .. controls (10,1.2) and (9.9,1.8) .. (9.7,2) .. controls (9.55,2.1) and (9.45,2.1) ..(9.3,2) .. controls (9.1,1.8) and (9,1.2) .. (9,0) -- (9,-0.5) -- (8,-1);
\draw[thick, fill=lightblue] (0,-1) -- (0,0) .. controls (0,2) and (0.2,2.8) .. (1,3) .. controls (2,3.2) and (3,3.2) .. (4,3) .. controls (4.8,2.8) and (5,2) .. (5,0) -- (5,-1) -- (4,-1) -- (4,-0.5) -- (4,0) .. controls (4,1.2) and (3.8,1.8) .. (3.4,2) .. controls (3,2.2) and (2,2.2) .. (1.6,2) .. controls (1.2,1.8) and (1,1.2) .. (1,0);
\draw[fill=lightblue] (0,0) -- (0,-1) .. controls (0,-1.75) and (0.25,-2) .. (1,-2) -- (10,-2) .. controls (10.75,-2) and (11,-1.75) .. (11,-1) -- (11,0);
\draw[thick] (1,0) -- (2,0);
\draw[thick] (3,0) -- (4,0);
\draw[thick] (5,0) -- (6,0);
\draw[thick] (7,0) -- (8,0);
\draw[thick] (9,0) -- (10,0);
\end{tikzpicture}
\end{center}
The matrix \(M\) then corresponds to the intersection form of this
surface. This form is defined as an antisymmetric inner product on
\(H_1(S,\mathbb{Z})\), counting the number of intersections (with
orientation) of two closed paths in the way you may expect. For
instance, in the following, \(\langle z_1,z_2\rangle=-1\), whereas
\(\langle z_1,z_3\rangle = \langle z_2,z_3\rangle = 0\):
\begin{center}
\begin{tikzpicture}[scale=0.75]
\colorlet{lightblue}{blue!20!white}
\draw[thick, fill=lightblue] (2,-1) -- (2,3.2) .. controls (2,4) and (2.2,4.8) .. (3,5) .. controls (3.8,5.2) and (5.2,5.2) .. (6,5) .. controls (6.8,4.8) and (7,4) .. (7,3.2) -- (7,-1) -- (6,-1) -- (6,2) .. controls (6,3.2) and (5.8,3.8) .. (5.4,4) .. controls (5,4.2) and (4,4.2) .. (3.6,4) .. controls (3.2,3.8) and (3,3.2) .. (3,0);
\draw[thick,->,color=violet] (6.5,0.5) -- (6.5,2.5);
\draw[thick,->,color=violet] (6.5,2.5) .. controls (6.5,3.7) and (6.3,4.3) .. (5.9,4.5);
\draw[thick,->,color=violet] (5.9,4.5) .. controls (5.5,4.7) and (3.5,4.7) .. (3.1,4.5);
\draw[thick,color=violet] (3.1,4.5) .. controls (2.7,4.3) and (2.5,3.7) .. (2.5,2.5);
\draw[thick,color=violet] (2.5,2.5) -- (2.5,0.5);
\draw[thick, fill=lightblue] (8,-1) -- (8,0) .. controls (8,2) and (8.2,2.8) .. (9,3) .. controls (9.5,3.05) .. (10,3) .. controls (10.8,2.8) and (11,2) .. (11,0) -- (11,-1) -- (10,-1) -- (10,0) .. controls (10,1.2) and (9.9,1.8) .. (9.7,2) .. controls (9.55,2.1) and (9.45,2.1) ..(9.3,2) .. controls (9.1,1.8) and (9,1.2) .. (9,0) -- (9,-0.5) -- (8,-1);
\draw[thick, fill=lightblue] (0,-1) -- (0,0) .. controls (0,2) and (0.2,2.8) .. (1,3) .. controls (2,3.2) and (3,3.2) .. (4,3) .. controls (4.8,2.8) and (5,2) .. (5,0) -- (5,-1) -- (4,-1) -- (4,-0.5) -- (4,0) .. controls (4,1.2) and (3.8,1.8) .. (3.4,2) .. controls (3,2.2) and (2,2.2) .. (1.6,2) .. controls (1.2,1.8) and (1,1.2) .. (1,0);
\draw[fill=lightblue] (0,0) -- (0,-1) .. controls (0,-1.75) and (0.25,-2) .. (1,-2) -- (10,-2) .. controls (10.75,-2) and (11,-1.75) .. (11,-1) -- (11,0);
\draw[thick] (1,0) -- (2,0);
\draw[thick] (3,0) -- (4,0);
\draw[thick] (5,0) -- (6,0);
\draw[thick] (7,0) -- (8,0);
\draw[thick] (9,0) -- (10,0);
\draw[thick,->,color=violet] (0.9,-0.5) -- (3,-0.5);
\draw[thick,color=violet] (3,-0.5) -- (4.1,-0.5) .. controls (4.5,-0.4) .. (4.5,0.5);
\draw[thick,color=violet] (0.9,-0.5) .. controls (0.5,-0.4) .. (0.5,0.5);
\draw[thick,->,color=violet] (4.5,0.5) .. controls (4.5,1.7) and (4.3,2.3) .. (3.9,2.5) node[below] {$z_1$};
\draw[thick,->,color=violet] (3.9,2.5) .. controls (3.5,2.7) and (1.5,2.7) .. (1.1,2.5);
\draw[thick,color=violet] (1.1,2.5) .. controls (0.7,2.3) and (0.5,1.7) .. (0.5,0.5);
\draw[thick,->,color=violet] (2.9,-1) -- (5,-1);
\draw[thick,color=violet] (5,-1) node[below] {$z_2$} -- (6.1,-1) .. controls (6.5,-0.9) .. (6.5,0.5);
\draw[thick,color=violet] (2.9,-1) .. controls (2.5,-0.9) .. (2.5,1);
\draw[thick,->,color=violet] (8.9,-0.5) -- (9,-0.5) node[below] {$z_3$};
\draw[thick,color=violet] (9,-0.5) -- (10.1,-0.5) .. controls (10.5,-0.4) .. (10.5,0.5);
\draw[thick,color=violet] (8.9,-0.5) .. controls (8.5,-0.4) .. (8.5,0.5);
\draw[thick,->,color=violet] (10.5,0.5) .. controls (10.5,1.7) and (10.3,2.3) .. (9.9,2.5);
\draw[thick,color=violet] (9.9,2.5) .. controls (9.5,2.6) .. (9.1,2.5);
\draw[thick,color=violet] (9.1,2.5) .. controls (8.7,2.3) and (8.5,1.7) .. (8.5,0.5);
\end{tikzpicture}
\end{center}
Say \(S\) has genus \(g\) and \(b\geq 1\) boundary components. Then, for
\(S_g\) the surface of genus \(g\) without boundary, there is an
embedding \(S\hookrightarrow S_g\) preserving the intersection form,
with \(H_1(S)\to H_1(S_g)\) having kernel of rank \(b-1\). The
intersection form on \(H_1(S_g)\) is non-singular. Hence, \(M\) has
corank \(b-1\). In particular, \(M\) is non-singular iff \(b=1\), i.e.,
iff the boundary of \(S\) is connected.
It is an exercise to show that \(b\) equals the number of cycles in the
permutation \(i\mapsto \sigma(i)+1 \bmod 2 k\), where \(\sigma\) is the
permutation of \(\{1,2,\dotsc,2 k\}\) switching \(x_i\) and \(x_i^{-1}\)
in \(w\) for every \(1\leq i\leq k\).
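All of the above can be checked by machine on small examples. Here is a sketch in our own notation (a word is a list of (letter, sign) pairs; positions are indexed from \(0\)): it builds \(M\), computes \(b\) as the number of cycles of \(i\mapsto \sigma(i)+1 \bmod 2k\), and verifies that the corank of \(M\) equals \(b-1\).

```python
from fractions import Fraction

def build_M(word):
    """The skew-symmetric matrix M = (m_{i,j}) defined in the text."""
    letters = sorted({l for l, _ in word})
    pos = {ls: i for i, ls in enumerate(word)}
    k = len(letters)
    M = [[0] * k for _ in range(k)]
    for a, i in enumerate(letters):
        lo, hi = sorted((pos[(i, 1)], pos[(i, -1)]))
        s = 1 if pos[(i, 1)] < pos[(i, -1)] else -1  # selects condition (a) or (b)
        for b, j in enumerate(letters):
            if i == j:
                continue
            in_same = lo < pos[(j, s)] < hi   # the copy of x_j named in (a)/(b)
            in_opp = lo < pos[(j, -s)] < hi   # the switched copy
            M[a][b] = 1 if (in_same and not in_opp) else (-1 if (in_opp and not in_same) else 0)
    return M

def rank(mat):
    """Rank by exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def boundary_components(word):
    """b = number of cycles of i -> sigma(i)+1 mod 2k."""
    n = len(word)
    pos = {ls: i for i, ls in enumerate(word)}
    tau = [(pos[(l, -s)] + 1) % n for (l, s) in word]
    seen, b = [False] * n, 0
    for i in range(n):
        if not seen[i]:
            b += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = tau[j]
    return b
```

On \(w = x_1 x_2 x_1^{-1} x_2^{-1} x_3 x_3^{-1}\) this gives \(b=2\) and \(\operatorname{corank}(M)=1\); on the one-holed torus word \(x_1 x_2 x_1^{-1} x_2^{-1}\) it gives \(b=1\) and a non-singular \(M\), matching the criterion above.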
I have no idea of how to define a surface \(S\) like the above for a
word \(w\) of general form -- the natural generalization of \(M\) is the
matrix corresponding to the system
\eqref{eq:ddagger} of divisibility relations, and that matrix need not be skew-symmetric, or even
square. However, in \(S\) and its boundary, you can already see shades
of our graph \(\mathscr{G}_{(\sigma,\sim)}\).
|
2210.14385
|
\section{Introduction}
Determination of the equation of state (EoS) for dense QCD at low temperature has long been desired, especially because it is closely related to the understanding of neutron star observations, including recent simultaneous measurements of the masses and radii of neutron stars.
However, the first-principles calculation of dense QCD at low temperature, beyond the onset scale ($\mu/m_{\pi} >1/2$) in particular, is still extremely difficult because of the severe sign problem.
On the other hand, the sign problem is absent in even-flavor dense $2$-color QCD because of the pseudo-reality of fundamental quarks.
Furthermore, if we add an external source term of the diquark condensate to explicitly break the U(1) baryon symmetry, we can perform numerical simulations using an exact algorithm even beyond the onset scale, namely, in the superfluid phase.
$2$-color QCD at zero chemical potential exhibits the same properties as $3$-color QCD, {\it e.g.,} confinement, spontaneous chiral symmetry breaking, and
thermodynamic behaviors.
It is expected that $2$-color QCD even at non-zero chemical potential could be a good testing ground in qualitatively understanding dense QCD.
Based on this motivation, several Monte Carlo studies on $2$-color QCD have been conducted independently and intensively in recent years (see references in Ref.~\cite{Iida:2022hyy}).
One can conclude that the $2$-color QCD phase diagram has been quantitatively clarified; even at fairly high temperature, $T\approx 100$ MeV, superfluidity can remain.
Now, we would like to focus on the EoS and the sound velocity in a low temperature and high density regime.
Several early works based on a phenomenological quark-hadron crossover picture of neutron star matter~\cite{Masuda2013-jk,Baym:2017whm} suggested that
the zero-temperature sound velocity squared, $c_s^2=\partial p/\partial e$, must peak in $n_B = 1$--$10n_0$ in order to be consistent with various observational constraints.
Here, $p$, $e$ and $n_0$ denote the pressure, internal energy density of the system, and nuclear saturation density, respectively.
More recently, based on a quarkyonic matter model, McLerran and Reddy~\cite{McLerran2019-qh} have shown that the peak appears at $n_B= 1$--$5n_0$.
Furthermore, Kojo~\cite{Kojo2021-mg} proposed a microscopic interpretation of the origin of the peak
based on a quark saturation mechanism, which is supposed to work for any number of colors. Indeed, Kojo and Suenaga~\cite{Kojo2021-wh} argued that a similar peak of $c_s^2$ emerges not only in $3$-color QCD, but also in $2$-color QCD.
\section{Lattice setup}
The lattice gauge action used in this work is the Iwasaki gauge action.
As for the fermion action, we take the naive Wilson fermion action with the quark number density and diquark source terms,
\begin{eqnarray}
S_F&=& (\bar{\psi_{1}} ~~ \bar{\varphi}) \left(
\begin{array}{cc}
\Delta(\mu) & J \gamma_5 \\
-J \gamma_5 & \Delta(-\mu)
\end{array}
\right)
\left(
\begin{array}{c}
\psi_{1} \\
\varphi
\end{array}
\right)
\equiv \bar{\Psi} {\mathcal M} \Psi, \nonumber\\ \label{eq:def-M}
\end{eqnarray}
where
$\bar{\varphi}=-\psi_2^T C \tau_2, ~~~ \varphi=C^{-1} \tau_2 \bar{\psi}_2^T.$
Here, the indices $1,2$ of $\psi$ denote the flavor label, and $\Delta(\mu)_{x,y}$ is the Wilson-Dirac operator including the quark number operator.
The additional parameter $J$ corresponds to the diquark source parameter,
which allows us to perform the numerical simulation in the superfluid phase.
Note that $J=j \kappa$, where $j$ is a source parameter in the corresponding continuum theory, and $\kappa$ is the hopping parameter.
The $C$ in $\bar{\varphi},\varphi$ is the charge conjugation operator, and $\tau_2$ acts on the color index.
The square of the extended matrix, ${\mathcal M}^\dag {\mathcal M}$, can be diagonalized,
but $\det[{\mathcal M}^\dag {\mathcal M}]$ corresponds to the fermion action for the four-flavor theory, since a single $\mathcal{M}$ in Eq.\ (\ref{eq:def-M}) represents the fermion kernel of the two-flavor theory.
To reduce the number of flavors, we take the square root of the extended matrix in the action.
In practice, we generate gauge configurations utilizing the Rational Hybrid Monte Carlo (RHMC) algorithm.
In this work, we perform the simulation with $(\beta, \kappa,N_s,N_\tau)=(0.80,0.159,16,16)$.
According to Ref.~\cite{Iida:2020emi}, once we introduce the physical scale as $T_c=200$ MeV, where $T_c$ denotes the pseudo-critical temperature of chiral phase transition at $\mu=0$, then our parameter set, $\beta=0.80$ and $N_\tau=16$ ($T=0.39T_c$), corresponds to $a\approx 0.17$ fm and $T\approx 79$ MeV.
The mass of the lightest pseudo-scalar (PS) meson at $\mu=0$, $m_{PS}$, is still heavy in our simulations, $am_{PS}=0.6229(34)$ ($m_{PS}\approx 750 $ MeV).
As for the values of $a\mu$, we generate the configurations at intervals of $a\Delta \mu=0.05$.
The number of configurations for each parameter set is $100$--$300$.
The statistical errors are estimated by the jackknife method.
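For completeness, the leave-one-out jackknife used for the error bars can be sketched as follows (a generic textbook implementation, not our analysis code); for the sample mean it reproduces the usual standard error of the mean:

```python
import math

def jackknife(samples, estimator):
    """Leave-one-out jackknife: returns (full-sample estimate, error)."""
    n = len(samples)
    # estimator applied to each leave-one-out subsample
    loo = [estimator(samples[:i] + samples[i + 1:]) for i in range(n)]
    center = sum(loo) / n
    var = (n - 1) / n * sum((t - center) ** 2 for t in loo)
    return estimator(samples), math.sqrt(var)

mean = lambda xs: sum(xs) / len(xs)
```

For nonlinear estimators (ratios, fits), the same leave-one-out resampling propagates correlations automatically, which is why it is the standard choice in lattice analyses.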
\section{Phase structure at $T=79$ MeV}
We show the schematic
phase structure in Fig.~\ref{fig:phase-diagram} and summarize the definition of each phase in Table~\ref{table:phase}, which is an extract from Ref.~\cite{Iida:2019rah}.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{c}
\includegraphics[keepaspectratio, scale=0.25]{phase-diagram.pdf}
\end{tabular}
\caption{
Schematic 2-color QCD phase diagram. Each phase is defined in Table \ref{table:phase}.
}\label{fig:phase-diagram}
\end{center}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
\multicolumn{1}{|c||}{} & \multicolumn{2}{c|}{Hadronic} & \multicolumn{2}{c|}{Superfluid} \\
\cline{3-3} \cline{4-5} & & Hadronic matter & BEC & BCS \\
\hline \hline
$\langle |L| \rangle$ & zero & zero & & \\
$\langle qq \rangle$ & zero & zero & non-zero & non-zero \\
$ \langle n_q \rangle $ &zero & non-zero & $ 0 < \frac{\langle n^{latt.}_q \rangle}{n_q^{\mathrm{tree}}} <1 $ & $ \frac{\langle n^{latt.}_q \rangle}{n_q^{\mathrm{tree}}} \approx 1 $ \\
\hline
\end{tabular}
\caption{ Definition of phases. } \label{table:phase}
\end{center}
\end{table}
The order parameters that help classify the phases are the Polyakov loop $\langle |L| \rangle$ and diquark condensate $\langle qq \rangle$, whose zero/nonzero values indicate the appearance of confinement and superfluidity, respectively.
We found that superfluidity emerges at $\mu_c/m_{PS} \approx 0.5$, as predicted by chiral perturbation theory (ChPT)~\cite{Kogut2000-so}.
It is natural to use $\mu/m_{PS}$ as a dimensionless parameter of density, since the critical value $\mu_c$ can be approximated by $m_{PS}/2$ even if the value of $m_{PS}$ in the numerical simulation were changed~\footnote{It is expected that the corresponding critical value of $\mu$ would be $\mu_c/m_N \approx 1/3$ if the hadronic-superfluid phase transition occurs also in the case of $3$-color QCD, where $m_N$ denotes the nucleon mass. }.
We also confirmed that the scaling law of the order parameter around the transition point is consistent with the ChPT prediction.
Furthermore, we measured the quark number operator, $n_q^{latt.}\equiv a^3 n_q= \sum_{i} \kappa \langle \bar{\psi}_i (x) (\gamma_0 -\mathbb{I}_4) e^\mu U_{4} (x) \psi_i (x+\hat{4}) + \bar{\psi}_i (x) (\gamma_0 + \mathbb{I}_4) e^{-\mu}U_4^\dag (x-\hat{4} )\psi_i (x-\hat{4})\rangle$.
We identified the regime where $\langle n^{latt.}_q \rangle$ is consistent with the free quark theory $n_q^{\mathrm{tree}}$ (see Eq.~(26) in Ref.~\cite{Hands2006-mh}) as the BCS phase.
Thus, we concluded that there are hadronic, hadronic-matter, Bose-Einstein condensed (BEC) and BCS phases at $T=79$ MeV, although there is no clear boundary between the BEC and BCS phases. Interestingly,
up to $\mu/m_{PS} =1.28 $ ($\mu \lesssim 960$ MeV), the confining behavior remains~\cite{Ishiguro:2021yxr}, while nontrivial instanton configurations have been discovered from calculations of the topological susceptibility~\cite{Iida:2019rah}.
This indicates that a naive perturbative picture, for instance, pQCD, is not yet valid in the density regime studied here.
\section{Equation of state and velocity of sound at finite $\mu$}
Now, we utilize a fixed scale method to obtain the EoS at finite density~\cite{Hands2006-mh}.
The trace anomaly can be described by the beta-functions of various parameters and the trace part of the energy-momentum tensor. In our lattice setup, it is explicitly given by
\begin{eqnarray}
e-3p &=& \frac{1}{N_s^3 N_\tau} \left( a \frac{d \beta}{da} |_{\mathrm{LCP}} \langle \frac{\partial S}{\partial \beta}\rangle_{sub.} + a \frac{d \kappa}{da} |_{\mathrm{LCP}} \langle \frac{\partial S}{\partial \kappa} \rangle_{sub.}
+ a\frac{\partial j}{\partial a}|_{\mathrm{LCP}} \langle \frac{\partial S}{\partial j} \rangle_{sub.} \right).\nonumber\\ \label{eq:trace-anomaly}
\end{eqnarray}
Here, $a$ is the lattice spacing, and the beta-function for each parameter is evaluated at $\mu=T=0$ along the line of constant physics (LCP). Note that there is no renormalization for the quark number density, as it is a conserved quantity.
We take all physical observables in the $j \rightarrow 0$ limit, which implies that the third term on the right-hand side can be eliminated.
$\langle \mathcal{O} \rangle_{sub.} (\mu) $ denotes the subtraction of the vacuum quantity.
Thus, ideally, we should take $\langle \mathcal{O} \rangle_{sub.} (\mu) = \langle \mathcal{O} (\mu,T) \rangle - \langle \mathcal{O} (\mu=0,T=0) \rangle $, but exact zero-temperature simulations are practically difficult.
In this work, we take $\langle \mathcal{O} \rangle_{sub.} (\mu) = \langle \mathcal{O} (\mu, T=79\mathrm{MeV}) \rangle - \langle \mathcal{O} (\mu=0, T= 79 \mathrm{MeV}) \rangle $.
Utilizing the scale setting function (Eq.\ (23)) and a set of $(\beta,\kappa)$ with a fixed mass ratio of pseudoscalar and vector mesons $m_{PS}/m_V$ (Table~$1$) in Ref.~\cite{Iida:2020emi},
the coefficients can be nonperturbatively determined as
\begin{eqnarray}
a d\beta /da|_{\beta=0.80,\kappa=0.159}=-0.352, \quad a d\kappa/da |_{\beta=0.80,\kappa=0.159}=0.0282.\label{eq:beta-fn}
\end{eqnarray}
\section{Simulation results}
The first term of the RHS in Eq.~\eqref{eq:trace-anomaly} is given by the measurement of the gauge action.
The raw data are plotted in the left panel of Fig.~\ref{fig:gauge-action}.
\begin{figure}[htbp]
\begin{center}
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[keepaspectratio, scale=0.6]{plot-ave-gauge-action-j-deps.pdf}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[keepaspectratio, scale=0.6]{plot-pressure-at-j0o00-scheme-deps.pdf}
\end{minipage}
\caption{
(Left): Raw data of $\langle \partial S /\partial \beta \rangle _{sub.}$ for each $\mu$ and $j$. The purple dashed line denotes the critical value, $\mu_c$, which is the hadronic-superfluid phase transition point, while the green dashed line indicates that the BEC-BCS crossover occurs around this value of $\mu$. (Right): Scheme dependence of the pressure.
}\label{fig:gauge-action}
\end{center}
\end{figure}
We find that they are consistent with zero in the hadronic phase (except near the phase transition point), while increasing in the BEC phase and then decreasing in the BCS phase.
Although we have determined the phase structure by the measurement of several physical observables in each phase as defined in Table~\ref{table:phase}, the results for $\langle \partial S /\partial \beta \rangle _{sub.}$ indicate that, already from the value of the gauge action during configuration generation, we can estimate where the hadronic-superfluid phase transition and the BEC-BCS crossover occur.
Furthermore, we can see that the $j$-dependence is mild, so that we take the constant extrapolations of the $j=0.01$ and $j=0.02$ data in the superfluid phase.
The second term of the RHS in Eq.~\eqref{eq:trace-anomaly} is given by
\begin{eqnarray}
\langle \frac{\partial S}{\partial \kappa} \rangle = \frac{1}{\kappa} \left( \mathrm{Tr}_{c, s,f}\mathbbm{1} - N_f \langle \bar{q}q \rangle \right).
\end{eqnarray}
Thus, we measure the chiral condensate. To obtain the extrapolated value at $j=0$, we perform the reweighting of $j$ and take the linear extrapolation (see Fig.~8 in Ref.~\cite{Iida:2019rah}).
The pressure can be expressed by the integral of the number density over $\mu$ in the thermodynamic limit.
On the lattice, two schemes with different discretization errors have been proposed in Ref.~\cite{Hands2006-mh}:
\begin{eqnarray}
&\mathrm{Scheme~ I:} \frac{p}{p_{SB}}(\mu) &= \frac{\int_{\mu_o}^{\mu} d\mu' n_{q}^{latt.} (\mu') }{\int_{\mu_o}^{\mu} d\mu' n_{q}^{\mathrm{tree}} (\mu')},\\
&\mathrm{Scheme ~II:} \frac{p}{p_{SB}}(\mu) &= \frac{\int_{\mu_o}^{\mu} d\mu' \frac{n_{SB}^{cont.}}{n_{q}^{\mathrm{tree}}} n^{latt.}_q(\mu') }{\int_{\mu_o}^{\mu} d\mu' n_{SB}^{cont.} (\mu')}.\label{eq:p-scheme2}
\end{eqnarray}
Here, $p_{SB}(\mu)$ denotes the pressure in the Stefan-Boltzmann (SB) limit, which is obtained by the numerical integration of the number density of quarks in the relativistic limit.
$\mu_o$ represents the onset scale, namely, the starting point at which $\langle n_q \rangle$ becomes nonzero as $\mu$ increases.
In the continuum theory, the pressure scales as $p_{SB}(\mu) = \int^\mu n_{SB}^{cont.}(\mu')d\mu' \approx N_fN_c \mu^4 /(12\pi^2)$ in the high-$\mu$ regime, where $N_f$ ($N_c$) is the number of flavors (colors).
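A minimal sketch of how Scheme~I can be evaluated numerically (our own trapezoidal implementation on an illustrative grid; the function names are not from any analysis code):

```python
def cumtrapz(mu, n):
    """Cumulative trapezoidal integral of the density n(mu) from mu[0]."""
    out = [0.0]
    for i in range(1, len(mu)):
        out.append(out[-1] + 0.5 * (n[i] + n[i - 1]) * (mu[i] - mu[i - 1]))
    return out

def pressure_ratio_scheme1(mu, n_latt, n_tree):
    """Scheme I: p/p_SB as the ratio of integrated number densities,
    with mu[0] playing the role of the onset mu_o."""
    num, den = cumtrapz(mu, n_latt), cumtrapz(mu, n_tree)
    return [a / b if b else float("nan") for a, b in zip(num, den)]
```

If the lattice density equals the tree-level one, the ratio is identically $1$ above the onset, which serves as a convenient consistency check of the integration.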
The simulation results are plotted in the right panel in Fig.~\ref{fig:gauge-action}.
First of all, we can see that the scheme dependence of $p$ is negligible. This indicates that the discretization effect of our simulation is small.
At $\mu_c=m_{PS}/2$ for the hadronic-superfluid phase transition (purple vertical line), $p$ takes a nonzero value since $\langle n_q \rangle$ becomes nonzero in the hadronic-matter phase.
Thus, $\langle n_q \rangle$ becomes nonzero before the hadronic-superfluid phase transition, so that $\mu_o$ is not the same as $\mu_c$.
Low but finite temperature effects cause this discrepancy, as discussed in~\cite{Iida:2019rah}.
We can see that our data
monotonically increase and approach the value in the relativistic limit.
The value of $p/p_{SB}$ is $\approx 0.84$ at the highest density in our simulation.
The trace anomaly and pressure (Scheme II) are shown in Fig.~\ref{fig:raw-data}.
For the trace anomaly, we plot the gauge part (the first term in Eq.~\eqref{eq:trace-anomaly}) and minus the fermion part (the second term) separately.
Both parts are
normalized by $\mu^4$
to see the dimensionless asymptotic behavior.
The magnitude of each part has
a peak around the hadronic-superfluid phase transition.
It is very similar to the emergence of the peak of $(e -3p)/T^4$ around the hadronic-QGP phase transition at $\mu=0$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[keepaspectratio, scale=0.75]{plot-trace-and-pressure-at-j000.pdf}
\caption{Trace anomaly and pressure as a function of $\mu/m_{PS}$. The circle and cross symbols denote the gauge part and {\it minus} the fermion part of the trace anomaly, respectively. We also show $p/\mu^4$ at the relativistic limit, $p_{SB}/\mu^4=N_fN_c /(12\pi^2)$.
}\label{fig:raw-data}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[keepaspectratio, scale=0.6]{plot-e-and-p-vs-mu.pdf}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[keepaspectratio, scale=0.6]{plot-sound-velocity.pdf}
\end{minipage}
\caption{
(Left): The EoS as a function of $\mu/m_{PS}$. (Right): Sound velocity squared as a function of $\mu/m_{PS}$. The horizontal line (orange) denotes the value in the relativistic limit, $c_s^2/c^2 =1/3$. The blue curve shows the result of ChPT.
}\label{fig:EoS}
\end{center}
\end{figure}
Combining the data of $e-3p$ and $p$ obtained above, we finally obtain the EoS and sound velocity in Fig.~\ref{fig:EoS}.
In the left panel, we normalize $e$ and $p$ by
$\mu_c^4$ so as to make them dimensionless.
We can see that both $e$ and $p$ are consistent with zero in the hadronic phase.
Thus, these thermodynamic quantities do not change as $\mu$ increases, up to the hadronic-superfluid phase transition.
Note that $e\approx 0$ in hadronic phase indicates that the nonperturbative beta-functions of $\beta$ and $\kappa$ given by Eq.~\eqref{eq:beta-fn} work well enough to make the parts of trace anomaly, $(e -3p)_g$ and $(e -3p)_f$, cancel each other.
Now, let us focus on the sound velocity depicted in the right panel in Fig.~\ref{fig:EoS}.
Here, we evaluate $c_s^2 (\mu)= \Delta p (\mu)/\Delta e (\mu) $, where $\Delta p (\mu)$ and $ \Delta e (\mu)$ are estimated by the symmetric finite difference,
i.e., $\Delta p(\mu) =( p(\mu +\Delta \mu) - p(\mu -\Delta \mu))/2$.
First of all, our results in the BEC phase are consistent with the prediction of ChPT~\cite{Son_2001, Hands2006-mh}, which is given by $c_s^2/c^2=(1-\mu_c^4/\mu^4)/(1+3\mu_c^4/\mu^4)$.
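The symmetric-difference estimator can be cross-checked against this closed form on a toy EoS: any pressure of the form $p \propto \mu^2(1-\mu_c^2/\mu_{\vphantom{c}}^2)^2$ reproduces the quoted $c_s^2$ formula exactly, with $e=\mu\, dp/d\mu - p$ following analytically. The sketch below is purely illustrative, with our own normalization (the overall constant drops out of $c_s^2$):

```python
def chpt_eos(mu, mu_c=1.0):
    """Toy ChPT-like EoS, up to an overall constant irrelevant for c_s^2:
    p = mu^2 (1 - mu_c^2/mu^2)^2 and e = mu dp/dmu - p."""
    x = (mu_c / mu) ** 2
    return mu**2 * (1 - x) ** 2, mu**2 * (1 + 2 * x - 3 * x**2)

def cs2_central(mu, dmu=1e-3):
    """c_s^2 = Delta p / Delta e with symmetric finite differences."""
    p_hi, e_hi = chpt_eos(mu + dmu)
    p_lo, e_lo = chpt_eos(mu - dmu)
    return (p_hi - p_lo) / (e_hi - e_lo)

def cs2_closed(mu, mu_c=1.0):
    """ChPT prediction (1 - mu_c^4/mu^4) / (1 + 3 mu_c^4/mu^4)."""
    y = (mu_c / mu) ** 4
    return (1 - y) / (1 + 3 * y)
```

Note that the closed form rises from $0$ at $\mu=\mu_c$, crosses $1/3$ at $\mu=3^{1/4}\mu_c$, and tends to $1$ as $\mu$ grows, which is why the lattice data must eventually depart from the ChPT curve.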
We also find that $c_s^2/c^2$ is larger than $1/3$, the value in the relativistic limit,
at densities higher
than the regime where the BEC-BCS crossover occurs.
Eventually, our data seem to peak around $\mu \approx m_{PS}$ and, as density increases further, decrease, departing from the ChPT prediction.
{\it Such a peak of the sound velocity is a characteristic feature previously unknown from any lattice calculations for QCD-like theories.}
For example, in the finite temperature case, the sound velocity monotonically increases for $T>T_c$ and approaches
the relativistic limit as the temperature increases~\cite{Borsanyi:2013bia,HotQCD:2014kol}.
Here, we comment on the holographic bound: the conjecture, proposed in Ref.~\cite{Cherman:2009tw}, that $c_s^2/c^2 \le 1/3$ is satisfied for a broad class of four-dimensional theories.
That paper itself studies the finite-temperature case in the context of holography.
Our result from a first-principles calculation shows that the bound is violated at finite density.
Furthermore, the counterexamples consisting of strongly coupled theories at finite density are also known in the context of holography~\cite{Hoyos:2016cob}.
\section{Summary and discussion}
It is strongly believed that at ultrahigh density, $c_s^2/c^2$ approaches the relativistic limit.
Then, there arises a question of
{\it how} it approaches $1/3$.
According to the pQCD analysis (see Appendix~A in \cite{Kojo2021-on}), it scales as $c_s^2/c^2 \approx (1- 5\beta_0 \alpha_s^2/(48\pi^2))/3$, where $\beta_0 = (11N_c -2 N_f)/3$ denotes the $1$-loop coefficient of the beta-function.
Thus, $c_s^2/c^2$ approaches the asymptotic value from {\it below}.
On the other hand, a result based on the resummed perturbation theory suggests that $c_s^2/c^2$ approaches
the limit from {\it above}~\cite{Fujimoto2020-bh}.
In the numerical simulations, the maximum value of $\mu$ is limited by $\mu \ll 1/a$ to avoid strong lattice artefacts; otherwise, the hopping term of the fermions would be partially suppressed by the factor $e^{-a\mu}$ in the Wilson-Dirac operator. For the extension to larger chemical potential, we need to perform simulations with smaller lattice spacing or lighter quark masses.
Furthermore, to obtain $c_s$ at $T=0$,
it is also necessary to study the EoS in a lower temperature regime by carrying out larger-volume simulations.
According to Ref.~\cite{Kojo2021-mg}, a peak of $c_s^2$ appears due to the development of the quark Fermi sea just after the saturation of low momentum quarks.
The density at which the peak appears in our results is apparently low, i.e., $\mu \approx m_{PS}$,
but seems sufficiently high that the quark Fermi sea would be fully developed.
This supports the predictions from several effective models based on the presence of the quark Fermi sea~\cite{McLerran2019-qh, Kojo2021-mg, Kojo2021-wh}.
Furthermore, it has been reported that a peak of the sound velocity also emerges around the BEC-BCS crossover in condensed-matter systems with finite-range interactions~\cite{Tajima:2022zhu}.
To determine whether or not the emergence of the peak structure is a universal property of superfluids in a BEC-BCS crossover regime, it would be important to investigate the origin of this structure in future work.
If the peak of the sound velocity is a universal property that persists in real $3$-color QCD, as discussed in Refs.~\cite{Kojo2021-mg, Kojo2021-wh}, then it would change the conventional picture
in which a first-order transition from stiffened hadronic matter to soft quark matter is responsible for the presence of massive neutron stars.
\begin{acknowledgments}
We would like to thank T.~Hatsuda, T.~Kojo, T.~Saito, D.~Suenaga, H.~Tajima and H.~Togashi for useful conversations.
We are grateful to S.~Hands and J.-I.~Skullerud for calling our attention to erroneous data in the earlier version of the manuscript.
The consistency with ChPT was kindly suggested by N.~Yamamoto.
E.~I. especially thanks T.~Kojo, T.~Hatsuda and H.~Togashi for fruitful discussions about the origin of the peak, the pQCD analysis and the correspondence between the lattice data and neutron-matter analysis.
Discussions in the working group ``Gravitational Wave and Equation of State'' in iTHEMS, RIKEN were useful for completing this work.
The work of E.~I. is supported by JSPS KAKENHI with Grant Number 19K03875, JST PRESTO Grant Number JPMJPR2113 and JSPS Grant-in-Aid for Transformative Research Areas (A) JP21H05190, and the work of K.~I. is supported by JSPS KAKENHI with Grant Numbers 18H05406 and 18H01211.
The numerical simulation is supported by the HPCI-JHPCN System Research Project (Project ID: jh220021).
\end{acknowledgments}
\bibliographystyle{utphys}
|
1803.02084
|
\section{Introduction}
The Internet of Things (IoT) denotes the widespread deployment of communication networks between machines without direct human intervention \cite{Manyika2015TheHype}.
One important IoT application is in the monitoring of different aspects within cities to help their complex and distributed management \cite{Zanella2014InternetCities}.
Household management of electricity consumption is a good example case, where IoT allows for a better understanding of how the electricity is consumed, potentially indicating ways to improve its usage efficiency in both individual and aggregate levels \cite{Siano2014DemandSurvey}.
Beyond this, IoT allows for regulatory actions into the physical system based on information \cite{Kuhnlenz2016DynamicsLayers}.
For instance, the utility -- informed by IoT devices -- may directly control a few households' heating devices -- via IoT devices -- aiming at monetary savings \cite{Palensky2011DemandLoads}.
Appliances connected to the same network may directly coordinate their reactions with respect to the grid frequency to help balance supply and demand (e.g. fridges postponing or anticipating their cycles without creating further instabilities \cite{Evora2015SwarmGrids}).
The term IoT, however, is also very broad, covering extreme application cases: from massive deployment with loose requirements (e.g. air quality measurements) to very specific high-reliability low-latency applications (e.g. robot arms in fully autonomous industrial plants) \cite{Raza2017LowOverview}.
In this paper, we target an application with relatively loose requirements related to the communication system, namely electricity metering in households.
Specifically, the scenario under investigation consists of a typical household that sends its average demanded power during a given period to an aggregator node (which can be the utility or a micro-grid trader, depending on the distribution arrangement in question) \cite{Nardelli2016MaximizingConstraints,Tome2016JointUsers}.
At the end of each day, the aggregator wants to reconstruct the power demand curve from the successfully decoded samples transmitted by the household.
The aggregator may use this information to, for example, plan the next day's operations (e.g. \cite{Ma2017TheNetworks}) or profile consumers (e.g. \cite{Li2017LoadDomain}); although the intended use actually defines the required quality of service of the communication network (e.g. \cite{Dawy2017TowardCommunications}), we keep it unspecified here and only look at the relation between the performance of the communication link and the quality of the signal reconstruction.
As to be explained throughout this paper, our contribution to the topic is the following.
\begin{itemize}
%
\item We extend the Long Range (LoRa) technology study case from \cite{Georgiou2017LowScale} by (a) including in their proposed stochastic geometry model a density of external interferers, and (b) using a random spreading factor allocation, which is fairer in relation to our specific application.
%
\item The theoretical model developed in \cite{Nardelli2016MaximizingConstraints} is modified to incorporate realistic LoRa setups from \cite{Georgiou2017LowScale} (which is supported by actual deployments, as discussed in \cite{Raza2017LowOverview}).
%
\item We reproduce the sampling strategies from \cite{Tome2016JointUsers} and \cite{Simonov2014HybridGrid} by testing their proposed approach on a different dataset \cite{Zimmermann2009End-useSavings} and providing further comparisons between the time-based and event-based schemes. As in \cite{Tome2016JointUsers}, we show that, for most households evaluated, the latter option consistently provides lower reconstruction errors when the number of samples is similar.
%
\item We investigate the trade-offs among the system variables, pointing out the most suitable gateway range to achieve a communication outage probability that can sustain signal reconstruction by the aggregator within a given quality level.
%
\end{itemize}
The rest of this paper is organized as follows.
Sec. \ref{sec:related} provides a short overview of the relevant literature on LoRa and on stochastic geometry applied to wireless networks, discussing our main advancements.
Sec. \ref{sec:system} contains the system model based on stochastic geometry and metrics to evaluate the performance of the proposed LoRa deployment, as well as the respective numerical results.
We present the sampling strategies and their assessment in Sec. \ref{sec:sampling}.
Sec. \ref{sec:discussions} discusses the implications of these results, focusing on deployment aspects in smart cities, while Sec. \ref{sec:conclusions} concludes this paper, also listing possible research directions.
\vspace{-1ex}
\section{Related work}
\label{sec:related}
\subsection{Long Range Technology -- LoRa}
Low-power wireless technologies covering wide areas are attracting growing attention, as indicated by the survey \cite{Raza2017LowOverview}.
The reason for that is the potential to reach the massive number of IoT devices at low cost with reasonably efficient performance.
It is worth saying, however, that the so-called ``low-power wide-range'' technologies are well suited to massive Machine-Type Communications (MTC), in contrast to critical MTC, whose applications require (very) low latency (1-10 ms) and (ultra) high reliability (99.999\%) \cite{Popovski2014Ultra-ReliableSystems}.
The main advantages of Low Power Wide Area (LPWA) technologies are, beyond the long range and the low power consumption themselves, their low cost and scalability while guaranteeing some (not so strict) quality of service.
For some specific applications as presented in \cite{Dawy2017TowardCommunications}, LPWA may become a way to alleviate the data traffic in traditional cellular networks to avoid some problems presented in \cite{Madueno2016AssessmentGrid}.
Among other options discussed in \cite{Raza2017LowOverview}, Long Range (LoRa) -- a proprietary technology proposed by Semtech and promoted by the LoRa Alliance -- provides bidirectional communication based on chirp spread spectrum modulation, which spreads narrow-band signals over a wider bandwidth.
LoRa uses a star-of-stars topology consisting of end-devices (in our case, smart-meters), gateways and a central network server.
The end-devices directly send their messages via a wireless channel using six different spreading factors (SF7 to SF12) and unslotted Aloha as the medium access control; in Europe, LoRa operates in the 863-870 MHz ISM band with channels of 125 kHz bandwidth.
The communication from the gateway to the server occurs through non-LPWA networks (e.g. cellular or Ethernet).
End-devices are divided into three classes (A, B and C), which mainly differ in their downlink capabilities.
Aspects relevant to this article, such as data-rate and successful detection at the uplink, will be given in the next section, while more general details about LoRa can be found at \cite{Raza2017LowOverview}, \cite{Adelantado2017UnderstandingLoRaWAN} and references therein.
\vspace{-1ex}
\subsection{Stochastic geometry for wireless networks}
Communication engineering communities have quite established analytical frameworks to account for channel and/or traffic uncertainties, which are the basis of almost all (if not all) wireless communication systems \cite{Yacoub1993FoundationsEngineering}.
The treatment of uncertainties related to the relative positions of devices is, however, still under development; positions have traditionally been modeled as regular grid-like topologies (e.g. hexagonal or square cellular networks) or toy models with few communication devices (e.g. two-hop systems).
As a growing research field, stochastic geometry and spatial point process theories applied in wireless networks (e.g. \cite{Haenggi2013StochasticNetworks}) are able to capture many trade-offs involved in the system design and deployment explicitly including the uncertainties related to the devices' positions.
Their main advantage is that the aggregate interference becomes analytically treatable, so neither time-consuming Monte Carlo simulations nor (over-)simplifying assumptions about the aggregate interference (e.g. completely neglecting it) are needed in order to evaluate the system performance.
Our contribution here is mainly based on three papers that employ such a model.
In \cite{Georgiou2017LowScale}, the authors studied whether LoRa can scale considering Spreading Factor (SF) allocations based on distance.
Their results indicate an exponential dependence of the outage probability on the number of end-devices, the outage probability being the complement of the probability that a given message is correctly received by the gateway.
They show that success probability exponentially decays with the average number of devices, evincing the negative effects of interference in LoRa even with its mitigation techniques.
The most interesting aspect of this paper, however, is the stochastic geometry treatment of interference.
Assuming that end-devices follow a Poisson point process and making use of order statistics, the authors found a closed-form expression for the probability of an outage event being caused by another concurrent end-device using the same SF.
This outage event is defined when the received power of a given desired signal is not four times (i.e. 6 dB) stronger than any other concurrent transmission in the same SF.
In any case, the effect of the aggregate interference from all users using the same SF is not considered, but only the effect of the dominant (stronger) interferer.
In the other two articles \cite{Nardelli2016MaximizingConstraints,Tome2016JointUsers}, the authors propose a general optimization of the link throughput based on Shannon capacity, where smart meters and aggregators (acting as a base-station, a gateway) are unlicensed users of the uplink channel of the cellular network in a spectrum sharing setting.
Their proposed scenario only considers the effects of the interference from the licensed mobile devices (modeled as a Poisson point process) in the aggregator reception, showing that the maximum achievable throughput actually happens with a less strict outage constraint.
This higher outage constraint, however, is shown to have a small effect in the average power demand signal reconstruction by the aggregator.
We find, nevertheless, that these results are still very analytical, only relying on an abstract conception of cognitive radio and spectrum sharing without specifying any possible technology.
\vspace{-2ex}
\subsection{Sampling strategy}
Another interesting result from \cite{Tome2016JointUsers} is the comparison between time- and event-based sampling, indicating that the former may produce redundant samples during most of the day while being insufficient at other times.
The idea of event-based sampling for sensor networks has been discussed for some time (e.g. \cite{Miskowicz2006Send-On-DeltaStrategy}), but only very recently adopted in electricity metering scenarios (e.g. \cite{Simonov2014HybridGrid,Simonov2017GatheringMetering}).
However, only in \cite{Tome2016JointUsers} is the joint performance of sampling and link outage probability analyzed (albeit with the previously discussed limitations).
\subsection{Relation to this contribution}
This paper is mainly built upon \cite{Georgiou2017LowScale,Nardelli2016MaximizingConstraints,Tome2016JointUsers}.
In relation to \cite{Georgiou2017LowScale}, we extend the stochastic geometry modeling by considering outage events due to not only the dominant interferer, but also the aggregate interference in a given SF.
Besides, instead of outage caused by Gaussian noise, we consider the aggregate interference of non-LoRa devices by explicitly including a density of interferers that transmit in the same channel.
We then have another variable affecting the outage probability when the non-LoRa interference is treated as noise \cite{Nardelli2015ThroughputInformation}.
This approach follows \cite{Nardelli2016MaximizingConstraints}, but specifies it to an actual LoRa deployment.
In other words, we move away from Shannon capacity in bit/s/Hz and the abstract conceptual model of those works (which even includes highly directional antennas leading to negligible unlicensed-to-licensed and unlicensed-to-unlicensed interference) toward an actual technology.
We analyze here a LoRa scenario following the system setting from \cite{Georgiou2017LowScale}, which is based on the technology specification and several deployment trials (refer to \cite{Raza2017LowOverview} and references therein).
Besides, the time-based and event-based schemes employed in \cite{Tome2016JointUsers} are implemented and tested for another electricity consumption database \cite{Zimmermann2009End-useSavings}, which has different granularity and household composition as well as geographical location.
\vspace{-1ex}
\section{LoRa deployment}
\label{sec:system}
\subsection{System model and performance metrics}
Fig.~\ref{fig:deployment} depicts an illustrative network deployment.
We analyze here the LoRa deployment from \cite{Georgiou2017LowScale} with some differences to be explained in the following.
We assume that smart-meters need to transmit to a single gateway a 25 byte message in a bandwidth of 125 kHz containing the cumulative energy consumption in a given period of time as well as the period.
The meters' locations are randomly distributed in the plane following a Poisson point process $\Phi_\mathrm{SF}$ with density $\lambda_\mathrm{SF}$ devices/km$^2$ \cite{Haenggi2013StochasticNetworks}.
When a given smart-meter wants to send a message to the gateway, the LoRa network server assigns to it a spreading factor (SF), which is randomly and independently allocated from SF7 to SF12.
A smart-meter thus has a probability $p_\mathrm{SF}=1/6$ of being allocated to a specific SF, independently across the points of $\Phi_\mathrm{SF}$.
\begin{figure}[!t]
\resizebox{1.0\columnwidth}{!}{\input{lorafig.tikz}}
\caption{{Illustrative figure of the network deployment, where LoRa end-devices (smart meters in this paper) are the black and blue circles, while the LoRa gateway is the square node at the center. The black circle linked by a dashed line to the gateway is the reference link, located $r$ km away. The other black circles represent LoRa devices using the same spreading factor (SF) as the reference link, while the blue ones are LoRa devices using other SFs. LoRa devices interfere with each other only if they transmit at the same time with the same SF (represented here by the black circles). Note that LoRa devices may also suffer interference from non-LoRa devices (depicted by red circles). Since these devices use a different radio access technology, but the same frequencies, they are treated as noise by the LoRa gateway.}}
\label{fig:deployment}
\end{figure}
The smart-meter traffic is related to the sampling strategy, which will be discussed later in Section \ref{sec:sampling}.
Regardless of the strategy adopted, the actual transmission is randomized to decrease the number of concurrent transmissions.
A duty cycle limitation of 1\% is also assumed, so that each smart meter has a limited number of wireless transmissions per day.
The actual density of smart-meters concurrently transmitting is then (much) smaller than $\lambda_\mathrm{SF}$.
We assume the density of active smart-meters as $p_0 \lambda_\mathrm{SF}$, where $p_0$ is the probability that any smart-meter from $\Phi_\mathrm{SF}$ is active while a given reference link is also transmitting.
Using the point process theory nomenclature, the mapping from $\Phi_\mathrm{SF}$ to a process in which some of the original points are erased is known as \textit{thinning}.
As the transmissions use LoRa, an outage at the reference link occurs when the other active smart-meters using the same SF lead to a signal-to-interference ratio (SIR) at the gateway in respect to the reference link below a given threshold $\beta_\mathrm{SF}$.
As we assume a random SF allocation with probability $p_\mathrm{SF}$, another thinning happens, resulting in a density $\lambda = p_\mathrm{SF} p_0 \lambda_\mathrm{SF} $ of active smart-meters using the same SF.
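For instance, with the values adopted later in the numerical results ($p_0 = 0.025$ and $p_\mathrm{SF} = 1/6$), the density of concurrent co-SF transmitters is

```latex
\[
\lambda = p_\mathrm{SF}\, p_0\, \lambda_\mathrm{SF}
        = \tfrac{1}{6} \times 0.025 \times \lambda_\mathrm{SF}
        \approx 4.2 \times 10^{-3}\, \lambda_\mathrm{SF},
\]
```

that is, only about one in 240 smart-meters contributes interference to the reference link at any given time, which explains why the outage curves presented later grow slowly with $\lambda_\mathrm{SF}$.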
We consider that the wireless channel is composed of two components: a distance dependent path-loss with exponent $\alpha>2$ and a gain due to multipath.
We assume the basic path-loss equation, where the received power is proportional to $r^{-\alpha}$ with $r$ being the distance between the smart-meter and the gateway, while the multipath is modeled as independent and identically distributed channel gains related to a Rayleigh fading distribution with unity mean.
Then, the outage probability $P_\mathrm{out:1}$ of the reference smart-meter can be computed as \cite{Haenggi2013StochasticNetworks,Nardelli2016MaximizingConstraints}:
\begin{equation}
P_\mathrm{out:1} = 1 - \exp\left(-k \lambda r^2 (\beta_\mathrm{SF})^{2/\alpha}\right),
\label{eq:out1}
\end{equation}
where $k = \pi \Gamma(1 + 2/\alpha) \Gamma(1 - 2/\alpha)$ with $\Gamma( \cdot)$ being the Gamma function.
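As an illustration, the closed-form expression in \eqref{eq:out1} can be checked against a direct Monte Carlo simulation of the Poisson field of co-SF interferers. The sketch below (the function names, the finite simulation disc and the trial count are our own illustrative choices) drops interferers uniformly in a disc around the gateway, applies unit-mean Rayleigh fading, and counts SIR outages; the constant $k$ is evaluated with $\delta = 2/\alpha$, the standard exponent for Rayleigh fading over a Poisson field:

```python
import math
import random

def k_const(alpha):
    # k = pi * Gamma(1 + 2/alpha) * Gamma(1 - 2/alpha), the Rayleigh/PPP constant
    d = 2.0 / alpha
    return math.pi * math.gamma(1 + d) * math.gamma(1 - d)

def p_out_analytic(lam, r, beta, alpha):
    # Closed form of the outage probability, Eq. (eq:out1)
    return 1.0 - math.exp(-k_const(alpha) * lam * r**2 * beta**(2.0 / alpha))

def sample_poisson(mean, rng):
    # Knuth's method; adequate for the moderate means used here
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def p_out_mc(lam, r, beta, alpha, radius=30.0, trials=10000, seed=1):
    # Interferers: PPP of density lam (devices/km^2) in a disc of given radius;
    # desired and interfering links undergo i.i.d. unit-mean Rayleigh fading.
    rng = random.Random(seed)
    mean_n = lam * math.pi * radius**2
    outages = 0
    for _ in range(trials):
        interference = 0.0
        for _ in range(sample_poisson(mean_n, rng)):
            d = radius * math.sqrt(rng.random())   # uniform point in the disc
            interference += rng.expovariate(1.0) * d ** (-alpha)
        signal = rng.expovariate(1.0) * r ** (-alpha)
        if interference > 0.0 and signal / interference < beta:
            outages += 1
    return outages / trials
```

For example, with $\lambda = 0.02$ devices/km$^2$, $r = 1.5$ km, $\beta_\mathrm{SF} = 4$ and $\alpha = 4$, the two approaches agree to within Monte Carlo noise; the truncation of the interference field at the disc boundary is negligible for $\alpha > 2$.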
\begin{table}[!t]
%
\renewcommand{\arraystretch}{1.3}
\centering
\caption{LoRa setting adapted from \cite{Georgiou2017LowScale}}
\label{tab:LoRa}
%
\centering
\begin{tabular}{l l l}
\hline \hline
\textbf{Spreading factor} $\mathrm{x}$ & \textbf{Bit-rate} $R_\mathrm{SFx}$ in kb/s& \textbf{Minimum SIR} $\beta_\mathrm{SFx}$\\ \hline
7 & 5.47 & 0.25 \\
8 & 3.13 & 0.125 \\
9 & 1.76 & 0.06 \\
10 & 0.98 & 0.03 \\
11 & 0.54 & 0.017 \\
12 & 0.29 & 0.01 \\\hline\hline
\end{tabular}
\end{table}
In addition to the outages caused by the concurrent LoRa (smart-meter) transmissions, we consider here outages from other (non-LoRa) users that use the same channel.
Different from \cite{Georgiou2017LowScale} and following \cite{Nardelli2016MaximizingConstraints}, we assume the noise is negligible compared to the interference.
If the non-LoRa devices' positions are modeled as a Poisson point process $\Phi_\mathrm{I}$ with density $\lambda_\mathrm{I}$ devices/km$^2$ and the aggregate interference is treated as noise by the gateway \cite{Nardelli2015ThroughputInformation}, the outage probability $P_\mathrm{out:2, x}$ in a link using a spreading factor $\mathrm{x}$ with $\mathrm{x} = 7,...,12$, with a respective SIR threshold $\beta_\mathrm{SFx}$, is given by:
\vspace{2ex}
\begin{equation}
P_\mathrm{out:2, x} = 1 - \exp\left(-k \lambda_\mathrm{I} r^2 (\beta_\mathrm{SFx})^{2/\alpha}\right),
\label{eq:out2}
\vspace{1ex}
\end{equation}
where $\beta_\mathrm{SFx}$ is different for each SF (refer to Table \ref{tab:LoRa}).
\textbf{Remark:} Both \eqref{eq:out1} and \eqref{eq:out2} assume that all users transmit with the same power.
Different transmit power, or even some kind of channel inversion, may also be incorporated into the proposed formulation, as discussed in, for example, \cite{Nardelli2016MaximizingConstraints} and \cite{Haenggi2013StochasticNetworks}.
Although these differences would affect the overall link performance, it would not change its qualitative behavior in relation to the density of interferers and to the smart-meter-gateway distance, which are our focus in this paper.
The outage probability that the reference link experiences when its transmission occurs using a spreading factor $\mathrm{x}$ can be computed as the complement of the successful transmission probability: $1- (1-P_\mathrm{out:1})(1-P_\mathrm{out:2, x})$.
Using \eqref{eq:out1} and \eqref{eq:out2},
\begin{equation}
P_\mathrm{out, x} = 1 - \exp\left(-k r^2 \left(\lambda (\beta_\mathrm{SF})^{2/\alpha} + \lambda_\mathrm{I} (\beta_\mathrm{SFx})^{2/\alpha}\right)\right).
\label{eq:out-total}
\end{equation}
As the SF allocation is random and independent, the average outage probability $P_\mathrm{out}$ can be computed as follows:
\begin{equation}
P_\mathrm{out} = p_\mathrm{SF} \sum\limits_\mathrm{x} P_\mathrm{out, x},
\label{eq:out-avg}
\end{equation}
remembering that, for LoRa, $\mathrm{x}=7,\ldots,12$ and $p_\mathrm{SF} = 1/6$ when the allocation is uniform across the SFs.
Besides, each SF$\mathrm{x}$ has a different bit-rate $R_\mathrm{SFx}$, as shown in Table \ref{tab:LoRa}.
This nominal rate, however, does not account for outages, so it is interesting to evaluate the effective bit-rate, here defined as $R_\mathrm{SFx, eff} = (1-P_\mathrm{out, x}) R_\mathrm{SFx}$.
Similar to \eqref{eq:out-avg}, we can evaluate the average effective bit-rate $R_\mathrm{SF, eff}$ as:
\begin{equation}
R_\mathrm{SF, eff} = p_\mathrm{SF} \sum\limits_\mathrm{x} (1-P_\mathrm{out, x}) R_\mathrm{SFx}.
\label{eq:rate-avg}
\end{equation}
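A short numerical sketch of \eqref{eq:out-avg} and \eqref{eq:rate-avg}, using the bit-rates and SIR thresholds of Table \ref{tab:LoRa} (the function and its default parameter values are our own illustrative choices, matching the setting of the numerical results):

```python
import math

# Bit-rate (kb/s) and minimum SIR threshold per spreading factor (Table 1)
LORA = {7: (5.47, 0.25), 8: (3.13, 0.125), 9: (1.76, 0.06),
        10: (0.98, 0.03), 11: (0.54, 0.017), 12: (0.29, 0.01)}

def avg_outage_and_rate(lam_sf, lam_i, r, alpha=4.0, beta_co=4.0,
                        p0=0.025, p_sf=1.0 / 6.0):
    """Average outage (eq:out-avg) and effective bit-rate (eq:rate-avg)."""
    delta = 2.0 / alpha
    k = math.pi * math.gamma(1 + delta) * math.gamma(1 - delta)
    lam = p_sf * p0 * lam_sf          # active co-SF smart-meter density
    p_out_avg = rate_avg = 0.0
    for sf, (rate, beta_x) in LORA.items():
        # eq:out-total: co-SF LoRa interference plus non-LoRa interference
        p_out_x = 1 - math.exp(-k * r**2 * (lam * beta_co**delta
                                            + lam_i * beta_x**delta))
        p_out_avg += p_sf * p_out_x
        rate_avg += p_sf * (1 - p_out_x) * rate
    return p_out_avg, rate_avg
```

Running this sketch over a range of $\lambda_\mathrm{SF}$ reproduces the qualitative behavior discussed next: the average outage grows smoothly with the smart-meter density while the average effective bit-rate decreases.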
\subsection{Numerical results}
We present here the numerical results assuming: a path-loss exponent $\alpha = 4$ (urban environment), a SIR threshold related to LoRa end-devices using the same SF $\beta_\mathrm{SF} = 4$ (6dB) and a thinning probability $p_0 = 0.025$ related to the traffic and duty cycle constraint; the numerical setting related to each different SF is given in Table \ref{tab:LoRa}.
When not explicitly mentioned, we assume the distance between the reference smart-meter to the gateway to be $r=1.5$ km, while the density of active non-LoRa devices to be $\lambda_\mathrm{I}= 0.05$ devices/km$^2$.
{These two values were arbitrarily chosen since they do not qualitatively change the analysis \cite{Haenggi2013StochasticNetworks}; the way they affect the link performance will be presented when discussing Fig. \ref{fig:avg-out-li} and \ref{fig:out2-r}.}
Fig. \ref{fig:out-lsf} shows how the density $\lambda_\mathrm{SF}$ of smart-meters affects the outage probability $ P_\mathrm{out, x}$ for each possible SF$\mathrm{x}$ and its respective average $P_\mathrm{out}$.
Although the relation between these variables in \eqref{eq:out-total} is exponential, the range of interest does not present a steep behavior, regardless of the SF.
This effect comes from the thinning processes, which reduce the number of active concurrent transmissions at the same SF.
Therefore, the effects of the smart-meters' interference are not so dramatic.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG2_17-TIE-3215.eps}
%
\caption{Outage probability as a function of the density $\lambda_\mathrm{SF}$ of users served by the gateway for the different spreading factors (SFs) and $r=1.5$km. The red curve is its average considering $p_\mathrm{SF}=1/6$.}
\label{fig:out-lsf}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG3_17-TIE-3215.eps}
%
\caption{Outage probability as a function of the density $\lambda_\mathrm{SF}$ of smart-meters served by the gateway for the different SFs and $r=1.5$km. The red curve is its average considering $p_\mathrm{SF}=1/6$ while the other curves consider that all users are allocated only to one SF.}
\label{fig:out2-lsf}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.87\columnwidth]{FIG4_17-TIE-3215.eps}
%
\caption{Effective bit-rate as a function of the density $\lambda_\mathrm{SF}$ of smart-meters served by the gateway for the different SFs and $r=1.5$km. The red curve is the average outage probability considering that the end-users are randomly assigned to a specific SF with probability $1/6$.}
\label{fig:error-rate-lsf}
\vspace{-1ex}
\end{figure}
Fig. \ref{fig:out2-lsf} reinforces this idea by showing the link outage probability when \textit{all} smart-meters are allocated in the same SF.
This case, compared with the average using an equal share (i.e. $p_\mathrm{SF}=1/6$), shows the steep exponential behavior, so that the outage probability grows much faster with $\lambda_\mathrm{SF}$.
Besides, from both figures, we confirm that the lower the spreading factor, the higher the outage probability for the same setting.
This fact is expected since higher SFs imply lower SIR thresholds for successful reception, obtained at the expense of the link bit-rate.
Fig. \ref{fig:error-rate-lsf} shows the effective bit-rate for the different SFs and its average from \eqref{eq:rate-avg}.
As in Fig. \ref{fig:out-lsf}, one can see a smooth decrease (not as steep as expected from exponential relations) with respect to $\lambda_\mathrm{SF}$, regardless of the SF.
The link can nevertheless transmit, on average, at a bit-rate between 1 and 2 kb/s.
Figs. \ref{fig:avg-out-lsf} and \ref{fig:avg-out-li} present how the average outage probability changes with the density $\lambda_\mathrm{I}$ of non-LoRa devices.
For the range studied here, we can see that, in the worst case scenario with $\lambda_\mathrm{SF} = 4$ and $\lambda_\mathrm{I}=0.2$ devices/km$^2$, the average outage is 60\% (a relatively high value).
Similar to the preset noise level in \cite{Georgiou2017LowScale}, the interference from non-LoRa devices imposes a floor level in the outage probability, but dependent on another parameter, namely $\lambda_\mathrm{I}$.
In specific terms, the higher $\lambda_\mathrm{I}$, the higher the outage floor, as indicated by \eqref{eq:out2}.
Consequently, if the density of non-LoRa devices in a given region is high enough, LoRa deployments will experience very poor performance, especially for long-range links.
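The floor becomes explicit by letting the LoRa load vanish in \eqref{eq:out-total} (recall that $\lambda = p_\mathrm{SF}\, p_0\, \lambda_\mathrm{SF}$):

```latex
\[
\lim_{\lambda_\mathrm{SF} \to 0} P_\mathrm{out, x}
= 1 - \exp\left(-k \lambda_\mathrm{I} r^2 (\beta_\mathrm{SFx})^{2/\alpha}\right)
= P_\mathrm{out:2, x},
\]
```

i.e., exactly \eqref{eq:out2}: no reduction of the LoRa traffic can push the outage probability below the level set by the non-LoRa interference.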
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG5_17-TIE-3215.eps}
%
\caption{Average outage probability with a random allocation of SF with probability $1/6$ as a function of the density $\lambda_\mathrm{SF}$ smart-meters served by the gateway for different densities
$\lambda_\mathrm{I}$ and $r=1.5$km.}
\label{fig:avg-out-lsf}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG6_17-TIE-3215.eps}
%
\caption{Average outage probability considering a random allocation of SF with probability $1/6$ as a function of the density $\lambda_\mathrm{I}$ of non-LoRa devices for different densities $\lambda_\mathrm{SF}$ of users served by the gateway and $r=1.5$km.}
\label{fig:avg-out-li}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG7_17-TIE-3215.eps}
%
\caption{Outage probability vs. the distance $r$ for $\lambda_\mathrm{SF} = 0.5$ and $\lambda_\mathrm{I}=0.05$. The red curve is the average outage probability considering the smart-meters are assigned to a specific SF with probability $1/6$.}
\label{fig:out2-r}
\end{figure}
To evaluate the effect of the distance on the link performance, Fig. \ref{fig:out2-r} shows the average outage probability when the distance between the reference smart-meter and the gateway changes.
As stated in \eqref{eq:out-total}, the relation is exponential and depends on $r^2$, so growing $r$ implies a steep increase in outage events.
This plot is important for network deployment when considering the sampling strategies, since it helps determine the worst-case scenario so that the most suitable gateway position can be chosen to achieve a minimum quality of signal reconstruction.
For example, if the gateway is planned to have a range of $r=4$ km, the worst-case average outage probability is about $70\%$, which probably leads to a poor signal reconstruction (to be assessed in Section \ref{sec:sampling}).
This plot may also indicate that the SF allocation strategy used in \cite{Georgiou2017LowScale} based on distance ranges may outperform the random strategy.
However, from Fig. \ref{fig:out2-r}, this is not so obvious and requires further study, since there is a clear trade-off between sharing the spectrum and the link distance.
This will be further discussed in Sec. \ref{sec:discussions}.
\vspace{-1ex}
\section{Sampling strategies}
\label{sec:sampling}
We follow here the methodology presented in \cite{Tome2016JointUsers}, but considering a different database \cite{Zimmermann2009End-useSavings}, which comprises around 400 houses with measurement lengths ranging from a few days to a whole year at 10-minute granularity.
The sampling strategies chosen are (i) time-based with a sampling frequency of 30 minutes, and (ii) event-based as described in \cite{Simonov2014HybridGrid}.
While the time-based implementation is straightforward, the other depends on some simple processing to identify a prescribed event.
Following \cite{Simonov2014HybridGrid,Tome2016JointUsers}, the event is defined by two situations: (a) a certain
amount of energy consumption $E_\mathrm{lim}$ is reached; or (b) a sudden change in the power demand denoted by $P_\mathrm{lim}$ is detected.
\begin{algorithm}[!t]
\caption{Event-based setting}
\label{alg:event}
\begin{algorithmic}
\State $P_\mathrm{lim} \gets \mathrm{Set~Power~threshold}$
\State $P_\mathrm{step} \gets \mathrm{Set~Min~Power~Increase}$
\State $E_\mathrm{lim} \gets \mathrm{Set~Energy~threshold}$
\State $OK \gets $ False
\While{$OK$ is False}
\State $measEvent \gets EventMeasuring(P_\mathrm{lim}, E_\mathrm{lim})$
\If{$len(measEvent) \geq len(measTime)$}
\State $P_\mathrm{lim} \gets P_\mathrm{lim} + P_\mathrm{step}$ \Comment{Increase threshold}
\If {$sum(measEvent('Power')) == 0$}
\Comment{Power threshold set too high}
\State $E_\mathrm{lim} \gets 2*E_\mathrm{lim}$ \Comment{Increase energy limit}
\State $P_\mathrm{lim} \gets P_\mathrm{step}$ \Comment{Reset threshold}
\EndIf
\Else
\State $OK \gets$ True
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\columnwidth]{FIG8_17-TIE-3215.eps}
%
\caption{{Comparison between the actual measurements and sampled signals for a single house from 6:00 to 18:00 on October 3, 2006 ($x$-axis is presented as mm-dd hh). Top: Time-based; Bottom: Event-based.}}
\label{fig:examples_sampling}
\vspace{-3ex}
\end{figure}
Here, the initial parameters for the event-based approach were set as $E_\mathrm{lim} = 2$ kWh, $P_\mathrm{lim} = 1$ kW, with increments of $P_\mathrm{step}= 0.5$ kW.
These values were arbitrarily chosen so as to yield a number of samples smaller than or equal to that of the time-based approach (48 samples per day).
For households where the event-based approach leads to more samples than the time-based one, we implemented a simple procedure (presented in Alg. \ref{alg:event}) to adjust the parameters so as to achieve a similar number of samples in the two cases.
Such procedure is neither optimal nor exhaustive; however, it builds a smaller set of measurements that is suitable for the present study.
On average, a reduction of about 17\% in the total number of measurements was observed; depending on the consumption patterns of the houses, the reduction ranges from about 70\% (14 measurements per day on average) to 0 (no reduction).
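A minimal sketch of the event-based sampler described above, assuming 10-minute power readings in kW (the function name and signature are our own; the two triggering conditions follow \cite{Simonov2014HybridGrid,Tome2016JointUsers}, and the defaults mirror the initial parameters stated above):

```python
def event_based_samples(power_kw, e_lim_kwh=2.0, p_lim_kw=1.0, dt_h=1.0 / 6.0):
    """Indices at which a reading is transmitted: either the energy
    accumulated since the last transmission reaches e_lim_kwh, or the
    power changes by at least p_lim_kw between consecutive readings."""
    samples = [0]                # always transmit the first reading
    energy = 0.0
    for i in range(1, len(power_kw)):
        energy += power_kw[i] * dt_h          # kWh added in this 10-min slot
        if (energy >= e_lim_kwh
                or abs(power_kw[i] - power_kw[i - 1]) >= p_lim_kw):
            samples.append(i)
            energy = 0.0                      # restart the energy counter
    return samples
```

For a flat 0.5 kW profile, only the energy condition fires (roughly every 4 hours with the defaults), whereas a sudden appliance switch-on triggers an immediate sample through the power condition.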
Fig. \ref{fig:examples_sampling} shows an example of the two strategies compared to the original measurements.
The results illustrate a single household between 6:00 and 18:00 on Oct. 3, 2006.
Due to the thresholds chosen for plotting the event-based strategy, some smaller peaks were skipped (the power threshold is related to power variation, not to the absolute values).
Remember also that the amount of points generated with these thresholds is designed to be similar to the time-based strategy's amount.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG9_17-TIE-3215.eps}
%
\caption{CV(RMSE) vs. the outage probability for the time-based strategy. Colored regions indicate the percentage of houses which belong to the range. Dashed lines indicate the extreme values.}
\label{fig:error_time}
\vspace{-1ex}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG10_17-TIE-3215.eps}
%
\caption{CV(RMSE) vs. the outage probability for the event-based strategy. Colored regions indicate the percentage of houses which belong to the range. Dashed lines indicate the extreme values.}
\label{fig:error_event}
\vspace{-3ex}
\end{figure}
We ran the same procedure illustrated in Fig. \ref{fig:examples_sampling} for 350 houses for a full week to assess how outage events from the LoRa system would affect the curve reconstruction by the aggregator, following \cite{Nardelli2016MaximizingConstraints,Tome2016JointUsers}.
The outage probabilities were varied from 0 to 30\%, so the aggregator observes fewer points, each transmitted sample being lost with such a probability ({note that this range of values comes from the LoRa analysis discussed in Sec. \ref{sec:system}}).
At the end of each day, the aggregator makes a linear interpolation between subsequent points to reconstruct the average power curve.
When a sample is lost, the reconstruction error is computed by comparing the interpolated point with the original one (from the database).
The error analysis is based on 100 simulations for each household.
The quality metric was the root-mean-square error (RMSE) between the measurements, normalized by the mean consumption of each house, herein called CV(RMSE).
While the measurements cannot be used to directly compare two houses with reasonably different consumptions, it provides a fair tool to compare the different sampling strategies {as well as compare the quality degradation due to different values of outage}.
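The reconstruction-and-error pipeline described above can be sketched as follows (a simplified sketch with a synthetic daily demand curve, not the paper's code; the `cv_rmse` helper and the rule of always keeping the endpoints are our assumptions):

```python
import numpy as np

def cv_rmse(times, values, outage_p, rng):
    """RMSE of the interpolated reconstruction, normalized by the mean
    consumption (CV(RMSE)), after random packet losses."""
    # Each transmitted sample is lost independently with probability outage_p.
    received = rng.random(len(times)) > outage_p
    received[0] = received[-1] = True  # keep endpoints so interp covers the day
    # The aggregator linearly interpolates between the received points.
    recon = np.interp(times, times[received], values[received])
    return np.sqrt(np.mean((recon - values) ** 2)) / np.mean(values)

rng = np.random.default_rng(1)
t = np.arange(48.0)                     # half-hourly samples over one day
demand = 1.0 + 0.8 * np.maximum(0.0, np.sin(2 * np.pi * (t - 14) / 48))
# Average error over 100 simulated days at 30% outage, as in the set-up above.
err = np.mean([cv_rmse(t, demand, 0.3, rng) for _ in range(100)])
```

With zero outage the reconstruction is exact, and the error grows with the outage probability, mirroring the trends in the figures.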
Figs. \ref{fig:error_time} and \ref{fig:error_event} show the results for the time-based and event-based strategies.
The colored bands give the percentage of houses that fall into each range, whereas the dashed lines indicate the extreme values.
One can see that the time-based strategy is less sensitive to outage events, i.e., its CV(RMSE) changes little as the outage probability grows, but its overall performance is poor compared to the event-based strategy.
This indicates that time-based sampling is redundant in many periods (e.g., at night, when electricity demand tends to be minimal), so missing points can be successfully reconstructed.
Conversely, event-based samples tend to carry more information, so missing points have a bigger impact on the signal reconstruction.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{FIG11_17-TIE-3215.eps}
%
\caption{Relative performance between time-based and event-based strategies. Results above zero imply the event-based strategy is better (i.e. smaller error), while below zero the time-based strategy is better.}
\label{fig:error_relative}
\vspace{-2ex}
\end{figure}
Fig. \ref{fig:error_relative} compares the two strategies by subtracting the errors from both measurements to evaluate their relative performance.
Positive values indicate that the event-based strategy outperforms the time-based one.
The event-based strategy provides better reconstruction in almost 90\% of the cases, regardless of the amount of error involved.
On the other hand, as previously discussed, this advantage tends to shrink if outages become too frequent.
It is worth noting that this effect is a byproduct of the event-based design, in which the number of samples cannot grow indefinitely but is limited to about 48 per day (the number of time-based samples).
With such a limited number of samples, the information carried by each measurement grows.
\vspace{-1ex}
\section{Discussions}
\label{sec:discussions}
The results presented in the previous two sections showed that: (i) the outage probability is affected by the interference and the smart-meter-gateway distance and (ii) the event-based strategy usually outperforms the time-based in terms of reconstruction error.
Now, let us consider the following situation: a communication engineer needs to deploy LoRa gateways in a city so that the reconstruction error is within a given quality limit.
From the proposed scenario, the engineer can only decide about the gateway range and the sampling strategy; the other parameters like density of smart-meters and non-LoRa devices, as well as the power demand, are given.
For example, suppose a distribution company requires that the signals reconstructed by the aggregator have, in the worst case, a CV(RMSE) of 4 for 90\% of the households under its coverage.
Using the event-based scheme, this performance can be achieved with an outage probability of up to 30\%, as determined by Fig. \ref{fig:error_event}.
If the density of smart meters is 0.5 and that of non-LoRa devices is 0.05, the red curve in Fig. \ref{fig:out2-r} shows that the gateway range shall be at most $2$ km.
This simple case illustrates how a LoRa communication system together with a ``smarter'' sampling strategy can be used to deploy electricity metering in cities.
Another important feature introduced by the event-based approach is an inherent detection of peaks in power demand, which may be useful to identify unusual or critical situations.
It is worth noting that, although the results are based on realistic numbers and actual electricity demand data, they are used here to test a concept that must be further analyzed and optimized.
The same methodology may be directly applied to, for instance, water and gas metering since the consumption patterns are similar \cite{Mcneill2017EstimatingData}.
Another interesting use case might be commuting and traffic, whose daily patterns are somewhat similar to electricity demand (e.g. \cite{Mcneill2017EstimatingData}); this, however, would require a more thorough study of the event definition and the gateway locations.
In general, despite the particularities of each specific application, we understand that LoRa combined with an event-based sampling strategy provides a scalable solution for massive machine-type communications and IoT deployments needed in the future smart cities.
\section{Conclusions}
\label{sec:conclusions}
{This paper studied a LoRa wireless network deployment for electricity metering, where this technology combined with an event-based metering strategy led to fairly good signal reconstruction quality.
The new results presented here advance the topic as follows.}
We extended the LoRa analysis of \cite{Georgiou2017LowScale} by using a stochastic geometry approach that includes LoRa and non-LoRa devices as interferers.
We employed a randomized (distance-independent) spreading factor allocation without favoring devices closer to the gateway, providing a fairer allocation for the specific metering application.
Note that other applications where fairness is not an issue would be better served by the usual distance-dependent allocation (e.g. temperature monitoring in cities).
Therefore, we see our results as complementary to and consistent with \cite{Georgiou2017LowScale}.
{Our study also moved beyond \cite{Nardelli2016MaximizingConstraints,Tome2016JointUsers} by considering LoRa technology, in contrast to the previously developed cognitive radio approach based on abstract Shannon limits of interference-limited networks and perfect directional antennas. While \cite{Nardelli2016MaximizingConstraints,Tome2016JointUsers} focused on optimizing communication system performance under such idealized conditions, we assume LoRa specifications to serve as guidance for actual deployments. Nevertheless, although the present model and our objectives are quite different from \cite{Nardelli2016MaximizingConstraints,Tome2016JointUsers}, the system performance (evaluated in terms of outage) is still limited by the same factor, namely co-channel interference.
}
{Besides, we reinforced the strength of the event-based metering introduced by Simonov et al. \cite{Simonov2014HybridGrid,Simonov2017GatheringMetering} (also used in \cite{Tome2016JointUsers}) by showing that it consistently achieves better signal reconstruction than the time-based scheme, now on a different dataset (i.e. different demand profiles and data granularity).
To reach this result, we proposed a simple algorithm that sets the event thresholds from each household's consumption data, so that the event- and time-based strategies generate approximately the same number of samples.
This algorithm is general: it determines the thresholds from historical data and can be easily adapted to provide ``real-time'' adjustments.
}
{All in all, we argue that the proposed approach may be used in planning actual deployments by (\textit{i}) defining event-based metering with thresholds set from historical data, and (\textit{ii}) locating LoRa gateways, since their range can be directly related to the signal reconstruction quality.
Although the results are quite abstract at this stage, we plan to extend this framework to real case studies (ranging from dense cities to remote rural zones), as well as assess its feasibility for other smart city applications, such as water metering and heating.
}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{s1}
Neural architectures have achieved state-of-the-art performance on various large-scale tasks in several domains spanning language, vision, speech, and recommendations. However, modern deep learning algorithms consume enormous energy in terms of compute power [\cite{Li2016EvaluatingGPUs}, \cite{Strubell2019EnergyNLP}]. The abundance of data and improvements on the hardware side have made it possible to train ever larger models, increasing energy consumption at a massive rate. These algorithms perform intense mathematical computation during the forward and backward pass for each piece of data to update the large weight matrices. Another crucial factor which significantly adds to the compute cost is the immense experimentation and tuning required to choose an optimal initialization, architecture, and hyper-parameters: several variants of a model are generated and trained for a particular task on a particular dataset to make an optimal choice.
While there is active research on making deep learning algorithms greener and more economical, such as compressing the model [\cite{Kozlov2020NeuralInference}], training on a subset of the dataset [\cite{Frankle2018TheNetworks}], and designing hyper-efficient networks [\cite{Tang2020SearchingConvolution}], there is still no standard mechanism to choose an initialization or architecture design. Though various theories have been proposed for choosing an initialization distribution, architecture design, and training mechanism, these nets are still most often manually designed through extensive experimentation.
For instance, initialization schemes, such as Xavier [\cite{GlorotUnderstandingNetworks}], He [\cite{He2015DelvingClassification}],
random orthogonal [\cite{Saxe2013ExactNetworks}] have been proposed in the past. Despite some theoretical backing, these commonly used initialization techniques are simple and heuristic [\cite{GoodfellowDeepLearning}]. The more recent neural architecture search (NAS) research has fueled efforts in automatically finding good architectures [\cite{Liu2018DARTS:Search}, \cite{Zoph2016NeuralLearning}]. It has been observed that these NAS algorithms are computationally very intensive and time-consuming as well. In addition, changing the dataset or tasks requires starting these algorithms anew.
Furthermore, DNNs are characterized by variability in performance. The performance of the same architecture, training scheme, and initialization on a particular dataset can vary due to the stochastic nature of the deep learning algorithm, system architecture, operating system, and/or libraries. Consequently, experimentation is required to choose an optimal initial weight matrix even when sampling from a fixed initialization distribution.
Therefore, in practice, choosing good initializations and architectures often reduces to experimentally testing several possibilities. While the need for experimentation cannot be eliminated entirely, we propose a mechanism to test several weight initializations and architectures with less computational power. In this paper, we propose an early success indicator that predicts, at an early stage of training, the extent to which a DAI (dataset-architecture-initialization combination) trained with stochastic gradient descent will be successful. Such a score is crucial for testing several initializations and architectures at reduced cost. By discarding, at an earlier stage, DAIs that do not have the potential to learn effectively, we save compute power and run time.
We make the following contributions in this paper:
\begin{enumerate}
\item Introduce early success indicators to predict the success of an initialization for a particular architecture and dataset. We also extend our framework to predict the success of different initializations and architectures for a fixed dataset (Appendix \ref{a:2}).
\item Analyze the performance of the proposed early success indicators through extensive empirical study on different datasets, architectures, and initializations.
\end{enumerate}
The paper is organized as follows: Section \ref{s2} outlines the background work and formalisms required to build the scoring mechanisms. Section \ref{s3} introduces the early success indicators. Section \ref{s5} explains the experimental setting and section \ref{s6} states the evaluation technique for early success indicators. Section \ref{s7} presents the results, and section \ref{s8} establishes the conclusion and future work.
\section{Background Work}\label{s2}
Trained neural networks are known to exhibit a compressibility property [\cite{Arora2018StrongerApproach}], which is exploited to design early success indicators.
\subsection{Compressibility in trained neural networks}
The early success indicator introduced in this paper is motivated by the noise compression properties outlined in [\cite{Arora2018StrongerApproach}].
In their paper, Arora et al. propose an empirical noise-stability framework for DNNs that computes each layer's stability to noise injected at lower layers. For a well-trained network, each layer's output is stable to Gaussian noise injected at lower layers: the added noise in a trained neural network (capable of generalization) gets attenuated as it propagates to higher layers. This noise stability allows individual layers to be compressed.
To test the noise stability of a network, Gaussian noise is injected at one of the hidden layers and its propagation throughout the network is studied. It has been observed empirically that, as the noise propagates to deeper layers, its effect diminishes. The noise sensitivity $\Psi_{N}$ of a mapping $M$ from a real-valued vector $x$ to a real-valued vector, with respect to a noise distribution $N$, is defined as:
\begin{equation}
\Psi_{N}(M,x) = \mathbf{E}_{\eta \sim N}\left[\frac{||M(x+\eta||x||)-M(x)||^2}{||M(x)||^2}\right]
\end{equation}
Noise stability of a linear mapping with respect to Gaussian noise has been derived mathematically in [\cite{Arora2018StrongerApproach}]. Their results prove that the noise sensitivity of a matrix $M$ at any vector $x \neq 0$ with respect to the Gaussian distribution $N(0,I)$ is $\frac{||M||^2_F||x||^2}{||Mx||^2}$. This suggests that if a vector $x$ is aligned with matrix $M$, the noise sensitivity is low, implying that $M$ is less sensitive to noise added at $x$. Low sensitivity implies that $M$ has some large singular values, giving rise to preferential directions along which the signal $x$ is carried, while the noise, which is uniform across all directions, gets attenuated. Intuitively, this translates into a non-uniform distribution of singular values at the output of a transformation that generalizes well. On further training, as the noise stability of the neural network strengthens, the noise gets suppressed further, the extent of compression increases, and the preferential direction of propagation gets aligned with the signal (higher singular values of $M$).
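The closed form above is easy to verify numerically. The following sketch (our own, with hypothetical dimensions) compares a Monte Carlo estimate of $\Psi_N$ against $\|M\|_F^2\|x\|^2/\|Mx\|^2$ for a linear map with a few dominant singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
# A linear map with a few dominant singular values (preferential directions).
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = np.concatenate([np.full(4, 10.0), np.full(d - 4, 0.1)])
M = U @ np.diag(s) @ V.T

x = V[:, 0]                       # signal aligned with a top direction of M

# Closed form: ||M||_F^2 ||x||^2 / ||M x||^2
closed = (np.linalg.norm(M, 'fro') ** 2) * (np.linalg.norm(x) ** 2) \
         / (np.linalg.norm(M @ x) ** 2)

# Monte Carlo estimate of E_eta[ ||M(x + eta ||x||) - M x||^2 / ||M x||^2 ]
n = 20000
eta = rng.standard_normal((n, d))
noisy_out = (x + np.linalg.norm(x) * eta) @ M.T   # rows hold M(x + eta ||x||)
num = np.linalg.norm(noisy_out - M @ x, axis=1) ** 2
mc = num.mean() / np.linalg.norm(M @ x) ** 2
```

For this aligned $x$ the sensitivity is small (about 4), while a direction with singular value $0.1$ would give a sensitivity four orders of magnitude larger, illustrating how aligned signals suppress isotropic noise.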
It has been shown through experimentation that this compression property is observed for non-linear deep neural networks as well. We show that the singular values of the output matrix of the hidden layers in a well-trained DNN are non-uniformly distributed. We use this non-uniform distribution of singular values, and its evolution as training continues, to build the early success indicators, and empirically show their utility in predicting the success of a DAI for feed-forward networks.
\subsection{Related Work}
While there is active research being conducted on efficient hyperparameter optimization [\cite{Jasper2012}, \cite{Maclaurin2015}], neural architecture search [\cite{Liu2018DARTS:Search}, \cite{Zoph2016NeuralLearning}], learning to learn, and meta-learning [\cite{Andrychowicz2016}, \cite{Eggensperger2018}], the focus of these approaches is on identifying good configurations, often requiring high computational power. This work, however, focuses on establishing a simple framework for making an early prediction on the success of a particular configuration of a DAI by facilitating early give-up for configurations which might not train well on further training.
\section{Early Success Indicator}\label{s3}
We utilize the preferential direction in which the signal is carried in well-trained networks to predict the success of a DAI. We introduce two metrics that capture the singular value distribution of the output of the hidden layers of a DNN at a training step and its evolution as training progresses. The two metrics are combined to predict the success of a DAI early, enabling us to stop training networks that are unlikely to benefit from further training.
We consider two Dataset-Architecture (DA) settings and analyze the performance of the final learned network under different random initializations. For consistency of comparison, we use the same training procedure (detailed in Section \ref{s7}, with cross-entropy loss) that transforms the initialization parameters into the early success score parameters. This transformation is not entirely deterministic due to mini-batch effects, but we find these to be negligible for the synthetic datasets and conjecture the same for larger datasets. Figure \ref{fig:1} shows two initializations for one Dataset-Architecture combination: the \texttt{Shell} dataset (a synthetic dataset consisting of two classes; detailed in Section \ref{s5}) and a 4-layer MLP architecture (layers of dimensions 512, 256, 256, and 128, respectively). These initializations are trained using the RMSProp optimizer with a learning rate of $10^{-3}$ and a mini-batch size of 32. The first initialization trains to a validation accuracy of 85.74\% at the end of 200 epochs. Figures \ref{fig:1ib}-\ref{fig:1ic} show the sorted, normalized singular values obtained from the matrices of outputs of the first two hidden layers, of dimensions 512 and 256 respectively (the largest singular value is normalized to 1). With each epoch, the distribution of singular values compresses. This contrasts with the second initialization: even when trained for 350 epochs, the model reaches a validation accuracy of only 55.42\%, and the singular values of the hidden-layer outputs (Figures \ref{fig:1ie}-\ref{fig:1if}) show no significant compression as training progresses.
\begin{figure*}[h]
\subfloat[ Training and Validation accuracy (DAI-I) \label{fig:1ia}]{\includegraphics[width=0.32\textwidth]{Plots/Introduction_DA1_train.png}}\hfill
\subfloat[ Normalized singular values (sorted) - layer 1 of size 512 (DAI-I) \label{fig:1ib}]{\includegraphics[width=0.32\textwidth]{Plots/Introduction_DA1_SVD1.png}}\hfill
\subfloat[ Normalized singular values (sorted) - layer 2 of size 256 (DAI-I) \label{fig:1ic}]{\includegraphics[width=0.32\textwidth]{Plots/Intoduction_DA1_SVD2.png}}\hfill
\subfloat[ Training and Validation accuracy (DAI-II) \label{fig:1id}]{\includegraphics[width=0.32\textwidth]{Plots/Intoduction_DA2_train.png}}\hfill
\subfloat[ Normalized singular values (sorted) - layer 1 of size 512 (DAI-II) \label{fig:1ie}]{\includegraphics[width=0.32\textwidth]{Plots/Intoduction_DA2_SVD1.png}}\hfill
\subfloat[ Normalized singular values (sorted) - layer 2 of size 256 (DAI-II) \label{fig:1if}]{\includegraphics[width=0.32\textwidth]{Plots/Intoduction_DA2_SVD2.png}}\hfill
\caption{Evolution of singular values for \texttt{Shell} dataset, \texttt{4-layer MLP} architecture, initialization from \texttt{Normal Xavier} scheme in (\ref{fig:1ia})-(\ref{fig:1ic}) DAI-I trained till 200 epochs and (\ref{fig:1id})-(\ref{fig:1if}) DAI-II trained till 350 epochs.}
\label{fig:1}
\end{figure*}
Through extensive experimentation in the following sections, we claim that the insight obtained from this shifting distribution of singular values gives a prescience of the DAI's performance and thereby enables us to only train models with higher chances of generalizing well. We propose that deep neural networks capable of learning on further training are characterized by a steep decay of singular values as training progresses and quantify this characteristic to evaluate the utility of our indicators.
{\textbf{Notation: }} We use the following notation for the task of multi-class classification with $k$ classes. Consider a dataset $\{(x_i,y_i)\}_{i=1}^n$, where $x_i \in \mathbb{R}^d$ and the label $y_i$ is an integer between 1 and $k$. A multi-class classifier $f$ transforms an input $x \in \mathbb{R}^d$ to $\mathbb{R}^k$. Let $f$ be a feed-forward neural network of depth $L$ parameterized by weight matrices $W_L,\dots,W_1$, where the dimension of hidden layer $i$ is $d_i$, so that $W_k \in \mathbb{R}^{d_k \times d_{k-1}}$. Let $(X,Y)$ be the test dataset with $m$ samples $\{(x_{ti},y_{ti})\}_{i=1}^m$, and let $A_i$ be the matrix obtained by passing the test dataset through the $i^{th}$ hidden layer of the trained network. $\{\sigma_{tij}\}_{j = 1}^{d_i}$ are the normalized singular values, in descending order, of the matrix $A_i$ at epoch $t$. For a hidden layer $l$ at epoch $t$, we define two metrics, $s_{olt}$ and $s_{slt}$, computed over a window of the last $t_0$ epochs.
\subsection{Capturing steep decay of the singular values - $s_{ot}$}
Metric $s_{ot}$ captures the non-uniform distribution of singular values at epoch $t$, i.e., the presence of preferential directions along which the signal is carried while noise in other directions is compressed.
\begin{equation}
s_{olt} = \beta\times \frac{d_l \times t_0}{\sum_{k=t-t_0}^{t}\sum_{i=1}^{d_l}\sigma_{kli}}
\end{equation}
$s_{olt}$ is the score for output of layer $l$ at epoch $t$. An average score can be calculated across all layers as
\begin{equation}
s_{ot} = \frac{1}{L}\sum_{l=1}^{L}s_{olt}
\end{equation}
$\beta$ is a hyper-parameter chosen empirically (set to $1$ in all experiments to follow). The higher the value of $s_{ot}$, the steeper the decay of the distribution of singular values: under a steep decay, $\sigma_{kli}$ becomes small quickly as $i$ grows (the largest singular value being scaled to 1), which yields a higher $s_{olt}$ for that layer. Therefore, $s_{ot}$ captures the non-uniformity of the distribution of singular values; a higher $s_{ot}$ indicates a more skewed distribution. Models which generalize well show a high value of $s_{ot}$ during training.
\subsection{Capturing shift of SVD decay - $s_{st}$}
$s_{st}$ quantifies this automatic dimensionality-reduction phenomenon across epochs, capturing the shift in the non-uniform distribution of singular values as training progresses.
\begin{equation}
s_{slt} = \frac{1}{(t_0-1)\times d_l}\times \sum_{k=t-t_0+1}^{t}\sum_{j=1}^{d_l}\alpha_{k}(\sigma_{(k-1)lj}-\sigma_{klj})
\end{equation}
$s_{slt}$ is the score for output of layer $l$. An average score is calculated similarly as
\begin{equation}
s_{st} = \frac{1}{L}\sum_{l=1}^{L}s_{slt}
\end{equation}
$\alpha_k$ is a hyper-parameter chosen empirically. It increases linearly with $k$, giving the highest weight to the latest epoch of the window and the lowest to the earliest; $\alpha_k$ lies in the range $[0,1]$.
The higher the value of $s_{st}$, the stronger the shift towards decay. As the validation loss of a model decreases, its transformation matrices become aligned with the input data, leading to further skewness and a positive $s_{st}$.
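To make the two metrics concrete, the following sketch (our reading of the definitions of $s_{olt}$ and $s_{slt}$ above, with $\alpha_k$ set to 1 and a window of $t_0$ consecutive epochs; not the authors' code) computes the per-layer scores from a stack of sorted, normalized singular values:

```python
import numpy as np

def layer_scores(sigma_window, beta=1.0):
    """Per-layer metrics from a window of sorted, normalized singular values.

    sigma_window: shape (t0, d_l); row k holds the singular values of one
    hidden layer's output at epoch k of the window (largest scaled to 1).
    Returns (s_o, s_s): the decay score s_olt and the shift score s_slt
    (here with uniform alpha_k = 1 for simplicity).
    """
    t0, d_l = sigma_window.shape
    # s_olt: skewed spectra have a small total mass, hence a large score.
    s_o = beta * (d_l * t0) / sigma_window.sum()
    # s_slt: average per-value decrease between consecutive epochs.
    diffs = sigma_window[:-1] - sigma_window[1:]
    s_s = diffs.sum() / ((t0 - 1) * d_l)
    return s_o, s_s

# Toy window: a spectrum that compresses over 5 epochs (faster decay each epoch).
t0, d = 5, 64
idx = np.arange(d)
window = np.stack([np.exp(-0.1 * (1 + k) * idx) for k in range(t0)])
s_o, s_s = layer_scores(window)   # s_s > 0: the spectrum is still compressing
```

A positive $s_s$ signals that the spectrum keeps compressing, while a flat window would give $s_s \approx 0$.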
The scores $s_{st}$ and $s_{ot}$ are normalized across different feed-forward architectures by dividing each hidden layer's score by the number of units in the layer. The parameter $\beta$ scales $s_{ot}$ into the range of $s_{st}$, which is crucial when combining the two scores. The two metrics are then combined into a score, independent of the labels of the validation dataset, that serves as an early indicator of the success of a DAI trained using gradient descent.
For a particular DAI combination, an early success indicator calculates the score by combining the two metrics $s_{ot}$ and $s_{st}$ at every epoch $t$. This combined score can be used to make a binary decision on whether to continue training that DAI or discard it. If the score crosses a certain threshold $t_1$, the model is trained further; otherwise, the model is discarded before completing the training procedure. This allows several initializations to be tested for different datasets and tasks with less time and computational power. We calculate the final score by combining the two metrics:
\begin{equation}
s_t = |log (s_{ot})| + \eta \times s_{st}
\end{equation}
$\eta$ is a hyper-parameter used to take a weighted average of the two terms in the combined score (chosen to be $3.5 \times 10^3$). It is worth noting that compressed singular values do not necessarily imply a subspace in which the signal propagates effectively: there can be network configurations with extremely compressed singular values that do not optimize or generalize on further training. Such configurations are captured through $s_{st}$. To discard such models, we introduce two more thresholds, $t_2$ and $\delta$, which allow us to remove models whose singular values are compressed yet show very small changes in $s_{st}$; this phenomenon is usually accompanied by a non-decreasing validation loss. Training such models further does not change the distribution of singular values, so their training can be stopped early. Considering these factors, the final combined score $s_F$ is summarized as follows:
\begin{equation}
s_F =
\begin{cases}
0 & \text{if } \text{$s_{ot}$} \geq t_2 \text{ and $s_{st}$} \leq \delta \\
s_t& \text{otherwise}
\end{cases}
\end{equation}
The score $s_F$ is calculated on validation data points but does not require the labels of the data points. However, the scores can be improved with the availability of labels for the validation dataset. We incorporate the validation accuracy (normalized in the range [0, 1]) at a particular epoch $t$ as $v_t$ into the scoring mechanism and propose a new score $s_{Fv}$ at that epoch as follows:
\begin{equation}
s_{Fv} =
\begin{cases}
v_t & \text{if } \text{$s_{ot}$} \geq t_2 \text{ and $s_{st}$} \leq \delta \\
v_t \times (1+ min(\frac{1-v_t}{v_t}, \gamma s_t ))& \text{otherwise}
\end{cases}
\end{equation}
The $min$ function ensures that the score $s_{Fv}$ lies in the interval $[0,1]$. It is an indicative measure of the final validation accuracy of a DAI combination based on the current validation accuracy $v_t$. $\gamma$ is a hyper-parameter that scales the score $s_t$ and is computed identically for all DAIs ($\gamma$ is taken as $2 \times 10^{-4} \times v_t \times t$, where $t$ is the current epoch at which the scores are calculated).
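Putting the pieces together, the scores $s_F$ and $s_{Fv}$ can be sketched as follows (our own sketch, not the authors' code; $\eta$ and the form of $\gamma$ follow the text, while the threshold values `t2` and `delta` are placeholders, since the paper does not report them):

```python
import numpy as np

def combined_scores(s_o, s_s, v_t, t, eta=3.5e3, t2=50.0, delta=1e-4):
    """Final scores s_F (label-free) and s_Fv (label-aware).

    eta and the form of gamma follow the text; t2 and delta are placeholder
    thresholds (the paper does not report their values).
    """
    # Spectrum already compressed but no longer shifting: training has stalled.
    stalled = (s_o >= t2) and (s_s <= delta)
    s_t = abs(np.log(s_o)) + eta * s_s
    s_F = 0.0 if stalled else s_t
    gamma = 2e-4 * v_t * t
    if stalled:
        s_Fv = v_t
    else:
        # min(...) keeps s_Fv within [0, 1], as an estimate of final accuracy.
        s_Fv = v_t * (1.0 + min((1.0 - v_t) / v_t, gamma * s_t))
    return s_F, s_Fv
```

For instance, a compressing spectrum boosts the current validation accuracy, while a stalled model (`s_o` large, `s_s` near zero) keeps `s_Fv = v_t` and gets `s_F = 0`, flagging it for discarding.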
\section{Experimental Setting}\label{s5}
The goal of the early success indicators $s_{F}$ and $s_{Fv}$ is to predict which DAIs will be successful. In the following sections, we confine the analysis to testing various initializations for a fixed task, dataset, architecture, and training mechanism; an extension to testing combinations of different architectures and initializations on a fixed task, dataset, and training mechanism is given in Appendix \ref{a:2}. We define a "good" initialization as weight matrices which yield a good validation accuracy upon training on a fixed dataset and architecture using a fixed training procedure. Obtaining a good neural network model is heavily affected by the choice of initial parameters: the initial point influences whether the learning process converges at all and, when it does, how fast convergence happens, the error reached, and the generalization gap. There is no universal initialization technique, as modern initialization strategies are heuristic, and the current understanding of how the initial point affects optimization and generalization is still incipient; in practice, choosing a good initialization is therefore largely experimental. To determine the success of an initialization on a particular architecture and dataset, the model needs to be trained for several epochs (until either the loss reaches a certain threshold or overfitting begins), which consumes significant compute power and time and limits the number of initializations that can be tried. Our framework can, to a reasonable extent, determine the success of a DAI early in the training process and provide insight into the benefits of further training. These scores aid in making an informed decision on whether further training is worthwhile, saving compute power and time by stopping the training of DAIs that do not have the potential to learn effectively.
\begin{figure*}[h]
\centering
\subfloat[ DA-I \label{fig:1c}]{\includegraphics[width=0.42\textwidth]{Plots/Dist1.png}}
\subfloat[ DA-II \label{fig:1c}]{\includegraphics[width=0.42\textwidth]{Plots/Distribution_DA2.png}}
\caption{Distribution of the final training and validation accuracy for (a) DA-I and (b) DA-II at epoch 100.}
\label{fig:2a}
\end{figure*}
The extent of success of a DNN model, in our set-up, is taken to be the final validation accuracy (the accuracy on validation data at the last epoch $e_L$). The early success indicators, calculated at an earlier epoch $e_C$ $(e_C < e_L)$ (called the \textit{checkpoint epoch}), give a score indicative of the success of the DAI. This score enables us to identify and discard models that would not train sufficiently by the end of training and to continue training the rest. A higher positive score indicates higher chances of success of the DAI. We evaluate the performance of the early success indicators on classification tasks.
\subsection{Dataset}
Experiments have been conducted on synthetic and real-world datasets: \texttt{Shell} and \texttt{CIFAR-10} (Appendix \ref{a:1}) for classification.
The \texttt{Shell} dataset is a synthetic dataset comprising two classes. Each class lies on a shell in $\mathbb{R}^d$, with $d = 1024$ by default. In the experiments, the two classes form concentric shells of radius 1 and 1.1, respectively. 20,000 training samples and 4,000 validation samples are used throughout. In two-dimensional space ($d = 2$), the \texttt{Shell} data can be visualized as the two curves shown in Figure \ref{fig:3_ex_1}.
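A minimal generator for such a dataset can be sketched as follows (our sketch; the sampling scheme, Gaussian directions normalized onto the sphere, is an assumption since the paper does not specify it):

```python
import numpy as np

def make_shell(n_per_class, d=1024, radii=(1.0, 1.1), rng=None):
    """Sample two concentric spherical shells in R^d, one per class."""
    rng = rng or np.random.default_rng(0)
    X, y = [], []
    for label, r in enumerate(radii):
        # Uniform directions: normalize Gaussian samples onto the unit sphere.
        g = rng.standard_normal((n_per_class, d))
        X.append(r * g / np.linalg.norm(g, axis=1, keepdims=True))
        y.append(np.full(n_per_class, label))
    return np.concatenate(X), np.concatenate(y)

X, y = make_shell(100, d=1024)
```

Since the shells are concentric, the classes are not linearly separable, so the network must learn the radial structure of the data.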
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{Plots/shell.png}
\caption{Visualization of the distribution of the \texttt{Shell} dataset in ${R}^2$. The two classes lie on shells of radii $r_1$ and $r_2$, respectively.}
\label{fig:3_ex_1}
\end{figure}
\subsection{Architecture} The experiments in this section have been done on the fully connected layers of multi-layer perceptron (MLP). Preliminary experiments done on the fully connected layers in convolutional neural network architectures (CNNs) have also shown success (refer Appendix \ref{a:1}).
\subsection{Initialization }
Different initializations are generated by randomly varying the standard deviation of the weights in Normal Xavier initialization within a factor of 10. Please note that, throughout the paper, an initialization scheme refers to initializing the weights of neural network layers by sampling from a specific distribution (for example \texttt{Normal Xavier} initialization scheme). An initialization, on the other hand, refers to the specific weight matrices sampled from an initialization scheme.
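One possible reading of this scheme is sketched below (ours, not the authors' code; interpreting "within a factor of 10" as a log-uniform factor in $[0.1, 10]$ is our assumption):

```python
import numpy as np

def scaled_xavier(fan_in, fan_out, scale, rng):
    """Normal Xavier initialization with its standard deviation multiplied
    by `scale`, producing a distinct initialization per draw."""
    std = scale * np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

rng = np.random.default_rng(0)
# Random scaling factors within a factor of 10 of the Xavier std.
scales = 10.0 ** rng.uniform(-1, 1, size=5)
inits = [scaled_xavier(512, 256, s, rng) for s in scales]
```

Each element of `inits` is then one concrete initialization (specific weight matrices), as distinguished from the initialization scheme itself.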
In all experiments, the rectified linear unit (ReLU) activation is used; the optimizer is either RMSProp or Adam, and the loss function is cross-entropy for classification. We further demonstrate the robustness of the early success indicators by training on shuffled labels, thereby generating models with varied generalization gaps: the network is trained on a dataset $y\%$ of whose labels have been randomly shuffled, where $y \in (0, 100)$ (refer to Appendix \ref{a:3}).
\section{Evaluation}\label{s6}
Checkpoints are epochs at which the scores are calculated and a decision regarding the success of the initialization is made. For a DAI, the decision is one of: discarding models that would not train, stopping further training of models that are already trained (and would not benefit from continued training), or continuing to train models that show potential for better results. This decision is based only on the values of $s_F$ (and $s_{Fv}$, when validation labels are available) at the checkpoint epoch. To evaluate the performance of these scores, all initializations are trained up to a pre-decided final epoch, and the prediction at the checkpoint epoch is compared against the validation accuracy at the final epoch. The early success indicator score for an initialization is computed by training a neural network from that initialization using stochastic gradient descent, with the optimizer and its optimization parameters fixed. At each checkpoint epoch, we pass the test data through the hidden layers of the network and obtain the singular values of the resulting activations using SVD. These singular values are sorted and normalized within each hidden layer to calculate the scores. While we fix the algorithm that maps the initialization weights to the final score, this algorithm could itself be a source of randomness due to mini-batch gradients. However, we observe on the \texttt{Shell} dataset that success is primarily decided by the initialization: once the algorithm is fixed, a given initialization is either a success or a failure regardless of the mini-batch randomness.
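The SVD step described above can be sketched as follows. The reduction of the normalized spectra to the scalar scores $s_F$ and $s_{Fv}$ is defined earlier in the paper and is not reproduced here; this sketch stops at the per-layer sorted, normalized singular values, and all names and shapes are illustrative.

```python
import numpy as np

def hidden_layer_spectra(activations):
    """Per-layer sorted, normalized singular values of test-set activations.

    `activations` is a list of (n_samples, n_units) arrays, one per hidden
    layer, obtained by a forward pass of the test data through the network.
    How the spectra are reduced to the scalar scores s_F / s_Fv is defined
    elsewhere in the paper; this sketch stops at the normalized spectra.
    """
    spectra = []
    for act in activations:
        s = np.linalg.svd(act, compute_uv=False)  # singular values
        s = np.sort(s)[::-1]                      # descending order
        spectra.append(s / s.sum())               # normalize per layer
    return spectra

rng = np.random.default_rng(0)
# Stand-in activations for two hidden layers on 64 test samples.
acts = [rng.standard_normal((64, 32)), rng.standard_normal((64, 16))]
spectra = hidden_layer_spectra(acts)
```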
\begin{figure*}[h]
\subfloat[Validation accuracy at each epoch for the five initializations \label{fig:3a_val}]{\includegraphics[width=0.3\textwidth]{Plots/Visualize_val.png}}\hfill
\subfloat[$s_F$ at each epoch for the five initializations \label{fig:3a_sf}]{\includegraphics[width=0.3\textwidth]{Plots/Visualize_Sf.png}}\hfill
\subfloat[$s_{Fv}$ at each epoch for the five initializations \label{fig:3a_sfv}]{\includegraphics[width=0.3\textwidth]{Plots/Visualize_Sfv.png}}\hfill
\caption{Variation of scores $s_F$ and $s_{Fv}$ with epochs and their corresponding validation accuracy for five random initializations of DA-1. } \label{fig:3a}
\end{figure*}
Evaluation is done using two metrics: Spearman's rank correlation coefficient and the number of correct predictions of the success of a DAI. We calculate Spearman's correlation between the score obtained from the early predictor at the checkpoint epoch and the validation accuracy at the final epoch to evaluate how accurately the early success predictor can forecast the performance of the model.
\textbf{Spearman's Rank Correlation Coefficient:} It evaluates the monotonic relationship between two continuous or ordinal variables. In a monotonic relationship, the variables tend to change together, but not necessarily at a constant rate. The Spearman's correlation coefficient is based on the ranked values for each variable rather than the raw data. It is defined as the Pearson Correlation Coefficient between the rank variables. Mathematically, for two variables $X$ and $Y$, it is given as:
\begin{equation}
r_s = \rho_{rg_{X}, rg_{Y}} =\frac{cov(rg_{X},rg_{Y})}{\sigma_{rg_{X}}\sigma_{rg_{Y}}}
\end{equation}
The notations are as follows:
\begin{itemize}
\item $r_s$ is the Spearman's Rank Correlation coefficient
\item $rg_{X}$ and $rg_{Y}$ are ranks of $X$ and $Y$
\item $cov$ is the covariance
\item $\sigma_{rg_{X}}$ and $\sigma_{rg_{Y}}$ are the standard deviations of the rank variables of $X$ and $Y$
\end{itemize}
Correlation coefficients vary between $-1$ and $+1$, with 0 implying no correlation. A correlation of $-1$ or $+1$ implies an exact monotonic relationship. Positive correlations imply that as $x$ increases, so does $y$; negative correlations imply that as $x$ increases, $y$ decreases. We compute Spearman's rank correlation between the scores calculated at the checkpoint epoch and the final validation accuracy to show that the two share a positive relationship; a high positive correlation indicates that the score is a good predictor of the final validation accuracy. Specifically, we compute the correlation between $s_{Fv}$ at a checkpoint epoch and the final validation accuracy. To test the efficacy of this score, we compare its performance against the baseline correlation between the validation accuracy at the checkpoint epoch and the final validation accuracy.
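For concreteness, a minimal NumPy implementation of $r_s$ (assuming no ties; with ties, average ranks should be used instead, as `scipy.stats.spearmanr` does) applied to illustrative checkpoint scores and final accuracies:

```python
import numpy as np

def spearman(x, y):
    """Spearman's r_s: Pearson correlation of the rank variables.

    Assumes no ties, so ranks can be obtained with a double argsort.
    """
    rx = np.argsort(np.argsort(x))  # 0-based ranks of x
    ry = np.argsort(np.argsort(y))  # 0-based ranks of y
    return np.corrcoef(rx, ry)[0, 1]

# Checkpoint scores vs. final validation accuracies for five
# hypothetical initializations (illustrative numbers only).
s_fv_at_checkpoint = np.array([0.10, 0.35, 0.20, 0.50, 0.05])
final_val_accuracy = np.array([0.52, 0.81, 0.66, 0.93, 0.50])
r_s = spearman(s_fv_at_checkpoint, final_val_accuracy)  # ranks agree -> 1.0
```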
Scores $s_F$ and $s_{Fv}$ can also be evaluated by formulating the prediction of the success of gradient descent for a particular DAI as a two-class classification problem. For a fixed dataset, architecture, and learning scheme, we test several initializations. An arbitrary threshold is set for all DAs to mark successful initializations. Initializations that achieve final validation accuracy above this threshold belong to class $y_1$ of successfully trained models; initializations that do not attain it upon full training belong to class $y_0$ of unsuccessfully trained models. At the checkpoint epoch, a prediction is made of the class to which the model trained from that initialization will belong at the final epoch.
\section{Results}\label{s7}
We consider the following DAs for evaluation:
\begin{itemize}
\item \textbf{DA-I}: 120 initializations trained on the \texttt{Shell} dataset (1024-dimensional input vectors), with an MLP architecture with four hidden layers of dimensions 512, 256, 256, and 128, respectively. The optimizer is Adam with learning rate $10^{-4}$ and mini-batch size 32. All initializations are trained for a total of 100 epochs to evaluate the performance of the metrics. Initializations that achieve validation accuracy above 65\% at the final epoch are considered to be in class $y_1$ and all other initializations in class $y_0$. On final training, 32 of the 120 models fall in class $y_1$ and 88 in class $y_0$.
\item \textbf{DA-II}: 137 initializations trained on the \texttt{Shell} dataset (256-dimensional input vectors), with an MLP architecture with two hidden layers of dimensions 256 and 128, respectively. The optimizer is RMSprop with learning rate $10^{-4}$ and mini-batch size 32. All initializations are trained for a total of 100 epochs to evaluate the performance of the metrics. Initializations that achieve validation accuracy above 65\% at the final epoch are considered to be in class $y_1$ and all other initializations in class $y_0$. On final training, 34 of the 137 models fall in class $y_1$ and 103 in class $y_0$.
\end{itemize}
Figure \ref{fig:2a} shows the distribution of final training and validation accuracies for the 120 initializations of DA-I and the 137 initializations of DA-II at epoch 100. The points to the right of the red line indicate models that qualify for class $y_1$.
\begin{figure*}[h]
\begin{tabular}{l|l|c|c|c}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{2}{c}{Prediction}&\\
\cline{3-4}
\multicolumn{1}{c}{}&\multicolumn{1}{c|}{}&$y_1$&$y_0$&\multicolumn{1}{c}{Total}\\
\cline{2-4}
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{True}}}& $y_1$ & $8$ & $24$ & $32$\\
\cline{2-4}
& $y_0$ & $7$ & $81$ & $88$\\
\cline{2-4}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$15$} & \multicolumn{1}{c}{$105$} & \multicolumn{1}{c}{$120$}\\
\multicolumn{5}{c}{(a) Prediction using $s_{F}$} \\
\end{tabular}
\begin{tabular}{l|l|c|c|c}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{2}{c}{Prediction}&\\
\cline{3-4}
\multicolumn{1}{c}{}&\multicolumn{1}{c|}{}&$y_1$&$y_0$&\multicolumn{1}{c}{Total}\\
\cline{2-4}
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{True}}}& $y_1$ & $20$ & $12$ & $32$\\
\cline{2-4}
& $y_0$ & $10$ & $78$ & $88$\\
\cline{2-4}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$30$} & \multicolumn{ 1}{c}{$90$} & \multicolumn{1}{c}{$120$}\\
\multicolumn{5}{c}{(b) Prediction using $s_{Fv}$} \\
\end{tabular}
\begin{tabular}{l|l|c|c|c}
\multicolumn{2}{c}{}&\multicolumn{2}{c}{Prediction}&\\
\cline{3-4}
\multicolumn{2}{c|}{}&$y_1$&$y_0$&\multicolumn{1}{c}{Total}\\
\cline{2-4}
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{True}}}& $y_1$ & $18$ & $14$ & $32$\\
\cline{2-4}
& $y_0$ & $10$ & $78$ & $88$\\
\cline{2-4}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$28$} & \multicolumn{1}{c}{$92$} & \multicolumn{1}{c}{$120$}\\
\multicolumn{5}{c}{(c) Prediction using validation accuracy} \\
\end{tabular}
\caption{Breakdown of the prediction of eventual success of all initializations in DA-I using $s_F$, $s_{Fv}$ and validation accuracy at checkpoint epoch 10.} \label{fig:5.8}
\end{figure*}
\begin{figure*}
\begin{tabular}{l|l|c|c|c}
\multicolumn{2}{c}{}&\multicolumn{2}{c}{Prediction}&\\
\cline{3-4}
\multicolumn{2}{c|}{}&$y_1$&$y_0$&\multicolumn{1}{c}{Total}\\
\cline{2-4}
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{True}}}& $y_1$ & $19$ & $15$ & $34$\\
\cline{2-4}
& $y_0$ & $7$ & $96$ & $103$\\
\cline{2-4}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$26$} & \multicolumn{1}{c}{$111$} & \multicolumn{1}{c}{$137$}\\
\multicolumn{5}{c}{(a) Prediction using $s_F$} \\
\end{tabular}
\begin{tabular}{l|l|c|c|c}
\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{2}{c}{Prediction}&\\
\cline{3-4}
\multicolumn{1}{c}{}&\multicolumn{1}{c|}{}&$y_1$&$y_0$&\multicolumn{1}{c}{Total}\\
\cline{2-4}
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{True}}}& $y_1$ & $25$ & $9$ & $34$\\
\cline{2-4}
& $y_0$ & $0$ & $103$ & $103$\\
\cline{2-4}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$25$} & \multicolumn{ 1}{c}{$112$} & \multicolumn{1}{c}{$137$}\\
\multicolumn{5}{c}{(b) Prediction using $s_{Fv}$} \\
\end{tabular}
\begin{tabular}{l|l|c|c|c}
\multicolumn{2}{c}{}&\multicolumn{2}{c}{Prediction}&\\
\cline{3-4}
\multicolumn{2}{c|}{}&$y_1$&$y_0$&\multicolumn{1}{c}{Total}\\
\cline{2-4}
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{True}}}& $y_1$ & $18$ & $16$ & $34$\\
\cline{2-4}
& $y_0$ & $0$ & $103$ & $103$\\
\cline{2-4}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$18$} & \multicolumn{1}{c}{$119$} & \multicolumn{1}{c}{$137$}\\
\multicolumn{5}{c}{(c) Prediction using validation accuracy} \\
\end{tabular}
\caption{Breakdown of the prediction of eventual success of all initializations in DA-II using $s_F$, $s_{Fv}$ and validation accuracy at checkpoint epoch 10.} \label{fig:5n.11}
\end{figure*}
Figure \ref{fig:3a} shows the validation accuracy, $s_F$, and $s_{Fv}$ at every epoch for five randomly selected initializations from DA-I. Scores $s_F$ and $s_{Fv}$ give an estimate at each epoch of the benefit of training the model further. The models with a horizontal line in the second plot (initializations 4 and 5) are models whose score does not change much. While $s_F$ can determine that their accuracies might not change with further training, it cannot determine the actual accuracy. This limitation is mitigated by $s_{Fv}$: when validation labels are available, $s_F$ can determine that training has saturated, and $s_{Fv}$ can predict the final validation accuracy to be close to the current validation accuracy $v_t$ (which might even decrease with over-fitting).
At the checkpoint epoch, initializations with $s_F$ greater than $t_1 = 0.25$ are predicted to train successfully for all DAs. We evaluate the performance of the proposed scores on both dataset-architecture combinations.
\subsection{Results on DA-I}
Table \ref{tab:5.8} compares the performance of $s_F$, $s_{Fv}$, and validation accuracy at three different checkpoint epochs. The correlation between $s_{Fv}$ at checkpoint epochs 10 and 20 and the final validation accuracy is higher than that between the validation accuracy at those checkpoints and the final validation accuracy. At checkpoint epoch 30, both $s_{Fv}$ and the validation accuracy have high correlation values.
\begin{table}[h]
\centering
\begin{tabular}{||c c c c||}
\hline
Checkpoint &$s_F$& $s_{Fv}$& Validation accuracy \\
epoch &at checkpoint& at checkpoint & at checkpoint \\ [0.5ex]
\hline\hline
10 & 0.404 &\textbf{0.707} & 0.685 \\
\hline
20 & 0.773 &\textbf{0.829} & 0.820 \\
\hline
30 &0.831 & 0.872 &\textbf{0.878}\\ \hline
\end{tabular}
\caption{Correlation of $s_F$, $s_{Fv}$, and validation accuracy at different checkpoint epochs with the final validation accuracy for DA-I.}\label{tab:5.8}
\end{table}
Using the validation accuracy at the checkpoint epoch to predict the validation accuracy at the final epoch can lead to several misclassifications: a model with low validation accuracy can either stay at low accuracy or go on to train well, and a model with high validation accuracy can either yield a lower final validation accuracy (over-fitting) or train further. Score $s_{Fv}$ supplements the validation accuracy with additional information, boosting the score of a model that has the potential to train further and using a low $s_{st}$ to indicate that training should stop. Figure \ref{fig:5.8} summarizes the comparison of $s_F$, $s_{Fv}$, and validation accuracy at checkpoint epoch 10 in predicting the class ($y_0$ or $y_1$) of the initializations. Score $s_{Fv}$ outperforms the validation-accuracy baseline on this classification problem.
\subsection{Results on DA-II}
Similar to DA-I, a prediction is made at earlier epochs, using these scores, of whether a model trained for 100 epochs will belong to class $y_1$ or $y_0$. Figure \ref{fig:5n.11} compares the performance of $s_F$, $s_{Fv}$, and validation accuracy at checkpoint epoch 10 in making this binary prediction of the success of the initializations. For this DA as well, $s_{Fv}$ predicts the success of the initializations better than simply using the validation accuracy.
Table \ref{tab:5n.51} summarizes the Spearman's correlation values obtained for all initializations at different checkpoints. Score $s_{Fv}$ outperforms the validation-accuracy baseline at all checkpoints in this case as well.
\begin{center}
\begin{table}[h]
\centering
\begin{tabular}{|| c c c c||}
\hline
Checkpoint & $s_F$ & $s_{Fv}$& Validation accuracy \\
epoch & at checkpoint & at checkpoint & at checkpoint \\ [0.5ex]
\hline\hline
10 &0.395 & \textbf{0.766} & 0.716 \\
\hline
20 &0.324& \textbf{0.767} & 0.762 \\
\hline
30 &0.414& \textbf{0.818} & 0.786 \\ [1ex]
\hline \end{tabular}
\caption{Correlation of $s_F$, $s_{Fv}$, and validation accuracy at different checkpoint epochs with the final validation accuracy for all initializations in DA-II.}\label{tab:5n.51}
\end{table}
\end{center}
It must be noted that Spearman's correlation is not an ideal metric to evaluate the performance of score $s_F$. Score $s_F$ can be low for models which have already saturated and would not benefit any more from further training, irrespective of the validation accuracy they saturate at.
\section{Conclusion}\label{s8}
In this work, a mechanism is proposed to predict the success of gradient descent for a particular dataset-architecture-initialization combination. Two early success indicators are proposed, analyzed, and tested on different combinations of datasets, architectures, and initializations to facilitate early give-up, i.e., stopping early the training of models that are predicted not to generalize well upon further training, thereby saving computational time and power. These scores can supplement the validation accuracy available at an early epoch when making predictions about the future success of the model.
The utility of the success indicators can be further extended by exploiting the labels of the validation data points: $s_{st}$ and $s_{ot}$ can be computed separately for individual classes. Intuitively, the amplifying power of the matrix obtained from the data points of one class is expected to act in different directions than that of the matrix obtained from the data points of another class; if the directions were exactly the same, the two classes could not be distinguished. Another natural extension of this work would be to test the effectiveness of the proposed metrics on larger models, exploring different tasks, architectures, and datasets.
\newpage
\bibliographystyle{apalike}
\section{Introduction}
\label{sec:intro}
Human motion modeling plays an important role in many applications such as video games and computer animation.
Many tasks study human motion under different circumstances~\cite{zhang2019predicting,kocabas2020vibe,mao2020history,zhang2021we,ling2020character}. Most of them require physically plausible human motion, meaning that every individual pose in the motion is plausible and the transitions between poses are reasonable.
Several works study pose priors by exploring the constraint on human poses~\cite{bogo2016keep,kanazawa2018end,pavlakos2019expressive,zanfir2020weakly}.
However, a plausible motion requires both the continuity between poses and the feasibility of each individual pose.
Hence, recent works design priors for human motion. Kocabas~\etal~\cite{kocabas2020vibe} design an adversarial prior that discriminates between generated and real human motions so as to keep the predicted motion plausible. In addition, Holden~\etal~\cite{holden2016deep} learn a motion manifold in which each motion is embedded into a low-dimensional representation.
However, these motion priors are designed only for specific tasks. We argue that a pre-trained, task-agnostic human motion prior is essential for motion generation tasks, given the scarcity of paired 3D motion data. Therefore, we summarize the indispensable properties of such a motion prior and design a versatile motion prior according to these properties.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/fig1_2.pdf}
\caption{Illustration of our learnt motion prior representation space. Given an undersampled observation (\emph{i.e.,} the start and end positions of (a) and (b), pointed at by the blue arrows), there may be multiple solutions with different probabilities, which helps resolve the ambiguity. Meanwhile, the same motion sequence with different global orientations, (c) and (d), has a consistent representation in our motion prior space.}
\vspace{-3mm}
\label{fig:fig1}
\end{figure}
First, \textit{a tractable and continuous distribution} over the latent representation is required to model the inherent probability distribution of human motions. This is important for resolving ambiguity in ill-posed tasks such as motion infilling and prediction, because common motions have a high probability of occurrence while rare or impossible motions have a lower one.
For example, in Fig.~\ref{fig:fig1}, the full motions (a) and (b) have the same undersampled observation (\textit{i.e.,} the start and end positions pointed at by the blue arrows), leading to ambiguity in the ill-posed motion infilling task. If the prior models the probability distribution of human motion data, the ambiguity can be resolved by offering the more probable solution, which is more likely to conform to human behavior.
Therefore, we construct a motion prior based on the probabilistic model, variational auto-encoder (VAE)~\cite{kingma2013auto}, to model the inherent motion distribution.
Second, \textit{a complete and efficient space} is significant for constructing a versatile prior, since the prior needs to generalize to various tasks and datasets without fine-tuning.
Specifically, the complete and efficient space means that we can accurately reconstruct any plausible motion from a low-dimensional representation. A large-scale dataset with a variety of long-term motions is usually adopted for the completeness. However, it leads to a complex data space that is hard to be represented in low dimension. So, we propose to learn an efficient representation space from two aspects: reducing the complexity of the data space to be modeled and encoding each motion data efficiently.
For reducing the complexity of the data space, Luo~\etal~\cite{luo20203d} resort to modeling shorter-term motions. However, the additional context provided by long-term motion is beneficial for down-stream tasks. By contrast, we introduce a global orientation normalization, which normalizes the global orientation of each motion around the yaw axis while retaining the relative orientation transitions between frames. Since the direction in which a person moves is related to the environment, the global orientations of the motions in a dataset are biased by the environment; the normalization removes this redundant environment information from the data space and thereby reduces its complexity.
For efficient motion encoding, motion segments with slower changes between frames should be compressed aggressively, while segments with a higher degree of variation deserve more attention. For instance, in a motion sequence, a dancer may stand still for a while before dancing; compared with the standing segments, the dancing segments carry more information and detail, and deserve to be well retained~\cite{xiao2020invertible}. Thus, we introduce a two-level motion frequency guidance to efficiently encode the motion into the low-dimensional prior representation. One level is sequence-based and the other is segment-based. The sequence-based frequency guidance captures the difference between frequency patterns of motions and provides category cues from the frequency~\cite{hu2019skeleton}. The segment-based frequency guidance exploits the frequency differences between segments within a motion to adaptively compress segments with different amounts of high-frequency information.
Third, \textit{a consistent and distinguishable representation} should be learnt in the motion prior.
As aforementioned, the same motion, consisting of the same poses and relative transitions, may have different global orientations as the environment changes. For example, in Fig.~\ref{fig:fig1}, (c) and (d) correspond to the same motion with different orientations, yet they should have consistent representations. A straightforward solution is to take our orientation-normalized motions as both input and output of the motion prior, explicitly removing the global orientation of each input motion during both training and inference. Yet this solution may cause the prior to fail to capture underlying distinguishable features of the motion.
Thus, we introduce a denoising training scheme to disentangle the global orientation (environment information) from the human motion data in a learnable way, so as to learn a consistent representation while keeping the representation distinguishable~\cite{bengio2013representation,im2017denoising}.
To demonstrate the versatility and effectiveness, we integrate our pre-trained motion prior into different backbones without fine-tuning for different tasks, such as human motion reconstruction, motion prediction and action recognition. Then, we conduct experiments on 3DPW~\cite{von2018recovering}, Human3.6M~\cite{ionescu2013human3} and BABEL~\cite{BABEL_CVPR_2021} to evaluate the performance on different tasks. Results show that our motion prior improves the baseline and achieves the state-of-the-art performance on all three benchmarks.
In summary, our contributions in this paper are:
(1) We first summarize three indispensable properties for the motion prior to achieve versatility, and accordingly,
(2) We introduce the global orientation normalization and a two-level motion frequency guidance to learn the versatile motion prior with a denoising training scheme.
(3) We integrate the proposed prior into prevailing backbones and achieve the state-of-the-art performance on different benchmarks, which demonstrates the versatility of our motion prior.
\section{Related Work}
\label{sec:related_work}
\textbf{Human pose and motion prior.} Constructing a kind of prior is commonly used in pose~\cite{bogo2016keep,kanazawa2018end,pavlakos2019expressive,zanfir2020weakly} and motion~\cite{urtasun20063d,holden2016deep,kocabas2020vibe,luo20203d} modeling.
Pavlakos~\etal~\cite{pavlakos2019expressive} utilize VAE~\cite{kingma2013auto} to build a non-linear manifold as the human pose prior and provide plausibility constraint. Zanfir~\etal~\cite{zanfir2020weakly} construct a wrapped prior space with normalizing flow. Compared with pose priors, motion priors have constraints on both independent poses and transition between poses. Kocabas~\etal~\cite{kocabas2020vibe} train an adversarial discriminator as motion prior to discriminate between generated motions and real human motions.
Holden~\etal~\cite{holden2016deep} employ the autoencoder to encode all plausible motion into a compact manifold, where each latent code can represent a plausible human motion. Luo~\etal~\cite{luo20203d} try to compress a large-scale dataset, AMASS~\cite{mahmood2019amass}, with VAE into a representation space. In this paper, we first analyze the properties of a versatile motion prior and the characteristics of human motion itself. Then, we accordingly propose the global orientation normalization and a two-level motion frequency guidance.
\textbf{Frequency in motion modeling.} Previous works convert the motion into frequency domain and take the frequency coefficients as input to combine both spatial and temporal information~\cite{mao2019learning,cai2020learning}. Mao~\etal~\cite{mao2020history} represent historical sub-sequences of each motion as frequency coefficients, and aggregate them with attention mechanism to predict the future motion. Zhang~\etal~\cite{zhang2021we} encode the motion into different DCT spaces to decompose the motion into several frequency bands. By contrast, we exploit the characteristic of frequency that it represents the amount of information, to adaptively compress the human motion data.
\textbf{Human motion reconstruction.}
Reconstructing the 3D human pose or motion has attracted significant interest~\cite{bogo2016keep,kanazawa2018end,kolotouros2019learning,song2020human,kanazawa2019learning,kocabas2020vibe,zanfir2020weakly}.
Compared with poses, reconstructing human motion has more demanding requirements for shape consistency and smoothness. Kanazawa~\etal~\cite{kanazawa2019learning} present a temporal encoder to learn 3D human dynamics from video and generate smooth motion. Kocabas~\etal~\cite{kocabas2020vibe} design a GRU-based network with the adversarial prior to guide the motion inference.
Choi~\etal~\cite{choi2021beyond} explicitly exploit the past and future frames to achieve smoother and better results. Meanwhile, \cite{shimada2020physcap,yuan2021simpoe,rempe2020contact,PhysAwareTOG2021} try to improve the physical fidelity of generated motion through reinforcement learning and other physical constraints.
\textbf{Motion Prediction.} Motion prediction from past pose sequence is studied from two aspects: deterministic~\cite{aksan2019structured,cui2020learning,mao2020history,zhang2019predicting} and stochastic~\cite{zhang2020perpetual,yan2018mt,yuan2020dlow,aliakbarian2020stochastic}. Instead of using a sequence of past poses, Chao~\etal~\cite{chao2017forecasting} forecast human dynamics from static images and Yuan~\etal~\cite{yuan2019ego} predict future motions from egocentric videos. Zhang~\etal~\cite{zhang2019predicting} utilize the SMPL model to represent the human body and predict future motions with pose and shape from videos.
\textbf{Action Recognition.} To understand human motion, skeleton-based action recognition attracts much attention~\cite{shi2019two,cheng2020skeleton,liu2020disentangling,cheng2020eccv}. Most of them carefully design and train a GraphConv network in a supervised way on the widely-used NTU-RGBD~\cite{shahroudy2016ntu}, which has two major problems: the discontinuity of action and the lack of modeling the long-tailed distribution. Recently, the BABEL~\cite{BABEL_CVPR_2021} dataset is proposed to tackle these two problems, which is closer to the real life.
By contrast, we take the learnt prior representation, which refers to a plausible motion, as the intermediary for generating the final outputs, which benefit from the context and probability information encoded in the prior.
\section{Motion Prior}
\subsection{Human Motion Representation}
\label{sec:motion-represent}
We use the parametric human model SMPL~\cite{loper2015smpl} to represent each human pose in the motion. The SMPL model, which can be regarded as a differentiable function $\mathcal{M}(\cdot)$, parameterizes the human body pose and shape through $\theta\in\mathbb{R}^{72}$ and $\beta\in\mathbb{R}^{10}$, respectively. The pose parameter $\theta$ consists of the
global orientation $\theta^g$ and the local body pose $\theta^l$, determined by the relative rotations of 23 joints
in axis-angle format. Given $\theta$ and $\beta$, $\mathcal{M}(\theta, \beta)$ outputs a triangulated mesh with $N=6{,}890$ vertices. We denote a motion sequence with $K$ frames as $\mathbf{X}=\{\Theta_i\}^{K}_{i=1}$, where $\Theta_i=(\theta_i, \beta_i)$ represents the human model for the $i$-th frame.
\textbf{Global Orientation Normalization.}
\label{subsec:rotation_norm}
As aforementioned, the global orientation of each motion around the yaw axis is related to the environment, whereas the motion prior should focus on the human motion itself. Hence, to remove the redundant environment information from the motion data and reduce the complexity of the data space, we propose the global orientation normalization.
As shown in Fig. \ref{fig:motion_prior}, we normalize the orientation of the entire motion around the yaw axis according to the first frame while retaining the internal relative orientation transitions, so that all input sequences start facing the same forward direction. Specifically, given an input motion sequence $\mathbf{X}$, we first normalize the first frame by removing its yaw rotation, which yields the normalized orientation $\hat{\theta_1^g}$. We then compute the correction rotation $\mathcal{R}_{cor}$ between the original orientation $\theta_1^g$ and $\hat{\theta_1^g}$ of the first frame, and normalize the global orientation of the whole sequence as follows:
\begin{align}
\mathcal{R}_{cor} &= {\mathbf{\hat{\theta}_1^g}} \cdot \left({\mathbf{\theta_1^g}}\right)^{-1} = {\mathbf{\hat{\theta}_1^g}} \cdot \left({\mathbf{\theta_1^g}}\right)^{T}, \\
{\mathbf{\hat{\theta}_i^g}} &= \mathcal{R}_{cor} \cdot {\mathbf{\theta_i^g}},
\end{align}
where $\mathbf{\theta^g}$ is the rotation matrix format of ${\theta^g}$.
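A sketch of this normalization on rotation matrices, assuming the yaw axis is $z$ (the actual axis depends on the body-model convention) and using SciPy's rotation utilities; the function name and test rotations are illustrative:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def normalize_global_orientation(theta_g):
    """Remove the first frame's yaw from every frame's global orientation.

    theta_g: (K, 3, 3) per-frame global-orientation rotation matrices.
    The whole sequence is left-multiplied by one correction rotation
    R_cor, so relative transitions between frames are preserved.
    Assumes the yaw axis is z (body-model-convention dependent).
    """
    yaw = R.from_matrix(theta_g[0]).as_euler("ZYX")[0]  # yaw of frame 1
    r_cor = R.from_euler("Z", -yaw)  # R_cor = hat(theta_1^g) (theta_1^g)^T
    return np.stack([(r_cor * R.from_matrix(m)).as_matrix() for m in theta_g])

# Two-frame toy sequence with a nonzero starting yaw of 0.7 rad.
th = np.stack([R.from_euler("ZYX", [0.7, 0.2, 0.1]).as_matrix(),
               R.from_euler("ZYX", [1.2, -0.3, 0.4]).as_matrix()])
th_hat = normalize_global_orientation(th)
```

Note that the body-frame relative rotation $({\theta_i^g})^{T}\theta_{i+1}^g$ is unchanged by the shared left-multiplication, which is the sense in which transitions are retained.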
\subsection{Frequency Guiding Prior Framework}
\label{subsec:motion_prior_framework}
To construct our motion prior, we exploit the variational auto-encoder (VAE)~\cite{kingma2013auto} and learn a $256$-dimensional latent representation space. We first introduce the input data, then the encoder where we perform the two-level frequency guidance. Finally, we will introduce the decoder.
\textbf{Input data.}
Given a set of pose and shape parameters $\theta$ and $\beta$, a human motion can be expressed through the SMPL model. However, relative rotations and the global orientation are less intuitive and straightforward. Therefore, we additionally take the joint sequence $\mathcal{J}\in\mathbb{R}^{K\times J\times3}$ of each motion as input, where $J$ is the number of body joints. We also explicitly calculate the velocity $\mathcal{J}^{vel}$ and acceleration $\mathcal{J}^{acc}$ of each joint to better reveal the dynamic features.
Following~\cite{pavlakos2019expressive}, we ignore the variance of the shape information and use the same shape for each motion. Therefore, the motion input used to construct our motion prior is denoted as $\Phi=\{(\hat{\theta^g_i}, \theta^l_i, \beta, \mathcal{J}_i, \mathcal{J}_i^{vel}, \mathcal{J}_i^{acc})\}^K_{i=1}$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/VAE2.pdf}
\caption{Pipeline and framework. We visualize the motion after global orientation normalization (the red arrow points at the beginning frame) and illustrate the framework of our motion prior.}
\label{fig:motion_prior}
\end{figure}
\textbf{Encoder.}
RNN-based networks, which mainly focus on temporal correlations, usually fail to capture the spatial-temporal dynamics in human motion~\cite{li2018convolutional}. Hence, as shown in Fig.~\ref{fig:motion_prior}, we construct a convolutional encoder, which takes $\Phi$ as input and consists of three residual blocks to extract both the fine-grained information and global context.
To introduce the sequence-based and segment-based frequency guidance for efficient representation learning, we take the joint sequence $\mathcal{J}\in\mathbb{R}^{K\times J\times3}$ and further divide $\mathcal{J}$ into $S$ segments of length $n$, \textit{i.e.,} $\dot{\mathcal{J}}\in\mathbb{R}^{S\times n\times J\times3}$. Then, for each motion, we make use of the discrete cosine transform (DCT) to extract the sequence-based frequency components $\mathcal{F}_{seq}\in\mathbb{R}^{C_m\times J\times3}$ from $\mathcal{J}$ and the segment-based frequency $\mathcal{F}_{seg}\in\mathbb{R}^{S \times C_s \times J\times3}$ from $\dot{\mathcal{J}}$, where $C_m$ and $C_s$ represent the number of kept frequency components.
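A minimal NumPy sketch of the two-level frequency extraction; the orthonormal DCT-II normalization is an assumption, and only the tensor shapes follow the notation above:

```python
import numpy as np

def dct_matrix(N):
    """Rows of the orthonormal DCT-II basis of length N."""
    n = np.arange(N)
    M = np.cos(np.pi / N * (n[None, :] + 0.5) * n[:, None])
    M[0] /= np.sqrt(2.0)
    return M * np.sqrt(2.0 / N)

def frequency_components(joints, S, C_m, C_s):
    """joints: (K, J, 3) with K divisible by S.  Returns F_seq of shape
    (C_m, J, 3) from the whole sequence and F_seg of shape (S, C_s, J, 3)
    from the S temporal segments, keeping the lowest coefficients."""
    K = joints.shape[0]
    n = K // S                               # segment length
    F_seq = np.tensordot(dct_matrix(K)[:C_m], joints, axes=(1, 0))
    segments = joints.reshape(S, n, *joints.shape[1:])
    F_seg = np.einsum('cn,snjd->scjd', dct_matrix(n)[:C_s], segments)
    return F_seq, F_seg
```

Keeping only the lowest $C_m$ and $C_s$ coefficients acts as a temporal low-pass filter on each joint trajectory.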
Then, to perform the segment-based frequency guidance, we extract the segment attention value $\alpha_{seg}\in\mathbb{R}^{S}$ from $\mathcal{F}_{seg}$ and re-weight the segment features $f_{seg}$ extracted by the residual block for each segment, so as to adaptively compress the information according to the frequency:
\begin{equation}
f_{seg}^{'} = f_{seg}\cdot\alpha_{seg} = f_{seg}\cdot\sigma(\phi(\mathcal{F}_{seg})),
\end{equation}
where $f_{seg}^{'}, f_{seg}\in\mathbb{R}^{S\times n\times c}$, and $n$ and $c$ denote the length of each segment and the number of feature channels, respectively. $\sigma(\cdot)$ and $\phi(\cdot)$ are the softmax function and a multi-layer perceptron, used together to predict the segment attention value $\alpha_{seg}$. This re-weighting is conducted in the first layer so that the entire motion is compressed efficiently.
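A minimal sketch of this re-weighting, where the multi-layer perceptron $\phi$ is reduced to a single hypothetical linear layer with weights \texttt{W}:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reweight_segments(f_seg, F_seg, W):
    """f_seg: (S, n, c) per-segment features; F_seg: (S, C_s, J, 3)
    segment-based frequencies; W: (C_s*J*3,) weights of a one-layer
    stand-in for phi.  Each segment is scaled by its attention value
    alpha_seg = softmax(phi(F_seg))."""
    S = f_seg.shape[0]
    alpha = softmax(F_seg.reshape(S, -1) @ W)   # (S,) attention values
    return f_seg * alpha[:, None, None]
```

With zero weights the attention is uniform ($\alpha_{seg} = 1/S$ per segment), which recovers plain averaging-style scaling; nonzero weights let the frequency content up- or down-weight individual segments.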
Furthermore, we combine features from different scales into the motion dynamic feature to keep both the global context and the fine-grained local pose information. We also exploit the sequence-based frequency $\mathcal{F}_{seq}$ to introduce global motion category information into the dynamic feature. Finally, we encode both the category information and the dynamic feature into the latent representation $z_{mot}\in\mathbb{R}^{256}$ with the re-parameterization trick~\cite{kingma2013auto}.
\textbf{Decoder.} Different from the encoder, an overly complex decoder may hurt the test log-likelihood and cause overfitting~\cite{cremer2018inference,vahdat2020NVAE}. Therefore, as shown in Fig. \ref{fig:motion_prior}, our decoder consists of two residual blocks containing one fully connected layer each. The final layer outputs the reconstructed normalized global orientation $\varphi^g$, represented by the 6D continuous rotation feature~\cite{zhou2019continuity}, and a latent local pose representation $\varphi^l \in \mathbb{R}^{32}$ in the VPoser latent space~\cite{pavlakos2019expressive}, which is a reasonable sub-manifold for human pose. Then, $\mathcal{D}_{cont}$ converts $\varphi^g$ to the axis-angle format, and the decoder $\mathcal{D}_{vp}$ of VPoser~\cite{pavlakos2019expressive}, with pre-trained and fixed weights, decodes $\varphi^l$ into the predicted local body pose $\Bar{\theta^l}$ in axis-angle format.
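The conversion $\mathcal{D}_{cont}$ can be sketched with the Gram--Schmidt construction of~\cite{zhou2019continuity}, which maps the 6D feature (the first two columns of a rotation matrix) back to a full rotation matrix; the final axis-angle conversion step is omitted here:

```python
import numpy as np

def rot6d_to_matrix(x):
    """x: (6,) continuous rotation feature, i.e. the first two columns
    of a rotation matrix.  Gram-Schmidt recovers an orthonormal frame."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)       # first column, normalized
    b2 = a2 - (b1 @ a2) * b1           # remove the component along b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)              # right-handed third column
    return np.stack([b1, b2, b3], axis=1)
```

This representation is continuous in the rotation, which is the property that makes it preferable to axis-angle or quaternions as a network output~\cite{zhou2019continuity}.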
\subsection{Denoising Training Scheme}
To learn a consistent and distinguishable representation for the same motion, we design a denoising training scheme.
Given a motion sample $\Phi$ after orientation normalization, we randomly apply a rotation around the yaw axis, which can be regarded as inverting the normalization of Sec.~\ref{subsec:rotation_norm} by a random angle, and thereby generate a corrupted sample $\tilde{\Phi}$. Then, our prior is trained to reconstruct the normalized motion $\{\hat{\theta^g},\theta^l\}$ from $\tilde{\Phi}$ instead of $\Phi$ as follows:
\begin{align}
\mathcal{L} = \lambda_{rec}\mathcal{L}_{rec} &+ \lambda_{kl}\mathcal{L}_{kl} + \lambda_{vposer}\mathcal{L}_{vposer}\label{eq:total-loss},\\
\mathcal{L}_{rec} = \big\|\mathcal{M}(\hat{\theta^g}||\theta^l, \beta)&-\mathcal{M}(\mathcal{D}_{cont}(\tilde{\varphi}^g)||\mathcal{D}_{vp}(\tilde{\varphi}^l), \beta)\big\|^2\label{eq:rec},\\
&(\tilde{\varphi}^g,\tilde{\varphi}^l) = \mathcal{F}(\tilde{\Phi})\label{eq:enc},\\
\mathcal{L}_{kl} = &KL(q(z_{mot}|\tilde{\Phi})||\mathcal{N}(0, I))\label{eq:kl},\\
&\mathcal{L}_{vposer} = \|\tilde{\varphi}^l\|^2\label{eq:vpose},
\end{align}
where Eq.~(\ref{eq:rec}) is the reconstruction loss over the mesh vertices of the SMPL model $\mathcal{M}$ under the same shape parameter $\beta$.
Eq.~(\ref{eq:enc}) denotes the reconstruction from $\tilde{\Phi}$ through our proposed motion prior framework $\mathcal{F}(\cdot)$.
The Kullback-Leibler divergence given by Eq.~(\ref{eq:kl}) encourages the approximate posterior to be close to the normal distribution. Eq.~(\ref{eq:vpose}) constrains the human body pose to the natural range.
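Assuming a diagonal Gaussian posterior $q(z_{mot}|\tilde{\Phi})=\mathcal{N}(\mu,\mathrm{diag}(e^{\log\sigma^2}))$, the KL term in Eq.~(\ref{eq:kl}) takes the standard VAE closed form~\cite{kingma2013auto}:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the
    256 latent dimensions of z_mot."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

The term vanishes exactly when the posterior equals the standard normal prior and grows as the mean or variance deviates from it.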
\section{Versatility of Motion Prior}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/reconstruct_framework1.pdf}
\caption{Integration of our proposed motion prior. We integrate the pre-trained motion prior with fixed weights into (a) VIBE in a motion reconstruction task and (b) PHD in a motion prediction task, which are simply illustrated.}
\label{fig:reconstruct_frame}
\end{figure}
To demonstrate the versatility and effectiveness of our proposed motion prior, in this section, we integrate our motion prior into several prevailing backbones in different human motion modeling tasks.
\subsection{Human Motion Reconstruction}
\label{subsec:3d-reconstruct}
\textbf{Problem definition.} Given a video sequence $\{I_t\}_{t=1}^T$, we reconstruct the 3D human pose and shape $\{\Theta_t\}_{t=1}^T$ (defined as in Sec. \ref{sec:motion-represent}) for each frame. Notably, the reconstructed shape parameter $\beta_t$ should be consistent across the whole sequence for a given person.
\textbf{Architecture.} We utilize VIBE~\cite{kocabas2020vibe} as our backbone and embed our motion prior into it as shown in Fig. \ref{fig:reconstruct_frame}. VIBE extracts a temporal feature for each frame through Gated Recurrent Units (GRUs) and then, for each frame, produces the pose $\theta$, shape $\beta$, and camera scale and translation $[s; t]$ using a shared regressor~\cite{kanazawa2018end}.
In contrast, we predict the poses of all frames in the sequence at once from our motion prior, instead of predicting them frame by frame through the regressor. As illustrated in Fig. \ref{fig:reconstruct_frame}, we construct a motion encoder $\mathit{E}_{mot}$, consisting of two convolutional layers and a fully-connected layer, to predict the motion representation $z_{mot} \in \mathbb{R}^{256}$. Then, the pre-trained motion prior decodes $z_{mot}$ into a motion of $K$ frames. The length $T$ of the input video is required to be at most $K$, and we take the first $T$ poses $\{(\Bar{\theta_i^g}, \Bar{\theta_i^l})\}_{i=1}^T$ as the output. Compared with producing poses for consecutive frames one by one, our motion prior provides more context information between poses for accurate prediction. We also discard the motion discriminator in VIBE, which acts as a prior that encourages plausible motion but fails to resolve ambiguity. By contrast, our motion prior resolves ambiguity by generating the more probable motion while maintaining plausibility.
In addition, due to the global orientation normalization (see Sec. \ref{subsec:rotation_norm}), the predicted global orientation sequence $\{\Bar{\theta_t^g}\}_{t=1}^T$ is normalized with respect to the first frame. Therefore, we construct another branch to predict the residual rotation $\mathcal{R}_{res}$ around the yaw axis for the first frame and rectify $\{\Bar{\theta_t^g}\}_{t=1}^T$ by $\Bar{\theta_t^g} = \mathcal{R}_{res}\cdot\Bar{\theta_t^g}.$
Furthermore, we introduce a branch to directly predict $\beta$ from the first frame and use it for all remaining frames to ensure shape consistency across the video, which the per-frame regressor in VIBE cannot guarantee. However, since 2D keypoint supervision requires a camera model for each frame, we keep regressing $[s; t]$ from each temporal feature based on the predicted shape and pose.
\subsection{Motion Prediction}
\label{subsec:motion_predict}
\textbf{Problem definition.} In this task, we aim to predict the future 3D human motion $\{\Theta_{T+1}, \Theta_{T+2}, ..., \Theta_{T+N}\}$ conditioning on a past 2D video sequence $\{I_1, I_2, ..., I_T\}$.
\textbf{Architecture.} An auto-regressive framework PHD~\cite{zhang2019predicting} is taken as our backbone.
It conducts auto-regressive prediction in the feature space, where each feature is regularized by an adversarial pose prior~\cite{kanazawa2018end}.
Given a video, PHD first extracts the temporal feature $\{f_t\}_{t=1}^{T}$ for each input frame, then predicts $\{f_t\}_{t=T+1}^{T+N}$ in an auto-regressive manner, and accordingly generates poses for the future motion.
However, the auto-regressive scheme is prone to compounding errors and fails to capture the long-range context of the whole sequence, which leads to unnatural and inaccurate motion. Therefore, we adopt the architecture introduced in Sec. \ref{subsec:3d-reconstruct} to impose regularization over the whole sequence of pose features and to refine errors and unnatural artifacts. As shown in Fig. \ref{fig:reconstruct_frame}, we take both $\{f_t\}_{t=1}^{T}$ and the auto-regressed features $\{f_t\}_{t=T+1}^{T+N}$ as input and generate $z_{mot}$ with the motion encoder $\mathit{E}_{mot}$, which embeds the long-range context information. Then, the pre-trained prior decoder generates the human motion from $z_{mot}$, which is not only constrained to the plausible space but also refined with the context information.
\begin{table}[]
\small
\centering
\begin{threeparttable}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccc}
\toprule
Method & MPJPE $\downarrow$ & PA-MPJPE $\downarrow$ & MPVPE $\downarrow$ & Accel Error $\downarrow$ \cr
\midrule
MEVA~\cite{luo20203d} & - & 43.9 & - & - \\
\midrule
Vanilla VAE & 52.82 & 17.88 & 63.06 & 5.22\\
~~~~~~~ $+ \mathcal{F}_{seq}$ & 34.01 & {\bf 17.18} & 41.02 & 5.24 \\
~~~~~~~ $+ \mathcal{F}_{seg}$ & 26.59 & 17.66 & 33.25 & 5.17 \\
\midrule
Ours & {\bf 26.01} & 17.44 & {\bf 32.70} & {\bf 5.07} \\
\bottomrule
\end{tabular}}
\end{threeparttable}
\vspace{1pt}
\caption{VAE reconstruction error on 3DPW. ``$+\mathcal{F}_{seq}$'' and ``$+\mathcal{F}_{seg}$'' stand for the guidance of sequence-based and segment-based frequency, respectively. ``Accel Error'' stands for the acceleration error, and ``$\downarrow$'' indicates that lower is better.}
\label{tab:prior_ablation}
\end{table}
\begin{table*}
\small
\centering
\begin{threeparttable}
\resizebox{0.98\linewidth}{!}{
\begin{tabular}{clccccccc}
\toprule
&\multirow{2}{*}{Method}&
\multicolumn{4}{c}{{3DPW}}&\multicolumn{3}{c}{{Human3.6M}}\cr
\cmidrule(lr){3-6} \cmidrule(lr){7-9}
& & MPJPE $\downarrow$ & PA-MPJPE $\downarrow$ & MPVPE $\downarrow$ & Accel Error $\downarrow$ & MPJPE $\downarrow$ & PA-MPJPE $\downarrow$ & Accel Error $\downarrow$\cr
\cmidrule(lr){1-9}
\multirow{3}{*}{\rotatebox{90}{\begin{tabular}{c}Frame \\ based\end{tabular}}} & SPIN ~\cite{kolotouros2019learning} & 96.9 & 59.2 & 116.4 & 29.8 & - & {41.1} & - \\
& I2L-MeshNet~\cite{Moon_2020_ECCV_I2L-MeshNet} & 93.2 & 58.6 & 110.1 & 30.9 & {\bf 55.7} & 41.7 & - \\
& Pose2Mesh~\cite{choi2020pose2mesh} & 88.9 & 58.3 & 106.3 & - & 64.9 & 46.3 & - \\
\cmidrule(lr){1-9}
\multirow{5}{*}{\rotatebox{90}{\begin{tabular}{c}Video \\ based\end{tabular}}} & HMMR~\cite{kanazawa2019learning} & 116.5 & 72.6 & 139.3 & 15.2 & - & 56.9 & - \\
& Sun~\etal~\cite{sun2019human} & - & 69.5 & - & - & 59.1 & 42.4 & - \\
& VIBE~\cite{kocabas2020vibe} & 93.9 & 55.9 & 112.6 & 27.0 & 65.6 & 41.4 & 27.3 \\
\cmidrule(lr){2-9}
& Ours\ddag & {85.1} & {\bf 52.5} & {101.3} & {14.6} & 66.0 & 41.3 & {\bf 13.8} \\
& Ours & {\bf 84.0} & {\bf 52.5} & {\bf 99.6} & {\bf 12.7} & {65.6} & {\bf 41.0} & {13.9} \\
\bottomrule
\end{tabular}}
\end{threeparttable}
\vspace{1pt}
\caption{Evaluation on 3DPW and Human3.6M dataset with the SMPL annotations of Human3.6M. ``Ours\ddag" denotes that we directly take the orientation normalized motion as input and output without denoising training scheme.}
\vspace{-1mm}
\label{tab:mesh_construct}
\end{table*}
\begin{table}
\small
\centering
\begin{threeparttable}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccc}
\toprule
Method& MPJPE $\downarrow$ & PA-MPJPE $\downarrow$ & MPVPE $\downarrow$ & Accel Error $\downarrow$ \cr
\cmidrule(lr){1-5}
VIBE~\cite{kocabas2020vibe} & 91.9 & 57.6 & - & 25.4 \\
MEVA~\cite{luo20203d} & 86.9 & 54.7 & - & {11.6} \\
TCMR~\cite{choi2021beyond} & 86.5 & {\bf 52.7} & 103.2 & {\bf 6.8} \\
\cmidrule(lr){1-5}
Ours\ddag & {85.5} & {53.6} & {102.6} & 15.9\\
Ours & {\bf 85.2} & {53.2} & {\bf 102.1} & 14.3\\
\bottomrule
\end{tabular}}
\end{threeparttable}
\vspace{1pt}
\caption{Evaluation on 3DPW without the SMPL annotations of Human3.6M. ``Ours\ddag" means without denoising training scheme.}
\label{tab:mesh_construct_noh36m}
\end{table}
\subsection{Skeleton-based Action Recognition}
\textbf{Problem definition.} Compared with SMPL parameters, skeletons are more broadly applicable and easier to obtain. Thus, we integrate our learnt motion prior into skeleton-based action recognition. This task aims to classify a motion sequence $x_i = \{x_i^t\}_{t=1}^N$, where $x_i^t\in\mathbb{R}^{J\times 3}$ represents the positions of $J$ joints, into a category $y_i\in\mathcal{Y}$.
\textbf{Architecture.} First, we freeze the pre-trained decoder with parameters $\mathbf{\vartheta}$, which specifies the representation space with posterior density $p_\mathbf{\vartheta}(\mathbf{z}|\mathbf{x})$ over the representation $\mathbf{z}$~\cite{kingma2013auto}. Then, to adapt to skeleton-based motion, which spans a subspace of the SMPL-based data, and to embed it into the pre-defined representation space, we remove the SMPL parameters $\hat{\theta^g}$ and $\theta^l$ from $\Phi$ (see Sec.~\ref{subsec:motion_prior_framework}) and retrain the encoder from scratch with only skeleton information as input. Note that this process follows the VAE training paradigm and is agnostic to the action recognition task and dataset.
With the retrained and frozen encoder, each skeleton-based motion $x_i$ is embedded into a representation $z_i$ in our pre-defined prior space. We then feed the representation to a three-layer multi-layer perceptron (MLP) with ReLU activation and $1,024$ hidden units, which outputs classification logits for the final prediction.
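A sketch of the classification head; the input width of $256$ and hidden width of $1{,}024$ come from the text, while the layer layout beyond that and the He-style initialization are assumptions:

```python
import numpy as np

def init_mlp(rng, d_in=256, d_hidden=1024, n_classes=60):
    """Three-layer MLP head over the frozen 256-d motion representation."""
    sizes = [(d_in, d_hidden), (d_hidden, d_hidden), (d_hidden, n_classes)]
    return [(rng.normal(0.0, np.sqrt(2.0 / i), (i, o)), np.zeros(o))
            for i, o in sizes]

def mlp_logits(z, params):
    """z: (B, 256) batch of motion representations -> (B, n_classes)."""
    for k, (W, b) in enumerate(params):
        z = z @ W + b
        if k < len(params) - 1:
            z = np.maximum(z, 0.0)   # ReLU on the hidden layers only
    return z
```

Only these MLP weights are trained; the encoder producing $z_i$ stays frozen.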
\section{Experiments}
In Sec.~\ref{subsec:motion_prior_exp}, we introduce the dataset and implementation details of our motion prior, and we evaluate its performance quantitatively and qualitatively in Sec.~\ref{subsec:motion_prior_eval}.
Then, in Sec.~\ref{subsec:mesh_rec_exp} and~\ref{subsec:motion_pred_exp}, we integrate our method into the \textit{human motion reconstruction} and \textit{motion prediction} tasks to show that our prior provides an efficient motion representation space for inverse problems, with probability as a cue.
We further exploit \textit{skeleton-based action recognition} to show that the learnt representation is distinguishable in Sec.~\ref{subsec:ar}. To show that our prior encodes reasonable transitions between poses, we evaluate on the \textit{motion infilling} task in Sec. \ref{subsec:motion_infilling_exp}.
\textbf{Metric.} Following~\cite{kanazawa2019learning,kocabas2020vibe,luo20203d}, we mainly use the following metrics: mean per-joint position error (MPJPE), MPJPE after Procrustes alignment (PA-MPJPE), and mean per-vertex position error (MPVPE), all measured in $mm$. Besides, the acceleration error, in $mm/s^2$, is used to measure smoothness.
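The two joint-error metrics can be sketched as follows; PA-MPJPE uses a standard similarity Procrustes (Umeyama-style) alignment over rotation, uniform scale, and translation, whose exact conventions here are assumptions:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error over (J, 3) joint sets."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes-aligning pred onto gt."""
    P, G = pred - pred.mean(0), gt - gt.mean(0)      # center both
    U, s, Vt = np.linalg.svd(P.T @ G)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # optimal rotation
    scale = (s * np.diag(D)).sum() / (P ** 2).sum()  # optimal scale
    return mpjpe(scale * P @ R.T + gt.mean(0), gt)
```

By construction, a prediction that differs from the ground truth only by a rigid similarity transform has a PA-MPJPE near zero while its raw MPJPE can be large.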
\subsection{Motion Prior Implementation}
\label{subsec:motion_prior_exp}
\textbf{Dataset.} Following~\cite{luo20203d}, we train our motion prior with a large and varied database of human motion that unifies different mocap datasets, AMASS~\cite{mahmood2019amass}, and split the original dataset into train/val set with pre-processing (details are in Sup. Mat.). To evaluate the generalization ability and effectiveness of our motion prior, we show the VAE reconstruction performance on the unseen in-the-wild {3DPW}~\cite{von2018recovering}.
\textbf{Implementation.} In the training stage, the weights in Eq. (\ref{eq:total-loss}) are set to $\{\lambda_{rec}, \lambda_{kl}, \lambda_{vposer}\}=\{1, 0.01, 0.001\}$. We use the Adam optimizer with a learning rate of $1\times10^{-4}$ and a weight decay of $1\times10^{-4}$. The network is trained for $250$ epochs with a batch size of $60$.
\label{subsec:visual_prior_space_exp}
\begin{figure}
\centering
\vspace{-3mm}
\includegraphics[width=0.99\linewidth]{figures/sample_viz.pdf}
\caption{Illustration of sampled motions (top and bottom rows) and the interpolated motion (middle row) from our prior latent space. The consecutive poses are selected from the first 60 frames of each generated motion at an interval of 10 frames.}
\label{fig:visual_sample}
\end{figure}
\subsection{Motion Prior Evaluation}
\label{subsec:motion_prior_eval}
To demonstrate the effectiveness of our proposed methods, we conduct experiments on the test set of 3DPW. The VAE reconstruction error reported in Tab. \ref{tab:prior_ablation} shows that our prior generalizes well from AMASS to the unseen 3DPW, which is important for the versatility.
Then, we train a vanilla VAE with the global orientation normalization. As shown in Tab. \ref{tab:prior_ablation}, compared with MEVA~\cite{luo20203d}, which reduces the complexity of the data space by resorting to shorter-term motions, we achieve better performance, which demonstrates the effectiveness of our global orientation normalization. The proposed frequency guidance further improves the performance of the vanilla VAE, showing that both the sequence-based and segment-based frequency guidance are effective. Compared with the sequence-based frequency, which indicates the category information mainly determined by the local poses, the segment-based frequency focuses on compressing both orientation transitions and local poses, leading to a better trade-off between MPJPE and PA-MPJPE.
To qualitatively show that we construct an expressive prior space for plausible motions, we randomly sample the latent variable $z_{mot}\in\mathbb{R}^{256}$ from the normal distribution and generate human motions. The top and bottom rows in Fig. \ref{fig:visual_sample} show two motions generated from sampled latent variables $z_{mot}^\alpha$ and $z_{mot}^\beta$. We then average these two variables to obtain the interpolated motion in the middle row of Fig. \ref{fig:visual_sample}. These reasonable results demonstrate that our prior space is plausible as well as tractably and continuously distributed.
\subsection{Human Motion Reconstruction}
\label{subsec:mesh_rec_exp}
\textbf{Dataset.} Following~\cite{kocabas2020vibe}, in training phase, we use the {InstaVariety}~\cite{kanazawa2019learning} dataset to provide pseudo ground-truth 2D annotations. Also, we utilize {3DPW} and {Human3.6M}~\cite{ionescu2013human3} for SMPL parameters supervision, while employing {MPI-INF-3DHP}~\cite{mehta2017monocular} for 3D joints supervision. For evaluation, we show results on the test set of {3DPW} and {Human3.6M}. Specifically, on Human3.6M, we use [S1, S5, S6, S7, S8] as the training set and [S9, S11] as the test set.
\textbf{Experimental results.} As introduced in Sec. \ref{subsec:3d-reconstruct}, we take the VIBE~\cite{kocabas2020vibe} as our backbone while keeping the same setting, \eg, the length of video $T=16$.
Tab.~\ref{tab:mesh_construct} shows the quantitative results compared with the state-of-the-art methods on {3DPW} and {Human3.6M}.
Compared with VIBE, our motion prior improves the smoothness, reducing the acceleration error from $27.0mm/s^2$ to $12.7mm/s^2$ on {3DPW} and from $27.3mm/s^2$ to $13.9mm/s^2$ on {Human3.6M}. Also, because our motion prior naturally encodes reasonable transitions between poses and provides context information, the reconstruction errors (\eg, MPJPE, PA-MPJPE and MPVPE) are also improved. Fig. \ref{fig:3dpw_viz} illustrates the qualitative results in the presence of occlusion. Compared with the adversarial prior in VIBE, which only offers a plausible prediction, our motion prior generates a predicted motion with higher probability and achieves better results. More qualitative results are provided in Sup. Mat.
Furthermore, following~\cite{luo20203d}, we also report the performance without the SMPL parameters of Human3.6M in Tab.~\ref{tab:mesh_construct_noh36m}. Compared with previous works that are carefully designed for the reconstruction task and output predictions frame-wise, our MPJPE and MPVPE are improved. In particular, the improvement over MEVA~\cite{luo20203d}, which constructs a motion prior with a more complex latent space and shorter-term motions, also demonstrates the efficiency of our motion prior.
\begin{figure*}
\centering
\includegraphics[width=0.99\linewidth]{figures/3dpw_visualization1.pdf}
\caption{Qualitative comparison between VIBE (top) and our method (bottom) on the in-the-wild 3DPW.}
\vspace{-1mm}
\label{fig:3dpw_viz}
\end{figure*}
\textbf{Effectiveness of denoising scheme.} Furthermore, we conduct experiments to verify the effectiveness of the proposed denoising training scheme. From Tab.~\ref{tab:mesh_construct} and \ref{tab:mesh_construct_noh36m}, we can see that performance improves in both settings, especially in MPJPE, which shows that denoising the rotation noise helps learn a better representation of the orientation transitions between frames.
\subsection{Motion Prediction}
\label{subsec:motion_pred_exp}
\textbf{Dataset.} Following~\cite{zhang2019predicting}, we train our network on the combination of {InstaVariety}, {PennAction}~\cite{zhang2013actemes} and Human3.6M. Specifically, Human3.6M is split into train/val/test sets as [S1, S6, S7, S8]/[S5]/[S9, S11].
\textbf{Experimental results.} As introduced in Sec.~\ref{subsec:motion_predict}, we take PHD~\cite{zhang2019predicting} as our backbone, use the past $T=15$ frames as input, and train the network to predict the future $25$ frames.
We discard the Dynamic Time Warping in~\cite{zhang2019predicting} and compare under the same setting. As shown in Tab.~\ref{tab:motion_pred}, the performance on the first $20$ frames is improved with our prior. Following~\cite{zhang2019predicting}, we also report the result for the 30th frame, which is not supervised in the training phase and is directly taken from the motion generated by $z_{mot}$; our motion prior still improves the result, because it naturally encodes a sequence of plausible motion starting from the given frames.
\begin{table}
\small
\centering
\begin{threeparttable}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{Method}&
\multicolumn{5}{c}{PA-MPJPE $\downarrow$}\cr
\cmidrule(lr){2-6}
& 1st~ & 5th~ & 10th~ & 20th~ & 30th~\cr
\midrule
Zhang~\etal~\cite{zhang2019predicting}~~ & 57.7~ & 61.2~ & 64.4~ & 67.1~ & 81.1~\\
\midrule
Ours & {\bf 51.9~} & {\bf 61.1~} & {\bf 63.3~} & {\bf 63.9~} & {\bf 80.2~} \\
\bottomrule
\end{tabular}}
\end{threeparttable}
\vspace{1pt}
\caption{Results of motion prediction from video without Dynamic Time Warping. We report the PA-MPJPE for the 1st, 5th, 10th, 20th and 30th frames of the future motion.}
\label{tab:motion_pred}
\end{table}
\subsection{Action Recognition}
\label{subsec:ar}
\begin{table}[]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{clcccc}
\toprule
\multirow{2}{*}{Loss}&\multirow{2}{*}{Method}&
\multicolumn{2}{c}{BABEL-60}&\multicolumn{2}{c}{BABEL-120}\cr
\cmidrule(lr){3-4} \cmidrule(lr){5-6}
& & Top1$\uparrow$ & \hspace{-2mm}Top1-\textit{norm}$\uparrow$ & Top1$\uparrow$ & \hspace{-2mm}Top1-\textit{norm}$\uparrow$\cr
\midrule
\multirow{3}{*}{\rotatebox{90}{\begin{tabular}{c}CE\end{tabular}}} & 2s-AGCN~\cite{BABEL_CVPR_2021} & {\bf 44.9} & {17.2} & {\bf 43.6} & 11.3 \\
& Ours\dag & 38.6 & 22.4 & 36.0 & 17.4\\
& Ours & 40.3 & {\bf 23.6} & 37.8 & {\bf 18.2} \\
\midrule
\multirow{3}{*}{\rotatebox{90}{\begin{tabular}{c}Focal\end{tabular}}} & 2s-AGCN~\cite{BABEL_CVPR_2021} & 37.6 & 25.7 & {\bf 31.7} & 19.2 \\
& Ours\dag & 32.7 & 26.2 & 30.4 & 22.3 \\
& Ours & {\bf 38.1} & {\bf 27.2} & 31.5 & \textbf{25.5} \\
\bottomrule
\end{tabular}}
\vspace{1pt}
\caption{Evaluation on BABEL dataset. ``Ours" means that we only train the MLP and freeze the pre-trained encoder. ``Ours\dag" denotes that we train the whole framework from scratch.}
\label{tab:action_recognition}
\end{table}
\textbf{Dataset.} As introduced in Sec.~\ref{sec:related_work}, BABEL~\cite{BABEL_CVPR_2021} provides more diverse samples with a long-tailed distribution, which is closer to real-world applications. Therefore, we conduct experiments on BABEL and follow the official split in~\cite{BABEL_CVPR_2021}, using the long-tailed BABEL-60 and BABEL-120, which contain $60$ and $120$ action categories, respectively.
\textbf{Experimental results.}
In Tab.~\ref{tab:action_recognition}, we report two metrics: Top1 and Top1-\textit{norm} accuracy (the mean Top1 across categories). Compared with Top1, Top1-\textit{norm} better reflects how well the long-tailed distribution problem is handled. In addition, following~\cite{BABEL_CVPR_2021}, we use both cross-entropy and focal loss in the training phase. On BABEL-60 and BABEL-120, our method achieves better Top1-\textit{norm} performance than the baseline, which is trained end-to-end on the dataset. This demonstrates the effectiveness and generalization ability of the learnt representation.
We also retrain the skeleton encoder together with the MLP in Sec.~\ref{subsec:ar} from scratch in an end-to-end way.
As shown in Tab.~\ref{tab:action_recognition}, the result is worse, likely because decoupled representation learning helps retain more distinguishable information and leads to a more generalizable representation~\cite{kang2019decoupling}.
\subsection{Motion Infilling}
\label{subsec:motion_infilling_exp}
\begin{table}
\small
\centering
\begin{tabular}{lcc}
\toprule
Method & 60 Frames $\downarrow$ & 120 Frames $\downarrow$ \\
\midrule
Interpolation & 10.45 ($\pm$ 15.5) ~~ & ~~ 17.04 ($\pm$ 24.4) \\
Holden~\etal~\cite{holden2016deep} & 15.28 ($\pm$ 19.1) ~~ & ~~ 18.26 ($\pm$ 24.5)\\
Kaufmann~\etal~\cite{kaufmann2020convolutional} & 4.96 ($\pm$ 8.5) ~~ & ~~ 12.00 ($\pm$ 19.5)\\
\midrule
Ours & {\bf 2.01 ($\pm$ 2.15)} ~~ & ~~ {\bf 2.51 ($\pm$ 2.52)}\\
\bottomrule
\end{tabular}
\vspace{2mm}
\caption{Results of motion infilling tasks. 3D joint errors are reported by the mean and standard deviation in \textit{cm} computed over all joints and frames on the validation set.}
\label{tab:motion_infill}
\end{table}
To show that our prior encodes transitions between poses, we exploit the motion infilling task, which aims to fill in missing frames of a human motion. Instead of designing a dedicated network, we utilize our pre-trained decoder with fixed weights and perform motion infilling in an optimization-based manner. We refer the reader to Sup. Mat. for details.
\textbf{Dataset.} We use the dataset released by~\cite{holden2016deep}, where each pose is represented as $22$ joints. Following~\cite{kaufmann2020convolutional}, we evaluate the performance on the validation set, and $T$ frames are randomly selected as the missing frames in each motion.
\textbf{Experimental results.} Because the data is represented as skeletons, we regress the $22$ joints from the predicted SMPL model so as to optimize $z_{mot}$. Tab.~\ref{tab:motion_infill} shows the results in two settings: i) $T=60$ and ii) $T=120$. We outperform previous methods trained on this dataset, which shows the generalization ability of our motion prior and indicates that it naturally represents the inherent transitions between poses. Fig.~\ref{fig:motion_infilling} illustrates the qualitative results; the comparison with the ground truth is provided in Sup. Mat.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/motion_infilling_fig.pdf}
\caption{Illustration of motion infilling. We visualize six consecutive poses with an interval of ten frames. Gray poses denote the known frames and green ones the generated poses.}
\label{fig:motion_infilling}
\end{figure}
\section{Conclusion}
In this paper, we summarize the indispensable properties of a motion prior and propose a versatile motion prior that models the inherent probability distribution of motions. To keep the learnt representation space efficient, we introduce a global orientation normalization and a two-level frequency guidance. We then adopt a denoising training scheme to provide a consistent and distinguishable representation for each motion. Finally, we embed the proposed motion prior into different prevailing backbones and conduct extensive experiments on different tasks. The results show that the motion prior improves the baselines and achieves state-of-the-art performance, demonstrating its versatility and effectiveness.
\noindent\textbf{Acknowledgments.}
This work is sponsored by the National Key Research and Development Program of China (2019YFC1521104), National Natural Science Foundation of China (61972157), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Science and Technology Commission (21511101200), Art major project of National Social Science Fund (I8ZD22), National Natural Science Foundation of China under Grant (62176092), Shanghai Science and Technology Commission (21511100700) and SenseTime Collaborative Research Grant.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\subsection{Background}
A successful hydrodynamic theory of granular media
could allow scientists and engineers to exploit the powerful techniques of
fluid dynamics to describe granular phenomena.
Recent experiments \cite{bocquet, rericha} and simulations \cite{ramirez}
demonstrate the potential for hydrodynamic theory to describe granular
media; however, the validity of such methods has not yet been
established for a general description of granular flow phenomena
\cite{dufty2002, campbell,rericha2004}.
Several proposed rapid granular flow models use equations of motion
for continuum fields -- number density $n$, velocity ${\bf u}$, and
granular temperature $T$ (${3\over2} T$ is the average kinetic energy due
to random particle motion) \cite{haff,jenkinsandsavage, lun1984}. In one
approach, particle interactions are modeled with binary, inelastic
hard-sphere collision operators in kinetic theory to derive continuum
equations to Euler \cite{goldshtein1}, Navier-Stokes
\cite{jenkinsandrichman}, and Burnett \cite{selaandgoldhirsch} order. In
this paper, we use 3D simulations of continuum equations to
Navier-Stokes order and 3D inelastic hard-sphere molecular dynamics
(MD) simulations to investigate the onset of standing wave patterns in
vertically oscillated granular layers.
\subsection{Standing wave patterns in oscillated granular layers}
Vertically oscillated layers have provided an important testbed for
granular research. Flat layers of grains on a
plate oscillating sinusoidally in the direction of gravity exhibit
convection \cite{knight96},
clustering \cite{falcon99}, shocks \cite{goldshtein2}, steady-state
flow fields far from the plate \cite{brey01}, and standing wave pattern
formation \cite{melo}.
A layer of grains on a plate oscillating sinusoidally in the direction of
gravity with frequency $f$ and amplitude $A$ will leave the plate at some
time in the cycle if the maximum
acceleration of the plate is greater than that of gravity. The layer
dilates above the plate, then collides with the plate later in the cycle
and is compressed on the plate by this collision. Above a critical value of
acceleration, standing wave patterns spontaneously form in the layer. This
pattern is subharmonic with respect to the plate, repeating every $2/f$
\cite{melo}.
Various subharmonic standing wave patterns, including stripe, square, and
hexagonal patterns, have been found experimentally,
depending on the nondimensional frequency $f^{*}=f\sqrt{H/g}$ and the
nondimensional accelerational amplitude $\Gamma=A\left(2\pi f\right)^2/g$,
where $H$ is the depth of the layer as poured, and $g$ is the acceleration
due to gravity \cite{melo}.
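As a concrete sketch, the two nondimensional control parameters can be evaluated directly from the drive frequency, amplitude, and layer depth; the helper below is illustrative only, and the value $g=9.81\;\mathrm{m/s^2}$ and the specific amplitude in the example are our own assumptions.

```python
import math

def nondimensional_params(f, A, H, g=9.81):
    """Nondimensional drive parameters for an oscillated granular layer:
    f* = f * sqrt(H / g)  and  Gamma = A * (2*pi*f)^2 / g."""
    f_star = f * math.sqrt(H / g)
    gamma = A * (2.0 * math.pi * f) ** 2 / g
    return f_star, gamma

# Example: f = 56 Hz, H = 5.4 sigma with sigma = 0.1 mm, and an
# (assumed) amplitude A = 0.174 mm: f* ~ 0.42, Gamma ~ 2.2
f_star, gamma = nondimensional_params(f=56.0, A=1.74e-4, H=5.4e-4)
```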
Studies using hydrodynamic equations have not yet yielded
the standing wave patterns observed in experiments. Here we investigate the
onset of ordered standing wave patterns
using fully three-dimensional (3D) simulations of continuum equations to
Navier-Stokes order as well as molecular dynamics (MD) simulations.
We use a continuum model for frictionless, inelastic
particles, and
investigate the onset of stripe patterns.
\subsection{Fluctuating hydrodynamics}
Near the onset of convection patterns in Rayleigh-B\'{e}nard convection of
fluids, fluctuations caused by thermal noise create deviations from
dynamics predicted from linear theory. These fluctuations are described by
the addition of terms to the Navier-Stokes equations; this theory is known
as fluctuating hydrodynamics
\cite{landauandlifshitz1959,zaitsev,swifthohenberg}. Recent experiments
have shown that fluctuating
hydrodynamics theory accurately describes the dynamics of fluids near the
onset of convection \cite{wu, rehberg, oh}.
Experimental investigations of coherent
fluctuations and pattern formation in oscillated granular layers have indicated
that fluctuations due to the movement of individual grains play a much more
significant role in the collective behavior of granular media than do
thermal fluctuations in ordinary fluids \cite{goldman2004}. Thus, a
consistent theory of granular hydrodynamics may need to include fluctuations.
\subsection{Model system}
We simulate a layer of grains on an impenetrable plate which oscillates
sinusoidally in the direction of gravity.
The layer depth at rest is approximately $H=5.4\sigma,$ where the grains
are modeled as identical,
frictionless spheres with diameter $\sigma$ and coefficient of restitution
$e=0.7$. For most of the paper, we study the onset of patterns as a function of
$\Gamma$, while the frequency of
plate oscillation is held constant at $f^{*}=0.4174$.
This corresponds to
a frequency of $56$ Hz for particles with a diameter of $0.1$
mm.
For $\Gamma\gtrsim2.5$, stripes are seen experimentally for
a range of parameters, including $f^{*}=0.4174$, $H=5.4$ \cite{melo}.
In
Sec.~\ref{section-dispersion} and Sec.~\ref{section-wavelength}, frequency
is varied to
investigate the effect of changing frequency on pattern formation.
Experiments \cite{goldman03} and MD simulations \cite{moon03} indicate that
inter-particle friction plays
an important
role in the standing wave patterns. MD simulations with friction between
particles have quantitatively reproduced the stripe, square, and hexagonal
subharmonic standing wave patterns seen experimentally for a wide range of
parameters \cite{bizon98}.
However, MD simulations using frictionless particles do not yield stable
square or
hexagonal patterns, but only yield stripe patterns, and exhibit the onset
of patterns at lower $\Gamma$ than that seen for frictional particles
\cite{moon03}. This result is consistent with experiments which show that
reducing
friction by adding
graphite can de-stabilize square patterns \cite{goldman03}. In
this study, we neglect the effects of friction in our continuum and MD
simulations, and study only the onset of stripe patterns in frictionless layers.
To
investigate other patterns such as squares or hexagons, simulations would
need to include friction between particles.
We use MD
and continuum
simulations to investigate the dynamics of this system near onset, and use
simulations of the Swift-Hohenberg (SH) model equation to compare our
results between the two. Section II describes the
methods used to simulate and analyze these patterns, Sec. III compares
patterns formed in MD and continuum simulations.
Section IV compares
MD simulations to Swift-Hohenberg theory, and Sec. V presents
our conclusions.
\section{Methods}
\subsection{Molecular dynamics simulation}
We use an inelastic hard sphere molecular dynamics simulation,
which was previously used in conjunction with the continuum
simulation used in this paper to model shock waves in a
granular shaker \cite{bougie2002}.
This same MD code with friction added has been found to describe well the
patterns observed in experiments on oscillating granular layers
\cite{bizon98,moon02}.
The collision model assumes instantaneous binary collisions in which
energy is dissipated, as characterized by the normal coefficient of
restitution $e$. We neglect surface
friction between particles, as well as between the particles and the
plate. To prevent inelastic collapse, we use a coefficient of restitution
which depends on the relative colliding velocity of the particles $v_n$:
$e\left( v_n\right) = 1-0.3\left( v_n/
\sqrt{g\sigma}\right)^{3/4}$ for $v_n < \sqrt{g\sigma}$, and
$e=0.7$ otherwise \cite{bizon98}.
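A minimal transcription of this velocity-dependent restitution rule is given below; the choice of units with $g=\sigma=1$ is an assumption made for illustration.

```python
import math

def restitution(v_n, g=1.0, sigma=1.0):
    """Normal coefficient of restitution in the MD collision model:
    e(v_n) = 1 - 0.3 * (v_n / sqrt(g*sigma))**(3/4) for v_n < sqrt(g*sigma),
    and e = 0.7 otherwise.  The velocity dependence prevents inelastic
    collapse: e -> 1 as the relative normal velocity v_n -> 0."""
    v_ref = math.sqrt(g * sigma)
    if v_n < v_ref:
        return 1.0 - 0.3 * (v_n / v_ref) ** 0.75
    return 0.7
```

Note that the rule is continuous at $v_n=\sqrt{g\sigma}$, where both branches give $e=0.7$.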
The MD simulations are calculated in a box
of size $L_x \times L_y$ in
the horizontal directions $x$ and $y$, where $L_x$ and $L_y$ are varied to
investigate patterns with different wavelengths.
The simulations use periodic boundary
conditions in the horizontal directions, an impenetrable lower plate
which oscillates sinusoidally between $z=0$ and $z=2A$, and an
upper plate fixed at a height $z=200\sigma$, as in the previous
investigation of shock propagation \cite{bougie2002}.
For each MD simulation, $\left(L_x/\sigma\right)
\times \left(L_y/\sigma\right) \times6$ particles
were used. In actual packings seen experimentally, $6/\sigma^2$ particles
per unit area
of the bottom plate corresponds to a layer depth $H=5.4\sigma$ as poured,
representing a volume fraction $\nu\approx0.58$ \cite{bizon98}. The
total mass of the layer matches that of the continuum
simulations.
\subsection{Continuum simulation}
We use a continuum simulation previously used to model shock waves in a
granular shaker \cite{bougie2002}. Our simulation numerically integrates
continuum equations of Navier-Stokes order proposed by Jenkins and Richman
\cite{jenkinsandrichman} for a dense gas composed of frictionless
(smooth), inelastic hard spheres. We integrate these
hydrodynamic equations to find number density, momentum, and granular
temperature, using a second order finite difference scheme on a uniform
grid in 3D with first order adaptive time stepping \cite{bougie2002}.
As in our MD simulations, the granular fluid in the continuum simulations
is contained between two
impenetrable horizontal plates at the top and bottom of the container,
where the lower plate oscillates sinusoidally between height $z=0$ and
$z=2A$. In our MD simulations, the ceiling is fixed in space
at a height of $z=200\sigma$, but to minimize computation time, the
ceiling in continuum simulations is located at height $80 \sigma$ above the
lower plate and oscillates with the bottom plate.
In our previous study of shock formation,
changing the ceiling height from $200 \sigma$ to $80 \sigma$ resulted in a
fractional root mean square difference of less than $1 \%$ in the shock
location over one cycle \cite{bougie2002}.
As in the MD simulations, we use periodic horizontal boundary conditions
and boxes of size $L_x \times L_y$ in
the horizontal directions $x$ and $y$, where $L_x$ and $L_y$ are varied.
In each case, continuum simulations are compared to MD simulations with the
same horizontal dimensions $L_x$ and $L_y$.
The numerical methods, boundary
conditions at the top and bottom plate, and grid spacing are the same as
used in the previous study of shocks~\cite{bougie2002}.
The energy loss due to collisions in continuum
simulations is characterized by a single parameter, the normal coefficient
of restitution $e=0.70$. Throughout this paper, we use units such that
particles in MD simulations have mass unity, and the total mass of the
layer in the continuum simulations matches that used in MD simulations.
\subsection{Characterizing patterns}\rm\label{section-characterizing}
To visualize peaks and valleys formed by standing wave patterns, we calculate
the height of the center of mass of the layer, $z_{cm}\left( x, y,
t\right)$ as a function of
horizontal
location in the cell at various times in the cycle. At a given time $t_0$
and horizontal location $\left( x_0, y_0
\right)$, $z_{cm}\left(x_0, y_0, t_0\right)$ is the
center of mass of all particles whose horizontal coordinates lie within a bin
of size $\Delta x_{bin} \times \Delta y_{bin}$ centered at $\left( x_0, y_0
\right)$. For
continuum simulations, we use the simulation grid size to define the bins:
$\Delta x_{bin}
=\Delta x=2\sigma$ and $\Delta y_{bin}=\Delta y=2\sigma$. For MD
simulations, we use bins
of size
$2\sigma\times2\sigma$ in Section~\ref{section-patterns} to compare to
continuum simulations with the same bin size. Peaks in the
pattern correspond to maxima of
$z_{cm}$, and valleys correspond to minima.
To measure the amplitude
of patterns and fluctuations in continuum and MD simulations, we examine
the deviation of the height of the
center of mass of the layer as a function of horizontal location
in the
cell from the center of mass height averaged over the cell at that phase in
the cycle:
\begin{equation}\psi(x,y,t)=z_{cm}(x,y,t)-\left< z_{cm}(x,y,t)
\right>,\end{equation} where $x$ and $y$ are the
horizontal coordinates, $t$ is the time in the cycle, $z_{cm}(x,y)$ is the
height of the center of mass of the layer at horizontal location $(x,y) $,
and the
brackets represent an average over all horizontal locations in the cell at
a given time $t$.
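The deviation field and its mean square follow directly from a binned $z_{cm}$ array; the sketch below assumes an arbitrary grid shape for the horizontal bins.

```python
import numpy as np

def psi_field(z_cm):
    """psi(x,y,t) = z_cm(x,y,t) - <z_cm>, where the bracket is an
    average over all horizontal bins at a fixed phase of the plate."""
    return z_cm - z_cm.mean()

def mean_square_psi(z_cm):
    """<psi^2>: large for layers with high-amplitude patterns or
    fluctuations, and zero for a perfectly flat layer."""
    return float(np.mean(psi_field(z_cm) ** 2))
```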
Throughout this paper, we characterize the patterns at the beginning of a
sinusoidal oscillation cycle, such that the plate is at its equilibrium
position and moving upwards.
Using this definition, $\left< \psi^2(t) \right>$ represents the mean square
deviation of the height of the layer from the mean height of the layer at
that phase of the plate.
We note that $\left< \psi^2 \right>$ is large for layers with high
amplitude
patterns or fluctuations, and goes to zero as the layer becomes perfectly flat.
In addition to $\left< \psi^2 \right>$, we
distinguish between ordered patterns (stripes) and disordered fluctuations
by characterizing the long range order of the pattern.
To characterize the long range order of the
patterns, we first calculate the power spectrum of the pattern:
$S\left(k_x,k_y,t\right)=\left|\tilde{\psi}\left(k_x,k_y,t\right)\right|^2$,
where
$\tilde{\psi}\left(k_x,k_y,t\right)=\int_{0}^{L_x}\int_{0}^{L_y}\psi\left(x,y,t\right)e^{-ik_x
x}e^{-ik_y y}dx dy$. We then transform to polar coordinates in $k$ space:
$k_r=\sqrt{k_x^2+k_y^2}$, $k_{\theta}=\tan^{-1}\left(k_y/k_x\right)$ to find
$S(k_r,k_{\theta})$ in the range $0\leq k_{\theta} < \pi$.
We integrate radially to find the
angular orientation of the power spectrum:
$S(k_{\theta})=\int_0^{K}
S\left(k_r,k_{\theta}\right) k_r dk_r,$ where $K=\frac{2\pi\Delta x_{bin}}{L_x}$.
We bin $k_{\theta}$ into 21 bins
between $k_{\theta}=0$ and $k_{\theta}=\pi$, and characterize the long range
order of the patterns by the fraction of the total integrated power that
lies in the bin
with the maximum power:
\begin{equation}P_{max}={S_{max}\left(k_{\theta}\right)\over\int_{0}^{\pi}S\left(k_{\theta}\right)dk_{\theta}},\label{eq:order}\end{equation}
where $S_{max}\left(k_{\theta}\right)$
is the integrated power within an angle $\pi/21$ of the maximum value of
$S(k_{\theta})$. Thus $P_{max}$ is the fraction of the total power that lies within
approximately $\pi/21$ of the angular location of the maximum power. For a
perfectly disordered state, with equal power in all
directions, $P_{max}$ would approach ${1\over 21} \approx 0.05$, while
$P_{max}=1$ for a state with
all power in a single bin.
Thus $P_{max}$
provides a measure of order when stripes form.
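The steps above (2D power spectrum, angles folded into $0\leq k_{\theta}<\pi$, 21 angular bins, fraction of power in the dominant bin) can be sketched as follows. The FFT-based discretization and the exclusion of the $k=0$ mean mode are our own illustrative choices, not necessarily the exact binning used here.

```python
import numpy as np

def p_max(psi, nbins=21):
    """Fraction of total spectral power in the dominant angular bin.
    Approaches ~1/nbins for a disordered field and 1 when all power
    lies along a single orientation (stripes)."""
    s = np.abs(np.fft.fft2(psi)) ** 2            # power spectrum S(kx, ky)
    kx = np.fft.fftfreq(psi.shape[1])[None, :]
    ky = np.fft.fftfreq(psi.shape[0])[:, None]
    theta = np.mod(np.arctan2(ky, kx), np.pi)    # fold angles into [0, pi)
    mask = (kx != 0) | (ky != 0)                 # drop the k = 0 mean mode
    power, _ = np.histogram(theta[mask], bins=nbins, range=(0.0, np.pi),
                            weights=s[mask])
    return power.max() / power.sum()
```

For a pure stripe field this returns a value close to 1, while an uncorrelated random field gives a value near $1/21$.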
\section{Pattern onset and dispersion}\label{section-patterns}
\subsection{Stripe patterns}\label{section-bigover}
Experimental investigations of shaken granular
layers have shown that above a critical acceleration of the plate
$\Gamma_c$, standing wave patterns form spontaneously. These patterns
oscillate subharmonically, repeating every $2/f$, so that the location of
a peak of the pattern becomes a valley after one cycle of the plate, and
vice versa \cite{melo}.
Continuum and MD simulations produce standing wave patterns for
$\Gamma=2.2$ and
$f^{*}=0.4174$ (Fig.~\ref{overpic}). Alternating peaks and valleys form a
stripe
pattern which oscillates at $f/2$ with respect to the plate oscillation; a
location in the cell which represents a peak during one cycle will become a
valley the next cycle, and then return to a peak on the following cycle.
For a box of size
$126 \sigma \times 126 \sigma$ in the horizontal direction, six wavelengths
fit in the box in both MD and continuum simulations, yielding a wavelength
of $21\sigma \pm 4\sigma$ in both continuum and MD simulations
(Fig.~\ref{overpic}).
\begin{figure}
\subfigure{\label{overmd}\scalebox{.3}{\includegraphics{overmd.eps}}}\\
\vspace{-2.cm}
\hspace{-1.1cm}
\subfigure{\label{overcont}\scalebox{.3}{\includegraphics{overcont.eps}}}\\
\caption{\label{overpic} An overhead view of a layer of grains, showing the
center of mass height $z_{cm}$ as a function of horizontal position
$\left( x,y\right)$ in a cell with horizontal dimensions $L_x \times L_y = 126\sigma
\times 126\sigma$, from (a) MD simulations and (b) continuum
simulations. Peaks of the layer corresponding to large center of mass
height $z_{cm}$ are shown in white; valleys corresponding to low
$z_{cm}$ are shown in black.
}
\end{figure}
\subsection{Dispersion Relations in Continuum, MD, and Experiment}\label{section-dispersion}
Experiments have shown that the wavelength $\lambda$ of standing
wave patterns in
shaken granular layers depends on the frequency of the plate oscillation
\cite{melo1993, clement1996, umbanhowar2000}. For a range of layer depths
and oscillation frequencies, experimental data for
frictional particles near the onset of patterns were found to be fit by the
function $\lambda^{*}=1.0+1.1f^{*-1.32\pm0.03}$, where
$\lambda^{*}=\lambda/H$ \cite{umbanhowar2000}.
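The experimental fit is straightforward to evaluate; the helper below is a direct transcription using the central value of the exponent (the quoted $\pm0.03$ uncertainty is ignored).

```python
def lambda_star(f_star):
    """Experimental dispersion relation near pattern onset:
    lambda* = lambda / H = 1.0 + 1.1 * f***(-1.32)
    (central exponent; the fit quotes -1.32 +/- 0.03)."""
    return 1.0 + 1.1 * f_star ** (-1.32)

# At f* = 0.4174 the fit gives lambda* ~ 4.5, i.e. roughly 24 sigma for
# H = 5.4 sigma; at f* = 0.25 it gives lambda* ~ 7.9, consistent with
# the lambda = 42 sigma observed at that frequency.
```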
We investigate the frequency dependence of standing waves in continuum
simulations and in MD simulations of frictionless particles.
Dimensionless accelerational amplitude $\Gamma=2.2$ was held constant
while dimensionless frequency $f^*$ was varied. Simulations
were conducted in a box of horizontal extent $L_x=168\sigma$ and
$L_y=10\sigma$. This orientation causes stripe patterns to form
parallel to the $y$-axis. The
dominant wavelength in each case was calculated from $S\left(
k_x,k_y,t\right)$ by finding the wavenumber $k_x$
in the $x$-direction which exhibited the maximum power during
one cycle of the oscillatory state of the pattern. Due to the
periodic boundary conditions and finite box size, wavelengths must
fit in the box an integer number of times. This finite size
effect of quantized wavelength yields
inherent uncertainty in the wavelength that would be selected in an infinite
box.
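The finite-size quantization can be made concrete: periodic boundaries admit only $\lambda_n = L_x/n$ for integer $n$, and half the spacing to the neighboring allowed modes is a natural uncertainty estimate (the error-bar convention here is our own).

```python
def dominant_wavelength(L_x, n):
    """Wavelength lambda = L_x / n selected in a periodic box, with a
    finite-size uncertainty of half the spacing between the adjacent
    allowed modes L_x/(n-1) and L_x/(n+1) (requires n >= 2)."""
    lam = L_x / n
    err = 0.5 * (L_x / (n - 1) - L_x / (n + 1))
    return lam, err

# Six wavelengths in a 126-sigma box: lambda = 21 sigma with an
# uncertainty of ~3.6 sigma, consistent with the quoted 21 +/- 4 sigma.
```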
Wavelengths found in continuum and MD simulations are compared to the
dispersion relation fit to experimental data in Fig.~\ref{dispersion}.
Investigation is limited to
$f^{*}>0.15$ by the box size, as only two wavelengths fit in
the box in continuum simulations at this frequency. Neither simulation
produced patterns for this box size for $f^{*}\gtrsim0.45$.
Both simulations agree quite well with the experimental fit throughout the
range $0.15\lesssim f^{*} \lesssim 0.45$.
Comparison to the
experimental fit shows that both MD and continuum simulations produce
wavelengths consistent with experimental results for frictional particles.
These data
indicate that friction seems to be
unimportant in wavelength selection through this parameter range.
\begin{figure}
\begin{center}
\scalebox{.5}{\includegraphics{dispersion.eps}}\\
\end{center}
\vspace{0cm}
\caption{\label{dispersion}
Dispersion relation for
stripes which form perpendicular to the long dimension of
cells with horizontal dimensions $168\sigma\times10\sigma$. Data for
continuum simulations are shown as triangles and MD simulations as circles;
points where continuum and MD simulations yield the same wavelength are
shown as squares.
In both continuum
and MD simulations, the dominant
wavelength of the final oscillatory state $\lambda$ fits very well to the
dispersion relation found in experiments
$\lambda^{*}=1.0+1.1f^{*-1.32\pm0.03}$ (solid line) \cite{umbanhowar2000}.
Error bars in both
simulations are calculated exclusively from discretization due to
periodic boundary conditions in a finite size box.
}
\end{figure}
\subsection{Layers Above and Below the Onset of Patterns}
Continuum and MD simulations exhibit pattern formation above a
critical acceleration of the plate; however, standing wave patterns are not
observed below a critical value of $\Gamma$ (Fig.~\ref{GAMcomparison}).
For $\Gamma=2.2$, both MD
(Fig.~\ref{GAM2.2md}) and continuum (Fig.~\ref{GAM2.2cont}) simulations show well
defined peaks
and valleys which form stripe patterns with two wavelengths fitting in the
box of size $L_x=L_y=42\sigma$. The only difference between this system and that investigated in
Sec.~\ref{section-bigover} is the horizontal size of the cell; these
patterns look very similar to a section of the patterns formed in the
larger cell
(Fig.~\ref{overpic}). Reducing the accelerational amplitude to $\Gamma=1.9$
while keeping all other parameters constant yields no ordered waves in
either MD
(Fig.~\ref{GAM1.9md}) or continuum
(Fig.~\ref{GAM1.9cont}). Thus both continuum and MD simulations appear to
have a critical value of $\Gamma$ somewhere in the range
$1.9\leq\Gamma_c\leq2.2$, such that no patterns are formed for
$\Gamma<\Gamma_c$, and patterns are formed for $\Gamma>\Gamma_c$. This
critical value is lower than that found in experiments with frictional
particles, where a similar onset of patterns is found at a critical value
of $\Gamma\approx2.5$ \cite{melo}.
\begin{figure}
\hspace{-.34in}
\subfigure{\label{GAM1.9md}\scalebox{.27}{\includegraphics{GAM1.9md.eps}}}
\subfigure{\label{GAM2.2md}\scalebox{.27}{\includegraphics{GAM2.2md.eps}}}\\
\vspace{-2.5cm}
\hspace{7.9cm}
\label{GAMcbar}\scalebox{.27}{\includegraphics{GAMcbar.eps}}\\
\vspace{-1.8cm}
\hspace{-.75cm}
\subfigure{\label{GAM1.9cont}\scalebox{.27}{\includegraphics{GAM1.9cont.eps}}}
\hspace{.115cm}
\subfigure{\label{GAM2.2cont}\scalebox{.27}{\includegraphics{GAM2.2cont.eps}}}
\caption{\label{GAMcomparison}
An overhead view of the layer of grains, showing the center of mass height
$z_{cm}(x,y)$ of
the layer as a function of location in the box, for (a) MD simulations with a
plate acceleration with respect to gravity $\Gamma=1.9$, (b) MD
simulations with $\Gamma=2.2$, (c) continuum simulations with $\Gamma=1.9$,
and (d) continuum simulations with $\Gamma=2.2$. Peaks corresponding to
large $z_{cm}$ are shown in white, while valleys corresponding to
small $z_{cm}$ are shown in black.
The grayscale for all four images is given on the right.
}
\end{figure}
Despite the similarities, differences between MD and continuum simulations
are observable. For $\Gamma=1.9$, the continuum
simulation yields a very smooth, flat layer (Fig.~\ref{GAM1.9cont}), while MD
exhibits visible fluctuations (Fig.~\ref{GAM1.9md}). For $\Gamma=2.2$, the
continuum simulations produce stripes (Fig.~\ref{GAM2.2cont}) which are much
smoother than those found in MD simulation (Fig.~\ref{GAM2.2md}).
To explore the differences between the two simulations, we investigate
the onset of patterns in
more detail in continuum simulations and MD simulations separately.
\subsection{Onset of patterns in continuum simulations}\rm
We investigate the
onset of patterns in continuum simulations by determining
$\left<\psi^2\right>$ of
standing waves for different values of $\Gamma$.
Each simulation begins with a flat layer above
the plate with small amplitude random fluctuations. The simulation is run
until it reaches a periodic state, at which point $\left<\psi^2\right>$ is calculated
as an average over ten cycles of the same phase of the plate.
For $\Gamma \lesssim 1.95$, the initial fluctuations decay rapidly until the
layer is quite flat, as represented by
negligible values of $\left<\psi^{2}\right>$ (Fig.~\ref{contgrowth}). As
$\Gamma$ increases, there is a sudden onset to large amplitude waves, as
seen by the sudden jump in $\left<\psi^{2}\right>$ in Fig.~\ref{contgrowth}.
This onset occurs at the
critical value $\Gamma_c=1.955\pm0.005$. For $\Gamma < \Gamma_c$, initial
fluctuations decay until the layer is very flat, while for all layers above
onset ($\Gamma > \Gamma_c$), these waves produce ordered patterns of
stripes similar to those in Fig.~\ref{GAM2.2cont}.
\begin{figure}
\begin{center}
\scalebox{.5}{\includegraphics{contgrowth.eps}}
\end{center}
\caption{\label{contgrowth}
The mean square deviation $\left< \psi^2 \right>$ of the local center of
mass height from the average center of mass height of the entire layer as a
function of accelerational amplitude $\Gamma$ for MD (triangles) and
continuum (circles) simulations. The vertical dotted line represents the
onset of stripe patterns in the continuum simulations.
}
\end{figure}
\subsection{Onset of patterns in molecular dynamics simulations}\rm
We examine the onset of
patterns in MD simulations using the same methods as for the continuum
equations. Figure~\ref{contgrowth} shows the mean square height deviation $\left<
\psi^2 \right>$
as a function of $\Gamma$ for MD simulations as well as for continuum
simulations. For each value of $\Gamma$, the simulation was run for 400
cycles
of the plate until the layer reached a periodic state, then $\left<\psi^2\right>$ and
$P_{max}$ were calculated from an average of the next 100
cycles.
As in continuum simulations, $\left<\psi^2\right>$ grows
with
increasing $\Gamma$. Unlike the continuum results, $\left<\psi^2\right>$ is
non-negligible in MD simulations even for $\Gamma < 1.95$.
There is still a sharp increase in the slope of the
curve, but it is delayed until $\Gamma >2.1$.
\section{The role of fluctuations}\rm\label{section-noise}
The MD simulations display
an onset of ordered stripes that is delayed with respect to those found in
continuum, and exhibit non-negligible
$\left<\psi^2\right>$ even
below the onset of ordered stripes.
Since the
hydrodynamic model used in the continuum simulations does not include a
stochastic noise term characteristic of fluctuating hydrodynamics, the
differences between the continuum and MD simulations may be consistent with
the presence of noise in the MD simulations due to the small number of
particles per wavelength. To test the hypothesis that these differences
are consistent with the presence of fluctuations in molecular dynamics
simulations, we compare MD
simulations to results from the Swift-Hohenberg model.
\subsection{Swift Hohenberg simulation}
The Swift-Hohenberg (SH) model was
developed to describe thermal noise-driven
phenomena near the onset of long range order in Rayleigh-B\'{e}nard
convection \cite{swifthohenberg}.
Recent experimental evidence suggests
similar phenomena in shaken granular experiments can
be interpreted using the methods of fluctuating hydrodynamics
\cite{goldman2004}.
The SH model describes the time evolution of a scalar field
$\psi_{SH}(\bf{x}\rm,t)$:
\begin{equation} {\partial\psi_{SH}\over\partial t}=
\left(\epsilon-\left(1+\nabla^2\right)^2\right)\psi_{SH}- \psi_{SH}^3 +
\eta\left(\bf{x}\rm,t\right), \label{eq:SH}\end{equation}
where $\epsilon$ is the bifurcation parameter, and $\eta$ is a stochastic
noise term of strength $F$, such that
$\left<\eta\left(\bf{x}\rm,t\right)
\eta\left(\bf{x'}\rm,t'\right)\right> = 2
F\delta\left(\bf{x}\rm-\bf{x'}\rm\right)\delta\left(t-t'\right)$.
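A minimal explicit discretization of Eq.~\ref{eq:SH} on a periodic grid illustrates how the stochastic term enters; this is our own sketch with an Euler step and a 5-point Laplacian, not the scheme of \cite{cross} used for the results below.

```python
import numpy as np

def sh_step(psi, eps, F, dt, dx, rng):
    """One explicit Euler step of the stochastic Swift-Hohenberg equation
    dpsi/dt = (eps - (1 + lap)^2) psi - psi^3 + eta, with the delta-
    correlated noise <eta eta'> = 2F delta(x-x') delta(t-t') discretized
    so each grid point receives an increment of variance 2*F*dt/dx^2."""
    def lap(f):  # 5-point Laplacian with periodic boundaries
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2
    l1 = lap(psi)
    rhs = eps * psi - (psi + 2.0 * l1 + lap(l1)) - psi**3
    eta = np.sqrt(2.0 * F * dt) / dx * rng.standard_normal(psi.shape)
    return psi + dt * rhs + eta
```

In the mean field limit $F=0$ and for $\epsilon<0$, every Fourier mode of the linearized equation decays, so an initially rough field relaxes toward the flat state.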
In the absence of stochastic noise ($F=0$), called the mean field (MF)
approximation, there is a sharp onset of stripe patterns
with long range order at $\epsilon=\epsilon^{MF}_c=0$
\cite{swifthohenberg,scherer}. For $F\neq0$, the
effect of noise is to delay the onset of long
range (LR) order to a new critical value: $\epsilon_c^{LR}>0$. The delay
in onset is characterized by
$\Delta\epsilon_c=\epsilon_c^{LR}-\epsilon_c^{MF}$. In
addition, the presence of noise creates fluctuations below the
onset of long range order ($\epsilon<\epsilon_c^{LR}$).
The Swift-Hohenberg simulation displays a forward bifurcation to stripes at
onset, while MD simulations show slight ($<1\%$) hysteresis
\cite{goldman2004}. A more complicated SH
model \cite{sakaguchi} yields square patterns
with hysteresis; however, in this work we compare stripe formation in MD
simulations to a simpler model of the effects of noise near a bifurcation
(Eq.~\ref{eq:SH}).
We numerically solve the SH equation using the scheme
described in \cite{cross}, with the number of gridpoints $N=42\times 42$,
and
periodic boundary
conditions. We use integration timesteps of 0.5, and
the size of each gridspace in the horizontal directions
$\Delta x=\Delta y=0.29$ so that two
wavelengths of the resulting pattern fit in the box, to match MD and continuum
simulations.
The simulation was allowed to run for 8,000
timesteps to reach a final pattern; then $\left<\psi_{SH}^2\right>$ and
$P_{max}$ were calculated from an average of the next 2,000
timesteps, in the same way as $\left<\psi^2\right>$ and $P_{max}$ were
calculated for MD and continuum simulations in
Section~\ref{section-characterizing}.
\subsection{Comparing Swift-Hohenberg and molecular dynamics simulations}\rm\label{section-fit}
To find the strength of the noise and the mean field onset,
we fit the SH
model to the data from MD simulations (Fig.~\ref{SHcomp}) by varying three
parameters: $F$, $\Delta\epsilon_c$, and an overall scale
factor, as in \cite{oh, goldman2004}.
\begin{figure}
\begin{center}
\subfigure{\label{SHamp}\scalebox{.5}{\includegraphics{SHamp.eps}}}\\
\vspace{-.6cm}
\hspace{.5cm}
\subfigure{\label{SHangle}\scalebox{.5}{\includegraphics{SHangle.eps}}}
\end{center}
\caption{\label{SHcomp}
Comparison of MD simulations (triangles) to the Swift-Hohenberg model
(solid lines) for (a) $\left< \psi^2 \right>$, and (b) global ordering
$P_{max}$ (Eq.~\ref{eq:order}), as a function of
control parameter $\epsilon$ (bottom axis) for SH, and $\Gamma$ (top axis)
for MD. The parameters for SH simulations are noise strength
$F=(1.2\pm0.2)\times 10^{-2}$ and a delayed onset of long range
order $\epsilon^{LR}_{c}=0.094$.
The global ordering jumps sharply at
$\epsilon^{LR}_{c}=0.094$, corresponding to $\Gamma_{c}^{LR}=2.15$ in
MD (the vertical dotted line in the figure),
representing a transition to stripe patterns, while
$\left< \psi^2 \right>$ increases
smoothly through that transition.
This fit predicts a mean field onset value of
$\Gamma_{c}^{MF}=1.965\pm0.007$, corresponding to $\epsilon_{c}^{MF}=0$ (the
vertical dashed line in the figure).
}
\end{figure}
Of the three parameters, only the noise strength $F$ changes
the overall shape of the curve. For a given $F$, the SH simulation is run
for a range of $-0.2\leq\epsilon\leq0.2$; $\psi_{SH}$ and $P_{max}$ are
calculated from the steady state solution for each value of $\epsilon$ and
compared to MD simulations.
For consistency, $\left<\psi^2\right>$ and $P_{max}$ are calculated
for MD simulations from bins of size $\Delta x_{bin}=\Delta
y_{bin}=\sigma$ throughout this section, so that the number
of bins in both MD and SH simulations is $42\times42$.
Increasing the bin size to $\Delta x_{bin}=\Delta
y_{bin}=2\sigma$ does not change any of the fit parameters to within our uncertainty.
Note $\left<\psi_{SH}^2\right>$ in SH simulations is
found as a function of
control parameter $-0.2\leq\epsilon_{SH}\leq0.2$, while in MD simulations,
$\left<\psi_{MD}^2\right>$ is found as a function of control parameter
$1.7\leq\Gamma\leq2.3$. To compare the onset of the SH model to the onset
in MD simulations, we define
$\epsilon_{MD}=\left(\Gamma-\Gamma_c^{MF}\right)/\Gamma_c^{MF}$, where
$\Gamma_c^{MF}$ is the mean field onset of patterns, comparable to
$\epsilon_{SH}=0$. However, we do not know {\it a priori} the value of
$\Gamma_c^{MF}$.
We find that $\left< \psi^2 \right>$ changes relatively
smoothly in MD and SH simulations, making
it difficult to pinpoint an onset of patterns from $\left< \psi^2 \right>$ alone.
However,
there is a distinct onset of long range order in the system (Fig.~\ref{SHcomp}).
For low $\Gamma$ in MD, the
fluctuations are disordered ({\it cf.}~Fig.~\ref{GAM1.9md}), while for higher
$\Gamma$, standing wave stripe patterns are observed
({\it cf.}~Fig.~\ref{GAM2.2md}). A clear transition from disordered fluctuations to an
ordered stripe pattern is demonstrated by the sharp increase in $P_{max}$ as
$\Gamma$ crosses the critical value for long range order, determined from
Fig.~\ref{SHangle} as $\Gamma_c^{LR}=2.15\pm0.01$.
A similar transition to ordered stripes is seen in SH simulations
(Fig.~\ref{SHangle}).
The onset of long range order is used to establish a correspondence between
$\Gamma$ and $\epsilon$. For MD simulations, we measure the onset of long
range order as the point of sharpest increase
in $P_{max}$ (Fig.~\ref{SHangle}). In SH simulations, $\Delta\epsilon_c$ represents
the onset of long range order. We match the single point of steepest
increase of $P_{max}$ between the two curves. The measured value
$\Delta\epsilon_c$ in SH then predicts the mean field onset $\Gamma_c^{MF}$
corresponding to $\epsilon=0$.
Once the relationship between $\Gamma$ and $\epsilon$ is determined, the
overall scale factor for a given $F$ is found by a least squares fit between
$\left<\psi_{SH}^2\right>$ and $\left<\psi_{MD}^2\right>$ for the range
$1.7\leq\Gamma\leq\Gamma_c^{LR}$
(see Fig.~\ref{SHangle}). This minimization procedure gives the best
possible fit for a given value of $F$.
This entire procedure is repeated for varying $F$, minimizing the squared
residual
$R^2=\sum{\left(\left<\psi^2_{MD}\right>-\left<\psi^2_{SH}\right>\right)^2/N},$
where $N$ is the number of bins (Fig.~\ref{bestfit}). The best fit yields
an onset of long range order at $\Delta\epsilon_c=0.094$, corresponding
to $\Gamma_c^{LR}=2.15$.
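The scale-factor step of this fit has a closed least-squares form; the sketch below uses hypothetical arrays standing in for the MD and SH amplitude curves.

```python
import numpy as np

def best_scale(psi2_md, psi2_sh):
    """Least-squares optimal overall scale factor s, minimizing
    sum((psi2_md - s * psi2_sh)^2) over s (closed form)."""
    psi2_md, psi2_sh = np.asarray(psi2_md), np.asarray(psi2_sh)
    return float(np.dot(psi2_md, psi2_sh) / np.dot(psi2_sh, psi2_sh))

def squared_residual(psi2_md, psi2_sh, s):
    """R^2 = sum((<psi2_MD> - s * <psi2_SH>)^2) / N, the quantity
    minimized over the noise strength F in the fitting procedure."""
    r = np.asarray(psi2_md) - s * np.asarray(psi2_sh)
    return float(np.sum(r**2) / r.size)
```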
Figure~\ref{SHamp} shows $\left<\psi^2\right>$ as a function of $\epsilon$ for SH
simulations, and as a function of $\Gamma$ for MD simulations.
The fit shows good agreement in $\left<\psi^2\right>$ below
$\epsilon=0$ (Fig.~\ref{SHcomp}).
Although the parameters are fit only in the range
$1.7\leq\Gamma\leq\Gamma_c^{LR}$,
agreement is
reasonable even for
$\Gamma>\Gamma_c^{LR}$.
\begin{figure}
\begin{center}
\scalebox{.5}{\includegraphics{bestfit.eps}}
\end{center}
\caption{\label{bestfit}
The squared residual $R^2$ between $\left<\psi_{MD}^2\right>$ and
$\left<\psi_{SH}^2\right>$ as a
function of the noise strength $F$ used in SH simulations. The best least
squares fit is given by $F=(1.2\pm0.2)\times 10^{-2}$.
}
\end{figure}
The three parameter fit not only allows for agreement in $\left<\psi^2\right>$,
but also matches the measure of order $P_{max}$ in the SH model to
that found
in MD simulation (Fig.~\ref{SHangle}).
In both MD and SH
simulation, below the critical value of long range order, the fluctuations
are disordered, leading to a small value in $P_{max}$. When $\Gamma$
crosses the critical
value, $P_{max}$ jumps up significantly, and the observed patterns are ordered
stripes. Below the onset of stripes, when the fluctuations are constantly
shifting and
changing, there is significant uncertainty in finding the
value of $P_{max}$, as seen by the noisy curve on the plot. Above this onset,
however, the standing waves produce stable stripes, and $P_{max}$ plateaus and
remains quite constant, with good
agreement between MD and SH simulations.
Finally, the mean field onset $\Gamma_{c}^{MF}=1.965\pm0.007$ predicted by
this fit agrees remarkably well with
the critical value $\Gamma_{c}=1.955\pm0.005$ found in our simulations of
the continuum equations at Navier-Stokes order.
\subsection{Effect of changing wavelength on strength of noise}\label{section-wavelength}
If the noise effects arise from the finite
particle number in MD, we may expect that this effect will
decrease in
systems in which there are more particles per wavelength of pattern.
Since the
number of particles in a volume $\lambda^3$ increases with increasing
wavelength, we investigate the effect of changing frequency on the onset of
patterns in
MD simulations. For cells of horizontal extent $168\sigma\times10\sigma$,
layers shaken with a frequency $f^{*}=0.25$ form peaks with a dominant
wavelength $\lambda=42\sigma$, which is twice the wavelength found for
patterns investigated at $f^{*}=0.4174$ (see
Fig.~\ref{dispersion}).
We examine layers shaken at $f^{*}=0.25$ in cells of size
$L_x=L_y=2\lambda=84\sigma$, while holding constant
layer depth $H=5.4$ and restitution coefficient $e=0.70$. We vary
$\Gamma$ through the same range $1.7\leq\Gamma\leq2.3$ investigated for
$f^{*}=0.4174$ earlier in this paper.
Figure~\ref{noisefreqs} shows the growth of $\left<\psi^2\right>$ normalized
by the mean center
of mass height of the layer squared $\sigma^2\left<\psi^2\right>/\left<z_{cm}\right>^2 =
\left<\left(z_{cm}-\left<z_{cm}\right>\right)^2\right>/\left<z_{cm}\right>^2$
for MD simulations with frequencies $f^{*}=0.25$ and $f^{*}=0.4174$.
\begin{figure}
\begin{center}
\scalebox{.5}{\includegraphics{noisefreqs.eps}}
\end{center}
\caption{\label{noisefreqs}
Comparison of the growth of $\left<\psi^2\right>$ normalized by the squared mean center
of mass height of the layer, $\sigma^2\left<\psi^2\right>/\left<z_{cm}\right>^2 =
\left<\left(z_{cm}-\left<z_{cm}\right>\right)^2\right>/\left<z_{cm}\right>^2$,
for MD simulations with $f^*=0.25$ (squares) and $f^{*}=0.4174$ (triangles).
The lower frequency displays much smaller
fluctuations
below the onset of patterns than does the higher frequency.
}
\end{figure}
The lower frequency ($f^{*}=0.25$) exhibits a much sharper jump in $\left<\psi^2\right>$
than that seen at $f^{*}=0.4174$. Below this onset, the curve is
much flatter for $f^{*}=0.25$, while at $f^{*}=0.4174$,
the curve increases much more gradually through onset. Proportionally
smaller fluctuations relative to the pattern amplitude
are consistent with a lower noise strength for $f^{*}=0.25$
than for $f^{*}=0.4174$. In addition, the
rapid growth of peaks and valleys occurs at a smaller value of
$\Gamma$ for $f^{*}=0.25$, corresponding to an onset even below the mean
field onset $\Gamma_c^{MF}$ for the larger frequency.
We follow the same procedure as for $f^{*}=0.4174$ to fit the
data from MD simulation to the Swift-Hohenberg model.
We note that for frictional
particles, square patterns are formed for $f^{*}=0.25$; in the
absence of friction, peaks and valleys remain disordered, and no regular
square lattice forms in experiments or MD simulations \cite{goldman03,
moon03} (see Fig.~\ref{GAMlowf}).
Thus the onset of long range order is ill defined in this
case. However, this lower frequency exhibits a much sharper onset in the
growth of $\left<\psi_{SH}^2\right>$, which is used to find $\Delta\epsilon_c$.
The best fit yields a noise term $F=\left(4\pm 3\right) \times 10^{-4}$,
and a mean field onset of $\Gamma_{c}^{MF}=1.85\pm0.01$. Our hydrodynamic
simulations find the flat layer becomes unstable at
$\Gamma_{c}=1.84\pm0.01$, which again agrees well with the mean field onset
found from the fit to the SH model.
\begin{figure}
\hspace{0cm}
\subfigure{\label{GAM1.7lowf}\scalebox{.27}{\includegraphics{GAM1.7lowf.eps}}}
\hspace{-0cm}
\subfigure{\label{GAM2.2lowf}\scalebox{.27}{\includegraphics{GAM2.2lowf.eps}}}\\
\caption{\label{GAMlowf}
An overhead view of the layer of grains from MD simulations at
$f^{*}=0.25$, for $\Gamma=1.7$ and $\Gamma=2.2$. Note how much less
noise there is below onset here ($\Gamma=1.7$) compared to the results for
$f^{*}=0.4174$ in
Fig.~\ref{GAMcomparison}. The images show the center of mass height
$z_{cm}(x,y)$ of
the layer as a function of location in the box. These MD simulations use a
cell which is
$L_{x}=L_{y}=84\sigma$ in the
horizontal directions. Peaks corresponding to
large $z_{cm}$ are shown in white, while valleys corresponding to
small $z_{cm}$ are shown in black.
The grayscale for both images is given on the right.
}
\end{figure}
The noise strength at $f^{*}=0.4174$ is approximately 30 times
larger than the noise strength at $f^{*}=0.25$. This leads to
qualitatively different behavior of $\left<\psi_{SH}^2\right>$ near onset,
yielding a smoother curve for the higher frequency and a sharper onset for
lower frequency. Finally, the onset is barely delayed for the lower
frequency, with $\Delta\epsilon_c=0.01$ for $f^{*}=0.25$, as compared to
$\Delta\epsilon_c=0.10$ for $f^{*}=0.4174$.
Thus a change in frequency which increases the wavelength at onset by a
factor of $2$ decreases the amount of noise by a factor of $30$.
For Rayleigh-B\'{e}nard convection in ordinary fluids, the
functional dependence of $F$ on
$n$, ${\bf u}$, $T$, and $\lambda$ is known \cite{hohenbergswift92,
vanbeijeren}. However, this dependence is not known for oscillated granular
layers. Future investigation of the dependence of $F$ on the shaking parameters
$f^*$, $\Gamma$, and $H$, or on the hydrodynamic variables $n$, ${\bf u}$, $T$,
in experiment and MD
simulations may provide a form of the noise strength
$F$ that can be used in continuum simulations.
\section{Conclusions}
We have shown that continuum simulations can describe important
aspects of pattern formation in granular materials.
For a nondimensional frequency $f^*=0.4174$, both MD and continuum simulations
of granular materials form stripe patterns of the same wavelength above a
critical value $\Gamma>\Gamma_c$, and display no stripes for
$\Gamma<\Gamma_c$. Further, the two simulations yield the same dependence
of wavelength on frequency. These
wavelengths agree with the dispersion relation found experimentally
for frictional particles.
The effect of fluctuations has been examined in simulations of the
Swift-Hohenberg model. The results deduced for the mean field onset in MD
simulations agree well with the actual onset in
continuum simulations for both $f^{*}=0.4174$ and $f^*=0.25$.
We find the strength of the noise to be
$F=\left( 1.2\pm0.2 \right) \times 10^{-2}$ for stripes at $f^{*}=0.4174$,
and $F=\left( 4\pm3 \right) \times 10^{-4}$ for disordered squares at
$f^{*}=0.25$.
The value determined in an experiment for
a slightly shallower granular layer at $f^{*}=0.28$ was $F=3.5 \times
10^{-3}$ \cite{goldman2004}, which is within the range of noise values
obtained in this investigation.
The smallest noise strength found for our granular system is
comparable to the largest noise strength found thus far
in experiments in ordinary fluids, which obtained $F=7.1 \times 10^{-4}$ in
measurements near the critical point, while
values typical for convection are closer to $10^{-9}$ \cite{oh}.
For $f^{*}=0.4174$, the noise is strong
enough to delay onset of long range patterns by $10\%$ in MD simulation,
and strongly influences the behavior of the system even more than
20\% below this onset.
Thus noise plays an important role in granular media near the onset of
patterns.
This study indicates that hydrodynamic theory holds promise for
investigating and understanding pattern formation in granular flows.
However, quantitative comparisons between continuum theory and experiment
will require the addition of noise terms into the equations.
The addition of noise would be an important step towards
using the powerful tools of hydrodynamic theory to investigate problems of
pattern formation in granular materials.
The absence of
friction in these simulations restricts our investigation to stripe
patterns. Simulations without friction have not yielded the square and
hexagonal patterns seen in experiments with frictional particles \cite{moon03}.
Further research into pattern
formation using continuum simulations should investigate the most
effective way to incorporate friction between particles into the continuum
simulations and should examine how the strength of friction in the simulation
affects pattern formation in the system.
\begin{acknowledgments}
We thank Daniel I. Goldman, W. D. McCormick, Sung Joon Moon, and Erin
C. Rericha for helpful
discussions. This work was supported by the Engineering Research Program
of the
Office of Basic Energy Sciences of the Department of Energy (Grant
DE-FG03-93ER14312).
\end{acknowledgments}
% arXiv:0909.2431
\section{Introduction}
\label{Sec_Intro}
The statistical hadronization model, first introduced by Fermi~\cite{Fermi}
and Hagedorn~\cite{Hagedorn}, has been remarkably successful in the
description of experimentally measured average hadron production yields
in heavy ion collisions ranging from SIS \cite{GSIfits} and AGS \cite{AGSfits},
through SPS \cite{SPSfits}, to RHIC \cite{RHICfits} energies.
Over time this has led to the establishment of the
`chemical freeze-out line'~\cite{FreezeOut}, which is now a vital part of our understanding
of the phase diagram of strongly interacting matter.
Model predictions for the upcoming LHC and future
FAIR~\cite{SHM_predictions_LHC,SHM_predictions_FAIR} experiments largely follow these trends.
Somewhere above this freeze-out line in the phase diagram we expect, in general,
a phase transition from hadronic degrees of freedom to a phase of deconfined quarks
and gluons, generally termed the quark gluon plasma; and more specifically, a first
order phase transition at low temperature and high baryon chemical potential,
and a cross-over at high temperature and low baryon chemical potential. In between,
a second order endpoint or a critical point might emerge.
For recent reviews see \cite{QCD_pd,Model_pd}.
Fluctuation and correlation observables are amongst the most promising candidates
suggested to be suitable for signaling the formation
of new states of matter, and transitions between them. For recent reviews
see \cite{OnsetOfDecon,PhaseTrans,CriticalPoint,Koch}.
The statistical properties of a sample of events are, however, certainly not solely
determined by critical phenomena. More broadly
speaking, they depend strongly on the way events are chosen for the analysis,
and on the information available about the system.
The ideal gas approximation of the statistical hadronization model will
again serve as our testbed.
Its strong advantage is that it is simple, and to
some extent intuitive.
Given its success in describing experimentally
measured average hadron yields, and its ability to reproduce
low temperature lattice susceptibilities \cite{Karsch_susc},
the question arises as to whether fluctuation and correlation observables
follow from it as well.
Critical phenomena (and many more), however, remain beyond the present study.
Conventionally in statistical mechanics three standard ensembles are
discussed; the microcanonical ensemble (MCE), the canonical ensemble (CE),
and the grand canonical ensemble (GCE). In the MCE\footnote{The term MCE is also often
applied to ensembles with energy but not momentum conservation.} one considers an ensemble
of microstates with exactly fixed values of extensive conserved quantities
(energy, momentum, electric charge, etc.), with `a priori equal probabilities' of
all microstates (see e.g. \cite{Patriha}). The CE introduces the concept of temperature by
introduction of an infinite thermal bath, which can exchange energy (and momentum)
with the system.
The GCE introduces further chemical potentials by attaching the system under consideration
to an infinite charge bath\footnote{Note that a system with many charges can have some
charges described via the CE and others via the GCE.}.
Only if the experimentally accessible system is just a small
fraction of the total, and all parts have had the opportunity
to mutually equilibrate, is the grand canonical ensemble the appropriate choice.
A statistical hadronization model Monte Carlo event generator affords us with the possibility
of studying fluctuation and correlation observables in equilibrium systems.
Data analysis can be done in close relation to experimental analysis techniques.
Imposing global constraints on a sample is always technically a bit more challenging.
Direct sampling of MCE events (or microstates) has only been done in
the non-relativistic limit~\cite{Randrup}.
Sample-and-reject procedures, suitable for relativistic systems, rapidly become
inefficient with growing system size, but are very
successful for small systems \cite{Bec_MC,Bec_MCE}.
In this article we try a different approach: we sample the GCE, then re-weight events
according to their values of extensive quantities, and approach the sample-reject limit
(MCE) in a controlled manner. In this way one can study the statistical properties
of a global equilibrium system in their dependence on the size of the thermodynamic bath.
As any of the three standard ensembles remain idealizations of physical systems,
one might find intermediate ensembles to be of phenomenological interest too.
We study the first and, in particular, second moments
of joint distributions of extensive quantities. We concentrate
mainly on particle number distributions and distributions of `conserved' charges,
and discuss the influence of acceptance cuts in momentum space, conservations laws, and
resonance decay on the statistical properties of a sample of
hadron resonance gas model events.
We extend our previous studies of ideal particle and anti-particle
gases~\cite{acc,baseline} and of gases of altogether massless particles \cite{feq}.
The numerical code has been written for inclusion into the already
existing THERMUS package~\cite{THERMUS}. We make frequent use of the
functionality provided by the ROOT framework~\cite{ROOT}.
The paper is organized as follows:
In Section~\ref{Sec_SEfB} the basic ideas of this article are formulated.
The GCE Monte Carlo sampling
procedure is described in Section~\ref{Sec_GCEsampling}.
The first and second moments of the distributions of fully phase space
integrated extensive quantities are then extrapolated to the microcanonical limit
in Section~\ref{Sec_ExtraMCE}.
Section~\ref{Sec_MomSpect} contains an analysis of GCE momentum spectra.
The momentum space dependence of correlations between conserved charges
is studied in Section~\ref{Sec_LCfluc}.
Section~\ref{Sec_MultFluc} then deals with
multiplicity fluctuations and correlations in limited acceptance
and their extrapolation to the MCE limit.
A summary is given in Section~\ref{Sec_Summary}.
\section{Statistical Ensembles With Finite Bath}
\label{Sec_SEfB}
We start out as Pathria \cite{Patriha}, and Challa and Hetherington \cite{extgauss},
but quickly take a different route.\\
Let us define two microcanonical partition functions, i.e. the number of microstates,
for two separate systems.
The first system is assumed to be enclosed in a volume $V_1$ and to have fixed values
of extensive quantities $P_1^{\mu}=(E_1,P_{x,1},P_{y,1},P_{z,1})$, and $Q_1^j=(B_1,S_1,Q_1)$,
while the second system is enclosed in a volume $V_2$ and has fixed values of extensive
quantities $P_2^{\mu}=(E_2,P_{x,2},P_{y,2},P_{z,2})$, and $Q_2^j=(B_2,S_2,Q_2)$,
where $E$ is the energy of the system, $P_{x,y,z}$ are the components of its
three-momentum, and $B$, $S$, and $Q$, are baryon number, strangeness and
electric charge, respectively. Thus we have:
\begin{equation}\label{eq_one}
Z(V_1,P_1^{\mu},Q_1^j) ~=~ \sum_{\{N_1^i \}} ~ Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j)~, ~~
~~\textrm{and}~~~~Z(V_2,P_2^{\mu},Q_2^j)~,
\end{equation}
where $Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j)$ denotes the number of microstates of system 1 with
additionally fixed multiplicities $N_1^i$ of particles of species $i$.
Suppose that system 1 and system 2 are subject to the following constraints:
\begin{eqnarray}
V_g &=& V_1 ~+~ V_2 ~,\label{constraint_V}\\
P_g^{\mu} &=& P_1^{\mu} ~+~ P_2^{\mu}~,\label{constraint_P} \\
Q_g^{j} &=& Q_1^{j} ~+~ Q_2^{j}~. \label{constraint_Q}
\end{eqnarray}
We can then construct the partition function $Z(V_g,P_g^{\mu},Q_g^j)$ of the joint
system as the sum over all possible charge and energy-momentum split-ups:
\begin{equation} \label{PF_combined}
Z(V_g,P_g^{\mu},Q_g^j) = \sum \limits_{\{P_1^{\mu}\}} \sum \limits_{\{Q_1^{j}\}}
Z(V_g-V_1,P_g^{\mu}-P_1^{\mu},Q_g^j-Q_1^j)~ Z(V_1,P_1^{\mu},Q_1^j)~.
\end{equation}
Next we construct the distribution of extensive quantities in the subsystem $V_1$. This is
given by the ratio of the number of all microstates consistent with a given charge
and energy-momentum split-up and a given set of particle multiplicities to the number of
all possible configurations:
\begin{equation}
P(P_1^{\mu},Q_1^j,N_1^i) ~=~
\frac{Z(V_g-V_1,P_g^{\mu}-P_1^{\mu},Q_g^j-Q_1^j)}{Z(V_g,P_g^{\mu},Q_g^j)} ~
~Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j)~.
\end{equation}
We then define the weight factor $W(V_1,P_1^{\mu},Q_1^j;V_g,P_g^{\mu},Q_g^j) $ such that:
\begin{equation}\label{basic}
P(P_1^{\mu},Q_1^j,N_1^i) ~=~ W(V_1,P_1^{\mu},Q_1^j;V_g,P_g^{\mu},Q_g^j) ~
~Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j)~.
\end{equation}
By construction, the first moment of the weight factor is equal to unity:
\begin{equation}
\langle W \rangle ~=~ \sum_{\{P_1^{\mu}\}} \sum_{\{Q_1^j \}} \sum_{\{N_1^i \}} ~
W(V_1,P_1^{\mu},Q_1^j;V_g,P_g^{\mu},Q_g^j) ~
~Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j) = 1~,
\end{equation}
as the distribution is properly normalized.
The weight factor $W(V_1,P_1^{\mu},Q_1^j;V_g,P_g^{\mu},Q_g^j) $ generates
an ensemble with statistical properties different from the limiting cases
$V_g \rightarrow V_1$ (MCE), and $V_g \rightarrow \infty$ (GCE).
This effectively allows for extrapolation of GCE results to the MCE limit.
In the thermodynamic limit ($V_1$ sufficiently large) a family of
thermodynamically equivalent (same densities) ensembles is generated.
In principle any other (arbitrary) choice of
$W(V_1,P_1^{\mu},Q_1^j;V_g,P_g^{\mu},Q_g^j) $ could be taken.
In this work we confine ourselves, however, to the situation discussed above. Please note
that all microstates consistent with the same set of extensive quantities
$(P_1^{\mu},Q_1^j)$ have `a priori equal probabilities'.
In the large volume limit, ensembles are equivalent in the sense that
densities are the same. The ensembles defined by Eq.(\ref{basic}) and later on by Eq.(\ref{thetrick})
are no exception. If both $V_1$ and $V_g$ are sufficiently large, then the average
densities in both systems will be the same, $Q^j_g / V_g$ and $P^{\mu}_g / V_g$ respectively.
The system in $V_1$ will hence carry on average a certain fraction:
\begin{equation}\label{lambda_def}
\lambda ~\equiv~ V_1/V_g~,
\end{equation}
of the total charge $Q^j_g$ and four-momentum $P^{\mu}_g$, i.e.:
\begin{equation}
\langle Q^j_1 \rangle ~=~ \lambda~ Q^j_g ~,\qquad \textrm{and}
\qquad \langle P_1^{\mu} \rangle ~=~ \lambda~ P^{\mu}_g~.
\end{equation}
By varying the ratio $\lambda = V_1 /V_g$, while keeping $\langle Q^j_1 \rangle$ and
$\langle P_1^{\mu} \rangle$ constant, we can thus study a class of systems with the same
average charge content and four-momentum, but
different statistical properties.
\subsection{Introducing the Monte Carlo Weight $\mathcal{W}$}
Since Eq.(\ref{basic}) poses a formidable challenge, both mathematically and numerically,
we write instead:
\begin{equation}\label{thetrick}
P(P_1^{\mu},Q_1^j,N_1^i) ~=~ \mathcal{W}^{P_1^{\mu},Q_1^j;P_g^{\mu},Q_g^j}(V_1;V_g|\beta,u_{\mu},\mu_j) ~
~P_{gce}(P_1^{\mu},Q_1^j,N_1^i|\beta,u_{\mu},\mu_j)~,
\end{equation}
where the distribution of extensive quantities $P_1^{\mu}$, $Q_1^j$ and particle multiplicities
$N_1^i$ of a GCE system with temperature $T=\beta^{-1}$, volume $V_1$, chemical potentials $\mu_j$ and
collective four-velocity $u_{\mu}$ is given by:
\begin{equation}\label{Pgce}
P_{gce}(P_1^{\mu},Q_1^j,N_1^i|\beta,u_{\mu},\mu_j) ~\equiv~ \frac{e^{-P_1^{\mu} u_{\mu} \beta }~
e^{Q_1^j \mu_j \beta}}{Z(V_1,\beta,u_{\mu},\mu_j)} ~ Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j)~,
\end{equation}
where $\mu_j = (\mu_B,\mu_S,\mu_Q)$ summarizes the chemical potentials associated with
baryon number, strangeness, and electric charge in a vector.
The normalization in Eq.(\ref{Pgce}) is given by the GCE partition function
$Z(V_1,\beta,u_{\mu},\mu_j)$, i.e. the number of all microstates averaged over
the Boltzmann weights $e^{-P_1^{\mu} u_{\mu} \beta }$ and $e^{Q_1^j \mu_j \beta}$:
\begin{equation}
Z(V_1,\beta,u_{\mu},\mu_j)~=~ \sum_{\{P_1^{\mu}\}} \sum_{\{Q_1^{j} \}} \sum_{\{N_1^i \}}
~ e^{-P_1^{\mu} u_{\mu} \beta }~e^{Q_1^j \mu_j \beta}~ Z_{N_1^i}(V_1,P_1^{\mu},Q_1^j)~.
\end{equation}
The new weight factor $ \mathcal{W}^{P_1^{\mu},Q_1^j;P_g^{\mu},Q_g^j}(V_1;V_g|\beta,u_{\mu},\mu_j)$
now reads:
\begin{eqnarray}\label{newW}
\mathcal{W}^{P_1^{\mu},Q_1^j;P_g^{\mu},Q_g^j}(V_1;V_g|\beta,u_{\mu},\mu_j) &=&
Z(V_1,\beta,u_{\mu},\mu_j)~
\frac{e^{-(P_g^{\mu}-P_1^{\mu}) u_{\mu} \beta }~
e^{(Q_g^j-Q_1^j) \mu_j \beta} }{e^{-P_g^{\mu} u_{\mu} \beta}~ e^{Q_g^j \mu_j \beta} } \nonumber \\
&\times&\frac{Z(V_g-V_1,P_g^{\mu}-P_1^{\mu},Q_g^j-Q_1^j)}{Z(V_g,P_g^{\mu},Q_g^j)}~.
\end{eqnarray}
In the case of an ideal (non-interacting) gas, Eq.(\ref{newW}) can be
written \cite{clt,baseline} as:
\begin{eqnarray}\label{simpleW}
\mathcal{W}^{P_1^{\mu},Q_1^j;P_g^{\mu},Q_g^j}(V_1;V_g|\beta,u_{\mu},\mu_j) &=&
Z(V_1,\beta,u_{\mu},\mu_j)~
\frac{\mathcal{Z}^{P_g^{\mu}-P_1^{\mu},Q_g^j-Q_1^j}(V_g-V_1,\beta,u_{\mu},\mu_j)}{
\mathcal{Z}^{P_g^{\mu},Q_g^j} (V_g,\beta,u_{\mu},\mu_j)} ~.
\end{eqnarray}
The advantage of Eq.(\ref{thetrick}), compared to Eq.(\ref{basic}), is that the
distribution $P_{gce}(P_1^{\mu},Q_1^j,N_1^i|\beta,u_{\mu},\mu_j)$ can easily be sampled for Boltzmann particles,
while a suitable approximation for the weight
$\mathcal{W}^{P_1^{\mu},Q_1^j;P_g^{\mu},Q_g^j}(V_1;V_g|\beta,u_{\mu},\mu_j)$ is available.
Again, by construction, the first moment of the new weight factor is equal to unity:
\begin{equation}
\langle \mathcal{W} \rangle ~=~ \sum_{\{P_1^{\mu}\}} \sum_{\{Q_1^j \}} \sum_{\{N_1^i \}} ~
\mathcal{W}^{P_1^{\mu},Q_1^j;P_g^{\mu},Q_g^j}(V_1;V_g|\beta,u_{\mu},\mu_j)
~P_{gce}(P_1^{\mu},Q_1^j,N_1^i|\beta,u_{\mu},\mu_j) = 1~.
\end{equation}
In principle, Eq.(\ref{basic}) and Eq.(\ref{thetrick})
are equivalent. In fact, Eq.(\ref{basic}) can be obtained by taking the limit
$(\mu_B,\mu_S,\mu_Q) = (0,0,0)$, $u_{\mu}=(1,0,0,0)$, and $\beta \rightarrow 0$ of
Eq.(\ref{thetrick}). However, as one can already see, $\langle \mathcal{W}^n \rangle
\not= \langle W^n \rangle $. Higher, and in particular the second, moments
of the weight factors $W$ and $\mathcal{W}$ are a
measure of the statistical error to be expected for a finite sample of events.
The larger the higher moments of the weight factor, the larger the statistical error,
and the slower the convergence with sample size.
Please see also Appendices \ref{App_ConvStudy} and \ref{App_CBG}.
As GCE and MCE densities are the same in the
system $V_g$, these values are effectively regulated by intensive parameters
$\beta$, $\mu_j$ and $u_{\mu}$. In essence, to study a system
with a given average $\langle Q^j_1 \rangle$, one samples the GCE with that $\langle Q^j_1 \rangle$ and calculates
the weight according to Eq.(\ref{simpleW}). This results in a low statistical
error for finite samples (as shown in later sections), and allows for extrapolation to the MCE limit.
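In practice, the re-weighting replaces plain sample averages over GCE events by weighted averages. A minimal illustrative sketch with hypothetical array inputs follows; the Kish effective sample size is a standard importance-sampling diagnostic that quantifies the loss of statistics caused by a broad weight distribution, as discussed above:

```python
import numpy as np

def weighted_moments(x, w):
    """First moment and second central moment of observable x under
    event weights w (weights need not be exactly normalized)."""
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    mean = np.sum(w * x) / np.sum(w)
    var = np.sum(w * (x - mean) ** 2) / np.sum(w)
    return mean, var

def effective_sample_size(w):
    """Kish effective sample size: a large spread in the weights means
    fewer effective events, i.e. a larger statistical error."""
    w = np.asarray(w, dtype=float)
    return np.sum(w) ** 2 / np.sum(w ** 2)
```

For $\lambda = 0$ all weights are unity and the effective sample size equals the number of events; as $\lambda \rightarrow 1$ it shrinks.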
We will now first calculate the weight factor Eq.(\ref{simpleW}) and then
take the appropriate limits. With the appropriate choice of $\beta$, $\mu_j$
and $u_{\mu}$ the calculation of Eq.(\ref{simpleW}) is particularly easy in the
large volume limit \cite{clt}.
\subsection{Calculating the Monte Carlo Weight $\mathcal{W}$}
In this article, the total number of (potentially) conserved extensive quantities in
a hadron resonance gas is $L=J+4 = 3+4=7$, where $J=3$ is the number of charges $(B,S,Q)$
and there are four components of the four-momentum. Including all extensive quantities
into a single vector:
\begin{equation}
\mathcal{Q}^l = (Q^j,P^{\mu}) = (B,S,Q,E,P_x,P_y,P_z)~,
\end{equation}
the weight Eq.(\ref{simpleW}) can be expressed as:
\begin{eqnarray}\label{curly_W}
\mathcal{W}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}(V_1;V_g|\beta,u_{\mu},\mu_j) &=&
Z(V_1,\beta,u_{\mu},\mu_j)~
\frac{\mathcal{Z}^{\mathcal{Q}_g^l-\mathcal{Q}_1^l}(V_g-V_1,\beta,u_{\mu},\mu_j)}{
\mathcal{Z}^{\mathcal{Q}_g^l} (V_g,\beta,u_{\mu},\mu_j)} ~.
\end{eqnarray}
The general expression for the partition function
$\mathcal{Z}^{\mathcal{Q}^l}(V,\beta,u_{\mu},\mu_j)$
in the large volume limit reads~\cite{clt}:
\begin{equation}\label{clt_approx}
\mathcal{Z}^{\mathcal{Q}^l}(V,\beta,u_{\mu},\mu_j) ~\simeq~Z(V,\beta,u_{\mu},\mu_j)~
\frac{1}{(2 \pi V)^{L/2} \det \sigma}~
\exp \left( - \frac{1}{2}~ \frac{1}{V}~ \xi^l \xi_l \right)~,
\end{equation}
where:
\begin{equation}\label{xi}
\xi^l ~=~ \left(\mathcal{Q}^k - V \kappa_1^k \right) ~ \left( \sigma^{-1} \right)_k^l~,
\end{equation}
and:
\begin{equation}
\sigma_k^l ~=~ \left( \kappa_2^{1/2} \right)_k^l ~.
\end{equation}
Here $\kappa_1$ and $\kappa_2$ are the GCE vector of mean values and the
GCE covariance matrix respectively. The values of $\beta$, $\mu_j$ and $u_{\mu}$ are chosen
such that:
\begin{equation}
\frac{\partial \mathcal{Z}^{\mathcal{Q}^l}}{\partial \mathcal{Q}^l}
\Big|_{\mathcal{Q}^l=\mathcal{Q}^l_{eq}} ~=~ 0_l.
\end{equation}
The approximation (\ref{clt_approx}) then gives a reliable description of
$\mathcal{Z}^{\mathcal{Q}_g^l}$ around the equilibrium value
$\mathcal{Q}_g^l = V_g \kappa_1^l$, provided $V_g$ is sufficiently large.
The charge vector, Eq.(\ref{xi}), is then equal
to the null-vector $\xi_l = 0_l$ ($\mathcal{Q}_g^l = V_g \kappa_1^l$).
For the normalization in Eq.(\ref{curly_W}) we then find:
\begin{equation} \label{W_norm}
\mathcal{Z}^{\mathcal{Q}_g^l}(V_g,\beta,u_{\mu},\mu_j)\Big|_{\mathcal{Q}_g^l=\mathcal{Q}^l_{g,eq}}
~\simeq~ \frac{Z(V_g,\beta,u_{\mu},\mu_j)}{(2 \pi V_g)^{L/2}\det \sigma}~
\exp \left( 0 \right)~.
\end{equation}
For the numerator we obtain:
\begin{equation} \label{W_numm}
\mathcal{Z}^{\mathcal{Q}_g^l-\mathcal{Q}_1^l}(V_g-V_1,\beta,u_{\mu},\mu_j)
\Big|_{\mathcal{Q}_g^l=\mathcal{Q}^l_{g,eq}} ~\simeq~
\frac{Z(V_g-V_1,\beta,u_{\mu},\mu_j) }{(2 \pi \left(V_g -V_1 \right))^{L/2} \det \sigma}~
\exp \left( -\frac{1}{2}~ \frac{1}{(V_g-V_1) }~ \xi^l \xi_l\right)~,
\end{equation}
where in Eq.(\ref{W_numm}) we write for the charge vector Eq.(\ref{xi}):
\begin{equation}
\xi^l = \left( \Delta \mathcal{Q}_2 \right)^k \left( \sigma^{-1} \right)_k^l~.
\end{equation}
Then, using $\mathcal{Q}^k_g = \mathcal{Q}^k_{g,eq} = V_g \kappa_1^k$, we find:
\begin{equation}
\left( \Delta \mathcal{Q}_2 \right)^k = \left(\mathcal{Q}_g - \mathcal{Q}_1 \right)^k -
\left( V_g-V_1\right) \kappa_1^k ~=~ -\left( \mathcal{Q}_1 -V_1\kappa_1 \right)^k~.
\end{equation}
Substituting Eq.(\ref{W_norm}) and Eq.(\ref{W_numm}) into Eq.(\ref{curly_W}) yields:
\begin{eqnarray}\label{almost_solved_curly_W}
\mathcal{W}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}(V_1;V_g|\beta,u_{\mu},\mu_j)
\Big|_{\mathcal{Q}_g^l=\mathcal{Q}^l_{g,eq}}
&\simeq& \frac{Z(V_1,\beta,u_{\mu},\mu_j)~Z(V_g-V_1,\beta,u_{\mu},\mu_j)}{
Z(V_g,\beta,u_{\mu},\mu_j)} \nonumber \\
&&\times ~\frac{(2 \pi V_g )^{L/2} \det \sigma}{(2 \pi \left(V_g -V_1 \right))^{L/2} \det \sigma}
\exp \left( -\frac{1}{2}~ \frac{1}{(V_g-V_1) }~ \xi^l \xi_l\right)~.
\end{eqnarray}
The GCE partition functions are multiplicative in the sense that
$Z(V_1,\beta,u_{\mu},\mu_j)~Z(V_g-V_1,\beta,u_{\mu},\mu_j) = Z(V_g,\beta,u_{\mu},\mu_j)$,
and thus the first term in Eq.(\ref{almost_solved_curly_W}) is equal to unity.
Now using Eq.(\ref{lambda_def}), $\lambda = V_1 / V_g$, we can re-write
Eq.(\ref{almost_solved_curly_W}) as:
\begin{eqnarray}\label{solved_curly_W}
\mathcal{W}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}(V_1;V_g|\beta,u_{\mu},\mu_j)
\Big|_{\mathcal{Q}_g^l=\mathcal{Q}^l_{g,eq}}
&\simeq& ~\frac{1}{(1-\lambda)^{L/2} }
\exp \left( -\frac{1}{2} \left( \frac{\lambda}{1-\lambda}\right)
\frac{1}{V_1}~ \xi^l \xi_l\right)~.
\end{eqnarray}
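For illustration, Eq.(\ref{solved_curly_W}) is straightforward to evaluate per event. A sketch follows, in which the arrays holding $\kappa_1$ and $\sigma^{-1}$ are assumed to have been tabulated beforehand (names are hypothetical):

```python
import numpy as np

def mc_weight(Q1, V1, lam, kappa1, sigma_inv):
    """Event weight of Eq.(solved_curly_W) for extensive quantities Q1
    measured in the subsystem V1, with lam = V1/Vg.
    kappa1: GCE mean densities; sigma_inv: inverse of sigma = kappa2^(1/2),
    the square root of the GCE covariance-density matrix."""
    Q1, kappa1 = np.asarray(Q1, dtype=float), np.asarray(kappa1, dtype=float)
    L = len(Q1)                                   # number of conserved quantities
    xi = (Q1 - V1 * kappa1) @ sigma_inv           # charge vector of Eq.(xi)
    prefactor = (1.0 - lam) ** (-L / 2.0)
    return prefactor * np.exp(-0.5 * (lam / (1.0 - lam)) * (xi @ xi) / V1)
```

For $\lambda = 0$ this returns unity for every event (GCE), while an event sitting exactly at the equilibrium values $\mathcal{Q}_1^l = V_1 \kappa_1^l$ receives the maximal weight $(1-\lambda)^{-L/2}$.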
Model parameters are hence the intensive variables inverse temperature $\beta$, four-velocity
$u^{\mu}$ and chemical potentials $\mu^j$, which regulate energy and charge densities,
and collective motion.
Provided $V_1$ is sufficiently large, we have defined a family of thermodynamically equivalent
ensembles, whose fluctuation and correlation observables can now be studied as a
function of the size of the bath $V_2 = V_g - V_1$. Hence, we can test the sensitivity of
such observables, for example, to globally applied conservation laws. The expectation
values $\langle \dots \rangle$ are then identical to GCE expectation values,
while higher moments will depend crucially on the choice of $\lambda$.
\subsection{The Limits of $\mathcal{W}$}
The largest weight is given to states for which $\xi^l \xi_l = 0$, i.e. with extensive
quantities $\mathcal{Q}_1^l = \mathcal{Q}_{1,eq.}^l$. Hence, the maximal weight a microstate
(or event) at a given value of $\lambda = V_1/V_g$ can assume is
$\mathcal{W}_{max}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}(V_1;V_g|\beta,u_{\mu},\mu_j)
= (1-\lambda)^{-L/2}$. Taking the limits of Eq.(\ref{solved_curly_W}), it is easy to see that:
\begin{equation}
\lim_{\lambda \rightarrow 0} ~ \mathcal{W}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}
(V_1;V_g|\beta,u_{\mu},\mu_j) ~=~ 1~.
\end{equation}
That is, for $\lambda = 0$ we sample the GCE, and all events have a weight equal to unity.
Hence, we also find $\langle \mathcal{W}^2 \rangle = 1$ and therefore
$\langle (\Delta \mathcal{W})^2 \rangle = 0$, implying a low statistical error.
For $\lambda \rightarrow 1$, we effectively approach a ``sample-reject''
procedure, as (for instance) used in \cite{Bec_MCE,Bec_MC}, and:
\begin{equation}
\lim_{\lambda \rightarrow 1} ~ \mathcal{W}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}
(V_1;V_g|\beta,u_{\mu},\mu_j) ~\propto~ \delta(\mathcal{Q}_1^l - V_1 \kappa_1^l)~.
\end{equation}
However, as now not all events have equal weight, $\langle (\Delta \mathcal{W})^2 \rangle$ grows,
and so too does the statistical error of finite samples. Also, the larger the number
$L$ of extensive quantities considered for re-weighting, the larger
will be the statistical uncertainty.
\section{The GCE sampling procedure}
\label{Sec_GCEsampling}
The Monte Carlo sampling procedure for a GCE system in the Boltzmann
approximation is now explained.
The system to be sampled is assumed to be in an equilibrium state enclosed in a volume
$V_1$ with temperature $T = \beta^{-1}$ and chemical potentials $\mu_j = (\mu_B,\mu_S,\mu_Q)$.
Additionally, the system is assumed to
be at rest. The four-velocity is then $u^{\mu} = (1,0,0,0)$ and the four-temperature
is $\beta^{\mu}= (\beta,0,0,0)$. In this case, multiplicity distributions are Poissonian,
while momentum spectra are of Boltzmann type. \\
The GCE sampling process is composed of four steps, each discussed below.
\subsubsection{Multiplicity Generation}
In the first step, we randomly sample multiplicities $N_1^i$
of all particle species $i$ considered in the model. The expectation value of
the multiplicity of thermal Boltzmann particles in the GCE is given by:
\begin{equation}\label{psi_meanN}
\langle N_1^i \rangle ~=~ \frac{g_i V_1}{2 \pi^2} ~m_i^2 ~ T~ K_2\left(\frac{m_i}{T}\right)~
e^{\mu_i/T}~.
\end{equation}
Multiplicities $\lbrace N_1^i \rbrace_n$ are randomly generated for each event $n$
according to Poissonians with mean values $\langle N_1^i \rangle$:
\begin{equation}
P(N_1^i) ~=~ \frac{\langle N_1^i \rangle^{N_1^i}}{N_1^i!} ~ e^{-\langle N_1^i \rangle}~.
\end{equation}
In the above, $m_i$ and $g_i$ are the mass and degeneracy factor of a particle of
species $i$ respectively.
The chemical potential $\mu_i = \mu_j q_i^j = \mu_B b_i + \mu_S s_i + \mu_Q q_i $, where
$q_i^j = (b_i,s_i,q_i)$ represents the quantum number content of a particle of species $i$.
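As an illustrative sketch (not the implementation used here), this first step can be written in Python. The Bessel-K factor of Eq.(\ref{psi_meanN}) is evaluated through its integral representation $m_i^2 T K_2(m_i/T) = \int_0^\infty \mathrm{d}|p|\, |p|^2 e^{-\varepsilon/T}$ with a simple midpoint rule; function names, step size, and cutoff are illustrative choices:

```python
import math
import random

def mean_multiplicity(g, m, T, mu, V):
    # <N> = g V / (2 pi^2) * m^2 T K_2(m/T) * exp(mu/T); the Bessel-K factor
    # equals the integral of p^2 exp(-sqrt(p^2 + m^2)/T) over p, evaluated
    # here with a midpoint rule (illustrative accuracy only).
    dp = 0.005
    p_max = 25.0 * T + 5.0 * m   # integrand is negligible beyond this
    integral, p = 0.0, 0.5 * dp
    while p < p_max:
        integral += p * p * math.exp(-math.sqrt(p * p + m * m) / T) * dp
        p += dp
    return g * V / (2.0 * math.pi ** 2) * integral * math.exp(mu / T)

def sample_poisson(mean, rng=random):
    # Knuth's product-of-uniforms method; adequate for moderate means.
    limit = math.exp(-mean)
    k, prod = 0, rng.random()
    while prod > limit:
        prod *= rng.random()
        k += 1
    return k
```

For example, a pion-like species ($g=1$, $m=0.138$ GeV) at $T=0.160$ GeV in $V_1 = 2000$ fm$^3$ ($\approx 2.6\cdot10^5$ GeV$^{-3}$) yields a mean multiplicity of order $10^2$, from which event multiplicities are then drawn Poissonian.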
\subsubsection{Momentum Spectra}
In the second step, we generate momenta for each particle according to
a Boltzmann spectrum. For a static thermal source, spherical coordinates are convenient:
\begin{equation}
\frac{dN_i}{d|p|} ~=~ \frac{g_i V_1}{2 \pi^2}~ |p|^2~ e^{-(\varepsilon - \mu_i)/T}~,
\end{equation}
which, integrated over $|p|$, recovers the mean multiplicity of Eq.(\ref{psi_meanN}).
These momenta are then isotropically distributed in momentum space. Hence:
\begin{eqnarray}
p_x &=& |p| ~ \sin \theta ~\cos \phi ~,\\
p_y &=& |p| ~ \sin \theta ~\sin \phi ~,\\
p_z &=& |p| ~ \cos \theta ~,\\
\varepsilon &=& \sqrt{|p|^2 + m^2}~,
\end{eqnarray}
where $p_x$, $p_y$, and $p_z$ are the components of the three-momentum, $\varepsilon$
is the energy, and $|p| = \sqrt{p_x^2+p_y^2+p_z^2}$ is the total momentum.
The polar and azimuthal angles are sampled according to:
\begin{eqnarray}
\theta &=& \cos^{-1} \left[ 2 \left(x-0.5 \right) \right] ~,\\
\phi &=& 2~\pi \left(x-0.5 \right)~,
\end{eqnarray}
where $x$ is uniformly distributed between $0$ and $1$. Additionally, we calculate
the transverse momentum $p_T$ and rapidity $y$ for each particle:
\begin{eqnarray}
p_T &=& \sqrt{p_x^2 + p_y^2}~,\\
y &=& \frac{1}{2} \ln \left(\frac{\varepsilon+p_z}{\varepsilon-p_z} \right)~.
\end{eqnarray}
Finally, we distribute particles homogeneously in a sphere of radius $r_1$ and
calculate decay times based on the Breit-Wigner width of the resonances.
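A sketch of this momentum sampling step in Python follows. The proposal distribution is the massless spectrum $|p|^2 e^{-|p|/T}$, a Gamma$(3,T)$ variate drawn as $-T\ln(u_1 u_2 u_3)$, with rejection factor $e^{(|p|-\varepsilon)/T} \le 1$; this is a standard trick, not necessarily the implementation used here:

```python
import math
import random

def sample_momentum(m, T, rng=random):
    # Rejection sampling of dN/d|p| ~ p^2 exp(-eps/T), eps = sqrt(p^2 + m^2):
    # propose p from p^2 exp(-p/T) (Gamma(3,T), drawn as -T ln(u1 u2 u3)),
    # accept with probability exp((p - eps)/T), which is always <= 1.
    while True:
        u = (1.0 - rng.random()) * (1.0 - rng.random()) * (1.0 - rng.random())
        p = -T * math.log(u)
        eps = math.sqrt(p * p + m * m)
        if rng.random() < math.exp((p - eps) / T):
            return p

def sample_three_momentum(m, T, rng=random):
    # Isotropic angles as in the text: cos(theta) uniform in [-1,1],
    # phi uniform in [-pi, pi].
    p = sample_momentum(m, T, rng)
    ct = 2.0 * (rng.random() - 0.5)
    phi = 2.0 * math.pi * (rng.random() - 0.5)
    st = math.sqrt(1.0 - ct * ct)
    return p * st * math.cos(phi), p * st * math.sin(phi), p * ct
```

In the massless limit the acceptance probability is unity and $\langle |p| \rangle = 3T$, which provides a quick check of the sampler.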
\subsubsection{Resonance Decay}
The third step (if applicable) is resonance decay. We follow the prescription
used by the authors of the THERMINATOR package~\cite{THERMINATOR},
and perform only 2 and 3 body decays,
while allowing for successive decay of unstable daughter particles. Only strong decays are
considered, while weak and electromagnetic decays are omitted.
Particle decay is first calculated in the parent's rest frame, with daughter momenta
then boosted into the lab frame. Finally, decay positions are generated based on
the parent's production point, momentum and life time.
Throughout this article, only the lightest states of the following baryons:
\begin{equation}
\textrm{p} \qquad \textrm{n} \qquad \Lambda \qquad \Sigma^+ \qquad \Sigma^- \qquad \Xi^- \qquad
\Xi^0 \qquad \Omega^-
\end{equation}
and mesons:
\begin{equation}
\pi^+ \qquad \pi^- \qquad \pi^0 \qquad K^+ \qquad K^- \qquad K^0
\end{equation}
are considered stable. At this stage, the system could be given a collective velocity $u^{\mu}$.
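The two-body decay kinematics can be sketched as follows in Python (function names and the $\rho \rightarrow \pi\pi$ usage below are illustrative, not the THERMINATOR implementation): the daughters emerge back-to-back in the parent rest frame with the two-body phase-space momentum, are oriented isotropically, and are then boosted with the parent's lab velocity.

```python
import math
import random

def two_body_decay(M, m1, m2, rng=random):
    # Decay a resonance of mass M at rest into daughters m1, m2:
    # back-to-back momenta with the standard two-body phase-space magnitude,
    # direction isotropic. Returns two four-vectors (E, px, py, pz).
    pstar = math.sqrt((M * M - (m1 + m2) ** 2) * (M * M - (m1 - m2) ** 2)) / (2.0 * M)
    ct = 2.0 * rng.random() - 1.0
    phi = 2.0 * math.pi * rng.random()
    st = math.sqrt(1.0 - ct * ct)
    p1 = (pstar * st * math.cos(phi), pstar * st * math.sin(phi), pstar * ct)
    p2 = tuple(-c for c in p1)
    e1 = math.sqrt(pstar ** 2 + m1 * m1)
    e2 = math.sqrt(pstar ** 2 + m2 * m2)
    return (e1, *p1), (e2, *p2)

def boost(fourvec, bx, by, bz):
    # Boost (E, px, py, pz) by velocity (bx, by, bz), e.g. the parent's
    # lab-frame velocity.
    E, px, py, pz = fourvec
    b2 = bx * bx + by * by + bz * bz
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = bx * px + by * py + bz * pz
    g2 = (gamma - 1.0) / b2 if b2 > 0.0 else 0.0
    return (gamma * (E + bp),
            px + g2 * bp * bx + gamma * bx * E,
            py + g2 * bp * by + gamma * by * E,
            pz + g2 * bp * bz + gamma * bz * E)
```

The invariant mass of the boosted daughter pair reproduces the parent mass, which serves as a consistency check of both routines.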
\subsubsection{Re-weighting}
In the fourth step, we calculate the values of extensive quantities for the
events generated by iterating over the particle list of each event.
For the values of extensive quantities $\mathcal{Q}^l_{1,n}~=~(B_{1,n},S_{1,n},Q_{1,n},E_{1,n},
P_{x,1,n},P_{y,1,n},P_{z,1,n})$ in subsystem $V_1$ of event $n$ we write:
\begin{equation}
\mathcal{Q}^l_{1,n} ~=~ \sum_{\textrm{particles } i_n } \mathfrak{q}^l_{i_n}~,
\end{equation}
where $\mathfrak{q}^l_{i_n} = \left( b_{i_n}, s_{i_n}, q_{i_n}, \varepsilon_{i_n}, p_{x,i_n},
p_{y,i_n}, p_{z,i_n}\right)$ is the `charge vector' of particle $i$ in event $n$.
Based on $\mathcal{Q}^l_{1,n}$ we calculate the weight $w_n$ for the event:
\begin{equation}
w_n = \mathcal{W}^{\mathcal{Q}_{1,n}^l;\mathcal{Q}_g^l}(V_1;V_g|\beta,u_{\mu},\mu_j)~,
\end{equation}
according to Eq.(\ref{solved_curly_W}).
Please note that all microstates with the same set of extensive
quantities $\mathcal{Q}^l_{1,n}$ are still counted equally.
\section{Extrapolating Fully Phase Space Integrated Quantities to the MCE}
\label{Sec_ExtraMCE}
We now attempt to extrapolate fully phase space integrated grand canonical results to
the microcanonical limit. For this we iteratively generate, re-weight, and
analyze samples of events for various values of $\lambda = V_1/V_g$.
By construction of the weight factor $\mathcal{W}$, Eq.(\ref{solved_curly_W}), we
extrapolate in a systematic fashion such that, for instance, particle momentum spectra
as well as mean values of extensive quantities remain unchanged. On the other hand,
all variances and covariances of extensive quantities subject to re-weighting
converge linearly to their microcanonical values.
This can be seen from the form of the analytical approximation to the grand canonical
distribution of (fully phase space integrated) extensive quantities
$P_{gce}(\mathcal{Q}^l_1)$ (from Eq.(\ref{clt_approx})):
\begin{equation}
P_{gce}(\mathcal{Q}^l_1)~\simeq~\frac{1}{(2 \pi V_1)^{L/2} \det \sigma}~
\exp \left( - \frac{1}{2}~ \frac{1}{V_1}~ \xi^l \xi_l \right)~,
\end{equation}
where the variable $\xi^l$ is given by Eq.(\ref{xi}). Now taking the weight factor
$\mathcal{W}_{\lambda}$, Eq.(\ref{solved_curly_W}),
($\sigma$ and $\xi_l$ are the same in both equations) we obtain for the distribution
$P_{\lambda}(\mathcal{Q}^l_1)$ of extensive quantities $\mathcal{Q}^l_1$ in subsystem $1$:
\begin{eqnarray}
P_{\lambda}(\mathcal{Q}^l_1) &\simeq& \mathcal{W}_{\lambda}^{\mathcal{Q}_1^l;\mathcal{Q}_g^l}
~P_{gce}(\mathcal{Q}^l_1)\\
\label{Pbath}
&\simeq& \frac{1}{(2 \pi(1-\lambda) V_1)^{L/2} \det \sigma}~
\exp \left( - \frac{1}{2}~ \frac{1}{(1-\lambda)~V_1}~ \xi^l \xi_l \right)~.
\end{eqnarray}
This is essentially the same multivariate normal distribution as the
grand canonical version $P_{gce}(\mathcal{Q}^l_1)$,
however linearly contracted. We will compare Monte Carlo results to Eq.(\ref{Pbath}).
The Monte Carlo output is essentially a distribution $P_{MC}(X_1,X_2,X_3,...)$ of a set
of observables $X_1$, $X_2$, $X_3$, etc. For all practical purposes this distribution
is obtained by histogramming all events $n$ according to their values of
$X_{1,n}$, $X_{2,n}$, $X_{3,n}$, etc. and their weight $w_n$.
One can then define moments of two observables $X_i$ and $X_j$ through:
\begin{equation}\label{MCmoment}
\langle X_i^n X_j^m \rangle ~\equiv~ \sum_{X_i,X_j} X_i^n X_j^m P_{MC}(X_i,X_j)~.
\end{equation}
Additionally, we define the variance $\langle \left( \Delta X_i \right)^2 \rangle$
and the covariance $\langle \Delta X_i \Delta X_j \rangle$ respectively as:
\begin{eqnarray}\label{variance}
\langle \left( \Delta X_i \right)^2 \rangle &\equiv& \langle X_i^2\rangle ~-~
\langle X_i \rangle^2~, \qquad \textrm{and} \\
\label{covariance}
\langle \Delta X_i \Delta X_j \rangle &\equiv& \langle X_i X_j \rangle
~-~ \langle X_i \rangle \langle X_j \rangle~.
\end{eqnarray}
In the following, we use the scaled variance $\omega_i$ and the correlation
coefficient $\rho_{ij}$ defined as:
\begin{eqnarray}\label{omega}
\omega_i &\equiv& \frac{\langle \left( \Delta X_i \right)^2 \rangle}{\langle X_i \rangle}~,
\qquad \textrm{and} \\
\label{rho}
\rho_{ij} &\equiv& \frac{\langle \Delta X_i \Delta X_j \rangle}{
\sqrt{\langle \left( \Delta X_i \right)^2 \rangle
\langle \left( \Delta X_j \right)^2 \rangle }}~.
\end{eqnarray}
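The sample estimates of these quantities follow directly from the weighted event list. A minimal sketch in Python (the function name and input layout, per-event observable values plus per-event weights $w_n$, are illustrative):

```python
import math

def weighted_moments(xi, xj, w):
    # Weighted sample estimates of Eqs. (variance)-(rho): each event enters
    # with its weight w_n, normalized over the sample.
    W = sum(w)
    mi = sum(a * b for a, b in zip(w, xi)) / W
    mj = sum(a * b for a, b in zip(w, xj)) / W
    var_i = sum(a * (b - mi) ** 2 for a, b in zip(w, xi)) / W
    var_j = sum(a * (b - mj) ** 2 for a, b in zip(w, xj)) / W
    cov = sum(a * (b - mi) * (c - mj) for a, b, c in zip(w, xi, xj)) / W
    omega_i = var_i / mi                      # scaled variance, Eq. (omega)
    rho_ij = cov / math.sqrt(var_i * var_j)   # correlation coefficient, Eq. (rho)
    return omega_i, rho_ij
```

For perfectly correlated observables the routine returns $\rho_{ij} = 1$, and for unit weights it reduces to the ordinary (unweighted) sample moments.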
Let us consider a static and neutral system with four-velocity
$u_{\mu} = (1,0,0,0)$, chemical potentials $\mu_j = (0,0,0)$, local temperature
$T~=~\beta^{-1}=0.160~\textrm{GeV}$, and volume $V_1=2000~\textrm{fm}^3$.
This is a system large enough\footnote{Generally it is not easy to say when a system is
`large enough' for the large volume approximation to be valid. Here we find good
agreement with asymptotic analytic solutions. Charged systems, or Bose-Einstein/Fermi-Dirac
systems, usually converge more slowly to their asymptotic solution.
}
for using the large volume
approximation worked out in Section~\ref{Sec_SEfB}.
In Figs.(\ref{CwL_plot_fluc}) and (\ref{CwL_plot_corr}) we show the results of Monte Carlo
runs of $2.5 \cdot 10^4$ events each. Each value of $\lambda$ has been sampled $20$ times
to allow for calculation of a statistical uncertainty estimate. 19 different values
of $\lambda$ have been studied. In this case study, the extensive quantities baryon
number $B$, strangeness $S$, electric charge $Q$, energy $E$, and longitudinal
momentum~$P_z$ are considered for re-weighting.
Conservation of transverse momenta $P_x$ and $P_y$ can be shown not to affect the
$\Delta p_{T,i}$ and $\Delta y_i$ dependence of multiplicity fluctuations and correlations
studied in the following sections.
Their $\Delta y_i$ dependence is, however, rather sensitive to $P_z$ conservation.
Angular correlations (not studied in this article), on the other hand, are strongly
sensitive to joint $P_x$ and $P_y$ conservation~\cite{acc,baseline}.
\begin{figure}[ht!]
\epsfig{file=fig_1a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_1b.eps,width=8.4cm,height=6.5cm}
\caption{Mean values ({\it left}) and variances ({\it right}) of various extensive quantities,
as listed in the legends, as a function of $\lambda$.
Each marker and its error bar represents the result of 20 Monte Carlo
runs of $2.5 \cdot 10^4$ events each.
19 different equally spaced values of $\lambda$ have been investigated.
Solid lines indicate GCE values ({\it left}), or linear extrapolations from the GCE value
to the MCE limit ({\it right}).}
\label{CwL_plot_fluc}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_2a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_2b.eps,width=8.4cm,height=6.5cm}
\caption{Covariances ({\it left}) and correlation coefficients ({\it right}) between
various extensive quantities, as listed in the legends, as a function of $\lambda$.
Solid lines indicate linear extrapolations from the GCE value
to the MCE limit ({\it left}), or GCE values ({\it right}).
The rest as in Fig.(\ref{CwL_plot_fluc}).}
\label{CwL_plot_corr}
\end{figure}
In Fig.(\ref{CwL_plot_fluc}) ({\it left}) we show the results for mean values
of baryon number $\langle B \rangle$, strangeness $\langle S \rangle$, electric charge
$\langle Q\rangle$, energy $\langle E \rangle$, and the
momenta $\langle P_x \rangle$ and $\langle P_z \rangle$.
The solid lines represent GCE values. Only the expectation value of energy is not equal to 0,
as the system sampled is assumed to be static and neutral with $T \not= 0$.
The evolution of the respective variances is shown in Fig.(\ref{CwL_plot_fluc}) ({\it right}).
Variances of extensive quantities subject to re-weighting converge linearly
to~$0$ as~$\lambda$ goes to~$1$. One notes that
$\langle \left( \Delta P_x \right)^2 \rangle$ remains constant (within error bars),
as this quantity is not re-weighted in this case study.
Please note that on many data points the error bars are smaller than the symbol used.
In Fig.(\ref{CwL_plot_corr}) ({\it left}) we show the evolution of covariances
$\langle \Delta B \Delta S \rangle$, $\langle \Delta B \Delta Q \rangle$,
$\langle \Delta S \Delta Q \rangle$, and $\langle \Delta E \Delta Q \rangle$
with the `size of the bath'. As seen, the covariances between quantities
considered for re-weighting also converge linearly to 0. In a neutral system,
covariances between energy and charge are equal to 0. As an example, we show
$\langle \Delta E \Delta Q \rangle$. In a static system, the covariances
between momenta and any other extensive quantity are also equal to 0. As
an example, we show $\langle \Delta E \Delta P_z \rangle$. The correlation
coefficients, Eq.(\ref{rho}), on the other hand, remain constant as a function
of $\lambda$, as shown in Fig.(\ref{CwL_plot_corr}) ({\it right}). The values
of fully phase space integrated correlation coefficients $\rho_{BS}$, $\rho_{BQ}$,
and $\rho_{SQ}$ can be compared to the GCE results denoted by the
solid lines shown in Figs.(\ref{lc_bs_0000} - \ref{lc_sq_0000}) in Section \ref{Sec_LCfluc}.
The variances and covariances converge linearly from their GCE values to their respective
MCE limits in the large volume limit.
The dependence of $\langle (\Delta X_i)^2\rangle$, Eq.(\ref{variance}), and
$\langle \Delta X_i \Delta X_j \rangle$, Eq.(\ref{covariance}), on the size of the bath
$\lambda$ is given by:
\begin{eqnarray}\label{variance_lambda}
\langle (\Delta X_i)^2\rangle_{\lambda} &=&
(1-\lambda) ~\langle (\Delta X_i)^2\rangle_{gce}
~+~ \lambda ~\langle (\Delta X_i)^2\rangle_{mce} \\
\label{covariance_lambda}
\langle \Delta X_i \Delta X_j \rangle_{\lambda}&=&
(1-\lambda) ~\langle \Delta X_i \Delta X_j \rangle_{gce}~
~+~ \lambda ~\langle \Delta X_i \Delta X_j \rangle_{mce}~.
\end{eqnarray}
Mean values $\langle X_i \rangle_{\lambda}$ remain constant. This implies that the scaled
variance $\omega$ of multiplicity fluctuations, Eq.(\ref{omega}), also converges linearly:
\begin{equation}\label{acc_scaling}
\omega_{\lambda}~\equiv~
\frac{\langle (\Delta N_i)^2\rangle_{\lambda}}{\langle N_i \rangle_{\lambda}}
~=~ (1-\lambda) ~\omega_{gce} ~+~ \lambda ~\omega_{mce}~,
\end{equation}
from its GCE value $\omega_{gce}$ to the MCE limit $\omega_{mce}$. Please note that
Eqs.(\ref{variance_lambda},\ref{covariance_lambda},\ref{acc_scaling}) are equivalent to the `acceptance
scaling' approximation\footnote{For the situation discussed here one could
equivalently say that particles
are randomly drawn from coordinate space of the total volume~$V_g$.
For the derivation of the acceptance scaling formula \cite{CEfirst} it was, however,
assumed that particles are randomly drawn from a sample in momentum space.}
used in \cite{MCEvsData,Res,CEfirst}. For the correlation
coefficient, Eq.(\ref{rho}),
\begin{equation}\label{lambda_rho}
\rho_{\lambda} ~\equiv~ \frac{\langle \Delta X_i \Delta X_j \rangle_{\lambda}}{
\sqrt{\langle (\Delta X_i)^2\rangle_{\lambda}\langle (\Delta X_j)^2\rangle_{\lambda}}}~,
\end{equation}
the story is more complicated. In case both $X_i$ and $X_j$ are re-weighted and
measured in full phase space, we find:
\begin{equation}
\langle (\Delta X_i)^2 \rangle_{mce}~=~
\langle (\Delta X_j)^2 \rangle_{mce}~=~
\langle \Delta X_i \Delta X_j \rangle_{mce}~=~0~,
\end{equation}
and the correlation coefficient $\rho_{\lambda}$, Eq.(\ref{lambda_rho}),
is independent of the value of $\lambda$, see Fig.(\ref{CwL_plot_corr}).
In all other cases, one needs to extrapolate
Eqs.(\ref{variance_lambda},\ref{covariance_lambda})
separately, and then calculate the correlation coefficient.
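The linear extrapolation itself is a straight-line least-squares fit of the re-weighted results against $\lambda$, evaluated at $\lambda = 1$. A minimal sketch (the function name is illustrative):

```python
def extrapolate_to_mce(lambdas, values):
    # Least-squares straight line v(lambda) = a + b*lambda through the
    # re-weighted results; per Eq. (acc_scaling) the MCE limit is v(1) = a + b.
    n = len(lambdas)
    mx = sum(lambdas) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in lambdas)
    sxy = sum((x - mx) * (y - my) for x, y in zip(lambdas, values))
    b = sxy / sxx
    a = my - b * mx
    return a + b
```

Applied, for instance, to 19 equally spaced $\lambda$ values (as in the runs shown here) of any quantity obeying Eq.(\ref{acc_scaling}), it recovers the MCE limit without ever sampling at $\lambda = 1$.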
We have therefore successively transformed our Monte Carlo sample. As $\lambda \rightarrow 1$,
we give larger and larger weight to events in the immediate vicinity of the equilibrium
expectation value, and smaller and smaller weight to events away from it.
The distribution of extensive quantities considered for re-weighting
(a multivariate normal distribution in the GCE in the large volume limit)
hence gets contracted to a $\delta$-function with
vanishing variances and covariances. That is, we successively
highlight the properties of events which have very similar values of extensive quantities.
This will have a bearing on charge correlations and, in particular,
multiplicity fluctuations and correlations discussed in the following sections.
\begin{figure}[ht!]
\epsfig{file=fig_3.eps,width=8.4cm,height=6.5cm}
\caption{First and second moment of the weight factor Eq.(\ref{solved_curly_W}) as a
function of $\lambda$.
The rest as in Fig.(\ref{CwL_plot_fluc}).
}
\label{CwL_plot_weight_2nd_mom}
\end{figure}
The price we pay is that, as $\lambda$ grows, so too does the statistical uncertainty. In the
limit $\lambda \rightarrow 1$, we approach a sample-reject type of formalism. We
cannot, therefore, directly obtain the microcanonical limit for the large system size
studied here, as the required statistics would be prohibitive for the available computing power.
On the bright side, however, we can extrapolate to this limit. In Fig.(\ref{CwL_plot_weight_2nd_mom}) we
show the second moment of the weight factor, Eq.(\ref{solved_curly_W}), as a function
of $\lambda$. A large second moment $\langle \mathcal{W}^2 \rangle$ implies a large
statistical uncertainty and, hence, usually requires a larger sample. We mention in
this context that the intermediate ensembles, between the limits of GCE and MCE, may also be
of phenomenological interest.
\section{Momentum Spectra}
\label{Sec_MomSpect}
We next consider momentum spectra.
In Fig.(\ref{mom_spect}) we show transverse momentum and rapidity spectra
of positively charged hadrons, both primordial and final state,
for a static thermal system.
Based on these momentum spectra we construct acceptance bins $\Delta p_{T,i}$
and $\Delta y_{i}$, as in~\cite{acc,feq,baseline} and \cite{beni_urqmd,beni_data}.
Momentum bins are constructed such that each
of the five bins constructed contains on average one fifth of the total
yield of positively charged particles. The values defining the bounds of the momentum
space bins $\Delta p_{T,i}$ and $\Delta y_i$ are summarized in Table~\ref{accbins}.
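Such equal-yield bins are simply the sample quantiles of the measured spectrum. A sketch of the construction in Python (the function name and the outer limits passed as `lo`/`hi` are illustrative):

```python
def quantile_bin_edges(samples, nbins, lo, hi):
    # Interior bin edges placed at sample quantiles, so that each bin holds
    # (on average) an equal share of the yield, as done for the
    # Delta p_T,i and Delta y_i bins; lo/hi are the outermost limits.
    s = sorted(samples)
    edges = [lo]
    for k in range(1, nbins):
        edges.append(s[k * len(s) // nbins])
    edges.append(hi)
    return edges
```

Applied to a sample of $p_T$ values of positively charged particles with `nbins = 5`, this reproduces the kind of bin boundaries listed in Table~\ref{accbins}.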
\begin{figure}[ht!]
\epsfig{file=fig_4a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_4b.eps,width=8.4cm,height=6.5cm}
\caption{({\it Left:}) Transverse momentum spectrum of positively charged hadrons,
both primordial and final state. ({\it Right:}) Rapidity spectrum of
positively charged hadrons, both primordial and final state.
$2 \cdot 10^6$ events have been sampled.}
\label{mom_spect}
\end{figure}
Resonance decay shifts the transverse momentum distribution to lower average
transverse momentum $\langle p_T \rangle$ and widens the rapidity distribution of
thermal `fireballs' \cite{ResDecay}. Final state transverse momentum bins are,
hence, slightly `contracted', while final state rapidity bins become slightly `wider', when
compared to their respective primordial counterparts.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c|c||}\hline
& ~~$p_{T,1}$ [GeV] & $p_{T,2}$ [GeV] & $p_{T,3}$ [GeV] & $p_{T,4}$ [GeV]
& $p_{T,5}$ [GeV] & ~~$p_{T,6}$ [GeV] \\
\hline
~primordial~ & 0.0 & 0.22795 & 0.36475 & 0.51825 & 0.73995 & 5.0 \\
~final state & 0.0 & 0.17105 & 0.27215 & 0.38785 & 0.56245 & 5.0 \\
\hline \hline
& $y_1$ & $y_2$ & $y_3$ & $y_4$ & $y_5$ & $y_6$ \\
\hline
~primordial~ & -5.0 & -0.4275 & -0.1241 & 0.1241 & 0.4273 & 5.0 \\
~final state & -5.0 & -0.5289 & -0.1553 & 0.1551 & 0.5289 & 5.0 \\
\hline
\end{tabular}
\caption{Transverse momentum and rapidity bins
$\Delta p_{T,i} = \left[p_{T,i},p_{T,i+1} \right]$
and $\Delta y_{i} = \left[y_{i},y_{i+1} \right]$, both primordial and final state,
for a static neutral Boltzmann system with temperature $T=0.160~\textrm{GeV}$.
} \label{accbins}
\end{center}
\end{table}
Resonance decay combined with transverse as well as longitudinal flow is believed to
provide a rather good description of experimentally observed momentum spectra
in relativistic heavy ion collisions at SPS and RHIC energies
\cite{SolfrankHeinz,THERMINATOR,BecCley}. Our spectra, on the other hand, contain no flow and
our results thus cannot be directly compared to experimental data
or transport simulations.
However, qualitatively one might observe effects of the kind discussed in the following.
\section{The Momentum Space Dependence of Correlations between Conserved Charges}
\label{Sec_LCfluc}
An interesting example of quantities for which the measured value depends on the
observed part of the momentum spectrum are the correlation coefficients between the
charges baryon number $B$, strangeness $S$ and electric charge~$Q$.
Please note that also the variances and covariances of the baryon number, strangeness,
and electric charge distribution are sensitive to the acceptance cuts applied. Their
values are additionally rather sensitive to the effects of globally
enforced conservation laws. If the size of the `bath' is reduced, a change in one
interval of phase space will have to be balanced (preferably) by a change in another
interval, and not by the `bath'.
\subsection{Grand Canonical Ensemble}
We will now consider the correlation coefficients $\rho_{BS}$, $\rho_{BQ}$, and $\rho_{SQ}$
in limited acceptance bins $\Delta p_{T,i}$ and $\Delta y_i$, as defined in Table~\ref{accbins},
in the grand canonical ensemble. Particles in one momentum bin are then essentially sampled
independently from particles in any other momentum space segment, due to the
`infinite bath' assumption. Nevertheless, the
way in which quantum numbers are correlated is different in different momentum bins, as
different particle species have, due to their different masses, different momentum spectra.
Let us first make some basic observations about the hadron resonance gas and the way
in which quantum numbers are correlated in a GCE. Charge fluctuations directly probe
the degrees of freedom of a system, i.e. they are sensitive to its particle mass spectrum (and
its quantum number configurations). We first consider the contribution of different particle
species to the covariance $\langle \Delta X_i \Delta X_j \rangle$, Eq.(\ref{covariance}),
and hence to the correlation coefficient $\rho_{ij}$, Eq.(\ref{rho}).
All baryons have baryon number $b=+1$. Baryons can only carry strange quarks, i.e.
their strangeness is always $s\le0$. Anti-baryons have $b=-1$, and $s\ge0$. Hence, both
groups contribute negatively to the baryon-strangeness covariance, and so
$\langle \Delta B \Delta S \rangle < 0$, and therefore ${\rho_{BS}<0}$, as indicated
by the solid lines in Fig.(\ref{lc_bs_0000}).
Positively charged baryons and their anti-particles contribute positively to
the baryon-electric charge covariance $\langle \Delta B \Delta Q \rangle$, while negatively
charged baryons (and their anti-particles) contribute negatively. Two observations can
be made on the hadron resonance gas mass spectrum: there are more positively charged baryons
than negatively charged ones, and their average mass is lower. I.e., in a neutral gas
($\mu_B=\mu_Q=\mu_S = 0$) the contribution of positively charged baryons dominates and
therefore $\langle \Delta B \Delta Q \rangle >0$ and ${\rho_{BQ}>0}$, as indicated
by the solid lines in Fig.(\ref{lc_bq_0000}).
Mesons and their anti-particles always contribute positively to the
strangeness-electric charge correlation coefficient $\rho_{SQ}$. Electrically
charged strange mesons are either composed of a $u$-quark and an {$\bar{s}$-quark},
or of an $\bar{u}$-quark and a $s$-quark (and superpositions thereof).
Their contribution to
$\langle \Delta S \Delta Q \rangle $ is in either case positive. On the baryonic side, only the
$\Sigma^+$ (as well as its degenerate states and their respective anti-particles) has a negative
contribution to $\langle \Delta S \Delta Q \rangle $, while all other strangeness carrying
baryons have either electric charge $q=-1$, or $q=0$. Therefore, we find ${\rho_{SQ} >0}$,
as indicated by the solid lines in Fig.(\ref{lc_sq_0000}).
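These sign arguments can be made quantitative with a small numerical check. In the GCE Boltzmann gas all species are independent Poisson variables, so $\textrm{var}(N_i) = \langle N_i \rangle$ and, for charges $X_a = \sum_i q_{a,i} N_i$, $\langle \Delta X_a \Delta X_b \rangle = \sum_i q_{a,i}\, q_{b,i} \langle N_i \rangle$. The following Python sketch implements this with a toy particle list (the mean multiplicities chosen are illustrative, not fitted):

```python
import math

def gce_correlation(species, a, b):
    # GCE correlation coefficient between conserved charges a and b for a gas
    # of independently Poisson-distributed species: var(N_i) = <N_i>, hence
    # cov(X_a, X_b) = sum_i q_a,i * q_b,i * <N_i>.
    # `species` is a list of (mean_N, charge_dict) pairs.
    cov = sum(n * q[a] * q[b] for n, q in species)
    var_a = sum(n * q[a] ** 2 for n, q in species)
    var_b = sum(n * q[b] ** 2 for n, q in species)
    return cov / math.sqrt(var_a * var_b)

# Toy gas: Lambda/anti-Lambda anti-correlate B and S; kaons correlate S and Q.
toy = [
    (3.0, {"B": 1, "S": -1, "Q": 0}),    # Lambda
    (3.0, {"B": -1, "S": 1, "Q": 0}),    # anti-Lambda
    (5.0, {"B": 0, "S": 1, "Q": 1}),     # K+
    (5.0, {"B": 0, "S": -1, "Q": -1}),   # K-
]
```

Even this four-species toy gas reproduces the signs discussed above: $\rho_{BS} < 0$ (hyperons) and $\rho_{SQ} > 0$ (charged kaons).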
In Figs.(\ref{lc_bs_0000}-\ref{lc_sq_0000}) we show the correlation coefficients
$\rho_{BS}$ (baryon number - strangeness), $\rho_{BQ}$ (baryon number - electric charge),
and $\rho_{SQ}$ (strangeness - electric charge) as measured in the acceptance
bins $\Delta p_{T,i}$ and $\Delta y_i$ defined in Table~\ref{accbins},
both primordial and final state.
The average baryon number, strangeness, and electric charge in each
bin is equal to zero, as the system is assumed to be neutral.
The analytical primordial values (15 bins) shown in Figs.(\ref{lc_bs_0000}-\ref{lc_sq_0000})
are calculated using analytical spectra.
Please note that, again, on many data points the error bars are smaller than the symbol used.
\begin{figure}[ht!]
\epsfig{file=fig_5a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_5b.eps,width=8.4cm,height=6.5cm}
\caption{Baryon-strangeness correlation coefficient $\rho_{BS}$ in the GCE in limited acceptance windows,
both primordial and final state.
({\it Left:}) transverse momentum bins $\Delta p_{T,i}$.
({\it Right:}) rapidity bins $\Delta y_i$.
Horizontal error bars indicate the width and position of the momentum bins
(and not an uncertainty!).
Vertical error bars indicate the statistical uncertainty of $20$ Monte Carlo
runs of $10^5$ events each.
The marker indicates the center of gravity of the corresponding bin.
The solid lines show the fully phase space integrated GCE result.}
\label{lc_bs_0000}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_6a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_6b.eps,width=8.4cm,height=6.5cm}
\caption{Baryon-electric charge correlation coefficient $\rho_{BQ}$ in the GCE in limited
acceptance windows, both primordial and final state.
({\it Left:}) transverse momentum bins $\Delta p_{T,i}$.
({\it Right:}) rapidity bins $\Delta y_i$.
The rest as in Fig.(\ref{lc_bs_0000}). }
\label{lc_bq_0000}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_7a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_7b.eps,width=8.4cm,height=6.5cm}
\caption{Strangeness-electric charge correlation coefficient $\rho_{SQ}$ in the GCE in limited
acceptance windows, both primordial and final state.
({\it Left:}) transverse momentum bins $\Delta p_{T,i}$.
({\it Right:}) rapidity bins $\Delta y_i$.
The rest as in Fig.(\ref{lc_bs_0000}).}
\label{lc_sq_0000}
\end{figure}
In Tables~\ref{accbins_LC_BS}~to~\ref{accbins_LC_SQ} we summarize the transverse
momentum and rapidity dependence of the correlation coefficients
$\rho_{BS}$, $\rho_{BQ}$, and $\rho_{SQ}$. The statistical error quoted corresponds to
20 Monte Carlo runs of $10^5$ events each. The analytical values (5 bins) listed
in the tables are calculated using the momentum bins defined in Table~\ref{accbins}.
Mild differences between Monte Carlo and analytical
results are unavoidable. The analytical values are also not exactly
symmetric in $\Delta y_i$, as the exact size of the acceptance bins constructed is
sensitive to the number of bins used for the calculation of the momentum spectra.
The values of the correlation coefficient $\rho$ are also
rather sensitive to exact bin size, and the fourth digit becomes somewhat unreliable.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||}\hline
$\rho_{BS}$ & $\Delta p_{T,1}$ & $\Delta p_{T,2}$ & $\Delta p_{T,3}$ & $\Delta p_{T,4}$
& $\Delta p_{T,5}$ \\
\hline
~$\rho_{prim}^{calc}$ ~&~ $-0.2479$ ~&~$-0.2641$ ~&~$-0.2864$ ~&~$-0.3188$ ~&~$-0.3839$ ~ \\
~$\rho_{prim}$ ~&~ $-0.248 \pm 0.003$ ~&~$-0.264 \pm 0.003$ ~&~$-0.286 \pm 0.003$ ~
&~$-0.319 \pm 0.002$ ~&~$-0.385 \pm 0.002$ ~ \\
~$\rho_{final}$ ~&~ $-0.216 \pm 0.002$ ~&~$-0.220 \pm 0.003$ ~
&~$-0.241 \pm 0.004$ ~&~$-0.269 \pm 0.003$ ~&~$-0.335 \pm 0.003$ ~ \\
\hline \hline
$\rho_{BS}$ & $\Delta y_1$ & $\Delta y_2$ & $\Delta y_3$ & $\Delta y_4$
& $\Delta y_5$ \\
\hline
~$\rho_{prim}^{calc}$ ~&~ $-0.2407$ ~&~$-0.3345$ ~&~$-0.3536$ ~&~$-0.3345$ ~&~$-0.2408$ ~ \\
~$\rho_{prim}$ ~&~ $-0.241 \pm 0.003$ ~&~$-0.334 \pm 0.003$ ~
&~$-0.353 \pm 0.003$ ~&~$-0.335 \pm 0.003$ ~&~$-0.240 \pm 0.003$ ~ \\
~$\rho_{final}$ ~&~ $-0.191 \pm 0.002$ ~&~$-0.300 \pm 0.002$ ~
&~$-0.328 \pm 0.002$ ~&~$-0.299 \pm 0.002$ ~&~$-0.190 \pm 0.002$ ~ \\
\hline
\end{tabular}
\caption{Baryon-strangeness correlation coefficient $\rho_{BS}$ in the GCE in transverse momentum
bins $\Delta p_{T,i}$ and rapidity bins $\Delta y_i$, both primordial and final state.
For comparison, analytical values $\rho_{prim}^{calc}$ for primordial correlations
are included.
The statistical uncertainty corresponds to $20$ Monte Carlo runs of $10^5$ events each.
}
\label{accbins_LC_BS}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||}\hline
$\rho_{BQ}$ & $\Delta p_{T,1}$ & $\Delta p_{T,2}$ & $\Delta p_{T,3}$ & $\Delta p_{T,4}$
& $\Delta p_{T,5}$ \\
\hline
~$\rho_{prim}^{calc}$ ~&~ $0.1120$ ~&~$0.1271$ ~&~$0.1420$ ~&~$0.1579$ ~&~$0.1781$ ~ \\
~$\rho_{prim}$ ~&~ $0.113 \pm 0.002$ ~&~$0.126 \pm 0.002$ ~&~$0.143 \pm 0.003$ ~
&~$0.158 \pm 0.002$ ~&~$0.178 \pm 0.003$ ~ \\
~$\rho_{final}$ ~&~ $0.112 \pm 0.003$ ~&~$0.120 \pm 0.003$ ~&~$0.138 \pm 0.003$ ~
&~$0.164 \pm 0.003$ ~&~$0.221 \pm 0.003$ ~ \\
\hline \hline
$\rho_{BQ}$ & $\Delta y_1$ & $\Delta y_2$ & $\Delta y_3$ & $\Delta y_4$
& $\Delta y_5$ \\
\hline
~$\rho_{prim}^{calc}$ ~&~ $0.1160$ ~&~$0.1601$ ~&~$0.1658$ ~&~$0.1601$ ~&~$0.1160$ ~ \\
~$\rho_{prim}$ ~&~ $0.116 \pm 0.002$ ~&~$0.160 \pm 0.003$ ~&~$0.166 \pm 0.003$ ~
&~$0.159 \pm 0.003$ ~&~$0.117 \pm 0.002$ ~ \\
~$\rho_{final}$ ~&~ $0.118 \pm 0.003$ ~&~$0.192 \pm 0.003$ ~&~$0.202 \pm 0.003$ ~
&~$0.192 \pm 0.003$ ~&~$0.119 \pm 0.003$ ~ \\
\hline
\end{tabular}
\caption{
Baryon-electric charge correlation coefficient $\rho_{BQ}$ in the GCE in transverse momentum
bins $\Delta p_{T,i}$ and rapidity bins $\Delta y_i$, both primordial and final state.
}
\label{accbins_LC_BQ}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||}\hline
$\rho_{SQ}$ & $\Delta p_{T,1}$ & $\Delta p_{T,2}$ & $\Delta p_{T,3}$ & $\Delta p_{T,4}$
& $\Delta p_{T,5}$ \\
\hline
~$\rho_{prim}^{calc}$ ~&~ $0.2831$ ~&~$0.3033$ ~&~$0.3150$ ~&~$0.3185$ ~&~$0.3055$ ~ \\
~$\rho_{prim}$ ~&~ $0.284 \pm 0.003$ ~&~$0.304 \pm 0.003$ ~&~$0.314 \pm 0.003$ ~
&~$0.319 \pm 0.002$ ~&~$0.305 \pm 0.002$ ~ \\
~$\rho_{final}$ ~&~ $0.243 \pm 0.003$ ~&~$0.254 \pm 0.003$ ~&~$0.276 \pm 0.003$ ~
&~$0.292 \pm 0.003$ ~&~$0.303 \pm 0.002$ ~ \\
\hline \hline
$\rho_{SQ}$ & $\Delta y_1$ & $\Delta y_2$ & $\Delta y_3$ & $\Delta y_4$
& $\Delta y_5$ \\
\hline
~$\rho_{prim}^{calc}$ ~&~ $0.2934$ ~&~$0.3137$ ~&~$0.3104$ ~&~$0.3137$ ~&~$0.2934$ ~ \\
~$\rho_{prim}$ ~&~ $0.294 \pm 0.003$ ~&~$0.314 \pm 0.003$ ~&~$0.310 \pm 0.002$ ~
&~$0.312 \pm 0.003$ ~&~$0.292 \pm 0.002$ ~ \\
~$\rho_{final}$ ~&~ $0.255 \pm 0.002$ ~&~$0.299 \pm 0.003$ ~&~$0.297 \pm 0.003$ ~
&~$0.298 \pm 0.003$ ~&~$0.255 \pm 0.003$ ~ \\
\hline
\end{tabular}
\caption{
Strangeness-electric charge correlation coefficient $\rho_{SQ}$ in the GCE in transverse momentum
bins $\Delta p_{T,i}$ and rapidity bins $\Delta y_i$, both primordial and final state.
}
\label{accbins_LC_SQ}
\end{center}
\end{table}
We next attempt to explain, in turn, the rapidity dependence of $\rho_{BS}$, $\rho_{BQ}$,
and $\rho_{SQ}$. Strange baryons are, on average, heavier than non-strange baryons, so their
rapidity distributions are narrower. The kaon rapidity distribution, in turn, is wider
than that of the baryons.
A change in baryon number (strangeness) at high $|y|$ is less likely to be accompanied
by a change in strangeness (baryon number) than at low $|y|$.
The value of $\rho_{BS}$, therefore, drops
toward higher rapidity, as shown in Fig.(\ref{lc_bs_0000}), ({\it right}).
By the same argument, we find
a weakening of the baryon-electric charge correlation $\rho_{BQ}$ at higher rapidity
(Fig.(\ref{lc_bq_0000}), ({\it right})) as the rapidity distribution of electrically charged particles
is wider than that of baryons. For the strangeness-electric charge
correlation coefficient we find first a mild rise, and then a somewhat stronger drop
of $\rho_{SQ}$ towards higher rapidity. As one shifts one's acceptance window towards higher
values of $|y|$, first the contribution of baryons (in particular $\Sigma^+$) decreases and, as
the meson contribution grows, $\rho_{SQ}$ rises slightly. Towards the highest $|y|$,
pions again dominate and de-correlate the quantum numbers.\\
The transverse momentum dependence can be understood as follows:
heavier particles have higher average transverse momentum $\langle p_T \rangle$ and,
hence, their influence increases towards higher~$p_T$. Heavy particles have a tendency
to carry several charges, causing the correlation coefficients to grow.
The contribution of strange baryons compared to non-strange baryons grows towards higher
transverse momentum, as strange baryons have on average larger mass than non-strange baryons.
The correlation coefficient $\rho_{BS}$ thus becomes strongly negative at high $p_T$. As the
contribution of baryons compared to mesons grows stronger towards larger $p_T$,
a change in baryon number (electric charge) is now more likely to be accompanied
by a change in electric charge (baryon number) than at low $p_T$, and $\rho_{BQ}$ increases
with $p_T$ (the $\Delta$ resonances\footnote{Included in the THERMUS particle table
up to the $\Delta(2420)$.} ensure it keeps rising). For the $\Delta p_{T,i}$
dependence of $\rho_{SQ}$ we finally note that one of the strongest contributors at higher
$p_T$ is the $\Omega^-$, with a relatively low mass of $m_{\Omega^-} = 1.672~\mathrm{GeV}$. So after a
rise, $\rho_{SQ}$ drops again towards highest $p_T$, due to an increasing $\Sigma^+$
contribution\footnote{Included in the THERMUS particle table up to the $\Sigma(2030)$.}. \\
Since resonance decay tends to deposit the lighter decay products (mesons)
at low $p_T$ and higher~$|y|$, while leaving the heavier particles (baryons)
at higher $p_T$ and at mid-rapidity, none of the above arguments about the transverse
momentum and rapidity dependence are essentially changed by resonance decay. The
correlation coefficient $\rho_{BS}$ becomes more negative towards higher $p_T$,
while becoming weaker towards higher $|y|$. Similarly, $\rho_{BQ}$ grows larger at high $p_T$
and drops towards higher~$y$. The larger contributions of baryons to the high $p_T$
tail of the transverse momentum spectrum, and their decreased contribution to the tails
of the rapidity distribution, compared to mesons, are to blame.
The bump in the $p_T$ dependence of $\rho_{SQ}$, presumably caused by the $\Sigma^+$,
has vanished, as the $\Sigma^+$ is only considered as stable in its lightest version
with mass $m_{\Sigma^+}=1.189~\mathrm{GeV}$. The small bump in the $y$ dependence of $\rho_{SQ}$, however, stays.
The correlation is presumably first increased by a growing kaon contribution
and then again decreased by a growing pion contribution at larger rapidities.\\
The values of $\rho$ after resonance decay are directly sensitive to how the data is
analyzed. In the above study we analyzed final state particles (stable against strong decays)
only. One could, however, also reconstruct decay positions and momenta of parent resonances
and could then count them as belonging to the acceptance bin the parent momentum
would fall into. In the situation above, however, this would again yield the
primordial scenario. If reconstruction of resonances is not done, one is
sensitive to charge correlations carried by final state particles.
As in the primordial case, a larger acceptance bin effectively averages over
smaller bins. However, the smaller the acceptance bin, the more information is lost due to
resonance decay. In full acceptance, final state and primordial correlation coefficients
ought to be the same, since quantum numbers (and energy-momentum) are conserved
in the decays of resonances.
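This last statement can be checked directly: since every decay considered conserves $B$, $S$ and $Q$, the per-event charge vector is unchanged by decay, and so is every moment built from it. A self-contained sketch with a hypothetical three-entry decay table (quantum numbers given as $(B,S,Q)$ triplets; this is an illustration, not the THERMUS particle table):

```python
import random

# Hypothetical mini decay table (illustration only): parent and daughters
# are given as quantum number triplets (B, S, Q).
DECAYS = {
    (1, 0, 2):  [(1, 0, 1), (0, 0, 1)],   # Delta++ -> p  pi+
    (0, 1, 1):  [(0, 1, 0), (0, 0, 1)],   # K*+     -> K0 pi+
    (0, 0, 0):  [(0, 0, 1), (0, 0, -1)],  # rho0    -> pi+ pi-
}

def charges(particles):
    """Event-wise (B, S, Q), summed over all particles."""
    return tuple(sum(p[i] for p in particles) for i in range(3))

random.seed(1)
stable = [(1, 0, 1), (0, 0, -1)]           # p and pi- as stable examples
for _ in range(1000):
    event = [random.choice(list(DECAYS) + stable)
             for _ in range(random.randint(1, 30))]
    final = []
    for p in event:
        final.extend(DECAYS.get(p, [p]))   # decay if listed, else keep
    # in full acceptance the event-wise charges, and hence all variances
    # and covariances built from them, are unchanged by decay
    assert charges(event) == charges(final)
print("full-acceptance (B, S, Q) unchanged by decay")
```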
\subsection{Extrapolating to the MCE}
We next consider the extrapolation to the MCE limit of variances and covariances and, hence, correlation
coefficients, of joint distributions of charges in limited acceptance. The primordial
joint baryon number - strangeness distributions in different transverse momentum
bins will serve as examples. In this subsection, we use an
extended data set of $20 \cdot 8 \cdot 10^5$ events.
\begin{figure}[ht!]
\epsfig{file=fig_8a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_8b.eps,width=8.4cm,height=6.5cm}
\caption{Evolution of the variance of the marginal baryon number distribution
$\langle (\Delta B)^2 \rangle$ ({\it left}) and the variance of the marginal strangeness distribution
$\langle (\Delta S)^2 \rangle$ ({\it right}) with $\lambda$ for a primordial hadron
resonance gas in different $\Delta p_{T,i}$ bins.
Each marker and its error bar except the last
represents the result of $20$ Monte Carlo runs of $10^5$ events
each. 8 different equally spaced values of $\lambda$ have been investigated.
The last marker denotes the result of the extrapolation.
Solid lines indicate
extrapolations from the GCE value to the MCE limit.
}
\label{CwL_varCC}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_9a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_9b.eps,width=8.4cm,height=6.5cm}
\caption{Evolution of the covariance $\langle \Delta B \Delta S\rangle$ ({\it left})
and the correlation coefficient $\rho_{BS}$ ({\it right})
of the baryon number - strangeness distribution
with $\lambda$ for a primordial hadron
resonance gas in different $\Delta p_{T,i}$ bins.
The rest as in Fig.(\ref{CwL_varCC}).
}
\label{CwL_corrCC}
\end{figure}
In Fig.(\ref{CwL_varCC}) we show the evolution of the variances of the marginal primordial
baryon number distribution $\langle (\Delta B)^2 \rangle$ ({\it left}) and of the marginal
primordial strangeness distribution $\langle (\Delta S)^2 \rangle$ ({\it right})
in the transverse momentum bins $ \Delta p_{T,i}$, defined in Table \ref{accbins}, as
a function of the size of the bath $\lambda= V_1/V_g$. $8$ equally spaced values of $\lambda$
have been investigated. The last marker denotes the result of the extrapolation. In
Fig.(\ref{CwL_corrCC}) we show the dependence of the primordial covariance
$\langle \Delta B \Delta S\rangle$ ({\it left}) and the primordial correlation coefficient
$\rho_{BS}$ ({\it right}) of the joint baryon number - strangeness distribution on the size of the
bath $\lambda$.
Let us first comment on the GCE values of the variances (the leftmost markers
in Fig.(\ref{CwL_varCC})).
Each of the $5$ momentum bins holds one fifth of the
charged particle yield, but less than one fifth of the baryonic contribution
falls into the lowest bin $\Delta p_{T,1}$, and more than one fifth into the highest
bin $\Delta p_{T,5}$. We therefore find the baryon number variance $\langle (\Delta B)^2 \rangle$
largest in $\Delta p_{T,5}$, and smallest in $\Delta p_{T,1}$.
When binned in rapidity, $\Delta y_{3}$ has the strongest baryon contribution and, hence,
$\langle (\Delta B)^2 \rangle$ is largest there. The same holds for
the variance $\langle (\Delta S)^2 \rangle$ of the marginal strangeness distribution.
Strangeness carrying particles are on average heavier than electrically charged
particles and, hence, the strangeness contribution is strongest around mid-rapidity
and towards larger transverse momentum (i.e. $\langle (\Delta S)^2 \rangle$ is largest
in $\Delta y_{3}$ and $\Delta p_{T,5}$,
while being smallest in $\Delta y_{1}$, $\Delta y_{5}$, and $\Delta p_{T,1}$).
\begin{figure}[ht!]
\epsfig{file=fig_10a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_10b.eps,width=8.4cm,height=6.5cm}
\caption{MCE baryon number - strangeness correlation coefficient $\rho_{BS}$
in limited acceptance
windows, both primordial and final state.
({\it Left:}) transverse momentum bins $\Delta p_{T,i}$.
({\it Right:}) rapidity bins $\Delta y_i$.
Horizontal error bars indicate the width and position of the momentum bins
(And not an uncertainty!).
Vertical error bars indicate the statistical uncertainty of the extrapolation
of $8 \cdot 20$ Monte Carlo runs of $10^5$ events each.
The marker indicates the center of gravity of the corresponding bin.
The solid lines show the fully phase space integrated GCE result.
}
\label{lc_bs_mce}
\end{figure}
The $\Delta p_{T,i}$ dependence of the GCE covariance $\langle \Delta B \Delta S\rangle$
and the GCE correlation coefficient $\rho_{BS}$ in Fig.(\ref{CwL_corrCC}) is explained
by the arguments of the previous subsection. Varying contributions of hadrons of different
mass (and charge contents) to different parts of momentum space are responsible.
We now turn our attention to the extrapolation. MCE effects on the baryonic sector are felt most strongly in
momentum space segments in which the baryonic contribution is strong (e.g. see the evolution of the last bin
$\Delta p_{T,5}$ with $\lambda$
in Figs.(\ref{CwL_varCC},\ref{CwL_corrCC})). The correlation coefficient is
not as strongly affected, in general, by MCE effects.
In Fig.(\ref{lc_bs_mce}) we show the results of the extrapolation to the MCE limit of
the baryon number-strangeness correlation coefficient $\rho_{BS}$ in acceptance bins
$\Delta p_{T,i}$ and $\Delta y_i$, both primordial and final state.
MCE values are closer to each other than the corresponding GCE values, Fig.(\ref{lc_bs_0000}).
The influence of globally applied conservation laws on charge correlations is
less strong than for the multiplicity fluctuations
and correlations discussed in the next section.
\section{Momentum Space Dependence of Multiplicity Fluctuations and Correlations}
\label{Sec_MultFluc}
Multiplicity fluctuations and correlations are qualitatively affected by the
choice of ensemble and are directly sensitive to the fraction of the system observed. For vanishing
size of one's acceptance window, one would lose all information on how the
multiplicities of any two distinct groups $N_i$ and $N_j$ of particles are correlated, and
measure $\rho_{ij} =0$. This information, on the other hand, is to some extent
preserved in $\rho_{BS}$, $\rho_{BQ}$, and $\rho_{SQ}$, i.e. the way in which quantum
numbers are correlated, provided that at least
occasionally a particle is detected during an experiment.
We first sample the same GCE system that we discussed in the previous sections,
and consider the effects of resonance decay. Next the joint distributions of
positively and negatively charged particles in momentum bins $\Delta p_{T,i}$
and $\Delta y_i$ are constructed. Then we, in turn, extrapolate the GCE
primordial and final state results on the scaled variance $\omega$, Eq.(\ref{omega}),
and the correlation coefficient $\rho$, Eq.(\ref{rho}), to the MCE limit.
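For concreteness, the two observables can be estimated from event-wise multiplicity lists with plain sample moments, $\omega = \langle (\Delta N)^2\rangle / \langle N \rangle$ and $\rho = \langle \Delta N_1 \Delta N_2 \rangle / (\sigma_1 \sigma_2)$ (a sketch of the estimators as we read Eqs.(\ref{omega}) and (\ref{rho})):

```python
def scaled_variance(n):
    """omega = <(Delta N)^2> / <N> for a list of per-event multiplicities."""
    mean = sum(n) / len(n)
    var = sum((x - mean) ** 2 for x in n) / len(n)
    return var / mean

def corr_coeff(n1, n2):
    """rho = <Delta N_1 Delta N_2> / (sigma_1 sigma_2), event-by-event."""
    m1 = sum(n1) / len(n1)
    m2 = sum(n2) / len(n2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(n1, n2)) / len(n1)
    v1 = sum((a - m1) ** 2 for a in n1) / len(n1)
    v2 = sum((b - m2) ** 2 for b in n2) / len(n2)
    return cov / (v1 * v2) ** 0.5

# perfectly correlated counts give rho = 1
nplus  = [3, 5, 4, 6, 2]
nminus = [3, 5, 4, 6, 2]
print(corr_coeff(nplus, nminus))  # 1.0
```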
\subsection{Grand Canonical Ensemble}
In Fig.(\ref{mult_pm_0000_omega}) we show the $\Delta p_{T,i}$ ({\it left}) and $\Delta y_i$
({\it right}) dependence of the GCE scaled variance $\omega_+$ of positively charged hadrons, both
primordial and final state. In the primordial Boltzmann case one finds no dependence
of multiplicity fluctuations on the position and size of the acceptance window.
The observed multiplicity distribution is, within error bars, a Poissonian
with scaled variance $\omega_+=1$. In fact, in the primordial GCE
Boltzmann case any selection of particles has $\omega = 1$.
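This is the standard result that a binomial thinning of a Poissonian is again Poissonian, so any momentum-independent selection keeps $\omega = 1$; a quick numerical check (our sketch, with arbitrary mean and bin probability):

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
lam, q, events = 10.0, 0.2, 50_000   # q: chance to fall into the chosen bin

counts = []
for _ in range(events):
    n = poisson(lam, rng)                                   # primordial N
    counts.append(sum(rng.random() < q for _ in range(n)))  # random subset

mean = sum(counts) / events
var = sum((c - mean) ** 2 for c in counts) / events
print(round(var / mean, 2))  # close to 1.0: the thinned sample stays Poissonian
```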
\begin{figure}[ht!]
\epsfig{file=fig_11a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_11b.eps,width=8.4cm,height=6.5cm}
\caption{GCE scaled variance $\omega_+$ of multiplicity fluctuations of positively
charged hadrons, both primordial and final state,
in transverse momentum bins $\Delta p_{T,i}$ ({\it left}) and rapidity bins $\Delta y_i$ ({\it right}).
Horizontal error bars indicate the width and position of the momentum bins
(And not an uncertainty!).
Vertical error bars indicate the statistical uncertainty of $20$ Monte Carlo runs
of $2\cdot 10^5$ events each.
The markers indicate the center of gravity of the corresponding bin.
The solid line indicates the final state acceptance scaling estimate.
}
\label{mult_pm_0000_omega}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_12a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_12b.eps,width=8.4cm,height=6.5cm}
\caption{GCE multiplicity correlations $\rho_{+-}$ between positively and negatively
charged hadrons, both primordial and final state, in transverse momentum
bins $\Delta p_{T,i}$ ({\it left}) and rapidity bins $\Delta y_i$ ({\it right}).
The rest as in Fig.(\ref{mult_pm_0000_omega}).}
\label{mult_pm_0000_rho}
\end{figure}
In Fig.(\ref{mult_pm_0000_rho}) we show the $\Delta p_{T,i}$ ({\it left}) and $\Delta y_i$
({\it right}) dependence of the GCE correlation coefficient $\rho_{+-}$ between positively and
negatively charged hadrons, both primordial and final state. In the primordial Boltzmann
case one finds also no dependence of multiplicity correlations on the position and size
of the acceptance window. The observed joint multiplicity distribution is a product
of two Poissonians with correlation coefficient $\rho_{+-}=0$.
Resonance decay is the only source of correlation in an ideal GCE Boltzmann gas.
Neutral hadrons decaying into two hadrons of opposite electric charge are the strongest
contributors to the correlation coefficient $\rho_{+-}$. The chance that both
(oppositely charged) decay products are dropped into the same momentum space bin is
obviously highest at low transverse momentum (i.e. the correlation coefficient is
strongest in $\Delta p_{T,1}$).
The rapidity dependence is somewhat milder again, because heavier particles (parents)
are dominantly produced at mid-rapidity and spread their daughter
particles over a range in rapidity. One notes that the scaled variances and
correlation coefficients in the respective acceptance bins in
Figs.(\ref{mult_pm_0000_omega},\ref{mult_pm_0000_rho}) are generally larger
than the acceptance scaling procedure\footnote{For the acceptance scaling approximation
it is assumed that particles are randomly detected with a certain probability $q=0.2$,
independent of their momentum.} suggests, with the notable exception of
$\rho_{+-}(\Delta p_{T,5})$.
If one were now to construct a larger and larger number of momentum space bins of
equal average particle multiplicity, one would successively lose more and
more information about how the multiplicities of distinct groups of
particles are correlated.
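The acceptance scaling estimate of the footnote amounts to binomial thinning with a momentum-independent probability $q$, for which $\omega_{acc} = 1 - q + q\,\omega_{4\pi}$ holds exactly. A toy check with a pair-produced source, for which $\omega_{4\pi} = 2$ (parameters arbitrary, illustration only):

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(5)
q, events = 0.2, 100_000

counts = []
for _ in range(events):
    n = 2 * poisson(5.0, rng)     # pair-produced source: omega_4pi = 2
    counts.append(sum(rng.random() < q for _ in range(n)))

mean = sum(counts) / events
omega_acc = sum((c - mean) ** 2 for c in counts) / events / mean
print(round(omega_acc, 2))        # close to 1 - q + q * 2 = 1.2
```

Momentum-dependent correlations, as carried by resonance decay products, violate the momentum-independence assumption and can push the measured values above (or, as for $\rho_{+-}(\Delta p_{T,5})$, below) this estimate.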
There is a simple relation connecting the scaled variance of the fluctuations of all charged
hadrons $\omega_{\pm}$ to the fluctuations of only positively charged particles
$\omega_+$ via the correlation coefficient $\rho_{+-}$ between positively and negatively
charged hadrons in a neutral system:
\begin{equation}
\omega_{\pm} ~=~ \omega_+~\left(1~+~\rho_{+-}\right).
\end{equation}
We, therefore, find the effect of resonance decay on the $\Delta p_{T,i}$
dependence of $\omega_{\pm}$ to be considerably stronger than on that of $\omega_+$,
and generally $\omega_{\pm} > \omega_+$, as the correlation coefficient
$\rho_{+-}$ remains positive in the final state GCE. Compared to this, the final state
values of $\omega_{\pm}$, $\omega_+$ and $\rho_{+-}$ remain rather flat
with $\Delta y_{i}$ in the GCE.
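The relation above is easily verified numerically for a symmetric system. Here is a toy in which $N_+$ and $N_-$ share a common Poisson component, mimicking accepted oppositely charged decay pairs (parameters arbitrary, illustration only, not our hadron resonance gas sample):

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(11)
events = 50_000
nplus, nminus = [], []
for _ in range(events):
    pairs = poisson(3.0, rng)   # shared component: both daughters accepted
    nplus.append(poisson(7.0, rng) + pairs)
    nminus.append(poisson(7.0, rng) + pairs)

mp, mm = sum(nplus) / events, sum(nminus) / events
vp = sum((x - mp) ** 2 for x in nplus) / events
vm = sum((x - mm) ** 2 for x in nminus) / events
cov = sum((x - mp) * (y - mm) for x, y in zip(nplus, nminus)) / events

omega_p = vp / mp
rho = cov / math.sqrt(vp * vm)

ncharged = [p + m for p, m in zip(nplus, nminus)]
mc = sum(ncharged) / events
omega_pm = sum((x - mc) ** 2 for x in ncharged) / events / mc
print(round(omega_pm, 3), round(omega_p * (1 + rho), 3))  # nearly equal
```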
\subsection{Extrapolating to the MCE}
In the very same way that we extrapolated fully phase space integrated extensive quantities
to the MCE limit in Section~\ref{Sec_ExtraMCE}, we now extrapolate multiplicity fluctuations
$\omega_+$ and correlations $\rho_{+-}$ in transverse momentum bins $\Delta p_{T,i}$
and rapidity bins $\Delta y_i$ for a hadron resonance gas from the GCE $(\lambda =0)$ to the MCE
($\lambda \rightarrow 1$).
Analytical primordial MCE results are available in the infinite volume
approximation \cite{acc,baseline}.
We, hence, have some guidance to further assess the accuracy of the extrapolation
scheme. For final state fluctuations and correlations in limited acceptance,
on the other hand, no analytical results are available.
Mean values of particle numbers of positively charged hadrons $\langle N_+ \rangle$
and negatively charged hadrons $\langle N_- \rangle$ in the respective acceptance
bins, defined in Table~\ref{accbins}, remain constant as $\lambda$ goes from~$0$~to~$1$,
while the variances $\langle ( \Delta N_+)^2 \rangle$ and
$\langle ( \Delta N_-)^2 \rangle$, and covariance $\langle \Delta N_+ \Delta N_- \rangle$
converge linearly to their respective MCE limits. The correlation coefficient $\rho_{+-}$
between positively and negatively charged hadrons, on the other hand, will not approach
its MCE value linearly, as discussed in Section~\ref{Sec_ExtraMCE}.
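The extrapolation recipe can be sketched as follows: fit straight lines to the variances and the covariance as functions of $\lambda$, evaluate them at $\lambda = 1$, and only then form $\rho_{+-}$ from the extrapolated moments. The numbers below are made-up linear toy data, not our Monte Carlo results:

```python
def linear_extrapolate(lams, values, target=1.0):
    """Least-squares straight line through (lambda, value) points,
    evaluated at lambda = target (the MCE limit is target = 1)."""
    n = len(lams)
    mx, my = sum(lams) / n, sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lams, values))
             / sum((x - mx) ** 2 for x in lams))
    return my + slope * (target - mx)

# made-up stand-ins for the sampled moments at the 8 lambda values
lams  = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]
var_p = [10.0 - 2.0 * l for l in lams]   # Var(N+): converges linearly
var_m = [10.0 - 2.0 * l for l in lams]   # Var(N-)
cov   = [1.0 * l for l in lams]          # Cov(N+, N-)

v_p = linear_extrapolate(lams, var_p)    # -> 8.0
v_m = linear_extrapolate(lams, var_m)    # -> 8.0
c   = linear_extrapolate(lams, cov)      # -> 1.0
# rho_{+-} is formed from the extrapolated moments; it is NOT itself linear
rho_mce = c / (v_p * v_m) ** 0.5
print(round(rho_mce, 3))  # 0.125
```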
\subsubsection{Primordial}
In Fig.(\ref{conv_lambda_omega_prim}) we show the primordial scaled variance $\omega_+$
of positively charged hadrons in transverse momentum bins $ \Delta p_{T,i}$ ({\it left})
and rapidity bins $\Delta y_i$ ({\it right}) as a function of the size of the bath
$\lambda= V_1/V_g$, while in Fig.(\ref{conv_lambda_rho_prim}) we show the dependence of
the primordial correlation coefficient $\rho_{+-}$ between positively and negatively
charged hadrons in transverse momentum bins $\Delta p_{T,i}$ ({\it left}) and rapidity
bins $\Delta y_i$ ({\it right}) on $\lambda$.
The results of $8 \cdot 20$ Monte Carlo runs of $2 \cdot 10^5$ events each are summarized in
Table \ref{accbins_Mult_prim}. The system sampled was assumed to be neutral
$\mu_j = (0,0,0)$ and static $u_{\mu} = (1,0,0,0)$ with local
temperature $\beta^{-1} = 0.160~\mathrm{GeV}$ and a system volume of $V_1 = 2000~\mathrm{fm}^3$.
8 different values of $\lambda$ have been studied.
The last marker $(\lambda = 1)$ denotes the result of the extrapolation.
Only primordial hadrons are analyzed.
Values for both $\Delta p_{T,i}$ and $\Delta y_i$ bins are listed.
Analytical numbers are calculated according to the method
developed in \cite{acc,baseline}, using the acceptance bins defined in Table~\ref{accbins},
and are shown for comparison.
The effects of energy-momentum and charge conservation on primordial
multiplicity fluctuations and correlations in finite acceptance have been
discussed in \cite{acc,baseline}. A brief summary follows.
\begin{figure}[ht!]
\epsfig{file=fig_13a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_13b.eps,width=8.4cm,height=6.5cm}
\caption{Evolution of the primordial scaled variance $\omega_+$ of positively charged hadrons
with the Monte Carlo parameter $\lambda = V_1/V_g$ for transverse momentum
bins $\Delta p_{T,i}$ ({\it left}) and for rapidity bins $\Delta y_i$ ({\it right}).
The solid lines show an analytic extrapolation from GCE results ($\lambda =0$)
to the MCE limit ($\lambda \rightarrow 1$).
Each marker and its error bar except the last represents the result of $20$ Monte Carlo runs
of $2 \cdot 10^5$ events.
$8$ different equally spaced values of $\lambda$ have been investigated.
The last marker denotes the result of the extrapolation.
}
\label{conv_lambda_omega_prim}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_14a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_14b.eps,width=8.4cm,height=6.5cm}
\caption{Evolution of the primordial correlation coefficient $\rho_{+-}$ between
positively and negatively charged hadrons with the Monte Carlo parameter
$\lambda = V_1/V_g$ for transverse momentum bins $\Delta p_{T,i}$ ({\it left}) and
for rapidity bins $\Delta y_i$ ({\it right}).
The rest as in Fig.(\ref{conv_lambda_omega_prim}).}
\label{conv_lambda_rho_prim}
\end{figure}
Let us first attend to fully phase space integrated results. The scaled variance
of multiplicity fluctuations is lowest in the MCE due to the requirement of exact energy
and charge conservation, somewhat larger in the CE, and largest
in the GCE, as now all constraints on the microstates of the system have been
dropped~\cite{MCEvsData,Res,clt}.
The fully phase space integrated MCE and CE correlation coefficients between oppositely
charged particles are rather close to 1. Doubly charged particles
allow for mild deviation, as also the $\Delta^{++}$ resonance is counted
as only one particle.
The transverse momentum dependence can be understood as follows:
a change in particle number at high transverse momentum involves
a large amount of energy. That is, in order to balance the energy budget, one
needs to create (or annihilate) either a lighter particle with more kinetic energy, or two
particles at lower $p_T$. This leads to suppressed multiplicity fluctuations in
high $\Delta p_{T,i}$ bins compared to low $\Delta p_{T,i}$ bins. By the same argument, it seems
favorable, due to the constraint of energy and charge conservation, to balance
electric charge, by creating (or annihilating) pairs of oppositely charged particles,
predominantly in lower $\Delta p_{T,i}$ bins, while allowing for a more uncorrelated
multiplicity distribution, i.e. also larger net-charge ($\delta Q = N_+-N_-$) fluctuations,
in higher $\Delta p_{T,i}$ bins.
For the rapidity dependence similar arguments hold. Here, however, the strongest
role is played by longitudinal momentum conservation. A change in particle number
at high $y$ involves now, in addition to a large amount of energy, a large momentum
$p_z$ to be balanced. The constraints of global $P_z$ conservation are, hence, felt
least severely around $|y| \sim 0$, and it becomes favorable to balance charge
predominantly at mid-rapidity ($\rho_{+-}$ larger) and allow for stronger
multiplicity fluctuations ($\omega_+$ larger) compared to forward and backward
rapidity bins.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||}\hline
~primordial~ & $\Delta p_{T,1}$ & $\Delta p_{T,2}$ & $\Delta p_{T,3}$ & $\Delta p_{T,4}$
& $\Delta p_{T,5}$ \\
\hline
~$\omega^{gce}_+$ ~&~ $1.000 \pm 0.002$ ~&~
$1.000 \pm 0.002$ ~&~ $1.000 \pm 0.002$ ~&~
$1.000 \pm 0.002$ ~&~ $1.000 \pm 0.002$~ \\
~$\omega^{mce}_+$ ~&~ $0.889 \pm 0.007$ ~&~
$0.880 \pm 0.007$ ~&~ $0.869 \pm 0.007$ ~&~
$0.850 \pm 0.006$ ~&~ $0.798 \pm 0.007$~ \\
~$\omega^{mce,c}_+$ ~&~ $0.8886$ ~&~ $0.8802$ ~&~ $0.8682$ ~&~ $0.8489$ ~&~ $0.7980$~ \\
\hline
~$\rho^{gce}_{+-}$ ~&~ $0.000 \pm 0.002$ ~&~
$-0.000 \pm 0.002$ ~&~ $-0.000 \pm 0.002$ ~&~
$0.000 \pm 0.002$ ~&~ $0.000 \pm 0.001$~ \\
~$\rho^{mce}_{+-}$ ~&~ $0.094 \pm 0.005$ ~&~
$0.085 \pm 0.006$ ~&~ $0.072 \pm 0.006$ ~&~
$0.056 \pm 0.006$ ~&~ $0.003 \pm 0.005$~ \\
~$\rho^{mce,c}_{+-}$ ~&~ $0.0935$ ~&~ $0.0844$ ~&~ $0.0730$ ~&~ $0.0554$ ~&~ $0.0040$~ \\
\hline \hline
~primordial~ & $\Delta y_1$ & $\Delta y_2$ & $\Delta y_3$ & $\Delta y_4$
& $\Delta y_5$ \\
\hline
~$\omega^{gce}_+$ ~&~ $1.000 \pm 0.002$ ~&~
$1.000 \pm 0.002$ ~&~ $1.000 \pm 0.003$ ~&~
$1.000 \pm 0.002$ ~&~ $1.000 \pm 0.002$~ \\
~$\omega^{mce}_+$ ~&~ $0.795 \pm 0.006$ ~&~
$0.835 \pm 0.007$ ~&~ $0.853 \pm 0.008$ ~&~
$0.834 \pm 0.006$ ~&~ $0.794 \pm 0.007$~ \\
~$\omega^{mce,c}_+$ ~&~ $0.7950$ ~&~ $0.8350$ ~&~ $0.8521$ ~&~ $0.8351$ ~&~ $0.7949$~ \\
\hline
~$\rho^{gce}_{+-}$ ~&~ $-0.000 \pm 0.001$ ~&~
$0.000 \pm 0.002$ ~&~ $0.001 \pm 0.002$ ~&~
$0.000 \pm 0.002$ ~&~ $-0.000 \pm 0.002$~ \\
~$\rho^{mce}_{+-}$ ~&~ $-0.013 \pm 0.005$ ~&~
$0.040 \pm 0.006$ ~&~ $0.061 \pm 0.006$ ~&~
$0.041 \pm 0.006$ ~&~ $-0.012 \pm 0.006$~ \\
~$\rho^{mce,c}_{+-}$ ~&~ $-0.0135$ ~&~ $0.0406$ ~&~ $0.0616$ ~&~ $0.0406$ ~&~ $-0.0135$~ \\
\hline
\end{tabular}
\caption{Summary of the primordial scaled variance $\omega_+$ of positively charged hadrons
and the correlation coefficient $\rho_{+-}$ between positively and negatively charged
hadrons in transverse momentum bins $\Delta p_{T,i}$ and rapidity bins $\Delta y_i$.
Both the GCE result ($\lambda = 0$) and the extrapolation to MCE ($\lambda = 1$)
are shown.
The uncertainty quoted corresponds to $20$ Monte Carlo runs of $2 \cdot 10^5$ events
(GCE) or is the result of the extrapolation (MCE).
Analytic MCE results $\omega^{mce,c}_+$ and $\rho^{mce,c}_{+-}$ are listed too.
}
\label{accbins_Mult_prim}
\end{center}
\end{table}
In a somewhat casual way one could say: events of a neutral hadron resonance gas
with values of extensive quantities $B$, $S$, $Q$, $E$ and $P_z$ in the vicinity
of $\langle \mathcal{Q}_1^l \rangle$ have a tendency to have similar numbers of positively
and negatively charged particles at low transverse momentum $p_T$ and rapidity $y$
and less strongly so at high $p_T$ and $|y|$.
The statistical error on the `data' points grows as $\lambda \rightarrow 1$, as can be seen
from Figs.(\ref{conv_lambda_omega_prim},\ref{conv_lambda_rho_prim}).
The extrapolation helps greatly to keep the statistical uncertainty on the MCE limit low,
as summarized in Table~\ref{accbins_Mult_prim}, and can be seen from a comparison of the last
two data points in Figs.(\ref{conv_lambda_omega_prim},\ref{conv_lambda_rho_prim}). The
last point and its error bar denote the result of a linear extrapolation of variances
and covariances, while the second to last data point and its error bar are the result
of $20$ Monte Carlo runs with $\lambda = 0.875$.
The analytical MCE values are well within error bars
of extrapolated Monte Carlo results, and
agree surprisingly well, given the large number of ``conserved'' quantities (5) and
a relatively small sample size of $8 \cdot 20 \cdot 2 \cdot 10^5 = 3.2 \cdot 10^7$ events.
In a sample-reject type of approach this sample size would yield a substantially larger
statistical error, as only events with exact values of extensive quantities are
kept for the analysis. As the system size is increased, a sample-reject formalism, hence,
becomes increasingly inefficient, while the extrapolation method still yields
good results. For a further discussion see Appendix~\ref{App_ConvStudy}.
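The inefficiency of sample-reject can be illustrated with a single conserved charge: for a neutral system the probability to hit $N_+ = N_-$ exactly falls roughly like $V^{-1/2}$, and one such factor appears for every conserved quantity. A toy estimate (Poisson means stand in for a growing system volume; illustration only):

```python
import math, random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(13)
trials = 20_000
fracs = []
for lam in (10.0, 40.0, 160.0):   # growing mean = stand-in for growing volume
    kept = sum(poisson(lam, rng) == poisson(lam, rng) for _ in range(trials))
    fracs.append(kept / trials)
print([round(f, 3) for f in fracs])  # accept fraction falls ~ 1/sqrt(lam)
```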
\subsubsection{Final State}
We now attend to the extrapolation of final state multiplicity
fluctuations and correlations to the MCE limit. An independent Monte Carlo run for the same
physical system was done, but now with only stable final state particles `detected'.
In Fig.(\ref{conv_lambda_omega_final}) we show the final state scaled variance $\omega_+$
of positively charged hadrons in transverse momentum bins $\Delta p_{T,i}$ ({\it left})
and rapidity bins $\Delta y_i$ ({\it right}) as a function of $\lambda$, while in
Fig.(\ref{conv_lambda_rho_final}) we show the dependence of the final state correlation
coefficient $\rho_{+-}$ between positively and negatively charged hadrons in transverse
momentum bins $\Delta p_{T,i}$ ({\it left}) and rapidity bins $\Delta y_i$ ({\it right}) on
the size of the bath $\lambda = V_1 / V_g$.
\begin{figure}[ht!]
\epsfig{file=fig_15a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_15b.eps,width=8.4cm,height=6.5cm}
\caption{Evolution of the final state scaled variance $\omega_+$ of positively charged
hadrons with the Monte Carlo parameter $\lambda = V_1/V_g$ for transverse momentum
bins $\Delta p_{T,i}$ ({\it left}) and for rapidity bins $\Delta y_i$ ({\it right}).
The solid lines show an analytic extrapolation from GCE results ($\lambda =0$)
to the MCE limit ($\lambda \rightarrow 1$).
Each marker except the last represents the result of $20$ Monte Carlo
runs of $2 \cdot 10^5$ events.
$8$~different equally spaced values of $\lambda$ have been investigated.
The last marker denotes the result of the extrapolation.
}
\label{conv_lambda_omega_final}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_16a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_16b.eps,width=8.4cm,height=6.5cm}
\caption{Evolution of the final state correlation coefficient $\rho_{+-}$ between
positively and negatively charged hadrons with the Monte Carlo parameter
$\lambda = V_1/V_g$ for transverse momentum bins $\Delta p_{T,i}$ ({\it left}) and
for rapidity bins $\Delta y_i$ ({\it right}).
The rest as in Fig.(\ref{conv_lambda_omega_final}).}
\label{conv_lambda_rho_final}
\end{figure}
The $\Delta p_{T,i}$ and $\Delta y_i$ dependence on $\lambda$ of the final state MCE
scaled variance $\omega_+$ is qualitatively similar to that of the primordial
versions, Fig.(\ref{conv_lambda_omega_prim}), and is essentially
also explained by the arguments
of the previous section. The effects of charge and energy-momentum conservation work in pretty
much the same way as before, and it still seems favorable to have events with
wider multiplicity distributions at low $p_T$ and low $y$, and narrower distributions at
larger $p_T$ and larger $|y|$. The dependence of the final state
correlation coefficients $\rho_{+-}$ on $\lambda$, Fig.(\ref{conv_lambda_rho_final}),
is somewhat different from the primordial case, Fig.(\ref{conv_lambda_rho_prim}).
However, in the MCE limit, events still tend to have more similar numbers of
oppositely charged particles at low $p_T$ and low $y$, than at large $p_T$ and large $|y|$.
The effects of resonance decay are qualitatively different in the MCE, CE, and GCE.
Let us again first attend to fully phase space integrated multiplicity fluctuations
discussed in~\cite{MCEvsData,Res}.
The final state scaled variance increases in the GCE and CE compared to the primordial scaled
variance. Multiplicity fluctuations of neutral mesons remain unconstrained by
conservation laws. However, they often decay into oppositely charged particles, which increases
multiplicity fluctuations of pions, for instance. In the MCE, due to the constraint of
energy conservation, the event-by-event fluctuations of primordial pions are correlated to the
event-by-event fluctuations of, in general, primordial parent particles, and
$\omega^{final}< \omega^{prim}$ is possible in the MCE.
\begin{figure}[ht!]
\epsfig{file=fig_17a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_17b.eps,width=8.4cm,height=6.5cm}
\caption{MCE scaled variance $\omega_+$ of multiplicity fluctuations of positively
charged hadrons, both primordial and final state, in transverse momentum bins
$\Delta p_{T,i}$ ({\it left}) and rapidity bins $\Delta y_i$ ({\it right}).
Horizontal error bars indicate the width and position of the momentum bins
(And not an uncertainty!).
Vertical error bars indicate the statistical uncertainty quoted in
Table~\ref{accbins_Mult_final}.
The markers indicate the center of gravity of the corresponding bin.
The solid and the dashed lines show final state and primordial acceptance
scaling estimates respectively.
}
\label{mult_pm_MCE_omega}
\end{figure}
\begin{figure}[ht!]
\epsfig{file=fig_18a.eps,width=8.4cm,height=6.5cm}
\epsfig{file=fig_18b.eps,width=8.4cm,height=6.5cm}
\caption{MCE multiplicity correlation coefficient $\rho_{+-}$ between positively
and negatively charged hadrons, both primordial and final state, in transverse
momentum bins $\Delta p_{T,i}$ ({\it left}) and rapidity bins $\Delta y_i$ ({\it right}).
The rest as in Fig.(\ref{mult_pm_MCE_omega}).
}
\label{mult_pm_MCE_rho}
\end{figure}
In Fig.(\ref{mult_pm_MCE_omega}) and Fig.(\ref{mult_pm_MCE_rho}) we compare
the final state $\Delta p_{T,i}$ ({\it left})
and $\Delta y_i$ ({\it right}) dependence of the MCE scaled variance $\omega_+$ and
the MCE correlation coefficient $\rho_{+-}$ respectively to
their primordial counterparts. The results of $8 \cdot 20$ Monte
Carlo runs of $2\cdot 10^5$ events each for a static and neutral hadron resonance gas
with $T=0.160~\mathrm{GeV}$ are summarized in Table~\ref{accbins_Mult_final}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c||c|c|c|c|c||}\hline
~final state~& $\Delta p_{T,1}$ & $\Delta p_{T,2}$ & $\Delta p_{T,3}$ & $\Delta p_{T,4}$
& $\Delta p_{T,5}$ \\
\hline
~$\omega^{gce}_+$ ~&~ $1.031 \pm 0.002$ ~&~
$1.026 \pm 0.002$ ~&~ $1.020 \pm 0.002$ ~&~
$1.015 \pm 0.002$ ~&~ $1.010 \pm 0.002$~ \\
~$\omega^{mce}_+$ ~&~ $0.904 \pm 0.007$ ~&~
$0.884 \pm 0.007$ ~&~ $0.872 \pm 0.007$ ~&~
$0.847 \pm 0.007$ ~&~ $0.778 \pm 0.006$~ \\
\hline
~$\rho^{gce}_{+-}$ ~&~ $0.163 \pm 0.001$ ~&~
$0.107 \pm 0.001$ ~&~ $0.109 \pm 0.001$ ~&~
$0.075 \pm 0.002$ ~&~ $0.052 \pm 0.002$~ \\
~$\rho^{mce}_{+-}$ ~&~ $0.143 \pm 0.005$ ~&~
$0.088 \pm 0.005$ ~&~ $0.090 \pm 0.005$ ~&~
$0.049 \pm 0.006$ ~&~ $-0.010 \pm 0.006$~ \\
\hline \hline
~final state~& $\Delta y_1$ & $\Delta y_2$ & $\Delta y_3$ & $\Delta y_4$
& $\Delta y_5$ \\
\hline
~$\omega^{gce}_+$ ~&~ $1.017 \pm 0.002$ ~&~
$1.023 \pm 0.002$ ~&~ $1.024 \pm 0.002$ ~&~
$1.023 \pm 0.003$ ~&~ $1.017 \pm 0.002$~ \\
~$\omega^{mce}_+$ ~&~ $0.771 \pm 0.007$ ~&~
$0.840 \pm 0.006$ ~&~ $0.859 \pm 0.007$ ~&~
$0.839 \pm 0.007$ ~&~ $0.770 \pm 0.006$~ \\
\hline
~$\rho^{gce}_{+-}$ ~&~ $0.100 \pm 0.001$ ~&~
$0.116 \pm 0.001$ ~&~ $0.115 \pm 0.002$ ~&~
$0.115 \pm 0.002$ ~&~ $0.100 \pm 0.001$~ \\
~$\rho^{mce}_{+-}$ ~&~ $-0.027 \pm 0.005$ ~&~
$0.069 \pm 0.005$ ~&~ $0.092 \pm 0.006$ ~&~
$0.069 \pm 0.006$ ~&~ $-0.027 \pm 0.005$~ \\
\hline
\end{tabular}
\caption{Summary of the final state scaled variance $\omega_+$ of positively charged hadrons
and the correlation coefficient $\rho_{+-}$ between positively and negatively charged
hadrons in transverse momentum bins $\Delta p_{T,i}$ and rapidity bins $\Delta y_i$.
Both the GCE result ($\lambda = 0$) and the extrapolation to the MCE ($\lambda = 1$) are shown.
The uncertainty quoted corresponds to $20$ Monte Carlo runs of $2 \cdot 10^5$ events
(GCE) or is the result of the extrapolation (MCE).
}
\label{accbins_Mult_final}
\end{center}
\end{table}
To summarize Figs.(\ref{mult_pm_MCE_omega},\ref{mult_pm_MCE_rho}): resonance decay and
(energy) conservation laws work in the same direction as far as the transverse momentum
dependence of the scaled variance $\omega_+$ and the correlation coefficient
$\rho_{+-}$ is concerned. Both effects lead to larger multiplicity fluctuations and
a stronger correlation between the
multiplicities of oppositely charged particles in the low $p_T$ region than
in the high $p_T$ domain.
The MCE $\Delta y_i$ dependence of $\omega_+$ and $\rho_{+-}$, by contrast,
is dominated by global conservation of $P_z$. Resonance decay effects, see
Figs.(\ref{mult_pm_0000_omega},\ref{mult_pm_0000_rho}), are distributed more uniformly
across rapidity than across transverse momentum.
Again, we find the scaled variance of all charged particles to be larger than that
of positively charged hadrons alone, $\omega_{\pm} > \omega_+$, except
when $\rho_{+-} < 0$, i.e.\ when the multiplicities of oppositely charged particles
are anti-correlated, as for instance in $\Delta p_{T,5}$, $\Delta y_{1}$, and $\Delta y_{5}$.
By contrast, $\omega_{\pm}$ exceeds unity only narrowly in the lowest transverse
momentum bin $\Delta p_{T,1}$.
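The interplay between $\omega_{\pm}$, $\omega_+$ and $\rho_{+-}$ invoked here follows from an elementary variance decomposition (our addition, not spelled out in the text); for a charge-symmetric system with $\langle N_+ \rangle = \langle N_- \rangle$ and $\omega_+ = \omega_-$,

```latex
\begin{equation*}
\omega_{\pm}
= \frac{\mathrm{Var}(N_+ + N_-)}{\langle N_+ + N_- \rangle}
= \frac{\omega_+ \langle N_+ \rangle + \omega_- \langle N_- \rangle
      + 2 \rho_{+-} \sqrt{\omega_+ \omega_- \langle N_+ \rangle \langle N_- \rangle}}
       {\langle N_+ \rangle + \langle N_- \rangle}
= \omega_+ \left( 1 + \rho_{+-} \right) ,
\end{equation*}
```

so $\omega_{\pm} > \omega_+$ precisely when $\rho_{+-} > 0$, and $\omega_{\pm} < \omega_+$ in the anti-correlated bins.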
The qualitative picture presented in Fig.(\ref{mult_pm_MCE_omega}) can be
compared to a similar analysis of UrQMD transport simulation data \cite{beni_urqmd},
or to recently published NA49 data on multiplicity fluctuations in limited momentum
bins \cite{beni_data}. We do not claim, however, that the effects discussed above are
the sole ones responsible for the qualitative agreement with either of the two.
\section{Summary}
\label{Sec_Summary}
We have presented a recipe for a thermal model Monte Carlo event generator
capable of extrapolating fluctuation and correlation observables for
Boltzmann systems of large volume from their GCE values to the MCE limit.
Our approach has a strong advantage compared to analytical approaches or
standard microcanonical sample-and-reject Monte Carlo techniques,
in that it can handle resonance decays as well
as (very) large system sizes at the same time.
To introduce our scheme,
we have conceptually divided a microcanonical system into two subsystems.
These subsystems are assumed to be in equilibrium with each other,
and subject to the constraints of joint energy-momentum and charge conservation.
Particles are only measured in one subsystem, while the second subsystem
provides a thermodynamic bath.
By keeping the size of the first subsystem fixed, while varying the size of the second,
one can thus study the dependence of statistical properties of an ensemble
on the fraction of the system observed (i.e. assess their sensitivity
to globally applied conservation laws).
The ensembles generated are thermodynamically equivalent
in the sense that mean values in the observed subsystem remain unchanged
when the size of the bath is varied, provided the combined system is sufficiently large.
The Monte Carlo process can be divided into four steps.
In the first two steps primordial particle multiplicities for each species,
and momenta for each particle, are generated for each event
by sampling the grand canonical partition function.
In the third step resonance decay of unstable particles is performed.
Lastly the values of extensive quantities are
calculated for each event and a corresponding weight factor is assigned.
All events with the same set of extensive quantities hence still
have `a priori equal probabilities'. In the limit of an infinite bath, all events have
a weight equal to unity. In the opposite limit of a vanishing bath, only events with an exactly
specified set of extensive quantities have non-vanishing weight. In between,
we extrapolate in a controlled manner.
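The four steps can be sketched in a toy Monte Carlo. Everything below (a single particle species, exponential energies, and in particular the Gaussian form of the weight) is an illustrative assumption of ours, not the paper's actual weight factor $\mathcal{W}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy setup, single particle species: a GCE mean multiplicity and a
# Boltzmann-like energy per particle; all values are illustrative
MEAN_N, TEMP, N_EVENTS = 30.0, 0.16, 50_000

def generate_event():
    # steps 1 and 2: sample multiplicity and particle energies
    # grand-canonically (step 3, resonance decay, is omitted here)
    n = rng.poisson(MEAN_N)
    return n, rng.exponential(TEMP, size=n).sum()

events = [generate_event() for _ in range(N_EVENTS)]
n_arr = np.array([n for n, _ in events], dtype=float)
e_arr = np.array([e for _, e in events])
e_mean, e_var = e_arr.mean(), e_arr.var()

def weights(lam):
    # step 4: weight each event by the deviation of its total energy
    # from the ensemble mean; the Gaussian form used here is our
    # illustrative assumption, NOT the paper's weight factor W
    if lam == 0.0:
        return np.ones_like(e_arr)
    return np.exp(-lam / (1.0 - lam) * (e_arr - e_mean) ** 2 / (2.0 * e_var))

omegas = {}
for lam in (0.0, 0.5, 0.9):
    w = weights(lam)
    mean = np.average(n_arr, weights=w)
    omegas[lam] = np.average((n_arr - mean) ** 2, weights=w) / mean
    print(f"lambda = {lam:.1f}: omega = {omegas[lam]:.3f}")
```

Already in this crude sketch the weighting suppresses the multiplicity fluctuations as $\lambda \to 1$, mimicking the GCE-to-MCE extrapolation.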
The method remains rather efficient even for large volumes, which are inaccessible
to sample-and-reject procedures,
and it agrees well, where available,
with analytic asymptotic microcanonical solutions.
Given the success of the hadron resonance gas model in describing experimentally
measured average hadron yields, and its ability to reproduce
low temperature lattice susceptibilities,
the question arises as to whether fluctuation and correlation observables
are equally well described by it.
In particular, three effects can be cleanly discussed:
resonance decay, conservation laws, and limited acceptance effects.
Owing to the Monte Carlo nature of the approach, data can be analyzed in close relation to
experimental analysis techniques. The hadron resonance gas is an ideal testbed
for this type of study, being both simple and intuitive.
The statistical properties of a sample of hadron resonance gas events
show a systematic dependence on what part of the momentum distribution and
what fraction of the system is observed.
Two examples served to illustrate this: grand canonical charge-charge correlations,
and microcanonical multiplicity fluctuations and correlations.
In the case of charge-charge correlations, momentum space effects are caused by different
masses of hadrons and, hence, their varying contribution to different
parts of the momentum spectra.
Although microcanonical effects on the (co)variances of the joint
baryon number - strangeness - electric charge distribution are considerable, they remain weak
for the correlation coefficients between these quantum numbers.
In contrast to this, momentum space effects on multiplicity fluctuations and correlations
arise due to conservation laws. For an ideal primordial grand canonical ensemble
in the Boltzmann approximation (our starting point),
multiplicity distributions are just uncorrelated Poissonians, regardless of the
acceptance cuts applied, as particles are assumed to be produced independently.
The requirement of energy-momentum and charge conservation leads to suppressed
fluctuations and enhanced correlations between the multiplicities of two distinct
groups of particles at the `high momentum' end of the momentum spectrum, provided
some fraction of an isolated system is observed.
Resonance decay does not change these trends.
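The statement that uncorrelated Poissonians stay Poissonian under acceptance cuts can be checked in one line: binomially thinning a Poisson sample (each particle accepted independently with probability $p$) yields again a Poisson sample, so the scaled variance stays at unity. A minimal numerical check of our own:

```python
import numpy as np

rng = np.random.default_rng(2)
full = rng.poisson(25.0, size=300_000)     # primordial GCE multiplicities
accepted = rng.binomial(full, 0.3)         # each particle kept with p = 0.3

omega_full = full.var() / full.mean()
omega_acc = accepted.var() / accepted.mean()
print(omega_full, omega_acc)   # both consistent with the Poisson value 1
```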
The arguments on which the explanation of this particular dependence are based seem
general enough to hope that they might hold too in non-equilibrium systems, such as
real heavy ion data or theoretical transport simulations.
A direct comparison with experimental data seems problematic at the moment.
The static global thermal and chemical equilibrium assumption made here is
certainly insufficient. The model presented here is far from complete.
Several interesting aspects deserve attention:
the sampling of Fermi-Dirac or Bose-Einstein statistics,
to which the low transverse momentum region is particularly sensitive;
finite volume corrections, which become feasible
once a good approximation to $\mathcal{W}$ is available;
a closer study of the convergence properties (at fixed $\lambda$, and as a function of $\lambda$),
which points in the same direction;
the derivation of a thermodynamic potential for our ensembles, which is still outstanding;
more general forms of $\mathcal{W}$;
the coupling of two systems of different densities;
or a departure from the local equilibrium assumption altogether.
There are also several interesting things that the model could do in its present form.
Examples include mean transverse momentum fluctuations, correlation between
transverse momentum and particle number, or even 2 and 3 particle correlation functions.
This should be the subject of future work.
\begin{acknowledgments}
We would like to thank F.~Becattini, E.~Bratkovskaya, W.~Cassing, J.~Cleymans,
M.~Gazdzicki, M.~Gorenstein, J.~Manninen, J.~Randrup, and K.~Redlich for fruitful discussions.
Special thanks goes to W.~Broniowski for his contribution to the very idea
which started this project. The computational work was done on the CARMEN cluster
of the UCT physics department. We would also like to thank G.~de Vaux for
valuable help with many aspects of running the code.
\end{acknowledgments}
arXiv:0909.3462
\section{Introduction}
Many known families of black hole solutions possess a limit wherein the black hole horizon becomes degenerate,
i.e. where the surface gravity tends to zero; such black holes are called extremal. While extremal black holes
are not believed to be physically realized as macroscopic objects in nature, they are nevertheless highly interesting
from the theoretical viewpoint. Due to the limiting procedure, they are in some sense at the fringe of
the space of all black holes, and therefore possess special properties which make them easier to study
in various respects. For example, in string theory, the derivation of the Bekenstein-Hawking entropy of black holes from counting
microstates (see e.g.~\cite{david} for a review)
is best understood for extremal black holes. Furthermore, many black hole solutions
that have been constructed in the context of supergravity theories (see e.g.~\cite{Gauntlett1,Gauntlett2})
have supersymmetries, and are thus automatically extremal.
Many of the arguments related to the derivation of the black hole entropy---especially in the context of the
``Kerr-CFT correspondence''~\cite{guida,lu,compere,aze,Hartman,amsel}---actually only involve the spacetime geometry
in the immediate (actually infinitesimal) neighborhood of the black hole horizon. More precisely, by applying a suitable scaling process to the spacetime metric which in effect blows up this neighborhood, one can obtain in the limit a new spacetime metric, called a ``near horizon geometry.'' It is the near horizon geometry which enters many of the arguments pertaining to
the derivation of the black hole entropy.
The near horizon limit can be defined for any spacetime $(M,g)$ with a degenerate Killing horizon, $N$---not necessarily a black hole horizon. The construction runs as follows\footnote{
The general definition of a near-horizon limit was first considered
in the context of supergravity black holes in \cite{Reall03},
and in the context of extremal but not supersymmetric black holes
in \cite{crt06} for the static case and in \cite{klr} for the general case.
The concept of near-horizon geometry itself has appeared previously
in the literature, e.g., \cite{Hajicek} for $4$-dimensional vacuum case (also see \cite{lp03} for the isolated horizon case).
}.
First, recall that a spacetime with a degenerate Killing horizon by definition has a smooth, codimension one, null hypersurface $N$, and a Killing vector field $K$ whose orbits are tangent to $N$, and which on $N$ are tangent to affinely\footnote{For a non-degenerate horizon, the orbits on $N$ of $K$ would not be affinely parametrized.} parametrized null geodesics.
Furthermore, by assumption, there is a ``cross section'', $H$, of codimension one in $N$ with the property
that each orbit of $K$ on $N$ is isomorphic to ${\mathbb R}$ and intersects $H$ precisely once.
In the vicinity of $N$, one can then introduce
``Gaussian null coordinates'' $u,v,y^a$ as follows,
see e.g. \cite{MI83}.
First, we choose arbitrarily
local\footnote{Of course, it will take more than one patch to cover $H$, but
the fields $\gamma, \beta, \alpha$ on $H$ below in eq.~\eqref{gnc} are globally defined and
independent of the choice of coordinate systems.} coordinates $y^a$ on $H$, and we Lie-transport them along the flow of $K$ to other places on $N$,
denoting by $v$ the flow parameter. Then, at each point of $N$ we shoot off affinely parametrized null-geodesics and take $u$ to be the affine parameter along these null geodesics. The tangent vector $\partial/\partial u$
to these null geodesics is required to have unit inner product with $K = \partial/\partial v$ on $H$, and to be orthogonal
to the Lie-transported cross-section $H$. It can be shown that the metric then takes the Gaussian null form
\begin{equation}\label{gnc}
g = 2 {\rm d} v ({\rm d} u + u^2 \alpha \, {\rm d} v + u \beta_a \, {\rm d} y^a) + \gamma_{ab} \, {\rm d} y^a {\rm d} y^b \, ,
\end{equation}
where the function $\alpha$, the one-form $\beta = \beta_a \, {\rm d} y^a$, and the tensor field
$\gamma = \gamma_{ab} \, {\rm d} y^a {\rm d} y^b$ do not depend on $v$. The Killing horizon $N$ is
located at $u=0$, and the cross section $H$ at $u=v= 0$. The near horizon limit is now taken
by applying to $g$ the diffeomorphism $v \mapsto v/\epsilon, u \mapsto \epsilon u$ (leaving the other coordinates $y^a$ unchanged), and then taking $\epsilon \to 0$. The so-obtained metric looks
exactly like eq.~\eqref{gnc}, but with new metric functions obtained from the old ones by evaluating them at $u=0$. Thus, the fields $\alpha, \beta, \gamma$ of the near horizon metric neither depend on $v$ nor $u$, and
can therefore be viewed as fields on $H$. If the original spacetime with degenerate Killing horizon satisfied the
vacuum Einstein equation or the Einstein equation with a cosmological constant, then the near horizon limit does, too.
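Explicitly, the scaling diffeomorphism $v \mapsto v/\epsilon$, $u \mapsto \epsilon u$ acts on eq.~\eqref{gnc} as (a one-line computation added here for concreteness):

```latex
\begin{equation*}
g_\epsilon
= 2 \, {\rm d} v \left( {\rm d} u + u^2 \alpha(\epsilon u, y) \, {\rm d} v
  + u \, \beta_a(\epsilon u, y) \, {\rm d} y^a \right)
  + \gamma_{ab}(\epsilon u, y) \, {\rm d} y^a {\rm d} y^b
\;\xrightarrow{\;\epsilon \to 0\;}\;
2 \, {\rm d} v \left( {\rm d} u + u^2 \alpha(0, y) \, {\rm d} v
  + u \, \beta_a(0, y) \, {\rm d} y^a \right)
  + \gamma_{ab}(0, y) \, {\rm d} y^a {\rm d} y^b ,
\end{equation*}
```

i.e.\ the factors of $\epsilon$ cancel term by term, leaving the metric functions evaluated at $u = 0$.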
The near horizon limit is simpler than the original metric in the sense that it has more symmetries. For example, if the limit procedure is applied to the extremal Kerr metric in $D=4$ spacetime dimensions with symmetry group
${\mathbb R} \times U(1)$, then---as observed\footnote{By construction, the near horizon geometry has the Killing fields
$\partial/\partial v$ and $u \partial/\partial u - v \partial/\partial v$, which
generate a two-parameter symmetry group. The non-trivial observation by~\cite{bardeen} is that this actually
gets enhanced to the three-parameter group $O(2,1)$.}
first by~\cite{bardeen} (see also \cite{BW,Carter})---the near horizon
metric has an enhanced symmetry
group of $O(2,1) \times U(1)$. The first factor of this group is related to an $AdS_2$-factor in the metric. A similar phenomenon occurs for stationary extremal black holes in higher dimensions with a
comparable amount of symmetry: As proved in~\cite{klr}, if $(M,g)$ is a $D$-dimensional
stationary extremal black hole with isometry group\footnote{The ``rigidity theorem''~\cite{hi}
guarantees that a stationary extremal black hole has a symmetry group that contains ${\mathbb R} \times U(1)$,
i.e. guarantees only one axial Killing field in addition to the assumed timelike Killing field. Therefore,
in $D \ge 5$, assuming a factor of $U(1)^{D-3}$ is a non-trivial restriction, while it is actually a
consequence of the rigidity theorem in $D=4$.} ${\mathbb R} \times U(1)^{D-3}$ and compact horizon cross section $H$,
then the near horizon limit has the enhanced symmetry group $O(2,1) \times U(1)^{D-3}$.
In $D \ge 5$ dimensions, the most general stationary extremal black hole solution
with symmetry group ${\mathbb R} \times U(1)^{D-3}$ is not known at present,
so one cannot explicitly perform the corresponding near horizon limits either.
Nevertheless, because
the near horizon metric has an even higher degree of symmetry---the metric functions essentially only depend non-trivially
on one coordinate---one can try to classify them directly.
This was done for the vacuum Einstein equations
in dimensions $D=4,5$ by~\cite{kl}, where a list of all near horizon geometries, i.e. metrics of the form~\eqref{gnc} with metric functions $\alpha, \beta, \gamma$ independent of $u,v$, was obtained. It is a priori
far from obvious that {\em all} these metrics are the near horizon limits of actual globally defined
black holes. Remarkably though,~\cite{kl} could prove that the metrics found are indeed the limits of the extremal black ring~\cite{Emparan}, boosted Kerr string, Myers-Perry~\cite{Myers}, and the Kaluza-Klein black holes~\cite{ras,lar}, respectively.
In this paper, we give a classification of all possible vacuum near horizon geometries with
symmetry group $O(2,1) \times U(1)^{D-3}$ in arbitrary dimensions $D$.
The method of analysis used in~\cite{kl} seems restricted to $D=4,5$, so we here use a different method based on a matrix formulation of the vacuum Einstein equations that works in arbitrary dimensions. The metrics that we find come in three families
depending on the topology of $H$, which can be either $S^3 \times T^{D-5}, S^2 \times T^{D-4}$ or $L(p,q) \times T^{D-5}$,
where $L(p,q)$ is a Lens space.
The metrics in each of these families depend on
$(D-2)(D-3)/2$ real parameters;
they are given explicitly in Thm.~1 below.
When specialized to $D=5$, our
first two families of metrics must coincide with those previously found in~\cite{kl}, whereas the last family
is shown to arise from the first one by taking quotients (this last property generalizes to
arbitrary $D$).
In all dimensions, examples for
near horizon geometries with topology $S^2\times T^{D-4}$ are
provided by the near horizon limit of the ``boosted Kerr-branes'',
see e.g. \cite{klr,fklr08}.
This family of metrics depends on $(D-2)(D-3)/2$ real parameters
and it is conceivable that all near horizon geometries of
this topology can be obtained in this way.
The analogous construction is also possible
when the horizon topology is $S^3 \times T^{D-5}$.
However, in this case, the resulting metrics depend on fewer
parameters.
We should also point out that there are vacuum near-horizon
geometries that possess fewer symmetries than ${\mathbb R} \times U(1)^{D-3}$.
For example, the near-horizon geometry of the extremal Myers-Perry
black holes, constructed explicitly
in \cite{fklr08}, has the smaller symmetry group,
${\mathbb R} \times U(1)^{[(D-1)/2]}$.
In this paper we are not going to classify such less symmetric
vacuum near-horizon geometries.
Also, we are not going to consider the case of a non-vanishing
cosmological constant, since, as far as we are aware, there has
appeared no successful reduction of Einstein gravity with
a cosmological constant to a suitable nonlinear sigma model, which is,
however, required in our approach.
The same remark would apply to other theories with different
matter fields. On the other hand, we expect our approach to be
applicable to theories that can be reduced to suitable sigma-models.
For $D=5$ minimal gauged and ungauged supergravity, the near horizon
geometries were classified in \cite{klr2,Reall03} using a method different
from ours. Also for $D=4$ Einstein-Maxwell theory with a cosmological
constant, see e.g. \cite{kl2}.
\section{Geometrical coordinates}
The aim of this paper is to classify the near horizon geometries in $D$ dimensions. As explained in the
previous section, by this we mean the problem of finding all metrics $g$ of the form~\eqref{gnc} with vanishing Ricci tensor
(i.e. vacuum metrics), where $\gamma = \gamma_{ab} {\rm d} y^a {\rm d} y^b$ is a smooth metric on the compact
manifold $H$, $\beta = \beta_a {\rm d} y^a$ is a 1-form on $H$ and $\alpha$ is a scalar function on $H$.
These fields do not depend on $u,v$, and the near horizon geometries therefore
have the Killing vectors $K = \partial/\partial v$ and $X = u \partial/\partial u - v \partial/\partial v$. We do {\em not} assume a priori that the near horizon metrics arise from
a black hole spacetime by the limiting procedure described above.
Unfortunately, this problem appears to be difficult to solve in this generality, so we will make a significant further symmetry assumption. Namely, we will assume that our metrics do not only have
the Killing vectors $K,X$, but in addition admit the symmetry group $U(1)^{D-3}$, generated by
$(D-3)$ commuting Killing fields $\psi_1, \dots, \psi_{D-3}$ that are tangent to $H$ and also commute with $K,X$.
Thus, the full isometry group of our metric is (at least) $G_2 \times U(1)^{D-3}$, where $G_2$ denotes the Lie-group that
is generated by $K, X$. This means roughly speaking that the metric
functions can nontrivially depend only on a single variable, and our metrics may
hence be called ``cohomogeneity-one.'' As a consequence,
Einstein's equations reduce to a coupled system of non-linear ordinary differential equations in
this variable. Our aim is to solve this system in the most general way and thereby to classify all near
horizon geometries with the assumed symmetry.
It seems that this system becomes tractable only if certain special coordinates are introduced that are adapted in an
optimal way to the geometric situation under consideration. These coordinates are the well-known Weyl-Papapetrou
coordinates up to a simple coordinate transformation. However, to introduce these coordinates
in a rigorous and careful manner is more subtle in the present case than for non-extremal horizons.
These technical difficulties are closely related to the fact that the usual Weyl-Papapetrou coordinates are actually singular on $H$, the very place we are interested in most. To circumvent this problem, we follow the elegant alternative procedure introduced in~\cite{klr,kl}. That procedure applies in the form presented here to non-static geometries, and we will
for the rest of this paper make this assumption.
The static case has been treated previously in \cite{crt06,kl1}.
We first observe that the horizon $H$ is a compact $(D-2)$-dimensional manifold with an action of $U(1)^{D-3}$.
By general and rather straightforward arguments (see e.g.~\cite{pak,hs}) it follows that, topologically, $H$ can only be of the following four types:
\begin{equation}
H \cong \begin{cases}
S^3 \times T^{D-5} \, ,\\
S^2 \times T^{D-4} \, ,\\
L(p,q) \times T^{D-5} \, ,\\
T^{D-2} \, .
\end{cases}
\end{equation}
Furthermore, in the first three cases, the quotient space $H/U(1)^{D-3}$ is a closed interval---which we
take to be $[-1,1]$ for definiteness---whereas in
the last case, it is $S^1$. We will not treat the last case
in this paper\footnote{See, however, the note added in proof.}, but we note that
the topological censorship theorem~\cite{galoway} implies that there cannot exist any extremal, asymptotically flat or Kaluza-Klein vacuum black holes with $H \cong T^{D-2}$. Thus, while there could still be near
horizon geometries with $H \cong T^{D-2}$, they cannot arise as the limit of a globally defined black hole spacetime.
In this paper, we will focus on the first three topology types. In these cases, the Gram matrix
\begin{equation}\label{gram}
f_{ij} = \gamma(\psi_i, \psi_j)
\end{equation}
is non-singular in the interior of the interval and it has a one-dimensional null-space
at each of the two end points~\cite{hs}. In fact, there are integers $a_\pm^i \in {\mathbb Z}$ such that
\begin{equation}\label{bndycond}
f_{ij}(x) a_\pm^i \to 0 \quad \text{at boundary points $\pm 1$.}
\end{equation}
The integers $a^i_\pm$ determine the topology of $H$ (i.e. which of the first three cases
we are in), as we explain more in Thm.~\ref{thm1} below.
The first geometric coordinate, $x$, parametrizes the interval $[-1,+1]$, and is introduced
as follows. Consider the 1-form on $H$ defined by $\Sigma = (\det f)
\star_\gamma (\psi_1 \wedge \dots \wedge \psi_{D-3})$, where the Hodge
dual is taken with respect to the metric $\gamma$ on $H$. Using the fact that
the $\psi_i$ are commuting Killing fields of $\gamma$, one can show that $\Sigma$ is closed, and
that it is Lie-derived by all $\psi_i$. Hence $\Sigma$ may be viewed as a closed 1-form on the orbit
space $H/U(1)^{D-3}$, which, as we have said, is a closed interval. It can be seen furthermore that $\Sigma$ does not
vanish anywhere within this closed interval, so there exists a function $x$, such that
\begin{equation}
{\rm d} x = C \Sigma \, .
\end{equation}
The constant $C$ is chosen so that $x$ runs from $-1$ to $+1$. We take $x$ to be our first coordinate, and we
take the remaining coordinates on $H$ to be angles $\varphi^1, \dots, \varphi^{D-3}$
running between $0$ and $2\pi$, chosen in such a way
that $\psi_i = \partial/\partial \varphi^i$. In these coordinates, the metric $\gamma$ on $H$ takes the form
\begin{equation}
\gamma = \frac{1}{C^2 \det f} \, {\rm d} x^2 + f_{ij}(x) {\rm d} \varphi^i {\rm d} \varphi^j \, .
\end{equation}
To define our next coordinate, we consider the 1-form field $\beta$ on $H$, see eq.~\eqref{gnc}.
Standard results on the Laplace operator $\Delta_\gamma$ on a compact Riemannian manifold $(H, \gamma)$
guarantee that there exists a smooth function $\lambda$ on $H$ such that
\begin{equation}\label{lamdef}
\star_\gamma {\rm d}\! \star_\gamma \beta = \Delta_\gamma \lambda \, ,
\end{equation}
where $\star_\gamma$ is the Hodge star of $\gamma$. The function $\lambda$ is unique up to a constant. Because
$\beta$ and $\gamma$ are Lie-derived by all the rotational Killing fields $\psi_i$, it follows that
the ${\mathscr L}_{\psi_i} \lambda$ are harmonic functions on $H$, i.e. constants,
${\mathscr L}_{\psi_i} \lambda = c_i$. Furthermore, these constants
must vanish, because the $\psi_i$ have periodic orbits. Thus, $\lambda$ is a function of $x$ only.
We also claim that the 1-form $\beta - {\rm d} \lambda$ has no ${\rm d} x$-part. To see this, we let
$h$ be the scalar function on $H$ defined by $h = \star_\gamma (\psi_1 \wedge \dots \wedge \psi_{D-3} \wedge [\beta - {\rm d} \lambda])$.
Using eq.~\eqref{lamdef} and the fact that the $\psi_i$ are commuting Killing fields of $\gamma$, it is easy to
show that ${\rm d} h = 0$, so $h$ is constant. Furthermore, by eq.~\eqref{bndycond} there exist points in $H$ where the
linear combinations $a^i_\pm \psi_i = 0$, and it immediately follows from this that $h = 0$ on $H$. This shows that
$\beta - {\rm d} \lambda$ has no ${\rm d} x$-part, hence we can write
\begin{equation}
\beta = {\rm d} \lambda + C {\rm e}^\lambda \, k_i {\rm d}\varphi^i \, ,
\end{equation}
where we have introduced the quantities
\begin{equation}
k_i := C^{-1} {\rm e}^{-\lambda} \, \psi_i \cdot \beta \, .
\end{equation}
The next coordinate is defined by
\begin{equation}
r:= u {\rm e}^\lambda \, ,
\end{equation}
and we keep $v$ as the last remaining coordinate. The coordinates
$\varphi^i, r, x, v$ are the desired geometrical coordinates. In these,
the metric takes the form
\begin{equation}
g = {\rm e}^{-\lambda} [2 {\rm d} v {\rm d} r + r^2 (2\alpha {\rm e}^{-\lambda} - {\rm e}^\lambda k_i k^i) \, {\rm d} v^2]
+ \frac{{\rm d} x^2}{C^2 \det f} + f_{ij}({\rm d} \varphi^i + Cr \, k^i {\rm d} v)({\rm d} \varphi^j + Cr \, k^j {\rm d} v) \, .
\end{equation}
We have also determined that the quantities $k^i, f_{ij}, \alpha, \lambda$ are functions of $x$ only.
The indices $i,j,...$ are raised with the inverse $f^{ij}$ of the Gram matrix, e.g. $k^i = f^{ij} k_j$.
So far, we have only used the symmetries of the metric, but not the fact that it is also required to be Ricci flat.
This imposes significant further restrictions~\cite{kl,klr}.
Namely, one finds that $k^i$ are simply {\em constants},
and that $(2\alpha {\rm e}^{-\lambda} - {\rm e}^\lambda k_i k^i)$
is a negative\footnote{Here one must use that the metric is {\em not} static, i.e. that not all
$k^i$ vanish.} {\em constant}, which one may choose to be $-C^2$ after a suitable rescaling of the
coordinates $r,v$ and the constants $k^i$, and by adding a constant to $\lambda$. Then the Einstein
equations further imply that $\partial_x^2({\rm e}^{-\lambda} \det f)=-2$; hence ${\rm e}^{-\lambda} = -(x-x_-)(x-x_+)(\det f)^{-1}$
for real numbers $x_\pm$. Furthermore, $\lambda$ is smooth and $\det f$ vanishes only at
$x = \pm 1$ by eq.~\eqref{bndycond}, so $x_\pm = \pm 1$ and consequently
\begin{equation}\label{lambdax}
{\rm e}^{-\lambda} = (1-x^2)(\det f)^{-1} \, .
\end{equation}
Thus, in summary, we have determined that the near horizon metric
is given by
\begin{equation}\label{nhg}
g = \frac{1-x^2}{\det f}(2{\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2) + \frac{{\rm d} x^2}{C^2 \det f} +
f_{ij}( {\rm d} \varphi^i + r Ck^i \, {\rm d} v)({\rm d} \varphi^j + r Ck^j \, {\rm d} v)
\end{equation}
where $k^i, C$ are constants, and where $f_{ij}$ depends only on $x$.
In the remainder of the paper, we will work with the above form of the metric~\eqref{nhg}. However, we will,
for completeness, also give the relation to the more familiar Weyl-Papapetrou form: For
$r > 0$ (i.e., strictly outside the horizon), we define new coordinates $(t,\rho,z,\phi^i)$ by the transformation \cite{fl}
\begin{eqnarray}\label{rx}
z &:=& rx\\
\rho &:=& r\sqrt{1-x^2}\\
t &:=& Cv + (Cr)^{-1}\\
\phi^i &:=& \varphi^i + C^{-1} k^i \log r \, .
\end{eqnarray}
In the new coordinates $(t,\rho,z,\phi^i)$, the metric then takes the Weyl-Papapetrou form
\begin{equation}\label{wp}
g = - \frac{\rho^2 \, {\rm d} t^2}{\det f} + \frac{{\rm e}^{-\lambda}}{C^2r^2} ({\rm d} \rho^2 + {\rm d} z^2)
+
f_{ij}( {\rm d} \phi^i + r k^i \, {\rm d} t)({\rm d} \phi^j + r k^j \, {\rm d} t) \, ,
\end{equation}
where it is understood that $r^2 = \rho^2 + z^2$.
Note that, by contrast with the coordinate system $(v,r,x,\varphi^i)$, the Weyl-Papapetrou coordinate system
does not cover the horizon itself, i.e., it is not defined for $r=0$ but only for $r>0$. This can be seen
in several ways, for example by noting that the coordinate transformation is singular at $r=0$, i.e.
on the horizon, or alternatively,
by noting that the horizon corresponds in the new coordinates to the single point $\rho = z = 0$. This
behavior is characteristic for extremal horizons and does not happen in the non-extremal case.
In obtaining our form~\eqref{nhg} for the near horizon metric, we have used up all but the $ij$-components of the
Einstein equations. The remaining Einstein equations determine the matrix of functions
$f_{ij}(x)$. As is well-known~\cite{Maison}, a beautifully simple form of these equations
can be obtained by introducing the twist potentials
of the rotational Killing fields as auxiliary variables. These potentials $\chi_i$ are defined up to a constant by
\begin{equation}
{\rm d} \chi_i = \star ( \psi_1 \wedge \dots \wedge \psi_{D-3} \wedge {\rm d} \psi_i) \, .
\end{equation}
To see that this equation makes sense, one has to prove that the right side is an exact form. Indeed, taking
${\rm d}$ of the right side and using the vanishing of the Ricci tensor together with the fact that the Killing fields all commute,
one gets zero. To see that the right side is even exact, it is best to pass to the orbit space $M/(G_2 \times U(1)^{D-3})$ first,
which can be identified with the interval $[-1,1]$. Then the $\chi_i$ can be defined on this orbit space and lifted back to functions
on $M$. It also follows from this construction that $\chi_i$ only depends
on the coordinate $x$ parametrizing $[-1,1]$. Setting
\begin{equation}\label{phidef}
\Phi = \left(
\begin{matrix}
(\det f)^{-1} & -(\det f)^{-1} \chi_i \\
-(\det f)^{-1} \chi_i & f_{ij} + (\det f)^{-1} \chi_i \chi_j
\end{matrix}
\right) \quad ,
\end{equation}
it is well-known that the vanishing of the Ricci-tensor implies that
\begin{equation}\label{phirx}
\partial_x [(1-x^2) \Phi^{-1} \partial_x \Phi] + \partial_r[ r^2 \, \Phi^{-1} \partial_r \Phi] = 0 \, .
\end{equation}
These equations are normally written in the Weyl-Papapetrou coordinates $\rho,z$ (see e.g.~\cite{hs}), and the above form is obtained simply by the change of variables eq.~\eqref{rx}.
Since $\Phi$ is a function of $x$ only in our situation (but would not be, e.g.,
for black holes without the near horizon limit taken), an essential further simplification occurs: the second term in
the above matrix equation vanishes identically. Hence, the content of the remaining Einstein equations is
expressed in the matrix of {\em ordinary} differential equations
\begin{equation}\label{phieq}
\partial_x [(1-x^2) \Phi^{-1} \partial_x \Phi] = 0 \, .
\end{equation}
In fact, this equation could be derived formally and much more directly by simply assuming the Weyl-Papapetrou
form of the metric, introducing $r,x$ as above, and then observing that, in the near horizon limit,
the dependence on $r$ is scaled away, so that the matrix partial differential equations~\eqref{phirx} reduce to the ordinary
differential equations~\eqref{phieq}.
\section{Classification}
To determine all near horizon metrics~\eqref{nhg}, we must solve the matrix equations~\eqref{phieq},
i.e. find $f_{ij}, \chi_i$. Then the constants $k^i$ are given by
\begin{equation}\label{ki1}
k^i = \frac{1-x^2}{\det f} \, f^{ij} \partial_x \chi_j \, ,
\end{equation}
and this determines the full metric up to the choice of the remaining constant $C$.
We must furthermore ensure that, among all such solutions, we pick only those that give rise to a smooth
metric $g$.
The equations~\eqref{phieq} for $\Phi$ are easily integrated to
\begin{equation}
\Phi(x) = Q \, \exp \left[ 2 \, {\rm arcth} (x) \cdot L \right] = Q \, \left( \frac{1+x}{1-x} \right)^{L} \, \, .
\end{equation}
Here, $Q=\Phi(0)$ and $L= \frac{1}{2}(1-x^2) \Phi(x)^{-1} \partial_x \Phi(x)$ are both constant real $(D-2) \times (D-2)$ matrices, and the exponential is understood as the matrix exponential.
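For completeness, the integration step implicit here reads:

```latex
\begin{equation*}
(1-x^2) \, \Phi^{-1} \partial_x \Phi = 2L = {\rm const.}
\quad \Longrightarrow \quad
\Phi(x) = \Phi(0) \, \exp\!\left[ \int_0^x \frac{2 \, {\rm d} x'}{1-x'^2} \, L \right]
= Q \, \exp\left[ 2 \, {\rm arcth}(x) \, L \right] ,
\end{equation*}
```

using $\int_0^x {\rm d} x'/(1-x'^2) = {\rm arcth}(x)$; the simple exponential form is valid because the constant exponent commutes with itself for different values of $x$.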
It follows from the definition that $\Phi$
has the following general properties: it is symmetric, positive definite, and satisfies $\det \Phi = 1$. It is an easy consequence of these properties that $\det Q = 1$ and ${\rm Tr} \, L = 0$ (taking the determinant of the equation), that
$Q = Q^T$ is positive definite, and that $L^T Q = QL$. These relations allow us to write $Q = S^T S$ for some real invertible matrix $S = (s_{IJ})$ of determinant $\pm 1$, and to conclude that $SLS^{-1}$ is a real symmetric matrix. By changing $S$ to $VS$, where $V$ is a suitable orthogonal transformation, we can achieve that
\begin{equation}
SL S^{-1} = \left(
\begin{matrix}
\sigma_0 & 0 & \dots & 0\\
0 & \sigma_1 & \dots & 0\\
\vdots & & & \vdots\\
0 & 0 & \dots & \sigma_{D-3}
\end{matrix}
\right)
\end{equation}
is a real diagonal matrix, while leaving $Q$ unchanged. It then follows that
$\Phi(x) = S^T \, \exp \left[ 2 \, {\rm arcth} (x) \cdot S L S^{-1} \right] S$,
that is
\begin{equation}\label{phi}
\Phi_{IJ}(x) = \sum_{K=0}^{D-3} \left( \frac{1+x}{1-x} \right)^{\sigma_K} s_{KI} s_{KJ} \, .
\end{equation}
This is the most general solution to the field equation for $\Phi$ in the near horizon limit, and it depends
on the real parameters $s_{IJ}, \sigma_I$, which are subject to the constraints
\begin{equation}\label{constrphi}
\det (s_{IJ})
= \pm 1
\, , \quad \sum_{I=0}^{D-3} \sigma_I = 0 \, .
\end{equation}
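As a quick sanity check (entirely our own; the sample matrix $S$ and exponents $\sigma_K$ below are arbitrary), one can verify numerically that the matrix~\eqref{phi} solves eq.~\eqref{phieq}, since $(1-x^2)\Phi^{-1}\partial_x \Phi = 2L$ is constant, and that the constraints $\det(s_{IJ}) = \pm 1$, $\sum_I \sigma_I = 0$ imply $\det \Phi(x) = 1$:

```python
# Sanity check (ours; sample data arbitrary): for
# Phi(x) = S^T exp(2 arcth(x) Lambda) S with Lambda = diag(sigma_K),
# one has (1-x^2) Phi^{-1} dPhi/dx = 2 L with L = S^{-1} Lambda S, so
# the ODE holds; det S = +-1 and sum sigma_K = 0 give det Phi(x) = 1.
import numpy as np

S = np.array([[1.0, 0.3, -0.2],
              [0.1, 1.2, 0.4],
              [-0.5, 0.2, 0.9]])            # generic invertible, D = 5
S /= abs(np.linalg.det(S)) ** (1.0 / 3)     # enforce det S = +-1
sigma = np.array([0.7, -0.2, -0.5])         # sum sigma_K = 0
L = np.linalg.solve(S, np.diag(sigma) @ S)  # L = S^{-1} Lambda S

def Phi(x):
    w = ((1 + x) / (1 - x)) ** sigma        # = exp(2 arcth(x) sigma_K)
    return (S * w[:, None]).T @ S           # Phi_IJ = sum_K w_K s_KI s_KJ

def flux(x, h=1e-6):
    """Central-difference value of (1-x^2) Phi^{-1} dPhi/dx."""
    dPhi = (Phi(x + h) - Phi(x - h)) / (2 * h)
    return (1 - x**2) * np.linalg.solve(Phi(x), dPhi)

for x in (-0.7, 0.0, 0.5):
    assert np.allclose(flux(x), 2 * L, atol=1e-4)    # constant flux
    assert abs(np.linalg.det(Phi(x)) - 1.0) < 1e-10  # det Phi = 1
```

The check works for any traceless choice of exponents; the restriction of the $\sigma_I$ to $\{0,\pm 1\}$ only enters later through smoothness.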
The near horizon metric is completely fixed in terms of $\Phi$. It can be obtained combining eqs.~\eqref{phi} with eq.~\eqref{phidef} to determine $f_{ij}, \chi_i$, which in turn then fix the remaining constants $k^i, C$ in the near horizon metric. In the rest of this section, we explain how this can be done. It turns out that the smoothness of the near horizon metric also implies certain constraints on the parameters $\sigma_I, s_{IJ}$, and we will derive the form of these. Our analysis applies in principle to all dimensions $D \ge 4$. The case $D=4$, while being simplest, is somewhat different from the remaining
cases $D \ge 5$ and would require us to distinguish these cases in many of the formulae below. Therefore, to keep the discussion simple, we will stick to $D \ge 5$ in the following.
\medskip
First, we consider the $ij$-component of $\Phi$ in eq.~\eqref{phi}. By eq.~\eqref{phidef}
this is also equal to
\begin{equation}\label{phifconnect}
\sum_{I=0}^{D-3} \left( \frac{1+x}{1-x} \right)^{\sigma_I}
s_{Ii} s_{Ij} = \Phi_{ij} = f_{ij} + (\det f)^{-1} \, \chi_i \chi_j \, .
\end{equation}
Now, the coordinate $x \in [-1, 1]$
parametrizes the orbit space $H/U(1)^{D-3}$
of the horizon, which is topologically a finite interval. The boundary
points $x = \pm 1$ correspond to points on the horizon where an integer linear combination $\sum a^i_\pm \psi_i$ of the rotational Killing fields vanishes.
This is equivalently expressed by the condition
$f_{ij}(x) a_\pm^j \to 0$ as $x \to \pm 1$. By contrast, for all values of $x \in (-1,+1)$,
no linear combination of the rotational fields vanishes. Therefore, $\det f \neq 0$ for $x \in (-1,+1)$, while $\det f \to 0$ as $x \to \pm 1$. In fact, using eq.~\eqref{lambdax} one sees that
\begin{equation}
(\det f)^{-1} = 2 c_+^2(1-x)^{-1} + 2 c_-^2(1+x)^{-1} + \dots \quad
\text{as $x \to \pm 1$,}
\end{equation}
where the dots represent contributions that go to a finite limit, and where $c_{\pm}$ are non-zero constants related to $\lambda$ by $4c_\pm^2={\rm e}^{-\lambda(\pm 1)} \neq 0$. The twist potentials $\chi_i$ also go to a finite limit as $x \to \pm 1$. By adding suitable constants to the twist potentials if necessary, we may achieve that
\begin{equation}
\chi_i \to \frac{1}{c_{\pm}} \, \mu_i \quad \text{as $x \to \pm 1$} \, ,
\end{equation}
where $\mu_i \in {\mathbb R}$ are constants. The upshot of this discussion is that, as one approaches the boundary points, the components $\Phi_{ij}$ are dominated by the rank-1 part $(\det f)^{-1} \chi_i \chi_j$, which diverges as
$2(1 \mp x)^{-1} \, \mu_i \mu_j$ as $x \to \pm 1$. This behavior can be used to fix the possible values of the eigenvalues $\sigma_I$ as follows. First, it is clear that at least one of the eigenvalues must be non-zero, for otherwise
the left side of eq.~\eqref{phifconnect} would remain bounded as $x \to \pm 1$, whereas we have just argued that the right side diverges. Let us assume without loss of generality then that $\sigma_{D-3} \ge \dots \ge \sigma_{D-3-n} > 0$ are the $n$ positive eigenvalues. Multiplying eq.~\eqref{phifconnect} by $1-x$ and taking $x \to +1$, we see that $\sigma_{D-3} = 1$, that $\mu_i = s_{(D-3)i}$, and that all remaining positive eigenvalues must be strictly between 0 and 1. If we now subtract $4(1-x^2)^{-1} \mu_i \mu_j$ from both sides of the equation, then the right side of eq.~\eqref{phifconnect} goes to a finite limit
as $x \to 1$, and so the left side has to have that behavior, too.
This is only possible if there are no other remaining positive eigenvalues besides $\sigma_{D-3}$. A similar argument then likewise shows that there is only one negative eigenvalue, which has to be equal to $-1$ (without loss of generality we may take $\sigma_{D-4} = -1$) and that $\mu_i = s_{(D-4)i}$.
In summary, we have shown that
\begin{equation}
\sigma_I = \begin{cases}
0 & \text{if $I\le D-5$,}\\
-1 & \text{if $I = D-4$,}\\
1 & \text{if $I = D-3$,}
\end{cases}
\end{equation}
and we also see that
\begin{equation}\label{Jeq}
\mu_i = s_{(D-3)i} = s_{(D-4)i} \, , \quad c_+ = s_{(D-3)0} \, , \quad c_- = s_{(D-4)0} \,.
\end{equation}
The condition that $\det S = \pm 1$ then moreover gives
\begin{equation}\label{Sdet}
\pm 1 = (c_+ - c_-) \, \epsilon^{ijk \dots m} s_{0i} s_{1j} s_{2k} \cdots \mu_m \,\, .
\end{equation}
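The determinant expansion underlying eq.~\eqref{Sdet} is a purely algebraic identity; a numerical spot-check (our own, with arbitrary sample values, here in $D=6$) reads:

```python
# Spot-check (ours, arbitrary sample values) of the expansion behind
# eq. (Sdet) in D = 6: with rows s_{(D-4)} = (c_-, mu_i) and
# s_{(D-3)} = (c_+, mu_i), |det S| reduces to |c_+ - c_-| times the
# determinant built from the angular parts s_{Ii} and mu_i.
import numpy as np

cp, cm = 1.3, -0.4
mu = np.array([0.7, -1.1, 0.5])              # mu_i, i = 1,...,D-3
s = np.array([[0.2, 0.9, -0.3],              # angular parts s_{Ii},
              [-0.6, 0.1, 0.8]])             # I = 0,...,D-5
s0 = np.array([0.4, -0.7])                   # first-column entries s_{I0}
S = np.block([[s0[:, None], s],
              [np.array([[cm], [cp]]), np.vstack([mu, mu])]])
lhs = abs(np.linalg.det(S))
rhs = abs((cp - cm) * np.linalg.det(np.vstack([s, mu])))
assert abs(lhs - rhs) < 1e-12
```

Subtracting the row $(c_-,\mu_i)$ from $(c_+,\mu_i)$ leaves $(c_+-c_-,0,\dots,0)$, which is exactly the row reduction the check exercises.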
We may now combine this information with the equations~\eqref{phi}
and~\eqref{phidef} and solve for $f_{ij},\chi_i$. The result can be expressed as:
\begin{eqnarray}\label{fij}
f_{ij} \xi^i \xi^j &=& 2 \frac{1+x^2}{1-x^2} (\mu \cdot \xi)^2 + \sum_{I=0}^{D-5} (s_{I} \cdot \xi)^2 \\
&& - \frac{{\rm e}^{\lambda(x)}}{1-x^2} \left( (1-x^2) \sum_{I=0}^{D-5} \nonumber
s_{I0} (s_{I} \cdot \xi) + [c_+(1+x)^2 + c_-(1-x)^2] (\mu \cdot \xi)
\right)^2\\
\chi_i \xi^i &=&{\rm e}^{\lambda(x)}\left( (1-x^2) \sum_{I=0}^{D-5}
s_{I0} (s_{I} \cdot \xi) + [c_+(1+x)^2 + c_-(1-x)^2] (\mu \cdot \xi)
\right) \, .
\end{eqnarray}
Here, we are using shorthand notations such as $\mu \cdot \xi = \mu_i \xi^i$ or
$s_I \cdot \xi = s_{Ii} \xi^i$, and
\begin{equation}\label{Fdef}
\exp[-\lambda(x)] = c_+^2(1+x)^2 + c_-^2(1-x)^2 + (1-x^2) \sum_{I=0}^{D-5} s_{I0}^2 \, ,
\end{equation}
in order to have a reasonably compact notation. This function $\lambda$ agrees with
that previously defined in eq.~\eqref{lamdef} by eq.~\eqref{lambdax}.
From eq.~\eqref{fij}, one now finds after a short calculation that
the conditions~\eqref{bndycond} are equivalent to
\begin{equation}\label{s0I}
s_{I0} \, \mu_i a_+^i = c_{+} \, s_{Ii} a_+^i \, , \quad
s_{I0} \, \mu_i a_-^i = c_{-} \, s_{Ii} a_-^i \, , \quad \text{for $I=0, \dots, D-5$}.
\end{equation}
Either of these equations ``$\pm$'' can be used to solve for $s_{I0}$, because\footnote{
Indeed, let us assume that, say $\mu_i a^i_+ = 0$. Then, since $c_+ \neq 0$,
we know that also $s_{Ii} a_+^i = 0$. It then would follow that
$0=\epsilon^{ijk \dots m} s_{0i} s_{1j} s_{2k} \cdots \mu_m$, which however is
in contradiction with eq.~\eqref{Sdet}.
} $\mu_i a_\pm^i \neq 0$ for both ``$\pm$''. We will do this in the following.
As we have explained, the constants $k^i$ in the near horizon metric are given by~\eqref{ki1}. A longer calculation using eqs.~\eqref{fij},~\eqref{s0I},~\eqref{Sdet} and~\eqref{Fdef} reveals that
\begin{equation}\label{ki}
k^i = \frac{2c_+ c_-}{c_+ - c_-} \left( \frac{a_+^i}{\mu_j a_+^j} + \frac{a_-^i}{\mu_j a_-^j} \right) \, .
\end{equation}
To avoid conical singularities in the near horizon metric~\eqref{nhg}, we must furthermore have\footnote{
Here the constants $a^i_\pm \in {\mathbb Z}$ are normalized so that the greatest common divisor
of $a_+^i, i=1, \dots, D-3$ is equal to 1, and similarly for $a^i_-$.}
\begin{equation}\label{cdet1}
\frac{(1-x^2)^2}{\det f \cdot f_{ij} a^i_\pm a^j_\pm} \to C^2 \quad \text{as $x \to \pm 1$,}
\end{equation}
and this determines $C$.
A longer calculation using eqs.~\eqref{fij},~\eqref{s0I}
shows that
\begin{equation}\label{cdet}
C = \frac{4c_+^2}{(c_+ - c_-) \mu_i a^i_+} = \frac{4c_-^2}{(c_+ - c_-) \mu_i a^i_-} \, .
\end{equation}
Thus, we have determined all quantities $C, k^i, f_{ij}$ in the near horizon metric~\eqref{nhg}.
We substitute these, and make the final coordinate change
\begin{equation}
x = \cos \theta \, , \quad 0 \le \theta \le \pi \, .
\end{equation}
Then, after performing some algebraic manipulations, we get the following
result, which summarizes our entire analysis so far:
\begin{thm}\label{thm1}
All non-static near horizon metrics (except topology type $H \cong T^{D-2}$)
are parametrized by the real parameters $c_\pm, \mu_i, s_{Ii}$,
and the integers $a_\pm^i$ where $I=0,\dots,D-5$ and $i=1, \dots, D-3$,
and ${\rm g.c.d.}(a_\pm^i) = 1$.
The explicit form of the near horizon metric in terms of these parameters is
\begin{eqnarray}\label{NH}
g &=& {\rm e}^{-\lambda} (2{\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2 + C^{-2} \, {\rm d} \theta^2) +
{\rm e}^{+\lambda} \Bigg\{ (c_+-c_-)^2 (\sin^2 \theta) \, \Omega^2 \nonumber\\
&& +(1+\cos \theta)^2 c_+^2 \sum_I \left( \omega_I - \frac{s_I \cdot a_+}{\mu \cdot a_+} \Omega \right)^2
+(1-\cos \theta)^2 c_-^2 \sum_I \left( \omega_I - \frac{s_I \cdot a_-}{\mu \cdot a_-} \Omega \right)^2\nonumber\\
&& + \frac{c_\pm^2\, \sin^2 \theta}{(\mu \cdot a_\pm)^2} \sum_{I < J} \Big(
(s_I \cdot a_\pm) \omega_J - (s_J \cdot a_\pm)\omega_I \Big)^2
\Bigg\} \, .
\end{eqnarray}
Here, the sums run over $I,J$ from $0, \dots, D-5$, the function
$\lambda(\theta)$ is given by
\begin{equation}
\exp[-\lambda(\theta)] = c_+^2(1+\cos \theta)^2 + c_-^2(1-\cos \theta)^2 + \frac{c_\pm^2 \sin^2 \theta}{(\mu\cdot a_\pm)^2} \sum_I (s_{I}\cdot a_{\pm})^2 \, ,
\end{equation}
$C$ is given by $C = 4c^2_\pm[(c_+-c_-)(\mu \cdot a_\pm)]^{-1}$, and we have defined the 1-forms
\begin{eqnarray}
\Omega(r) &=& \mu \cdot {\rm d} \varphi + 4Cr\frac{c_+c_-}{c_+ -c_-} {\rm d} v \\
\omega_I(r) &=& s_{I} \cdot {\rm d} \varphi +
\frac{r}{2} \, C^2 (s_{I} \cdot a_+ + s_I \cdot a_-) \, {\rm d} v \, .
\end{eqnarray}
We are also using shorthand notations such as $s_{Ii} a^i_+ = s_I \cdot a_+$ or
$\mu \cdot {\rm d} \varphi = \mu_i {\rm d} \varphi^i$.
The parameters are subject to the constraints $\mu \cdot a_\pm \neq 0$ and
\begin{equation}\label{constr}
\frac{c_+^2}{\mu \cdot a_+} = \frac{c_-^2}{\mu \cdot a_-}
\, ,
\quad
\frac{c_+ (s_{I} \cdot a_+)}{\mu \cdot a_+} = \frac{c_- (s_{I} \cdot a_-)}{ \mu \cdot a_-}
\, ,
\quad
\pm 1 = (c_+ - c_-) \, \epsilon^{ijk \dots m} s_{0i} s_{1j} s_{2k} \cdots \mu_m
\end{equation}
but they are otherwise free. The coordinates $\varphi^i$ are $2\pi$-periodic,
$0 \le \theta \le \pi$, and $v,r$ are arbitrary. When writing ``$\pm$'', we mean that the
formulae hold for both signs.
\end{thm}
{\bf Remarks:}
(1) The function $\lambda(\theta)$ was invariantly defined in eq.~\eqref{lamdef}, and therefore
evidently has to be a smooth function. This is manifestly true, because both $c_\pm \neq 0$.
Because also $\mu \cdot a_\pm$ are both non-zero, we
explicitly see that the above metrics are smooth (in fact analytic).
(2) The part $2 {\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2$ of the metric is that of $AdS_2$ with curvature $C^2$. This is the cause for the
enhanced symmetry group of $O(2,1) \times U(1)^{D-3}$.
\medskip
\noindent
Let us finally discuss the meaning of the parameters on which the near horizon metrics depend.
The parameters $a_\pm^i \in {\mathbb Z}$ are related to the horizon topology.
Up to a globally defined coordinate transformation of the form
$$\varphi^i \mapsto \sum A^i_j \varphi^j \,\, {\rm mod} \,\, 2\pi \, , \quad A \in SL(D-3, {\mathbb Z}) \, ,$$
we have
\begin{equation}\label{apm}
a_+ = (1,0,0,\dots,0) \, , \quad a_- = (q, p, 0,\dots,0) \, , \quad p,q \in {\mathbb Z} \, , \quad {\rm g.c.d.}(p,q) = 1\, .
\end{equation}
A general analysis of compact manifolds with a cohomogeneity-1 torus action (see e.g.~\cite{hs}) implies that
the topology of $H$ is
\begin{equation}
H \cong
\begin{cases}
S^3 \times T^{D-5} & \text{if $p=\pm 1,q=0$,} \\
S^2 \times T^{D-4} & \text{if $p=0, q= 1$,}\\
L(p,q) \times T^{D-5} & \text{otherwise.}
\end{cases}
\end{equation}
The constants $\mu_i, c_\pm, a^i_\pm$ are directly related to the
horizon area by
\begin{equation}
A_H = \frac{(2\pi)^{D-3}(c_+-c_-)^2(\mu \cdot a_\pm)^2}{8c_\pm^4} \, ,
\end{equation}
and we also have
\begin{equation}
J_i := \frac{1}{2} \int_H \star ({\rm d} \psi_i) = (2\pi)^{D-3} \, \frac{c_+ - c_-}{2c_-c_+} \mu_i \, .
\end{equation}
In an asymptotically flat or Kaluza-Klein black hole spacetime with a single horizon $H$, the above integral for
$J_i$ could be converted to a convergent integral over a cross section at infinity using
Stokes theorem and the vanishing of the Ricci tensor. Then the $J_i$ would be equal to the
Komar expressions for the angular momentum. The near horizon limits that we consider do not
of course satisfy any such asymptotic conditions, and hence this cannot be done. Nevertheless,
if the near horizon metric under consideration arises from an asymptotically flat or
asymptotically Kaluza-Klein spacetime, then the $J_i$ are
the angular momenta of that spacetime. Hence, we see that the parameters $c_\pm, \mu_i, a_\pm^i$ are directly
related to geometrical/topological properties of the metric. This seems to be less clear for
the remaining parameters $s_{Ii}$.
The number of continuous parameters on which our metrics depend can
be counted as follows.
First, the matrix $s_{Ii}$ has $(D-3)(D-4)$ independent components,
$\mu_i$ has $(D-3)$, and $c_\pm$ has $2$ components. These parameters
are subject to the $(D-2)$ constraints, eqs.~(\ref{constr}).
However, changing $s_{Ii}$ to $\sum_{J=0}^{D-5} R^J{}_I s_{Ji}$, with
$R^J{}_I$ an orthogonal matrix in $O(D-4)$, does not change the metric.
Since such a matrix depends on $(D-4)(D-5)/2$ parameters,
our metrics depend only on $(D-3)(D-4)+(D-3)+2 - (D-2) - (D-4)(D-5)/2
= (D-2)(D-3)/2$ real continuous parameters.
It is instructive to compare this number to the number of
parameters of a boosted Kerr-brane. If we start from a direct
product of a $4$-dimensional extremal Kerr metric with a flat torus
$T^{D-4}$ and apply a boost in an arbitrary direction,
then the resulting family of
metrics has $(D-2)(D-3)/2$
parameters, and the horizon topology is $S^2 \times T^{D-4}$.
It is plausible that all the metrics in Thm.~\ref{thm1} for
this topology can be obtained by taking the near horizon limit of
these boosted Kerr-branes.
By contrast, if we start with a direct product of a $5$-dimensional extremal
Myers-Perry black hole with a flat torus
$T^{D-5}$, then we similarly get a family of metrics
which depends only on $(D-3)(D-4)/2 + 1$ parameters.
Therefore in this case, we get metrics depending on fewer parameters
than those in Thm.~\ref{thm1}.
\section{Examples}
Let us first illustrate our classification in $D=5$ spacetime dimensions. According to our general result,
the metrics have the discrete parameters $a_\pm^1, a_\pm^2$ as well as the 6
continuous parameters $\mu_1, \mu_2, s_{01}, s_{02}, c_+, c_-$ which are subject to 3 constraints.
Thus, the number of free parameters is 3, and we take $C$ [given by eq.~\eqref{cdet}] as one of them
for convenience. We have the following cases to consider,
depending on the possible values of the discrete parameters, see eq.~\eqref{apm}:
\medskip
\noindent
{\bf Topology $H \cong S^1 \times S^2$}: This case corresponds to the choice $a_+ = a_- = (1,0)$.
The constraints~\eqref{constr} read explicitly
\begin{equation}
c_+^2 \mu_1 = c_-^2 \mu_1 \, , \quad
c_+ s_{01} \mu_1 = c_- s_{01} \mu_1 \, , \quad
(c_+ - c_-)
\left|
\begin{matrix}
\mu_1 & s_{01}\\
\mu_2 & s_{02}
\end{matrix}
\right|
= 1 \,
\end{equation}
in this case.
We know that $\mu_1$ cannot vanish, so the first and third
equation imply together that $c_\pm = \pm B$ for some non-zero constant $B$. As a consequence,
the second equation then gives $s_{01} = 0$, from which the third equation then gives
$s_{02} = 1/(2c_+\mu_1)$. Putting all this into our formula~\eqref{NH} for the near horizon metric
gives
\begin{eqnarray}
g &=& 2B^2(1+\cos^2 \theta)(2{\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2 + C^{-2} {\rm d} \theta^2) +
\frac{C^2}{16 B^4} ({\rm d} \varphi^2)^2 \nonumber\\
&&+ \frac{8B^2\sin^2 \theta}{C^2(1+\cos^2 \theta)} \left({\rm d} \varphi^1 + A \, {\rm d} \varphi^2 + C^2 r \, {\rm d} v\right)^2 \, ,
\end{eqnarray}
where we have put $A = \mu_2/\mu_1$.
We can explicitly read off from the metric that the norm of
$\partial/\partial \varphi^1$ [i.e., the coefficient of $({\rm d} \varphi^1)^2$] vanishes at $\theta=0,\pi$, whereas the norm of
$\partial/\partial \varphi^2$ [i.e., the coefficient of $({\rm d} \varphi^2)^2$]
never vanishes. This is the characteristic feature of the action of $U(1)^2$ on $S^2 \times S^1$.
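The little elimination argument above can be verified in exact arithmetic (a check of our own; the sample values of $B,\mu_1,\mu_2$ are arbitrary nonzero rationals):

```python
# Verify (ours) that the S^1 x S^2 solution c_+ = B = -c_-, s_01 = 0,
# s_02 = 1/(2 c_+ mu_1) satisfies all three constraints exactly.
from fractions import Fraction as F

B, mu1, mu2 = F(3, 2), F(5, 7), F(-2, 3)     # arbitrary, mu_1, B != 0
cp, cm = B, -B
s01 = F(0)
s02 = 1 / (2 * cp * mu1)
assert cp**2 * mu1 == cm**2 * mu1                    # first constraint
assert cp * s01 * mu1 == cm * s01 * mu1              # second constraint
assert (cp - cm) * (mu1 * s02 - mu2 * s01) == F(1)   # determinant constraint
```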
\medskip
\noindent
{\bf Topology $H \cong S^3$}: In this case, $a_+ = (1,0), a_-=(0,1)$.
The constraints~\eqref{constr} are
\begin{equation}
c_+^2 \mu_2 = c_-^2 \mu_1 \, , \quad
c_+ s_{01} \, \mu_2 = c_- s_{02} \, \mu_1 \, , \quad
(c_+ - c_-)
\left|
\begin{matrix}
\mu_1 & s_{01}\\
\mu_2 & s_{02}
\end{matrix}
\right|
= 1 \, .
\end{equation}
The constraints allow us e.g. to express $\mu_1, \mu_2, s_{01}, s_{02}$ in terms of
$A:=c_+, B:=c_-$ and $C$ given by eq.~\eqref{cdet}. The result must then be plugged back into
the equation for the near horizon metric~\eqref{NH}. After some calculation,
one ends up with the result
\begin{eqnarray}\label{gs3}
g &=& {\rm e}^{-\lambda} (2{\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2+ C^{-2} \, {\rm d} \theta^2 ) \\
&&+ {\rm e}^{+\lambda} \Bigg\{
\left(\frac{4}{C}\right)^2 \sin^2 \theta \left( A^2 {\rm d} \varphi^1 + B^2 {\rm d} \varphi^2 + r ABC^2 {\rm d} v \right)^2 \nonumber\\
&&+
\left(\frac{C}{4}\right)^2 (1+\cos \theta)^2 \left( A^{-1} \, {\rm d} \varphi^2 + r(2B)^{-1} C^2 \, {\rm d} v\right)^2\nonumber\\
&&+
\left(\frac{C}{4}\right)^2 (1-\cos \theta)^2 \left( B^{-1} \, {\rm d} \varphi^1 + r(2A)^{-1} C^2 \, {\rm d} v\right)^2
\Bigg\} \, , \nonumber
\end{eqnarray}
where
\begin{eqnarray}
\exp[-\lambda(\theta)] &=& A^2(1+\cos \theta)^2 + B^2(1-\cos \theta)^2 + \left( \frac{C^2}{16 AB} \right)^2 \sin^2 \theta \, .
\end{eqnarray}
The quantity $A-B$ must be non-zero on account of the third constraint.
Note that $\exp \lambda(\theta) \neq 0$ for $0 \le \theta \le \pi$, so we can explicitly read off from the metric that
the norm of $\partial/\partial \varphi^2$ [i.e., the coefficient of $({\rm d} \varphi^2)^2$] vanishes at
$\theta = \pi$, whereas the norm of $\partial/\partial \varphi^1$ [i.e., the coefficient of $({\rm d} \varphi^1)^2$] vanishes at
$\theta = 0$. This is the characteristic feature of the action of $U(1)^2$ on the 3-sphere.
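The text leaves the back-substitution implicit; one consistent solution of the constraints (our own, easily checked) is $\mu_1 = 4A^2/((A-B)C)$, $\mu_2 = 4B^2/((A-B)C)$, $s_{01} = C/(4B(A-B))$, $s_{02} = C/(4A(A-B))$, verified here in exact arithmetic with arbitrary sample values:

```python
# Check (ours) that this parametrization satisfies the three S^3
# constraints and the relation C = 4 c_+^2 / ((c_+ - c_-) mu_1) of
# eq. (cdet); A = c_+, B = c_- are arbitrary rationals with A != B.
from fractions import Fraction as F

A, B, C = F(2), F(-1, 2), F(3)
mu1 = 4 * A**2 / ((A - B) * C)
mu2 = 4 * B**2 / ((A - B) * C)
s01 = C / (4 * B * (A - B))
s02 = C / (4 * A * (A - B))
assert A**2 * mu2 == B**2 * mu1                     # c_+^2 mu_2 = c_-^2 mu_1
assert A * s01 * mu2 == B * s02 * mu1               # second constraint
assert (A - B) * (mu1 * s02 - mu2 * s01) == F(1)    # determinant constraint
assert C == 4 * A**2 / ((A - B) * mu1)              # eq. (cdet)
```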
\medskip
\noindent
{\bf Topology $H \cong L(p,q)$}: In this case, $a_+ = (1,0), a_-=(q,p)$, where $p,q \in \mathbb Z$ and $p \neq 0$.
The constraints~\eqref{constr} are explicitly
\begin{equation}
c_+^2 (q\mu_1+p\mu_2) = c_-^2 \mu_1 \, , \quad
c_+ s_{01} \, (q\mu_1+p\mu_2) = c_- (qs_{01} + ps_{02}) \, \mu_1 \, , \quad
(c_+ - c_-)
\left|
\begin{matrix}
\mu_1 & s_{01}\\
\mu_2 & s_{02}
\end{matrix}
\right|
= 1 \, .
\end{equation}
We choose as the independent parameters $A:= c_+/p, B:= c_-/p$, and
$C$ given by eq.~\eqref{cdet}, and solve for the remaining ones
using the constraints. The result is plugged back into
the equation for the near horizon metric~\eqref{NH}. After some calculation,
one ends up with the result
\begin{eqnarray}\label{glpq}
g &=& {\rm e}^{-\lambda}(2{\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2 + C^{-2} {\rm d} \theta^2)
\\
&&+ p^2 {\rm e}^{+\lambda}\Bigg\{
\left(\frac{4p}{C}\right)^2 \sin^2 \theta \left( A^2 (1/p) {\rm d} \varphi^1 + B^2 ({\rm d} \varphi^2 - (q/p){\rm d} \varphi^1) + rABC^2 {\rm d} v \right)^2 \nonumber\\
&&+
\left(\frac{C}{4p}\right)^2 (1+\cos \theta)^2 \left( A^{-1}({\rm d} \varphi^2 - (q/p) {\rm d} \varphi^1) + r(2B)^{-1} C^2 \, {\rm d} v\right)^2\nonumber\\
&&+
\left(\frac{C}{4p}\right)^2 (1-\cos \theta)^2 \left( (pB)^{-1} {\rm d} \varphi^1
+r(2A)^{-1} C^2 \, {\rm d} v\right)^2
\Bigg\}\nonumber \, ,
\end{eqnarray}
where
\begin{eqnarray}
\exp[-\lambda(\theta)] &=&
p^2\Big[
A^2(1+\cos \theta)^2 + B^2(1-\cos \theta)^2 + \left( \frac{C^2}{16p^2 AB} \right)^2 \sin^2 \theta
\Big]\, .
\end{eqnarray}
We note that at $\theta = \pi$, the Killing field $\partial/\partial \varphi^1$ has vanishing norm,
while at $\theta = 0$, the Killing field $q \partial/\partial \varphi^1 + p \partial/\partial \varphi^2$ has vanishing norm. This is the characteristic feature of the action of $U(1)^2$ on the Lens space $L(p,q)$.
The metrics with $H \cong L(p,q)$ just described are closely related to those in the case $H \cong S^3$ described in
the previous example. Indeed, in the case $H \cong S^3$,
consider the map given by $(\varphi^1, \varphi^2) \mapsto (\varphi^1 + 2\pi/p, \varphi^2 + 2\pi q/p)$,
leaving invariant the other coordinates, where $\varphi^1, \varphi^2$ are $2\pi$-periodic. This map is an isometry of the metric with $H \cong S^3$, and by repeated application
generates the subgroup ${\mathbb Z}_p$ of the full isometry group. If we factor by this group,
then we get a metric with $H \cong L(p,q)$, and we claim that this metric is exactly the one just given.
To see this more explicitly, we note that factoring by the above group ${\mathbb Z}_p$ of isometries in effect
imposes the further identifications
\begin{equation}\label{ident}
(\varphi^1, \varphi^2) \cong (\varphi^1 + 2\pi/p, \varphi^2 + 2\pi q/p)
\end{equation}
on the angular coordinates in the metric~\eqref{gs3}, which were initially $2\pi$-periodic. If we let
\begin{equation}
f: (r,v,\theta,\varphi^1, \varphi^2) \mapsto (r, p^2 v, \theta, (1/p) \varphi^1, \varphi^2 - (q/p) \varphi^1) \,
\end{equation}
then $f$ provides an invertible mapping from the ordinary $2\pi$-periodic coordinates to the coordinates with the identifications~\eqref{ident}.
If we now take the metric~\eqref{gs3} in the case $H \cong S^3$, factor it by
${\mathbb Z}_p$, pull it back by $f$, and furthermore put $C \to C/p$, then we get precisely the $H \cong L(p,q)$
metrics~\eqref{glpq}. Thus, all metrics in the case $H \cong L(p,q)$ arise from the case
$H \cong S^3$ by taking quotients. The same statement (with similar proof) is true in all dimensions $D$.
\vspace{1cm}
Let us finally briefly discuss an example of our classification in $D=6$ dimensions. In this case, the metrics
are classified by the discrete parameters $a_\pm$ [see eq.~\eqref{apm}] and $7$ real continuous parameters. An example is
\medskip
\noindent
{\bf Topology $S^3 \times S^1$:} In this case, $a_+ = (1,0,0), a_-=(0,1,0)$. The constraints are explicitly
\begin{equation}
c_+ s_{01} \mu_2 = c_- s_{02} \mu_1 \, , \quad
c_+ s_{11} \mu_2 = c_- s_{12} \mu_1 \, , \quad c_+^2 \mu_2 = c_-^2 \mu_1 \, , \quad
(c_+ - c_-) \left|
\begin{matrix}
\mu_1 & s_{01} & s_{11}\\
\mu_2 & s_{02} & s_{12}\\
\mu_3 & s_{03} & s_{13}
\end{matrix}
\right| = 1 \, .
\end{equation}
To simplify the formulae somewhat, we consider the special case that $c_+ = -c_-=: A/2$.
Then the constraints may be solved easily for the remaining parameters. To obtain
a halfway simple expression, we also consider the special case $s_{11} = s_{03} = 0$,
and we denote the remaining free parameters as
$B:=s_{01}, D = \mu_3$, and $C$ as usual. The resulting metric is still rather
complicated and is given by
\begin{eqnarray}\label{NH1}
g &=& {\rm e}^{-\lambda(\theta)} \left( 2{\rm d} v {\rm d} r - C^2 r^2 {\rm d} v^2 + C^{-2} {\rm d} \theta^2 \right) \nonumber\\
&&+ {\rm e}^{+\lambda(\theta)} \Bigg\{
A^4 C^{-2} \sin^2 \theta \left( {\rm d} \varphi^1 + {\rm d} \varphi^2 + A^{-1}CD \, {\rm d}\varphi^3 - rC^2 \, {\rm d} v \right)^2 \nonumber\\
&&+ \frac{A^2B^2}{4} (1+ \cos \theta)^2 (2 {\rm d} \varphi^2 + A^{-1} CD \, {\rm d} \varphi^3 - rC^2 \, {\rm d} v)^2 \nonumber\\
&&+ \frac{A^2B^2}{4} (1- \cos \theta)^2 (2 {\rm d} \varphi^1 + A^{-1} CD \, {\rm d} \varphi^3 - rC^2 \, {\rm d} v)^2
\Bigg\}
+ \frac{C^2}{4A^4B^2} ({\rm d} \varphi^3)^2 \,.
\end{eqnarray}
Here we also have
\begin{eqnarray}
{\rm e}^{-\lambda(\theta)} &=& \frac{A^2}{2} (1 + \cos^2 \theta)
+ \frac{B^2 C^2}{4} \, \sin^2 \theta \, .
\end{eqnarray}
This special family of metrics depends on only 4 parameters.
It is easy to write down the general 7-parameter family of metrics.
\section{Conclusion}
We have explicitly determined the possible non-static, stationary, smooth, cohomogeneity-one
near horizon geometries satisfying the vacuum Einstein equations. We
excluded by hand\footnote{
See, however, the note added in proof.
}
the case that the horizon topology is $T^{D-2}$.
The solution, described in thm.~\ref{thm1}, is given in closed form in
terms of real and discrete parameters (corresponding to the possible topology types other than $T^{D-2}$), which are subject to certain constraints that take
the form of algebraic equations. After taking into account these
constraints, the metrics depend on
$(D-2)(D-3)/2$ independent real parameters, and two discrete ones.
For example, in $D=5$, we initially have 3 real continuous parameters.
We have worked out this case explicitly, as did~\cite{kl},
but our metrics are presented in different coordinates\footnote{We also do not distinguish between
the subcases ``A'' and ``B'' as in~\cite{kl} but instead give a unified expression for the metric.}
for the case $H \cong S^3$. By contrast to $D \le 5$, in dimensions $D \ge 6$ not all of the near horizon metrics that we have found can be obtained as the near horizon limit of a known black hole solution, so in this sense some of our metrics are new. It is conceivable that there are further extremal black hole solutions---to be found---which give our metrics in the near horizon limit, but it is also possible that some of our metrics in $D \ge 6$ simply do not arise in this way.
Our method as described only works for vacuum solutions. However, we expect that it can be generalized
to any theory whose equations can be recast into equations of the sigma-model type that we encounter.
Thus we expect our method to be applicable e.g. to 5-dimensional minimal supergravity, see e.g.~\cite{bou,virmani,Clement,tom}. By contrast,
our method does not seem applicable straightforwardly to the case of a cosmological constant.
In our proof, we also assumed that the metrics are {\em not} static. All static near horizon geometries were
found in~\cite{kl1} in $D=5$ and in \cite{crt06} in arbitrary dimensions.
It would be interesting to see whether our classification can be used to prove a black hole uniqueness
theorem in arbitrary dimensions
for extremal black holes along the lines of~\cite{don,fl}, thereby generalizing~\cite{hs,hs1}. It would also
be interesting to investigate whether our analysis can be used to obtain new structural insights into the origin of the Bekenstein-Hawking
entropy, e.g. by considering a suitably quantized version of eq.~\eqref{phieq}.
\vspace{1cm}
{\bf Acknowledgements:}
S.H. would like to thank the {\em Centro de Ciencias de Benasque Pedro Pascual}
for its hospitality during the inspiring programme on ``Gravity - New perspectives from strings and higher dimensions'',
where a key part of this work was done. He would also like to thank P. Figueras, H. Kunduri and especially
J. Lucietti for numerous useful discussions.
We especially would like to thank the unknown referee for pointing out
an error in the counting of parameters of our solutions and for
suggesting a simplification of formula~(\ref{NH1}).
This work is supported in part by the Grant-in-Aid for Scientific
Research from the Ministry of Education, Science and Culture of Japan.
\vspace{2cm}
{\bf Note added in proof:}
In our analysis, we excluded by hand the horizon topology
$T^{D-2}$. There cannot exist any asymptotically flat or Kaluza-Klein
black hole solutions with this topology by general
arguments~\cite{galoway, hs}.
Hence, at any rate, near horizon geometries with this topology could not arise as the near
horizon limits of a black hole.
After we finished this work, it was confirmed by J. Holland that there
cannot be any {\em non-static} cohomogeneity-one near horizon
geometries with topology $H \cong T^{D-2}$ \cite{holland}. Hence our main
theorem~\ref{thm1} covers {\em all} possibilities with $D-3$ commuting
rotational symmetries.
The static case is covered by the results of \cite{crt06}.
\section{Proof of Theorem~\ref{theo:approx}}
\label{sec:approx}
\begin{proof}[Proof of Theorem~\ref{theo:approx}]
\textit{Notation:}
Note that in this proof we use the notation of~(\ref{dir:pot}).
So, for a potential vector $\psi$, we have
$\psi_{u\to v} := \psi_u-\psi_v$ if $(u,v)$ is an edge and $\psi_u \ge \psi_v$,
and $\psi_{u\to v}:=0$ otherwise. So, for example, the potential
difference on $(u,v)$
can be written as $\psi_{u\to v}+\psi_{v\to u}$. On the other hand,
we use the single letter edge notation $\varphi_e$ to denote the signed
(according to $B$) potential difference on $e$,
so $\varphi_e:=\varphi_u-\varphi_v$ if $B_e=\delta_u-\delta_v$. Let $D$ be the
maximum degree.
\textit{Edge approximation:}
Fix any unit electric flow, defined by potentials
$\varphi:=\sum_v \alpha_v \varphi^{[v]}$, and write its approximation
as $\tilde{\varphi}:=\sum_v \alpha_v \tilde{\varphi}^{[v]}$.
All unit flows can be so expressed under the restriction
that $\sum_v \alpha_v = 0$ and $\sum_v |\alpha_v|=2$.
The approximation condition~(\ref{ineq:pot:approx})
combined with Lemma~\ref{lem:l2:apx} gives us, for every edge $e=(u,v)$,
\begin{align*}
|\varphi_e - \tilde{\varphi}_e|
&= \big|\sum_v\alpha_v(\varphi^{[v]}_e - \tilde{\varphi}^{[v]}_e)\big| \\
&\le \sum_v |\alpha_v|\cdot\big|\varphi^{[v]}_e - \tilde{\varphi}^{[v]}_e\big| \\
&\le \sum_v |\alpha_v| \cdot 2\nu
&\text{apply Lemma~\ref{lem:l2:apx}}\\
&= 4\nu
\end{align*}
We call this the \textit{additive edge approximation condition}
\begin{align}
\varphi_e -4\nu \le \tilde{\varphi}_e \le \varphi_e + 4\nu \label{ineq:edge:add}
\end{align}
Now, consider a fixed path $\gamma$
along the electric flow defined by $\varphi$,
traversing vertices $w_0,w_1,\dots,w_k$. Let $\Pr_\varphi\{W=\gamma\}$
and $\Pr_{\tilde{\varphi}}\{W=\gamma\}$ denote the probability of this path
under the potentials $\varphi$ and $\tilde{\varphi}$, respectively.
In most of what follows, we build machinery to relate one to the other.
\textit{Path probabilities:}
For a general unit flow (not necessarily an $(s,t)$-flow), defined by
vertex potentials $\psi$, $\Pr_{\psi}\{W=\gamma\}$ equals
\begin{align} \label{prob:path}
\Pr_\psi\{W_0=w_0\}\Bigg(\prod_{i=0}^{k-1}
\Pr_\psi\{W_{i+1}=w_{i+1}\,|\,W_i=w_i\}\Bigg)
\Pr_\psi\{W_\infty=w_k\,|\,w_k\},
\end{align}
where next we explain each factor in turn.
The first, $\Pr_{\psi}\{W_0=w_0\}$, is the probability that
the walk starts from $w_0$, and is expressed as
\begin{align}
\Pr_\psi\{W_0=w_0\}
= \max\Big(0,
\sum_u \psi_{w_0\to u} - \sum_u \psi_{u\to w_0} \Big). \label{prob:start}
\end{align}
The second and trickiest, $\Pr_\psi\{W_{i+1}=w_{i+1}\,|\,W_i=w_i\}$, is
the probability that, having reached $w_i$, the walk traverses the edge
leading to $w_{i+1}$; whether or not
$\sum_u \psi_{w_i\to u} \ge \sum_u \psi_{u\to w_i}$ holds, we write
\begin{align}
\Pr_\psi\{W_{i+1}=w_{i+1}\,|\,W_i=w_i\}
= \frac{\psi_{{w_i\to w_{i+1}}}}
{\displaystyle\max\Big(\sum_u \psi_{u\to w_i},\sum_u \psi_{w_i\to u}\Big)}.
\label{prob:trans}
\end{align}
To grasp the meaning of the denominator, note that the quantity
$|\sum_u \psi_{u\to w_i} - \sum_u \psi_{w_i\to u}|$ is the magnitude
of the net in- or out-flow (depending on the case) at $w_i$.
The third, $\Pr_\psi\{W_\infty=w_k\,|\,w_k\}$, is the probability that the walk
ends (or exits) at $w_k$ conditioned on having reached $w_k$, and
\begin{align}
\Pr_\psi\{W_\infty=w_k\,|\,w_k\}
= \max \Big(0,\sum_u \psi_{u\to w_k} - \sum_u \psi_{w_k\to u}\Big).
\label{prob:end}
\end{align}
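To make the three factors concrete, the following minimal sketch evaluates (\ref{prob:start}), (\ref{prob:trans}) and (\ref{prob:end}) on a hypothetical unit $(0,2)$-flow on a 4-cycle with chord $(0,2)$; here $\psi_{u\to v}$ denotes the nonnegative flow from $u$ to $v$ (zero when the edge carries flow the other way), and the per-path products of (\ref{prob:path}) sum to one:

```python
# Factors of the path probability (prob:start), (prob:trans), (prob:end),
# evaluated on a hypothetical unit (0,2)-flow on a 4-cycle with chord (0,2).
psi = {0: {1: 0.25, 3: 0.25, 2: 0.5},   # psi[u][v] = flow from u to v (>= 0)
       1: {2: 0.25}, 3: {2: 0.25}, 2: {}}

def out(w): return sum(psi.get(w, {}).values())
def inn(w): return sum(nb.get(w, 0.0) for nb in psi.values())

def p_start(w):                 # (prob:start): walk starts at w
    return max(0.0, out(w) - inn(w))

def p_trans(u, v):              # (prob:trans): forward from u to v
    return psi[u].get(v, 0.0) / max(inn(u), out(u))

def p_end(w):                   # (prob:end): walk exits at w
    return max(0.0, inn(w) - out(w))

def p_path(path):               # (prob:path): product of the three factor kinds
    pr = p_start(path[0])
    for u, v in zip(path, path[1:]):
        pr *= p_trans(u, v)
    return pr * p_end(path[-1])

assert abs(p_start(0) - 1.0) < 1e-12 and abs(p_end(2) - 1.0) < 1e-12
probs = [p_path(p) for p in ([0, 2], [0, 1, 2], [0, 3, 2])]
assert abs(sum(probs) - 1.0) < 1e-12   # the three paths carry all the mass
```

The example flow splits half a unit along the chord and a quarter along each two-hop path, so the three path probabilities come out as $0.5$, $0.25$, $0.25$.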
Next, we are going to find multiplicative bounds for all three factors
by focusing on ``dominant'' paths, and discarding ones with overall negligible
probability.
\textit{Dominant paths:}
It is straightforward to verify (from first principles)
that the probability that an edge $(u,v)$
occurs in the electric walk equals $|\varphi_e| = \varphi_{u\to v}+\varphi_{v\to u}$.
We call an edge \textit{short} if
$|\varphi_e|\le \epsilon$, where
the exact asymptotic of $\epsilon > 0$ is determined later,
but for the moment $\nu \ll \epsilon \ll 1$. We restrict our
attention to \textit{dominant} paths $\gamma$
that traverse no short edges, and have
$\Pr_\varphi\{W_0=w_0\}\ge\epsilon$ and
$\Pr_\varphi\{W_\infty=w_k\,|\,w_k\}\ge\epsilon$.
Indeed, by a union bound, the probability that
the electric walk traverses a non-dominant path
is at most $2n\epsilon+n^2\epsilon$. This
will be negligible and such paths will be of no interest. In summary,
\begin{align}
\Pr_{\varphi}\{\text{$W$ dominant}\} \ge 1 - 2n\epsilon - n^2\epsilon
\label{prob:dom}
\end{align}
We now
condition on the event that $\gamma$ is dominant.
The no short edge condition gives
$\epsilon\le|\varphi_e|\le 1$, and using~(\ref{ineq:edge:add})
we derive the stronger \textit{multiplicative edge approximation condition}
\begin{align}
\frac{1}{\sigma} \le \frac{\tilde{\varphi}_e}{\varphi_e} \le \sigma,
\text{ where } \sigma = 1 + \frac{8D\nu}{\epsilon}, \label{ineq:edge:mul}
\end{align}
which holds as long as $\epsilon\ge 4\nu$, as guaranteed
by the asymptotics of $\epsilon$. Also note that the latter condition ensures
that $\varphi_e$ and $\tilde{\varphi}_e$ have the same sign.
An extra factor of $2D$ is included in $\sigma$ with
foresight.
For the first factor~(\ref{prob:start}), note that path dominance gives
$\Pr_\varphi\{W_0=w_0\}\ge\epsilon>4D\nu$, so the argument of the max
in~(\ref{prob:start}) remains positive under $\tilde{\varphi}$, and we have
\begin{align} \label{bound:start}
\Pr_{\tilde{\varphi}}\{W_0=w_0\}
&= \sum_u \tilde{\varphi}_{w_0\to u} - \sum_u \tilde{\varphi}_{u\to w_0} \\
&\ge \sum_u \varphi_{w_0\to u} - \sum_u \varphi_{u\to w_0} - 4D\nu
&\text{use~(\ref{ineq:edge:add})} \notag \\
&\ge \Pr_{\varphi}\{W_0=w_0\} \Big(1-\frac{4D\nu}{\epsilon}\Big)
&\text{use $\Pr_{\varphi}\{W_0=w_0\}\ge \epsilon$} \notag \\
&\ge \frac{1}{\sigma}\Pr_{\varphi}\{W_0=w_0\}
&\text{use $4D\nu/\epsilon \le 1/2$}. \notag
\end{align}
For the second factor~(\ref{prob:trans}), assume
$\sum_u \varphi_{u\to w_i} \ge \sum_u \varphi_{w_i\to u}$. An identical
argument holds in the other case.
Abbreviate
\begin{align*}
\Pr_{\tilde{\varphi}}\{w_{i+1}\,|\,w_i\}
:= \Pr_{\tilde{\varphi}}\{W_{i+1}=w_{i+1}\,|\,W_i=w_i\}.
\end{align*}
Path dominance implies
$\sum_u \varphi_{u\to w_i} \ge \epsilon$, and so
\begin{align} \label{bound:trans}
\Pr_{\tilde{\varphi}}\{w_{i+1}\,|\,w_i\}
&= \frac{\tilde{\varphi}_{w_i\to w_{i+1}}}
{\displaystyle\sum_u \tilde{\varphi}_{u\to w_i}} \\
%
&\ge \frac{\sigma^{-1}\cdot\varphi_{w_i\to w_{i+1}}}
{\displaystyle\sum_u \varphi_{u\to w_i}+4D\nu}
&\text{use~(\ref{ineq:edge:mul}) and~(\ref{ineq:edge:add})} \notag \\
%
&\ge \sigma^{-2} \frac{\varphi_{w_i\to w_{i+1}}}{\sum_u \varphi_{u\to w_i}}
&\text{use $\sum_u \varphi_{u\to w_i} \ge \epsilon$ and
$\sigma - 1 = 8D\nu/\epsilon$} \notag \\
%
&= \frac{1}{\sigma^2} \Pr_{\varphi}\{w_{i+1}\,|\,w_i\}. \notag
\end{align}
For the third factor~(\ref{prob:end}), similarly to the first, we have
\begin{align} \label{bound:end}
&\Pr_{\tilde{\varphi}}\{W_\infty=w_k\,|\,w_k\}
= \\
&\qquad= \sum_u \tilde{\varphi}_{u\to w_k}
- \sum_u \tilde{\varphi}_{w_k\to u} \notag \\
&\qquad\ge \sum_u \varphi_{u\to w_k} - \sum_u \varphi_{w_k\to u} -4D\nu
&\text{use~(\ref{ineq:edge:add})} \notag \\
&\qquad\ge \Pr_{\varphi}\{W_\infty=w_k\,|\,w_k\}\Big(1-\frac{4D\nu}{\epsilon}\Big)
&\text{use $\Pr_{\varphi}\{W_\infty=w_k\,|\,w_k\}\ge \epsilon$} \notag \\
&\qquad\ge \frac{1}{\sigma} \Pr_{\varphi}\{W_\infty=w_k\,|\,w_k\}
&\text{use $4D\nu/\epsilon \le 1/2$}. \notag
\end{align}
\textit{Dominant path bound:}
We now obtain a relation between $\Pr_\varphi\{W=\gamma\}$ and
$\Pr_{\tilde{\varphi}}\{W=\gamma\}$ by combining the bounds
(\ref{bound:start}), (\ref{bound:trans}) and (\ref{bound:end})
with (\ref{prob:path}):
\begin{align}
\frac{\Pr_{\tilde{\varphi}}\{W=\gamma\}}{\Pr_\varphi\{W=\gamma\}}
&\ge \frac{1}{\sigma^{2n+2}}
&\text{apply bounds, and path length $\le n$}
\label{approx:theta} \\
&\ge \Big(1-\frac{8D\nu}{\epsilon}\Big)^{2n+2}
&\text{use $\sigma^{-1} \ge 1 - 8D\nu/\epsilon$} \notag \\
&\ge \exp\Big(-\frac{16 D\nu}{\epsilon}\Big)^{2n+2}
&\text{use $1-x\ge e^{-2x}$} \notag \\
&=: \theta \notag
\end{align}
\textit{Statistical difference:}
Abbreviate $p(\gamma):=\Pr_\varphi\{W=\gamma\}$ and
$q(\gamma):=\Pr_{\tilde{\varphi}}\{W=\gamma\}$. Below, $\gamma$ iterates
through all paths, $\zeta$ iterates through dominant
paths and $\xi$ iterates through non-dominant paths.
We bound the statistical difference~(\ref{stat:diff}),
using~(\ref{approx:theta}) which says $q(\zeta)\ge\theta\cdot p(\zeta)$,
\begin{align}
&\sum_{\gamma}|p(\gamma)-q(\gamma)| = \notag\\
&\qquad= \sum_\zeta|p(\zeta)-q(\zeta)|+\sum_\xi|p(\xi)-q(\xi)| \notag \\
&\qquad\le \sum_\zeta\Big((1-\theta)p(\zeta)+\big(q(\zeta)-\theta p(\zeta)\big)\Big)
+\sum_\xi \big(p(\xi)+q(\xi)\big)
&\text{use $q(\zeta)\ge\theta p(\zeta)$, $\theta<1$} \notag \\
&\qquad= 2-2\theta\sum_\zeta p(\zeta)
&\text{use $\textstyle\sum_\gamma p(\gamma)=\sum_\gamma q(\gamma)=1$} \notag \\
&\qquad= 2(1-\theta) + 2\theta\sum_\xi p(\xi) \label{stat:diff:2}
\end{align}
In this final step, we pinpoint the asymptotics of $\epsilon$ that
simultaneously minimize the two terms of~(\ref{stat:diff:2}). In the following,
we parameterize $\epsilon=n^{-B}$ (recall $\nu=n^{-A}$) and bound
$\sum_\xi p(\xi)\le 2n\epsilon+n^2\epsilon$ using~(\ref{prob:dom}),
\begin{align*}
&2(1-\theta) + 2\theta\sum_\xi p(\xi) \le \\
&\qquad\le 2-2\exp\Big(-\frac{16D\nu}{\epsilon}\Big)^{2n+2}
+ 2\exp\Big(-\frac{16D\nu}{\epsilon}\Big)^{2n+2}
\big(2n\epsilon+n^2\epsilon\big) \\
&\qquad= 2-2\exp O\big(-Dn^{B-A+1}\big)
+ 2n^{2-B} \cdot\exp O\big(-Dn^{B-A+1}\big) \\
&\qquad= O\big(Dn^{B-A+1}\big)
+ n^{2-B} \cdot\exp O\big(-Dn^{B-A+1}\big),
\qquad\text{use $1-e^{-x}\le x$}\\
&\qquad= O\big(n^{B-A+2}\big) + O\big(n^{2-B}\big)
\qquad\text{use $D\le n$}\\
&\qquad= O\big(n^{2-\frac{A}{2}}\big),
\qquad\text{set $B=A/2$.} &\qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lem:l2:apx}
If $x,y\in\ell_2$ and $\|x-y\|_2\le\nu$, then for all $i\neq j$,
\begin{align*}
(x_i-x_j)-2\nu \le y_i - y_j \le (x_i-x_j)+2\nu.
\end{align*}
\end{lemma}
\begin{proof}
We have $(x_i-y_i)^2 \le \|x-y\|_2^2\le \nu^2$, implying $|x_i-y_i|\le \nu$.
Similarly for $j$. Combining the two proves the lemma.
\end{proof}
\section{Proof of Theorem~\ref{theo:eta:ub}}
\begin{proof}[Proof of Theorem~\ref{theo:eta:ub}]
It is sufficient to consider demand sets that can be routed
in $G$ with unit congestion, since both electric and optimal routing
scale linearly with scaling the entire demand set.
Let $\oplus_\tau d_\tau$ be any demand set,
which can be (optimally) routed in $G$ with unit congestion
via the multi-commodity flow $\oplus_\tau f_\tau$.
Thus, $d_\tau=\sum_e f_{\tau,e}B_e$, for all $\tau$.
The proof involves two steps:
\begin{align*}
\big\|{\mathcal{E}}(\oplus_\tau d_\tau)\big\|_{G}
\overset{\text{(i)}}{\le}
\big\|{\mathcal{E}}(\oplus_e w_e B_e)\big\|_{G}
\overset{\text{(ii)}}{=}
\big\|W^{1/2}\Pi W^{-1/2}\big\|_{1\rightarrow 1}
\end{align*}
Step (i) shows that congestion incurred when routing $\oplus_\tau d_\tau$
is no more than that incurred when routing $G$'s edges, viewed as demands,
through $G$:
\begin{align*}
\big\|{\mathcal{E}}(\oplus_{\tau}d_\tau)\big\|_{G}
&= \big\|\oplus_\tau {\mathcal{E}}(d_\tau)\big\|_{G} \tag{i} \\
&= \big\|\oplus_\tau{\mathcal{E}}\big(\sum_e f_{\tau,e}B_e\big)\big\|_{G}
&\quad\text{use $d_\tau=\sum_e f_{\tau,e}B_e$} \\
&= \big\|\oplus_\tau\sum_e {\mathcal{E}}(f_{\tau,e}B_e)\big\|_{G}
&\quad\text{use ${\mathcal{E}}\big(\sum_j d_j\big)=\sum_j{\mathcal{E}}(d_j)$} \\
&\le \big\|\oplus_{\tau,e}{\mathcal{E}}(f_{\tau,e}B_e)\big\|_{G}
&\quad\text{use
$\big\|\sum_j f_j\big\|_{G} \le \big\|\oplus_j f_j\big\|_{G}$} \\
&= \big\|\oplus_e{\mathcal{E}}\big(\sum_\tau|f_{\tau,e}|B_e\big)\big\|_{G}
&\quad\text{use $\big\|\oplus_j \alpha_j f\big\|_{G}
=\big\|\sum_j |\alpha_j| f\big\|_{G}$} \\
&\le \big\|\oplus_e {\mathcal{E}}(w_e B_e)\big\|_{G}
&\quad\text{use $\sum_\tau|f_{\tau,e}|\le w_e$} \\
&= \big\|{\mathcal{E}}(\oplus_e w_e B_e)\big\|_{G}
\end{align*}
\begin{align*}
\big\|{\mathcal{E}}(\oplus_e w_e B_e)\big\|_{G}
&\overset{(\ref{eq:congnorm})}{=}
\big\|{\mathcal{E}}(\oplus_e w_e B_e)^*W^{-1}\big\|_{1\to 1}
\tag{ii}\\
&\overset{(\ref{eq:el})}{=}
\big\|WBL^{\dag} B^*W W^{-1}\big\|_{1\to 1}
= \|W^{1/2}\Pi W^{-1/2}\|_{1\to 1}. \notag
&&\qedhere
\end{align*}
\end{proof}
\section{Conclusions}
\label{sec:concl}
Our main result in Theorem~\ref{theo:main} attests to the good congestion
properties of graphs of bounded degree and high vertex expansion, i.e.
$\alpha=\Theta(1)$. A variation on the proof of this theorem establishes a
similar bound on $\eta_{\mathcal{E}}$ that is independent of the degree bound
and is instead a function of the \textit{edge expansion}
$\beta=\min_{S\subseteq V}
\frac{\vvol(E(S,S^\complement))}{\min\{\vvol(S),\vvol(S^\complement)\}}$, where
$\vvol(S):=\sum_{v\in S}\sum_{u:u\sim v} w_{u,v}$ and
$\vvol(E(S,S^\complement)):=\sum_{(u,v):u\in S,v\in S^\complement} w_{u,v}$.
The bounded degree assumption is also implicit in our computational procedure
in that all vertices must know an upper bound on $d_{\max}$ in order to
apply $M$ in Theorem~\ref{coro:power}. Using a generous upper bound on
$d_{\max}$, anything $\omega(1)$, is undesirable because it slows down the
mixing of the power polynomial. To avoid this
complication, one must use a symmetrization trick outlined in
Appendix~\ref{sec:symmetrize}.
We conclude with a couple of open questions.
A central concern, widely studied in social networks, is the threat of
Sybil attacks~\cite{sybil}. These can be modeled as graph-theoretic noise, as
defined in~\cite{noise}. It is interesting to understand how such noise
affects electric routing.
We suspect that any $O(\ln n)$-competitive oblivious routing scheme,
which outputs its routes in the ``next hop'' model, must maintain
$\Omega(n)$-size routing tables at every vertex. In the \textit{next hop}
model, every vertex $v$ must be able to answer the question ``What is the
flow of the $(s,t)$-route in the neighborhood of $v$?'' in time
$O(\polylog(n))$,
using its own
routing table alone and for every source-sink pair $(s,t)$.
\section{The geometry of congestion}
\label{sec:cong}
Recall that given a multi-commodity demand, electric routing
assigns to each demand the corresponding electric flow in $G$,
which we express~(\ref{eq:el}) in operator form
${\mathcal{E}}(\oplus_\tau d_\tau) := WBL^{\dag}(\oplus_\tau d_\tau)$.
Electric routing is oblivious, since
${\mathcal{E}}(\oplus_\tau d_\tau)=\oplus_\tau{\mathcal{E}}(d_\tau)$ ensures that
individual demands are routed independently from each other.
The first key step in our analysis, Theorem~\ref{theo:eta:ub},
entails bounding $\eta_{\mathcal{E}}$
by the $\|\cdot\|_{1\to 1}$ matrix norm of a certain natural graph
operator on $G$. This step hinges on the observation that
all linear routing schemes have an easy-to-express worst-case demand set:
\begin{theorem} \label{theo:eta:ub}
For every undirected, weighted graph $G$, let $\Pi=W^{1/2}BL^{\dag} B^*W^{1/2}$,
then
\begin{align}
\eta_{\mathcal{E}} \le \|W^{1/2}\Pi W^{-1/2}\|_{1\rightarrow 1}.
\label{ineq:eta:ub}
\end{align}
\end{theorem}
\input{bound-cong.tex}
\begin{remark}
Note that the proof of step (i) uses only the linearity of ${\mathcal{E}}$ and so
it holds for any linear routing scheme $R$, i.e. one has
$\|R(\oplus_\tau d_\tau)\|_G\le \|R(\oplus_e w_e B_e)\|_G$.
\end{remark}
Using Theorem~\ref{theo:eta:ub}, the unconditional upper bound in
Theorem~\ref{theo:cong:univ} is simply a consequence of basic norm
inequalities. (See Appendix~\ref{sec:cong:u} for a proof.)
Theorem~\ref{theo:main} provides a much stronger bound on $\eta_{\mathcal{E}}$
when the underlying graph has high vertex expansion.
The lower bound in Theorem~\ref{theo:main} is due to Hajiaghayi et
al.~\cite{leighton}. They show that every oblivious routing scheme is bound to
incur congestion of at least $\Omega(\ln n/ \ln\ln n)$ on a certain
family of expander graphs.
The upper bound in Theorem~\ref{theo:main}
follows from Theorem~\ref{theo:eta:ub}, Theorem~\ref{theo:ve:ub} and using that
$\|\Pi\|_{1\to 1}=O(\|L^{\dag}\|_{1\to 1})$ for unweighted bounded-degree graphs.
Thus in the next section we derive a bound on $\|L^{\dag}\|_{1\to 1}$ in terms of
vertex expansion.
\section{Introduction}
\label{sec:intro}
\paragraph{Overview}
We address a vision of the Internet where every participant exchanges messages
with their direct friends and no one else. Yet such an Internet should be able
to support reliable and efficient routing to remote locations identified by
unchanging names in the presence of an ever changing graph of connectivity.
Modestly to this end, this paper investigates the properties of routing along
the electric flow in a graph (\textit{electric routing} for short) for intended
use in such distributed systems, whose topology changes over
time. We focus on the class of expanding graphs which, we believe, gives a good
trade-off between applicability and precision in modeling a large class of
real-world networks. We address distributed representation and
computation. As a measure of performance, we show that electric routing,
being an oblivious routing scheme, achieves near-minimal maximum edge
congestion (as compared to a demand-dependent optimal scheme). Furthermore,
we show that electric
routing continues to work (on average) in the presence of large edge failures
occurring after the routing scheme has been computed,
which attests to its applicability in changing environments. We now proceed to
a formal definition of oblivious routing and statement of our results.
\paragraph{Oblivious routing}
The object of interest is a graph $G=(V,E)$ (with $V=[n]$ and $|E|=m$)
undirected, positively edge-weighted by $w_{u,v}\ge 0$, and not necessarily
simple. The intention is that higher $w_{u,v}$ signifies stronger connectivity
between $u$ and $v$; in particular, $w_{u,v}=0$ indicates the absence of edge
$(u,v)$. For analysis purposes,
we fix an arbitrary orientation ``$\to$'' on the edges $(u,v)$ of $G$, i.e.
if $(u,v)$ is an edge then exactly one of $u\to v$ or $v\to u$ holds.
Two important operators are associated to every $G$.
The \textit{discrete gradient} operator $B\in\matha{R}^{E\times V}$, sending
functions on $V$ to functions on the \textit{undirected} edge set $E$, is
defined as $\chi_{(u,v)}^* B := \chi_u -\chi_v$ if $u\to v$, and
$\chi_{(u,v)}^* B := \chi_v -\chi_u$ otherwise, where $\chi_y$ is the
Kronecker delta function with mass on $y$. For $e\in E$, we use the
shorthand $B_e:= (\chi_e B)^*$. The \textit{discrete divergence} operator is
defined as $B^*$.
A \textit{(single-commodity) demand} of
amount $\alpha > 0$ between $s\in V$ and $t\in V$ is
defined as the vector $d=\alpha(\chi_s-\chi_t)\in\matha{R}^V$. A
\textit{(single-commodity) flow}
on $G$ is defined as a vector $f\in\matha{R}^E$, so that $f_{(u,v)}$ equals the
flow value from $u$ towards $v$ if $u\to v$, and the negative of this value
otherwise. We also use the notation $f_{u\to v}:=f_{(u,v)}$ if $u\to v$, and
$f_{u\to v}:=-f_{(u,v)}$ otherwise. We say that flow $f$ \textit{routes} demand
$d$ if $B^*f=d$. This is a linear algebraic way of encoding the fact that
$f$ is an $(s,t)$-flow of amount $\alpha$.
A \textit{multi-commodity demand}, also called a \textit{demand set}, is a
matrix whose columns constitute the individual demands' vectors. It is given as
the direct product $\oplus_\tau d_\tau$ of its columns.
A \textit{multi-commodity flow} is represented as a matrix $\oplus_\tau
f_\tau$, given as a direct product of its columns, the single-commodity
flows. For clarity, we write $f_{\tau,e}$ for $(f_\tau)_e$.
The flow $\oplus_\tau f_\tau$ \textit{routes} the demand
set $\oplus_\tau d_\tau$ if
$B^*f_\tau = d_\tau$, for all $\tau$, or in matrix notation
$B^*(\oplus_\tau f_\tau)=\oplus_\tau d_\tau$.
The \textit{congestion} $\|\cdot\|_{G}$ of a multi-commodity
flow measures the load
of the most-loaded edge, relative to its capacity. It is given by
\begin{align}
\|\oplus_\tau f_\tau\|_{G}
:= \max_e \sum_\tau \big|f_{\tau,e}/w_e\big|
=\|(\oplus_\tau f_\tau)^*W^{-1}\|_{1\to 1},
\text{ where }\|A\|_{1\to 1}:=\sup_{x\neq 0} \frac{\|Ax\|_1}{\|x\|_1}.
\label{eq:congnorm}
\end{align}
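As an illustration of~(\ref{eq:congnorm}), the sketch below (with hypothetical flow values and capacities) computes the congestion both as the load of the most-loaded edge and as the $\|\cdot\|_{1\to 1}$ norm, i.e. the maximum absolute column sum, of the commodity-by-edge matrix $(\oplus_\tau f_\tau)^*W^{-1}$:

```python
# Congestion of a multi-commodity flow, two ways (hypothetical numbers).
# F[tau][e]: flow of commodity tau on edge e; w[e]: edge capacities.
F = [[0.5, -0.25, 0.25],     # commodity 0
     [0.1,  0.30, -0.40]]    # commodity 1
w = [1.0, 0.5, 2.0]

# Direct definition: load of the most-loaded edge, relative to capacity.
cong = max(sum(abs(F[t][e]) / w[e] for t in range(len(F)))
           for e in range(len(w)))

# Matrix view: the induced 1->1 norm is the maximum absolute column sum
# of the (commodity x edge) matrix with entries F[t][e] / w[e].
A = [[F[t][e] / w[e] for e in range(len(w))] for t in range(len(F))]
col_sums = [sum(abs(A[t][e]) for t in range(len(F))) for e in range(len(w))]
assert abs(cong - max(col_sums)) < 1e-12

# The sup over x in the definition is attained at a standard basis vector:
# no randomly drawn x beats the max column sum.
import random
random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(len(w))]
    Ax = [sum(A[t][e] * x[e] for e in range(len(w))) for t in range(len(F))]
    assert sum(map(abs, Ax)) <= cong * sum(map(abs, x)) + 1e-12
```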
An \textit{oblivious routing scheme} is a (not necessarily linear)
function $R:\matha{R}^V\to\matha{R}^E$ which has the property that $R(d)$
routes $d$ when $d$ is a valid single-commodity demand
(according to our definition). We extend $R$ to a function over demand sets
by defining $R(\oplus_\tau d_\tau):=\oplus_\tau R(d_\tau)$. This says that
each demand in a set is routed independently of the others by its corresponding
$R$-flow.
We measure the ``goodness'' of an oblivious routing scheme by
the maximum traffic that it incurs on an edge (relative to its capacity)
compared to that of the optimal (demand-dependent) routing.
This is captured
by the \textit{competitive ratio $\eta_R$} of the routing scheme $R$, defined
\begin{align}
\eta_R :=
\sup_{\oplus_\tau d_\tau}
\sup_{\substack{\oplus_\tau f_\tau \\
B^* (\oplus_\tau f_\tau) = \oplus_\tau d_\tau}}
\frac{\|R(\oplus_\tau d_\tau)\|_{G}}{\|\oplus_\tau f_\tau\|_{G}}.
\label{def:eta}
\end{align}
Let ${\mathcal{E}}$ denote the (yet undefined) function corresponding to electric
routing. Our main theorem states:
\begin{theorem}\label{theo:main}
For every undirected graph $G$ with unit capacity edges, maximum degree
$d_{\max}$ and \textit{vertex expansion}
$\alpha:=\min_{S\subseteq V}\frac{|E(S,S^\cmpl)|}{\min\{|S|,|S^\cmpl|\}}$,
one has
$\eta_{{\mathcal{E}}} \le
\Big(4\ln \frac{n}{2}\Big) \cdot
\Big(\alpha\ln \frac{2d_{\max}}{2d_{\max}-\alpha}\Big)^{-1}
$. This is tight up to a factor of $O(\ln\ln n)$.
\end{theorem}
The competitive ratio in Theorem~\ref{theo:main} is best achievable for any
oblivious routing scheme up to a factor of $O(\ln\ln n)$ due to a lower bound
for expanders, i.e. the case $\alpha = \Theta(1)$, given in~\cite{leighton}.
Theorem~\ref{theo:main} can be extended to other definitions of graph
expansion, weighted and unbounded-degree graphs. We omit these extensions for
brevity. We also give an unconditional, albeit much worse, bound on $\eta_{\mathcal{E}}$:
\begin{theorem}\label{theo:cong:univ}
For every unweighted graph on $m$ edges,
electric routing has $\eta_{\mathcal{E}} \le O(m^{1/2})$.
Furthermore, there are families of graphs with corresponding demand sets
for which $\eta_{\mathcal{E}} = \Omega(m^{1/2})$.
\end{theorem}
\paragraph{Electric routing}
Let $W=\vdiag(\dots,w_e,\dots)\in\matha{R}^{E\times E}$ be the edge weights
matrix. We appeal to a known connection between graph Laplacians and electric
current~\cite{snell, spielman-lecture}. Graph edges are viewed as wires of
resistance $w_e^{-1}$ and vertices are viewed as connection points. If
$\varphi\in\matha{R}^V$ is a vector of vertex potentials then, by Ohm's law, the
\textit{electric flow} (over the edge set)
is given by $f=WB\varphi$ and the corresponding demand is
$B^*f=L\varphi$ where the \textit{(un-normalized) Laplacian} $L$ is defined as
$L=B^*WB$. Central to the present work will be the
vertex potentials that induce a desired $(s,t)$-flow, given by
$\varphi^{[s,t]}=L^{\dag}(\chi_s-\chi_t)$, where $L^{\dag}$ is the pseudo-inverse of $L$.
Thus, the electric flow corresponding to the
demand pair $(s,t)$ is $WB\varphi^{[s,t]}=WBL^{\dag}(\chi_s-\chi_t)$.
We define the \textit{electric routing} operator as
\begin{align}
{\mathcal{E}}(d) = WBL^{\dag} d \label{eq:el}
\end{align}
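To make~(\ref{eq:el}) concrete, here is a minimal numerical sketch (not the distributed procedure discussed later): on a hypothetical unweighted 4-cycle with a chord, so $W=I$, we ground the sink, solve the reduced system $L\varphi=\chi_s-\chi_t$ by plain Gaussian elimination, and read off $f=WB\varphi$:

```python
# Electric routing E(d) = W B L^+ d on a toy unweighted graph (W = I).
# Hypothetical example: 4-cycle 0-1-2-3 plus chord (0,2); demand chi_0 - chi_2.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

def electric_flow(n, edges, s, t):
    """Vertex potentials and edge flows of the unit (s,t)-electric flow."""
    idx = [v for v in range(n) if v != t]          # ground the sink: phi_t = 0
    pos = {v: i for i, v in enumerate(idx)}
    L = [[0.0] * (n - 1) for _ in range(n - 1)]    # reduced Laplacian
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if a != t:
                L[pos[a]][pos[a]] += 1.0
                if b != t:
                    L[pos[a]][pos[b]] -= 1.0
    rhs = [0.0] * (n - 1)
    rhs[pos[s]] = 1.0                              # demand chi_s - chi_t
    x = solve(L, rhs)
    phi = [x[pos[v]] if v != t else 0.0 for v in range(n)]
    flow = {e: phi[e[0]] - phi[e[1]] for e in edges}   # f = W B phi, W = I
    return phi, flow

n, edges = 4, [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
phi, flow = electric_flow(n, edges, s=0, t=2)
# Flow conservation B* f = chi_s - chi_t at every vertex:
for v in range(n):
    net = sum(f for (a, _), f in flow.items() if a == v) \
        - sum(f for (_, b), f in flow.items() if b == v)
    assert abs(net - (1.0 if v == 0 else -1.0 if v == 2 else 0.0)) < 1e-9
print(flow)  # half a unit on the chord, a quarter on each two-hop path
```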
The vector ${\mathcal{E}}(\chi_s-\chi_t)\in\matha{R}^E$ encodes a unit flow from
$s$ to $t$ supported on $G$, where the flow along an edge $(u,v)$ is given by
$\llbracket st,uv \rrbracket
:={\mathcal{E}}(\chi_s-\chi_t)_{u\to
v}=(\varphi^{[s,t]}_u-\varphi^{[s,t]}_v)w_{u,v}$.\footnote{The bilinear form
$\llbracket st,uv \rrbracket = \chi_{s,t} BL^{\dag} B^* \chi_{u,v}$
acts like a ``representation'' of $G$, hence the custom bracket notation.}
(Our convention is that current flows towards lower potential.)
When routing an indivisible message (an IP packet e.g.), we can view the
unit flow ${\mathcal{E}}(\chi_s-\chi_t)$ as a distribution over $(s,t)$-paths
defined recursively as follows: Start at $s$.
At any vertex $u$, \textit{forward} the message
along an edge with positive flow, with probability proportional to the
edge flow value. Stop when $t$ is reached.
This rule defines the \textit{electric walk} from $s$ to $t$.
It is immediate that the flow value over an edge $(u,v)$ equals the
probability that the electric walk traverses that edge.
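The walk distribution can be verified on a small example. The self-contained sketch below (a hypothetical unweighted 4-cycle with chord $(0,2)$; sink grounded, Laplacian solved by Gaussian elimination) enumerates every positive-flow path with its probability and checks that each edge's traversal probability equals its flow value:

```python
# The electric walk as a distribution over (s,t)-paths.
# Hypothetical example: 4-cycle 0-1-2-3 with chord (0,2), s = 0, t = 2.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

n, edges, s, t = 4, [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)], 0, 2
grounded = [v for v in range(n) if v != t]
pos = {v: i for i, v in enumerate(grounded)}
Lr = [[0.0] * (n - 1) for _ in range(n - 1)]       # reduced Laplacian
for u, v in edges:
    for a, b in ((u, v), (v, u)):
        if a != t:
            Lr[pos[a]][pos[a]] += 1.0
            if b != t:
                Lr[pos[a]][pos[b]] -= 1.0
rhs = [0.0] * (n - 1)
rhs[pos[s]] = 1.0
x = solve(Lr, rhs)
phi = [x[pos[v]] if v != t else 0.0 for v in range(n)]

# Positive-flow successors: at u, forward with probability prop. to flow.
succ, out = {}, {}
for u, v in edges:
    f = phi[u] - phi[v]
    a, b, val = (u, v, f) if f > 0 else (v, u, -f)
    if val > 1e-12:
        succ.setdefault(a, []).append((b, val))
        out[a] = out.get(a, 0.0) + val

paths = []
def rec(u, es, pr):
    if u == t:
        paths.append((es, pr)); return
    for v, val in succ.get(u, []):
        rec(v, es + [(u, v)], pr * val / out[u])
rec(s, [], 1.0)

assert abs(sum(pr for _, pr in paths) - 1.0) < 1e-9
# Each edge's traversal probability equals its (absolute) flow value.
trav = {}
for es, pr in paths:
    for e in es:
        trav[e] = trav.get(e, 0.0) + pr
for u, nbrs in succ.items():
    for v, val in nbrs:
        assert abs(trav.get((u, v), 0.0) - val) < 1e-9
```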
Let ``$\sim$'' denote the vertex adjacency relation of $G$. In order to make
a (divisible or indivisible) forwarding decision, a vertex $u$ must be able to
compute $\llbracket st,uv \rrbracket$ for all neighbors
$v\sim u$ and all pairs $(s,t)\in\binom{V}{2}$. We address this next.
\paragraph{Representation}
In order to compute $\llbracket st,uv \rrbracket$ (for all $s,t \in V$ and
all $v\sim u$) at $u$, it suffices that $u$ stores the vector
$\varphi^{[w]}:=L^{\dag} \chi_w$, for all $w\in\{w:w\sim u\}\cup\{u\}$. This is
apparent from writing
\begin{align}
\llbracket st,uv \rrbracket
= (\chi_u-\chi_v)L^{\dag}(\chi_s-\chi_t)
= (\varphi^{[u]}-\varphi^{[v]})^*(\chi_s-\chi_t),
\label{fwd:flow}
\end{align}
where we have (crucially) used the fact that $L^{\dag}$ is symmetric. The
vectors $\varphi^{[w]}$ stored at $u$ comprise the \textit{(routing) table}
of $u$, which consists of $\deg(u)\cdot n$ real numbers.
Thus the per-vertex table sizes of our scheme grow linearly with the vertex
degree \textendash\ a property we call \textit{fair} representation.
It seems that fair representation is key for routing in heterogeneous systems
consisting of devices with varying capabilities.
Equation~(\ref{fwd:flow}),
written as $\llbracket st,uv \rrbracket=(\chi_s-\chi_t)^*(\varphi^{[u]}-\varphi^{[v]})$,
shows that in order to compute $\llbracket st,uv \rrbracket$ at $u$, it suffices
to know the \textit{indices} of $s$ and $t$ (in the $\varphi^{[w]}$'s).
These indices could be represented by $O(\ln n)$-bit opaque vertex ID's
and could be carried in the message headers. Routing schemes that support
opaque vertex addressing are called \textit{name-independent}. Name
independence allows for vertex name persistence across time (i.e. changing
graph topology) and across multiple co-existing routing schemes.
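A sketch of the forwarding rule~(\ref{fwd:flow}) on a hypothetical 4-vertex example; the pseudo-inverse is computed via the identity $L^{\dag}=(L+J/n)^{-1}-J/n$ (valid for connected graphs, with $J$ the all-ones matrix), an illustration device rather than the paper's distributed computation:

```python
# Table-based forwarding (fwd:flow): vertex u evaluates [st,uv] from the
# stored tables phi^[u], phi^[v] and the indices of s, t alone.
# Hypothetical graph: 4-cycle 0-1-2-3 plus chord (0,2), unit weights.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    m = len(b)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(m):
            if r != c and M[r][c] != 0.0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][m] for i in range(m)]

L = [[0.0] * n for _ in range(n)]
for u, v in edges:
    L[u][u] += 1.0; L[v][v] += 1.0
    L[u][v] -= 1.0; L[v][u] -= 1.0

# Pseudo-inverse via L^+ = (L + J/n)^{-1} - J/n (connected graph).
A = [[L[i][j] + 1.0 / n for j in range(n)] for i in range(n)]
inv_cols = [solve(A, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
Ldag = [[inv_cols[j][i] - 1.0 / n for j in range(n)] for i in range(n)]

# The routing table of vertex w is the column phi^[w] = L^+ chi_w.
phi = [[Ldag[i][w] for i in range(n)] for w in range(n)]

def bracket(s, t, u, v):
    """[st,uv]: flow on edge (u,v) of the unit (s,t)-electric flow."""
    return (phi[u][s] - phi[u][t]) - (phi[v][s] - phi[v][t])

assert abs(bracket(0, 2, 0, 2) - 0.5) < 1e-9   # half a unit on the chord
assert abs(bracket(0, 2, 0, 1) - 0.25) < 1e-9  # a quarter on each side path
assert abs(bracket(0, 2, 0, 1) - bracket(0, 1, 0, 2)) < 1e-9  # symmetry
```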
\paragraph{Computation}
We use an idealized computational model to facilitate this exposition. The
vertices of $G$ are viewed as processors, synchronized by a global step
counter. During a time step, pairs of processors can exchange messages of
arbitrary (finite) size as long as they are connected by an edge.
We describe an algorithm for computing
approximations $\tilde{\varphi}^{[v]}$ to
all $\varphi^{[v]}$ in $O(\ln n/\lambda)$ steps, where $\lambda$ is
the Fiedler eigenvalue of $G$ (the smallest non-zero eigenvalue of $L$).
If $G$ is an expander, then $\lambda=\Theta(1)$.
At every step the algorithm sends messages consisting of
$O(n)$ real numbers across every edge and performs $O(\deg(v)\cdot n)$
arithmetic operations on each processor $v$.
Using standard techniques, this algorithm can be
converted into a relatively easy-to-implement
asynchronous one. (We omit this detail from here.)
It is assumed that no graph changes occur during the computation of
vertex tables.
A vector $\zeta\in\matha{R}^V$ is \textit{distributed} if $\zeta_v$ is
stored at $v$, for all $v$. A matrix $M\in\matha{R}^{V\times V}$
is \textit{local} (with respect to $G$)
if $M_{u,v}\neq 0$ implies $u\sim v$ or $u = v$. It is straightforward that
if $\zeta$ is distributed and $M$ is local,
then $M\zeta$ can be computed in a single step, resulting in a new distributed
vector. Extending this technique shows that for any polynomial $q(\cdot)$,
the vector $q(M)\zeta$ can be computed in $\deg(q)$ steps.
The Power Method gives us a matrix polynomial $q(\cdot)$ of degree $O(\ln n /
\lambda)$ such that $q(L)$ is a ``good'' approximation of $L^{\dag}$. We compute
the distributed vectors $\zeta^{[w]}:=q(L)\chi_w$, for all $w$, in parallel.
As a result, each vertex $u$ obtains
$\tilde{\varphi}^{[u]}=(\zeta^{[1]}_u,\dots,\zeta^{[n]}_u)$, which approximates
$\varphi^{[u]}$ according to Theorem~\ref{coro:power} and the symmetry of $L$.
In one last step, every processor $u$ sends
$\tilde{\varphi}^{[u]}$ to its neighbors. The approximation error $n^{-5}$ is
chosen to suffice (in accordance with Corollary~\ref{coro:approx})
as discussed next.
\begin{theorem}\label{coro:power}
Let $\lambda$ be the Fiedler (smallest non-zero) eigenvalue of $G$'s Laplacian
$L$, and let $G$ be of bounded degree $d_{\max}$. Then
$\|\zeta^{[v]}-\varphi^{[v]}\|_2\le n^{-5}$, where
$\zeta^{[v]} = (2d_{\max})^{-1} \sum_{\omega=0}^k M^\omega \chi_v$
and $M=I-L/2d_{\max}$, as long as $k \ge \Omega(\lambda^{-1}\cdot \ln n)$.
\end{theorem}
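A numerical sanity check of Theorem~\ref{coro:power} on a hypothetical example (the complete graph $K_5$, where $\lambda=5$ and the series converges quickly). One caveat: the truncated series also accumulates the component of $\chi_v$ along the all-ones vector, which shifts every potential by the same constant; since only potential differences enter the forwarding rule, the sketch compares against $L^{\dag}\chi_v$ after centering:

```python
# Truncated power series zeta^[v] = (2 d_max)^{-1} sum_{w=0}^{k} M^w chi_v
# with M = I - L/(2 d_max), on the (hypothetical) complete graph K5.
n, dmax = 5, 4
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

def lap_mul(x):
    """y = L x for the unit-weight Laplacian of K5 (every degree is 4)."""
    y = [dmax * xi for xi in x]
    for u, v in edges:
        y[u] -= x[v]; y[v] -= x[u]
    return y

v, k = 0, 60
acc = [0.0] * n
x = [1.0 if i == v else 0.0 for i in range(n)]   # chi_v
for _ in range(k + 1):
    acc = [a + xi for a, xi in zip(acc, x)]
    Lx = lap_mul(x)
    x = [xi - Lxi / (2 * dmax) for xi, Lxi in zip(x, Lx)]   # x <- M x
zeta = [a / (2 * dmax) for a in acc]

# The series also accumulates the component of chi_v along the all-ones
# vector; this is a constant offset of all potentials, so center it away.
mean = sum(zeta) / n
zeta_c = [z - mean for z in zeta]

# Reference: for K5, L = n * (projection orthogonal to all-ones), hence
# L^+ chi_v has entry (1 - 1/n)/n at v and -1/n^2 elsewhere.
ref = [(1.0 - 1.0 / n) / n if i == v else -1.0 / n ** 2 for i in range(n)]
assert max(abs(a - b) for a, b in zip(zeta_c, ref)) < 1e-9
```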
\paragraph{Robustness and latency}
In order to get a handle on the analysis of routing in an ever-changing network
we use a simplifying assumption: the graph does not change during the
computation phase while it can change afterwards, during the routing phase.
This assumption is informally justified because the computation phase in
expander graphs (which we consider to be the typical case) is relatively fast,
it takes $O(\ln n)$ steps. The routing phase, on the other hand, should be as
``long'' as possible before we have to recompute the scheme. Roughly, a
routing scheme can be used until the graph changes so much from its shape when
the scheme was computed that both the probability of reaching destinations and
the congestion properties of the scheme deteriorate with respect to the
new shape of the graph. We quantify the robustness of electric routing against
edge removals in the following two theorems:
\begin{theorem}\label{theo:robust1}
Let $G$ be an unweighted graph with Fiedler eigenvalue $\lambda=\Theta(1)$
and maximum degree $d_{\max}$,
and let $f^{[s,t]}$ denote the unit electric flow between $s$ and $t$.
For any $0 < p \le 1$, let $Q_p =\{e\in E: |f^{[s,t]}_e|\ge p\}$
be the set of edges carrying more than $p$ flow.
Then,
$|Q_p|\le \min\{ 2 / (\lambda p^2), 2d_{\max}\|L^{\dag}\|_{1\to 1}/p\}$.
\end{theorem}
Note that part one of this theorem, i.e. $|Q_p|\le 2/(\lambda p^2)$,
distinguishes electric routing from simple schemes like shortest-path routing.
The next theorem studies how edge removals affect demands when ``the entire
graph is in use:''
\begin{theorem}\label{theo:robust2}
Let graph $G$ be unweighted of bounded degree $d_{\max}$ and
vertex expansion $\alpha$.
Let $f$ be a routing of the uniform multi-commodity demand set over $V$
(single unit of demand between every pair of vertices),
produced by an $\eta$-competitive oblivious routing scheme. %
Then,
for any $0\le x\le 1$, removing an $x$-fraction of edges from $G$
removes at most
$x \cdot (\eta \cdot d_{\max}\cdot \ln n\cdot \alpha^{-1})$-fraction
of flow from $f$.
\end{theorem}
The expected number of edges traversed between source and sink
reflects the latency of a routing.
We establish the following (proven in Appendix~\ref{sec:latency-proof}):
\begin{theorem}\label{theo:stretch}
The latency of every electric walk on an undirected
graph of bounded degree $d_{\max}$
and vertex expansion $\alpha$
is at most $O(\min\{m^{1/2},d_{\max}\alpha^{-2}\ln n \})$.
\end{theorem}
\paragraph{Analysis}
The main hurdle is Theorem~\ref{theo:main}, which we attack in two steps.
First, we show that any linear routing scheme $R$ (i.e. scheme for which
the operator $R:\matha{R}^V\to\matha{R}^E$ is linear) has a distinct worst-case
demand set, known as \textit{uniform demands}, consisting of a unit demand
between the endpoints of every edge of $G$. Combining this with the formulaic
expression for electric flow~(\ref{eq:el}) gives us an operator-based geometric
bound for $\eta_{\mathcal{E}}$, which in the case
of a bounded degree graph is simply $\eta_{\mathcal{E}} \le O(\|L^{\dag}\|_{1\to 1})$
where the operator norm $\|\cdot\|_{1\to 1}$ is defined by
$\|A\|_{1\to 1} := \sup_{x\neq 0} \|Ax\|_1 / \|x\|_1$.
This is shown in Theorem~\ref{theo:eta:ub}. Second, we give a rounding-type
argument that establishes the desired bound on $\|L^{\dag}\|_{1\to 1}$. This
argument relies on a novel technique we dub \textit{concurrent flow cutting}
and is our key technical contribution. This is done in Theorem~\ref{theo:ve:ub}.
This concludes the analysis of the congestion properties of electric flow.
The computational procedure for the vertex potentials $\varphi^{[v]}$'s (above)
only affords us approximate versions $\tilde{\varphi}^{[v]}$ with $\ell_2$ error
guarantees. We need to ensure that, when using these in place of the exact
ones, all properties of the exact electric flow are preserved. For this
purpose, it is convenient to view the electric flow as a
distribution over paths (i.e. the electric walk, defined above)
and measure the total variation distance between the walks induced
by exact and approximate vertex potentials.
This is achieved in Theorem~\ref{theo:approx} and Corollary~\ref{coro:approx}.
It is then easy to verify that any two multi-commodity flows, whose respective
individual flows have sufficiently small variation distance, have essentially
identical congestion and robustness properties.
\section{Proof of Theorem~\ref{theo:stretch}}
\label{sec:latency-proof}
\begin{proof}[Proof of Theorem~\ref{theo:stretch}]
Let $X^{[s,t]}_e$ be the indicator that edge $e$ participates in the
electric walk between $s$ and $t$. Then the latency can be expressed as
\begin{align*}
\max_{s\neq t} \sum_e \E X^{[s,t]}_e
&= \max_{s\neq t}
\sum_{(u,v)}\big|(\chi_u-\chi_v)^*L^{\dag}(\chi_s-\chi_t)\big| \\
&= \max_{s\neq t} \|BL^{\dag}(\chi_s-\chi_t)\|_1
= \|BL^{\dag} B^*\|_{1\to 1} = \|\Pi\|_{1\to 1}.
\end{align*}
The latter is bounded by Theorem~\ref{theo:ve:ub}, together with
$\|\Pi\|_{1\to 1} \le m^{1/2}$, e.g. as in~(\ref{ineq:univ}).
\end{proof}
\begin{remark}
For expanders, this theorem is not trivial. In fact, there exist
path realizations of the electric walk which can traverse up to $O(n)$ edges.
Theorem~\ref{theo:stretch} asserts that this happens with small probability. On
the other hand, in a bounded-degree expander, even if $s$ and $t$ are adjacent
the walk will still take a $O(\log n)$-length path with constant probability.
\end{remark}
\subsection{Latency}
The expected number of edges traversed between $s$ and $t$
reflects the routing latency.
Using the machinery we have developed so far, it is easy to establish the
following (proven in Appendix~\ref{sec:latency-proof}):
\begin{theorem}\label{theo:stretch}
The latency of every electric walk on an undirected
graph of bounded degree $d_{\max}$
and vertex expansion $\alpha$
is at most $O(\min\{m^{1/2},d_{\max}\alpha^{-2}\ln n \})$.
\end{theorem}
\section{$L_1$ operator inequalities}
\label{sec:l1}
The main results here are an upper and lower bound on $\|L^{\dag}\|_{1\to 1}$, which
match for bounded-degree expander graphs. In this section, we present vertex
expansion versions of these bounds that assume bounded-degree.
\begin{theorem}\label{theo:ve:ub}
Let graph $G=(V,E)$ be unweighted, of bounded degree $d_{\max}$, and
vertex expansion
\begin{align}\label{ve:iso}
\alpha = \min_{S\subseteq V}\frac{|E(S,S^\cmpl)|}{\min\{|S|,|S^\cmpl|\}},
\quad\text{then}\quad
\|L^{\dag}\|_{1\to 1}
\le \Big(4\ln \frac{n}{2}\Big) \cdot
\Big(\alpha\ln \frac{2d_{\max}}{2d_{\max}-\alpha}\Big)^{-1}.
\end{align}
\end{theorem}
The proof of this theorem (given in the next section) boils down to a
structural decomposition of unit $(s,t)$-electric flows in a graph (not
necessarily an expander). We believe that this decomposition is of independent
interest. In the case of bounded-degree expanders, one can informally say that
the electric walk corresponding to the electric flow between $s$ and $t$ takes
any given path with probability exponentially small in its length.
We complement Theorem~\ref{theo:ve:ub} with a lower bound on $\|L^{\dag}\|_{1\to 1}$
proven in Appendix~\ref{sec:norm:lower}:
\begin{theorem}\label{theo:m:lb}
Let graph $G=(V,E)$ be unweighted, of bounded degree $d_{\max}$,
with metric diameter $D$.
Then, $\|L^{\dag}\|_{1\to 1} \ge D/(2d_{\max})$
and, in particular,
\mbox{$\|L^{\dag}\|_{1\to 1}
\ge \big(\ln n\big)\cdot\big(2d_{\max}\ln d_{\max}\big)^{-1}$}
for all bounded-degree, unweighted graphs with vertex expansion $\alpha=O(1)$.
\end{theorem}
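Both bounds are easy to probe numerically. The following sketch (our
illustration, not part of the argument; the choice of the $8$-cycle, the
brute-force expansion routine, and all names are ours, and NumPy is assumed
available) computes $\|L^{\dag}\|_{1\to 1}$ exactly for $C_8$ and checks it
against the upper bound of Theorem~\ref{theo:ve:ub}.

```python
import numpy as np
from itertools import combinations

def laplacian(n, edges):
    # Unweighted graph Laplacian L = D - A.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def norm_1to1(M):
    # ||M||_{1->1} equals the maximum column 1-norm.
    return max(np.abs(M[:, j]).sum() for j in range(M.shape[1]))

def vertex_expansion(n, edges):
    # Brute-force alpha = min_S |E(S, S^c)| / min(|S|, |S^c|).
    best = float('inf')
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum((u in S) != (v in S) for u, v in edges)
            best = min(best, cut / min(len(S), n - len(S)))
    return best

n = 8
edges = [(i, (i + 1) % n) for i in range(n)]   # the cycle C_8, d_max = 2
L = laplacian(n, edges)
norm = norm_1to1(np.linalg.pinv(L))
alpha = vertex_expansion(n, edges)
d_max = 2
upper = 4 * np.log(n / 2) / (alpha * np.log(2 * d_max / (2 * d_max - alpha)))
print(norm, alpha, upper)   # norm = 2.1875, alpha = 0.5
```

The bound is loose on the cycle (a very poor expander), as expected: the
exact norm is $35/16$ while the right-hand side is roughly $83$.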
\subsection{Proof of upper bound on $\|L^{\dag}\|_{1\to 1}$ for expanders}
\begin{proof}[Proof of Theorem~\ref{theo:ve:ub}]
\textit{Reformulation:}
We start by transforming the problem into a more manageable form,
\begin{align}\label{ineq:ve:form}
\|L^{\dag}\|_{1\to 1} := \sup_{y\neq 0} \frac{\|L^{\dag} y\|_1 }{ \|y\|_1}
= \max_{w} \|L^{\dag}\chi_w\|_1 \overset{(*)}{\le}
\frac{n-1}{n}\max_{s\neq t} \|L^{\dag}(\chi_s-\chi_t)\|_1,
\end{align}
where the latter inequality comes from
\begin{align*}
\|L^{\dag} \chi_s \|_1
= \|L^{\dag}\pi_{\bot\vi} \chi_s\|_1
= \|n^{-1}\sum_{t\neq s}L^{\dag} (\chi_s-\chi_t)\|_1
\le \frac{n-1}{n} \max_{t} \|L^{\dag} (\chi_s-\chi_t)\|_1.
\end{align*}
Pick any vertices $s\neq t$ and set $\psi=L^{\dag}(\chi_s-\chi_t)$. In light of
(\ref{ineq:ve:form}) our goal is to bound $\|\psi\|_1$. We think of $\psi$ as the
vertex potentials corresponding to electric flow with imbalance
$\chi_s-\chi_t$. By
an easy perturbation argument we can assume that no two vertex potentials
coincide.
Index the vertices in $[n]$ by increasing potential as $\psi_1 < \cdots < \psi_n$.
Further, assume that $n$ is even and choose a median $c_0$ so that
$\psi_1<\cdots<\psi_{n/2} < c_0 < \psi_{n/2+1} < \cdots < \psi_n$.
(If $n$ is odd, then
set $c_0$ to equal the potential of the middle vertex.)
We aim to upper bound $\|\psi\|_1$, given as
$\|\psi\|_1 = \sum_v |\psi_v|
= \sum _{v:\psi_v > 0} \psi_v - \sum_{u:\psi_u < 0} \psi_u$.
Using that $\sum_w \psi_w = 0$, we get
$\|\psi\|_1 = 2\sum_{v:\psi_v>0} \psi_v = - 2\sum_{u:\psi_u < 0} \psi_u$.
Assume, without loss of generality, that $0 < c_0$, in which case
\begin{align}\label{def:ve:N}
\|\psi\|_1 = -2\sum_{u:\psi_u < 0} \psi_u
\le 2\sum_{i=1}^{n/2}|\psi_i - c_0| =: 2N
\end{align}
In what follows we aim to upper-bound $N$.
\textit{Flow cutting:}
Define a collection of cuts $(S_i,S_i^\cmpl)$ of the form
$S_i = \{v : \psi_v \le c_i \}$,
for integers $i\ge 0$, where $S_i$ will be the smaller side of the cut
by construction. Let $k_i$ be the number of edges cut by $(S_i,S_i^\cmpl)$ and
$p_{ij}$ be the flow (equivalently, the potential length) of the
$j^{\mathrm{th}}$ edge across the same cut. The
cut points $c_i$, for $i\ge 1$, are defined according to
$c_i = c_{i-1}-\Delta_{i-1},\text{ where }
\Delta_{i-1}:=2\sum_j \dfrac{p_{i-1,j}}{k_{i-1}}$.
The last cut, $(S_{r+1},S_{r+1}^\cmpl)$, is the first cut in the sequence
$c_0,c_1,\dots$ with no crossing edges, i.e. $k_{r+1}=0$ or, equivalently, $S_{r+1}=\emptyset$.
\textit{Bound on number of cuts:}
Let $n_i=|S_i|$. The isoperimetric inequality for vertex expansion
(\ref{ve:iso}) applied to
$(S_i,S_i^\cmpl)$ and the fact that $n_i\le n/2$, by construction, imply
\begin{align}\label{ineq:ve:iso}
\frac{k_i}{n_i} \ge \alpha.
\end{align}
Let $l_i$ be the number of edges crossing $(S_i,S_i^\cmpl)$ that do not
extend across $c_{i+1}$, i.e. edges that are not adjacent to $S_{i+1}$.
The choice $\Delta_i:=2\sum_j p_{ij} / k_i$ ensures that $l_i \ge k_i/2$.
These edges are supported on at least $l_i / d_{\max}$ vertices in $S_i$,
and therefore $n_{i+1} \le n_i - l_i/d_{\max}$. Thus,
\begin{align}\label{ineq:ve:shrink}
n_{i+1}
\le n_i - \frac{l_i}{d_{\max}}
\le n_i - \frac{k_i}{2d_{\max}}
\overset{(\ref{ineq:ve:iso})}{\le} n_i - \frac{\alpha n_i}{2d_{\max}}
= n_i\Big(1-\frac{\alpha}{2d_{\max}}\Big).
\end{align}
Combining inequality (\ref{ineq:ve:shrink})
with $n_0 = n / 2$, we get
\begin{align}\label{ineq:ve:n}
n_i \le \frac{n}{2}\Big(1-\frac{\alpha}{2d_{\max}}\Big)^i
\end{align}
The stopping condition implies $S_r\neq\emptyset$, or $n_r \ge 1$, and together
with (\ref{ineq:ve:n}) this results in
\begin{align}\label{ineq:ve:r}
r \le \log_{1/\theta} \frac{n}{2},\text{ with }
\theta=1-\frac{\alpha}{2 d_{\max}}.
\end{align}
\textit{Amortization argument:}
Continuing from (\ref{def:ve:N}),
\begin{align}\label{ineq:ve:N}
N = \sum_{i=1}^{n/2}|\psi_i-c_0|
\overset{(*)}{\le} \sum_{i=0}^r(n_i-n_{i+1})\sum_{j=0}^i \Delta_j,
\end{align}
where $(*)$ follows from the fact that
for every vertex $v\in S_i-S_{i+1}$ we can write
$|\psi_v-c_0|\le \sum_{j=0}^i\Delta_j$.
Because $BL^{\dag}(\chi_s-\chi_t)$ is a
unit flow, we have the crucial (and easy to verify)
property that, for all $i$, $\sum_j p_{ij}=1$. In other words, the total
flow of ``concurrent'' edges is 1. So,
\begin{align}
\Delta_i
= 2\sum_{j}\frac{p_{ij}}{k_i}
= \frac{2}{k_i}
\overset{(\ref{ineq:ve:iso})}{\le} \frac{2}{\alpha n_i}
\end{align}
Now we can use this bound on $\Delta_j$ in (\ref{ineq:ve:N}),
\begin{align*}
\sum_{i=0}^r(n_i-n_{i+1})\sum_{j=0}^i \Delta_j
\overset{(*)}{=} \sum_{i=0}^r n_i \Delta_i
\le \frac{2}{\alpha}\sum_{i=0}^r 1
= \frac{2}{\alpha}(r+1),
\end{align*}
where to derive $(*)$ we use $n_{r+1}=0$. Combining the above inequality
with (\ref{ineq:ve:r}) concludes the proof.
\end{proof}
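The ``easy to verify'' fact used above, that the total electric flow crossing
any potential-level cut equals 1, can also be checked numerically. A sketch
(the graph, thresholds, and names below are our illustrative choices; NumPy
is assumed available):

```python
import numpy as np

n = 6
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2), (3, 4), (4, 5), (3, 5)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1

s, t = 0, 5
chi = np.zeros(n); chi[s], chi[t] = 1, -1
psi = np.linalg.pinv(L) @ chi        # potentials of the unit s-t electric flow

# Cut at levels strictly between consecutive distinct potentials; every such
# cut separates s from t, and all crossing edges flow the same way.
ps = np.sort(np.unique(np.round(psi, 12)))
totals = []
for c in (ps[:-1] + ps[1:]) / 2:
    cross = [abs(psi[u] - psi[v]) for u, v in edges
             if min(psi[u], psi[v]) < c < max(psi[u], psi[v])]
    totals.append(sum(cross))
print(totals)                         # every entry equals 1.0
```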
\section{Proof of Theorem~\ref{theo:m:lb}}
\label{sec:norm:lower}
\begin{proof}[Proof of Theorem~\ref{theo:m:lb}]
Let $s\neq t$ be a pair of vertices in $G$
at distance $D$. We consider the
flow $f=BL^{\dag}(\chi_s-\chi_t)$. Set $\psi=L^{\dag}(\chi_s-\chi_t)$, and note that
we can use $\|f\|_1$ as a lower bound on $\|L^{\dag}\|_{1\to1}$,
\begin{align*}
\|f\|_1
= \sum_{(u,v)}|\psi_u - \psi_v|
\le d_{\max} \sum_{v} |\psi_v|
\le d_{\max} \|L^{\dag} (\chi_s-\chi_t)\|_1
\le 2d_{\max}\cdot\|L^{\dag}\|_{1\to 1}.
\end{align*}
Now, let $\{\pi_i\}_i$ be a path decomposition of $f$ and let
$l(\pi_i)$ and $f(\pi_i)$ denote the length and value, respectively, of
$\pi_i$. Then,
\begin{align*}
\|f\|_1
= \sum_{(u,v)}|\psi_u-\psi_v|
= \sum_i l(\pi_i) f(\pi_i)
\ge D \sum_i f(\pi_i)
= D.&\qedhere
\end{align*}
\end{proof}
\section{Proof of Theorem~\ref{coro:power}}
Theorem~\ref{coro:power} is implied by the following theorem
by specializing $\epsilon=O(n^{-5})$:
\begin{theorem}\label{theo:power}
Let $G$ be a graph, whose Laplacian $L$ has smallest non-zero eigenvalue $\lambda$
and whose maximum degree is $D$. Then, for every $y$ with $\|y\|_2=1$
the vector $x=L^{\dag} y$ can be approximated using
\begin{align*}
\tilde{x} = \frac{1}{2D}\sum_{i=0}^d \big(I-\frac{L}{2D}\big)^i y,
\end{align*}
so that for every $\epsilon>0$,
\begin{align*}
\|x-\tilde{x}\|_2 \le \epsilon,
\text{ as long as }
d \ge \Omega(1)\cdot
\ln \frac{1}{\lambda\epsilon D}\cdot
\Big(\ln\frac{1}{1-\lambda}\Big)^{-1}.
\end{align*}
\end{theorem}
\newcommand{N^{\dag}}{N^{\dag}}
\begin{proof}[Proof of Theorem~\ref{theo:power}]
We normalize $L$ via $N=L/\tau$ (and so $L^{\dag}=N^{\dag}/\tau$),
where $\tau=2D$. Since $\tau= 2D\ge \lambda_{\max}(L)$, the
eigenvalues of $N$ are in $[0,1]$.
In this case, the Moore-Penrose inverse of $N$ is
given by $N^{\dag}=\sum_{i=0}^{\infty} (I-N)^i$.
Set $N^{\dag}_0=\sum_{i=0}^d (I-N)^i$ and $N^{\dag}_1 = N^{\dag} - N^{\dag}_0$.
Our aim is to minimize $d$ so that
\begin{align*}
\|x-\tilde{x}\|_2
= \Big\|\frac{N^{\dag}_0+N^{\dag}_1}{\tau}y - \frac{N^{\dag}_0}{\tau}y\Big\|_2
= \Big\|\frac{N^{\dag}_1}{\tau}y\Big\|_2
\le \Big\|\frac{N^{\dag}_1}{\tau}\Big\|_{2\to 2}
\le \epsilon,
\end{align*}
where $\|A\|_{2\to 2}:=\sup_{x\neq 0}\|Ax\|_2/\|x\|_2$
denotes the matrix spectral norm. Set
$\kappa:=\tau/\lambda_{\min}$, so that $\kappa^{-1}$ is the smallest
non-zero eigenvalue of $N$,
\begin{align}
\|N^{\dag}_1\|_{2\to 2} = \big\|\sum_{i=d+1}^{\infty} (I-N)^i\big\|_{2\to 2}
&\le \sum_{i=d+1}^\infty\|(I-N)^i\|_{2\to 2} \notag \\
&\le \sum_{i=d+1}^\infty\left(1-\kappa^{-1}\right)^i
= (1-\kappa^{-1})^{d+1} \kappa \label{pow:deg}
\end{align}
Setting~(\ref{pow:deg}) less than $\tau\epsilon$ gives
\begin{align*}
d \ge \frac{\ln \kappa/(\tau\epsilon)}{\ln \kappa/(\kappa-1)}. &\qedhere
\end{align*}
\end{proof}
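The truncated series is straightforward to exercise numerically. In the
sketch below (illustrative only: the cycle $C_8$, the vector $y$, and
$d=200$ are our own choices, and NumPy is assumed) the error is far below
the proof's bound $(1-\kappa^{-1})^{d+1}\kappa/\tau$.

```python
import numpy as np

n = 8
edges = [(i, (i + 1) % n) for i in range(n)]          # the cycle C_8
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1

D = 2                                                 # maximum degree
tau = 2 * D
y = np.zeros(n); y[0], y[4] = 1, -1                   # y is orthogonal to 1
x = np.linalg.pinv(L) @ y

d = 200
term, acc = y.copy(), np.zeros(n)
for _ in range(d + 1):                                # sum_{i=0}^{d} (I - L/tau)^i y
    acc += term
    term = term - (L @ term) / tau
x_tilde = acc / tau
err = np.linalg.norm(x - x_tilde)
print(err)                                            # well below 1e-10
```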
\section{Preliminaries}
\label{sec:prelim}
Spectral graph theory is covered comprehensively in~\cite{chung}.
The object of interest is an undirected graph $G=(V,E)$ (with $V=[n]$ and
$|E|=m$), edge-weighted by
$w_{u,v}\ge 0$, and not necessarily simple. Whenever we use unweighted
graphs, we have $w_{u,v}=1$ if $u\sim v$ and $w_{u,v}=0$ otherwise.
The Laplacian is positive semi-definite, $L\succeq 0$, and thus can
be diagonalized as $L=U\Lambda U^*$, where $U\in \matha{R}^{n\times n}$ is
orthogonal and $\Lambda=\vdiag(\lambda_1,\dots,\lambda_n)$. By convention, we
write $0\le\lambda_1\le\lambda_2\le\cdots\le\lambda_n$. For every $G$,
$\lambda_n\le 2D$, where $D$ is the maximum degree.
When $G$ is connected, $\vker(L) = \mathrm{span}\{\vi\}$, and so
$LL^{\dag}=L^{\dag} L = \pi_{\bot\vi}$ where $\pi_W$ denotes projection onto $W$
and $L^{\dag}$ denotes the pseudo-inverse of $L$.
On occasion we use
$\lambda_{\min}:=\lambda_2$ and $\lambda_{\max} := \lambda_n$.
The \textit{vertex expansion} of an unweighted $G$ is defined as
$\alpha:=\min_{S\subseteq V}\frac{|E(S,S^\cmpl)|}{\min\{|S|,|S^\cmpl|\}}$.
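As a quick sanity check of these facts (our illustration, assuming NumPy;
the example graph is arbitrary), the following sketch builds the Laplacian
of a small connected graph and verifies $LL^{\dag}=L^{\dag} L=\pi_{\bot\vi}$.

```python
import numpy as np

# A small connected graph on 4 vertices (illustrative choice).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

Ldag = np.linalg.pinv(L)
proj = np.eye(n) - np.ones((n, n)) / n     # projection away from span{1}
print(np.allclose(L @ Ldag, proj), np.allclose(Ldag @ L, proj))  # True True
```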
\section{Proof of Theorem~\ref{theo:robust2}}
\label{sec:robust-proof}
\begin{proof}[Proof of Theorem~\ref{theo:robust2}]
Let $f_{\opt}$ be a max-flow routing of the uniform demands and
let $\theta$ be the fraction of the demand set that is routed by $f_{\opt}$.
The Multi-commodity Min-cut Max-flow Gap Theorem (Theorem 2, in~\cite{mcmf})
asserts
\begin{align*}
O(\ln n)\cdot \theta
\ge \min_{S\subset V} \frac{|E(S,S^\cmpl)|}{|S|\cdot|S^\cmpl|}
\ge \frac{1}{n}\cdot
\min_{S\subset V} \frac{|E(S,S^\cmpl)|}{\min\{|S|,|S^\cmpl|\}}
= \frac{\alpha}{n}
\end{align*}
Thus the total demand routed by $f_{\opt}$ is no less than
$\theta \binom{n}{2} \ge \Omega(\alpha n / \ln n)$. Normalize $f$ (by scaling)
so it routes the same demands as $f_{\opt}$. If $k$ edges are removed,
then at most $\eta k$ flow is removed from $f$, which is at most a fraction
$\eta k \cdot O(\ln n / \alpha n)$ of the total flow.
Substitute $x=k/m$ and use $m\le d_{\max}n$ to complete the proof.
\end{proof}
\subsection{Robustness}
We prove Theorem~\ref{theo:robust1} here, since its proof interestingly
relies on the flow cutting techniques developed in this paper.
Theorem~\ref{theo:robust2} is proved in Appendix~\ref{sec:robust-proof}.
\begin{proof}[Proof of Theorem~\ref{theo:robust1}]
For the first part, let $\{(u_1,v_1),\dots,(u_k,v_k)\}=Q_p$ and let
$p_i =
|f^{[s,t]}_{(u_i,v_i)}|=|(\chi_{u_i}-\chi_{v_i})^*L^{\dag}(\chi_s-\chi_t)|$.
Consider the embedding $\zeta:V\to\matha{R}$, defined by
$\zeta(v)=\chi_v^*L^{\dag}(\chi_s-\chi_t)$.
Assume for convenience that $\zeta(u_i)\le \zeta(v_i)$ for all $i$.
Let $\zeta_{\min}=\min_v\,\zeta(v)$ and
$\zeta_{\max}=\max_v\,\zeta(v)$.
Choose $c$ uniformly at random
from $[\zeta_{\min},\zeta_{\max}]$ and let
$X_i=p_i\cdot\I\{\zeta(u_i)\le c\le \zeta(v_i)\}$, where $\I\{\cdot\}$
is the indicator function. Observe that the random variable
$X=\sum_i X_i$ equals the total electric flow of all edges in $Q_p$ cut by $c$.
Since these edges are concurrent (in the electric flow) by construction,
we have $X\le 1$. On the other hand,
\begin{align*}
\E X = \sum_i p_i \cdot \Pr\Big\{\zeta(u_i)\le c\le \zeta(v_i)\Big\}
\ge \sum_i p_i \frac{p_i}{\zeta_{\max}-\zeta_{\min}}
\ge \sum_i \frac{\lambda p_i^2}{2}
\ge k \frac{\lambda p^2}{2}
\end{align*}
Combining this with $\E X \le 1$ produces $|Q_p|\le 2/(\lambda p^2)$.
For the second part,
$kp \le \sum_{e\in Q_p} |f^{[s,t]}_e| \le \sum_{e\in E} |f^{[s,t]}_e|
= \|BL^{\dag}(\chi_s-\chi_t)\|_1\le2d_{\max}\cdot \|L^{\dag}\|_{1\to 1}$.
This gives $|Q_p|\le 2d_{\max}\cdot\|L^{\dag}\|_{1\to 1}/ p$.
\end{proof}
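The first bound on $|Q_p|$ is easy to probe numerically; a sketch (the
cycle, the threshold $p$, and all names are our illustrative choices, and
NumPy is assumed):

```python
import numpy as np

n = 8
edges = [(i, (i + 1) % n) for i in range(n)]     # cycle C_8
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1

s, t = 0, 1
chi = np.zeros(n); chi[s], chi[t] = 1, -1
psi = np.linalg.pinv(L) @ chi
flows = {e: abs(psi[e[0]] - psi[e[1]]) for e in edges}   # |f^{[s,t]}_e|

lam = np.linalg.eigvalsh(L)[1]                   # smallest non-zero eigenvalue
p = 0.5
Qp = [e for e, f in flows.items() if f >= p]
print(len(Qp), 2 / (lam * p * p))                # |Q_p| = 1, bound ~13.7
```

Here only the direct edge $(s,t)$ carries flow $7/8\ge p$; the other seven
edges each carry $1/8$.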
\section{Symmetrized algorithm}
\label{sec:symmetrize}
In this section we discuss how to modify the computational procedure given in
Section~\ref{sec:intro} so that it applies to graphs of
unbounded degree.
The described algorithm for computing $\varphi^{[w]}=L^{\dag}\chi_w$ relies on the
approximation of $L^{\dag}$ via the series
$x^{-1}=\sum_{i=0}^\infty (1-x)^i$.
For matrices, the series converges (on the orthogonal complement of the
kernel) as long as $\|I-x\|_{2\to 2}<1$ there,
which is ensured by setting $x=\frac{L}{2d_{\max}}$
and using that $\|L\|_{2\to 2} < 2d_{\max}$.
Thus we arrive at
$2d_{\max}\cdot L^{\dag} = \sum_{i=0}^\infty (I-\frac{L}{2d_{\max}})^i$.
This approach continues to work if we replace $d_{\max}$ with any upper bound
$h_{\max}\ge d_{\max}$, obtaining $L^{\dag} = \frac{1}{2h_{\max}} \sum_{i=0}^\infty
(I - M)^i$ where $M=\frac{L}{2h_{\max}}$, however this is done at the expense
of slower convergence of the series. Since in a distributed setting all
vertices must agree on what $M$ is, a worst-case upper bound $h_{\max}=n$ must
be used, which results in a prohibitively slow convergence even for expander
graphs.
Instead, we pursue a different route. Let $\mathcal{L}= D^{-1/2}LD^{-1/2}$
be the \textit{normalized Laplacian} of $G$, where $D\in\matha{R}^{n\times n}$
is diagonal with $D_{v,v}=\deg(v)$. One always has $\|\mathcal{L}\|_{2\to 2}
\le 2$ (Lemma 1.7 in~\cite{chung}) while at the same time
$\lambda_{\min}(\mathcal{L})
\ge \max \big\{\frac{\beta^2}{2},\frac{\alpha^2}{4d_{\max} +
2d_{\max}\alpha}\big\}$ (Theorems 2.2 and 2.6 in~\cite{chung}),
where $\alpha$ and $\beta$ are the vertex- and
edge-expansion of $G$, respectively.
Set $M=\mathcal{L}/3$, so that $\|M\|_{2\to 2} < 1$.
Recall that the aim of our distributed procedure is to
compute $\varphi^{[w]}_u$ at $u$ (for all $w$). We achieve this using
the following:
\begin{align*}
\varphi^{[w]}_u
= \chi_u^* L^{\dag} \chi_w
= \chi_u^* D^{-1/2}\frac{M^\dag}{3}D^{-1/2} \chi_w
= \frac{\chi_u^*}{\sqrt{\deg(u)}}
\frac{\sum_{i=0}^\infty (I-M)^i}{3}
\frac{\chi_w}{\sqrt{\deg(w)}}
\end{align*}
The key facts about the series on the right-hand side are that
(i) it converges quickly when $G$ is an expander and (ii) all vertices
can compute $M$ locally, in particular, without requiring any global knowledge
such as an upper bound on $d_{\max}$.
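A sketch of the normalized-Laplacian series on a small irregular graph (our
illustration, assuming NumPy; the graph, iteration count, and names are
arbitrary choices). We compare potential \emph{differences}: for a demand
$\chi_s-\chi_t$ the potentials produced this way agree with
$L^{\dag}(\chi_s-\chi_t)$ up to an additive constant, which does not affect
the routed flow.

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]       # degrees 1, 3, 2, 2
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
Dm12 = np.diag(1 / np.sqrt(np.diag(L)))

M = (Dm12 @ L @ Dm12) / 3                      # M = normalized Laplacian / 3
s, t = 0, 3
z = Dm12 @ (np.eye(n)[s] - np.eye(n)[t])       # z is orthogonal to ker(M)

term, acc = z.copy(), np.zeros(n)
for _ in range(2000):                          # truncated sum of (I - M)^i z
    acc += term
    term = term - M @ term
phi_series = Dm12 @ acc / 3

phi_exact = np.linalg.pinv(L) @ (np.eye(n)[s] - np.eye(n)[t])
maxdiff = max(abs((phi_series[u] - phi_series[v]) - (phi_exact[u] - phi_exact[v]))
              for u, v in edges)
print(maxdiff)                                 # edge potential differences agree
```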
\section{Proof of Theorem~\ref{theo:cong:univ}}
\label{sec:cong:u}
\begin{proof}[Proof of Theorem~\ref{theo:cong:univ}]
The upper bound follows from:
\begin{align}
\eta
\overset{(\ref{ineq:eta:ub})}{\le} \|\Pi\|_{1\rightarrow 1}
\overset{(\ref{ineq:pi})}{\le} m^{1/2}\cdot\|\Pi\|_{2\rightarrow 2}
\overset{(\ref{ineq:pi:proj})}{=} m^{1/2} \label{ineq:univ}
\end{align}
The second step is justified as follows
\begin{align}
\|\Pi\|_{1\rightarrow 1}
= \max_{e} \|\Pi\chi_e \|_1
\le m^{1/2} \cdot \max_e \|\Pi \chi_e\|_2
\le m^{1/2} \cdot \|\Pi\|_{2\rightarrow 2}. \label{ineq:pi}
\end{align}
The third step is the assertion
\begin{align}
\|\Pi\|_{2\to 2} \le 1 \label{ineq:pi:proj},
\end{align}
which follows from the (easy) fact that $\Pi$ is a projection,
shown by Spielman et al.\ in Lemma~\ref{lem:spiel}.
The lower bound is achieved by a graph obtained by
gluing the endpoints of $\sqrt{n}$ copies of a path
of length $\sqrt{n}$ and a single edge. Routing
a flow of value $\sqrt{n}$ between these endpoints incurs
congestion $\sqrt{n}/2$.
\end{proof}
\begin{lemma}[Proven in~\cite{spiel}]\label{lem:spiel}
$\Pi$ is a projection; $\vim(\Pi)=\vim(W^{1/2}B)$;
The eigenvalues of $\Pi$ are 1 with multiplicity $n-1$
and $0$ with multiplicity $m-n+1$; and $\Pi_{e,e}=\|\Pi\chi_e\|^2$.
\end{lemma}
\section{Electric walk}
\label{sec:walk}
To every unit flow $f\in\matha{R}^E$, not necessarily electric, we associate a
random walk $W=W_0,W_1,\dots$ called the \textit{flow walk}, defined as follows.
Let $\sigma := B^*f$ and so $\sum_v \sigma_v = 0$.
The walk starts at $W_0$, with
\begin{align*}
\Pr\{W_0=v\}
=\frac{2\cdot \max\{0,\sigma_v\}}{\sum_w |\sigma_w|}
= \frac{\max\big\{0,\,\sum_w f_{v\to w} - \sum_w f_{w\to v}\big\}}
{\frac{1}{2}\sum_u\big|\sum_w f_{u\to w} - \sum_w f_{w\to u}\big|}
\end{align*}
If the walk is currently at $W_t$, the
next vertex is chosen according to
$\Pr\big\{W_{t+1}=v\,|\,W_t=u\big\}
= \dfrac{f_{u\to v}}{\sum_{w}f_{u\to w}}$, where
\begin{align}
f_{u\to v} &=
\begin{cases}
|f_{(u,v)}|,&\text{$(u,v)\in E$ and $f$ flows from $u$ to $v$} \\
0,&\text{otherwise.}
\end{cases} \label{dir:pot}
\end{align}
When the underlying flow $f$ is an electric flow, i.e. when $f={\mathcal{E}}(B^*f)$, the
flow walk deserves the specialized name \textit{electric walk}. We study two
aspects of electric walks here: (i)~stability against perturbations of the
vertex potentials, and (ii)~robustness against edge removal.
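The definition is easy to animate. The sketch below (our illustration,
assuming NumPy; the graph and seed are arbitrary) samples electric walks for
an $s$-$t$ demand: since the potential strictly decreases along every step,
each walk terminates, and it can only terminate at $t$.

```python
import numpy as np, random

n = 6
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2), (3, 4), (4, 5), (3, 5)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1

s, t = 0, 5
chi = np.zeros(n); chi[s], chi[t] = 1, -1
psi = np.linalg.pinv(L) @ chi              # potentials of the s-t electric flow

# Directed flows f_{u->v}: an electric flow runs from high to low potential.
out = {u: [] for u in range(n)}
for u, v in edges:
    if abs(psi[u] - psi[v]) > 1e-12:
        hi, lo = (u, v) if psi[u] > psi[v] else (v, u)
        out[hi].append((lo, abs(psi[u] - psi[v])))

random.seed(0)
def electric_walk():
    w, path = s, [s]                       # sigma = chi_s - chi_t: start at s
    while out[w]:                          # t is the only vertex with no outflow
        nbrs, wts = zip(*out[w])
        w = random.choices(nbrs, weights=wts)[0]
        path.append(w)
    return path

paths = [electric_walk() for _ in range(200)]
print(all(p[-1] == t for p in paths))      # True: every walk reaches t
```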
\subsection{Stability}
The set of vertex potential vectors $\varphi^{[v]}=L^{\dag} \chi_v$,
for all $v\in V$, encodes all electric flows, as argued in~(\ref{fwd:flow}).
In an algorithmic setting,
only approximations $\tilde{\varphi}^{[v]}$ of these vectors are available. We show
that when these approximations are sufficiently good in an $\ell_2$ sense,
the path probabilities (and congestion properties)
of electric walks are virtually unchanged. The
next theorem is proven in Appendix~\ref{sec:approx}:
\begin{theorem}\label{theo:approx}
Let $\tilde{\varphi}^{[v]}$ be
an approximation of $\varphi^{[v]}$, for all $v\in V$, in the sense that
\begin{align}
\|\varphi^{[v]}-\tilde{\varphi}^{[v]}\|_2 \le \nu,
\text{ for all $v\in V$, with } \label{ineq:pot:approx}
\nu = n^{-A},
\end{align}
where $A>4$ is a constant. Then for every electric walk,
defined by vertex potentials $\varphi=\sum_v \alpha_v \varphi^{[v]}$,
the corresponding ``approximate'' walk, defined by vertex potentials
$\tilde{\varphi}=\sum_v \alpha_v\tilde{\varphi}^{[v]}$, induces a
distribution over paths $\gamma$ with
\begin{align}
\sum_\gamma \Big|\Pr_\varphi\{W=\gamma\}
- \Pr_{\tilde{\varphi}}\{W=\gamma\}\Big| \le O(n^{2-\frac{A}{2}}),
\label{stat:diff}
\end{align}
where $\gamma$ ranges over all paths in $G$, and
$\Pr_\varphi\{W=\gamma\}$ denotes the probability of $\gamma$ under $\varphi$
(respectively for $\Pr_{\tilde{\varphi}}\{W=\gamma\}$).
\end{theorem}
As shown in Theorem~\ref{theo:power},
the Power Method affords us any sufficiently large
exponent $A$, say $A=5$, without sacrificing efficiency in terms
of distributed computation time. In this case,
the following corollary asserts that routing with approximate potentials
preserves both the congestion properties of the exact electrical flow as well
as the probability of reaching the sink.
\begin{corollary}\label{coro:approx}
Under the assumptions of Theorem~\ref{theo:approx} and $A=5$,
the electric walk
defined by vertex potentials
$\tilde{\varphi}=\tilde{\varphi}^{[s]}-\tilde{\varphi}^{[t]}$ reaches $t$
with probability $1-o_n(1)$. Furthermore,
for every edge $(u,v)$ with
non-negligible load, i.e. $|\tilde{\varphi}_u-\tilde{\varphi}_v|=\omega(n^{-2})$,
we have
$|\tilde{\varphi}_u-\tilde{\varphi}_v|\to_n |\varphi_u-\varphi_v|$,
where $\varphi=\varphi^{[s]}-\varphi^{[t]}$.
\end{corollary}
\section{Introduction}
In 1880, P. G. Tait \cite{PGT}\ showed that the four colour theorem is equivalent to
the assertion that every 3-regular planar graph without cut-edges is
3-edge-colourable (and that the latter is true if and only if every 3-regular
3-edge-connected planar graph is 3-edge-colourable). As is well known, Tait
actually felt that he had proven the four colour theorem since he had assumed that every 3-regular
3-edge-connected planar graph was hamiltonian (it being easily seen that hamiltonian
3-regular graphs are 3-edge-colourable), and it was not until 1946 that
W. Tutte showed in \cite{WTT}\ that this is not the case.
In this paper, we introduce the notion of a vertex-oriented 4-regular planar graph, and
use it to transform Tait's theorem into another equivalent formulation of the
four colour theorem. This came about as a result of our wish to provide a more conceptual proof
of the four colour theorem, and was motivated by our work with 4-regular graphs in the study
of knot theory. In the third section of this paper, we establish that every vertex-oriented 4-regular
planar graph without a nontransversally oriented cut-vertex (VOGWOC) is o-colourable (although we are not able to prove
3-o-colourability). It does follow from this result that every vertex-oriented
4-regular planar graph is an edge-disjoint union of o-cycles (this is of course obvious from
the four colour theorem, but we were unable to prove directly that a given VOGWOC even had
a single o-cycle). We conclude that section with some remarks on how the proof of the
o-colourability result might be improved upon to give 3-o-colourability. We conclude the paper
with a study of the vertex-orientations of a regular projection of the Borromean rings (that is,
the basic polyhedral graph $6^*$).
\section{An equivalent formulation of the four colour theorem}
A vertex $v$ with no incident loop in a 4-regular planar graph $G$ shall
be said to be oriented if the four edges incident to $v$ have been
partitioned into two cells (called the edge cells at $v$) of two edges
each so that the two edges in each cell are consecutive in the embedding
order at $v$. If there is exactly one loop $e$ incident to $v$, then if
we denote the other two incident edges by $f$ and $g$, the set of
two subsets $\set e,f\endset$, $\set e,g\endset$ is said to be the
transverse orientation of $v$ (and we shall refer to the sets $\set e,f\endset$
and $\set e,g\endset$ as the edge cells at $v$, even though they are not disjoint),
while the set of two subsets $\set
f,g\endset$, $\set e\endset$ is the nontransverse orientation of $v$.
Finally, if there are two loops $e_1$, $e_2$ incident to $v$, then
we only define one orientation at $v$; namely $\set\set e_1, e_2\endset\endset$,
and shall refer to this as the transverse orientation of $v$ (if $G$ is connected with two or more vertices,
this situation will never arise).
A vertex that has an incident loop shall be called a loop-anchor in $G$.
For example, if $v$ has incident edges $e$, $f$, $g$, $h$, (or a loop $e$
and incident edges $f$ and $g$), labelled in a
clockwise order, then one orientation of $v$ would be the partition
$\set \set e,f\endset,\set g, h\endset\endset$, while the other
orientation would be $\set \set f,g\endset,\set e, h\endset\endset$ (in
the case of the loop, the transverse orientation of $v$ would be
$\set \set e,f\endset, \set e,g\endset\endset$, while the nontransverse
orientation of $v$ would be $\set \set f,g\endset, \set e\endset\endset$).
In a plane embedding of $G$, we shall indicate these by a double headed
arrow passing through $v$ in such a way that for each cell, the arrow
separates the two edges in the cell.
\medskip
\begin{figure}[h]
\begin{tabular}{c@{\hskip10pt}c@{\hskip10pt}c@{\hskip10pt}c}
\centering
\hbox{\xy /r20pt/:,
(0,0)="1";(1,1)="3"**\dir{-},
(1,0)="2";(0,1)="4"**\dir{-},
(.5,.5)="0"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"0"*!<0pt,9pt>{v},
"3"*!<-4pt,-4pt>{h},
"2"*!<-4pt,4pt>{g},
"1"*!<4pt,4pt>{f},
"4"*!<4pt,-4pt>{e},
(0,.5)="x";(1,.5)="y"**\dir{-}*\dir{>},"x"*\dir{<},
\endxy}
&
\hbox{\xy /r20pt/:,
(0,0)="1";(1,1)="3"**\dir{-},
(1,0)="2";(0,1)="4"**\dir{-},
(.5,.5)="0"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"0"*!<-9pt,0pt>{v},
"3"*!<-4pt,-4pt>{h},
"2"*!<-4pt,4pt>{g},
"1"*!<4pt,4pt>{f},
"4"*!<4pt,-4pt>{e},
(.5,0)="x";(.5,1)="y"**\dir{-}*\dir{>},"x"*\dir{<},
\endxy}
&
\hbox{\xy /r20pt/:,
(0,0)="1";(1,1)="3",
(1,0)="2";(0,1)="4",
(.5,.5)="o"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"1";"o"**\dir{-},
"4";"o"**\dir{-},
"o";"o"**\crv{"3"+(.25,.25) & (1.75,.5) & "2"+(.25,-.25)},
"o"*!<9pt,0pt>{v},
(1.5,.5)*!<-4pt,0pt>{e},
"4"*!<4pt,-4pt>{f},
"1"*!<4pt,4pt>{g},
(.5,0)="x";(.5,1)="y"**\dir{-}*\dir{>},"x"*\dir{<},
\endxy}
&
\hbox{\xy /r20pt/:,
(0,0)="1";(1,1)="3",
(1,0)="2";(0,1)="4",
(.5,.5)="o"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"1";"o"**\dir{-},
"4";"o"**\dir{-},
"o";"o"**\crv{"3"+(.25,.25) & (1.75,.5) & "2"+(.25,-.25)},
"o"*!<0pt,9pt>{v},
(1.5,.5)*!<-4pt,0pt>{e},
"4"*!<4pt,-4pt>{f},
"1"*!<4pt,4pt>{g},
(0,.5)="x";(1,.5)="y"**\dir{-}*\dir{>},"x"*\dir{<},
\endxy}
\\
\noalign{\vskip4pt}
$\set \set f,g\endset,\set e\endset\endset$ & $\set \set e,f\endset,\set g, h\endset\endset$ & $\set \set f,g\endset,\set e, h\endset\endset$ &
$\set \set e,f\endset,\set e,g\endset\endset$\\
\end{tabular}
\caption{}\label{figure: vertex orientation}
\end{figure}
\medskip
A mapping $\sigma$ such that for each vertex $v$ of $G$, $\sigma(v)$ is
an orientation of $v$, shall be called a vertex-orientation of $G$, and
we say that $G$ has been vertex-oriented by $\sigma$, or that
$(G,\sigma)$ is a vertex-oriented graph. Suppose that $(G,\sigma)$ is a
vertex-oriented 4-regular planar graph. We say that an edge colour
assignment $\varepsilon$ is an o-colouring of $(G,\sigma)$ if for each
$v\in G$, exactly two colours appear on the four edges incident to $v$,
and in each cell of $\sigma(v)$, both colours appear. The colour assignment
$\varepsilon$ is then called an o-colouring of the vertex-oriented graph
$(G,\sigma)$. If at most $k$ colours have been used, then we say that
$(G,\sigma)$ has been $k$-o-coloured. The least $k$ such that there is a
$k$-o-colouring of $(G,\sigma)$ shall be called the o-chromatic index of
$(G,\sigma)$ and denoted by $\chi_o(G,\sigma)$. Note that
$\chi_o(G,\sigma)\ge2$ for every vertex-oriented 4-regular planar graph
$(G,\sigma)$.
If a vertex-oriented 4-regular planar graph has been o-coloured,
then the set of all edges of a given colour forms one or more (vertex
and edge) disjoint cycles in the graph. In particular, if a
vertex-oriented 4-regular planar graph has an
o-colouring, then every cut-vertex of the graph must be oriented transversely.
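The o-colouring condition is purely local, so it can be checked mechanically.
The following sketch is our illustration (not part of the paper): the data
format, function name, and the example of two vertices joined by four
parallel edges are all our own choices.

```python
# An orientation assigns to each vertex two cells: pairs of incident edge
# ids, consecutive in the embedding order at that vertex.
def is_o_colouring(orientation, colouring):
    for v, cells in orientation.items():
        incident = [e for cell in cells for e in cell]
        if len({colouring[e] for e in incident}) != 2:   # exactly two colours at v
            return False
        for cell in cells:
            if len({colouring[e] for e in cell}) != 2:   # both colours in each cell
                return False
    return True

# Two vertices u, v joined by four parallel edges 0,1,2,3 (embedding order).
orientation = {
    'u': [(0, 1), (2, 3)],
    'v': [(0, 1), (2, 3)],
}
print(is_o_colouring(orientation, {0: 'a', 1: 'b', 2: 'a', 3: 'b'}))  # True
print(is_o_colouring(orientation, {0: 'a', 1: 'a', 2: 'b', 3: 'b'}))  # False
```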
\begin{theorem}\label{theorem: 4ct equivalence}
The 4-colour theorem is equivalent to the assertion that every
vertex-orientation of any 4-regular planar graph with no cut-vertex
can be 3-o-coloured.
\end{theorem}
\proof
By Tait's result, it suffices to prove that the assertion that every
3-regular planar graph with no cut-edge can be 3-edge-coloured is
equivalent to the assertion that every vertex-orientation of any
4-regular planar graph with no cut-vertex can be
3-o-coloured.
Suppose that every 3-regular planar graph with no cut-edge can be
3-edge-coloured, and let $G$ be a 4-regular planar graph with
no cut-vertex. Further suppose that $\alpha$ is a vertex-orientation of $G$. At
each vertex $v$ with orientation $\set \set e,f\endset,\set g,
h\endset\endset$, replace $v$ by a new edge with endpoints $x$ and $y$,
with $e,f$ incident to $x$ and $g,h$ incident to $y$. The result is a
3-regular planar graph $H$. If $H$ has a cut-edge, $t$ say, then either
$t$ is an edge in $G$, in which case each endpoint of $t$ is a
cut-vertex of $G$, or else $t$ was one of the newly created edges,
replacing vertex $v$ say, in which case $v$ is a cut-vertex of $G$.
Since $G$ was without cut-vertices, neither of these situations is
possible. Thus $H$ has no cut-edge, and so by hypothesis, $H$ can be
3-edge-coloured. Suppose that $H$ has been 3-edge-coloured.
Contract all edges of $H$ that were not edges of $G$, thereby
obtaining $G$, but now each edge of $G$ has been assigned one of three
colours. Moreover, at each vertex $v$ with orientation $\set \set
e,f\endset,\set g, h\endset\endset$, the two colours that appear on $e$
and $f$ are the same as the two colours that appear on $g$ and $h$.
The result is therefore a 3-o-colouring of $(G,\alpha)$.
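The vertex-splitting step used above is mechanical, and can be sketched in
code (our illustration; the orientation format and the doubled-triangle
example are assumptions of the sketch, and the degree check ignores
planarity): each oriented vertex is replaced by a new edge whose two
endpoints absorb one cell each, so the result is 3-regular.

```python
# Each vertex of the 4-regular multigraph carries an orientation: two cells
# of two incident edge ids each.  Splitting v into x_v, y_v (joined by a new
# edge) attaches one cell to x_v and the other to y_v.
def split_vertices(orientation):
    h_edges, attach = [], {}               # attach: (vertex, edge id) -> endpoint
    for v, (cell_a, cell_b) in orientation.items():
        x, y = ('x', v), ('y', v)
        h_edges.append((x, y))             # the new edge replacing v
        for e in cell_a: attach[(v, e)] = x
        for e in cell_b: attach[(v, e)] = y
    # Re-attach each original edge between its two new endpoints.
    seen = {}
    for (v, e), node in attach.items():
        seen.setdefault(e, []).append(node)
    h_edges += [tuple(nodes) for nodes in seen.values()]
    return h_edges

# A 4-regular planar multigraph on 3 vertices: a triangle with every edge
# doubled, edges ab = {0,1}, bc = {2,3}, ca = {4,5}, arbitrarily oriented.
orientation = {
    'a': ((0, 1), (4, 5)),
    'b': ((0, 1), (2, 3)),
    'c': ((2, 3), (4, 5)),
}
H = split_vertices(orientation)
deg = {}
for u, v in H:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(sorted(deg.values()))                # all six degrees equal 3
```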
Conversely, suppose that every vertex-orientation of any 4-regular
planar graph with no cut-vertex can be 3-o-coloured. We prove that
every 3-regular planar graph with no cut-edge can be 3-edge-coloured by
induction on the number of vertices. To begin with, we observe that a
3-regular graph without cut-edge is also without loops. Thus the base
case consists of the 3-regular planar graphs without cut-edge on two vertices,
of which there is only one and it can be 3-edge-coloured. Suppose now
that $n>2$ is an integer such that any 3-regular planar graph without cut-edge
and fewer than $n$ vertices can be 3-edge-coloured, and let $G$ be a 3-regular planar
graph without cut-edge on $n$ vertices. As observed above, $G$ can't have
any loops. By our inductive hypothesis, we may assume that $G$ is connected.
Furthermore, suppose that $G$ contains a digon. Then we may replace the digon (two vertices
and the four edges incident to one or the other of the two vertices) by
a single edge, resulting in a 3-regular planar graph without cut-edge on $n-2$
vertices, which by our induction hypothesis is 3-edge-colourable. But then $G$ is
3-edge-colourable. Thus we may further assume that $G$ is without digons.
Petersen established in \cite{JP}\ that every 3-regular graph with at most two cut-edges has a
1-factor, so let $F$ be a 1-factor of $G$. Contract each edge $f\in F$,
putting the two edges incident to an endpoint of $f$ into a cell. The
result is an orientation of the vertex formed by contracting $f$, and
so we have formed a 4-regular planar graph $G'$ and given it a
vertex-orientation.
Suppose that $G'$ has a cut-vertex $v$, say. By
the handshake lemma, $G'-v$ must consist of two components, and for
each component, there are exactly two edges incident to $v$ with
endpoints in the component. Furthermore, since $G'$ is planar, the
two edges incident to $v$ with endpoints in the same component of
$G'-v$ must be consecutive in the embedding order at $v$. Let $f\in F$
denote the edge of $G$ that was contracted to form $v$, and let $x$ and
$y$ denote the endpoints of $f$. Furthermore, let $e_1$ and $e_2$,
respectively $f_1$ and $f_2$, denote the edges different from $f$ that
are incident to $x$, respectively $y$. As well, let $x_{1}$ and $x_2$
denote the non-$x$ endpoints of $e_1$ and $e_2$, respectively, and let
$y_{1}$ and $y_2$ denote the non-$y$ endpoints of $f_1$ and $f_2$,
respectively. As $G$ is without digons, it follows that $x_1\ne x_2$ and
$y_1\ne y_2$. Since $f$ is not a cut-edge of $G$, there is a path in
$G$ from $x$ to $y$ that does not use $f$, and so there is a path in
$G'$ from either $x_1$ or $x_2$ to either $y_1$ or $y_2$ that does not
use any of $e_1$, $e_2$, $f_1$, or $f_2$. We may suppose without loss
of generality that the vertices have been labelled so that there is a
path in $G'$ from $x_1$ to $y_1$ that does not use any of $e_1$, $e_2$,
$f_1$, or $f_2$, so that $x_1$ and $y_1$ belong to the same component of
$G'-v$, and thus $x_2$ and $y_2$ belong to the other component of
$G'-v$. It follows that there exist simple closed curves $S_1$ and $S_2$
(see Figure \ref{figure: cut-vertex}) such that of the edges of
$G$, $S_1$ meets only $e_1$ and $f_1$ and contains vertices $x_1$ and
$y_1$ in its interior, while $S_2$ meets only $e_2$ and $f_2$ and
contains vertices $x_2$ and $y_2$ in its interior. Let $G_1$ and $G_2$
denote the subgraphs of $G$ that are induced by the vertices of $G$ that lie
in the interior of $S_1$ and $S_2$, respectively, with an additional edge to
join $x_1$ to $y_1$ in $G_1$, and an additional edge to join $x_2$ to
$y_2$ in $G_2$. Then $G_1$ and $G_2$ are 3-regular planar graphs with
no cut-edge and fewer than $n$ vertices, so by the
induction hypothesis, we may 3-edge-colour each of $G_1$ and $G_2$. By
permuting the colours if necessary, we may arrange to have the new edge
in $G_1$ coloured differently from the new edge in $G_2$, which then
allows us to extend the colouring to obtain a 3-edge-colouring of $G$.
\begin{figure}[h]
\centering
\table{c}
$\vcenter{\xy /r17pt/:,
(0,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(2,0)+(0,.25)="y"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(2,0)+(0,-.25)="x"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"x";"y"**\dir{-},
(.6,-.5)="x1";"x"**\dir{-},
"y";(3.4,.5)="y2"**\dir{-},
(.6,.5)="y1";"y"**\dir{-},
"x";(3.4,-.5)="x2"**\dir{-},
"x1"*!<7pt,2pt>{x_1},
"y1"*!<7pt,-2pt>{y_1},
"x2"*!<-8pt,2pt>{x_2},
"y2"*!<-8pt,-2pt>{y_2},
"x1"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"y1"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"x2"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"y2"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"x"*!<0pt,7pt>{x},
"y"*!<0pt,-7pt>{y},
(0,0)*!(1.1,-1){S_1},
(4,0)*!(-1,-1){S_2},
(0,-1.8)*{},
(0,1.7)*{},
(9,0)="t",
(0,0)+"t"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)+"t"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(2,0)+(0,.25)+"t"="y",
(2,0)+(0,-.25)+"t"="x",
(.6,-.5)+"t"="x1",
(3.4,.5)+"t"="y2",
(.6,.5)+"t"="y1",
(3.4,-.5)+"t"="x2",
"x1"*!<7pt,2pt>{x_1},
"y1"*!<7pt,-2pt>{y_1},
"x2"*!<-8pt,2pt>{x_2},
"y2"*!<-8pt,-2pt>{y_2},
"x1"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"y1"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"x2"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"y2"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"x1";"y1"**\dir{-},
"x2";"y2"**\dir{-},
(0,0)+"t"*!(1.1,-1){G_1},
(4,0)+"t"*!(-1,-1){G_2},
(2,-1.75)*{\hbox{(a)}},
(2,-1.75)+"t"*{\hbox{(b)}},
\endxy}$
\endtable\caption{}\label{figure: cut-vertex}
\end{figure}
\noindent
We may therefore assume that $G'$ has no cut-vertex; that is, $G'$ is a
vertex-oriented 4-regular planar graph without cut-vertex, with vertex-orientation
$\alpha$ say, and by assumption, every such graph may be 3-o-coloured.
Suppose then that $(G',\alpha)$ has been 3-o-coloured. Give
each edge of $G$ the colour it has in $G'$, so that the only edges of $G$ that
have not been coloured are those of $F$. Let $f\in F$, and let $x$ and
$y$ denote the endpoints of $f$. Then the two edges incident to $x$ in
$G'$ will be coloured with two different colours, say $c_1$ and $c_2$,
and the two edges incident to $y$ will be coloured with the same two
colours, one with $c_1$ and the other with $c_2$. Thus $f$ can be
coloured with the third colour. The result is a 3-edge-colouring of
$G$. This completes the proof of the inductive step, and so the result
follows by induction.
\edproofmarker\vskip10pt
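The final step of the proof rests on a local condition that is easy to check mechanically: at every vertex of the cubic graph $G$, the three incident edges receive three distinct colours. The following Python sketch (an illustrative encoding of ours, not the paper's construction; the graph $K_4$ and its colouring are our own choices) verifies that property for a given edge-colouring.

```python
# A minimal sketch (not the paper's construction): check that an edge-colouring
# of a 3-regular graph is a proper 3-edge-colouring, i.e. the three edges at
# each vertex receive three distinct colours.  K4 and its colouring below are
# illustrative choices.
from collections import defaultdict

def is_proper_3_edge_colouring(edges, colour):
    """edges: list of (u, v) pairs; colour: dict mapping each edge to 1, 2 or 3."""
    at_vertex = defaultdict(list)
    for e in edges:
        u, v = e
        at_vertex[u].append(colour[e])
        at_vertex[v].append(colour[e])
    # every vertex of a cubic graph must see each of the three colours exactly once
    return all(sorted(cs) == [1, 2, 3] for cs in at_vertex.values())

# K4, coloured by its three perfect matchings:
k4_edges = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]
colouring = {(0, 1): 1, (2, 3): 1, (0, 2): 2, (1, 3): 2, (0, 3): 3, (1, 2): 3}
print(is_proper_3_edge_colouring(k4_edges, colouring))  # True
```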
\section{O-colourings and o-cycles}
A walk $v_0,e_1,v_1,\ldots,e_n,v_n$ of length $n\ge 2$ in a vertex-oriented
4-regular planar graph $(G,\sigma)$ shall be
called an o-walk if for each $i=1,2,\ldots,n-1$, $e_i$ and $e_{i+1}$
belong to different cells of $\sigma(v_i)$. An o-trail (respectively
o-circuit, o-cycle) is an o-walk that is a trail (respectively circuit,
cycle). If $(G,\sigma)$ has been o-coloured, then for each assigned
colour, the set of edges of $G$ that have been assigned that colour
forms a set of o-cycles with the property that no two have a vertex in
common. Thus an o-colouring of $(G,\sigma)$ provides a decomposition of
the edge set of $G$ into o-cycles, each of which has only edges of one
colour and such that any two cycles of the same colour have no vertex in
common.
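The decomposition into o-cycles follows from a local condition that can be tested on small examples. The sketch below uses an assumed encoding of ours (each vertex-orientation recorded as a pair of two-element cells of incident edge ids) and checks the condition that forces every colour class to split into vertex-disjoint o-cycles: each colour meets each vertex in exactly zero or two edges, and the two like-coloured edges lie in different cells.

```python
# A sketch with an assumed encoding (ours, not the paper's): the orientation at
# each vertex is a pair of 2-element cells of incident edge ids.  The check is
# the local condition behind the o-cycle decomposition.

def classes_straddle_cells(cells, colour):
    """cells: dict vertex -> (cell_a, cell_b), frozensets of edge ids;
    colour: dict edge id -> colour."""
    for v, (a, b) in cells.items():
        incident = list(a) + list(b)
        for c in set(colour[e] for e in incident):
            same = [e for e in incident if colour[e] == c]
            if len(same) != 2:
                return False  # each colour present at v must meet v in exactly 2 edges
            if (same[0] in a) == (same[1] in a):
                return False  # the like-coloured pair must straddle the two cells
    return True

# Two vertices joined by four parallel edges 1,2,3,4, coloured alternately
# in the embedding order:
cells = {'v': (frozenset({1, 2}), frozenset({3, 4})),
         'w': (frozenset({1, 2}), frozenset({3, 4}))}
print(classes_straddle_cells(cells, {1: 'a', 2: 'b', 3: 'a', 4: 'b'}))  # True
print(classes_straddle_cells(cells, {1: 'a', 2: 'a', 3: 'b', 4: 'b'}))  # False
```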
Let $G_1$ and $G_2$ be (disjoint) graphs. Choose edges $e$ in $G_1$ and $f$ in
$G_2$ and remove them. Then join one endpoint of $e$ to one endpoint of
$f$, and join the other endpoint of $e$ to the other endpoint of $f$.
Denote the result by $G_1\#_{e,f}G_2$, or simply $G_1\# G_2$ when the
edges $e$ and $f$ are understood (there are two ways to carry out this construction,
but for convenience, we shall refer to both graphs -- in general, nonisomorphic --
by the same notation). Note that if $G_1$ and $G_2$ are
4-regular graphs, then $G_1\# G_2$ is also 4-regular, and if both $G_1$
and $G_2$ are planar, then $G_1\# G_2$ is planar. Conversely, suppose
that $G$ is a 4-regular graph. By the handshake lemma, it is not
possible for $G$ to have a cut-edge. However, $G$ might have a cut-set
of size 2. Suppose that $\set e,f\endset$ is in fact a cut-set for $G$.
Then again by the handshake lemma, $G-\set e,f\endset$ must have exactly
two connected components. Let $G_1$ denote the graph obtained from one
of these two components by creating an edge joining the endpoints of $e$
and $f$ that belong to the component (so the new edge is a loop if these
two endpoints are equal). Let $G_2$ denote the graph obtained by
applying the same procedure to the second component. Then $G=G_1\# G_2$ (that is
to say, one of the two ways to carry out the construction yields $G$).
Moreover, if $G$ is planar, then so are $G_1$ and $G_2$. Finally,
observe that there is a natural way to obtain vertex-orientations
$\sigma_1$ of $G_1$ and $\sigma_2$ of $G_2$ from a vertex-orientation
$\sigma$ of $G$, and vice-versa, and we shall say that $\sigma$ is
compatible with $\sigma_1$ and $\sigma_2$ and vice-versa.
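The $\#$ construction is easy to experiment with on edge lists. The following minimal Python sketch (the encoding is ours; tags $0$ and $1$ keep the two vertex sets disjoint, and one of the two possible joinings is built) confirms on a small example that the connected sum of two 4-regular multigraphs is again 4-regular.

```python
# A minimal sketch of G1 #_{e,f} G2 on edge lists (an illustrative encoding,
# not the paper's notation).
from collections import Counter

def conn_sum(edges1, e, edges2, f):
    rest1 = list(edges1); rest1.remove(e)   # delete one copy of e from G1
    rest2 = list(edges2); rest2.remove(f)   # delete one copy of f from G2
    g1 = [((u, 0), (v, 0)) for u, v in rest1]
    g2 = [((u, 1), (v, 1)) for u, v in rest2]
    (a, b), (c, d) = e, f
    # join one endpoint of e to one endpoint of f, and the other two together
    return g1 + g2 + [((a, 0), (c, 1)), ((b, 0), (d, 1))]

def degrees(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1   # a loop (u, u) correctly contributes 2 to deg[u]
    return deg

# two copies of the 4-regular multigraph with four parallel edges:
E = [(0, 1)] * 4
G = conn_sum(E, (0, 1), E, (0, 1))
print(all(d == 4 for d in degrees(G).values()))  # True
```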
\begin{lemma}\label{lemma: prime is ok}
Let $G$, $G_1$, and $G_2$ be 4-regular planar graphs such that $G=G_1\#
G_2$. Suppose further that $G$ is vertex-oriented by $\sigma$, and give
$G_1$ and $G_2$ the induced vertex-orientations $\sigma_1$ and
$\sigma_2$, respectively. For every positive integer $k$, if
$(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ can
be $k$-o-coloured, then $(G,\sigma)$ can be $k$-o-coloured.
\end{lemma}
\proof
Suppose that $G_1$ and $G_2$ have been $k$-o-coloured. If the new edges
in $G_1$ and $G_2$ have been coloured differently, then we may permute
the colours in the colouring of $G_2$ to arrange that the two new edges
have been coloured the same, say with colour $c_1$. Then assign $e$ and
$f$ colour $c_1$ to obtain a $k$-o-colouring of $(G,\sigma)$.
\edproofmarker\vskip10pt
Now suppose that $G$ is a 4-regular graph with a cut-vertex $v$. As we
have seen in the proof of Theorem \ref{theorem: 4ct equivalence}, $G-v$ must
consist of two components, and for each component, there are exactly two
edges incident to $v$ with endpoints in the component. Furthermore, since
$G$ is planar, the two edges incident to $v$ with endpoints in the
same component of $G-v$ must be consecutive in the embedding order at
$v$. Thus in any plane embedding of $G$, there exists a simple closed
curve $S_1$ that meets exactly two edges incident to $v$ and no other
edges of $G$ and contains one of the components of $G-v$ in its
interior, and a simple closed curve $S_2$ that meets the other two edges
incident to $v$ and no other edges of $G$ and contains the other
component of $G-v$ in its interior. Let $G_1$ be the graph formed from
one of the components of $G-v$ by creating a new edge whose endpoints
are those of the two edges incident to $v$ that meet the component in
question, and let $G_2$ be the graph constructed by the same process but
applied to the other component of $G-v$. We shall use the notation
$G=G_1\#_v G_2$ to denote this situation. Moreover, there is a natural
way to associate two different vertex-orientations of $G$ corresponding
to a vertex-orientation of each of $G_1$ and $G_2$, depending on the
orientation assigned to $v$. We shall let
$\connsumparallelvertex{G_1}{G_2}{v}$ indicate the choice of
orientation at $v$ whose cells are the pairs of edges incident to $G_1$,
respectively $G_2$, and we shall call this the nontransverse orientation at
$v$. The other orientation, called the transverse orientation at $v$, shall be denoted by
$\connsumtransversevertex{G_1}{G_2}{v}$.
If $G$ is a vertex-oriented 4-regular planar graph and $v$ is a vertex
that is not a loop-anchor, we may form a new 4-regular planar graph by
removing $v$ and identifying each edge $e$ in an edge cell at $v$ with
the unique edge in the other edge cell at $v$ that is adjacent to $e$ in
the embedding order at $v$ (see Figure \ref{figure: smoothing figure}). If $v$ is
a loop-anchor, oriented transversely or non-transversely, smoothing $v$
is achieved by removing the loop and $v$ and identifying the other two
edges incident to $v$. The resulting graph $G'$ is vertex-oriented,
and shall be said to have been obtained from $G$ by smoothing $v$.
\begin{figure}[h]
\centering
\table{c@{\hskip60pt}c}\\
$\vcenter{\xy /r25pt/:,
(0,0);(1,1)**\dir{-},
(0,1);(1,0)**\dir{-},
(.5,.5)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-8pt,0pt>{v},
(.5,0);(.5,1)**\dir{-}*\dir{>},(.5,0)*\dir{<},
\endxy}$
&
$\vcenter{\xy /r25pt/:,
(0,0);(0,1)**\crv{(.4,.5)},
(1,0);(1,1)**\crv{(.6,.5)},
\endxy}$
\\
\noalign{\vskip6pt}
(a) & (b)
\endtable\caption{}\label{figure: smoothing figure}
\end{figure}
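Smoothing can be simulated directly. The sketch below rests on an assumption of ours about the encoding: the two cells of $\sigma(v)$ pair edges that are consecutive in the embedding order at $v$, so that each edge has a unique adjacent edge lying in the other cell. Under that assumption it returns the two pairs of edges that are identified when $v$ is smoothed.

```python
# A sketch of smoothing a (non-loop-anchor) vertex v, assuming each cell of
# sigma(v) consists of edges consecutive in the embedding order, so every edge
# has a unique adjacent edge in the other cell.  Encoding is illustrative.

def smooth(order, cells):
    """order: the four edges at v in cyclic embedding order;
    cells: the orientation at v, a pair of 2-element sets of edges.
    Returns the pairs of edges identified by smoothing v."""
    cell_of = {e: i for i, cell in enumerate(cells) for e in cell}
    pairs = set()
    for i, e in enumerate(order):
        partners = [order[j % 4] for j in (i - 1, i + 1)
                    if cell_of[order[j % 4]] != cell_of[e]]
        if len(partners) != 1:
            raise ValueError("no unique adjacent edge in the other cell")
        pairs.add(frozenset((e, partners[0])))
    return pairs

result = smooth(['a', 'b', 'c', 'd'], ({'a', 'b'}, {'c', 'd'}))
print(result == {frozenset({'a', 'd'}), frozenset({'b', 'c'})})  # True
```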
\begin{lemma}\label{lemma: transverse cut vertex can be o-coloured}
Let $(G,\sigma)$ be a vertex-oriented 4-regular planar graph with a
cut-vertex $v$ transversely oriented, so that
$G=\connsumtransversevertex{G_1}{G_2}{v}$ for some vertex-oriented
4-regular planar graphs $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ such that
$\sigma_1$ and $\sigma_2$ are consistent with $\sigma$. If
$(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ can be $k$-o-coloured, then
$(G,\sigma)$ can be $k$-o-coloured.
\end{lemma}
\proof
Embed $G$ in the plane as shown in Figure \ref{figure: transverse cut figure}
(a), where each of the closed curves $S_1$ and $S_2$ contains at least
one vertex in its interior, and then smooth $v$,
obtaining 4-regular planar graphs $G_1$ and $G_2$ as shown in
Figure \ref{figure: transverse cut figure} (b). By assumption, we may
o-colour each of $G_1$ and $G_2$ with $k\ge2$ colours. Suppose that colour $c_1$ appears on $e$, and choose
a second colour $c_2$. By permuting the colours in $G_2$ if
necessary, we can arrange to have $f$ coloured with $c_2$. Then colour
every edge of $G$ that is an edge in either $G_1$ or $G_2$ with the
colour it has in the respective graphs, and colour the edges incident
to $v$ as shown in Figure \ref{figure: transverse coloured figure}.
The result is a $k$-o-colouring for $(G,\sigma)$.
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5);(3.4,.5)**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5);(3.4,-.5)**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
(2,0)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-8pt,0pt>{v},
(2,-.45);(2,.45)**\dir{-}*\dir{>},(2,-.45)*\dir{<},
(0,0)*!(1,-1){S_1},
(4,0)*!(-1,-1){S_2},
(0,-1.8)*{},
(0,1.7)*{},
(8,0)="x",
(0,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)+"x";(.6,.5)+"x"**\crv{(1.3,-.5)+"x" & (1.5,0)+"x" & (1.3,.5)+"x"},
(1.5,0)+"x"*!<-3pt,0pt>{e},
(2.5,0)+"x"*!<3pt,0pt>{f},
(3.4,-.5)+"x";(3.4,.5)+"x"**\crv{(2.8,-.5)+"x" & (2.5,0)+"x" & (2.8,.5)+"x"},
(0,0)+"x"*!(0,1.6){G_1},
(4,0)+"x"*!(0,1.6){G_2},
(2,-2.5)*{\hbox{(a)}},
(2,-2.5)+"x"*{\hbox{(b)}},
\endxy}$
\endtable\caption{}\label{figure: transverse cut figure}
\end{figure}
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r25pt/:,
(0,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)="a";(3.4,.5)="b"**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5)="c";(3.4,-.5)="d"**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
(2,0)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-8pt,0pt>{v},
(2,-.45);(2,.45)**\dir{-}*\dir{>},(2,-.45)*\dir{<},
"a"*!<-15pt,5pt>{c_1},
"c"*!<-15pt,-4pt>{c_1},
"b"*!<15pt,-4pt>{c_2},
"d"*!<15pt,5pt>{c_2},
\endxy}$
\endtable\caption{}\label{figure: transverse coloured figure}
\end{figure}
\edproofmarker\vskip10pt
\begin{lemma}\label{lemma: flyping parallel}
If $(G,\sigma)$ is a vertex-oriented 4-regular planar graph of the form
as shown in Figure \ref{figure: flyping parallel figure} (a) (where it is not
intended that the endpoints of the edges entering $S_1$, respectively
$S_2$, need be distinct), and each of $S_1$ and $S_2$ contains at least
one vertex in its interior, and each of the compatibly
vertex-oriented 4-regular planar graphs $(G_1,\sigma_1)$ and
$(G_2,\sigma_2)$ in Figure \ref{figure: flyping parallel figure} (b) can be
$k$-o-coloured, then $(G,\sigma)$ can be $k$-o-coloured.
\end{lemma}
\begin{figure}[h]
\centerline{\table{c}
$\vcenter{\xy /r15pt/:,
(0,2.2)*{},
(-.5,0)="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(4.5,0)="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(.6,-.5)="a";(3.4,.5)="b"**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5)="c";(3.4,-.5)="d"**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
(2,0)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<0pt,-8pt>{v},
(1.5,0);(2.5,0)**\dir{-}*\dir{>},(1.5,0)*\dir{<},
"a"+(-.2,-.5);"d"+(.2,-.5)**\dir{-};
"b"+(.2,.5);"c"+(-.2,.5)**\dir{-};
(-.5,0)*!(1.5,-1.5){S_1},
(4.5,0)*!(-1.5,-1.5){S_2},
(2,-2.5)*{\hbox{(a)}},
(10,0)="x",
(1,0)="y",
(-.5,0)+"x"="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(4.5,0)+"x"+"y"="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(.6,-.5)+"x"="a",
"a"+(-.2,-.5)="bl",
(3.4,.5)+"x"+"y"="b",
(.6,.5)+"x"="c",
(3.4,-.5)+"x"+"y"="d",
"a";"c"**\dir{}?(.5)="m",
"m"+(1.1,0)="arrow"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"arrow"-(.45,0)="le";"arrow"+(.45,0)**\dir{-}*\dir{>},"le"*\dir{<},
"d"+(.2,-.5)="br",
"b"+(.2,.5)="tr",
"c"+(-.2,.5)="tl",
"tl";"a"**\crv{"tl"+(3,0) & "a"+(.8,0)},
"bl";"c"**\crv{"bl"+(3,0) & "c"+(.8,0)},
"tr";"d"**\crv{"tr"+(-3,0) & "d"+(-.8,0)},
"br";"b"**\crv{"br"+(-3,0) & "b"+(-.8,0)},
"b";"d"**\dir{}?(.5)="n",
"n"+(-1.1,0)="arrow"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"arrow"-(.45,0)="le";"arrow"+(.45,0)**\dir{-}*\dir{>},"le"*\dir{<},
(-.5,0)+"x"*!(1.5,-1.5){S_1},
(4.5,0)+"x"+"y"*!(-1.5,-1.5){S_2},
(-.5,0)+"x"*!(0,2){G_1},
(4.5,0)+"x"+"y"*!(0,2){G_2},
(2.5,-2.5)+"x"*{\hbox{(b)}},
\endxy}$
\endtable}\caption{}\label{figure: flyping parallel figure}
\end{figure}
\proof
By hypothesis, both $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ can be
$k$-o-coloured. Label the (necessarily distinct) colours on the top
and bottom edges incident to the copy of $v$ in $G_1$ as $c_1$ and
$c_2$, and label the colours on the other two edges incident to that
vertex with $x$ and $y$, so that $\set x,y\endset=\set
c_1,c_2\endset$. By permuting the colours in $G_2$ if necessary, we
can ensure that the $k$-o-colouring of $(G_2,\sigma_2)$ is as shown
in Figure \ref{figure: new graphs}, where $\set r,s\endset=\set c_1,c_2\endset$.
\medskip
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,0)="x",
(1,0)="y",
(-.5,0)+"x"="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(4.5,0)+"x"+"y"="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(.6,-.5)+"x"="a",
"a"+(-.2,-.5)="bl",
(3.4,.5)+"x"+"y"="b",
(.6,.5)+"x"="c",
(3.4,-.5)+"x"+"y"="d",
"a";"c"**\dir{}?(.5)="m",
"m"+(1.1,0)="arrow"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"arrow"-(.45,0)="le";"arrow"+(.45,0)**\dir{-}*\dir{>},"le"*\dir{<},
"d"+(.2,-.5)="br",
"b"+(.2,.5)="tr",
"c"+(-.2,.5)="tl",
"tl";"a"**\crv{"tl"+(3,0) & "a"+(.8,0)},
"bl";"c"**\crv{"bl"+(3,0) & "c"+(.8,0)},
"tr";"d"**\crv{"tr"+(-3,0) & "d"+(-.8,0)},
"br";"b"**\crv{"br"+(-3,0) & "b"+(-.8,0)},
"b";"d"**\dir{}?(.5)="n",
"n"+(-1.1,0)="arrow"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"arrow"-(.45,0)="le";"arrow"+(.45,0)**\dir{-}*\dir{>},"le"*\dir{<},
(-.5,0)+"x"*!(1.5,-1.5){S_1},
(4.5,0)+"x"+"y"*!(-1.5,-1.5){S_2},
(-.5,0)+"x"*!(0,2){G_1},
(4.5,0)+"x"+"y"*!(0,2){G_2},
"tl"+(1,0)*!<-8pt,-2pt>{c_1},
"tr"+(-1,0)*!<8pt,-2pt>{c_1},
"bl"+(1,0)*!<-8pt,2pt>{c_2},
"br"+(-1,0)*!<8pt,2pt>{c_2},
"c"+(.25,0)*!<-9pt,0pt>{x},
"b"+(-.25,0)*!<9pt,0pt>{r},
"a"+(.25,0)*!<-9pt,1pt>{y},
"d"+(-.25,0)*!<9pt,1pt>{s},
\endxy}$
\endtable\caption{}\label{figure: new graphs}
\end{figure}
\medskip
\noindent Now assign to each edge of $G$ that is also an edge of either
$G_1$ or $G_2$ the colour it has been assigned in the $k$-o-colouring
of the respective graphs, and complete the colouring of the edges
incident to $v$ as shown in Figure \ref{figure: another one}.
\medskip
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,2.2)*{},
(-.5,0)="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(4.5,0)="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(.6,-.5)="a";(3.4,.5)="b"**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5)="c";(3.4,-.5)="d"**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
(2,0)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<0pt,-8pt>{v},
"v"+(0,1.3)*{c_1},
"v"+(0,-1.3)*{c_2},
(1.5,0);(2.5,0)**\dir{-}*\dir{>},(1.5,0)*\dir{<},
"a"+(-.2,-.5);"d"+(.2,-.5)**\dir{-};
"b"+(.2,.5);"c"+(-.2,.5)**\dir{-};
"a"*!<-18pt,3pt>{y},
"c"*!<-18pt,-2pt>{x},
"b"*!<18pt,-2pt>{r},
"d"*!<18pt,3pt>{s},
(-.5,0)*!(1.5,-1.5){S_1},
(4.5,0)*!(-1.5,-1.5){S_2},
\endxy}$
\endtable
\caption{}\label{figure: another one}
\end{figure}
\medskip
\noindent The result is a $k$-o-colouring of $G$.
\edproofmarker\vskip10pt
Before continuing on to the main theorem, we introduce one final bit of
terminology. We say that an edge-colouring of a 4-regular planar graph
$G$ is alternating at $v$ if exactly two colours appear on the edges
incident to $v$, and they appear in alternating order as we examine the
edges in the embedding order. If an edge-colouring of $G$ is
alternating at $v$, then it is compatible with either of the two
possible vertex-orientations at $v$.
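For four edges listed in embedding order at $v$, alternation is a one-line check: opposite edges agree and adjacent edges differ, which already forces exactly two colours. The encoding below is an illustrative sketch of ours.

```python
# An illustrative encoding (ours): the colours of the four edges at v,
# listed in embedding order.  Alternating means opposite edges agree and
# adjacent edges differ (so exactly two colours appear).

def is_alternating(colours):
    a, b, c, d = colours
    return a == c and b == d and a != b

print(is_alternating(['red', 'blue', 'red', 'blue']))   # True
print(is_alternating(['red', 'red', 'blue', 'blue']))   # False
```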
\begin{theorem}\label{theorem: o-colouring theorem}
Every vertex-oriented 4-regular planar graph $(G,\sigma)$ in which each
cut-vertex or loop-anchor is oriented transversely can be o-coloured.
\end{theorem}
\proof
The proof is by induction on the number of vertices. There is only one
such graph on a single vertex, and two such graphs
on two vertices. O-colourings for each are shown in Figure \ref{figure: base
case figure}. Note that in Figure \ref{figure: base case figure} (b), we have
given an edge-colouring that is alternating at each vertex, and is
therefore an o-colouring for any vertex-orientation of the graph.
\begin{figure}[h]
\centerline{\table{c@{\hskip40pt}c@{\hskip40pt}c}
$\vcenter{\xy /r20pt/:,
(0,0)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"v"-(0,.45)="le";"v"+(0,.45)**\dir{-}*\dir{>},"le"*\dir{<},
"v";"v"**\crv{ "v"+(.7,-.7) & "v"+(1.2,0) & "v"+(.7,.7)},
"v";"v"**\crv{ "v"+(-.7,-.7) & "v"+(-1.2,0) & "v"+(-.7,.7)},
"v"+(-1,.6)*{c_1},
"v"+(1,.6)*{c_2},
\endxy}$
&
$\vcenter{\xy /r20pt/:,
(0,0)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(0,1)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"v";"w"**\crv{ (-.6,.5)},
"v";"w"**\crv{ (.6,.5)},
"v";"w"**\crv{ "v"+(-1,0) & (-2,.5) & "w"+(-1,0)},
"v";"w"**\crv{ "v"+(1,0) & (2,.5) & "w"+(1,0)},
"v"+(-1.9,.7)*{c_1},
"v"+(1.9,.7)*{c_2},
"v"+(-.7,.5)*{c_2},
"v"+(.7,.5)*{c_1},
\endxy}$
&
$\vcenter{\xy /r20pt/:,
(0,0)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"v"-(0,.45)="le";"v"+(0,.45)**\dir{-}*\dir{>},"le"*\dir{<},
"v";"v"**\crv{ "v"+(-.7,-.7) & "v"+(-1.2,0) & "v"+(-.7,.7)},
"v"+(-1,.6)*{c_1},
(1.25,0)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"w"-(0,.45)="wle";"w"+(0,.45)**\dir{-}*\dir{>},"wle"*\dir{<},
"w";"w"**\crv{ "w"+(.7,-.7) & "w"+(1.2,0) & "w"+(.7,.7)},
"w"+(1,.6)*{c_1},
"v";"w"**\dir{}?(.5)="x",
"v";"w"**\crv{"x"+(0,.75)},
"v";"w"**\crv{"x"+(0,-.75)},
"x"+(0,.75)*{c_2},
"x"-(0,.75)*{c_2},
\endxy}$
\\
\noalign{\vskip1pt}
(a) & (b) & (c)
\endtable}\caption{}\label{figure: base case figure}
\end{figure}
Suppose now that $n>2$ is an integer such that every vertex-oriented
4-regular planar graph on fewer than $n$ vertices for which any
cut-vertex or loop-anchor is oriented transversely can be o-coloured,
and let $(G,\sigma)$ be a vertex-oriented 4-regular planar graph on
$n$ vertices in which any cut-vertex or loop-anchor has been oriented
transversely. By Lemma \ref{lemma: prime is ok}, we may suppose that $G$ has
no 2-edge-cut whose two edges share no endpoint.
Suppose first of all that $G$ does have a cut-vertex $v$, so that $G$
is as shown in Figure \ref{figure: transverse cut figure} (a). If either of
$G_1$ or $G_2$ as shown in Figure \ref{figure: transverse cut figure} (b)
contains a cut-vertex that is oriented nontransversely, then that
vertex is a cut-vertex of $G$ oriented nontransversely, which is not
possible. If either of $G_1$ or $G_2$ contains a loop-anchor $w$ that
is oriented nontransversely, then in $G$, $w$ is either a cut-vertex or
a loop-anchor that is oriented nontransversely, neither of which is possible. Thus by
our inductive hypothesis, each of $G_1$ and $G_2$, with the
vertex-orientations induced by $\sigma$, can be o-coloured, and then by
Lemma \ref{lemma: transverse cut vertex can be o-coloured}, $G$ can be
o-coloured. If $G$ contains a loop-anchor $v$, then $v$ is oriented
transversely, in which case we can o-colour the vertex-oriented graph that
is obtained from $(G,\sigma)$ by smoothing $v$, and consequently we can o-colour
$(G,\sigma)$. Thus we may assume that $G$ has no loops or cut-vertices.
\noindent Case 1: $G$ contains a vertex $v$ such that $(G,\sigma)$ is of
the form shown in Figure \ref{figure: flyping parallel figure}
(a). Smooth $v$ to form the vertex-oriented graphs $(G_1,\sigma_1)$ and
$(G_2,\sigma_2)$ as shown in Figure \ref{figure: flyping parallel figure} (b).
Neither can contain a cut-vertex or a loop-anchor, so by our induction
hypothesis, each can be o-coloured. Then by Lemma \ref{lemma: flyping
parallel}, $G$ can be o-coloured.
We may therefore suppose that Case 1 does not occur.
\noindent Case 2: $G$ contains a vertex $v$ such that $(G,\sigma)$ is of
the form shown in Figure \ref{figure: transverse 4-tangle figure} (a), where
each of the closed curves $S_1$ and $S_2$ contains at least one vertex in
its interior. Smooth $v$ to form the vertex-oriented graph
$(G^{(1)},\sigma_1)$ as shown in Figure \ref{figure: transverse 4-tangle figure}
(b), where the marked colours are for later reference.
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)="dl";(3.4,.5)="ur"**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5)="ul";(3.4,-.5)="dr"**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
"ul"+(-.3,.2)="tl",
"ur"+(.3,.2)="tr",
"tl";"tr"**\crv{"tl"+(.2,.2) & "tr"+(-.2,.2)},
"dl"+(-.3,-.2)="bl",
"dr"+(.3,-.2)="br",
"bl";"br"**\crv{"bl"+(.2,-.2) & "br"+(-.2,-.2)},
(2,0)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-8pt,0pt>{v},
(2,-.45);(2,.45)**\dir{-}*\dir{>},(2,-.45)*\dir{<},
(0,0)*!(1,-1){S_1},
(4,0)*!(-1,-1){S_2},
(0,-1.8)*{},
(0,1.7)*{},
(8,0)="x",
(0,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)+"x"="dl";(.6,.5)+"x"="ul"**\crv{(1.3,-.5)+"x" & (1.5,0)+"x" & (1.3,.5)+"x"},
(1.5,0)+"x"*!<-4pt,-2pt>{c_1},
(2.5,0)+"x"*!<3pt,-2pt>{c_2},
(3.4,-.5)+"x"="dr";(3.4,.5)+"x"="ur"**\crv{(2.8,-.5)+"x" & (2.5,0)+"x" & (2.8,.5)+"x"},
"ul"+(-.3,.2)="tl",
"ur"+(.3,.2)="tr",
"tl";"tr"**\crv{"tl"+(.2,.2) & "tr"+(-.2,.2)},
"dl"+(-.3,-.2)="bl",
"dr"+(.3,-.2)="br",
"bl";"br"**\crv{"bl"+(.2,-.2) & "br"+(-.2,-.2)},
"tl";"tr"**\dir{}?(.5)="labeltop",
"labeltop"*!<0pt,-8pt>{c_3},
"bl";"br"**\dir{}?(.5)="labelbot",
"labelbot"*!<0pt,9pt>{c_3},
(2,-2.5)*{\hbox{(a) }},
(2,-2.3)+"x"*{\hbox{(b) $(G^{(1)},\sigma_1)$}},
\endxy}$
\endtable\caption{}\label{figure: transverse 4-tangle figure}
\end{figure}
\noindent If $(G^{(1)},\sigma_1)$ contains a cut-vertex or a
loop-anchor $w$ oriented nontransversely, then in $G$, $w$ provides a
Case 1 scenario, and we have assumed that there are no such vertices in
$G$. Thus by our inductive hypothesis, there is an o-colouring of
$(G^{(1)},\sigma_1)$, as shown in Figure \ref{figure: transverse 4-tangle
figure} (b). Note that the top and bottom edges between the subgraphs
enclosed by closed curves $S_1$ and $S_2$ must belong to the same
o-cycle, and thus have the same colour, labelled $c_3$. There are three
subcases to consider.
\noindent Case 2 (i): $c_1\ne c_2$. Then give each edge of $G$ that is also
an edge of $G^{(1)}$ the colour it received in the o-colouring
of $(G^{(1)},\sigma_1)$, and colour the edges incident to $v$ as shown
in Figure \ref{figure: transverse original coloured figure}. The result is an
o-colouring of $(G,\sigma)$.
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)="dl";(3.4,.5)="ur"**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5)="ul";(3.4,-.5)="dr"**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
"ul"+(-.3,.2)="tl",
"ur"+(.3,.2)="tr",
"tl";"tr"**\crv{"tl"+(.2,.2) & "tr"+(-.2,.2)},
"dl"+(-.3,-.2)="bl",
"dr"+(.3,-.2)="br",
"bl";"br"**\crv{"bl"+(.2,-.2) & "br"+(-.2,-.2)},
(2,0)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-8pt,0pt>{v},
(2,-.45);(2,.45)**\dir{-}*\dir{>},(2,-.45)*\dir{<},
(0,0)*!(1,-1){S_1},
(4,0)*!(-1,-1){S_2},
"tl";"tr"**\dir{}?(.5)="labeltop",
"labeltop"*!<0pt,-8pt>{c_3},
"bl";"br"**\dir{}?(.5)="labelbot",
"labelbot"*!<0pt,9pt>{c_3},
"ul"+(.25,0)*!<-13pt,-1pt>{c_1},
"ur"+(-.25,0)*!<14pt,-1.5pt>{c_2},
"dl"+(.25,0)*!<-13pt,2.5pt>{c_1},
"dr"+(-.25,0)*!<14pt,1.5pt>{c_2},
\endxy}$
\endtable\caption{}\label{figure: transverse original coloured figure}
\end{figure}
\noindent Case 2 (ii): $c_1=c_2\ne c_3$. As shown in
Figure \ref{figure: transverse g2 figure} (a), form the vertex-oriented
4-regular planar graph $(G^{(2)},\sigma_2)$, where the
vertex-orientation is that induced by $\sigma_1$, and give it the
o-colouring obtained from that of $(G^{(1)},\sigma_1)$ as shown in the
figure. Choose a third colour $c\ne c_2,c_3$ (so $c$ is a new colour if
the o-colouring of $(G^{(1)},\sigma_1)$ used only two colours), and in
this o-colouring of $(G^{(2)},\sigma_2)$, swap $c$ and $c_2$, so that
now $(G^{(2)},\sigma_2)$ is o-coloured as shown in Figure \ref{figure: transverse
g2 figure} (b).
\begin{figure}[h]
\centering\table{c@{\hskip60pt}c}
$\vcenter{\xy /r20pt/:,
(-2,0)="x",
(1,0)="y",
(4.5,2.1)+"x"+"y"*{},
(4.5,0)+"x"+"y"="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(3.4,.5)+"x"+"y"="b",
(3.4,-.5)+"x"+"y"="d",
"d"+(.2,-.5)="br",
"b"+(.2,.5)="tr",
"tr";"br"**\crv{"tr"+(-2.5,0) & "br"+(-2.5,0)}?(.5)="t",
"d";"b"**\crv{"d"+(-1.2,0) & "b"+(-1.2,0)}?(.5)="c",
(4.5,0)+"x"+"y"*!(-1.5,-1.5){S_2},
"t"*!<6pt,0pt>{c_3},
"c"*!<6pt,0pt>{c_2},
(4.5,-2.3)+"x"+"y"*{\hbox{(a)\quad $(G^{(2)},\sigma_2)$}},
\endxy}$
&
$\vcenter{\xy /r20pt/:,
(-2,0)="x",
(1,0)="y",
(4.5,0)+"x"+"y"="centre",{(0,0);(1.5,0):,"centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)}},
(3.4,.5)+"x"+"y"="b",
(3.4,-.5)+"x"+"y"="d",
"d"+(.2,-.5)="br",
"b"+(.2,.5)="tr",
"tr";"br"**\crv{"tr"+(-2.5,0) & "br"+(-2.5,0)}?(.5)="t",
"d";"b"**\crv{"d"+(-1.2,0) & "b"+(-1.2,0)}?(.5)="c",
(4.5,0)+"x"+"y"*!(-1.5,-1.5){S_2},
"t"*!<6pt,0pt>{c_3},
"c"*!<6pt,0pt>{c},
(4.5,-2.3)+"x"+"y"*{\hbox{(b)\quad $(G^{(2)},\sigma_2)$}},
\endxy}$
\endtable\caption{}\label{figure: transverse g2 figure}
\end{figure}
\noindent Lift this colouring back to $(G^{(1)},\sigma_1)$ as shown in
Figure \ref{figure: transverse rebuilt figure}. Then we are back in Case 2 (i),
and so $(G,\sigma)$ can be o-coloured.
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,0)="x",
(0,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)+"x",{\ellipse(1){.}},
(4,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)+"x"="dl";(.6,.5)+"x"="ul"**\crv{(1.3,-.5)+"x" & (1.5,0)+"x" & (1.3,.5)+"x"},
(1.5,0)+"x"*!<-4pt,-2pt>{c_1},
(2.5,0)+"x"*!<3pt,-2pt>{c},
(3.4,-.5)+"x"="dr";(3.4,.5)+"x"="ur"**\crv{(2.8,-.5)+"x" & (2.5,0)+"x" & (2.8,.5)+"x"},
"ul"+(-.3,.2)="tl",
"ur"+(.3,.2)="tr",
"tl";"tr"**\crv{"tl"+(.2,.2) & "tr"+(-.2,.2)},
"dl"+(-.3,-.2)="bl",
"dr"+(.3,-.2)="br",
"bl";"br"**\crv{"bl"+(.2,-.2) & "br"+(-.2,-.2)},
"tl";"tr"**\dir{}?(.5)="labeltop",
"labeltop"*!<0pt,-8pt>{c_3},
"bl";"br"**\dir{}?(.5)="labelbot",
"labelbot"*!<0pt,9pt>{c_3},
(0,0)*!(1,-1){S_1},
(4,0)*!(-1,-1){S_2},
\endxy}$
\endtable\caption{}\label{figure: transverse rebuilt figure}
\end{figure}
\noindent Case 2 (iii): $c_1=c_2=c_3$. Suppose first that in the o-colouring
of $(G^{(1)},\sigma_1)$ as shown in Figure \ref{figure: transverse 4-tangle figure} (b),
the edges $e$ and $f$ with colour labels $c_1$ and $c_2$, respectively, do not
belong to the same o-cycle. Then the two edges $u$ (up) and $d$ (down) shown in
Figure \ref{figure: transverse 4-tangle figure} (b) with colour
label $c_3$ must belong to the same o-cycle, $O$ say, and $e$ and $f$
cannot both belong to $O$. Without loss of generality, suppose
that $f$ is not in $O$. Then we may permute the colours of the edges
that appear in the interior of $S_2$ other than those that belong to
$O$ in such a way that $f$ is not coloured with colour $c_1$ (if the
edges of $S_2$ had been coloured with only two colours, then a third
colour would need to be introduced). This would then place us in the
context of Case 2 (i), and so $(G,\sigma)$ is o-colourable.
Suppose now that $e$ and $f$ belong to the same o-cycle, which
is then the o-cycle $O$ that contains $u$ and $d$. Since at least one
vertex of $G$ is contained in the interior of $S_2$, $S_2$ must contain
at least one o-cycle in addition to $O$, so let $C$ be an o-cycle
contained in the interior of $S_2$. Since $G$ contains no loop-anchors,
$C$ must pass through at least two vertices. Remove the edges of $C$ from $G$.
Now each vertex of $C$ has two incident edges, and
both have the same colour, so we may remove the vertex and identify the
two edges, giving this new edge the common colour of the two that have
been identified. Denote the resulting vertex-oriented graph by $(G^{(3)},\sigma_3)$,
and note that the o-colouring of $(G^{(1)},\sigma_1)$ provides an o-colouring
of $(G^{(3)},\sigma_3)$. As a result, any cut-vertex of $G^{(3)}$ must be
oriented transversely by $\sigma_3$. It follows therefore that if we modify
$G^{(3)}$ by re-introducing $v$, calling the vertex-oriented result
$(G'',\sigma'')$, then the only vertex of $G''$ that could possibly
be a cut-vertex oriented nontransversely is $v$. Suppose that in fact,
$v$ is a nontransversely oriented cut-vertex of $(G'',\sigma'')$.
Then there exist simple closed curves $U_1$ and $U_2$, as shown in Figure
\ref{figure: modified G'}, such that one of the two components of $G''-v$
is contained within $U_1$ and the other component is contained within $U_2$.
\begin{figure}[h]
\centering\table{c}
$\vcenter{\xy /r20pt/:,
(0,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(4,0)="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(.6,-.5)="dl";(3.4,.5)="ur"**\crv{(1.5,-.5) & (2,0) & (2.5,.5)},
(.6,.5)="ul";(3.4,-.5)="dr"**\crv{(1.5,.5) & (2,0) & (2.5,-.5)},
"ul"+(-.3,.2)="tl",
"ur"+(.3,.2)="tr",
"tl";"tr"**\crv{"tl"+(.2,.2) & "tr"+(-.2,.2)},
"dl"+(-.3,-.2)="bl",
"dr"+(.3,-.2)="br",
"bl";"br"**\crv{"bl"+(.2,-.2) & "br"+(-.2,-.2)},
(2,0)*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-8pt,0pt>{v},
(2,-.45);(2,.45)**\dir{-}*\dir{>},(2,-.45)*\dir{<},
(0,0)*!(1,-1){S_1},
(4,0)*!(-1,-1){S_2},
"tl";"tr"**\dir{}?(.5)="labeltop",
"bl";"br"**\dir{}?(.5)="labelbot",
"ur"+(.85,0)="a";"dr"+(.85,0)="b"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} "a"+(.8,0) & "b"+(.8,0) },
"a";"b"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} "a"+(-.8,0) & "b"+(-.8,0) },
"a"*!<0pt,5pt>{C},
(0,0)+(0,.25)="s";(4,0)+(0,.25)="e"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} "s"+(-2,0) & (0,0)+(-2,2) & (4,0)+(2,2) & "e"+(2,0)},
(2,0)+(0,2.4)*{U_1},
"s";"e"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} "s"+(1,0) & (2,0)+(0,1) & "e"+(-1,0)},
(0,0)+(0,-.25)="s";(4,0)+(0,-.25)="e"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} "s"+(-2,0) & (0,0)+(-2,-2) & (4,0)+(2,-2) & "e"+(2,0)},
(2,0)+(0,-2.4)*{U_2},
"s";"e"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} "s"+(1,0) & (2,0)+(0,-1) & "e"+(-1,0)},
\endxy}$
\endtable\caption{$(G'',\sigma'')$}\label{figure: modified G'}
\end{figure}
\noindent Since $S_1$ does contain vertices of $G$, we have a contradiction to
the fact that $G$ is 3-edge-connected. Thus in $G''$, $v$ is oriented transversely
by $\sigma''$. As $C$ contained at
least two vertices, the number of vertices in $G^{(3)}$ is at least two fewer than the number of
vertices in $G$, and thus $G''$ contains fewer vertices than $G$. We may therefore apply the
induction hypothesis to $(G'',\sigma'')$ to obtain an o-colouring
of $(G'',\sigma'')$. Finally, reintroduce the vertices and edges of the o-cycle $C$, colouring
the edges of $C$ with a new colour if necessary. The result is an o-colouring of $(G,\sigma)$.
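As an aside, the reduction just performed — delete the edges of the o-cycle $C$, then suppress each vertex of $C$, whose two remaining incident edges share a colour — can be sketched computationally. The encoding below (edges as coloured endpoint pairs, all names ours) is only an illustration of the operation, not part of the proof:

```python
def remove_cycle_and_suppress(edges, cycle_edges, cycle_vertices):
    """Delete the edges of an o-cycle, then suppress each of its vertices.

    edges: list of (u, v, colour) triples for an edge-coloured graph.
    After deleting cycle_edges, every vertex of the cycle has exactly two
    remaining incident edges, and an o-colouring gives them the same colour;
    suppression replaces that pair by a single edge of the common colour.
    """
    remaining = [e for e in edges if e not in cycle_edges]
    for w in cycle_vertices:
        incident = [e for e in remaining if w in e[:2]]
        # two remaining edges at w, forced to share a colour by the o-colouring
        assert len(incident) == 2 and incident[0][2] == incident[1][2]
        (a, b, colour) = incident[0]
        (x, y, _) = incident[1]
        ends = [p for p in (a, b, x, y) if p != w]
        remaining = [e for e in remaining if e not in incident]
        remaining.append((ends[0], ends[1], colour))
    return remaining
```

For example, deleting a 2-cycle on vertices $1,2$ from a small coloured graph leaves one merged edge for each suppressed vertex.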
\noindent Case 3: no vertex in $(G,\sigma)$ is of the type in either
Case 1 or Case 2. In particular, $G$ must be simple (it was loopfree,
and since we are not in Case 1 or Case 2, there are no multiple edges).
Furthermore, each vertex of $G$ can be smoothed without creating either
a cut-vertex or a loop-anchor oriented nontransversely (since $G$ was loopfree, such a
vertex would establish that $(G,\sigma)$ belonged in Case 1 or Case
2). Choose any vertex $v$ and smooth it, thereby obtaining a 4-regular
planar vertex-oriented graph $(G',\sigma')$ on $n-1$ vertices with no
cut-vertices or loop-anchors, so by hypothesis, there is an o-colouring
for this graph. Consider a particular o-colouring of this graph. If the
two edges that resulted from the smoothing of $v$ belong to different
o-cycles, then the o-colouring lifts to an o-colouring of $(G,\sigma)$
(they may be coloured the same, but since they are different o-cycles,
we may then change the colour of one, possibly requiring a new colour).
Thus we may assume that the two edges that resulted from the smoothing
belong to the same o-cycle, which we shall denote by $C_0$. Every other
o-cycle of this o-colouring of $(G',\sigma')$ is an o-cycle of
$(G,\sigma)$, while the edges of $C_0$ other than the two edges of the
smoothing, together with the four edges incident to $v$, form two
cycles in $G$, $C_1$ and $C_1'$ say, that meet only at $v$, and which
meet the o-cycle requirement at every vertex except $v$. If the removal
of any one of the o-cycles other than $C_0$ from $G$ results in a graph
with no cut-vertex or loop-anchor oriented nontransversely, then we
could o-colour the result and reinsert the o-cycle, giving it a new
colour if necessary, thereby obtaining an o-colouring of $(G,\sigma)$.
Suppose then that the removal of any of these o-cycles other than $C_0$
from $G$ results in a cut-vertex or loop-anchor oriented
nontransversely. Since the removal of the same o-cycle from $G'$ does
not result in such a vertex (since $(G',\sigma')$ was o-coloured), we
see that the vertex that has become a nontransversely oriented
cut-vertex or loop-anchor is $v$. Consider the abstract graph whose
vertices are the cycles $C_1$, $C_1'$, and the o-cycles of the
o-colouring of $(G',\sigma')$ other than $C_0$. Two vertices of this
graph are to be joined by an edge if they have a vertex of $G$ in
common. This graph is connected with the property that every vertex
other than $C_1$ and $C_1'$ lies on every path in this graph from $C_1$
to $C_1'$. Thus this graph is a chain with endpoints $C_1$ and $C_1'$.
Note that there is at least one o-cycle in this chain. Begin at $C_1$
and follow this chain, labelling each vertex on the chain (o-cycle, or
at the end, $C_1'$) as $C_i$, $i=1,2,\ldots,m+1$, where
$C_{m+1}=C_1'$. Then for any plane embedding of $G$, there exist simple
closed curves $S_1$, $S_2,\ldots,S_{m}$ such that for each
$i=1,2,\ldots,m$, all vertices in common to $C_i$ and $C_{i+1}$ lie
within $S_i$ and no other vertices of $G$ lie within $S_i$, and any
edge joining two vertices of $G$ that lie within $S_i$ also lies within
$S_i$. Now, every vertex of $G$ other than $v$ lies within one and only
one $S_i$, and for each $i=1,2,\ldots,m$, we shall let $G_i$ denote the
subgraph of $G$ that is induced by the vertices lying within $S_i$.
Additionally, let $G_0=G_{m+1}$ denote the null graph whose only vertex
is $v$. Note that any edge not contained within any of these simple
closed curves must join a vertex in $G_i$ to a vertex in $G_{i+1}$ for
some $i$.
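The auxiliary graph just described — one vertex per cycle, with an edge whenever two cycles share a vertex of $G$ — and the chain property can be illustrated with a small sketch (our own encoding, with cycles given as vertex sets; not part of the argument):

```python
from itertools import combinations

def cycle_intersection_graph(cycles):
    """Adjacency sets: join cycles i and j when they share a vertex of G."""
    adj = {i: set() for i in range(len(cycles))}
    for i, j in combinations(range(len(cycles)), 2):
        if cycles[i] & cycles[j]:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def is_chain(adj, end1, end2):
    """True when the graph is a single path from end1 to end2."""
    degrees = {v: len(nbrs) for v, nbrs in adj.items()}
    if degrees[end1] != 1 or degrees[end2] != 1:
        return False
    if any(degrees[v] != 2 for v in adj if v not in (end1, end2)):
        return False
    # walk from end1; a chain visits every vertex exactly once
    seen, prev, cur = {end1}, None, end1
    while cur != end2:
        cur, prev = next(iter(adj[cur] - {prev})), cur
        seen.add(cur)
    return len(seen) == len(adj)
```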
We shall demonstrate that there is at least one vertex $v$ whose smoothing yields
$m=2$; that is, there are two subgraphs $G_1$ and $G_2$, and three
cycles $C_1$, $C_2$, and $C_3$, in the notation introduced above.
To do this, we shall examine the 3-faces in $G$, of which there must
be at least eight.
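The lower bound of eight 3-faces can be checked directly from Euler's formula; a short verification of this standard count, stated here for completeness:

```latex
% G is 4-regular, so |E| = 2|V|, and Euler's formula gives
% |F| = |E| - |V| + 2 = |V| + 2. Let t be the number of 3-faces.
% In Case 3, G is simple, so every other face has degree at least 4, and
% summing face degrees yields
\[
  4|V| \;=\; 2|E| \;=\; \sum_{f} \deg(f) \;\ge\; 3t + 4\bigl(|F| - t\bigr)
        \;=\; 4|V| + 8 - t,
\]
```

so $t\ge 8$, as claimed.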
Suppose first that $G$ has at least one 3-face with orientation as shown
in Figure \ref{figure: 3-faces} (a). Choose any vertex $v$ not belonging
to the 3-face boundary and smooth it. By hypothesis, the resulting graph
can be o-coloured. Since no two edges of the 3-face can be coloured the
same, it follows that no two of the edges of the 3-face belong to the same
o-cycle. As observed above, this means that for some $i$, the o-cycles
$C_{i-1}$, $C_i$, and $C_{i+1}$ each have an edge on the 3-face. But this
means that $C_{i-1}$ and $C_{i+1}$ have a vertex in common, which is not
possible. Thus no 3-face of $G$ can be as in Figure \ref{figure: 3-faces} (a).
\vskip10pt
\begin{figure}[h]
\centering\table{c@{\hskip20pt}c@{\hskip20pt}c@{\hskip20pt}c}
\xy /r30pt/:,
(0,0)="u"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(1,0)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(.5,.866)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u";"w"**\dir{-};"v"**\dir{-};"u"**\dir{-},
"u";"u"+a(30)**\dir{}?(.35)="u1",
"u";"u1"**\dir{-}*\dir{>},
"u";"u"+"u"-"u1"**\dir{-}*\dir{>},
"w";"w"+a(150)**\dir{}?(.35)="w1",
"w";"w1"**\dir{-}*\dir{>},
"w";"w"+"w"-"w1"**\dir{-}*\dir{>},
"v";"v"+(0,.35)**\dir{-}*\dir{>},
"v";"v"-(0,.35)**\dir{-}*\dir{>},
\endxy
&
\xy /r30pt/:,
(0,0)="u"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(1,0)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(.5,.866)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u";"w"**\dir{-};"v"**\dir{-};"u"**\dir{-},
"u";"u"+a(30)**\dir{}?(.35)="u1",
"u";"u1"**\dir{-}*\dir{>},
"u";"u"+"u"-"u1"**\dir{-}*\dir{>},
"w";"w"+a(150)**\dir{}?(.35)="w1",
"w";"w1"**\dir{-}*\dir{>},
"w";"w"+"w"-"w1"**\dir{-}*\dir{>},
"v";"v"+(.35,0)**\dir{-}*\dir{>},
"v";"v"-(.35,0)**\dir{-}*\dir{>},
"v"*!<0pt,-7pt>{v},
\endxy
&
\xy /r30pt/:,
(0,0)="u"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(1,0)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(.5,.866)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u";"w"**\dir{-};"v"**\dir{-};"u"**\dir{-},
"u";"u"+a(-60)**\dir{}?(.35)="u1",
"u";"u1"**\dir{-}*\dir{>},
"u";"u"+"u"-"u1"**\dir{-}*\dir{>},
"w";"w"+a(150)**\dir{}?(.35)="w1",
"w";"w1"**\dir{-}*\dir{>},
"w";"w"+"w"-"w1"**\dir{-}*\dir{>},
"v";"v"+(.35,0)**\dir{-}*\dir{>},
"v";"v"-(.35,0)**\dir{-}*\dir{>},
\endxy
&
\xy /r30pt/:,
(0,0)="u"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(1,0)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(.5,.866)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u";"w"**\dir{-};"v"**\dir{-};"u"**\dir{-},
"u";"u"+a(-60)**\dir{}?(.35)="u1",
"u";"u1"**\dir{-}*\dir{>},
"u";"u"+"u"-"u1"**\dir{-}*\dir{>},
"w";"w"+a(60)**\dir{}?(.35)="w1",
"w";"w1"**\dir{-}*\dir{>},
"w";"w"+"w"-"w1"**\dir{-}*\dir{>},
"v";"v"+(.35,0)**\dir{-}*\dir{>},
"v";"v"-(.35,0)**\dir{-}*\dir{>},
\endxy\\
\noalign{\vskip4pt}
(a) & (b) & (c) & (d)
\endtable\caption{}\label{figure: 3-faces}
\end{figure}
Suppose now that $G$ has a 3-face as in Figure \ref{figure: 3-faces} (b). Choose vertex $v$
as shown in (b) and smooth it. By hypothesis, the resulting graph may be o-coloured. The Case 3
restrictions force the two new edges that resulted from smoothing $v$ onto
the same o-cycle $C_i$, as shown in Figure \ref{figure: 3-face b} (a), where $C_i\ne C_j$. But
then we may exchange the colours on the two arcs as shown in Figure \ref{figure: 3-face b} (b),
thereby obtaining an o-colouring of the graph in which the two new arcs that resulted from smoothing
$v$ have different colours, which is not possible. Thus no 3-face of $G$ can be as in Figure
\ref{figure: 3-faces} (b), which means that every 3-face of $G$ is of the form shown in Figure
\ref{figure: 3-faces} (c) or (d).
\vskip10pt
\begin{figure}[h]
\centering\table{c@{\hskip20pt}c}
\xy /r30pt/:,
(-1.3,0)*{\null},
(-.3,.5)="v1",(.3,.5)="v2",
"v1";"v2"**\crv{(0,.2)},
(-.6,-.2)="u1",(.6,-.2)="u2",
"u1";"u2"**\dir{-},
"u1"+(0,-.4)="w1",
"u2"+(0,-.4)="w2",
"w1";"w2"**\crv{(0,1)},
"u1"+(.18,0)="t1"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u2"-(.18,0)="t2"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"v2"*!<-8pt,0pt>{C_i},
"u2"*!<0pt,-10pt>{C_i},
"u1";"u2"**\dir{}?(.5)="x",
"x"*!<0pt,10pt>{C_j},
"t1";"t1"+a(30)**\dir{}?(.35)="t1a",
"t1";"t1a"**\dir{-}*\dir{>},
"t1";"t1"+"t1"-"t1a"**\dir{-}*\dir{>},
"t2";"t2"+a(-30)**\dir{}?(.35)="t2a",
"t2";"t2a"**\dir{-}*\dir{>},
"t2";"t2"+"t2"-"t2a"**\dir{-}*\dir{>},
\endxy
&
\xy /r30pt/:,
(-1.15,0)*{\null},
(-.3,.5)="v1",(.3,.5)="v2",
"v1";"v2"**\crv{(0,.2)},
(-.6,-.2)="u1",(.6,-.2)="u2",
"u1";"u2"**\dir{-},
"u1"+(0,-.4)="w1",
"u2"+(0,-.4)="w2",
"w1";"w2"**\crv{(0,1)},
"u1"+(.18,0)="t1"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u2"-(.18,0)="t2"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"v2"*!<-8pt,0pt>{C_i},
"u2"*!<0pt,-10pt>{C_j},
"u1";"u2"**\dir{}?(.5)="x",
"x"*!<0pt,10pt>{C_i},
"t1";"t1"+a(30)**\dir{}?(.35)="t1a",
"t1";"t1a"**\dir{-}*\dir{>},
"t1";"t1"+"t1"-"t1a"**\dir{-}*\dir{>},
"t2";"t2"+a(-30)**\dir{}?(.35)="t2a",
"t2";"t2a"**\dir{-}*\dir{>},
"t2";"t2"+"t2"-"t2a"**\dir{-}*\dir{>},
\endxy\\
\noalign{\vskip4pt}
(a) & (b)
\endtable\caption{}\label{figure: 3-face b}
\end{figure}
Choose a 3-face as shown in Figure \ref{figure: G_1 and G_2}, with either orientation at
$w$ (it is in fact possible to prove that there can be no 3-face of the type shown in Figure
\ref{figure: 3-faces} (d), but this is not necessary for our argument), and smooth $v$. Then
$u$ is in $G_1$, and $w$ is in $G_m$. As $u$ and $w$ are adjacent, it follows that $m=2$, as
desired. We have o-cycle $C_1$ contained entirely within $G_1$, except for the two edges incident to
$v$, one of which is $e=vu$, o-cycle $C_3$ contained entirely within $G_2$ except for the two edges incident
to $v$, one of which is $vw$, and o-cycle $C_2$, which has edges incident to vertices of $G_1$ and
to vertices of $G_2$.
\vskip10pt
\begin{figure}[h]
\centering\table{c}
\xy /r30pt/:,
(0,0)="u"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(1,0)="w"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
(.5,.866)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"u";"w"**\dir{-};"v"**\dir{-};"u"**\dir{-},
"u";"u"+a(-60)**\dir{}?(.35)="u1",
"u";"u1"**\dir{-}*\dir{>},
"u";"u"+"u"-"u1"**\dir{-}*\dir{>},
"v";"v"+(.35,0)**\dir{-}*\dir{>},
"v";"v"-(.35,0)**\dir{-}*\dir{>},
"v"*!<0pt,-7pt>{v},
"u"*!<6pt,6pt>{u},
"w"*!<-6pt,6pt>{w},
\endxy
\endtable\caption{}\label{figure: G_1 and G_2}
\end{figure}
Let $\it O$ denote the set of all triples $(P_1,P_2,P_3)$, where $P_1$ is a
subpath of $C_1$ with initial vertex $v$ and initial edge $e$, $P_2$ is a subpath of $C_2$, and $P_3$
is a subpath of $C_3$ such that the terminal vertex of $P_1$ is the initial vertex of $P_2$, the
terminal vertex of $P_2$ is the initial vertex of $P_3$, and $P_1+P_2+P_3$ is an o-cycle. We show first
that $\it O$ is not empty. Let $R_1$ be the o-path of length 1 with initial vertex $v$, initial edge
$e$, and terminal vertex $u$. Let $R_2$ denote the o-path of length 1 with initial vertex $u$ and initial
edge $uw$, so $R_2$ has terminal vertex $w$. Note that $R_2$ is a subpath of $C_2$ and that $R_1+R_2$ is
defined and is an o-path. Finally, let $R_3$ denote the o-path with initial vertex $w$ and which follows
$C_3$ in the direction which will make $R_2+R_3$ an o-path (this is uniquely determined), terminating at $v$.
Thus $R_1+R_2+R_3$ is defined and is an o-cycle, so $\it O_1=(R_1,R_2,R_3)\in {\it O}$.
For o-paths $P$ and $Q$, we shall say that $P\le Q$ if $P$ is a subpath of $Q$. This defines a partial order
relation on the set of all o-paths in $G$. Now consider the lexical order relation on $\it O$ that is defined
by this partial order relation on o-paths; that is, for $(P_1,P_2,P_3),(Q_1,Q_2,Q_3)\in \it O$, we have
$(P_1,P_2,P_3)<(Q_1,Q_2,Q_3)$ if $P_1<Q_1$, or else $P_1=Q_1$ and $P_2<Q_2$ (we note that if $P_1=Q_1$ and
$P_2=Q_2$, then necessarily $P_3=Q_3$). We claim that this is a total order relation on $\it O$. For
let $(P_1,P_2,P_3),(Q_1,Q_2,Q_3)\in \it O$, and suppose without loss of generality that $P_1\le Q_1$.
If $P_1<Q_1$, then $(P_1,P_2,P_3)<(Q_1,Q_2,Q_3)$, so suppose that $P_1=Q_1$. Then both $P_2$ and $Q_2$
have the same initial vertex, and travel along $C_2$ in the same direction. Thus we have exactly one of
$P_2=Q_2$, $P_2<Q_2$, or $Q_2<P_2$. In every case, $(P_1,P_2,P_3)$ and $(Q_1,Q_2,Q_3)$ are comparable.
Thus $\it O$ is a finite chain, in fact with minimum element $\it O_1$ defined above. Suppose that there
are $t$ elements in the chain. Label the remaining $t-1$ as $\it O_2,\ldots,\it O_t$, so that for any
$i$ and $j$ with $1\le i<j\le t$, we have $\it O_i<\it O_j$. For each $\it O_i=(P_1,P_2,P_3)$, let
${\it \hat{O}}_i$ denote the o-cycle $P_1+P_2+P_3$.
If for some $i$, $G-E(\it \hat{O}_i)$ (where we shall think of each vertex of degree 2 as having been suppressed, that is, removed by the reverse of an elementary
subdivision operation) has no nontransversely oriented cut-vertex, then by our induction hypothesis, $G-E(\it \hat{O}_i)$ may be
o-coloured, in which case we may re-introduce the edges of
$\it \hat{O}_i$, and colour them with a colour that is different from that used at
any vertex of $\it\hat{O}_i$ (this may require introducing a new colour). The
result is an o-colouring of $G$. Suppose to the contrary that for every $i$, $G-E(\it\hat{O}_i)$ has
at least one nontransversely oriented cut-vertex.
We shall say that $(P_1,P_2,P_3)\in \it O$ satisfies Condition A if $P_1$ meets $C_2$ only at vertices
of one of the two arcs of $C_2$ that are determined by
$u$ and the terminal vertex of $P_1$, and $C_1-P_1$ does not meet $C_2$ at any vertex of this arc.
We note that $\it O_1$ trivially satisfies Condition A.
Suppose now that $\it O_i=(P_1,P_2,P_3)\in \it O$ satisfies Condition A, and that $G-E(\it \hat{O}_i)$
has a nontransversely oriented cut-vertex $z$ belonging to $G_1$. Let
$U_1$ and $U_2$ denote the two components of $G-E(\it \hat O_i)-z$. Then the $C_1$ arc and the $C_2$ arc
determined by one orientation cell at $z$ enter $U_1$, while the $C_1$ arc and the $C_2$ arc
determined by the other orientation cell at $z$ enter $U_2$. One of the $C_2$ arcs must meet the terminal vertex
of $P_1$, $x$ say, and we shall suppose that $U_1$ and $U_2$ are labelled so that $x$ is in $U_1$. Thus the $C_1$
and $C_2$ arcs entering $U_2$ must meet $v$ and the terminal vertex of $P_2$, $y$ say, respectively. As $C_1-P_1$ meets
$C_2$ at $z$, it follows by Condition A that $P_1$ can only meet $C_2$ at vertices on the arc of $C_2$ between
$u$ and $x$ which does not contain $z$, and that $C_1-P_1$ does not meet this arc of $C_2$ (see Figure \ref{figure: bad guy 1}
for a schematic diagram of this situation, with very few actual crossings depicted).
\vskip10pt
\begin{figure}[h]
\centering\table{c}
\hbox{\xy /r30pt/:,
(-1,0)="x"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<6pt,0pt>{x},
(1,0)="z"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<0pt,-6pt>{z},
(2.3,.566)="v"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<0pt,-6pt>{v},
(3,1)="u"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-2pt,-6pt>{u},
(3.4,-.2)="y"*{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}}*!<-5pt,7pt>{y},
"x"+(0,-.1);"v"+(.1,.1)**\crv{~*=<8pt>{}~**!/1pt/\dir{-} "x"+(-.25,-1) & (2.5,-1.5) & "u"+(1.5,-1.25) & "u"+(1.5,.3) & "u"+(0,.15)},
"x"+(2.5,-1.5)*{P_1},
"x"+(.0,-.1);"y"+(-.05,.05)**\crv{~*=<8pt>{}~**!/1pt/\dir{-} "x"+(.25,-.5) & "x"+(2.4,-1) & "y"+(-.5,.2) "y"+(-.1,.1)},
"x"+(2.45,-.95)*{P_2},
"x";"z"**\crv{ "x"+(0,.3) & "x"+(.75,.75) & "z"+(-.3,-.5)},
"x"+(.75,.85)*{C_2},
"x";"z"**\crv{ "x"+(0,.75) & "x"+(.75,1)},
"x"+(.95,0)*{C_1},
"z";"u"**\crv{ "z"+(.4,.2) &"z"+(.5,.75) & "u"+(-.7,.3)},
"z"+(.4,.8)*{C_2},
"z";"v"**\crv{"z"+(.25,-.5) & "v"+(-.3,-.2)},
"u";"y"**\crv{"u"+(.5,-.2) & "u"+(1.4,-.5) & "y"+(.4,-.2)},
"v"+(.1,.01);"y"**\crv{~*=<7pt>{}~**!/1pt/\dir{-} "v"+(.3,.1) & "v"+(1.3,0) & "y"+(.5,.3)},
"v"+(.95,-.25)*{P_3},
"v";"y"**\crv{ "v"+(-.3,-.4) & "v"+(-.6,-1) & "y"+(-.3,-.3)},
"v"+(.05,-.45)*{C_3},
"z";"z"+<9pt,0pt>**\dir{-}*\dir{>},
"z";"z"-<9pt,0pt>**\dir{-}*\dir{>},
(0,0);(1.5,0):,
(-.7,0)="x",
(0,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(-1.2,1.1)*{U_1},
(2.1,0)="x",
(0,0)+"x"="centre","centre"+(1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(.72,.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,.45) & "centre"+(-.72,.72) & "e"-(.45,0)},
"centre"+(1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(.72,-.72) & "e"+(.45,0)},
"centre"+(-1,0)="s";"centre"+(0,-1)="e"**\crv{~*=<2.5pt>{\hbox{\Large.}} "s"+(0,-.45) & "centre"+(-.72,-.72) & "e"-(.45,0)},
(2.5,1.1)*{U_2},
\endxy}
\endtable\caption{}\label{figure: bad guy 1}
\end{figure}
\noindent Let $P_1^{(1)}$ denote the o-path obtained by extending $P_1$ (following $C_1$) to $z$, let $P_2^{(1)}$ denote the
o-path obtained by following $C_2$ from $z$ into $U_2$, stopping at the first encountered vertex, $r$ say, on $C_3$, and
let $P_3^{(1)}$ denote the o-path obtained by following $C_3$ from $r$ to $v$, travelling in the correct direction on $C_3$ in
order that $P_2^{(1)}+P_3^{(1)}$ meets the o-path criteria at $r$. It then follows from our construction of
$P_2^{(1)}$ and $P_3^{(1)}$ that $P_2^{(1)}+P_3^{(1)}$ is an o-path from $z$ to $v$. Moreover,
$P_1^{(1)}+P_2^{(1)}+P_3^{(1)}$ meets the o-path criteria at $z$ since $z$ was a nontransversely oriented cut-vertex
and $P_1^{(1)}$ is coming out of $U_1$ while $P_2^{(1)}$ is entering $U_2$. Finally, $P_2^{(1)}$ lies on the
arc of $C_2$ that did not meet $P_1$, while the extension of $P_1$ (except for the edge to $z$) was contained
within $U_1$ and $P_2^{(1)}$ is
contained within $U_2$, so $P_1^{(1)}$ does not meet $P_2^{(1)}$ other than at $z$. Thus $P_1^{(1)}+P_2^{(1)}+P_3^{(1)}$
is an o-cycle with $P_1^{(1)}$ lying on $C_1$, $P_2^{(1)}$ lying on $C_2$, and $P_3^{(1)}$ lying on $C_3$, so
$(P_1^{(1)},P_2^{(1)},P_3^{(1)})\in \it O$ and $P_1<P_1^{(1)}$, so $(P_1,P_2,P_3)<(P_1^{(1)},P_2^{(1)},P_3^{(1)})$.
We claim that $(P_1^{(1)},P_2^{(1)},P_3^{(1)})$ satisfies Condition A. Let $I$ denote the arc
of $C_2$ from $u$ to $z$ which passes through $x$. Now $P_1$ only meets $C_2$ at vertices on the arc of $C_2$ between $u$
and $x$ that does not contain $z$, which is a subpath of $I$, so $P_1$ only meets $C_2$ at vertices of $I$.
As well, the extension of $P_1$ can only meet $C_2$ at vertices of $U_1$ or $z$, hence only at vertices of $I$. Thus
$P_1^{(1)}$ only meets $C_2$ at vertices of $I$. It remains to prove
that $C_1-P_1^{(1)}$ does not meet $C_2$ at vertices of $I$. As the vertices of $C_1-P_1^{(1)}$ form a subset of
the set of vertices of $C_1-P_1$, and $C_1-P_1$ could only meet $C_2$ on the arc of $C_2$ from $x$ to $u$ that
passes through $z$, it follows that $C_1-P_1^{(1)}$ can only meet $C_2$ at vertices on the arc of $C_2$
from $x$ to $u$ that passes through $z$. However, $C_1-P_1^{(1)}$ lies in $U_2$ and thus does not pass through
any vertex of the arc of $C_2$ from $x$ to $z$ that lies in $U_1$, so it follows that $C_1-P_1^{(1)}$ does not
meet $C_2$ at any vertex of $I$. Thus $(P_1^{(1)},P_2^{(1)},P_3^{(1)})$ satisfies Condition A.
We have now established that for every $\it O_i\in \it O$ that satisfies Condition A and is such that
$G-E(\it \hat{O}_i)$ has a non-transversely oriented cut-vertex belonging to $G_1$, there is a larger element of $\it O$ that also
satisfies Condition A. Since $\it O_1$ satisfies Condition A and $\it O$ is finite, it follows that there is a greatest element $\it O_i=(P_1,P_2,P_3)$
of $\it O$ that can be produced by applying this construction to an element of $\it O$ that satisfies Condition
A. Thus for any $\it O_j\in \it O$ that satisfies Condition $A$ and is greater than or equal to $\it O_i$,
$G-E(\it\hat{O}_j)$ does not have a non-transversely oriented cut-vertex in $G_1$. Since $\it O_i$ was constructed
by an application of the procedure described above, we know that the terminal vertex of $P_2$, $s$ say, is the only
vertex on $P_2$ that lies on $C_3$. We shall say that an element $(Q_1,Q_2,Q_3)\in \it O$ satisfies Condition
B if $Q_1=P_1$ and $Q_2$ meets $C_3$ only at vertices of one of the arcs of $C_3$ determined by $s$ and the
terminal vertex of $Q_2$, while $C_2-Q_2$ does not meet $C_3$ at any vertex of this arc. In particular,
$(P_1,P_2,P_3)$ satisfies Condition B as well as Condition A.
By assumption, every element $\it O_j$ of $\it O$ is such that $G-E(\it\hat O_j)$ contains a non-transversely
oriented cut-vertex, and thus in particular, $G-E(\it\hat O_i)$ has a non-transversely oriented cut-vertex,
necessarily in $G_2$.
For any $\it O_j=(Q_1,Q_2,Q_3)\in \it O$ that satisfies Condition B (so $Q_1=P_1$ and thus it must satisfy Condition A)
for which $\it O_j\ge \it O_i$ and $G-E(\it\hat O_j)$ has a non-transversely oriented cut-vertex (necessarily in $G_2$),
we may carry out a procedure completely analogous to that described above for $C_1$ and
$C_2$ to obtain an element $(Q_1^{(1)},Q_2^{(1)},Q_3^{(1)})$ that satisfies Condition B (and thus A),
and which is greater than $(Q_1,Q_2,Q_3)$. Again, since $\it O$ is finite, there is a maximum such element of
$\it O$, which we shall denote by $M$. Thus $M$ satisfies both Conditions A and B, and $G-E(\hat M)$
cannot contain a non-transversely oriented cut-vertex in either $G_1$ or $G_2$, which contradicts our assumption
that every element $\it O_k$ of $\it O$ was such that $G-E(\it\hat O_k)$ contains a non-transversely oriented
cut-vertex.
This completes the proof of the inductive step, and so the result follows.
\edproofmarker\vskip10pt
Of course, the goal is to obtain an alternative proof of the four colour
theorem. This would be accomplished if we could sharpen our theorem above
to say that every vertex-oriented 4-regular planar graph with all
cut-vertices oriented transversely can be 3-o-coloured. There are three places
in our proof of the inductive step where the number of colours used to colour
$(G,\sigma)$ may be increased over the number used to colour the smaller graph.
Two of these situations involve the removal of a cycle, o-colouring
the result, and finally reinserting the cycle, possibly needing an additional
colour for it, while the other appears in a simplification step during the
proof of Case 3, where after smoothing a vertex and o-colouring the resulting
graph, if the two new edges that resulted from the smoothing were coloured the
same but belonged to different o-cycles, then we observed that one of the o-cycles
could have its colour changed, possibly requiring a new colour. It might in fact
be possible to argue that this case never happens, in the sense that it may be impossible
for each and every vertex of $G$ to result in the scenario of Case 3. If that can be established, the remaining problem occurs in
Case 2 (iii), and is potentially the more intractable one. We offer an example
below of the situation that may occur.
\medskip
\begin{figure}[h]
\centering\table{c@{\hskip10pt}c@{\hskip10pt}c}
$\vcenter{
\xy /r.2pt/:,(0,0);(1,0):(0,.65)::,
(75.5228,154.173)="1";
(116.084,171.016)="flex18";
(119.936,250.534)="flex19";
(120.93,289.768)="3";
(116.522,329.991)="flex20";
(5.10846,250.711)="flex21";
(88.9067,194.31)="flex22";
(120.71,211.271)="2";
(151.901,227.879)="flex23";
(199.713,271.97)="flex24";
(225.692,282.866)="6";
(264.805,283.61)="flex25";
(330.95,321.421)="flex26";
(338.496,367.79)="8";
(318.885,409.793)="flex27";
(153.17,451.891)="flex28";
(76.0992,347.012)="4";
(89.2568,306.86)="flex29";
(152.001,273.021)="flex30";
(176.354,250.415)="5";
(199.636,228.786)="flex31";
(264.731,216.865)="flex32";
(300.752,209.342)="11";
(330.683,178.682)="flex33";
(318.059,90.2756)="flex34";
(279.213,52.0458)="13";
(151.924,48.9551)="flex35";
(256.33,98.0217)="flex36";
(260.53,143.002)="14";
(281.013,176.771)="flex37";
(312.455,250.121)="flex38";
(300.85,290.936)="7";
(281.268,323.608)="flex39";
(257.148,402.418)="flex40";
(280.319,448.173)="9";
(394.1,450.64)="flex41";
(415.699,249.837)="flex42";
(432.676,162.643)="17";
(393.072,48.9368)="flex43";
(384.486,144.128)="flex44";
(337.948,132.242)="12";
(297.861,132.409)="flex45";
(230.283,173.92)="flex46";
(225.586,217.8)="10";
(225.063,250.34)="flex47";
(230.589,326.721)="flex48";
(260.99,357.459)="15";
(298.383,367.841)="flex49";
(384.99,355.672)="flex50";
(433.097,336.894)="16";
(494.916,249.628)="flex51";
"1";"flex18"**\crv{ (91.4283,152.433) & (107.83,157.582) },
"flex18";"2"*[o]=(0,0){\,}**\crv{ (123.364,182.864) & (121.636,197.43) },
"2"*[o]=(0,0){\,};"flex19"**\crv{ (119.835,224.337) & (119.899,237.44) },
"flex19";"3"**\crv{ (119.973,263.62) & (119.984,276.715) },
"3";"flex20"**\crv{ (121.932,303.585) & (123.734,318.123) },
"flex20";"4"*[o]=(0,0){\,}**\crv{ (108.345,343.447) & (91.9916,348.664) },
"4"*[o]=(0,0){\,};"flex21"**\crv{ (31.7432,342.402) & (5.21707,297.779) },
"flex21";"1"**\crv{ (5,203.711) & (31.2529,159.017) },
"1"*[o]=(0,0){\,};"flex22"**\crv{ (75.1843,168.89) & (78.5694,183.887) },
"flex22";"2"**\crv{ (97.489,202.963) & (109.426,206.768) },
"2";"flex23"**\crv{ (131.691,215.654) & (142.348,220.903) },
"flex23";"5"*[o]=(0,0){\,}**\crv{ (160.874,234.432) & (168.69,242.378) },
"5"*[o]=(0,0){\,};"flex24"**\crv{ (183.685,258.103) & (190.956,265.942) },
"flex24";"6"**\crv{ (207.534,277.353) & (216.351,281.152) },
"6";"flex25"**\crv{ (238.57,285.229) & (251.732,283.512) },
"flex25";"7"*[o]=(0,0){\,}**\crv{ (277.218,283.703) & (289.693,285.47) },
"7"*[o]=(0,0){\,};"flex26"**\crv{ (314.008,297.383) & (324.238,308.397) },
"flex26";"8"**\crv{ (338.316,335.713) & (341.242,351.949) },
"8";"flex27"**\crv{ (335.82,383.223) & (327.981,397.063) },
"flex27";"9"*[o]=(0,0){\,}**\crv{ (308.245,424.682) & (295.754,438.32) },
"9"*[o]=(0,0){\,};"flex28"**\crv{ (242.33,472.421) & (194.276,470.992) },
"flex28";"4"**\crv{ (110.378,432.007) & (77.4163,393.639) },
"4";"flex29"**\crv{ (75.6841,332.316) & (78.9845,317.32) },
"flex29";"3"*[o]=(0,0){\,}**\crv{ (97.7839,298.177) & (109.686,294.323) },
"3"*[o]=(0,0){\,};"flex30"**\crv{ (131.876,285.333) & (142.49,280.03) },
"flex30";"5"**\crv{ (160.938,266.434) & (168.723,258.472) },
"5";"flex31"**\crv{ (183.658,242.705) & (190.9,234.842) },
"flex31";"10"*[o]=(0,0){\,}**\crv{ (207.442,223.376) & (216.25,219.551) },
"10"*[o]=(0,0){\,};"flex32"**\crv{ (238.466,215.384) & (251.647,217.033) },
"flex32";"11"**\crv{ (277.149,216.706) & (289.62,214.872) },
"11";"flex33"**\crv{ (313.877,202.823) & (324.047,191.749) },
"flex33";"12"*[o]=(0,0){\,}**\crv{ (337.967,164.339) & (340.789,148.076) },
"12"*[o]=(0,0){\,};"flex34"**\crv{ (335.176,116.797) & (327.244,102.977) },
"flex34";"13"**\crv{ (307.317,75.4227) & (294.727,61.8322) },
"13";"flex35"**\crv{ (241.056,27.9758) & (192.963,29.6086) },
"flex35";"1"*[o]=(0,0){\,}**\crv{ (109.286,69.0555) & (76.5946,107.584) },
"13"*[o]=(0,0){\,};"flex36"**\crv{ (266.709,64.3593) & (259.212,80.7113) },
"flex36";"14"**\crv{ (253.81,113.153) & (254.86,128.746) },
"14";"flex37"**\crv{ (265.418,155.291) & (273.498,165.909) },
"flex37";"11"*[o]=(0,0){\,}**\crv{ (288.243,187.223) & (295.003,198.006) },
"11"*[o]=(0,0){\,};"flex38"**\crv{ (307.236,222.128) & (312.442,235.769) },
"flex38";"7"**\crv{ (312.468,264.475) & (307.285,278.125) },
"7";"flex39"**\crv{ (295.144,302.296) & (288.446,313.118) },
"flex39";"15"*[o]=(0,0){\,}**\crv{ (273.817,334.5) & (265.791,345.148) },
"15"*[o]=(0,0){\,};"flex40"**\crv{ (255.422,371.738) & (254.512,387.324) },
"flex40";"9"**\crv{ (260.164,419.679) & (267.758,435.955) },
"9";"flex41"**\crv{ (311.117,478.128) & (359.816,477.674) },
"flex41";"16"*[o]=(0,0){\,}**\crv{ (427.737,424.117) & (439.589,379.605) },
"16"*[o]=(0,0){\,};"flex42"**\crv{ (428.634,307.525) & (415.777,279.614) },
"flex42";"17"**\crv{ (415.622,220.042) & (428.349,192.05) },
"17";"flex43"**\crv{ (438.975,119.841) & (426.894,75.3203) },
"flex43";"13"*[o]=(0,0){\,}**\crv{ (358.611,22.0551) & (309.855,21.8718) },
"17"*[o]=(0,0){\,};"flex44"**\crv{ (417.338,154.763) & (400.894,149.406) },
"flex44";"12"**\crv{ (369.215,139.216) & (353.862,134.337) },
"12";"flex45"**\crv{ (324.644,130.491) & (311.171,130.717) },
"flex45";"14"*[o]=(0,0){\,}**\crv{ (284.937,134.053) & (272.151,137.092) },
"14"*[o]=(0,0){\,};"flex46"**\crv{ (247.256,149.754) & (235.915,160.143) },
"flex46";"10"**\crv{ (224.646,187.712) & (225.583,202.944) },
"10";"flex47"**\crv{ (225.587,228.649) & (225.046,239.491) },
"flex47";"6"*[o]=(0,0){\,}**\crv{ (225.079,261.185) & (225.652,272.021) },
"6"*[o]=(0,0){\,};"flex48"**\crv{ (225.745,297.722) & (224.88,312.959) },
"flex48";"15"**\crv{ (236.29,340.462) & (247.684,350.785) },
"15";"flex49"**\crv{ (272.644,363.305) & (285.448,366.27) },
"flex49";"8"*[o]=(0,0){\,}**\crv{ (311.71,369.459) & (325.192,369.611) },
"8"*[o]=(0,0){\,};"flex50"**\crv{ (354.404,365.613) & (369.739,360.663) },
"flex50";"16"**\crv{ (401.377,350.309) & (417.796,344.858) },
"16";"flex51"**\crv{ (467.498,318.989) & (495,287.832) },
"flex51";"17"*[o]=(0,0){\,}**\crv{ (494.832,211.414) & (467.181,180.371) },
"5"*!<0pt,7pt>{\hbox{$v$}},
"5";"5"+(40,0)**\dir{-}*\dir{>},
"5";"5"-(40,0)**\dir{-}*\dir{>},
"1"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"2"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"3"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"4"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"5"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"6"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"7"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"8"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"9"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"10"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"11"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"12"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"13"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"14"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"15"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"16"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"17"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
\endxy}$
&
$\vcenter{
\xy /r.2pt/:,(0,0);(1,0):(0,.65)::,
(75.5228,154.173)="1";
(116.084,171.016)="flex18";
(119.936,250.534)="flex19";
(120.93,289.768)="3";
(116.522,329.991)="flex20";
(5.10846,250.711)="flex21";
(88.9067,194.31)="flex22";
(120.71,211.271)="2";
(151.901,227.879)="flex23";
(199.713,271.97)="flex24";
(225.692,282.866)="6";
(264.805,283.61)="flex25";
(330.95,321.421)="flex26";
(338.496,367.79)="8";
(318.885,409.793)="flex27";
(153.17,451.891)="flex28";
(76.0992,347.012)="4";
(89.2568,306.86)="flex29";
(152.001,273.021)="flex30";
(176.354,250.415)="5";
(199.636,228.786)="flex31";
(264.731,216.865)="flex32";
(300.752,209.342)="11";
(330.683,178.682)="flex33";
(318.059,90.2756)="flex34";
(279.213,52.0458)="13";
(151.924,48.9551)="flex35";
(256.33,98.0217)="flex36";
(260.53,143.002)="14";
(281.013,176.771)="flex37";
(312.455,250.121)="flex38";
(300.85,290.936)="7";
(281.268,323.608)="flex39";
(257.148,402.418)="flex40";
(280.319,448.173)="9";
(394.1,450.64)="flex41";
(415.699,249.837)="flex42";
(432.676,162.643)="17";
(393.072,48.9368)="flex43";
(384.486,144.128)="flex44";
(337.948,132.242)="12";
(297.861,132.409)="flex45";
(230.283,173.92)="flex46";
(225.586,217.8)="10";
(225.063,250.34)="flex47";
(230.589,326.721)="flex48";
(260.99,357.459)="15";
(298.383,367.841)="flex49";
(384.99,355.672)="flex50";
(433.097,336.894)="16";
(494.916,249.628)="flex51";
"1";"flex18"**\crv{ (91.4283,152.433) & (107.83,157.582) },
"flex18";"2"*[o]=(0,0){\,}**\crv{ (123.364,182.864) & (121.636,197.43) },
"2"*[o]=(0,0){\,};"flex19"**\crv{ (119.835,224.337) & (119.899,237.44) },
"flex19";"3"**\crv{ (119.973,263.62) & (119.984,276.715) },
"3";"flex20"**\crv{ (121.932,303.585) & (123.734,318.123) },
"flex20";"4"*[o]=(0,0){\,}**\crv{ (108.345,343.447) & (91.9916,348.664) },
"4"*[o]=(0,0){\,};"flex21"**\crv{ (31.7432,342.402) & (5.21707,297.779) },
"flex21";"1"**\crv{ (5,203.711) & (31.2529,159.017) },
"1"*[o]=(0,0){\,};"flex22"**\crv{ (75.1843,168.89) & (78.5694,183.887) },
"flex22";"2"**\crv{ (97.489,202.963) & (109.426,206.768) },
"2";"flex23"**\crv{ (131.691,215.654) & (142.348,220.903) },
"flex23"
"flex23";"flex30"**\dir{}?(.5)="x",
"flex30"**\crv{"flex23"+<2pt,2pt> & "x"+<2pt,0pt> & "flex30"+<2pt,-2pt> },
"flex24";"6"**\crv{ (207.534,277.353) & (216.351,281.152) },
"6";"flex25"**\crv{ (238.57,285.229) & (251.732,283.512) },
"flex25";"7"*[o]=(0,0){\,}**\crv{ (277.218,283.703) & (289.693,285.47) },
"7"*[o]=(0,0){\,};"flex26"**\crv{ (314.008,297.383) & (324.238,308.397) },
"flex26";"8"**\crv{ (338.316,335.713) & (341.242,351.949) },
"8";"flex27"**\crv{ (335.82,383.223) & (327.981,397.063) },
"flex27";"9"*[o]=(0,0){\,}**\crv{ (308.245,424.682) & (295.754,438.32) },
"9"*[o]=(0,0){\,};"flex28"**\crv{ (242.33,472.421) & (194.276,470.992) },
"flex28";"4"**\crv{ (110.378,432.007) & (77.4163,393.639) },
"4";"flex29"**\crv{ (75.6841,332.316) & (78.9845,317.32) },
"flex29";"3"*[o]=(0,0){\,}**\crv{ (97.7839,298.177) & (109.686,294.323) },
"3"*[o]=(0,0){\,};"flex30"**\crv{ (131.876,285.333) & (142.49,280.03) },
"flex24";"flex31"**\dir{}?(.5)="y",
"flex24";"flex31"**\crv{"flex24"+<-2pt,-2pt> & "y"+<-2pt,0pt> & "flex31"+<-2pt,2pt> },
"flex31";"10"*[o]=(0,0){\,}**\crv{ (207.442,223.376) & (216.25,219.551) },
"10"*[o]=(0,0){\,};"flex32"**\crv{ (238.466,215.384) & (251.647,217.033) },
"flex32";"11"**\crv{ (277.149,216.706) & (289.62,214.872) },
"11";"flex33"**\crv{ (313.877,202.823) & (324.047,191.749) },
"flex33";"12"*[o]=(0,0){\,}**\crv{ (337.967,164.339) & (340.789,148.076) },
"12"*[o]=(0,0){\,};"flex34"**\crv{ (335.176,116.797) & (327.244,102.977) },
"flex34";"13"**\crv{ (307.317,75.4227) & (294.727,61.8322) },
"13";"flex35"**\crv{ (241.056,27.9758) & (192.963,29.6086) },
"flex35";"1"*[o]=(0,0){\,}**\crv{ (109.286,69.0555) & (76.5946,107.584) },
"13"*[o]=(0,0){\,};"flex36"**\crv{ (266.709,64.3593) & (259.212,80.7113) },
"flex36";"14"**\crv{ (253.81,113.153) & (254.86,128.746) },
"14";"flex37"**\crv{ (265.418,155.291) & (273.498,165.909) },
"flex37";"11"*[o]=(0,0){\,}**\crv{ (288.243,187.223) & (295.003,198.006) },
"11"*[o]=(0,0){\,};"flex38"**\crv{ (307.236,222.128) & (312.442,235.769) },
"flex38";"7"**\crv{ (312.468,264.475) & (307.285,278.125) },
"7";"flex39"**\crv{ (295.144,302.296) & (288.446,313.118) },
"flex39";"15"*[o]=(0,0){\,}**\crv{ (273.817,334.5) & (265.791,345.148) },
"15"*[o]=(0,0){\,};"flex40"**\crv{ (255.422,371.738) & (254.512,387.324) },
"flex40";"9"**\crv{ (260.164,419.679) & (267.758,435.955) },
"9";"flex41"**\crv{ (311.117,478.128) & (359.816,477.674) },
"flex41";"16"*[o]=(0,0){\,}**\crv{ (427.737,424.117) & (439.589,379.605) },
"16"*[o]=(0,0){\,};"flex42"**\crv{ (428.634,307.525) & (415.777,279.614) },
"flex42";"17"**\crv{ (415.622,220.042) & (428.349,192.05) },
"17";"flex43"**\crv{ (438.975,119.841) & (426.894,75.3203) },
"flex43";"13"*[o]=(0,0){\,}**\crv{ (358.611,22.0551) & (309.855,21.8718) },
"17"*[o]=(0,0){\,};"flex44"**\crv{ (417.338,154.763) & (400.894,149.406) },
"flex44";"12"**\crv{ (369.215,139.216) & (353.862,134.337) },
"12";"flex45"**\crv{ (324.644,130.491) & (311.171,130.717) },
"flex45";"14"*[o]=(0,0){\,}**\crv{ (284.937,134.053) & (272.151,137.092) },
"14"*[o]=(0,0){\,};"flex46"**\crv{ (247.256,149.754) & (235.915,160.143) },
"flex46";"10"**\crv{ (224.646,187.712) & (225.583,202.944) },
"10";"flex47"**\crv{ (225.587,228.649) & (225.046,239.491) },
"flex47";"6"*[o]=(0,0){\,}**\crv{ (225.079,261.185) & (225.652,272.021) },
"6"*[o]=(0,0){\,};"flex48"**\crv{ (225.745,297.722) & (224.88,312.959) },
"flex48";"15"**\crv{ (236.29,340.462) & (247.684,350.785) },
"15";"flex49"**\crv{ (272.644,363.305) & (285.448,366.27) },
"flex49";"8"*[o]=(0,0){\,}**\crv{ (311.71,369.459) & (325.192,369.611) },
"8"*[o]=(0,0){\,};"flex50"**\crv{ (354.404,365.613) & (369.739,360.663) },
"flex50";"16"**\crv{ (401.377,350.309) & (417.796,344.858) },
"16";"flex51"**\crv{ (467.498,318.989) & (495,287.832) },
"flex51";"17"*[o]=(0,0){\,}**\crv{ (494.832,211.414) & (467.181,180.371) },
"1"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"2"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"3"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"4"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"6"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"7"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"8"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"9"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"10"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"11"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"12"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"13"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"14"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"15"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"16"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"17"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
\endxy}$
&
$\vcenter{
\xy /r.2pt/:,(0,0);(1,0):(0,.65)::,
(75.5228,154.173)="1";
(116.084,171.016)="flex18";
(119.936,250.534)="flex19";
(120.93,289.768)="3";
(116.522,329.991)="flex20";
(5.10846,250.711)="flex21";
(88.9067,194.31)="flex22";
(120.71,211.271)="2";
(151.901,227.879)="flex23";
(199.713,271.97)="flex24";
(225.692,282.866)="6";
(264.805,283.61)="flex25";
(330.95,321.421)="flex26";
(338.496,367.79)="8";
(318.885,409.793)="flex27";
(153.17,451.891)="flex28";
(76.0992,347.012)="4";
(89.2568,306.86)="flex29";
(152.001,273.021)="flex30";
(176.354,250.415)="5";
(199.636,228.786)="flex31";
(264.731,216.865)="flex32";
(300.752,209.342)="11";
(330.683,178.682)="flex33";
(318.059,90.2756)="flex34";
(279.213,52.0458)="13";
(151.924,48.9551)="flex35";
(256.33,98.0217)="flex36";
(260.53,143.002)="14";
(281.013,176.771)="flex37";
(312.455,250.121)="flex38";
(300.85,290.936)="7";
(281.268,323.608)="flex39";
(257.148,402.418)="flex40";
(280.319,448.173)="9";
(394.1,450.64)="flex41";
(415.699,249.837)="flex42";
(432.676,162.643)="17";
(393.072,48.9368)="flex43";
(384.486,144.128)="flex44";
(337.948,132.242)="12";
(297.861,132.409)="flex45";
(230.283,173.92)="flex46";
(225.586,217.8)="10";
(225.063,250.34)="flex47";
(230.589,326.721)="flex48";
(260.99,357.459)="15";
(298.383,367.841)="flex49";
(384.99,355.672)="flex50";
(433.097,336.894)="16";
(494.916,249.628)="flex51";
"1";"flex18"**\crv{ (91.4283,152.433) & (107.83,157.582) },
"flex18";"2"*[o]=(0,0){\,}**\crv{ (123.364,182.864) & (121.636,197.43) },
"2"*[o]=(0,0){\,};"flex19"**\crv{ (119.835,224.337) & (119.899,237.44) },
"flex19";"3"**\crv{ (119.973,263.62) & (119.984,276.715) },
"3";"flex20"**\crv{ (121.932,303.585) & (123.734,318.123) },
"flex20";"4"*[o]=(0,0){\,}**\crv{ (108.345,343.447) & (91.9916,348.664) },
"4"*[o]=(0,0){\,};"flex21"**\crv{ (31.7432,342.402) & (5.21707,297.779) },
"flex21";"1"**\crv{ (5,203.711) & (31.2529,159.017) },
"1"*[o]=(0,0){\,};"flex22"**\crv{ (75.1843,168.89) & (78.5694,183.887) },
"flex22";"2"**\crv{ (97.489,202.963) & (109.426,206.768) },
"2";"flex23"**\crv{ (131.691,215.654) & (142.348,220.903) },
"flex23";"5"*[o]=(0,0){\,}**\crv{ (160.874,234.432) & (168.69,242.378) },
"5"*[o]=(0,0){\,};"flex24"**\crv{ (183.685,258.103) & (190.956,265.942) },
"flex24";"6"**\crv{ (207.534,277.353) & (216.351,281.152) },
"6";"flex25"**\crv{ (238.57,285.229) & (251.732,283.512) },
"flex25";"7"*[o]=(0,0){\,}**\crv{ (277.218,283.703) & (289.693,285.47) },
"7"*[o]=(0,0){\,};"flex26"**\crv{ (314.008,297.383) & (324.238,308.397) },
"flex26";"8"**\crv{ (338.316,335.713) & (341.242,351.949) },
"8";"flex27"**\crv{ (335.82,383.223) & (327.981,397.063) },
"flex27";"9"*[o]=(0,0){\,}**\crv{ (308.245,424.682) & (295.754,438.32) },
"9"*[o]=(0,0){\,};"flex28"**\crv{ (242.33,472.421) & (194.276,470.992) },
"flex28";"4"**\crv{ (110.378,432.007) & (77.4163,393.639) },
"4";"flex29"**\crv{ (75.6841,332.316) & (78.9845,317.32) },
"flex29";"3"*[o]=(0,0){\,}**\crv{ (97.7839,298.177) & (109.686,294.323) },
"3"*[o]=(0,0){\,};"flex30"**\crv{ (131.876,285.333) & (142.49,280.03) },
"flex30";"5"**\crv{ (160.938,266.434) & (168.723,258.472) },
"5";"flex31"**\crv{ (183.658,242.705) & (190.9,234.842) },
"flex31";"10"*[o]=(0,0){\,}**\crv{ (207.442,223.376) & (216.25,219.551) },
"10"*[o]=(0,0){\,};"flex32"**\crv{ (238.466,215.384) & (251.647,217.033) },
"flex32";"11"**\crv{ (277.149,216.706) & (289.62,214.872) },
"11";"flex33"**\crv{ (313.877,202.823) & (324.047,191.749) },
"flex33";"12"*[o]=(0,0){\,}**\crv{ (337.967,164.339) & (340.789,148.076) },
"12"*[o]=(0,0){\,};"flex34"**\crv{ (335.176,116.797) & (327.244,102.977) },
"flex34";"13"**\crv{ (307.317,75.4227) & (294.727,61.8322) },
"13";"flex35"**\crv{ (241.056,27.9758) & (192.963,29.6086) },
"flex35";"1"*[o]=(0,0){\,}**\crv{ (109.286,69.0555) & (76.5946,107.584) },
"13"*[o]=(0,0){\,};"flex36"**\crv{ (266.709,64.3593) & (259.212,80.7113) },
"flex36";"14"**\crv{ (253.81,113.153) & (254.86,128.746) },
"14";"flex37"**\crv{ (265.418,155.291) & (273.498,165.909) },
"flex37";"11"*[o]=(0,0){\,}**\crv{ (288.243,187.223) & (295.003,198.006) },
"11"*[o]=(0,0){\,};"flex38"**\crv{ (307.236,222.128) & (312.442,235.769) },
"flex38";"7"**\crv{ (312.468,264.475) & (307.285,278.125) },
"7";"flex39"**\crv{ (295.144,302.296) & (288.446,313.118) },
"flex39";"15"*[o]=(0,0){\,}**\crv{ (273.817,334.5) & (265.791,345.148) },
"15"*[o]=(0,0){\,};"flex40"**\crv{ (255.422,371.738) & (254.512,387.324) },
"flex40";"9"**\crv{ (260.164,419.679) & (267.758,435.955) },
"9";"flex41"**\crv{ (311.117,478.128) & (359.816,477.674) },
"flex41";"16"*[o]=(0,0){\,}**\crv{ (427.737,424.117) & (439.589,379.605) },
"16"*[o]=(0,0){\,};"flex42"**\crv{ (428.634,307.525) & (415.777,279.614) },
"flex42";"17"**\crv{ (415.622,220.042) & (428.349,192.05) },
"17";"flex43"**\crv{ (438.975,119.841) & (426.894,75.3203) },
"flex43";"13"*[o]=(0,0){\,}**\crv{ (358.611,22.0551) & (309.855,21.8718) },
"17"*[o]=(0,0){\,};"flex44"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (417.338,154.763) & (400.894,149.406) },
"flex44";"12"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (369.215,139.216) & (353.862,134.337) },
"12";"flex45"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (324.644,130.491) & (311.171,130.717) },
"flex45";"14"*[o]=(0,0){\,}**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (284.937,134.053) & (272.151,137.092) },
"14"*[o]=(0,0){\,};"flex46"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (247.256,149.754) & (235.915,160.143) },
"flex46";"10"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (224.646,187.712) & (225.583,202.944) },
"10";"flex47"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (225.587,228.649) & (225.046,239.491) },
"flex47";"6"*[o]=(0,0){\,}**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (225.079,261.185) & (225.652,272.021) },
"6"*[o]=(0,0){\,};"flex48"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (225.745,297.722) & (224.88,312.959) },
"flex48";"15"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (236.29,340.462) & (247.684,350.785) },
"15";"flex49"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (272.644,363.305) & (285.448,366.27) },
"flex49";"8"*[o]=(0,0){\,}**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (311.71,369.459) & (325.192,369.611) },
"8"*[o]=(0,0){\,};"flex50"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (354.404,365.613) & (369.739,360.663) },
"flex50";"16"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (401.377,350.309) & (417.796,344.858) },
"16";"flex51"**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (467.498,318.989) & (495,287.832) },
"flex51";"17"*[o]=(0,0){\,}**\crv{~*=<2.5pt>{\hbox{\LARGE.}} (494.832,211.414) & (467.181,180.371) },
"5"*!<0pt,7pt>{\hbox{$v$}},
"5";"5"+(40,0)**\dir{-}*\dir{>},
"5";"5"-(40,0)**\dir{-}*\dir{>},
"1"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"2"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"3"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"4"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"5"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"7"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"9"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"11"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"13"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
\endxy}$\\
\noalign{\vskip4pt}
(a) & (b) & (c)
\endtable\caption{}\label{figure: our difficulties figure}
\end{figure}
\noindent In Figure \ref{figure: our difficulties figure} (a), we have not shown the orientations of any vertex
other than $v$, but it is intended that the graph in (b) (in which vertex $v$ has been smoothed)
has been o-coloured in such a way that the four simple smooth curves are o-cycles. As in the proof of
Case 2 (iii), we choose an o-cycle to remove, and our choice is the curve $C$ shown dotted in (c). Now
o-colour $G-E(C)$. No matter what orientations had been assigned to the vertices of $G$ (other than $v$, which
is to be oriented as shown), $C$ will meet o-cycles of $G-E(C)$ of three different colours, and so the edges of
$C$ must be assigned a new colour.
We conclude this section with a brief discussion of vertex-orientation for
arbitrary 4-regular graphs. By an orientation of a vertex $v$, we mean a partition of the four incident edges
into two cells of size 2 (where we treat each loop at $v$ as two incident edges). An o-colouring
of a vertex-oriented 4-regular graph is then defined just as it was for planar vertex-oriented 4-regular graphs. If a
vertex-oriented 4-regular graph can be o-coloured, then its edge set can be decomposed into a collection of
edge-disjoint cycles (each an o-cycle of the vertex-oriented graph).
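This decomposition condition is straightforward to verify mechanically. The following sketch (in Python; the data layout, and our reading of the definition, are assumptions made for illustration) checks whether a given edge colouring of a loopless vertex-oriented 4-regular multigraph is an o-colouring, taking an o-colouring to be one in which each colour class is 2-regular and the two same-coloured edges at any vertex lie in different cells of the orientation (a transverse crossing).

```python
from collections import defaultdict

def is_o_colouring(edges, orientation, colouring):
    """Check whether an edge colouring of a loopless vertex-oriented
    4-regular multigraph is an o-colouring (under the reading above).

    edges: list of (u, v) pairs; edge i is edges[i].
    orientation: maps each vertex to a pair of frozensets of edge
        indices -- the two cells of the partition at that vertex.
    colouring: colouring[i] is the colour of edge i.
    """
    incident = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        incident[u].append(i)
        incident[v].append(i)
    for v, inc in incident.items():
        by_colour = defaultdict(list)
        for i in inc:
            by_colour[colouring[i]].append(i)
        for same in by_colour.values():
            if len(same) != 2:        # each colour class must be 2-regular
                return False
            a, b = same
            cell0, _ = orientation[v]
            if (a in cell0) == (b in cell0):
                return False          # must cross v transversely
    return True
```

For two vertices joined by four parallel edges, with cells $\{0,1\},\{2,3\}$ at one vertex and $\{0,2\},\{1,3\}$ at the other, the colouring $1,2,2,1$ passes this test, whereas $1,1,2,2$ fails at the first vertex.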
Note that given a 3-regular graph and a 1-factor of the graph, we may obtain a vertex-oriented 4-regular graph
by collapsing the edges of the 1-factor (with the orientation of each vertex determined by the 1-factor edge
that gave rise to the vertex, just as was done in the proof of Theorem \ref{theorem: 4ct equivalence}).
In an initial examination of snarks, we observed that many snarks had the property that there was at least one
1-factor of the snark that gave rise to a non-o-colourable vertex-oriented 4-regular graph, and frequently,
this was true for every 1-factor of the snark. This appears to be an interesting avenue of exploration.
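The collapse of a 1-factor described above can be carried out mechanically. The representation in the sketch below (Python) is our own; it assumes the cubic graph is loopless and that no non-matching edge joins the two ends of a matching edge (which would create a loop in the collapsed graph).

```python
from collections import defaultdict

def collapse_one_factor(edges, matching):
    """Collapse the edges of a 1-factor of a 3-regular graph, giving a
    vertex-oriented 4-regular graph: each matching edge becomes a new
    vertex, and its orientation partitions the four incident edges by
    which end of the collapsed matching edge they came from.

    edges: list of (u, v) pairs; matching: set of edge indices.
    """
    new_vertex = {}                  # original vertex -> its matching edge
    for i in matching:
        u, v = edges[i]
        new_vertex[u] = i
        new_vertex[v] = i
    new_edges = []
    cells = defaultdict(lambda: (set(), set()))
    for i, (u, v) in enumerate(edges):
        if i in matching:
            continue
        j = len(new_edges)           # index of the surviving edge
        new_edges.append((new_vertex[u], new_vertex[v]))
        for w in (u, v):
            m = new_vertex[w]
            side = 0 if w == edges[m][0] else 1
            cells[m][side].add(j)
    orientation = {m: (frozenset(c0), frozenset(c1))
                   for m, (c0, c1) in cells.items()}
    return new_edges, orientation
```

For $K_4$ with the perfect matching consisting of the edges $01$ and $23$, this produces two new vertices joined by four parallel edges, the two cells at each new vertex recording which side of the collapsed matching edge each edge came from.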
\section{Examples}
In many of the early examples of vertex-oriented 4-regular planar graphs that we examined, we noticed
that there was at least one 3-o-colouring in which, for some colour, the
subgraph induced by the edges of that colour has exactly
one o-cycle component, and that o-cycle meets all other o-cycles determined by the
3-o-colouring. Often, this o-cycle has maximum
length over all o-cycles determined by the o-colouring. Our first example,
a vertex-orientation of a link projection of the Whitehead link, demonstrates
that an o-cycle can have maximum length and meet every o-cycle,
yet not participate in any o-colouring.
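Observations like these can be probed by brute force on small examples by enumerating o-cycles directly. The sketch below (Python; the data layout is ours, and it takes an o-path to cross each vertex transversely) extends a walk by always leaving a vertex through the cell not containing the entering edge; for brevity it only finds o-cycles that pass through their starting vertex once.

```python
from collections import defaultdict

def all_o_cycles(edges, orientation):
    """Enumerate o-cycles of a small vertex-oriented 4-regular
    multigraph, returned as frozensets of edge indices.

    Assumption: an o-cycle crosses every vertex transversely, i.e.
    consecutive edges lie in different cells of the orientation.
    Cycles revisiting their starting vertex mid-walk are not found.
    """
    incident = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        incident[u].append(i)
        incident[v].append(i)

    def other_end(i, v):
        u, w = edges[i]
        return w if v == u else u

    def cell_of(v, i):
        return 0 if i in orientation[v][0] else 1

    cycles = set()

    def walk(start, first, v, last, used):
        for i in incident[v]:
            # next edge must be unused and in the other cell at v
            if i in used or cell_of(v, i) == cell_of(v, last):
                continue
            w = other_end(i, v)
            if w == start:
                # close only if the cycle also crosses start transversely
                if cell_of(w, i) != cell_of(w, first):
                    cycles.add(frozenset(used | {i}))
            else:
                walk(start, first, w, i, used | {i})

    for v in incident:
        for first in incident[v]:
            walk(v, first, other_end(first, v), first, {first})
    return cycles
```

On two vertices joined by four parallel edges with cells $\{0,1\},\{2,3\}$ and $\{0,2\},\{1,3\}$, this finds the two 2-edge o-cycles $\{0,3\}$ and $\{1,2\}$ (the 4-edge closed walk through both vertices twice is excluded by the simplification noted above).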
\medskip
\noindent{\bf Example 4.1.}
We have assigned a vertex-orientation to the Whitehead link as shown
below. In (a), we have shown a 2-o-colouring, where the dotted curve is
an o-cycle of maximum length (four). In (b), for the same vertex-orientation,
we show an o-cycle of maximum length (dotted) which is not an o-cycle
for any o-colouring of the graph. Thus not every o-cycle of maximum
length is necessarily an o-cycle for some o-colouring.
\vskip8pt
\centerline{\table{c@{\hskip70pt}c}
\hbox{
\xy /r.2pt/:,
(391.91488791533, 352.3413638559693)="1";
(488.23963523795584, 250.7659748989001)="flex6";
(304.63185851208146, 179.87197583238355)="flex7";
(255.21548123737043, 250.45696392221237)="3";
(217.7750076748735, 303.9474653663747)="flex8";
(16.54168002921108, 259.5628135887432)="flex9";
(130.07853913415153, 143.5014649620283)="5";
(209.87045061497656, 196.28196062290476)="flex10";
(307.14380480112897, 317.5545833255335)="flex11";
(427.17900728907483, 251.27837781359727)="flex12";
(392.2309370041537, 142.25732773272796)="2";
(251.42071093860625, 81.96490741602179)="flex13";
(91.92866235553595, 254.6097743849504)="flex14";
(134.3575638708463, 357.2138212500434)="4";
(263.9883631944421, 421.1817969437918)="flex15";
"1";"flex6"**\crv{ (445.34326907488906, 349.45561157773267) & (486.56031604159915, 304.72088971862195) },
"flex6";"2"*[o]=(0,0){\,}**\crv{ (490.0, 194.20714380008747) & (448.1102077772812, 145.47224681727857) },
"2"*[o]=(0,0){\,};"flex7"**\crv{ (359.0690074005095, 140.3494123485753) & (328.06252036965066, 156.1999502981455) },
"flex7";"3"**\crv{ (284.3314086465915, 200.38154446913785) & (270.2574733679984, 225.87474382493315) },
"3";"flex8"**\crv{ (243.84077093663188, 269.0459666106394) & (231.7982171140436, 287.25928530147655) },
"flex8";"4"*[o]=(0,0){\,}**\crv{ (195.77447028784547, 330.12898472767824) & (168.19341986481308, 352.62245744914907) },
"4"*[o]=(0,0){\,};"flex9"**\crv{ (75.12058734742762, 365.25199559776934) & (22.333924386248313, 320.02705403474664) },
"flex9";"5"**\crv{ (10.0, 191.27534333958482) & (65.15707992221807, 133.10145207054518) },
"5";"flex10"**\crv{ (162.52191950281167, 148.6986921681496) & (187.51546846253603, 172.21151232245245) },
"flex10";"3"*[o]=(0,0){\,}**\crv{ (225.90932145756702, 213.5516148226535) & (241.3109492912824, 231.4239015481662) },
"3"*[o]=(0,0){\,};"flex11"**\crv{ (271.9379099677385, 273.34727265786063) & (286.57430558515404, 297.99559277726166) },
"flex11";"1"**\crv{ (330.25163864549603, 339.527208862377) & (360.1381918399302, 354.0576740099409) },
"1"*[o]=(0,0){\,};"flex12"**[thicker][thicker]\crv{ (413.5408772608842, 322.9452190236518) & (425.9242918944621, 287.75123436850686) },
"flex12";"2"**[thicker][thicker]\crv{ (428.5332488010918, 211.9124334563536) & (416.81391758050387, 173.0244299991158) },
"2";"flex13"**[thicker][thicker]\crv{ (358.45789920296585, 99.98830731640845) & (305.47728776929796, 78.34366147388141) },
"flex13";"5"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (204.27690609597215, 85.12306729207455) & (160.40651882788342, 107.27327845453624) },
"5"*[o]=(0,0){\,};"flex14"**[thicker][thicker]\crv{ (104.03989213273773, 174.6058434763985) & (89.8725403309002, 214.1070463173396) },
"flex14";"4"**[thicker][thicker]\crv{ (93.8554646006644, 292.56508367080056) & (109.87496835320914, 328.1437560715922) },
"4";"flex15"**[thicker][thicker]\crv{ (166.99456697019883, 395.96624056130514) & (213.445657049406, 421.6563385261186) },
"flex15";"1"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (315.0547785053475, 420.70233829636896) & (361.5829581705556, 393.5714683184209) },
"1"*!<9pt,-3pt>{1},
"2"*!<-2pt,7pt>{2},
"3"*!<-8pt,0pt>{3},
"4"*!<-8pt,-3pt>{4},
"5"*!<2pt,7pt>{5},
"1"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"2"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"3"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"4"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"5"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
a(63);a(243)**\dir{},
"1"+/10pt/="1x";"1"+/-10pt/="1y"**\dir{-},
"1x"*\dir{<},"1y"*\dir{>},
a(30);a(210)**\dir{},
"2"+<0pt,.5pt>+/10pt/="2x";"2"+<-0pt,.5pt>+/-10pt/="2y"**\dir{-},
"2x"*\dir{<},"2y"*\dir{>},
a(-90);a(90)**\dir{},
"3"+/10pt/="3x";"3"+/-10pt/="3y"**\dir{-},
"3x"*\dir{<},"3y"*\dir{>},
a(-63);a(-243)**\dir{},
"4"+/10pt/="4x";"4"+/-10pt/="4y"**\dir{-},
"4x"*\dir{<},"4y"*\dir{>},
a(-30);a(-210)**\dir{},
"5"+<0pt,.5pt>+/10pt/="5x";"5"+<-0pt,.5pt>+/-10pt/="5y"**\dir{-},
"5x"*\dir{<},"5y"*\dir{>},
\endxy}
&
\hbox{\xy /r.2pt/:,
(391.91488791533, 352.3413638559693)="1";
(488.23963523795584, 250.7659748989001)="flex6";
(304.63185851208146, 179.87197583238355)="flex7";
(255.21548123737043, 250.45696392221237)="3";
(217.7750076748735, 303.9474653663747)="flex8";
(16.54168002921108, 259.5628135887432)="flex9";
(130.07853913415153, 143.5014649620283)="5";
(209.87045061497656, 196.28196062290476)="flex10";
(307.14380480112897, 317.5545833255335)="flex11";
(427.17900728907483, 251.27837781359727)="flex12";
(392.2309370041537, 142.25732773272796)="2";
(251.42071093860625, 81.96490741602179)="flex13";
(91.92866235553595, 254.6097743849504)="flex14";
(134.3575638708463, 357.2138212500434)="4";
(263.9883631944421, 421.1817969437918)="flex15";
"1";"flex6"**\crv{ (445.34326907488906, 349.45561157773267) & (486.56031604159915, 304.72088971862195) },
"flex6";"2"*[o]=(0,0){\,}**\crv{ (490.0, 194.20714380008747) & (448.1102077772812, 145.47224681727857) },
"2"*[o]=(0,0){\,};"flex7"**\crv{ (359.0690074005095, 140.3494123485753) & (328.06252036965066, 156.1999502981455) },
"flex7";"3"**\crv{ (284.3314086465915, 200.38154446913785) & (270.2574733679984, 225.87474382493315) },
"3";"flex8"**\crv{ (243.84077093663188, 269.0459666106394) & (231.7982171140436, 287.25928530147655) },
"flex8";"4"*[o]=(0,0){\,}**\crv{ (195.77447028784547, 330.12898472767824) & (168.19341986481308, 352.62245744914907) },
"4"*[o]=(0,0){\,};"flex9"**\crv{ (75.12058734742762, 365.25199559776934) & (22.333924386248313, 320.02705403474664) },
"flex9";"5"**\crv{ (10.0, 191.27534333958482) & (65.15707992221807, 133.10145207054518) },
"5";"flex10"**[thicker][thicker]\crv{ (162.52191950281167, 148.6986921681496) & (187.51546846253603, 172.21151232245245) },
"flex10";"3"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (225.90932145756702, 213.5516148226535) & (241.3109492912824, 231.4239015481662) },
"3"*[o]=(0,0){\,};"flex11"**[thicker][thicker]\crv{ (271.9379099677385, 273.34727265786063) & (286.57430558515404, 297.99559277726166) },
"flex11";"1"**[thicker][thicker]\crv{ (330.25163864549603, 339.527208862377) & (360.1381918399302, 354.0576740099409) },
"1"*[o]=(0,0){\,};"flex12"**\crv{ (413.5408772608842, 322.9452190236518) & (425.9242918944621, 287.75123436850686) },
"flex12";"2"**\crv{ (428.5332488010918, 211.9124334563536) & (416.81391758050387, 173.0244299991158) },
"2";"flex13"**\crv{ (358.45789920296585, 99.98830731640845) & (305.47728776929796, 78.34366147388141) },
"flex13";"5"*[o]=(0,0){\,}**\crv{ (204.27690609597215, 85.12306729207455) & (160.40651882788342, 107.27327845453624) },
"5"*[o]=(0,0){\,};"flex14"**[thicker][thicker]\crv{ (104.03989213273773, 174.6058434763985) & (89.8725403309002, 214.1070463173396) },
"flex14";"4"**[thicker][thicker]\crv{ (93.8554646006644, 292.56508367080056) & (109.87496835320914, 328.1437560715922) },
"4";"flex15"**[thicker][thicker]\crv{ (166.99456697019883, 395.96624056130514) & (213.445657049406, 421.6563385261186) },
"flex15";"1"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (315.0547785053475, 420.70233829636896) & (361.5829581705556, 393.5714683184209) },
"1"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"2"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"3"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"4"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
"5"*!<0pt,0pt>{\raise.5pt\hbox{\scriptsize$\,\bullet\,$}},
a(63);a(243)**\dir{},
"1"+/10pt/="1x";"1"+/-10pt/="1y"**\dir{-},
"1x"*\dir{<},"1y"*\dir{>},
a(30);a(210)**\dir{},
"2"+<0pt,.5pt>+/10pt/="2x";"2"+<-0pt,.5pt>+/-10pt/="2y"**\dir{-},
"2x"*\dir{<},"2y"*\dir{>},
a(-90);a(90)**\dir{},
"3"+/10pt/="3x";"3"+/-10pt/="3y"**\dir{-},
"3x"*\dir{<},"3y"*\dir{>},
a(-63);a(-243)**\dir{},
"4"+/10pt/="4x";"4"+/-10pt/="4y"**\dir{-},
"4x"*\dir{<},"4y"*\dir{>},
a(-30);a(-210)**\dir{},
"5"+<0pt,.5pt>+/10pt/="5x";"5"+<-0pt,.5pt>+/-10pt/="5y"**\dir{-},
"5x"*\dir{<},"5y"*\dir{>},
\endxy}\\
\noalign{\vskip-15pt}
(a) & (b)
\endtable}
\medskip
\noindent{\bf Example 4.2.}
This next example, a vertex-orientation of one of the basic polyhedra ($8^*$ in Figure 6 of \cite{JHC}), is
interesting in that it contains an o-cycle of maximum length that does
not participate in any o-colouring. For this vertex-orientation, there
were a total of twelve o-cycles, of lengths 3, 4, 5, 6, and 7, and there were four different ways to
decompose the edge-set as an edge-disjoint union of o-cycles (what we have called an o-colouring,
although we have not assigned any colours to the o-cycles). There were two o-cycles of length 7,
and neither participated in any of the four o-colourings (listed in Table 1).
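These counts can be checked mechanically: finding every edge-disjoint decomposition of the edge-set into o-cycles is a small exact-cover search over the twelve cycles of Table 1. The following Python sketch (an independent sanity check, not part of the original computation; the graph's edge-set is recovered as the union of the cycles' edges) confirms that there are exactly four decompositions and that neither 7-cycle occurs in any of them:

```python
# The twelve o-cycles of Table 1, as closed vertex walks (first = last vertex).
o_cycles = [
    (1,2,3,4,5,8,6,1), (1,2,6,1), (1,2,6,5,1), (1,5,8,3,7,1),
    (1,5,8,3,4,7,1), (1,6,8,4,7,1), (1,5,6,8,4,7,1),
    (2,6,5,4,8,3,7,2), (2,3,4,7,2), (2,3,7,2), (3,4,5,8,3), (4,5,6,8,4),
]

def edge_set(walk):
    # Undirected edges traversed by a closed walk; the underlying graph is simple.
    return frozenset(frozenset(e) for e in zip(walk, walk[1:]))

cyc = [edge_set(c) for c in o_cycles]
E = frozenset().union(*cyc)          # the 16 edges of the 4-regular graph

def decompositions(remaining, start=0):
    # Backtracking exact cover: all partitions of `remaining` into o-cycles,
    # generated with strictly increasing indices to avoid duplicates.
    if not remaining:
        yield ()
        return
    for i in range(start, len(cyc)):
        if cyc[i] <= remaining:
            for rest in decompositions(remaining - cyc[i], i + 1):
                yield (i,) + rest

colourings = list(decompositions(E))
used = {i for c in colourings for i in c}
print(len(colourings))               # 4 o-colourings, as stated above
print(0 in used, 7 in used)          # False False: the 7-cycles are in none
```

The search also reproduces the four decompositions of Table 1 exactly; the two 7-cycles are the entries at indices 0 and 7 of the list.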
\medskip
\table{l@{\hskip0pt}c}
\table[t]{lll}
O-cycle & Length & In o-colourings\\
1,2,3,4,5,8,6,1 & 7 &-\\
1,2,6,1 & 3 & 1,2,3\\
1,2,6,5,1& 4& 4\\
1,5,8,3,7,1& 5& 1\\
1,5,8,3,4,7,1& 6& 2\\
1,6,8,4,7,1& 5& 4\\
1,5,6,8,4,7,1& 6& 3\\
2,6,5,4,8,3,7,2& 7 &-\\
2,3,4,7,2& 4& 1\\
2,3,7,2& 3& 2,3,4\\
3,4,5,8,3& 4& 3,4\\
4,5,6,8,4& 4& 1,2\\
\endtable
&
\vbox to 0pt{\vskip.25in\hbox to 1in{\hskip0in
\xy /r.25pt/:,
(164,246.883)="1";
(138.363,195.579)="flex9";
(250.956,5.35538)="flex10";
(372.97,124.618)="3";
(360.934,194.119)="flex11";
(291.806,290.621)="flex12";
(247.555,333.211)="5";
(194.847,361.28)="flex13";
(5.40639,250.364)="flex14";
(126.818,125.817)="2";
(196.94,136.654)="flex15";
(291.991,201.795)="flex16";
(333.411,245.967)="4";
(361.597,299.41)="flex17";
(248.868,494.957)="flex18";
(123.927,373.58)="6";
(135.326,300.193)="flex19";
(205.009,202.868)="flex20";
(248.457,161.851)="7";
(301.274,134.724)="flex21";
(494.369,248.9)="flex22";
(373.625,373.095)="8";
(300.715,360.973)="flex23";
(203.676,290.969)="flex24";
"1";"flex9"**\crv{ (152.845,231.234) & (144.725,213.705) },
"flex9";"2"*[o]=(5,5){\,}**\crv{ (130.48,173.117) & (125.273,149.591) },
"2"*[o]=(5,5){\,};"flex10"**\crv{ (131.173,58.8004) & (184.622,5) },
"flex10";"3"**\crv{ (316.302,5.70547) & (368.725,58.6935) },
"3";"flex11"**\crv{ (374.499,148.354) & (369.2,171.835) },
"flex11";"4"*[o]=(5,5){\,}**\crv{ (354.094,212.558) & (345.224,230.236) },
"4"*[o]=(5,5){\,};"flex12"**\crv{ (321.173,262.264) & (306.065,276.077) },
"flex12";"5"**\crv{ (277.455,305.26) & (263.852,320.752) },
"5";"flex13"**\crv{ (231.65,345.371) & (213.611,354.322) },
"flex13";"6"*[o]=(5,5){\,}**\crv{ (172.101,369.714) & (148.148,375.237) },
"6"*[o]=(5,5){\,};"flex14"**\crv{ (57.878,369.063) & (5.24994,315.931) },
"flex14";"2"**\crv{ (5.56503,183.878) & (59.6302,130.642) },
"2";"flex15"**\crv{ (150.656,124.105) & (174.309,129.03) },
"flex15";"7"*[o]=(5,5){\,}**\crv{ (215.151,142.789) & (232.801,150.692) },
"7"*[o]=(5,5){\,};"flex16"**\crv{ (264.534,173.311) & (278.02,187.866) },
"flex16";"4"**\crv{ (306.304,216.064) & (321.257,229.803) },
"4";"flex17"**\crv{ (345.582,262.156) & (354.661,280.387) },
"flex17";"8"*[o]=(5,5){\,}**\crv{ (370.217,323.053) & (375.521,347.985) },
"8"*[o]=(5,5){\,};"flex18"**\crv{ (368.535,440.525) & (315.44,495) },
"flex18";"6"**\crv{ (182.537,494.914) & (129.66,440.751) },
"6";"flex19"**\crv{ (121.795,348.608) & (126.7,323.71) },
"flex19";"1"*[o]=(5,5){\,}**\crv{ (142.313,281.145) & (151.712,263.033) },
"1"*[o]=(5,5){\,};"flex20"**\crv{ (176.162,230.899) & (190.913,217.161) },
"flex20";"7"**\crv{ (219.015,188.667) & (232.479,173.813) },
"7";"flex21"**\crv{ (264.402,149.915) & (282.433,141.163) },
"flex21";"3"*[o]=(5,5){\,}**\crv{ (324.36,126.834) & (348.657,122.436) },
"3"*[o]=(5,5){\,};"flex22"**\crv{ (439.853,130.62) & (493.989,182.898) },
"flex22";"8"**\crv{ (494.75,315.17) & (440.712,368.319) },
"8";"flex23"**\crv{ (348.767,374.865) & (324.139,369.427) },
"flex23";"5"*[o]=(5,5){\,}**\crv{ (281.808,354.148) & (263.594,345.343) },
"5"*[o]=(5,5){\,};"flex24"**\crv{ (231.325,320.936) & (217.796,305.602) },
"flex24";"1"**\crv{ (189.928,276.722) & (175.5,263.015) },
"1"*!<6pt,0pt>{\hbox{1}},
"1";"1"+(0,32)**\dir{-}*\dir{>},
"1";"1"+(-,-32)**\dir{-}*\dir{>},
"2"*!<4.5pt,-7pt>{\hbox{2}},
"2";"2"+(20,20)**\dir{-}*\dir{>},
"2";"2"+(-20,-20)**\dir{-}*\dir{>},
"3"*!<-4.5pt,-7pt>{\hbox{3}},
"3";"3"+(20,-20)**\dir{-}*\dir{>},
"3";"3"+(-20,20)**\dir{-}*\dir{>},
"4"*!<.5pt,-8pt>{\hbox{4}},
"4";"4"+(32,0)**\dir{-}*\dir{>},
"4";"4"+(-32,0)**\dir{-}*\dir{>},
"5"*!<-8pt,1pt>{\hbox{5}},
"5";"5"+(0,32)**\dir{-}*\dir{>},
"5";"5"+(0,-32)**\dir{-}*\dir{>},
"6"*!<4.5pt,7pt>{\hbox{6}},
"6";"6"+(20,-20)**\dir{-}*\dir{>},
"6";"6"+(-20,20)**\dir{-}*\dir{>},
"7"*!<0pt,7pt>{\hbox{7}},
"7";"7"+(32,0)**\dir{-}*\dir{>},
"7";"7"+(-32,0)**\dir{-}*\dir{>},
"8"*!<-4.5pt,7pt>{\hbox{8}},
"8";"8"+(20,20)**\dir{-}*\dir{>},
"8";"8"+(-20,-20)**\dir{-}*\dir{>},
\endxy\hss}\vss}
\endtable
\medskip
The o-colourings for this vertex-orientation are (the colours assigned to each o-cycle
are shown in brackets):
\medskip
\centerline{\table{lllll}
Number & 1 & 2 & 3 & 4\\
& 1,5,8,3,7,1 (r) & 1,5,8,3,4,7,1 (r) & 1,5,6,8,4,7,1 (r) & 1,6,8,4,7,1 (r)\\
& 4,5,6,8,4 (g) & 4,5,6,8,4 (g) & 3,4,5,8,3 (g) & 3,4,5,8,3 (g)\\
& 2,3,4,7,2 (b) & 2,3,7,2 (b) & 2,3,7,2 (b) & 1,2,6,5,1 (g)\\
& 1,2,6,1 (y) & 1,2,6,1 (g) & 1,2,6,1 (g) & 2,3,7,2 (b)\\
\noalign{\vskip5pt}
\multicolumn{5}{c}{{ Table} 1}
\endtable}
\medskip
\leavevmode\hskip-\parindent\vbox{\hsize=3in We have shown the first o-cycle of length 7, namely $1,2,3,4,5,8,6,1$, and the other one
is obtained by reflecting this one across the axis of (vertex-orientation) symmetry through
vertices 2 and 8. It is evident that this o-cycle can't participate in an o-colouring of the graph, as
the cycle $7,4,8,3,7$ would have to be an o-cycle, and it fails to meet the requirement at vertex 7.}
\hskip .25in
\lower .01in\hbox{
\xy /r.18pt/:,
(164,246.883)="1";
(138.363,195.579)="flex9";
(250.956,5.35538)="flex10";
(372.97,124.618)="3";
(360.934,194.119)="flex11";
(291.806,290.621)="flex12";
(247.555,333.211)="5";
(194.847,361.28)="flex13";
(5.40639,250.364)="flex14";
(126.818,125.817)="2";
(196.94,136.654)="flex15";
(291.991,201.795)="flex16";
(333.411,245.967)="4";
(361.597,299.41)="flex17";
(248.868,494.957)="flex18";
(123.927,373.58)="6";
(135.326,300.193)="flex19";
(205.009,202.868)="flex20";
(248.457,161.851)="7";
(301.274,134.724)="flex21";
(494.369,248.9)="flex22";
(373.625,373.095)="8";
(300.715,360.973)="flex23";
(203.676,290.969)="flex24";
"1";"flex9"**[thicker][thicker]\crv{ (152.845,231.234) & (144.725,213.705) },
"flex9";"2"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (130.48,173.117) & (125.273,149.591) },
"2"*[o]=(0,0){\,};"flex10"**[thicker][thicker]\crv{ (131.173,58.8004) & (184.622,5) },
"flex10";"3"**[thicker][thicker]\crv{ (316.302,5.70547) & (368.725,58.6935) },
"3";"flex11"**[thicker][thicker]\crv{ (374.499,148.354) & (369.2,171.835) },
"flex11";"4"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (354.094,212.558) & (345.224,230.236) },
"4"*[o]=(0,0){\,};"flex12"**[thicker][thicker]\crv{ (321.173,262.264) & (306.065,276.077) },
"flex12";"5"**[thicker][thicker]\crv{ (277.455,305.26) & (263.852,320.752) },
"5";"flex13"**\crv{ (231.65,345.371) & (213.611,354.322) },
"flex13";"6"*[o]=(0,0){\,}**\crv{ (172.101,369.714) & (148.148,375.237) },
"6"*[o]=(0,0){\,};"flex14"**\crv{ (57.878,369.063) & (5.24994,315.931) },
"flex14";"2"**\crv{ (5.56503,183.878) & (59.6302,130.642) },
"2";"flex15"**\crv{ (150.656,124.105) & (174.309,129.03) },
"flex15";"7"*[o]=(0,0){\,}**\crv{ (215.151,142.789) & (232.801,150.692) },
"7"*[o]=(0,0){\,};"flex16"**\crv{ (264.534,173.311) & (278.02,187.866) },
"flex16";"4"**\crv{ (306.304,216.064) & (321.257,229.803) },
"4";"flex17"**\crv{ (345.582,262.156) & (354.661,280.387) },
"flex17";"8"*[o]=(0,0){\,}**\crv{ (370.217,323.053) & (375.521,347.985) },
"8"*[o]=(0,0){\,};"flex18"**[thicker][thicker]\crv{ (368.535,440.525) & (315.44,495) },
"flex18";"6"**[thicker][thicker]\crv{ (182.537,494.914) & (129.66,440.751) },
"6";"flex19"**[thicker][thicker]\crv{ (121.795,348.608) & (126.7,323.71) },
"flex19";"1"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (142.313,281.145) & (151.712,263.033) },
"1"*[o]=(0,0){\,};"flex20"**\crv{ (176.162,230.899) & (190.913,217.161) },
"flex20";"7"**\crv{ (219.015,188.667) & (232.479,173.813) },
"7";"flex21"**\crv{ (264.402,149.915) & (282.433,141.163) },
"flex21";"3"*[o]=(0,0){\,}**\crv{ (324.36,126.834) & (348.657,122.436) },
"3"*[o]=(0,0){\,};"flex22"**\crv{ (439.853,130.62) & (493.989,182.898) },
"flex22";"8"**\crv{ (494.75,315.17) & (440.712,368.319) },
"8";"flex23"**[thicker][thicker]\crv{ (348.767,374.865) & (324.139,369.427) },
"flex23";"5"*[o]=(0,0){\,}**[thicker][thicker]\crv{ (281.808,354.148) & (263.594,345.343) },
"5"*[o]=(0,0){\,};"flex24"**\crv{ (231.325,320.936) & (217.796,305.602) },
"flex24";"1"**\crv{ (189.928,276.722) & (175.5,263.015) },
"1"*!<6pt,0pt>{\hbox{1}},
"1";"1"+(0,32)**\dir{-}*\dir{>},
"1";"1"+(-,-32)**\dir{-}*\dir{>},
"2"*!<4.5pt,-7pt>{\hbox{2}},
"2";"2"+(20,20)**\dir{-}*\dir{>},
"2";"2"+(-20,-20)**\dir{-}*\dir{>},
"3"*!<-4.5pt,-7pt>{\hbox{3}},
"3";"3"+(20,-20)**\dir{-}*\dir{>},
"3";"3"+(-20,20)**\dir{-}*\dir{>},
"4"*!<.5pt,-8pt>{\hbox{4}},
"4";"4"+(32,0)**\dir{-}*\dir{>},
"4";"4"+(-32,0)**\dir{-}*\dir{>},
"5"*!<-8pt,1pt>{\hbox{5}},
"5";"5"+(0,32)**\dir{-}*\dir{>},
"5";"5"+(0,-32)**\dir{-}*\dir{>},
"6"*!<4.5pt,7pt>{\hbox{6}},
"6";"6"+(20,-20)**\dir{-}*\dir{>},
"6";"6"+(-20,20)**\dir{-}*\dir{>},
"7"*!<0pt,7pt>{\hbox{7}},
"7";"7"+(32,0)**\dir{-}*\dir{>},
"7";"7"+(-32,0)**\dir{-}*\dir{>},
"8"*!<-4.5pt,7pt>{\hbox{8}},
"8";"8"+(20,20)**\dir{-}*\dir{>},
"8";"8"+(-20,-20)**\dir{-}*\dir{>},
\endxy}
\medskip
\noindent{\bf Example 4.3.}
We complete our discussion with a case study of the basic polyhedral graph $6^*$, the 4-regular
simple graph obtained as an alternating six-crossing projection of the Borromean rings.
This graph has an automorphism group of order 48, and the natural action of the
automorphism group on the set of vertex-orientations of $6^*$ has seven orbits. We offer a representative
of each orbit below, and for each we present the complete collection of o-cycles, as well as every
way of decomposing the edge-set into edge-disjoint o-cycles (what we refer to as o-colourings).
For each representative, we index the o-colourings starting at 0, and then, for each o-cycle, we record
its length together with the indices of the o-colourings in which it participates.
\begin{tabular}{cc}
\begin{tabular}{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<3pt,7pt>{\hbox{1}},
"2"*{\bullet}*!<-8pt,0pt>{\hbox{2}},
"3"*{\bullet}*!<-7pt,7pt>{\hbox{3}},
"4"*{\bullet}*!<0pt,7pt>{\hbox{4}},
"5"*{\bullet}*!<-4pt,7pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"flex7";"flex14"**\dir{}?(.5)="1:2-6",
"1";"1:2-6"**\dir{}?(.4)="x","1";"x"**\dir{-}*\dir{>},
"flex10";"flex11"**\dir{}?(.5)="1:4-5",
"1";"1:4-5"**\dir{}?(.7)="y",
"1";"1"+"1"-"y"**\dir{-}*\dir{>},
"flex7";"flex15"**\dir{}?(.5)="2:1-5",
"2";"2:1-5"**\dir{}?(1.1)="y",
"2";"y"**\dir{-}*\dir{>},
"2";"2"+"2"-"y"**\dir{-}*\dir{>},
"3";"3"+(-17,-25)**\dir{-}*\dir{>},
"3";"3"-(-17,-25)**\dir{-}*\dir{>},
"4";"4"+(-30,0)**\dir{-}*\dir{>},
"4";"4"+(30,0)**\dir{-}*\dir{>},
"5";"5"+(20,14)**\dir{-}*\dir{>},
"5";"5"-(20,14)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy} \\
\begin{tabular}{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,6,3,5,4,1),(1,5,2,3,4,6,1)\\
2 & (1,2,6,4,3,5,1),(1,4,5,2,3,6,1)\\
\end{tabular}
\end{tabular}
&
{
\begin{tabular}{lll}
\multicolumn{3}{c}{List of o-cycles (11)}\\
(1,2,6,3,5,4,1) & 6 & 1\\
(1,4,5,2,3,6,1) & 6 & 2\\
(1,5,2,3,4,6,1) & 6 & 1\\
(1,2,6,4,3,5,1) & 6 & 2\\
(1,4,5,3,6,1) & 5 & \\
(1,5,2,3,6,1) & 5 &\\
(1,2,6,3,5,1) & 5 &\\
(1,5,3,4,6,1) & 5 &\\
(2,5,4,6,2) & 4 & 0\\
(1,2,3,4,1) & 4 & 0\\
(1,5,3,6,1) & 4 & 0\\
\end{tabular}}
\end{tabular}
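Each o-colouring is, in particular, a partition of the edge-set, so the three decompositions listed above can be verified mechanically. A short Python check (an illustrative sketch, not part of the enumeration itself; the edge-set of $6^*$, the octahedron, is written out explicitly):

```python
# Edge-set of 6* (the octahedron: all pairs adjacent except 1-3, 2-4, 5-6).
E = {frozenset(e) for e in
     [(1,2),(2,3),(3,4),(1,4),(1,5),(3,5),(3,6),(1,6),(2,5),(4,5),(4,6),(2,6)]}

def edges(walk):
    # Undirected edges of a closed vertex walk (first vertex = last vertex).
    return [frozenset(e) for e in zip(walk, walk[1:])]

# The three o-colourings listed for this vertex-orientation.
colourings = [
    [(1,2,3,4,1), (1,5,3,6,1), (2,5,4,6,2)],
    [(1,2,6,3,5,4,1), (1,5,2,3,4,6,1)],
    [(1,2,6,4,3,5,1), (1,4,5,2,3,6,1)],
]

for cycles in colourings:
    es = [e for c in cycles for e in edges(c)]
    # Edge-disjoint (no repeated edge) and covering (union is all of E):
    assert len(es) == len(set(es)) == 12 and set(es) == E
print("all three decompositions partition the edge-set of 6*")
```

The same check applies verbatim to the decomposition lists for the other orbit representatives, since all seven share the underlying graph $6^*$.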
\medskip
\centerline{\begin{tabular}{cc}
\begin{tabular}{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<5pt,-7pt>{\hbox{1}},
"2"*{\bullet}*!<-9pt,1pt>{\hbox{2}},
"3"*{\bullet}*!<-7pt,7pt>{\hbox{3}},
"4"*{\bullet}*!<0pt,7pt>{\hbox{4}},
"5"*{\bullet}*!<-4pt,7pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"1";"1"+(17,25)**\dir{-}*\dir{>},
"1";"1"-(17,25)**\dir{-}*\dir{>},
"2";"2"+(0,30)**\dir{-}*\dir{>},
"2";"2"-(0,30)**\dir{-}*\dir{>},
"3";"3"+(-17,-25)**\dir{-}*\dir{>},
"3";"3"-(-17,-25)**\dir{-}*\dir{>},
"4";"4"+(-30,0)**\dir{-}*\dir{>},
"4";"4"+(30,0)**\dir{-}*\dir{>},
"5";"5"+(20,14)**\dir{-}*\dir{>},
"5";"5"-(20,14)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy}\\
\begin{tabular}{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,3,6,1),(1,4,3,5,1),(2,5,4,6,2)\\
2 & (1,2,6,3,5,4,1),(1,5,2,3,4,6,1)\\
\end{tabular}
\end{tabular}
&
\begin{tabular}{lll}
\multicolumn{3}{c}{List of o-cycles (11)}\\
(1,2,6,3,5,4,1) & 6 & 2\\
(1,5,2,3,4,6,1) & 6 & 2\\
(1,4,3,2,5,1) & 5 & \\
(1,2,3,4,6,1) & 5 & \\
(1,5,2,3,6,1) & 5 & \\
(1,5,3,4,6,1) & 5 &\\
(1,4,3,5,1) & 4 & 1\\
(1,2,3,4,1) & 4 & 0\\
(1,2,3,6,1) & 4 & 1\\
(1,5,3,6,1) & 4 & 0\\
(2,5,4,6,2) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,1\hss}\\
\end{tabular}
\end{tabular}}
\medskip
\centerline{\table{cc}
\table{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<4pt,9pt>{\hbox{1}},
"2"*{\bullet}*!<0pt,-9pt>{\hbox{2}},
"3"*{\bullet}*!<-7pt,7pt>{\hbox{3}},
"4"*{\bullet}*!<0pt,7pt>{\hbox{4}},
"5"*{\bullet}*!<-4pt,9pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"1";"1"+(20,-15)**\dir{-}*\dir{>},
"1";"1"-(20,-15)**\dir{-}*\dir{>},
"2";"2"+(30,0)**\dir{-}*\dir{>},
"2";"2"-(30,0)**\dir{-}*\dir{>},
"3";"3"+(-17,-25)**\dir{-}*\dir{>},
"3";"3"-(-17,-25)**\dir{-}*\dir{>},
"4";"4"+(-30,0)**\dir{-}*\dir{>},
"4";"4"+(30,0)**\dir{-}*\dir{>},
"5";"5"+(20,15)**\dir{-}*\dir{>},
"5";"5"-(20,15)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy}\\
\table{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,5,1),(1,4,5,3,6,1),(2,3,4,6,2)\\
2 & (1,2,5,4,1),(1,5,3,6,1),(2,3,4,6,2)\\
3 & (1,2,5,4,1),(1,5,3,4,6,1),(2,3,6,2)\\
\endtable
\endtable
&
\table{lll}
\multicolumn{3}{c}{List of o-cycles (9)}\\
(1,4,5,3,6,1) & 5 & 1\\
(1,5,3,4,6,1) & 5 & 3\\
(1,2,3,4,1) & 4 & 0\\
(1,2,5,4,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{2,3\hss}\\
(1,5,3,6,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,2\hss}\\
(2,5,4,6,2) & 4 & 0\\
(2,3,4,6,2) & 4 & \setbox0=\hbox{0}\hbox to \wd0{1,2\hss}\\
(1,2,5,1) & 3 & 1\\
(2,3,6,2) & 3 & 3\\
\endtable
\endtable}
\medskip
\centering\table{cc}
\table{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<5pt,-9pt>{\hbox{1}},
"2"*{\bullet}*!<0pt,-9pt>{\hbox{2}},
"3"*{\bullet}*!<-7pt,7pt>{\hbox{3}},
"4"*{\bullet}*!<0pt,7pt>{\hbox{4}},
"5"*{\bullet}*!<-4pt,9pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"1";"1"+(15,21)**\dir{-}*\dir{>},
"1";"1"-(15,21)**\dir{-}*\dir{>},
"2";"2"+(30,0)**\dir{-}*\dir{>},
"2";"2"-(30,0)**\dir{-}*\dir{>},
"3";"3"+(-17,-25)**\dir{-}*\dir{>},
"3";"3"-(-17,-25)**\dir{-}*\dir{>},
"4";"4"+(-30,0)**\dir{-}*\dir{>},
"4";"4"+(30,0)**\dir{-}*\dir{>},
"5";"5"+(20,15)**\dir{-}*\dir{>},
"5";"5"-(20,15)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy}\\
\table{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,3,6,1),(1,4,3,5,1),(2,5,4,6,2)\\
2 & (1,2,5,4,1),(1,5,3,6,1),(2,3,4,6,2)\\
3 & (1,2,5,4,1),(1,5,3,4,6,1),(2,3,6,2)\\
4 & (1,2,5,4,6,1),(1,4,3,5,1),(2,3,6,2)\\
\endtable
\endtable
&
\table{lll}
\multicolumn{3}{c}{List of o-cycles (11)}\\
(1,2,3,4,6,1) & 5 & \\
(1,5,3,4,6,1) & 5 & 3\\
(1,2,5,4,6,1) & 5 & 4\\
(2,5,4,6,2) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,1\hss}\\
(1,2,5,4,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{2,3\hss}\\
(1,4,3,5,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{1,4\hss}\\
(1,2,3,4,1) & 4 & 0\\
(1,2,3,6,1) & 4 & 1\\
(1,5,3,6,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,2\hss}\\
(2,3,4,6,2) & 4 & 2\\
(2,3,6,2) & 3 & \setbox0=\hbox{0}\hbox to \wd0{3,4\hss}\\
\endtable
\endtable
\medskip
\centerline{\table{cc}
\table{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<5pt,9pt>{\hbox{1}},
"2"*{\bullet}*!<0pt,-9pt>{\hbox{2}},
"3"*{\bullet}*!<-7pt,7pt>{\hbox{3}},
"4"*{\bullet}*!<-9pt,0pt>{\hbox{4}},
"5"*{\bullet}*!<-4pt,9pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"1";"1"+(20,-15)**\dir{-}*\dir{>},
"1";"1"-(20,-15)**\dir{-}*\dir{>},
"2";"2"+(30,0)**\dir{-}*\dir{>},
"2";"2"-(30,0)**\dir{-}*\dir{>},
"3";"3"+(-17,-25)**\dir{-}*\dir{>},
"3";"3"-(-17,-25)**\dir{-}*\dir{>},
"4";"4"+(0,27)**\dir{-}*\dir{>},
"4";"4"-(0,27)**\dir{-}*\dir{>},
"5";"5"+(20,15)**\dir{-}*\dir{>},
"5";"5"-(20,15)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy}\\
\table{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,5,1),(1,4,6,1),(2,3,6,2),(3,4,5,3)\\
\endtable
\endtable
&
\table{lll}
\multicolumn{3}{c}{List of o-cycles (7)}\\
(1,2,3,4,1) & 4 & 0\\
(2,5,4,6,2) & 4 & 0\\
(1,5,3,6,1) & 4 & 0\\
(1,4,6,1) & 3 & 1\\
(1,2,5,1) & 3 & 1\\
(2,3,6,2) & 3 & 1\\
(3,4,5,3) & 3 & 1\\
\endtable
\endtable}
\medskip
\centerline{\table{cc}
\table{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<5pt,9pt>{\hbox{1}},
"2"*{\bullet}*!<0pt,-9pt>{\hbox{2}},
"3"*{\bullet}*!<-3pt,-9pt>{\hbox{3}},
"4"*{\bullet}*!<0pt,7pt>{\hbox{4}},
"5"*{\bullet}*!<-7pt,-7pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"1";"1"+(20,-15)**\dir{-}*\dir{>},
"1";"1"-(20,-15)**\dir{-}*\dir{>},
"2";"2"+(30,0)**\dir{-}*\dir{>},
"2";"2"-(30,0)**\dir{-}*\dir{>},
"3";"3"+(-25,16)**\dir{-}*\dir{>},
"3";"3"-(-25,16)**\dir{-}*\dir{>},
"4";"4"+(-30,0)**\dir{-}*\dir{>},
"4";"4"+(30,0)**\dir{-}*\dir{>},
"5";"5"+(20,-24)**\dir{-}*\dir{>},
"5";"5"-(20,-24)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy}\\
\table{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,3,4,1),(1,5,4,6,1),(2,5,3,6,2)\\
2 & (1,2,3,5,1),(1,4,3,6,1),(2,5,4,6,2)\\
3 & (1,2,5,4,1),(1,5,3,6,1),(2,3,4,6,2)\\
\endtable
\endtable
&
\table{lll}
\multicolumn{3}{c}{List of o-cycles (9)}\\
(1,2,3,4,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,1\hss}\\
(2,5,4,6,2) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,2\hss}\\
(1,5,3,6,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,3\hss}\\
(1,5,4,6,1) & 4 & 1\\
(2,5,3,6,2) & 4 & 1\\
(1,2,3,5,1) & 4 & 2\\
(1,4,3,6,1) & 4 & 2\\
(1,2,5,4,1) & 4 & 3\\
(2,3,4,6,2) & 4 & 3\\
\endtable
\endtable}
\medskip
\centerline{\table{cc}
\table{c}
\hbox{\xy /r.20pt/:,
(119.113,311.02)="1";
(191.876,317.16)="flex7";
(283.884,255.041)="flex8";
(302.887,204.917)="3";
(291.639,144.326)="flex9";
(46.7613,118.174)="flex10";
(249.972,470.153)="flex11";
(380.834,311.02)="5";
(349.774,244.943)="flex12";
(249.969,196.31)="flex13";
(197.053,204.892)="6";
(150.169,244.915)="flex14";
(249.984,296.571)="2";
(308.08,317.151)="flex15";
(453.206,118.124)="flex16";
(249.947,84.3883)="4";
(208.268,144.307)="flex17";
(216.073,255.021)="flex18";
"1";"flex7"**\crv{ (142.378,319.265) & (167.397,320.203) },
"flex7";"2"*[o]=(5,5){\,}**\crv{ (212.611,314.583) & (233.207,309.056) },
"2"*[o]=(5,5){\,};"flex8"**\crv{ (264.485,285.78) & (274.858,270.676) },
"flex8";"3"**\crv{ (292.909,239.406) & (300.8,222.871) },
"3";"flex9"**\crv{ (305.301,184.151) & (299.777,163.562) },
"flex9";"4"*[o]=(5,5){\,}**\crv{ (282.03,121.611) & (268.714,100.414) },
"4"*[o]=(5,5){\,};"flex10"**\crv{ (186.091,29.8623) & (88.5278,45.8136) },
"flex10";"1"**\crv{ (5,190.526) & (39.9736,282.974) },
"1"*[o]=(5,5){\,};"flex11"**\crv{ (103.842,393.609) & (166.419,470.147) },
"flex11";"5"**\crv{ (333.535,470.16) & (396.127,393.611) },
"5";"flex12"**\crv{ (376.341,286.753) & (364.647,264.619) },
"flex12";"3"*[o]=(5,5){\,}**\crv{ (337.174,228.275) & (322.087,213.205) },
"3"*[o]=(5,5){\,};"flex13"**\crv{ (286.29,197.752) & (268.024,196.315) },
"flex13";"6"**\crv{ (231.916,196.306) & (213.65,197.733) },
"6";"flex14"**\crv{ (177.851,213.175) & (162.766,228.246) },
"flex14";"1"*[o]=(5,5){\,}**\crv{ (135.293,264.598) & (123.601,286.744) },
"2";"flex15"**\crv{ (266.758,309.053) & (287.35,314.575) },
"flex15";"5"*[o]=(5,5){\,}**\crv{ (332.556,320.193) & (357.571,319.259) },
"5"*[o]=(5,5){\,};"flex16"**\crv{ (460.009,282.979) & (495,190.491) },
"flex16";"4"**\crv{ (411.413,45.7587) & (313.821,29.8402) },
"4";"flex17"**\crv{ (231.186,100.41) & (217.868,121.596) },
"flex17";"6"*[o]=(5,5){\,}**\crv{ (200.137,163.542) & (194.63,184.131) },
"6"*[o]=(5,5){\,};"flex18"**\crv{ (199.149,222.848) & (207.046,239.384) },
"flex18";"2"**\crv{ (225.103,270.663) & (235.476,285.776) },
"1"*{\bullet}*!<5pt,-7pt>{\hbox{1}},
"2"*{\bullet}*!<0pt,-9pt>{\hbox{2}},
"3"*{\bullet}*!<-7pt,7pt>{\hbox{3}},
"4"*{\bullet}*!<0pt,9pt>{\hbox{4}},
"5"*{\bullet}*!<-7pt,-7pt>{\hbox{5}},
"6"*{\bullet}*!<7pt,7pt>{\hbox{6}},
"1";"1"+(13,21)**\dir{-}*\dir{>},
"1";"1"-(13,21)**\dir{-}*\dir{>},
"2";"2"+(30,0)**\dir{-}*\dir{>},
"2";"2"-(30,0)**\dir{-}*\dir{>},
"3";"3"+(-17,-25)**\dir{-}*\dir{>},
"3";"3"-(-17,-25)**\dir{-}*\dir{>},
"4";"4"+(27,0)**\dir{-}*\dir{>},
"4";"4"-(27,0)**\dir{-}*\dir{>},
"5";"5"+(20,-24)**\dir{-}*\dir{>},
"5";"5"-(20,-24)**\dir{-}*\dir{>},
"6";"6"+(17,-25)**\dir{-}*\dir{>},
"6";"6"-(17,-25)**\dir{-}*\dir{>},
\endxy}\\
\table{ll}
0 & (1,2,3,4,1),(1,5,3,6,1),(2,5,4,6,2)\\
1 & (1,2,3,4,1),(1,5,4,6,1),(2,5,3,6,2)\\
2 & (1,2,3,4,6,1),(1,4,5,1),(2,5,3,6,2)\\
3 & (1,2,3,6,1),(1,4,3,5,1),(2,5,4,6,2)\\
4 & (1,2,3,6,1),(1,4,5,1),(2,5,3,4,6,2)\\
5 & (1,2,5,3,4,1),(1,5,4,6,1),(2,3,6,2)\\
6 & (1,2,5,3,4,6,1),(1,4,5,1),(2,3,6,2)\\
7 & (1,2,5,3,6,1),(1,4,5,1),(2,3,4,6,2)\\
8 & (1,2,5,4,1),(1,5,3,6,1),(2,3,4,6,2)\\
9 & (1,2,5,4,1),(1,5,3,4,6,1),(2,3,6,2)\\
10 & (1,2,5,4,6,1),(1,4,3,5,1),(2,3,6,2)\\
\endtable
\endtable
&
\table{lll}
\multicolumn{3}{c}{List of o-cycles (18)}\\
(1,2,5,3,4,6,1) & 6 & \setbox0=\hbox{0}\hbox to \wd0{0,1\hss}\\
(1,2,3,4,6,1) & 5 & 2\\
(1,2,5,3,4,1) & 5 & \setbox0=\hbox{0}\hbox to \wd0{3,4\hss}\\
(1,2,5,3,6,1) & 5 & 5\\
(1,2,5,4,6,1) & 5 & 6\\
(2,5,3,4,6,2) & 5 & 7\\
(1,5,3,4,6,1) & 5 & \setbox0=\hbox{0}\hbox to \wd0{8,9\hss}\\
(1,2,3,4,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,3\hss}\\
(2,5,4,6,2) & 4 & \setbox0=\hbox{0}\hbox to \wd0{0,8\hss}\\
(1,2,3,6,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{1,2\hss}\\
(1,4,3,5,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{1,5\hss}\\
(1,5,3,6,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{2,4,6,7\hss}\\
(1,5,4,6,1) & 4 & \setbox0=\hbox{0}\hbox to \wd0{3,10\hss}\\
(1,2,5,4,1) & 4 & 4\\
(2,5,3,6,2) & 4 & 9\\
(2,3,4,6,2) & 4 & 10\\
(2,3,6,2) & 3 & \setbox0=\hbox{0}\hbox to \wd0{5,6,9,10\hss}\\
(1,4,5,1) & 3 & \setbox0=\hbox{0}\hbox to \wd0{7,8\hss}\\
\endtable
\endtable}
\section{#1}\setcounter{equation}{0}}
\newcommand{\mathinner{\mathrm{Id}}}{\mathinner{\mathrm{Id}}}
\newcommand{\mathop\bigcirc}{\mathop\bigcirc}
\newcommand{\pint}[1]{\mathaccent23{#1}}
\newcommand{\Lw}[1]{\mathbf{L^#1_{w}}}
\newcommand{\Lloc}[1]{\mathbf{L^{#1}_{loc}}}
\newcommand{\C}[1]{\mathbf{C^{#1}}}
\newcommand{\CB}[1]{\mathbf{C_B^{#1}}}
\newcommand{\mathbf{PC}}{\mathbf{PC}}
\newcommand{\Cc}[1]{\mathbf{C_c^{#1}}}
\newcommand{\Cw}[2]{\mathbf{C^{#1}_{#2}}}
\newcommand{\modulo}[1]{{\left|#1\right|}}
\newcommand{\norma}[1]{{\left\|#1\right\|}}
\newcommand{\Ref}[1]{{\rm(\ref{#1})}}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}}
\newcommand{{\mathbb{Z}}}{{\mathbb{Z}}}
\newcommand{{\mathbb{N}}}{{\mathbb{N}}}
\newcommand{\mathrm{TV}}{\mathrm{TV}}
\newcommand{\mathbf{BV}}{\mathbf{BV}}
\newcommand{\mathinner\mathbf{Lip}}{\mathinner\mathbf{Lip}}
\newcommand{\mathop{\mathrm{supp}}}{\mathop{\mathrm{supp}}}
\newcommand{\mathop{\mathrm{diam}}}{\mathop{\mathrm{diam}}}
\newcommand{\mathop{\rm ess~sup}}{\mathop{\rm ess~sup}}
\renewcommand{\epsilon}{\varepsilon}
\renewcommand{\phi}{\varphi}
\renewcommand{\theta}{\vartheta}
\renewcommand{\O}{{\mathcal O}(1)}
\renewcommand{\L}[1]{\mathbf{L^#1}}
\renewcommand{\sf}[2]{\mathinner{\mathcal{B}\left(#1,#2\right)}}
\newcommand{\caratt}[1]{{\chi_{\strut#1}}}
\newcommand{\piu}[1]{{[\![#1]\!]_+}}
\newcommand{\bar{\bar \gamma}}{\bar{\bar \gamma}}
\newcommand{\bar \gamma}{\bar \gamma}
\newcommand{\mathcal{RS}_J}{\mathcal{RS}_J}
\begin{document}
\title{A Riemann solver at a junction compatible with a homogenization limit}
\author{M. Garavello\thanks{Dipartimento di Matematica e Applicazioni,
Universit\`a di Milano Bicocca, Via R. Cozzi 55, 20125 Milano, Italy.
E-mail: \texttt{mauro.garavello@unimib.it}} \and
F. Marcellini\thanks{Dipartimento di Matematica e Applicazioni,
Universit\`a di Milano Bicocca, Via R. Cozzi 55, 20125 Milano, Italy.
E-mail: \texttt{francesca.marcellini@unimib.it}}}
\maketitle
\begin{abstract}
We consider a junction regulated by a traffic light,
with $n$ incoming roads and only one outgoing road.
On each road the Phase Transition traffic model,
proposed in~\cite{ColomboMarcelliniRascle}, describes the evolution of
car traffic. This model is an extension of the classical
Lighthill-Whitham-Richards one, obtained by assuming that
different drivers may have different maximal speeds.
By letting the number of cycles of the traffic light tend to infinity,
we obtain a justification of the Riemann solver introduced
in~\cite{GaravelloMarcellini} and, in particular, of the rule
determining the maximal speed in the outgoing road.
\noindent\textit{2000~Mathematics Subject Classification:} 35L65,
90B20
\medskip
\noindent\textit{Key words and phrases:} Phase Transition Model,
Hyperbolic Systems of
Conservation Laws, Continuum Traffic Models,
Homogenization Limit.
\end{abstract}
\section{Introduction}
\label{sec:Intro}
This paper deals with the Phase Transition traffic model,
proposed by Colombo, Marcellini, and Rascle in~\cite{ColomboMarcelliniRascle},
at a junction with $n \ge 2$ incoming roads and only one outgoing road.
The aim is to give a mathematical derivation for the solution
of the Riemann problem at the crossroad
proposed in~\cite{GaravelloMarcellini}. We obtain the justification
for such solution by using a homogenization procedure.
The traffic model considered in this paper is a system of $2 \times 2$
conservation laws; it belongs to the class of macroscopic second
order models, such as the well-known Aw-Rascle-Zhang model,
see~\cite{AwRascle, Zhang2002}. As the name Phase Transition suggests,
the model is characterized by two different phases,
the free one and the congested one;
see~\cite{BlandinWorkGoatinPiccoliBayen, Colombo1.5,
MR2032809, 2017arXiv170203624F, GaravelloHanPiccoli,
Goatin2Phases, LebacqueMammarHajSalem, Marcellini, Marcellini2} and the references
therein for similar descriptions.
The Phase Transition model we consider here is derived from the famous
Lighthill-Whitham-Richards one~\cite{LighthillWhitham, Richards}
by assuming that different drivers may have different maximal speeds.
The extension of the Phase Transition model to the case of networks
is considered in~\cite{GaravelloMarcellini}.
The key point in extending a model to a network
is to provide a concept of solution at the nodes.
A possible way to do this is to construct a Riemann solver at the nodes,
i.e.\ a map that assigns a solution
to each Riemann problem at a node.
A reasonable Riemann solver has to conserve mass and satisfy
a consistency condition; moreover, it should produce waves with
negative speed on the incoming edges and with
positive speed on the outgoing ones.
A Riemann solver satisfying
such properties is proposed in~\cite{GaravelloMarcellini}.
In particular, it prescribes that
the maximal speed in the outgoing road is a convex combination of the
maximal speed in the incoming arcs. Similar conditions are also present
in~\cite{HertyKlar2003, HertyRascle, 2017arXiv170701683K}.
In this paper we are going to investigate the delicate issue of
how the maximal speed \emph{changes} through the junction.
To this aim, we consider a single junction regulated by a time-periodic
traffic light. At each time the green light applies to only one incoming
road; vehicles on the remaining incoming roads are stopped, waiting
for their green light.
With a limit-average procedure, we are able to find the relation between
the incoming maximal speeds and the outgoing one.
In this way, the maximal outgoing speed turns out to be a
convex combination of the $n$ incoming ones and it satisfies
the corresponding condition prescribed by the Riemann solver
in~\cite{GaravelloMarcellini}.
The paper is organized as follows. In the next section we recall the 2-Phases
Traffic Model introduced in~\cite{ColomboMarcelliniRascle} and the solution
to the classical
Riemann problem along a single road of infinite length.
In Section~\ref{sec:Main} we consider a time-periodic
traffic light regulating the intersection and we study the solution in the
outgoing road as the period of the traffic light tends to $0$.
More precisely, in Subsection~\ref{subsec:I} we describe in detail
the solution in the simple situation
with $n = 2$ incoming roads and, finally, in Subsection~\ref{subsec:II}
we generalize the previous study to the case of $n \ge 2$ incoming roads
and, by a limit-average procedure, we state and prove the main result,
concerning the rule for the maximal speed
in the outgoing road.
\section{Notations and the Riemann Problem on a Single Road}
\label{sec:Des}
The Phase Transition model, introduced in~\cite{ColomboMarcelliniRascle},
is given by:
\begin{equation}
\label{eq:Modeleta}
\left\{
\begin{array}{l}
\partial_t \rho +
\partial_x \left( \rho\, v (\rho,\eta) \right) = 0
\\
\partial_t \eta +
\partial_x \left( \eta\, v (\rho, \eta) \right) = 0
\end{array}
\right.
\quad \mbox{ with } \quad
v(\rho, \eta)
=
\min \left\{ V_{\max}, \frac{\eta}{\rho}\, \psi(\rho) \right\},
\end{equation}
where $t$ denotes the time, $x$ the space, $\rho \in [0, R]$
is the traffic density, $\eta$ is a generalized
momentum, $v \in [0, V_{\max}]$ is the speed of cars, and
$V_{\max}$ is a uniform bound of the cars' speed.
It is obtained as an extension of the Lighthill-Whitham-Richards
model~\cite{LighthillWhitham,Richards}, by assuming that different
drivers have different maximal speeds, denoted by the quantity
$w=\eta / \rho \in \left[\check w, \hat w\right]$.
The model is characterized by two phases,
the free one and the congested one, which are described by the sets
\begin{align}
\label{eq:phF}
F
& =
\left\{
(\rho, \eta) \in [0,R] \times [0, \hat w R]
\colon \check w \rho \le \eta \le \hat w \rho, \,
v(\rho, \eta) = V_{\max}
\right\},
\\
\label{eq:phC}
C
& =
\left\{
(\rho, \eta) \in [0,R] \times [0, \hat w R]
\colon
\check w \rho \le \eta \le \hat w \rho, \,
v(\rho, \eta) = \frac{\eta}{\rho} \, \psi(\rho)
\right\}\,,
\end{align}
see Figure~\ref{fig:phases}.
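For concreteness, the phase decomposition can be sketched numerically. The following is a minimal illustration, assuming the prototypical choice $\psi(\rho) = 1 - \rho/R$ (which satisfies \textbf{(H-2)} below) and hypothetical constants; it is not part of the model's specification.

```python
# Minimal sketch: classify a state (rho, eta) as free or congested for the
# Phase Transition model, assuming psi(rho) = 1 - rho/R.  All constants are
# hypothetical; they only need to satisfy (H-1): V_MAX < W_CHECK < W_HAT.
R = 1.0                    # maximal density
V_MAX = 0.8                # uniform bound on the cars' speed
W_CHECK, W_HAT = 1.0, 2.0  # bounds on the maximal speed w = eta/rho

def psi(rho):
    return 1.0 - rho / R

def speed(rho, eta):
    # v(rho, eta) = min{ V_max, (eta/rho) * psi(rho) }
    return min(V_MAX, (eta / rho) * psi(rho))

def phase(rho, eta):
    """Return 'F' when v = V_max, 'C' otherwise (the two sets overlap)."""
    assert 0 < rho <= R and W_CHECK * rho <= eta <= W_HAT * rho
    return 'F' if (eta / rho) * psi(rho) >= V_MAX else 'C'

print(phase(0.1, 0.2), phase(0.8, 1.0))  # a dilute state and a dense state
```

The boundary between the two phases is the curve $\eta = V_{\max}\,\rho/\psi(\rho)$ in the $(\rho,\eta)$ plane, on which both classifications agree.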
\begin{figure}[t!]
\centering
\input{rhoeta.pdftex_t}
\hfil
\input{rhov.pdftex_t}
\caption{The free phase $F$ and the congested phase $C$ resulting
from\/ {\rm(\ref{eq:Modeleta})} in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. Note that $F$ and $C$ are closed sets and $F \cap C \neq \emptyset$. Note also that $F$ is $1$--dimensional in the $(\rho, \rho v)$ plane, while it is $2$--dimensional in the $(\rho,\eta)$ coordinates. In the $(\rho,\eta)$ plane, the curve
$\eta= \frac{V_{\max}}{\psi(\rho)}\rho $ divides the two phases.}\label{fig:phases}
\end{figure}
As in~\cite{ColomboMarcelliniRascle,GaravelloMarcellini}, we assume the following hypotheses.
\begin{description}
\item[(H-1)] $R,\check w, \hat w, V_{\max}$ are positive
constants, with $V_{\max} < \check w < \hat w$.
\item[(H-2)] $\psi \in \C{2} \left( [0,R];[0,1]\right)$ is such that
$\psi(0) = 1$, $\psi(R) = 0$, and, for every $\rho \in (0,R)$,
$\psi'(\rho) \le 0$,
$\frac{d^2\ }{d\rho^2} \left( \rho\, \psi(\rho) \right) \le 0$.
\item[(H-3)] Waves of the first family in the congested phase $C$ have negative speed.
\end{description}
By~\textbf{(H-1)}, \textbf{(H-2)}, and \textbf{(H-3)}, system~(\ref{eq:Modeleta})
is strictly hyperbolic in $C$, see~\cite{ColomboMarcelliniRascle}, and
\begin{equation*}
\begin{array}{@{}rcl@{\quad}rcl@{}}
\lambda_{1} (\rho, \eta)
\!\!&\!\! = \!\!&\!\!
\eta\, \psi'(\rho) + v(\rho, \eta),
&
\lambda_{2} (\rho, \eta)
\!\!&\!\! = \!\!&\!\!
v(\rho, \eta),
\\[5pt]
r_{1} (\rho, \eta)
\!\!&\!\! = \!\!&\!\!
\left[
\begin{array}{c}
-\rho
\\
-\eta
\end{array}
\right],
&
r_{2} (\rho, \eta)
\!\!&\!\! = \!\!&\!\!
\left[
\begin{array}{c}
1
\\
\eta\left( \frac{1}{\rho}-\frac{\psi'(\rho) }
{\psi(\rho) }\right)
\end{array}
\right],
\\
\nabla \lambda_1 \cdot r_1
\!\!&\!\! = \!\!&\!\!
\displaystyle
-\frac{d^2\ }{d\rho^2} \left[ \rho\, \psi(\rho) \right],
&
\nabla \lambda_2 \cdot r_2
\!\!&\!\! = \!\!&\!\!
0,
\\
\mathcal{L}_1(\rho;\rho_o,\eta_o)
\!\!&\!\! = \!\!&\!\!
\displaystyle
\eta_o \frac{\rho}{\rho_o},
&
\mathcal{L}_2(\rho;\rho_o,\eta_o)
\!\!&\!\! = \!\!&\!\!
\displaystyle
\frac{\rho \, v(\rho_o, \eta_o)}{\psi(\rho)},
\; \rho_o < R,
\end{array}
\end{equation*}
where $\lambda_i$ and $r_i$ are respectively the eigenvalues and
the right eigenvectors of the Jacobian matrix of the flux, and
$\mathcal L_i$ are the Lax curves.
When $\rho_o = R$, the 2-Lax curve through $(\rho_o, \eta_o)$ is given by
the segment $\rho=R$, $\eta \in [R \check w, R \hat w]$.
In view of the results of the next section, we recall the description of
the solutions of the Riemann problem for the model~(\ref{eq:Modeleta}).
First, we enumerate all the possible waves for~(\ref{eq:Modeleta}).
\begin{itemize}
\item \textsl{A Linear Wave} is a wave connecting two states in the free
phase. It always travels with speed $V_{\max}$.
\item \textsl{A Phase Transition Wave} is a wave connecting a left state
$\left(\rho_l, \eta_l\right) \in F$ with a right state
$\left(\rho_r, \eta_r\right) \in C$ satisfying
$\frac{\eta_l}{\rho_l} = \frac{\eta_r}{\rho_r}$.
It always travels with speed given by the Rankine-Hugoniot condition.
\item \textsl{A Wave of the First Family} is a wave connecting a left state
$\left(\rho_l, \eta_l\right) \in C$ with a right state
$\left(\rho_r, \eta_r\right) \in C$ such that
$\frac{\eta_l}{\rho_l} = \frac{\eta_r}{\rho_r}$. It is either a rarefaction
wave or a shock wave.
\item \textsl{A Wave of the Second Family} is a wave connecting a left state
$\left(\rho_l, \eta_l\right) \in C$ with a right state
$\left(\rho_r, \eta_r\right) \in C$ such that
$v\left(\rho_l, \eta_l\right) = v\left(\rho_r, \eta_r\right)$.
It always travels with speed $v\left(\rho_l, \eta_l\right)$.
\end{itemize}
\subsection{The Riemann Problem along a Single Road}
Under the assumptions \textbf{(H-1)}, \textbf{(H-2)} and \textbf{(H-3)},
for all states $(\rho_l,\eta_l)$ and $(\rho_r, \eta_r) \in F \cup C$,
the Riemann problem consisting of~(\ref{eq:Modeleta}) with initial data
\begin{equation}
\label{eq:RD}
\rho(0,x) = \left\{
\begin{array}{l@{\quad\mbox{ if }\,}rcl}
\rho_l & x & < & 0
\\
\rho_r & x & > & 0
\end{array}
\right.
\qquad
\eta(0,x) = \left\{
\begin{array}{l@{\quad\mbox{ if }\, }rcl}
\eta_l & x & < & 0
\\
\eta_r & x & > & 0
\end{array}
\right.
\end{equation}
admits a unique self-similar weak solution $(\rho,\eta) =
(\rho,\eta)(t,x)$, constructed as follows:
\begin{enumerate}[(1)]
\item If $(\rho_l,\eta_l), (\rho_r,\eta_r) \in F$, then the solution attains values in $F$ and consists of a linear wave separating $(\rho_l,\eta_l)$ from $(\rho_r,\eta_r)$.
\item If $(\rho_l,\eta_l), (\rho_r,\eta_r) \in C$, then the solution attains values in $C$ and consists of a wave of the first family (shock or rarefaction) between $(\rho_l, \eta_l)$ and a middle state $(\rho_m, \eta_m)$, followed by a wave of the second family between $(\rho_m, \eta_m)$ and $(\rho_r, \eta_r)$. The middle state $(\rho_m, \eta_m)$ belongs to $C$ and is uniquely characterized by the two conditions $\frac{\eta_m}{\rho_m} = \frac{\eta_l}{\rho_l}$ and
$v(\rho_m, \eta_m) = v(\rho_r, \eta_r)$.
\item If $(\rho_l,\eta_l) \in C$ and $(\rho_r,\eta_r) \in F$, then the solution attains values in $F \cup C$ and consists of a wave of the first family separating $(\rho_l, \eta_l)$ from a middle state $(\rho_m, \eta_m)$, followed by a linear wave separating $(\rho_m, \eta_m)$ from $(\rho_r,\eta_r)$. The middle state $(\rho_m, \eta_m)$ belongs to the intersection between $F$ and $C$ and is uniquely characterized by the two conditions $\frac{\eta_m}{\rho_m} = \frac{\eta_r}{\rho_r}$ and $v(\rho_m, \eta_m) = V_{\max}$.
\item If $(\rho_l,\eta_l) \in F$ and $(\rho_r,\eta_r) \in C$, then the solution attains values in $F \cup C$ and consists of a phase transition wave between $(\rho_l, \eta_l)$ and a middle state $(\rho_m, \eta_m)$, followed by a wave of the second family between $(\rho_m, \eta_m)$ and $(\rho_r, \eta_r)$. The middle state $(\rho_m, \eta_m)$ is in $C$ and is uniquely characterized by the two conditions $\frac{\eta_m}{\rho_m} = \frac{\eta_l}{\rho_l}$ and $v(\rho_m, \eta_m) = v(\rho_r, \eta_r)$.
\end{enumerate}
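In cases (2) and (4), the middle state can be computed explicitly once $\psi$ is fixed. The following is a minimal sketch, assuming $\psi(\rho) = 1 - \rho/R$ and hypothetical constants: the maximal speed $w_l = \eta_l/\rho_l$ is inherited from the left state, the speed from the right one, so $w_l\,\psi(\rho_m) = v(\rho_r,\eta_r)$ is solved for $\rho_m$.

```python
# Sketch: middle state of cases (2) and (4), assuming psi(rho) = 1 - rho/R,
# so that w_l * psi(rho_m) = v_r can be solved in closed form.
# R and V_MAX are hypothetical constants.
R, V_MAX = 1.0, 0.8

def middle_state(rho_l, eta_l, rho_r, eta_r):
    w_l = eta_l / rho_l                              # left maximal speed
    psi = lambda rho: 1.0 - rho / R
    v_r = min(V_MAX, (eta_r / rho_r) * psi(rho_r))   # speed of the right state
    rho_m = R * (1.0 - v_r / w_l)     # solves w_l * psi(rho_m) = v_r
    return rho_m, w_l * rho_m         # eta_m chosen so eta_m/rho_m = w_l

rho_m, eta_m = middle_state(0.5, 1.0, 0.8, 1.0)   # both data congested
```

By construction $\eta_m/\rho_m = \eta_l/\rho_l$ and $v(\rho_m,\eta_m) = v(\rho_r,\eta_r)$, which are exactly the two conditions characterizing the middle state.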
\section{The Limit at a Junction with Traffic Lights}
\label{sec:Main}
Fix a junction with $n$ incoming roads and a single outgoing road.
In~\cite{GaravelloMarcellini}, a Riemann solver at the junction
is introduced; it conserves mass,
\begin{equation}
\label{eq:conservation1}
\sum_{i = 1}^n \rho_i v(\rho_i, \eta_i) =
\rho_{n+1} v\left(\rho_{n+1}, \eta_{n+1}\right)\,,
\end{equation}
and prescribes that the maximal speed in the outgoing road is given by
\begin{equation}
\label{eq:conservation2}
w_{n+1} = \frac{\displaystyle\sum_{i=1}^n \sigma_i \rho_i
v(\rho_i, \eta_i) \,w_i}
{\displaystyle\sum_{i=1}^n \sigma_i \rho_i v(\rho_i, \eta_i) },
\end{equation}
for suitable coefficients $\sigma_1 > 0, \ldots, \sigma_n > 0$, satisfying
$\sigma_1 + \cdots + \sigma_n = 1$.
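Rule~(\ref{eq:conservation2}) is a flux-weighted convex combination, so $w_{n+1}$ always lies between the smallest and the largest incoming maximal speed; note also that the rule is invariant under a rescaling of the $\sigma_i$. A minimal numerical sketch with hypothetical data:

```python
# Sketch of rule (eq:conservation2): the outgoing maximal speed w_{n+1} is
# the (sigma_i * flux_i)-weighted average of the incoming maximal speeds w_i.
# All numbers below are hypothetical.
def outgoing_w(sigma, flux, w):
    num = sum(s * f * wi for s, f, wi in zip(sigma, flux, w))
    den = sum(s * f for s, f in zip(sigma, flux))
    return num / den

sigma = [0.5, 0.5, 1.0]      # green-time ratios (here sigma_n = 1)
flux  = [0.20, 0.10, 0.15]   # incoming fluxes rho_i * v(rho_i, eta_i)
w     = [1.2, 1.5, 2.0]      # incoming maximal speeds
w_out = outgoing_w(sigma, flux, w)
assert min(w) <= w_out <= max(w)   # convex combination of the w_i
```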
In this section we provide a justification for the
rule~(\ref{eq:conservation2}).
To this aim, fix a positive time $T > 0$ and assume that the junction is
regulated by a traffic light, which periodically alternates the right of
way among the incoming roads. More precisely, assume that the traffic light
repeats $\ell \in {\mathbb{N}} \setminus \left\{0\right\}$ cycles in the time
interval $[0,T]$ and that each cycle of length $\frac{T}{\ell}$ is divided
into $n$ subintervals of lengths $\tau_1^\ell, \ldots, \tau_n^\ell$,
which represent, respectively,
the durations of the green light for the corresponding
incoming roads. The first cycle $\left[0, \frac{T}{\ell}\right[$
is thus decomposed as
\begin{equation*}
\left[0, \frac{T}{\ell}\right[ =
\left[0, \tau_1^\ell\right) \bigcup
\left[\tau_1^\ell, \tau_1^\ell + \tau_2^\ell\right)
\bigcup \cdots \bigcup
\left[\tau_1^\ell + \cdots + \tau_{n-1}^\ell, \tau_1^\ell + \cdots
+ \tau_n^\ell\right),
\end{equation*}
where $\left[0, \tau_1^\ell\right)$ is the green-light interval for
road $I_1$, and so on.
Denote, for every $i \in \left\{1, \ldots, n-1\right\}$,
\begin{equation}
\label{eq:sigma}
\sigma_i = \frac{\tau_i^\ell}{\tau_n^\ell},
\end{equation}
which we assume does not depend on $\ell$. Thus the constant
$\sigma_i$ is the ratio between the green-light durations
for roads $I_i$ and $I_n$. For simplicity we set $\sigma_n = 1$.
\subsection{Basic Situations}
\label{subsec:I}
In this subsection we treat only the special junction with $n = 2$ incoming
roads (namely $I_1$ and $I_2$) and a single outgoing road $I_3$.
We assume that
assumptions~\textbf{\textup{(H-1)}}, \textbf{\textup{(H-2)}},
and~\textbf{\textup{(H-3)}} hold. As introduced in~(\ref{eq:sigma}), the
positive constants
\begin{equation*}
\sigma_1 = \frac{\tau_1^\ell}{\tau_2^\ell},
\qquad \sigma_2 = 1
\end{equation*}
do not depend on the number of cycles $\ell$.
This means that the ratio between the green and red times is constant
in each incoming road.
Given, for every $i \in \left\{1, 2, 3\right\}$, initial conditions
$(\bar \rho_{i}, \bar \eta_{i}) \in F \cup C$, we denote
by $\left(\rho_{\ell,i} (t,x), \eta_{\ell,i} (t,x)\right)$
($i \in \left\{1, 2, 3\right\}$)
the solution to the Riemann problem when the junction is governed by
the traffic light with $\ell$ cycles.
Introduce the following notation. We denote by
$\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and
$\left(\rho_2^\sharp, \eta_2^\sharp\right)$
the points in the congested region $C$ satisfying
\begin{equation*}
\frac{\eta_1^\sharp}{\rho_1^\sharp} = \bar w_1,
\qquad
\frac{\eta_2^\sharp}{\rho_2^\sharp} = \bar w_2,
\qquad
v\left(\rho_1^\sharp, \eta_1^\sharp\right) =
v\left(\rho_2^\sharp, \eta_2^\sharp\right) =
v\left(\bar \rho_3, \bar \eta_3\right).
\end{equation*}
Moreover, we denote by
$\left(\rho_1^\flat, \eta_1^\flat\right)$ and
$\left(\rho_2^\flat, \eta_2^\flat\right)$
the points in the intersection $F \cap C$ of the free and congested regions satisfying
\begin{equation*}
\frac{\eta_1^\flat}{\rho_1^\flat} = \bar w_1,
\qquad
\frac{\eta_2^\flat}{\rho_2^\flat} = \bar w_2,
\qquad
v\left(\rho_1^\flat, \eta_1^\flat\right) =
v\left(\rho_2^\flat, \eta_2^\flat\right) = V_{\max} .
\end{equation*}
Note that the points $\left(\rho_i^\sharp, \eta_i^\sharp\right)$ and
$\left(\rho_i^\flat, \eta_i^\flat\right)$, $i = 1,2$,
are uniquely defined; see Figure~\ref{fig:ex}.
\begin{figure}[t!]
\centering
\input{rhoeta_1.pdftex_t}
\hfil
\input{rhoeta_2.pdftex_t}
\caption{Left: an example of the points $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and $\left(\rho_2^\sharp, \eta_2^\sharp\right)$; right: an example of the points $\left(\rho_1^\flat, \eta_1^\flat\right)$ and
$\left(\rho_2^\flat, \eta_2^\flat\right)$.}
\label{fig:ex}
\end{figure}
The next lemmas describe the solution at the junction regulated by the
traffic light in the various situations that may occur.
\begin{lemma}
\label{Lemma1}
Assume that the initial conditions belong to the congested phase, i.e.,
for every $i \in \left\{1, 2, 3\right\}$,
$(\bar \rho_{i}, \bar \eta_{i}) \in C$.
Then the solution in the outgoing road $I_3$ is
\begin{equation}
\label{eq:sol_I_3_case1}
\left(\rho_{\ell, 3}, \eta_{\ell, 3}\right) (t,x) =
\left\{
\begin{array}{l@{\quad}l}
\left(\bar \rho_3, \bar \eta_3\right),
&
\textrm{ if }\, 0 < t < T, \quad
x > v\left(\bar \rho_3, \bar \eta_3\right) t,
\\
\left(\rho_{1}^\sharp, \eta_1^\sharp\right),
&
\textrm{ if }\, (t, x) \in A_1^\ell,
\\
\left(\rho_{2}^\sharp, \eta_2^\sharp\right),
&
\textrm{ if }\, (t, x) \in A_2^\ell,
\end{array}
\right.
\end{equation}
where
\begin{equation}
\label{eq:A_1}
A_1^\ell = \bigcup_{i=0}^{\ell - 1} \left\{(t, x):\,
\begin{array}{c}
0 < t < T, \quad x > 0
\\
t - \tau_1^\ell - i \frac{T}{\ell}
< \frac{x}{v\left(\bar \rho_3, \bar \eta_3\right)}
< t - i \frac{T}{\ell}
\end{array}
\right\}
\end{equation}
and
\begin{equation}
\label{eq:A_2}
A_2^\ell = \bigcup_{i=1}^{\ell} \left\{(t, x):\,
\begin{array}{c}
0 < t < T, \quad x > 0
\\
t - i \frac{T}{\ell}
< \frac{x}{v\left(\bar \rho_3, \bar \eta_3\right)}
< t - i \frac{T}{\ell} + \tau_2^\ell
\end{array}
\right\}.
\end{equation}
\end{lemma}
\begin{proof}
In the time interval $[0,\tau_{1}^\ell[$ the traffic light is green
for the first incoming road; this permits us to study the Riemann problem
as a classical one on the single road given
by the union of $I_1$ and $I_3$:
the classic Riemann problem between $(\bar \rho_{1}, \bar \eta_{1})$
and $\left(\bar \rho_3, \bar \eta_3\right)$
produces a first family wave between $(\bar \rho_{1}, \bar \eta_{1})$
and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and a second family wave between $\left(\rho_1^\sharp, \eta_1^\sharp\right)$
and $\left(\bar \rho_3, \bar \eta_3\right)$, see Figure~\ref{fig:tau1}.
In $I_2$, instead, the flow at the junction is equal to zero and
so the trace of the solution is $\left(R,R \bar w_2\right)$.
The solution in the road $I_2$ is given by
a shock wave of the first family connecting
$\left(\bar \rho_2, \bar \eta_2\right)$ to $\left(R,R \bar w_2\right)$;
see Figure~\ref{fig:tau11}.
\begin{figure}[t!]
\centering
\input{rhoeta_tau1.pdftex_t}
\hfil
\input{rhoeta_tau11.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma1} in the time interval $[0,\tau_{1}^\ell[$ for the first incoming road and the
outgoing road in the coordinates, from left to right, $(\rho,\eta)$ and
$(x, t)$.}\label{fig:tau1}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau111.pdftex_t} \hfil
\input{rhoeta_tau1111.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma1} in the time interval $[0,\tau_{1}^\ell[$ for the second incoming road in
the coordinates, from left to right, $(\rho,\eta)$ and
$(x, t)$.}\label{fig:tau11}
\end{figure}
At time $t = \tau_1^\ell$, the traffic light becomes red for road $I_1$
and green for $I_2$. This situation persists in the whole
time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$; this permits
us to study the Riemann problem
as a classical one on the single road given
by the union of $I_2$ and $I_3$; see Figure~\ref{fig:tau2}.
We have to solve the Riemann problem between
$\left(R,R \bar w_2\right)$ and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$.
The solution is given by a rarefaction wave of the first family
between $(R, R \bar w_2)$ and $\left(\rho_2^\sharp, \eta_2^\sharp\right)$,
followed by a second family wave between $\left(\rho_2^\sharp, \eta_2^\sharp\right)$ and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$, see Figure~\ref{fig:tau2}.
In $I_1$, instead, the flow at the junction is equal to zero and
so the trace of the solution is $\left(R,R \bar w_1\right)$.
More precisely, a shock wave of the first family starts from the point
$\left(\tau_1^\ell, 0\right)$, connecting the states
$\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and $\left(R,R \bar w_1\right)$;
see Figure~\ref{fig:tau22}.
\begin{figure}[t!]
\centering \input{rhoeta_tau2.pdftex_t} \hfil
\input{rhoeta_tau22.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma1} in the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ for the second
incoming road and the outgoing road in the coordinates, from
left to right, $(\rho,\eta)$ and $(x, t)$.}\label{fig:tau2}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau222.pdftex_t} \hfil
\input{rhoeta_tau2222.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma1} in the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ for the first
incoming road in the coordinates, from left to right,
$(\rho,\eta)$ and $(x, t)$.}\label{fig:tau22}
\end{figure}
Similarly, in the time interval
$[\tau_{1}^\ell + \tau_{2}^\ell, 2\tau_{1}^\ell + \tau_{2}^\ell[$,
the traffic light is green for road $I_1$ and red for $I_2$;
so we need to consider a Riemann
problem between
$(R,R \bar w_1)$ and $\left(\rho_2^\sharp, \eta_2^\sharp\right)$,
see Figure~\ref{fig:tau3}.
The solution consists of a rarefaction wave of the first family
between $(R,R \bar w_1)$ and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$,
followed by a second family wave between
$\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and $\left(\rho_2^\sharp, \eta_2^\sharp\right)$.
The situation in $I_2$ is analogous to that represented
in Figure~\ref{fig:tau11}. More precisely, at the point
$\left(\tau_{1}^\ell + \tau_{2}^\ell, 0\right)$ a shock wave with negative
speed is generated, connecting $\left(\rho_2^\sharp, \eta_2^\sharp\right)$
with $\left(R,R \bar w_2\right)$.
\begin{figure}[t!]
\centering \input{rhoeta_tau3.pdftex_t} \hfil
\input{rhoeta_tau33.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma1} in the time interval $[\tau_{1}^\ell + \tau_{2}^\ell, 2\tau_{1}^\ell + \tau_{2}^\ell[$ for the first incoming road and the outgoing road in the coordinates,
from left to right, $(\rho,\eta)$ and $(x, t)$.}\label{fig:tau3}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau_star.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma1} in the time interval $[0,2\tau_{1}^\ell+2\tau_{2}^\ell[$ for the outgoing road
in the coordinates $(x, t)$. The two states $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and $\left(\rho_2^\sharp, \eta_2^\sharp\right)$, separated by second family waves, alternate periodically.}\label{fig:tau*}
\end{figure}
We proceed in the same way until we arrive at time $t = T$.
In this way we deduce that
the solution in $I_3$ is given by~(\ref{eq:sol_I_3_case1});
see Figure~\ref{fig:tau*}.
\end{proof}
\begin{lemma}
\label{Lemma2}
Assume that the initial conditions satisfy
\begin{equation*}
\left(\bar \rho_{1}, \bar \eta_1\right) \in F, \quad
\left(\bar \rho_{2}, \bar \eta_2\right) \in C, \quad
\left(\bar \rho_{3}, \bar \eta_3\right) \in C.
\end{equation*}
Then either the solution in the outgoing road $I_3$ is given
by~(\ref{eq:sol_I_3_case1}) or has a structure similar
to~(\ref{eq:sol_I_3_case1}),
except for a set whose Lebesgue measure is bounded by a constant times
$\frac{1}{\ell^2}$.
\end{lemma}
\begin{proof}
We proceed in the same way as in the previous Lemma~\ref{Lemma1}.
In the time interval $[0,\tau_{1}^\ell[$ the traffic light is green
for the first incoming road and we study the Riemann problem
as a classical one on the single road given
by the union of $I_1$ and $I_3$. We have two possible cases: the Riemann problem between $(\bar \rho_{1}, \bar \eta_{1})$ and $\left(\bar \rho_3, \bar \eta_3\right)$
produces a shock wave with negative speed between $(\bar \rho_{1}, \bar \eta_{1})$
and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and a second family wave between $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and $\left(\bar \rho_3, \bar \eta_3\right)$, or produces a shock wave with positive speed between $(\bar \rho_{1}, \bar \eta_{1})$ and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and a second family wave between $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and $\left(\bar \rho_3, \bar \eta_3\right)$, see Figure~\ref{fig:rhov}.
In $I_2$ the situation is the same as that of the previous Lemma~\ref{Lemma1}: the flow at the junction is equal to zero and the solution is given by a shock wave of the first family connecting $\left(\bar \rho_2, \bar \eta_2\right)$ to $\left(R,R \bar w_2\right)$, see Figure~\ref{fig:tau11}.
\begin{figure}[t!]
\centering
\input{rhov_RPRP.pdftex_t}
\hfil
\input{rhoeta_tau11XT.pdftex_t}
\hfil
\input{rhov_RP.pdftex_t}
\hfil
\input{rhoeta_tau11XTXT.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma2} in the time interval $[0,\tau_{1}^\ell[$ for the first incoming road and the
outgoing road in the coordinates, from left to right, $(\rho,\rho v)$ and
$(x, t)$. Above, the first case, a shock with negative speed; below, the second case, a shock with positive speed.}\label{fig:rhov}
\end{figure}
\begin{itemize}
\item \textit{First case: a shock with negative speed.}
In this case, from time $\tau_{1}^\ell$ to time $t = T$, the solution coincides with that described in Lemma~\ref{Lemma1} and is given by~(\ref{eq:sol_I_3_case1}); see Figure~\ref{fig:tau*}.
\item \textit{Second case: a shock with positive speed.} In the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ the traffic light is red for the road $I_1$
and green for $I_2$, and we study the Riemann problem
as a classical one on the single road given
by the union of $I_2$ and $I_3$. We have to solve the Riemann problem between
$\left(R,R \bar w_2\right)$ and $(\bar \rho_{1}, \bar \eta_{1})$.
The solution is given by a rarefaction curve of the first family
between $(R, R \bar w_2)$ and
$\left(\rho_2^\flat, \eta_2^\flat\right)\in F \cap C$
followed by a linear wave between
$\left(\rho_2^\flat, \eta_2^\flat \right)$ and
$(\bar \rho_{1}, \bar \eta_{1})$. At time $t=\bar t_1$,
the linear wave generated at the time $t=\tau_{1}^\ell$ between
$\left(\rho_2^\flat, \eta_2^\flat\right)$
and $(\bar \rho_{1}, \bar \eta_{1})$ interacts with the shock with positive speed generated at time $t=0$ between $(\bar \rho_{1}, \bar \eta_{1})$ and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$; see Figure~\ref{fig:part}. The intersection point is determined by solving the system:
\begin{equation}
\label{eq:Iintersection}
\left\{
\begin{array}{l}
x(t)= v_s t
\\
x(t)=V_{\max}(t-\tau_{1}^\ell)\,,
\end{array}
\right.
\end{equation}
where $v_s=\frac{\bar \rho_{1} V_{\max}- \rho_1^\sharp v(\rho_1^\sharp,\eta_1^\sharp) }{\bar \rho_{1} - \rho_1^\sharp}$. We denote the intersection point by
\begin{equation}
\label{eq:PI}
\left(\bar t_1,\bar x_1\right)
= \left( \frac{V_{\max} \tau_{1}^\ell}{V_{\max}- v_s},
v_s \frac{V_{\max} \tau_{1}^\ell}{V_{\max}- v_s} \right) \,.
\end{equation}
At this point we have to solve a Riemann problem
between $\left(\rho_2^\flat, \eta_2^\flat\right)$
and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ that generates a first family
wave between $\left(\rho_2^\flat, \eta_2^\flat\right)$
and $\left(\rho_2^\sharp, \eta_2^\sharp\right)\in C$
and a second family wave between $\left(\rho_2^\sharp, \eta_2^\sharp\right)$
and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$, see Figure~\ref{fig:part}.
\begin{figure}[t!]
\centering
\input{part.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma2} in the time interval $[0,\bar t_2[$ for the outgoing road in the coordinates $(x, t)$: the interaction between the shock with positive speed generated at time $t=0$ between $(\bar \rho_{1}, \bar \eta_{1})$ and $\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and the linear wave generated at the time $t=\tau_{1}^\ell$ between $\left(\rho_2^\flat, \eta_2^\flat\right)$ and $(\bar \rho_{1}, \bar \eta_{1})$.}\label{fig:part}
\end{figure}
The first family wave between
$\left(\rho_2^\flat, \eta_2^\flat\right)$ and
$\left(\rho_2^\sharp, \eta_2^\sharp\right)$ could interact
again at a time $t=t_2$ with the linear wave generated at time
$t=\tau_{2}^\ell$ when the traffic light is red for the road $I_2$
and green for $I_1$, producing again a first family wave and a second
family wave.
A first family wave with negative speed could be produced at each interaction up to a time $t=t_*$, when it is absorbed; from then on the solution coincides with that described in Lemma~\ref{Lemma1} and is given by~(\ref{eq:sol_I_3_case1}), except for a set whose Lebesgue measure is bounded by a constant times $\frac{1}{\ell^2}$.
Indeed, suppose for example that the first family wave
is absorbed at time $t_*=\bar t_2$. Denoting by $\mathcal L$ the Lebesgue measure of a set, we estimate the area of the triangle $A^\ell$ generated up to time $t=\bar t_2$, see Figure~\ref{fig:part}. Setting $\bar t_1=\frac{K_{1}}{\ell}$ and $\bar t_2= K_{2}\bar x_1$, we have:
\begin{equation}
\label{eq:AT}
\mathcal L (A^\ell)=\frac{1}{2}v_s\frac{K_{1}}{\ell}\left(\frac{K_{1}}{\ell}+ K_{2}v_s\frac{K_{1}}{\ell}\right) =\frac{K_{3}}{\ell^{2}} \,,
\end{equation}
for $K_{1},K_{2},K_{3}$ positive constants.
\end{itemize}
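The $\ell^{-2}$ scaling in~(\ref{eq:AT}) can be illustrated numerically: with $\bar t_1 = K_1/\ell$ and $\bar t_2 = K_2 \bar x_1$, the area of the triangle $A^\ell$ decays like $\ell^{-2}$, so $\ell^2\,\mathcal L(A^\ell)$ does not depend on the number of cycles. The constants $K_1$, $K_2$, $v_s$ below are illustrative, not values derived from the model.

```python
# Scaling check for the triangle area in (eq:AT).  With bar t_1 = K1 / l
# and bar x_1 = v_s * bar t_1, the area
#   L(A^l) = 0.5 * v_s * (K1/l) * (K1/l + K2 * v_s * K1/l)
# behaves like 1 / l^2.  K1, K2, v_s are illustrative constants.
K1, K2, v_s = 2.0, 3.0, 0.4

def area(l):
    t1 = K1 / l                       # bar t_1 = K1 / l
    return 0.5 * v_s * t1 * (t1 + K2 * v_s * t1)

# l^2 * L(A^l) equals the same constant K3 for every number of cycles l:
vals = [l**2 * area(l) for l in (10, 100, 1000)]
assert max(vals) - min(vals) < 1e-9
```

The constant recovered here plays the role of $K_3$ in~(\ref{eq:AT}).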
\end{proof}
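The intersection point~(\ref{eq:PI}) between the shock $x = v_s t$ and the linear wave $x = V_{\max}\left(t - \tau_1^\ell\right)$ of system~(\ref{eq:Iintersection}) can also be checked directly. The values of $V_{\max}$, $v_s$ and $\tau_1^\ell$ below are illustrative, not taken from the model.

```python
# Solve the system x = v_s t, x = V_max (t - tau1) of (eq:Iintersection)
# and compare with the closed form (eq:PI).  Illustrative values:
V_max, v_s, tau1 = 1.0, 0.4, 0.1

t_bar = V_max * tau1 / (V_max - v_s)   # bar t_1 in (eq:PI)
x_bar = v_s * t_bar                    # bar x_1 in (eq:PI)

# The point (t_bar, x_bar) lies on both characteristics:
assert abs(x_bar - v_s * t_bar) < 1e-12
assert abs(x_bar - V_max * (t_bar - tau1)) < 1e-12
```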
\begin{lemma}
\label{Lemma3}
Assume that the initial conditions satisfy
\begin{equation*}
\left(\bar \rho_{1}, \bar \eta_1\right) \in F, \quad
\left(\bar \rho_{2}, \bar \eta_2\right) \in C, \quad
\left(\bar \rho_{3}, \bar \eta_3\right) \in F.
\end{equation*}
Then the solution in the outgoing road $I_3$ is
\begin{equation}
\label{eq:I_3_Lemma3}
\left(\rho_{\ell, 3}, \eta_{\ell, 3}\right) (t,x) =
\left\{
\begin{array}{l@{\quad}l}
\left(\bar \rho_3, \bar \eta_3\right),
&
\textrm{ if }\, 0 < t < T, \quad x > V_{\max} t,
\\
\left(\bar \rho_1, \bar \eta_1\right),
&
\textrm{ if }
\left\{
\begin{array}{l}
0 < t < T,\, x < V_{\max} t,
\\
x > \max \left\{0, V_{\max} \left(t - \tau_1^\ell\right)\right\},
\end{array}
\right. \\
\left(\rho_{2}^\flat, \eta_2^\flat\right),
&
\textrm{ if }\, (t, x) \in A_1^\ell,
\\
\left(\rho_{1}^\flat, \eta_1^\flat\right),
&
\textrm{ if }\, (t, x) \in A_2^\ell,
\end{array}
\right.
\end{equation}
where
\begin{equation}
\label{eq:A_1}
A_1^\ell = \bigcup_{i=0}^{\ell - 1} \left\{(t, x):\,
\begin{array}{c}
0 < t < T, \quad x > 0
\\
t - \tau_1^\ell - i \frac{T}{\ell}
< \frac{x}{V_{max}}
< t - i \frac{T}{\ell}
\end{array}
\right\}
\end{equation}
and
\begin{equation}
\label{eq:A_2}
A_2^\ell = \bigcup_{i=1}^{\ell} \left\{(t, x):\,
\begin{array}{c}
0 < t < T, \quad x > 0
\\
t - i \frac{T}{\ell}
< \frac{x}{V_{max}}
< t - i \frac{T}{\ell} + \tau_1^\ell
\end{array}
\right\};
\end{equation}
\end{lemma}
\begin{proof}
We proceed in the same way as in the previous lemmas.
In the time interval $[0,\tau_{1}^\ell[$ the traffic light is green
for the first incoming road and we study the Riemann problem
as a classical one on the single road given
by the union of $I_1$ and $I_3$:
the Riemann problem between $(\bar \rho_{1}, \bar \eta_{1})$
and $\left(\bar \rho_3, \bar \eta_3\right)$
produces a linear wave between $(\bar \rho_{1}, \bar \eta_{1})$
and $\left(\bar \rho_3, \bar \eta_3\right)$, see
Figure~\ref{fig:tau1_Lemma3}.
In $I_2$, the flow at the junction is equal to zero and the solution in
the road $I_2$ is given by a shock wave of the first family connecting
$\left(\bar \rho_2, \bar \eta_2\right)$ to $\left(R,R \bar w_2\right)$,
see Figure~\ref{fig:tau11_Lemma3}.
\begin{figure}[t!]
\centering
\input{rhoeta_tau1_Lemma3.pdftex_t}
\hfil
\input{rhoeta_tau11_Lemma3.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma3} in the time interval $[0,\tau_{1}^\ell[$ for the first incoming road and the
outgoing road in the coordinates, from left to right, $(\rho,\eta)$ and
$(x, t)$.}\label{fig:tau1_Lemma3}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau111_Lemma3.pdftex_t} \hfil
\input{rhoeta_tau1111.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma3} in the time interval $[0,\tau_{1}^\ell[$ for the second incoming road in
the coordinates, from left to right, $(\rho,\eta)$ and
$(x, t)$.}\label{fig:tau11_Lemma3}
\end{figure}
In the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$, the traffic light becomes red for the road $I_1$
and green for $I_2$, and the solution of the Riemann problem on the single road given
by the union of $I_2$ and $I_3$ between
$\left(R,R \bar w_2\right)$ and $(\bar \rho_{1}, \bar \eta_{1})$ is given by a rarefaction curve of the first family
between $(R, R \bar w_2)$ and $\left(\rho_2^\flat, \eta_2^\flat\right)$
followed by a linear wave between $\left(\rho_2^\flat, \eta_2^\flat\right)$ and $(\bar \rho_{1}, \bar \eta_{1})$, see Figure~\ref{fig:tau2_Lemma3}.
In $I_1$, instead, the flow at the junction is equal to zero and
so the trace of the solution is $\left(R,R \bar w_1\right)$,
see Figure~\ref{fig:tau22_Lemma3}. More precisely, a shock wave with negative speed
starts from the point $\left(\tau_1^\ell, 0\right)$ connecting the states
$(\bar \rho_{1}, \bar \eta_{1})$ and $\left(R,R \bar w_1\right)$.
\begin{figure}[t!]
\centering \input{rhoeta_tau2_Lemma3.pdftex_t} \hfil
\input{rhoeta_tau22_Lemma3.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma3} in the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ for the second
incoming road and the outgoing road in the coordinates, from
left to right, $(\rho,\eta)$ and $(x, t)$.}\label{fig:tau2_Lemma3}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau222_Lemma3.pdftex_t} \hfil
\input{rhoeta_tau2222_Lemma3.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma3} in the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ for the first
incoming road in the coordinates, from left to right,
$(\rho,\eta)$ and $(x, t)$.}\label{fig:tau22_Lemma3}
\end{figure}
Similarly, in the time interval
$[\tau_{1}^\ell + \tau_{2}^\ell, 2\tau_{1}^\ell + \tau_{2}^\ell[$,
the traffic light is green for road $I_1$ and red for $I_2$;
so we need to consider a Riemann
problem between
$(R,R \bar w_1)$ and $\left(\rho_2^\flat, \eta_2^\flat\right)$,
see Figure~\ref{fig:tau3_Lemma3}.
The solution consists of a rarefaction curve of the first family
between $(R,R \bar w_1)$ and $\left(\rho_1^\flat, \eta_1^\flat\right)$
followed by a linear wave between
$\left(\rho_1^\flat, \eta_1^\flat\right)$ and $\left(\rho_2^\flat, \eta_2^\flat\right)$.
The situation of $I_2$ is analogous to that represented
in Figure~\ref{fig:tau11_Lemma3}. More precisely, at the point
$\left(\tau_{1}^\ell + \tau_{2}^\ell, 0\right)$ a shock wave with negative
speed is generated and it connects $\left(\rho_2^\flat, \eta_2^\flat\right)$
with $\left(R,R \bar w_2\right)$.
\begin{figure}[t!]
\centering \input{rhoeta_tau3_Lemma3.pdftex_t} \hfil
\input{rhoeta_tau33_Lemma3.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma3} in the time interval $[\tau_{1}^\ell + \tau_{2}^\ell, 2\tau_{1}^\ell + \tau_{2}^\ell[$ for the
first incoming road and the outgoing road in the coordinates,
from left to right, $(\rho,\eta)$ and $(x, t)$.}\label{fig:tau3_Lemma3}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau_star_Lemma3.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma3} in the time interval $[0,2\tau_{1}^\ell+2\tau_{2}^\ell[$ for the outgoing road
in the coordinates $(x, t)$. The two states $\left(\rho_1^\flat, \eta_1^\flat\right)$ and $\left(\rho_2^\flat, \eta_2^\flat\right)$, separated by linear waves, alternate periodically.}\label{fig:tau*_Lemma3}
\end{figure}
We proceed in the same way until we arrive at time $t = T$.
The solution in $I_3$ is given by~(\ref{eq:I_3_Lemma3});
see Figure~\ref{fig:tau*_Lemma3}.
\end{proof}
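The regions $A_1^\ell$ and $A_2^\ell$ in~(\ref{eq:A_1}) and~(\ref{eq:A_2}) are unions of parallel strips bounded by lines of slope $V_{\max}$ in the $(t,x)$-plane. As a sketch, they can be encoded as simple membership tests; the values of $T$, $V_{\max}$, $\tau_1^\ell$ and $\ell$ below are illustrative.

```python
# Membership tests for the strips A_1^l and A_2^l of Lemma 3.
# Illustrative parameters: T = 1, V_max = 1, l = 5 cycles,
# tau1 = 0.3 * T / l (green fraction of each cycle for I_1).
T, V_max, l = 1.0, 1.0, 5
tau1 = 0.3 * T / l

def in_A1(t, x):
    return 0 < t < T and x > 0 and any(
        t - tau1 - i * T / l < x / V_max < t - i * T / l
        for i in range(l))

def in_A2(t, x):
    return 0 < t < T and x > 0 and any(
        t - i * T / l < x / V_max < t - i * T / l + tau1
        for i in range(1, l + 1))

# A point in the i = 0 strip of A_1^l, not in A_2^l:
assert in_A1(0.05, 0.01) and not in_A2(0.05, 0.01)
# A point in the i = 1 strip of A_2^l, not in A_1^l:
assert in_A2(0.21, 0.02) and not in_A1(0.21, 0.02)
```

The two families of strips alternate along the characteristics of slope $V_{\max}$, matching the periodic structure of~(\ref{eq:I_3_Lemma3}).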
\begin{lemma}
\label{Lemma4}
Assume that the initial conditions satisfy
\begin{equation*}
\left(\bar \rho_{1}, \bar \eta_1\right) \in C, \quad
\left(\bar \rho_{2}, \bar \eta_2\right) \in C, \quad
\left(\bar \rho_{3}, \bar \eta_3\right) \in F.
\end{equation*}
Then the solution in the outgoing road $I_3$ is
\begin{equation}
\label{eq:I_3_Lemma4}
\left(\rho_{\ell, 3}, \eta_{\ell, 3}\right) (t,x) =
\left\{
\begin{array}{l@{\quad}l}
\left(\bar \rho_3, \bar \eta_3\right),
&
\textrm{ if }\, 0 < t < T, \quad x > V_{\max} t,
\\
\left(\rho_{1}^\flat, \eta_1^\flat\right),
&
\textrm{ if }\, (t, x) \in A_1^\ell,
\\
\left(\rho_{2}^\flat, \eta_2^\flat\right),
&
\textrm{ if }\, (t, x) \in A_2^\ell,
\end{array}
\right.
\end{equation}
where
\begin{equation}
\label{eq:A_1_Lemma4}
A_1^\ell = \bigcup_{i=0}^{\ell - 1} \left\{(t, x):\,
\begin{array}{c}
0 < t < T, \quad x > 0
\\
t - \tau_1^\ell - i \frac{T}{\ell}
< \frac{x}{V_{max}}
< t - i \frac{T}{\ell}
\end{array}
\right\}
\end{equation}
and
\begin{equation}
\label{eq:A_2_Lemma4}
A_2^\ell = \bigcup_{i=1}^{\ell} \left\{(t, x):\,
\begin{array}{c}
0 < t < T, \quad x > 0
\\
t - i \frac{T}{\ell}
< \frac{x}{V_{max}}
< t - i \frac{T}{\ell} + \tau_1^\ell
\end{array}
\right\}.
\end{equation}
\end{lemma}
\begin{proof}
We proceed in the same way as in the previous lemmas. In the time interval
$[0,\tau_{1}^\ell[$ the traffic light is green
for the first incoming road and we study the Riemann problem
as a classical one on the single road given
by the union of $I_1$ and $I_3$:
the Riemann problem between $(\bar \rho_{1}, \bar \eta_{1})$
and $\left(\bar \rho_3, \bar \eta_3\right)$
produces a rarefaction curve of the first family
between $(\bar \rho_{1}, \bar \eta_{1})$ and $\left(\rho_1^\flat, \eta_1^\flat\right)$
followed by a linear wave between
$\left(\rho_1^\flat, \eta_1^\flat\right)$ and $\left(\bar \rho_3, \bar \eta_3\right)$, see Figure~\ref{fig:tau1_Lemma4}.
In $I_2$, the flow at the junction is equal to zero and the solution in the road $I_2$ is given by a shock wave of the first family connecting
$\left(\bar \rho_2, \bar \eta_2\right)$ to $\left(R,R \bar w_2\right)$, see Figure~\ref{fig:tau11_Lemma4}.
\begin{figure}[t!]
\centering
\input{rhoeta_tau1_Lemma4.pdftex_t}
\hfil
\input{rhoeta_tau11_Lemma4.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma4} in the time interval $[0,\tau_{1}^\ell[$ for the first incoming road and the
outgoing road in the coordinates, from left to right, $(\rho,\eta)$ and
$(x, t)$.}\label{fig:tau1_Lemma4}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau111_Lemma4.pdftex_t} \hfil
\input{rhoeta_tau1111.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma4} in the time interval $[0,\tau_{1}^\ell[$ for the second incoming road in
the coordinates, from left to right, $(\rho,\eta)$ and
$(x, t)$.}\label{fig:tau11_Lemma4}
\end{figure}
In the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$, when the traffic light becomes red for the road $I_1$
and green for $I_2$, the solution of the Riemann problem on the single road given
by the union of $I_2$ and $I_3$ between
$\left(R,R \bar w_2\right)$ and $\left(\rho_1^\flat, \eta_1^\flat\right)$ is given by a rarefaction curve of the first family
between $(R, R \bar w_2)$ and $\left(\rho_2^\flat, \eta_2^\flat\right)$
followed by a linear wave between $\left(\rho_2^\flat, \eta_2^\flat\right)$ and $\left(\rho_1^\flat, \eta_1^\flat\right)$, see Figure~\ref{fig:tau2_Lemma4}.
In $I_1$, instead, a shock wave with negative speed
starts from the point $\left(\tau_1^\ell, 0\right)$ connecting the states
$\left(\rho_1^\flat, \eta_1^\flat\right)$ and $\left(R,R \bar w_1\right)$, see Figure~\ref{fig:tau22_Lemma4}.
\begin{figure}[t!]
\centering \input{rhoeta_tau2_Lemma4.pdftex_t} \hfil
\input{rhoeta_tau22_Lemma4.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma4} in the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ for the second
incoming road and the outgoing road in the coordinates, from
left to right, $(\rho,\eta)$ and $(x, t)$.}\label{fig:tau2_Lemma4}
\end{figure}
\begin{figure}[t!]
\centering \input{rhoeta_tau222_Lemma4.pdftex_t} \hfil
\input{rhoeta_tau2222_Lemma4.pdftex_t}
\caption{The situation of Lemma~\ref{Lemma4} in the time interval $[\tau_{1}^\ell,\tau_{1}^\ell+\tau_{2}^\ell[$ for the first
incoming road in the coordinates, from left to right,
$(\rho,\eta)$ and $(x, t)$.}\label{fig:tau22_Lemma4}
\end{figure}
From time $\tau_{1}^\ell+\tau_{2}^\ell$ to time $t = T$, the solution coincides with that described in Lemma~\ref{Lemma3} and is given by~(\ref{eq:I_3_Lemma4}), with a structure similar to Figure~\ref{fig:tau*_Lemma3}.
\end{proof}
\subsection{The General Case}
\label{subsec:II}
Assume that the values $\sigma_1 > 0, \cdots, \sigma_{n-1} > 0, \sigma_n = 1$,
defined in~(\ref{eq:sigma}), are constants, not depending
on the number of cycles $\ell$.
We can now state and prove the main result of the paper.
\begin{theorem}
Assume~\textbf{\textup{(H-1)}}, \textbf{\textup{(H-2)}},
and~\textbf{\textup{(H-3)}} hold.
Fix, for every $i \in \left\{1, \ldots, n+1\right\}$,
$(\bar \rho_{i}, \bar \eta_{i}) \in F \cup C$.
Consider the Riemann problem at the junction for the phase transition
model~(\ref{eq:Modeleta}) where the initial conditions are given by
$(\bar \rho_{1}, \bar \eta_{1}), \cdots, (\bar \rho_{n+1}, \bar \eta_{n+1})$.
Denote with $\left(\rho_{\ell,i} (t,x), \eta_{\ell,i} (t,x)\right)$
($i \in \left\{1, \cdots, n+1\right\}$)
the solution to the Riemann problem, where the junction is governed by
the traffic lights with $\ell$ cycles.
As $\ell \to +\infty$, there exists a function
$\left(\widetilde \rho_{n+1} (t,x), \widetilde\eta_{n+1} (t,x)\right)$,
defined in the outgoing road $I_{n+1}$, such that
\begin{equation}
\label{eq:convergence}
\left(\rho_{\ell,n+1}, \eta_{\ell,n+1}\right)
\rightharpoonup^\ast
\left(\widetilde \rho_{n+1}, \widetilde\eta_{n+1}\right)
\end{equation}
in the weak$^\ast$ topology of $\L{\infty}\left([0, T] \times I_{n+1}\right)$
and the limit function
$\left(\widetilde \rho_{n+1} (t,x), \widetilde\eta_{n+1} (t,x)\right)$
is a weak solution to~(\ref{eq:Modeleta}) on $I_{n+1}$.
Moreover, if $\left(\bar \rho_{n+1}, \bar \eta_{n+1}\right) \in F$,
then
\begin{equation}
\label{eq:solution_F}
\begin{split}
\widetilde \rho_{n+1} (t,x) = & \left\{
\begin{array}{ll}
\bar \rho_{n+1},
& \textrm{ if } x > V_{\max} t,\, 0 < t < T,
\\
\frac{\displaystyle 1}{\displaystyle
\left[\sum_{i = 1}^{n} \sigma_i\right]}
\displaystyle\sum_{i = 1}^n \sigma_i \rho_i^\flat,
& \textrm{ if } 0 < x < V_{\max} t,\, 0 < t < T,
\end{array}
\right.
\\
\widetilde \eta_{n+1} (t,x) = & \left\{
\begin{array}{ll}
\bar \eta_{n+1},
& \textrm{ if } x > V_{\max} t,\, 0 < t < T,
\\
\frac{\displaystyle 1}{\displaystyle
\left[\sum_{i = 1}^{n} \sigma_i\right]}
\displaystyle\sum_{i = 1}^n \sigma_i \eta_i^\flat,
& \textrm{ if } 0 < x < V_{\max} t,\, 0 < t < T.
\end{array}
\right.
\end{split}
\end{equation}
Instead, if $\left(\bar \rho_{n+1}, \bar \eta_{n+1}\right) \in C$,
then
\begin{equation}
\label{eq:solution_C}
\begin{split}
\widetilde \rho_{n+1} (t,x) = & \left\{
\begin{array}{ll}
\bar \rho_{n+1},
& \textrm{ if } x > \lambda t,\, 0 < t < T,
\\
\frac{\displaystyle 1}{\displaystyle
\left[\sum_{i = 1}^{n} \sigma_i\right]}
\displaystyle\sum_{i = 1}^n \sigma_i \rho_i^\sharp,
& \textrm{ if } 0 < x < \lambda t,\, 0 < t < T,
\end{array}
\right.
\\
\widetilde \eta_{n+1} (t,x) = & \left\{
\begin{array}{ll}
\bar \eta_{n+1},
& \textrm{ if } x > \lambda t,\, 0 < t < T,
\\
\frac{\displaystyle 1}{\displaystyle
\left[\sum_{i = 1}^{n} \sigma_i\right]}
\displaystyle\sum_{i = 1}^n \sigma_i \eta_i^\sharp,
& \textrm{ if } 0 < x < \lambda t,\, 0 < t < T,
\end{array}
\right.
\end{split}
\end{equation}
where $\lambda = v\left(\bar \rho_{n+1}, \bar \eta_{n+1}\right)$.
Finally,
for a.e. $t \in [0, T]$, the trace at $x=0^+$ of the maximal speed
$\widetilde w_{n+1} = \frac{\widetilde \eta_{n+1}}{\widetilde \rho_{n+1}}$
satisfies
\begin{equation}
\label{eq:cond_w_outgoing}
\widetilde w_{n+1}(t, 0^+)
=
\frac{1}{\sigma_1 + \cdots + \sigma_{n}}
\left[\sigma_1 \bar w_1 + \cdots + \sigma_{n} \bar w_{n}
\right].
\end{equation}
\end{theorem}
\begin{proof}
First consider the case $n=2$, i.e.\ a junction with
two incoming roads and one outgoing road.
In the time interval $[0, \tau_1^\ell[$ the traffic light for the incoming
road $I_2$ is red. Hence the trace of the solution in $I_2$ has maximal
density $R$. This means that we may assume
that
\begin{equation}
\label{eq:ic-I2}
\left(\bar \rho_2, \bar \eta_2\right) = \left(R, \bar \eta_2\right)
\in C.
\end{equation}
If the initial condition for the road $I_3$ belongs to the free phase $F$,
then Lemma~\ref{Lemma3} and Lemma~\ref{Lemma4} imply that
the sequence of solutions in $I_3$ is of Rademacher type,
see for instance~\cite[Exercise~4.18]{MR2759829}.
Hence we deduce that the limit
of such a sequence is given by~(\ref{eq:solution_F}).
If the initial condition for the road $I_3$ belongs to the congested
phase $C$,
then Lemma~\ref{Lemma1} and Lemma~\ref{Lemma2} imply that
the sequence of solutions in $I_3$ is again of Rademacher type,
and so the limit
of such a sequence is given by~(\ref{eq:solution_C}).
The functions~(\ref{eq:solution_F}) and~(\ref{eq:solution_C}) are piecewise
constant and the discontinuity travels with speed satisfying
the Rankine-Hugoniot condition. Hence they are weak solutions
to~(\ref{eq:Modeleta}).
Finally, consider the maximal speed
\begin{equation*}
\widetilde w_3(t, 0^+) = \frac{\widetilde \eta_3(t, 0^+)}
{\widetilde \rho_3(t, 0^+)}
\end{equation*}
of the solution
$\left(\widetilde \rho_3, \widetilde \eta_3\right)$ at the junction.
If $\left(\bar \rho_3, \bar \eta_3\right) \in F$, then
\begin{align*}
\widetilde w_3(t, 0^+)
& = \frac{\sigma_1 \eta_1^\flat + \sigma_2 \eta_2^\flat}
{\sigma_1 \rho_1^\flat + \sigma_2 \rho_2^\flat}
= \frac{\sigma_1 \bar w_1\rho_1^\flat + \sigma_2 \bar w_2 \rho_2^\flat}
{\sigma_1 \rho_1^\flat + \sigma_2 \rho_2^\flat}
\\
& =
\frac{\sigma_1 \rho_1^\flat V_{\max}}{\sigma_1 \rho_1^\flat V_{\max}+\sigma_2 \rho_2^\flat V_{\max}} \bar w_{1} + \frac{\sigma_2\rho_2^\flat V_{\max}}{\sigma_1 \rho_1^\flat V_{\max}+\sigma_2 \rho_2^\flat V_{\max}} \bar w_{2}
\\
& =
\frac{\gamma_{1}}{\gamma_{1}+\gamma_{2}}\bar w_{1} + \frac{\gamma_{2}}{\gamma_{1}+\gamma_{2}} \bar w_{2}\,,
\end{align*}
with $\gamma_{1} = \sigma_1 \rho_1^\flat V_{\max}$ and
$\gamma_{2} = \sigma_2 \rho_2^\flat V_{\max}$.
Therefore the maximal speed $\widetilde w_3(t,0^+)$ in the outgoing road is a convex combination of the maximal speeds $\bar w_{1}$ and $\bar w_{2}$ in the two incoming roads, and it coincides with the condition on the maximal speed proposed in~\cite{GaravelloMarcellini}.
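The identity above, namely that the trace $\widetilde w_3(t,0^+)$ is the convex combination of $\bar w_1$ and $\bar w_2$ with flux weights $\gamma_i = \sigma_i \rho_i^\flat V_{\max}$, can be verified numerically; the states below are illustrative, not taken from the model.

```python
# Free-phase case: the trace of the maximal speed on the outgoing road
# as a flux-weighted convex combination of bar w_1 and bar w_2.
# All numerical values are illustrative.
V_max = 1.0
sigma = [0.4, 0.6]                 # sigma_1, sigma_2
rho_flat = [0.3, 0.5]              # rho_1^flat, rho_2^flat
w_bar = [1.2, 1.5]                 # bar w_1, bar w_2

eta_flat = [w * r for w, r in zip(w_bar, rho_flat)]   # eta = w * rho

# w_3(t, 0+) = (sigma_1 eta_1 + sigma_2 eta_2) / (sigma_1 rho_1 + sigma_2 rho_2)
w_tilde = (sum(s * e for s, e in zip(sigma, eta_flat))
           / sum(s * r for s, r in zip(sigma, rho_flat)))

gamma = [s * r * V_max for s, r in zip(sigma, rho_flat)]
w_convex = (gamma[0] * w_bar[0] + gamma[1] * w_bar[1]) / (gamma[0] + gamma[1])

assert abs(w_tilde - w_convex) < 1e-12        # the same value, as in the proof
assert min(w_bar) <= w_tilde <= max(w_bar)    # indeed a convex combination
```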
If $\left(\bar \rho_3, \bar \eta_3\right) \in C$, then
\begin{align*}
\widetilde w_3(t, 0^+)
& = \frac{\sigma_1 \eta_1^\sharp + \sigma_2 \eta_2^\sharp}
{\sigma_1 \rho_1^\sharp + \sigma_2 \rho_2^\sharp}
= \frac{\sigma_1 \bar w_1\rho_1^\sharp + \sigma_2 \bar w_2 \rho_2^\sharp}
{\sigma_1 \rho_1^\sharp + \sigma_2 \rho_2^\sharp}
\\
& = \!
\frac{\sigma_1 \rho_1^\sharp v\left(\bar \rho_3, \bar \eta_3\right)}
{\sigma_1 \rho_1^\sharp v\left(\bar \rho_3, \bar \eta_3\right) +
\sigma_2 \rho_2^\sharp v\left(\bar \rho_3, \bar \eta_3\right)} \bar w_{1}
\!+ \!\frac{\sigma_2\rho_2^\sharp v\left(\bar \rho_3, \bar \eta_3\right)}
{\sigma_1 \rho_1^\sharp v\left(\bar \rho_3, \bar \eta_3\right)
+ \sigma_2 \rho_2^\sharp v\left(\bar \rho_3, \bar \eta_3\right)} \bar w_{2}
\\
& =
\frac{\gamma_{1}}{\gamma_{1}+\gamma_{2}}\bar w_{1} + \frac{\gamma_{2}}{\gamma_{1}+\gamma_{2}} \bar w_{2}\,,
\end{align*}
with $\gamma_{1} = \sigma_1 \rho_1^\sharp
v\left(\rho_1^\sharp, \eta_1^\sharp\right)$ and
$\gamma_{2} = \sigma_2 \rho_2^\sharp v\left(\rho_2^\sharp, \eta_2^\sharp
\right)$, since
\begin{equation*}
v\left(\rho_1^\sharp, \eta_1^\sharp\right) =
v\left(\rho_2^\sharp, \eta_2^\sharp\right) =
v\left(\bar \rho_3, \bar \eta_3\right).
\end{equation*}
We conclude as in the previous case.
The proof for the general case $n \ge 2$ can be obtained in a similar way.
Indeed, in the incoming roads where the traffic light is red, the trace of
the density is $R$. Instead, in the incoming road where the traffic light
is green, the initial condition is propagated in $I_{n+1}$
with speed $v\left(\bar \rho_{n+1}, \bar \eta_{n+1}\right)$.
Hence the solution
$\left(\tilde \rho_{\ell, n+1}, \tilde \eta_{\ell, n+1}\right)$ is similar
to that of Lemmas~\ref{Lemma1}-\ref{Lemma4} in the sense that
there exist subsets $A_1^\ell, \cdots, A_n^\ell$ of $[0,T] \times I_{n+1}$
with a ``periodic'' structure in which
$\left(\tilde \rho_{\ell, n+1}, \tilde \eta_{\ell, n+1}\right)$
is given respectively by
$\left(\bar \rho_1, \bar \eta_1\right), \cdots,
\left(\bar \rho_n, \bar \eta_n\right)$. This allows us to conclude.
\end{proof}
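The Rademacher-type weak$^\ast$ convergence invoked above can be illustrated numerically: a function that alternates between two states on fixed fractions of each of $\ell$ shrinking cycles, paired with a test function, approaches the pairing of the constant $\sigma$-weighted average as $\ell \to +\infty$. The states, fractions, and test function below are illustrative.

```python
# Weak* convergence of a Rademacher-type sequence: f_l alternates between
# rho1 (a fraction sigma1 of each cycle) and rho2 (the rest) over l cycles
# on [0, T].  Pairing f_l with a test function phi approaches the pairing
# of the sigma-weighted average.  All values are illustrative.
rho1, rho2 = 0.8, 0.2
sigma1 = 0.3                       # fraction of each cycle spent in rho1
T = 1.0

def pairing(l, phi, n=100000):
    """Midpoint-rule approximation of the integral of f_l * phi over [0, T]."""
    h = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        frac = (t * l / T) % 1.0   # position within the current cycle
        f = rho1 if frac < sigma1 else rho2
        total += f * phi(t) * h
    return total

limit = sigma1 * rho1 + (1 - sigma1) * rho2    # the weak* limit state
phi = lambda t: t
exact = limit * T**2 / 2                       # integral of limit * phi
errs = [abs(pairing(l, phi) - exact) for l in (2, 8, 32)]
assert errs[2] < errs[1] < errs[0]             # the pairing error shrinks with l
```

The error against a fixed test function decays like $1/\ell$, which is the mechanism behind the weak$^\ast$ limits (\ref{eq:solution_F}) and (\ref{eq:solution_C}).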
\section*{Acknowledgments}
The authors were partially supported by the INdAM-GNAMPA 2017 project ``Conservation Laws: from Theory to Technology''.
{\small{
\bibliographystyle{abbrv}
\section*{Results}
\subsection*{Absence of static magnetic order}
The heat capacity of PbCuTe$_{2}$O$_{6}$\, reveals an anomaly at the temperature T$_{\rm an}{=}0.87$~K~\cite{Koteswararao2014}. Although this is clearly not a sharp $\lambda$-anomaly and is unlikely to indicate a phase transition, further verification of the magnetic ground state is necessary. Neutron diffraction directly measures the spatial Fourier transform of the spin-spin correlation function and would show resolution-limited magnetic Bragg peaks in the case of long-range magnetic order. Figure~\ref{Figure:2}a shows the neutron powder diffraction patterns of PbCuTe$_{2}$O$_{6}$\, measured above and below T$_{\rm an}$ at temperatures T$=2$~K and 0.1~K, respectively. Both patterns can be described entirely by considering only the known crystal structure of PbCuTe$_{2}$O$_{6}$\,~\cite{Wulff1997}. The absence of any additional Bragg peaks that could correspond to long-range magnetic order is further revealed by taking the difference between the diffraction patterns at these two temperatures, as shown by the lower green curve. To establish an upper limit on the size of any possible static ordered moment, several magnetic structures were simulated and compared to the data. Figure~\ref{Figure:2}b shows a magnetic Bragg peak compatible with the magnetic structure of the iso-structural compound SrCuTe$_{2}$O$_{6}$~\cite{Chillal2018}. The ordered moment, if present, must be smaller than $\approx 0.05 \mu_{B}$/Cu$^{2+}$, which is much less than the total spin moment of $1 \mu_{B}$ of the Cu$^{2+}$ ion, indicating that static magnetism is suppressed. These results complement and confirm the lack of static magnetism revealed by $\mu$SR~\cite{Khuntia2016}.
\subsection*{Diffuse continuum of excitations}
To explore the magnetic excitations of PbCuTe$_{2}$O$_{6}$\,, we performed inelastic neutron scattering. This technique directly measures the dynamical structure factor S$(\mathbf{Q},E)$, which is the Fourier transform in space and time of the spin-spin correlation function and allows the magnetic excitation spectrum to be mapped out as a function of energy $E$ and momentum (or wavevector) transfer $\mathbf{Q}$. Figure~\ref{Figure:3}a shows the excitation spectrum of a powder sample measured at T$=0.1$~K. A dispersionless, broad diffuse band of magnetic signal is clearly visible around momentum transfer $|Q| \approx 0.8$~\AA$^{-1}$. The magnetic excitations extend up to $3$ meV and are much broader than the instrumental resolution. Figure~\ref{Figure:3}b shows the magnetic signal at $|Q| \approx 0.8$~\AA$^{-1}$ plotted as a function of energy. The intensity is greatest at $E=0.5$~meV and weakens gradually with increasing energy. The intensity also decreases rapidly with decreasing energy and the presence of an energy gap smaller than 0.15~meV is possible, but cannot be confirmed within the experimental uncertainty.
To obtain a more detailed picture, inelastic neutron scattering was performed on single crystal samples at several fixed energy transfers. Figure~\ref{Figure:3}c-e show the excitations in the $\lbrack h,k,0\rbrack$-plane measured at $E{=}0.75$, $1.5$ and $2$~meV, respectively, while Figure~\ref{Figure:4}b gives the scattering at $E{=}0.5$~meV. For all energy transfers, the excitations form a diffuse ring at $|Q|\approx 0.8$~\AA$^{-1}$, while additional weaker branches of scattering extend outwards to higher wavevectors. At low energy transfers ($E < 1$~meV) the diffuse ring has double maxima at wavevectors $(1.69,\sim\pm0.3,0)$ and $(\sim\pm0.3,1.69,0)$, etc.\ (see Figure~\ref{Figure:4}h), while at higher energies it broadens and becomes weaker. The ring can also be observed in the $\lbrack{{h,h,l}}\rbrack$-plane, where its intensity also modulates (as shown in Figure~\ref{Figure:4}a for $E{=}0.5$~meV); together, these results indicate that the excitations in fact form a diffuse sphere in reciprocal space with a radius of $|Q|\approx 0.8$~\AA$^{-1}$. The excitations of PbCuTe$_{2}$O$_{6}$\, are clearly very different from the sharp and dispersive spin-wave excitations expected in conventional magnets with long-range magnetically ordered ground states or from the gapped and dispersive magnon excitations of dimer magnets \cite{PhysRevB.81.014415,PhysRevLett.102.177204}. Diffuse scattering features indicate a multi-spinon continuum of excitations, as has been well documented in one-dimensional antiferromagnets formed from half-integer spin magnetic ions~\cite{lake2005,Lake2013,Mourigal2013}. Similar diffuse ring-like features have also been observed in several two-dimensional quantum spin liquids~\cite{Han2012,Balz2016}. In three dimensions, most spin liquid candidates are based on the pyrochlore structure and their scattering forms a distinctive pinch-point pattern \cite{PhysRevB.71.014424}.
\subsection*{Magnetic Hamiltonian}
Having confirmed that PbCuTe$_{2}$O$_{6}$\, exhibits the characteristic features of a quantum spin liquid, we now investigate the origins of this behaviour by deriving the exchange interactions. For this purpose, we employ density functional theory (DFT). The resulting values of the interaction strengths are plotted as a function of the onsite interaction ``$U$'' in Figure~\ref{Figure:1}c for $U{=}5.5$~eV to $8$~eV, as this range spans the usual values for Cu$^{2+}$. We find that all the interactions are antiferromagnetic. In contrast to previous DFT calculations, where the hyperkagome interaction $J_{2}$ was found to be much stronger than the other interactions~\cite{Koteswararao2014}, we find that the two frustrated interactions $J_{1}$ and $J_{2}$ are of almost equal strength and are significantly stronger than the chain interactions $J_{3}$ and $J_{4}$. The combined effect of $J_{1}$ and $J_{2}$ is to couple the Cu$^{2+}$ ions into a highly frustrated three-dimensional network of corner-sharing triangles similar to the hyperkagome lattice ($J_{2}$ only) but with a higher density of triangles. In the hyperkagome lattice each magnetic ion participates in two corner-sharing triangles, while in PbCuTe$_{2}$O$_{6}$\, each Cu$^{2+}$ ion participates in three triangles, resulting in a higher connectivity; we name this lattice the hyper-hyperkagome. An important difference between these two lattices is the size of the smallest possible closed loops (beyond the triangles) around which the spins can resonate. The hyperkagome lattice consists of interconnected loops of 10 spins. The hyper-hyperkagome can also be viewed as interconnected loops, which however are smaller, consisting of 6 spins as in the 2D kagome lattice (see Figure~\ref{Figure:1}a). As shown in Figure~\ref{Figure:1}c, the values of the exchange interactions decrease as the value of $U$ increases.
For each value of $U$, the resulting set of interaction strengths can be used to calculate the Curie-Weiss temperature $\theta_{\rm CW}$. Since DC susceptibility measurements yield $\theta_{\rm CW}=-22$~K~\cite{Koteswararao2014,Khuntia2016}, we use $U{=}7.5$~eV (corresponding to $\theta_{\rm CW}=-23$~K), giving interaction sizes $J_{1}{=}1.13$~meV, $J_{2}{=}1.07$~meV, $J_{3}{=}0.59$~meV, and $J_{4}{=}0.12$~meV.
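The mean-field estimate behind this step is $\theta_{\rm CW} = -\tfrac{S(S+1)}{3k_{\rm B}}\sum_i z_i J_i$. The following is a minimal numerical sketch; the coordination numbers $z_i$ are our assumptions inferred from the lattice description ($J_1$ triangles: $z{=}2$, $J_2$ hyperkagome: $z{=}4$, $J_3$ and $J_4$ chains: $z{=}2$):

```python
# Mean-field Curie-Weiss temperature from the quoted exchange couplings:
#   theta_CW = -S(S+1)/(3 k_B) * sum_i z_i J_i
# The coordination numbers z_i are assumptions inferred from the lattice
# description (J1 triangles: z=2, J2 hyperkagome: z=4, J3/J4 chains: z=2).
K_B = 0.08617  # Boltzmann constant in meV/K
S = 0.5        # spin of Cu2+

couplings = {          # J in meV (DFT, U = 7.5 eV), coordination number z
    "J1": (1.13, 2),
    "J2": (1.07, 4),
    "J3": (0.59, 2),
    "J4": (0.12, 2),
}

theta_cw = -S * (S + 1) / (3 * K_B) * sum(J * z for J, z in couplings.values())
print(f"theta_CW = {theta_cw:.0f} K")  # -23 K, matching the quoted value
```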
\subsection*{Comparison to theory}
To gain further insight into the magnetic behavior of PbCuTe$_{2}$O$_{6}$\,, the static susceptibility expected from this set of interactions was calculated using the theoretical technique of pseudo-fermion functional renormalization group (PFFRG). This method calculates the real part of the static spin susceptibility which corresponds to the energy-integrated neutron scattering cross-section as discussed in the methods section. In agreement with the experimental observations, the susceptibility does not show any sign of long-range magnetic order even down to the lowest temperatures, confirming that static magnetism is suppressed by this Hamiltonian. The momentum resolved susceptibility calculated at $T{=}0.2$~K is shown in Figure~\ref{Figure:4}c-d for the $\lbrack{h,k,0}\rbrack$- and $\lbrack{h,h,l}\rbrack$-planes respectively. It predicts a diffuse sphere of scattering at the same wave-vectors and with similar intensity modulations as those observed experimentally (Figure~\ref{Figure:3}c-e and~\ref{Figure:4}a-b), and is even able to reproduce the weaker features. The accuracy of the calculations can be further demonstrated by comparing cuts through the data and simulations. As shown in Figure~\ref{Figure:4}g-h, the theory reproduces the double maxima as well as the structure of the slopes of these peaks to high precision. We emphasize that this level of agreement has hardly ever been achieved for a material with many competing interactions on a complicated three-dimensional lattice and in the extreme quantum (spin-$\frac{1}{2}$) limit. From a more general viewpoint, it demonstrates that the combination of DFT and PFFRG provides a powerful and flexible numerical framework for the investigation of real quantum magnetic materials. The PFFRG method was also used to test the robustness of the spin liquid state to variations in the Hamiltonian. 
We find that the ground state shows no tendency toward long-range magnetic order when the ratio of interactions is varied over $0.975\leqslant J_{1}/J_{2}\leqslant1.08$ (corresponding to $-37~{\rm K}\leqslant \theta_{\rm CW}\leqslant-21~{\rm K}$), while the momentum-resolved susceptibility changes only slightly [Figure~\ref{Figure:4}g-h].
\section*{Discussion}
Altogether, the neutron data and numerical simulations, together with the small spin-$\frac{1}{2}$ moments and the isotropic interactions, point to the presence of strong quantum fluctuations that destroy long-range magnetic order or any static magnetism in the ground state of PbCuTe$_{2}$O$_{6}$\,. This is in stark contrast to the previously studied 3D pyrochlore spin ice materials with large moments and highly anisotropic interactions, where the magnetic moments are static in the ground state \cite{fen09,mor09}. A fluctuating ground state, as observed for PbCuTe$_{2}$O$_{6}$\,, is known to provide the right physical environment for spin fractionalization associated with deconfined spinon excitations. Such particles are generally observed as a multi-spinon spectrum that is broad and diffuse in momentum and energy. This, in turn, is the type of signal which we independently observed in both inelastic neutron experiments and PFFRG calculations, making our quantum spin-liquid interpretation consistent. The issue of whether this is a gapped or gapless quantum spin liquid remains unresolved; however, a clear depletion of magnetic states at low energy suggests that a gap smaller than 0.15~meV could exist.
An important remaining question is why the complex model we propose for PbCuTe$_{2}$O$_{6}$\, induces sufficiently strong quantum fluctuations for quantum spin liquid formation. According to common understanding, quantum effects for small spins are particularly strong when the corresponding classical (large spin) model exhibits an infinite ground state degeneracy, as is the case for the kagome or pyrochlore models with isotropic antiferromagnetic interactions. Performing a classical Monte Carlo analysis of our system, we found that the full $J_{1}{-}J_{2}{-}J_{3}{-}J_{4}$ model in fact does not exhibit infinite degeneracy for large spin but instead shows long-range magnetic order \cite{Reuther2018}. However, we have identified an infinite degeneracy in the classical model with only the $J_{1}$ and $J_{2}$ interactions. From this perspective, the weaker $J_{3}$ and $J_{4}$ couplings act as perturbations inducing a small energy splitting in the degenerate classical $J_{1}{-}J_{2}$-only system. We hence propose that the strong quantum fluctuations of the full $J_{1}{-}J_{2}{-}J_{3}{-}J_{4}$ model with quantum spin-$\frac{1}{2}$ originate from the degeneracy of the classical $J_{1}{-}J_{2}$ model. This is supported by PFFRG calculations showing that the correlation profiles of both systems resemble each other (see Figure~\ref{Figure:4}c, where the degeneracy of the $J_{1}{-}J_{2}$-only model manifests as streaks in the [1,1,1] direction). Finally, the degeneracy in the classical $J_{1}{-}J_{2}$ model can be understood from the fact that the associated hyper-hyperkagome lattice forms a network of corner-sharing triangles. As for the classical antiferromagnetic kagome and pyrochlore lattices, the ground states in such corner-sharing geometries must obey the local constraint that the vector sum of the spins in each triangle or tetrahedron is zero. 
The large ground-state degeneracy then follows from the fact that there are infinitely many states which fulfill all constraints.
In conclusion, we show using a combination of theory and experiment that PbCuTe$_{2}$O$_{6}$\, exhibits all the features expected of a quantum spin liquid, including the absence of static magnetism and the presence of diffuse dispersionless spinon-like excitations. Although PbCuTe$_{2}$O$_{6}$\, has a complex Hamiltonian, it is clear that the frustration arises from the network of corner-sharing triangles due to the dominant $J_1$ and $J_2$ interactions. While this has been explored in the hyperkagome lattice, where each spin participates in two corner-sharing triangles giving closed loops of 10 spins \cite{Hopkinson2007,Zhou2008,Bergholtz2010}, there has until now been no experimental or theoretical exploration of the more highly connected hyper-hyperkagome lattice, where each spin participates in three corner-sharing triangles resulting in smaller closed loops of 6 spins. Three-dimensional spin liquids are very rare and current examples are confined mostly to the pyrochlore and hyperkagome lattices; our results are therefore of high importance because they reveal a new type of three-dimensional lattice capable of supporting spin liquid behaviour.
\section*{Methods}
\subsection*{Neutron scattering measurements}
Powder neutron diffraction was performed on the time-of-flight diffractometer WISH at the ISIS Facility, Didcot, U.K. The sample (weight $13$~g) was placed into a copper can and the diffraction patterns were collected at $T{=}2$~K and $0.1$~K. The powder inelastic neutron scattering data were obtained at the time-of-flight spectrometer LET also located at the ISIS facility. For these measurements the same powder sample (weight $13$~g) was placed between two coaxial copper cans to achieve a cylindrical sample shape, and Helium exchange gas was used for better temperature stability. The measurements were performed at $T{=}0.1$~K with incident energies: $E_i{=}18.2$~meV, $5.64$~meV, $2.72$~meV, $1.59$~meV. Single crystal inelastic neutron measurements in the $\lbrack{h,k,0}\rbrack$-plane were obtained at the ThALES triple-axis spectrometer using the flatcone detector at the ILL, Grenoble, France, and also at the MACS triple-axis spectrometer at NIST, Gaithersburg, USA. Wavevector maps at constant energy were measured on ThALES at $T{=}0.05$~K while rotating the crystal in $0.5~\deg$ steps with a fixed final energy of $E_f{=}4.06$~meV giving an energy resolution of $0.097$~meV. The wavevector resolution in the plots is $0.05$~r.l.u ${\times}0.05$~r.l.u. At MACS, the initial energy was set to $E_i{=} 4$~meV for an energy transfer of $E{=}0.75$~meV (giving an energy resolution of $0.24$~meV) and $E_i{=}5$~meV for $E {=}1.5$~meV and $2$~meV (energy resolution $0.35$~meV). The wavevector maps were obtained by rotating the crystal with a step size of $1~\deg$ and the data were plotted by rebinning to $0.04$~r.l.u${\times}0.04$~r.l.u pixels. The maps in the $\lbrack{h,h,l}\rbrack$-plane were obtained at the LET spectrometer at ISIS at $T{=}0.03$~K with incident energies of $E_i{=}26.24$~meV, $5.46$~meV, $2.29$~meV, $1.25$~meV, and $0.79$~meV. For $E_i{=}5.46$~meV this gives an energy resolution of $0.18$~meV.
\subsection*{Density functional theory calculations}
We determined the parameters of the Heisenberg Hamiltonian in Eq.~\ref{eq:1} for {PbCuTe$_{2}$O$_{6}$\,} using density functional theory (DFT) calculations with the all electron full potential local orbital (FPLO) basis~\cite{Koepernik1999}. We based our calculations on the structure determined via powder X-ray diffraction by Koteswararao {\it et al.}~\cite{Koteswararao2014}. The exchange couplings were extracted by mapping the total energies of many different spin configurations onto the classical energies of the Heisenberg Hamiltonian~\cite{Guterding2016}. Note that this approach is different from the second-order perturbation theory estimates using $J{=}\frac{4t^{2}}{U}$ for the exchange interactions reported in Ref.~\cite{Koteswararao2014}, which include only the antiferromagnetic super-exchange contribution based on one virtual process. In order to increase the number of inequivalent Cu$^{2+}$ ions from one to six and thus to allow for different spin configurations, we lowered the symmetry of the crystal from $P\,4_132$ to $P\,2_1$. We converged the total energies with $6{\times} 6{\times} 6$ $\mathbf{k}$-meshes and accounted for the strong electronic correlations using a GGA+$U$ exchange correlation functional~\cite{Liechtenstein1995}. The value of the Hund's rule coupling was fixed at the typical value $J_{\rm H}{=}1$~eV, and the onsite correlation strength $U$ was varied between $5.5$~eV and $8$~eV. We determined the most relevant $U$ by using the constraint that the exchange couplings reproduce the experimentally determined Curie-Weiss temperature of $\theta_{\rm CW}=-22$~K~\cite{Koteswararao2014,Khuntia2016}. This led to a DFT result for the first four exchange couplings of {PbCuTe$_{2}$O$_{6}$\,} of $J_{1}=1.13$~meV, $J_{2}=1.07$~meV, $J_{3}=0.59$~meV, and $J_{4}=0.12$~meV. The full results are given in the Supplementary Information.
\subsection*{Pseudofermion functional renormalization group calculations}
The microscopic spin model proposed by DFT calculations is treated within the PFFRG approach~\cite{Reuther2010,Iqbal2016}, which first reformulates the original spin operators in terms of Abrikosov fermions. The resulting fermionic model is then explored within the well-developed FRG framework~\cite{Polchinski1984,Reuther2011}. Importantly, no bias towards either magnetic order or non-magnetic behaviour is built into this approach. Effectively, the PFFRG method amounts to generating and summing up a large number of fermionic Feynman diagrams, each representing a spin-spin interaction process that contributes to the magnetic susceptibility. In terms of the original spin degrees of freedom, this summation corresponds to a simultaneous expansion in $1/S$ and $1/N$, where $S$ is the spin magnitude and $N$ generalizes the symmetry group of the spins from SU$(2)$ to SU$(N)$. The exactness of the PFFRG in the limits $1/S\to0$ and $1/N\to 0$ ensures that magnetically ordered states (typically obtained at large $S$) and non-magnetic spin liquids (favoured at large $N$) can both be faithfully described within the same numerical framework. In principle, the PFFRG treats an infinitely large lattice, however, spin-spin correlations are only taken into account up to a certain distance while longer correlations are put to zero. The computation times of the PFFRG scale quadratically with the correlated volume, which in our calculations comprises $2139$ lattice sites (this corresponds to correlations up to a distance of $\approx$10 nearest-neighbor distances). Likewise, continuous frequency variables (such as the dynamics of the magnetic susceptibility) are approximated by a finite and discrete frequency grid, which leads to a quartic scaling of the computational effort in the number of grid points. In our calculations we use $64$ discrete frequencies. 
The central outcome of the PFFRG approach is the real part of the static and momentum-resolved magnetic susceptibility which can be directly related to the experimental neutron scattering cross section through the Kramers-Kronig relation as follows:
\begin{equation}
\chi_{real}(Q,0)\propto\int \frac{\chi_{img}(Q,E)}{E}dE
\end{equation}
where $S(Q,E)\propto\chi_{img}(Q,E)$ at very low temperatures.
This relation implies that, ideally, the PFFRG result should be compared to the integral of the experimental data weighted by the inverse energy. It is clear from this equation, however, that the PFFRG is dominated by the low-energy part of the neutron structure factor due to the factor of $1/E$ in the integrand. As shown in Figure~\ref{Figure:3}b, the intensity of the excitation spectrum is maximal at $E\sim0.5$~meV and decreases continuously towards smaller energy transfers. Since the excitations evolve only weakly with energy, we choose the $E{=}0.5$~meV dataset for comparison to the PFFRG results, as it has the strongest signal.
If a magnetic system develops magnetic order, this manifests itself as a breakdown of the renormalization group flow, accompanied by distinct peaks in the spin susceptibility $\chi_{real}(Q,0)$. An important advantage of the PFFRG is that even in strongly fluctuating non-magnetic phases, short-range spin correlations and their momentum profiles can be accurately calculated and compared to neutron scattering results. For consistency, the PFFRG spin susceptibility is corrected for the magnetic form factor of the Cu$^{2+}$ ion in the dipole approximation~\cite{Brown2004}.
\subsection*{Classical simulations}
For the numerical treatment of spin systems in the classical limit $S\to\infty$ we have employed a spin-$S$ generalization of the PFFRG approach. On a technical level, this requires the introduction of $4S$ fermionic degrees of freedom per lattice site as discussed in Ref. \cite{PhysRevB.96.045144}. In the classical limit, the PFFRG equations can be solved analytically to obtain the momentum resolved magnetic susceptibility. It can be shown that the classical wave vector at which the susceptibility is strongly peaked is identical to the one predicted within the Luttinger-Tisza method \cite{PhysRev.70.954}. The final susceptibility is corrected for the Cu$^{2+}$ magnetic form factor in the dipole approximation.
\section*{Data availability}
Raw powder neutron diffraction data were measured on the time-of-flight diffractometer WISH at the ISIS facility, Didcot, UK. Raw powder and single crystal inelastic neutron scattering data were measured on the time-of-flight spectrometer LET also at the ISIS facility. Single-crystal inelastic neutron scattering data were also collected on the triple-axis spectrometers ThALES with the flat cone option at the Institut Laue-Langevin, Grenoble, France, and MACS II at the NIST Center for Neutron Research, Gaithersburg, USA. All other raw and derived data used to support the findings of this study are available from the authors on request.
\section*{Acknowledgements}
We thank K. Siemensmeyer for his help with the susceptibility measurements, D. Voneshen for his help with the inelastic neutron experiments performed on LET at the ISIS facility and Tobias M\"uller for performing classical Monte Carlo simulations. S.C., B.L., A.T.M.N.I., and J.R. acknowledge the Helmholtz Gemeinschaft for funding via the Helmholtz Virtual Institute (Project No. HVI-521). Access to MACS was provided by the Center for High Resolution Neutron Scattering, a partnership between the National Institute of Standards and Technology and the National Science Foundation under Agreement No. DMR-1508249. Y.I. and R.T. gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ). J.R. is supported by the Freie Universit\"at Berlin within the Excellence Initiative of the German Research Foundation.
\section*{Author contributions}
A.T.M.N.I. made the powder and single crystal samples. S.C. performed or participated in all neutron measurements, and analyzed the data with help from the other authors. J.A.R.-R., R.B., P.S. supported the INS measurements and D.K., P.M. supported the neutron diffraction measurements. B.L. participated in most measurements and directed the experimental aspects of the project. The DFT calculations were performed by H.O.J., Y.I. carried out the quantum PFFRG calculations with the help of J.R. and R.T., while J.R. performed the classical simulations and directed the theoretical aspects of the project. S.C. and B.L. wrote the manuscript with contributions from all authors.
\section*{Additional information}
{\bf Supplementary information:} Included in the attachment.
\noindent{\bf Competing financial interests statement:} The authors declare no competing financial interests.
\section{Introduction}
A \textit{$d$-dimensional matrix $A$ of order $n$} is an array $(a_\alpha)_{\alpha \in I^d_n}$, $a_\alpha \in\mathbb R$, where the set of indices $I_n^d= \left\{ (\alpha_1, \ldots , \alpha_d):\alpha_i \in \left\{0,\ldots,n-1 \right\}\right\}$. Given $k\in \left\{0,\ldots,d\right\}$, a \textit{$k$-dimensional plane} in $A$ is a submatrix obtained by fixing $d-k$ indices and letting the other $k$ indices take all $n$ values $0,\ldots,n-1$. A 1-dimensional plane is said to be a \textit{line}, and a $(d-1)$-dimensional plane is a \textit{hyperplane}.
A matrix $A$ is called a \textit{$(0,1)$-matrix} if all its entries are equal to 0 or 1, and $A$ is a \textit{nonnegative} matrix if $a_\alpha \geq 0$ for all $\alpha \in I_n^d$. A nonnegative matrix is \textit{polystochastic} if $\sum\limits_{\alpha \in l} a_\alpha = 1$ for each line $l$. $2$-dimensional polystochastic matrices are known as \textit{doubly stochastic} matrices.
A \textit{partial diagonal $p$ of length $m$} in a $d$-dimensional matrix $A$ of order $n$ is a set $\{\alpha^1, \ldots, \alpha^m\}$ of $m$ indices such that any two indices $\alpha^i$ and $\alpha^j$, $i \neq j$, differ in all components. A partial diagonal $p$ is \textit{positive} in a matrix $A$ if all entries of $A$ with indices from $p$ are greater than zero.
A \textit{diagonal} in a $d$-dimensional matrix $A$ of order $n$ is a partial diagonal of length $n$ (the maximal possible length). Denote by $D(A)$ the set of all diagonals in $A$. The \textit{permanent} of a multidimensional matrix $A$ is
$${\rm per} A = \sum\limits_{p \in D(A)} \prod\limits_{\alpha \in p} a_{\alpha}.$$
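Since the indices of a diagonal must be pairwise distinct in every component, each diagonal can be written as $\{(i,\sigma_2(i),\ldots,\sigma_d(i))\}_{i=0}^{n-1}$ for permutations $\sigma_2,\ldots,\sigma_d$ of $\{0,\ldots,n-1\}$. A brute-force Python sketch of the definition (an illustration only, feasible for tiny $n$ and $d$; the matrix is stored as a dict keyed by index tuples):

```python
from itertools import permutations, product

def permanent(A, n, d):
    """Permanent of a d-dimensional matrix A of order n, given as a dict
    mapping index tuples to entries.  Every diagonal has the form
    {(i, s2(i), ..., sd(i))} for permutations s2, ..., sd of {0,...,n-1},
    since its indices must be pairwise distinct in each component."""
    total = 0
    for sigmas in product(permutations(range(n)), repeat=d - 1):
        prod_over_diag = 1
        for i in range(n):
            prod_over_diag *= A[(i,) + tuple(s[i] for s in sigmas)]
        total += prod_over_diag
    return total

# the uniform doubly stochastic matrix of order n has permanent n!/n^n
U2 = {idx: 0.5 for idx in product(range(2), repeat=2)}
print(permanent(U2, n=2, d=2))  # 0.5
```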
The permanent of polystochastic matrices is used to count transversals in latin squares and hypercubes. A \textit{$d$-dimensional latin hypercube $Q$ of order $n$} is a multidimensional matrix filled with $n$ symbols so that each line contains all $n$ distinct symbols. $2$-dimensional latin hypercubes are usually called \textit{latin squares}. Two latin hypercubes are said to be \textit{equivalent} if one can be obtained from the other by permutations of hyperplanes and by permutations of symbols. A \textit{transversal} in a latin hypercube $Q$ is a diagonal containing all $n$ symbols.
There is a one-to-one correspondence between $d$-dimensional latin hypercubes $Q$ of order $n$ and $(d+1)$-dimensional polystochastic $(0,1)$-matrices $A$ of order $n$: an entry $q_{\alpha_1, \ldots, \alpha_d}$ of $Q$ equals $\alpha_{d+1}$ if and only if the entry $a_{\alpha_1, \ldots, \alpha_{d+1}}$ of $A$ equals $1$. The number of transversals in a latin hypercube $Q$ coincides with the permanent of the corresponding polystochastic matrix $A$. This correspondence was first observed in~\cite{jurkat}.
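Under this correspondence, counting transversals of a latin square amounts to counting permutations $\sigma$ for which the symbols $Q[i][\sigma(i)]$ are all distinct. A small illustrative sketch (the helper name is ours):

```python
from itertools import permutations

def transversals(Q):
    """Count transversals of a latin square Q (n x n list of lists):
    cell sets {(i, s(i))} whose symbols Q[i][s(i)] are all distinct.
    This equals the permanent of the associated 3-dimensional
    (0,1)-matrix with a_{i,j,k} = 1 iff Q[i][j] = k."""
    n = len(Q)
    return sum(
        1 for s in permutations(range(n))
        if len({Q[i][s[i]] for i in range(n)}) == n
    )

# Cayley table of Z_3: a latin square of odd order with 3 transversals
Z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
# Cayley table of Z_2: even order, no transversals (Euler's observation)
Z2 = [[(i + j) % 2 for j in range(2)] for i in range(2)]
print(transversals(Z3), transversals(Z2))  # 3 0
```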
The main aim of this paper is to put together all recent results on the positivity of the permanent of polystochastic matrices and to prove that the permanent of every polystochastic matrix of order $4$ and dimension $4$ is positive.
\section{History and motivation}
We start our overview with the well-known Birkhoff theorem stating that every doubly stochastic matrix not only has a positive permanent but can be decomposed into a convex combination of permutation matrices.
\begin{thm}[Birkhoff]
Let $A$ be a doubly stochastic matrix of order $n$. Then ${\rm per} A > 0$ and moreover
$A = \sum \limits_{i=1}^k \theta_i P_i,$
where $P_1, \ldots, P_k$ are permutation matrices, $\theta_1, \ldots, \theta_k$ are nonnegative, and $\sum \limits_{i=1}^k \theta_i = 1.$
\end{thm}
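A minimal sketch of such a decomposition for a small doubly stochastic matrix, using a greedy strategy: repeatedly find a positive diagonal by brute force over permutations and subtract the largest possible multiple of the corresponding permutation matrix. Each step zeroes at least one entry, so the loop terminates; since the residual matrix keeps equal row and column sums, Birkhoff's theorem guarantees that a positive diagonal exists until the residual vanishes.

```python
from itertools import permutations

def birkhoff(A, tol=1e-12):
    """Greedy Birkhoff decomposition of a small doubly stochastic matrix:
    repeatedly pick a positive diagonal (brute force over permutations,
    fine for small n) and subtract the largest multiple of the
    corresponding permutation matrix.  Returns (theta, permutation) pairs."""
    n = len(A)
    A = [row[:] for row in A]  # work on a copy
    parts = []
    while True:
        diag = next((s for s in permutations(range(n))
                     if all(A[i][s[i]] > tol for i in range(n))), None)
        if diag is None:
            break
        theta = min(A[i][diag[i]] for i in range(n))
        parts.append((theta, diag))
        for i in range(n):
            A[i][diag[i]] -= theta
    return parts

A = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
# decomposes into three permutation matrices with weights 0.5, 0.3, 0.2
for theta, perm in birkhoff(A):
    print(theta, perm)
```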
Meanwhile, for dimensions $d$ greater than $2$ there exist $d$-dimensional po\-ly\-sto\-chas\-tic matrices with zero permanent. The simplest example is a $3$-di\-men\-si\-onal $(0,1)$-matrix corresponding to the Cayley table of the group $\mathbb{Z}_n$ for even order $n$. The fact that the Cayley tables of such groups have no transversals was proved by Euler~\cite{euler}. Wanless generalized this observation to latin hypercubes, which gives the following construction of polystochastic matrices $Z_n^d$ with zero permanent.
\begin{prop}[Wanless,~\cite{wanless}]
Let $Z^{d+1}_{n}$ be the $(d+1)$-dimensional polystochastic $(0,1)$-matrix of order $n$ such that $z_\alpha = 1$ if and only if $\alpha_1 + \ldots + \alpha_{d+1} \equiv 0 \mod n$ and let $Q_n^d$ be a $d$-dimensional latin hypercube corresponding to this matrix.
If $d$ and $n$ are even then the latin hypercube $Q_n^d$ has no transversals.
\end{prop}
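The construction can be checked by brute force for the smallest parameters (a sketch, feasible only for tiny orders and dimensions):

```python
from itertools import permutations, product

def perm_Z(n, dim):
    """Permanent of the dim-dimensional (0,1)-matrix Z of order n with
    z_alpha = 1 iff alpha_1 + ... + alpha_dim = 0 (mod n): count the
    diagonals {(i, s2(i), ..., s_dim(i))} whose entries are all 1."""
    return sum(
        all((i + sum(s[i] for s in sigmas)) % n == 0 for i in range(n))
        for sigmas in product(permutations(range(n)), repeat=dim - 1)
    )

print(perm_Z(2, 3))  # 0: d = n = 2 even, no transversals
print(perm_Z(2, 4))  # 4: even-dimensional Z has a positive permanent
print(perm_Z(3, 3))  # 3: odd order, positive permanent
```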
There are no known examples of latin squares of odd order with no transversals, and in 1967 Ryser conjectured the following.
\begin{con}[Ryser,~\cite{ryser}]
All latin squares of odd order have a transversal.
\end{con}
This conjecture is related to the conjecture of Stein~\cite{stein} and Brualdi~\cite{brualdi} claiming that every latin square of order $n$ has a partial transversal of length $n-1$.
Both conjectures have attracted a lot of attention and motivated a number of researchers in recent years (see, e.g., the recent works~\cite{aharoni,keevtrans,pokrovskiy} and the survey~\cite{wanless} for some history). Ryser's conjecture is equivalent to the statement that all 3-dimensional polystochastic $(0,1)$-matrices have a positive permanent.
In~\cite{sun} Sun proved that latin hypercubes corresponding to even-dimensional matrices $Z^d_n$ have a transversal, and so all such matrices have a positive permanent. He also conjectured that all $4$-dimensional polystochastic $(0,1)$-matrices have a permanent greater than zero.
\begin{con}[Sun,~\cite{sun}]
Every $3$-dimensional latin hypercube has a transversal.
\end{con}
In~\cite{wantrans} McKay, McLeod, and Wanless and in~\cite{cencus} McKay and Wanless enumerated all latin squares and latin hypercubes of small orders and dimensions. Counting transversals in all of them yields the following.
\begin{prop}
\begin{itemize}
\item Every latin square of odd order $n \leq 9$ has a transversal.
\item Every $3$-dimensional latin hypercube of order $n \leq 6$ has a transversal.
\item Except for latin hypercubes corresponding to matrices $Z^5_2$ and $Z^5_4$, all $4$-dimensional latin hypercubes of order $n \leq 5$ have a transversal.
\item Every $5$-dimensional latin hypercube of order $n \leq 5$ has a transversal.
\end{itemize}
\end{prop}
On the basis of these results, Wanless put forward the following conjecture.
\begin{con}[Wanless,~\cite{wanless}] \label{hyp01}
Every latin hypercube of odd order or odd dimension has a transversal.
\end{con}
This conjecture generalizes Ryser's and Sun's conjectures, and the following conjecture, in turn, generalizes all of them.
\begin{con}[Taranenko,~\cite{myobz}] \label{mainhyp}
The permanent of every polystochastic matrix of odd order or even dimension is greater than zero.
\end{con}
For multidimensional matrices of small orders, this conjecture was confirmed by the author.
\begin{thm}
\begin{itemize}
\item Except for matrices $Z_2^d$ of odd dimensions, all polystochastic matrices of order $2$ have a positive permanent. (Taranenko,~\cite{myobz})
\item All polystochastic matrices of order $3$ have a positive permanent. (Taranenko,~\cite{myobz})
\item Except for matrices $Z_4^d$ of odd dimensions, all polystochastic $(0,1)$-matrices of order $4$ have a positive permanent. (Taranenko,~\cite{myquasi})
\end{itemize}
\end{thm}
The main result of the present note is a new supporting case for Conjecture~\ref{mainhyp}.
\begin{thm} \label{poly44}
The permanent of every $4$-dimensional polystochastic matrix of order $4$ is greater than zero.
\end{thm}
The following table summarizes all known results on Conjecture~\ref{mainhyp}.
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
$n \setminus d$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & $\ldots$ & $2k$ & $2k+1$ \\
\hline \hline
2 & ~~+~~ & \cellcolor{gray!50} & + & \cellcolor{gray!50} & + & \cellcolor{gray!50} ~~~~~ & + & $\ldots$ & + & \cellcolor{gray!50} \\
\hline
3 & + & + & + & + & + & + & + & $\ldots$ & + & + \\
\hline
4 & + & \cellcolor{gray!50} & + & \cellcolor{gray!50} & $(0,1)$ & \cellcolor{gray!50} & $(0,1)$ & $\ldots$ & $(0,1)$ & \cellcolor{gray!50} \\
\hline
5 & + & $(0,1)$ & $(0,1)$ & $(0,1)$ & $(0,1)$ & & & $\ldots$ & & \\
\hline
6 & + & \cellcolor{gray!50} & $(0,1)$ & \cellcolor{gray!50} & & \cellcolor{gray!50} & & $\ldots$ & & \cellcolor{gray!50} \\
\hline
7 & + & $(0,1)$ & & & & & & $\ldots$ & & \\
\hline
8 & + & \cellcolor{gray!50} & & \cellcolor{gray!50} & & \cellcolor{gray!50} & & $\ldots$ & & \cellcolor{gray!50} \\
\hline
9 & + & $(0,1)$ & & & & & & $\ldots$ & & \\
\hline
10 & + & \cellcolor{gray!50} & & \cellcolor{gray!50} & & \cellcolor{gray!50} & & $\ldots$ & & \cellcolor{gray!50} \\
\hline
11 & + & & & & & & & $\ldots$ & & \\
\hline
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ \\
\hline
$2m$ & + & \cellcolor{gray!50} & & \cellcolor{gray!50} & & \cellcolor{gray!50} & & $\ldots$ & & \cellcolor{gray!50} \\
\hline
$2m+1$ & + & & & & & & & $\ldots$ & & \\
\hline
\end{tabular}
\end{center}
\begin{center}
\textbf{Table 1.}
Gray cells correspond to parameters for which there exist polystochastic matrices with zero permanent, ``$+$'' means that all polystochastic matrices of such dimension and order have a positive permanent, and ``$(0,1)$'' is used for cases when a proof of the conjecture is known only for polystochastic $(0,1)$-matrices. For parameters corresponding to empty cells, Conjecture~\ref{mainhyp} remains completely open.
\end{center}
\section{Auxiliary lemmas}
A $k \times m$ \textit{row-latin rectangle} $R$ is a table with $k$ rows and $m$ columns filled with $m$ symbols so that each row contains all $m$ symbols. A \textit{transversal} in the rectangle $R$ is a set of $\min\left\{k,m\right\}$ entries hitting each row, each column, and each symbol no more than once. Two row-latin rectangles are said to be \textit{equivalent} if one can be obtained from the other by row, column, and symbol permutations.
\begin{lem} \label{rectangle}
Up to equivalence, the row-latin rectangle
$$T = \begin{array} {ccc}
1 & 2 & 3 \\ 1 & 2 & 3 \\ 2 & 3 & 1 \\ 2 & 3 & 1
\end{array}$$
is the unique $4 \times 3$ row-latin rectangle with no transversals. Moreover, if we change any symbol of this rectangle to another one, then we get a (not necessarily row-latin) rectangle with a transversal.
\end{lem}
\begin{proof}
Let us list all $4 \times 3$ row-latin rectangles up to equivalence:
$$\begin{array} {ccc} \textbf{\underline{1}} & 2 & 3 \\ 1 & \textbf{\underline{2}} & 3 \\ 1 & 2 & \textbf{\underline{3}} \\ 1 & 2 & 3 \end{array}~~~~~~
\begin{array} {ccc} \textbf{\underline{1}} & 2 & 3 \\ 1 & \textbf{\underline{2}} & 3 \\ 1 & 2 & \textbf{\underline{3}} \\ 1 & 3 & 2 \end{array}~~~~~~
\begin{array} {ccc} \textbf{\underline{1}} & 2 & 3 \\ 1 & \textbf{\underline{2}} & 3 \\ 1 & 2 & \textbf{\underline{3}} \\ 2 & 3 & 1 \end{array}~~~~~~
\begin{array} {ccc} 1 & 2 & 3 \\ \textbf{\underline{1}} & 2 & 3 \\ 1 & \textbf{\underline{3}} & 2 \\ 1 & 3 & \textbf{\underline{2}} \end{array}~~~~~~
\begin{array} {ccc} 1 & \textbf{\underline{2}} & 3 \\ 1 & 2 & \textbf{\underline{3}} \\ \textbf{\underline{1}} & 3 & 2 \\ 2 & 3 & 1 \end{array}
$$$$
\begin{array} {ccc} \textbf{\underline{1}} & 2 & 3 \\ 1 & \textbf{\underline{2}}& 3 \\ 1 & 3 & 2 \\ 2 & 1 & \textbf{\underline{3}} \end{array}~~~~~~
\begin{array} {ccc} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 2 & 3 & 1 \\ 2 & 3 & 1 \end{array}~~~~~~
\begin{array} {ccc} \textbf{\underline{1}} & 2 & 3 \\ 1 & 2 & 3 \\ 2 & \textbf{\underline{3}} & 1 \\ 3 & 1 & \textbf{\underline{2}} \end{array}~~~~~~
\begin{array} {ccc} \textbf{\underline{1}} & 2 & 3 \\ 1 & \textbf{\underline{3}} & 2 \\ 2 & 1 & 3 \\ 3 & 1 & \textbf{\underline{2}} \end{array}~~~~~~
\begin{array} {ccc} 1 & \textbf{\underline{2}} & 3 \\ \textbf{\underline{1}} & 3 & 2 \\ 2 & 1 & \textbf{\underline{3}} \\ 3 & 2 & 1 \end{array}
$$
For each row-latin rectangle, except for the rectangle $T$, a transversal is underlined. The second property of the rectangle $T$ is verified directly.
\end{proof}
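Both claims of the lemma can be double-checked mechanically; a brute-force sketch (rows indexed from 0; a transversal here consists of 3 entries in distinct rows and distinct columns with distinct symbols):

```python
from itertools import combinations, permutations

def has_transversal(R):
    """R: k x 3 rectangle over symbols {1,2,3}.  A transversal picks 3
    entries in distinct rows and distinct columns with distinct symbols."""
    k = len(R)
    return any(
        len({R[r][c] for r, c in zip(rows, cols)}) == 3
        for rows in combinations(range(k), 3)
        for cols in permutations(range(3))
    )

T = [[1, 2, 3], [1, 2, 3], [2, 3, 1], [2, 3, 1]]
assert not has_transversal(T)

# changing any single symbol of T creates a transversal
for r in range(4):
    for c in range(3):
        for s in {1, 2, 3} - {T[r][c]}:
            T2 = [row[:] for row in T]
            T2[r][c] = s
            assert has_transversal(T2)
print("both claims of the lemma verified")
```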
\begin{lem} \label{partdiag}
In a doubly stochastic matrix of order $4$ every positive partial diagonal of length $2$ can be extended to a positive partial diagonal of length $3$.
\end{lem}
\begin{proof}
Assume that $A$ is a doubly stochastic matrix of order $4$ and $p$ is a positive partial diagonal of length $2$ that cannot be extended to a positive partial diagonal of length $3$. Then every positive entry of the matrix $A$ shares a row or a column with at least one of the elements of $p$; equivalently, all positive entries of $A$ are covered by two rows and two columns. Since the sum of entries in each row and each column is exactly $1$, the total sum of all entries of $A$ equals $4$, while the sum of the entries covered by the two rows and two columns equals $2 + 2 - s$, where $s$ is the sum of the four entries lying at the intersections of these rows and columns. Hence $s = 0$ and all intersection entries are zero: a contradiction with the positivity of $p$, whose entries lie at these intersections.
\end{proof}
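The lemma can also be sanity-checked numerically on random sparse doubly stochastic matrices, generated (via Birkhoff's theorem) as convex combinations of a few random permutation matrices; the generator and the tolerance below are illustrative choices:

```python
import random
from itertools import product

def random_ds(n=4, k=3, seed=0):
    """Sparse doubly stochastic matrix of order n: a random convex
    combination of k permutation matrices."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(k)]
    total = sum(w)
    A = [[0.0] * n for _ in range(n)]
    for wi in w:
        p = list(range(n))
        rng.shuffle(p)
        for i in range(n):
            A[i][p[i]] += wi / total
    return A

def extends(A, eps=1e-12):
    """Check that every positive partial diagonal of length 2 extends
    to a positive partial diagonal of length 3."""
    n = len(A)
    for r1, c1, r2, c2 in product(range(n), repeat=4):
        if r1 >= r2 or c1 == c2 or A[r1][c1] <= eps or A[r2][c2] <= eps:
            continue
        if not any(A[r][c] > eps
                   for r in range(n) if r not in (r1, r2)
                   for c in range(n) if c not in (c1, c2)):
            return False
    return True

assert all(extends(random_ds(seed=s)) for s in range(50))
print("lemma holds on 50 random doubly stochastic matrices of order 4")
```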
\section{Proof of Theorem~\ref{poly44}}
\begin{proof}
Let us try to construct a 4-dimensional polystochastic matrix $A$ of order 4 with a zero permanent. The construction takes four steps.
\textbf{Step 1.} Without loss of generality, assume that entry $a_{0,0,0,0}$ is greater than zero.
Consider the $2$-dimensional plane $B$ composed of indices of the form $(*,*,0,0)$, where $*$ stands for an arbitrary index from $\left\{0, \ldots, 3\right\}$. The matrix $B$ is doubly stochastic, so by the Birkhoff theorem, it contains a positive diagonal. Without loss of generality, let the entries of the matrix $A$ with indices $(i,i,0,0)$, $i \in \left\{0, \ldots, 3\right\}$, be positive.
\textbf{Step 2.} Let us denote by $B_i$ the 2-dimensional planes of $A$ composed of indices $(i,i,*,*)$. As before, each $B_i$ is a doubly stochastic matrix. Assume that $p_i = \left\{(i,i,\beta_i^j,\gamma_i^j)\right\}_{j=1}^4$ is a positive diagonal in the matrix $B_i$ containing the index $(i,i,0,0)$. Consider the $4 \times 3$ rectangle $R$ in which the entry in the $(i+1)$-th row and the $\beta_i^j$-th column equals $\gamma_i^j$. It is not hard to observe that $R$ is a row-latin rectangle and that each transversal in $R$ gives a positive diagonal in the matrix $A$.
By Lemma~\ref{rectangle}, the rectangle $T$ is, up to equivalence, the unique $4 \times 3$ row-latin rectangle with no transversals. Moreover, changing any symbol of $T$ produces a transversal. So we may assume that the entries of the matrix $A$ with the following indices obtained from the rectangle $T$
\begin{gather*}
(0,0,0,0),~(0,0,1,1),~(0,0,2,2),~(0,0,3,3), \\
(1,1,0,0),~(1,1,1,1),~(1,1,2,2),~(1,1,3,3), \\
(2,2,0,0),~(2,2,1,2),~(2,2,2,3),~(2,2,3,1), \\
(3,3,0,0),~(3,3,1,2),~(3,3,2,3),~(3,3,3,1)
\end{gather*}
are positive and that for all other indices of the form $(i,i,\beta, \gamma)$, where $i \in \left\{0, \ldots, 3\right\}$ and $\beta, \gamma \in \left\{1, 2, 3\right\}$, the entries of $A$ are equal to zero.
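The two claimed properties of the rectangle $T$ (no transversal, and a transversal after any single change of a symbol) can be verified by brute force; a small Python sketch (an illustration, not part of the proof):

```python
from itertools import permutations

# Rows of the 4x3 row-latin rectangle T, read off from the index list above:
# row i records, for beta = 1, 2, 3, the symbol gamma with (i,i,beta,gamma) positive.
T = [(1, 2, 3),
     (1, 2, 3),
     (2, 3, 1),
     (2, 3, 1)]

def has_transversal(rect):
    """Transversal: one cell per column, in pairwise distinct rows,
    carrying pairwise distinct symbols."""
    n_cols = len(rect[0])
    for rows in permutations(range(len(rect)), n_cols):
        if len({rect[r][c] for c, r in enumerate(rows)}) == n_cols:
            return True
    return False

assert not has_transversal(T)          # T has no transversal ...
for r in range(4):                     # ... but any single change creates one
    for c in range(3):
        for s in (1, 2, 3):
            if s != T[r][c]:
                changed = [list(row) for row in T]
                changed[r][c] = s
                assert has_transversal(changed)
```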
\textbf{Step 3.} For $k \in \left\{1,2,3\right\}$ denote by $C_k$ the $2$-dimensional planes of $A$ composed of indices $(*,*,k,k)$. Note that the doubly stochastic matrices $C_k$ contain positive partial diagonals of length 2 formed by indices $(0,0,k,k)$ and $(1,1,k,k)$. By Lemma~\ref{partdiag}, each of these diagonals can be extended to a positive partial diagonal of length 3 by new indices $(\mu_k, \nu_k,k,k)$, where $\mu_k, \nu_k \in \left\{2,3\right\}$ and $\mu_k \neq \nu_k$.
If for some $k_1, k_2$ we have $\mu_{k_1} = \nu_{k_2} = 2$ and $\mu_{k_2} = \nu_{k_1} = 3$, then we have a positive diagonal
$$\left\{(0,0,0,0), (1,1, k_3, k_3), (\mu_{k_1}, \nu_{k_1}, k_1, k_1), (\mu_{k_2}, \nu_{k_2}, k_2, k_2)\right\}$$
in the matrix $A$, where $k_3 \neq k_1, k_2$.
Therefore, the only remaining possibility for $A$ not to have a positive diagonal is that for each $k \in \left\{1,2,3\right\}$ all entries with indices $(2,3, k, k)$ are positive and all entries with indices $(3,2, k, k)$ are zero (or vice versa).
\textbf{Step 4.} For each $k \in \left\{1,2,3\right\}$ consider the line composed of indices of the form $(*,2,k,k)$. Note that this line contains two zero entries, namely the entries with indices $(2,2,k,k)$ and $(3,2,k,k)$. If we suppose that the entry with index $(1,2,k,k)$ is also equal to zero, we obtain a contradiction with the polystochasticity of the matrix $A$: in this case $a_{0,2,k,k} = 1$ while $a_{0,0,k,k} > 0$, so the line $(0,*,k,k)$ sums to more than $1$. Therefore, all entries $a_{1,2,k,k}$ are greater than zero. By similar reasoning, for each $k \in \left\{1,2,3\right\}$ the entries $a_{3,1,k,k}$ are positive. But then the matrix $A$ has a positive diagonal, for example:
$$\left\{ (0,0,0,0), (1,2,1,1), (2,3,2,2), (3,1,3,3) \right\},$$
a contradiction. The structure of the matrix $A$ obtained after all four steps is illustrated in Table 2.
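One can verify mechanically that the displayed index set is a diagonal, i.e., that in each of the four coordinates the indices take all four values; a short check (illustration only):

```python
diagonal = [(0, 0, 0, 0), (1, 2, 1, 1), (2, 3, 2, 2), (3, 1, 3, 3)]
# In every coordinate the four indices must take all four values 0..3.
for coord in range(4):
    assert sorted(idx[coord] for idx in diagonal) == [0, 1, 2, 3]
```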
$$\begin{array} {cccc|cccc|cccc|cccc}
+_1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\
. & +_2 & 0_2 & 0_2 & . & . & . & . & . & . & . & . & . & . & . & . \\
. & 0_2 & +_2 & 0_2 & . & . & . & . & . & . & . & . & . & . & . & . \\
. & 0_2 & 0_2 & +_2 & . & . & . & . & . & . & . & . & . & . & . & . \\
\hline
. & . & . & . & +_1 & . & . & . & . & . & . & . & . & . & . & . \\
. & . & . & . & . & +_2 & 0_2 & 0_2 & . & +_4 & . & . & . & . & . & . \\
. & . & . & . & . & 0_2 & +_2 & 0_2 & . & . & +_4 & . & . & . & . & . \\
. & . & . & . & . & 0_2 & 0_2 & +_2 & . & . & . & +_4 & . & . & . & . \\
\hline
. & . & . & . & . & . & . & . & +_1 & . & . & . & . & . & . & . \\
. & . & . & . & . & . & . & . & . & 0_2 & +_2 & 0_2 & . & +_3 & . & . \\
. & . & . & . & . & . & . & . & . & 0_2 & 0_2 & +_2 & . & . & +_3 & . \\
. & . & . & . & . & . & . & . & . & +_2 & 0_2 & 0_2 & . & . & . & +_3 \\
\hline
. & . & . & . & . & . & . & . & . & . & . & . & +_1 & . & . & . \\
. & . & . & . & . & +_4 & . & . & . & 0_3 & . & . & . & 0_2 & +_2 & 0_2 \\
. & . & . & . & . & . & +_4 & . & . & . & 0_3 & . & . & 0_2 & 0_2 & +_2 \\
. & . & . & . & . & . & . & +_4 & . & . & . & 0_3 & . & +_2 & 0_2 & 0_2 \\
\end{array}$$
\begin{center}
\textbf{Table 2.} The 4-dimensional matrix $A$ of order 4 after the last step.
``$+$'' denotes a positive entry, ``$0$'' is a zero entry, dots are used for insignificant entries. Subscripts indicate steps on which entries are specified.
\end{center}
\end{proof}
\section{Introduction}
Recently, non-Lorentzian geometry has gained interest in the theoretical
physics community for several reasons. Firstly, it is now well known
that strongly correlated systems in condensed matter can be
successfully described with the help of non-relativistic holography
\cite{Christensen:2013lma,Christensen:2013rfa,Hartong:2014oma}; for a
review see, for example, \cite{Hartnoll:2016apf}. This duality is
based on the idea that the strongly coupled theory on the boundary
can be described by string theory in the bulk. Further, when the
curvature of the space-time is small we can use classical
gravity instead of the full string theory machinery. In the case of
non-relativistic holography the situation is even more interesting,
since we have basically two possibilities: either we use an Einstein
metric with non-relativistic isometries
\cite{Son:2008ye,Balasubramanian:2008dm,Herzog:2008wg} or we
introduce non-relativistic gravities in the bulk
\cite{Son:2013rqa,Janiszewski:2012nb}, like Newton-Cartan gravity
\cite{Cartan:1923zea} \footnote{For some recent works, see
\cite{Bergshoeff:2017dqq,Bergshoeff:2016lwr,Afshar:2015aku,
Bergshoeff:2015ija,Bergshoeff:2015uaa,Bergshoeff:2014uea,
Andringa:2010it,Hartong:2015zia,Hartong:2016yrf,Bergshoeff:2017btm,Bergshoeff:2017dqq,
Grosvenor:2017dfs,Jensen:2014aia,Jensen:2014ama,Jensen:2014wha}.}
or Ho\v{r}ava gravity \cite{Horava:2009uw}. It is also very instructive
to analyze extended objects in Newton-Cartan theory
\cite{Andringa:2012uz,Harmark:2017rpg} \footnote{For the analysis of
point particles in this background, see
\cite{Barducci:2017mse,Kluson:2017pzr}.}. In \cite{Andringa:2012uz}
an action for the non-relativistic string in a Newton-Cartan background
was proposed that has many interesting properties. For example, it
was argued in \cite{Andringa:2012uz} that in order to
correctly define an action for the non-relativistic string in a Newton-Cartan
background, two longitudinal directions have to be selected, and hence
we obtain a more general form of the Newton-Cartan geometry. The
canonical analysis of this string was performed recently in
\cite{Kluson:2017abm}. During this analysis we encountered an obstacle:
we were unable to derive the Hamiltonian constraint for the string
with a non-zero gauge field $m_\mu^{ \ a}$, which will be defined in the
next section. For that reason we were forced to restrict ourselves to the case
of a vanishing gauge field $m_\mu^{ \ a}$, and then we were able to
determine the canonical structure of the non-relativistic string in the
Newton-Cartan background. We proceeded in the same way with the
non-relativistic p-brane, which we defined using the limiting
procedure introduced in \cite{Bergshoeff:2015uaa}. We again found the
corresponding action for the non-relativistic p-brane in the Newton-Cartan
background and determined the canonical structure of this theory under the
condition that the gauge field $m_\mu^{ \ a}$ vanishes.
The fact that in our previous work we considered the situation when
the gauge field $m_\mu^{ \ a}$ vanishes is rather unsatisfactory,
since this field is crucial for the invariance of the theory under
Milne boosts. It would be desirable to develop the full canonical formalism
with this field non-zero. We suggested in the conclusion of our
previous paper \cite{Kluson:2017abm} that one way to proceed is
to start with the Hamiltonian for the string in a general background
and then perform the limiting procedure, generalizing the
approach introduced in \cite{Bergshoeff:2015uaa} to the case of two
longitudinal directions. This is exactly the goal of our paper. We
start with the Hamiltonian for the relativistic string in a general
background and introduce relativistic vierbeins and an NSNS two-form that
are functions of the fields that define the Newton-Cartan background. These
fields also depend on a free parameter that is taken to infinity when
we define Newton-Cartan gravity \cite{Bergshoeff:2015uaa}. As a
result we will be able to find the corresponding Hamiltonian for the
string in the Newton-Cartan background. However, this is certainly not
the end of the story since we have to perform consistency checks of
this proposal. Explicitly, we have to show that the constraints that
define this theory are first-class constraints. It turns out
that this is a non-trivial task due to the complicated form of the
Hamiltonian. Secondly, we would like to find the Lagrangian for this
non-relativistic string and investigate how it is related to the
Lagrangian density proposed in \cite{Andringa:2012uz}. To do this we
carefully examine the invariance of the Hamiltonian constraint under
generalized Milne boosts. We show that the Hamiltonian constraint can
be rewritten with the help of variables that are manifestly
invariant under Milne transformations, so that the Hamiltonian is
invariant too. Then we can proceed to the analysis of the corresponding
Lagrangian. As a warm-up we consider the case of the
non-relativistic string in a flat background. We show that there is a
crucial difference between the inverse Legendre transformation for
the relativistic string and for the non-relativistic one. Explicitly, we
show that in the case of the non-relativistic string the Lagrange
multipliers corresponding to the Hamiltonian and spatial diffeomorphism
constraints are determined by the projections of the equations of motion
for $x^\mu$ onto the longitudinal directions rather than by their own
equations of motion. We are then able to find a Lagrangian that agrees with the
Lagrangian found in \cite{Andringa:2012uz}. Further, we proceed to
the most general case of the non-relativistic string in the
Newton-Cartan background, where the analysis is much more
complicated. Despite this fact, we find the Lagrangian form of the
non-relativistic string in the Newton-Cartan background, which is
manifestly diffeomorphism invariant and agrees with the Lagrangian
density proposed in \cite{Andringa:2012uz}.
This paper is organized as follows. In the next section (\ref{second}) we introduce the canonical form of the relativistic string action, perform the limiting procedure that leads to the Hamiltonian for the non-relativistic string in the Newton-Cartan background, and determine the Poisson algebra of constraints. In section (\ref{third}) we find the Lagrangian for the non-relativistic string in the Newton-Cartan background. Finally, in the conclusion (\ref{fourth}) we summarize our results and suggest possible extensions of this work.
\section{Canonical Formulation of Non-relativistic String in Newton-Cartan Background}\label{second}
We start with the Nambu-Goto form of the action for the relativistic string in a general background
\begin{equation}\label{funstringact}
S=-\tilde{\tau}_F \int d\tau d\sigma\sqrt{-\det (E_\mu^{ \ A}E_\nu^{ \ B}\eta_{AB}
\partial_\alpha x^\mu\partial_\beta x^\nu)}+\tilde{\tau}_F
\int d\tau d\sigma B_{\mu\nu}\partial_\tau x^\mu \partial_\sigma x^\nu \ ,
\end{equation}
where $E_\mu^{ \ A}$ is the $d$-dimensional vierbein, so that the metric components have the form
\begin{equation}
G_{\mu\nu}=E_\mu^{ \ A}E_\nu^{ \ B}\eta_{AB} \ , \eta_{AB}=\mathrm{diag}(-1,\dots,1) \ .
\end{equation}
Note that the inverse metric $G^{\mu\nu}$ is defined with the help of the inverse vierbein $E^\mu_{ \ B}$, which obeys the relations
\begin{equation}
E_\mu^{ \ A}E^\mu_{ \ B}=\delta^A_{B} \ , \quad E_\mu^{ \ A}E^\nu_{ \ A}=
\delta^{\nu}_{\mu} \ .
\end{equation}
Further, $B_{\mu\nu}$ is the NSNS two-form field. Finally, $x^\mu$, $\mu=0,\dots,d-1$, are the embedding coordinates of the string, the two-dimensional world-sheet is parameterized by $\sigma^\alpha\equiv(\tau,\sigma)$, and $\tilde{\tau}_F$ is the string tension, which may eventually be rescaled when we define the non-relativistic string.
Our goal is to find the Hamiltonian for the non-relativistic string in the Newton-Cartan background with the help of the following procedure. As the first step we determine
the Hamiltonian corresponding to the action (\ref{funstringact}).
Explicitly, from (\ref{funstringact}) we find the following conjugate momenta
\begin{equation}\label{pmu}
p_\mu=-\tilde{\tau}_F E_\mu^{ \ A}E_\nu^{ \ B}\eta_{AB}\partial_\beta x^\nu
g^{\beta\tau}\sqrt{-\det g_{\alpha\beta}}+\tilde{\tau}_F B_{\mu\nu}\partial_\sigma x^\nu \ ,
\end{equation}
where
\begin{equation}
g_{\alpha\beta}\equiv G_{\mu\nu}\partial_\alpha x^\mu \partial_\beta x^\nu \ , \quad g^{\alpha\beta}g_{\beta\gamma}=\delta^\alpha_\gamma \ .
\end{equation}
Using (\ref{pmu}) we immediately find that the bare Hamiltonian
$H_B=\int d\sigma (p_\mu \partial_\tau x^\mu-\mathcal{L})$ vanishes,
while we have the following two primary constraints
\begin{eqnarray}
\mathcal{H}_\tau &\equiv& (p_\mu-\tilde{\tau}_F B_{\mu\rho}\partial_\sigma x^\rho)
E^\mu_{ \ A}E^\nu_{ \ B}\eta^{AB}(p_\nu-\tilde{\tau}_F B_{\nu\kappa}
\partial_\sigma x^\kappa)+ \nonumber \\
&+& \tilde{\tau}_F^2 \partial_\sigma x^\mu E_\mu^{ \ A}
\eta_{AB}E_\nu^{ \ B}\partial_\sigma x^\nu\approx 0 \ , \quad
\mathcal{H}_\sigma \equiv p_\mu \partial_\sigma x^\mu \approx 0 \ . \nonumber \\
\end{eqnarray}
Now we are ready to find the Hamiltonian for the string in the Newton-Cartan background with the help
of the non-relativistic limit of the relativistic vierbein $E_\mu^{ \ A}$ \cite{Bergshoeff:2015uaa}. However, as we argued
in our recent paper \cite{Kluson:2017abm}, in order to find the correct non-relativistic
limit we have to introduce the generalization of Newton-Cartan gravity following
\cite{Andringa:2012uz}. Explicitly,
we split the target-space indices $A$ as $A=(a',a)$, where now $a=0,1$ and $a'=2,\dots,d-1$. Then we introduce $\tau_\mu^{ \ a}$ so that we can write
\begin{equation}
\tau_{\mu\nu}=\tau_\mu^{ \ a}\tau_\nu^{ \ b}
\eta_{ab} \ , \quad a,b=0,1 \ .
\end{equation}
In the same way we introduce the vierbein $e_\mu^{ \ a'}$, $a'=2,\dots,d-1$, and also the gauge field $m_\mu^{ \ a}$. The $\tau_\mu^{ \ a}$ can be interpreted as the gauge fields of the longitudinal translations and $e_\mu^{ \ a'}$ as the gauge fields of the transverse translations
\cite{Andringa:2012uz}. Then we can also introduce the inverse vierbeins $\tau^\mu_{ \ a}$ and $e^\mu_{ \ a'}$, which obey the relations
\begin{eqnarray}
e_\mu^{ \ a'}e^\mu_{ \ b'}=\delta^{a'}_{b'} \ , \quad
e_\mu^{ \ a'}e^\nu_{ \ a'}=\delta_\mu^\nu-\tau_\mu^{ \ a}
\tau^\nu_{ \ a} \ , \quad \tau^\mu_{ \ a}\tau_\mu^{ \ b}=\delta_a^b \ , \quad
\tau^\mu_{ \ a}e_\mu^{ \ a'}=0 \ , \quad
\tau_\mu^{ \ a}e^\mu_{ \ a'}=0 \ . \nonumber \\
\end{eqnarray}
Now we are ready to introduce the following parameterization of the vierbein $E_\mu^{ \ A}$
\cite{Bergshoeff:2015uaa}
\begin{equation}\label{relvier}
E_\mu^{ \ a}=\omega \tau_\mu^{ \ a}+\frac{1}{2\omega}m_\mu^{ \ a} \ , \quad
E_\mu^{ \ a'} =e_\mu^{ \ a'} \ ,
\end{equation}
where $\omega$ is a free parameter that we take to infinity when we define the non-relativistic limit. Note that the inverse vierbein corresponding to (\ref{relvier}) has the form (up to terms of order $\omega^{-3}$)
\begin{equation}\label{relvierinv}
E^\mu_{ \ a}=\frac{1}{\omega}\tau^\mu_{ \ a}-\frac{1}{2\omega^3}\tau^\mu_{ \ b}m_\rho^{ \ b}
\tau^\rho_{ \ a} \ , \quad E^\mu_{ \ a'}=e^\mu_{ \ a'}
-\frac{1}{2\omega^2} \tau^\mu_{ \ a}m_\rho^{ \ a}e^\rho_{ \ a'} \ .
\end{equation}
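As an illustration (not part of the argument), the truncated inverse (\ref{relvierinv}) can be tested numerically in a toy $d=3$ background with flat $\tau_\mu^{ \ a}$, $e_\mu^{ \ a'}$ and an arbitrarily chosen gauge field $m_\mu^{ \ a}$; all concrete numbers below are assumptions made only for this test:

```python
import numpy as np

# Toy d = 3 background: flat tau_mu^a (a = 0, 1), flat e_mu^2, arbitrary m_mu^a.
tau = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # tau_mu^a
tau_inv = tau.copy()                                   # tau^mu_a
e = np.array([0.0, 0.0, 1.0])                          # e_mu^2
e_inv = e.copy()                                       # e^mu_2
m = np.array([[0.3, -0.1], [0.2, 0.5], [-0.4, 0.1]])   # m_mu^a

def E(omega):
    """Relativistic vierbein E_mu^A of eq. (relvier), columns A = (0, 1, 2)."""
    out = np.zeros((3, 3))
    out[:, :2] = omega * tau + m / (2 * omega)
    out[:, 2] = e
    return out

def E_inv_truncated(omega):
    """Truncated inverse vierbein E^mu_A of eq. (relvierinv); rows = A."""
    out = np.zeros((3, 3))
    out[:2, :] = (tau_inv / omega
                  - (tau_inv @ m.T @ tau_inv) / (2 * omega**3)).T
    out[2, :] = e_inv - tau_inv @ (m.T @ e_inv) / (2 * omega**2)
    return out

def err(omega):
    """Deviation of the truncation from the exact matrix inverse."""
    return np.max(np.abs(E_inv_truncated(omega) - np.linalg.inv(E(omega))))

# The truncation error falls off by (at least) omega^{-4}.
assert err(10.0) < 1e-3
assert err(100.0) < 1e-2 * err(10.0)
```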
Then with the help of (\ref{relvier}) and (\ref{relvierinv}) we obtain following form of the metric $G_{\mu\nu}$ and its inverse
\begin{eqnarray}
G_{\mu\nu}&=&E_\mu^{ \ a}E_\nu^{ \ b}\eta_{ab}+E_\mu^{ \ a'}E_\nu^{ \ b'}\delta_{a'b'}
=\nonumber \\
&=&\omega^2 \tau_{\mu\nu}+h_{\mu\nu}+\frac{1}{2}\tau_\mu^{ \ a}m_\nu^{ \ b}\eta_{ab}+
\frac{1}{2}m_\mu^{ \ a}\tau_\nu^{ \ b}\eta_{ab}+\frac{1}{4\omega^2}m_\mu^{ \ a}m_\nu^{ \ b}
\eta_{ab} \ , \nonumber \\
G^{\mu\nu}&=&E^\mu_{ \ a}E^\nu_{ \ b}\eta^{ab}+E^\mu_{ \ a'}E^\nu_{ \ b'}\delta^{a' b'}=
\nonumber \\
&=& \frac{1}{\omega^2}\tau^{\mu\nu}+h^{\mu\nu}
-\frac{1}{2\omega^2}(\tau^\nu_{ \ b}m_\rho^{ \ b}h^{\rho\mu}
+\tau^\mu_{ \ b}m_\rho^{ \ b}h^{\rho\nu})
-\nonumber \\
&-&\frac{1}{2\omega^4}
(\tau^\mu_{ \ c}m^c_{ \ \rho}\tau^{\rho\nu}+
\tau^\nu_{ \ d}m^d_{ \ \rho}\tau^{\rho\mu})
+\frac{1}{4\omega^4}\tau^\mu_{ \ a}m_\rho^{ \ a}
h^{\rho\sigma}\tau^\nu_{ \ b}m_\sigma^{\ b}+O(\omega^{-6}) \ , \nonumber \\
\end{eqnarray}
where
\begin{equation}
h^{\mu\nu}=e^\mu_{ \ a'}e^\nu_{ \ b'}\delta^{a'b'} \ , \quad
h_{\mu\nu}=e_\mu^{ \ a'}e_\nu^{ \ b'}\delta_{a'b'} \ , \quad
\tau^{\mu\nu}=\tau^\mu_{ \ a}\tau^\nu_{ \ b}\eta^{ab} \ .
\end{equation}
As the next step we have to introduce an appropriate parameterization of the NSNS two-form. We suggested in \cite{Kluson:2017abm}
that it is natural to consider the following form of the NSNS two-form
\begin{eqnarray}
B_{\mu\nu}&=&\left(\omega\tau_\mu^{ \ a}-\frac{1}{2\omega}m_\mu^{ \ a}\right)\left( \omega\tau_\nu^{ \ b}-\frac{1}{2\omega}m_\nu^{ \ b}\right)\epsilon_{ab}
=\nonumber \\
&=&\omega^2\tau_\mu^{ \ a}\tau_\nu^{ \ b}\epsilon_{ab}-
\frac{1}{2}\left(m_\mu^{ \ a}\tau_{\nu}^{ \ b}+
\tau_\mu^{\ a}m_\nu^{ \ b}\right)\epsilon_{ab}+\frac{1}{4\omega^2}
m_\mu^{ \ a}m_\nu^{ \ b}\epsilon_{ab} \ , \nonumber \\
\end{eqnarray}
where
\begin{equation}
\epsilon_{ab}=-\epsilon_{ba} \ , \quad \epsilon_{01}=1 \ .
\nonumber \\
\end{equation}
With the help of this definition we easily find
\begin{eqnarray}
\frac{1}{\omega^2}\tilde{\tau}_F^2 B_{\mu\rho}\partial_\sigma x^\rho \tau^{\mu\nu}
B_{\nu\kappa}
\partial_\sigma x^\kappa
=-\omega^2\tilde{\tau}_F^2 \tau_{\mu\nu}\partial_\sigma x^\mu \partial_\sigma x^\nu
+O(\omega^0)
\nonumber \\
\end{eqnarray}
and we see that the divergent part of the contribution
\begin{equation}
\frac{1}{\omega^2}\tilde{\tau}_F^2 B_{\mu\rho}\partial_\sigma x^\rho \tau^{\mu\nu}
B_{\nu\kappa}
\partial_\sigma x^\kappa
+\tilde{\tau}_F^2\omega^2\partial_\sigma x^\mu\tau_{\mu\nu}\partial_\sigma x^\nu
\end{equation}
to the Hamiltonian constraint vanishes in the limit $\omega \rightarrow \infty$.
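At leading order this cancellation rests on the two-dimensional identity $\epsilon_{ab}\eta^{ac}\epsilon_{cd}=-\eta_{bd}$, combined with $\tau^{\mu\nu}\tau_\mu^{ \ a}\tau_\nu^{ \ c}=\eta^{ac}$. A quick numerical sketch of the identity (an illustration only):

```python
import numpy as np

eta = np.diag([-1.0, 1.0])                 # eta_{ab}; its inverse equals itself
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # eps_{ab}, eps_{01} = 1

# eps_{ab} eta^{ac} eps_{cd} = -eta_{bd}: the leading omega^2 piece of
# (1/omega^2) B tau^{mu nu} B then cancels the omega^2 tau_{mu nu} term.
lhs = np.einsum('ab,ac,cd->bd', eps, np.linalg.inv(eta), eps)
assert np.allclose(lhs, -eta)
```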
Then we obtain that in the limit $\omega\rightarrow \infty$ the Hamiltonian constraint takes the form
\begin{eqnarray}\label{mHtau}
\mathcal{H}_\tau
&=&p_\mu h^{\mu\nu}p_\nu -2\tau_Fp_\mu
\tau^\mu_{ \ a}\eta^{ab}\epsilon_{bc}\tau_\rho^{ \ c}\partial_\sigma x^\rho
+2\tau_F p_\mu h^{\mu\nu}m_\nu^{ \ b}\epsilon_{bd}
\tau_\rho^{ \ d}\partial_\sigma x^\rho+\nonumber \\
&+&2\tau_F^2\partial_\sigma x^\mu \tau_\mu^{ \ c}\epsilon_{cd}\tau^\nu_{ \ e}
\eta^{ed}m_\nu^{ \ a}\tau_\rho^{ \ b}\epsilon_{ab}\partial_\sigma x^\rho+
2\tau_F^2\partial_\sigma x^\mu \tau_\mu^{ \ a}\eta_{ab}m_\nu^{ \ b}\partial_\sigma x^\nu
-\nonumber \\
&-&\tau_F^2\partial_\sigma x^\kappa \tau_\kappa^{ \ b}\epsilon_{ba}m_\mu^{ \ a}
h^{\mu\nu}m_\nu^{ \ c}\epsilon_{cd}\tau_\rho^{ \ d}\partial_\sigma x^\rho+
\tau_F^2\partial_\sigma x^\mu h_{\mu\nu}\partial_\sigma x^\nu
\nonumber \\
&\equiv &p_\mu h^{\mu\nu}p_\nu+p_\mu V^\mu+\tau_F^2\partial_\sigma x^\mu \bar{H}_{\mu\nu}\partial_\sigma x^\nu \ , \quad V^\mu=V^\mu_{ \ \nu}\partial_\sigma x^\nu \ , \nonumber \\
\end{eqnarray}
where we identify $\tilde{\tau}_F$ with $\tau_F$, since, as follows from the analysis above, it is not necessary
to rescale $\tau_F$ in order to have a finite Hamiltonian in the limit $\omega\rightarrow \infty$.
We see that this form of the Hamiltonian constraint is rather complicated. For that reason it is necessary to check whether it defines a consistent theory. In particular, we would like to see whether the Hamiltonian and spatial diffeomorphism constraints are first-class constraints.
To do this we
calculate the Poisson algebra of constraints. As usual, we introduce the smeared forms of these constraints
\begin{eqnarray}
\mathbf{T}_\tau(N)=\int d\sigma N \mathcal{H}_\tau \ , \quad
\mathbf{T}_\sigma(N^\sigma)=\int d\sigma N^\sigma \mathcal{H}_\sigma
\nonumber \\
\end{eqnarray}
and we easily find
\begin{eqnarray}\label{pbbSbS}
\pb{\mathbf{T}_\sigma(N^\sigma),\mathbf{T}_\sigma(M^\sigma)}=
\int d\sigma (N^\sigma\partial_\sigma M^\sigma-M^\sigma
\partial_\sigma N^\sigma)p_\mu\partial_\sigma x^\mu=
\mathbf{T}_\sigma(N^\sigma\partial_\sigma M^\sigma-M^\sigma
\partial_\sigma N^\sigma) \ . \nonumber \\
\end{eqnarray}
In case of the calculation of the Poisson brackets of two Hamiltonian constraints the situation is more involved since the explicit calculation gives
\begin{eqnarray}
& & \pb{\mathbf{T}_\tau(N),\mathbf{T}_\tau(M)}=\int d\sigma
(N\partial_\sigma M-M\partial_\sigma N)2\tau_F^2(p_\mu h^{
\mu\nu}\bar{H}_{\nu\rho}\partial_\sigma x^\rho+
\partial_\sigma x^\rho \bar{H}_{\rho \mu}h^{\mu\nu}p_\nu)
-
\nonumber \\
&-&2\int d\sigma \tau_F (N\partial_\sigma M-M\partial_\sigma N)
p_\mu V^\mu_{ \ \nu}h^{\nu\omega}p_\omega+
\nonumber \\
&+&\int d\sigma (N\partial_\sigma M-M\partial_\sigma N)
p_\rho V^\rho_{ \ \sigma}V^\sigma_{\ \omega}\partial_\sigma x^\omega-
\nonumber \\
&-&\tau_F^2\int d\sigma (N\partial_\sigma M-M\partial_\sigma N)(V^\mu_{ \ \nu}\partial_\sigma x^\nu \bar{H}_{\mu\rho}\partial_\sigma x^\rho+
\partial_\sigma x^\rho \bar{H}_{\rho\mu}V^\mu_{ \ \nu}\partial_\sigma x^\nu) \ .
\nonumber \\
\end{eqnarray}
To proceed further we calculate
\begin{eqnarray}
& & 2p_\mu h^{\mu\nu}\bar{H}_{\nu\rho}\partial_\sigma x^\rho+2\partial_\sigma x^\rho
\bar{H}_{\rho\mu}h^{\mu\nu}p_\nu= \nonumber \\
& &=
4\tau_F^2p_\mu h^{\mu\nu}h_{\nu\rho}\partial_\sigma x^\rho+
4\tau_F^2 \partial_\sigma x^\mu \tau_\mu^{ \ a}\eta_{ab}m_\nu^{ \ b}
h^{\nu\rho}p_\rho \ ,
\nonumber \\
& & p_\rho V^\rho_{ \ \mu}V^\mu_{ \ \nu}\partial_\sigma x^\nu
= 4\tau_F^2 p_\mu \tau^{\mu\nu}\tau_{\nu\rho}\partial_\sigma x^\rho-
4\tau_F^2p_\mu h^{\mu\nu}m_\nu^{ \ a}\tau_\rho^{ \ b}\eta_{ab}\partial_\sigma x^\rho
\ , \nonumber \\
& & V^\mu_{ \ \nu}\partial_\sigma x^\nu \bar{H}_{\mu\rho}\partial_\sigma x^\rho+
\partial_\sigma x^\rho \bar{H}_{\rho\mu}V^\mu_{ \ \nu}\partial_\sigma x^\nu=0 \ ,
\quad p_\mu V^\mu_{ \ \nu}h^{\nu\omega}p_\omega=0 \ . \nonumber \\
\end{eqnarray}
Collecting these results together we finally obtain
\begin{equation}\label{pbbTbT}
\pb{\mathbf{T}_\tau(N),\mathbf{T}_\tau(M)}=\mathbf{T}_\sigma ((N\partial_\sigma M-M\partial_\sigma N)
4\tau_F^2 ) \
\end{equation}
which is the correct form of the Poisson bracket between Hamiltonian constraints.
Finally we calculate the Poisson bracket
\begin{equation}
\pb{\mathbf{T}_\sigma(N^\sigma),\mathbf{T}_\tau(M)} \ .
\end{equation}
Since
\begin{eqnarray}
\pb{\mathbf{T}_\sigma(N^\sigma),x^\mu}=-N^\sigma\partial_\sigma x^\mu \ , \quad
\pb{\mathbf{T}_\sigma(N^\sigma),p_\mu}=-\partial_\sigma N^\sigma p_\mu-N^\sigma \partial_\sigma p_\mu
\nonumber \\
\end{eqnarray}
we easily find
\begin{equation}
\pb{\mathbf{T}_\sigma (N^\sigma),\mathcal{H}_\tau}=-2\partial_\sigma N^\sigma \mathcal{H}_\tau-
N^\sigma \partial_\sigma \mathcal{H}_\tau
\end{equation}
or alternatively
\begin{equation}\label{pbbTbS}
\pb{\mathbf{T}_\sigma (N^\sigma),\mathbf{T}_\tau(M)}=\mathbf{T}_\sigma (N^\sigma\partial_\sigma M-
\partial_\sigma N^\sigma M) \ .
\end{equation}
We see that all Poisson brackets (\ref{pbbSbS}), (\ref{pbbTbT}) and
(\ref{pbbTbS}) vanish on the constraint surface
$\mathcal{H}_\tau\approx 0$, $\mathcal{H}_\sigma \approx 0$, and hence they are first-class constraints, and the non-relativistic string is a well-defined system from the canonical point of view.
\section{Lagrangian Form}
\label{third}
In this section we focus on the Lagrangian formulation of the proposed Hamiltonian form of the non-relativistic string in the Newton-Cartan background. Recall that this string is defined by the Hamiltonian constraint (\ref{mHtau}) and the spatial diffeomorphism constraint $\mathcal{H}_\sigma\approx 0$. In order to understand the subtle points in the
transformation from the Hamiltonian to the Lagrangian description of this system, we first consider the simpler problem of the non-relativistic string in a flat background.
\subsection{Flat space-time limit}
The non-relativistic string in a flat background has the following Hamiltonian
\begin{equation}\label{Hamflat}
H=\int d\sigma (\lambda^\tau \mathcal{H}_\tau+\lambda^\sigma \mathcal{H}_\sigma) \ ,
\end{equation}
where
\begin{equation}
\mathcal{H}_\tau=-2\tau_F p_a \eta^{ab}\epsilon_{bc}\partial_\sigma x^c+
p_i h^{ij} p_j+\tau^2_F h_{ij}\partial_\sigma x^i\partial_\sigma x^j \ , \quad
\mathcal{H}_\sigma=p_i \partial_\sigma x^i +p_a\partial_\sigma x^a \ ,
\end{equation}
where $a,b,c,\dots=0,1$ and where $h_{ij}=\delta_{ij}$, $h^{ij}=\delta^{ij}$, $i,j,\dots=2,\dots,d-1$.
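The structural reason the Lagrange multipliers are later fixed by the $x^a$ equations of motion is visible already here: $\mathcal{H}_\tau$ is quadratic in the transverse momenta but only linear in the longitudinal momenta $p_a$. A sympy sketch with one transverse direction $y$, the $p_a$ term written out for $\eta = \mathrm{diag}(-1,1)$, $\epsilon_{01}=1$ (overall sign conventions aside; an illustration, not part of the derivation):

```python
import sympy as sp

p0, p1, py, x0p, x1p, yp, tF = sp.symbols('p0 p1 py x0p x1p yp tF')

# Flat Hamiltonian constraint with a single transverse direction y:
H_tau = 2*tF*(p0*x1p + p1*x0p) + py**2 + tF**2*yp**2

assert sp.diff(H_tau, py, 2) == 2   # quadratic in the transverse momentum ...
assert sp.diff(H_tau, p0, 2) == 0   # ... but only linear in the
assert sp.diff(H_tau, p1, 2) == 0   # longitudinal momenta p_a
```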
Our goal is to find the Lagrangian formulation of the non-relativistic string
in a flat background. With the help of the Hamiltonian
(\ref{Hamflat}) we obtain the following equations of motion for $x^0$, $x^1$ and $x^i$
\begin{eqnarray}\label{eqflat}
\partial_\tau x^0&=&\pb{x^0,H}=-2\tau_F\lambda^\tau \partial_\sigma x^1+\lambda^\sigma \partial_\sigma x^0 \ , \nonumber \\
\partial_\tau x^1&=&\pb{x^1,H}=-2\tau_F\lambda^\tau \partial_\sigma x^0+\lambda^\sigma
\partial_\sigma x^1 \ , \nonumber \\
\partial_\tau x^i&=&\pb{x^i,H}=2\lambda^\tau h^{ij}p_j+\lambda^\sigma \partial_\sigma x^i \ .
\nonumber \\
\end{eqnarray}
Then it is easy to find the corresponding Lagrangian density
\begin{eqnarray}\label{Lagdenflat}
& &\mathcal{L}=p_a\partial_\tau x^a+p_i\partial_\tau x^i-\lambda^\tau \mathcal{H}_\tau-\lambda^\sigma \mathcal{H}_\sigma=\lambda^\tau p_i h^{ij}p_j-\lambda^\tau \tau^2_F \partial_\sigma x^i\partial_\sigma x^j h_{ij}=\nonumber \\
& &=\frac{1}{4\lambda^\tau}(\partial_\tau x^i-\lambda^\sigma \partial_\sigma x^i)h_{ij}
(\partial_\tau x^j-\lambda^\sigma \partial_\sigma x^j)-\lambda^\tau \tau^2_F h_{ij}
\partial_\sigma x^i\partial_\sigma x^j \ . \nonumber \\
\end{eqnarray}
We see that this Lagrangian does not depend on the variables $x^a$, which is confusing, since if we performed the inverse Legendre transformation from
(\ref{Lagdenflat}) and determined the corresponding Hamiltonian, we would find that it does not depend on $p_a$. We can resolve this problem by closely examining the equations of motion for $x^0$ and $x^1$. We first
multiply the first equation in (\ref{eqflat}) by $\partial_\sigma x^0$ and the second one by $\partial_\sigma x^1$. Taking their difference, we obtain
\begin{equation}
-\partial_\tau x^0\partial_\sigma x^0+\partial_\tau x^1\partial_\sigma x^1=
\lambda^\sigma (\partial_\sigma x^1\partial_\sigma x^1-\partial_\sigma x^0\partial_\sigma x^0)
\end{equation}
that can be solved for $\lambda^\sigma$ as
\begin{equation}\label{lambdasigmasol}
\lambda^\sigma=\frac{\mathbf{a}_{\tau\sigma}}{\mathbf{a}_{\sigma\sigma}} \ , \quad \mathbf{a}_{\alpha\beta}=
\partial_\alpha x^a\partial_\beta x^b\eta_{ab} \ .
\nonumber \\
\end{equation}
On the other hand from the equations of motion for $x^0$ and $x^1$ we obtain
\begin{eqnarray}
(\partial_\tau x^0-\lambda^\sigma \partial_\sigma x^0)^2=4\tau_F^2(\lambda^\tau)^2\partial_\sigma x^1\partial_\sigma x^1 \ , \quad
(\partial_\tau x^1-\lambda^\sigma \partial_\sigma x^1)^2=4\tau_F^2(\lambda^\tau)^2\partial_\sigma x^0\partial_\sigma x^0 \ \nonumber \\
\end{eqnarray}
that implies
\begin{equation}
-\mathbf{a}_{\tau\tau}+2\lambda^\sigma \mathbf{a}_{\sigma\tau}-(\lambda^\sigma)^2\mathbf{a}_{\sigma\sigma}=
4(\lambda^\tau)^2\mathbf{a}_{\sigma\sigma} \tau^2_F \ .
\end{equation}
Inserting (\ref{lambdasigmasol}) into this equation we find that $\lambda^\tau$ is equal to
\begin{equation}\label{lambdatausol}
\lambda^\tau=
\frac{\sqrt{-\det \mathbf{a}_{\alpha\beta}}}{2\tau_F \mathbf{a}_{\sigma\sigma}} \ .
\end{equation}
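Both solutions for the Lagrange multipliers can be verified symbolically from the equations of motion (\ref{eqflat}); a small sympy sketch of the algebra above (an illustration only):

```python
import sympy as sp

x0p, x1p, lt, ls, tF = sp.symbols('x0p x1p lt ls tF')

# tau-derivatives of x^0, x^1 as dictated by the equations of motion (eqflat)
x0t = -2*tF*lt*x1p + ls*x0p
x1t = -2*tF*lt*x0p + ls*x1p

# components of the induced metric a_{alpha beta} with eta = diag(-1, 1)
a_tt = -x0t**2 + x1t**2
a_ts = -x0t*x0p + x1t*x1p
a_ss = -x0p**2 + x1p**2

# lambda^sigma = a_{tau sigma} / a_{sigma sigma}  (lambdasigmasol)
assert sp.expand(a_ts - ls*a_ss) == 0
# det a = -4 tau_F^2 (lambda^tau)^2 a_{sigma sigma}^2, equivalent to (lambdatausol)
assert sp.expand(a_tt*a_ss - a_ts**2 + 4*tF**2*lt**2*a_ss**2) == 0
```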
Then if we combine (\ref{lambdasigmasol}) together with (\ref{lambdatausol})
we get
\begin{eqnarray}\label{lambdafinal}
& &\frac{1}{\lambda^\tau}=-2\tau_F \mathbf{a}^{\tau\tau}\sqrt{-\det\mathbf{a}} \ , \quad
\frac{2\lambda^\sigma}{\lambda^\tau}=4\tau_F \mathbf{a}^{\tau\sigma}\sqrt{-\det \mathbf{a}}
\ , \nonumber \\
& &\frac{(\lambda^\sigma)^2}{4\lambda^\tau}-\lambda^\tau \tau^2_F=-\frac{\tau_F}{2} \mathbf{a}^{\sigma\sigma}\sqrt{-\det\mathbf{a}} \ .
\end{eqnarray}
Finally inserting (\ref{lambdafinal}) into
(\ref{Lagdenflat}) we obtain
\begin{eqnarray}
\mathcal{L}
=-\frac{\tau_F}{2}\sqrt{-\det \mathbf{a}}\mathbf{a}^{\alpha\beta}h_{\alpha\beta} \ , \quad
h_{\alpha\beta}=h_{ij}\partial_\alpha x^i\partial_\beta x^j \ ,
\nonumber \\
\end{eqnarray}
which has exactly the same form as the Lagrangian density derived in \cite{Andringa:2012uz}.
\subsection{Lagrangian for String in Newton-Cartan Background}
Now we proceed to the case of the non-relativistic string in the Newton-Cartan background. As the first step we formulate the Hamiltonian constraint
with the help of variables that reflect the invariance of the theory under the generalized Galilean boosts, which have the form \cite{Andringa:2012uz}
\begin{equation}\label{Galltr}
\delta e_\mu^{ \ a'}=\tau_\mu^{ \ a}\lambda_a^{ \ a'} \ ,
\quad
\delta \tau^\mu_{\ a}=e^\mu_{ \ a'}\lambda^{a'}_{ \ a} \ , \quad \delta m_\mu^{ \ a}=
e_\mu^{ \ a'}\lambda_{a'}^{ \ a} \ ,
\end{equation}
where $\lambda_a^{ \ a'}$ are parameters that obey the following relations
\begin{eqnarray}\label{Galltrpar}
& & \eta_{ac}\lambda^c_{ \ a'}+
\delta_{a'b'}\lambda^{b'}_{ \ a}=0 \ , \quad \lambda_{a'}^{ \ c}\eta_{ca}+
\lambda_a^{ \ b'}\delta_{b'a'}=0 \ ,
\nonumber \\
& & \lambda_{a'}^{ \ a}+\lambda^{a}_{ \ a'}=0 \ , \quad \lambda^{a'}_{ \ a}+\lambda_a^{ \ a'}=0 \ .
\nonumber \\
\end{eqnarray}
Now we define the boost-invariant longitudinal vierbein $\hat{\tau}^\mu_{ \ a}$ as
\begin{equation}
\hat{\tau}^\mu_{ \ a}=\tau^\mu_{ \ a}-h^{\mu\nu}m_\nu^{ \ b}\eta_{ba} \ .
\end{equation}
This is invariant under (\ref{Galltr}) since
\begin{equation}
\delta \hat{\tau}^\mu_{ \ a}=
e^\mu_{ \ a'}\lambda^{a'}_{ \ a}-e^{\mu}_{ \ c'}\delta^{c'b'}\lambda_{b'}^{ \ b}\eta_{ba}=
e^\mu_{ \ a'}\lambda^{ a'}_{ \ a}+e^\mu_{ \ a'}\lambda_a^{ \ a'}=0 \ ,
\end{equation}
where in the last step we used (\ref{Galltrpar}). With the help of $\hat{\tau}^\mu_{ \ a}$ we can rewrite $V^\mu$ in manifestly invariant form
\begin{eqnarray}
V^\mu=
V^\mu_{ \ \nu}\partial_\sigma x^\nu \ ,
V^\mu_{ \ \nu}=
-2\tau_F \hat{\tau}^\mu_{ \ a}\epsilon^{ab}
\hat{\tau}^\sigma_{ \ b}\tau_{\sigma\nu}=V^{\mu\sigma}\tau_{\sigma\nu} \ , \nonumber \\
\end{eqnarray}
where $\epsilon^{ab}\equiv \eta^{ac}\eta^{bd}\epsilon_{cd}$ and where
$V^{\mu\nu}=-V^{\nu\mu}$.
Let us now analyze the object $\bar{H}_{\mu\nu}$ in more detail. After some calculations we find that it can be rewritten in the form
\begin{eqnarray}\label{bHnew}
\bar{H}_{\mu\nu}&=&-\tau_F^2 \tau_\mu^{ \ c}\epsilon_{cd}\Phi^{da}\epsilon_{ab}
\tau_\nu^{ \ b}+\tau_F^2\bar{h}_{\mu\nu} \ , \nonumber \\
\bar{h}_{\mu\nu}&=&h_{\mu\nu}+m_\mu^{ \ a}\tau_\nu^{ \ b}\eta_{ab}+
\tau_\mu^{ \ a}m_\nu^{ \ b}\eta_{ab} \ , \nonumber \\
\Phi^{ab}&=&-\tau^{\mu}_{ \ d}\eta^{da}m_\mu^{ \ b}-
m_\mu^{ \ a}\tau^{\mu}_{ \ d}\eta^{db}+m_\mu^{ \ a}h^{\mu\nu}
m_\nu^{ \ b} \ . \nonumber \\
\end{eqnarray}
An important property of (\ref{bHnew}) is that it is written
in terms of objects that are invariant under
(\ref{Galltr}). To see this, let us first consider the variation of
$\bar{h}_{\mu\nu}$
\begin{eqnarray}
& &\delta \bar{h}_{\mu\nu}=\tau_\mu^{\ a}\lambda_a^{ \ a'}\delta_{a'b'}
e_\nu^{ \ b'}+e_\mu^{ \ a'}\delta_{a'b'}\tau_\nu^{ \ b}\lambda_b^{ \ b'}+
\nonumber \\
& &+e_\mu^{ \ a'}\lambda_{a'}^{ \ a}\tau_\nu^{ \ b}\eta_{ab}+
\tau_\mu^{ \ a}\eta_{ab}e_\nu^{ \ b'}\lambda_{b'}^{ \ b}=0 \ ,
\nonumber \\
\end{eqnarray}
where we used (\ref{Galltrpar}). Finally, we consider the variation of
$\Phi^{ab}=\Phi^{ba}$. Note that the matrix $\Phi^{ab}$ can be interpreted as the matrix-valued Newton potential of generalized Newton-Cartan gravity, as opposed to the scalar potential $\Phi$ of ordinary Newton-Cartan gravity. It is nevertheless still invariant under
(\ref{Galltr}), since
\begin{eqnarray}
\delta \Phi^{ab}=
-e^\mu_{ \ a'}\lambda^{a'}_{ \ c}m_\mu^{ \ b}\eta^{ca}-
m_\mu^{ \ a}e^{\mu}_{ \ a'}\lambda^{a'}_{ \ d}\eta^{db}
+e^\nu_{ \ c'}\delta^{c'a'}\lambda_{a'}^{ \ a}m_\nu^{ \ b}+
m_\mu^{ \ a}e^\mu_{ \ c'}\delta^{c'b'}\lambda_{b'}^{ \ b}=
\nonumber \\
=-e^\mu_{ \ a'}\lambda^{a'}_{ \ c}m_\mu^{ \ b}\eta^{ca}-
m_\mu^{ \ a}e^{\mu}_{ \ a'}\lambda^{a'}_{ \ d}\eta^{db}
-e^\nu_{ \ c'}\delta^{c'a'}\lambda_{ \ a'}^{a}m_\nu^{ \ b}-
m_\mu^{ \ a}e^\mu_{ \ c'}\delta^{c'b'}\lambda_{ \ b'}^{b}=
\nonumber \\
=-e^\mu_{ \ a'}\lambda^{a'}_{ \ c}m_\mu^{ \ b}\eta^{ca}-
m_\mu^{ \ a}e^{\mu}_{ \ a'}\lambda^{a'}_{ \ d}\eta^{db}+
e^\mu_{ \ c'}\lambda^{c'}_{ \ c}\eta^{ca}m_\mu^{ \ b}
+m_\mu^{ \ a}e^\mu_{ \ c'}\lambda^{c'}_{ \ c}\eta^{cb}=0 \ ,
\nonumber \\
\end{eqnarray}
where in the first step we used the relation $\lambda_{a'}^{ \ a}=-\lambda^a_{ \ a'}$ and in the last step we used the fact that
\begin{equation}
\lambda^{c'}_{ \ b}\eta^{ba}+\lambda^c_{ \ b'}\delta^{b'c'}=0 \ .
\end{equation}
It is also instructive to elaborate on the expression that contains the potential $\Phi^{ab}$. After some calculations we find that it can be rewritten in the form
\begin{eqnarray}
&- &\tau_F^2 \partial_\sigma x^\mu \tau_\mu^{ \ c} \epsilon_{cd}\Phi^{da}\epsilon_{ab}\tau^b_{ \ \nu}\partial_\sigma x^\nu=\nonumber \\
&=&\tau_F^2\partial_\sigma x^\mu\tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}\eta_{bd}\tau^{ \ d}_\nu\partial_\sigma x^\nu-\tau_F^2 \mathbf{a}_{\sigma\sigma}
\Phi^{ab}\eta_{ba} \nonumber \\
\end{eqnarray}
so that we finally obtain a manifestly invariant Hamiltonian in the form
\begin{eqnarray}\label{Hinvariant}
& &H=\int d\sigma (\lambda^\tau \mathcal{H}_\tau+\lambda^\sigma \mathcal{H}_\sigma) \ , \quad \mathcal{H}_\sigma=p_\mu\partial_\sigma x^\mu \ , \nonumber \\
& &\mathcal{H}_\tau=p_\mu h^{\mu\nu}p_\nu-2p_\mu \tau_F \hat{\tau}^\mu_{ \ a}\epsilon^{ab}
\hat{\tau}^\sigma_{ \ b}\tau_{\sigma\nu}\partial_\sigma x^\nu
+\nonumber \\
& &+\tau_F^2\tau^c\eta_{ca}\Phi^{ab}\eta_{bd}\tau^d-\tau_F^2 \mathbf{a}_{\sigma\sigma}
\Phi^{ab}\eta_{ba}+\tau_F^2\bar{h}_{\sigma\sigma} \ , \nonumber \\
\end{eqnarray}
where $\tau^a=\partial_\sigma x^\mu \tau_\mu^{ \ a},\bar{h}_{\sigma\sigma}=
\partial_\sigma x^\mu \bar{h}_{\mu\nu}\partial_\sigma x^\nu$ and where $\lambda^\tau$ and
$\lambda^\sigma$ are corresponding Lagrange multipliers.
Before we proceed to the Lagrangian formulation of the theory, let us introduce
the vierbein $\hat{e}_\mu^{ \ a'}$ defined
as
\begin{equation}\label{hate}
\hat{e}_\mu^{ \ a'}=e_\mu^{ \ a'}+m_\nu^{ \ a}e^\nu_{ \ c'}\delta^{c'a'}
\tau_\mu^{ \ b}\eta_{ba} \ ,
\end{equation}
that is again invariant under (\ref{Galltr})
\begin{eqnarray}
\delta \hat{e}_\mu^{ \ a'}
=\tau_\mu^{ \ a}\lambda_a^{ \ a'}+\lambda_{c'}^{ \ a}\delta^{c'a'}
\tau_\mu^{ \ b}\eta_{ba}=\nonumber \\
=\tau_\mu^{ \ a}\lambda_a^{ \ a'}
-\lambda_b^{ \ b'}\delta_{b'c'}\delta^{c'a'}\tau_\mu^{ \ b}=0 \ . \nonumber \\
\end{eqnarray}
Note that we have the following useful identity
\begin{eqnarray}
\hat{\tau}^\mu_{ \ a}\hat{e}_\mu^{ \ a'}=0
\end{eqnarray}
and also
\begin{equation}
\hat{e}_\mu^{ \ a'}e^\mu_{ \ b'}=\delta^{ a'}_{ \ b'} \ , \quad
\hat{e}_\mu^{ \ a'}h^{\mu\nu}=e^\nu_{ \ c'}\delta^{c'a'} \ .
\end{equation}
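For completeness, the first of these identities can be verified directly from the definition (\ref{hate}), assuming the standard vielbein orthonormality relations $\tau_\mu^{ \ a}e^\mu_{ \ b'}=0$ and $e_\mu^{ \ a'}e^\mu_{ \ b'}=\delta^{a'}_{ \ b'}$:

```latex
\begin{equation}
\hat{e}_\mu^{ \ a'}e^\mu_{ \ b'}
=e_\mu^{ \ a'}e^\mu_{ \ b'}
+m_\nu^{ \ a}e^\nu_{ \ c'}\delta^{c'a'}\eta_{ba}\,
\tau_\mu^{ \ b}e^\mu_{ \ b'}
=\delta^{a'}_{ \ b'} \ ,
\end{equation}
```

since the second term vanishes by orthogonality.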
Now we are ready to proceed to the Lagrangian formulation of the theory. We begin with the canonical equations of motion for $x^\mu$ that follow from the
Hamiltonian (\ref{Hinvariant})
\begin{equation}\label{eqmXmu}
\partial_\tau x^\mu=\lambda^\tau (2h^{\mu\nu}p_\nu+V^\mu)+\lambda^\sigma \partial_\sigma x^\mu \ .
\end{equation}
Let us now multiply this equation with $\hat{e}_\mu^{ \ a'}$. Using the fact that
$\hat{\tau}^\mu_{ \ a}\hat{e}_\mu^{ \ a'}=0$ we find that
$\hat{e}_\mu^{ \ a'}V^\mu=0$, and from (\ref{eqmXmu}) we obtain
\begin{equation}
\hat{e}_\mu^{ \ a'}\partial_\tau x^\mu=2\lambda^\tau e^\mu_{ \ c'}\delta^{c'a'}
p_\mu+\lambda^\sigma \hat{e}_\mu^{ \ a'}\partial_\sigma x^\mu \
\end{equation}
and consequently
\begin{eqnarray}
(\partial_\tau x^\mu-\lambda^\sigma\partial_\sigma x^\mu)\hat{e}_\mu^{ \ a'}\delta_{a'b'}
\hat{e}_\nu^{ \ b'}(\partial_\tau x^\nu-\lambda^\sigma \partial_\sigma x^\nu)
=4(\lambda^\tau)^2p_\mu h^{\mu\nu}p_\nu \ . \nonumber \\
\end{eqnarray}
With the help of this result we easily find the Lagrangian density in the form
\begin{eqnarray}\label{mLHam}
\mathcal{L}&=&p_\mu\partial_\tau x^\mu-\lambda^\tau \mathcal{H}_\tau-\lambda^\sigma \mathcal{H}_\sigma=
\nonumber \\
&=&\frac{1}{4\lambda^\tau}(\partial_\tau x^\mu-\lambda^\sigma\partial_\sigma x^\mu)\hat{e}_\mu^{ \ a'}\delta_{a'b'}
\hat{e}_\nu^{ \ b'}(\partial_\tau x^\nu-\lambda^\sigma \partial_\sigma x^\nu)-
\tau_F^2 \lambda^\tau \bar{H}_{\sigma\sigma} \ . \nonumber \\
\end{eqnarray}
To proceed further we observe that the definition (\ref{hate})
implies
\begin{eqnarray}
& & \hat{e}_\mu^{ \ a'}\delta_{a'b'}\hat{e}_\nu^{ \ b'}
=\bar{h}_{\mu\nu}+\tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}\eta_{bd}\tau_\nu^{ \ d} \ ,
\nonumber \\
\end{eqnarray}
where we also used
\begin{equation}
e_\mu^{ \ a'}e^\nu_{ \ a'}=\delta_\mu^\nu-\tau_\mu^{ \ a}\tau^\nu_{ \ a} \ .
\end{equation}
Then we can rewrite the Lagrangian density
(\ref{mLHam}) into the form
\begin{eqnarray}\label{mLHam1}
& &\mathcal{L}=\frac{1}{4\lambda^\tau}(\bar{h}_{\tau\tau}-2\lambda^\sigma
\bar{h}_{\sigma\tau}+(\lambda^\sigma)^2\bar{h}_{\sigma\sigma}+\nonumber \\
&+&\partial_\tau x^\mu \tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}\eta_{bd}
\tau_\nu^{ \ d}\partial_\tau x^\nu-2\lambda^\sigma
\partial_\tau x^\mu \tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}\eta_{bd}\tau_\nu^{ \ d}
\partial_\sigma x^\nu+(\lambda^\sigma)^2
\partial_\sigma x^\mu \tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}\eta_{bd}\tau_\nu^{ \ d}\partial_\sigma x^\nu)-\nonumber \\
&-&\lambda^\tau\tau_F^2\partial_\sigma x^\mu\tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}\eta_{bd}\tau_\nu^{ \ d}\partial_\sigma x^\nu+\lambda^\tau
\tau_F^2 \mathbf{a}_{\sigma\sigma}
\Phi^{ab}\eta_{ba}-\lambda^\tau\tau_F^2\bar{h}_{\sigma\sigma} \ , \nonumber \\
\end{eqnarray}
where $\bar{h}_{\alpha\beta}=\bar{h}_{\mu\nu}\partial_\alpha x^\mu \partial_\beta x^\nu$.
Finally we eliminate $\lambda^\tau$ and $\lambda^\sigma$ from (\ref{mLHam1}). As in the case of the flat space-time limit, their values are not determined
by their own equations of motion. Instead, they can be determined using the equations of motion for $x^\mu$. In fact, if we multiply (\ref{eqmXmu}) by $\tau_{\mu\nu}$ and use
the fact that $\tau_{\mu\nu}h^{\nu\rho}=0$, we obtain
\begin{equation}
\tau_{\mu\nu}(\partial_\tau x^\nu-\lambda^\sigma \partial_\sigma x^\nu)
-\lambda^\tau \tau_{\mu\nu}V^\nu=0 \ .
\end{equation}
Multiplying this equation by $\partial_\sigma x^\mu$, we obtain
\begin{equation}
\lambda^\sigma=\frac{\mathbf{a}_{\sigma\tau}}{\mathbf{a}_{\sigma\sigma}} \ , \quad \mathbf{a}_{\alpha\beta}=
\partial_\alpha x^\mu \tau_{\mu\nu}\partial_\beta x^\nu
\end{equation}
using the fact that
\begin{equation}
\partial_\sigma x^\mu \tau_{\mu\nu}V^\nu=
2\tau_F\partial_\sigma x^\mu \tau_\mu^{ \ a}\epsilon_{ab}\tau_\nu^{ \ b}\partial_\sigma x^\nu=0 \ .
\end{equation}
In a similar way we obtain
\begin{eqnarray}
(\partial_\tau x^\mu -\lambda^\sigma \partial_\sigma x^\mu)\tau_{\mu\nu}
(\partial_\tau x^\nu-\lambda^\sigma \partial_\sigma x^\nu)=(\lambda^\tau)^2V^\mu \tau_{\mu\nu}V^\nu \nonumber \\
\end{eqnarray}
that can be solved for $\lambda^\tau$ as
\begin{equation}
\lambda^\tau=\frac{\sqrt{-\det\mathbf{a}_{\alpha\beta}}}{\sqrt{-V^\mu\tau_{\mu\nu}V^\nu}\sqrt{\mathbf{a}_{\sigma\sigma}}} \ ,
\nonumber \\
\end{equation}
where
\begin{eqnarray}
V^\mu \tau_{\mu\nu}V^\nu
=-4\tau_F^2\mathbf{a}_{\sigma\sigma} \ .
\nonumber \\
\end{eqnarray}
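As a cross-check, this relation can be verified explicitly. Reading off $V^\mu=-2\tau_F\hat{\tau}^\mu_{ \ a}\epsilon^{ab}\hat{\tau}^\sigma_{ \ b}\tau_{\sigma\nu}\partial_\sigma x^\nu$ from (\ref{Hinvariant}) and assuming the standard relation $\hat{\tau}^\mu_{ \ a}\tau_\mu^{ \ b}=\delta_a^{ \ b}$, so that $\hat{\tau}^\sigma_{ \ b}\tau_{\sigma\nu}\partial_\sigma x^\nu=\eta_{bc}\tau^c$ and $\hat{\tau}^\mu_{ \ a}\tau_{\mu\nu}\hat{\tau}^\nu_{ \ d}=\eta_{ad}$, we find

```latex
\begin{equation}
V^\mu\tau_{\mu\nu}V^\nu
=4\tau_F^2\,\epsilon^{ab}\eta_{ad}\epsilon^{de}\,
\eta_{bc}\tau^c\,\eta_{ef}\tau^f
=-4\tau_F^2\,\eta^{be}\eta_{bc}\tau^c\,\eta_{ef}\tau^f
=-4\tau_F^2\,\tau^e\eta_{ef}\tau^f
=-4\tau_F^2\mathbf{a}_{\sigma\sigma} \ ,
\end{equation}
```

where in the second step we used the two-dimensional identity $\epsilon^{ab}\eta_{ad}\epsilon^{de}=-\eta^{be}$.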
Now we see that we can proceed as in the case of the non-relativistic
string in flat space-time and we obtain the final result
\begin{eqnarray}\label{mLfin}
\mathcal{L}=-\frac{\tau_F}{2}
\sqrt{-\det\mathbf{a}}\left(\mathbf{a}^{\alpha\beta}\bar{h}_{\alpha\beta}
+\mathbf{a}^{\alpha\beta}\partial_\alpha x^\mu \tau_\mu^{ \ c}\eta_{ca}\Phi^{ab}
\eta_{bd}\tau_\nu^{ \ d}\partial_\beta x^\nu-\Phi^{ab}\eta_{ba}\right)
\nonumber \\
\end{eqnarray}
We see that this Lagrangian density almost coincides with the Lagrangian density
found in \cite{Andringa:2012uz}, up to terms that contain the matrix-valued Newton potential
$\Phi^{ab}$. We now argue that these terms cancel each other. In fact,
note that $\mathbf{a}_{\alpha\beta}$ can be written as
\begin{equation}
\mathbf{a}_{\alpha\beta}=\tau_\alpha^{ \ a}\tau_\beta^{ \ b}\eta_{ab} \ ,
\end{equation}
where $\tau_{\alpha}^{ \ a}\equiv \partial_\alpha x^\mu \tau_\mu^{ \ a}$ is a $2\times 2$ matrix. Since
$\mathbf{a}_{\alpha\beta}$ is
non-singular, $\tau_{\alpha}^{ \ a}$ is non-singular as well,
due to the fact that
\begin{equation}
\det \mathbf{a}_{\alpha\beta}=(\det \tau_\alpha^{ \ a})^2\det \eta_{ab}
=-(\det\tau_\alpha^{ \ a})^2\neq 0 \ .
\end{equation}
Then we can introduce
an inverse matrix $\tau^\beta_{\ a}$ that obeys
the relation
\begin{equation}
\tau^\alpha_{ \ a}\tau_\alpha^{ \ b}=\delta_a^{ \ b} \ .
\end{equation}
As a result we can
define $\mathbf{a}^{\alpha\beta}$ as
\begin{equation}
\mathbf{a}^{\alpha\beta}=\tau^\alpha_{ \ a}\tau^\beta_{ \ b}\eta^{ab}
\end{equation}
that obeys
\begin{equation}
\mathbf{a}^{\alpha\beta}\tau_\beta^{ \ a}=\tau^\alpha_{ \ c}\eta^{ca} \ ,
\end{equation}
and hence
\begin{equation}
\mathbf{a}^{\alpha\beta}\tau_\beta^{ \ b}\tau_\alpha^{ \ a}=
\tau_\beta^{ \ b}\tau^\beta_{ \ c}\eta^{ca}=\eta^{ba} \ .
\end{equation}
With the help of these results we can rewrite the second term in
(\ref{mLfin}), writing $\Phi_{ab}\equiv \eta_{ac}\Phi^{cd}\eta_{db}$, as
\begin{eqnarray}
-\frac{\tau_F}{2}\sqrt{-\det \mathbf{a}}\mathbf{a}^{\alpha\beta}
\tau_\alpha^{ \ a}\Phi_{ab}\tau_\beta^{ \ b}=-\frac{\tau_F}{2}
\sqrt{-\det\mathbf{a}}\eta^{ab}\Phi_{ba}
\nonumber \\
\end{eqnarray}
and we see that it exactly cancels the last term in (\ref{mLfin}). As
a result we derive
the Lagrangian density in the final form
\begin{equation}
\mathcal{L}=-\frac{\tau_F}{2}\sqrt{-\det\mathbf{a}}\mathbf{a}^{\alpha\beta}
\bar{h}_{\alpha\beta}
\end{equation}
which is the Lagrangian density proposed in \cite{Andringa:2012uz}. This result
again confirms the validity of our approach.
\section{Conclusion}\label{fourth}
Let us outline our results and suggest possible extensions of this work. We found the Hamiltonian for the non-relativistic string in a Newton-Cartan background, starting from the Hamiltonian of the relativistic string in a general background and using the limiting procedure introduced in
\cite{Bergshoeff:2015uaa}. The resulting Hamiltonian is a linear combination of two constraints, and we checked that they are first class, as a consequence of the diffeomorphism invariance of the world-sheet theory. We also introduced variables that are invariant under Milne boosts and showed that the Hamiltonian constraint is invariant under this transformation as well. Finally, we found the Lagrangian formulation of the non-relativistic string in a Newton-Cartan background, which agrees with the Lagrangian density proposed in
\cite{Andringa:2012uz}. We regard this as a very valuable consistency check of our result.
This work can be extended in several directions. It would be possible to perform a similar analysis for a non-relativistic p-brane in a Newton-Cartan background. Secondly, we could extend this analysis to the case of the superstring.
We hope to return to some of these problems in the future.
\\
\\
{\bf Acknowledgment:}
This work was
supported by the Grant Agency of the Czech Republic under the grant
P201/12/G028.
\section{INTRODUCTION }
\label{sec:intro}
Fast radio bursts (FRBs) -- bright millisecond radio pulses -- have generated considerable excitement within the astronomy community, with potential uses in cosmology, the intergalactic and interstellar media, coherent emission processes, compact objects, and more \citep{SKATransients}. To date, the majority of FRB discoveries have been made at the Parkes radio telescope \citep{Lorimer07,Keane12,Thornton13,SarahFRB,PetroffFRB,Ravi2015,Champion2015}, with additional discoveries made at the Arecibo and Green Bank telescopes \citep{Spitler14,GBTBurst}. With mounting evidence for their astrophysical origin, ever more projects dedicated to FRB searches are arising at new and existing telescopes \citep{Law15,Karastergiou2015,Tingay2015,alfaburst}. At present, the rate of discovery is still rather slow, but new facilities are under construction which promise an explosion in FRB discovery rates \citep{SKATransients}. It is therefore timely to catalogue what we know so far and to re-measure quantities for the published FRBs in a uniform and systematic fashion. Furthermore, it is important to identify commonly used derived parameters which are model-dependent and to examine the various degrees to which these are uncertain.
Our catalogue is partially presented here for the currently published FRBs in tabular form but is also fully available in the form of an online catalogue and database at \texttt{http://www.astronomy.swin.edu.au/pulsar/frbcat/} for community use. Where data were publicly available, we have performed a systematic re-analysis with software using the methods outlined in this work.
In the following sections we describe the online interface and how to use the catalogue. In Section~\ref{sec:quantities} we present the entries in the catalogue used to describe each burst. Section~\ref{sec:howto} provides information on how to use the catalogue; we conclude in Section~\ref{sec:conclusions}.
\section{PARAMETERS}
\label{sec:quantities}
A list is presented in the following subsections of all the quantities and parameters displayed in the catalogue. These have been divided into three categories: observation parameters, observed parameters, and derived parameters. Observation parameters are related to the telescope and instrument used for observation. Observed parameters relate to the detected burst and quantities obtained through direct processing of the data. Derived parameters related to cosmology and distance are based upon combining the observed parameters with model-dependent numbers for cosmological values ($\Omega_\mathrm{M}$, $\Omega_\Lambda$, $H_0$) and the ionised component of the Galaxy (DM$_\mathrm{Galaxy}$).
These three categories are separated for several reasons. Firstly, to draw clear distinctions between quantities that are directly measured, those that are speculative based on the FRB position in the telescope beam, and those that are speculative due to a combination of the positional uncertainty and the uncertainty of one or more models. Errors are provided for measured quantities. Secondly, this catalogue is intended for use as the FRB population continues to grow.
Each observation will also have its own set of observed and derived parameters which will be attached to it. Additional sets of derived parameters can be added when data are re-analysed in new ways.
Observed parameters are included with a reference to the measurements leading to the numbers presented. In the cases where the data are publicly available two (or more) sets of observed parameters for the original detection observation are included: those published in the discovery paper and those obtained through our systematic reanalysis described here. A reference is included to make note of the measurement method.
The signal to noise (S/N), width, and dispersion measure (DM) for each burst are re-derived using the \textsc{destroy}\footnote{https://github.com/evanocathain/destroy\_gutted} single pulse search code, the \textsc{psrchive}\footnote{http://psrchive.sourceforge.net/} package and scripts made publicly available through the FRBCAT github page\footnote{https://github.com/frbcat/FRBCAT\_analysis}. These numbers have in turn been independently verified using a python-based fitting routine.
For the python-based approach, a filterbank data block of a few seconds around the event was extracted. To remove the effects of narrowband interference, frequency channels with abnormally high or low variance over the data segment were masked using the outlier-detection method known as Tukey's rule \citep[see e.g.][]{Chandola}. The resultant data block was then de-dispersed at different trial DM values using the smallest DM trial step allowed by the original time resolution of the raw data. In the case of Parkes search-mode data taken since 2008, $\Delta \mathrm{DM}$ = 0.0488 cm$^{-3}$ pc.
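The channel-masking step can be sketched as follows. This is a minimal illustration rather than the actual FRBCAT analysis scripts, and the fence factor $k$ is an assumed choice (the conventional Tukey value is 1.5; a more conservative $k=3$ is used here to flag only extreme outliers):

```python
import numpy as np

def mask_bad_channels(data, k=3.0):
    """Return a boolean mask: True for channels to keep.

    Channels whose variance falls outside the Tukey fences
    [Q1 - k*IQR, Q3 + k*IQR] are flagged as interference.
    `data` has shape (nchan, nsamp), one row per frequency channel.
    """
    var = data.var(axis=1)                  # per-channel variance
    q1, q3 = np.percentile(var, [25, 75])
    iqr = q3 - q1
    return (var >= q1 - k * iqr) & (var <= q3 + k * iqr)

# Example: Gaussian noise in 64 channels, one channel hit by strong RFI.
rng = np.random.default_rng(0)
block = rng.normal(0.0, 1.0, size=(64, 1024))
block[10] *= 50.0                           # narrowband interference
keep = mask_bad_channels(block)
```

The masked channels are simply excluded (or zeroed) before de-dispersion.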
Every de-dispersed time series was then separately searched for pulses. A running median of length 0.5 seconds was subtracted to mitigate the effects of low-frequency noise, then the time series was normalized to zero mean and unit variance by carefully excluding outlier values (i.e. time samples containing the FRB signal). Finally, a standard pulse search algorithm was used, which consists of convolving the time series with a set of top-hat pulse templates \citep{PulsarHandbook}, here with widths covering all values from 1 to 400 time samples, maximizing the response:
\begin{equation}
S/N = \frac{1}{\sqrt{W}} \sum_{j=i}^{i+W} t_j
\end{equation}
where $W$ is the width of the top-hat pulse template in number of bins, $i$ the trial starting sample index of the pulse, and $t_j$ the $j$-th bin of the time series. The maximum S/N value points to the optimal FRB parameters: dispersion measure, pulse width, and arrival time.
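The search over top-hat templates can be sketched in a few lines (an illustrative re-implementation, not the catalogue's actual code; cumulative sums make each boxcar sum a constant-time operation per trial position):

```python
import numpy as np

def boxcar_search(ts, max_width=400):
    """Maximise S/N = sum(ts[i:i+W]) / sqrt(W) over widths and offsets.

    `ts` must already be normalised to zero mean and unit variance.
    Returns (best_snr, best_width, best_start).
    """
    csum = np.concatenate(([0.0], np.cumsum(ts)))
    best_snr, best_width, best_start = 0.0, 0, 0
    for w in range(1, min(max_width, len(ts)) + 1):
        snr = (csum[w:] - csum[:-w]) / np.sqrt(w)   # all offsets at once
        i = int(np.argmax(snr))
        if snr[i] > best_snr:
            best_snr, best_width, best_start = float(snr[i]), w, i
    return best_snr, best_width, best_start

# Example: inject a 20-sample pulse into unit-variance noise.
rng = np.random.default_rng(1)
ts = rng.normal(0.0, 1.0, 4096)
ts[1000:1020] += 2.0                                # injected pulse
snr, width, start = boxcar_search(ts)
```

The recovered width and start index identify the pulse, and repeating the search over trial DMs picks out the dispersion measure that maximises the response.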
Both the published and re-derived parameters are included on the page for an individual burst. If available, the re-derived values are the ones presented on the catalogue homepage to maintain consistency across the sample where possible, as some search codes used for the initial discoveries have been found to under-report signal to noise \citep{KeanePetroff}.
Values for some of the following parameters for the published FRBs are presented in Table~\ref{tab:tab1}. Values for cosmological parameters, as derived by \textsc{cosmocalc} \citep{CosmologyCalc}, are also included in the catalogue.
\subsection{Observation parameters}
\noindent \textsc{\textbf{Telescope}} --- Telescope used to take the observation.
\noindent \textsc{\textbf{Receiver}} --- Receiver system on the telescope used to take observations. At Parkes the primary instrument is the 13-beam multibeam receiver (MB20; \citeauthor{multibeam}, \citeyear{multibeam}), at Arecibo the 7-beam ALFA receiver \citep{Spitler14}, and at the Green Bank Telescope (GBT) the 800 MHz receiver~\citep{GBTBurst}.
\noindent \textsc{\textbf{Backend}} --- The data recording system used for the observation. At Parkes two primary data recording systems have been used: the analogue filterbank (AFB) and Berkeley Parkes Swinburne Recorder (BPSR; \citeauthor{Keith10}, \citeyear{Keith10}). The Mock spectrometers and the GUPPI backends have been used at Arecibo and the GBT, respectively.
\noindent \textsc{\textbf{Beam}} --- The telescope beam number in which the FRB was detected. For FRBs detected with single-pixel feeds this value is set to 1.
\noindent \textsc{\textbf{RAJ}} --- The right ascension in J2000 coordinates of the pointing centre of the detection beam. This is not the location of the burst and corresponds only to the position of the centre of the telescope beam. Error on pointing accuracy of the telescope is also listed.
\noindent \textsc{\textbf{DECJ}} --- The declination in J2000 coordinates of the pointing centre of the detection beam. As with RAJ, this is not the location of the burst but only the position of the centre of the beam. Error on pointing accuracy of the telescope is also listed.
\noindent \textit{\textbf{gl}} --- The Galactic longitude, in degrees, of the pointing centre of the beam.
\noindent \textit{\textbf{gb}} --- The Galactic latitude, in degrees, of the pointing centre of the beam.
\noindent \textsc{\textbf{Beam FWHM}} --- The diameter of the full width half maximum of the primary lobe of the detection beam in arcminutes. This parameter serves as the best estimate of the positional uncertainty of the burst as a detection is most likely to lie within this area.
\noindent \textsc{\textbf{Sampling time}} --- The length of a time sample for the observation, in milliseconds. Sampling times between 1 ms and 64 $\upmu$s are common for observations in which FRBs were detected.
\noindent \textsc{\textbf{Bandwidth}} --- The observing bandwidth in MHz. In the case of observations using BPSR, the system bandwidth is 400 MHz but the usable bandwidth (after excision of channels rendered unusable by interference) is 338.281 MHz; the latter is quoted in this catalogue as it is the relevant number for calculations of signal to noise and flux density.
\noindent \textsc{\textbf{Centre frequency}} --- The centre frequency of the observation in MHz.
\noindent \textsc{\textbf{Number of polarisations}} --- The number of polarisations used to record the total signal.
\noindent \textsc{\textbf{Channel bandwidth}} --- The bandwidth of the individual frequency channels in MHz.
\noindent \textsc{\textbf{Bits per sample}} --- The number of bits recorded for an individual time sample in the final data product.
\noindent \textsc{\textbf{Gain}} --- The telescope gain in units of K Jy$^{-1}$.
\noindent \textsc{\textbf{System temperature}} --- The receiver system temperature in Kelvin. Throughout this analysis the Parkes MB20 system temperature is taken to be 28 K, as given in the Parkes Telescope Users Guide\footnote{www.parkes.atnf.csiro.au/observing/documentation/user\_guide/}. We note that this is up to 5 K higher than the system temperature used in many publications and that true system temperature also depends on observing elevation.
\noindent \textsc{\textbf{Reference}} --- The journal reference for the burst discovery paper where the event was first reported.
\subsection{Observed parameters}
\noindent \textsc{\textbf{DM}} --- The dispersion measure of the FRB in units of cm$^{-3}$ pc. The integrated electron column density along the line of sight to the burst obtained either with a pulse fitting algorithm or by the search code. For bursts with a published DM produced with a detailed fitting code such as the one described in \citet{Thornton13} this DM is used throughout the re-analysis.
\noindent \textsc{\textbf{DM index}} --- The dispersion measure index of the burst $\alpha$ such that DM $\propto$ $\nu^{-\alpha}$ obtained with a pulse fitting algorithm. The DM index for the propagation of waves through a cold plasma is $\alpha$ = 2 \citep{PulsarHandbook}.
\noindent \textsc{\textbf{Scattering index}} --- The evolution of pulse width as a function of frequency due to scattering such that W $\propto$ $\nu^{-\beta}$ obtained with a pulse fitting algorithm. The index for the propagation of radio waves through an inhomogeneous turbulent medium is $\beta$ = 4 \citep{PulsarHandbook}.
\noindent \textsc{\textbf{Scattering time}} --- A measure of the fluctuations in electron density along the line of sight contributing to scattering of the pulse obtained with a pulse fitting algorithm. The number presented in the catalogue is the scattering time for a radio pulse at 1 GHz in ms. Scattering time scales with frequency as $\tau(\nu) = \tau_s (\nu/\nu_0)^{-\beta}$, where $\tau_s$ and $\nu_0$ are at the reference frequency and $\beta$ is the scattering index.
\noindent \textsc{\textbf{Linear polarisation fraction}} --- If polarised data were recorded for the FRB the fractional linear polarisation is reported with errors. The total linear polarisation is the quadrature sum of Stokes $Q$ and $U$, such that $L/I = \sqrt{Q^2+U^2}/I$.
\noindent \textsc{\textbf{Circular polarisation fraction}} --- If polarised data were recorded for the FRB the fractional circular polarisation is reported with errors. The total absolute value of circular polarisation is given by $|V|/I$.
\noindent \textsc{\textbf{S/N}} --- The signal-to-noise of the burst.
\noindent \textsc{\textbf{W$_\textsc{obs}$}} --- The observed width of the FRB in ms obtained either with a pulse fitting algorithm or by the search code. The width reported here is not the \textit{intrinsic} width.
\noindent \textsc{\textbf{\textit{S}$_\textsc{peak,obs}$}} --- The observed peak flux density of the burst in Jy calculated using quantities above via the single pulse radiometer equation \citep{Cordes2003}. Note that this flux density is derived from observed values and is not necessarily the true peak flux density that would be measured if the burst occurred at beam centre; this value should be taken as a lower limit on the true flux density.
\noindent \textsc{\textbf{\textit{F}$_\textsc{obs}$}} --- The observed fluence of the FRB in units of Jy ms calculated as $F_\mathrm{obs} = S_\mathrm{peak,obs} \times W_\mathrm{obs}$. Again, the observed fluence should be taken as a lower limit on the true fluence due to the likely off-axis detection of the burst.
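These two quantities follow from the single-pulse radiometer equation. The sketch below uses the Parkes MB20 system temperature and usable bandwidth quoted above; the gain of 0.735 K Jy$^{-1}$ (a typical value for the multibeam centre beam) and the omission of the digitisation-loss factor are simplifying assumptions here:

```python
import math

def peak_flux_density(snr, width_ms, tsys_k=28.0, gain_k_per_jy=0.735,
                      bw_mhz=338.281, npol=2):
    """S_peak [Jy] = S/N * T_sys / (G * sqrt(n_p * BW * W))."""
    width_s = width_ms * 1e-3
    bw_hz = bw_mhz * 1e6
    return snr * tsys_k / (gain_k_per_jy * math.sqrt(npol * bw_hz * width_s))

def fluence_jy_ms(s_peak_jy, width_ms):
    """F_obs [Jy ms] = S_peak * W_obs."""
    return s_peak_jy * width_ms

# FRB 110220 re-analysis values from Table 1: S/N = 54, W = 6.6 ms.
s_peak = peak_flux_density(54.0, 6.6)     # of order 1 Jy
f_obs = fluence_jy_ms(s_peak, 6.6)
```

Beam-position and digitisation corrections shift the exact numbers, which is one reason the quoted values are lower limits.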
\subsection{Derived parameters}
\noindent \textsc{\textbf{DM$_\textsc{Galaxy}$}} --- The modeled contribution to the FRB DM by the electrons in the Galaxy. The Galactic DM contribution is derived using the NE2001 Galactic electron density model \citep{Cordes02} and should be taken as an estimate as the free electron content of the Galactic halo is not well constrained \citep{Dolag2015}.
\noindent \textsc{\textbf{DM$_\textsc{excess}$}} --- The DM excess of the FRB over the estimated Galactic DM. This is calculated as $\mathrm{DM}_\mathrm{excess} = \mathrm{DM}_\mathrm{FRB} - \mathrm{DM}_\mathrm{Galaxy}$.
\noindent \textsc{\textbf{z}} --- The estimated redshift of the FRB based on DM$_\mathrm{excess}$. The redshift is calculated as $z = \mathrm{DM}_\mathrm{excess}/1200$ pc cm$^{-3}$ from estimates of the intergalactic medium (IGM) electron density from \citet{Ioka03}. This relation approximates the full expression to better than 2\% for z < 2. This uncertainty is much less than the line-of-sight variation expected \citep{McQuinn2014}. This value for redshift assumes that any host galaxy or surrounding material contributes nothing to the DM and should be taken as an approximate upper limit on the true redshift.
\noindent \textsc{\textbf{D$_\textsc{comov}$}} --- Comoving distance in units of Gpc derived using the cosmology calculator CosmoCalc \citep{CosmologyCalc}. One should note that this parameter is highly uncertain as, unless an independent redshift measurement is made for an FRB, the comoving distance depends on models of electron density in the Galaxy and IGM as well as the chosen cosmological parameters.
\noindent \textsc{\textbf{D$_\textsc{luminosity}$}} --- Luminosity distance in units of Gpc calculated as $D_L = D_\mathrm{comov} \times (1+z)$. Again, this value is an upper limit based on the upper limit on redshift and comoving distance.
\noindent \textsc{\textbf{Energy}} --- The estimated FRB energy in units of 10$^{32}$ joules calculated as
\begin{equation}
E_\mathrm{FRB} = F_\mathrm{obs} \; \mathrm{BW} \; D_L^2 \times 10^{-29} \; (1+z) \; \mathrm{J}
\end{equation}
\noindent where fluence is in units of Jy ms, bandwidth is in units of Hz, $D_L$ is in units of metres, and $10^{-29}$ is a conversion factor between Jy ms and joules. The equivalent conversion factor from Jy ms to ergs is 10$^{-22}$.
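The chain from excess DM to energy can be sketched as follows (illustrative only: the luminosity distance must come from a cosmology calculator such as \textsc{cosmocalc}, so the value used below is an assumed round number, not a catalogue entry):

```python
GPC_IN_M = 3.0857e25          # metres per gigaparsec

def redshift_estimate(dm_excess):
    """z ~ DM_excess / 1200, with DM_excess in cm^-3 pc (Ioka 2003 scaling)."""
    return dm_excess / 1200.0

def frb_energy_joules(fluence_jy_ms, bw_hz, d_lum_gpc, z):
    """E = F_obs * BW * D_L^2 * 1e-29 * (1 + z), per the expression above."""
    d_l_m = d_lum_gpc * GPC_IN_M
    return fluence_jy_ms * bw_hz * d_l_m**2 * 1e-29 * (1.0 + z)

z = redshift_estimate(900.0)                     # z = 0.75
e = frb_energy_joules(1.0, 338.281e6, 4.8, z)    # of order 1e32 J
```

Since the redshift (and hence the distance) is an upper limit, the derived energy inherits the same model dependence.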
\begin{table*}
\begin{threeparttable}
\caption{Table of select observation parameters (left) and observed parameters (right) from the catalogue. Discovery observations for each burst are listed first. Where a re-analysis of the data has been performed for this work, it is presented as a second entry for the burst. The references are [1] \citet{SarahFRB}, [2] This work, [3] \citet{Keane11}, [4] \citet{Lorimer07}, [5] \citet{Champion2015}, [6] \citet{Thornton13}, [7] \citet{GBTBurst}, [8] \citet{Spitler14}, [9] \citet{Ravi2015}, [10] \citet{PetroffFRB}. The FWHM values for the telescopes included in this table are 15$'$ (Parkes), 7$'$ (Arecibo), and 16$'$ (GBT). Question marks denote values that were not specified in the original publication or were not available publicly. }
\begin{center}
\setlength{\extrarowheight}{3 pt}
\begin{tabular}{lccc|cccccc}
\hline\hline
FRB name & Telescope & $gl^\textbf{a}$ & $gb^\textbf{a}$ & DM & S/N & W$_\mathrm{obs}$ & $S_\mathrm{peak,obs}$ & $F_\mathrm{obs}$ & Ref. \\
& & (deg) & (deg) & (pc cm$^{-3}$) & & (ms) & (Jy) & (Jy ms) & \\
\hline%
FRB 010125$^{\textbf{b}}$ & Parkes & 356.641 & -20.021 & 790(3) & 17 & 9.4(2) & 0.3 & 2.82 & [1] \\
& & & & 790(2) & 25 & 10.6$^{+2.8}_{-2.5}$ & 0.54$^{+0.11}_{-0.07}$ & 5.7$^{+2.9}_{-1.9}$ & [2] \\
FRB 010621 & Parkes & 25.434(3) & -4.004(3) & 745(10) & -- & 7 & 0.410 & 2.870 & [3] \\
& & & & 748(3) & 18 & 8$^{+4.0}_{-2.3}$ & 0.53$^{+0.26}_{-0.09}$ & 4.2$^{+5.2}_{-1.7}$ & [2] \\
FRB 010724 & Parkes & 300.653(3) & -41.805(3) & 375 & >23 & 5.0 & 30(10) & 150 & [4] \\
& & & & 375(2) & >100 & >20 & >1.57 & >31.4 & [2] \\
& & 300.913(3) & -42.427(3) & -- & -- & -- & -- & -- & [4] \\
& & & & 375(2) & 16 & 13.0$^{+5.0}_{-11.0}$ & 0.29$^{+0.18}_{-0.09}$ & 3.7$^{+4.7}_{-3.4}$ & [2] \\
& & 301.310(3) & -41.831(3) & -- & -- & -- & -- & -- & [4] \\
& & & & 375(2) & 26 & 16$^{+5}_{-4}$ & 0.54$^{+0.14}_{-0.07}$ & 8.6$^{+5.6}_{-3.0}$ & [2] \\
& & 300.367(3) & -41.368(3) & 375(2) & 6 & 33$^{+12}_{-28}$ & 0.09$^{+0.03}_{-0.04}$ & 2.9$^{+2.4}_{-2.6}$ & [2] \\
FRB 090625 & Parkes & 226.444(3) & -60.030(3) & 899.6(1) & 28 & -- & -- & 2.2 & [5] \\
& & & & 899.6(1) & 30 & 1.9$^{+0.8}_{-0.7}$ & 1.14$^{+0.42}_{-0.21}$ & 2.2$^{+2.1}_{-1.1}$ & [2] \\
FRB 110220 & Parkes & 50.829(3) & -54.766(3) & 944.38(5) & 49 & 5.6(1) & 1.3 & 7.3(1) & [6] \\
& & & & 944.38(5) & 54 & 6.6$^{+1.3}_{-1.0}$ & 1.1$^{+0.2}_{-0.1}$ & 7.3$^{+2.6}_{-1.7}$ & [2] \\
FRB 110523 & GBT & 56.12(?) & -37.82(?) & 623.30(6) & 42 & 1.73(17) & 0.6 & -- & [7] \\
FRB 110626$^{\textbf{c}}$ & Parkes & 355.862(3) & -41.752(3) & 723.0(3) & 11 & 1.4 & 0.4 & 0.56 & [6] \\
& & & & 723.0(3) & 12 & 1.4$^{+1.2}_{-0.5}$ & 0.6$^{+1.2}_{-0.1}$ & 0.9$^{+4.0}_{-0.4}$ & [2] \\
FRB 110703 & Parkes & 80.998(3) & -59.019(3) & 1103.6(7) & 16 & 4.3 & 0.5 & 2.15 & [6] \\
& & & & 1103.6(7) & 17 & 3.9$^{+2.2}_{-1.9}$ & 0.45$^{+0.28}_{-0.10}$ & 1.7$^{+2.7}_{-1.0}$ & [2] \\
FRB 120127 & Parkes & 49.287(3) & -66.204(3) & 553.3(3) & 11 & 1.1 & 0.5 & 0.55 & [6] \\
& & & & 553.3(3) & 13 & 1.2$^{+0.6}_{-0.3}$ & 0.6$^{+0.4}_{-0.1}$ & 0.7$^{+1.0}_{-0.3}$ & [2] \\
FRB 121002 & Parkes & 308.220(3) & -26.265(3) & 1629.18(2) & 16 & -- & -- & 2.3 & [5] \\
& & & & 1629.18(2) & 16 & 5.4$^{+3.5}_{-1.2}$ & 0.43$^{+0.33}_{-0.06}$ & 2.3$^{+4.5}_{-0.8}$ & [2] \\
FRB 121102 & Arecibo & 174.950(2) & -0.225(2) & 557(2) & 14 & 3.0(5) & 0.4$^{+0.4}_{-0.1}$ & 1.2$^{+1.6}_{-0.5}$ & [8] \\
FRB 130626 & Parkes & 7.450(3) & 27.420(3) & 952.4(1) & 20 & -- & -- & 1.5 & [5] \\
& & & & 952.4(1) & 21 & 1.98$^{+1.2}_{-0.44}$ & 0.74$^{+0.49}_{-0.11}$ & 1.5$^{+2.5}_{-0.5}$ & [2] \\
FRB 130628 & Parkes & 225.955(3) & 30.656(3) & 469.88(1) & 29 & -- & -- & 1.2 & [5]\\
& & & & 469.88(1) & 29 & 0.64(13) & 1.9$^{+0.3}_{-0.2}$ & 1.2$^{+0.5}_{-0.4}$ & [2] \\
FRB 130729 & Parkes & 324.788(3) & 54.745(3) & 861(2) & 14 & -- & -- & 3.5 & [5] \\
& & & & 861(2) & 14 & 15.6$^{+9.9}_{-6.2}$ & 0.22$^{+0.17}_{-0.05}$ & 3.4$^{+6.5}_{-1.8}$ & [2] \\
FRB 131104 & Parkes & 260.466(3) & -21.839(3) & 779(1) & 30 & 2.08 & 1.12 & 2.33 & [9] \\
& & & & 779(2) & 34 & 2.4$^{+0.9}_{-0.5}$ & 1.16$^{+0.35}_{-0.13}$ & 2.8$^{+2.2}_{-0.8}$ & [2] \\
FRB 140514 & Parkes & 50.841(3) & -54.612(3) & 562.7(6) & 16 & 2.8$^{+3.5}_{-0.7}$ & 0.47$^{+0.11}_{-0.08}$ & 1.3$^{+2.3}_{-0.5}$ & [10] \\
& & & & 562.7(6) & 16 & 2.82$^{+0.64}_{-2.11}$ & 0.47$^{+0.10}_{-0.14}$ & 1.3$^{+0.6}_{-1.1}$ & [2] \\
\hline\hline
\end{tabular}
\begin{tablenotes}
\small
\item $^\textbf{a}$ Errors in $gl$ and $gb$ refer to the pointing accuracy of the given telescope.
\item $^\textbf{b}$ Originally incorrectly published as FRB 011025.
\item $^\textbf{c}$ Originally incorrectly published as FRB 110627.
\end{tablenotes}
\end{center}
\label{tab:tab1}
\end{threeparttable}
\end{table*}
\section{HOW TO USE THE CATALOGUE}
\label{sec:howto}
The catalogue can be viewed on the web or downloaded as a plain text file for use offline. The homepage of the catalogue\footnote{http://www.astronomy.swin.edu.au/pulsar/frbcat/} presents all the FRBs with a few key parameters: telescope name, $gl$, $gb$, FWHM, DM, S/N, W$_\mathrm{obs}$, $S_\mathrm{peak,obs}$, $F_\mathrm{obs}$, and reference. For each burst there is a detailed page containing all the parameters described in Section~\ref{sec:quantities}. Individual pages for the bursts contain relevant images and figures, such as the dispersion sweep of the burst and polarisation profiles (where available). Images are also available for download from the catalogue page.
Cosmological parameters such as $D_\mathrm{comov}$, $D_\mathrm{luminosity}$, and Energy are derived on-the-fly using the Cosmology Calculator module from \citet{CosmologyCalc} with the input parameters displayed on the individual burst sub-pages. Inputs for $\Omega_\mathrm{M}$, $\Omega_\Lambda$, and $H_0$ can be modified on the catalogue webpage; the default values are $\Omega_\mathrm{M}$ = 0.286, $\Omega_\Lambda$ = 0.714, and $H_0$ = 69.6 km s$^{-1}$ Mpc$^{-1}$ (Figure~\ref{fig:cosmocalc}). Updating these numbers will automatically update the calculated values of comoving distance, luminosity distance, and energy.
\begin{figure*}
\centering
\includegraphics[height=22cm]{FRBCAT_View.pdf}
\caption{Example of the beginning of an FRB entry on the catalogue webpage. Telescope-specific observation parameters have been separated from the observed parameters measured from the available data. Where the data have been re-analysed for the catalogue multiple measurement methods are available. \label{fig:cosmocalc}}
\end{figure*}
Alternatively, the catalogue can be downloaded either in CSV or tabular plain-text format from the catalogue homepage. The generated file will contain all information in the most recent version of the online catalogue. If parameters for a burst have been derived using multiple methods (i.e. values from both the publication and our re-analysis), all methods will be included in the downloaded table with the appropriate reference. Which values to use in any study of the bursts is left to the choice of the user.
\section{CLOSING REMARKS}
\label{sec:conclusions}
In this paper we present the FRBCAT, an online catalogue of fast radio bursts. The catalogue includes all bursts currently available in the literature and will be updated as new bursts are published. It presents an overview of all bursts on the main page, and also includes a page for each individual burst with a number of parameters describing the observational setup, the observed burst properties, and model-dependent cosmological parameters. Additionally, all bursts for which the data are public have been re-analysed using a standardised method, in an effort to make detections more consistent and directly comparable. The tools for our re-analysis have been made available through GitHub, and the data can be processed with freely available software packages and processing tools. It is our hope that data for all future bursts will be made public upon publication and that this catalogue will encourage communication and collaboration.
\begin{acknowledgements}
The Parkes radio telescope and the Australia Telescope Compact Array are part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. Parts of this research were conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. This work was performed on the gSTAR national facility at Swinburne University of Technology. gSTAR is funded by Swinburne and the Australian Government's Education Investment Fund. The authors would like to thank the SUPERB\footnote{https://sites.google.com/site/publicsuperb/} collaboration for their help in beta testing the website.
\end{acknowledgements}
\bibliographystyle{apj}
\section{Introduction}
The classical electromagnetic field can be successfully used to model some basic features of genuine quantum physical systems (see, e.g., \cite{NL0,NL,NLG}, the recent review \cite{REV}, and references therein). In particular, classical field modeling is a helpful tool for simulation in quantum information theory. However,
the foundational output of quantum-like modeling with classical light and other types of waves is not straightforward. In this note, I would like to discuss the foundational meaning of so-called ``classical entanglement'' and the widespread view that ``quantum nonlocality''
(whatever that means) plays the crucial role in distinguishing classical and genuine quantum entanglement. The ``quantum nonlocality'' viewpoint was clearly formulated by Spreeuw in the widely cited paper \cite{NL}, which is often invoked by experimenters as foundational justification of their activity:
\medskip
\footnotesize{``It is found that the model system ({\it classical electromagnetic
field}) can
successfully simulate most features of entanglement, but fails
to simulate quantum nonlocality. Investigations of how far
the classical simulation can be pushed show that quantum
nonlocality is the essential ingredient of a quantum computer,
even more so than entanglement. The well known problem
of exponential resources required for a classical simulation of
a quantum computer, is also linked to the nonlocal nature
of entanglement, rather than to the nonfactorizability of the
state vector.''}
and then
\footnotesize{``However, the ({\it classical-quantum}) analogy fails to produce effects
of quantum nonlocality, thus signaling a profound
difference between two types of entanglement: (i) ``true,''
multiparticle entanglement and (ii) a weaker form of entanglement
between different degrees of freedom of a single
particle. Although these two types look deceptively
similar in many respects, only type (i) can yield nonlocal
correlations. Only the type (ii) entanglement has a
classical analogy.''}
[Comments in italic were added by the author of the present paper.]
However, one can proceed without referring to mysterious quantum nonlocality, by taking into
account that quantum theory is about acts of observation. These acts are characterized
by {\it individuality and discreteness.} This crucial point in the understanding of quantum theory was missed by the authors discussing classical
entanglement: the main deviation of classical light models from quantum theory lies not only in the states, but in the
descriptions of measurement procedures.\footnote{As a comment on the first version \cite{ARV} of this paper, I received an email from Gerd Leuchs with a reference to the recent review \cite{GL}. The authors of this review do not refer to quantum nonlocality to distinguish classical and ``true quantum entanglement''. Their position is closer to my own. We shall discuss it in more detail in section \ref{CS}. } The classical and semiclassical descriptions of measurements are based on the intensities of
signals in different channels. The quantum description of measurements is based on counting discrete events, the clicks of detectors,
with the aid of the Born rule. Operating with intensities obscures the problem of {\it coincidence detection.}
We recall that quantum theory predicts that the relative probability of coincidence detection, given by the coefficient of second-order coherence $g^{(2)}(0)$, is zero (for one-photon states), whereas in (semi-)classical models $g^{(2)}(0) \geq 1.$
\medskip
{\it Genuine quantum theory differs from classical light models reproducing quantum correlations
not by ``quantum nonlocality'', but by the magnitude of second-order coherence.} Classical and semiclassical models were
rejected long ago as a result of the Grangier et al. experiment \cite{GR} on coincidence detection (see \cite{GRT} for a
historical review of such experiments).
\medskip
In fact, the main issue is the difference between classical and quantum superpositions, not entanglements. Our message is that each state-superposition has to be endowed with a proper measurement procedure. Otherwise, a superposition is just a ``thing-in-itself''. A superposition endowed with the classical measurement procedure crucially differs from a superposition endowed with the quantum measurement procedure. This difference carries over to the level of entangled states, classical vs. quantum: without measurement procedures they, too, are just ``things-in-themselves'' and as such are not interesting for physics.
Finalizing the introduction, we stress that ``quantum nonlocality'' is a really misleading notion. As shown in \cite{BELL}, the Bell tests can be consistently interpreted in the purely quantum theoretical framework (without any coupling to Bell's hidden variables theory
\cite{B1,B2,B3,CHSH,Jaeger1}) as statistical tests of local incompatibility of quantum observables, i.e., as tests of the most fundamental principle of quantum mechanics, {\it the complementarity principle}
\cite{BR0} (see also \cite{PL1,PL2,Jaeger1}). For the reader's convenience, a compact presentation of the ``Bohr against Bell'' argument is given
in section \ref{NLB}.
\section{Nonlocality mess}
Nowadays, the use of the notion of ``quantum nonlocality'' is a real mess.
People operate widely with this notion, often without any specification
of its meaning. We briefly recall the history of its appearance.
The starting point for the propagation of quantum nonlocality through the quantum community
was the EPR paper \cite{EPR}. Here nonlocality, in the form of a spooky action at a distance, was considered a possible
alternative to the incompleteness of quantum mechanics. The EPR argument was based on the invention of elements of reality and on counterfactual reasoning. This reasoning can lead to the idea of a mystical spooky action at a distance. Bohr replied to Einstein \cite{BR} by pointing out that
EPR's criterion of physical reality becomes ambiguous in quantum physics (see \cite{SPD} for details of this debate).
Einstein and Bohr did not understand each other, because they approached quantum mechanics in totally different ways. For Bohr, quantum mechanics is an observational theory: it is about measurements performed by classical measurement apparatuses on microsystems. In modern terminology, quantum mechanics is an epistemic theory \cite{HARALD}; it is about the extraction of knowledge
about nature with the aid of measurements. For Einstein, quantum mechanics (like any physical theory) was a descriptive theory providing a consistent and complete description of nature. Philosophers also use the notion of an ontic theory, i.e., a theory describing nature as it is,
when nobody looks at it (see also \cite{KHERTZ}).
For me, the root of the disagreement between Einstein and Bohr can be found already in the interpretations of measurement on a single system.
(Consideration of compound systems and the EPR states only strengthened this disagreement.) For Bohr, quantum mechanics generates predictions on the outputs of the interaction of a quantum system and a measurement apparatus; for Einstein, quantum mechanics (like any ``good physical theory'') should generate predictions of the ``real physical properties of a system'' (see also section \ref{RWR}).
In any event, Einstein's message of a spooky action at a distance reached and excited the quantum community. At the same time, the seeding issue of the (in)completeness of quantum mechanics was totally forgotten. (Only philosophers continue to debate the EPR paper \cite{EPR}
from the completeness-incompleteness viewpoint; see, e.g., \cite{SPD} for the most recent analysis of this issue.)
By criticizing the interpretational output of the extended research on classical entanglement, I only criticize its coupling to mystical quantum nonlocality. As shown in \cite{BELL} (see also \cite{Accardi}-\cite{Griffiths} and section \ref{NLB}), quantum mechanics by itself has no coupling to this kind of nonlocality. (This statement is also strongly supported by quantum field theory, e.g.,
\cite{BS,Haag}.) At the same time, a subquantum theory can in principle be nonlocal, as are Bohmian mechanics and other theories with hidden variables considered by Bell \cite{B1,B2,B3}. However, in spite of the Bell theorem, a subquantum theory can generally be free of nonlocality of the spooky-action-at-a-distance type; see \cite{PCT1,PQ} and Appendix 1 for {\it prequantum classical statistical field theory} (PCSFT). The latter is a classical random field model beyond quantum theory. (The coupling between PCSFT and quantum mechanics is not
as straightforward as in the Bell framework \cite{B1,B2,B3}.) PCSFT claims \cite{PCT1,PQ} that genuine quantum systems can be mathematically represented by classical random fields. So, it is not a part of the classical entanglement project. It is
a part of extensive studies on the classical probabilistic reconstruction of quantum theory (see, e.g., \cite{Feynman}-\cite{KC7}).
Thus I have also contributed to the random field modeling of quantum correlations. Therefore, generally, I am sympathetic to the classical entanglement project. Moreover, when reading Spreeuw's paper \cite{NL}, I had the impression that, in fact, in writing about ``quantum nonlocality'' he had in mind the correlations of spatially separated signals, as, e.g., in radiophysics (cf. the above discussion of the quantum nonlocality mess).
\section{Inter-intra system versus quantum-classical entanglements}
As stated in recent review \cite{REV}, \footnotesize{``...the name classical entanglement denotes the occurrence of some mathematical and physical aspects of quantum entanglement in classical beams of light. ... the term ‘classical’ in the name classical entanglement, indicates the nonquantum nature of the excitation of the electromagnetic field. .... A typical example thereof is given by a collimated optical beam with non-uniform polarization pattern.''} We continue by citing \cite{NL}:
\footnotesize{``It should be noted that the choice of optical waves is not essential for the analogy. Other classical
waves such as sound, water waves, or even coupled pendula could be used in principle.''}
In short, classical entanglement is associated with the ``nonquantum nature of the excitation of the electromagnetic field'' or
modes of classical waves of any origin. Generally, classical systems of any origin can be considered, see, e.g., \cite{AL} on entanglement
of classical Brownian motions.
This paper is directed against the statement that the difference between classical and genuine quantum entanglements is due to quantum nonlocality.
Now we remark that the comparison of classical vs. quantum entanglement
is typically coupled to the comparison of intra- vs. intersystem entanglement.
Intra-entanglement is between degrees of freedom of a single system, and inter-entanglement is between degrees of freedom of two
systems, $S_1$ and $S_2.$
Classical entanglement modeling
is possible only in the intrasystem context \cite{NL0,NL,NLG,REV}. One may conclude that this feature of
entanglement plays the crucial role in distinguishing classical and quantum entanglement.
This reasoning also leads to the conclusion that only intersystem entanglement is ``true quantum entanglement''
(since intrasystem entanglement can be generated even with classical fields).
In this paper, we demonstrate that classical intrasystem entanglement differs fundamentally not only from quantum intersystem entanglement, but even from quantum intrasystem entanglement. Thus, the classical-quantum comparison of entanglement has no relation to the intra-inter comparison
(and, hence, no relation to quantum nonlocality).
This comparative analysis of inter- vs. intrasystem and quantum vs. classical entanglement can be completed by the following remark.
The impossibility of a classical representation of intersystem entanglement is related only to the very special class of
field models elaborated in the classical entanglement project \cite{REV} (cf. \cite{PCT1,PQ}: in PCSFT, both types of entanglement,
intra and inter, can be realized, but they have different mathematical representations; see Appendix 1 for further discussion).
Finally, we note that the intra-inter system difference of entanglements is invisible within the quantum theoretical framework. In particular, this difference cannot be justified with the aid of Bell-type inequalities (see \cite{BELL} and section
\ref{NLB}). To distinguish intra- from intersystem entanglement, we have to go beyond quantum theory (see \cite{PCT1,PQ} and Appendix 1).
\section{Grangier et al. experiment separating classical field theories from quantum mechanics}
We start with citing the breakthrough paper of Grangier et al. \cite{GR}:
\footnotesize{ ``However, there has still been no test of the conceptually very simple situation dealing with single-photon states of the light impinging on a beam splitter. In this case, quantum mechanics predicts a perfect anticorrelation for photodetections on both sides of the beam splitter (a single-photon can only be detected once!), while any description involving classical fields would predict some amount of coincidences.''}
Following \cite{GR}, denote by $p_1, p_2$ the probabilities of detection in the two channels after the beam splitter and by
$p_c$ the coincidence probability. Then, using the semiclassical model of detection, it is easy to show that
\begin{equation}
\label{GRE}
p_c \geq p_1 p_2.
\end{equation}
This inequality clearly means that the classical coincidence probability $p_c$ is never smaller than the accidental coincidence probability, which is equal to $p_1p_2.$ The violation of inequality (\ref{GRE}) thus gives an anticorrelation criterion for characterizing the nonclassical behaviour of light (see \cite{GR}).
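The semiclassical origin of inequality (\ref{GRE}) can be illustrated by a small simulation: if click probabilities are proportional to (possibly fluctuating) intensities, then $p_c/p_1p_2 = \langle I_1 I_2\rangle/\langle I_1\rangle\langle I_2\rangle \geq 1$ by the Cauchy-Schwarz inequality. A Monte Carlo sketch (the exponential, thermal-like intensity distribution and the efficiency value are illustrative assumptions, not taken from \cite{GR}):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 1e-3  # detector efficiency (illustrative value)

# Fluctuating classical intensity at the beam splitter input;
# an exponential (thermal-like) distribution is an illustrative choice.
i_in = rng.exponential(scale=1.0, size=100_000)
i1 = i2 = 0.5 * i_in  # 50/50 splitter: both outputs fluctuate together

# Semiclassical detection: click probabilities proportional to intensities
p1 = eta * i1.mean()
p2 = eta * i2.mean()
p_c = eta**2 * (i1 * i2).mean()  # coincidences from simultaneous intensities

g2 = p_c / (p1 * p2)  # estimate of g^(2)(0); classically >= 1
```

For the exponential distribution $\langle I^2\rangle = 2\langle I\rangle^2$, so the sketch yields $g^{(2)}(0)\approx 2$, well above the accidental level $p_1 p_2$; a one-photon state would instead give $p_c = 0$.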
The crucial theoretical point is that in classical and semiclassical models the basic physical quantity is the intensity of a signal.
In the Grangier et al. experiment, these are $I(t),$ the intensity impinging on the beam splitter, and $I_1(t), I_2(t),$ the intensities of the signals
in the two output channels. The use of intensities, instead of the counting of clicks, obscures the coincidence detection problem.
We claim that this is not just a technicality, but a very important foundational issue, and we shall continue this discussion in the following sections. However, the reader who is not much interested in foundational questions can jump directly to
the concluding section \ref{CS}.
The main critical point has already been presented.
Finally, we remark that PCSFT \cite{PCT1,PQ} suffers from the same problem as the classical entanglement models, the ``double detection loophole'' (see Appendix 2 for further discussion and attempts to close this loophole by using the threshold detection scheme).
\section{Quantum measurements}
\label{QMM}
Consider a quantum observable $A$ represented by a Hermitian operator $\hat A$
with purely discrete spectrum $(a_i).$ By the spectral postulate of quantum mechanics,
any measurement of $A$ produces one of the values $a_i$ (as the result of the interaction of
a quantum system with an apparatus used for the $A$-measurement). Thus quantum measurements are characterized
by the individuality of their outputs. This crucial feature of quantum measurements was emphasized by Bohr, who invented the notion of a
{\it phenomenon} \cite{BR0} (see also \cite{PL1,PL2}):
\footnotesize{`` ... in actual experiments, all observations are expressed by
unambiguous statements referring, for instance, to the registration of the point at which
an electron arrives at a photographic plate. ... the appropriate physical interpretation of the symbolic quantum mechanical
formalism amounts only to predictions, of determinate or statistical character,
pertaining to individual phenomena ... .} [2,v. 2, p. 64]
Thus, although quantum theory produces statistical predictions, its observables generate individual
phenomena. The discreteness of detection events is the fundamental feature of quantum physics justifying the existence of quantum systems, the carriers of quanta. It is commonly accepted that the axiomatics of quantum theory does not contain a special postulate on discrete clicks and the statistical interpretation of probabilities.\footnote{However, as was pointed out by Plotnitsky (unpublished paper), ``it depends on what he sees as axiomatic of quantum theory. The structure of complex Hilbert space does not. But once one introduces projectors and the Born's rule to a relate quantum state to the outcome of experiment, both discreteness and probability enter. The Born rule is not part of the Hilbert space structure and, while mathematically natural (in connections the complex quantities of the formalism to real one and the probability) it is brought ad hoc, and why it works, and it works perfectly, is enigmatic. In a way — that is the main quantum mystery — why Born's rule works.''}
One may point to the existence of quantum observables with continuous spectra. The problem of their measurement was analyzed in detail
by von Neumann \cite{VN}. His analysis implies that the measurement of an observable with continuous spectrum has to be reduced to measurements
of observables with discrete spectra approximating it. This is a complex foundational issue and we will not go into a deeper discussion;
our considerations are restricted to observables with discrete spectra.
In the classical wave framework, the origin of the analog of the quantum Born's rule, so to say the Born's rule for intensities, is straightforward. If a classical wave has two orthogonal components, i.e.,
\begin{equation}
\label{I}
\Phi(x)= \Phi_1(x) + \Phi_2(x),
\end{equation}
with intensities
$I_1$ and $I_2,$ then the corresponding probabilities can be expressed in the form $p_j=I_j/(I_1+I_2), j=1,2,$ and the
intensities are given by the ``classical Born's rule'':
\begin{equation}
\label{I1}
I_j = \Vert \Phi_j\Vert^2_{L_2}= \int \vert \Phi_j(x) \vert^2 dx.
\end{equation}
However, it is a separate question whether the coefficients $p_j$ can really be interpreted as probabilities of discrete events.
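Rules (\ref{I})-(\ref{I1}) can be checked directly on a discretized field: for orthogonal components the intensities add, and the weights $p_j$ sum to one. A small numpy sketch (the Gaussian mode shapes are an illustrative choice):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Two numerically orthogonal components of a classical field Phi = Phi1 + Phi2:
# an even and an odd Gaussian mode (their overlap integral vanishes by symmetry).
phi1 = np.exp(-x**2)
phi2 = x * np.exp(-x**2)

def intensity(phi):
    """Classical Born's rule for intensities: I = int |Phi(x)|^2 dx."""
    return np.sum(np.abs(phi) ** 2) * dx

i1, i2 = intensity(phi1), intensity(phi2)
i_tot = intensity(phi1 + phi2)  # cross term vanishes for orthogonal modes
p1, p2 = i1 / (i1 + i2), i2 / (i1 + i2)
```

The additivity $I_{\mathrm{tot}} = I_1 + I_2$ holds only because the modes are orthogonal; the weights $p_j$ are well defined regardless of whether they admit an interpretation as probabilities of discrete clicks.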
Papers on classical entanglement consider expansions of state vectors with respect to
orthonormal bases in complex Hilbert spaces.\footnote{The tensor product structure is typically emphasized.
But this is not the main issue; see section \ref{Le} on the classical vs. quantum interpretation of the superposition of states.} Such expansions may give the impression that the standard quantum mechanical scheme of measurement can be applied. This is not the case. For classical signals, it is impossible to project the initial state onto the state corresponding to one concrete outcome. In the two-slit experiment with classical waves, a wave propagating from the source passes through both slits at the same time.
Finally, we remark that in classical field theory the methods of complex Hilbert space started to be used even before the appearance of quantum mechanics. We can mention, for example, the {\it Riemann-Silberstein representation}, $\Psi(x)= E(x) + i H(x),$ of the
classical electromagnetic field. In this representation, the Maxwell equations take the form
of the Schr\"odinger equation. So, studies on classical entanglement are consistent with the complex Hilbert space analysis of classical signals.
\section{Reality without realism (RWR) interpretation of quantum mechanics}
\label{RWR}
The above discussion of quantum discreteness matches perfectly the RWR interpretation of quantum mechanics that was elaborated in a series
of works by Plotnitsky (see \cite{SPD} and references therein). This is one of the versions of the Copenhagen interpretation\footnote{Bohr
never formulated the Copenhagen interpretation exactly. The quantum community uses a variety of interpretations pretending to express Bohr's views.
Therefore, Plotnitsky proposed to speak about interpretations in the spirit of Copenhagen. RWR is one such interpretation.}. I now present RWR. (This is my interpretation of RWR. It may differ from Plotnitsky's own views.)
We start by remarking that Bohr's views are often presented as idealism. But this is a misunderstanding. He definitely
did not deny the reality of {\it quantum systems}, say electrons or atoms. However, as pointed out in section \ref{QMM}, quantum mechanics does not describe genuine physical properties of quantum systems. Bohr stressed that a measurement's
output cannot be disassociated from the measurement apparatus and, generally, from the complex of experimental physical conditions, the experimental context. We can point to two common misuses of quantum theory (from the RWR viewpoint). On the one hand, one may neglect the role
of the experimental context and try to assign the measurement's output directly to a quantum system. We call this approach ``naive realism''.
From the Bohr-Heisenberg viewpoint, this approach should be rejected as contradicting Heisenberg's uncertainty relation and, generally, the complementarity principle. Another misinterpretation is forgetting about the existence of quantum systems (the reality counterpart of RWR). By Bohr, the electron exists! And the output of a measurement is assigned to this concrete electron (prepared for measurement), although, of course, the WR-counterpart of the Bohr-Plotnitsky interpretation also has to be taken into account. Therefore quantum theory is about such individual assignments of outputs (quantum phenomena). This is the origin of the discreteness of quantum measurements. That is why the measurement of the intensity of a beam of classical light is not a quantum phenomenon. As was found in the realization of the classical entanglement project, generally the WR-counterpart of RWR should be taken into account even for
classical light. But, surprisingly, the R-counterpart cannot be applied. Hence, classical optics measurements do not produce quantum phenomena in Bohr's meaning.
Finally, we remark that measurements on a quantum system in an intra-entangled state satisfy both the R- and WR-counterparts of
RWR; so, their outputs are quantum phenomena.
\section{Comparing classical and quantum superpositions}
\label{Le}
From my viewpoint, the misleading journey towards classical entanglement starts already with the identification of classical and quantum superpositions. Physically these superpositions are totally different, in spite of the
possibility of representing them by the same mathematical expression.
Consider a classical electromagnetic field with $n$ orthogonal modes corresponding to frequencies $(\nu_j, j=1,...,n)$ with complex amplitudes $(C_j).$ This field can be represented in the $n$-dimensional complex Hilbert space with basis $(e_j\equiv \vert \nu_j\rangle, j=1,...,n):$
\begin{equation}
\label{SUP}
\Psi = \sum_j C_j \vert \nu_j\rangle.
\end{equation}
This vector can be normalized:
\begin{equation}
\label{SUP1}
\vert \psi\rangle = \sum_j c_j \vert \nu_j\rangle,
\end{equation}
where $c_j= C_j/\sqrt{ \sum_j \vert C_j\vert^2}.$
\medskip
{\it What is the main difference of classical field superposition (\ref{SUP1}) from the genuine quantum superposition?}
\medskip
The main difference is in the measurement procedures determining the probabilities $p_j= \vert c_j\vert^2.$ For the classical field, it is impossible to detect discrete clicks in the $n$ channels without coincidence detections, where the degree of coincidence is determined by the coefficient $g^{(2)}(0).$ Thus, to see the difference between classical and quantum light, one need not consider formal entanglement expressions corresponding to different degrees of freedom. It is sufficient to consider one degree of freedom and superposition.
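This contrast can be made concrete in a toy simulation: the quantum procedure yields exactly one discrete click per trial, distributed according to $p_j=\vert c_j\vert^2,$ whereas the classical procedure deposits nonzero intensity in all $n$ channels simultaneously, so coincidences are unavoidable. A sketch with illustrative amplitudes:

```python
import numpy as np

rng = np.random.default_rng(1)

c = np.array([1.0, 1.0j, -1.0]) / np.sqrt(3.0)  # illustrative amplitudes c_j
p = np.abs(c) ** 2                               # Born probabilities p_j = |c_j|^2

# Quantum measurement procedure: each trial produces exactly one discrete click.
quantum_clicks = rng.choice(len(c), size=10_000, p=p)

# Classical measurement procedure: every trial registers intensity in *all*
# channels at once, so multi-channel coincidences are built in.
classical_intensities = p.copy()  # relative intensities I_j / I_tot
```

The click frequencies converge to the same numbers $p_j$ that describe the classical intensity split; the difference lies entirely in the single-click-per-trial structure of the quantum outcomes.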
\section{Comparing classical and quantum entanglements}
\label{COMP}
Following papers on classical entanglement, consider two degrees of freedom of the classical electromagnetic field which can be jointly measured. The four-dimensional complex Hilbert space contains states of this field
that are nonseparable, and formally they can be treated as entangled. Here ``entangled'' is understood purely mathematically,
as a special form of representation in a complex Hilbert space endowed with the tensor product structure.
As in the case of superposition, the devil is not in the state, but in the measurement.
For the classical electromagnetic field, photo-detectors cannot produce phenomena in Bohr's sense. The measurement
procedure suffers from coincidence detections.
\section{Quantum information: role of discrete clicks of detectors}
The bit is a portmanteau of ``binary digit'', and this presupposes the discrete structure of information represented by bits.
In Wikipedia, it is stated that \footnotesize{``a qubit is the quantum version of the classical binary bit that is physically realized with a two-state device. A qubit is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics.''} Although Wikipedia is not the best source of citation in a research article, this definition is really
useful for our further analysis. Its second part reflects the common neglect of the role of measurement. At the same time, its first part points perfectly to a two-state device, the source of discrete counts; a device that can distinguish two states. Unfortunately, in quantum
information research one typically operates with states, forgetting about extracting information from them. As we have seen in section
\ref{COMP}, two-state superpositions by themselves are not quantum. A genuine quantum superposition is the combination of a state and a measurement procedure extracting discrete alternatives.
Thus, quantum information theory is not reducible to linear algebra in complex Hilbert space. Its main component is the quantum measurement procedure. The main value of quantum information (as well as of classical information) lies in the possibility of extracting discrete
events from states.
\section{Does classical entanglement have anything to do with the original Bell argument?}
The above critique of attempts to couple studies on classical entanglement with quantum theory can also
be applied to attempts to couple classical random field correlations violating Bell-type inequalities
with the original Bell argument \cite{B1,B2,B3}. Bell applied classical probability theory to derive his inequality. The
latter was used to compare the classical probabilistic representation of correlations with the quantum theoretical description.
We recall that in classical probability theory observables are represented
by random variables, functions on a sample space. Denote the latter by the symbol $\Lambda$ (although mathematicians
typically use the symbol $\Omega).$ Then a random variable is a map $\xi: \Lambda \to \mathbf{R}$, and by the definition of a function it takes only one value $\xi(\lambda)$ for each $\lambda \in \Lambda.$ Thus, upon getting clicks
in both channels, one understands that it is impossible to represent such measurements by classical random variables.
We remark that originally (following the EPR paper \cite{EPR}) Bell was interested in the explanation of perfect correlations. In his original inequality \cite{B1}, it was assumed that the ranges of values of the quantum and classical observables should coincide, i.e., the range of values is the two-point set $\{\pm 1\}.$ A classical random variable is a function $\xi: \Lambda \to \{\pm 1\}.$ And if one were to accept that, for some set of
$\lambda$s, $\xi$ is multivalued, i.e., at the same time $\xi(\lambda)= -1$ and $\xi(\lambda)= +1,$ then classical probability theory would stop working. There would be no way to derive the Bell inequality. In the CHSH framework, the range of values of the observables was extended to the segment $[-1, +1].$ However, this was done with only one purpose, namely, to include the value 0 corresponding to the non-detection event. Thus, in real physical modeling the range of values is given by the discrete set $\{-1, 0, +1\}.$
Moreover, as was already pointed out, von Neumann emphasized \cite{VN} that any Hermitian operator $\hat A$ with continuous spectrum is just a symbolic expression of a converging sequence of quantum observables with discrete spectra, representing approximate measurements.
The above remarks on the discreteness of quantum and classical observables were presented only to underline the astonishing difference between the measurement procedures in the Bell framework and in classical optics. Even classical random variables with a continuous range of values are mathematically represented by single-valued functions.
\subsection{``Superstrong quantum correlations'': comparing original Bell inequality and CHSH-inequality}
The excitement of researchers violating the CHSH inequality (theoretically or experimentally) with classical field correlations is well understandable.
The statement on ``superstrong quantum correlations'' that cannot be represented as classical correlations has been emphasized in the quantum community. Typically, correlations were associated with states, and the issue of quantum vs. classical measurement procedures was practically ignored.
This is a good place to point out that the transition from the original Bell inequality \cite{B1} to the CHSH inequality \cite{CHSH} was not so innocent from the foundational viewpoint. The original Bell inequality is about explicit correlations and hence about the comparison of
the concrete values of observables (cf. \cite{EPR}). It is evident that, for this inequality, the transition from
discrete clicks to intensities is nonsense. In the CHSH framework, this basic issue was obscured. Instead,
the issue of ``superstrong quantum correlations'' was elevated (see \cite{AB} for discussion; see also \cite{AE} for a related theoretical study). Nowadays we are much closer to the performance of experiments on the violation of the original Bell inequality (see \cite{AB} for an analysis of the present situation in theory and experiment). Such experiments will immediately distance
quantum physics from its classical simulation.
\section{Bell's inequalities as tests of observables' incompatibility}
\label{NLB}
An unconventional interpretation of Bell-type inequalities was proposed in the author's recent paper \cite{BELL}.
That paper presents a purely quantum mechanical treatment of these inequalities, i.e., one without
any relation to hidden variables. Observables measured in experiments are coupled directly to
quantum observables. It was shown that in this framework these {\it inequalities express the compatibility-incompatibility
interplay for local observables.} Thus quantum theory has nothing to do with nonlocality. For the reader's
convenience, we briefly present the aforementioned analysis.
The quantum theoretical CHSH-correlation function has the form:
\begin{equation}
\label{LC}
\langle {\cal B} \rangle_{\psi} =\frac{1}{2} [\langle \hat A_1 \hat B_1 \rangle_{\psi} + \langle \hat A_1 \hat B_2 \rangle_{\psi} + \langle \hat A_2 \hat B_1 \rangle_{\psi} - \langle \hat A_2 \hat B_2 \rangle_{\psi}],
\end{equation}
where $\psi$ is a pure quantum state (mixed states can be considered as well).
(This quantum theoretical correlation function is compared with the experimental CHSH-correlation function.)
In the quantum framework, the CHSH-correlation function can be expressed with the aid of the Bell-operator:
\begin{equation}
\label{L1}
\hat {\cal B} = \frac{1}{2}[\hat A_1(\hat B_1+ \hat B_2) +\hat A_2(\hat B_1-\hat B_2)]
\end{equation}
as
\begin{equation}
\label{L1T}
\langle {\cal B} \rangle_{\psi}= \langle \psi\vert \hat {\cal B} \vert \psi\rangle.
\end{equation}
By straightforward calculation one can derive the Landau identity:
\begin{equation}
\label{L2}
\hat{{\cal B}}^2=I - (1/4) [\hat A_1, \hat A_2][\hat B_1,\hat B_2].
\end{equation}
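The calculation uses only the standard assumptions that the observables are dichotomous, $\hat A_i^2 = \hat B_j^2 = I,$ and that each $A$-observable commutes with each $B$-observable, $[\hat A_i, \hat B_j]=0.$ Squaring (\ref{L1}) gives
\begin{equation}
\hat{{\cal B}}^2 = \frac{1}{4}\Big[\hat A_1^2(\hat B_1+\hat B_2)^2 + \hat A_2^2(\hat B_1-\hat B_2)^2 + \hat A_1 \hat A_2 (\hat B_1+\hat B_2)(\hat B_1-\hat B_2) + \hat A_2 \hat A_1 (\hat B_1-\hat B_2)(\hat B_1+\hat B_2)\Big],
\end{equation}
and, since $(\hat B_1+\hat B_2)^2 + (\hat B_1-\hat B_2)^2 = 4 I$ and $(\hat B_1\pm\hat B_2)(\hat B_1\mp\hat B_2) = \mp[\hat B_1,\hat B_2],$ the cross terms combine to $\frac{1}{4}(\hat A_2 \hat A_1 - \hat A_1 \hat A_2)[\hat B_1,\hat B_2] = -\frac{1}{4}[\hat A_1,\hat A_2][\hat B_1,\hat B_2],$ which yields (\ref{L2}).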
This identity implies that if at least one of the commutators $[\hat A_1, \hat A_2], [\hat B_1,\hat B_2]$
equals zero, i.e., if at least one pair of observables, $(A_1, A_2)$ or (and) $(B_1, B_2),$ is compatible,
then
\begin{equation}
\label{L1Tb}
\sup_{\Vert \psi \Vert=1} \vert \langle {\cal B} \rangle_{\psi}\vert= \Vert \hat{{\cal B}} \Vert =1,
\end{equation}
i.e., for each state $\psi,$
\begin{equation}
\label{L1Ta}
\vert \langle {\cal B} \rangle_{\psi}\vert \leq \Vert \hat{{\cal B}} \Vert = 1.
\end{equation}
This is the quantum version of the CHSH-inequality. The classical bound of 1 has a purely quantum explanation.
Simple spectral analysis shows (see \cite{BELL}) that if the product of commutators is not equal to
zero, i.e., the observables in both pairs $(A_1, A_2)$ and $(B_1, B_2)$ are incompatible,
then either
\begin{equation}
\label{L1Tzm}
\sup_{\Vert \psi \Vert=1} \vert \langle {\cal B} \rangle_{\psi}\vert= \Vert \hat{{\cal B}} \Vert >1,
\end{equation}
or, for the operator with permuted $B$-indexes, $\hat{{\cal B}}_-= \frac{1}{2}[\hat A_1(\hat B_1 + \hat B_2) - \hat A_2(\hat B_1-\hat B_2)],$
\begin{equation}
\label{L1Tzmk}
\sup_{\Vert \psi \Vert=1} \vert \langle {\cal B}_- \rangle_{\psi}\vert= \Vert \hat{{\cal B}}_- \Vert >1.
\end{equation}
This condition can be rewritten in a compact form. Denote by $\sigma$ a permutation of the indexes
of the $A$-observables and of the $B$-observables, and denote by $\hat{{\cal B}}_\sigma$ the operator
with the corresponding permutation of indexes. If the product of commutators is not equal to
zero, then
\begin{equation}
\label{L1Tzmh}
\max_\sigma \sup_{\Vert \psi \Vert=1} \vert \langle {\cal B}_\sigma \rangle_{\psi}\vert >1,
\end{equation}
i.e., there exists some state $\psi$ such that the CHSH-inequality is violated for at least one
of the correlations $\langle {\cal B}_\sigma \rangle_{\psi}.$
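As an illustration (the standard optimal choice of spin observables, recalled here for the reader's orientation), take $\hat A_1=\sigma_x,$ $\hat A_2=\sigma_z$ on $H_1$ and $\hat B_1=(\sigma_x+\sigma_z)/\sqrt{2},$ $\hat B_2=(\sigma_x-\sigma_z)/\sqrt{2}$ on $H_2.$ Then $[\hat A_1,\hat A_2][\hat B_1,\hat B_2] = 4\, \sigma_y \otimes \sigma_y,$ and the Landau identity gives
\begin{equation}
\hat{{\cal B}}^2 = I - \sigma_y \otimes \sigma_y,
\end{equation}
whose eigenvalues are $0$ and $2.$ Hence $\Vert \hat{{\cal B}} \Vert = \sqrt{2},$ the Tsirelson bound in the normalization (\ref{L1}); the bound is attained, e.g., on the singlet state, for which $\vert\langle {\cal B} \rangle\vert = \sqrt{2}.$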
The issue of locality can be formalized by introducing the tensor product structure on the state space $H,$
i.e., $H=H_1\otimes H_2$ and considering observables represented by Hermitian operators in the form
$\hat A_i = \hat {\bf A}_i \otimes I$ and $\hat B_i = I \otimes \hat {\bf B}_i,$ where Hermitian operators
$\hat {\bf A}_i$ and $\hat {\bf B}_i$ act in the spaces $H_1$ and $H_2,$ respectively. Then the condition of commutativity respects the tensor product structure, since $[\hat A_1, \hat A_2]= [\hat {\bf A}_1, \hat {\bf A}_2]\otimes I $ and $[\hat B_1, \hat B_2]= I \otimes [\hat {\bf B}_1, \hat {\bf B}_2].$
Now, if the tensor product structure corresponds to the compound system structure, then
$[\hat {\bf A}_1, \hat {\bf A}_2]\not=0$ and $[\hat {\bf B}_1, \hat {\bf B}_2]\not=0$ are conditions of {\it local
incompatibility of observables.} Thus the satisfaction or violation of the CHSH-inequality is completely determined
by these local conditions.\footnote{In the absence of the tensor product structure, we have to impose the constraint that
the product of commutators differs from zero. The presence of the tensor product structure makes this constraint redundant.}
{\it By interpreting the Bell type inequalities as describing the compatibility-incompatibility interplay we cannot point to any difference between ``intersystem and intrasystem entanglement''.}
At the same time, the analysis presented in the previous sections
points to a crucial difference between classical and quantum entanglements.\footnote{This is the reply to the question of Mario Krenn,
who told me about extended studies on classical entanglement and asked me to comment on them in the light of
the purely quantum mechanical analysis of the Bell type inequalities (after my talk at FQMT19, Prague).} For classical light, the presented incompatibility interpretation of the CHSH inequality for quantum systems has to be used with caution. We restrict considerations to intra-entanglement. Of course, the mathematical structure of states and operators is the same. Thus, all of the above calculations are valid even
in the classical entanglement framework. However, the physical meaning of the operators is not the same. In quantum physics, the operators represent measurement procedures which do not suffer from the double detection loophole; in classical optics, the same operators
represent measurement procedures which do suffer from this loophole. It is not clear whether one can extend the complementarity principle
to such measurements. (This is a good question for experts in quantum foundations.)
PCSFT reproduces quantum correlations without establishing an isomorphism of state spaces and physical variables; the subquantum$\to$quantum
map has a more complex structure. Therefore the above operator analysis of the CHSH-inequality has no direct impact on this theory.
To connect the consequences of this analysis with PCSFT, we have to understand how complementarity arises through the transition from a subquantum
theory to quantum mechanics.
\section{Concluding remarks}
\label{CS}
The aim of this note is to distance the technical impact of ``classical entanglement'' research (both for theory and experiment) \cite{REV}
from its misleading interpretation, as supporting ``quantum nonlocality'' \cite{NL}.
First we present the main points of our analysis of the notion ``quantum nonlocality'':
\begin{itemize}
\item In modern physics, the use of this notion is a real mess.
\item The ontic-epistemic (descriptive-observational) viewpoint on scientific theories clarifies the misunderstanding
between Einstein and Bohr.
\item Einstein's treatment of elements of reality as components of an observational theory led him to the really misleading
notion of quantum nonlocality, based on spooky action at a distance.
\item Bell type inequalities have a purely quantum interpretation as tests of local incompatibility.
\end{itemize}
We now list the main conclusions from our analysis of interrelation of classical and quantum entanglements:
\begin{itemize}
\item The main issue is the difference between classical and quantum superpositions.
It can be explained by the Grangier et al. experiment \cite{GR}.
\item The distinguishing feature of quantum measurements is discreteness and
individuality of outcomes (as expressed in Bohr's notion of phenomenon).
\item Derivation of quantum(-like) correlations with classical entanglement \cite{NL,REV} implies
that the Hilbert space formalism has to be distinguished from genuine quantum physics.
\item Classical entanglement is not consistent with Bell's hidden variables theory:
coincidence detection blocks the use of random variables.
\end{itemize}
This comparison of classical and quantum entanglements and critique of the ``quantum nonlocality'' interpretation
of their difference is the main output of the paper.
Finally, we point to the recent review of Korolkova and Leuchs \cite{GL}, which is an important step towards the resolution of the interpretational problems related to
the interrelation of classical and quantum entanglements. Its authors no longer refer to quantum nonlocality (cf. the previous review \cite{REV}). They recognize that the main issue is not the impossibility of generating intersystem entanglement with classical optics. The main issue is the difference between intrasystem entanglements, classical vs. quantum. And Korolkova and Leuchs, as well as the author of this paper, also emphasize the role of measurement procedures in distinguishing the two types of entanglement. They make the following remark on
an intra-entangled state of a genuine quantum system: {\it ``This state is the quantum entangled state of type
$|01>+|10>$ a strict correlation of one photon in one arm and no photon in the other or vice versa.''} I would just add that,
in fact, the root of the problem lies already in the classical vs. quantum interpretation of the superposition state $|0>+|1>.$
\section*{Appendix 1: Subquantum modeling of inter-intra system entanglements}
One possibility is to appeal to the so-called {\it prequantum classical random field theory} (PCSFT) \cite{PCT1,PQ}, which is devoted to modeling {\it both forms of entanglement}, intra- and intersystem, with the aid of classical random fields. PCSFT provides an abstract random field representation of quantum averages and correlations. In PCSFT, intra- and intersystem entanglements have different mathematical representations. The crucial point is that the representation of intersystem entanglement (in PCSFT) is impossible without assuming the presence of {\it a random background field}, a kind of zero-point field (field of vacuum fluctuations) explored in stochastic electrodynamics. In principle, the presence of such a background field can be interpreted as nonlocality, although the use of such terminology would be really misleading. Say, in radiophysics, nobody would associate mystical features with a random background. However, in this note we shall not present the details of the PCSFT modeling of intra- and intersystem entanglements. We plan to do this in a future publication.
\section*{Appendix 2: Extracting discrete events from continuous random fields}
We remark that the Grangier-type experiments were directed against one special
model of photo-detection, the semiclassical model (see \cite{MANDEL}): ``In the semiclassical theory of
photoelectric detection, it is
found that the conversion of continuous electromagnetic radiation
into discrete photoelectrons is a random process'' (see \cite{GRT}). One can propose other detection
models for such conversion. The simplest way to extract discrete events from continuous random fields is to use threshold detection procedures. Such a project was started in the PCSFT-framework \cite{PCT1,PQ,PCT6,PCT5}.
First we consider intrasystem entanglement. In this case, the threshold detection scheme
can be designed to exclude double detection (clicks in both channels) for a dichotomous observable. Here the theoretical research was completed by numerical simulation \cite{PCT6}. The coefficient of second order coherence $g^{(2)}(0)$ is used as a measure of ``quantumness''; it is possible to construct classical random fields and a threshold detection scheme such that $g^{(2)}(0)< 1.$
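For orientation, in a Grangier-type coincidence scheme this coefficient is estimated from the click probabilities (in our notation):
\begin{equation}
g^{(2)}(0) \approx \frac{P_{12}}{P_1 P_2},
\end{equation}
where $P_1, P_2$ are the probabilities of single detections in the two channels and $P_{12}$ is the coincidence probability. Semiclassical field models constrain $g^{(2)}(0) \geq 1,$ so a threshold detection scheme yielding $g^{(2)}(0)<1$ mimics the single-photon (anticorrelation) behavior.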
Now consider intersystem entanglement in its PCSFT-realization. This realization can also be equipped with a threshold detection scheme, and
correlations based on probabilities for discrete counts can violate the CHSH-inequality \cite{PCT5}. However, in this case I was not able to
close the double detection loophole. The model of (classical field based) intersystem entanglement endowed with the threshold detection scheme is so complex \cite{PCT5} that it is difficult to estimate the magnitude of the coefficient $g^{(2)}(0).$
\section*{Acknowledgments}
This work was supported by the research project of the Faculty of Technology, Linnaeus University,
``Modeling of complex hierarchical structures''.
\section{Introduction}
The Atacama Large Millimeter/submillimeter Array (ALMA) was the top-ranked priority for a new ground-based facility in the 2000 Long Range Plan. Ten years later, at the time of LRP2010, ALMA construction was well underway, with first science observations anticipated for 2011. In the past 8 years, ALMA has proved itself to be a high-impact, high-demand observatory, with record numbers of proposals submitted to the past few annual calls (more proposals than are submitted annually to the Hubble Space Telescope) and large numbers of highly cited scientific papers across fields from protoplanetary disks to high-redshift galaxies and quasars. Since Cycle 4, ALMA has also begun to carry out large programs using more than 60 hours of observing time on the 12-m array or more than 200 hours of time on the Atacama Compact Array (ACA).
The organization of this White Paper is as follows. In Section~\ref{science}, we give some scientific highlights from research with ALMA since the start of observing in fall 2011. Section~\ref{sec-success} reviews the success criteria for Canadian participation in ALMA that were laid out in the LRP2010 ALMA White Paper. Section~\ref{sec-science_drivers} describes the three new science drivers that have been formulated to guide ALMA development over the next 10--15 years. Section~\ref{sec-development} describes the plans for ALMA development over the short to medium term. Section~\ref{sec-cdn_development} discusses some of the possible Canadian contributions to ALMA development in the next decade. Section~\ref{sec-recommendations} contains our recommendations to the LRP 2020 panel and the responses to the eight specific questions posed by them.
\section{ALMA Science highlights 2011--2019}\label{science}
The scientific potential of the submillimetre waveband, as laid out in previous Canadian planning documents \citep{2010arXiv1008.4159S, 2013arXiv1312.5013W}, has been realised through almost a decade of successful ALMA operations.
ALMA was the enabling facility for the first image of a supermassive black hole in the centre of M87 \citep{2019ApJ...875L...1E}. These results and images (Fig.~\ref{fig-russell}) received intensive media coverage across the globe and are without a doubt the single highest-impact science result to come out of ALMA to date.
Canadians led the analysis that extracted the physics from that image, such as black hole
mass and spin. Avery Broderick
(U. Waterloo) was on the panel that
presented the results at a media event in Washington, D.C.
The Event Horizon Telescope (EHT) Collaboration, which includes several Canadians,
has recently been awarded the Breakthrough Prize in Fundamental Physics.
Extremely high-resolution imaging with ALMA has been used to provide evidence for substructure in the lensing halo of SDP.81
\citep{2016ApJ...823...37H}. By applying Bayesian modelling to the uv-data of this strongly lensed source, the authors find evidence for a dark matter sub-halo of mass around $10^9\,$M$_\odot$ and are able to put constraints on the population of dark matter sub-halos down to masses of $2 \times 10^7\,$M$_\odot$.
Understanding that galaxies evolve
under the influence of energetic feedback from supermassive black holes represents a
significant advance in our understanding of galaxy evolution.
A collaboration led by Canadian researchers beginning in Cycle 0 has
shown that radio galaxies located in clusters and groups drive molecular
gas flows (inflow/outflow) several to tens of kpc in altitude
\citep[e.g.,][]{2017ApJ...836..130R}.
The masses range from $10^9\,$M$_\odot$ to
upward of $10^{10}\,$M$_\odot$, the largest observed among AGN, including quasars (Fig.~\ref{fig-russell}). How radio jets and the X-ray bubbles they
inflate into their surrounding hot atmospheres are able to lift such large masses is not understood.
Evidence suggests that the feedback mechanism itself stimulates atmospheric
cooling into molecular clouds. The molecular gas fuels a long-lived feedback loop
that drives the co-evolution of massive black holes and their host galaxies. This process may be
occurring at some level in all massive galaxies, indicating a significant scientific advance
enabled by ALMA.
\begin{figure}[htbp!]
\includegraphics[width=0.44\textwidth]{russell.pdf}
\includegraphics[trim=0 -0.5cm 0 0, width=0.55\textwidth]{apjab30fef9_hr.jpg}
\includegraphics[trim=0 -1.0cm 0 0, width=0.6\textwidth]{HUDF_HST_ALMA.png}
\hspace{0.25cm}
\includegraphics[width=0.35\textwidth]{apjlab0ec7f3_hr.jpg}
\caption{A montage illustrating the diversity of extragalactic science targets observed with ALMA. {\bf Top left}: Composite X-ray (blue) and CO image of the central galaxy in the Phoenix
Cluster. The image shows upward of $10^{10}\,$M$_\odot$ of molecular gas
being lifted out of the galaxy along the edges of rising X-ray bubbles (black).
The X-ray bubbles were inflated by the galaxy’s radio source (not shown).
Much of the molecular gas may have cooled and condensed out of hot atmospheric
gas being lifted behind the X-ray bubbles \citep{2017ApJ...836..130R}.
{\bf Top right}: Redshift evolution of the cosmic molecular gas surface density from ASPECS and other sources \citep{2019ApJ...882..138D}.
{\bf Bottom left}: 12 brightest mm-wave galaxies in the Hubble Ultradeep Field, with ALMA contours shown on top of 3-colour HST images, illustrating the diversity of high-$z$ sources \citep{2017MNRAS.466..861D}. {\bf Bottom right}: EHT image of the black hole in M87 from \citep{2019ApJ...875L...1E}.\label{fig-russell}}
\end{figure}
ALMA has revealed a large diversity of structures in protoplanetary disks in both gas and dust, e.g. gaps, rings and asymmetries (Fig.~\ref{fig-sadavoy}), which could be linked to the presence of planets
\citep[e.g.,][]{2015ApJ...809...93D,2019ApJ...872..112V}.
Detailed studies of GW Ori (Bi et al., in prep.) have revealed multiple, mis-aligned dust rings that may be produced by disk-star interactions in this triple-star system.
Also, ALMA has uncovered the observational evidence for dust trapping in disks through multi-wavelength observations, a phenomenon to enhance dust growth at the start of planet formation \citep[e.g.,][]{2013Sci...340.1199V}.
Furthermore, ALMA disk snapshot surveys have mapped the disk mass distributions in all nearby star forming regions
\citep[e.g.,][]{2018ApJ...859...21A}.
Disk masses appear to be too low to form exoplanetary systems at $\sim$2 Myr, indicating that either disk masses are severely underestimated or planet formation is already close to finished at this stage \citep[e.g.,][]{2018A&A...618L...3M}.
ALMA observations of the kinematics of protostellar disks have shown that these disks can have extremely low levels of turbulence, with turbulent broadening at a level of less than 10\% of the sound speed \citep{2018ApJ...856..117F}. These results limit the magnetic viscosity parameter $\alpha < 0.007$ and are driving a major re-examination of our thinking on turbulence in disks, which can have dramatic effects on models of planet formation \citep[e.g.][]{2019ApJ...875...43K}.
Polarization observations of protoplanetary disks with ALMA, the only facility that can resolve their polarized emission, are also producing surprises. The changing polarization morphology as a function of wavelength observed in HL Tau points to the polarization being produced by dust self-scattering processes at shorter wavelengths and possibly by grains aligned via radiation anisotropy at longer wavelengths \citep[e.g.][and references therein]{2017ApJ...851...55S}, rather than the classical picture of dust grains aligned via magnetic fields. These observations bring into question our ability to use polarization to trace magnetic field morphology, at least in very massive disks like HL Tau. \citet{2019arXiv190902591S} have used ALMA to carry out a complete polarization survey with 35 AU resolution of all the deeply embedded protostars in the nearby Ophiuchus molecular cloud (Fig.~\ref{fig-sadavoy}). They find that the majority of these lower-mass disks have morphologies consistent with dust self-scattering in optically thick disks (Fig.~\ref{fig-sadavoy}).
\begin{figure} [htbp!]
\includegraphics[width=0.42\textwidth]{make_grid-eps-converted-to.pdf}
\includegraphics[width=0.6\textwidth]{Overview_alph.png}
\caption{A gallery of protostellar and protoplanetary disks observed with ALMA. {\bf Left}: The 14 continuum sources detected in polarized emission in the Ophiuchus molecular cloud by \citet{2019arXiv190902591S}. Lines denote the normalized polarization pseudo-vectors and background colour is the integrated intensity (Stokes $I$) on a logarithmic scale. {\bf Right}: Survey of 16 proto-planetary disks with rings and gaps from \cite{2019ApJ...872..112V}.
\label{fig-sadavoy}}
\end{figure}
The surveys carried out with ALMA large proposals have also produced interesting results. One of the first two surveys, ASPECS, the ALMA Spectral line survey in the UDF (the Hubble Ultra-Deep Field), used
ALMA to conduct the first blind CO survey of the high-redshift universe \citep[e.g.,][]{2019ApJ...882..138D}. ALMA has helped to demonstrate that the rise and fall in the cosmic star formation rate as a function of redshift is primarily driven by a similar rise and fall in the molecular gas content, the fuel for star formation (Fig.~\ref{fig-russell}).
The second survey, DSHARP (Disk Substructures at High Angular Resolution Project), is a survey of 20 nearby protoplanetary disks at 5-AU resolution \citep{2018ApJ...869L..41A} in both CO and continuum emission.
More recent ALMA large projects include an ambitious survey of 100,000 giant molecular clouds across 70 nearby spiral galaxies (PHANGS, PI E. Schinnerer), a complete astrochemical survey of the nearest starburst galaxy NGC 253 (ALCHEMI, PI S. Martin), a systematic investigation of [{\sc Cii}] in the early Universe (ALPINE, PI O. Le F\`evre), and a systematic study of the conditions in molecular clouds that set the stellar initial mass function (PI F. Motte). The PHANGS survey is providing unmatched data on the structure of the star-forming interstellar medium in the local Universe \citep{2018ApJ...860..172S}. The ALPINE survey has recently discovered a rare triple merger at $z=4.56$ \citep{2019arXiv190807777J}, indicating a major growth phase which will likely produce a single massive galaxy by $z\sim 2.5$.
In Cycle 7, the first Canadian-led large program (VERTICO, PI. T. Brown) will survey CO emission in Virgo spirals with the ACA to probe the effect of the cluster environment on the molecular gas.
\section{Has
Canadian participation in ALMA been a success?}
\label{sec-success}
The ALMA White Paper submitted to LRP2010
\citep{wilson2010}
laid out a number of specific accomplishments by ALMA and the Canadian ALMA user community that would likely lead to our community viewing its participation in ALMA as a success. Ten years on, it is enlightening to review this list and see how we did. All publications statistics are from C. Wilson's ALMA overview at the June 2018 CASCA meeting.
\begin{enumerate}
\item {\it Significant numbers of Canadian first author papers are based on
ALMA data or theoretical
interpretations of ALMA data}
As of June 2018, approximately 2.2\% of ALMA papers (24 papers total) had a Canadian-based researcher as first author. This fraction is very close to our financial share of global ALMA operations. An additional 11.6\% of ALMA papers (approximately 130 papers in total) had a Canadian-based researcher as a co-author of the paper.
\item {\it Canadian first author papers based on
ALMA data or theoretical
interpretations of ALMA data have a high impact}
As of June 2018, four of the 70 most highly cited ALMA papers (5.7\%) had a Canadian-based researcher as first author.
\citet{2013ApJ...767..132H}
described ALMA observations of strongly lensed dusty galaxies discovered by the South Pole Telescope. \citet{2013ApJ...770...13W,2015ApJ...807..180W} measured the star-formation rates and dynamical masses of galaxies hosting supermassive black holes at a redshift $z=6$. \citet{2014ApJ...785...44M} presented high-resolution observations of a flow of $10^{10}\,$M$_\odot$ of molecular gas in the cooling flow of the brightest cluster galaxy in the Abell 1835 cluster.
\item {\it Canadians are playing important roles in some of the very
highest profile ALMA papers coming from large international teams}
Canadian research with ALMA spans a wide range of topics. Three examples are given here.
(1) Canadian researchers have played leading roles in many of the results coming from ALMA observations of high-redshift sources identified by the South Pole Telescope \citep[e.g.,][]{2013ApJ...767..132H}. \citet{2018Natur.556..469M} revealed a massive core for a cluster of galaxies at a redshift of 4.3 that could be building one of the most massive structures in the Universe.
(2) Canadians are playing leadership roles in two of the 14 ALMA Large Programs approved as of July 2019. The PHANGS survey is mapping 100,000 giant molecular clouds in 70 nearby galaxies (co-PI E. Rosolowsky, U. Alberta). The Canadian-led VERTICO survey is mapping molecular gas on 0.5-kpc scales in all the spiral galaxies in the Virgo Cluster (PI T. Brown, co-PI C. Wilson, McMaster University).
(3) Canadians are also making significant contributions in studying protoplanetary disks. For example, \citet{2019ApJ...872..112V} have published a survey of 16 disks showing ring-like structures. This paper is the first systematic study of morphologies and gap locations and has already collected 24 citations in less than a year.
\item {\it Significant numbers of Canadian astronomers who do not have
radio astronomy as their primary background are involved in
ALMA science, either as a primary user of data or as a significant
collaborator}
Canadians at NRC and the University of Calgary led the development of the ``ALMA Primer.'' This ${>}20$-page overview of ALMA's capabilities was first introduced in Cycle 0 and has been updated at each subsequent Cycle. By describing ALMA capabilities in simpler terms and including a variety of example projects and observations, the ALMA Primer has played a significant role in making ALMA more accessible to new users, be they students or more senior researchers without significant millimetre or interferometry experience.
One example of diverse Canadian participation is the VERTICO large program approved for ALMA Cycle 7. VERTICO has a total of 26 co-Is and includes 5 out of 9 Canadian co-Is whose expertise is in optical astronomy or numerical simulations of galaxies.
\item {\it Canadian graduate students are participating or leading ALMA
papers and using ALMA data in their theses}
As of June 2018, 12 of 23 Canadian first-author papers were led by graduate students and a further four were led by postdocs. Indeed, the highest cited Canadian-led paper at that time was by a graduate student \citep{2013ApJ...767..132H}.
The 2019 Plaskett Medal winner, Alexandra Tetarenko, used data from ALMA (and many other telescopes!) in her award-winning Ph.D. thesis. An incomplete list of other Canadian Ph.D. theses to use ALMA data includes K. Sliwa (McMaster), A. Vantyghem (Waterloo), Y. Hezaveh (McGill), and J. White (UBC). Ongoing Ph.D. theses using ALMA work include
R. Hill (UBC), L. Francis (UVic), J. Bi (UVic), A. Bemis (McMaster), N. Brunetti (McMaster), and P. Tamhane (Waterloo).
Postdocs working in Canada have led high-impact programs and papers. For example, T. Brown (McMaster) is the PI of the first Canadian-led ALMA large program (VERTICO), which will measure the effect of environment on the molecular gas in spiral galaxies in the Virgo Cluster. H. Russell (Waterloo) played a major role in the studies of cooling flows around brightest cluster galaxies \citep{2017ApJ...836..130R}. R. Mann (NRC) carried out a detailed study of five proplyds in Orion, revealing rapid dissipation of disks that would inhibit planet formation in UV-dominated environments \citep{2014ApJ...784...82M}. R. Friesen (Dunlap Institute) identified what may be the first hydrostatic core in high-resolution observations of the Ophiuchus star forming region \citep{2014ApJ...797...27F}.
\item {\it The Band-3 receivers are delivered on time, meet or exceed ALMA
specifications, and are
being used for good science by international ALMA community}
The Band 3 receivers were one of the four receiver bands available in ALMA's first Early Science call for proposals (Cycle 0). They continue to be in high demand, second only to Band~6 in the amount of time requested or awarded per band across all 66 ALMA antennas.
\item {\it Canada is leading or playing a major role in an interesting ALMA
development project such as the Band-1 receivers}
ALMA Development Projects are meant to provide physical or software deliverables to be incorporated in the ALMA observatory. Across the observatory, examples of Development Projects range from the construction of the Band-5 receivers to the phasing hardware and software that enabled ALMA to join the Event Horizon Telescope.
Two ALMA Development Projects have been led by Canadians. One was ``CARTA: The Next Generation ALMA Viewer,'' led by E. Rosolowsky (UBC/U. Alberta), which was designed to replace the existing viewer in CASA and has been released to the user community (\url{https://cartavis.github.io}). The second was ``Band-3 cold cartridge assembly magnet and heater installation for deflux operation,'' led by L. Knee (NRC). Canada has also been participating in the Band-1 receiver development project led by Taiwan.
ALMA Development Studies are typically one-year programs with relatively modest funding that may eventually lead to a Development Project. As an example, the Canadian-led CARTA viewer initially began as a development study. Canadians have led six of the 29 development studies awarded funding. In the most recent call in 2019, two ALMA development studies were awarded to Canadian teams from NRC. One program is ``High level design and integration of NRC TALON-based correlator for increased channels, bandwidth, and baselines.'' The other program is ``ARCADE: ALMA Reduction in the CANFAR Data Environment'' (see Section~\ref{sec-cdn_development} for more on ARCADE).
\item {\it Canadians are playing a leadership role in some aspect of ALMA
operations in North America or internationally}
The current ALMA Director, Sean Dougherty, is a Canadian. Eduardo Hardy was the legal representative of AUI (the U.S.-based not-for-profit corporation that holds the NSF grant to operate ALMA and NRAO) in Chile for many years and remains their senior advisor in Chile. Rachel Friesen (currently at UofT) was a staff member at the North American ALMA Science Center. Christine Wilson served on the search committees for the second and fourth ALMA Directors. Jim Hesser was a long-serving member of the ALMA Board and chaired its Budget Committee. James Di Francesco is currently the Canadian member of the ALMA Board. There has always been a Canadian as one of the North-American representatives on the ALMA Science Advisory Committee (currently C. Wilson).
\end{enumerate}
Based on these metrics laid out a decade ago, the Canadian astronomical community should view
our participation in ALMA as a
success. {\bf The successful achievement of these wide-ranging goals argues strongly for Canada's continuing participation in operating and developing ALMA over the next decade and beyond.}
\section{New fundamental science drivers for ALMA}
\label{sec-science_drivers}
In consultation with the ALMA user community, a new set of top-level science goals has been developed to guide ALMA development over the next decade
\citep{2019arXiv190202856C}.
These new goals are as follows:
\begin{itemize}
\item {\bf Origins of galaxies:} Trace the cosmic evolution of key elements from the first galaxies ($z>10$) through the peak
of star formation ($z=2$--$4$) by detecting their cooling lines, both atomic ([C\,{\sc ii}], [O\,{\sc iii}]) and
molecular (CO), and dust continuum, at a rate of 1--2 galaxies per hour.
\item {\bf Origins of chemical complexity:} Trace the evolution from simple to complex organic molecules through the process of star
and planet formation down to solar system scales (10--100\,au) by performing full-band
frequency scans at a rate of 2--4 protostars per day.
\item {\bf Origins of planets:} Image protoplanetary disks in nearby ($d\,{<}\,150\,$pc) star-formation regions to resolve the Earth-forming zone ($\sim$1\,au) in the dust continuum at wavelengths shorter than 1\,mm, enabling
detection of the tidal gaps and inner holes created by planets undergoing formation.
\end{itemize}
Achieving these goals will require a set of ambitious upgrades to ALMA over the next 10--15 years. These upgrades will keep ALMA at the cutting edge of astronomy and allow it to continue producing transformational scientific results in future decades.
\section{ALMA
development in the next decade and beyond}
\label{sec-development}
The ALMA2030 report was
submitted to the ALMA Board in March 2015. In it, four development paths were recommended based on their long-term scientific potential:
\begin{itemize}
\item improvements to the ALMA archive to achieve gains in usability and increase the impact of the observatory;
\item larger bandwidths and improved receiver sensitivity to achieve gains in speed;
\item longer baselines to enable qualitatively new science, and;
\item increased mapping speed to enable more efficient wide-field imaging.
\end{itemize}
The ALMA Development Working Group subsequently divided these four paths into short-term and medium-term development goals \citep{2019arXiv190202856C}.
\subsection{Near-term
ALMA development priorities}
The current ALMA development priorities are to:
\begin{itemize}
\item broaden the receiver IF bandwidth by at least a factor of 2;
\item upgrade the digitizers and digital processing to allow for larger bandwidth, and;
\item upgrade the correlator to process these larger bandwidths with high spectral resolution.
\end{itemize}
These developments will significantly reduce the time required for a wide range of scientific applications. For example, the time required would be reduced by a factor of 2 for
blind redshift surveys, spectral scans, and deep continuum surveys, while the time required for high spectral-resolution spectral scans (for example in low-mass protostars or evolved stellar envelopes) would be reduced by factors of 8--16.
In terms of receiver upgrades, the priority is: (1) intermediate frequencies 200--425\,GHz; (2) lower frequencies ${<}\,$200\,GHz; and
(3) higher frequencies ${>}\,$425\,GHz. This frequency prioritization was selected to have the most direct impact on enabling the new science drivers. For more details, see \citet{2019arXiv190202856C}.
\subsection{Archive development}
In addition, the Working Group recommended that a committee be tasked with prioritizing the archive capabilities that will be required to facilitate increased scientific exploitation
of the ALMA archive over the next decade. Particular attention will need to be paid towards facilitating data mining of ALMA's spectral products, especially in light of the planned receiver upgrades. The Canadian-led ARCADE project has the potential to be a useful contribution to archive capabilities (see Section~\ref{sec-cdn_development}).
\subsection{Medium-term development}
The medium-term opportunities include: extended baselines with at least six additional antennas to ensure the minimum required $uv$ coverage on the longest baselines; focal-plane arrays to increase the mapping speed; and additional 12-m antennas to enhance the sensitivity. All these opportunities were recommended for further development studies of their scientific, technical, and logistical potential and scope.
A large (25--50m) single-dish telescope was noted to provide strong scientific synergies with ALMA, but was thought to be outside the scope of the current ALMA project.
\section{Potential Canadian contributions to ALMA development 2020--2030}
\label{sec-cdn_development}
There are a variety of paths open to Canada to contribute to ALMA development. For example, the ARCADE study could lead to a significant contribution relevant to archival research.
ARCADE is a new initiative to make ALMA data more accessible to Canadian astronomers, which aims to provide a virtual computing environment for any researcher’s ALMA data needs (archival projects as well as PI science). Within the ARCADE environment, users will have access to the large amounts of RAM and storage space needed for processing ALMA data, which may be challenging to obtain at their home institutions. Researchers will be able to run their preferred version of CASA (note that older versions of CASA are required for re-calibrating archival data sets) to perform the re-calibration that is the first step of any analysis, since only raw data are provided in the ALMA archive. At present, ARCADE is in an early prototyping stage. The NRC’s Millimetre Astronomy Group and the Canadian Astronomical Data Centre, in collaboration with C.~Wilson’s group at McMaster University, were recently awarded a one-year ALMA North American Development Study for ARCADE, which will allow the system to be further developed. Specifically, issues related to scalability in a virtual environment (processing resources and storage space) to allow for multiple parallel users will be investigated by CADC. Meanwhile, the Millimetre Astronomy Group (NRC) and McMaster will undertake significant external user testing to improve the functionality of the system and identify any additional software needs (e.g., independent but CASA-affiliated software such as ADMIT). Over the course of the next decade, the goal is for ARCADE to be offered to astronomers across Canada, where it can reduce the hardware and software access hurdles that may discourage analysis of ALMA data by experienced and new users alike.
Another area where Canada could take the lead would be the upgrade of the Band-3 receivers to wider bandwidth.
Canadians could also take the lead on studies of a particular medium-term development, such as focal-plane arrays. Given our experience with correlator development for the JCMT and the Karl. G. Jansky Very Large Array, it is possible that Canada could play a significant role in some aspects of the upgrade to the ALMA correlator.
Contributing to ALMA development would bring in ``new'' money from the ALMA development funding stream, but could also involve significant in-kind contributions in time and effort
by scientists at NRC and Canadian universities.
\section{Recommendations}
\label{sec-recommendations}
Canada is a partner in ALMA, and the observatory is being successfully exploited by Canadians. It is important, however, to continue to build on this success. As we move forward, we need to:
\begin{itemize}
\item maintain Canadian access to ALMA and our competitiveness in using ALMA;
\item preserve full Canadian funding for our share of ALMA operations;
\item identify components of ALMA development in which Canada can play a significant role, including stimulating expertise in submillimetre instrumentation to capitalize on future opportunities; and
\item keep Canadians fully trained and engaged in ALMA, as new capabilities become available, reaching the widest possible community of potential users.
\end{itemize}
Over the past decade, the successful achievement of the wide-ranging goals laid out in the LRP 2010 white paper \citep{wilson2010}
argues strongly for Canada's continuing participation in operating and developing ALMA over the next decade and beyond.
\bigskip
\bigskip
\begin{lrptextbox}[How does the proposed initiative result in fundamental or transformational advances in our understanding of the Universe?]
In terms of sensitivity, speed, and angular resolution, ALMA is the most powerful (sub-)millimetre interferometer in the world and is already delivering transformational science (see Section~\ref{science}). The ALMA upgrades planned for the next 10--15 years will ensure that ALMA will continue to enable fundamental scientific advances for decades to come (see Section~\ref{sec-development}).
\end{lrptextbox}
\begin{lrptextbox}[What are the main scientific risks and how will they be mitigated?]
There are no significant scientific risks to ALMA development upgrades. However, we will need to ensure that the community has the resources to deal with the (already) large data volumes produced by ALMA. The ARCADE development project (Section~\ref{sec-cdn_development}) is an interesting step in this direction.
\end{lrptextbox}
\clearpage
\begin{lrptextbox}[Is there the expectation of and capacity for Canadian scientific, technical or strategic leadership?]
Canadians have already demonstrated significant scientific leadership with ALMA (see Sections~\ref{science} and~\ref{sec-success}). The first Canadian-led large proposal (VERTICO, PI. T. Brown) is only the most recent example of our successful scientific leadership. There will be opportunities for Canadian technical and strategic leadership in specific areas of ALMA development over the next 10--15 years (see Section~\ref{sec-cdn_development}).
\end{lrptextbox}
\begin{lrptextbox}[Is there support from, involvement from, and coordination within the relevant Canadian community and more broadly?]
ALMA is clearly the premier instrument for high-resolution (sub-)millimetre-wave astronomy. The high demand for time on ALMA by Canadians and the increasing number of papers and projects led by Canadians (Sections~\ref{sec-science_drivers} and~\ref{sec-success})
are signs of our community's involvement, as are the large number of development studies and programs led by Canadians (see Section~\ref{sec-cdn_development}).
\end{lrptextbox}
\begin{lrptextbox}[Will this program position Canadian astronomy for future opportunities and returns in 2020--2030 or beyond 2030?]
Yes: for example, in scientific results (Section~\ref{science}), student training (Section~\ref{sec-success}), and technical and software development (Section~\ref{sec-cdn_development}).
\end{lrptextbox}
\begin{lrptextbox}[In what ways is the cost-benefit ratio, including existing investments and future operating costs, favourable?]
As an observatory, ALMA has completed construction and is in steady-state operations. Its annual operating costs are on the order of \$80M/yr (USD), shared across the three regional partners. Spending on the order of \$80M (shared by all ALMA partners over 10 years) on development work that will produce a more powerful ALMA is an extremely cost-efficient investment. Canada's contribution to ALMA development is part of our share of operating costs. We can recover some of that funding for Canada by participating in and/or leading one or more ALMA development programs.
\end{lrptextbox}
\begin{lrptextbox}[What are the main programmatic risks
and how will they be mitigated?]
International co-ordination of ALMA Development programs could be improved, a point which has been highlighted several times by the ASAC in its reports to the ALMA Board. Without better co-ordination of development efforts across the ALMA community, there is a real risk that these planned improvements to ALMA will not happen in a timely manner.
Another possible risk is that lead partners may turn their attention (and funding) to new projects, such as ESO's ELT, NRAO's ngVLA, or a new NAOJ project. Downward pressure on the ALMA operating budget is also a risk for all partners.
\end{lrptextbox}
\clearpage
\begin{lrptextbox}[Does the proposed initiative offer specific tangible benefits to Canadians, including but not limited to interdisciplinary research, industry opportunities, HQP training,
EDI,
outreach or education?]
ALMA is having a major impact on HQP training in Canada. This is indicated by the large number of Canadian-led papers for which a student or postdoc is the first author and in the Ph.D. theses that use significant data from ALMA (see Section~\ref{sec-success}). ALMA is in the process of becoming a partner in the CREATE program New Technologies for Canadian Observatories (NTCO, PI. K. Venn) and will soon welcome its first graduate intern from Canada, bringing an additional dimension to graduate student training.
Within Canada, the ALMA user community has a higher proportion of women than average, giving it a role in EDI. For example, in ALMA papers published to June 2018, 2/8 students, 4/4 postdocs, and 2/4 faculty/staff (so 50\% in total) who were first authors on one or more ALMA papers were women.
ALMA results also play a significant role in public outreach, for example in the press coverage of the recent black-hole image from the EHT team, or in public talks, such as C. Wilson's keynote address at Starfest 2016, the largest star party in North America.
\end{lrptextbox}
% --- arXiv:1609.08763 ---
\section{Introduction}
Let $X$ be a metric space, $E\subset X$ and $f:E \to X$ be a map in a class $\mathscr{F}$. When can $f$ be extended to a mapping $F:X \to X$ in the same class? We are interested in the above extension question for the classes of bi-Lipschitz maps and quasisymmetric maps. Questions related to quasisymmetric extensions have been considered by Beurling and Ahlfors \cite{BA}, Ahlfors \cite{Ah,Ahl}, Carleson \cite{CA}, Tukia and V\"ais\"al\"a \cite{TukVais, TukiaVais-LIPandLQCextension, TuVaext2}, V\"ais\"al\"a \cite{Vaisext}, Kovalev and Onninen \cite{Kovalev} and Fujino \cite{Fujino}. Results related to bi-Lipschitz extension appear in the work of Tukia \cite{TukiaBLExt, Tukia-ext}, David and Semmes \cite{DS}, MacManus \cite{MM} and Alestalo and V\"ais\"al\"a \cite{AlestaloVais}.
Tukia and V\"ais\"al\"a \cite{TuVaext2} showed that if $X=\mathbb{R}^p$ or $X=\mathbb{S}^p$ and $n>p$, then any quasisymmetric mapping $f:X \to \mathbb{R}^n$ extends to a quasisymmetric homeomorphism of $\mathbb{R}^{n}$ when $f$ is locally close to being a similarity, and every bi-Lipschitz mapping $f:X\to \mathbb{R}^n$ extends to a bi-Lipschitz mapping of $\mathbb{R}^n$ when $f$ is close to being an isometry. Later, V\"ais\"al\"a \cite{Vaisext} extended these results to all compact, $C^1$ or piecewise linear $(n-1)$-manifolds $X$ in $\mathbb{R}^n$. Similar results appeared recently in the work of Azzam, Badger and Toro \cite{AzBaTo}. The requirements on the embedding $f$ in these three papers, ensured the homeomorphic extension of $f$ to $\mathbb{R}^n$.
In this article we look at the extension problem from a different perspective: assuming that there is a homeomorphic extension, when can we extend the mapping in question to a quasisymmetric or bi-Lipschitz homeomorphism? Given a metric space $X$ we say that $E\subset X$ has the \emph{quasisymmetric extension property} (resp. \emph{bi-Lipschitz extension property}) in $X$, or \emph{QSEP} (resp. \emph{BLEP}) for short, if every quasisymmetric (resp. bi-Lipschitz) embedding $f:E \to X$ that can be extended as a homeomorphism of $X$ can also be extended as a quasisymmetric (resp. bi-Lipschitz) homeomorphism of $X$.
When $X=\mathbb{R}$ or $X=\mathbb{S}^1$, trivially every subset of $X$ has the BLEP in $X$, but the same is not true in the quasisymmetric class. If $E =\{0\}\cup\{e^{-n!}\}_{n\geq 2}$, then $f:E \to \mathbb{R}$ with $f(x)=(-\log{x})^{-1}$ is monotone and quasisymmetric but cannot be extended quasisymmetrically in any open set containing the point $0$ \cite[p. 89]{Heinonen}. Thus, more regularity for sets $E$ must be assumed. Trotsenko and V\"ais\"al\"a \cite{TroVa} introduced the notion of \emph{relative connectedness}, a weak version of uniform perfectness, and as a corollary of their main theorem, \emph{if $E\subset \mathbb{R}^n$ is not relatively connected, then there exists a quasisymmetric embedding $f:E \to\mathbb{R}^n$ that can be extended homeomorphically to $\mathbb{R}^n$ but not quasisymmetrically}; see \textsection\ref{sec:relcon2}. Conversely, we showed in \cite{V} that if $E\subset \mathbb{R}$ is relatively connected, then it has the QSEP in $\mathbb{R}$.
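The failure of relative connectedness for this set (the definition is recalled in \textsection\ref{sec:relcon}) can be checked numerically. The following is an illustrative sketch, not part of the paper's argument: the truncation to $n\leq 4$, the test constant $c=100$ and the radius $r=10^{-5}$ are ad hoc choices; for any $c$ a suitable radius exists inside some gap of $E$.

```python
import math

# Finite truncation of E = {0} U {e^{-n!} : n >= 2}; e^{-120} already
# underflows double precision, so n = 2, 3, 4 suffice to exhibit the gaps.
E = [0.0] + sorted(math.exp(-math.factorial(n)) for n in (2, 3, 4))

c = 100.0          # candidate constant for c-relative connectedness
x, r = 0.0, 1e-5   # radius chosen inside the large gap (e^{-24}, e^{-6})

ball = [y for y in E if abs(y - x) <= r]           # closed ball around x
annulus = [y for y in ball if abs(y - x) >= r / c]

# The closed ball is neither {x} nor all of E, yet the annulus is empty,
# so the c-relative-connectedness test fails at this point and scale.
print(len(ball) > 1, ball != E, annulus)  # True True []
```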
On the other hand, for each $n\geq 2$ there exists a relatively connected, compact and countable set $E_n\subset \mathbb{R}^n$ and a bi-Lipschitz embedding $f:E_n\to\mathbb{R}^n$ that admits a homeomorphic extension to $\mathbb{R}^n$ but not a quasisymmetric extension \cite[Theorem 5.1]{V}. These examples show that in dimensions $n\geq 2$ relative connectedness does not suffice for either the QSEP or the BLEP and the geometry of the complement of $E$ comes into play.
It follows from the celebrated work of Ahlfors \cite{Ah}, Beurling and Ahlfors \cite{BA} and Tukia \cite{TukiaBLExt} that $\mathbb{R}$ and $\mathbb{S}^1$ have both extension properties in $\mathbb{R}^2$. In this paper we extend their results to boundaries of planar \emph{uniform domains}, a broad family of domains in $\mathbb{R}^2$ whose local geometry resembles that of the disk and of the upper half-plane. Uniform domains were already known to be extension domains for Sobolev spaces \cite{Jones} and, more generally, Newtonian spaces \cite{BjSh}.
\begin{thm}\label{thm:main}
Let $U \subset \mathbb{R}^2$ be a $c$-uniform domain and $f: \partial U \to \mathbb{R}^2$ be an embedding that can be extended homeomorphically to $\overline{U}$.
\begin{enumerate}
\item If $f$ is $L$-bi-Lipschitz, then $f$ extends to an $L'$-bi-Lipschitz homeomorphism $f:\mathbb{R}^2\to \mathbb{R}^2$ with $L'>1$ depending only on $L$ and $c$.
\item If $\partial U$ is $C$-relatively connected and $f$ is $\eta$-quasisymmetric, then $f$ extends to an $\eta'$-quasisymmetric homeomorphism $f:\mathbb{R}^2 \to \mathbb{R}^2$ with $\eta'$ depending only on $\eta$, $c$ and $C$.
\end{enumerate}
\end{thm}
The second part of Theorem \ref{thm:main} can be viewed as a converse of a boundary quasiconformal extension result of V\"ais\"al\"a \cite{VaisQM}: if $U,U'\subset \mathbb{R}^2$ are uniform domains and $f:U \to U'$ is a quasiconformal homeomorphism that can be extended homeomorphically to $\overline{U}$, then $f$ can be extended quasisymmetrically to $\overline{U}$; see Lemma \ref{lem:QCtoQM}.
Roughly speaking, uniformity is a combination of two other notions: a domain is uniform if every pair of points can be joined by a curve whose length is comparable to the distance between the points (\emph{quasiconvexity}) and the curve does not go too close to the boundary of the domain (\emph{John property}); see \textsection\ref{sec:uniform} for the precise definition. The uniformity assumption on $U$ is close to being necessary for both extensions, as neither quasiconvexity nor the John property alone is sufficient; see \textsection\ref{sec:assumptions}.
In $\mathbb{R}^3$, Theorem \ref{thm:main} fails in both cases as there exists a bi-Lipschitz embedding of $\mathbb{S}^2$ into $\mathbb{R}^3$ that can be extended homeomorphically to $\mathbb{R}^3$ but not quasisymmetrically \cite[\textsection15]{TukiaBLExt}.
As a corollary, we obtain a sufficient condition for sets $E$ to satisfy the QSEP and the BLEP in $\mathbb{R}^2$. The arguments apply verbatim in the case that $E\subset \mathbb{S}^2$.
\begin{cor}\label{cor:main}
If $E \subset \mathbb{R}^2$ is such that each component of $\mathbb{R}^2\setminus \overline{E}$ is uniform with the same constant, then $E$ has the BLEP in $\mathbb{R}^2$. If additionally $E$ is relatively connected, then it has the QSEP in $\mathbb{R}^2$.
\end{cor}
The tameness of Cantor sets in $\mathbb{R}^2$ implies that in Theorem \ref{thm:main} the assumption of homeomorphic extension of $f$ to $\mathbb{R}^2\setminus E$ can be dropped when $E$ is totally disconnected. However, in higher dimensions, due to the existence of wild Cantor sets, an increase in dimension is needed. For simple examples of wild Cantor sets we refer to Daverman \cite{Daverman}.
Moreover, in the plane, the complement of a closed set $E\subset\mathbb{R}^2$ with empty interior is uniform if and only if $E$ is \emph{uniformly disconnected} \cite{MM2} but this is not true in $\mathbb{R}^n$ when $n\geq 3$. Uniform disconnectedness is in a sense the opposite of uniform perfectness: for each point $x$ there exists an ``isolated island'' $E' \subset E$ of practically any diameter whose distance from the rest of $E$ is at least a fixed multiple of its diameter. In dimensions $n\geq 3$, uniform disconnectedness of $E$ can be used as a natural analogue of uniformity of $\mathbb{R}^n \setminus E$.
\begin{thm}\label{thm:cantor}
Let $n\geq 3$ be an integer, let $E$ be a $c$-uniformly disconnected subset of $\mathbb{R}^n$ and let $f:E \to \mathbb{R}^n$ be an embedding.
\begin{enumerate}
\item If $f$ is $L$-bi-Lipschitz, then it extends to an $L'$-bi-Lipschitz homeomorphism $F:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ with $L'>1$ depending only on $L$, $c$ and $n$.
\item If $E$ is $C$-relatively connected and $f$ is $\eta$-quasisymmetric, then $f$ extends to an $\eta'$-quasisymmetric homeomorphism $F:\mathbb{R}^{n+1}\to \mathbb{R}^{n+1}$ with $\eta'$ depending only on $\eta$, $c$, $C$ and $n$.
\end{enumerate}
\end{thm}
In the statement of Theorem \ref{thm:cantor}, $E$ is identified with the set $E\times\{0\}\subset \mathbb{R}^{n+1}$.
In \cite{Vaisquest}, V\"ais\"al\"a asked if the \emph{Klee trick} holds true in the quasisymmetric class, i.e., if $E\subset \mathbb{R}^n$ is compact and $f:E \to \mathbb{R}^m$ is a quasisymmetric embedding, is there a quasisymmetric homeomorphism $F:\mathbb{R}^{m+n}\to\mathbb{R}^{m+n}$ that extends $f$? Since all uniformly perfect and uniformly disconnected sets quasisymmetrically embed in $\mathbb{R}$ \cite{DS}, Theorem \ref{thm:cantor} provides an affirmative answer for this class of sets. However, the general case remains open.
\subsection{Organization of the paper}
In \textsection\ref{sec:prelim} we review the notions of quasisymmetric maps, uniform domains, relatively connected sets, uniformly disconnected sets and Whitney-type decompositions.
In \textsection\ref{sec:isolated} and \textsection\ref{sec:unbounded} we reduce the proof of Theorem \ref{thm:main} to the case where $U$ is the complement of a compact perfect set, and the proof of Theorem \ref{thm:cantor} to the case where $E$ is compact and perfect. In \textsection\ref{sec:extcompl}, given a uniform domain $U\subset \mathbb{R}^2$ and a bi-Lipschitz embedding $f:\partial U \to \mathbb{R}^2$ (resp. uniform domain $U\subset \mathbb{R}^2$ with relatively connected boundary and a quasisymmetric embedding $f:\partial U \to \mathbb{R}^2$) that can be extended homeomorphically to $\mathbb{R}^2$, we extend $f$ bi-Lipschitzly (resp. quasisymmetrically) to $\mathbb{R}^2\setminus U$. After that reduction, it suffices to extend $f$ to a map $f:\overline{U} \to \overline{U'}$ where $U'$ is a uniform domain.
The extension of $f$ to $\overline{U}$ follows Carleson's method \cite{CA}. The main idea is the construction of two combinatorially equivalent Whitney-type decompositions $\mathscr{Q}$ and $\mathscr{Q}'$ for $U$ and $U'$ respectively. That is, $\mathscr{Q}$ (resp. $\mathscr{Q}'$) is a family of mutually disjoint open subsets of $U$ (resp. $U'$) such that the union of their closures is the whole $U$ (resp. $U'$), the diameter of each element of $\mathscr{Q}$ (resp. $\mathscr{Q}'$) is comparable to its distance to $\partial U$ (resp. $\partial U'$) and there exists a homeomorphism of $\overline{U}$ onto $\overline{U'}$ that maps each element of $\mathscr{Q}$ onto exactly one element of $\mathscr{Q}'$. Moreover, the boundary of every domain in $\mathscr{Q}$ and $\mathscr{Q}'$ is a finite union of $L$-bi-Lipschitz circles whose mutual distances and diameters are bounded below by a constant $d>0$. We show in \textsection\ref{sec:Ext} that such domains possess both the BLEP and the QSEP.
The main novelty in this approach is that, unlike a quasidisk, the boundary of a uniform domain may have uncountably many components.
Nevertheless, we show in \textsection\ref{sec:separation} that the boundary of a uniform domain satisfies a weak form of uniform disconnectedness: given a point $x\in \partial U$, for any $r>0$ there exists a closed set $A\subset \partial U$ containing $x$ whose distance from $\partial U \setminus A$ is at least a constant multiple of $r$.
In \textsection\ref{sec:whitney}, using the results of \textsection\ref{sec:separation}, we construct the decompositions $\mathscr{Q}$ and $\mathscr{Q}'$ and we prove Theorem \ref{thm:main}. Towards the construction we distinguish two cases: one for the part of $U$ around non-degenerate components of $\partial U$, which we treat in \textsection\ref{sec:qcircledecomp}, and another for the rest of $U$ which we treat in \textsection\ref{sec:whitney}. In the first case the decomposition resembles that of the exterior of a quasidisk (although extra care has to be taken for all of the components of $\partial U$ around the quasidisk) while in the second $U$ resembles the exterior of a uniformly disconnected set.
The proof of Theorem \ref{thm:cantor} relies on a uniformization result for Cantor sets with bounded geometry that generalizes a $2$-dimensional result of MacManus \cite{MM2} to higher dimensions. Namely, in \textsection\ref{sec:cantor} we show that a compact set $E\subset \mathbb{R}^n$ is uniformly perfect and uniformly disconnected if and only if there exists a quasiconformal homeomorphism of $\mathbb{R}^{n+1}$ mapping $E$ onto the standard middle-third Cantor set $\mathcal{C} \subset \mathbb{R}$.
\subsection*{Acknowledgements} We wish to thank David Herron, Pekka Koskela, Kai Rajala and Jang-Mei Wu for various discussions on this subject and Jussi V\"ais\"al\"a for bringing Lemma \ref{lem:couniform} to our attention. Part of the research that led to this paper was done while the author was visiting the University of Illinois. We would like to express our gratitude to this institution for their hospitality. Finally, we would like to thank the anonymous referee whose numerous comments and corrections have significantly improved the exposition of this article.
\section{Preliminaries}\label{sec:prelim}
A set $E$ with one point is called a \emph{degenerate} set. A non-degenerate compact connected set is called a \emph{continuum}.
For the rest of the paper, for all integers $m<n$, we identify sets $E\subset \mathbb{R}^m$ with sets $E\times\{0\}^{n-m} \subset \mathbb{R}^n$ via the natural embedding of $\mathbb{R}^m$ into $\mathbb{R}^n$
\[ (x^1,\dots,x^m) \mapsto (x^1,\dots,x^m,0,\dots,0).\]
\subsection{Mappings}
A homeomorphism $f\colon D\to D'$ between two domains in $ \mathbb{R}^n$ is called $K$-\emph{quasiconformal} for some $K\geq 1$ if, for all $x\in D$, $f$ satisfies the distortion inequality
\[\limsup_{r\to 0} \frac{\sup_{y\in\partial B^n(x,r)}|f(x)-f(y)|}{\inf_{y\in\partial B^n(x,r)}|f(x)-f(y)|} \leq K.\]
An embedding $f$ of a metric space $X$ into a metric space $Y$ is said to be $\eta$-\emph{quasisymmetric} if there exists a homeomorphism $\eta \colon [0,\infty) \to [0,\infty)$ such that for all $x,a,b \in X$ with $x\neq b$
\[ \frac{d_Y(f(x),f(a))}{d_Y(f(x),f(b))} \leq \eta \left ( \frac{d_X(x,a)}{d_X(x,b)} \right ) \]
where $d_X$ and $d_Y$ are the metrics of $X$ and $Y$ respectively. An $\eta$-quasisymmetric map with $\eta(t) = C\max\{t^{\alpha},t^{1/\alpha}\}$ for some $C>1$ and $\alpha>1$ is known in the literature as a \emph{power quasisymmetric map}.
For doubling connected metric spaces it is known that the quasisymmetric condition is equivalent to a weaker (but simpler) condition known in the literature as \emph{weak quasisymmetry}. Recall that a metric space is \emph{$C$-doubling} ($C>1$) if every ball of radius $r$ can be covered by at most $C$ balls of radius at most $r/2$.
\begin{lem}[{\cite[Theorem 4.1]{WZ}}]\label{lem:weakQS}
Suppose that $X$ and $Y$ are $C$-doubling and $c$-uniformly perfect metric spaces. Suppose also that $f:X \to Y$ is an embedding for which there are constants $h>0$ and $H\geq 1$ such that for all $x,a,b \in X$,
\[d_X(x,a) \leq h d_X(x,b) \qquad\text{implies}\qquad d_Y(f(x),f(a)) \leq H d_Y(f(x),f(b)).\]
Then, $f$ is $\eta$-quasisymmetric for some $\eta$ depending only on $c$, $C$, $h$ and $H$.
\end{lem}
A quasisymmetric mapping between two domains in $\mathbb{R}^n$ is quasiconformal. The converse holds true for uniform domains; see Lemma \ref{lem:QCtoQM}. For a systematic treatment of quasiconformal mappings see \cite{Vais1}.
A map $f\colon X \to Y$ between metric spaces is \emph{$L$-bi-Lipschitz} for some $L \geq 1$ if
\[ L^{-1}d_X(x,y) \leq d_Y(f(x),f(y)) \leq Ld_X(x,y)\]
for all $x,y \in X$. Note that an $L$-bi-Lipschitz mapping is $\eta$-quasisymmetric with $\eta(t)=L^{2}t$.
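Indeed, for $x,a,b\in X$ with $x\neq b$, combining the two bi-Lipschitz inequalities gives
\[ \frac{d_Y(f(x),f(a))}{d_Y(f(x),f(b))} \leq \frac{L\,d_X(x,a)}{L^{-1}\,d_X(x,b)} = L^{2}\,\frac{d_X(x,a)}{d_X(x,b)},\]
so the quasisymmetry condition holds with $\eta(t)=L^{2}t$.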
A weaker notion of bi-Lipschitz mappings is that of \emph{bounded length distortion} (\emph{BLD}) mappings. A mapping $f\colon X \to Y$ between metric spaces is $L$-BLD for some $L \geq 1$ if
\[ L^{-1}\ell(\gamma) \leq \ell(f(\gamma)) \leq L\,\ell(\gamma)\]
for all paths $\gamma : [0,1] \to X$. Here and for the rest, $\ell$ denotes the length of a path. Clearly, $L$-bi-Lipschitz mappings are $L$-BLD mappings but BLD mappings need not be bi-Lipschitz even if they are homeomorphisms. However, BLD homeomorphisms between quasiconvex spaces are bi-Lipschitz.
\begin{lem}\label{lem:BLD}
Let $f: X \to Y$ be an $L$-BLD homeomorphism between two $c$-quasiconvex metric spaces. Then $f$ is $Lc$-bi-Lipschitz.
\end{lem}
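A proof sketch (not necessarily the argument given elsewhere in the paper): for the upper bound, quasiconvexity of $X$ provides, for any $x,y\in X$, a path $\gamma$ from $x$ to $y$ with $\ell(\gamma)\leq c\,d_X(x,y)$, and then
\[ d_Y(f(x),f(y)) \leq \ell(f(\gamma)) \leq L\,\ell(\gamma) \leq Lc\,d_X(x,y).\]
The lower bound follows symmetrically, applying the same estimate to $f^{-1}$ (which is also $L$-BLD) along a quasiconvex path in $Y$.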
A mapping $f\colon X \to Y$ between metric spaces is a \emph{$(\lambda,L)$-quasisimilarity} for some $\lambda>0$ and $L\geq 1$ if $L^{-1}\lambda d_X(x,y) \leq d_Y(f(x),f(y)) \leq L\lambda d_X(x,y)$ for all $x,y \in X$. Note that $(\lambda,1)$-quasisimilarities are similarities, $(1,L)$-quasisimilarities are $L$-bi-Lipschitz and $(1,1)$-quasisimilarities are isometries.
A simple curve $\Gamma\subset \mathbb{R}^2$ is a \emph{$K$-quasicircle} with $K\geq 1$ if $\Gamma = f(\mathbb{S}^1)$ or $\Gamma =f(\mathbb{R})$ for some $K$-quasiconformal $f:\mathbb{R}^2 \to \mathbb{R}^2$. A simply connected domain $D\subset \mathbb{R}^2$ is called a \emph{$K$-quasidisk} if $\partial D$ is a $K$-quasicircle. A geometric characterization of quasicircles was given by Ahlfors \cite{Ah} in terms of the bounded turning property; see \textsection\ref{sec:uniform}.
A curve $\Gamma \subset \mathbb{R}^2$ is called an \emph{$L$-chordarc circle} with $L\geq 1$ if $\Gamma = f(\mathbb{S}^1)$ for some $(\lambda,L)$-quasisimilarity $f:\mathbb{R}^2\to\mathbb{R}^2$. A simply connected domain $D\subset \mathbb{R}^2$ is called an \emph{$L$-chordarc disk} if $\partial D$ is an $L$-chordarc circle.
\subsection{Relative distance}
For two non-degenerate closed sets $E,E' \subset \mathbb{R}^n$ define the \emph{relative distance}
\[ \text{dist}^*(E,E') = \frac{\dist(E,E')}{\min\{\diam{E},\diam{E'}\}}\]
where $\dist(E,E') = \inf\{|x-y| : x\in E, y \in E'\}$. If both $E$ and $E'$ have infinite diameter we set $\text{dist}^*(E,E')=0$.
If $E,E' \subset \mathbb{R}^n$ and $f : \mathbb{R}^n \to \mathbb{R}^n$ is a similarity, then $\text{dist}^*(f(E),f(E')) = \text{dist}^*(E,E')$. In general, if $f: E\cup E' \to Y$ is $\eta$-quasisymmetric, then
\begin{equation}\label{eq:relQS}
\frac{1}{2}\phi\left ( \frac{\dist(E,E')}{\diam{E}}\right ) \leq \frac{\dist(f(E),f(E'))}{\diam{f(E)}} \leq \eta\left ( 2\frac{\dist(E,E')}{\diam{E}} \right )
\end{equation}
where $\phi(t) = (\eta(t^{-1}))^{-1}$; see for example \cite[p. 532]{Tyson}.
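As a quick illustration of the similarity invariance of the relative distance, here is a minimal numerical sketch (the point sets and the similarity $x\mapsto 3x+1$ are ad hoc examples, not taken from the paper):

```python
def rel_dist(E, F):
    """dist*(E, F) for finite point sets on the real line."""
    gap = min(abs(x - y) for x in E for y in F)   # dist(E, F)
    diam = lambda S: max(S) - min(S)
    return gap / min(diam(E), diam(F))

E = [0.0, 1.0, 2.0]   # diam(E) = 2
F = [5.0, 6.0]        # diam(F) = 1, dist(E, F) = 3
print(rel_dist(E, F))                    # 3.0

s = lambda S: [3 * x + 1 for x in S]     # a similarity x -> 3x + 1
print(rel_dist(s(E), s(F)))              # 3.0, unchanged
```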
\subsection{Relatively connected sets}\label{sec:relcon}
Relatively connected sets were first introduced by Trotsenko and V\"ais\"al\"a \cite{TroVa} in the study of spaces for which every quasisymmetric mapping is power quasisymmetric. A metric space $X$ is called \emph{$c$-relatively connected} for some $c\geq 1$ if for any $x\in X$ and any $r>0$ either $\overline{B}(x,r) = \{x\}$ or $\overline{B}(x,r) = X$ or $\overline{B}(x,r) \setminus B(x,r/c) \neq \emptyset$. The definition given in \cite{TroVa} is equivalent to the one above quantitatively \cite[Theorem 4.11]{TroVa}.
A connected space is $c$-relatively connected for any $c>1$. Relative connectedness is a weak form of the well known notion of uniform perfectness. A metric space $X$ is \emph{$c$-uniformly perfect} for some $c>1$ if for all $x\in X$, $\overline{B}(x,r)\neq X$ implies $\overline{B}(x,r)\setminus B(x,r/c) \neq \emptyset$. The difference between the two notions is that relatively connected sets allow isolated points. In particular, if $E$ is $c$-uniformly perfect, then it is $c'$-relatively connected for all $c'>c$, and if $E$ is $c$-relatively connected and perfect, then it is $(2c+1)$-uniformly perfect \cite[Theorem 4.13]{TroVa}.
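A simple example, which one can check directly from the definitions, separates the two notions.

```latex
Consider the geometric sequence
\[ X = \{0\}\cup\{2^{-n} : n\geq 0\} \subset \mathbb{R}. \]
One can check that $X$ is $4$-relatively connected: whenever $\overline{B}(x,r)$
is neither $\{x\}$ nor $X$, the geometric spacing of the points of $X$ provides
a point of $X$ in $\overline{B}(x,r)\setminus B(x,r/4)$. On the other hand,
every point $2^{-n}$ is isolated in $X$, so $X$ is not $c$-uniformly perfect
for any $c>1$.
```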
The connection between relative connectedness and quasisymmetric mappings is illustrated in the following theorem from \cite{TroVa}.
\begin{lem}[{\cite[Theorem 6.20]{TroVa}}]\label{thm:TroVa}
A subset $E$ of a metric space $X$ is relatively connected if and only if every quasisymmetric map $f:E \to X$ is power quasisymmetric.
\end{lem}
It follows easily from the definitions that the image of a relatively connected (resp. uniformly perfect) space under a quasisymmetric mapping is relatively connected (resp. uniformly perfect), quantitatively. We conclude the discussion on relatively connected sets with the following remark.
\begin{rem}\label{rem:relcon}
Suppose that $X$ is a $c$-uniformly perfect metric space and $E\subset X$ is compact with $E\neq X$. Then $\dist(E,X\setminus E) \leq c\diam{E}$.
\end{rem}
\subsection{Uniformly disconnected sets}
In \cite{DSbook}, David and Semmes introduced a scale-invariant version of total disconnectedness towards a uniformization of all metric spaces that are quasisymmetric to the standard middle-third Cantor set $\mathcal{C}$. A metric space $X$ is \emph{$c$-uniformly disconnected} for some $c\geq 1$ if for all $x\in X$ and all positive $r<\frac{1}{4}\diam{X}$, there exists $E\subset X$ containing $x$ such that $\diam{E} \leq r$ and $\dist(E,X\setminus E) \geq r/c$.
\begin{thm}[{\cite[Proposition 15.11]{DSbook}}]\label{thm:UDQS}
A metric space is quasisymmetrically homeomorphic to $\mathcal{C}$ if and only if it is compact, doubling, uniformly disconnected and uniformly perfect.
\end{thm}
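A model computation, verifying the definition directly for the middle-third Cantor set, may be helpful here.

```latex
For instance, one can check that $\mathcal{C}$ is $3$-uniformly disconnected:
given $x\in\mathcal{C}$ and $0<r<\frac{1}{4}$, choose $n\geq 0$ with
$3^{-n-1}\leq r < 3^{-n}$ and let $E = \mathcal{C}\cap I$, where $I$ is the
triadic interval of length $3^{-n-1}$ containing $x$. Then
$\diam{E}\leq 3^{-n-1}\leq r$, while the middle third removed adjacent to $I$
gives
\[ \dist(E,\mathcal{C}\setminus E) \geq 3^{-n-1} \geq r/3. \]
```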
This result was later improved by MacManus \cite{MM2} for sets in $\mathbb{R}^2$; see \textsection\ref{sec:cantor}. In the same article, MacManus found an elegant connection between planar uniform domains and uniformly disconnected sets: \emph{a set $E \subset \mathbb{R}^2$ with empty interior is uniformly disconnected if and only if its complement is uniform} \cite[Theorem 1.1]{MM2}. In higher dimensions only the necessity is true.
It is easy to check that if $X$ is a $c$-uniformly disconnected space and $f:X\to Y$ is $\eta$-quasisymmetric, then $f(X)$ is $c'$-uniformly disconnected with $c'$ depending only on $\eta$ and $c$.
\subsection{Uniform domains}\label{sec:uniform}
A domain $U \subset \mathbb{R}^n$ is said to be \emph{$c$-uniform} for some $c\geq 1$ if for all $x,y \in U$, there exists a curve $\gamma\subset U$ joining $x$ with $y$ such that
\begin{enumerate}
\item $\ell(\gamma) \leq c|x-y|$ and
\item for all $z\in \gamma$, $\dist(z,\partial U) \geq c^{-1}\min\{|x-z|,|y-z|\}$.
\end{enumerate}
A curve $\gamma$ as in the above definition is called a $c$-\emph{cigar curve}. The definition above is equivalent to the classical definition of Martio and Sarvas \cite{MartioSarvas} quantitatively; see Theorem 2.10 in \cite{Vais-Tohoku}.
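Two standard examples illustrate the definition.

```latex
For instance, every disk and every half-plane in $\mathbb{R}^2$ is
$c_0$-uniform for some universal $c_0>1$; one can use circular arcs through
$x$ and $y$ as cigar curves. In contrast, the slit disk
$B^2(0,1)\setminus \big([0,1)\times\{0\}\big)$ is not uniform: for points
$x,y$ close to each other but on opposite sides of the slit, every curve in
the domain joining them must travel around the tip of the slit, so its length
is bounded below by a constant while $|x-y|$ is arbitrarily small, violating
the first condition.
```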
Metric spaces in which every two points can be joined by a curve satisfying the first property of uniformity are called $c$-\emph{quasiconvex}. If in the definition of quasiconvexity the length of the curve is replaced by its diameter, then the space is called $c$-\emph{bounded turning}. Metric spaces in which every two points can be joined by a curve satisfying the second property are called $c$-\emph{John spaces}.
A simple curve $\Gamma\subset \mathbb{R}^2$ is a $K$-quasicircle if and only if it is $c$-bounded turning with $c$ and $K$ being related quantitatively \cite{Ah}. A simple curve $\Gamma \subset \mathbb{R}^2$ is an $L$-chordarc circle if and only if it is $c$-quasiconvex with $c$ and $L$ being related quantitatively \cite{JeK}. Finally, a simply connected domain $D\subset \mathbb{R}^2$ is $c$-uniform if and only if it is a $K$-quasidisk (with $K$ and $c$ quantitatively related) and $D$ is a $c$-John domain if and only if its complement is $C$-bounded turning (with $c$ and $C$ quantitatively related) \cite{NakkiVais-John}.
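The von Koch snowflake is the standard example separating the two curve classes above.

```latex
The von Koch snowflake curve $\Gamma$ is bounded turning, hence a quasicircle,
but since
\[ \dim_{\mathcal{H}}(\Gamma) = \frac{\log 4}{\log 3} > 1, \]
every non-degenerate subarc of $\Gamma$ has infinite length. Therefore
$\Gamma$ is not quasiconvex, and in particular not a chordarc circle.
```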
Two remarks are in order.
\begin{rem}
It is easy to check that all curves in the definition of uniform domains can be chosen to be simple. For the rest of the paper, all cigar curves are assumed to be simple.
\end{rem}
\begin{rem}\label{rem:dist}
Let $U\subset \mathbb{R}^n$ be a $c$-uniform domain, $x,y \in U$ and $\gamma$ be a $c$-cigar curve joining $x,y$. Then,
\[ \dist(\gamma,\partial U) \geq (2c)^{-1}\min\{\dist(x,\partial U), \dist(y,\partial U)\}.\]
\end{rem}
Indeed, set $\epsilon = \min\{\dist(x,\partial U), \dist(y,\partial U)\}$ and let $z \in \gamma$. If $z\in \overline{B}^n(x,\epsilon/2)\cup \overline{B}^n(y,\epsilon/2)$, then $\dist(z,\partial U) \geq \epsilon/2$. If $z$ is in the exterior of these balls, then $\dist(z,\partial U) \geq c^{-1}\min\{|z-x|,|z-y|\} \geq (2c)^{-1}\epsilon$.
The following proposition describes the geometry of uniform domains. For the proof see Corollary 2.33 in \cite{MartioSarvas}, Theorem 2 and Lemma 3 in \cite{Geh-qsisom} and Theorem 1.1 in \cite{Herron}.
\begin{prop}\label{prop:bndryuniform}
Let $U$ be a $c$-uniform domain.
\begin{enumerate}
\item (Boundary components) Each component of $\partial U$ is either a point or a $K$-quasicircle for some $K>1$ depending only on $c$.
\item (Relative distance) If $A_1,A_2$ are non-degenerate components of $\partial U$, then $\text{dist}^*(A_1,A_2) \geq (2c)^{-2}$.
\item (Porosity) For every $x\in \overline{U}$ and every $0<r\leq \frac{1}{4}\diam{U}$, there exists $x' \in \partial B^2(x,r)$ such that $B^2(x',r/c) \subset U$.
\end{enumerate}
\end{prop}
The porosity of $\partial U$ implies that if $U$ is bounded, then there exists a point $x \in U$ such that $B^2(x, \frac{1}{4c}\diam{U}) \subset U$.
Although Proposition \ref{prop:bndryuniform} provides a lot of information about the boundaries of uniform domains, it fails to characterize them. Namely, if $E\subset \mathbb{R}$ is a Cantor set with $\mathcal{H}^1(E)>0$, then $\mathbb{R}^2\setminus E$ trivially satisfies all three properties of Proposition \ref{prop:bndryuniform} but it is not uniform. If $U$ is finitely connected and satisfies properties (1) and (2), then it is uniform. We record a related observation, which will be used repeatedly, as a remark.
\begin{rem}\label{rem:removeqsdisks}
Let $U\subset \mathbb{R}^2$ be a $c$-uniform domain and $D \subset U$ be a $K$-quasidisk such that $\text{dist}^*(\overline{D}, \partial U) \geq d >0$. Then $U \setminus \overline{D}$ is $c'$-uniform with $c'$ depending only on $c$, $K$ and $d$.
\end{rem}
A quasiconformal homeomorphism between uniform domains of $\mathbb{R}^n$ is quasisymmetric quantitatively.
\begin{lem}[{\cite[Theorem 5.6]{VaisQM}}]\label{lem:QCtoQM}
Let $U,U'$ be $c$-uniform domains in $\overline{\mathbb{R}^n}$ and $f : U \to U'$ be a $K$-quasiconformal homeomorphism. Then $f$ is $\eta$-quasisymmetric with $\eta$ depending only on $K$, $c$ and $n$.
\end{lem}
We conclude the discussion on uniform domains with two results on the invariance of uniformity under quasisymmetric mappings. The first result says that uniform domains are preserved under quasisymmetric mappings while the second result roughly says that complements of uniform domains are preserved under quasisymmetric mappings.
\begin{lem}[{\cite[Corollary 3]{GehOs}}]
If $U\subset\mathbb{R}^2$ is $c$-uniform and $f: U \to \mathbb{R}^2$ is $\eta$-quasisymmetric, then $f(U)$ is $c'$-uniform with $c'$ depending only on $c$ and $\eta$.
\end{lem}
\begin{lem}[{\cite[Theorem 5.6]{Vais-inv}}]\label{lem:couniform}
If $E\subset \mathbb{R}^2$ is closed, $\mathbb{R}^2 \setminus E$ is $c$-uniform and $f: E \to \mathbb{R}^2$ is $\eta$-quasisymmetric, then $\mathbb{R}^2 \setminus f(E)$ is $c'$-uniform with $c'$ depending only on $c$ and $\eta$.
\end{lem}
\subsection{Whitney-type decompositions}
Let $D$ be a proper open subset of $\mathbb{R}^2$. A collection of sets $\mathscr{Q}$ is called a $(L,c)$-\emph{Whitney-type decomposition for $D$} for some $c>1$ and $L>1$, if the following properties hold true.
\begin{enumerate}
\item The elements of $\mathscr{Q}$ are $L$-chordarc disks and are mutually disjoint.
\item $D = \bigcup_{Q\in\mathscr{Q}}\overline{Q}$.
\item For all $Q\in \mathscr{Q}$, $c^{-1}\diam{Q} \leq \dist(Q,\partial D) \leq c\diam{Q}$.
\item If $Q_1,Q_2\in \mathscr{Q}$ are such that $\overline{Q_1}\cap \overline{Q_2} \neq \emptyset$ then either $\overline{Q_1}$ and $\overline{Q_2}$ intersect only at a point, or their intersection is an arc $\Gamma$ satisfying
\[ \diam{\Gamma} \geq c^{-1}\max\{\diam{Q_1},\diam{Q_2}\}.\]
\end{enumerate}
It is well known that every proper open subset of $\mathbb{R}^2$ has a Whitney-type decomposition \cite[Theorem IV.1.1]{Stein}.
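For comparison, the classical construction of \cite{Stein} already has this form, with squares in place of general chordarc disks.

```latex
For instance, the classical decomposition of \cite{Stein} consists of dyadic
squares $Q$ satisfying
\[ \diam{Q} \leq \dist(Q,\partial D) \leq 4\diam{Q}, \]
and two touching squares have side lengths within a factor of $4$ of each
other. Since squares are $L$-chordarc disks with a universal $L$, and touching
dyadic squares meet at a point or along an edge of the smaller square, one
checks that such a decomposition is an $(L,c)$-Whitney-type decomposition with
universal $L$ and $c$.
```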
Two Whitney-type decompositions $\mathscr{Q}$, $\mathscr{Q}'$ of open sets $D$, $D'$, respectively, are called \emph{combinatorially equivalent} if there exists a homeomorphism $f: D \to D'$ such that for each $Q\in\mathscr{Q}$ there exists $Q'\in \mathscr{Q}'$ with $f(Q) = Q'$.
\section{Separation of boundary components for planar uniform domains}\label{sec:separation}
For this section, fix an unbounded $c$-uniform domain $U\subset \mathbb{R}^2$ with bounded boundary. The goal of this section is to break $\partial U$ into sets that are contained in chordarc disks and are far from the boundaries of those disks.
\subsection{Square thickening}\label{sec:thickening}
Here we show that, given a continuum $E\subset\mathbb{R}^2$ and some $\epsilon>0$, there exists a chordarc disk $D$ containing $E$ such that each point of $\partial D$ is at distance roughly $\epsilon\diam{E}$ from $E$.
We first review some terminology from \cite{MM2}. Let $\epsilon>0$. We define the square grid
\[\mathcal{G}_{\epsilon} = \{[m\epsilon,(m+1)\epsilon]\times[n\epsilon,(n+1)\epsilon] : m,n\in\mathbb{Z}\}\]
and the $1$-skeleton of the grid
\[\mathcal{G}^1_{\epsilon} = \{e : e\text{ is an edge of some square }S\in\mathcal{G}_{\epsilon}\}\]
Given a bounded set $W\subset \mathbb{R}^2$ let $W^{\epsilon}$ be the union of the elements of $\mathcal{G}_{\epsilon}$ that intersect $W$. Let $\mathcal{T}_{\epsilon}(W) = (W^{4\epsilon})^{\epsilon}$.
\begin{lem}\label{lem:BLsep}
There exists a decreasing homeomorphism $L:(0,+\infty) \to (1,+\infty)$ with the following property. If $E \subset \mathbb{R}^2$ is a continuum and $\epsilon>0$, there exists an $L(\epsilon)$-chordarc disk $D_{\epsilon} \subset \mathbb{R}^2$ containing $E$ such that, for all $x\in \partial D_{\epsilon}$,
\[ \epsilon\diam{E} \leq \dist(x,E) \leq 8\epsilon\diam{E}. \]
\end{lem}
\begin{proof}
If $\epsilon\geq 3$, then the set $\gamma_{E} = \{x\in \mathbb{R}^2 \colon \dist(x,E) = \epsilon\diam{E}\}$ is the boundary of an $L_0$-chordarc disk with $L_0$ being a universal constant \cite[Lemma 1]{Brown}; in this case we take $D_{\epsilon}$ to be this disk.
For the rest of the proof we fix $\epsilon<3$ and set $\delta = \epsilon\diam{E}$. By Lemma 2.1 in \cite{MM2}, $\mathcal{T}_{\delta}(E)$ is the closure of a domain whose boundary consists of at most $N_1/\epsilon$ disjoint Jordan curves, each of which is a subset of $\mathcal{G}_{\delta}$. Here, the number $N_1>1$ is a universal constant. Moreover, the distance from any boundary point of $\mathcal{T}_{\delta}(E)$ to $E$ is less than $8\delta$ and greater than $\delta$. Let $D_{\epsilon}$ be the domain bounded by the outermost component of $\partial\mathcal{T}_{\delta}(E)$, that is, $D_{\epsilon}$ is the exterior of the unbounded component of $\mathbb{R}^2 \setminus\mathcal{T}_{\delta}(E)$. Then, $\delta \leq \dist(x,E) \leq 8\delta$ for all $x\in \partial D_{\epsilon}$.
Notice now that, for some universal $N_2>1$, there are at most $N_2/\epsilon$ squares of $\mathcal{G}_{\delta}$ intersecting the $8\delta$-neighborhood $N(E,8\delta)$ of $E$. Therefore, there are at most $(N_2/\epsilon)^{N_2/\epsilon}$ different ways to form $D_{\epsilon}$. As each Jordan curve consisting of edges of $\mathcal{G}_{\delta}^1$ is a chordarc circle, $\partial D_{\epsilon}$ is an $L$-chordarc circle for some constant $L>1$ depending only on $\epsilon$.
\end{proof}
As $\epsilon\to \infty$, the disk $D_{\epsilon}$ constructed in Lemma \ref{lem:BLsep} resembles a big disk and $L(\epsilon) \to 1$. On the other hand, if $E$ is not the closure of a chordarc disk, $L(\epsilon)$ may increase without control as $\epsilon\to 0$. Nevertheless, we show in the next lemma that if $E$ is the closure of a quasidisk, then the disk $D_{\epsilon}$ given in Lemma \ref{lem:BLsep} is always a $K'$-quasidisk with $K'$ depending only on $K$.
\begin{lem}\label{lem:QCsep}
Suppose that $E \subset \mathbb{R}^2$ is the closure of a bounded $K$-quasidisk and let $D_{\epsilon}$ be the $L(\epsilon)$-chordarc disk of Lemma \ref{lem:BLsep}. Then, for all $\epsilon>0$, $D_{\epsilon}$ is a $K'$-quasidisk with $K'$ depending only on $K$.
\end{lem}
\begin{proof}
Fix $\epsilon>0$. As in the proof of Lemma \ref{lem:BLsep}, we may assume that $\epsilon<3$. Let $\delta = \epsilon\diam{E}$, $D=D_{\epsilon}$, $\Gamma = \partial D$ and $\gamma=\partial E$. Since $\gamma$ is a $K$-quasicircle, it satisfies the $c$-bounded turning property for some $c>1$ depending only on $K$. We show that $\Gamma$ is $136c$-bounded turning, and the lemma follows.
Let $x_1,x_2 \in \Gamma$. Since $\Gamma$ is a polygonal curve with edges in $\mathcal{G}_\delta^1$, it is enough to assume that $x_1,x_2$ are in non-adjacent edges of $\Gamma$. Therefore, $|x_1-x_2|\geq \delta$. Contrary to our claim, assume that there exist $x_3,x_4\in \Gamma$ such that $x_3$ and $x_4$ are in different components of $\Gamma \setminus \{x_1,x_2\}$ and $\min\{|x_3-x_1|,|x_4-x_1|\} \geq 68c|x_1-x_2|$. For each $i\in\{1,2,3,4\}$ let $x_i'$ be a point in $\gamma$ closest to $x_i$. Then, $|x_i-x_i'| \in (\delta,8\delta)$ for each $i=1,2,3,4$, and $x_3'$ and $x_4'$ are in different components of $\gamma\setminus\{x_1',x_2'\}$. Therefore, for $i=3,4$, $|x_1'-x_i'| \geq |x_1-x_i| - 16\delta \geq 34c|x_1-x_2|$, while $2c|x_1'-x_2'| \leq 2c|x_1-x_2|+ 32c\delta \leq 34c|x_1-x_2|$. Therefore, $\min\{|x_1'-x_3'|,|x_1'-x_4'|\} > c|x_1'-x_2'|$ and the $c$-bounded turning property for $\gamma$ is violated.
\end{proof}
\begin{rem}
In addition to Lemma \ref{lem:QCsep}, note that for each $M>0$ there exists $L(M)>1$ such that any subarc $\gamma$ of $\partial D_{\epsilon}$ whose endpoints $x,y$ satisfy $|x-y| \leq M\epsilon$, is an $L(M)$-bi-Lipschitz arc.
\end{rem}
\subsection{Local separation in the boundary of $U$}\label{sec:sep1}
Here, given a compact set $A\subset \partial U$ that is disjoint from $\overline{\partial U \setminus A}$ and some
$\epsilon>0$ sufficiently small, we construct a chordarc circle that separates the two sets $A$ and $\partial U \setminus A$ and whose distance from $\partial U$ is at least a fixed multiple of $\diam{A}$.
\begin{lem}\label{lem:uniform+s+d=us}
Suppose that $A \subset \partial U$ is compact and disjoint from $\overline{\partial U \setminus A}$, and let $\epsilon\leq (32c)^{-1}\dist(A,\partial U \setminus A)$. There exist $C>1$ and $L>1$ depending only on $c$ and $\epsilon(\diam{A})^{-1}$, and an $L$-chordarc disk $\Delta$ with the following properties.
\begin{enumerate}
\item $\partial \Delta \subset U$ and $\overline{\Delta} \cap \partial U = A$,
\item $\dist(z,\partial \Delta) \leq 8\epsilon$ for all $z \in A$,
\item $C^{-1}\diam{A} \leq \dist(z',\partial U)$ for all $z'\in\partial \Delta$.
\end{enumerate}
\end{lem}
\begin{proof}
As in the proof of Lemma \ref{lem:BLsep}, consider the thickening $\mathcal{T}_{\epsilon}(A)$. Then, $\partial \mathcal{T}_{\epsilon}(A)$ consists of at most $N_0$ components in $U$, each being an $L'$-chordarc circle with $N_0$ and $L'$ depending only on $c$ and $\epsilon(\diam{A})^{-1}$. The choice of $\epsilon$ implies that $\mathcal{T}_{\epsilon}(A)\cap \partial U = A$. Let $D$ be the closure of $\mathbb{R}^2 \setminus V$ where $V$ is the unbounded component of $\mathbb{R}^2\setminus \mathcal{T}_{\epsilon}(A)$. Observe that $\dist(x,A) \leq 8\epsilon$ for all $x\in\partial D$.
We claim that $D\cap \partial U = A$. Contrary to the claim, assume that there exists $x\in (\partial U \setminus A)\cap D$. Note that $\dist(x,\partial D) > \frac{1}{2}\dist(A,\partial U \setminus A)$, as otherwise $\dist(x,A) < \dist(A,\partial U \setminus A)$. Let $y \in \overline{U}$ be exterior to $D$ and satisfying $\dist(y,A) \geq \dist(A,\partial U\setminus A)$. Let $\gamma$ be a $c$-cigar curve joining $x$ and $y$ in $U$ and let $z \in \partial D \cap \gamma$. Then,
\begin{align*}
\dist(A,\partial U \setminus A) &> 16c\epsilon \geq 2c\dist(z,A) \geq 2\min\{|x-z|, |y-z|\}\\
&\geq 2\min\{\dist(x,\partial D), \dist(y,\partial D)\} \geq \dist(A,\partial U \setminus A)
\end{align*}
which is a contradiction and the claim follows.
If $D$ has only one component, then set $\Delta=D$ and the proof is complete.
Let $D_1,\dots, D_n$ be the components of $D$. Then, $n\leq N_0$ and $\dist(D_i,D_j) \geq \delta_0\diam{A}$ for all $i\neq j$, for some $\delta_0>0$ and $N_0>1$ depending only on $c$ and $\epsilon(\diam{A})^{-1}$. Set $D^{(0)} = D$ and $U^{(0)} = U \setminus \overline{D}$. By Remark \ref{rem:removeqsdisks}, $U^{(0)}$ is $c_0$-uniform for some $c_0>1$ depending only on $c$ and $\epsilon(\diam{A})^{-1}$.
Inductively, suppose that for some $0\leq i\leq n-1$, $D^{(i)}$ is a union of $n-i$ many $L_i$-chordarc disks $D^{(i)}_1,\dots,D^{(i)}_{n-i}$ such that
\begin{enumerate}
\item $\partial D^{(i)} \subset U$,
\item $\dist(\partial D^{(i)},\partial U) \geq C_i \diam{A}$ and $\dist(D^{(i)}_j,D^{(i)}_{j'}) \geq \delta_i \diam{A}$,
\item $U^{(i)} = U \setminus \overline{D^{(i)}}$ is $c_i$-uniform
\end{enumerate}
for some $C_i>0$, $\delta_i>0$, $c_i>1$ and $L_i>1$ depending only on $c$, $\epsilon(\diam{A})^{-1}$ and $i$. Let $x\in \partial D^{(i)}_1$, $y\in \partial D^{(i)}_2$ and let $\gamma \subset U^{(i)}$ be a simple $c_i$-cigar curve joining $x$ with $y$. Applying Lemma \ref{lem:BLsep} with $E = \gamma$ and $\epsilon = (32c_i)^{-1}\min\{\delta_i,C_i\}$, we obtain an $L'$-chordarc disk $D'$ containing $\gamma$. Set $D^{(i+1)}_1 = D^{(i)}_1\cup D'\cup D^{(i)}_2$, $D^{(i+1)}_{j} = D^{(i)}_{j+1}$ for $j=2,\dots,n-i-1$, $D^{(i+1)} = \bigcup_{j=1}^{n-i-1}D^{(i+1)}_{j}$ and $U^{(i+1)} = U \setminus \overline{D^{(i+1)}}$. Then,
\begin{enumerate}
\item each $D^{(i+1)}_j$ is a $L_{i+1}$-chordarc disk with $\partial D^{(i+1)}_j \subset U$,
\item $\dist(\partial D^{(i+1)},\partial U) \geq C_{i+1} \diam{A}$,
\item $\dist(D^{(i+1)}_j,D^{(i+1)}_{j'}) \geq \delta_{i+1} \diam{A}$ and
\item $U^{(i+1)}$ is $c_{i+1}$-uniform
\end{enumerate}
for some $C_{i+1}>0$, $\delta_{i+1}>0$, $c_{i+1}>1$ and $L_{i+1}>1$ depending only on $C_i$, $\delta_i$, $c_i$ and $L_i$.
Set $\Delta=D^{(n-1)}$ and note that $\Delta$ satisfies the desired properties with constants depending only on $c$ and $\epsilon(\diam{A})^{-1}$.\end{proof}
The chordarc disk $\Delta$ constructed in the proof of Lemma \ref{lem:uniform+s+d=us} is denoted by $V(A,U,\epsilon)$.
\begin{rem}\label{rem:whitney2}
The construction of $V(A,U,\epsilon)$ involves creating curves in a neighborhood of $A$. Therefore, if $A$ and $A'$ are mutually disjoint compact subsets of $\partial U$ such that $A\cap\overline{\partial U \setminus A} = A'\cap\overline{\partial U \setminus A'} = \emptyset$ and $\text{dist}^*(A,A')$ is big compared to $\diam{A}$ and $\diam{A'}$, then $V(A,U,\epsilon)\cap V(A',U,\epsilon') = \emptyset$ for all $\epsilon\leq (32c)^{-1}\dist(A,\partial U \setminus A)$ and $\epsilon'\leq (32c)^{-1}\dist(A',\partial U \setminus A')$.
\end{rem}
\subsection{A weak form of uniform disconnectedness}\label{sec:wud}
Here we consider a separation of $\partial U$ that resembles uniform disconnectedness. Given $x \in \partial U$ and $r>0$ we construct in Proposition \ref{prop:wud} an $L$-chordarc disk $\Delta$ that contains $x$ such that every point of $\partial\Delta$ has distance from $\partial U$ at least a fixed multiple of $r$ and
\begin{enumerate}
\item either $\diam{\Delta}$ is comparable to $r$,
\item or $\Delta$ contains a component of $\partial U$ whose diameter is at least a fixed multiple of $\diam{\Delta}$.
\end{enumerate}
If the first condition were satisfied for all $x$ and $r$, then $\partial U$ would be uniformly disconnected.
\begin{lem}\label{lem:uniform=wud1}
There exists $C>1$ depending only on $c$ such that for every non-degenerate component $A$ of $\partial U$ and for every positive $r \leq C^{-2}\diam{A}$ there exist $A' \subset \partial U$ containing $A$ and a bounded Jordan domain $D\subset \mathbb{R}^2$ with $\gamma = \partial D \subset U$ such that $D\cap \partial U = A'$ and
\begin{equation}\label{eq:wud1}
C^{-1}\dist(z, A) \leq r \leq C\dist(z, \partial U) \text{ for all }z\in \gamma.
\end{equation}
\end{lem}
\begin{proof}
By Proposition \ref{prop:bndryuniform}, $A$ is a bounded $K$-quasicircle with $K$ depending only on $c$. Therefore, $A$ satisfies the $c_1$-bounded turning property for some $c_1>1$ depending only on $c$. Set $c_2 = \max\{c,c_1\}$.
Fix now $r \leq (4c_2)^{-2}\diam{A}$. Find ordered points $x_1,\dots,x_n$ on $A$ such that $r/2 \leq |x_i-x_{i+1}| \leq r$ for all $i=1,\dots,n$, with the convention $x_{n+1} = x_1$. For each $i=1,\dots, n$, join $x_i$ to $x_{i+1}$ with a $c$-cigar curve $\gamma_i$. On each $\gamma_i$, $i=1,\dots,n$, let $z_i\in\gamma_i$ be a point such that $\min\{|z_i-x_i|,|z_i-x_{i+1}|\} \geq |x_i-x_{i+1}|/2 \geq r/4$. Join each $z_i$ to $z_{i+1}$ with a $c$-cigar curve $\gamma_i'$. As before, we conventionally set $z_{n+1} = z_1$. Then, $|z_i-z_{i+1}|\leq 2c_2r$ and $\diam{\gamma_i'}\leq 2(c_2)^2r$. The upper bound of $r$ implies that $\gamma_i \cup \gamma_i'\cup\gamma_{i+1} \cup A(x_i,x_{i+1})$ is contractible in $\mathbb{R}^2 \setminus A$ for any $i$. In particular, $\gamma'= \bigcup\gamma_i'$ separates $A$ from $\infty$. Let $\gamma\subset \gamma'$ be a simple closed curve homotopic to $\gamma'$ in $U$.
For the proof of (\ref{eq:wud1}) fix $z\in \gamma$ and $i\in\{1,\dots,n\}$ such that $z\in\gamma'_i$. Then, $\dist(z,A) \leq |z-x_i| \leq \diam{\gamma'_i} + \diam{\gamma_i}\leq (2c_2)^2r$. On the other hand, by Remark \ref{rem:dist}, $\dist(z,\partial U) \geq (c_2)^{-1}\min\{\dist(z_i,\partial U),\dist(z_{i+1},\partial U)\} \geq (c_2)^{-2}r/4$. Thus, the lemma holds with $C = (4c_2)^2$.
\end{proof}
\begin{rem}
Note that $\diam{\gamma_i\cup\gamma_i'\cup\gamma_{i+1}\cup A(x_{i},x_{i+1})} \leq Cr$. Therefore, if $A_1 \subset A'$ and $A_1\neq A$, then $\diam{A_1} \leq Cr \leq C^{-1}\diam{A}$.
\end{rem}
Given a non-degenerate component $A$ of $\partial U$ and $r< C^{-2}\diam{A}$ we set $N_1(A,r)$ to be a set $A'\subset \partial U$ as in the statement of Lemma \ref{lem:uniform=wud1}. Moreover, if $\gamma$ is a simple closed curve as in Lemma \ref{lem:uniform=wud1} associated to $A' = N_1(A,r)$, let $D_1(A,r)$ be the $L(r)$-chordarc disk containing $\gamma$ as in Lemma \ref{lem:BLsep} with $\epsilon = r/24$. Although $L(r)$ may be large, arguments similar to that of Lemma \ref{lem:QCsep} show that $D_1(A,r)$ is a $K'$-quasidisk for some $K'>1$ depending only on $c$.
The next lemma provides a different kind of neighborhood, for the case where the radius $r$ is large compared to the diameters of the components of $\partial U$ in an $r$-neighborhood of $A$.
\begin{lem}\label{lem:uniform=wud2}
Let $A$ be a component of $\partial U$, $x\in A$, and let $r>8\diam{A}$ be such that every component $A'$ of $\mathbb{R}^2\setminus U$ intersecting $B^2(x,r)$ satisfies $\diam{A'}\leq c'r$ for some $c'>1$. Then, there exist $C'>1$ depending only on $c,c'$ and a simple closed curve $\gamma$ separating $A$ from $\infty$ satisfying
\begin{equation}\label{eq:wud2}
\frac{r}{2(C')^{2}} \leq \frac{\dist(z,\partial U)}{2C'} \leq \frac{r}{2} \leq \diam{\gamma} \leq C'r \text{ for all }z\in\gamma.
\end{equation}
\end{lem}
\begin{proof}
The proof follows closely that of Lemma 2.2 in \cite{MM2}.
Let $A_1, \dots, A_n$ be the components of $\partial U \setminus A$ intersecting $\partial B^2(x,r)$ with $\diam{A_i} \geq (16c)^{-1} r$. By Lemma \ref{lem:numberofcomp}, $n$ is bounded above by a constant depending only on $c,c'$. For each $i=1,\dots,n$ let $D_i$ be the Jordan domain given by Lemma \ref{lem:uniform=wud1} for $A_i$ and $r_i = (2C)^{-1}\min\{\diam{A_i},r\}$. Note that
\[(32Cc)^{-1}r \leq r_i \leq(2C)^{-1}c' r.\]
Let $V$ be the component of $B^2(x,r)\setminus \bigcup \overline{D}_i$ that contains $x$. By the uniformity of $U$ and the choice of $r_i$, there exists at least one nontrivial component in $\partial B^2(x,r)\cap V$. Suppose that $V \cap \partial B^2(x,r) = \Gamma_1 \cup \Gamma_2 \cup \cdots$ where each $\Gamma_i$ is an open subarc of $\partial B^2(x,r)$. If $\diam{\Gamma_i} < (2c)^{-1}r$, then replace $\Gamma_i$ by a $c$-cigar curve $\Gamma_i'$ joining the endpoints of $\Gamma_i$.
Assume now that $\diam{\Gamma_i} \geq (2c)^{-1}r$. Let $y_1,\dots,y_{n_i}$ be consecutive points on $\Gamma_i$ such that $y_1$ and $y_{n_i}$ are the endpoints of $\Gamma_i$ and $(8c)^{-1}r \leq |y_j-y_{j+1}| \leq (4c)^{-1}r $. Set $w_1 = y_1$ and $w_{n_i} = y_{n_i}$. For $j=2,\dots,n_i-1$, if $\dist(y_j,\partial U) > (32c)^{-1}r$, set $w_j=y_j$; otherwise, take $z_j\in \partial U$ such that $|y_j-z_j| = \dist(y_j,\partial U)$. By the porosity of $\partial U$ there exists $w_j \in U\cap \partial B^2(z_j,(16c)^{-1}r)$ satisfying the third conclusion of Proposition \ref{prop:bndryuniform}. Then, $|w_j-w_{j+1}| \leq 6(16c)^{-1}r$. For each $j=2,\dots,n_i$ let $\gamma_j$ be a $c$-cigar curve in $U$ joining $w_{j-1}$ with $w_j$ and let $\Gamma_i' = \bigcup_{j=1}^{n_i} \gamma_j$. The distance estimates above imply that $\Gamma_i'$ is homotopic to $\Gamma_i$ in $\mathbb{R}^2 \setminus \{x\}$. Replace $\Gamma_i$ with $\Gamma_i'$.
Thus, we obtain a closed curve
\[ \Gamma = (\partial V \cap B^2(x,r)) \cup \bigcup_{i\in\mathbb{N}}\Gamma_i'\]
that is homotopic to $\partial B^2(x,r)$ in $\mathbb{R}^2 \setminus \{x\}$. Take $\gamma \subset \Gamma$ to be a simple closed curve that is homotopic to $\Gamma$ in $\mathbb{R}^2 \setminus \{x\}$.
\end{proof}
For the rest of the paper, Lemma \ref{lem:uniform=wud2} is applied with $c' = C^2$ where $C$ is as in Lemma \ref{lem:uniform=wud1}. Given a component $A$ of $\partial U$ and $r>8\diam{A}$, if $\gamma$ is as in Lemma \ref{lem:uniform=wud2}, then we denote by $N_2(A,r)$ the subset of $\partial U$ that is enclosed by $\gamma$. Moreover, applying Lemma \ref{lem:BLsep} for $E = \gamma$ and $\epsilon=(3C')^{-1}$ ($C'$ is as in Lemma \ref{lem:uniform=wud2}), there exists an $L$-chordarc disk $D_2(A,r)$ that contains $N_2(A,r)$ with $L$ depending only on $c$.
Lemma \ref{lem:uniform=wud1} and Lemma \ref{lem:uniform=wud2} combined yield the next proposition.
\begin{prop}\label{prop:wud}
Let $x\in \partial U$, let $A_x$ be the component of $\partial U$ that contains $x$ and let $r>0$.
\begin{enumerate}
\item If $\overline{B}(x,r)$ intersects a non-degenerate component $A$ of $\partial U$ with diameter at least $C^2 r$, then $x$ is contained in a set $N_1(A,r)$.
\item If $r\leq 8\diam{A_x}$, then $x$ is contained in a set $N_1(A_x,\frac{1}{8C^2}r)$.
\item If $r> 8\diam{A_x}$ and $\overline{B}(x,r)$ intersects only components of $\partial U$ with diameters less than $C^2 r$, then $x$ is contained in a set $N_2(A_x,r)$.
\end{enumerate}
\end{prop}
Given a non-degenerate component $A$ of $\partial U$, $N_1(A,r)$ is always defined when $r$ is sufficiently small compared to $\diam{A}$. On the other hand, $N_2(A,r)$ is not defined for $r$ small compared to $\diam{A}$, and even for large $r$ it may fail to be defined.
The properties of the sets $N_i(A,r)$ and $D_i(A,r)$ are summarized in the next lemma.
\begin{lem}\label{lem:propertiesofnbhd}
Suppose that $A$ is a component of $\partial U$ and $r>0$. There exists $c'>1$ depending only on $c$ and there exists $c''$ depending only on $c$ and $r$ with the following properties.
\begin{enumerate}
\item Every component of $\mathbb{R}^2 \setminus N_1(A,r)$ is $c'$-uniform. If $\partial U$ is $c$-relatively connected, then each component of $\mathbb{R}^2 \setminus N_1(A,r)$ has $c'$-relatively connected boundary.
\item Every component of $D_1(A,r) \setminus N_1(A,r)$ is $c''$-uniform. If $\partial U$ is $c$-relatively connected, then each component of $D_1(A,r) \setminus N_1(A,r)$ has $c'$-relatively connected boundary.
\item If $N_2(A,r)$ is defined, then all the components of $\mathbb{R}^2 \setminus N_2(A,r)$ are $c'$-uniform. If $\partial U$ is $c$-relatively connected, then each component of $\mathbb{R}^2 \setminus N_2(A,r)$ has $c'$-relatively connected boundary.
\item If $N_2(A,r)$ is defined, then all the components of $D_2(A,r) \setminus N_2(A,r)$ are $c'$-uniform. If $\partial U$ is $c$-relatively connected, then each component of $D_2(A,r) \setminus N_2(A,r)$ has $c'$-relatively connected boundary.
\end{enumerate}
\end{lem}
\begin{proof}
We show (1) and (2). The proofs of (3) and (4) are similar. As every quasidisk is uniform with relatively connected boundary, it is enough to show (1) for the unbounded component and (2) for the component of $D_1(A,r) \setminus N_1(A,r)$ whose boundary contains $\partial D_1(A,r)$. For the rest of the proof, $C$ is the constant of Lemma \ref{lem:uniform=wud1}.
To prove (1), let $U'$ be the unbounded component of $\mathbb{R}^2 \setminus N_1(A,r)$. To show uniformity of $U'$, fix $x,y \in(\mathbb{R}^2 \setminus N_1(A,r))\cap U$. If $x,y \in \mathbb{R}^2 \setminus D_1(A,r)$, then uniformity follows from the fact that $D_1(A,r)$ is a $K'$-quasidisk for some $K'$ depending only on $c$. If $x\in \overline{D_1(A,r)}$ and $y \in \mathbb{R}^2 \setminus D_1(A,r)$, then join $x$ to a point $z\in\partial D_1(A,r)$ with a $c$-cigar curve $\gamma_1 \subset D_1(A,r)$ using uniformity of $U$ and then $z$ to $y$ with a $c'$-cigar curve $\gamma_2 \subset \mathbb{R}^2 \setminus D_1(A,r)$ using uniformity of $\mathbb{R}^2 \setminus D_1(A,r)$. Since $\dist(z,N_1(A,r)) > d|x-z|$ for some $d$ depending only on $c$, the curve $\gamma = \gamma_1\cup\gamma_2$ is $c''$-cigar for some $c''$ depending only on $c$. Finally, if $x,y \in \overline{D_1(A,r)}$, then $x,y\in U$ and we use uniformity of $U$.
Suppose now that $\partial U$ is $c$-relatively connected. Let $x\in \partial U'$ and $R>0$, and assume that $\overline{B}^2(x,R)\cap \partial U' \setminus \{x\} \neq \emptyset$ and $\partial U' \setminus \overline{B}^2(x,R) \neq \emptyset$. The second assumption implies that $R < 2\diam{A}$. If $R\leq 8Cr$, then $\overline{B}^2(x,(8C^2)^{-1}R) \cap \partial U' = \overline{B}^2(x,(8C^2)^{-1}R) \cap \partial U$ and relative connectedness is satisfied with $c' = 8C^2c$. Suppose now that $8rC < R < 2\diam{A}$. Then, if $\overline{B}^2(x,R/8)$ intersects $A$ we have $A\setminus \overline{B}^2(x,R/8) \neq \emptyset$ and relative connectedness is satisfied with $c'=8$.
To prove (2), let $U''$ be the bounded domain with boundary $\partial D_1(A,r) \cup N_1(A,r)$. The uniformity of $U''$ follows from (1) and Remark \ref{rem:removeqsdisks}. If $\partial U$ is relatively connected, for the relative connectedness of $\partial U''$ we work as above.
\end{proof}
\subsection{Total separation of $\partial U$}\label{sec:totalwud}
Fix $\epsilon\in (0,\diam{\partial U})$. For each point $x \in \partial U$ let
$D_{i_x}(A_x,r_x)$ be as in \textsection\ref{sec:wud} where $i_x\in\{1,2\}$, $r_x \in \{80c\epsilon,10c\epsilon\}$ and $A_x$ is a component of $\partial U$. Set $\gamma_x = \partial D_{i_x}(A_x,r_x)$ and note that $\dist(\gamma_x,\partial U) \geq 10\epsilon$.
Define $G = \bigcup_{x\in\partial U}\gamma_x$. Then, $\dist(G,\partial U) \geq 10\epsilon$.
The boundary of $\mathcal{T}_{\epsilon}(G)$ is a finite disjoint union of polygonal Jordan curves each of which has edges in $\mathcal{G}_{\epsilon}^1$ and is at least distance $\epsilon$ from $\partial U$. Define
\begin{align*}
\mathscr{G} = \{\overline{D} \colon D \text{ is a bounded Jordan domain whose }&\text{boundary is a }\\
&\text{component of }\partial\mathcal{T}_{\epsilon}(G)\}.
\end{align*}
Note that two elements of $\mathscr{G}$ are either disjoint or one is contained in the other. An element $D$ of $\mathscr{G}$ is called \emph{minimal} if for all $D'\in \mathscr{G}$, $D'\subset D$ implies $D' = D$.
For each component $A$ of $\partial U$ let $D_{A}$ be the minimal element of $\mathscr{G}$ that contains $A$ and $D'_A$ be the \emph{maximal element} in $\{D_{A'} \colon A'\text{ is a component of }\partial U\}$ that contains $A$, that is, if $D_{A}' \subset D_{A'}$ for some component $A'$ of $\partial U$, then $D_{A'} = D'_{A}$. Let $D_1,\dots,D_n$ be the elements of the set $\{D_{A}':A\text{ is a component of }\partial U\}$ and let $A_i = \partial U \cap D_i$. Note the following.
\begin{enumerate}
\item By the doubling property of $\mathbb{R}^2$, $n\leq \epsilon^{-2}(\diam{\partial U})^2$.
\item Each $D_i$ has diameter at least $\epsilon$ and, by the $c$-relative connectedness of $\partial U$, each $A_i$ has size at least $\epsilon/M$ for some $M >1$ depending only on $c$.
\item For all $i\neq j$, $\dist(D_i,D_j) \geq \epsilon$.
\end{enumerate}
The size of each $D_i$ can be estimated from the following lemma.
\begin{lem}\label{lem:totalwud}
Let $\epsilon\in(0,\diam{\partial U})$ and $D_1,\dots,D_n$ be as above.
\begin{enumerate}
\item For all $i=1,\dots,n$
\[ \epsilon + \sup_{A}\diam{A} \leq \diam{D_i} \leq 80c^2(\epsilon + \sup_{A}\diam{A})\]
where the supremum is taken over all components $A$ of $A_i$.
\item Each $D_i$ is an $L$-chordarc disk for some $L$ depending only on $\epsilon^{-1}\diam{D_i}$ and $c$.
\end{enumerate}
\end{lem}
\begin{proof}
The lower bound of the first claim follows from the fact that for each $z\in\partial D_i$, $\dist(z,\partial U) \geq \epsilon$. For the upper bound note that for each $x\in A_i$ we have $\diam{D_i} \leq \diam{D_{i_x}(A_x,r_x)} \leq c((80c)\epsilon + \diam{A_x})$. The second claim follows from the first claim and Lemma \ref{lem:BLsep}.
\end{proof}
\section{Quasisymmetric and bi-Lipschitz extension for a class of finitely connected domains}\label{sec:Ext}
The classical Sch\"onflies theorem states that every embedding of $\mathbb{S}^1$ in $\mathbb{R}^2$ extends to a homeomorphism of $\mathbb{R}^2$. Beurling and Ahlfors, in their celebrated paper \cite{BA}, proved the quasisymmetric version of Sch\"onflies theorem and, later, Tukia proved the bi-Lipschitz version.
\begin{thm}\cite{BA, TukiaBLExt}\label{thm:BAT}
If $f:\mathbb{S}^1\to\mathbb{R}^2$ is $\eta_1$-quasisymmetric (resp. $L_1$-bi-Lipschitz), then it extends $\eta_2$-quasisymmetrically (resp. $L_2$-bi-Lipschitzly) to $\mathbb{R}^2$ with $\eta_2$ depending only on $\eta_1$ (resp. $L_2$ depending only
on $L_1$).
\end{thm}
\begin{rem}\label{rem:BAT}
In the quasisymmetric case (resp. bi-Lipschitz case) of Theorem \ref{thm:BAT}, $\mathbb{S}^1$ can be replaced by any $K$-quasicircle (resp. $\lambda$-chordarc circle) with $\eta_2$ depending only on $\eta_1$ and $K$ (resp. $L_2$ depending only on $L_1$ and $\lambda$).
\end{rem}
In \textsection\ref{sec:BLextQD} we use Carleson's method to show that $\mathbb{S}^1$, in the bi-Lipschitz case of Theorem \ref{thm:BAT}, can be replaced by quasicircles. In \textsection\ref{sec:ext} we extend Theorem \ref{thm:BAT} for the class of finitely connected uniform domains.
\subsection{Bi-Lipschitz extensions to quasidisks}\label{sec:BLextQD}
The main result of \textsection\ref{sec:BLextQD} is the following lemma.
\begin{lem}\label{lem:extcomplBL}
Let $D$ be a $K$-quasidisk and let $f:\partial D \to \mathbb{R}^2$ be an $L$-bi-Lipschitz embedding. Then, there exists an $L'$-bi-Lipschitz extension $f_D:\overline{D} \to \mathbb{R}^2$ with $L'$ depending only on $K$ and $L$.
\end{lem}
For the rest of \textsection\ref{sec:BLextQD}, for two positive quantities $A,B$ we write $A\lesssim B$ if there exists a constant $C^*$, depending only on $K$ and $L$, such that $A\leq C^* B$. We write $A\simeq B$ if $A\lesssim B$ and $B\lesssim A$. For the case that $D$ is unbounded we make the following observation, which is also used in \textsection\ref{sec:unbounded}; the proof is straightforward and is left to the reader. Given $x_0 \in \mathbb{R}^2$, define $I_{x_0} :\mathbb{R}^2 \setminus \{x_0\} \to \mathbb{R}^2\setminus\{0\}$ to be the inversion map given by
\[ I_{x_0}(x) = \frac{x-x_0}{|x-x_0|^2}.\]
\begin{rem}\label{rem:inversion}
Let $E\subset\mathbb{R}^n$ be non-degenerate, $x_0\in E$ and $f:E \to \mathbb{R}^n$ be $L$-bi-Lipschitz. Then, the map $g: I_{x_0}(E\setminus\{x_0\}) \to\mathbb{R}^n$ given by
\[g = I_{f(x_0)} \circ f \circ I_{x_0}^{-1} |_{I_{x_0}(E\setminus\{x_0\})} \]
is $L'$-bi-Lipschitz with $L'$ depending only on $L$.
\end{rem}
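A short computation behind Remark \ref{rem:inversion} (a sketch; the stated identity is classical and is all the proof needs): inversions satisfy the distortion identity
\[
|I_{x_0}(x)-I_{x_0}(y)| = \frac{|x-y|}{|x-x_0|\,|y-x_0|}, \qquad x,y \in \mathbb{R}^n\setminus\{x_0\}.
\]
Applying this identity to both $I_{x_0}$ and $I_{f(x_0)}$ and using the bi-Lipschitz bounds $L^{-1}|x-y| \leq |f(x)-f(y)| \leq L|x-y|$ to control each factor in the quotient gives, for instance, $L' = L^3$.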
\begin{proof}[{Proof of Lemma \ref{lem:extcomplBL}}]
By Remark \ref{rem:inversion}, we only need to prove the lemma in the case that $D$ is bounded. Set $\Gamma_{\infty} =\partial D$. Then, $\Gamma_{\infty}' = f(\partial D)$ is a bounded $K'$-quasicircle; let $D'\subset\mathbb{R}^2$ be the bounded domain that is bounded by $\Gamma_{\infty}'$. After rescaling, we may further assume that $\diam{D} = \diam{D'} =1$. Both $D$ and $D'$ are $c$-uniform domains for some $c>1$ depending only on $K$ and $L$. Moreover, both $\Gamma_{\infty}$ and $\Gamma_{\infty}'$ are $C$-bounded turning for some $C$ depending only on $K$ and $L$; for simplicity, we assume $C=c$.
Fix an orientation for $\Gamma_{\infty}$. Through $f$, an orientation for $\Gamma_{\infty}'$ is also defined. Given a set of points $\{p_1,\dots p_n\} \subset \Gamma_{\infty}$ we say that $p_i$ and $p_j$ are \emph{neighbors in the set $\{p_1,\dots p_n\}$} if one of the two subarcs of $\Gamma_{\infty} \setminus \{p_i,p_j\}$ contains no point from $\{p_1,\dots,p_n\}$. Such a subarc is denoted by $\Gamma_{\infty}(p_i,p_j)$. Moreover, we say that \emph{$p_i$ is on the right of $p_j$ in the set $\{p_1,\dots p_n\}$} if $p_i$ and $p_j$ are neighbors and, under the orientation of $\Gamma_{\infty}$, $p_j$ and $p_i$ are the starting and ending points, respectively, of $\Gamma_{\infty}(p_i,p_j)$. In the opposite case, we say that \emph{$p_i$ is on the left of $p_j$}.
Fix points $x_0 \in D$ and $x_0' \in D'$ such that
\[\dist(x_0,\Gamma_{\infty}) \geq (4c)^{-1}\text{ and }\dist(x_0',\Gamma_{\infty}') \geq (4c)^{-1}.\]
The existence of these points follows from Proposition \ref{prop:bndryuniform}.
Recall the definitions of square thickenings $\mathcal{T}_{\delta}(E)$ from \textsection\ref{sec:thickening}. For each $m\in \mathbb{N}$, let $D_m$ (resp. $D_m'$) be the component of $\mathcal{T}_{\epsilon_m}(\Gamma_{\infty})$ (resp. $\mathcal{T}_{\epsilon_m}(\Gamma_{\infty}')$) that contains $x_0$ (resp. $x_0'$) where $\epsilon_m = 2^{-lm}$ and $l$ is an integer such that $2^l\geq 2c$. For each $m\in\mathbb{N}$ set $\Gamma_m = \partial D_m$. Choosing $l$ appropriately, we may assume that, for each $m\in\mathbb{N}$, $D_m \subset D_{m+1} \subset D$ and $\dist(D_m,\Gamma_{m+1}) \geq 2^{-l(m+1)}$ and similarly for the domains $D'_m$.
Choose points $x_1,\dots,x_k \in \Gamma_{\infty}$, following its orientation, such that
\[ 16c^3 \leq |x_i-x_{i+1}| \leq 32c^3\]
with the convention $x_{k+1} = x_1$. Note that $k\leq N$ for some $N\in \mathbb{N}$ depending only on $c$ and $C$.
For each $i\in\{1,\dots,k\}$ let $\hat{y}_i \in \Gamma_1$ be a point closest to $x_i$ and join $x_i$ to $\hat{y}_i$ with a $c$-cigar curve $\sigma_i$. For each $\sigma_i$ we construct a broken line $\gamma_i$ as follows. For each $z \in \sigma_i$ let $\Sigma(z)$ be the union of all squares in $\mathcal{G}_{2^{-l(z)}}$ that contain $z$, where $l(z)$ is the smallest integer such that $2^{-l(z)} \leq \epsilon_1$ and $2^{-l(z)} \leq \frac{1}{6}\dist(z,\Gamma_{\infty})$. Let $\gamma_i$ be a subarc in the boundary of $\bigcup_{z\in\sigma_i}\Sigma(z)$ that connects $x_i$ with $\Gamma_1$ and is entirely contained (except for its endpoints) in $D \setminus D_1$. Denote by $y_i$ the endpoint of $\gamma_i$ which is on $\Gamma_1$. Note that the broken lines $\gamma_i$ are mutually disjoint and that $\dist(\gamma_i,\gamma_j) \gtrsim 1$ when $i\neq j$.
Next, for each $n\in\{2,3,\cdots\}$, we modify $\gamma_i$ close to its intersection points with $\Gamma_n$. We start with $\Gamma_2$.
Let $T_2$ be the union of all squares in $\mathcal{G}_{\epsilon_2/4}$ that intersect $\Gamma_2$. Note that $\partial T_2$ consists of exactly two Jordan curves; one contained in $D_2$, the other contained in $D_1 \setminus \overline{D_2}$. Let $p_{i}$ and $q_{i}$ be the points of $\gamma_i\cap \partial T_2$ such that the part of $\gamma_i$ joining $x_i$ with $p_{i}$ is entirely contained in $D_2$ while the part of $\gamma_i$ joining $y_i$ with $q_{i}$ is entirely contained in $D_1\setminus\overline{D_2}$. Let $\hat{q}_i$ be the \emph{flat vertex} (i.e. the common vertex of two co-linear edges) on the component of $\partial T_2$ containing $q_{i}$ that is closest to $p_{i}$, and let $\tau_1$ be the subarc of $\partial T_2$ of smaller diameter joining $q_i$ with $\hat{q}_i$. Let $t_i$ be the flat vertex of $\Gamma_2$ closest to $\hat{q}_i$ and let $\tau_2$ be the line segment $[\hat{q}_i,t_i]$. Let $\hat{p}_i$ be the flat vertex on the component of $\partial T_2$ containing $p_{i}$ that is closest to $t_{i}$, and let $\tau_3$ be the line segment $[t_i,\hat{p}_i]$. Finally, let $\tau_4$ be the subarc of $\partial T_2$ of smaller diameter joining $\hat{p}_i$ with $p_i$. Replace $\gamma_i(p_i,q_i)$ with $\bigcup_{j=1}^4\tau_j$. Note that the modified curve $\gamma_i$ and $\Gamma_2$ intersect orthogonally and their intersection is only the point $t_i$, which we denote for the rest by $y_{i1}$.
Similarly, for each $n\geq 3$, we modify $\gamma_i$ close to its intersection points with $\Gamma_n$ and denote by $y_{i1^{n-1}}$ the unique intersection point of $\Gamma_n$ and $\gamma_i$.
We proceed inductively. Assume that for some $m\in\mathbb{N}$, we have defined points $x_w \in \Gamma_{\infty}$, curves $\gamma_w$ and points $y_{w1^l}$ where $w \in \mathbb{N}^m$ is a finite word formed from $m$ letters in $\mathbb{N}$ and $l\in\mathbb{N}\cup\{0\}$. We denote by $|w|$ the number of letters the word $w$ has. Conventionally, $|\emptyset|=0$.
Fix $x_{w}$ and $x_{u}$ such that $|w|=|u|=m$ and $x_{w}$ is on the left of $x_{u}$ in the collection $\{x_v : |v|=m\}$. Choose points $x_{wi}$ in $\Gamma_{\infty}^{(w)} = \Gamma_{\infty}(x_{w},x_{u})$, with $i = 1,\dots,N_{w}+1$, following the orientation of $\Gamma_{\infty}$, such that $x_{w1} = x_{w}$, $x_{w(N_{w}+1)}=x_{u}$ and, for each $i = 1,\dots, N_{w}$,
\[ 4c(2c)^{3-2m} \leq |x_{wi}-x_{w(i+1)}| \leq 8c(2c)^{3-2m}.\]
Note that $N_{w}\leq N_0$ where $N_0$ depends only on $c$. Without loss of generality, we assume for the rest that $N=N_0$.
For $i\in\{2,\dots,N_{w}\}$, let $\hat{y}_{wi}$ be a point of $\Gamma_{|w|+1}$ closest to $x_{wi}$ and let $\sigma_{wi}$ be a $c$-cigar curve joining $x_{wi}$ with $\hat{y}_{wi}$. Construct $\gamma_{wi}$ as before and let $y_{wi}$ be the point on $\Gamma_{|w|+1}$ such that the part of $\gamma_{wi}$ connecting $x_{wi}$ with $y_{wi}$ is entirely contained in $D \setminus D_{|w|+1}$. As before, for each $k\in\mathbb{N}$, we modify $\gamma_{wi}$ close to its intersection points with $\Gamma_{|w|+1+k}$ and denote by $y_{wi1^k}$ the unique intersection point of $\gamma_{wi}$ and $\Gamma_{|w|+1+k}$.
Let $\mathcal{W}$ be the set of finite words $w$ formed by letters $\{1,\dots,N\}$ for which $x_w$ has been defined. Let also $\mathcal{W}_k$ be the set of words $w\in\mathcal{W}$ whose length is $k$. Again, the numbers $\epsilon_m$ have been chosen so that
\[\dist(\gamma_w,\gamma_u) \gtrsim \min\{\diam{\gamma_w},\diam{\gamma_u}\} \simeq \min\{\epsilon_{|w|},\epsilon_{|u|}\}.\]
Fix $w \in \mathcal{W}$ and let $x_u$ be on the left of $x_w$ in the collection $\{x_v : |v|=|w|\}$. Define $\mathcal{Q}_{w}$ to be the Jordan domain bounded by $\gamma_{w}$, $\gamma_{u}$, $\Gamma_{|w|}$ and $\Gamma_{|w|+1}$. Set $\mathscr{Q} = \{\mathcal{Q}_w : w\in \mathcal{W}\}$. Then, for each $\mathcal{Q}_w$ there exists $l_w \in\mathbb{N}$ such that $\partial \mathcal{Q}_w$ is a polygonal Jordan curve with edges in $\mathcal{G}_{2^{-l_w}}^1$ and $\diam{\mathcal{Q}_w} \simeq 2^{-l_w}$. Moreover, the distance of each $\mathcal{Q}_w$ from $\Gamma_{\infty}$ is comparable to its distance from $x_w$. Therefore,
\begin{enumerate}
\item each $\mathcal{Q}_w \in \mathscr{Q}$ is an $L_1$-chordarc disk with $L_1\simeq 1$,
\item $\dist(\mathcal{Q}_w,x_w) \simeq \dist(\mathcal{Q}_w,\Gamma_{\infty}) \simeq \diam{\mathcal{Q}_w}$.
\end{enumerate}
Using now the points $x_w' = f(x_w)$ and working as above, we obtain a Whitney-type decomposition $\mathscr{Q}'$ of $D'$ that is combinatorially equivalent to $\mathscr{Q}$ and satisfies properties (1) and (2) above. Moreover, for each $w\in \mathcal{W}$,
\[ \diam{\mathcal{Q}_w} \simeq \diam{\Gamma_{\infty}^{(w)}} \simeq \diam{f(\Gamma_{\infty}^{(w)})} \simeq \diam{\mathcal{Q}_w'}.\]
We can now extend $f$ to the $1$-skeleton of the decomposition
\[ f: \overline{\bigcup_{w\in\mathcal{W}} \partial \mathcal{Q}_w} \to \overline{\bigcup_{w\in\mathcal{W}}\partial\mathcal{Q}_w'}\]
so that $f|_{\partial \mathcal{Q}_w} : \partial \mathcal{Q}_w \to \partial \mathcal{Q}_w'$ is an $L_2$-bi-Lipschitz homeomorphism. By Theorem \ref{thm:BAT}, each $f|_{\partial \mathcal{Q}_w}$ can be extended to an $L_3$-bi-Lipschitz homeomorphism $f:\mathcal{Q}_w\to \mathcal{Q}_w'$. Therefore, the map $f:D \to D'$ is $L_4$-BLD and, by Lemma \ref{lem:BLD}, $f$ is $L_5$-bi-Lipschitz with constants $L_2,\dots,L_5$ depending only on $L$ and $K$.
\end{proof}
\subsection{Extension for a class of finitely connected domains}\label{sec:ext}
Let $L\geq 1$, $K\geq 1$ and $d\geq 1$. Denote by $\mathscr{QC}(K,d)$ (resp. $\mathscr{CA}^*(L,d)$) the collection of planar bounded domains $U\subset \mathbb{R}^2$ whose boundary consists of mutually disjoint $K$-quasicircles (resp. $L$-chordarc circles) with mutual distances and diameters bounded below by $d^{-1}\diam{U}$. Let also $\mathscr{CA}(L,d)$ be the collection of bounded domains $U\subset \mathbb{R}^2$ whose boundary consists of mutually disjoint $L$-chordarc circles with mutual distances bounded below by $d^{-1}\diam{U}$. Note that $\mathscr{CA}^*(L,d) \subset \mathscr{CA}(L,d)$ and $\mathscr{CA}^*(L,d) \subset \mathscr{QC}(L^2,d)$.
The following proposition, which is the main result of this section, generalizes Theorem \ref{thm:BAT} and is a special case of Theorem \ref{thm:main}.
\begin{prop}\label{prop:BLext}
Let $U\subset \mathbb{R}^2$ be a bounded domain and $f:\partial U \to \mathbb{R}^2$ be an embedding that can be extended homeomorphically to $\overline{U}$.
\begin{enumerate}
\item If $U\in \mathscr{QC}(K,d)$ and $f$ is $\eta_1$-quasisymmetric, then it extends to an $\eta_2$-quasisymmetric embedding of $\overline{U}$ with $\eta_2$ depending only on $\eta_1$, $K$ and $d$.
\item If $U\in \mathscr{CA}(L,d)$ and $f$ is $L_1$-bi-Lipschitz, then it extends to an $L_2$-bi-Lipschitz embedding of $\overline{U}$ with $L_2$ depending only on $L_1$, $L$ and $d$.
\end{enumerate}
\end{prop}
We first show that domains in $\mathscr{QC}(K,d)$ and $\mathscr{CA}(L,d)$ are finitely connected, quantitatively. Although this result follows almost immediately from the doubling property, with a little more effort one can show the following stronger statement.
\begin{lem}\label{lem:numberofcomp}
For each $n\in\mathbb{N}$, $c>1$ and $d>1$ there exists $N>1$ depending only on $n$, $c$ and $d$ that satisfies the following property. If $U_1,\dots,U_m \subset \mathbb{R}^n$ are disjoint $c$-uniform domains of mutual relative distances at most $d$, then $m\leq N$.
\end{lem}
\begin{proof}
Let $U_1,\dots, U_m \subset \mathbb{R}^n$ be mutually disjoint $c$-uniform domains of mutual relative distances at most $d$. The proof is divided into two cases.
\emph{Case 1.} Assume first that at least one of the $U_i$ is bounded. In particular, assume that $U_m$ is bounded and that it has the smallest diameter among the $U_i$. Applying a dilation, we may further assume that $\diam{U_m}=1$. Fix a point $z_0\in U_m$. Since $\text{dist}^*(U_i,U_m)\leq d$, every domain $U_i$ intersects $B^n(z_0,2d)$.
We claim that for each $i\in\{1,\dots,m\}$, there exists $x_i\in U_i$ such that
\[ B^n(x_i,(4c)^{-1}) \subset U_i\cap B^n(z_0,4d).\]
To prove the claim, suppose first that $U_i$ is contained entirely in $B^n(z_0,4d)$. Then, the claim follows from the third assertion of Proposition \ref{prop:bndryuniform} and the fact that $\diam{U_i} \geq 1$. Suppose now that $U_i$ has a point $y_i \in \mathbb{R}^n \setminus B^n(z_0,4d)$. Then, there exists a point $y_i' \in U_i \cap B^n(z_0,2d)$ and a $c$-cigar curve $\gamma_i$ joining $y_i$ with $y_i'$. Fix a point $x_i \in \gamma_i \cap \partial B^n(z_0,3d)$ and note that the John property of $\gamma_i$ implies that $B^n(x_i,d/c) \subset U_i$ which completes the proof of the claim.
By the doubling property of $\mathbb{R}^n$, there exists some $c_0>1$ depending only on $n$ such that the ball $B^n(z_0,4d)$ can contain at most $N = c_0(4d/(4c)^{-1})^n = c_0(16cd)^n$ mutually disjoint balls of radius at least $(4c)^{-1}$. Therefore, $m\leq c_0(16cd)^n$.
\emph{Case 2.} Assume that all $U_i$ are unbounded. Fix a point $z_0 \in \mathbb{R}^n$ and let $r>0$ be such that $B^n(z_0,r)\cap U_i \neq \emptyset$ for all $i\in\{1,\dots,m\}$. As with \emph{Case 1}, we can show that, for each $i\in\{1,\dots,m\}$, there exists $x_i\in U_i$ such that $B^n(x_i,r(2c)^{-1}) \subset U_i\cap B^n(z_0,2r)$. Now, the doubling property of $\mathbb{R}^n$ yields $m\leq c_0(4c)^n$.
\end{proof}
\begin{cor}\label{cor:numberofcomp}
For each $c>1$ and $d>1$ there exists $N>1$ depending only on $c$ and $d$ such that every domain in $\mathscr{QC}(c,d)\cup\mathscr{CA}(c,d)$ has at most $N$ boundary components.
\end{cor}
For the rest of \textsection\ref{sec:ext}, set $\mathcal{U}_0=(-1,1)\times(-1,1)$ and for each $m\in\mathbb{N}$ and $k\in \{1,\dots,m\}$ set
\[\mathcal{S}_{m,k} = [\frac{4k-2m-3}{2m+1},\frac{4k-2m-1}{2m+1}]\times[\frac{-1}{2m+1},\frac{1}{2m+1}]\]
and
\[ \mathcal{U}_m = \mathcal{U}_0 \setminus \bigcup_{k=1}^m \mathcal{S}_{m,k}.\]
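As a quick sanity check of the formula, for $m=1$ the single slit is the concentric square
\[
\mathcal{S}_{1,1} = \left[-\tfrac{1}{3},\tfrac{1}{3}\right]\times\left[-\tfrac{1}{3},\tfrac{1}{3}\right], \qquad \mathcal{U}_1 = (-1,1)^2\setminus\left[-\tfrac{1}{3},\tfrac{1}{3}\right]^2,
\]
so $\mathcal{U}_1$ is a square annulus. In general, $\mathcal{U}_m$ is the square $(-1,1)^2$ with $m$ equally spaced closed squares of side-length $\frac{2}{2m+1}$ removed along the horizontal axis.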
\begin{lem}\label{lem:standdom}
Each $U \in \mathscr{QC}(K,d)$ (resp. $\mathscr{CA}^*(L,d)$) is $\eta$-quasisymmetric (resp. $(L',\diam{U})$-quasisimilar) to $\mathcal{U}_m$ for some $0\leq m\leq N$ with $\eta$ and $N$ depending only on $K$ and $d$ (resp. $N$ and $L'$ depending only on $L$ and $d$).
\end{lem}
For the proof of the lemma recall that a \emph{dyadic $n$-cube} $D\subset \mathbb{R}^n$ is an $n$-cube of the form $D = [i_12^{k}, (i_1+1)2^k]\times \cdots\times [i_n2^{k}, (i_n+1)2^k]$ where $k,i_1,\dots,i_n \in \mathbb{Z}$. If $n=2$, $D$ is called a dyadic square.
\begin{proof}[{Proof of Lemma \ref{lem:standdom}}]
The lemma is trivial if $U$ is simply connected. Suppose now that $U = D_0 \setminus(\overline{D_1}\cup\dots\cup \overline{D_m})$ where
\begin{enumerate}
\item $D_1,\dots,D_m$ are mutually disjoint subsets of $D_0$,
\item each $D_i$, $i\in\{0,\dots,m\}$, is a $K$-quasidisk with $\diam{D_i} \geq d^{-1}\diam{U}$,
\item $\dist(\partial D_i, \partial D_j) \geq d^{-1}\diam{U}$ for each $i,j \in \{0,\dots,m\}$ with $i\neq j$.
\end{enumerate}
By Corollary \ref{cor:numberofcomp}, $m\leq N$ for some $N\in\mathbb{N}$ depending only on $K$ and $d$.
Assume first that $U \in \mathscr{CA}^*(L,d)$. Applying a $(\diam{U},L_1)$-quasisimilarity, with $L_1$ depending only on $L$, we may assume that $D_0 = \mathcal{U}_0$. By Lemma \ref{lem:BLsep}, there exists $L_2>1$ depending only on $L$ and $d$ such that for each $i=1,\dots,m$ there exists an $L_2$-chordarc disk $D_i'$ containing $D_i$ such that, for all $z\in\partial D_i'$, $(24d)^{-1} \leq \dist(z,\partial D_i) \leq (3d)^{-1}$. Moreover, each $D_i$ contains a dyadic square $S_i$ with side-length $2^{-m_0}$ where $m_0$ is the smallest integer such that $2^{-m_0} \leq \min\{(4Ld)^{-1},\log(2N^{-1})/\log2\}$. Note that both $\dist(\partial D_i, S_i)$ and $\diam{S_i}$ are bounded below by $\delta$ for some $\delta\in(0,1)$ depending only on $L$ and $d$.
There exists $L_3$ depending only on $L$ and $d$ such that for each $i=1,\dots,m$ there exists an $L_3$-bi-Lipschitz mapping $f_i : \partial D_i' \cup \partial D_i \to \partial D_i' \cup \partial S_i$ with $f_i|_{\partial D_i'} = \text{Id}$ and $f_i(\partial D_i) = \partial S_i$. By the Annulus Theorem in the LIP category (see Theorem 3.4 in \cite{Vais-PLaprox}), each $f_i$ can be extended to an $L_4$-bi-Lipschitz mapping $f_i: \overline{D_i'\setminus D_i} \to \overline{D_i'\setminus S_i}$ with $L_4$ depending only on $L$ and $d$. Moving the squares $S_i$ around and properly dilating them, we can map $D_0 \setminus (\bigcup_{i=1}^m S_i)$ bi-Lipschitzly onto $\mathcal{U}_m$. Since there are at most $(2^{2m_0+2})!$ different configurations for the position of the squares $S_1,\dots,S_m$ inside $D_0$, the bi-Lipschitz constant of the latter map depends at most on $L$ and $d$.
The proof of the quasisymmetric case is almost identical. The only difference is that, now, we use the Annulus Theorem in the LQC category (see Theorem 3.12 in \cite{TukiaVais-LIPandLQCextension}) to obtain $\eta$-quasisymmetric extensions of the mappings $f_i$ with $\eta$ depending only on $K$ and $d$.
\end{proof}
\begin{proof}[{Proof of Proposition \ref{prop:BLext}}]
Since the embedding $f$ in both cases can be extended homeomorphically to $\overline{U}$, there exists a domain $U' \subset \mathbb{R}^2$ such that $\partial U' = f(\partial U)$ and $f$ can be extended to a homeomorphism of $\overline{U}$ onto $\overline{U'}$.
Suppose first that $U\in \mathscr{QC}(K,d)$ and that $f$ is $\eta_1$-quasisymmetric. By Lemma \ref{lem:standdom}, we may assume that $U=U'=\mathcal{U}_m$ where $m\leq N$ for some $N$ depending only on $K$ and $d$. Moreover, applying a $\lambda$-bi-Lipschitz homeomorphism of $\mathcal{U}_m$ onto itself with $\lambda>1$ depending only on $N$, we may assume that $f$ maps $\partial \mathcal{S}_{m,k}$ onto $\partial \mathcal{S}_{m,k}$.
If $m=0$, the claim follows from Theorem \ref{thm:BAT} while if $m=1$, it follows from the Annulus theorem in the LQC category. Assume for the rest that $m\geq 2$.
Let $\mathcal{S}_0' = [\frac{1/2}{2m+1}-1,1-\frac{1/2}{2m+1}]^2$ and for each $k=1,\dots,m$ let
\[ \mathcal{S}_{m,k}' = [\frac{4k-2m-7/2}{2m+1},\frac{4k-2m-1/2}{2m+1}]\times [-\frac{3/2}{2m+1},\frac{3/2}{2m+1}]\]
so that $\mathcal{S}_{m,k}\subset \mathcal{S}_{m,k}' \subset \mathcal{S}_{0}' \subset (-1,1)^2$ for each $k=1,\dots,m$. Extend $f$ to $\partial \mathcal{S}_0'$ and to each $\partial \mathcal{S}_{m,k}'$ by the identity and note that the new embedding, which we still denote by $f$, is $\eta_1'$-quasisymmetric with $\eta_1'$ depending only on $\eta_1$ and $d$. Applying the Annulus theorem in the LQC category on the interior of each $\mathcal{S}_{m,k}'\setminus \mathcal{S}_{m,k}$ we obtain an $\eta_2$-quasisymmetric extension $F: U \to U'$ with $\eta_2$ depending only on $K$, $d$ and $\eta_1$.
Suppose now that $U\in\mathscr{CA}(L,d)$ and $f:\partial U \to \mathbb{R}^2$ is $L_1$-bi-Lipschitz and can be extended homeomorphically to $\overline{U}$. If $U$ is simply connected, then the claim follows from Theorem \ref{thm:BAT}. Assume now that $U$ is not simply connected. As before, there exists $N\in\mathbb{N}$ depending only on $L$ and $d$ such that $\mathbb{R}^2 \setminus U$ has at most $N$ bounded components $D_1,\dots,D_m$. Moreover, there exists $U'\in\mathscr{CA}(L,d)$ such that $\partial U' = f(\partial U)$ and $f$ extends to a homeomorphism from $\overline{U}$ onto $\overline{U'}$.
For each $i=1,\dots,m$, set $D_i'$ to be the bounded component of $\mathbb{R}^2 \setminus U'$ such that $\partial D_i' = f(\partial D_i)$. Let $k_i$ be the maximal integer such that
\[ \diam{D_i} \leq 2^{-k_i-3}(L_1d)^{-1}\diam{U}.\] If $k_i\leq 1$, then $\diam{D_i}\geq \frac{1}{32L_1d}\diam{U}$ and we set $\tilde{D}_i = D_i$.
Suppose that $k_i\geq 2$. Fix a point $x\in\partial D_i$ and let $x'=f(x) \in \partial D_i'$. Let $B_i = B^2(x,2L_1\diam{\partial D_i})$, $B_i' = B^2(x',2L_1\diam{\partial D_i})$, $\tilde{D}_i = B^2(x,2^{k_i}L_1\diam{\partial D_i})$ and $\tilde{D}_i' = B^2(x',2^{k_i}L_1\diam{\partial D_i})$. Note that $D_i\subset B_i \subset \tilde{D}_i$ and $\tilde{D}_i \cap (\partial U \setminus \partial D_i) = \emptyset$, and similarly for $D_i'$. Moreover, $\diam{\tilde{D}_i} \geq \frac{1}{32L_1d}\diam{U}$ and
\[\min\{\dist(\tilde{D}_i,\partial U \setminus \partial D_i), \dist(\tilde{D}_i',\partial U' \setminus \partial D_i')\} \geq (4Ld)^{-1}\diam{U}.\]
Therefore, $\tilde{U} = U \setminus \bigcup_{i=1}^m \tilde{D}_i \in \mathscr{CA}^*(L,d')$ for some $d'$ depending only on $d$ and $L_1$. For each $i=1,\dots,m$ define $f|_{\overline{\tilde{D}_i}\setminus B_i}$ by translation and apply the Annulus Theorem in the LIP category \cite[Theorem 3.4]{Vais-PLaprox} to extend $f$ $L_1'$-bi-Lipschitzly to $B_i \setminus D_i$, with $L_1'>1$ depending only on $L$, $d$ and $L_1$.
Applying Lemma \ref{lem:standdom} and the Annulus Theorem in the LIP category, we obtain the extension of $f$ to $\tilde{U}$ as in the first part of Proposition \ref{prop:BLext}.
\end{proof}
\subsection{A higher dimensional extension}
It is well known that both cases of Theorem \ref{thm:BAT} are false in $\mathbb{R}^3$ due to the existence of a Lipschitz embedding of $\mathbb{S}^2$ into $\mathbb{R}^{3}$ that can be extended homeomorphically to $\mathbb{R}^3$ but not quasisymmetrically; see \cite[\textsection15]{TukiaBLExt}.
In this subsection we work with a much simpler setting. For $d>1$ denote by $\mathscr{C}_n(d)$ the collection of domains $U\subset \mathbb{R}^n$ whose boundary components are boundaries of $n$-cubes of mutual distances bounded below by $d^{-1}\diam{U}$.
\begin{prop}\label{prop:ext-ndim}
Let $U\in\mathscr{C}_n(d)$ and $f:\partial U \to \mathbb{R}^n$ be an $L$-bi-Lipschitz map that is a similarity on each component of $\partial U$ and that extends homeomorphically to $\overline{U}$. Then $f$ extends $L'$-bi-Lipschitzly to $\overline{U}$ with $L'$ depending only on $L$, $d$ and $n$.
\end{prop}
For the proof of Proposition \ref{prop:ext-ndim}, given a set $A \subset \mathbb{R}^n$ and $\delta>0$, we define the \emph{$\delta$-neighborhood} of $A$ in $\mathbb{R}^n$ by $N_{\delta}^n(A) = \bigcup_{x\in A}B^n(x,\delta)$.
\begin{proof}[{Proof of Proposition \ref{prop:ext-ndim}}]
We only give a sketch of the proof as it is similar to that of Proposition \ref{prop:BLext}. Since $f$ extends to $\overline{U}$, there exists a domain $U'\subset \mathbb{R}^n$ whose boundary is a union of disjoint cubes such that $f$ maps $\partial U$ onto $\partial U'$ and any homeomorphic extension to $\overline{U}$ maps $U$ onto $U'$.
Firstly, by the doubling property of $\mathbb{R}^n$, there exists $N\in\mathbb{N}$ depending only on $n$ and $d$ such that $\partial U$ has at most $N$ components. In particular, $U = D_0 \setminus \bigcup_{i=1}^m \overline{D_i}$ where $D_i$ are open $n$-cubes and $m\leq N$.
Secondly, applying the Annulus Theorem in the LIP category, we obtain a small $\delta>0$ and an $L_1$-bi-Lipschitz map $F:\mathbb{R}^n \to \mathbb{R}^n$ such that
\begin{enumerate}
\item $F$ is the identity in $U\setminus N_{\delta}^n(\partial U)$,
\item $F$ maps $\partial D_0$ onto the boundary of a dyadic cube $D_0'$ of side-length $2^{k_0+1}$ where $k_0$ is the minimal integer such that $\diam{D_0} \leq 2^{k_0-1}$,
\item $F$ maps the boundary of each cube $D_i$ onto the boundary of a dyadic cube of side-length $2^{k_i+1}$ where $k_i\in\mathbb{Z}$ is the maximal integer such that $2^{k_i} \leq \min\{\frac{1}{N}2^{k_0},\frac{1}{16}\diam{D_i}\}$.
\end{enumerate}
Here, $\delta$ and $L_1$ depend only on $n$, $L$ and $d$.
If $\diam{D_i'}$ is very small compared to $\diam{D_0'}$, then, as in the proof of Proposition \ref{prop:BLext}, we can replace $D_i'$ with a new dyadic cube, which we still denote by $D_i'$, whose side-length is comparable to that of $D_0'$ but no more than $\frac{1}{N}2^{k_0}$.
Applying a uniformization result like Lemma \ref{lem:standdom}, we may assume that $U=U'$. The rest of the proof follows from applying the Annulus Theorem in the LIP category $m$ times.
\end{proof}
\section{First reduction: perfect boundary}\label{sec:isolated}
In this section, we reduce the proof of Theorem \ref{thm:main} to the case that $\partial U$ is a perfect set, and the proof of Theorem \ref{thm:cantor} to the case that $E$ is perfect.
Let $E \subset \mathbb{R}^n$ be a closed set. For each isolated point $x\in E$ let $\hat{x} \in E\setminus\{x\}$ be a point of smallest distance to $x$ and let $E_x$ be the image of the standard middle-third Cantor set $\mathcal{C}$ under a similarity with scaling factor $\frac{1}{10}|x-\hat{x}|$ such that $E_x$ contains $x$. If $x\in E$ is not isolated, set $E_x = \{x\}$. Set $\hat{E}=\bigcup_{x\in E}E_x$ and note that $\hat{E}$ is closed.
\begin{lem}
For each $c\geq 1$ there exists $c'\geq 1$ depending only on $c$ that satisfies the following properties.
\begin{enumerate}
\item If $E \subset \mathbb{R}^n$ is $c$-relatively connected, then $\hat{E}$ is $c'$-uniformly perfect.
\item If $E \subset \mathbb{R}^n$ is $c$-uniformly disconnected, then $\hat{E}$ is $c'$-uniformly disconnected.
\item If $U\subset \mathbb{R}^2$ is $c$-uniform and $U' \subset U$ is the domain with $\partial U' = \hat{\partial U}$, then $U'$ is $c'$-uniform.
\end{enumerate}
\end{lem}
\begin{proof}
The proof of the first claim is similar to that of Lemma 3.3 in \cite{V}. Let $x\in \hat{E}$ and $r>0$. From the fact that $\hat{E}$ is perfect, we have $\{x\} \subsetneq \overline{B}^n(x,r)\cap \hat{E}$. Suppose that $\hat{E} \setminus \overline{B}^n(x,r) \neq \emptyset$. If $x\in E$ is not isolated in $E$, then
\[\emptyset \neq E \cap(\overline{B}^n(x,r)\setminus B^n(x,r/c)) \subset \hat{E}\cap(\overline{B}^n(x,r)\setminus B^n(x,r/c)).\]
Suppose $x \in E_z$ for some isolated point $z\in E$. If $r > 2c\dist(z,E\setminus\{z\})$, then $\emptyset \neq (E\setminus\{z\})\cap \overline{B}^n(z,r/2) \subset \hat{E}\cap \overline{B}^n(x,r)$. Therefore,
\[\emptyset \neq E\cap(\overline{B}^n(z,r/2)\setminus B^n(z,(2c)^{-1}r)) \subset \hat{E}\cap(\overline{B}^n(x,r)\setminus B^n(x,(4c)^{-1}r)).\]
If $r \leq 2c\dist(z,E\setminus\{z\})$, then $(20c)^{-1}r \leq \frac{1}{10}\dist(z,E\setminus\{z\})$. The relative connectedness of $\mathcal{C}$ gives
\[\emptyset \neq E_z\cap(\overline{B}^n(x,r)\setminus B^n(x,(20c_0)^{-1}r)) \subset \hat{E}\cap(\overline{B}^n(x,r)\setminus B^n(x,(20c_0)^{-1}r))\]
for some $c_0>1$ depending only on $c$.
To show the second claim, let $x\in \hat{E}$ and $0<r<\frac{1}{4}\diam{\hat{E}}$, and let $z\in E$ be the unique point of $E$ such that $x\in E_z$. If $z$ is an accumulation point, then $z=x$. Let $E'$ be a subset of $E$ containing $x$ with $\diam{E'} \leq r$ and $\dist(E', E \setminus E') \geq c^{-1}r$. Then $\diam{\hat{E'}} \leq \frac{11}{10}r$ and $\dist(\hat{E'},\hat{E} \setminus \hat{E'}) \geq \frac{9}{10}c^{-1}r$.
Assume now that $z$ is an isolated point. Since $\mathcal{C}$ is $c_0$-uniformly disconnected, the claim of the lemma follows with $c' = c_0$ if $r < \frac{1}{8}\diam{E_z}$. Also, by uniform disconnectedness of $\mathcal{C}$, if $r < 100\diam{E_z}$, then the claim of the lemma is true for $c' = c_0/400$. If $100\diam{E_z} \geq \frac{1}{4}\diam{\hat{E}}$, then we are done. Assume the opposite and let $r \geq 100\diam{E_z}$. By uniform disconnectedness of $E$, there exists $E' \subset E$ containing $z$ such that $\diam{E'} \leq r/2$ and $\dist(E', E\setminus E') \geq (2c)^{-1}r$. Then $x \in \hat{E'}$, $\diam{\hat{E'}} \leq r$ and $\dist(\hat{E'},\hat{E}\setminus \hat{E'}) \geq (4c)^{-1}r$.
For the third claim let $E$ be the set of isolated points of $\partial U$. Then $U' = U \setminus \hat{E}$. The uniformity of $U'$ follows from the fact that $\hat{E}$ is uniformly disconnected and therefore a NUD set (nullset for uniform domains) in the sense of V\"ais\"al\"a \cite{Vais-Tohoku}; see Theorem 1 and Corollary 2 in \cite{MM2}.
\end{proof}
Let $E\subset \mathbb{R}^n$ and $f:E \to \mathbb{R}^n$ be a mapping. We extend $f$ to $\hat{f}: \hat{E} \to \mathbb{R}^n$ as follows. If $f$ is $\eta$-quasisymmetric, then for any isolated point $x\in E$ and any $y\in E_x$ define
\[\hat{f}|_{E_x}(y) = f(x) + \frac{1}{\eta(1)}\frac{|f(x)-f(\hat{x})|}{|x-\hat{x}|}(y-x).\]
If $f$ is $L$-bi-Lipschitz, then for any isolated point $x\in E$ and any $y\in E_x$ define
\[\hat{f}|_{E_x}(y) = f(x) + \frac{1}{L^2}\frac{|f(x)-f(\hat{x})|}{|x-\hat{x}|}(y-x).\]
\begin{lem}\label{lem:isolated}
Let $E \subset \mathbb{R}^n$ and $f:E \to \mathbb{R}^n$ be $\eta$-quasisymmetric (resp. $L$-bi-Lipschitz). Then $\hat{f}$ is $\eta'$-quasisymmetric (resp. $L'$-bi-Lipschitz) with $\eta'$ depending only on $\eta$ (resp. $L'$ depending only on $L$).
\end{lem}
\begin{proof}
We first show the claim for bi-Lipschitz mappings. Given two distinct points $x,y\in \hat{E}$, there exist unique $x_1,y_1 \in E$ such that $x\in E_{x_1}$ and $y\in E_{y_1}$. If $x_1=y_1$ there is nothing to prove as $\hat{f}$ is affine on $E_{x_1}$. Suppose that $x_1\neq y_1$ and note that
\[ |x-y|\geq \max\left\{\tfrac{9}{10}|x_1-\hat{x}_1|,\tfrac{9}{10}|y_1-\hat{y}_1|,|x-x_1|,|y-y_1|\right\}.\]
Then, $|\hat{f}(x)-\hat{f}(y)| \leq |f(x_1)-f(y_1)| + L^{-1}(|x-x_1|+|y-y_1|) \leq 5L|x-y|$ and $|\hat{f}(x)-\hat{f}(y)| \geq L^{-1}|x_1-y_1| - L^{-1}(|x-x_1|+|y-y_1|) \geq (2L)^{-1}|x-y|$.
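To spell out the upper bound: the affine scaling factor of $\hat{f}$ on each $E_{x_1}$ is at most $\frac{1}{L^2}\cdot L = L^{-1}$, and by the display above $|x-x_1|\leq |x-y|$ and $|y-y_1|\leq |x-y|$, so the triangle inequality gives
\begin{align*}
|\hat{f}(x)-\hat{f}(y)| &\leq |f(x_1)-f(y_1)| + |\hat{f}(x)-f(x_1)| + |\hat{f}(y)-f(y_1)|\\
&\leq L|x_1-y_1| + L^{-1}(|x-x_1|+|y-y_1|)\\
&\leq L\left(|x-y|+|x-x_1|+|y-y_1|\right) + 2L^{-1}|x-y| \leq 5L|x-y|.
\end{align*}
The lower bound follows from the same decomposition, using $|f(x_1)-f(y_1)|\geq L^{-1}|x_1-y_1|$.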
The proof in the case that $f$ is quasisymmetric is similar to that of Lemma 3.3 in \cite{V}. Let $x,y,z \in \hat{E}$ be three distinct points with $x\in E_{x_1}$, $y\in E_{y_1}$ and $z\in E_{z_1}$ for some $x_1,y_1,z_1 \in E$. If $x_1=y_1=z_1$, then $x,y,z$ lie in the same set $E_{x_1}$, on which $\hat{f}$ is affine. If $x_1\neq z_1$ and $x_1 = y_1$, then the prerequisites of Lemma 2.29 in \cite{Semmes3} are satisfied (see also Remark 3.2 in \cite{V}) for $A = E\setminus \{x_1\}$, $A^* = E \cup E_{x_1}$ and $H=\hat{f}|_{A^*}$. Thus, $\hat{f}|_{E\cup E_{x_1}}$ is $\eta'$-quasisymmetric for some $\eta'$ depending only on $\eta$. Hence,
\[ \frac{|\hat{f}(x)-\hat{f}(y)|}{|\hat{f}(x)-\hat{f}(z)|} \leq C_1\frac{|\hat{f}(x)-\hat{f}(y)|}{|\hat{f}(x)-\hat{f}(z_1)|} \leq C_1\eta'\left (\frac{|x-y|}{|x-z_1|} \right ) \leq C_1 \eta'\left (C_2\frac{|x-y|}{|x-z|} \right )\]
for some $C_1,C_2>1$ depending only on $\eta$. The case $x_1 = z_1 \neq y_1$ is similar. If $x_1,y_1,z_1$ are distinct, then by Remark 3.2 in \cite{V},
\begin{align*}
\frac{|\hat{f}(x)-\hat{f}(y)|}{|\hat{f}(x)-\hat{f}(z)|} &\leq C_3\frac{|\hat{f}(x_1)-\hat{f}(y_1)|}{|\hat{f}(x_1)-\hat{f}(z_1)|} \leq C_3 \eta\left (\frac{|x_1-y_1|}{|x_1-z_1|}\right ) \leq C_3 \eta\left(C_4\frac{|x-y|}{|x-z|}\right )
\end{align*}
for some $C_3,C_4 >1$ depending only on $\eta$. Therefore, $\hat{f}$ is quasisymmetric.
\end{proof}
\section{Extension to the complements of quasicircle domains}\label{sec:extcompl}
A $K$-\emph{quasicircle domain} $U\subset \mathbb{R}^2$ is a planar domain such that every component of $\partial U$ is either a point or a $K$-quasicircle. The next proposition, which is the main result of this section, reduces the proof of Theorem \ref{thm:main} to extending $f$ to $\overline{U}$.
\begin{prop}\label{prop:extcompl}
Let $U\subset \mathbb{R}^2$ be a $K$-quasicircle domain and let $f:\partial U \to \mathbb{R}^2$ be an embedding which can be extended homeomorphically to $\overline{U}$.
\begin{enumerate}
\item If $f$ is $L$-bi-Lipschitz, then it extends $L'$-bi-Lipschitzly to $\mathbb{R}^2 \setminus U$ with $L'$ depending only on $L$ and $K$.
\item If $\partial U$ is $c$-relatively connected and $f$ is $\eta$-quasisymmetric, then it extends $\eta'$-quasisymmetrically to $\mathbb{R}^2 \setminus U$ with $\eta'$ depending only on $\eta$, $c$ and $K$.
\end{enumerate}
\end{prop}
Note that Proposition \ref{prop:extcompl} and Lemma \ref{lem:couniform} yield the next corollary.
\begin{cor}\label{cor:extcompl}
Let $U\subset \mathbb{R}^2$ be a $c$-uniform domain (resp. $c$-uniform domain with $C$-relatively connected boundary) and let $f:\partial U \to \mathbb{R}^2$ be an $L$-bi-Lipschitz (resp. $\eta$-quasisymmetric) embedding that can be extended homeomorphically to $\mathbb{R}^2$. Then, for some $c'$ depending only on $c$ and $L$ (resp. depending only on $c$, $C$ and $\eta$), there exists a $c'$-uniform domain $U' \subset \mathbb{R}^2$ and a homeomorphic extension $F:\overline{U} \to \overline{U'}$.
\end{cor}
\begin{proof}
Assume first that $U$ is $c$-uniform with $C$-relatively connected boundary and that $f:\partial U \to \mathbb{R}^2$ is an $\eta$-quasisymmetric embedding that can be extended homeomorphically to $\mathbb{R}^2$. Set $E = \mathbb{R}^2\setminus U$. By the second part of Proposition \ref{prop:extcompl}, there exists an $\eta'$-quasisymmetric extension $g:E\to\mathbb{R}^2$ of $f$. Since $\mathbb{R}^2 \setminus E$ is $c$-uniform, by Lemma \ref{lem:couniform}, $\mathbb{R}^2 \setminus g(E)$ is $c'$-uniform for some $c'$ depending only on $c$ and $\eta'$, thus only on $c$ and $\eta$. Since $f$ admits a homeomorphic extension to $\overline{U}$ it follows that $g$ admits a homeomorphic extension to $\mathbb{R}^2$.
The bi-Lipschitz case follows similarly from Lemma \ref{lem:couniform} and the first part of Proposition \ref{prop:extcompl}.
\end{proof}
In \textsection\ref{sec:extcomplQS} we prove the quasisymmetric case of Proposition \ref{prop:extcompl} while in \textsection\ref{sec:extcomplBL} we prove the bi-Lipschitz case of Proposition \ref{prop:extcompl}.
\subsection{Quasisymmetric case of Proposition \ref{prop:extcompl}}\label{sec:extcomplQS}
For the rest of \textsection\ref{sec:extcomplQS}, given a $K$-quasidisk $D$ in $\mathbb{R}^2\setminus \overline{U}$ and an $\eta$-quasisymmetric $f:\partial D \to \mathbb{R}^2$, we denote by $f_D: \overline{D} \to \mathbb{R}^2$ the $\eta^*$-quasisymmetric extension of $f$ given by the Beurling-Ahlfors extension; see Theorem \ref{thm:BAT} and Remark \ref{rem:BAT}. Here, $\eta^*$ depends only on $\eta$ and $K$.
To prove the quasisymmetric case of Proposition \ref{prop:extcompl}, we use the following lemma.
\begin{lem}\label{lem:extcompl}
Let $E\subset \mathbb{R}^2$ be a closed set and $D\subset \mathbb{R}^2$ be a $K$-quasidisk such that $D\cap E = \emptyset$ and $\partial D \cup E$ is $c$-uniformly perfect. Suppose that $f: \partial D \cup E \to \mathbb{R}^2$ is an $\eta$-quasisymmetric embedding that can be extended homeomorphically to $\overline{D}\cup E$. Then the map $F:\overline{D}\cup E \to \mathbb{R}^2$ defined by
\[ F|_{\overline{D}} = f_D \qquad\text{and}\qquad F|_{E} = f|_E\]
is $\eta'$-quasisymmetric for some $\eta'$ depending only on $\eta$, $K$ and $c$.
\end{lem}
Assuming Lemma \ref{lem:extcompl}, the proof of the quasisymmetric case of Proposition \ref{prop:extcompl} is as follows.
\begin{proof}[{Proof of Proposition \ref{prop:extcompl}(2)}]
Fix a $K$-quasicircle domain $U\subset \mathbb{R}^2$ with $c$-relatively connected boundary. Let $f:\partial U\to \mathbb{R}^2$ be a quasisymmetric embedding that can be extended homeomorphically to $\overline{U}$. Applying the arguments of \textsection\ref{sec:isolated}, we may assume that $\partial U$ is uniformly perfect. Extend $f$ to $F:\mathbb{R}^2 \setminus U \to \mathbb{R}^2$ by setting $F|_{\overline{D}} = f_D$ for every component $D$ of $\mathbb{R}^2\setminus \overline{U}$. The proof of the quasisymmetry of $F$ is given in two steps.
First, iterating Lemma \ref{lem:extcompl} three times, it is easy to see that if $D_1$, $D_2$, $D_3$ are three components of $\mathbb{R}^2\setminus \overline{U}$, then the restriction of $F$ to $\partial U \cup \bigcup_{i=1}^3\overline{D_i}$ is $\eta''$-quasisymmetric for some $\eta''$ depending only on $\eta$, $K$ and $c$.
To show that the map $F$ is $\eta'$-quasisymmetric, take points $x,a,b \in \mathbb{R}^2 \setminus U$. Find components $D_1$, $D_2$, $D_3$ of $\mathbb{R}^2 \setminus \overline{U}$ such that $x,a,b \in \partial U \cup \bigcup_{i=1}^3\overline{D_i}$; if fewer than three such components exist, then the proof is a double iteration of Lemma \ref{lem:extcompl}. The quasisymmetry of $F$ now follows from the quasisymmetry of $F$ restricted to $\partial U \cup \bigcup_{i=1}^3\overline{D_i}$.
\end{proof}
The next lemma is used in the proof of Lemma \ref{lem:extcompl}. For the rest of \textsection\ref{sec:extcomplQS}, for two positive quantities $A,B$ we write $A\lesssim B$ if there exists a constant $C^*$, depending only on $c$, $K$ and $\eta$, such that $A\leq C^* B$. We write $A\simeq B$ if $A\lesssim B$ and $B\lesssim A$. Furthermore, for a point $z\in\mathbb{R}^2$ we denote by $\pi(z)$ the radial projection of $z$ onto $\mathbb{S}^1$.
\begin{lem}\label{lem:extcompl2}
Suppose that $D$ is a $K$-quasidisk and $E$ is a closed set such that $E\cap D = \emptyset$. Suppose also that $F: E\cup \overline{D} \to \mathbb{R}^2$ is an embedding such that the restrictions $F|_{\overline{D}}$ and $F|_{E\cup \partial D}$ are $\eta$-quasisymmetric. If $x\in \overline{D}$ and $y\in E$, then there exists $x' \in \partial D$ such that $|x-y| \simeq |x'-y|$ and $|F(x)-F(y)| \simeq |F(x')-F(y)|$.
\end{lem}
\begin{proof}
Assume first that $D$ is bounded. Applying quasisymmetric homeomorphisms of $\mathbb{R}^2$, we may assume that $D = F(D) = \mathbb{B}^2$. Let $x$ and $y$ be as in the statement of Lemma \ref{lem:extcompl2}. We consider four possible cases.
\textsc{Case I.} Suppose that $\dist(y,\mathbb{S}^1)\geq 1/10$. By the quasisymmetry of $F|_{E\cup \mathbb{S}^1}$, $\dist(F(y),\mathbb{S}^1)\gtrsim 1$. Let $x' \in \mathbb{S}^1$ be a point such that $|x-x'|=1$. Then, $|x-y| \simeq |x'-y|$ and by the quasisymmetry of $F|_{E\cup \mathbb{S}^1}$ we have
\[ |F(y)-F(x)| \simeq \dist(F(y),\mathbb{S}^1) \simeq |F(y)-F(x')|.\]
\textsc{Case II.} Suppose that $\dist(y,\mathbb{S}^1)< \eta^{-1}(1/4)/10$ and $\dist(x,\mathbb{S}^1)\geq \eta^{-1}(1/4)/10$. Let $x'\in \mathbb{S}^1$ be a point such that $|x-x'| \geq \eta^{-1}(1/4)/10$ and $|y-x'| \geq \eta^{-1}(1/4)/10$. Then, $|x-y| \simeq 1\simeq |x'-y|$. Moreover, the quasisymmetry of $F|_{E\cup \mathbb{S}^1}$ implies that $|F(y)-F(x')| \simeq 1$ while the quasisymmetry of $F|_{\overline{\mathbb{B}^2}}$ gives $|F(x)-F(x')| \simeq 1$. Thus,
\[ |F(y)-F(x)| \simeq 1 \simeq |F(y)-F(x')| .\]
\textsc{Case III.} Suppose that $\dist(y,\mathbb{S}^1)< \eta^{-1}(1/4)/10$ and $\dist(x,\mathbb{S}^1)< \eta^{-1}(1/4)/10$. We consider two subcases.
\textsc{Case III(1).} Suppose that $\max\{|x-\pi(x)|, |y-\pi(y)|\} \leq \eta^{-1}(1/4)|\pi(x)-\pi(y)|$. Then, the quasisymmetry of $F|_{\overline{\mathbb{B}^2}}$ gives
\[ \dist(F(x),\mathbb{S}^1) \simeq |F(x)-F(\pi(x))| \leq |F(\pi(x)) - F(\pi(y))|/4\]
while the quasisymmetry of $F|_{E\cup \mathbb{S}^1}$ gives
\[ \dist(F(y),\mathbb{S}^1) \simeq |F(y)-F(\pi(y))| \leq |F(\pi(x)) - F(\pi(y))|/4.\]
Set $x' = \pi(x)$ and note that $|x-y| \simeq |x'-y|$ and
\[ |F(x)-F(y)| \simeq |F(x') - F(\pi(y))| \simeq |F(x')-F(y)|.\]
\textsc{Case III(2).} Suppose that $\max\{|x-\pi(x)|, |y-\pi(y)|\} \geq \eta^{-1}(1/4)|\pi(x)-\pi(y)|$. Choose a point $x'\in \mathbb{S}^1$ such that
\[ |\pi(x) - x'| = \max\{|x-\pi(x)|,|y-\pi(y)|\}\]
and $\pi(x)$ is contained in the smaller subarc of $\mathbb{S}^1$ joining $\pi(y)$ and $x'$. It is easy to see that
\[ |x-y| \simeq |x-\pi(x)| + |\pi(x) - \pi(y)| + |\pi(y) - y| \simeq |\pi(x)-x'| \simeq |y-x'|.\]
On the other hand,
\begin{align*}
|F(x)-F(y)| &\simeq |F(x)-\pi(F(x))| + |\pi(F(x))-\pi(F(y))| + |F(y)-\pi(F(y))| \\
&\simeq |F(x)-F(\pi(x))| + |\pi(F(x))-\pi(F(y))| + |F(y)-F(\pi(y))| \\
&\simeq |F(x')-F(\pi(x))| + |\pi(F(x))-\pi(F(y))| + |F(y)-F(x')| \\
&\simeq |\pi(F(x))-\pi(F(y))| + |F(y)-F(x')|.
\end{align*}
We conclude this case by showing that
\[ |\pi(F(x))-\pi(F(y))| \lesssim |F(y)-F(x')|.\]
Indeed,
\begin{align*}
|\pi(F(x))-\pi(F(y))| &\leq |\pi(F(x)) - F(x)| + |F(x) - F(\pi(x))|\\
&+ |F(\pi(y)) - F(\pi(x))|\\
&+ |\pi(F(y)) - F(y)| + |F(y) - F(\pi(y))|\\
&\lesssim |F(x) - F(\pi(x))| + |F(\pi(y)) - F(\pi(x))| + |F(y) - F(\pi(y))|\\
&\lesssim |F(x) - F(x')| + |F(\pi(y)) - F(x')| + |F(y) - F(x')|\\
&\lesssim |F(y) - F(x')|.
\end{align*}
Suppose now that $D$ is unbounded. As before, we may assume that $D = F(D) = \mathbb{R} \times (0,+\infty)$. The proof is virtually the same, except that the distances $\dist(y,\mathbb{R})$ and $\dist(x,\mathbb{R})$ play no role and only Case III(1) and Case III(2) need to be considered.
\end{proof}
\begin{rem}\label{rem:extcompl}
With similar reasoning we can show that if $D$ is bounded, $x\in \overline{D}$, $y\in E$ and $\dist(y,\partial D)\leq 3\diam{D}$, then there exists $y' \in \partial D$ such that $|x-y| \simeq |x-y'|$ and $|F(x)-F(y)| \simeq |F(x)-F(y')|$.
\end{rem}
We conclude now \textsection\ref{sec:extcomplQS} by proving Lemma \ref{lem:extcompl}.
\begin{proof}[{Proof of Lemma \ref{lem:extcompl}}]
Assume that $D$ is bounded; the proof in the case that $D$ is unbounded is similar. Setting $\eta_1(t) = \max\{\eta^*(t),\eta(t)\}$, we may assume that $F|_{\overline{D}}$ and $F|_{E\cup\partial D}$ are $\eta_1$-quasisymmetric.
Note that $F(\overline{D}\cup E)$ is $c'$-uniformly perfect for some $c'$ depending only on $\eta$ and $c$. Moreover, both $\overline{D} \cup E$ and $F(\overline{D} \cup E)$ are $C_0$-doubling for some universal $C_0>1$. Therefore, by Lemma \ref{lem:weakQS}, it suffices to show that there exists $H\geq 1$ such that for all $x,a,b \in \overline{D}\cup E$ we have
\begin{equation}\label{eq:weakQS2}
|x-a|\leq |x-b| \qquad\text{implies}\qquad |F(x) - F(a)| \leq H |F(x) - F(b)|.
\end{equation}
Fix now $x,a,b\in \overline{D}\cup E$ with $|x-a|\leq |x-b|$. The proof of Lemma \ref{lem:extcompl} is a case analysis with respect to the positions of the points $x,a,b$ in $\overline{D} \cup E$.
\emph{Case 1.} If $x,a,b \in \partial D\cup E$ or $x,a,b \in \overline{D}$, then (\ref{eq:weakQS2}) holds with $H=\eta_1(1)$.
\emph{Case 2.} Suppose that $a \in \overline{D}$ and $x,b \in E$. Applying Lemma \ref{lem:extcompl2}, there exists $a'\in\partial D$ such that $|x-a|\simeq |x-a'|$ and $|F(x)-F(a)| \simeq |F(x)-F(a')|$. Apply now the quasisymmetry of $F|_{E\cup\partial D}$ for the points $a',x,b$.
\emph{Case 3.} Suppose that $a,x \in \overline{D}$ and $b\in E$.
\emph{Case 3.1.} Assume that $\dist(b,\partial D)\geq \diam{D}$. Then, by the quasisymmetry of $F|_{E\cup\partial D}$,
\[ |F(x)-F(b)| \simeq \dist(F(b),\partial F(D)) \gtrsim \diam{F(D)} \gtrsim |F(x)-F(a)|.\]
\emph{Case 3.2.} Assume that $\dist(b,\partial D)\leq \diam{D}$. As in Remark \ref{rem:extcompl}, choose a point $b' \in \partial D$ such that $|x-b'| \simeq |x-b|$ and $|F(x)-F(b')| \simeq |F(x)-F(b)|$. Then, apply quasisymmetry of $F|_{\overline{D}}$ on the points $x,a,b'$.
\emph{Case 4.} Suppose that $a,b \in \overline{D}$ and $x\in E$. As in Lemma \ref{lem:extcompl2}, choose points $a',b' \in \partial D$ such that $|x-a|\simeq |x-a'|$, $|x-b|\simeq |x-b'|$, $|F(x)-F(a)|\simeq |F(x)-F(a')|$ and $|F(x)-F(b)|\simeq |F(x)-F(b')|$. Equation (\ref{eq:weakQS2}) follows now by applying the quasisymmetry of $F|_{E\cup\partial D}$ to the points $x,a',b'$.
\emph{Case 5.} Suppose that $x\in \overline{D}$ and $a,b\in E$.
\emph{Case 5.1.} Assume that $\dist(a,\partial D) \geq 3\diam{D}$. Then, $\dist(b,\partial D)\geq \diam{D}$. Choose any point $x'\in\partial D$ and note that $|x-a|\simeq |x'-a|$, $|x-b|\simeq |x'-b|$ and, by the quasisymmetry of $F|_{E\cup\partial D}$, $|F(x)-F(a)|\simeq |F(x')-F(a)|$ and $|F(x)-F(b)|\simeq |F(x')-F(b)|$. Equation (\ref{eq:weakQS2}) follows now by applying the quasisymmetry of $F|_{E\cup\partial D}$ to the points $x',a,b$.
\emph{Case 5.2.} Assume that $\dist(a,\partial D) < 3\diam{D}$. Choose a point $a'\in\partial D$ such that $|x-a|\simeq |x-a'|$ and $|F(x)-F(a)|\simeq |F(x)-F(a')|$. This case is now reduced to Case 3.
\emph{Case 6.} Suppose that $x,b \in \overline{D}$ and $a\in E$. Note that $\dist(a,\partial D) \leq 2\diam{D}$. As in Remark \ref{rem:extcompl}, choose a point $a'\in\partial D$ such that $|x-a'|\simeq |x-a|$ and $|F(x)-F(a')|\simeq |F(x)-F(a)|$. Then, apply the quasisymmetry of $F|_{\overline{D}}$ on points $x,a',b$.
\emph{Case 7.} Suppose that $b\in \overline{D}$ and $a,x \in E$. As in Lemma \ref{lem:extcompl2}, choose a point $b'\in \partial D$ such that $|x-b'|\simeq |x-b|$ and $|F(x)-F(b')|\simeq |F(x)-F(b)|$. Then, apply the quasisymmetry of $F|_{E\cup\partial D}$ on points $x,a,b'$.
\end{proof}
\subsection{Bi-Lipschitz case of Proposition \ref{prop:extcompl}}\label{sec:extcomplBL}
The proof of the bi-Lipschitz case of Proposition \ref{prop:extcompl} is almost the same as in the quasisymmetric case with one notable difference: instead of using the Beurling-Ahlfors quasisymmetric extension, for each component $D$ of $\mathbb{R}^2\setminus \overline{U}$, we use the extension $f_D:\overline{D} \to \mathbb{R}^2$ given by Lemma \ref{lem:extcomplBL}.
\begin{proof}[{Proof of Proposition \ref{prop:extcompl}(1)}]
The proof is similar to the quasisymmetric case so we only outline the steps of the proof.
\emph{Step 1.} We show that if $D$ is a $K$-quasidisk, $E\subset \mathbb{R}^2$ is a closed set disjoint from $D$ and $F:\overline{D}\cup E \to \mathbb{R}^2$ is an embedding such that the restrictions $F|_{\overline{D}}$ and $F|_{\partial D\cup E}$ are $L$-bi-Lipschitz, then $F$ is $L'$-bi-Lipschitz for some $L'$ depending only on $L$ and $K$. To prove this claim, fix $x,y \in \overline{D}\cup E$ and consider the only nontrivial case, $x\in \overline{D}$ and $y\in E$. Lemma \ref{lem:extcompl2} can be used to reduce this setting to either $x,y\in \overline{D}$ or $x,y\in \partial D\cup E$.
\emph{Step 2.} To prove the proposition, fix $x,y \in \mathbb{R}^2\setminus U$, find two components $D_1,D_2$ of $\mathbb{R}^2\setminus \overline{U}$ such that $x,y \in \partial U \cup \overline{D_1}\cup\overline{D_2}$ and use Step 1 twice.
\end{proof}
\section{Second reduction: bounded boundary}\label{sec:unbounded}
In this section, we reduce the proof of Theorem \ref{thm:main} to the case that $U$ is the complement of a compact set, and the proof of Theorem \ref{thm:cantor} to the case that $E$ is compact.
\subsection{Uniform domains}\label{sec:unboundeddom} Assume for the rest of \textsection\ref{sec:unboundeddom} that $U\subset \mathbb{R}^2$ is $c$-uniform and that $f :\partial U \to \mathbb{R}^2$ is an $L$-bi-Lipschitz embedding (resp. an $\eta$-quasisymmetric embedding, with $\partial U$ being $c$-relatively connected) that admits a homeomorphic extension to $\overline{U}$. Assume, moreover, that Theorem \ref{thm:main} holds for unbounded uniform domains with bounded boundary.
To simplify the exposition, we use complex coordinates for the rest of \textsection\ref{sec:unboundeddom}.
By Corollary \ref{cor:extcompl}, there exist $c'>1$ depending only on $c$ and $L$ (resp. on $c$ and $\eta$) and a $c'$-uniform domain $U'$ such that $f$ extends to a homeomorphism between $U$ and $U'$. There are three cases to consider.
\emph{Case 1: Suppose that $U$ is bounded.} In this case, $U'$ is bounded. By the porosity of $\partial U$ and $\partial U'$, there exist points $x_0\in U$ and $x_0'\in U'$ such that
\[B^2(x_0,(4c)^{-1}\diam{U}) \subset U \qquad\text{and}\qquad B^2(x_0',(4c')^{-1}\diam{U'}) \subset U'.\]
Applying similarity mappings we may assume that $x_0=x_0'=0$ and $\diam{U} = \diam{U'} = 1$.
Assume first that $f$ is $L$-bi-Lipschitz. The domain $U\setminus\{x_0\}$ is $c_1$-uniform for some $c_1$ depending only on $c$ and the map $f_1:\partial U\cup\{x_0\} \to \partial U\cup\{x_0'\}$ with $f_1|_{\partial U} = f$ and $f_1(x_0) = x_0'$ is $L_1$-bi-Lipschitz for some $L_1>1$ depending only on $L$ and $c$. Moreover, $f_1$ admits a homeomorphic extension to $\overline{U}$.
Recall the definition of the inversion maps $I_{x_0}$ from \textsection\ref{sec:BLextQD}. Set $V = I_0(U\setminus\{0\})$, $V'=I_0(U'\setminus\{0\})$ and $g = I_0\circ f \circ I_0|_{\partial V}$. Then, $V$ is an unbounded $c_1$-uniform domain and $\partial V$ is bounded. By Remark \ref{rem:inversion}, $g$ is a bi-Lipschitz embedding defined on the boundary of an unbounded uniform domain with bounded boundary, and the extension of $g$ follows by our assumption. Taking inversions again, we obtain an $L'$-bi-Lipschitz extension of $f$ to $U$ with $L'$ depending only on $L$ and $c$.
Assume now that $f$ is $\eta$-quasisymmetric. The inversion map $I_0\colon \mathbb{R}^2\setminus\{0\}\to \mathbb{R}^2\setminus\{0\}$ is $1$-quasiconformal, while the restrictions of $I_0$ on $B^2(0,2)\setminus B^2(0,(8c)^{-1})$ and on $B^2(0,2)\setminus B^2(0,(8c')^{-1})$ are $L_2$-bi-Lipschitz for some $L_2$ depending only on $c$ and $\eta$. As in the bi-Lipschitz case, the domain $V = I_0(U\setminus\{0\})$ is an unbounded $c_1$-uniform domain and $\partial V$ is bounded and $c_1$-relatively connected. Furthermore, $g = I_0\circ f \circ I_0|_{\partial V}$ is an $\eta_1$-quasisymmetric embedding that can be extended to a homeomorphism on $\overline{V}$. Here, $c_1$ and $\eta_1$ depend only on $c$ and $\eta$. By our assumption, there exists an $\eta_1'$-quasisymmetric extension $G:V \to V'$. Let $F:U \to U'$ with $F = I_0\circ G \circ I_0|_{U}$. Then, $F$ is $K$-quasiconformal for some $K$ depending only on $c$ and $\eta$ and, by Lemma \ref{lem:QCtoQM}, $F$ is $\eta'$-quasisymmetric with $\eta'$ depending only on $c$ and $\eta$.
\emph{Case 2: Suppose that $U$ is unbounded and $\partial U$ contains an unbounded component.} By Proposition \ref{prop:bndryuniform}, $\partial U$ contains an unbounded quasicircle $\Gamma$, all other components of $\partial U$ are bounded and $U$ is contained in one of the two components of $\mathbb{R}^2 \setminus \Gamma$. Fix $z_0 \in \Gamma$ and let $z_0' = f(z_0)$.
The bi-Lipschitz case is similar to Case 1. Let $r>0$ and let $x_0$ be a point on $\partial B^2(z_0,r)$ such that $B^2(x_0,r/c) \subset U$. Similarly define a point $x_0'\in \partial B^2(z_0',r)$. The rest is as in Case 1.
Assume now that $f$ is $\eta$-quasisymmetric. Applying an $\eta_0$-quasisymmetric homeomorphism of $\mathbb{R}^2$ we may assume that $\Gamma = f(\Gamma) = \mathbb{R}$, $z_0 = z_0' = 0$ and that $U$ and $U'$ are subsets of the upper half-plane. Here $\eta_0$ depends only on $c$ and $\eta$. For each $k\in\mathbb{N}$ let $z_k = (12c^2)^{k-1}$ and let $\gamma_k$ be a $c$-cigar curve in $U$ joining $z_k$ with $-z_k$. Note that for $k\geq 2$, $\gamma_k \subset B^2(z_0,3c(12c^2)^{k-1}) \setminus B^2(z_0,(2c)^{-1}(12c^2)^{k-1})$ and, therefore, $\dist(\gamma_k,\gamma_{k+1}) \geq (4c)^{-1}(12c^2)^k$.
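The last separation estimate is a direct computation from the stated inclusions: $\gamma_{k+1}$ lies outside $B^2(z_0,(2c)^{-1}(12c^2)^{k})$ while $\gamma_k$ lies inside $B^2(z_0,3c(12c^2)^{k-1})$, so
\[ \dist(\gamma_k,\gamma_{k+1}) \geq (2c)^{-1}(12c^2)^{k} - 3c(12c^2)^{k-1} = (6c-3c)(12c^2)^{k-1} = (4c)^{-1}(12c^2)^{k}.\]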
For each $k\in\mathbb{N}$ let $U_k$ be the domain bounded by $\gamma_{k}$ and $[-z_k,z_k]$, and set $E_k = \partial U \cap \overline{U_k}$ and $E_k'=f(E_k)$. Each $U_k$ is bounded and it is easy to check that each $U_k$ is $c'$-uniform with $c'$-relatively connected boundary for some $c'>1$ depending only on $c$. Note also that $\diam{E_k'} \leq C|f(z_k)-f(-z_k)|$ for some $C$ depending only on $c$ and $\eta$. Define
\[\gamma_k' = [f(z_k), f(z_k)- 2i\diam{E_k'}]\cup[f(-z_k), f(-z_k)- 2i\diam{E_k'}]\cup \sigma_1\cup \sigma_2\]
where $\sigma_1$, $\sigma_2$ are circular arcs of $\partial B^2(f(z_k),\diam{E_k'})$, $\partial B^2(f(-z_k),\diam{E_k'})$ respectively, chosen so that $\gamma_k'\cup f([-z_k,z_k])$ is the boundary of a $K$-quasidisk $D_k$ which contains $E_k'$ in its closure. Here $K$ depends only on $c$ and $\eta$.
Applying the quasisymmetric extension property of relatively connected subsets of quasicircles \cite{V}, we extend $f$ to an $\eta_1$-quasisymmetric $f_k:\partial U_k\to \mathbb{R}^2$ with $f_k(\gamma_k)=\gamma_k'$. Here $\eta_1$ depends only on $\eta$ and $c$. The extension $F_k$ of $f_k$ to each $U_k$ follows from Case 1. Since $U = \bigcup_{k\in\mathbb{N}}U_k$, by standard convergence arguments \cite[Corollary 10.30]{Heinonen}, $\{F_k\}$ subconverges to a mapping $F:U \to \mathbb{R}^2$ with $F|_{\partial U} = f$ that is $\eta'$-quasisymmetric.
\emph{Case 3: Suppose that $U$ is unbounded and all components of $\partial U$ are bounded.} Fix $x\in \partial U$ and let $A_x$ be the component of $\partial U$ containing $x$. Let $r_1> 8 \diam{A_x}$. By Proposition \ref{prop:wud}, $A_x$ is contained in a neighborhood $N_{i_1}(A_1,r_1)$ where $i_1\in\{1,2\}$ and $A_1$ is a component of $\partial U$. Inductively, having defined $r_k$ and $N_{i_k}(A_k,r_k)$, let $r_{k+1} > 8\diam{N_{i_k}(A_k,r_k)}$; arguments similar to those of Proposition \ref{prop:wud} show that $N_{i_k}(A_k,r_k)$ is contained in a neighborhood $N_{i_{k+1}}(A_{k+1},r_{k+1})$ where $i_{k+1}\in\{1,2\}$ and $A_{k+1}$ is a component of $\partial U$.
For each $k\in\mathbb{N}$ let $U_k \subset \mathbb{R}^2$ be the unbounded domain with $\partial U_k = N_{i_k}(A_k,r_k)$. By Lemma \ref{lem:propertiesofnbhd}, each $U_k$ is $c'$-uniform and each $\partial U_k$ is $c'$-relatively connected for some $c'>1$ depending only on $c$. By our assumption, there exists a mapping $F_k : U_k \to \mathbb{R}^2$ that extends $f|_{\partial U_k}$ which is $\eta'$-quasisymmetric (resp. $L'$-bi-Lipschitz) with $\eta'$ depending only on $c$ and $\eta$ (resp. $L'\geq 1$ depending only on $c$ and $L$). As in Case 2, $\{F_k\}$ subconverges to a mapping $F:U \to \mathbb{R}^2$ with $F|_{\partial U} = f$ that is $\eta'$-quasisymmetric (resp. $L'$-bi-Lipschitz).
\subsection{Reduction for Theorem \ref{thm:cantor}}
Assume that $E\subset \mathbb{R}^n$ ($n\geq 3$) is unbounded and $c$-uniformly disconnected. Assume also that $f:E \to \mathbb{R}^n$ is $L$-bi-Lipschitz (resp. that $E$ is $c$-uniformly perfect and $f$ is $\eta$-quasisymmetric) and that Theorem \ref{thm:cantor} holds for bounded sets.
Fix $x\in E$. For each $k\in\mathbb{N}$ let $E_k$ be a subset of $E$ containing $x$ such that $\diam{E_k} \leq 2^k$ and $\dist(E_k, E \setminus E_k)\geq c^{-1}2^k$. Note that each $E_k$ is $c$-uniformly disconnected, as this property is inherited by subsets.
Suppose that $E$ is uniformly perfect and that $f$ is $\eta$-quasisymmetric; the bi-Lipschitz case is identical. By $c$-uniform perfectness, $\diam{E_k}\geq c^{-2}2^k$. We show that each $E_k$ is $c^2$-uniformly perfect. Let $y\in E_k$ and $r>0$. Either $\overline{B}^n(y,r)\cap E_k = E_k$ or $E_k \setminus \overline{B}^n(y,r) \neq \emptyset$. Assume the latter. Then, $r\leq 2^k$ and, by uniform disconnectedness, $\overline{B}^n(y,c^{-1}r)\cap E_k = \overline{B}^n(y,c^{-1}r)\cap E$. By the $c$-uniform perfectness of $E$, $(\overline{B}^n(y,c^{-1}r)\setminus B^n(y,c^{-2}r))\cap E_k = (\overline{B}^n(y,c^{-1}r)\setminus B^n(y,c^{-2}r))\cap E \neq \emptyset$.
Therefore, each $E_k$ is $c^2$-uniformly disconnected and $c^2$-uniformly perfect. By our assumption, each $f|_{E_k}$ extends to a mapping $F_k:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ that is $\eta'$-quasisymmetric with $\eta'$ depending only on $c$ and $\eta$. As in \textsection\ref{sec:unboundeddom}, $\{F_k\}$ subconverges to a mapping $F:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ with $F|_E = f$ that is $\eta'$-quasisymmetric.
\section{Whitney-type decompositions around quasidisks}\label{sec:qcircledecomp}
Let $D,D' \subset \mathbb{R}^2$ be Jordan domains, $A,A'$ be unions of disjoint closed quasidisks in $D,D'$ respectively and $\Delta_{\infty},\Delta_{\infty}'$ be closed quasidisks contained in $A,A'$ respectively. Also, let $f : A \to A'$ be an $\eta$-quasisymmetric homeomorphism with $f(\Delta_{\infty}) = \Delta_{\infty}'$. For the rest of this section, we assume that there exist $c>1$ and $C>1$ with the following properties.
\begin{enumerate}[label=(\Roman*)]
\item $A$ and $A'$ are compact with $c$-uniform complements.
\item The Jordan curves $\Gamma_{\infty} = \partial \Delta_{\infty}$ and $\Gamma_{\infty}' = \partial \Delta_{\infty}'$ are $c$-bounded turning and
\[ \diam{\Gamma_{\infty}} = \diam{\Gamma_{\infty}'} = 1. \]
\item For all $z\in \partial D$ and all $z'\in \partial D'$
\begin{align*}
C^{-1} \leq \dist(z,A)&\leq \dist(z,\Delta_{\infty}) \leq (8c)^{-3}\\
C^{-1} \leq \dist(z',A')&\leq \dist(z',\Delta_{\infty}') \leq C.
\end{align*}
\end{enumerate}
For some $L>1$ and $c_1>1$ depending only on $c$, $C$ and $\eta$, we construct two families of sets $\mathscr{Q}$ and $\mathscr{Q}'$ with the following properties.
\begin{description}[align=left]
\item[(P1)] The family $\mathscr{Q}$ is an $(L,c_1)$-Whitney-type decomposition of $D\setminus \Delta_{\infty}$ and the family $\mathscr{Q}'$ is an $(L,c_1)$-Whitney-type decomposition of $D'\setminus \Delta_{\infty}'$.
\item[(P2)] For all $Q \in \mathscr{Q}$, $c_1^{-1}\diam{Q} \leq \dist(\partial Q,A) \leq c_1\diam{Q}$. Similarly for $\mathscr{Q}'$.
\item[(P3)] There exists a homeomorphism $g: \overline{D} \to \overline{D'}$ such that $f|_{\partial A} = g|_{\partial A}$ and, for each $Q\in\mathscr{Q}$, $g(Q) \in \mathscr{Q}'$.
\end{description}
The construction of $\mathscr{Q}$ is very similar to that of \textsection\ref{sec:extcomplBL}. However, the construction of $\mathscr{Q}'$ in this setting is more complicated for two reasons. The first is that, unlike in \textsection\ref{sec:extcomplBL}, the map $f$ is assumed to be only quasisymmetric. The second is that $A$ (and hence $A'$) may have infinitely many (even uncountably many) components and we need to make sure that the boundaries of all Whitney domains properly avoid all the components of $A'\setminus \Delta_{\infty}'$ around $\Delta_{\infty}'$.
For the rest of \textsection\ref{sec:qcircledecomp}, given two positive quantities $a,b$ we write $a\lesssim b$ if there exists a constant $C_0$, depending only on $c$, $C$ and $\eta$, such that $a\leq C_0 b$. We write $a\simeq b$ if $a\lesssim b$ and $b\lesssim a$.
In \textsection\ref{sec:preim} we construct $\mathscr{Q}$. In \textsection\ref{sec:decompimageprelim} we perform some preliminary steps towards the construction of $\mathscr{Q}'$ while the actual construction is given in \textsection\ref{sec:decompimage} in an inductive manner. In \textsection\ref{sec:concluding} we record some observations which imply the desired properties of $\mathscr{Q}$ and $\mathscr{Q}'$.
\subsection{Decomposition around the preimage}\label{sec:preim}
The construction of $\mathscr{Q}$ is almost identical to the construction in the proof of Lemma \ref{lem:extcomplBL} so we only outline the steps.
Fix an orientation on $\Gamma_{\infty}$. As in the proof of Lemma \ref{lem:extcomplBL}, two points $p_i$ and $p_j$ are \emph{neighbors} in the collection $\{p_k : k=1,\dots,n\}$ if one of the two components of $\Gamma_{\infty} \setminus \{p_i,p_j\}$ contains no point in $\{p_k : k=1,\dots,n\}$. Furthermore, $p_i$ is on the left (resp. right) of $p_j$ in $\{p_k : k=1,\dots,n\}$ if $p_i$ and $p_j$ are neighbors in $\{p_k : k=1,\dots,n\}$ and, following the orientation of $\Gamma_{\infty}$, $p_i$ and $p_j$ are the starting and ending points (resp. the ending and starting points) of $\Gamma_{\infty}(p_i,p_j)$.
Since $\Gamma_{\infty}$ is $c$-bounded turning, by assumption (III) we have that
\[C^{-1} \leq \dist(w,\partial D) \leq 3c (8c)^{-3} \leq (32c^2)^{-1}\]
for all $w\in \Gamma_{\infty}$ \cite[Lemma 3.4]{VW2}.
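The last inequality in the display is elementary arithmetic: since $c>1$,
\[ 3c\,(8c)^{-3} = \frac{3c}{512c^3} = \frac{3}{512c^2} \leq \frac{1}{32c^2} = (32c^2)^{-1}.\]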
Set $D_1 = D$ and for each $m=2,3,\dots$ apply Lemma \ref{lem:uniform=wud1} on $A$ with $\epsilon_m = (2c)^{-2m}C^{-1}$ and obtain a chordarc disk $D_m$. Let $l_1\in\mathbb{N}$ be the smallest integer such that $2^{-l_1} \leq 2^{-6}C^{-1}$ and let $\Delta_1$ be the chordarc disk containing $D$ with $\epsilon = 2^{-l_1}C^{-1}$. Note that $\Delta_1 \cap A = D\cap A$. The constant $l_1$ will be chosen in \textsection\ref{sec:decompsteps} towards the proof of Theorem \ref{thm:main} but, in any case, it will be bounded above by a constant depending only on $c$ and $\eta$.
For each $m=2,3,\dots$ let $l_m\in\mathbb{N}$ be the smallest integer such that $2^{-l_m} \leq \frac{1}{16}(2c)^{-2m-1}C^{-1}$ and let $\Delta_m$ be as in Lemma \ref{lem:BLsep}, where $E = \partial D_m$ and $\epsilon = 2^{-l_m}$. Choose points $x_1,\dots,x_n \in \Delta_{\infty}$ following the orientation of $\Gamma_{\infty}$ such that
\[ (32)^{-1}\diam{\Delta_{\infty}} \leq |x_i-x_{i+1}| \leq (16)^{-1}\]
with the convention $x_{n+1} = x_1$. Note that $n\leq N$ for some $N\in \mathbb{N}$ depending only on $c$. For each $i\in\{1,\dots,n\}$ we construct a broken line $\gamma_i$ as in the proof of Lemma \ref{lem:extcomplBL}, that connects $\partial D$ with $x_i$ and intersects each $\Gamma_k$ at exactly one point, denoted by $y_{i1^{k-1}}$.
Proceeding inductively, we define a set $\mathcal{W}$ of words formed from letters $\{1,\dots,N\}$ ($N$ depending only on $c$), points $x_w \in \Gamma_{\infty}$ satisfying
\[ \frac{c}{16C}(2c)^{-2|w|} \leq |x_{wi}-x_{w(i+1)}| \leq \frac{c}{8C}(2c)^{-2|w|},\]
and broken lines $\gamma_w$ joining $x_w$ with $\partial\Delta_{|w|}$, intersecting each $\partial\Delta_{|w|+k}$, $k\geq 0$, exactly once.
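To fix the word notation used here and in \textsection\ref{sec:decompimage}: for a word $w=i_1\cdots i_m$ we write $|w|=m$ for its length, $wi$ for the concatenation $i_1\cdots i_m i$, and $w1^{k}$ for $w$ followed by $k$ copies of the letter $1$. For example,
\[ w = 21, \qquad w3 = 213, \qquad w1^{3} = 21111, \qquad |w1^{3}| = |w|+3 = 5.\]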
The number $(8c)^3$ in assumption (III) is chosen so that for all $w,u\in\mathcal{W}$,
\[ \dist(\gamma_w,\gamma_u) \gtrsim \min\{\diam{\gamma_w},\diam{\gamma_u}\}.\]
Finally, given $w,u \in \mathcal{W}$ with $x_u$ on the left of $x_w$ in the collection $\{x_v : |v|=|w|\}$, define $\mathcal{Q}_{w}$ to be the Jordan domain bounded by $\gamma_{w}$, $\gamma_{u}$, $\Delta_{|w|}$ and $\Delta_{|w|+1}$. As in Lemma \ref{lem:extcomplBL}, the following remark holds.
\begin{rem}\label{rem:whitney3}
For each $w\in \mathcal{W}$,
\begin{enumerate}
\item each $\mathcal{Q}_w$ is an $L_1$-chord-arc disk for some $L_1\simeq 1$,
\item $\dist(\mathcal{Q}_w,x_w) \simeq \dist(\mathcal{Q}_w,\partial U) \simeq \diam{\mathcal{Q}_w}$.
\end{enumerate}
\end{rem}
If $A \cap \mathcal{Q}_w \neq \emptyset$ set $A_w = A \cap \mathcal{Q}_w$. Otherwise, let $z_w \in \mathcal{Q}_w$ be a point such that $\dist(z_w,\partial \mathcal{Q}_w) \geq \frac1{L}\diam{\mathcal{Q}_w}$ and set $A_w = \overline{B}^2(z_w,\frac1{2L}\diam{\mathcal{Q}_w})$.
\begin{rem}\label{rem:whitney1}
If $A$ is $c$-uniformly perfect, then $\diam{\mathcal{Q}_{w}} \lesssim \diam{A_w}$ for all $w\in\mathcal{W}$.
\end{rem}
\subsection{Preliminary steps for the construction of $\mathscr{Q}'$}\label{sec:decompimageprelim}
As with $D$, we first define $\Delta_1'$. Fix $l_1'\in\mathbb{N}$ and set $\epsilon = 2^{-l_1'-3}$. As with $l_1$, the integer $l_1'$ will be chosen in the proof of Theorem \ref{thm:main}. Let $\Delta_1'$ be as in Lemma \ref{lem:BLsep} with $E = \overline{D}$.
For each $w\in\mathcal{W}$ let $x_w' = f(x_w)$. The notion of neighboring points follows from the orientation of $\Gamma_{\infty}'$ induced by $f$. For the rest of \textsection \ref{sec:qcircledecomp}, two words $w,u\in\mathcal{W}$ with $|w|=|u|$ are called \emph{neighbors} if $x_w'$ and $x_u'$ are neighbors in the collection $\{x_v':v\in\mathcal{W}_k\}$. Similarly, if $w,u\in\mathcal{W}_{k}$ we say that $w$ is on the left (resp. right) of $u$ if $x_w'$ is on the left (resp. right) of $x_u'$ in the collection $\{x_v':v\in\mathcal{W}_k\}$.
For each $w\in\mathcal{W}$, set $A_w' = f(A_w)$ (which could be empty for some words $w\in\mathcal{W}$). For each $w\in \mathcal{W}$ such that $A_w\neq \emptyset$, set $V_w = \mathbb{R}^2\setminus \overline{V_w'}$, where $V_w'$ is the unbounded component of $\mathbb{R}^2 \setminus \mathcal{T}_{d_w}(A_w')$, $d_w = 2^{-n_w}$ and $n_w$ is the smallest integer such that $2^{-n_w} \leq (32c')^{-1}\dist(A_w', (A'\setminus A_w') \cup\partial\Delta_1')$. If $A_w = \emptyset$, set $V_w = \emptyset$.
The quasisymmetry of $f$ along with Remark \ref{rem:whitney3} implies that $d_w\simeq \diam{V_w}$ (when $A_w'\neq \emptyset$). We show in the next lemma that if we remove the extra sets $V_w$ from $\mathbb{R}^2\setminus \Delta_{\infty}$, we still get a uniform domain.
\begin{lem}\label{lem:decompunif2}
There exists $c'>1$ depending only on $c$ and $\eta$ satisfying the following properties.
\begin{enumerate}
\item Each $V_w$ is a disjoint union of at most $c'$ many $c'$-chordarc disks of mutual distances and diameters bounded below by $(c')^{-1}\dist(A_w',\partial Y)$;
\item For any $w,u\in \mathcal{W}$ with $w\neq u$, $\dist(V_w, V_{u}) \geq \frac1{c'}\max\{\diam{A_w'},\diam{A_u'}\}$. In particular, $V_w\cap V_u = \emptyset$.
\item $V = \Delta_1 \setminus (\Delta_{\infty} \cup \bigcup_{w\in \mathcal{W}}\overline{V_w})$ is $c'$-uniform.
\end{enumerate}
\end{lem}
\begin{proof}
The first claim follows from Lemma \ref{lem:uniform+s+d=us} while the second claim follows almost immediately from the definition of the domains $V_w$. To show the third claim, fix $x,y \in V$. For each $w\in\mathcal{W}$ such that $A_w'\neq \emptyset$, we define $V_w^* = \mathcal{T}_{d_w/2}(V_w)$.
Let $\gamma$ be a $c$-cigar arc in $\Delta_1'\setminus A'$ joining $x$ with $y$. Since $\gamma$ does not get too close to $\Delta_{\infty}$ and its length is comparable to $|x-y|$, there exists some $N'\in\mathbb{N}$ depending only on $c$ such that $\gamma$ intersects at most $N'$ components of $\bigcup_{w\in\mathcal{W}}V_w$.
Suppose that $\gamma$ does not intersect any $V_w^*$. Let $z\in \gamma$ and let $w\in\mathcal{W}$ be such that $V_w$ is nonempty and closest to $z$ among all nonempty $V_u$, $u\in\mathcal{W}$. Then,
\[\dist(z,\partial V) = \dist(z,V_w) \gtrsim \dist(z,A_w') \geq \dist(z,A') \gtrsim \min\{|x-z|,|y-z|\}.\]
Therefore, $\gamma$ is a $c_1$-cigar curve in $V$ for some $c_1$ depending only on $c$ and $\eta$.
Suppose now that $\gamma$ intersects some component $H$ of $V_w^*$. We replace all the pieces of $\gamma$ inside $H$ by a subarc on the boundary of $H$. Since this procedure is performed at most $N'$ times, working as above, we can show that the final curve $\gamma$ is a $c_2$-cigar curve in $V$ for some $c_2$ depending only on $c$ and $\eta$.
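One way to see that the replacement step keeps the length controlled is the following sketch, under the assumption (in the spirit of Lemma \ref{lem:decompunif2}(1)) that each component $H$ of $V_w^*$ is an $L$-chord-arc disk with $L\simeq 1$. If $\gamma$ enters and exits $\overline{H}$ at $a,b\in\partial H$, the chord-arc property provides a boundary subarc $\sigma\subset\partial H$ from $a$ to $b$ with
\[ \ell(\sigma) \leq L|a-b| \leq L\,\ell(\gamma\cap \overline{H}),\]
so each of the at most $N'$ replacements increases the length of the curve by a factor of at most $1+L$, and the modified curve still has length $\lesssim |x-y|$.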
\end{proof}
To reduce the use of constants, we assume for the rest that $V$ is $c$-uniform.
\subsubsection{Domes}
For each $x,y\in \Gamma_{\infty}'$ fix a $c$-cigar curve $\tau_{x,y}'$ that joins $x$ with $y$. As in \textsection\ref{sec:preim}, for each $z\in\tau_{x,y}'$ let $\Sigma(z)$ be the union of all squares in $\mathcal{G}_{2^{-l(z)}}$ that contain $z$ where $l(z)$ is the smallest integer such that $2^{-l(z)} \leq (16c^2)^{-1}\dist(z,\partial U_2')$. Let $\tau_{x,y}$ be an arc on the boundary of $\bigcup_{z\in\tau_{x,y}'}\Sigma(z)$ that connects $x$ with $y$ such that $\tau_{x,y}' \setminus \{x,y\}$ is contained in the Jordan domain bounded by $\Gamma_{\infty}'$ and $\tau_{x,y}$.
Fix $k\in\mathbb{N}$ and $w\in \mathcal{W}$. Suppose that $w_1$ and $w_2$ are the left and right neighbors, respectively, of $w1^k$ in $\mathcal{W}_{|w|+k}$. Define $R^{(k)}_w$ to be the Jordan domain bounded by $\tau_{x_{w_1}',x_{w_2}'}$ and $\Gamma_{\infty}'$.
Note that for each $k,l\in\mathbb{N}$ and $w\in\mathcal{W}$, $R^{(k+l)}_w = R^{(k)}_{w1^l}$. Moreover, each $\partial R^{(k)}_w$ is a $c'$-cigar curve for some $c'\simeq 1$. Thus, for each $w\in\mathcal{W}$ and $k\in\mathbb{N}$ there exists a point $p_{w}^{(k)}\in \partial R^{(k)}_w$, the midpoint of an edge of $\partial R^{(k)}_w$, such that $\dist(p_{w}^{(k)},\partial V) \gtrsim \diam{R^{(k)}_w}$. Subdividing each edge of $\partial R^{(k)}_w$ into two edges, we may assume that $p_{w}^{(k)}$ is a flat vertex of $\partial R^{(k)}_w$. Recall that $z$ is a flat vertex of a polygon $P$ if the two edges of $P$ with $z$ as their common point are collinear.
The next lemma follows from a straightforward application of the quasisymmetry of $f$ and the uniformity of $V$. The proof is left to the reader.
\begin{lem}\label{lem:domes}
Given a small positive number $\delta_0\in (0,1)$, there exists $k_0$ depending only on $c$, $C$, $\eta$ and $\delta_0$ such that if $k\geq k_0$ and $w\in \mathcal{W}$, then the following hold.
\begin{enumerate}
\item $\diam{R_w^{(k)}} \leq \delta_0$ and $x_w' \in \partial R_{w}^{(k)}$.
\item If $u\in\mathcal{W}$ with $|u|=|w|$, then $\dist(R_w^{(k)},R_{u}^{(k)}) \geq (1-\delta_0)|x_w'-x_{u}'|$.
\item If $u\in \mathcal{W}$ with $|u|<|w|$, then $\dist(R_w^{(k)},V_{u}) \geq (1-\delta_0)\dist(x_w', V_{u})$.
\item If $uv\in \mathcal{W}$, $|u| = |w|$ and $u$ is not a neighbor of $w$, then
\[ \dist(R_w^{(k)},V_{uv}) \geq (1-\delta_0)\dist(x_w', V_{uv}).\]
\item If $l\in\mathbb{N}$, then $\diam{R_w^{(k+l)}} \leq \delta_0\diam{R_w^{(k)}}$.
\item If $l\in\mathbb{N}$ and $u\in \mathcal{W}$ with $|u|\geq |w|+k$, then
\[\diam{R_w^{(l)}} \leq (1-\delta_0)\dist(p_{w}^{(l)}, R_u^{(l)}).\]
\end{enumerate}
\end{lem}
We specify $\delta_0$ in \textsection\ref{sec:step0}, \textsection\ref{sec:stepRw} and \textsection\ref{sec:stepTw1w2}. For each $w\in \mathcal{W}$ set $R_w = R^{(k_0)}_w$ and $p_w = p_{w}^{(k_0)}$. We call the domain $R_w$ a \emph{dome} and the point $p_w$ the \emph{peak of $R_w$}. To simplify the notation, we write $\tau_{w} = \tau_{x_{w_1}',x_{w_2}'}$, where $w_1$ (resp. $w_2$) is the left (resp. right) neighbor of $w1^{k_0}$ in $\mathcal{W}_{|w|+k_0}$. We call the points $x_{w_1}'$ and $x_{w_2}'$ the left and right endpoints, respectively, of $\tau_w$. In what follows, we only consider domes $R_w$ for words $w$ satisfying $|w|=l k_0 +1$ for some $l\in\mathbb{N}\cup\{0\}$.
Before proceeding to the construction of $\mathscr{Q}'$, we make one final modification to the domes $R_w$. Given a word $w\in\mathcal{W}_{lk_0+1}$ and a word $u = w1^{mk_0} \in \mathcal{W}$, note that $\tau_{w}$ intersects $\tau_{u}$. By modifying $\tau_{w}$ as in \textsection\ref{sec:preim}, we may assume that the two polygonal arcs $\tau_{w}$ and $\tau_{u}$ intersect only at $p_u$.
\subsection{Decomposition around the image}\label{sec:decompimage}
Here we construct $\mathscr{Q}' =\{\mathcal{Q}_w'\}$ in an inductive manner. In \textsection\ref{sec:step0} we construct the chord-arc disks $\mathcal{Q}_1',\dots,\mathcal{Q}_N'$. In \textsection\ref{sec:stepRw} and \textsection\ref{sec:stepTw1w2} we perform the inductive step.
\subsubsection{Construction of $\mathscr{Q}'$: Step 0}\label{sec:step0}
Given $i\in\{1,\dots,N\}$, we define a simple polygonal path $\sigma_{i,i+1}$ that joins $R_i$ with $R_{i+1}$ as follows. Apply Lemma \ref{lem:uniform=wud1} on $\Delta_{\infty}'$ with
\[ r = (cC)^{-2}\min_{i=1,\dots,N}\dist(p_i,\Delta_{\infty}')\]
and obtain a Jordan domain $\hat{D}_1$ containing $\Delta_{\infty}'$. Now apply Lemma \ref{lem:BLsep} on $\hat{D}_1$ with $\epsilon = 2^{-l(1)}$, where $l(1)$ is the smallest positive integer such that
\[ 2^{-l(1)} \leq \frac{1}{16} \dist(\partial \hat{D}_1, \partial V \cup\{p_1,\dots, p_N\}).\]
Thus, we obtain a chord-arc disk $D_1'$ containing $\hat{D}_1$. For each $i=1,\dots,N$, there exists a subarc $\sigma_{i,i+1}$ of $\partial D_1'$ such that
\begin{enumerate}
\item except for its endpoints, $\sigma_{i,i+1}$ lies in $\Delta_1' \setminus \bigcup_{j=1}^N R_j$;
\item one of its endpoints is on $\tau_{i}$ between the peak of $R_i$ and the right endpoint of $\tau_i$ and the other endpoint is on $\tau_{i+1}$ between the peak of $R_{i+1}$ and the left endpoint of $\tau_{i+1}$.
\end{enumerate}
Choosing $\delta_0$ sufficiently small in Lemma \ref{lem:domes}, we may assume that $\bigcup_{i=1}^N V_i$ is contained in the open annulus $T_{\emptyset}$ whose boundary consists of $\partial\Delta_0'$ and a polygonal Jordan curve which is the union of the curves $\sigma_{i,i+1}$ and subarcs of the $\tau_i$. For each $i=1,\dots,N$, define $\tilde{T}_{i,i+1}$ to be the bounded Jordan domain that does not contain $\Delta_{\infty}$ and whose boundary is the union of a subarc of $\partial R_i$, a subarc of $\partial R_{i+1}$, a subarc of $\Gamma_{\infty}$ and $\sigma_{i,i+1}$.
Note that $T_{\emptyset}$ contains every set $V_i$ and at most $C_1$ components of $\bigcup_{w\in\mathcal{W}, |w|\geq 2} V_w$ for some $C_1>1$ depending only on $c$, $C$ and $\eta$. Suppose that $H_1, \dots, H_m$ are the components of $T_{\emptyset} \cap \bigcup_{w\in\mathcal{W}, |w|\geq 2} V_w$. There exist $m_1>l_1'+4$ with $m_1\simeq 1$, and polygonal curves $s_j$, $j=1,\dots,m$, with edges in $\mathcal{G}_{2^{-m_1}}^1$, joining $H_j$ with $\Delta_1'\setminus T_{\emptyset}$ such that:
\begin{enumerate}
\item except for its endpoints, each $s_j$ lies entirely in $T_{\emptyset}$;
\item $\dist(s_j,\partial T_{\emptyset} \setminus \sigma_{i,i+1}) \geq 2^{-m_1}$ and $\dist(s_j,s_{j'}) \geq 2^{-m_1}$ when $j\neq j'$;
\item if $H_j$ is a component of $V_w$ and $x_{w}'$ is on $\Gamma_{\infty}' \cap \partial R_i$, then $s_j$ joins $H_j$ with a point on $\tau_i$ other than the peak $p_i$;
\item if $H_j$ is a component of $V_w$ and $x_{w}'$ is on $\Gamma_{\infty}'$ between the right endpoint of $\tau_i$ and the left endpoint of $\tau_{i+1}$, then $s_j$ joins $H_j$ with a point on $\sigma_{i,i+1}$.
\end{enumerate}
Let $\mathsf{Q}_{\emptyset}' = T_{\emptyset} \setminus \mathcal{T}_{2^{-m_1}}(\bigcup_j H_j \cup \bigcup_j s_j)$. Given $i=1,\dots,N$, if $H_{j_1},\dots,H_{j_l}$ are all the components of $T_{\emptyset} \cap\bigcup_{w\in\mathcal{W}, |w|\geq 2} V_w$ connected to $R_i$ as above, then set
\[ \mathcal{R}_i = R_i \cup \mathcal{T}_{2^{-m_1}}(\bigcup_{n=1}^l(H_{j_n} \cup s_{j_n})).\]
Similarly, if $H_{j_1},\dots,H_{j_l}$ are all the components of $T_{\emptyset} \cap\bigcup_{w\in\mathcal{W}, |w|\geq 2} V_w$ connected to $\tilde{T}_{i,i+1}$ for some $i=1,\dots,N$, then set
\[ T_{i,i+1} = \tilde{T}_{i,i+1} \cup \mathcal{T}_{2^{-m_1}}(\bigcup_{n=1}^l(H_{j_n} \cup s_{j_n})).\]
Finally, on the preimage side, define $\mathsf{Q}_{\emptyset}$ to be the interior of $\bigcup_{i=1}^N\overline{\mathcal{Q}_i}$.
Subdividing $\mathsf{Q}_{\emptyset}'$ we obtain Jordan domains $\mathcal{Q}_i'$ with the following properties.
\begin{description}[align=left]
\item [(1a)] Domains $\mathcal{Q}_i'$ are mutually disjoint and the union of their closures is all of $\overline{\mathsf{Q}_{\emptyset}'}$.
\item [(1b)] There exists some positive integer $m(0)\simeq 1$ such that each $\partial \mathcal{Q}_i'$ is a polygonal curve with edges in $\mathcal{G}_{2^{-m(0)}}^1$ and $\dist(\partial\mathcal{Q}_i',\partial V \setminus \partial\Delta_1') \geq 2^{-m(0)}$.
\item [(1c)] Collection $\{\mathcal{Q}_i\}$ is combinatorially equivalent to $\{\mathcal{Q}_i'\}$ in the following sense: if $g:(A\cap \mathsf{Q}_{\emptyset})\cup \partial \mathsf{Q}_{\emptyset} \to (A'\cap \mathsf{Q}_{\emptyset}')\cup \partial \mathsf{Q}_{\emptyset}'$ is a homeomorphism such that $g|_{A\cap \mathsf{Q}_{\emptyset}} = f|_{A\cap \mathsf{Q}_{\emptyset}}$, then $g$ extends to a homeomorphism $G:\mathsf{Q}_{\emptyset} \to \mathsf{Q}_{\emptyset}'$ with $G(\mathcal{Q}_i) = \mathcal{Q}_i'$.
\end{description}
Note that
\[ \overline{\Delta_0'} = \overline{\Delta_{\infty}'} \cup \bigcup_{i=1}^N \overline{\mathcal{Q}_i'} \cup \bigcup_{i=1}^N \overline{\mathcal{R}_i} \cup \bigcup_{i=1}^N \overline{T_{i,i+1}}.\]
For the induction step, we consider the following two possible cases.
\subsubsection{Construction of $\mathscr{Q}'$: Decomposition in $\mathcal{R}_w$}\label{sec:stepRw}
Let $w\in\mathcal{W}_{l k_0 +1}$ with $l$ being a nonnegative integer. Let also $w_1$ and $w_2$ be the left and right, respectively, endpoints of $\tau_w$.
We work as in \textsection\ref{sec:step0} to obtain a polygonal path $\sigma_{w_1,w1^{k_0}}$ joining $R_{w_1}$ and $R_{w1^{k_0}}$. Set $r = (cC)^{-2}\min\{\dist(p_{w_1},\Delta_{\infty}'),\dist(p_{w1^{k_0}},\Delta_{\infty}')\}$, and let $\hat{\Omega}$ be the domain obtained by Lemma \ref{lem:uniform=wud1} for $\Delta_{\infty}$ and $r$. Let $\Omega$ be the chord-arc disk obtained by Lemma \ref{lem:BLsep} for $\hat{\Omega}$ and $\epsilon = 2^{-m}$, where $m$ is the smallest integer such that
\[ 2^{-m} \leq \frac{1}{16}\dist(\partial \hat{\Omega}, (R_w\cap \partial V)\cup\{p_{w_1},p_{w1^{k_0}}, p_{w_2}\}).\]
Let now $\sigma_{w_1,w1^{k_0}}$ be a subarc of $\partial \Omega$ such that its first endpoint is on $\tau_{w_1}$ between its peak and its right endpoint, and its second endpoint is on $\tau_{w1^{k_0}}$ between its peak and its left endpoint. Modifying the curve at its intersection points with $\tau_{w}$, we may assume that the curve is contained in $R_w$. Similarly, we obtain a curve $\sigma_{w1^{k_0},w_2}$ that does not intersect $\sigma_{w_1,w1^{k_0}}$.
Define $\tilde{T}_{w_1,w1^{k_0}}$ to be the bounded Jordan domain that does not contain $\Delta_{\infty}$ and whose boundary is the union of $\sigma_{w_1,w1^{k_0}}$, a subarc of $\partial R_{w_1}$, a subarc of $\partial R_{w1^{k_0}}$ and a subarc of $\Gamma_{\infty}$. Similarly we define $\tilde{T}_{w1^{k_0},w_2}$. Define now
\[ \mathsf{Q}_w' = \mathcal{R}_{w} \setminus (\overline{R_{w_1}} \cup \overline{R_{w1^{k_0}}} \cup \overline{R_{w_2}} \cup \overline{\tilde{T}_{w_1,w1^{k_0}}} \cup \overline{\tilde{T}_{w1^{k_0},w_2}}).\]
Choosing $\delta_0$ sufficiently small in Lemma \ref{lem:domes}, we may assume that if $u\in\mathcal{W}$ with $|u|\leq |w|+k_0$ and $V_u \subset \mathcal{R}_w$, then $V_u$ is contained in $\mathsf{Q}_w'$.
As in \textsection\ref{sec:step0}, if $\mathsf{Q}_w'$ contains a component $H$ of $V_u$ for some $|u|> (l+1)k_0+1$, we construct a polygonal curve $s_H$ joining $H$ with the appropriate domain from the list $R_{w_1}$, $R_{w1^{k_0}}$, $R_{w_2}$, $\tilde{T}_{w_1,w1^{k_0}}$, $\tilde{T}_{w1^{k_0},w_2}$, and then remove a thickening of $s_H$ and $H$ from $\mathsf{Q}_w'$. After all possible removals, we denote the new Jordan domain again by $\mathsf{Q}_w'$. Furthermore, in the process of attaching these thickenings, we obtain new domains $T_{w_1,w1^{k_0}}$ and $T_{w1^{k_0},w_2}$ in place of $\tilde{T}_{w_1,w1^{k_0}}$ and $\tilde{T}_{w1^{k_0},w_2}$, respectively. Similarly, the domes $R_{w_1}$, $R_{w1^{k_0}}$ and $R_{w_2}$ are replaced by new domains $\tilde{R}_{w_1}$, $\mathcal{R}_{w1^{k_0}}$ and $\tilde{R}_{w_2}$, respectively. Further modifications on the left of $\tilde{R}_{w_1}$ and on the right of $\tilde{R}_{w_2}$ will give the final domains $\mathcal{R}_{w_1}$ and $\mathcal{R}_{w_2}$; see \textsection\ref{sec:stepTw1w2}. Finally, on the preimage side, define $\mathsf{Q}_{w}$ to be the interior of $\bigcup \overline{\mathcal{Q}_{u}}$, where the union is taken over all words $u\in\mathcal{W}$ such that $lk_0+1 < |u|\leq (l+1)k_0+1$ and $x_u$ is contained in the smaller subarc of $\Gamma_{\infty}\setminus \{x_{w_1},x_{w_2}\}$.
Now, as in \textsection\ref{sec:step0}, we subdivide $\mathsf{Q}_w'$ into Jordan domains $\mathcal{Q}_u'$, where $u$ is as above, that satisfy the following properties.
\begin{description}[align=left]
\item [(2a)] Domains $\mathcal{Q}_u'$ are mutually disjoint and the union of their closures is all of $\overline{\mathsf{Q}_w'}$.
\item [(2b)] There exists some positive integer $m(w)$ with $\diam{\mathsf{Q}_w'}\simeq 2^{-m(w)}$ such that each $\partial \mathcal{Q}_u'$ is a polygonal curve with edges in $\mathcal{G}_{2^{-m(w)}}^1$ and $\dist(\partial\mathcal{Q}_u',\partial V) \geq 2^{-m(w)}$.
\item [(2c)] Collection $\{\mathcal{Q}_u\}$ is combinatorially equivalent to $\{\mathcal{Q}_u'\}$ in the following sense: if $g:(A\cap \mathsf{Q}_{w})\cup \partial \mathsf{Q}_{w} \to (A'\cap \mathsf{Q}_{w}')\cup \partial \mathsf{Q}_{w}'$ is a homeomorphism such that $g|_{A\cap \mathsf{Q}_{w}} = f|_{A\cap \mathsf{Q}_{w}}$, then $g$ extends to a homeomorphism $G:\mathsf{Q}_{w} \to \mathsf{Q}_{w}'$ with $G(\mathcal{Q}_u) = \mathcal{Q}_u'$.
\end{description}
Note that
\[ \overline{\mathcal{R}_w} = \overline{\tilde{R}_{w_1}} \cup \overline{\mathcal{R}_{w1^{k_0}}} \cup \overline{\tilde{R}_{w_2}} \cup \overline{T_{w_1,w1^{k_0}}} \cup \overline{T_{w1^{k_0},w_2}} \cup \bigcup \overline{\mathcal{Q}_{u}'}\]
where domains $\mathcal{Q}_u'$ are as in (2a)--(2c).
\subsubsection{Construction of $\mathscr{Q}'$: Decomposition in $T_{w_1,w_2}$}\label{sec:stepTw1w2}
Let $l$ be a nonnegative integer and $w_1,w_2\in\mathcal{W}_{l k_0 + 1}$ such that $w_1$ is on the left of $w_2$ in $\mathcal{W}_{l k_0 + 1}$ and $T_{w_1,w_2}$, $\mathcal{R}_{w_1}$ and $\mathcal{R}_{w_2}$ have been defined in the previous steps. The construction in this case is similar to that of \textsection\ref{sec:stepRw} and we only sketch the steps.
Consider words $u_1,\dots,u_n \in \mathcal{W}_{(l+1)k_0+1}$ such that $u_1$ is on the right of $w_1 1^{k_0}$, $u_i$ is on the left of $u_{i+1}$ for $i=1,\dots,n-1$, and $u_n$ is on the left of $w_21^{k_0}$ in $\mathcal{W}_{(l+1)k_0+1}$. Choosing $\delta_0$ small enough in Lemma \ref{lem:domes}, we may assume that each dome $R_{u_i}$ is contained in $T_{w_1,w_2}$ and that $\diam{R_{u_i}} \leq \frac{1}{2}\dist(R_{u_i}, \partial T_{w_1,w_2})$ for $i=2,\dots,n-1$. As in \textsection\ref{sec:step0} and \textsection\ref{sec:stepRw}, we join each $R_{u_i}$ with $R_{u_{i+1}}$, $i=1,\dots,n-1$, by a polygonal arc $\sigma_{u_i,u_{i+1}}$ contained in $T_{w_1,w_2}$ that, except for its endpoints, does not intersect $\partial V$, $\partial T_{w_1,w_2}$, or $\partial R_{u_j}$ ($j=1,\dots,n$). We also assume that the polygonal arcs $\sigma_{u_i,u_{i+1}}$, $i=1,\dots,n-1$, are mutually disjoint.
For each $i=1,\dots,n-1$, let $\tilde{T}_{u_i,u_{i+1}}$ be the bounded Jordan domain that does not contain $\Delta_{\infty}$ and is bounded by a subarc of $\partial R_{u_i}$, a subarc of $\partial R_{u_{i+1}}$, $\sigma_{u_i,u_{i+1}}$ and a subarc of $\Gamma_{\infty}$. Let
\[ \mathsf{Q}_{w_1,w_2}' = T_{w_1,w_2} \setminus \Big(\bigcup_{i=1}^{n}\overline{R_{u_i}} \cup \bigcup_{i=1}^{n-1} \overline{\tilde{T}_{u_i,u_{i+1}}}\Big).\]
As in \textsection\ref{sec:step0} and \textsection\ref{sec:stepRw}, if $H$ is a component of $V_w$ with $|w|>(l+1)k_0+1$, then we construct a polygonal curve $s_H$ joining $H$ with the appropriate $R_{u_i}$ or the appropriate $\tilde{T}_{u_i,u_{i+1}}$, and remove a thickening of $s_H$ and $H$ from $\mathsf{Q}_{w_1,w_2}'$. This way we obtain a new domain which we still denote by $\mathsf{Q}_{w_1,w_2}'$. We also obtain new domains $T_{u_i,u_{i+1}}$ in place of $\tilde{T}_{u_i,u_{i+1}}$ for $i=1,\dots,n-1$, new domains $\mathcal{R}_{u_i}$ in place of $R_{u_i}$ for $i=2,\dots,n-1$, and new domains $\tilde{R}_{u_1}$ and $\tilde{R}_{u_n}$ in place of $R_{u_1}$ and $R_{u_n}$. Further modifications on the left of $\tilde{R}_{u_1}$ and on the right of $\tilde{R}_{u_n}$ will yield the final sets $\mathcal{R}_{u_1}$ and $\mathcal{R}_{u_n}$; see \textsection\ref{sec:stepRw}. Finally, on the preimage side, define $\mathsf{Q}_{w_1,w_2}$ to be the interior of $\bigcup\overline{\mathcal{Q}_{u}}$, where the union is taken over all words $u\in\mathcal{W}$ such that $lk_0+1 < |u|\leq (l+1)k_0+1$ and $x_u$ is contained in the smaller subarc of $\Gamma_{\infty}\setminus \{x_{u_1},x_{u_n}\}$.
As in previous sections, we subdivide $\mathsf{Q}_{w_1,w_2}'$ into domains $\mathcal{Q}_u'$, where $u$ is as above, that satisfy the following properties.
\begin{description}[align=left]
\item [(3a)] Domains $\mathcal{Q}_u'$ are mutually disjoint and the union of their closures is all of $\overline{\mathsf{Q}_{w_1,w_2}'}$.
\item [(3b)] There exists some positive integer $m(w_1,w_2)$ with $\diam{\mathsf{Q}_{w_1,w_2}'}\simeq 2^{-m(w_1,w_2)}$ such that each $\partial \mathcal{Q}_u'$ is a polygonal curve with edges in $\mathcal{G}_{2^{-m(w_1,w_2)}}^1$ and $\dist(\partial\mathcal{Q}_u',\partial V) \geq 2^{-m(w_1,w_2)}$.
\item [(3c)] Collection $\{\mathcal{Q}_u\}$ is combinatorially equivalent to $\{\mathcal{Q}_u'\}$ in the following sense: if $g:(A\cap \mathsf{Q}_{w_1,w_2})\cup \partial \mathsf{Q}_{w_1,w_2} \to (A'\cap \mathsf{Q}_{w_1,w_2}')\cup \partial \mathsf{Q}_{w_1,w_2}'$ is a homeomorphism such that $g|_{A\cap \mathsf{Q}_{w_1,w_2}} = f|_{A\cap \mathsf{Q}_{w_1,w_2}}$, then $g$ extends to a homeomorphism $G:\mathsf{Q}_{w_1,w_2} \to \mathsf{Q}_{w_1,w_2}'$ with $G(\mathcal{Q}_u) = \mathcal{Q}_u'$.
\end{description}
Note that
\[ \overline{T_{w_1,w_2}} = \overline{\tilde{R}_{u_1}}\cup \overline{\tilde{R}_{u_n}} \cup \bigcup_{i=2}^{n-1}\overline{\mathcal{R}_{u_i}} \cup \bigcup_{i=1}^{n-1}\overline{T_{u_{i},u_{i+1}}} \cup \bigcup \overline{\mathcal{Q}_{u}'}\]
where $\mathcal{Q}_u'$ are as in (3a)--(3c) above.
\subsection{Concluding remarks}\label{sec:concluding} Proceeding inductively, we obtain a collection of sets $\mathscr{Q}' = \{\mathcal{Q}_w' : w\in\mathcal{W}\}$. We make the following three observations.
Firstly, by properties (1c), (2c) and (3c), $\mathscr{Q}'$ is combinatorially equivalent to $\mathscr{Q}$ in the sense that there exists a homeomorphism $g: \overline{D} \to \overline{D'}$ such that $f|_{\partial A} = g|_{\partial A}$ and $g(\mathcal{Q}_w) = \mathcal{Q}_w'$ for each $\mathcal{Q}_w\in\mathscr{Q}$. Secondly, since there are only finitely many configurations for the domains $\mathsf{Q}_{w}'$, $\mathsf{Q}_{w_1,w_2}'$ and finitely many ways in which these domains can be cut into pieces, it follows that each $\mathcal{Q}_w'$ is an $L$-chord-arc disk for some $L$ depending only on $c$ and $\eta$. Thirdly, by their construction, each $\mathcal{Q}_w'$ satisfies
\[ \dist(\partial\mathcal{Q}_w',A') \lesssim \dist(\partial \mathcal{Q}_w',\Delta_{\infty}') \simeq \diam{\mathcal{Q}_w'}.\]
On the other hand, by (1b), (2b) and (3b), we have that $ \dist(\partial\mathcal{Q}_w',A') \gtrsim \diam{\mathcal{Q}_w'}$.
These three observations, in conjunction with Remark \ref{rem:whitney3}, show that the two collections $\mathscr{Q}$ and $\mathscr{Q}'$ have the desired properties (P1)--(P3).
\section{Proof of Theorem \ref{thm:main}}\label{sec:whitney}
This section is devoted to the proof of Theorem \ref{thm:main}. We focus on the quasisymmetric case; the proof in the bi-Lipschitz case is similar and is given in \textsection\ref{sec:BLproof}.
By \textsection\ref{sec:isolated} and \textsection\ref{sec:unbounded}, we may assume for the rest of this section that $U\subset \mathbb{R}^2$ is an unbounded $c$-uniform domain and that $\partial U$ is compact and $C$-uniformly perfect. By Corollary \ref{cor:extcompl}, we assume that $f : \mathbb{R}^2\setminus U \to \mathbb{R}^2$ is an $\eta$-quasisymmetric map that can be extended homeomorphically to $\mathbb{R}^2$. By Lemma \ref{lem:couniform} and the invariance of uniformly perfect sets under quasisymmetric maps, the domain $U' = \mathbb{R}^2 \setminus f(\mathbb{R}^2\setminus U)$ is $c'$-uniform and $\partial U'$ is $C'$-relatively connected for some $c'$ depending only on $c$ and $\eta$, and some $C'$ depending only on $C$ and $\eta$. For simplicity, we assume for the rest that $C =C' = c'=c$.
Fix $x_0 \in \partial U$ and $x_0' \in \partial U'$. Let $\mathcal{D}_0 = \mathbb{R}^2 \setminus \overline{B}^2(x_0, 2c\diam{\partial U})$ and $\mathcal{D}_0' = \mathbb{R}^2 \setminus \overline{B}^2(x_0', 2c\diam{\partial U'})$. In \textsection\ref{sec:decompsteps} we apply the results of \textsection\ref{sec:separation} and \textsection\ref{sec:qcircledecomp} to construct two Whitney-type decompositions, one in $U\setminus \mathcal{D}_0$ and another in $U'\setminus \mathcal{D}_0'$ that are combinatorially equivalent. The difference here is that the two decompositions consist of domains in $\mathscr{QC}(K,d)$ and not chord-arc disks. Specifically, we show the following proposition.
\begin{prop}\label{prop:whitney}
There exist $K>1$, $d>1$, $C>1$ depending only on $c$ and $\eta$ and two families of domains $\mathscr{D} , \mathscr{D}' \subset \mathscr{QC}(K,d)$ with the following properties.
\begin{enumerate}
\item The domains in $\mathscr{D}$ are mutually disjoint and $U \setminus \mathcal{D}_0 = \bigcup_{D \in \mathscr{D}}\overline{D}$. Similarly for $\mathscr{D}'$.
\item For all $D \in \mathscr{D}$, $C^{-1}\diam{D} \leq \dist(D,\partial U) \leq C\diam{D}$. Similarly for $\mathscr{D}'$.
\item For each $D \in \mathscr{D}$, there are at most $C$ elements in $\mathscr{D}$ whose boundary intersects that of $D$. If $\Gamma = \partial D\cap \partial D' \neq \emptyset$ for $D,D'\in\mathscr{D}$, then $\Gamma$ is an $L$-bi-Lipschitz arc and $\diam{\Gamma} \geq C^{-1}\max\{\diam{D},\diam{D'}\}$. Similarly for $\mathscr{D}'$.
\item There exists a homeomorphism $g: \overline{U\setminus \mathcal{D}_0} \to \overline{U'\setminus \mathcal{D}_0'}$ such that $f|_{\partial U} = g|_{\partial U}$ and $g(D) \in \mathscr{D}'$ for each $D\in\mathscr{D}$.
\end{enumerate}
\end{prop}
In \textsection\ref{sec:decompsteps} we construct the families $\mathscr{D}$ and $\mathscr{D}'$ while in \textsection\ref{sec:proof} we give the proof of Theorem \ref{thm:main}.
\subsection{Decompositions for planar uniform domains}\label{sec:decompsteps}
We describe the steps in the construction of the families $\mathscr{D}$ and $\mathscr{D}'$. Here and for the rest of \textsection\ref{sec:decompsteps} we write $E = \mathbb{R}^2\setminus U$ and $E' = \mathbb{R}^2\setminus U'$. To reduce the use of constants and simplify the exposition, we assume that all constants in the lemmas and propositions of \textsection\ref{sec:separation} are equal to $c$ for both $U$ and $U'$. For two positive constants $a,b$ we write $a\lesssim b$ if $a\leq Cb$ for some constant $C>1$ depending at most on $c$ and $\eta$. We write $a\simeq b$ if $a\lesssim b$ and $b\lesssim a$.
\medskip
\emph{Step $1$.} We apply the construction of \textsection\ref{sec:totalwud} on $\partial U$ inside $U\setminus\mathcal{D}_0$ with $\epsilon= (80c)^{-3}\diam{E}$ and obtain Jordan domains $D_1,\dots,D_n$. Set $\mathcal{D}_{\emptyset} = (U\setminus \mathcal{D}_0) \setminus \bigcup_{i=1}^n \overline{D_i}$ and note that $\diam{\mathcal{D}_{\emptyset}} \geq \diam{\partial U}$.
For each $i=1,\dots,n$ let $E_i = D_i \cap E$ and $E_i' = f(E_i)$. Applying Lemma \ref{lem:uniform+s+d=us} repeatedly on each set $E_i'$, we obtain Jordan domains $D_1' = V(E_1',(U'\setminus \mathcal{D}_0'),r_1)$ and
\[D_i' = V(E_i',(U'\setminus \mathcal{D}_0')\setminus \bigcup_{k=1}^{i-1}D_k',r_i)\quad \text{ for }i=2,\dots,n\]
where
\[r_i = (32c)^{-1}\min\{\diam{E_i'},\dist(E_i',E'\setminus E_i')\}\quad \text{ for }i=1,2,\dots,n.\]
Set $\mathcal{D}_{\emptyset}' = (U'\setminus \mathcal{D}_0') \setminus \bigcup_{i=1}^n \overline{D_i'}$.
Since $E$ is relatively connected, by Remark \ref{rem:relcon}, $r_i$ is comparable to $\dist(E_i',E'\setminus E_i')$.
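Explicitly, if relative connectedness is used in the form $\dist(E_i',E'\setminus E_i') \leq C\diam{E_i'}$ (the form appearing in the inductive assumption below), then
\[ (32cC)^{-1}\dist(E_i',E'\setminus E_i') \leq r_i \leq (32c)^{-1}\dist(E_i',E'\setminus E_i'),\]
so indeed $r_i \simeq \dist(E_i',E'\setminus E_i')$.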
\medskip
\emph{Step $2$.} Fix $i\in \{1,\dots,n\}$ and let
\[\epsilon_i' = (8c)^{-3}\min\{\dist(E_i',\partial D_i'),\diam{E_i'}\}.\]
Observe that, although $U'\cap D_i'$ may not be $c$-uniform like $U'$, the condition of $c$-uniformity for $E'$ holds at the scale of $\epsilon_i'$. That is, for all $x,y\in E_i'$ with $|x-y|\leq \epsilon_i'$, there exists a $c$-cigar curve $\gamma$ joining them in $U'\cap D_i'$. In fact, $\epsilon_i'$ has been chosen in such a way that both Lemma \ref{lem:uniform=wud1} and Lemma \ref{lem:uniform=wud2} can be applied as if $U'\cap D_i'$ were $c$-uniform. Therefore, we can apply on each $E_i'$ inside $D_i'$ the construction of \textsection\ref{sec:totalwud} with $\epsilon_i'$ and obtain Jordan domains $D_{i1}',\dots,D_{in_i}'$. Set $\mathcal{D}_i' = D_i' \setminus \bigcup_{j=1}^{n_i} D_{ij}'$.
In each $D_i$, we now apply the second part of Step 1. Fix $i\in \{1,\dots,n\}$ and let $E_{ij}' = E' \cap D_{ij}'$ and $E_{ij} = f^{-1}(E_{ij}')$. By Lemma \ref{lem:propertiesofnbhd}, $D_i\cap U$ is $c'$-uniform for some $c'$ depending only on $c$. Applying Lemma \ref{lem:uniform+s+d=us} repeatedly on each set $E_{ij}$, we obtain Jordan domains $D_{i1} = V(E_{i1},U\cap D_i,(32c)^{-1})$ and
\[ D_{ij} = V(E_{ij},(U\cap D_i)\setminus \bigcup_{k=1}^{j-1}D_{ik},(32c)^{-1})\quad \text{ for }j=2,\dots,n_i.\]
Set $\mathcal{D}_i = (U\cap D_i) \setminus \bigcup_{j=1}^{n_i} \overline{D_{ij}}$ and note that $\diam{\mathcal{D}_i'} \geq \epsilon_0'\diam{D_i}$ with $\epsilon_0'$ depending only on $c$.
\medskip
\emph{Inductive assumption.} Suppose that from \emph{Step $2k$} we have obtained mutually disjoint bounded Jordan domains $D_w\subset U$, mutually disjoint Jordan domains $D_{w}'\subset U'$, boundary sets $E_{w} = D_{w}\cap E$ and boundary sets $E_{w}' = D_{w}'\cap E' = f(E_w)$ such that, for some $M>1$ depending only on $c$ and $\eta$,
\begin{description}[align=left]
\item[(I1)] $M^{-1}\diam{E_w} \leq \dist(E_{w},E \setminus E_{w}) \leq M \dist(\partial D_w,E\setminus E_w)$;
\item[(I2)] $M^{-1}\dist(z,E_{w}) \leq \min\{\dist(E_{w},E \setminus E_{w}),\diam{E_w}\} \leq M \dist(z,E_{w})$ for all $z\in\partial D_w$;
\item[(I3)] if $\dist(z,A) \geq (8c)^{-3}\diam{A}$ for some $z\in\partial D_w$ and some component $A$ of $E_w$, then $\dist(A,\partial D_w) \geq M^{-1}\diam{A}$.
\end{description}
Similarly for $E'$, $E_w'$ and $D_w'$. Also, since $E$ is uniformly perfect, we have that $\dist(E_{w},E \setminus E_{w}) \leq M_0\diam{E_w}$ for some $M_0>1$ depending only on $c$.
Fix now a Jordan domain $D_{w}$, $w=i_1\cdots i_{2k}$. We distinguish two cases.
\subsubsection{Case 1} For all components $A$ of $E_{w}$, there exists $z\in\partial D_{w}$ such that $\diam{A} \leq (8c)^3\dist(A,z)$.
\begin{rem}\label{rem:case1}
By (I3) and the quasisymmetry of $f$,
\[\diam{f(A)} \lesssim \dist(f(A), E' \setminus E_{w}').\]
By Lemma \ref{lem:totalwud},
\[ \diam{E_{w}'} \lesssim \diam{D_{w}'} \lesssim c_2'\dist(E_{w}',E' \setminus E_{w}').\]
By quasisymmetry of $f$,
\[\diam{E_{w}} \lesssim \dist(E_{w},E \setminus E_{w}).\]
\end{rem}
\medskip
\emph{Step $2k+1$.} Applying the construction of \textsection\ref{sec:totalwud} on $E_{w}$ with
\[\epsilon_{w} = (80c)^{-3}\min\{\dist(E_{w},\partial D_{w}),\diam{E_w}\}\]
we obtain Jordan domains $D_{wi} \subset D_{w}$ with $i \leq n_{w}$. Define $E_{wi} = D_{wi}\cap E$, $E_{wi}' = f(E_{wi})$ and $\mathcal{D}_{w} = D_{w} \setminus \bigcup_{i} D_{wi}$. In each $D_{w}'$ apply the second part of Step 1 and obtain Jordan domains $D_{wi}'$. Set $\mathcal{D}_{w}' = D_{w}' \setminus \bigcup_{i} D_{wi}'$.
\medskip
\emph{Step $2k+2$}. In each $D'_{wi}$ apply the construction of \textsection\ref{sec:totalwud} on $E_{wi}'$ with
\[\epsilon_{wi}' = (80c)^{-3}\min\{\dist(E_{wi}',\partial D_{wi}'),\diam{E_{wi}'}\}.\]
Again, $\epsilon_{wi}'$ has been chosen in such a way that both Lemma \ref{lem:uniform=wud1} and Lemma \ref{lem:uniform=wud2} can be applied as if $U'\cap D_{wi}'$ were $c$-uniform. Thus, we obtain Jordan domains $D_{wij}'\subset D_{wi}'$. Set $E_{wij}' = D_{wij}'\cap E'$, $E_{wij} = f^{-1}(E_{wij}')$ and $\mathcal{D}_{wi}' = D_{wi}' \setminus \bigcup_{j} D_{wij}'$.
In each $D_{wi }$ we apply the second part of Step 1. Applying Lemma \ref{lem:uniform+s+d=us} repeatedly on each set $E_{wij}$, we obtain Jordan domains $D_{wij}$. Set $\mathcal{D}_{wi} = D_{wi } \setminus \bigcup_{j} D_{wij}$.
Combining Lemma \ref{lem:propertiesofnbhd}, Lemma \ref{lem:totalwud}, Remark \ref{rem:case1} and using the fact that $E$ is uniformly perfect, we obtain the next corollary which completes the induction step for Case 1.
\begin{cor}\label{cor:whitney}
There exist $N_2>1$, $d>1$ and $K>1$ depending only on $c$ and $\eta$ with the following properties.
\begin{enumerate}
\item $n_w \leq N_2$.
\item Each $D_{wi}$ is a $K$-quasidisk.
\item For all $i\in \{1,\dots,n_w\}$, $\diam{D_{wi }}\geq d^{-1} \diam{D_{w}}$.
\item For all $i\in \{1,\dots,n_w\}$, $\dist(\partial D_{w},\partial D_{wi}) \geq d^{-1}\diam {D_{w}}$.
\item For all $i,i' \in \{1,\dots,n_w\}$, $\dist(\partial D_{wi},\partial D_{wi'}) \geq d^{-1}\diam {D_{w}}$.
\item Properties (I1), (I2) and (I3) hold for $E$, $E_{wi}$ and $D_{wi}$ with constant $d$.
\end{enumerate}
\end{cor}
By Remark \ref{rem:case1} and the fact that $n\leq N$, the same is true for the domains $D_{wi}'$. Similarly, the conclusions of Corollary \ref{cor:whitney} hold true for the domains $D_{wij}$ and $D_{wij}'$.
\subsubsection{Case 2} There exists a component $\overline{\Delta_{\infty}}$ of $E_{w}$ such that
\[ \dist(z,\Delta_{\infty}) \leq (8c)^{-3}\diam{\Delta_{\infty}} \text{ for all } z\in\partial D_w.\]
By (I1), $\dist(\partial D_w,\Delta_{\infty})\gtrsim \diam{E_w} \geq \diam{\Delta_{\infty}}$. Set $\Delta_{\infty}' = f(\Delta_{\infty})$ and $E_w' = f(E_w)$. We show that $\dist(z,\Delta_{\infty}')$ is comparable to $\diam{\Delta_{\infty}'}$ for each $z\in\partial D_w'$.
\begin{lem}\label{lem:decompcases}
There exists $C>1$ depending only on $c$ and $\eta$ such that
\[ C^{-1}\diam{\Delta_{\infty}'} \leq \dist(\Delta_{\infty}',z)\leq C\diam{\Delta_{\infty}'} \text{ for all }z\in\partial D_{w}'.\]
\end{lem}
\begin{proof}
Note that $\diam{E_w} \leq \diam{D_w} \leq (2(8c)^{-3}+1)\diam{\Delta_{\infty}}$. By quasisymmetry of $f$, $\diam{E_w'} \lesssim \diam{\Delta_{\infty}'}$ \cite[Proposition 10.8]{Heinonen} and the right inequality follows from (I1) and (I2) for $E',E_w',D_w'$. The left inequality follows from (I1) and (I2) for $E',E_w',D_w'$ and the fact that $\diam{\Delta_{\infty}'} \leq \diam{E_w'}$.
\end{proof}
By Lemma \ref{lem:decompcases}, all assumptions of \textsection\ref{sec:qcircledecomp} are satisfied and, setting $D=D_w$, $D' = D_{w}'$, $A= E_w$ and $A' = E_w'$, Step $2k+1$ and Step $2k+2$ are as follows.
\medskip
\emph{Step $2k+1$ and Step $2k+2$.} We replace $D_w$ with $\Delta_{l_1}$, where $l_1$ is chosen so that $\dist(\Delta_{l_1},D_{w'}) \geq \frac{1}{2}\dist(D_w,D_{w'})$ for all $w' = j_1\cdots j_{2k} \neq w$. Define $\{\mathcal{Q}_u\}_{u\in \mathcal{W}}$, $\{\mathcal{Q}_u'\}_{u\in \mathcal{W}}$, $\{A_u\}_{u\in \mathcal{W}}$ and $\{A_u'\}_{u\in \mathcal{W}}$ as in \textsection\ref{sec:qcircledecomp}. Set $D_{wu} = \mathcal{Q}_u$ and $D_{wu}' = \mathcal{Q}_u'$.
If $D_{wu} \cap \partial U = \emptyset$, then set $\mathcal{D}_{wu} = D_{wu}$ and $\mathcal{D}_{wu}' = D_{wu}'$. If $D_{wu}\cap \partial U \neq \emptyset$ then set $\mathcal{D}_{wu} = D_{wu}\setminus \mathcal{T}_{\delta_w}(E\cap D_{wu})$ and $\mathcal{D}_{wu}' = D_{wu}'\setminus \mathcal{T}_{\delta_w'}(E'\cap D_{wu}')$ where
\[ \delta_w = \frac1{10}\dist(E\cap D_{wu},\partial D_{wu}) \qquad\text{and}\qquad\delta_w' = \frac1{10}\dist(E'\cap D_{wu}',\partial D_{wu}').\]
It is straightforward to check that Corollary \ref{cor:whitney} holds true in Case 2 as well. This completes the inductive step and the construction of the two decompositions $\mathscr{D}$ and $\mathscr{D}'$.
\subsection{Proof of Theorem \ref{thm:main} in the quasisymmetric case}\label{sec:proof}
Suppose that $U\subset\mathbb{R}^2$ is an unbounded $c$-uniform domain with bounded $c$-uniformly perfect boundary. Let $\mathscr{D} = \{\mathcal{D}_w\}$ and $\mathscr{D}' = \{\mathcal{D}'_w\}$ be the decompositions of Proposition \ref{prop:whitney}.
Given $\mathcal{D}_w,\mathcal{D}_u \in \mathscr{D}$ whose boundaries intersect in a non-degenerate set, let $\Gamma_{w,u} = \mathcal{D}_w\cap\mathcal{D}_u$ and $\Gamma_{w,u}' = \mathcal{D}_w'\cap\mathcal{D}_u'$. Then, $\Gamma_{w,u},\Gamma_{w,u}'$ are $L$-bi-Lipschitz arcs and $\diam{\Gamma_{w,u}}\gtrsim \diam{\partial \mathcal{D}_w}$ and $\diam{\Gamma_{w,u}'}\gtrsim \diam{\partial \mathcal{D}_w'}$. Define a homeomorphism $g: \bigcup_{\mathcal{D}_w \in \mathscr{D}} \partial\mathcal{D}_w \to \bigcup_{\mathcal{D}_w '\in \mathscr{D}'} \partial\mathcal{D}_w'$ so that $g|_{\Gamma_{w,u}}$ maps $\Gamma_{w,u}$ onto $\Gamma_{w,u}'$ by arc-length parametrization and can be homeomorphically extended to each $\mathcal{D}_w$. Note that $g|_{\mathcal{D}_w}$ is a $(\lambda_w,\Lambda)$-quasisimilarity with $\Lambda\simeq 1$ and with $\lambda_w = \frac{\diam{\mathcal{D}_w'}}{\diam{\mathcal{D}_w}}$. By Proposition \ref{prop:BLext}, each $g|_{\mathcal{D}_w}$ extends to a $(\lambda_w,\Lambda')$-quasisimilarity $F_w : \mathcal{D}_w \to \mathcal{D}_w'$ with $\Lambda' \simeq 1$.
Define $F : U \to U'$ with $F|_{\mathcal{D}_w} = F_w$. By a theorem of V\"ais\"al\"a on removability of singularities \cite[Theorem 35.1]{Vais1}, $F$ is $K$-quasiconformal with $K$ depending only on $\Lambda'$, thus only on $c$ and $\eta$. By Lemma \ref{lem:QCtoQM}, $F$ is $\eta'$-quasisymmetric for some $\eta'$ depending only on $c$ and $\eta$.
\subsection{Proof of Theorem \ref{thm:main} in the bi-Lipschitz case}\label{sec:BLproof}
The proof of Theorem \ref{thm:main} in the case that $f$ is $L$-bi-Lipschitz is similar. The sets $\mathscr{D}$ and $\mathscr{D}'$ are constructed as in the quasisymmetric case. Since $\partial U$ may not be uniformly perfect, in Proposition \ref{prop:whitney}, property (4) holds with $\min$ instead of $\max$ and the families $\mathscr{D},\mathscr{D}' \subset \mathscr{CA}(L_0,d)$ for some $L_0>1$ and $d>1$ depending only on $L$ and $c$. Moreover, in Corollary \ref{cor:whitney}, the domains $D_{wi}$ are $L_0'$-chord-arc disks and property (3) may not hold.
Since $f$ is $L$-bi-Lipschitz, there exists $c_0>1$ depending only on $c$ and $L$ such that $c_0^{-1}\diam{\mathcal{D}_w'} \leq\diam{\mathcal{D}_w} \leq c_0 \diam{\mathcal{D}_w'}$ for all $\mathcal{D}_w \in \mathscr{D}$. Therefore, applying Proposition \ref{prop:BLext} and defining $F$ as above, we note that $F|_{\mathcal{D}_w}$ is $L_1$-bi-Lipschitz for all $\mathcal{D}_w \in \mathscr{D}$ with $L_1$ depending only on $c$ and $L$. Thus, $F$ is $L_1$-BLD and, by Lemma \ref{lem:BLD}, $F$ is $L'$-bi-Lipschitz with $L'$ depending only on $c$ and $L$.
\section{The assumptions of Theorem \ref{thm:main}}\label{sec:assumptions}
We discuss the assumptions of Theorem \ref{thm:main} and their necessity.
In \textsection\ref{sec:relcon2}, applying a result of Trotsenko and V\"ais\"al\"a \cite{TroVa}, we show that relative connectedness is necessary for the
QSEP in all dimensions.
In \textsection\ref{sec:examples}, we show that uniformity is somewhat necessary for the QSEP or the BLEP in the plane, as neither the John property nor quasiconvexity of $U$, alone, is sufficient in Theorem \ref{thm:main}.
\subsection{Relative connectedness}\label{sec:relcon2}
Let $E\subset \mathbb{R}^n$ be a closed set that is not relatively connected. In the proof of Theorem 6.6 in \cite{TroVa}, a quasisymmetric map $f: E \to \mathbb{R}^n$ is constructed that is not power quasisymmetric. It follows then from Lemma \ref{thm:TroVa} that $f$ cannot be extended quasisymmetrically to $\mathbb{R}^n$. We reproduce the construction here to show that the map $f$ can nevertheless be extended homeomorphically to $\mathbb{R}^n$.
Since $E$ is not relatively connected, for each $i\in\mathbb{N}$ there exists a set $E_i\subset E$ containing at least two points such that $\text{dist}^*(E_i,E\setminus E_i) \geq i$. For the rest, we assume that $\diam{E_i} \leq \diam{E\setminus E_i}$. Set $d_i = \text{dist}^*(E_i,E \setminus E_i)$. We may assume that $4 < d_1 < d_2 < \cdots$. The conditions on $E_i$ imply one of the following three cases.
\emph{Case 1.} There exists a subsequence $i_1<i_2 < \cdots$ with $E_{i_1} \supset E_{i_2} \supset \cdots$. For simplicity, write $E_{i_k} = E_k$ and $d_{i_k}=d_k$. Then $\{x_0\} = \bigcap_{k\in\mathbb{N}}E_k$ for some $x_0 \in E$. Set $E^0 = E\setminus E_1$ and, for each $k\in\mathbb{N}$, set $E^k = E_{k}\setminus E_{k+1}$. Note that $E$ is a disjoint union of the sets $E^{k}$. Define $f: E \to \mathbb{R}^n$ by $f(x_0) = x_0$, $f|_{E^0}(x) = x$ and, for each $k\in\mathbb{N}$, $f|_{E^k}(x) = s_k x$ with
\[ s_k = \frac{e^{d_1 +\cdots +d_k}}{(1+d_1)\cdots (1+d_k)}. \]
\emph{Case 2.} There exists a subsequence $i_1<i_2 < \cdots$ with $E_{i_1} \subset E_{i_2} \subset \cdots$. For simplicity, write $E_{i_k} = E_k$ and $d_{i_k}=d_k$. Set $E^0 = E_1$ and, for each $k\in\mathbb{N}$, set $E^k = E_{k+1}\setminus E_{k}$. Note that $E$ is a disjoint union of the sets $E^{k}$. Define $f: E \to \mathbb{R}^n$ by $f|_{E^0}(x) = x$ and, for each $k\in\mathbb{N}$, $f|_{E^k}(x) = s_k x$ with
\[ s_k = \frac{e^{e^{d_1}+\cdots+e^{d_k}}}{e^{k+d_1+\cdots+d_k}}. \]
\emph{Case 3.} There exists a subsequence $i_1<i_2 < \cdots$ with $E_{i_1}, E_{i_2},\dots$ being mutually disjoint. For simplicity, write $E_{i_k} = E_k$, $d_{i_k}=d_k$ and $x_{i_k}=x_k$. Set $E^0 = E \setminus \bigcup_{k\in\mathbb{N}} E_k$ and, for each $k\in\mathbb{N}$, set $E^k = E_{k}$. Note that $E$ is a disjoint union of the sets $E^{k}$. Define $f: E \to \mathbb{R}^n$ by $f|_{E^0}(x) = x$ and, for each $k\in\mathbb{N}$, $f|_{E^k}(x) = x_k + s_k(x-x_k)$ with
\[ s_k = \frac{e^{d_k}}{(1+d_k)}. \]
Following the proof of \cite[Theorem 6.6]{TroVa}, in each case $f$ is quasisymmetric but not power quasisymmetric. Thus, it only remains to show that, in each case, $f$ extends to a self-homeomorphism of $\mathbb{R}^n$. Assume the first case. For each $i\in\mathbb{N}$, set $B_i = B^n(x_i,2\diam{E_i})$, set $B_i' = B^n(x_i,\frac{1}{2}d_i\diam{E_i})$ and set $A_i = B_i\setminus B_{i+1}'$. On $\mathbb{R}^n \setminus B_1$, set $F(x)=x$ and, for each $i\in\mathbb{N}$, set $F|_{\overline{A_i}}(x) = s_i x$. By \cite[(6.8)]{TroVa}, for each $i\in\mathbb{N}$, $F(\partial B_{i+1}')$ is contained in the ball bounded by $F(\partial B_i)$, $F(B_{i+1})$ is contained in the ball bounded by $F(\partial B_{i+1}')$ and both $\dist(F(\partial B_i), F(\partial B_{i+1}'))$ and $\dist(F(\partial B_{i+1}), F(\partial B_{i+1}'))$ are positive. Therefore, $F$ can be extended homeomorphically to each $B_{i}'\setminus B_{i}$.
The other two cases are similar; see \cite[(6.14)]{TroVa} and \cite[(6.18)]{TroVa}.
\subsection{Uniformity}\label{sec:examples}
We construct a compact, countable, relatively connected set $E\subset \mathbb{R}^2$ whose complement is both John and quasiconvex but $E$ fails to have either the QSEP or the BLEP. For the rest of \textsection\ref{sec:examples} we use complex coordinates.
For each $n\in\mathbb{N}$ divide the interval $[0,2^{-n(n-1)/2}]$ into $2^n$ intervals of equal length and let $A$ be the set of endpoints of all the intervals produced. Set
\[ E = A \cup (iA)\cup(-A)\cup(-iA).\]
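For concreteness, the first few stages of the construction of $A$ can be computed directly from the definition above:

```latex
% n = 1: divide [0, 2^{0}] = [0,1] into 2^1 = 2 intervals of length 1/2:
%        endpoints 0, 1/2, 1.
% n = 2: divide [0, 2^{-1}] into 2^2 = 4 intervals of length 1/8:
%        endpoints 0, 1/8, 1/4, 3/8, 1/2.
% n = 3: divide [0, 2^{-3}] into 2^3 = 8 intervals of length 1/64:
%        endpoints 0, 1/64, 2/64, ..., 1/8.
\[
A \supset \{0,\tfrac12,1\} \cup \{0,\tfrac18,\tfrac14,\tfrac38,\tfrac12\}
  \cup \{0,\tfrac1{64},\tfrac2{64},\dots,\tfrac18\}.
\]
```

Thus the gaps between consecutive points of $A$ shrink rapidly towards $0$, while $A$ accumulates only at the origin.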
Note that $E$ is relatively connected, compact and countable. Moreover, since the coordinate projections of $E$ both have measure zero, $\mathbb{R}^2\setminus E$ is quasiconvex; see \cite[Lemma 2.5, Lemma 2.7]{GM} or \cite[Theorem A]{HaHe}.
Define now $f: E \to E$ with
\[f|_{A\cup(iA)}(z) = -i \overline{z} \quad\text{ and }\quad f|_{(-A)\cup(-iA)}(z) = z.\]
Clearly $f$ is $2$-bi-Lipschitz and, as $E$ is totally disconnected, $f$ extends to a homeomorphism of $\mathbb{C}$. We show that $f$ cannot be extended quasisymmetrically to $\mathbb{C}$.
Suppose that there exists an $\eta$-quasisymmetric extension $F:\mathbb{C} \to \mathbb{C}$. For each point $a\in A$, let $x_a = (-a,0)$, $y_a = (0,a)$ and $\gamma_a = \{(-at,a(1-t)):t\in[0,1]\}$. For all $a\in A$, $\gamma_a$ is $2$-bounded turning and $2$-cigar in $\mathbb{R}^2 \setminus E$; that is, $\diam{\gamma_a} \leq 2|x_a-y_a|$ and $\dist(z,E) \geq \frac{1}{2}\min\{|x_a-z|,|y_a-z|\}$ for all $z\in \gamma_a$. Since $F$ is $\eta$-quasisymmetric, there exists $C>1$ depending only on $\eta$ such that $F(\gamma_a)$ is $C$-bounded turning and $C$-cigar in $\mathbb{R}^2\setminus E$ for all $a\in A$.
Let $n\in\mathbb{N}$ be such that $2^{-n} \leq (3C)^{-2}$. Let $a' = 2^{-n(n+1)/2}\in A$ and $a = m2^{-n}a' \in A$ with $m$ being the smallest integer bigger than $2C$. The $C$-bounded turning condition of $F(\gamma_a)$ implies that $F(\gamma_a)\cap (\{0\}\times\mathbb{R}) = F(\gamma_a)\cap (\{0\}\times[-a',a'])$. However, for any point $z\in F(\gamma_a)\cap (\{0\}\times\mathbb{R})$ we have $\dist(z,E) \leq 2^{-n}a' \leq (3C)^{-2}a' \leq (2C)^{-1}a \leq (2C)^{-1}\min\{|z-F(x_a)|,|z-F(y_a)|\}$ and the $C$-cigar condition of $F(\gamma_a)$ is violated.
\begin{rem}
It turns out that the John property and the quasiconvexity of $\mathbb{R}^2\setminus E$ are not necessary for bi-Lipschitz or quasisymmetric extensions to $\mathbb{R}^2$. For example, let $E = (0,+\infty)\times(-1,1)$ and note that $E$ is not John while $\mathbb{R}^2\setminus \overline{E}$ is not quasiconvex. Nevertheless, any $\eta$-quasisymmetric (resp. $L$-bi-Lipschitz) embedding of $E$ or $\mathbb{R}^2\setminus E$ into $\mathbb{R}^2$ extends to an $\eta'$-quasisymmetric (resp. $L'$-bi-Lipschitz) homeomorphism of $\mathbb{R}^2$. The proofs are left to the reader.
\end{rem}
\section{Uniformization of Cantor sets with bounded geometry}\label{sec:cantor}
In \cite{DSbook}, David and Semmes characterized the metric spaces that are quasisymmetrically homeomorphic to the standard middle-third Cantor set $\mathcal{C} \subset \mathbb{R}$: \emph{a metric space $X$ is quasisymmetrically homeomorphic to $\mathcal{C}$ if and only if it is compact, doubling, uniformly disconnected and uniformly perfect}.
For planar sets, MacManus \cite{MM2} proved a slightly stronger statement: \emph{for a compact set $E\subset \mathbb{R}^2$ there exists a quasisymmetric mapping $F:\mathbb{R}^2 \to \mathbb{R}^2$ with $F(\mathcal{C}) = E$ if and only if $E$ is uniformly perfect and uniformly disconnected.} However, the same is not true in dimensions $n\geq 3$ due to the existence of wild Cantor sets in $\mathbb{R}^n$ that are uniformly perfect and uniformly disconnected \cite[pp. 70--75]{Daverman}. By increasing the dimension by $1$, MacManus' result generalizes to dimensions $n\geq 3$.
\begin{thm}\label{thm:cantor-unif}
For a compact set $E\subset \mathbb{R}^n$ there exists a quasisymmetric map $F:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ with $F(\mathcal{C}) =E$ if and only if $E$ is uniformly perfect and uniformly disconnected.
\end{thm}
One direction of Theorem \ref{thm:cantor-unif} is clear. Namely, if there exists an $\eta$-quasisymmetric homeomorphism $F:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ mapping $\mathcal{C}$ onto $E$, then $E$ is $c$-uniformly perfect and $c$-uniformly disconnected with $c$ depending only on $\eta$. For the converse, we use the fact that there exists a quasisymmetric homeomorphism $f: \mathcal{C} \to E$. Our goal is to extend this mapping quasisymmetrically to $\mathbb{R}^{n+1}$.
Consider the set of finite words $\mathcal{W}$ formed from the letters $\{1,2\}$ and denote by $\emptyset$ the empty word. The length $|w|$ of a word $w$ is the number of letters it contains, with $|\emptyset| = 0$. Define $\mathcal{W}^N$ to be the set of words in $\mathcal{W}$ whose length is exactly $N$. Let $I_{\emptyset} = [0,1]$ and, given $I_w = [a,b]$, let $I_{w1}=[a,a+\frac13(b-a)]$ and $I_{w2} = [b-\frac13(b-a),b]$. For each $w \in \mathcal{W}$, let $\mathcal{C}_w = I_w \cap \mathcal{C}$.
\begin{lem}\label{lem:cantor}
Let $X$ be a metric space and $f:\mathcal{C}\to X$ be an $\eta$-quasisymmetric homeomorphism. There exists $k\in\mathbb{N}$ depending only on $\eta$ with the following property. For any $m\in\mathbb{N}$ there exist sets $\mathscr{E}_1,\dots,\mathscr{E}_k$ whose elements are sets $f(\mathcal{C}_w)$ with $w\in\mathcal{W}^m$ such that
\begin{enumerate}
\item $\mathscr{E}_i \cap \mathscr{E}_j = \emptyset$ when $i\neq j$ and $\bigcup_{i=1}^k\mathscr{E}_i = \{f(\mathcal{C}_w)\colon w\in\mathcal{W}^m\}$;
\item for any $i\in\{1,\dots,k\}$ and any $f(\mathcal{C}_w),f(\mathcal{C}_{w'}) \in \mathscr{E}_i$ with $w\neq w'$ we have
\[\dist(f(\mathcal{C}_w),f(\mathcal{C}_{w'})) \geq 5\max\{\diam{f(\mathcal{C}_w)},\diam{f(\mathcal{C}_{w'})}\}.\]
\end{enumerate}
\end{lem}
\begin{proof}
Set $d = (\eta^{-1}(1/10))^{-1}$. We prove the lemma for $k$ being the integer part of $d^{\log{2}/\log{3}}+1$.
By quasisymmetry of $f$ and (\ref{eq:relQS}), property (2) of the lemma is satisfied if $\dist(\mathcal{C}_w,\mathcal{C}_{w'}) \geq d3^{-m}$. Note that, for each $w\in\mathcal{W}^m$, there exist at most $k$ words $w_1,\dots,w_l \in \mathcal{W}^m$, $l\leq k$, such that $\dist(\mathcal{C}_w,\mathcal{C}_{w_i}) < d 3^{-m}$. Let now $\mathcal{C}'_1,\dots,\mathcal{C}'_{2^m}$ be an enumeration of $\{\mathcal{C}_w\colon w\in\mathcal{W}^m\}$ such that, for all $1\leq i<j\leq 2^m$, $\mathcal{C}_i'$ lies to the left of $\mathcal{C}_j'$. For each $i=1,\dots,k$ define $A_i$ to be the set of integers in $\{1,\dots,2^m\}$ of the form $i+rk$ with $r\in\mathbb{N}\cup\{0\}$ and set $\mathscr{E}_i = \{f(\mathcal{C}_j') \colon j\in A_i\}$. It is now straightforward to verify that the sets $\mathscr{E}_i$ satisfy properties (1) and (2) of the lemma.
\end{proof}
We are now ready to establish Theorem \ref{thm:cantor-unif}.
\begin{proof}[{Proof of Theorem \ref{thm:cantor-unif}}]
Let $E$ be a compact, $c$-uniformly perfect and $c$-uniformly disconnected subset of $\mathbb{R}^n$. By Theorem \ref{thm:UDQS}, there exists an $\eta$-quasisymmetric homeomorphism $f: \mathcal{C} \to E$ with $\eta$ depending only on $n$ and $c$.
The first step of the proof is the construction of a bi-Lipschitz mapping $\Phi:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ that unlinks $E$. The second step is the construction of a quasiconformal mapping $G:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ that maps the unlinked image $\Phi(E)$ onto $\mathcal{C}$. The composition $G\circ\Phi$ is the desired map $F$ of Theorem \ref{thm:cantor-unif}.
Without loss of generality, we assume that $\diam{E}=1$. For the rest of the proof we write $E_w = f(\mathcal{C}_w)$.
Let $k$ be the number obtained by Lemma \ref{lem:cantor} for the set $E$. Let also $N$ be the smallest positive integer such that $3^{-N} \leq \min\{\eta^{-1}(1/16),\eta^{-1}((5k)^{-1})\}$. Then, for any $w,w'\in\mathcal{W}$ with $E_w\cap E_{w'} = \emptyset$ and any $u\in\mathcal{W}^N$, we have
\begin{align}
&\delta'\diam{E_w} \leq\diam{E_{wu}} \leq \delta\diam{E_w}\label{eq:cantor1},\\
&\dist(E_{w},E_{w'}) \geq (\eta(1))^{-1} \max\{\diam{E_{w}},\diam{E_{w'}}\}\label{eq:cantor2}
\end{align}
with $\delta = \min\{\frac1{16},(5k)^{-1}\}$ and $\delta'=(2\eta(3^N))^{-1}$.
Let $\mathscr{E}^0_{1}, \dots,\mathscr{E}^0_k$ be the sets of Lemma \ref{lem:cantor} corresponding to $m=N$. Define $\phi_1: E \to \mathbb{R}$ such that
\[\phi_1|_{E_w}(x) = 5(i-1)\delta \quad\text{ for all }x\in E_w \]
where $w\in\mathcal{W}^N$, $E_{w} \in \mathscr{E}^{0}_i$ and $i=1,\dots,k$. Inductively, suppose that we have defined $\phi_j : E \to \mathbb{R}$ such that $\phi_j|_{E_{w}}$ is constant whenever $w \in\mathcal{W}^{jN}$. For each $w \in \mathcal{W}^{jN}$ let $\mathscr{E}^{w}_1, \dots, \mathscr{E}^{w}_k$ be the sets of Lemma \ref{lem:cantor} corresponding to $E=E_{w}$ and $m=N$. Define $\phi_{j+1}: E \to \mathbb{R}$ such that
\[ \phi_{j+1}|_{E_{wu}} (x) = \phi_j|_{E_{w}}(x) + 5(i-1)\delta\diam{E_{w}} \quad\text{ for all }x\in E_{wu}\]
where $w \in\mathcal{W}^{jN}$, $u\in\mathcal{W}^N$, $E_{wu} \in \mathscr{E}^{w}_i$ and $i=1,\dots,k$. Then, for all $x\in E$, $|\phi_i(x)-\phi_j(x)| \lesssim \delta^{\min\{i,j\}}$. Therefore, the mappings $\phi_j$ converge uniformly to a mapping $\phi: E \to \mathbb{R}$.
We claim that $\phi$ is Lipschitz. Indeed, let $x,y \in E$ and let $m_0\in\mathbb{N}$ be the greatest integer $m$ such that $x,y \in E_w$ with $w\in \mathcal{W}^{mN}$. In particular, suppose that $x,y \in E_{w_0}$ with $w_0 \in \mathcal{W}^{m_0N}$. By (\ref{eq:cantor1}), (\ref{eq:cantor2}) and the maximality of $m_0$,
\[ |\phi(x)-\phi(y)| \leq \diam{E_{w_0}} \leq \eta(1)(\delta')^{-1}|x-y|. \]
and the claim follows.
Fix $x_0\in E$, let $B_0 = B^n(x_0,5\diam{E})$ and set $\phi|_{\mathbb{R}^n\setminus B_0} \equiv 0$. Then, the map
\[ \phi: (\mathbb{R}^n \setminus B_0) \cup E \to \mathbb{R}\]
is $L$-Lipschitz for some $L$ depending only on $\eta$ and, by the Kirszbraun theorem, there exists an $L$-Lipschitz extension of $\phi$ to $\mathbb{R}^n$, which we also denote by $\phi$. Then, the mapping $\Phi:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ defined by $\Phi(x,z) = (x,\phi(x)+z)$ is $L'$-bi-Lipschitz with $L'$ depending only on $L$.
For each $m=0,1,\dots$ and each $w\in \mathcal{W}^{mN}$ fix $x_w \in E_{w}$ and set
\[\mathsf{K}_w = x_w + [-2\diam{E_w}, 2\diam{E_w}]^n.\]
Note that if $w \in \mathcal{W}^{mN}$ and $u\in \mathcal{W}^N$, then $\mathsf{K}_{wu} \subset \mathsf{K}_{w}$ and $\dist(\mathsf{K}_{wu}, \partial \mathsf{K}_{w}) \geq \frac12\diam{E_w}$. However, if $w,w' \in \mathcal{W}^{mN}$ are distinct, then $\mathsf{K}_{w}$ may intersect $\mathsf{K}_{w'}$ which is why we lift different sets to different heights. For each $m=0,1,\dots$ and each $w\in\mathcal{W}^{mN}$ define
\[ \mathcal{K}_w = \mathsf{K}_w\times[\phi_m(x_w)-2\diam{E_w},\phi_m(x_w)+2\diam{E_w}]. \]
From the definition of the functions $\phi_m$, it follows that for all $m\in\mathbb{N}$, for all distinct $w, w' \in \mathcal{W}^{mN}$ and for all $u\in \mathcal{W}^{N}$,
\begin{align}
&\dist(\mathcal{K}_w,\mathcal{K}_{w'}) \geq \max\{\diam{E_w},\diam{E_{w'}}\}\label{eq:cantor4};\\
&\mathcal{K}_{wu} \subset \mathcal{K}_{w} \text{ and }\dist(\mathcal{K}_{wu}, \partial \mathcal{K}_{w}) \geq \frac12\diam{E_w};\label{eq:cantor5}\\
&\mathcal{K}_{w} \cap \Phi(E) = \Phi(E_w) \text{ and } \dist(\Phi(E_w),\partial\mathcal{K}_w) \geq \frac12\diam{E_w}.\label{eq:cantor6}
\end{align}
For each $m=0,1,\dots$ and $w\in\mathcal{W}^{mN}$, let $z_w$ be the centre of $I_w$ and define
\[ \mathcal{Q}_w = [z_w-\frac56 3^{-mN}, z_w+\frac563^{-mN}]\times [-\frac56 3^{-mN}, \frac563^{-mN}]^n.\]
For each $w\in\mathcal{W}^{mN}$, let $g_w :\partial\mathcal{K}_{w}\to\partial\mathcal{Q}_{w}$ be a sense-preserving similarity map. By Proposition \ref{prop:ext-ndim}, there exists $\Lambda>1$ depending only on $\eta$, and there exists $G:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ such that
\begin{enumerate}
\item $G$ is the identity outside of $B_0$ and
\item for all $w\in\mathcal{W}^{mN}$, the restriction of $G$ on $\mathcal{K}_{w}\setminus (\bigcup_{u\in\mathcal{W}^N}\mathcal{K}_{wu})$ extends $g_w$ and is a $(\frac{\diam{\mathcal{Q}_w}}{\diam{\mathcal{K}_w}},\Lambda)$-quasisimilarity that maps $\mathcal{K}_{w}\setminus (\bigcup_{u\in\mathcal{W}^N}\mathcal{K}_{wu})$ onto $\mathcal{Q}_{w}\setminus (\bigcup_{u\in\mathcal{W}^N}\mathcal{Q}_{wu})$.
\end{enumerate}
Therefore, by a theorem of V\"ais\"al\"a on removability of singularities \cite[Theorem 35.1]{Vais1}, $G$ is $K$-quasiconformal with $K$ depending only on $\eta$ and $n$. Set $F = G\circ\Phi$ and note that $F$ extends $f$ and that $F(E_w) = \mathcal{C}_w$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:cantor}}
Let $E\subset \mathbb{R}^n$ be closed and $c$-uniformly disconnected. By \textsection\ref{sec:isolated} and \textsection\ref{sec:unbounded}, the proof is reduced to the case that $E$ is compact and perfect. Hence, by Brouwer's topological characterization of Cantor sets \cite[Theorem 7.4]{Kechris} we may assume that $E$ is a Cantor set.
Suppose first that $E$ is $c$-uniformly perfect and that $f:E \to \mathbb{R}^n$ is $\eta$-quasisymmetric. By Theorem \ref{thm:cantor-unif}, we may assume that $f:\mathcal{C} \to \mathcal{C}$. By Theorem \ref{thm:main} and the tameness of planar totally disconnected sets \cite[\textsection10]{Moisebook}, $f$ extends to an $\eta_1$-quasisymmetric homeomorphism $F_1:\mathbb{R}^2 \to \mathbb{R}^2$ with $\eta_1$ depending only on $\eta$. By the Tukia-V\"ais\"al\"a extension theorem \cite{TukVais}, $F_1$ extends to an $\eta'$-quasisymmetric $F:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ with $\eta'$ depending only on $\eta$ and $n$.
Suppose now that $f : E \to \mathbb{R}^n$ is $L$-bi-Lipschitz. Denote $E' = f(E)$ and $E_w' = f(E_w)$. By choosing $N$ sufficiently large in the proof of Theorem \ref{thm:cantor-unif}, we may assume that the right inequality of (\ref{eq:cantor1}) and the inequality (\ref{eq:cantor2}) are satisfied for both $E$ and $E'$. Following the construction of $\Phi$, we can construct cubes $\mathcal{K}'_w$, corresponding to the sets $E_w'$ with $w \in \mathcal{W}^{mN}$, and an $L_2$-bi-Lipschitz map $\Phi' : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ that satisfy properties (\ref{eq:cantor4}), (\ref{eq:cantor5}) and (\ref{eq:cantor6}).
For each $w\in\mathcal{W}^{mN}$, let $g_w :\partial\mathcal{K}_{w}\to\partial\mathcal{K}'_{w}$ be a similarity mapping. By Proposition \ref{prop:ext-ndim}, there exist $\lambda>0$ and $\Lambda>1$ depending only on $c$ and $L$ and there exists $G:\mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ such that
\begin{enumerate}
\item $G$ is the identity outside of $B_0$ and
\item for all $w\in\mathcal{W}^{mN}$, the restriction of $G$ on $\mathcal{K}_{w}\setminus (\bigcup_{u\in\mathcal{W}^N}\mathcal{K}_{wu})$ extends $g_w$ and is a $(\lambda,\Lambda)$-quasisimilarity that maps $\mathcal{K}_{w}\setminus (\bigcup_{u\in\mathcal{W}^N}\mathcal{K}_{wu})$ onto $\mathcal{K}'_{w}\setminus (\bigcup_{u\in\mathcal{W}^N}\mathcal{K}'_{wu})$.
\end{enumerate}
Therefore, $G$ is BLD and, by Lemma \ref{lem:BLD}, $G$ is $L_3$-bi-Lipschitz for some $L_3$ depending only on $L$. Thus, $F = (\Phi')^{-1}\circ G\circ\Phi$ extends $f$ and is $L'$-bi-Lipschitz for some $L'$ depending only on $L$ and $c$.
\bibliographystyle{alpha}
\section{Introduction}
Electric Vehicles (EVs) have several advantages over traditional gasoline-powered vehicles. For example, EVs are more environmentally friendly and more energy efficient. Thus, regulators (e.g., the Federal Energy Regulatory Commission (FERC)) are providing incentives to consumers to switch to electric vehicles. However, the successful deployment of charging stations crucially depends on the profit of the charging stations and on how efficiently their resources are used for charging the electric vehicles. Without profitable charging stations, the wide deployment of EVs will remain a distant dream. On the other hand, because of the environment-friendly nature of electric vehicles, it is also important for the regulators to increase the user (or, consumer) surplus to provide an incentive for the users. Hence, selecting a price is an imperative issue for the charging stations. The charging station may have limited charging spots or renewable energy harvesting devices. Hence, intelligent allocation of resources among the EVs is a key component for fulfilling the potential of EV deployment.
We propose a menu-based pricing scheme for charging an EV. Whenever an EV arrives, the charging station offers a variety of contracts $(l,t_{dead})$ at price $p_{l,t_{dead}}$, where the user will be able to use up to $l$ units of energy within the deadline $t_{dead}$. The EV user either accepts one of the contracts by paying the specified price or rejects all of them, based on her payoff. We assume that the user obtains a utility for consuming $l$ units of energy within the deadline $t_{dead}$. The payoff of the user (or, user's surplus) for a contract is the difference between the utility and the price paid for that contract. The user selects the option which fetches the highest payoff.
The above pricing scheme has several advantages. First, it is an online pricing scheme and can be adapted for each arriving user. Second, since the charging station offers prices for different levels of energy and different deadlines, the charging station can prioritize one contract over the others depending on the energy resources available. Favorable prices for shorter deadlines can attract users to vacate the charging station early and use it only when necessary. Third, the user's decision is much simplified: she only needs to select one of the contracts (or reject all) and will receive the prescribed amount of energy within the prescribed deadline. Fourth, the pricing mechanism is inherently {\em individually rational}\footnote{Individual rationality means that the user selects one of the contracts only if it gives her a non-negative payoff.}, {\em incentive compatible}\footnote{Incentive compatibility means that the user selects the contract which gives her the optimal payoff.}, and {\em truthful}\footnote{Truthfulness implies that the users cannot achieve a higher payoff by delaying their arrivals or selecting a sub-optimal strategy. Hence, the pricing mechanism is robust against the strategy selection of the users.}.
We consider that the charging station is equipped with renewable energy harvesting devices and a storage device for storing energy. The charging station may also buy conventional energy from the market to fulfill the contract of the user if required. Hence, if a new user accepts the contract $(l,t_{dead})$, a cost is incurred by the charging station. This cost may also depend on the existing EVs and their resource requirements. Hence, the charging station needs to find the optimal cost for fulfilling each contract. We show that obtaining this cost is equivalent to solving a {\em linear programming} problem.
We consider two optimization problems--i) social welfare\footnote{Social welfare is the sum of the profit of the charging station and the user surplus.} maximization, and ii) the EV charging station's profit maximization. We first propose a pricing scheme which maximizes the social welfare {\em irrespective of whether the charging station is aware of the utilities of the users}. The pricing scheme is simple to compute: the charging station selects a price equal to the marginal cost of fulfilling a given contract for a new user (Theorem~\ref{thm:pricestrategy}, Corollary~\ref{thm:price_expected}). However, this pricing scheme provides {\em zero} profit to the charging station and thus may not be attractive to it. We show that when the charging station is {\em clairvoyant} (i.e., it knows the utilities of the users), there exists a pricing scheme which satisfies both objectives (Theorem~\ref{thm:profit_max}). Under this pricing mechanism, however, the user's surplus becomes $0$. Thus, a {\em clairvoyant} charging station may not be beneficial for the user's surplus.
The charging station may not know the exact utilities of the users; however, it may know the distribution function\footnote{We do not put any assumption on the distribution function.} from which they are drawn. We investigate the existence of a pricing mechanism which maximizes the ex-post social welfare, i.e., maximizes the social welfare for every possible realization of the utility function. When the charging station does not know the exact utilities of the users, we show that there {\em may not exist a pricing strategy which simultaneously maximizes the ex-post social welfare and the expected profit}. One has to give up ex-post social welfare maximization in order to achieve expected profit maximization. Thus, unlike in the clairvoyant case, there may not exist a pricing strategy which simultaneously satisfies both objectives when the exact utilities are unknown. We propose a pricing strategy which fetches the highest possible profit to the charging station under the condition that it maximizes the ex-post social welfare (Theorem~\ref{thm:profitmax_uncertainty}). This pricing strategy provides a {\em worst case} maximum profit to the charging station. We show that such a pricing strategy can fetch a higher profit when the charging station can harvest a large amount of renewable energy. However, the profit only increases up to a certain threshold, beyond which additional harvested energy has no effect on the profit.
Since the above pricing strategy may not yield the {\em maximum expected} profit to the charging station, we have to relax the constraint that the social welfare be maximized in order to yield a higher profit. Whether a contract will be selected by the user depends not only on the price of that contract, but also on the prices of the other contracts. Thus, finding a pricing scheme which maximizes the expected profit is difficult because of the discontinuous nature of the profit. We propose a pricing strategy which yields a fixed (say, $\beta$) amount of profit to the charging station. We show that this pricing strategy also maximizes the social welfare with the desired probability for a suitable choice of $\beta$ (Theorem~\ref{thm:approx}). Hence, such a pricing scheme is also attractive to regulators. Further, we show that a suitable choice of $\beta$ can maximize the profit of the charging station for a class of utility functions (Theorem~\ref{thm:aclassutility}).
Finally, we empirically provide insights into how a trade-off between the profit of the charging station and the social welfare can be achieved for various pricing schemes (Section~\ref{sec:simulation_results}). We also show how our pricing scheme achieves greater utilization of the resources and requires fewer charging spots compared to existing schemes.
{\em The proofs are deferred to the technical report\cite{tech_ev} owing to the space constraint.}
\textbf{Related Literature}: {\em To the best of our knowledge, this is the first attempt to consider contract-based online pricing for controlling both the energy and the deadline of EVs.} Other pricing mechanisms have been proposed to control the charging pattern of EVs in a {\em residential} setting in a day-ahead market \cite{soltani,oren,javidi,tvt,kar,zou}. In contrast to residential charging, in a commercial or workplace charging station, users do not control the charging of the car at each instant; they need a certain amount of energy within a deadline. In our proposed mechanism, the charging station selects different prices for different options and the user only needs to select a contract; she does not need to control the charging pattern at each instant.
Optimal pricing for a day-ahead demand response program has been studied \cite{sen,low2}. However, we need an online pricing mechanism at the EV charging station. Since the charging station is unaware of the future arrivals of the users and has limited renewable energy, determining the optimal pricing in such a setting is more challenging. Menu-based pricing is an online pricing mechanism and can enhance the efficient usage of the resources by controlling the deadline. Unlike in a demand response program, under our menu-based pricing the users also do not need to control their demand at each instant.
\cite{parkes} proposed an online VCG auction mechanism. However, in \cite{parkes} the user's payment is determined at the end of the day, so users do not know beforehand how much they will have to pay; hence, it may not be preferred by users. In contrast, in our mechanism the users select one of the contracts and pay the prescribed price upfront. In \cite{tong,qhuang,xu_cdc,yu_allerton}, online scheduling algorithms have been proposed for charging EVs. The main focus of these papers was scheduling; they did not consider the optimal pricing approach for the charging station, which we do. Further, unlike in \cite{tong,qhuang,xu_cdc,yu_allerton}, in our menu-based pricing scheme the charging station can control the energy requirement and the deadline of the users by selecting the prices offered to them; hence, greater flexibility can be achieved. Additionally, \cite{tong,yu_allerton} did not guarantee that the energy demand will be fulfilled. In contrast, in our model once a user opts for an option, the EV charging station always fulfills the user's request.
In deadline differentiated pricing \cite{bitar,bitar2,nayyar,Salah2016}, each user's total energy consumption is fixed but the user can specify the deadline. In contrast, in our proposed menu-based pricing mechanism each user can jointly choose {\em any pair} of energy level and deadline. Deadline differentiated pricing is suited for a day-ahead offline setting where an equilibrium price is attained for a specific set of pre-determined user decisions. However, the users' utilities, and thus their optimal decisions, may change in real time, so deadline differentiated pricing may not be suitable for an online setting. Our menu-based pricing approach is online: the price menu is adapted for each arriving user. Further, \cite{nayyar} assumed that the price setter knows the utilities of the users.
\cite{bitar, Salah2016} assumed that the utility functions are strictly concave, and \cite{bitar2} imposed some restrictions on the utility functions to achieve optimality. Such assumptions are not necessary for our approach.
\section{Model}
We consider a charging station which wants to select a pricing strategy in order to maximize its payoff over a certain period of time $T$ (e.g., one day). Suppose that user $k=1,\ldots, K$ arrives at the charging station at time $t_k$. The charging station offers a price menu or contract $p_{k,l,t}$ to user $k$ for each energy level $l\in \{1,\ldots, L\}$ and deadline $t\in \{t_{k}+1,\ldots, T\}$ (Fig.~\ref{fig:ev_charging}).\footnote{In Section V, we show that if the supports of the distributions of the users' utilities have the same lower end-point, our proposed menu-based price mechanism essentially becomes equivalent to a time-of-use price mechanism.} Needless to say, time and energy can be discretized at any desired granularity\footnote{Since it is an online mechanism, we can easily extend it to a fixed maximum deadline scenario for EVs by extending the time horizon.}; however, the computational cost will increase accordingly. User $k$ decides $l$ and $t$ based on the menu; if she accepts an option on the menu, she pays the prescribed price $p_{k,l,t}$. The user can also decide not to accept any option (Fig.~\ref{fig:ev_charging}). The EV need not be charged continuously, i.e., preemption is allowed: after a preemption, charging resumes from the previous battery level.
\begin{figure*}
\begin{minipage}{0.49\linewidth}
\includegraphics[trim=1in 0.5in 1in 0in,width=0.99\textwidth]{ev_charging.pdf}
\caption{The trading model: Charging station offers a menu of contracts, the arriving user decides either one of them or rejects all.}
\label{fig:ev_charging}
\vspace{-0.2in}
\end{minipage}\hfill
\begin{minipage}{0.49\linewidth}
\includegraphics[trim=0.5in 0.7in 1.2in 0in, width=0.99\textwidth]{hybrid_source.pdf}
\caption{The hybrid energy source with a limited capacity Battery.}
\label{fig:hybrid_source}
\vspace{-0.2in}
\end{minipage}
\end{figure*}
\subsection{User's utilities}
If user $k$ selects the price option $p_{k,l,t}$, she gets a utility $u_{k,l,t}$. Hence, her {\em surplus} or payoff is $u_{k,l,t}-p_{k,l,t}$. If the user rejects all options, her utility is $0$ (Fig.~\ref{fig:ev_charging}). We assume that the realized value $u_{k,l,t}$ is drawn from the distribution of the random variable $U_{k,l,t}$. The random variables $U_{k,l,t}$ need not be independent; in fact, they can be generated from a joint distribution. In practice, the utilities are correlated across deadlines and charging amounts. For example, $U_{k,l_1,t}\geq U_{k,l,t}$ if $l_1>l$, since a higher amount of energy for a fixed deadline should give a user higher utility. Similarly, $U_{k,l,t_1}\leq U_{k,l,t_2}$ if $t_1>t_2$, since for the same amount of charge a user may prefer the shorter deadline, as it gives her more flexibility. On the other hand, a user who wants to park for a long time may not mind a longer deadline. Thus, we make no a priori assumptions on the utility functions, since they can differ across users. We assume that the car vacates the charging spot once it exceeds its prescribed deadline\footnote{If the user cannot take away her car, the charging spot is automatically downgraded to a mere parking spot, i.e., without any charging facility.}.
\subsection{The Charging Station}
\subsubsection{Hybrid Energy Source}
We assume that the charging station can obtain energy to fulfill the charging requests from both renewable and conventional sources (Fig.~\ref{fig:hybrid_source}).
The charging station can buy conventional energy $q_t$ at a price $c_t$ for usage during the interval $[t,t+1)$. We do not assume any specific pricing scheme for buying conventional energy; however, we assume that $c_t$ is known. If real-time pricing is used, we take $c_t$ to be the expected real-time price at time $t$.
The charging station is also equipped with an energy harvesting device and a storage device of capacity $B_{max}$ (Fig.~\ref{fig:hybrid_source}). The harvesting device harvests $\bar{E}^{t}$ units of energy during $[t,t+1)$. We assume that the marginal cost of harvesting renewable energy is $0$. Let $r_t$ be the amount of energy that the charging station draws from the storage during $[t,t+1)$.
\subsubsection{System Constraints}
The charging station must deliver $l$ units of energy to user $k$ by time $d_k$ if the user accepts the price menu $p_{k,l,d_k}$. Let $q_{k,t}$ be the conventional energy and $r_{k,t}$ the energy from the storage device used by the charging station to charge user $k$ during $[t,t+1)$. Then, we must have the following constraint
\vspace{-0.07in}
\begin{eqnarray}
\sum_{t=t_k}^{d_k-1}(r_{k,t}+q_{k,t})\geq l \label{eq:setofaccepted}
\end{eqnarray}
Suppose that a set of users $\mathcal{K}_0$ is already present in the charging station at time $t_k$. Each user $i\in \mathcal{K}_0$ has a deadline $w_i$ and a remaining demand $N_i$. The charging station must satisfy the demand of those users as well. Thus, the charging station must also satisfy
\vspace{-0.07in}
\begin{eqnarray}
\sum_{t=t_k}^{w_i-1}(r_{i,t}+q_{i,t})\geq N_i,\quad\forall i\in \mathcal{K}_0\label{eq:deadline}
\end{eqnarray}
We also assume that the charging station has one type of charging equipment (either slow or fast charging) with a maximum rate constraint $R_{max}$. Hence,
\vspace{-0.07in}
\begin{align}\label{eq:rate_constraint}
r_{k,t}+q_{k,t}\leq R_{max}, \quad r_{i,t}+q_{i,t}\leq R_{max}\quad \forall i\in \mathcal{K}_0,\ \forall t.
\end{align}
Since $r_t$ is the total energy drawn from the storage device for charging and $q_t$ is the total conventional energy bought from the market, we have
\vspace{-0.07in}
\begin{align}
r_{k,t}+\sum_{i\in \mathcal{K}_0}r_{i,t}=r_t, \quad q_{k,t}+\sum_{i\in \mathcal{K}_0}q_{i,t}\leq q_t\quad \forall t\label{eq:energy}
\end{align}
Note that the charging station may store the unused conventional energy bought from the market, i.e., $q_t-q_{k,t}-\sum_{i\in \mathcal{K}_0}q_{i,t}$ is stored in the storage device. The charging station may buy additional conventional energy at time $t$ if the future prices are higher.
Let $B^{t+1}$ be the battery level at time $t+1$ and $B_{0}$ the initial battery level. The charging station wants the battery level at the end of the day to equal $B_{0}$ as well; if the final battery level need not match the initial level, our pricing approach is easily extended to that scenario. If the battery cannot hold the excess energy, it is wasted. Let $D_t\geq 0$ denote the energy wasted during $[t, t+1)$;\footnote{It is also straightforward to extend our setting to the case where the charging station can sell the excess energy $D_t$ to the grid.} then
\begin{align}
B^{t+1}= B^{t}+\bar{E}^{t}-r_t+q_t-q_{k,t}-\sum_{i\in \mathcal{K}_0}q_{i,t}-D_t\nonumber\\
0\leq B^{t+1}\leq B_{max}, B^{t_k}=B_{0}\quad B^{T}=B_{0}\label{eq:batterylevel}.
\end{align}
The constraints in (\ref{eq:energy}) and (\ref{eq:batterylevel}) bound the maximum amount of energy that can be used for charging.
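For intuition, the storage update in (\ref{eq:batterylevel}) can be sketched as follows. This is a minimal illustration with our own variable names, not part of the model; \texttt{q\_surplus} stands for the stored surplus $q_t-q_{k,t}-\sum_{i\in\mathcal{K}_0}q_{i,t}$.

```python
def battery_step(B_t, E_harv, r_t, q_surplus, B_max):
    """One step of the storage dynamics.

    B_t       : current battery level B^t
    E_harv    : harvested energy during [t, t+1)
    r_t       : energy drawn from the storage during [t, t+1)
    q_surplus : unused conventional energy stored in the battery
    Energy exceeding B_max is wasted (D_t >= 0).
    Returns the next level B^{t+1} and the waste D_t.
    """
    level = B_t + E_harv - r_t + q_surplus
    D_t = max(0.0, level - B_max)   # wasted energy once the battery is full
    B_next = level - D_t
    assert B_next >= 0.0, "drawing more than the available energy is infeasible"
    return B_next, D_t
```

For example, with $B^t=5$, $\bar{E}^t=3$, $r_t=1$, no stored surplus and $B_{max}=6$, one unit is wasted and the battery ends full.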
\section{Problem Formulation}
The profit of the charging station inherently depends on whether the user accepts a menu or not. Hence, before describing how the charging station selects $p_{k,l,t}$ for user $k$, we delve into the decision process of the users.
\subsection{User's decision}\label{sec:userdecision}
A user selects at most one of the price menus in order to maximize her payoff or surplus. We assume that the user is a {\em price taker}. Thus, for a menu of prices $p_{k,l,t}$, user $k$ {\em selects} $A_{k,l,t}\in [0,1]$ so as to maximize the following
\begin{align}\label{eq:utility1}
& \text{maximize} \sum_{l=1}^{L}\sum_{t=t_k+1}^{T}A_{k,l,t}( u_{k,l,t}-p_{k,l,t})\nonumber\\
& \text{subject to } \sum_{l=1}^{L}\sum_{t=t_k+1}^{T}A_{k,l,t}\leq 1
\end{align}
Note from the formulation in (\ref{eq:utility1}) that the maximum is achieved with $A_{k,l,t}=1$ for the contract which maximizes user $k$'s payoff (i.e., $\max_{i,j}\{u_{k,i,j}-p_{k,i,j}\}=u_{k,l,t}-p_{k,l,t}$) and $A_{k,l,t}=0$ otherwise. If such a solution is not unique, any convex combination of these solutions is also optimal, since the user can select any of the maximum-payoff contracts. We denote the decision by $A_{k,l,t}(\mathbf{p}_{k})$. Note that the decision whether to accept the menu $p_{k,l,t}$ depends not only on the price $p_{k,l,t}$ but also on the other prices $p_{k,i,j}$, $i\in \{1,\ldots, L\}$, $j\in \{t_k+1,\ldots, T\}$, as the user only selects the price menu which is most favorable to her. If the maximum payoff that the user gets among all the price menus (or contracts) is negative, then the user does not charge, i.e., $A_{k,l,t}=0$ for all $l$ and $t$. We also assume that if there is a tie between charging and not charging, the user decides to charge, i.e., if the maximum payoff the user can get is $0$, she charges.\footnote{Our result readily extends to the other convention; in that case the price strategies given in this paper have to be decreased by an $\epsilon>0$ amount.}
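The decision rule in (\ref{eq:utility1}) can be sketched as follows. This is a minimal illustration with our own names: contracts are keyed by $(l,t)$, a zero payoff is resolved in favor of charging, and ties between contracts are broken arbitrarily (any optimum is acceptable).

```python
def choose_contract(utilities, prices):
    """User's decision A_{k,l,t}: return the key (l, t) of the contract
    with the highest payoff u - p if that payoff is non-negative,
    otherwise None (the user rejects all contracts and does not charge).
    """
    best_key, best_payoff = None, -1.0
    for key, u in utilities.items():
        payoff = u - prices[key]
        # accept only non-negative payoffs; keep the largest one
        if payoff >= 0.0 and payoff > best_payoff:
            best_key, best_payoff = key, payoff
    return best_key
```

Note that the returned choice changes when any one price in the menu changes, which is exactly the coupling discussed above.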
\vspace{-0.1in}
\subsection{Myopic Charging Station}
Since users arrive with charging requests at any time throughout the day, the charging station does not know the exact arrival times of future vehicles. We consider a charging station that is myopic or near-sighted, i.e., it selects its price for user $k$ without considering the future arrival process of the vehicles. However, it does consider the cost incurred to charge the existing EVs. Note that as the number of existing users increases, the marginal cost of fulfilling a contract for an arriving user can increase; hence, such a pricing strategy may not maximize the payoff in the long run. We later show that {\em the myopic pricing strategy is optimal when the marginal cost of fulfilling a new user's demand is independent of the number of existing users.}
In practice, the charging station often has a fixed number of charging spots; thus, it may want to set high prices for user $k$ in order to keep spots available for users who can pay more but will only arrive in the future\footnote{This consideration is left for future work.}. However, such a pricing strategy violates the first-come-first-served basis which is the current norm for charging a vehicle. Our approach considers a fair allocation process, where the charging station serves users on a first-come-first-served basis. Later, in Section~\ref{sec:simulation_results}, we show that since the charging station can control the time spent by an EV through pricing, our approach requires fewer charging spots compared to the existing pricing mechanism.
\subsection{Charging Station's Decisions and Cost}\label{sec:chargingstationdecision}
Note that if user $k$ accepts the menu $(l,d_k)$, then the charging station needs to allocate resources among the EVs so as to minimize the total cost of fulfilling the contracts.
First, we introduce some notations which we use throughout.
\vspace{-0.04in}
\begin{definition}\label{defn:vlt}
The charging station has to incur the cost $v_{l,d_k}$ for fulfilling the contracts of existing customers and the contract $(l,d_k)$ of the new user $k$, where $v_{l,d_k}$ is the value of the following linear optimization problem:
\vspace{-0.05in}
\begin{eqnarray}\label{eq:vlt}
\mathcal{P}_{l,d_k}:& \text{min } \sum_{t=t_k}^{T-1}c_tq_t\nonumber\\
& \text{subject to } (\ref{eq:setofaccepted}),(\ref{eq:deadline}),(\ref{eq:rate_constraint}),(\ref{eq:energy}), (\ref{eq:batterylevel})\nonumber\\
& \text{var: } r_{k,t}, q_{k,t}, q_t, r_t, D_t\quad \geq 0
\end{eqnarray}
\end{definition}
Note that our model can also incorporate time-varying, strictly increasing convex costs $C_t(\cdot)$\footnote{Our model can also incorporate a constraint on the maximum amount of conventional energy that can be bought from the market at a given time.}. Since $\mathcal{P}_{l,d_k}$ is a linear optimization problem, $v_{l,d_k}$ is easy to compute. Also note that if the above problem is infeasible for some $l$ and $t$, we set $v_{l,t}=\infty$. We assume that the prediction $\bar{E}^{t}$ is perfect for all future times and known to the charging station\footnote{The menu-based pricing approach can be extended to the setting where the estimated generation does not match the exact amount. First, we can take a conservative approach where $\bar{E}^t$ is the worst possible renewable energy generation. Alternatively, we can enumerate various possible scenarios of renewable energy generation and find the cost of fulfilling a contract for each scenario. For example, if there are $M$ possible instances $\bar{E}^{m,t}$, $m\in \{1,\ldots, M\}$, of the future renewable energy generation, we can find the optimal cost for each instance and take the average (or the weighted average, if some instances are more probable) of the optimal costs as the cost of fulfilling a given contract.}.
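For intuition, consider a toy special case of $\mathcal{P}_{l,d_k}$ with no renewable generation, no storage, and no existing EVs: the LP reduces to $\min \sum_t c_t q_t$ subject to $\sum_t q_t\geq l$ and $0\leq q_t\leq R_{max}$, whose optimum simply fills the cheapest slots first. The sketch below is our own simplification of this special case, not the general LP (which a standard LP solver would handle).

```python
def v_cost(l, c, R_max):
    """Cost v_{l,d} in the toy special case: buy l units of conventional
    energy over the slots with prices c, at most R_max per slot, by
    filling the cheapest slots first.  Returns float('inf') if the
    demand is infeasible within the horizon (cf. v_{l,t} = infinity).
    """
    if l > R_max * len(c):
        return float('inf')
    cost, remaining = 0.0, l
    for price in sorted(c):          # cheapest slots first
        buy = min(R_max, remaining)
        cost += price * buy
        remaining -= buy
        if remaining <= 0:
            break
    return cost
```

In this single-user toy case $v_{-k}=0$, so the marginal cost $v_{l,d_k}-v_{-k}$ of the new contract is just the value returned above.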
\vspace{-0.04in}
\begin{definition}\label{defn:v-k}
Let $v_{-k}$ be the amount that the charging station has to incur to satisfy the requirements of the existing EVs if the new user does not opt for any of the price menus.
\end{definition}
If user $k$ does not accept any price menu, the charging station still needs to satisfy the demand of the existing users, i.e., it must solve the problem $\mathcal{P}_{l,t}$ with $q_{k,t}=r_{k,t}=0$; $v_{-k}$ is the value of that optimization problem. Thus, from Definitions~\ref{defn:vlt} and \ref{defn:v-k}, we can interpret $v_{l,d_k}-v_{-k}$ as the additional, or marginal, cost to the charging station when user $k$ accepts the price menu $p_{k,l,d_k}$. It is easy to discern that {\em $v_{l,d_k}-v_{-k}$ is non-negative for any $d_k$ and $l$.}
\subsection{Profit of the charging station}\label{sec:profit}
Now, we discuss the profit of the charging station based on its pricing strategies. Note that if all the spots are occupied, the charging station cannot accommodate a new user. Thus, we consider the scenario where a charging spot is available.
\subsubsection{Pricing with Perfect Foresight}
First, we consider the scenario where the charging station has perfect foresight of the user's utility, i.e., the charging station is clairvoyant. Note that if user $k$ selects the price menu $p_{k,l,t}$, the charging station incurs cost $v_{l,t}$ (Definition~\ref{defn:vlt}), i.e., an additional cost of $v_{l,t}-v_{-k}$ (Definition~\ref{defn:v-k}). Thus, the profit of the charging station is
\vspace{-0.05in}
\begin{eqnarray}\label{eq:profit_maximum}
\sum_{l=1}^{L}\sum_{t=t_k+1}^{T}(p_{k,l,t}-v_{l,t}+v_{-k})A_{k,l,t}(\mathbf{p}_{k})
\end{eqnarray}
Note that here the charging station selects the prices $p_{k,l,t}$ to maximize its own profit; $A_{k,l,t}>0$ only if $p_{k,l,t}$ gives the highest payoff to user $k$, as discussed in Section~\ref{sec:userdecision}.
\subsubsection{Prediction Based Pricing}
In practice, the charging station may not know the exact realization of the users' utility functions and can only use predictions of the utilities to select the price menu. Here, we consider such a scenario.
We assume that the charging station knows the statistics of the users' utilities. Let $R_{k,l,t}$ be the event that the price menu $p_{k,l,t}$ is selected; hence, the expected profit of the charging station is
\vspace{-0.05in}
\begin{align}\label{eq:profitmax_expec}
& \sum_{l=1}^{L}\sum_{t=t_k+1}^{T}\mathbb{E}[(p_{k,l,t}-v_{l,t}+v_{-k})\mathbbm{1}_{R_{k,l,t}}]\nonumber\\
& =\sum_{l=1}^{L}\sum_{t=t_k+1}^{T}(p_{k,l,t}-v_{l,t}+v_{-k})\Pr(R_{k,l,t})
\end{align}
The indicator $\mathbbm{1}_{R_{k,l,t}}$ in (\ref{eq:profitmax_expec}) denotes the event that contract $(l,t)$ is chosen by user $k$.
The expectation is taken over the joint distribution of $U_{k,l,t}$. {\em The expected profit maximization problem for the charging station is to maximize the above objective over} $p_{k,l,t}$.
We assume that the utilities are distributed according to some continuous distribution\footnote{However, it can be easily extended to the discontinuous distribution case.}. Hence, $\Pr(R_{k,l,t})$ is given by
\vspace{-0.05in}
\begin{align}
\Pr(R_{k,l,t})= \Pr(U_{k,l,t}-p_{k,l,t}\geq (\max_{i,j}\{ U_{k,i,j}-p_{k,i,j}\})^{+})\nonumber
\end{align}
Thus, $\Pr(R_{k,l,t})$ depends not only on $p_{k,l,t}$, but also on the prices of the other menus.
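Since $\Pr(R_{k,l,t})$ has no closed form for a general joint distribution, it can be estimated by Monte Carlo: sample the utilities jointly, let the user pick the highest non-negative payoff, and count the wins of each contract. The sketch below uses our own names, and the uniform sampler in the usage example is only an illustrative distribution.

```python
import random

def selection_probs(prices, sampler, n=20000, seed=0):
    """Estimate Pr(R_{k,l,t}) for every contract in `prices` by
    simulating the user's choice on n sampled utility profiles.
    `sampler(rng)` returns a dict {contract_key: sampled utility}.
    """
    rng = random.Random(seed)
    counts = dict.fromkeys(prices, 0)
    for _ in range(n):
        u = sampler(rng)
        best_key, best_payoff = None, -1.0
        for key in prices:
            payoff = u[key] - prices[key]
            if payoff >= 0.0 and payoff > best_payoff:
                best_key, best_payoff = key, payoff
        if best_key is not None:
            counts[best_key] += 1
    return {key: counts[key] / n for key in prices}
```

For instance, with uniform $U(0,1)$ utilities and prices $\{0.5, 2.0\}$, the first contract is chosen roughly half the time and the second never; lowering the second price would change the first probability as well, illustrating the coupling between menu entries.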
\subsection{Objectives}
We consider that the charging station decides the price menus in order to fulfill one of the following two objectives (or both)--i) social welfare maximization, and ii) its own profit maximization.
\subsubsection{Social Welfare}
The social welfare is the sum of the user surplus and the profit of the charging station. As discussed in Section~\ref{sec:userdecision}, for given realized values $u_{k,l,t}$, if user $k$ selects the price menu $p_{k,l,t}$, her surplus is $u_{k,l,t}-p_{k,l,t}$; otherwise, it is $0$.
As discussed in Section~\ref{sec:profit}, the profit of the charging station is $p_{k,l,t}-v_{l,t}+v_{-k}$ for a given price $p_{k,l,t}$ if the user selects the menu; otherwise, it is $0$. Hence, the social welfare maximization problem is to select the price menu $p_{k,l,t}$ which maximizes the following
\begin{align}\label{eq:socialwelfare_maxproblem}
\mathcal{P}_{perfect}: & \text{maximize} \sum_{l=1}^{L}\sum_{t=t_k+1}^{T}(u_{k,l,t}-v_{l,t}+v_{-k})A_{k,l,t}(\mathbf{p}_k)\nonumber\\
& \text{var }: p_{k,l,t}\geq 0.
\end{align}
Recall that in order to find $v_{l,t}$, we have to solve $\mathcal{P}_{l,t}$ (cf.~(\ref{eq:vlt})), which is a constrained optimization problem.
Since EVs are expected to increase social value by providing a cleaner environment and higher energy efficiency, it is important for a regulator (e.g., FERC) to know whether there exists a pricing strategy which maximizes the social welfare of the system. If the charging station is operated by the regulator or some government agency, then its main objective is indeed to maximize the social welfare while maintaining the user surplus.
{\em Ex-ante and Ex-post Maximization}: When the charging station is unaware of the utilities of the users, two options are considered--i) it decides a price hoping that it maximizes the social welfare for the realized values of the utilities ({\em ex-post} maximization), or ii) it decides a price hoping that it maximizes the social welfare in expectation ({\em ex-ante} maximization).
Thus, {\em ex-ante} maximization does not guarantee that the social welfare is maximized for every realization of the random variables $U_{k,l,t}$, whereas under {\em ex-post} maximization the social welfare is maximized for each realization. Hence, {\em ex-post} maximization is a stronger (and thus more desirable) notion, and a pricing strategy which maximizes the ex-post social welfare need not exist in general. However, we show that in our setting such pricing strategies do exist. Note that {\em ex-post} social welfare maximization is the same as (\ref{eq:socialwelfare_maxproblem}).
\subsubsection{Profit Maximization}
Social welfare maximization does not guarantee that the charging station gets a positive profit. For the wide-scale deployment of charging stations, it is important that the charging station makes some profit. Further, if the charging station is operated by a private entity, its objective is indeed to maximize the profit.
When the charging station is clairvoyant, it wants to maximize the profit given in (\ref{eq:profit_maximum}) by selecting $p_{k,l,t}$. When the charging station does not know the users' utilities, it wants to maximize the expected profit given in (\ref{eq:profitmax_expec}) by selecting $p_{k,l,t}$.
\subsubsection{Separation Problem}
Note that in order to select the optimal $p_{k,l,t}$, the charging station has to obtain $v_{l,t}$ and $v_{-k}$ (Definitions~\ref{defn:vlt} \& \ref{defn:v-k}). However, $v_{l,t}$ and $v_{-k}$ do not depend on $p_{k,l,t}$. Hence, we can separate the problem: first the charging station finds $v_{l,t}$ and $v_{-k}$, and then it selects $p_{k,l,t}$ to fulfill its objective. {\em We now focus on finding the optimal} $p_{k,l,t}$.
\section{Results: Social Welfare Maximization}
First, we state the optimal value of the social welfare for any given realization of the user's utilities. Next, we state a pricing strategy which attains this optimal value.
Note that if $u_{k,l,t}-v_{l,t}+v_{-k}<0$ for every $l$ and $t$, then the social welfare is maximized when user $k$ does not charge or, equivalently, when the price $p_{k,l,t}$ is very high for each $l$ and $t$. In this case, the optimal value of the social welfare is $0$.
On the other hand, if $u_{k,l,t}-v_{l,t}\geq -v_{-k}$ for some $l$ and $t$, then the social welfare is maximized when user $k$ charges her car. If the user accepts the price menu $p_{k,l,t}$, the social welfare is $u_{k,l,t}-v_{l,t}+v_{-k}$. Thus, the maximum social welfare in this scenario is $\max_{l,t}(u_{k,l,t}-v_{l,t}+v_{-k})$. Hence:
\vspace{-0.03in}
\begin{theorem}\label{thm:max_socialwelfare}
The maximum value of social welfare is $\max\{\max_{l,t}(u_{k,l,t}-v_{l,t}+v_{-k}),0\}$.
\end{theorem}
Note that even though the maximum value of social welfare is unique (as in Theorem~\ref{thm:max_socialwelfare}), the optimal pricing strategy is not unique. In the following, we give one possible pricing strategy that achieves the optimal social welfare.
\vspace{-0.04in}
\begin{theorem}\label{thm:pricestrategy}
The pricing strategy $p_{k,l,t}=v_{l,t}-v_{-k}$ maximizes the social welfare.
\end{theorem}
Note that under this pricing strategy, the charging station does not need to know the utilities of the users, and the social welfare is optimized for each possible realization of the utility functions. Hence, we obtain
\vspace{-0.04in}
\begin{corollary}\label{thm:price_expected}
The pricing strategy $p_{k,l,t}=v_{l,t}-v_{-k}$ maximizes the ex-post social welfare.
\end{corollary}
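The mechanism behind Theorem~\ref{thm:pricestrategy} and Corollary~\ref{thm:price_expected} is that under $p_{k,l,t}=v_{l,t}-v_{-k}$, the user's payoff for every contract equals its welfare contribution $u_{k,l,t}-v_{l,t}+v_{-k}$, so her selfish choice is automatically the welfare-maximizing one. A small numerical check of this identity (our own sketch, with contracts keyed by $(l,t)$):

```python
def welfare_under_cost_pricing(u, v, v_minus_k):
    """Under the cost-based prices p = v - v_{-k}, every contract's
    payoff u - p equals its welfare term u - v + v_{-k}; the realized
    welfare therefore matches max{max_{l,t}(u - v + v_{-k}), 0}.
    """
    payoffs = {key: u[key] - (v[key] - v_minus_k) for key in u}
    realized = max(max(payoffs.values()), 0.0)  # 0 if all contracts rejected
    optimum = max(max(u[key] - v[key] + v_minus_k for key in u), 0.0)
    assert abs(realized - optimum) < 1e-12
    return realized
```

The second case below illustrates Theorem~\ref{thm:max_socialwelfare}: when every welfare term is negative, the user rejects all contracts and the optimal welfare is $0$.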
Though the above pricing strategy maximizes the social welfare, it does not provide any positive profit to the charging station. Thus, the charging station may not prefer this pricing strategy, as it has no incentive to provide the charging spots.
Note that the pricing strategy also maximizes the social welfare in the long run when the additional cost of fulfilling a contract (i.e., $v_{l,t}-v_{-k}$) does not depend on the users already present in the charging station.
The condition that $v_{l,t}-v_{-k}$ is independent of the existing EVs in the charging station is satisfied if either all demand can be fulfilled using renewable energy, or there is no renewable energy generation and conventional energy is bought at a flat rate. Hence, in these {\em two extreme cases, the myopic pricing strategy is also optimal in the long run}.
\section{Profit Maximization of the charging station}
\subsection{Charging station with perfect foresight}
We now provide a pricing strategy which maximizes both the profit of the charging station and the social welfare when {\em the charging station is clairvoyant}. Recall that the profit of the charging station is given by (\ref{eq:profit_maximum}).
First, we introduce a notation.
\vspace{-0.05in}
\begin{definition}\label{defn:optimal}
Let $(l^{*},t^{*})=\text{argmax}_{l,t}\{u_{k,l,t}-v_{l,t}\}$.
\end{definition}
\vspace{-0.03in}
\begin{theorem}\label{thm:profit_max}
Set $p_{k,l,t}=v_{l,t}-v_{-k}+(u_{k,l^{*},t^{*}}-v_{l^{*},t^{*}}+v_{-k})^{+}$ where $(l^{*},t^{*})$ is defined in Definition~\ref{defn:optimal}. Such a pricing strategy maximizes the profit as well as the social welfare.
\end{theorem}
The above pricing strategy is an example of a {\em value-based} pricing strategy, where prices are set depending on the valuation or the utility of the users \cite{Hinterhuber}. Utility-dependent pricing strategies have also been proposed for smart grids in some recent papers \cite{kar,personalized_pricing}. In contrast, the pricing strategy stated in Theorem~\ref{thm:pricestrategy} is an example of a {\em cost-based} pricing strategy, where the prices depend only on the costs. If the utilities of the users are the same, the pricing strategy becomes similar to a time-dependent pricing scheme, which is prevalent in practice.
In the value-based pricing strategy, the user surplus decreases; in fact, it is\footnote{If the user is reluctant to charge when it does not get a positive payoff, then we can reduce the price by an $\epsilon>0$ amount. In that case, the strategy is $(1-\epsilon)$-optimal for profit maximization.} $0$ in our case. Thus, all the user surplus is transferred as the profit of the charging station. Hence, when the charging station is clairvoyant, the pricing strategy which maximizes its profit does not entail any positive {\em user surplus}.
{\em Note that there can be other pricing strategies which simultaneously maximize the social welfare and the profit.} For example, if $p_{k,l,t}$ is $\infty$ for all $(l,t)\neq (l^{*},t^{*})$ and $p_{k,l^{*},t^{*}}=v_{l^{*},t^{*}}-v_{-k}+(u_{k,l^{*},t^{*}}-v_{l^{*},t^{*}}+v_{-k})^{+}$, then this also maximizes the profit of the charging station. Thus, in this scenario, the charging station effectively offers only one contract to the EVs.
Though the jointly profit-maximizing and social-welfare-maximizing pricing strategy may not be unique, the profit of the charging station is unique and is given by
\begin{align}
(u_{k,l^{*},t^{*}}-v_{l^{*},t^{*}}+v_{-k})^{+}
\end{align}
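As a minimal numerical illustration (the menus, costs, and utilities below are hypothetical and not taken from this paper), the clairvoyant pricing of Theorem~\ref{thm:profit_max} and the resulting profit can be computed as follows:

```python
# Sketch of the clairvoyant value-based pricing of the theorem above.
# All inputs are hypothetical: v[(l,t)] is the cost of serving contract
# (l,t), v_minus_k the cost without user k, u[(l,t)] the known utility.

def clairvoyant_prices(u, v, v_minus_k):
    """p_{k,l,t} = v_{l,t} - v_{-k} + (u* - v* + v_{-k})^+ ."""
    best_surplus = max(u[lt] - v[lt] for lt in u)      # u* - v*
    markup = max(best_surplus + v_minus_k, 0.0)        # (u* - v* + v_{-k})^+
    return {lt: v[lt] - v_minus_k + markup for lt in v}

u = {(5, 1): 8.0, (5, 2): 6.0, (10, 2): 9.0}   # hypothetical utilities
v = {(5, 1): 6.0, (5, 2): 4.5, (10, 2): 7.5}   # hypothetical serving costs
v_minus_k = 3.0

p = clairvoyant_prices(u, v, v_minus_k)
best_payoff = max(u[lt] - p[lt] for lt in u)   # user's payoff is driven to 0
profit = max(u[lt] - v[lt] for lt in u) + v_minus_k  # surplus goes to station
```

Here the user's best payoff is exactly $0$ and the whole surplus, $(u_{k,l^{*},t^{*}}-v_{l^{*},t^{*}}+v_{-k})^{+}$, is captured by the charging station.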
\subsection{Prediction based pricing}\label{sec:price_uncertainty}
\subsubsection{Maximum Profit under ex-post social welfare maximization}
Note from Theorem~\ref{thm:profit_max} that the profit-maximizing pricing strategy which also maximizes the social welfare requires complete information about the utilities of the users. Hence, such a pricing strategy cannot be implemented when the charging station does not know the exact utilities. Note from (\ref{eq:profitmax_expec}) that profit maximization is a difficult problem, as the menu a user selects inherently depends on the prices of the other menus. For example, if the price of a particular contract is high, the user will be reluctant to take it compared to a lower-priced one. The profit is a discontinuous function of the prices, and thus the problem may not be convex even when the marginal distributions of the utilities are concave.
We have already seen (Corollary~\ref{thm:price_expected}) that there is a pricing strategy which maximizes the ex-post social welfare; however, it does not give any positive profit.
We now show that there exists a pricing strategy which may provide better profit to the charging station while maximizing the ex-post social welfare. First, we introduce a notation which we use throughout.
\begin{definition}\label{defn:lower_endpoint}
Let $L_{k,l,t}$ be the lowest end-point of the marginal distribution of the utility $U_{k,l,t}$.
\end{definition}
\vspace{-0.04in}
\begin{theorem}\label{thm:profitmax_uncertainty}
Consider the pricing strategy:
\begin{align}\label{eq:pricesocialmax}
p_{k,l,t}=v_{l,t}-v_{-k}+(\max_{i,j}\{L_{k,i,j}-v_{i,j}+v_{-k}\})^{+}.
\end{align} The pricing strategy maximizes the ex-post social welfare.
The profit is $(\max_{i,j}\{L_{k,i,j}-v_{i,j}+v_{-k}\})^{+}$.
\end{theorem}
The pricing strategy maximizes the ex-post social welfare, similar to Corollary~\ref{thm:price_expected}. This is also the {\em maximum possible profit that the charging station can obtain under the condition that it maximizes the ex-post social welfare with probability $1$}. However, it may not maximize the expected profit of the charging station; in other words, the pricing strategy which maximizes the expected profit need not maximize the ex-post social welfare. Hence, unlike in the scenario where the charging station is clairvoyant (Theorem~\ref{thm:profit_max}), {\em there may not exist a profit-maximizing strategy which is also a social welfare maximizer when the charging station is unaware of the utilities}. Note that the user surplus is {\em not} $0$; hence, uncertainty regarding the users' utility functions is what enables a positive consumer surplus.
Also, note the similarity with Theorem~\ref{thm:profit_max}. If the user knows the utility, then $L_{k,l,t}=u_{k,l,t}$ as there is no uncertainty and we get back the pricing strategy stated in Theorem~\ref{thm:profit_max}.
Note that if $\max_{l,t}(L_{k,l,t}-v_{l,t}+v_{-k})>0$, then this pricing strategy gives a positive profit to the charging station. If the charging station has large storage or large renewable energy harvesting devices, then the cost $v_{l,t}-v_{-k}$ will be lower, and thus the charging station can earn a higher profit. The user surplus also increases, as the price set by the charging station decreases. Thus, a higher degree of renewable energy integration increases both the profit of the charging station and the user surplus. The above illustrates the importance of storage and energy harvesting devices in the charging station. The regulator (e.g., FERC) can also provide incentives to the charging station to set up those devices, as the pricing strategy increases the profit of the charging station as well as the ex-post social welfare.
In the extreme case where $v_{l,t}=0$ for all $l$ and $t$, the profit of the charging station is maximized. Decreasing $v_{l,t}$ further has no effect on either the profit of the charging station or the user surplus; this also bounds the investment that the charging station should make in storage and renewable energy harvesting devices.
Also note that users with higher utilities, i.e., higher $L_{k,l,t}$, give more profit to the charging station.
Unlike in Corollary~\ref{thm:price_expected}, the charging station needs to know the lowest end-points of the support sets of the utilities. However, it does not need to know the exact distribution functions. The lowest end-point can be easily obtained from historical data: for example, $L_{k,l,t}$ may be estimated as the lowest price that users have accepted for the energy level $l$ and the deadline $t$.
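A minimal sketch of the pricing of Theorem~\ref{thm:profitmax_uncertainty}, again with hypothetical numbers; note that only the lowest end-points of the utility supports are required:

```python
# Sketch of the prior-free pricing above: only the lowest end-points
# L[(l,t)] of the utility supports are needed (hypothetical numbers).

def endpoint_prices(L, v, v_minus_k):
    """Prices v_{l,t} - v_{-k} + (max_{i,j}{L - v + v_{-k}})^+ ."""
    markup = max(max(L[lt] - v[lt] for lt in L) + v_minus_k, 0.0)
    return {lt: v[lt] - v_minus_k + markup for lt in v}, markup

L = {(5, 1): 4.0, (5, 2): 3.0}      # lowest end-points of U_{k,l,t}
v = {(5, 1): 6.0, (5, 2): 4.5}      # serving costs v_{l,t}
prices, guaranteed_profit = endpoint_prices(L, v, v_minus_k=3.0)
```

Whichever menu the user selects, the station's profit equals `guaranteed_profit`, the markup $(\max_{i,j}\{L_{k,i,j}-v_{i,j}+v_{-k}\})^{+}$.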
\subsubsection{Guaranteed positive profit to the Charging station}
In Theorem~\ref{thm:profitmax_uncertainty}, the charging station has a positive profit only if $\max_{l,t}\{L_{k,l,t}-v_{l,t}+v_{-k}\}>0$. If the above condition is not satisfied, the charging station's profit will be $0$. Naturally, the question is whether there exists a pricing strategy which gives a guaranteed positive profit to the charging station without decreasing the social welfare much. In the following, we provide such a pricing strategy.
First note that by the continuity of the joint distribution function we have the following
\vspace{-0.04in}
\begin{lemma}\label{lm:delta}
For each $\epsilon>0$, there exists a $\delta>0$ such that
\begin{align}
& \Pr(\max_{l,t}\{U_{k,l,t}-v_{l,t}+v_{-k}\}\geq 0)\nonumber\\
& \leq \epsilon+\Pr(\max_{l,t}\{U_{k,l,t}-v_{l,t}+v_{-k}-\delta\}\geq 0)
\end{align}
\end{lemma}
\vspace{-0.04in}
\begin{theorem}\label{thm:approx}
Fix an $\epsilon>0$. Now, consider the pricing strategy
\vspace{-0.12in}
\begin{align}\label{eq:approx_price}
p_{k,l,t}=v_{l,t}-v_{-k}+\delta(\epsilon)
\end{align}
where $\delta(\epsilon)$ is the $\delta$ which satisfies the Lemma~\ref{lm:delta}.
Then such a pricing strategy maximizes the ex-post social welfare with probability at least $1-\epsilon$.
\end{theorem}
{\em Outline of proof}: First, note that adding a constant does not change the optimal solution. Hence, if $(l^{*},t^{*})=\text{argmax}_{l,t}(u_{k,l,t}-v_{l,t}+v_{-k})$, then $(l^{*},t^{*})$ is also optimal under the pricing strategy in (\ref{eq:approx_price}). The rest of the proof follows from Lemma~\ref{lm:delta}, which entails that there exists some $\delta>0$ for which the pricing strategy deviates from the social-welfare-maximizing strategy with probability at most $\epsilon$. \qed
Note that the pricing strategy stated in (\ref{eq:approx_price}) gives a positive profit of $\delta(\epsilon)$ irrespective of the menu selected by the user. Note that {\em the assumption of a continuous distribution is key}: if the distributions are discrete, then $\delta(\epsilon)$ may be $0$, and the charging station may get zero profit.
The expected profit of the charging station for the above pricing strategy can be readily obtained--
\vspace{-0.04in}
\begin{theorem}\label{thm:expected_payoff}
Suppose that $p_{k,l,t}=v_{l,t}-v_{-k}+\beta$, then the expected profit of the charging station is $\beta\max_{l,t}\{\Pr(U_{k,l,t}\geq v_{l,t}-v_{-k}+\beta)\}$.
\end{theorem}
{\em Outline of the Proof}: Note that if a user selects any of the contracts, then the charging station's profit is $\beta$. Hence, the charging station's expected profit is $\beta$ times the probability that at least one of the contracts will be accepted.\qed
A regulator such as FERC can select $\beta$ judiciously to trade off the profit of the charging station against the user surplus. We empirically study the effect of $\beta$ in Section~\ref{sec:simulation_results}.
Now, we provide an example where the above pricing strategy can also be profit-maximizing for a suitable choice of $\beta$. First, we introduce a notation.
\vspace{-0.06in}
\begin{definition}\label{defn:alpha}
Let $\zeta=\max\{\gamma| \gamma=\text{argmax}_{\beta\geq 0}\beta\{\max_{i,j}\Pr(U_{k,i,j}>\beta+v_{i,j}-v_{-k})\}\}$.
\end{definition}
Since $U_{k,l,t}$ is bounded and the probability distribution is continuous, $\zeta$ exists. Note from Theorem~\ref{thm:expected_payoff} that $\zeta$ corresponds to the $\beta$ for which the charging station obtains the maximum possible profit when the prices are of the form $p_{k,l,t}=v_{l,t}-v_{-k}+\beta$.
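The following sketch illustrates Definition~\ref{defn:alpha} numerically: a grid search for $\zeta$ when the utilities are, purely for illustration, uniformly distributed on hypothetical supports (the expected-profit expression is the one in Theorem~\ref{thm:expected_payoff}):

```python
# Numerical sketch of the definition above: grid search for the fixed
# markup zeta maximizing
#   beta * max_{l,t} Pr(U_{k,l,t} > beta + v_{l,t} - v_{-k}).
# Utilities are taken uniform on hypothetical supports for illustration.

def uniform_tail(x, lo, hi):
    """Pr(U > x) for U ~ Uniform[lo, hi]."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def expected_profit(beta, supports, v, v_minus_k):
    return beta * max(uniform_tail(beta + v[lt] - v_minus_k, *supports[lt])
                      for lt in v)

supports = {(5, 1): (2.0, 10.0), (5, 2): (1.0, 8.0)}  # supports of U_{k,l,t}
v = {(5, 1): 6.0, (5, 2): 4.5}                        # serving costs
grid = [i / 100 for i in range(0, 1001)]              # beta in [0, 10]
# max of argmax: ties broken toward the largest beta, as in the definition
zeta = max(grid, key=lambda b: (expected_profit(b, supports, v, 3.0), b))
```

In this toy instance the maximizer is interior ($\zeta=3.5$ on the grid), where the best menu's tail probability times $\beta$ peaks.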
Now consider a widely encountered class of utility functions
\vspace{-0.04in}
\begin{assumption}\label{assum:utility}
Suppose that the utility function is $U_{k,l,t}=(Y_{k,l,t}+X_k)^{+}$ for all $l$ \& $t$, where $Y_{k,l,t}$ is a constant known to the charging station, while $X_k$ is a random variable whose realized value is not known to the charging station.
\end{assumption}
In the above class of utility functions, the uncertainty is only about the realized value of the random variable $X_k$. Note that $X_k$ is independent of $l$ and $t$; hence, $X_k$ can be viewed as additive white noise.
It is important to note that we make no assumption on whether {\em $X_k$ is drawn from a continuous or a discrete distribution}. However, if the distribution is discrete, we need the condition that $\zeta$ exists.
\vspace{-0.07in}
\begin{theorem}\label{thm:aclassutility}
Consider the pricing strategy
$p_{k,l,t}=v_{l,t}-v_{-k}+\zeta$;
where $\zeta$ is defined in Definition~\ref{defn:alpha}. This pricing strategy maximizes the expected profit of the charging station (given in (\ref{eq:profitmax_expec})) when the utility functions are of the form given in Assumption~\ref{assum:utility}.
\end{theorem}
{\em Remark}: The above result is surprising. It shows that a simple pricing mechanism such as a fixed markup can maximize the expected profit for a large class of utility functions. However, if the utilities do not satisfy Assumption~\ref{assum:utility}, then the above pricing strategy may not be optimal.
\subsection{The pricing algorithm}
\begin{enumerate}
\item User $k$ comes at time $t_k$.
\item The charging station solves the linear programming problem $\mathcal{P}_{l,t}$ (eq. (7)) and finds the additional cost $v_{l,t}-v_{-k}$ for fulfilling the contract $(l,t)$ for user $k$ for each $l$ and $t$.
\item The charging station selects the price $p_{k,l,t}=v_{l,t}-v_{-k}+\max_{i,j}\{L_{k,i,j}-v_{i,j}+v_{-k}\}^{+}+\beta$, where $\beta\geq 0$.
\item The user selects the contract which maximizes its payoff (eq.(6)).
\end{enumerate}
Note that $\beta=0$ gives the worst possible profit to the charging station, as discussed before. The charging station needs to solve the linear programming problem $\mathcal{P}_{l,t}$, which can be solved efficiently (e.g., by the simplex method) using solvers such as MOSEK, CPLEX, CVX, or the linprog tool of MATLAB.
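For intuition, step 2 can be illustrated on a toy instance. The code below is our own simplified stand-in, not the linear program $\mathcal{P}_{l,t}$ of the paper: with per-slot ToU prices, a per-slot rate cap, and freely shiftable demand, the cost-minimizing schedule reduces to filling the cheapest slots first.

```python
# Toy stand-in for step 2: the additional cost v_{l,t} - v_{-k} of
# admitting a new contract. In this simplified instance (hypothetical,
# not the paper's LP (7)) the station buys conventional energy at
# time-varying ToU prices with each slot capped at R_max, so the
# cost-minimizing feasible schedule simply fills the cheapest slots first.

def min_cost(demand_kwh, tou_prices, r_max):
    """Cheapest cost of delivering demand_kwh before the deadline."""
    cost = 0.0
    for price in sorted(tou_prices):        # cheapest slots first
        take = min(demand_kwh, r_max)
        cost += take * price
        demand_kwh -= take
        if demand_kwh <= 0:
            break
    return cost

tou = [0.10, 0.30, 0.20]                    # $/kWh in the available slots
r_max = 3.3                                 # kWh cap per one-hour slot
v_minus_k = min_cost(5.0, tou, r_max)       # cost without the new user
v_lt = min_cost(5.0 + 4.0, tou, r_max)      # cost including the new user
additional_cost = v_lt - v_minus_k          # enters the price p_{k,l,t}
```

The marginal cost of the new contract is the difference of the two optimal schedules, exactly the quantity $v_{l,t}-v_{-k}$ used in step 3.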
\section{Simulation Results}\label{sec:simulation_results}
We numerically study and compare the various pricing strategies presented in this paper, evaluating the profit of the charging station and the users' surplus achieved under each. We also show that our mechanism requires fewer charging spots than the closest existing pricing model.
\vspace{-0.2in}
\subsection{Simulation Setup}
Similar to \cite{quadratic}, the user's utility for energy $x$ is taken to be of the form $\min\{-ax^2+bx, \dfrac{b^2}{4a}\}$.
Thus, the user's utility is a nondecreasing and concave function of the amount of energy consumed $x$. Quadratic utility functions for EV charging have also been considered in \cite{low}. Note that the user's desired level of charging is $b/(2a)$. {\em We assume that $b/(2a)$ is a random variable}. \cite{gov} shows that in a commercial charging station, the average amount of energy consumed per EV is $6.9$kWh with standard deviation $4.9$kWh. We thus consider $b/(2a)$ to be a truncated Gaussian random variable with mean $6.9$kWh and standard deviation $4.9$kWh on the interval $[2, 20]$. We assume $a$ is a uniform random variable on the interval $[1/20, 1/8]$.
From \cite{gov}, the time spent by an electric vehicle in a commercial charging station is exponentially distributed with mean $2.5$ hours. Thus, we also consider the preferred deadline ($T_{pref}$) of the user to be an exponentially distributed random variable with mean $2.5$. The user strictly prefers a shorter deadline. Hence, we assume that the utility is a convex decreasing function of the deadline \cite{tong}. The utility of the user after the preferred deadline is considered to be $0$. Hence, the user's utility is
\begin{align}\label{eq:simulation}
& U_{k,l,t}=\min\{-al^2+lb, b^2/4a\}\times\nonumber\\
& (\exp(T_{pref}-t-t_k)-1)^{+}/(\exp(T_{pref}-t_k)-1)
\end{align}
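The sampling procedure described above can be sketched as follows (the rejection sampler and helper names are our own; the distribution parameters are those stated in the text):

```python
# Sketch of the simulation setup: sampling one user's utility surface
# using the distributions described above (truncated Gaussian preferred
# energy, uniform a, exponential preferred deadline). The rejection
# sampler is our own minimal implementation.
import math
import random

random.seed(0)

def trunc_gauss(mu, sigma, lo, hi):
    """Truncated Gaussian via simple rejection sampling."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def sample_user(t_k=0.0):
    a = random.uniform(1 / 20, 1 / 8)
    b = 2 * a * trunc_gauss(6.9, 4.9, 2.0, 20.0)  # so b/(2a) ~ trunc. Gaussian
    t_pref = random.expovariate(1 / 2.5)          # mean 2.5 hours
    def utility(l, t):                            # the expression above
        energy = min(-a * l * l + b * l, b * b / (4 * a))
        decay = (max(math.exp(t_pref - t - t_k) - 1.0, 0.0)
                 / (math.exp(t_pref - t_k) - 1.0))
        return energy * decay
    return utility, t_pref

u, t_pref = sample_user()
```

The utility is largest at deadline $t=0$, decreases in $t$, and vanishes once the deadline exceeds $T_{pref}$.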
The arrival process of electric vehicles is considered to be a Poisson process; however, the arrival rate varies over time. For example, during the peak hours (8 am to 5 pm) the arrival rate is higher than during the off-peak hours. We thus consider a non-homogeneous Poisson process with an arrival rate of $15$ ($5$, resp.) vehicles per hour during the peak period (off-peak period, resp.). We also assume that the maximum charging rate $R_{max}$ is $3.3$ kW.
We assume that the renewable energy is harvested according to a truncated Gaussian distribution with mean $2$ and variance $2$ per hour. The storage unit is assumed to have a capacity of $20$kWh. The initial battery level is assumed to be $0$, i.e., the storage is fully discharged. The price of conventional energy is assumed to follow a Time-of-Use (ToU) tariff. Thus, the cost of buying conventional energy varies over time.
\subsection{Results}
We consider the scenario where the charging station is unaware of the exact utilities of the users, however, it knows the distribution function. We consider the pricing strategy that we have introduced in Section~\ref{sec:price_uncertainty}--
\begin{eqnarray}
p_{k,l,t}=v_{l,t}-v_{-k}+\max_{i,j}\{L_{k,i,j}-v_{i,j}+v_{-k}\}^{+}+\beta.\nonumber
\end{eqnarray}
Recall from Definition~\ref{defn:lower_endpoint} that $L_{k,l,t}$ is the lowest end-point of the utility $U_{k,l,t}.$ We study the impact of $\beta$.
\subsubsection{Effect on Percentage of the users admitted}
Fig.~\ref{fig:betavsadmittedusers} shows that as $\beta$ increases, the number of admitted users decreases. However, the decrease is slow initially. Once $\beta$ exceeds a threshold, the price offered to the users becomes very large, and thus fewer EVs are admitted.
\subsubsection{Effect of $\beta$ on User's Surplus and Profit of the charging station}
The total surplus of the users decreases as $\beta$ increases (Fig.~\ref{fig:consumer_surplus}), since each user pays a larger price. The user surplus is maximum at $\beta=0$. The decrease in total users' surplus is not significant for $\beta<1.6$, since the number of users served does not drop much in this range; for $\beta>1.6$, the surplus decreases rapidly.
As $\beta$ increases, the profit increases initially (Fig.~\ref{fig:profit}). However, for $\beta>3$, the number of users served decreases rapidly; hence, the profit also drops.
At high values of $\beta$, both the users' surpluses and the profit decrease significantly. Low values of $\beta$ give high users' surpluses, but the profit is low. $\beta\in [0.8, 1.6]$ offers the best balance between profit and users' surpluses.
\begin{figure*}
\begin{minipage}{0.24\linewidth}
\includegraphics[trim=0in 0in 1in 0in,width=\textwidth]{admittedusersvsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the percentage of EVs admitted with $\beta$.}
\label{fig:betavsadmittedusers}
\vspace{-0.2in}
\end{minipage}\hfill
\begin{minipage}{0.24\linewidth}
\includegraphics[trim=0in 0in .5in 0in,width=\textwidth]{surplusvsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the total users' surpluses with $\beta$.}
\label{fig:consumer_surplus}
\vspace{-0.2in}
\end{minipage}
\begin{minipage}{0.24\linewidth}
\includegraphics[trim=0in 0in .5in 0in,width=\textwidth]{profitvsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the profit of the charging station with $\beta$.}
\label{fig:profit}
\vspace{-0.2in}
\end{minipage}\hfill
\begin{minipage}{0.24\linewidth}
\includegraphics[trim=0in 0in .7in 0in,width=\textwidth]{stay_timevsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the average time spent per EV with $\beta$.}
\label{fig:deadline}
\vspace{-0.2in}
\end{minipage}
\end{figure*}
\subsubsection{Effect on the average deadline}
Our analysis shows that users spend more time in the charging station as $\beta$ increases (Fig.~\ref{fig:deadline}). As $\beta$ increases, users who prefer shorter deadlines have to pay more, since the cost of fulfilling short-deadline contracts is high. Hence, those users are reluctant to accept the contract, and the accepted users spend more time in the charging station. The increase in the average time spent by an EV is, however, not exponential in $\beta$. The average time spent by an EV is $2.5$ hours for $\beta=1.2$, which is in accordance with the empirical average reported in \cite{gov}.
\begin{figure*}
\begin{minipage}{0.19\linewidth}
\includegraphics[trim=.3in 3in 0.8in 3.4in, clip,width=\textwidth]{active_maxvsbeta.pdf}
\vspace{-0.3in}
\caption{Variation of the maximum of the number of active users with $\beta$ and comparison with the differentiated pricing scheme proposed in \cite{bitar,bitar2} (in dotted line).}
\label{fig:active_max}
\vspace{-0.4in}
\end{minipage}\hfill
\begin{minipage}{0.23\linewidth}
\includegraphics[trim=0in 0in 0.6in 0in,width=0.99\textwidth]{energyvsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the average energy consumed per EV with $\beta$.}
\label{fig:energy_mean}
\vspace{-0.4in}
\end{minipage}
\begin{minipage}{0.23\linewidth}
\includegraphics[trim=0in 0in 0.6in 0.2in,width=0.99\textwidth]{costvsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the cost incurred by the charging station with $\beta$.}
\label{fig:cost}
\vspace{-0.4in}
\end{minipage}\hfill
\begin{minipage}{0.3\linewidth}
\includegraphics[trim=.6in 0in 0.3in 0in,width=0.99\textwidth]{pricesvsbeta-eps-converted-to.pdf}
\vspace{-0.3in}
\caption{Variation of the prices set at different times with $\beta$.}
\label{fig:price}
\vspace{-0.3in}
\end{minipage}
\end{figure*}
\subsubsection{Effect on the maximum number of active users}
Since the average time spent by users in the charging station increases with $\beta$ and the number of admitted users is almost the same for $\beta\leq 1.2$, the number of active users initially increases with $\beta$ (Fig.~\ref{fig:active_max}), though the maximum never exceeds $22$ for any value of $\beta$. For $\beta>1.2$, the number of active users decreases with $\beta$.
\subsubsection{Advantages of our proposed mechanism}
Fig.~\ref{fig:active_max} shows that our pricing algorithm requires fewer charging spots compared to the differentiated pricing mechanism \cite{bitar, bitar2} closest to our proposed approach. Similar to \cite{bitar, bitar2}, the users select the amount of energy to be consumed in each time period based on the price set by the charging station. We assume that the user will not charge beyond the preferred deadline or before the arrival time. In \cite{bitar,bitar2}, the EVs tend to spend more time as doing so reduces their cost\footnote{An EV is removed when it is fully charged.}, and thus the maximum number of EVs present at any time is also higher (Fig.~\ref{fig:active_max}) compared to our proposed mechanism.\footnote{We assume that the EV is removed after its reported deadline. When the deadline is over, the EV is moved to a parking spot without any charging facility.} In our proposed mechanism, the charging station controls the time spent by an EV through pricing, which results in fewer charging spots being required.
\subsubsection{Effect on the average energy}
As $\beta$ increases, only users with higher utilities should accept the contracts. Thus, the average charging amount for each EV should increase with $\beta$. However, Fig.~\ref{fig:energy_mean} shows that for $\beta\leq 0.8$, the average energy consumed by each EV decreases as $\beta$ increases. This apparent anomaly is due to the fact that users with higher demand but shorter deadline preferences may have to pay more, because the price of fulfilling their contracts increases with $\beta$. Hence, such users do not accept the offers, which results in the initial decrease of the average energy consumption as $\beta$ increases. However, as $\beta$ becomes large, only the users with higher demand accept the offers; hence, the average energy consumption increases, though only linearly. In fact, for $\beta=2$, the average energy consumption per EV is around $6.9$kWh.
\subsubsection{Effect on the Cost of the EV charging station}
The cost of the EV charging station decreases as $\beta$ increases (Fig.~\ref{fig:cost}). Since the time spent by users increases, more of their demand can be met through renewable energy. The charging station buys less conventional energy, which results in a lower cost. When $\beta\leq 1.6$, the number of admitted users decreases {\em sub-linearly}, while the cost decreases {\em linearly}. Hence, FERC may prefer this setting, as it decreases the cost without decreasing the number of admitted users much.
\subsubsection{Effect on the price selected by the charging station}
The price is higher during the peak period, when the arrival rate is higher and the time-of-use price is high (Fig.~\ref{fig:price}). Hence, the pricing mechanism is consistent with FERC's objective of selecting higher prices during the peak time to flatten the demand curve. A new price is selected when an EV arrives. As $\beta$ decreases, more users are admitted; hence, the price variation is also higher. Also note that when the number of active users is large, the cost of serving an additional user can be significant, and thus the price is also high.
\begin{figure}
\includegraphics[trim=.6in 0in 1.1in 0in, clip, width=.95\textwidth]{gridvsbeta-eps-converted-to.pdf}
\vspace{-0.1in}
\caption{\small Left-hand figure shows the amount of energy drawn from the battery of the charging station at various times for different values of $\beta$. The right-hand figure shows the amount of energy drawn from the grid at various times for different values of $\beta$.}
\label{fig:gridvsbeta}
\end{figure}
\subsubsection{Impact on the energy drawn from the grid and the storage of the charging station}
Fig.~\ref{fig:gridvsbeta} shows that as $\beta$ increases, the energy bought from the grid decreases. This is because the number of accepted users decreases with $\beta$. The energy used from the battery also decreases as $\beta$ increases. Note that by selecting $\beta$, the charging station can also limit the peak energy consumption from the grid. Fig.~\ref{fig:gridvsbeta} also shows that when no menu-based pricing is applied, i.e., the EVs are charged as soon as they arrive, the peak energy consumption from the grid is very high. Even $\beta=0$ lowers the energy consumption from the grid significantly. The energy used from the battery of the charging station is also low when there is no menu-based pricing. {\em The above shows the usefulness of menu-based pricing in reducing the peak-energy consumption and in the efficient use of renewable energy.}
\begin{figure}
\includegraphics[trim=.6in 0in 1.1in 0in, clip, width=.95\textwidth]{highvslowa-eps-converted-to.pdf}
\vspace{-0.1in}
\caption{\small In the left-hand figure, we consider $a\sim\mathcal{U}[1/40,1/16]$. In the right-hand figure, we consider $a\sim \mathcal{U}[1/10,1/4]$. We show the consumer surplus and profit of the charging station for $\beta=0, 1.2,2$.}
\label{fig:highvslowa}
\vspace{-0.2in}
\end{figure}
\subsubsection{Impact of $a$}
Fig.~\ref{fig:highvslowa} shows that as $a$ increases, both the profit and the users' surpluses decrease. Note that as $a$ increases, the utility decreases and the preferred energy $\dfrac{b}{2a}$ also decreases; hence, the profit and the users' surpluses both decrease.
\section{Conclusions and Future Works}
We propose an online menu-based pricing mechanism for EV charging. Specifically, we consider that the charging station offers a price to each arriving user for a menu of options; the user either selects one of them or rejects all. We show that there exists a {\em prior-free} pricing strategy which maximizes the ex-post social welfare. We characterize the maximum possible profit that the charging station can get while maximizing the {\em ex-post} social welfare. The charging station only needs to know the lower end-points of the utilities to implement the pricing strategy. The profit increases if the renewable energy penetration increases or the storage capacity of the charging station increases. However, the increment is bounded.
The charging station cannot simultaneously maximize the profit and the ex-post social welfare unless it is {\em clairvoyant}. We propose a pricing scheme which provides a fixed profit to the charging station. This scheme can also maximize the expected profit of the charging station under some assumptions which frequently arise in practice. Numerical evaluation suggests that the menu-based pricing scheme can reduce the peak demand and utilize the limited number of charging spots more efficiently compared to the baseline approaches.
Following this work, we have considered the case where the EVs can inject energies by discharging via a Vehicle-to-Grid (V2G) service which can enhance the profits of the charging station \cite{2016arXiv161200106G}.
We considered a myopic charging station, which does not account for the future arrival process while selecting an optimal price for an incoming EV. In the future, we will consider the case where the charging station knows the statistics of the future arrival process and selects prices accordingly. We also considered that the charging station has only one type of charger (either fast or slow); the characterization of prices when the charging station sets prices for different chargers is also left for the future. Considering stochastic patterns of energy harvesting is an important next step. Finally, the consideration of multiple charging stations which set prices competitively also constitutes a future research direction.
\bibliographystyle{IEEEtran}
\section{Introduction}
Neural networks (NNs) have achieved significant success in vast applications \citep{krizhevsky2012imagenet,vaswani2017attention}, including computer vision \cite{he2016deep}, natural language processing \cite{brown2020language}, and data mining \cite{wang2015collaborative}. However, the success of NNs is hard to characterize with conventional statistical learning theory based on hypothesis complexity \cite{mohri2018foundations}, such as VC-dimension \citep{vapnik1994measuring} and Rademacher complexity \citep{bartlett2002rademacher}. According to the conventional theory, models of larger hypothesis complexity possess worse generalizability, while neural networks are usually over-parameterized but have excellent generalizability.
In this paper, we attempt to explain the excellent generalizability of deep learning from the perspective of decision boundary (DB) variability.
Intuitively, the decision boundary variability of a neural network is largely determined by two sources: (1) the randomness introduced by the optimization algorithm, and (2) the fluctuations of the training data when they are sampled from the data-generating distribution. Following this intuition, we design two terms, {\it algorithm DB variability} and {\it $(\epsilon,\eta)$-data DB variability}, to measure the DB variability.
\begin{figure}[t]
\centering
\subfigure[Algorithm DB variability]{
\begin{minipage}[b]{0.4\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figure/algortim_db.pdf}
\end{minipage}
\label{figure:illustration algorithm DB}
}
\subfigure[Data DB variability]{
\begin{minipage}[b]{0.4\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figure/data_db.pdf}
\end{minipage}
\label{figure:illustration data DB}
}
\caption{An illustration of decision boundary variability. (a) Two dashed curves denote the decision boundaries of the model trained on the orange points with different repeats, respectively, and the blue mismatch region is connected to the algorithm DB variability; (b) Two dashed curves denote the decision boundaries of the model trained on the orange points and all (orange and gray) points, respectively, and the blue mismatch region is connected to the data DB variability.}
\label{figure:illustration}
\end{figure}
\textbf{Algorithm DB variability} measures the variability of DBs in different training repeats. We conduct extensive experiments on the CIFAR-10/100 datasets \citep{krizhevsky2009learning} to explore which factors would determine algorithm DB variability. We visualize the trend of the algorithm DB variability with respect to ({\it w.r.t.}) training strategies, training time, sample sizes, and label noise ratios. The empirical results demonstrate that algorithm DB variability has (1) negative correlations with the training time and the sample size, (2) a positive correlation with the label noise, and (3) a negative correlation with the generalizability (or, test accuracy, in experiments).
From a theoretical perspective, we prove two lower bounds based on the algorithm DB variability, which fully support our experiments.
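The measurement itself can be sketched in a few lines: train the same model twice under different seeds and count the fraction of query points on which the two decision boundaries disagree. In the sketch below, a perceptron on 2-D toy data stands in for a neural network; this is our simplification, not the paper's experimental setup.

```python
# Minimal sketch of "algorithm DB variability": the disagreement
# fraction between two decision boundaries obtained from independent
# training repeats (different seeds) of the same model on the same data.
import random

def train_perceptron(data, seed, epochs=20):
    rng = random.Random(seed)
    w = [rng.uniform(-0.01, 0.01) for _ in range(3)]   # w1, w2, bias
    pts = list(data)
    for _ in range(epochs):
        rng.shuffle(pts)                   # seed-dependent training order
        for (x1, x2), y in pts:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else -1
            if pred != y:                  # standard perceptron update
                w[0] += y * x1
                w[1] += y * x2
                w[2] += y
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else -1

rng = random.Random(0)
xs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200)]
data = [((x1, x2), 1 if x1 + x2 > 0 else -1) for x1, x2 in xs]

f = train_perceptron(data, seed=1)         # two independent repeats
g = train_perceptron(data, seed=2)
grid = [(i / 10, j / 10) for i in range(-10, 11) for j in range(-10, 11)]
algo_db_variability = sum(f(*p) != g(*p) for p in grid) / len(grid)
```

Both repeats approximate the same separator, so the disagreement region is a thin wedge near the true boundary and the measured variability is small.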
\textbf{$(\epsilon,\eta)$-data DB variability} is proposed to characterize the decision boundary from the view of the randomness in training data.
Given a neural network, if its decision boundary can be ``reconstructed'' by training a network with the same architecture from scratch on a smaller $\eta$-subset (which contains $\eta\%$ of the examples of the source training set), while the ``error'' of the reconstruction is not larger than $\epsilon$, we say the model has $(\epsilon,\eta)$-data DB variability. Specifically, we may define the reconstruction error as the approximation error of the reconstructed decision boundary on the whole training set. Moreover, an $\eta$-$\epsilon$ curve can be drawn numerically.
The area under the $\eta$-$\epsilon$ curve could be an informative indicator for characterizing the generalization of NNs.
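As a minimal numerical sketch, the area under the $\eta$-$\epsilon$ curve can be approximated with the trapezoidal rule; the $(\eta,\epsilon)$ values below are purely illustrative, since in practice each $\epsilon$ must be measured by retraining on an $\eta$-subset:

```python
# Hypothetical (eta, epsilon) measurements: for each subset fraction eta,
# epsilon is the reconstruction error of the retrained decision boundary
# on the whole training set (values here are illustrative only).
etas = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
epsilons = [0.30, 0.18, 0.09, 0.05, 0.02, 0.0]

def auc_trapezoid(xs, ys):
    """Area under a piecewise-linear curve via the trapezoidal rule."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

# A smaller area suggests the boundary is reconstructible from fewer
# examples, i.e. lower data DB variability.
area = auc_trapezoid(etas, epsilons)
print(round(area, 4))  # -> 0.074
```
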
An $\mathcal{O}\left(\frac{1}{\sqrt{m}}+\epsilon+\eta\log\frac{1}{\eta}\right)$ generalization bound based on the $(\epsilon, \eta)$-data DB variability is proved, which demonstrates the relationship between the generalization of NNs and DB variability.
In contrast to many existing generalization bounds based on hypothesis complexity, which require access to the weight norm, our bounds only need the model's predictions. This brings significant advantages for empirically approximating the generalization bound in (1) black-box settings, where model parameters are unavailable, and (2) over-parameterized settings, where calculating the weight norm imposes a prohibitively high computational burden.
To the best of our knowledge, this is the first work on explaining deep learning via the variability of decision boundaries. Our research also sheds light on understanding a variety of interesting phenomena, including the entropy and the complexity of the decision boundary.
Through the lens of decision boundary variability, one may also design novel algorithms that reduce the decision boundary variability.
\section{Related works}
\textbf{Deep learning theory.}
In learning theory, generalizability refers to the capability of well-trained models to predict on unseen data. Conventional theory suggests that generalizability has a negative correlation with hypothesis complexity \citep{mohri2018foundations}, such as the VC-dimension \citep{vapnik1994measuring} and Rademacher complexity \citep{bartlett2002rademacher}: models with larger complexity fit the training data better but generalize worse. This is usually summarized as the ``bias-variance trade-off''. This principle faces significant challenges in deep learning \citep{ma2020towards,he2020recent}.
\citet{zhang2021understanding} demonstrate that neural networks can near-perfectly fit noisy labels (which suggests that deep learning has an extremely large Rademacher complexity), yet still achieve impressive generalization performance. This conflict has drawn the attention of numerous researchers \citep{belkin2019reconciling,nakkiran2019deep,9257392}. \citet{belkin2019reconciling} show the unusual double descent phenomenon of the test error {\it w.r.t.} model size, followed by further works \citep{nakkiran2019deep,li2020benign}, which further casts doubt on the ``bias-variance trade-off''.
Many works attribute the success of neural networks to the effectiveness of the stochastic gradient descent (SGD) algorithm \citep{bottou2010large,hardt2016train,he2019control,9222567}. For example, {\citet{jin2017escape} show that} SGD can escape from local minima. The loss landscape of neural networks has also been extensively analyzed, and it has been proven that there are no spurious local minima for linear NNs \citep{kawaguchi2016deep,lu2017depth,zhou2017critical}. Nevertheless, this elegant property does not hold for general networks in which non-linear activation functions are involved \citep{he2020piecewise,Goldblum2020Truth}.
Recent studies also attempt to explore the implicit bias of neural networks in the over-parameterized regime. \citet{soudry2018implicit} show that over-parameterized networks converge to the max-margin solution when the training data are linearly separable. Further research has been conducted along this line \citep{NEURIPS2020_c76e4b2f,Lyu2020Gradient,chizat2020implicit}.
Empirical studies have also attempted to explain the decent performance of networks by uncovering their learning properties \citep{nakkiran2020deep,jiang2021assessing,9319542}. For instance, neural networks are shown to fit the low-frequency information first \citep{rahaman2019spectral,xu2019frequency} and then gradually learn more complex patterns \citep{kalimeris2019sgd} during training. \citet{he2020local} show that neural networks possess the unique property of local elasticity: the prediction on an input $\mathbf{x}'$ is not significantly perturbed when the network is updated via SGD at a training example $\mathbf{x}$ that is ``dissimilar'' to $\mathbf{x}'$. Similar phenomena are observed by \citet{fort2019stiffness}. Besides, \citet{papyan2020prevalence} uncover a novel phenomenon, neural collapse, which sheds light on interpreting the effectiveness of deep models \citep{fang2021exploring}.
\textbf{Decision boundaries in neural networks.} The decision boundary, which partitions the input space into regions with different labels, is an important notion in machine learning. Recent studies attempt to understand neural networks from the aspect of decision boundaries \citep{he2018decision,karimi2019characterizing,karimi2020decision}. \citet{alfarra2020on} employ tropical geometry to represent the decision boundary of neural networks. \citet{guan2020analysis} empirically show a negative correlation between the complexity of the decision boundary and the generalization performance of neural networks. \citet{mickisch2020understanding} reveal an insightful phenomenon that the distance from data to the decision boundary continuously decreases during training. More recently, researchers have uncovered that neural networks rely only on the most discriminative or the simplest features to construct the decision boundary \citep{ortiz2020hold,shah2020pitfalls}. Besides, \citet{samangouei2018explaingan} explain the predictions of neural networks by constructing examples crossing the decision boundary.
To the best of our knowledge, this paper is the first work on (1) theoretically characterizing the complexity of the decision boundary via a new measure, decision boundary variability, and (2) explaining the negative correlation between generalizability and decision boundary variability.
\textbf{Adversarial training.} It has been shown that adversarial examples, which are created by adding imperceptible perturbations to data, can completely mislead the predictions of neural networks \citep{szegedy2013intriguing,goodfellow2014explaining,9302639,9288740}. To tackle this problem, adversarial training was proposed to improve the robustness of neural networks by training on adversarial examples \citep{madry2017towards,carlini2017towards}. Nevertheless, \citet{su2018robustness} show a trade-off between robustness and the generalization performance of neural networks. \citet{zhang2019theoretically} explain the trade-off as follows: adversarial training leads to underfitting on ``normal'' samples and thus undermines model accuracy. Further, they mitigate this trade-off between robustness and accuracy by disentangling the robust error into a natural error and a boundary error, and then design new algorithms to decrease the boundary error. Moreover, \citet{zhang2020geometry} alleviate the trade-off by assigning small weights to ``non-important'' samples to reduce the model capacity, developing a reweighting framework that improves robustness while preserving accuracy.
\begin{figure*}[t]
\centering
\subfigure[Fake CIFAR-10]{
\begin{minipage}[b]{0.22\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figure/biggan_cifar10_sample.pdf}
\end{minipage}
\label{figure:biggan_cifar10}
}
\subfigure[Fake CIFAR-100]{
\begin{minipage}[b]{0.22\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figure/biggan_cifar100_sample.pdf}
\end{minipage}
\label{figure:biggan_cifar100}
}
\subfigure[Algorithm DB variability vs. test accuracy]{
\begin{minipage}[b]{0.46\textwidth}
\centering
\includegraphics[width=0.48\columnwidth]{figure/test_acc_vs_consistency_cifar10.pdf}
\includegraphics[width=0.48\columnwidth]{figure/test_acc_vs_consistency_cifar100.pdf}
\end{minipage}
\label{figure:result_cifar}
}
\caption{Algorithm decision boundary variability on CIFAR-10 and CIFAR-100. (a) Examples of fake CIFAR-10 images generated by conditional BigGAN. (b) Examples of fake CIFAR-100 images generated by conditional BigGAN.
(c) Scatter plots of algorithm DB variability against test accuracy for different architectures and training strategies on CIFAR-10 and CIFAR-100. The colors of {\color{dodgerblue}blue}, {\color{red}red}, and {\color{gold}yellow} points denote the architectures of VGG-16 (VGG), ResNet-18 (ResN), and WideResNet-28 (WRN), respectively. The shapes of \raisebox{-0.7pt}{\Large $\bullet$}, {\normalsize $\blacktriangle$}, and {\small $\blacksquare$} designate the training strategies of standard training (S), non-data-augmentation training (N), and adversarial training (A), respectively. Each point is calculated and then averaged over $10$ trials.}
\label{figure:read-word}
\end{figure*}
\section{Preliminaries}
We denote the training set by $\mathcal{S}=\{(\mathbf{x}_i, y_i), i=1,\dots,m\}$, where $\mathbf{x}_i\in \mathbb{R}^n$, $n$ is the dimension of input data, $y_i\in [k] = \{1,\dots,k\}$, $k$ is the number of classes, and $m=|\mathcal{S}|$ is the training sample size.
We assume that $(\mathbf{x}_i, y_i)$ are independent and identically distributed (i.i.d.) random variables drawn from the data generating distribution $\mathcal{D}$. Denote the classifier by $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^k$, which is a neural network parameterized by $\boldsymbol{\theta}$. The output of $f_{\boldsymbol{\theta}}(\mathbf{x})$ is a $k$-dimensional vector and is assumed to form a discrete probability distribution. Let $f^{(i)}_{\boldsymbol{\theta}}(\mathbf{x})$ be the $i$-th component of $f_{\boldsymbol{\theta}}(\mathbf{x})$, hence $\sum^k_{i=1}f^{(i)}_{\boldsymbol{\theta}}(\mathbf{x}) = 1$. We define $T(f_{\boldsymbol{\theta}}, \mathbf{x})=\{i\in \{1,\cdots,k\}| f^{(i)}_{\boldsymbol{\theta}}(\mathbf{x})=\max_{j} f^{(j)}_{\boldsymbol{\theta}}(\mathbf{x}) \}$ to denote the set of labels predicted by $f_{\boldsymbol{\theta}}$ on $\mathbf{x}$.
Due to the randomness of the learning algorithm $\mathcal{A}$, let $\mathbb{Q}(\boldsymbol{\theta})=\mathcal{A}(\mathcal{S})$ denote the posterior distribution returned by the learning algorithm $\mathcal{A}$ trained on the training set $\mathcal{S}$. Hence, we focus on the {\it Gibbs classifier} (a.k.a. random classifier) $f_\mathbb{Q}=\{f_{\boldsymbol{\theta}} \vert \boldsymbol{\theta}\sim \mathbb{Q}\}$. The $0$-$1$ loss is employed in this paper, and the expected risks in terms of $\boldsymbol{\theta}$ and $\mathbb{Q}$ are defined as:
\begin{equation*}
\mathcal{R}_\mathcal{D}(\boldsymbol{\theta}) = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}} \left[\mathbb{I}\left(y\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right]
\end{equation*}
and
\begin{equation*}
\mathcal{R}_\mathcal{D}(\mathbb{Q}) = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}} \left[\mathbb{I}\left(y\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right],
\end{equation*}
respectively. Here, $\mathbb{I}(\cdot)$ is the indicator function. Since the data generating distribution $\mathcal{D}$ is unknown, evaluating the expected risk $\mathcal{R}_\mathcal{D}$ is not practical. Therefore, it is common practice to estimate the expected risk by the empirical risk $\mathcal{R}_\mathcal{S}$, which is defined as:
\begin{align*}
\mathcal{R}_\mathcal{S}(\boldsymbol{\theta}) = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{S}}\left[\mathbb{I}\left(y\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] = \frac{1}{m}\sum_{i=1}^m \mathbb{I}\left(y_i\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}_i\right)\right)
\end{align*}
and
\begin{align*}
\mathcal{R}_\mathcal{S}(\mathbb{Q}) = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{S}}\mathbb{E}_{\boldsymbol{\theta} \sim \mathbb{Q}}\left[\mathbb{I}\left(y\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] = \frac{1}{m}\sum_{i=1}^m\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}} \left[ \mathbb{I}\left(y_i\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}_i\right)\right)\right],
\end{align*}
where $(\mathbf{x}_i,y_i)\in \mathcal{S}$ and $m=|\mathcal{S}|$.
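A minimal sketch of how these empirical risks can be estimated in practice, assuming the posterior $\mathbb{Q}$ is approximated by a finite collection of trained models; the toy predictors `model_a` and `model_b` below are hypothetical stand-ins for trained networks:

```python
def prediction_set(probs):
    """T(f, x): the set of indices attaining the maximum predicted score."""
    top = max(probs)
    return {i for i, p in enumerate(probs) if p == top}

def empirical_gibbs_risk(models, dataset):
    """Plug-in estimate of R_S(Q): the 0-1 loss I(y not in T(f, x)),
    averaged over a finite sample of models drawn from Q and over S."""
    total = 0.0
    for f in models:
        for x, y in dataset:
            total += 0.0 if y in prediction_set(f(x)) else 1.0
    return total / (len(models) * len(dataset))

# Toy dataset of two points and two hypothetical "models":
dataset = [([1.0, 0.0], 0), ([0.0, 1.0], 1)]
model_a = lambda x: x             # predicts the correct class on both points
model_b = lambda x: [x[1], x[0]]  # predicts the wrong class on both points
print(empirical_gibbs_risk([model_a, model_b], dataset))  # -> 0.5
```

Replacing `models` with a single model recovers $\mathcal{R}_\mathcal{S}(\boldsymbol{\theta})$.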
\subsection{Decision boundary}
Intuitively, if the output $k$-dimensional vector $f_{\boldsymbol{\theta}}(\mathbf{x})$ on an input example $\mathbf{x}$ has a tie, {\it i.e.}, more than one entry of the vector attains the maximum value, then $\mathbf{x}$ is considered to lie on the decision boundary of $f_{\boldsymbol{\theta}}$. Formally, the decision boundary is defined as below.
\begin{definition}[decision boundary]
\label{def:decision boundary}
Let $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^k$ be a neural network for classification parameterized by $\boldsymbol{\theta}$, where $n$ and $k$ are the dimensions of input and output, respectively. Then, the \textit{decision boundary} of $f_{\boldsymbol{\theta}}$ is defined by
\begin{equation*}
\{\mathbf{x}\in \mathbb{R}^n| \exists i\neq j \in [k], f_{\boldsymbol{\theta}}^{(i)}(\mathbf{x})=f_{\boldsymbol{\theta}}^{(j)}(\mathbf{x}) =\max_q f^{(q)}_{\boldsymbol{\theta}}(\mathbf{x})\}.
\end{equation*}
\end{definition}
Consequently, we have the following remark.
\begin{remark}
(1) If an input example $(\mathbf{x},y)$ is not located on the decision boundary of $f_{\boldsymbol{\theta}}$, $T(f_{\boldsymbol{\theta}},\mathbf{x})$ is a singleton, and we have,
\begin{equation*}
\mathbb{I}\left(y\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right) = \mathbb{I}\left(y \neq T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right).
\end{equation*}
(2) If the input $\mathbf{x}$ is a boundary point, in practice, we randomly draw a label from the set $T(f_{\boldsymbol{\theta}},\mathbf{x})$ as the prediction of $f_{\boldsymbol{\theta}}$ on $\mathbf{x}$.
\end{remark}
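The tie-breaking rule in part (2) of the remark can be sketched as follows; `predict_label` is an illustrative helper, not part of the paper's implementation:

```python
import random

def predict_label(probs, rng=random):
    """Single-label prediction: the argmax of the outputs; if the input
    lies on the decision boundary (a tie), a label is drawn uniformly at
    random from the tie set T(f, x), as in part (2) of the remark."""
    top = max(probs)
    ties = [i for i, p in enumerate(probs) if p == top]
    return ties[0] if len(ties) == 1 else rng.choice(ties)

print(predict_label([0.2, 0.5, 0.3]))                    # unique argmax -> 1
print(predict_label([0.4, 0.4, 0.2], random.Random(0)))  # boundary point: 0 or 1
```
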
\subsection{Adversarial training}
Adversarial training (AT)
enhances the adversarial robustness of neural networks against adversarial examples; in our empirical studies, adversarial examples are generated with the popular projected gradient descent (PGD) approach \citep{madry2018towards}. More specifically, adversarial training can be formulated as solving the following minimax problem,
\begin{equation}
\min _{\boldsymbol{\theta}} \frac{1}{m} \sum_{i=1}^{m} \max _{\left\|\mathbf{x}_{i}^{\prime}-\mathbf{x}_{i}\right\| \leq \gamma} \ell\left(f_{\boldsymbol{\theta}}\left(\mathbf{x}_{i}^{\prime}\right), y_{i}\right), \nonumber
\end{equation}
where $\gamma$ is the {\it radius} limiting the distance between adversarial examples and original examples. Intuitively, adversarial training enlarges the distance from each training example, which may formerly have been very close to the decision boundary, to at least $\gamma$.
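A minimal sketch of the PGD inner maximization, reduced to a one-dimensional logistic model so that the gradient is available in closed form; real adversarial training applies the same signed-gradient-and-project loop to image tensors, and all numbers below are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pgd_attack(x, y, w, b, radius, step, iters):
    """Sketch of the PGD inner maximization on an l_inf ball for a 1-D
    logistic model p(y=1|x) = sigmoid(w*x + b). Illustrative only."""
    x_adv = x
    for _ in range(iters):
        p = sigmoid(w * x_adv + b)
        grad = (p - y) * w                             # d(cross-entropy)/d(input)
        x_adv += step * (1.0 if grad > 0 else -1.0)    # signed-gradient ascent step
        x_adv = min(max(x_adv, x - radius), x + radius)  # project into the ball
    return x_adv

# A positive example slightly on the correct side of the boundary (w=1, b=0)
# is pushed to the far edge of the perturbation ball:
x_adv = pgd_attack(x=0.05, y=1, w=1.0, b=0.0, radius=0.2, step=0.05, iters=10)
print(round(x_adv, 2))  # -> -0.15 (the edge of the l_inf ball)
```

Training the model parameters on such `x_adv` instead of `x` is the outer minimization of the problem above.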
\subsection{Shapley value}
In cooperative game theory, the Shapley value is a tool to analyze the surplus brought by each player \citep{shapley1951notes}. Similarly, if we treat each feature contained in the data as a ``player'', the Shapley value can be used to measure the contribution of each feature to the predictions output by neural networks \citep{NIPS2017_7062}. For each input $\mathbf{x}$, we assume that $\mathbf{x}$ consists of $p$ features, {\it i.e.}, $\mathbf{x}=\{F_1,\ldots,F_p\}$. Then, the Shapley value of feature $F_j$ {\it w.r.t.} the model $f_{\boldsymbol{\theta}}$ is as follows,
\begin{align}
\phi_{j}(f_{\boldsymbol{\theta}})= \sum_{S \subseteq\left\{F_{1}, \ldots, F_{p}\right\} \backslash\left\{F_{j}\right\}} \frac{|S| !(p-|S|-1) !}{p !}\mathbb{E}_{\mathbf{x}\sim S}\left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}\cup \{F_j\}) \neq T(f_{\boldsymbol{\theta}}, \mathbf{x}) \right)\right]. \nonumber
\end{align}
According to the above definition, a small Shapley value $\phi_j$ indicates that feature $F_j$ has little impact on the prediction of the network $f_{\boldsymbol{\theta}}$. Therefore, the set of Shapley values $\{\phi_1,\ldots,\phi_p\}$ partly reveals how the well-trained network extracts knowledge from the input data and produces predictions.
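The combinatorial formula above can be evaluated exactly when the number of features $p$ is small. The sketch below uses a hypothetical `changed` function as a stand-in for the expectation term (returning 1 when adding feature $j$ flips the prediction); it also illustrates that the subset weights sum to one:

```python
from itertools import combinations
from math import factorial

def shapley_value(p, changed, j):
    """Exact Shapley value of feature j (0-indexed) among p features,
    following the combinatorial formula above. `changed(S, j)` is a toy
    stand-in for the expectation term: 1 if adding feature j to the
    feature subset S flips the model's prediction, else 0."""
    others = [i for i in range(p) if i != j]
    total = 0.0
    for size in range(len(others) + 1):
        for S in combinations(others, size):
            weight = factorial(len(S)) * factorial(p - len(S) - 1) / factorial(p)
            total += weight * changed(set(S), j)
    return total

# Toy model whose prediction flips exactly when feature 0 is added:
changed = lambda S, j: 1 if j == 0 else 0
print(shapley_value(3, changed, 0), shapley_value(3, changed, 1))  # -> 1.0 0.0
```

Since the weights sum to one, the always-influential feature receives the full Shapley value of $1$, and an irrelevant feature receives $0$.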
\section{Algorithm decision boundary variability}
Due to the randomness of learning algorithms, there is no doubt that the learned parameters vary substantially across training repeats. However, {\it do the decision boundaries in different training repeats also have a large discrepancy?}
Quantitatively, we define the algorithm decision boundary variability (AV) to measure the variability of DBs caused by the randomness of algorithms in different training repeats.
\iffalse
\begin{figure}[t]
\centering
\subfigure[STD]{
\centering
\begin{minipage}[b]{0.135\textwidth}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.9.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.8.pdf}
\vfill
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.7.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.6.pdf}
\vfill
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.5.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.4.pdf}
\vfill
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.3.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_noise0.1_intercept_0.2.pdf}
\end{minipage}
\label{figure:moon base}
}
\subfigure[ADV]{
\centering
\begin{minipage}[b]{0.135\textwidth}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.9.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.8.pdf}
\vfill
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.7.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.6.pdf}
\vfill
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.5.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.4.pdf}
\vfill
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.3.pdf}
\includegraphics[width=0.43\columnwidth]{figure/decision_boundary_advTrain_noise0.1_intercept_0.2.pdf}
\end{minipage}
\label{figure:moon adv}
}
\subfigure[AV vs. IC distance]{
\centering
\begin{minipage}[b]{0.215\textwidth}
\includegraphics[width=1.\columnwidth]{figure/moon_plot.pdf}
\end{minipage}
\label{figure:consistency vs sample complexity}
}
\subfigure[AV vs. layer width]{
\centering
\begin{minipage}[b]{0.44\textwidth}
\includegraphics[width=0.48\columnwidth]{figure/consistency_layer_width_2.pdf}
\includegraphics[width=0.48\columnwidth]{figure/consistency_layer_width_1.pdf}
\end{minipage}
\label{figure:consistency vs layer width}
}
\caption{Algorithm decision boundary variability on two half moon datasets. (a) Decision boundaries on the moon datasets with standard training (STD). (b) Decision boundaries on the two half moon datasets with adversarial training (ADV). (c) Plot of algorithm DB variability (AV) as a function of inter-class (IC) distance on the two half moon datasets. (d) Plots of algorithm DB variability (AV) as a function of layer width on the two half moon datasets with different inter-class distances. The darker lines show the average over seeds and the shaded area shows the standard deviations.}
\label{figure:consistency and complexity}
\end{figure}
\fi
\begin{definition}[algorithm decision boundary variability]
\label{def: consistencey}
Let $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^k$ be a neural network for classification parameterized by $\boldsymbol{\theta}$. Suppose $\mathbb{Q}(\boldsymbol{\theta})$ is the distribution over $\boldsymbol{\theta}$. Then, the algorithm decision boundary variability for $f_\mathbb{Q}$ on $\mathcal{D}$ is defined as below,
\begin{equation}
AV(f_\mathbb{Q},\mathcal{D})
= \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right], \nonumber
\end{equation}
where $T(f_{\boldsymbol{\theta}}, \mathbf{x})=\{i\in [k]| f^{(i)}_{\boldsymbol{\theta}}(\mathbf{x})=\max_{j} f^{(j)}_{\boldsymbol{\theta}}(\mathbf{x}) \}$.
\end{definition}
According to the definition, algorithm DB variability reflects the similarity of decision boundaries across different training repeats. An illustration of algorithm DB variability is provided in Figure \ref{figure:illustration}.
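In practice, the expectation in Definition \ref{def: consistencey} is approximated by a plug-in estimate over finitely many training repeats and probe points. A minimal sketch, with hypothetical hard labels standing in for the predictions of repeatedly trained models:

```python
from itertools import combinations

def algorithm_db_variability(predictions):
    """Plug-in estimate of AV(f_Q, D): given hard labels predicted by R
    independently trained models on the same probe points, return the
    disagreement rate averaged over all model pairs and probe points
    (approximating the expectation over theta, theta' ~ Q and x ~ D)."""
    n = len(predictions[0])
    pairs = list(combinations(predictions, 2))
    disagree = sum(sum(a[i] != b[i] for i in range(n)) for a, b in pairs)
    return disagree / (len(pairs) * n)

# Hypothetical predicted labels of 3 training repeats on 4 probe points:
preds = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 1],
]
print(algorithm_db_variability(preds))  # 4 disagreements over 12 pair-point cases
```

In the experiments below, the probe points are drawn from a generative model of the data distribution rather than from the training set itself.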
\begin{figure*}[t]
\centering
\subfigure[Algorithm DB variability and test error vs. training time]{
\begin{minipage}[b]{\textwidth}
\centering
\includegraphics[width=0.235\columnwidth]{figure/training_process_variability_cifar10_lr_01.pdf}
\includegraphics[width=0.24\columnwidth]{figure/training_process_variability_cifar10_lr_001.pdf}
\includegraphics[width=0.23\columnwidth]{figure/training_process_variability_cifar100_lr_01.pdf}
\includegraphics[width=0.23\columnwidth]{figure/training_process_variability_cifar100_lr_001.pdf}
\end{minipage}
\label{figure:training process plot}
}
\subfigure[Test error vs. algorithm DB variability]{
\begin{minipage}[b]{\textwidth}
\centering
\includegraphics[width=0.23\columnwidth]{figure/training_process_scatter_plot_variability_vs_loss_cifar10.pdf}
\includegraphics[width=0.233\columnwidth]{figure/training_process_scatter_plot_variability_vs_loss_cifar100.pdf}
\includegraphics[width=0.23\columnwidth]{figure/training_process_scatter_plot_variability_vs_error_cifar100_lr_01.pdf}
\includegraphics[width=0.23\columnwidth]{figure/training_process_scatter_plot_variability_vs_error_cifar100_lr_001.pdf}
\end{minipage}
\label{figure:training process scatter plot}
}
\caption{(a) Plots of algorithm DB variability and test error as functions of training time (LR is the learning rate). (b) Scatter plots of test error against algorithm DB variability (LR is the learning rate). The points are collected from different epochs. Each curve and point is calculated and then averaged over $10$ trials.}
\label{figure:training process}
\end{figure*}
\subsection{Algorithm DB variability and generalization}
\label{sec:real data}
\iffalse
\begin{figure}[t]
\centering
\subfigure[Algorithm DB variability vs. test accuracy]{
\begin{minipage}[b]{0.455\textwidth}
\centering
\includegraphics[width=0.47\columnwidth]{figure/test_acc_vs_consistency_cifar10.pdf}
\includegraphics[width=0.47\columnwidth]{figure/test_acc_vs_consistency_cifar100.pdf}
\end{minipage}
\label{figure:result_cifar}
}
\subfigure[\blue{Algorithm DB variability vs. Epoch}]{
\begin{minipage}[b]{0.51\textwidth}
\centering
\includegraphics[width=0.48\columnwidth]{figure/label_noise_variability_cifar10.pdf}
\includegraphics[width=0.48\columnwidth]{figure/label_noise_variability_cifar100.pdf}
\end{minipage}
\label{figure:label noise}
}
\caption{Algorithm decision boundary variability on CIFAR-10 and CIFAR-100.
(a) Scatter plots of algorithm DB variability to accuracy on test set with different architectures and training strategies on CIFAR-10 and CIFAR-100. The colors of {\color{dodgerblue}blue}, {\color{red}red}, and {\color{gold}yellow} points denote the architectures of VGG-16 (VGG), ResNet-18 (ResN), and WideResNet-28 (WRN), respectively. The shapes of \raisebox{-0.7pt}{\Large $\bullet$}, {\normalsize $\blacktriangle$}, and {\small $\blacksquare$} designate the training strategies of standard training (S), non-data-augmentation training (N), and adversarial training (A), respectively. \blue{(b) Plots of algorithm DB variability and test error as a function of training epoch with the existence of $20\%$ training label noise on CIFAR-10 and CIFAR-100.} Each point is calculated and then averaged on $10$ trials.}
\label{figure:read-word}
\end{figure}
\fi
To explore the relationship between algorithm DB variability and generalization in neural networks, we conduct experiments with various popular architectures, VGG-16 \citep{simonyan2014very}, ResNet-18 \citep{he2016deep}, and WideResNet-28 \citep{Zagoruyko2016WRN}, on the standard datasets CIFAR-10 and CIFAR-100. In detail, each architecture is optimized with standard training, training without data augmentation, and adversarial training, until the training procedure converges.
Each training setting (determined by dataset, architecture, and training strategy) is repeated for $10$ trials with different random seeds to estimate the parameter distribution $\mathbb{Q}(\boldsymbol{\theta})$. To simulate the data generating distribution, we trained two conditional BigGANs \citep{zhao2020differentiable}
to produce $100,000$ fake (or, synthetic) images for CIFAR-10 and CIFAR-100, respectively. Examples of the fake images are shown in Figures \ref{figure:biggan_cifar10} and \ref{figure:biggan_cifar100}.
These fake images enable estimating the algorithm DB variability. For every training setting, we plot the average test accuracy against the algorithm DB variability, as shown in Figure \ref{figure:result_cifar}. From the plots, we obtain the following four observations: (1) adversarial training dramatically decreases the test accuracy and increases the algorithm DB variability compared to standard training.
(2) data augmentation decreases the algorithm decision boundary variability.
Intuitively, images augmented by cropping or flipping still lie on the data generating distribution, so data augmentation effectively expands the training set; the expanded training set can thus characterize the decision boundaries on the data generating distribution more widely. (3) WideResNet has better test accuracy and lower algorithm DB variability than ResNet and VGG; and (4) a negative correlation exists between the test accuracy and the algorithm DB variability. Based on these observations, we propose the following conjecture.
\begin{hypothesis}
\label{hypothesis:consistency}
{\it Neural networks with smaller algorithm decision boundary variability on the data generating distributions possess better generalization performance.}
\end{hypothesis}
We then conduct experiments on the algorithm DB variability {\it w.r.t.} training time, sample size, and label noise to further verify this hypothesis.
\begin{figure*}[t]
\centering
\subfigure[Algorithm DB variability vs. sample size]{
\begin{minipage}[b]{0.46\textwidth}
\centering
\includegraphics[width=0.485\columnwidth]{figure/sample_size_cifar10.pdf}
\includegraphics[width=0.48\columnwidth]{figure/sample_size_cifar100.pdf}
\end{minipage}
\label{figure:sample size plot}
}
\subfigure[Algorithm DB variability vs. time (label noise)]{
\begin{minipage}[b]{0.46\textwidth}
\centering
\includegraphics[width=0.485\columnwidth]{figure/label_noise_variability_cifar10.pdf}
\includegraphics[width=0.48\columnwidth]{figure/label_noise_variability_cifar100.pdf}
\end{minipage}
\label{figure:label noise}
}
\caption{(a) Plots of algorithm DB variability and test error as functions of training sample size on CIFAR-10 and CIFAR-100. (b) Plots of algorithm DB variability and test error as functions of training time in the presence of $20\%$ label noise on CIFAR-10 and CIFAR-100. Each curve is calculated and then averaged over $10$ trials.}
\label{figure:sample size}
\end{figure*}
\subsection{Algorithm DB variability and training time}
\label{sec:training process}
To investigate the relationship between algorithm DB variability and training time, we train $40$ ResNet-18 models with initial learning rates of $0.1$ and $0.01$ on CIFAR-10 and CIFAR-100. The algorithm DB variability and test error are then calculated at each epoch; see Figure \ref{figure:training process plot}. From the plots, two observations can be made: (1) algorithm DB variability and test error share a very similar curve {\it w.r.t.} the training time; and (2) algorithm DB variability decreases during the training process. The decline of algorithm DB variability shows that the interpolation of the training examples reduces the variability of decision boundaries on the data generating distribution. As shown in Figure \ref{figure:training process scatter plot}, we collect the points of (algorithm DB variability, test error) from different epochs; the scatter plots present a significant positive correlation between test error and algorithm DB variability, thus supporting Hypothesis \ref{hypothesis:consistency}.
\subsection{Algorithm DB variability and sample size}
\label{sec:sample size}
We next investigate how the sample size influences the algorithm DB variability. $100$ ResNet-18 models are trained on five training sets of different sizes randomly drawn from CIFAR-10 and CIFAR-100, while all irrelevant variables are strictly controlled. The algorithm DB variability and test error are then calculated in all cases; see Figure \ref{figure:sample size plot}. From the plots, we have the following three observations: (1) test error and algorithm DB variability share a very similar curve {\it w.r.t.} the training sample size; (2) a larger sample size, which intuitively helps obtain a smoother estimate of the decision boundary, also contributes to smaller algorithm DB variability; and (3) there is a significant positive correlation between test error and algorithm DB variability, which fully supports Hypothesis \ref{hypothesis:consistency}.
\subsection{Algorithm DB variability and label noise}
\label{sec:label noise}
\citet{belkin2019reconciling,nakkiran2019deep} show a surprising epoch-wise double descent of the test error, especially in the presence of label noise. In this section, we explore the trend of algorithm DB variability when label noise exists. $20$ ResNet-10 models are trained for $500$ epochs with a constant learning rate of $0.0001$ on CIFAR-10 and CIFAR-100 with $20\%$ label noise. We clarify that the noisy labels remain constant across different training repeats, which is necessary to estimate the algorithm DB variability. The average test error and algorithm DB variability are then calculated at every training epoch, as shown in Figure \ref{figure:label noise}. From the plots, two observations can be derived: (1) the algorithm DB variability also undergoes an epoch-wise double descent during the training process, especially in the left panel of Figure \ref{figure:label noise}; and (2) test error and algorithm DB variability still share a very similar curve {\it w.r.t.} the training time in the presence of label noise, which implies that factors that influence the generalization of networks can also influence the algorithm DB variability. Hence, algorithm DB variability is an excellent indicator of the generalization ability of networks.
Here, we propose an insightful explanation of the epoch-wise double descent of the algorithm DB variability {\it w.r.t.} the training time: the initial increase of algorithm DB variability shows that fitting the noisy examples has a considerable effect on the formation of decision boundaries on the data generating distribution. However, the algorithm DB variability decreases again as training proceeds further, which indicates that the negative effect of fitting the noisy training examples is gradually weakened.
In other words, as training proceeds, neural networks can automatically reduce the impact of interpolating hard-to-fit examples,
which is insightful in explaining the decent generalizability of neural networks.
\subsection{DB variability and feature extraction stability}
\label{sec:feature extraction stability}
Neural networks also rely on the features of the input data to make predictions. Based on the Shapley value, we may evaluate the impact of individual features on the predictions. If the contributions of the features employed by the well-trained network fluctuate little across repeated training runs, the neural network is said to possess strong feature extraction stability. In this section, we explore the relationship between algorithm DB variability and feature extraction stability.
We first construct a binary classification dataset with four notable features based on the MNIST and Fashion-MNIST \citep{xiao2017/online} datasets, as shown in Figure \ref{figure:f_mnist_examples}, where the top left part of the image shows MNIST digits from the classes $\{5,4\}$, the top right part is from the classes $\{\text{T-shirt, pullover}\}$ of Fashion-MNIST, the bottom left part is from the classes $\{\text{trouser, dress}\}$ of Fashion-MNIST, and the bottom right part shows MNIST digits from the classes $\{8,7\}$.
We train VGG-16, ResNet-18, and WideResNet-28 on the synthetic binary classification dataset $10$ times each with different random seeds, and then compute the Shapley values for each well-trained model.
The results are presented in Figures \ref{figure:shapley_value} and \ref{figure:shapley_value_std}. Figure \ref{figure:shapley_value} shows the Shapley values of the different features ($F_1,F_2,F_3,F_4$) {\it w.r.t.} different models, where the error bars, {\it i.e.}, the standard deviations, are represented by the purple lines. The error bars reflect the network's stability in extracting features from the dataset.
The standard deviations of the Shapley values are presented separately in Figure \ref{figure:shapley_value_std}. From the figure, we observe that WideResNet-28, which possesses better generalization performance, also has much smaller standard deviations of the Shapley values than VGG-16 and ResNet-18. Therefore, given the positive correlation between the generalization performance, the algorithm DB variability, and the feature extraction stability, we conjecture that the algorithm DB variability reflects the feature extraction stability and is thus closely connected to the network's generalizability.
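With only four features, the Shapley values can even be computed exactly by enumerating all feature subsets. The sketch below assumes a value function $v(S)$ that returns the model's score when only the features in $S$ are kept (the masking scheme behind \texttt{value} and all names are our own illustrative assumptions); for an additive value function, the Shapley values recover the individual contributions exactly:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a small feature set.

    features: list of feature names. value: callable mapping a frozenset
    of features to the model's score when only those features are kept
    (the masking scheme behind `value` is an assumption of this sketch).
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                s = frozenset(subset)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# toy additive value function: Shapley recovers each feature's contribution
contrib = {"F1": 0.4, "F2": 0.3, "F3": 0.2, "F4": 0.1}
value = lambda s: sum(contrib[f] for f in s)
phi = shapley_values(list(contrib), value)
print(phi)  # approximately {'F1': 0.4, 'F2': 0.3, 'F3': 0.2, 'F4': 0.1}
```

Repeating this computation over the $10$ retrained models and taking the standard deviation of each $\phi_f$ gives the feature extraction stability measured in the figure.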
\begin{figure*}[t]
\centering
\subfigure[Negative and positive examples]{
\begin{minipage}[b]{0.46\textwidth}
\centering
\includegraphics[width=0.48\columnwidth]{figure/f_mnist_negative_sample.pdf}
\includegraphics[width=0.48\columnwidth]{figure/f_mnist_positive_sample.pdf}
\end{minipage}
\label{figure:f_mnist_examples}
}
\subfigure[Shapley value]{
\begin{minipage}[b]{0.22\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figure/f_mnist_shapley_value.pdf}
\end{minipage}
\label{figure:shapley_value}
}
\subfigure[Std of Shapley value]{
\begin{minipage}[b]{0.22\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figure/f_mnist_shapley_std.pdf}
\end{minipage}
\label{figure:shapley_value_std}
}
\caption{(a) Examples of synthetic binary dataset; (b) Shapley values on the synthetic dataset; (c) Standard deviation (std) of Shapley values on the synthetic dataset.}
\label{figure:feature and shalpey}
\end{figure*}
\subsection{Theoretical Evidence}
\label{sec:theoretical evidence about algorithm variability}
In this section, we explore and develop the theoretical foundations for the algorithm decision boundary variability on data generating distributions.
In most practical cases, the dimension of the decision boundary is smaller than that of the data space; for example, the dimension of a decision boundary in a three-dimensional data space is usually two. Thus, we may make the following mild assumption.
\begin{assumption}
\label{assumption:null boundary}
The decision boundary of the classifier network $f_{\boldsymbol{\theta}}$ on data generating distribution $\mathcal{D}$ is a set with measure zero.
\end{assumption}
We then have the following lemma.
\begin{lemma}
Let $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^k$ be a classifier network parameterized by $\boldsymbol{\theta}$. If Assumption \ref{assumption:null boundary} holds for all $\boldsymbol{\theta}\sim\mathbb{Q}$, then, for all $i\in \{1,\cdots,k\}$, we have
\begin{equation}
\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathbb{I}(i\in T(f_{\boldsymbol{\theta}},\mathbf{x}))\right] = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathbb{I}(T(f_{\boldsymbol{\theta}},\mathbf{x})=i)\right] \nonumber
\end{equation}
and
\begin{equation}
\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathbb{I}(i\notin T(f_{\boldsymbol{\theta}},\mathbf{x}))\right] = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathbb{I}(T(f_{\boldsymbol{\theta}},\mathbf{x})\neq i)\right]. \nonumber
\end{equation}
\end{lemma}
Then, we can prove the following theorem.
\begin{theorem}[lower bound on expected risk]
\label{thm:lower bound}
Let $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^k$ be a neural network for classification parameterized by $\boldsymbol{\theta}$. Suppose $\mathbb{Q}(\boldsymbol{\theta})$ is the distribution over $\boldsymbol{\theta}$. Then, if Assumption \ref{assumption:null boundary} holds for all $\boldsymbol{\theta}\sim\mathbb{Q}$, we have,
\begin{equation}
\mathcal{R}_\mathcal{D}(\mathbb{Q}) \geq \frac{AV(f_\mathbb{Q},\mathcal{D})}{2}, \nonumber
\end{equation}
where $AV(f_\mathbb{Q},\mathcal{D})$ is the algorithm DB variability for $f_\mathbb{Q}$ on data generating distribution $\mathcal{D}$.
\end{theorem}
Theorem \ref{thm:lower bound} provides a lower bound on the expected risk $\mathcal{R}_\mathcal{D}(\mathbb{Q})$ based on the algorithm DB variability $AV(f_\mathbb{Q},\mathcal{D})$. Moreover, when we consider binary classification, {\it i.e.}, $k=2$, there is a tighter lower bound.
\begin{theorem}[lower bound for binary case]
\label{thm:lower bound for binary case}
Let $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^2$ be a binary classifier network parameterized by $\boldsymbol{\theta}$ and let $\mathbb{Q}(\boldsymbol{\theta})$ be the distribution over $\boldsymbol{\theta}$. Suppose the expected risk $\mathcal{R}_\mathcal{D}(\mathbb{Q})\leq \frac{1}{2}$ and Assumption \ref{assumption:null boundary} hold for all $\boldsymbol{\theta}\sim \mathbb{Q}$, then we have
\begin{equation}
\mathcal{R}_{\mathcal{D}}(\mathbb{Q}) \geq \frac{1-\sqrt{1-2AV(f_\mathbb{Q},\mathcal{D})}}{2}. \nonumber
\end{equation}
\end{theorem}
These lower bounds show that the Gibbs classifier $f_\mathbb{Q}$ suffers a significant expected risk when its algorithm DB variability is large. Conversely, a small algorithm DB variability decreases the bound and facilitates the generalization performance of $f_\mathbb{Q}$.
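For intuition, the two lower bounds can be compared numerically. A small sketch (the function names are ours): for any $AV \in [0, \frac{1}{2}]$ the binary bound dominates the general one, since $\frac{1-\sqrt{1-2a}}{2} \geq \frac{a}{2}$ follows from $(1-a)^2 \geq 1-2a$.

```python
import math

def general_bound(av):
    # general lower bound: R >= AV / 2
    return av / 2.0

def binary_bound(av):
    # binary-case bound: R >= (1 - sqrt(1 - 2 AV)) / 2, valid for AV <= 1/2
    return (1.0 - math.sqrt(1.0 - 2.0 * av)) / 2.0

for av in (0.0, 0.1, 0.3, 0.5):
    # the binary bound is indeed tighter (never smaller)
    assert binary_bound(av) >= general_bound(av) - 1e-12
    print(f"AV={av:.1f}: general={general_bound(av):.4f}, binary={binary_bound(av):.4f}")
```

For instance, at $AV = 0.5$ the binary bound already forces $\mathcal{R}_\mathcal{D}(\mathbb{Q}) \geq 0.5$, whereas the general bound only gives $0.25$.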
\section{Data decision boundary variability}
In the previous sections, we introduced the algorithm DB variability, which measures the decision boundary variability caused by the randomness of learning algorithms.
In this section, we define the data DB variability to characterize decision boundary variability caused by the randomness in training data.
\begin{definition}[data decision boundary variability]
\label{def:complexity of db}
Let $f_{\boldsymbol{\theta}}(\mathbf{x}): \mathbb{R}^n \rightarrow \mathbb{R}^k$ be a neural network for classification parameterized by $\boldsymbol{\theta}$, where $\boldsymbol{\theta}\sim\mathcal{A}(\mathcal{S})$ is returned by leveraging the stochastic learning algorithm $\mathcal{A}$ on the training set $\mathcal{S}$, which is sampled from the data generating distribution $\mathcal{D}$.
We term $\mathcal{S}_\eta \subset \mathcal{S}$ an $\eta\textit{-subset}$ of $\mathcal{S}$ if $\frac{|\mathcal{S}_\eta|}{|\mathcal{S}|}=\eta$. Then, if we fix $\eta$ and
\begin{equation}
\inf_{\mathcal{S}_\eta \subset \mathcal{S}} \mathbb{E}_{\mathcal{D}} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] = \epsilon, \nonumber
\end{equation}
the decision boundary of $f_{\mathcal{A}(\mathcal{S})}$ is said to possess an $(\epsilon, \eta)\textit{-data decision boundary variability}$.
\end{definition}
\begin{figure*}[t]
\centering
\subfigure[$\eta$-$\epsilon$ curves on CIFAR-10]{
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=1.\columnwidth]{figure/cifar10_complexity_curve.pdf}
\end{minipage}
\label{figure:cifar10_complexity_curve}
}
\subfigure[$\eta$-$\epsilon$ curves on CIFAR-100]{
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=1.\columnwidth]{figure/cifar100_complexity_curve.pdf}
\end{minipage}
\label{figure:cifar100_complexity_curve}
}
\subfigure[Schematic diagram]{
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=1.\columnwidth]{figure/complexity_curve.pdf}
\end{minipage}
\label{figure:idea_complexity curve}
}
\caption{(a) The $\eta$-$\epsilon$ curves on CIFAR-10 with different training sample sizes $2000$ ($m_{2000}$), $5000$ ($m_{5000}$), $10000$ ($m_{10000}$), $20000$ ($m_{20000}$), and $50000$ ($m_{50000}$), respectively. (b) The $\eta$-$\epsilon$ curves on CIFAR-100 with different training sample sizes. (c) The schematic diagram of the $\eta$-$\epsilon$ curves {\it w.r.t.} small ($m_s$), medium ($m_m$), large ($m_l$), and infinite ($m_\infty$) sample sizes, respectively.}
\label{figure:complexity_curve}
\end{figure*}
An illustration of data DB variability is presented in Figure \ref{figure:illustration data DB}.
The data decision boundary variability involves two parameters, $\epsilon$ and $\eta$. That the Gibbs classifier $f_{\mathcal{A}(\mathcal{S})}$ has an $(\epsilon,\eta)$-data DB variability means that a proportion $\eta$ of $\mathcal{S}$, {\it i.e.}, $\mathcal{S}_\eta$ (which can be regarded as a ``support vector set''), is enough to reconstruct a similar decision boundary with reconstruction error $\epsilon$. The data DB variability can also be connected with the complexity of decision boundaries if we assume that simpler decision boundaries rely on a smaller number of ``support vectors''; we provide a detailed discussion in Section \ref{app:complexity of db}.
\subsection{\texorpdfstring{$\eta$-$\epsilon$}{eta-epsilon} curves of the data DB variability}
\label{sec:consistency and complexity}
According to Definition \ref{def:complexity of db}, the data DB variability degrades to the algorithm DB variability $AV(f_\mathbb{Q},\mathcal{D})$ when $\mathcal{S}_\eta=\mathcal{S}$; in other words, the algorithm DB variability is a special case of the data DB variability with $\eta=1$.
Therefore, the data DB variability provides more detailed information on how the decision boundary variability depends on the training set, especially when we observe the variation of the reconstruction error $\epsilon$ {\it w.r.t.} different $\eta$.
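Given predictions from models retrained on $\mathcal{S}$ and on a candidate $\mathcal{S}_\eta$, the reconstruction error $\epsilon$ can be estimated by the average disagreement over all cross pairs of runs. A minimal sketch, assuming prediction arrays of shape (runs, examples) saved on a common evaluation sample (the layout and names are our own):

```python
import numpy as np

def reconstruction_error(preds_full, preds_sub):
    """Monte Carlo estimate of the reconstruction error epsilon.

    preds_full: (n_runs, n_examples) predictions of models trained on S;
    preds_sub:  (n_runs', n_examples) predictions of models retrained on
    the eta-subset S_eta. Averages I(T(f_theta, x) != T(f_theta', x))
    over all cross pairs (theta, theta') and all evaluation examples.
    """
    errs = [float(np.mean(pf != ps)) for pf in preds_full for ps in preds_sub]
    return float(np.mean(errs))

# sanity checks with constant-prediction ensembles
p = np.zeros((3, 50), dtype=int)
q = np.ones((2, 50), dtype=int)
print(reconstruction_error(p, p))  # 0.0 (identical decision boundaries)
print(reconstruction_error(p, q))  # 1.0 (total disagreement)
```

Sweeping this estimate over a grid of $\eta$ values traces out the $\eta$-$\epsilon$ curve studied below.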
To explore the relationship between the reconstruction error $\epsilon$ and the subset proportion $\eta$,
we train $1,000$ ResNet-18 networks on CIFAR-10 and CIFAR-100 with different sample sizes $m$. Although finding the most suitable $\eta$-subset is intractable, we adopt a coreset selection approach named {\it selection via proxy} \citep{coleman2020selection}, which can rank the importance of training examples, to estimate the $\eta$-subset for a given training set $\mathcal{S}$ and proportion $\eta$.
Then, by repeatedly training the network on $\mathcal{S}_\eta$, we can estimate the reconstruction error $\epsilon$. The $\eta$-$\epsilon$ curves of CIFAR-10 and CIFAR-100 are presented in Figures \ref{figure:cifar10_complexity_curve} and \ref{figure:cifar100_complexity_curve}, respectively. From the plots, we observe that {\it $\epsilon$ declines more rapidly at small $\eta$}, and the algorithm DB variability is also smaller, when the training sample size $m$ is larger. Furthermore, we plot the schematic diagram of the $\eta$-$\epsilon$ curves {\it w.r.t.} different sample sizes $m$, as shown in Figure \ref{figure:idea_complexity curve}. When $\eta=0$, $f_{\mathcal{A}(\mathcal{S}_\eta)}$ cannot be better than random guessing, and hence $\epsilon=\frac{k-1}{k}$, where $k$ is the number of potential categories. It is worth noting that $\epsilon$ drops more sharply along with $\eta$ when the sample size $m$ is larger. Therefore, we propose the following assumption, which is also illustrated by the right angle of the $m_\infty$ curve in Figure \ref{figure:idea_complexity curve}.
\begin{assumption}
\label{assumption:coverge}
If $m\rightarrow \infty$, we have $\epsilon\rightarrow 0$ when $\eta\rightarrow 0$.
\end{assumption}
These plots indicate that the area under the $\eta$-$\epsilon$ curve could be a more meticulous predictor of the generalization ability of neural networks than the algorithm DB variability, which corresponds only to the single point $\eta=1$ on the curve. Hence, the area under the $\eta$-$\epsilon$ curve can also be considered an extension of the algorithm DB variability: if the Gibbs classifier $f_{\mathcal{A}(\mathcal{S})}$ possesses a smaller area under the $\eta$-$\epsilon$ curve, it produces more stable decision boundaries under varying training subsets.
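The area under the $\eta$-$\epsilon$ curve can be computed from measured $(\eta, \epsilon)$ pairs by the trapezoidal rule; the endpoint at $\eta=1$ is the algorithm DB variability, and the endpoint at $\eta=0$ is $\frac{k-1}{k}$. A sketch with hypothetical (not measured) values for $k=10$ classes:

```python
import numpy as np

# hypothetical (eta, epsilon) measurements; eta=0 gives (k-1)/k = 0.9 for k=10
etas = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
epsilons = np.array([0.90, 0.45, 0.30, 0.18, 0.12, 0.09, 0.08])

# trapezoidal area under the eta-epsilon curve (smaller -> more stable DBs)
auc = float(np.sum((epsilons[1:] + epsilons[:-1]) / 2.0 * np.diff(etas)))
print(round(auc, 4))  # 0.221
```

Comparing this scalar across architectures or sample sizes summarizes the whole curve, rather than only its $\eta=1$ endpoint.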
\subsection{Theoretical evidence}
\label{sec:theoretical evidence about dataset variability}
In this section, we develop the theoretical foundations for the data decision boundary variability. Our theory suggests that {\it neural networks with smaller data DB variability possess better generalization}, which is consistent with the empirical observations above.
According to the definition of data decision boundary variability, the $\eta$-subset $\mathcal{S}_\eta$ plays a role similar to that of a ``support vector set'', and the complement set $\mathcal{S}\backslash\mathcal{S}_\eta = \mathcal{S} - \mathcal{S}_\eta$ is supposed to be sampled from a simpler distribution than $\mathcal{D}$. Therefore, we make the following mild assumption.
\begin{assumption}
\label{assumption:data db variabilty}
Examples in $\mathcal{S}\backslash\mathcal{S}_\eta$ are assumed to be drawn from the distribution $\mathcal{D}_1$, where for all $(\mathbf{x},y)\sim \mathcal{D}_1$, $\mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}_\eta)}[\mathbb{I}(y\in T(f_{\boldsymbol{\theta}},\mathbf{x}))]=\max_{i\in [k]}\mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}_\eta)}[\mathbb{I}(i\in T(f_{\boldsymbol{\theta}},\mathbf{x}))]$ holds, and
\begin{align}
&\mathbb{E}_{\mathcal{D}_1} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] \nonumber \\
& \leq \mathbb{E}_{\mathcal{D}} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right]=\epsilon \nonumber
\end{align}
\end{assumption}
\begin{remark}
Assumption \ref{assumption:data db variabilty} can also be stated as follows: the data ($\mathcal{D}_1$) correctly classified by $f_{\mathcal{A}(\mathcal{S}_\eta)}$ possesses a lower data decision boundary variability than the average data decision boundary variability on $\mathcal{D}$.
\end{remark}
Given Assumption \ref{assumption:data db variabilty}, we can further derive a probably approximately correct bound {\it w.r.t.} $\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))$, as shown in the following lemma.
\begin{lemma}
\label{lemma:complement set bound}
If the decision boundaries of $f_{\mathcal{A}(\mathcal{S})}$ possess an $(\epsilon,\eta)$-data DB variability and Assumption \ref{assumption:data db variabilty} holds, then, with probability at least $1-\delta$ over a sample of size $m$, we have
\begin{equation}
\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) \leq \epsilon + \sqrt{\frac{1}{2(1-\eta)m}\log\frac{1}{\delta}} \nonumber
\end{equation}
\end{lemma}
\begin{figure}[t]
\centering
\subfigure{
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=0.95\columnwidth]{figure/coreset_complement_acc_cifar10.pdf}
\end{minipage}
\label{figure:coreset_complement_acc_cifar10}
}
\subfigure{
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=0.95\columnwidth]{figure/coreset_complement_acc_cifar100.pdf}
\end{minipage}
\label{figure:coreset_complement_acc_cifar100}
}
\caption{Scatter of $\epsilon$ ($y$-axis) and $\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))$ ($x$-axis) on CIFAR-10 and CIFAR-100.}
\label{figure:coreset_complement_acc}
\end{figure}
We also conduct experiments to show the correlation between $\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))$ and $\epsilon$; see Figure \ref{figure:coreset_complement_acc}. From the plots, we observe that $\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))\leq \epsilon$ holds consistently when $\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))$ is small (less than about $0.5$).
\begin{lemma}
\label{lemma:risk diff}
If the decision boundaries of $f_{\mathcal{A}(\mathcal{S})}$ possess an $(\epsilon, \eta)$-data DB variability, then we have
\begin{equation}
\left| \mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S})) - \mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}_\eta)) \right| \leq \epsilon. \nonumber
\end{equation}
\end{lemma}
Lemma \ref{lemma:risk diff} shows that the difference between the expected risk of $\mathcal{A}(\mathcal{S})$ and $\mathcal{A}(\mathcal{S}_\eta)$ can be bounded by their difference in decision boundaries. Then, we continue to prove the generalization bound with data decision boundary variability.
\iffalse
Then, by applying the concentration inequality, we can bound the difference between their empirical risk on the same training set $\mathcal{S}$ with the probability $1-\delta$:
\begin{lemma}
\label{lemma:bound empirical risk}
If the decision boundaries of $f_{\mathcal{A}(\mathcal{S})}$ possess a $(\epsilon, \eta)$-data DB variability, then, with probability of at least $1-\delta$ over a sample of size $m$, we have
\begin{equation}
\mathcal{R}_\mathcal{S}(\mathcal{A}(\mathcal{S}_\eta)) \leq \mathcal{R}_\mathcal{S}(\mathcal{A}(\mathcal{S})) + \sqrt{\frac{1}{2m}\log\frac{1}{\delta}} + \epsilon,
\end{equation}
where $m=|\mathcal{S}|$ is the training sample size. If we further assume the training error $\mathcal{R}_\mathcal{S}(\mathcal{A}(\mathcal{S}))=0$, then, with the probability of at least $1-\delta$ over a sample of size $m$, we have
\begin{equation}
\mathcal{R}_\mathcal{S}(\mathcal{A}(\mathcal{S}_\eta)) \leq \sqrt{\frac{1}{2m}\log\frac{1}{\delta}} + \epsilon \nonumber
\end{equation}
\end{lemma}
\fi
\begin{theorem}[data DB variability-based upper bound on expected risk]
\label{thm:core}
If the decision boundaries of $f_{\mathcal{A}(\mathcal{S})}$ possess an $(\epsilon, \eta)$-data DB variability on the data generating distribution $\mathcal{D}$, $\eta\leq 0.5$, and Assumption \ref{assumption:data db variabilty} holds, then, with probability at least $1-\delta$ over a sample of size $m$, we have
\begin{equation}
\label{eq:core}
\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S})) \leq \Omega + \sqrt{4\Omega\Delta} + 8\Delta + \epsilon,
\end{equation}
where
\begin{equation}
\Omega=\epsilon + \sqrt{\frac{1}{2(1-\eta)m}\log\frac{1}{\delta}}, \nonumber
\end{equation}
\begin{equation}
\Delta=\eta\log\frac{e}{\eta}+\frac{1}{m}\log\frac{2}{\delta}. \nonumber
\end{equation}
Moreover, for sufficiently large $m$, we have
\begin{equation}
\label{eq: dataset variability complexity}
\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S})) \leq \mathcal{O}\left(\frac{1}{\sqrt{m}}+\epsilon+\eta\log\frac{1}{\eta}\right).
\end{equation}
\end{theorem}
According to Assumption \ref{assumption:coverge}, when $m\rightarrow \infty$, we have $\eta\rightarrow 0$ and $\epsilon\rightarrow 0$; then, according to Eq. (\ref{eq: dataset variability complexity}), $\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}))\rightarrow 0$. Therefore, the generalization bound converges asymptotically. Theorem \ref{thm:core} suggests that a smaller data DB variability
corresponds to a tighter upper bound on the expected risk, which theoretically verifies the relationship between the data DB variability and the generalization ability of neural networks.
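To see how the constants in Theorem \ref{thm:core} behave, the bound in Eq. (\ref{eq:core}) can be evaluated numerically. Below is a sketch with illustrative (not measured) parameter values; the function name is our own:

```python
import math

def data_db_bound(epsilon, eta, m, delta=0.05):
    """Evaluate the upper bound Omega + sqrt(4*Omega*Delta) + 8*Delta + epsilon.

    epsilon, eta: the (epsilon, eta)-data DB variability (eta <= 0.5);
    m: training sample size; delta: confidence parameter.
    """
    omega = epsilon + math.sqrt(math.log(1.0 / delta) / (2.0 * (1.0 - eta) * m))
    d = eta * math.log(math.e / eta) + math.log(2.0 / delta) / m
    return omega + math.sqrt(4.0 * omega * d) + 8.0 * d + epsilon

# the bound tightens as the "support vector" fraction eta shrinks ...
assert data_db_bound(0.05, 0.05, 50000) < data_db_bound(0.05, 0.20, 50000)
# ... and as the reconstruction error epsilon shrinks
assert data_db_bound(0.01, 0.05, 50000) < data_db_bound(0.05, 0.05, 50000)
print(round(data_db_bound(0.05, 0.05, 50000), 4))
```

The monotonicity in $\eta$ follows because $\eta\log\frac{e}{\eta}$ increases on $(0,1)$, matching the asymptotic form in Eq. (\ref{eq: dataset variability complexity}).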
\section{Discussions}
This section discusses how our findings shed light on understanding other interesting phenomena.
\subsection{Algorithm DB variability and the entropy of decision boundaries}
\label{app:entropy}
If Assumption \ref{assumption:null boundary} holds for all $\boldsymbol{\theta}\sim\mathbb{Q}$, then $1-AV(f_\mathbb{Q},\mathcal{D})$ can be rewritten as
\begin{equation}
\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\sum_{i=1}^k\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}\left(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)=i\right)\right]. \nonumber
\end{equation}
The term $\sum_{i=1}^k\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)=i)\right]$ can be considered to measure the degree of prediction uncertainty at a given point $\mathbf{x}$ in the input space $\mathbb{R}^n$. If we apply $-\log(\cdot)$ to this term, we have that
\begin{equation*}
-\log \sum_{i=1}^k\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)=i)\right],
\end{equation*}
denotes the collision entropy of the prediction made by the Gibbs classifier $f_\mathbb{Q}$ on $\mathbf{x}$. The collision entropy can also be replaced with the canonical Shannon entropy,
\begin{equation*}
-\sum_{i=1}^k \mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}\left[\mathbb{I}(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)=i)\right]\log \mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}\left[\mathbb{I}(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)=i)\right],
\end{equation*}
in future research. As such, the algorithm DB variability is closely related to the ``entropy of the decision boundary'', and the uncanny generalization ability of neural networks might be further uncovered by investigating this low decision boundary entropy.
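The two entropies can be compared directly on the ensemble's voting distribution at a point $\mathbf{x}$, {\it i.e.}, the vector $p_i = \mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}\left[\mathbb{I}(T(f_{\boldsymbol{\theta}},\mathbf{x})=i)\right]$. A small sketch (the distribution values are hypothetical); since the collision entropy is the R\'enyi entropy of order $2$, it never exceeds the Shannon entropy:

```python
import numpy as np

def collision_entropy(p):
    # Renyi entropy of order 2: -log sum_i p_i^2
    return -np.log(np.sum(p ** 2))

def shannon_entropy(p):
    p = p[p > 0]  # 0 * log(0) = 0 by convention
    return -np.sum(p * np.log(p))

# hypothetical voting distribution of the Gibbs classifier over k = 3 classes
p = np.array([0.7, 0.2, 0.1])
print(collision_entropy(p), shannon_entropy(p))
assert collision_entropy(p) <= shannon_entropy(p) + 1e-12
```

A deterministic ensemble (all mass on one class) has zero entropy of either kind, mirroring the case of zero algorithm DB variability at that point.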
\subsection{Data DB variability and the complexity of DBs}
\label{app:complexity of db}
According to \citet{guan2020analysis}, a complex decision boundary has large curvatures, which is conjectured to indicate inferior generalization. Nevertheless, from the perspective of causality, we argue that the large curvature or non-linearity of DBs {\it is the result rather than the cause} in classification tasks, and the primary factor shaping a complex DB during the training procedure is the significant non-linearity of the training data. If only the geometric properties of decision boundaries are analysed without investigating the data, the results might be incomplete and even misleading. Another obstacle to describing the complexity of DBs by their geometric properties is the high-dimensional input space, which makes the geometric properties of DBs hard to quantify and estimate. Therefore, defining the complexity of DBs based on their curvature is neither rational nor practical.
Here, we consider the complexity of DBs from the perspective of the training set. During the training procedure, a small part of the training examples, considered as ``support vectors'', plays a more critical role in supervising the formation of decision boundaries and compelling the DB to gradually become more complicated. If the construction of decision boundaries relies on fewer ``support vectors'', the decision boundary should be simpler. In other words, if these ``support vectors'' are excluded from the training sample, the DB will be notably dissimilar when the network is retrained on the modified training set. Hence, the complexity of DBs can also be defined via the notion of data DB variability: {\it with an $(\epsilon,\eta)$-data decision boundary variability, the decision boundary of $f_{\mathcal{A}(\mathcal{S})}$ is said to possess an $(\epsilon, \eta)\textit{-complexity}$}.
By considering the data DB variability as the complexity of DBs,
many phenomena {\it w.r.t.} generalization in deep learning can be easily understood: (1) difficult tasks generally have more complex decision boundaries, since their data are more non-linear and contain more ``support vectors''; (2) in adversarial training, each data point is converted into a ``data ball'' with the radius of the adversarial perturbation and has more impact on forming the DBs; hence, adversarial training contributes to a more complex decision boundary by enlarging the ``support vector set'', and thus causes the decline in generalization performance; (3) for data augmentation, the generated images are also considered to obey the data generating distribution $\mathcal{D}$; hence, data augmentation decreases the complexity of decision boundaries by greatly expanding the training set $\mathcal{S}$, while $|\mathcal{S}_\eta|$ grows only slightly.
\section{Experimental Implementation Details}
This section provides all the additional implementation details for our experiments.
\subsection{Model training}
We employ SGD with a momentum factor of $0.9$ to optimize all the models. The weight decay factor is set to $5\times 10^{-4}$, and the learning rate is decayed by a factor of $0.2$ every $50$ epochs. Besides, basic data augmentation (crop and flip) \citep{Zagoruyko2016WRN} is adopted in both standard and adversarial training, and only this basic data augmentation is considered in our experiments and analysis.
\textbf{Additional details for Section \ref{sec:real data}.}
We train VGG-16, ResNet-18, and Wide-ResNet-28 on CIFAR-10 and CIFAR-100. In the training procedure, each model is trained for $200$ epochs with a batch size of $128$ and an initial learning rate of $0.1$. Three training strategies are included in this experiment: standard training, training without data augmentation, and adversarial training. In adversarial training, the radius of the adversarial perturbation is set to $10/255$ under the $l_\infty$ distance. The basic data augmentation (cropping and flipping) in standard and adversarial training is achieved by the following PyTorch code:
\begin{lstlisting}[language=Python]
transforms.RandomCrop(32, padding=4)
transforms.RandomHorizontalFlip()
\end{lstlisting}
The experiment is repeated for $10$ trials for each (dataset, architecture, training strategy) setting.
\textbf{Additional details for Section \ref{sec:training process}.}
We repeatedly train $10$ ResNet-18 models on CIFAR-10 and CIFAR-100, respectively, with different random seeds. In the training procedure, each model is trained for $200$ epochs with a batch size of $128$, and the learning rate is initialized as $0.1$ and $0.01$, respectively. Basic data augmentation is included during the training process.
\textbf{Additional details for Section \ref{sec:sample size}.}
We randomly sample examples from the training sets of CIFAR-10 and CIFAR-100 to form five datasets with sizes of $[2000, 5000, 10000, 20000, 50000]$, respectively. $10$ ResNet-18 models are trained for each dataset. In the training procedure, each model is trained for $200$ epochs with a batch size of $128$ and an initial learning rate of $0.1$. Basic data augmentation is included during the training process.
\textbf{Additional details for Section \ref{sec:label noise}.}
We randomly change the labels of $20\%$ of the examples in the training sets of CIFAR-10 and CIFAR-100. Then, $10$ ResNet-18 models are optimized by SGD for $500$ epochs on the noisy CIFAR-10 and CIFAR-100, respectively. The momentum factor is $0.9$, and the learning rate is fixed at $0.001$ without decay during the training process.
\textbf{Additional details for Section \ref{sec:feature extraction stability}.}
We repeatedly train $10$ VGG-16, ResNet-18, and Wide-ResNet-28 models on the synthetic binary classification dataset (Figure \ref{figure:f_mnist_examples}), respectively, with different random seeds. In the training procedure, each model is trained for $30$ epochs with a batch size of $128$ and a fixed learning rate of $0.01$. No data augmentation is applied during the training process, so as to preserve the location information of the features in the synthetic images.
\textbf{Additional details for Section \ref{sec:consistency and complexity}.}
We randomly sample examples from the training sets of CIFAR-10 and CIFAR-100 to form five datasets with sizes of $[2000, 5000, 10000, 20000, 50000]$, respectively. For each dataset, we obtain $10$ $\eta$-subsets with $\eta \in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]$ via the coreset selection approach {\it selection via proxy} \citep{coleman2020selection}. The related code can be downloaded from \url{https://github.com/stanford-futuredata/selection-via-proxy}. ResNet-18 is repeatedly trained for $10$ trials on each $\eta$-subset to estimate the reconstruction error and hence the complexity of decision boundaries.
\section{Proofs}
This section collects the detailed proofs omitted in Sections \ref{sec:theoretical evidence about algorithm variability} and \ref{sec:theoretical evidence about dataset variability}. To avoid technicalities, measurability/integrability issues are ignored throughout this paper. Moreover, Fubini's theorem is assumed to be applicable for any integration {\it w.r.t.} multiple variables; in other words, the order of integrations is exchangeable.
\subsection{Proof of Theorem \ref{thm:lower bound}}
\label{app:proof of lower bound}
\begin{proof} If Assumption \ref{assumption:null boundary} holds for all $\boldsymbol{\theta}\sim \mathbb{Q}$, we have
\begin{equation*}
\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}^\prime}, \mathbf{x}\right)\right) \mathbb{I}\left(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right) \neq T\left(f_{\boldsymbol{\theta}^\prime}, \mathbf{x}\right)\right) \right] = 0
\end{equation*}
Hence,
\begin{align}
AV(f_\mathbb{Q},\mathcal{D})=&\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] \nonumber\\
=& \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(y\in T(f_{\boldsymbol{\theta}}, \mathbf{x})\right)\mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right. \nonumber\\
&+\left. \mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}}, \mathbf{x})\right)\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right) \right]\nonumber \\
\leq & 2\mathcal{R}_{\mathcal{D}}(\mathbb{Q}). \nonumber
\end{align}
The proof is completed.
\end{proof}
\subsection{Proof of Theorem \ref{thm:lower bound for binary case}}
\begin{proof}
When the classification is binary, {\it i.e.}, $k=2$, with Assumption \ref{assumption:null boundary}, we have
\begin{align}
& \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right) = T\left(f_{\boldsymbol{\theta}^\prime}, \mathbf{x}\right)\right)\right] \nonumber\\
=& \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] + \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}\left(y\notin T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] \nonumber\\
=& \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] + \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\left(1- \mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right]\right)^2\right] \nonumber\\
=& 2\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] + 2\mathcal{R}_\mathcal{D}(\mathbb{Q}) - 1 \nonumber
\end{align}
Plugging in $\operatorname{Var}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right]\right] = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}^2\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] - (1-\mathcal{R}_{\mathcal{D}}(\mathbb{Q}))^2$ yields
\begin{align}
&\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right) = T\left(f_{\boldsymbol{\theta}^\prime}, \mathbf{x}\right)\right)\right] \nonumber\\
=& 2\operatorname{Var}_{(\mathbf{x},y)\sim \mathcal{D}}\left[\mathbb{E}_{\boldsymbol{\theta}\sim \mathbb{Q}}\left[\mathbb{I}\left(y\in T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right]\right] + \mathcal{R}_\mathcal{D}^2(\mathbb{Q}) + (1- \mathcal{R}_\mathcal{D}(\mathbb{Q}))^2 \nonumber\\
\geq & \mathcal{R}_\mathcal{D}^2(\mathbb{Q}) + (1- \mathcal{R}_\mathcal{D}(\mathbb{Q}))^2 \nonumber
\end{align}
Plugging in $\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}}\mathbb{E}_{\boldsymbol{\theta},\boldsymbol{\theta}^\prime \sim \mathbb{Q}}\left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) = T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] = 1- AV(f_\mathbb{Q}, \mathcal{D})$ and solving the inequality yields the desired result, which finishes the proof of Theorem \ref{thm:lower bound for binary case}.
\end{proof}
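As an illustration (ours, not part of the original proof), the binary identity $1-AV=\mathbb{E}_{\mathbf{x}}[p(\mathbf{x})^2+(1-p(\mathbf{x}))^2]$, with $p(\mathbf{x})=\mathbb{E}_{\boldsymbol{\theta}}[\mathbb{I}(y\in T(f_{\boldsymbol{\theta}},\mathbf{x}))]$, and the resulting bound $AV\leq 2\mathcal{R}_\mathcal{D}(\mathbb{Q})(1-\mathcal{R}_\mathcal{D}(\mathbb{Q}))$ can be checked numerically on a toy ensemble. The labels and "classifiers" below are random synthetic stand-ins, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_models = 200, 50
y = rng.integers(0, 2, size=n_x)                  # synthetic binary labels
preds = rng.integers(0, 2, size=(n_models, n_x))  # stand-in for T(f_theta, x) over an ensemble

p = (preds == y).mean(axis=0)       # E_theta[I(y in T(f_theta, x))] per input
risk = 1.0 - p.mean()               # R_D(Q)

# agreement over iid pairs (theta, theta'), sampled with replacement
agree = np.mean([(preds[i] == preds[j]).mean()
                 for i in range(n_models) for j in range(n_models)])
av = 1.0 - agree                    # algorithm DB variability AV(f_Q, D)

# identity used in the proof: 1 - AV = E_x[p^2 + (1-p)^2]
assert abs(agree - (p**2 + (1 - p)**2).mean()) < 1e-9
# consequences: AV <= 2R (Theorem above) and AV <= 2R(1-R) (binary case)
assert av <= 2 * risk + 1e-9
assert av <= 2 * risk * (1 - risk) + 1e-9
```

Both inequalities hold for any ensemble, since the identity plus Jensen's inequality gives $1-AV\geq \mathcal{R}^2+(1-\mathcal{R})^2$.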
\subsection{Proof of Lemma \ref{lemma:complement set bound}}
\label{app:complement set bound}
\begin{proof}
By Assumption \ref{assumption:data db variabilty}, for all $(\mathbf{x},y)\sim \mathcal{D}_1$ we have
$$\mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}_\eta)}[\mathbb{I}(y\in T(f_{\boldsymbol{\theta}},\mathbf{x}))]=\max_{i\in [k]}\mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}_\eta)}[\mathbb{I}(i\in T(f_{\boldsymbol{\theta}},\mathbf{x}))],$$
then
\begin{align}
&\mathcal{R}_{\mathcal{D}_1}(\mathcal{A}(\mathcal{S}_\eta)) \nonumber\\
= &1 - \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}_1}\mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(y = T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] \nonumber\\
=& 1 - \mathbb{E}_{\mathcal{D}_1}\sum_{i=1}^k\mathbb{E}_{\mathcal{A}(\mathcal{S})} \left[\mathbb{I}\left(i = T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] \mathbb{E}_{\mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(y = T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] \nonumber\\
\leq & 1 - \mathbb{E}_{\mathcal{D}_1}\sum_{i=1}^k\mathbb{E}_{\mathcal{A}(\mathcal{S})} \left[\mathbb{I}\left(i = T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] \mathbb{E}_{ \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(i = T\left(f_{\boldsymbol{\theta}}, \mathbf{x}\right)\right)\right] \nonumber\\
=&\mathbb{E}_{\mathcal{D}_1} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] \nonumber\\
\leq& \mathbb{E}_{\mathcal{D}} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] \nonumber\\
=&\epsilon. \nonumber
\end{align}
Because the examples in $\mathcal{S}\backslash\mathcal{S}_\eta$ are drawn from $\mathcal{D}_1$, by applying Hoeffding's Inequality, we have
\begin{equation}
\label{eq:hoe}
\text{Pr}\left[ \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) - \mathcal{R}_{\mathcal{D}_1}(\mathcal{A}(\mathcal{S}_\eta)) \geq t \right] \leq \exp(-2(1-\eta)mt^2).
\end{equation}
Plugging $\delta=\exp(-2(1-\eta)mt^2)$ into Eq. \ref{eq:hoe} and solving for $t$, with probability at least $1-\delta$ we have
\begin{equation}
\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) \leq \epsilon + \sqrt{\frac{1}{2(1-\eta)m}\log\frac{1}{\delta}}. \nonumber
\end{equation}
\end{proof}
\subsection{Proof of Lemma \ref{lemma:risk diff}}
\label{app:proof lemma1}
\begin{proof}
From the definition of data DB variability, there exists an $\eta$-subset $\mathcal{S}_\eta$ such that
\begin{equation}
\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] = \epsilon. \nonumber
\end{equation}
Recall that $\text{LHS}=\left| \mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S})) - \mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}_\eta)) \right|$, and denote $\mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}} \mathbb{E}_{\boldsymbol{\theta}\sim \mathcal{A}(\mathcal{S}), \boldsymbol{\theta}^\prime \sim \mathcal{A}(\mathcal{S}_\eta)}$ by $\mathbb{E}_{\mathcal{D}, \mathcal{A}(\mathcal{S}), \mathcal{A}(\mathcal{S}_\eta)}$ for simplicity; then
\begin{align}
\text{LHS} &=\left| \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}} \mathbb{E}_{\boldsymbol{\theta} \sim \mathcal{A}(\mathcal{S})} \left[\mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}}, \mathbf{x}) \right)\right] - \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}} \mathbb{E}_{\boldsymbol{\theta} \sim \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}}, \mathbf{x})\right)\right] \right| \nonumber\\
&= \left| \mathbb{E}_{\mathcal{D}, \mathcal{A}(\mathcal{S}), \mathcal{A}(\mathcal{S}_\eta)}\left[\mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}}, \mathbf{x})\right) - \mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right) \right] \right| \nonumber\\
&\leq \mathbb{E}_{\mathcal{D}, \mathcal{A}(\mathcal{S}), \mathcal{A}(\mathcal{S}_\eta)}\left[\left|\mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}}, \mathbf{x})\right) - \mathbb{I}\left(y\notin T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right| \right] \nonumber\\
&\leq \mathbb{E}_{\mathcal{D}, \mathcal{A}(\mathcal{S}), \mathcal{A}(\mathcal{S}_\eta)} \left[\mathbb{I}\left(T(f_{\boldsymbol{\theta}}, \mathbf{x}) \neq T(f_{\boldsymbol{\theta}^\prime}, \mathbf{x})\right)\right] = \epsilon \nonumber
\end{align}
This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm:core}}
\label{app:proof core}
We first introduce Lemma \ref{lemma:30.1} and Lemma \ref{lemma:core lemma} as below.
\begin{lemma}[Lemma 30.1 in \citep{shalev2014understanding}]
\label{lemma:30.1}
Assume $T$ and $V$ are two datasets independently sampled from the data-generating distribution $\mathcal{D}$. Then, with probability at least $1-\delta$, we have
\begin{small}
\begin{equation}
\mathcal{R}_\mathcal{D}(\mathcal{A}(T)) \leq \mathcal{R}_V(\mathcal{A}(T)) + \sqrt{\frac{2 \mathcal{R}_V(\mathcal{A}(T)) \log (1 / \delta)}{|V|}}+\frac{4 \log (1 / \delta)}{|V|}. \nonumber
\end{equation}
\end{small}
\end{lemma}
\begin{lemma}[Theorem 30.2 in \citep{shalev2014understanding}]
\label{lemma:core lemma}
Let $\mathcal{S}_\eta$ be an $\eta$-subset of the dataset $\mathcal{S}$, which is sampled from the data-generating distribution $\mathcal{D}$ with sample size $|\mathcal{S}|=m$. Let $\mathcal{S}\backslash\mathcal{S}_\eta=\mathcal{S}-\mathcal{S}_\eta$ and assume $\eta\leq 0.5$. Then, with probability at least $1-\delta$ over a sample of size $m$, we have
\begin{equation}
\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}_\eta)) \leq \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) + \sqrt{4\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))\Delta} + 8\Delta, \nonumber
\end{equation}
where
\begin{equation}
\Delta=\eta\log\frac{e}{\eta}+\frac{1}{m}\log\frac{1}{\delta}. \nonumber
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:core lemma}]
\begin{align}
&\text{Pr}\left[\exists \mathcal{S}_\eta \subseteq \mathcal{S} \text{ s.t. } \mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}_\eta)) > \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))+ \sqrt{\frac{2 \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) \log (1 / \delta)}{|\mathcal{S}\backslash\mathcal{S}_\eta|}}+\frac{4 \log (1 / \delta)}{|\mathcal{S}\backslash\mathcal{S}_\eta|} \right] \nonumber\\
\leq& \sum_{\mathcal{S}_\eta \subseteq \mathcal{S}} \text{Pr}\left[\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}_\eta)) > \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) + \sqrt{\frac{2 \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) \log (1 / \delta)}{|\mathcal{S}\backslash\mathcal{S}_\eta|}}+\frac{4 \log (1 / \delta)}{|\mathcal{S}\backslash\mathcal{S}_\eta|}\right] \nonumber\\
=& {m\choose \eta m}\delta \leq \left(\frac{e}{\eta}\right)^{\eta m}\delta \nonumber
\end{align}
Plugging in $\delta^\prime=\left(\frac{e}{\eta}\right)^{\eta m}\delta$ and using the assumption $\eta \leq \frac{1}{2}$, which implies $|\mathcal{S}\backslash\mathcal{S}_\eta|\geq \frac{m}{2}$, with probability at least $1-\delta^\prime$ we have
\begin{align}
\mathcal{R}_\mathcal{D}(\mathcal{A}&(\mathcal{S}_\eta)) \leq \mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) + \sqrt{4\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta))\left(\eta\log\frac{e}{\eta} +\frac{1}{m}\log\frac{1}{\delta^\prime}\right)} + 8\left(\eta\log\frac{e}{\eta}+\frac{1}{m}\log\frac{1}{\delta^\prime}\right), \nonumber
\end{align}
which concludes the proof.
\end{proof}
With the above lemmas, we can derive the generalization bound based on the complexity of the decision boundary.
\begin{proof}[Proof of Theorem \ref{thm:core}]
According to Lemma \ref{lemma:complement set bound}, with probability at least $1-\delta$, we have
\begin{equation}
\mathcal{R}_{\mathcal{S}\backslash\mathcal{S}_\eta}(\mathcal{A}(\mathcal{S}_\eta)) \leq \epsilon + \sqrt{\frac{1}{2(1-\eta)m}\log\frac{1}{\delta}}. \nonumber
\end{equation}
Combining this with Lemma \ref{lemma:core lemma}, with probability at least $1-2\delta$, we have
\begin{equation}
\label{eq:r}
\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S}_\eta)) \leq \Omega + \sqrt{4\Omega\Delta} + 8\Delta,
\end{equation}
where
\begin{equation}
\Omega=\epsilon + \sqrt{\frac{1}{2(1-\eta)m}\log\frac{1}{\delta}}, \nonumber
\end{equation}
\begin{equation}
\Delta=\eta\log\frac{e}{\eta}+\frac{1}{m}\log\frac{1}{\delta}. \nonumber
\end{equation}
Plugging the inequality $\mathcal{R}_\mathcal{D}\left(\mathcal{A}\left(\mathcal{S}\right)\right)\leq \mathcal{R}_\mathcal{D}\left(\mathcal{A}\left(\mathcal{S}_\eta\right)\right) + \epsilon$ from Lemma \ref{lemma:risk diff} into Eq. \ref{eq:r} yields the desired inequality, Eq. \ref{eq:core}.
When $m$ is sufficiently large, $\sqrt{4\Omega\Delta}$ can be dropped because $\sqrt{4\Omega\Delta}\leq \Omega + \Delta$. Considering $\eta\leq 0.5$, we have $\Omega\leq \sqrt{\frac{1}{m}\log\frac{1}{\delta}} + \epsilon=\mathcal{O}(\frac{1}{\sqrt{m}}+\epsilon)$. The term $\frac{1}{m}\log\frac{1}{\delta}$ in $\Delta$ can be dropped because it converges faster than the term $\sqrt{\frac{1}{2(1-\eta)m}\log\frac{1}{\delta}}$ in $\Omega$. Treating $\log\frac{1}{\delta}$ as a constant, we have
\begin{equation}
\mathcal{R}_\mathcal{D}(\mathcal{A}(\mathcal{S})) \leq \mathcal{O}(\frac{1}{\sqrt{m}}+\epsilon+\eta\log\frac{1}{\eta}). \nonumber
\end{equation}
The proof of Theorem \ref{thm:core} is finished.
\end{proof}
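To get a feeling for the magnitudes involved in the bound above, one can evaluate $\Omega$, $\Delta$, and the resulting bound for purely illustrative (hypothetical) parameter values; the numbers below are our own choices, not values from any experiment:

```python
import math

# hypothetical values: sample size, subset fraction eta,
# data DB variability epsilon, and confidence parameter delta
m, eta, eps, delta = 50_000, 0.01, 0.02, 0.01

omega = eps + math.sqrt(math.log(1 / delta) / (2 * (1 - eta) * m))
Delta = eta * math.log(math.e / eta) + math.log(1 / delta) / m

# R_D(A(S_eta)) <= Omega + sqrt(4*Omega*Delta) + 8*Delta,
# and R_D(A(S)) adds at most another eps via the risk-difference lemma
bound = omega + math.sqrt(4 * omega * Delta) + 8 * Delta + eps
print(round(omega, 4), round(Delta, 4), round(bound, 4))
```

Even for this large $m$, the bound is dominated by the $8\Delta \approx 8\eta\log(e/\eta)$ term, so the subset fraction $\eta$ drives the guarantee.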
\section{Conclusion}
In this paper, we empirically and theoretically explored the relationship between decision boundary variability and generalization in neural networks, through the notions of algorithm DB variability and data DB variability, respectively. Our experiments show a significant negative correlation between decision boundary variability and generalization performance. On the theoretical side, we proposed two lower bounds based on algorithm DB variability and an upper bound based on data DB variability to support these findings.
hep-ph/0606171
\section*{three variations on the same theme}
This preprint is a collection of three separate notes written during the last quarter, aiming to
communicate an amusing finding of a colleague. All three are different {\it facets} of the same
technical point. We concentrated on this procedure because, as announced in the abstract, it seemed
to have some relationship to the quotient $M_W/M_Z$. It took us a long time to realize that when the
formulae were adjusted to the mass of $Z^0$, the value of the electroweak vacuum was also hinted at.
\section{Mass terms from Casimir Invariants}
Under Poincar\'e symmetry, suppose we have a family of particles $(m_i,s_i)$ labeled
using the two Casimirs of the group, $\C_1,\C_2$, with respective eigenvalues
$c_1=m^2$, $c_2=-m^2 s (s+1)$.
We ask for constructions of operators $M^2_s$ with dimension $[mass]^2$, built exclusively from
combinations of these Casimirs (excluding inversion) and subject to the additional asymptotic
condition
\begin{equation}
\label{cond}
\lim_{s\to \infty} M^2_s = m^2
\end{equation}
of recovering the original mass eigenvalue in the high spin limit. This condition allows for
preservation of the string tension (from the asymptotic Regge trajectory) if for instance
our spectrum of particles comes from a string theory.
The simplest combination $\alpha \C_1 + \beta \C_2$ of the Casimirs has the adequate dimensions
but fails to meet the asymptotic condition. The next simplest try, and the simplest one
fulfilling our condition, is obtained from the square roots
of the quartic combination, that is, from the solution of the equation
\begin{equation}
M^4_s - M^2_s \C_2 + \C_1 \C_2 =0
\end{equation}
If we want to dispose of the square roots, we can rewrite it in terms
of Pauli matrices:
\begin{equation}
M^2_s=
\sigma^+ \otimes \C_1 \C_2 + \sigma^- \otimes {\bf I} +
{ {\bf I} - \sigma_z \over 2 } \otimes \C_2
\end{equation}
Note that this operator can also be obtained from conditions different from (\ref{cond}). An interesting
alternative could be to ask
\begin{equation}
\Tr M_s^2 = \Tr \C_2
\end{equation}
The goal of this note is to point out that our method seems to have a role in electroweak
breaking. Meeting the same equation in a relativistic mechanics context, Hans de Vries
discovered \cite{dVonline} that the positive eigenvalues of this
operator for $s=1/2$ and $s=1$ let one build the quantity
\begin{equation}
s_{dV}^2\equiv
1-{m^2_{s=1/2,+}\over m^2_{s=1,+}}=0.22310132...
\end{equation}
unexpectedly close to the on-shell Weinberg angle \cite{pdg,erler}
\begin{equation}
s_W^2=1-{M_W^2 \over M_Z^2} = 0.22306 \pm 0.00033
\end{equation}
In fact the quotient between the de Vries and Weinberg angles
is $s^2_{dV}/s^2_W=0.9998 \pm .0015$, almost too good for a tree-level
prediction, and we should expect it to survive further experimental updates.
With this ansatz, we can insert the measured $M_Z^2=(91.1874 GeV)^2$ as input for the
eigenvalue $m_{1,+}^2$ and get the
other three eigenvalues:
\begin{equation}
m_{s=1/2,+}^2= (80.3717 GeV)^2
\end{equation}
\begin{equation}
m_{s=1,-}^2= - (176.154 GeV) ^2
\end{equation}
\begin{equation}
m_{s=1/2,-}^2= - (122.384 GeV)^2
\end{equation}
This last negative value is not used in electroweak models, but we find that the negative eigenvalue
$m^2_{1,-}$ is actually in the expected range for the negative mass square operator we use to break
the electroweak symmetry. Remember that
\begin{equation}
{\langle v \rangle \over {\sqrt 2}} = \sqrt{-m_h^2 \over \lambda_h} = 174.1042 \ (\pm 0.00075)\ GeV
\end{equation}
The experimental value comes from the Fermi constant \cite{pdg}. So we are compatible with
$\lambda_h \approx 1$. In fact we could fix it equal to 1 and pivot on the standard model to
get a tree-level estimate of the fine structure constant, obtaining $\alpha^{-1}=135.28\dots$
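The eigenvalues quoted above can be reproduced from the quartic equation of the first section. The sketch below (ours, not part of the original note) fixes the mass scale by matching the $s=1$ positive eigenvalue to $M_Z$ and then evaluates the other three roots:

```python
import math

def eigenvalues(u, m2):
    """Roots of M^4 - M^2*C2 + C1*C2 = 0 with C1 = m^2, C2 = -m^2*u, u = s(s+1)."""
    root = math.sqrt(u * u + 4 * u)
    return (m2 / 2) * (root - u), (m2 / 2) * (-root - u)  # (positive, negative)

MZ = 91.1874                    # GeV, the only input
u_half, u_one = 0.5 * 1.5, 1.0 * 2.0
# fix the overall scale m^2 so that the s=1 positive eigenvalue equals MZ^2
m2 = MZ**2 / eigenvalues(u_one, 1.0)[0]

mw = math.sqrt(eigenvalues(u_half, 1.0)[0] * m2)         # ~80.37 GeV
m1_minus = math.sqrt(-eigenvalues(u_one, 1.0)[1] * m2)   # ~176.2 GeV
mhalf_minus = math.sqrt(-eigenvalues(u_half, 1.0)[1] * m2)  # ~122.4 GeV
s2_dv = 1 - mw**2 / MZ**2                                # ~0.2231
```

The four magnitudes agree with the values quoted in the text to within a few MeV, the small residuals being rounding in the quoted figures.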
It is mysterious that two predictions are obtained so easily. If we add the current measurement \cite{top} of the
top Yukawa coupling, $\lambda_t=0.991 \pm 0.013$, to our basket and take it as a hint
for a technicolor/topcolor mechanism, then
one could suspect that techni-forces also have stringy properties --not surprisingly-- and that the
associated string somehow carries a supersymmetry --surprisingly, but a good excuse for $M_W$ to come packed in
an $s=1/2$ object.
\section{A formula to break degeneration of Susy multiplets}
Representations of the 3+1 Poincar\'e algebra can be labeled with two polynomial (Casimir)
invariants, ${\cal C}_1$ and ${\cal C}_2$, which in the massive case correspond respectively to $P^2$
and $W^2$, the latter being the square of the Pauli-Lubanski vector. Upon a $(m,s)$ representation
the quadratic Casimir ${\cal C}_1$ has eigenvalue $m^2$ while the quartic Casimir ${\cal C}_2$ has
eigenvalue $-m^2 s (s+1)$.
The goal of this note is to build a new operator of dimension $[mass]^2$ under two restrictions:
1) use only combinations of the Casimirs, i.e., the only objects generally available;
2) recover the same Regge asymptotic trajectory in the limit of high spin, so we request at
least that $\lim_{s\to\infty} M_s=M$, where $M_s$ is the new operator.
For a set of equal-mass $(m,s_i)$ representations, such as the ones occurring in a
supersymmetry multiplet, the simplest way to break the mass degeneracy while meeting the above
conditions is to use the formula
\begin{equation}
M_{(s)}^2
\equiv \frac 12 ( {\cal C}_2 + \sqrt{ ({\cal C}_2)^2 - 4 {\cal C}_1 {\cal C}_2 })
\end{equation}
so that $M^2$ upon a $(m,s)$ representation has eigenvalue
$\frac{m^2}{2}\left(\left(( s (s+1) )^2 + 4 s(s+1)\right)^{1/2} - s (s+1) \right)$, which in
the limit $s\to \infty$ approaches $m^2$. Given
its extreme simplicity, this kind of
expression is not rarely found in textbooks, but we have never
seen its use suggested to break mass degeneracy.
Starting from a primitive relativistic quantum mechanics model,
de Vries found \cite{pf} (see also a footnote in \cite{RivVries}) that the
eigenvalue expression of the above operator, when evaluated both
at $s=1/2$ and $s=1$ --with degenerate mass-- was able to produce a definite number
\begin{equation}
s^2_{dV} \equiv 1 - { M_{s=1/2}^2 \over M_{s=1}^2} = 0.22310132...
\end{equation}
and that a mass-related quantity with a similar experimental value seems to exist in Nature;
indeed we can take from the global fit of \cite{pdg,erler}
\begin{equation}
s^2_W=0.22306 \pm 0.00033
\end{equation}
so that the quotient between the experimental on-shell value of the Weinberg sine and the theoretical
de Vries ``sine'' happens to be
\begin{equation}
s^2_{W,exp}/s^2_{dV} = 0.9998 \pm .0015
\end{equation}
Let us stress that at the time of de Vries's estimate, November 2004, the experimental value and error
were slightly different, so that $s^2_{dV}$ was more than one sigma away from the measurement. The new results for the $W$ mass and other parameters have moved the global fit, so that $s^2_{dV}$ is now well centered, within $0.13 \sigma$.
Of course we have the paradoxical situation that we have produced this quantity in the context of a
susy-like relationship between spin 1/2 and spin 1, while Nature seems to have produced it for two spin-1
particles. The transition from one situation to the other should be given by the still unknown mechanism
of electroweak symmetry breaking. This is to be added to the other mysterious coincidence at the scale of
electroweak breaking, the value of the top Yukawa coupling $y_t$, which currently \cite{top} is expected to
be about $0.991 \pm 0.013$.
In principle both $y_t$ and $s_W$ are running quantities coming down from the GUT scale,
but now we see that they take very singular values exactly at the point where the
electroweak symmetry breaks.
\section{The $\sin \theta_W$ found in a 1924 timecapsule}
De Broglie's relativistic quantum orbit rule \cite{broglie}
\begin{equation}
\label{eqbroglie}
{m_0 \beta^2 c^2 \over \sqrt {1-\beta^2} }T_r= n h
\end{equation}
was proposed about the same time that Land\'e-Pauli substitution rule for 3D angular momentum\cite{lande,pauli},
\begin{equation}
\label{eqlande}
{1 \over j^2} \to - {d \over dj } ({1 \over j}) \to {1 \over j} - {1 \over j+1 } \to {1\over j(j+1)}
\end{equation}
but the fast pace of events in the mid-twenties did not allow for a fusion of both ideas; almost immediately (\ref{eqlande}) was made rigorous in the Heisenberg-Born matrix mechanics -- even allowing for half-integer $j$ --, while de Broglie's suggestions for wave mechanics were absorbed into Schr\"odinger's analytic methodology.
In November of 2004, eighty years later, during an empirical study of gyromagnetic ratios\cite{RivVries}, Hans de Vries suggested to combine (\ref{eqlande}) and (\ref{eqbroglie}) with the extra requirement
\begin{equation}
\label{eqVries}
T_r = { h \over m_0 c^2}
\end{equation}
on the orbital period, so that rest mass and Planck constant are canceled out and we are left with a relationship between relativistic speed and angular momentum:
\begin{equation}
{ \beta^2 \over \sqrt {1-\beta^2} }= \sqrt {j (j+1)}
\end{equation}
Solving for $\beta$ at $j=1/2$ and $j=1$, and taking the ratio of speeds, de Vries produced the following dimensionless quantity
\begin{equation}
s^2_{dV} \equiv 1 - \big({ \beta_{1/2} \over \beta_1}\big)^2 = 0.22310132...
\end{equation}
which closely resembles the mass-based experimental Weinberg sine.
At the time of calculation the data on $W^+$ mass and the global fits to standard model parameters were putting de Vries' sine at more than $1\sigma$ deviation from the measured value. So the result was put aside as one-line footnote in the preprint report. But the new data released from LEP II during 2005 and the fits from the particle data group have moved the experimental value to be \cite{erler, pdg}
\begin{equation}
s^2_W=0.22306 \pm 0.00033
\end{equation}
so that $s^2_{dV}$ is now inside the experimental error, centered at $0.13 \sigma$. If you prefer, let us say that the quotient $s^2_{W,exp}/s^2_{dV}$ between experimental and theoretical quantities is now $0.9998 \pm .0015$.
While the experimental error is still too big, the centrality of the calculated result suggests that the agreement will continue under further experimental improvements. In any case, let us keep in mind that this theoretical number comes from plain relativistic quantum mechanics; thus, from the point of view of QFT, it is a tree-level statement and we should not expect to push it beyond the 0.1\% level. In fact it would be surprising if the experimental error decreased while the central value stayed fixed, because in such a case a 0.01\% agreement level would be reached.
De Vries's reasoning started from the orbital radius instead of the orbital period. Indeed one can use the condition (\ref{eqVries}) to get an orbital radius
\begin{equation}
r= \beta \ c \ T_r = \beta {h \over m c} = \frac h c \frac \beta m
\end{equation}
proportional to the Compton length and thus inversely proportional to the orbiting mass.
Thus we can add the remark that if a particle of mass $\propto M_{W^\pm}$ orbits according to (\ref{eqVries}), producing $j=1/2$ according to (\ref{eqlande}) and (\ref{eqbroglie}), then a particle of mass $\propto M_{Z^0}$ orbiting at the same radius under the same conditions will produce $j=1$.
Independently of this remark, we think that model builders may find this result useful. The electroweak scale can be defined as the point at which the renormalised Weinberg angle, running down from its GUT-theoretical value, reaches the value of de Vries's angle. Besides, de Vries's number comes from a pair of well-calculated dimensionless numbers,
\begin{eqnarray}
\beta_{1/2}&=&\sqrt{\frac 38 \left(\sqrt{19/3} -1\right)}= 0.7541414352817\dots \\
\beta_1&=&\sqrt{\sqrt{3}-1}= 0.855599677167\dots
\end{eqnarray}
so it contains slightly more information. It could be used, for instance, to pinpoint mass values at $\beta_{1/2} M_Z \propto \beta_1 M_W \propto 68.76 \ GeV$ or $ M_W/\beta_{1/2} \propto M_Z/\beta_1 \propto 106.5 \ GeV$. Also, the attempt to give physical meaning to the quotient of speeds (or, via an arbitrary potential, of binding radii) seems to point toward composite, top-condensation-like models of the Higgs sector, but we do not put forward a definitive statement on this.
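As a quick sanity check (ours, not de Vries's), the two speeds can be obtained directly by solving $\beta^2/\sqrt{1-\beta^2}=\sqrt{j(j+1)}$ as a quadratic in $\beta^2$, and compared against the closed forms quoted above:

```python
import math

def beta(j):
    """Solve beta^2 / sqrt(1 - beta^2) = sqrt(j(j+1)), i.e. beta^4 = u*(1 - beta^2)."""
    u = j * (j + 1)
    b2 = (-u + math.sqrt(u * u + 4 * u)) / 2   # positive root of the quadratic in beta^2
    return math.sqrt(b2)

b_half, b_one = beta(0.5), beta(1.0)
closed_half = math.sqrt((3 / 8) * (math.sqrt(19 / 3) - 1))
closed_one = math.sqrt(math.sqrt(3) - 1)

s2_dv = 1 - (b_half / b_one) ** 2              # de Vries's "sine": 0.22310132...
```

The numerical roots coincide with the closed forms to machine precision, and the ratio reproduces $s^2_{dV}=0.22310132\dots$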
\section*{coda}
Since the redaction of the above notes, I have received a letter from Hans showing that other arrangements can also hit
three-digit precision easily, thus putting less confidence in a single hit on a single parameter. I still have some
confidence in the above idea because on one side it has some physical content, as the last note shows, and on the other
hand it seems to hit more than one parameter. Still, for convenience, let me finish by reproducing this note from
de Vries, and directing you to \cite{dVonline} for detailed comments.
{ \it
I spend some time on other purely numerical coincidences involving the
Weinberg angle, yes, more coincidences...
cos(Theta) = arcsinh(1) --- sW\^{}2 = 0.2231806
This is by far the simplest but it doesn't make so much sense physically,
mW and mZ would be related by some momentum/boost ratio..
The other one is:
sin(Theta)/cos(Theta) = Beta\_{}1\^{}4 --- sW\^{}2 = 0.223112151
Where the left term is the ratio in which W3 and B are combined to form
the massless Electromagnetic field in the Weinberg/Salam theory. The
right term is the spin-1 beta 0.85559967716. In correspondence with the
Pauli spinors one could relate W1,W2,W3 with x,y,z and B with t so the
ratio W3/B could be related to speed, however here we have something to
the power 4....
Well I'm just making a note of them here. Don't know what to do with them.
It made me feel less sure about the one we're using but I still think
that's the one that makes most sense physically.}
physics/0606153
\section*{A Brief Life of Schwinger}
A full biography of Julian Schwinger has been published \cite{MM},
as well as a selection of his most important papers \cite{QL}.
Here we will sketch a brief outline of Schwinger's life and work, referring
the interested reader to the biography for more details. An excellent
100-page account of Schwinger's career through 1950 may also be found in
Schweber's history of quantum electrodynamics \cite{SSS}.
Julian Schwinger was born in Manhattan, New York City, on February 12, 1918,
to rather well-off middle-class parents. His father was a well-known
designer of women's clothes. He had a brother Harold seven years older than
himself, whom Julian idolized as a child. Harold claimed that he taught
Julian physics until he was 13. Although Julian was recognized
as intelligent in school, everyone thought Harold was the bright one.
(Harold in fact eventually became a well-known lawyer, and his mother
always considered him as the successful son, even after Julian received the
Nobel Prize.) The Depression cost Julian's father his business, but he
was sufficiently appreciated that he was offered employment by other
designers; so the family survived, but not so comfortably as before.
It did mean that Julian would have to rely on free education, which New York
provided well in those days: a year or two at Townsend Harris High School,
a public preparatory school feeding into City College, where Julian
matriculated in 1933. Julian had already discovered physics, first through
Harold's {\it Encyclopedia Britannica\/} at home, and then through the
remarkable institution of the New York Public Library. At City College
Julian was reading and digesting the latest papers from Europe, and starting
to write papers with instructors who were, at the same time, graduate
students at Columbia and NYU. He no longer had the time to spend in the
classroom attending lectures. In physics and mathematics he was able
to skim the texts and work out the problems from first principles,
frequently leaving the professors baffled with his original, unorthodox
solutions, but it was not so simple in history, English, and German.
City College had an enormous number of required courses then in all subjects.
His grades were not good, and he would have flunked out if the College had
not also had a rather forgiving policy toward grades.
Not only was Julian already reading the literature at City College, but
he quickly started to do original research.
Thus before he left the City College, Schwinger wrote a paper entitled
`On the
Interaction of Several Electrons,'
in which he introduced a procedure that
he would later call the interaction representation to
describe the scattering of
spin-1/2 Dirac particles, electron-electron scattering or M\o ller
scattering. This paper he wrote entirely on his own, but showed it
to no one, nor did he submit it to a journal. It was `a little practice
in writing,' but it was a sign of great things to come.
It was Lloyd Motz, one of the instructors at City College,
who had heard about Julian from Harold, and with whom Julian
was writing papers,
who introduced him to Rabi. Then, in a conversation between Rabi and Motz
about the Einstein, Podolsky, Rosen
paper \cite{erp}, which had just appeared, Julian's
voice appeared with the resolution of a difficulty through the completeness
principle, and Schwinger's career was assured. Rabi, not without some
difficulty, had Schwinger transferred to Columbia with a scholarship,
and by 1937 he had
7 papers published, which constituted his Ph.D. thesis, even though his
bachelor's degree had not yet been granted. The papers which Julian
wrote at Columbia were on both
theoretical and experimental physics, and Rabi prized Julian's ability
to obtain the numbers to compare with experiment.
The formal awarding of the
Ph.D. had to wait till 1939 to satisfy a University regulation. In
the meantime, Schwinger was busy writing papers (one, for example,
laid the foundation for the theory of nuclear magnetic resonance),
and spent a somewhat lonely, but productive winter of 1937
in Wisconsin, where he provided the groundwork for his
prediction that the deuteron had an electric quadrupole moment,
independently
confirmed by his experimental colleagues at Columbia a year later \cite{quad},
both announced at the Chicago meeting of the American Physical Society in
November 1938.
Wisconsin confirmed his predilection for working at night, so as not
to be `overwhelmed' by his hosts, Eugene Wigner and Gregory Breit.
By 1939, Rabi felt Schwinger had outgrown Columbia, so with a NRC Fellowship,
he was sent to J. Robert
Oppenheimer in Berkeley. This exposed him to new fields: quantum
electrodynamics (although as we recall,
he had written an early, unpublished paper on the
subject while just 16) and cosmic-ray physics, but he mostly continued to
work on nuclear physics. He had a number of collaborations; the most
remarkable was with William Rarita, who was on sabbatical from Brooklyn
College: Rarita was Schwinger's `calculating arm' on a series of papers
extending the notion of nuclear tensor forces which he had conceived in
Wisconsin over a year earlier. Rarita and Schwinger also wrote the
prescient paper on spin-3/2 particles, which was to be influential decades
later with the birth of supergravity.
The year of the NRC Fellowship was followed by a second year at Berkeley
as Oppenheimer's assistant. They had already written
an important paper together which
would prove crucial several years later: Although Oppenheimer was
happy to imagine new interactions, Schwinger showed that an anomaly in
fluorine decay could be explained by the existence of vacuum polarization,
that is, by the virtual creation of electron-positron pairs. This gave
Schwinger a head start over Feynman, who for years suspected that vacuum
polarization did not exist.
After two years at Berkeley, Oppenheimer and Rabi arranged a real job for
Schwinger: He became first an instructor, then an Assistant Professor
at Purdue University, which had acquired a number of bright young
physicists under the leadership of Karl Lark-Horowitz. But the war was
impinging on everyone's lives, and Schwinger was soon recruited into
the work on radar. The move to the MIT Radiation Laboratory took place in
1943. There Schwinger rapidly became the theoretical leader, even though
he was seldom seen, going home in the morning just as others were arriving.
He developed powerful variational methods for dealing with complicated
microwave circuits, expressing results in terms of quantities the engineers
could understand, such as impedance and admittance. These methods, and
the discoveries he made there concerning the reality of the electromagnetic
mass, would be invaluable for his work on quantum electrodynamics a few
years later. As the war wound down, physicists started thinking about
new accelerators, since the pre-war cyclotrons had been defeated by
relativity, and Schwinger became a leader in this development: he proposed
a microtron, an accelerator based on acceleration through microwave cavities,
developed the theory of stability of synchrotron orbits, and most importantly,
worked out in detail the theory of synchrotron radiation, at a time when
many thought that such radiation would be negligible because of destructive
interference.\footnote{Material based on his Radiation Lab work has now
been published \cite{ER}.}
Although he never really published his considerations on self-reaction,
he viewed that understanding as the most important part of his work on
synchrotron radiation:
`It was a useful thing for me for what was
to come later in electrodynamics, because the technique I used for calculating
the electron's classical radiation was one of self-reaction, and I did it
relativistically, and it was a situation in which I had to take seriously the
part
of the self-reaction which was radiation, so why not take seriously the part of
the self-reaction that is mass change? In other words, the ideas of mass
renormalization and relativistically handling them were already present at this
classical level.' \cite{js}
At first it may seem strange that Schwinger, by 1943 the leading nuclear
theorist, should not have gone to Los Alamos, where nearly all his
colleagues eventually settled for the duration.
There seem to be at least three reasons why Schwinger stayed at the
Radiation Laboratory throughout the war.
\begin{itemize}
\item The reason he most often cited later in life was one of moral
repugnance. When he realized the destructive power of what was being
constructed at Los Alamos, he wanted no part of it. In contrast,
the radiation lab was developing a primarily defensive technology, radar,
which had already saved Britain.
\item He believed that the problems to solve at the Radiation Laboratory
were more interesting. Both laboratories were engaged largely in
engineering; but although Maxwell's equations were certainly well known,
applying them to waveguides required the development of
special techniques that would prove invaluable to Schwinger's later
career.
\item Another factor probably was Schwinger's fear of being overwhelmed.
In Cambridge he could live his own life, working at night when no one
was around the lab. This privacy would have been much more difficult
to maintain in the microworld of Los Alamos.
Similarly, the working conditions at the Rad Lab were much freer
than those at Los Alamos. Schwinger never was comfortable in a team
setting, as witness his later aversion to the atmosphere at the
Institute for Advanced Study.
\end{itemize}
In 1945 Harvard offered Schwinger an Associate Professorship, which he
promptly accepted, partly because in the meantime he had met his future
wife Clarice Carrol. Counteroffers quickly appeared, from Columbia,
Berkeley, and elsewhere, and Harvard shortly made Schwinger the youngest
full professor on the faculty to that date. There Schwinger quickly established
a pattern that was to persist for many years---he taught brilliant courses
on classical electrodynamics, nuclear physics, and quantum mechanics,
surrounded himself with a devoted coterie of graduate students and
post-doctoral assistants, and conducted incisive research that set the
tone for theoretical physics throughout the world. Work on classical
diffraction theory, begun at the Radiation Lab, continued for several years
largely due to the presence of Harold Levine, whom Schwinger had brought along
as an assistant. Variational methods, perfected in the electrodynamic
waveguide context, were rapidly applied to problems in nuclear physics.
But these were old problems, and it was quantum electrodynamics that
was to define Schwinger's career.
\section*{Quantum Electrodynamics}
But it took new experimental data to catalyze this development. That
data was presented at the famous Shelter Island meeting held in June 1947,
a week before Schwinger's wedding. There he, Feynman, Victor Weisskopf,
Hans Bethe, and the other
participants learned the details of the new experiments of Lamb and
Retherford \cite{lamb}
that confirmed the pre-war Pasternack effect, showing a splitting between
the $2S_{1/2}$ and $2P_{1/2}$ states of hydrogen, which should be degenerate according
to Dirac's theory. In fact, on the way to the conference, Weisskopf and
Schwinger speculated that quantum electrodynamics could explain this effect,
and outlined the idea there to Bethe, who worked out the details,
non-relativistically, on his famous train ride to Schenectady
after the meeting \cite{bethe}.
But the other experiment announced there was unexpected: This was the
measurement by Rabi's group and others \cite{g-2} of the hyperfine anomaly
that would prove to
mark the existence of an anomalous magnetic moment of the electron,
\begin{equation}
\mbox{\boldmath{$\mu$}}=g{e\over2m}{\bf S},
\end{equation}
differing
from the value $g=2$ again predicted by Dirac. Schwinger immediately
saw this as the crucial calculation to carry out first, because it
was purely relativistic, and much cleaner to understand theoretically,
not involving the complication of bound states. However, he was delayed
three months in beginning the calculation because of an extended honeymoon
through the West. He did return to it in October, and by December 1947
had obtained the result $g/2=1+{\alpha/2\pi}$, completely consistent with
experiment. He also saw how to compute the relativistic Lamb shift (although
he did not have the details quite right), and found the error in the
pre-war Dancoff calculation of the radiative correction to electron scattering
by a Coulomb field \cite{dancoff}. In effect, he had solved all the fundamental
problems that had plagued quantum electrodynamics in the 1930s: The
infinities were entirely isolated in quantities that renormalized the
mass and charge of the electron. Further progress, by himself and others,
was thus a matter of technique.
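The size of that first radiative correction is easy to check: with the
fine-structure constant $\alpha\approx1/137.04$,
\begin{eqnarray}
{g\over2}=1+{\alpha\over2\pi}\approx1+0.00116=1.00116,
\end{eqnarray}
a shift of about one part in a thousand from Dirac's $g=2$, just the size of
the anomaly Rabi's group had measured.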
\section*{Covariant Quantum Electrodynamics}
During the next two years Schwinger developed two new approaches to quantum
electrodynamics.
His original approach, which made use of successive canonical transformations
to isolate the infinities, while
sufficient for calculating the anomalous magnetic moment of the electron, was
noncovariant, and as such, led to inconsistent results. In particular,
the magnetic moment appeared
also as part of the Lamb shift calculation, through the coupling with the
electric field implied by
relativistic covariance; but the noncovariant scheme gave the
wrong coefficient. (If the coefficient
were modified by hand to the right number, what turned out to be the correct
relativistic value for the Lamb shift emerged; but that value
was unknown in January 1948,
when Schwinger announced his results at the APS meeting in New York.)
So first at the Pocono Conference in
April 1948, then in the Michigan Summer School that year, and finally in a
series of three monumental
papers, `Quantum Electrodynamics I, II, and III,'
Schwinger set forth his covariant approach to QED. At about
the same time Feynman was formulating his covariant path-integral approach;
and although Feynman's
presentation at Pocono was not well-received, Feynman and Schwinger compared notes and realized
that they had climbed the same mountain by different routes.
Feynman's systematic papers \cite{feynman} were
published only after Dyson \cite{dyson}
had proved the equivalence of Schwinger's and Feynman's schemes.
It is worth remarking that Schwinger's approach was conservative. He took
field theory at face value,
and followed the conventional path of Pauli, Heisenberg, and Dirac \cite{phd}.
His genius was to recognize that
the well-known divergences of the theory, which had stymied all pre-war
progress, could be
consistently isolated in renormalization of charge and mass. This bore a
superficial resemblance
to the ideas of Kramers advocated as early as 1938 \cite{kramers},
but Kramers proceeded classically. He had
insisted that first the classical theory had to be rendered finite and then
quantized. That idea was
a blind alley. Renormalization of quantum field theory is unquestionably the
discovery of Schwinger.
Feynman was more interested in finding an alternative to field theory,
eliminating entirely the
photon field in favor of action at a distance. He was, by 1950, quite
disappointed to realize that
his approach was entirely equivalent to the conventional electrodynamics, in
which electron and photon fields are treated on the same footing.
As early as January 1948, when Schwinger was expounding his
noncovariant QED to overflow crowds
at the American Physical Society meeting at Columbia University, he learned
from Oppenheimer
of the existence of the work of Tomonaga carried out in Tokyo during the
terrible conditions of wartime \cite{tomonaga}.
Tomonaga had independently invented the `Interaction Representation'
which Schwinger
had used in his unpublished 1934 paper, and had come up with a
covariant version of the Schr\"odinger
equation as had Schwinger, which upon its Western rediscovery was dubbed
by Oppenheimer the Tomonaga-Schwinger
equation. Both Schwinger and Tomonaga independently wrote the same equation,
a generalization of the Schr\"odinger equation to an arbitrary spacelike
surface $\sigma$, using nearly the same notation:
\begin{eqnarray}
i\hbar c {\delta\Psi[\sigma]\over\delta\sigma(x)}={\cal H}(x)\Psi[\sigma],
\label{tomeqn}
\end{eqnarray}
where $\cal H$ is the interaction Hamiltonian,
\begin{eqnarray}
{\cal H}(x)=-{1\over c}j_\mu(x)A_\mu(x),
\label{intham}
\end{eqnarray}
$j_\mu$ being the electric current density of the electrons, and $A_\mu$ the
electromagnetic vector potential.
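For the particular case of a flat surface of constant time $t$, the
functional derivative in Eq.~(\ref{tomeqn}) reduces to an ordinary time
derivative, and one recovers the familiar interaction-picture Schr\"odinger
equation,
\begin{eqnarray}
i\hbar{\partial\over\partial t}\Psi(t)
=\left(\int d^3x\,{\cal H}(x)\right)\Psi(t),
\end{eqnarray}
so the generalization to arbitrary spacelike surfaces is what renders the
formalism manifestly covariant.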
The formalism found by Tomonaga and his school was essentially identical to
that developed
by Schwinger five years later; yet at the time they calculated nothing,
nor did they discover
renormalization. That was certainly no reflection on the ability of the
Japanese; Schwinger could not
have carried the formalism to its logical conclusion without the impetus of
the postwar experiments,
which overcame prewar paralysis by showing that the quantum corrections
`were neither infinite nor zero, but finite and small, and
demanded understanding.' \cite{RTQED}
However, at first Schwinger's covariant
calculation of the Lamb shift contained another error, the same as
Feynman's \cite{rpf}.
`By this time I had forgotten the number
I had gotten by just artificially changing the wrong spin-orbit coupling.
Because I was now thoroughly involved with the covariant calculation and it
was the covariant calculation that betrayed me, because something went wrong
there as well. That was a human error of stupidity.' \cite{js} French and
Weisskopf \cite{fw}
had gotten the right answer, `because they
put in the correct value of the magnetic moment and used it all the way
through. I, at an earlier stage, had done that, in effect, and also got the
same answer.' \cite{js}
But now he and Feynman `fell into the same trap. We were
connecting a relativistic calculation of high energy effects with a
nonrelativistic calculation of low energy effects, a la Bethe.' Based
on the result Schwinger had presented at the APS meeting in January 1948,
Schwinger claimed priority for the Lamb shift calculation: `I had the
answer in December of 1947. If you look at those [other] papers you will
find that on the critical issue of the spin-orbit coupling, they appeal
to the magnetic moment. The deficiency in the calculation I did [in 1947]
was [that it was] a non-covariant calculation. French and Weisskopf
were certainly doing a non-covariant calculation. Willis
Lamb \cite{kl} was doing
a non-covariant calculation. They could not possibly have avoided these
same problems.'
The error Feynman and Schwinger
made had to do with the infrared problem that occurred in the relativistic
calculation, which was handled by giving the photon a fictitious mass.
`Nobody thought that if you give the photon a finite mass it will also affect
the low energy problem. There are no longer the two transverse degrees
of freedom of a massless photon, there's also a longitudinal degree of
freedom. I suddenly realized this absolutely stupid error, that a photon
of finite mass is a spin one particle, not a helicity one
particle.' Feynman \cite{feynman}
was more forthright and apologetic in acknowledging his
error which substantially delayed the publication of the French and Weisskopf
paper, in part because he, unlike Schwinger, had published
his incorrect result \cite{rpf}.
\section*{Quantum Action Principle}
Schwinger learned from his competitors, particularly Feynman and Dyson.
Just as Feynman had
borrowed the idea from Schwinger
that henceforward would go by the name of Feynman parameters,
Schwinger recognized that the systematic approach of Dyson-Feynman was
superior in higher orders. So by 1949 he replaced the
Tomonaga-Schwinger approach by a much more powerful engine,
the quantum action principle. This was a logical outgrowth of the formulation
of Dirac \cite{lagrangian}, as were
Feynman's path integrals; the latter was an integral approach, Schwinger's
a differential one. The
formal solution of Schwinger's differential equations was Feynman's functional
integral; yet while the
latter was ill-defined, the former could be given a precise meaning, and for
example, required the
introduction of fermionic variables, which initially gave Feynman some
difficulty. It may be fair to say, at the beginning of the new millennium,
that while the path integral formulation of quantum field
theory receives all the press, the most precise exegesis of field theory
is provided by the functional differential
equations of Schwinger resulting from his action principle.
He began in the `Theory of Quantized Fields I' by introducing
a complete set of eigenvectors `specified by a spacelike
surface $\sigma$ and the eigenvalues $\zeta'$ of a complete set of
commuting operators constructed from field quantities attached to that
surface.' The question is how to compute the transformation function
from one spacelike surface to another, that is, $(\zeta'_1,\sigma_1|\zeta''_2,
\sigma_2)$. After remarking that this development, time-evolution,
must be described by a unitary transformation, he {\it assumed\/} that
any infinitesimal change in the transformation function must be given
in terms of the infinitesimal change in a quantum action operator, $W_{12}$,
or of a quantum Lagrange function $\cal L$. This is the quantum dynamical
principle:
\begin{eqnarray}
&&\delta(\zeta'_1,\sigma_1|\zeta''_2,\sigma_2)={i\over\hbar}(\zeta'_1,\sigma_1|
\delta W_{12}|\zeta''_2,\sigma_2)\nonumber\\
&&\quad={i\over \hbar}(\zeta'_1,\sigma_1|\delta\int_{\sigma_2}^{\sigma_1}
(dx){\cal L}(x)|\zeta_2'',\sigma_2).
\label{qap}
\end{eqnarray}
Here, $\cal L$ is a relativistically invariant Hermitian function of
the fields and their derivatives,
\begin{eqnarray}
{\cal L}(x)={\cal L}(\phi^a(x),\partial_\mu\phi^a(x)),
\end{eqnarray}
where $a$ labels the different field operators of the system.
If the parameters of the system are not altered, the only changes
arise from those of the initial and final states, which changes are
effected by infinitesimal generating operators $F(\sigma_1)$, $F(\sigma_2)$,
expressed in terms of operators associated with the surfaces $\sigma_1$
and $\sigma_2$. In this way, Schwinger deduced the {\it Principle of
Stationary Action},
\begin{eqnarray}
\delta W_{12}=F(\sigma_1)-F(\sigma_2),
\end{eqnarray}
from which the field equations may be deduced. A series of six papers followed
with the same title, along with the most important of them all, `Green's Functions of
Quantized Fields,' published in the Proceedings of the National Academy
of Sciences.
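A standard textbook illustration (not part of Schwinger's series, but a
useful check of the formalism) is a free scalar field with Lagrange function
${\cal L}=-{1\over2}(\partial_\mu\phi\,\partial_\mu\phi+m^2\phi^2)$. Varying
$\phi$ and integrating by parts,
\begin{eqnarray}
\delta W_{12}=\int_{\sigma_2}^{\sigma_1}(dx)\,
(\partial_\mu\partial_\mu\phi-m^2\phi)\,\delta\phi+\mbox{surface terms},
\end{eqnarray}
so stationarity under variations that vanish on the bounding surfaces yields
the Klein-Gordon equation, $(\partial_\mu\partial_\mu-m^2)\phi=0$, while the
surface terms supply the generators $F(\sigma_1)$ and $F(\sigma_2)$.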
The paper `On Gauge Invariance and Vacuum Polarization,'
submitted by Schwin\-ger
to the {\it Physical Review\/} near the end of December 1950,
is nearly universally acclaimed as his greatest publication. As his lectures
have rightfully been compared to the works of Mozart, so this might be
compared to a mighty construction of Beethoven, the 3rd Symphony, the {\it
Eroica}, perhaps. It is most remarkable because it stands in splendid
isolation. It was written over a year
after the last of his series of papers on
his second, covariant, formulation of quantum electrodynamics
was completed: `Quantum Electrodynamics III. The Electromagnetic Properties
of the Electron---Radiative Corrections to Scattering' was submitted
in May 1949. And barely two months later, in March 1951,
Schwinger would submit the first
of the series on his third reformulation of quantum
field theory, that
based on the quantum action principle, namely,
`The Theory of Quantized Fields I.' But
`Gauge Invariance and Vacuum Polarization' stands on its own,
and has endured the rapid changes in tastes and developments in quantum
field theory, while the papers in the other series are mostly of historical
interest now. Among many other remarkable developments, Schwinger discovered
here the axial-vector anomaly, nearly twenty years before its rediscovery and
naming by Adler, Bell, and Jackiw \cite{abj}.
As Lowell Brown \cite{brown}
pointed out, `Gauge Invariance and Vacuum Polarization'
still has over one hundred citations per year, and is far and away Schwinger's
most cited paper.\footnote{In 2005 the {\it Science Citation Index} lists
104 citations, out of a total of 458 citations to all of Schwinger's
work. These numbers have remained remarkably constant over ten years.}
So it was no surprise that in the late 1940s and early 1950s Harvard was the
center of the world, as
far as theoretical physics was concerned. Everyone, students and professors
alike, flocked to
Schwinger's lectures. Everything was revealed, long before publication;
and not infrequently
others received the credit because of Schwinger's reluctance to publish
before the subject was ripe.
A case in point is the so-called Bethe-Salpeter equation \cite{bs},
which as Gell-Mann and Low noted \cite{gml}, first
appeared in Schwinger's lectures at Harvard.
At any one time, Schwinger had ten or twelve Ph.D.
students, who typically saw him but rarely. In part, this was because he was
available to see his
large flock but one afternoon a week, but most saw him only when absolutely
necessary, because
they recognized that his time was too valuable to be wasted on trivial
matters. A student may have
seen him only a handful of times in his graduate career, but that was all the
student required.
When admitted to his sanctum, students were never rushed, were listened to
with respect, treated with kindness,
and given inspiration and practical advice. One must remember that
the student's problems
were typically quite unrelated to what Schwinger himself was working on at
the time; yet in a few
moments, he could come up with amazing insights that would keep the student
going for weeks,
if not months. A few students got to know Schwinger fairly well, and were
invited to the Schwingers'
house occasionally; but most saw Schwinger primarily as a virtuoso in the
lecture hall, and now and
then in his office. A few faculty members were a bit more intimate, but
essentially Schwinger was a very private person.
\section*{Field Theory}
Feynman left the field of quantum electrodynamics in 1950, regarding it as
essentially complete.
Schwinger never did. During the next fifteen years, he continued to explore
quantum field theory,
trying to make it reveal the secrets of the weak and strong interactions.
And he accomplished much.
In studying the relativistic structure of the theory, he recognized that
all the physically significant
representations of the Lorentz group were those that could be derived from the
`attached' four-dimensional
Euclidean group, which is obtained by letting the time coordinate become
imaginary. This idea was
originally ridiculed by Pauli, but it was to prove a most fruitful suggestion.
Related to this was the
CPT theorem, first given a proof for interacting systems by Schwinger in his
`Quantized Field' papers
of the early 1950s, and elaborated later in the decade.
By the end of the 1950s, Schwinger, with his
former student Paul Martin, was applying field theory methods to many-body
systems, which led to a
revolution in that field, and independently developed techniques which
opened up non-equilibrium statistical mechanics.
Along the way, in what he considered rather modest papers, he discovered
Schwinger terms, anomalies in the commutation relations between
field operators, and the Schwinger
model, still the only known example of dynamical mass generation.
The beginning of a quantum field theory for non-Abelian fields was made;
the original example of a non-Abelian field being that of
the gravitational field, he laid the groundwork for later canonical
formulations of gravity. (See also \cite{adm}.) Fundamental here were his
consistency conditions for a relativistic quantum field theory.
\section*{Measurement Algebra}
In 1950 or so, as we mentioned, Schwinger developed his action principle,
which applies to any
quantum system, including nonrelativistic quantum mechanics. Two years later,
he reformulated
quantum kinematics, introducing symbols that abstracted the essential elements
of realistic measurements.
This was measurement algebra, which yielded conventional Dirac
quantum mechanics. But although
the result was as expected, Schwinger saw the approach as of great value
pedagogically, and as
providing an interpretation of quantum mechanics that was self-consistent. He
taught quantum mechanics
this way for many years, starting in 1952 at the Les Houches summer school;
but only in 1959 did he
start writing a series of papers expounding the method to the world.
He always intended to write a
definitive textbook on the subject, but only an incomplete version based
on the
Les Houches lectures ever appeared. (In the last few years, Englert
brought his UCLA quantum mechanics
lectures to a wider audience \cite{englertbook}.)
One cannot conclude a retrospective of Schwinger's work without mentioning
two other magnificent achievements in the quantum mechanical domain.
He presented in 1952 a definitive development
of angular momentum theory derived in terms of oscillator variables in
`On Angular Momentum,' which was never properly
published; and he developed a `time-cycle' method of calculating
matrix elements without having to find all the wavefunctions in his
beautiful `Brownian Motion of a Quantum Oscillator' (1961).
We should also mention the famous Lippmann-Schwinger paper (1950),
which is chiefly remembered for what Schwinger considered a standard
exposition of quantum scattering theory, not for the variational methods
expounded there.
\section*{Electroweak Synthesis}
In spite of his awesome ability to make formalism work for him, Schwinger was
at heart a
phenomenologist. He was active in the search for higher symmetry; while he
came up with $W_3$,
Gell-Mann found the correct approximate symmetry of hadronic states, $SU(3)$.
Schwinger's
greatest success in this period was contained in his masterpiece,
his 1957 paper `A Theory of
the Fundamental Interactions.' Along with many other insights,
such as the existence of two neutrinos and the $V-A$ structure of
weak interactions, Schwinger there laid the
groundwork for the electroweak unification. He introduced two charged
intermediate vector
bosons as partners to the photon, which couple to charged weak currents.
That coupling is exactly that found in the standard model. A few years later,
his former student, Sheldon Glashow, as an outgrowth of his thesis, would
introduce a neutral
heavy boson to close the system to the modern $SU(2)\times U(1)$ symmetry
group \cite{glashow}; Steven
Weinberg \cite{weinberg} would complete the picture by
generating the masses for
the heavy bosons by
spontaneous symmetry breaking. Schwinger did not have the details
right in 1957, in
particular because experiment seemed to disfavor the $V-A$ theory his
approach implied,
but there is no doubt that Schwinger must be counted as the grandfather of the
Standard Model on the basis of this paper.
\section*{The Nobel Prize and Reaction}
Recognition of Schwinger's enormous contributions had come early.
He received the Charles L. Mayer
Nature of Light Award in 1949 on the basis of the partly completed manuscripts
of his `Quantum
Electrodynamics' papers. The first Einstein prize was awarded to him, along
with Kurt G\"odel,
in 1951. The National Medal of Science was presented to him by President
Johnson in 1964. The following year, Schwinger, Tomonaga, and Feynman
received the Nobel Prize in Physics from the King of Sweden.
But by this point his extraordinary command of the machinery of quantum field
theory had convinced
him that it was too elaborate to describe the real world, at least directly.
In his Nobel Lecture,
he appealed for a phenomenological field theory that would describe directly
the particles experiencing
the strong interaction. Within a year, he developed such a theory, Source
Theory.
\section*{Source Theory and UCLA}
It surely was the difficulty of incorporating strong interactions into
field theory that led to `Particles and Sources,' received
by the {\it Physical Review\/} in July 1966, barely six months after his Nobel
lecture, and based on lectures Schwinger
gave in Tokyo that summer. One must appreciate the milieu in which
Schwinger worked in 1966. For more than
a decade he and his students had been nearly the only exponents of field
theory, as the community sought to understand weak and strong interactions,
and the proliferation of `elementary particles,' through dispersion relations,
Regge poles, current algebra, and the like, most ambitiously through the
$S$-matrix bootstrap hypothesis of Geoffrey Chew and
Stanley Mandelstam \cite{chew,frautschi,adler,mandelstam}.
What work in field theory did exist then was largely axiomatic,
an attempt to turn the structure of the theory into a branch of mathematics,
starting with Arthur Wightman \cite{wightman},
and carried on by many others, including
Arthur Jaffe at Harvard \cite{jaffe}.
(The name changed from axiomatic field theory to constructive field theory
along the way.) Schwinger looked
on all of this with considerable distaste; not that he did not appreciate
many of the contributions these techniques offered in specific contexts,
but he could not see how they could form the {\em basis\/} of a theory.
The new source theory was supposed to supersede field theory, much
as Schwinger's
successive covariant formulations of quantum electrodynamics had replaced
his earlier schemes. In fact, the revolution was to be more profound,
because there were no divergences, and no renormalization.
`The concept of renormalization is simply foreign to this phenomenological
theory. In source theory, we begin by hypothesis with the description of
the actual particles, while renormalization is a field theory concept
in which you begin with the more fundamental operators, which are then
modified by dynamics. I emphasize that there never can be divergences
in a phenomenological theory. What one means by that is that one is
recognizing that all further phenomena are consequences of one
phenomenological constant, namely the basic charge unit, which describes the
probability of emitting a photon relative to the emission of an electron.
When one says that there are no divergences one means that it is not
necessary to introduce any new phenomenological constant. All further
processes as computed in terms of this primitive interaction automatically
emerge to be finite, and in agreement with those which historically had
evolved much earlier.' \cite{btts}
Robert Finkelstein has offered a perceptive discussion of Schwinger's source
theory program:
`In comparing operator field theory with source theory Julian revealed his
political orientation when he described operator field theory as a trickle
down theory (after a failed economic theory)---since it descends from implicit
assumptions about unknown phenomena at inaccessible and very high energies to
make predictions at lower energies. Source theory on the other hand he
described as anabatic (as in Xenophon's Anabasis) by which he meant that it
began with solid knowledge about known phenomena at accessible energies to make
predictions about physical phenomena at higher energies. Although source theory
was new, it did not represent a complete break with the past but rather was a
natural evolution of Julian's work with operator Green's functions. His
trilogy on source theory is not only a stunning display of Julian's power
as an analyst but it is also totally in the spirit of the modest scientific
goals he had set in his QED work and which had guided him earlier as a
nuclear phenomenologist.' \cite{fink}
But the new approach was not well received. In part this was because
the times were changing; within a few years, 't Hooft \cite{thooft}
would establish
the renormalizability of the Glashow-Weinberg-Salam $SU(2)\times U(1)$
electroweak model, and field theory was seen by all to be viable again.
With the discovery of asymptotic freedom in 1973 \cite{af},
a non-Abelian gauge theory of strong interactions,
quantum chromodynamics, which was proposed somewhat earlier \cite{QCD},
was promptly accepted by nearly
everyone. An alternative to conventional field theory did not seem to
be required after all. Schwinger's insistence on a clean break with the
past, and his rejection of `rules' as opposed to learning while
serving as an `apprentice,' did not encourage conversions.
Already before the source theory revolution,
Schwinger felt a growing sense of unease with
his colleagues at Harvard.
But the chief reason Schwinger left Harvard for UCLA was health-related.
Formerly overweight and inactive,
he had become health conscious upon the premature death of Wolfgang Pauli
in 1958. He had been fond of tennis from his youth, had discovered
skiing in 1960, and now his doctor was recommending a daily swim for his
health. So he listened favorably to the entreaties of David Saxon,
his closest colleague at the Radiation Lab during the war, who
for years had been trying to induce him to come to UCLA. Very much against
his wife's wishes, he made the move in 1971. He brought along his three
senior students at the time, Lester DeRaad, Jr., Wu-yang Tsai, and the
present author, who became long-term `assistants' at UCLA.
He and Saxon expected, as in the early days at Harvard, that
students would flock
to UCLA to work with him; but they did not. Schwinger was no longer the
center of theoretical physics.
This is not to say that his little group at UCLA did not make an heroic
attempt to establish a source-theory presence. Schwinger remained a
gifted innovator and an awesome calculator. He wrote 2-1/2 volumes of
an exhaustive treatise on source theory, {\it Particles, Sources, and Fields},
devoted primarily to the reconstruction of quantum
electrodynamics in the new language; unfortunately, he abandoned the
project when it came time to deal with strong interactions, in part because
he became too busy writing papers on an `anti-parton' interpretation of
the results of deep-inelastic scattering experiments.
He made some significant contributions
to the theory of magnetic charge; particularly noteworthy was his introduction
of dyons. He reinvigorated proper-time methods
of calculating processes in strong-field electrodynamics;
and he made some major contributions to the theory of the Casimir effect,
which are still having repercussions.
But it was clear he was reacting, not leading, as witnessed by his
quite pretty paper on the `Multispinor Basis of Fermi-Bose Transformation'
(1979), in which he kicked himself for not discovering supersymmetry.
\section*{Conclusion}
It is impossible to do justice in a few words
to the impact of Julian Schwinger on physical thought in the 20th Century.
He revolutionized fields from nuclear physics to many body theory,
first successfully formulated renormalized quantum electrodynamics,
developed the most powerful functional formulation of quantum field theory,
and proposed new ways of looking at quantum mechanics, angular momentum
theory, and quantum fluctuations. His legacy includes `theoretical tools'
such as the proper-time method, the quantum action principle, and effective
action techniques. Not only is he responsible for formulations bearing his
name: the Rarita-Schwinger equation, the Lippmann-Schwinger equation, the
Tomonaga-Schwinger equation, the Schwinger-Dyson equations,
the Schwinger mechanism, and so forth, but
some attributed to others, or known anonymously:
Feynman parameters, the Bethe-Salpeter equation,
coherent states, Euclidean field theory; the list goes on and on.
It is impossible to imagine what physics would be like in the 21st Century
without the contributions of Julian Schwinger, a very private
yet wonderful human being. It is most gratifying that a dozen years after his
death, recognition of his manifold influences is growing, and research projects
he initiated are still underway.
Julian Schwinger lectured twice at the Erice International School on
Subnuclear Physics, in the years 1986 and 1988.
\newcommand{\Abst}[1]{\,#1}
\newcommand{\vek}[1]{\mbox{\boldmath${#1}$\unboldmath}}
\newcommand{\matr}[1]{\mbox{$\hat{#1}$}}
\newcommand{\SP}[2]{\mbox{$\vek{#1}\cdot\vek{#2}$}}
\newcommand{\pAb}[2]{\frac{\displaystyle\partial #1}{\displaystyle\partial #2}}
\newcommand{\pAbc}[3]{\left.\frac{\displaystyle\partial #1}{\displaystyle\partial #2}\right|_{#3}}
\newcommand{\PAb}[3]{\frac{\displaystyle\partial^{#3} #1}{\displaystyle\partial {#2}^{#3}}}
\newcommand{\tAb}[2]{\frac{\displaystyle d #1}{\displaystyle d #2}}
\newcommand{\TAb}[3]{\frac{\displaystyle d^{#3} #1}{\displaystyle d {#2}^{#3}}}
\newcommand{\Abl}[2]{\frac{{\rm d} #1}{{\rm d} #2}}
\newcommand{\AKc}[1]{{\rm C}_{#1}}
\newcommand{\AKs}[1]{{\rm S}_{#1}}
\newcommand{\sgn}[1]{\,{\rm sgn}\,{#1}}
\newcommand{\Mitw}[1]{\left<#1\right>}
\newcommand{\Vv}[1]{\mbox{${#1}$}}
\newcommand{\Vvek}[2]{\mbox{${#1}^{#2}$}}
\newcommand{\VvN}[1]{\mbox{${#1}_{0}$}}
\newcommand{\VvNN}[1]{\mbox{${#1}_{0}^2$}}
\newcommand{\VSP}[2]{\mbox{$\Vv{#1}\Vv{#2}$}}
\newcommand{\STV}[1]{\mbox{$\tilde{#1}$}}
\newcommand{\figref}[1]{Fig.~\ref{#1}}
\newcommand{\tabref}[1]{Tab.~\ref{#1}}
\newcommand{\kapref}[1]{Sect.~\ref{#1}}
\newcommand{\deltaD}[1]{\mbox{$\delta( {#1} )$}}
\newcommand{\Aut}[1]{\it{#1}\rm }
\newcommand{\AutP}[1]{\it{#1}\rm }
\newcommand{\AutPet}[1]{\it{#1 et al.}\rm }
\newcommand{\AutPno}[1]{{#1}\rm }
\newcommand{\AutPnoet}[1]{{#1 et al.}\rm }
\newcommand{\Vvekm}[1]{\mbox{$\Vvek{#1}{\mu}$}}
\newcommand{\Vvekcv}[2]{\mbox{${#1}_{#2}$}}
\newcommand{\Vvekcvm}[1]{\mbox{$\Vvekcv{#1}{\mu}$}}
\newcommand{\pot}[2]{#1 \times 10^{#2}}
\newcommand{\poterr}[3]{(#1\pm#2)\times10^{#3}}
\newcommand{\expf}[1]{{\rm e}^{#1}}
\section{Introduction}
\label{sec:Intro}
The standard $\Lambda$CDM cosmology has been shown to describe our Universe to extremely high accuracy \citep{WMAP_params, Planck2013params, Planck2015params}. This model is based upon a spatially flat, expanding Universe with dynamics governed by General Relativity and whose dominant constituents at late times are cold dark matter (CDM) and a cosmological constant ($\Lambda$). The primordial seeds of structures are furthermore Gaussian-distributed adiabatic fluctuations with an almost scale-invariant power spectrum thought to be created by inflation.
Today, we know the key cosmological parameters (e.g., the total, CDM and baryon densities, the CMB photon temperature, Hubble expansion rate, etc.) to percent-level precision or better \citep{Planck2015params}. Assuming standard Big Bang Nucleosynthesis (BBN) and a standard thermal history, we can furthermore derive precise values for the helium abundance, $Y_{\rm p}$, and effective number of relativistic degrees of freedom, $N_{\rm eff}$ \citep[e.g.,][]{Steigman2007}. The physics of the recombination era, which defines the decoupling of photons and baryons around redshift $z\simeq 10^3$, is also now believed to be well understood within $\Lambda$CDM \citep[e.g.,][]{Chluba2010b, Yacine2010c}.
Of the many cosmological data sets, measurements of the cosmic microwave background (CMB) temperature and polarization anisotropies, beyond doubt, have driven the development towards precision cosmology over the past few years. Today, cosmologists have exhausted practically all information about the primordial Universe contained in the CMB temperature power spectra.
{\it WMAP} and {\it Planck} have also clearly seen the $E$-mode polarization signals \citep{Page2006, Planck2013powerspectra, Planck2015powerspectra}. Several sub-orbital and space-based experiments (e.g., {\it BICEP3, CLASS, SPTpol, ACTpol, SPIDER, PIPER, LiteBird, PIXIE, COrE+}) are now rushing to detect the primordial $B$-modes at large angular scales and to squeeze every last bit of information out of the $E$-mode signals, all to deliver the long-sought proof of inflation.
It is well known that CMB {\it spectral distortions} -- tiny departures of the average CMB energy spectrum from that of a perfect blackbody -- deliver a new independent probe of different processes occurring in the early Universe. The case for spectral distortions has been made several times and the physics of their formation is well understood \citep[for a recent overview, see][]{Chluba2011therm, Sunyaev2013, Chluba2013fore, Tashiro2014, deZotti2015}. The purpose of this article is to provide an overview of all distortion signals created within $\Lambda$CDM (Fig.~\ref{fig:signals}), also assessing their total uncertainty. Although non-standard processes (e.g., decaying particles or evaporating primordial black holes) could cause additional interesting signals, the $\Lambda$CDM distortions define clear targets for designing future distortion experiments.
Thus far, no all-sky distortion has been found \citep{Fixsen1996, Fixsen2009}; however, new experimental concepts, such as {\it PIXIE} \citep{Kogut2011PIXIE}, are being actively discussed and promise improvements of the measurements carried out with {\it COBE/FIRAS} by several orders of magnitude. It is thus time to ask what new information could be extracted from the CMB spectrum and how this could help refine our understanding of the Universe.
The case for spectral distortions as a new independent probe of inflation has also been made several times \citep[e.g.,][]{Hu1993, Chluba2012, Chluba2012inflaton, Dent2012, Pajer2012b, Khatri2013forecast, Chluba2013iso, Chluba2013fore, Clesse2014}, most recently by \citet{Cabass2016}, who emphasized that, given the constraints from {\it Planck}, an improvement in the sensitivity by a factor of $\simeq 3$ over {\it PIXIE} guarantees either a detection of $\mu$ or of negative running ($\gtrsim 95\%$ c.l.).
Here we add a few aspects to the discussion related to the interpretation of future distortion measurements carried out with an instrument similar to {\it PIXIE}.
For real distortion parameter estimation, one has to simultaneously determine the average CMB temperature, $\mu$, $y$ and residual ($r$-type) distortion parameters, as well as several foreground parameters from measurements in different spectral bands \citep{Chluba2013PCA}.
In this case, estimates for $\mu$ and $y$ based on simple scattering physics arguments (Sect.~\ref{sec:estimates}) underestimate the experimentally recovered ($\leftrightarrow$ measured) parameters, as we illustrate here.
We also briefly illustrate how well a {\it PIXIE}-like experiment may be able to constrain power spectrum parameters through the associated $\mu$-distortion when combined with existing constraints from {\it Planck} \citep{Planck2015params}. We find that an experiment with $\simeq 3.4$ times the sensitivity of {\it PIXIE} in its current design \citep{Kogut2011PIXIE} could allow tightening the constraint on the running of the spectral index by $\simeq 40\%-50\%$ when combined with existing data. This would also deliver an $\simeq 5\sigma$ detection of the $\mu$-distortion from CMB distortions alone. An $\simeq 10$ times enhanced sensitivity over {\it PIXIE} would furthermore allow a marginal detection of the first residual distortion parameter, which could be crucial when it comes to distinguishing different sources of distortions.
These forecasts are very idealized, assuming that the effective channel sensitivity already includes the penalty paid for foreground separation. Clearly, a more detailed foreground modeling for the monopole is required to demonstrate the full potential of future spectroscopic CMB missions, as we briefly discuss in Sect.~\ref{sec:foregrounds}. However, we argue that a combination of different data sets and exploitation of the many spectral channels of {\it PIXIE} will hopefully put us into the position to tackle this big challenge in the future.
\vspace{-3mm}
\section{Spectral distortions within $\Lambda$CDM}
\label{sec:distortions_LCDM}
Several exhaustive overviews on various spectral distortion scenarios exist \citep{Chluba2011therm, Sunyaev2013, Chluba2013fore, Tashiro2014, deZotti2015}, covering both standard and non-standard processes. Here we only focus on sources of distortions in the standard $\Lambda$CDM cosmology. For the numbers given in the text, we use the best-fitting parameters from {\it Planck} for the TT,TE,EE + lowP dataset \citep{Planck2015params}. Specifically, we use a flat model with $T_0=2.726\,{\rm K}$, $h=0.6727$, $\Omega_{\rm c}h^2=0.1198$, $\Omega_{\rm b}h^2=0.02225$, $Y_{\rm p}=0.2467$ and $N_{\rm eff}=3.046$, with their standard meaning \citep{Planck2015params}.
\subsection{Reionization and structure formation}
\label{sect:reion}
The first sources of radiation during reionization \citep{Hu1994pert}, supernova feedback \citep{Oh2003} and structure formation shocks \citep{Sunyaev1972b, Cen1999, Refregier2000, Miniati2000} heat the intergalactic medium at low redshifts ($z\lesssim 10$), leading to a partial up-scattering of CMB photons, causing a Compton $y$-distortion \citep{Zeldovich1969}.
Although this is the {\it largest} expected average distortion of the CMB caused within $\Lambda$CDM, its amplitude is quite uncertain and depends on the detailed structure and temperature of the medium, as well as scaling relations (e.g., between halo mass and temperature).
Several estimates for this contribution were obtained, yielding values for the total $y$-parameter at the level $y\simeq \pot{\rm few}{-6}$ \citep{Refregier2000, Zhang2004, Hill2015, Dolag2015, deZotti2015}.
Following \citet{Hill2015}, we will use a fiducial value of $y_{\rm re}=\pot{2}{-6}$. This is dominated by the low-mass end of the halo mass function and the signal should be detectable with {\it PIXIE} at more than $10^3\,\sigma$. At this enormous significance, small corrections due to the high temperature ($kT_{\rm e} \simeq 1\,{\rm keV}$) of the gas become noticeable \citep{Hill2015}. The relativistic correction can be computed using the temperature moment method of {\tt SZpack} \citep{Chluba2012SZpack, Chluba2012moments} and it differs from the distortions produced in the early Universe. This correction should be detectable with {\it PIXIE} at $\simeq 30\,\sigma$ \citep{Hill2015} and could teach us about the average temperature of the intergalactic medium, promising a way to solve the missing baryon problem. Both distortion signals are illustrated in Fig.~\ref{fig:signals}.
\begin{figure*}
\centering
\includegraphics[width=2.0\columnwidth]{./eps/Signals.pdf}
\caption{Comparison of several CMB monopole distortion signals produced in the standard $\Lambda$CDM cosmology.
The low-redshift distortion created by reionization and structure formation is close to a pure Compton-$y$ distortion with $y\simeq \pot{2}{-6}$. Contributions from the hot gas in low mass haloes give rise to a noticeable relativistic temperature correction, which is taken from \citet{Hill2015}. The damping and adiabatic cooling signals were explicitly computed using {\tt CosmoTherm} \citep{Chluba2011therm}. The cosmological recombination radiation (CRR) was obtained with
{\tt CosmoSpec} \citep{Chluba2016}. The estimated sensitivity ($\Delta I_\nu \approx 5\, {\rm Jy/sr}$) of PIXIE is shown for comparison (dotted line). The templates will be made available at {\tt www.Chluba.de/CosmoTherm}.}
\label{fig:signals}
\end{figure*}
\vspace{-0mm}
\subsection{Damping of primordial small-scale perturbations}
\label{sec:damp}
The damping of small-scale fluctuations of the CMB temperature set up by inflation at wavelengths $\lambda<1\,{\rm Mpc}$ causes another inevitable distortion of the CMB spectrum \citep{Sunyaev1970diss, Daly1991, Barrow1991, Hu1994, Hu1994isocurv}. While the idea behind this mechanism is quite simple, it was only recently rigorously described \citep{Chluba2012}, allowing us to perform detailed computations of the associated distortion signal for different early universe models \citep{Chluba2012, Chluba2012inflaton, Dent2012, Pajer2012b, Khatri2013forecast, Chluba2013iso, Chluba2013fore, Clesse2014, Cabass2016}. The distortion is sensitive to the amplitude and shape of the power spectrum at small scales (wavenumbers $1\,{\rm Mpc}^{-1}\lesssim k \lesssim \pot{2}{4}\,{\rm Mpc}^{-1}$) and thus provides a promising new way to constrain inflation.
For a given initial power spectrum of perturbations, the effective heating rate in general has to be computed numerically. However, at high redshifts the tight coupling approximation can be used to simplify the calculation. An excellent approximation for the effective heating rate is obtained with\footnote{Here, we define the heating rate such that $\int_z^\infty \frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z}{\,\rm d} z>0$.} \citep{Chluba2012, Chluba2013iso}
\begin{align}
\label{eq:adiabatic_damping}
\frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z}&\approx 4 A^2 \partial_z k_{\rm D}^{-2} \int^\infty_{k_{\rm min}} \frac{k^4{\,\rm d} k}{2\pi^2} P_\zeta(k)\,\expf{-2k^2/k_{\rm D}^2},
\end{align}
where $P_\zeta(k)=2\pi^2\,A_{\rm s}\,k^{-3}\,(k/k_0)^{n_{\rm S}-1 + \frac{1}{2}\,n_{\rm run} \ln(k/k_0)}$ defines the usual curvature power spectrum of scalar perturbations and $k_{\rm D}$ is the photon damping scale \citep{Weinberg1971, Kaiser1983}, which scales as $k_{\rm D}\approx \pot{4.048}{-6}\,(1 + z)^{3/2} {\rm Mpc}^{-1}$ early on. For adiabatic modes, we have a heating efficiency $A^2\approx (1+4R_\nu/15)^{-2}\approx 0.813$, where $R_\nu\approx 0.409$ for $N_{\rm eff}=3.046$. The $k$-space integral is truncated at $k_{\rm min}\approx 0.12\,{\rm Mpc}^{-1}$, which reproduces the full heating rate across the recombination era quite well \citep{Chluba2013fore}. With this we can directly compute the associated distortion using {\tt CosmoTherm} \citep{Chluba2011therm}. The various isocurvature perturbations can be treated in a similar manner \citep[e.g.,][]{Chluba2013iso}; however, in the standard inflation model these should be small. Tensor perturbations also contribute to the dissipation process; however, the associated heating rate is orders of magnitude lower than for adiabatic modes even for very blue tensor power spectra and thus can be neglected \citep{Ota2014, Chluba2015}.
For $A_{\rm s}=\pot{2.207}{-9}$, $n_{\rm S}=0.9645$ and $n_{\rm run}=0$ \citep{Planck2015params}, we present the result in Fig.~\ref{fig:signals}. The adiabatic cooling distortion (see Sect.~\ref{sec:ad_cool}) was simultaneously included. The signal is uncertain to within $\simeq 10\%$ in $\Lambda$CDM (Sect.~\ref{sec:results}). The distortion lies between a $\mu$- and $y$-distortion and is close to the detection limit of {\it PIXIE}.
As we will see below (Sect.~\ref{sec:forecasts}), with the current design the $\mu$-distortion part of the signal should be seen at the level of $\simeq 1.5\sigma$, which is in good agreement with earlier analysis \citep{Chluba2012, Chluba2013PCA}. A clear $5\sigma$ detection of this signal should be possible with $\simeq 3.4$ times higher sensitivity (Sect.~\ref{sec:forecasts}).
We will discuss various approximations for the damping signal below (Sect.~\ref{sec:Methods}), but simply performing a fit using $\mu$, $y$ and temperature shift \citep[see][for explicit definitions of these spectral shapes]{Chluba2013PCA}, $\Delta=\Delta T/T_0$, we find $\mu_{\rm fit}\approx \pot{1.984}{-8}$, $y_{\rm fit}\approx\pot{3.554}{-9}$ and $\Delta_{\rm fit}\approx \pot{-0.586}{-9}$ with a non-vanishing residual at the level of $20\%-30\%$.
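As a rough cross-check of these numbers, Eq.~\eqref{eq:adiabatic_damping} can be integrated directly. The Python sketch below is a minimal numerical estimate, {\it not} the {\tt CosmoTherm} computation used for Fig.~\ref{fig:signals}: it assumes the pivot scale $k_0=0.05\,{\rm Mpc}^{-1}$, neglects the distortion visibility function, and uses the simple estimate $\mu\approx 1.4\int \frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z}{\,\rm d} z$ over the $\mu$-era, $\pot{5}{4}\lesssim z\lesssim \pot{2}{6}$.

```python
import numpy as np

# Planck 2015 parameters quoted in the text; pivot k0 = 0.05/Mpc is an assumption
A_s, n_s, k0 = 2.207e-9, 0.9645, 0.05
A2 = 0.813          # heating efficiency (1 + 4 R_nu/15)^(-2) for adiabatic modes
k_min = 0.12        # Mpc^-1, truncation that mimics the full recombination-era rate
kD0 = 4.048e-6      # k_D ~ kD0 (1+z)^(3/2) Mpc^-1 at early times

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy-version differences)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def heating_rate(z, nk=3000):
    """d(Q/rho_gamma)/dz from Eq. (1) for a power-law P_zeta (n_run = 0)."""
    kD = kD0 * (1.0 + z)**1.5
    k = np.geomspace(k_min, 10.0 * kD, nk)
    # k^4/(2 pi^2) * P_zeta(k) = A_s * k * (k/k0)^(n_s - 1)
    integrand = A_s * k * (k / k0)**(n_s - 1.0) * np.exp(-2.0 * k**2 / kD**2)
    # |d k_D^{-2}/dz| = 3/[kD0^2 (1+z)^4]; sign fixed by the footnote convention
    return 4.0 * A2 * 3.0 / (kD0**2 * (1.0 + z)**4) * trapz(integrand, k)

# Crude mu estimate over the mu era (visibility function neglected)
z = np.geomspace(5e4, 2e6, 400)
mu = 1.4 * trapz([heating_rate(zi) for zi in z], z)
print(f"mu ~ {mu:.1e}")   # order 1e-8
```

Despite these simplifications, the recovered value is of the same order as $\mu_{\rm fit}$ quoted above; the residual difference reflects the neglected visibility weighting and the sharp $\mu$-era boundaries.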
\vspace{-1mm}
\subsection{Adiabatic cooling for baryons}
\label{sec:ad_cool}
The adiabatic cooling of ordinary matter continuously extracts energy from the CMB photon bath by Compton scattering, leading to another small but guaranteed distortion that directly depends on the baryon density and helium abundance. The distortion is characterized by {\it negative} $\mu$- and $y$-parameters at the level of $\simeq \pot{\rm few}{-9}$ \citep{Chluba2005, Chluba2011therm, Khatri2011BE}. The effective energy extraction history is given by
\begin{align}
\label{eq:adiabatic_cooling}
\frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z}&=-\frac{3}{2}\,\frac{N_{\rm tot} kT_{\gamma}}{\rho_\gamma (1+z)}
\nonumber
\\
&\approx -\frac{\pot{5.71}{-10}}{(1+z)}\,
\!\left[\frac{(1-Y_{\rm p})}{0.7533}\right]
\!\left[\frac{\Omega_{\rm b}h^2}{0.02225}\right]
\nonumber
\\
&\qquad\qquad\qquad\qquad\times\left[\frac{(1+f_{\rm He}+X_{\rm e})}{2.246}\right]\left[\frac{T_0}{2.726\,{\rm K}}\right]^{-3}
\end{align}
where $N_{\rm tot}=N_{\rm H}(1+f_{\rm He}+X_{\rm e})$ is the number density of all thermally coupled baryons and electrons; $N_{\rm H}\approx \pot{1.881}{-6}\,(1+z)^3\,{\rm cm}^{-3}$ is the number density of hydrogen nuclei; $f_{\rm He}\approx Y_{\rm p}/[4(1-Y_{\rm p})]\approx 0.0819$ and $X_{\rm e}=N_{\rm e}/N_{\rm H}$ is the free electron fraction, which can be computed accurately with {\tt CosmoRec} \citep{Chluba2010b}. For {\it Planck} 2015 parameters, the signal is shown in Fig.~\ref{fig:signals}. It is uncertain at the $\simeq 1\%$ level in $\Lambda$CDM (Sect.~\ref{sec:results}) and cancels part of the damping signal; however, it is roughly one order of magnitude weaker and cannot be separated at the currently expected level of sensitivity of next generation CMB spectrometers.
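For a fully ionized medium, $X_{\rm e}=1+2f_{\rm He}$, all the normalization brackets in Eq.~\eqref{eq:adiabatic_cooling} equal unity and the extraction history reduces to $\approx -\pot{5.71}{-10}/(1+z)$. The short sketch below is only a rough cross-check (the estimate $\mu\approx 1.4\int \frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z}{\,\rm d} z$ over $\pot{5}{4}\lesssim z\lesssim \pot{2}{6}$ is an assumption, not the full {\tt CosmoTherm} result); it confirms the quoted negative few$\,\times 10^{-9}$ level:

```python
import numpy as np

def cooling_rate(z):
    """Eq. (2) in the fully ionized limit, where each bracket equals unity."""
    return -5.71e-10 / (1.0 + z)

# Crude mu-era estimate, mu ~ 1.4 * int dQ/dz dz over 5e4 < z < 2e6
z = np.geomspace(5e4, 2e6, 2000)
y = cooling_rate(z)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))  # trapezoid
mu = 1.4 * integral
print(f"mu_cool ~ {mu:.1e}")   # negative, a few x 10^-9
```

Analytically, the integral is $-\pot{5.71}{-10}\ln(\pot{2}{6}/\pot{5}{4})\approx -\pot{2.1}{-9}$, so $\mu_{\rm cool}\approx -\pot{3}{-9}$, consistent with the level quoted above.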
\subsection{The cosmological recombination radiation}
The cosmological recombination process is associated with the emission of photons in free-bound and bound-bound transitions of hydrogen and helium \citep{Zeldovich68, Peebles68, Dubrovich1975}. This causes a small distortion of the CMB and the redshifted recombination photons should be visible as the cosmological recombination radiation (CRR), a tiny spectral distortion ($\simeq$ nK-$\mu$K level) present at mm to dm wavelength \citep[for overview see][]{Sunyaev2009}. The amplitude of the CRR depends directly on the number density of baryons in the Universe. The helium abundance furthermore affects the detailed shape of the recombination lines. Finally, the line positions and widths depend on when and how fast the Universe recombined. The CRR thus provides an independent way to constrain cosmological parameters and map the recombination history \citep{Chluba2008T0}.
Several computations of this CRR have been carried out in the past \citep{Rybicki93, DubroVlad95, Dubrovich1997, Kholu2005, Jose2006, Chluba2006b, Chluba2007, Jose2008, Chluba2009c, Chluba2010}. These calculations were very time-consuming, taking a few days of supercomputer time for one cosmology \citep[e.g.,][]{Chluba2007, Chluba2010}. This big computational challenge was recently overcome \citep{Yacine2013RecSpec, Chluba2016}, today allowing us to compute the CRR in about 15 seconds on a standard laptop using {\tt CosmoSpec}\footnote{{\tt www.Chluba.de/CosmoSpec}} \citep{Chluba2016}.
The {\it fingerprint} from the recombination era shows several distinct spectral features that encode valuable information about the recombination process (Fig.~\ref{fig:signals}). Many subtle radiative transfer and atomic physics processes \citep[e.g.,][]{Chluba2007, Chluba2009c, Chluba2010b, Yacine2010c} are included by {\tt CosmoSpec}, yielding the most detailed and accurate predictions of the CRR in the standard $\Lambda$CDM model to date. In $\Lambda$CDM, the CRR is uncertain at the level of a few percent, with the error being dominated by atomic physics \citep[see][]{Chluba2016}.
The CRR is currently roughly $\simeq 6$ times below the estimated detection limit of {\it PIXIE} (cf. Fig.~\ref{fig:signals}) and a detection from space will require several times higher sensitivity \citep{Vince2015}, which in the future could be achieved by experimental concepts similar to {\it PRISM} \citep{PRISM2013WPII} or {\it Millimetron} \citep{Smirnov2012}. At low frequencies ($1\,{\rm GHz}\lesssim \nu\lesssim 10\,{\rm GHz}$), the significant spectral variability of the CRR may also allow us to detect it from the ground \citep{Mayuri2015}.
\subsection{Superposition of blackbodies}
\label{sec:sup}
It is well known that the superposition of blackbodies with different temperatures is no longer a blackbody but exhibits a $y$-type spectral distortion \citep{Zeldovich1972, Chluba2004, Stebbins2007}. It is precisely this effect that leads to the distortion caused by Silk-damping \citep[e.g.,][]{Chluba2012}. To second order in the temperature fluctuations $\Delta_T=\Delta T/\bar{T}\ll 1$, the effective $y$-parameter is given by \citep[cf.,][]{Chluba2004}
\begin{align}
\label{eq:y-sup}
y&=\frac{1}{2}\left<\Delta^2_T\right>,
\end{align}
where $\bar{T}=\left<T\right>$ and the average can be related to any blackbody intensity mixing process. This can be i) Thomson scattering, ii) weighted averages of CMB {\it intensity} maps (e.g., due to spherical harmonic decomposition) or iii) inevitable averaging inside the instrumental beam \citep{Chluba2004}. As mentioned above, i) occurs in the early Universe and also during reionization \citep[e.g.,][]{Hu1994pert, Chluba2012} while ii) can occur as an {\it artefact} of the standard analysis of CMB intensity/antenna temperature maps, which can in principle be avoided by consistently converting to thermodynamic temperature at second order in $\Delta_T$.
For the standard CMB temperature anisotropies, the beam averaging effect is tiny for the angular resolutions typical of today's CMB imaging experiments ({\it Planck}, SPT, ACT, etc.) and should be completely negligible \citep{Chluba2004}. For a {\it PIXIE}-type experiment with beam $\simeq 2^\circ$, the effect should be limited to $y\lesssim \pot{{\rm few}}{-10}-10^{-9}$ \citep{Chluba2004}; however, since high-resolution CMB temperature maps are available, this unavoidable effect can be taken into account very accurately.
The largest distortion due to the superposition of blackbodies with different temperatures is caused by the presence of the CMB dipole, with $\beta=\varv/c\simeq \pot{(1.231\pm 0.003)}{-3}$ \citep{Hinshaw2009}. Averaging the CMB intensity\footnote{We emphasize again that this distortion is {\it not} produced if the thermodynamic temperature, computed to second order in the fluctuations, was used.} over the whole sky yields \citep{Chluba2004}
\begin{align}
\label{eq:y-sup_CMB_dipole}
y_{\rm d}&=\frac{\beta^2}{6}\approx \pot{(2.525 \pm 0.012)}{-7},
\end{align}
with sky-averaged temperature dispersion $\left<[\beta \cos(\theta)]^2\right>=\beta^2/3$. The uncertainty, $\Delta y_0^{\rm d}\approx \pot{1.2}{-9}$, in the contribution of the CMB dipole to the average $y$-parameter
is a few times larger than the $y$-parameter induced by multipoles $\ell>1$ (see below).
One may also worry about higher order temperature terms from the dipole \citep{Chluba2004}; however, only even moments contribute, so that the next order correction averaged over the whole sky, $\left<[\beta \cos(\theta)]^4\right>\simeq \beta^4/5\simeq \pot{(4.59\pm0.04)}{-13}$, is negligible, although it could still contribute noticeably in the far Wien tail ($\nu\gtrsim 1\,{\rm THz}$). We mention that in the presence of a primordial dipole, we would in addition obtain a $y$-parameter contribution $y_0^{\rm pD}\simeq \beta \Delta_{10}/\sqrt{12\pi}$, which is caused by aberration and Doppler boosting \citep{Chluba2011ab} and could reach $\simeq 10^{-9}$. Here, $\Delta_{10}$ is the spherical harmonic coefficient of the primordial dipole along the direction of the CMB dipole.
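The dipole numbers follow from Eq.~\eqref{eq:y-sup} with a straightforward average over the sphere. The sketch below is a purely numerical cross-check: it recovers the dispersion $\left<[\beta \cos(\theta)]^2\right>=\beta^2/3$, hence $y_{\rm d}=\beta^2/6$, and the fourth-order moment $\beta^4/5$ quoted above.

```python
import numpy as np

beta = 1.231e-3                       # CMB dipole amplitude v/c
mu = np.linspace(-1.0, 1.0, 200001)   # mu = cos(theta); sky average = int dmu / 2

def sky_avg(f):
    """Average over the sphere of an azimuthally symmetric function f(mu)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(mu))) / 2.0

var2 = sky_avg((beta * mu)**2)        # <Delta_T^2> = beta^2/3
y_d = 0.5 * var2                      # Eq. (3): y_d = <Delta_T^2>/2 = beta^2/6
var4 = sky_avg((beta * mu)**4)        # next-order moment, beta^4/5 (negligible)
print(f"y_d = {y_d:.3e}")             # ~2.5e-7, matching Eq. (4)
```

The same averaging routine applied to $(\beta\mu)^4$ directly reproduces the $\pot{4.59}{-13}$ value, confirming that only the quadratic moment matters at the sensitivity levels discussed here.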
Averaging the CMB intensity spectrum over the whole sky (after subtracting the CMB dipole spectrum), we find from the measured {\it Planck} temperature power spectrum \citep{Planck2015params}
\begin{align}
\label{eq:y-sup_CMB}
y_{\rm sup}&=\sum_{\ell\geq 2} \frac{(2\ell+1)\,C_\ell}{8\pi}\approx\pot{8.23}{-10},
\end{align}
which is extremely close to an earlier estimate based on the theoretical CMB power spectrum \citep{Chluba2012}. The uncertainty in this derived value is dominated by large-angle foreground residuals but is estimated to be below $\simeq 1\%$. As this number is specific to our realization of the Universe and to our own location, it is {\it not} limited by cosmic variance. The CMB temperature is increased by $\Delta T\approx 4.49\,{\rm nK}$ due to the same effect.
We mention that the superposition of blackbodies of different temperatures has a slightly different physical effect than energy exchange through Compton scattering. In the latter case, energy is transferred from the electrons to the photons (assuming heating), such that photons are (partially) upscattered and afterwards {\it all} the energy is stored in the associated ($y$-type) distortion. For the mixing of blackbodies, {\it no} energy exchange between photons and electrons is required ($\leftrightarrow$ Thomson limit) and 2/3 of the energy stored by the original temperature fluctuations causes an increase of the average blackbody temperature. Thus, for a $y$-distortion created through the superposition of blackbodies one also finds an average temperature shift $\Delta T/\bar{T}=2 y_{\rm sup}$ \citep{Chluba2012, Chluba2015IJMPD, Inogamov2015}. For the effect of the dipole this implies $\Delta T_{\rm d}/T_0=\beta^2/3$ caused by the superposition. However, another correction, $\Delta T_{\rm D}/T_0\approx -\beta^2/2$, arises from the Lorentz boost, so that the total temperature shift is $\Delta T=\Delta T_{\rm d}+\Delta T_{\rm D}=-T_0\,\beta^2/6\approx -(0.688 \pm 0.003) \,\mu{\rm K}$ \citep{Chluba2004, Chluba2011ab}.
\vspace{-3mm}
\subsection{Dark matter annihilation}
Today, cold dark matter is a well-established constituent of our Universe \citep{WMAP_params, Planck2013params, Planck2015params}. However, the nature of dark matter is still unclear and many groups are trying to gather any new clue to help unravel this big puzzle \citep[e.g.,][]{Adriani2009, Galli2009, CDMS2010, Zavala2011, Huetsi2011, BSA11, Aslanyan2015}. Similarly, it is unclear how dark matter was produced, however, within $\Lambda$CDM, the WIMP scenario provides one viable solution \citep[e.g.,][]{Jungman1996, Bertone2005}. In this case, dark matter should annihilate at a low level throughout the history of the Universe and even today.
For specific dark matter models, the level of annihilation around the recombination epoch is tightly constrained with the CMB anisotropies \citep{Galli2009, Cirelli2009, Huetsi2009, Slatyer2009, Huetsi2011, Giesen2012, Diamanti2014, Planck2015params}. The annihilation of dark matter can cause changes in the ionization history around last scattering ($z\simeq 10^3$), which in turn can lead to changes of the CMB temperature and polarization anisotropies \citep{Chen2004, Padmanabhan2005, Zhang2006}. Although the details depend significantly on the interaction of the annihilation products with the primordial plasma \citep{Shull1985, Slatyer2009, Valdes2010, Galli2013, Slatyer2015}, the same process should lead to distortions of the CMB \citep{McDonald2001, Chluba2010a, Chluba2011therm}.
The effective heating rate of the medium can be expressed as \citep[see also][]{Chluba2013fore}
\begin{align}
\label{eq:DM_annihil}
\frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z}&=f_{\rm ann} \frac{N_{\rm H}(z)(1+z)^{2+\lambda}}{H(z)\,\rho_\gamma(z)}
\end{align}
where $\lambda=0$ for s-wave annihilation. Here, $H$ denotes the Hubble factor and $\rho_\gamma$ the CMB photon energy density. The annihilation efficiency, $f_{\rm ann}$, captures all details related to the dark matter physics (e.g., annihilation cross section, mass, decay channels, etc.) and can be roughly taken as constant.
For existing upper limits on $f_{\rm ann}$, the distortion is well below the detection limit of {\it PIXIE} \citep{Chluba2011therm, Chluba2013fore, Chluba2013PCA}. Using the latest constraints from {\it Planck}, we find the $\mu$-distortion to be $\mu\lesssim \pot{\rm few}{-10}-10^{-9}$ (see Sect.~\ref{sec:results}). For s-wave annihilation scenarios, this limit ought to be rather conservative, and it is hard to imagine a much larger effect. However, spectral distortion measurements are sensitive to {\it all} energy release at $z\lesssim \pot{2}{6}$ and are not limited to the epoch around last scattering. Thus, searches for this small distortion could deliver an important test of the WIMP paradigm should any signature of dark matter annihilation be found through another probe.
Possible coupling of WIMPs to the baryons or photons could further enhance the adiabatic cooling effect \citep{Yacine2015DM}, which could provide additional tests of the nature of dark matter especially for low dark matter masses.
\subsection{Anisotropic CMB distortions}
To close the discussion of different distortion signals, we briefly mention anisotropic ($\leftrightarrow$ {\it spectral-spatial}) CMB distortions.
Even in the standard $\Lambda$CDM cosmology, anisotropies in the spectrum of the CMB are expected. The largest source of anisotropies is due to the Sunyaev-Zeldovich effect caused by the hot plasma inside clusters of galaxies \citep{Zeldovich1969, Sunyaev1980, Birkinshaw1999, Carlstrom2002}. The $y$-distortion power spectrum has already been measured directly by {\it Planck} \citep{Planck2013ymap, Planck2015ymap} and encodes valuable information about the atmospheres of clusters \citep[e.g.,][]{Refregier2000, Komatsu2002, Diego2004, Battaglia2010, Shaw2010, Munshi2013, Dolag2015}. Similarly, the warm hot intergalactic medium contributes \citep{Zhang2004, Dolag2015}.
In the primordial Universe, anisotropies in the $\mu$ and $y$ distortions are expected to be tiny \citep[relative perturbations $\lesssim 10^{-4}$, e.g., see][]{Pitrou2010} unless strong spatial variations in the primordial heating mechanism are present \citep{Chluba2012}. This could in principle be caused by non-Gaussianity of perturbations in the ultra-squeezed limit \citep{Pajer2012, Ganc2012, Biagetti2013, Razi2015}; however, this is beyond $\Lambda$CDM cosmology and will not be considered further.
Another guaranteed anisotropic signal is due to Rayleigh scattering of CMB photons in the Lyman-series resonances of hydrogen around the recombination era \citep{Yu2001, Lewis2013}. The signal is strongly frequency dependent, can be modeled precisely and may be detectable with future CMB imagers (e.g., {\it COrE+}) or possibly {\it PIXIE} at large angular scales \citep{Lewis2013}. In a very similar manner, the resonant scattering of CMB photons by metals appearing in the dark ages \citep{Loeb2001, Zaldarriaga2002, Kaustuv2004, Carlos2007Pol} or scattering in the excited levels of hydrogen during recombination \citep{Jose2005, Carlos2007Pol} can lead to anisotropic distortions. To measure these signals, precise channel cross-calibration and foreground rejection is required.
Due to our motion relative to the CMB rest frame, the spectrum of the CMB dipole should also be distorted simply because the CMB monopole has a distortion \citep{Danese1981, Balashev2015}. The signal associated with the large late-time $y$-distortion could be detectable with {\it PIXIE} at the level of a few $\sigma$ \citep{Balashev2015}. Since for these measurements no absolute calibration is required, this effect will allow us to check for systematics. In addition, the dipole spectrum can be used to constrain monopole foregrounds \citep{Balashev2015, deZotti2015}.
Finally, again due to the superposition of blackbodies (caused by the spherical harmonic expansion of the intensity map), the CMB quadrupole spectrum is also distorted, exhibiting a $y$-distortion related to our motion \citep{Kamionkowski2003, Chluba2004}. The associated effective $y$-parameter is $y_{\rm Q}=\beta^2/6 \approx \pot{(2.525 \pm 0.012)}{-7}$ and should be noticeable with {\it PIXIE} and future CMB imagers.
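The quoted value of $y_{\rm Q}$ follows directly from the dipole velocity. A minimal sketch, assuming $v\approx 369\,{\rm km/s}$ for our motion relative to the CMB rest frame:

```python
# Motion-induced quadrupole y-parameter, y_Q = beta^2 / 6.
# v ~ 369 km/s is the approximate CMB dipole velocity (assumption).
c = 299792458.0          # speed of light [m/s]
v = 369.0e3              # velocity relative to the CMB rest frame [m/s]
beta = v / c
y_Q = beta**2 / 6.0
print(y_Q)               # ~2.52e-7, matching the quoted (2.525 +/- 0.012) x 10^-7
```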
\section{Approximations for the distortion signals}
\label{sec:Methods}
The primordial distortion signals that are caused by early energy release can be precisely computed using {\tt CosmoTherm} \citep{Chluba2011therm}. However, for parameter estimation we will use the Green's function method developed by \citet{Chluba2013Green}. The results for different scenarios will be compared with more approximate but very simple estimates, summarized in the next section.
\subsection{Simple estimates for the $\mu$- and $y$-parameters}
\label{sec:estimates}
To compute estimates for the $\mu$- and $y$-parameters, several approximations have been discussed in the literature. Given the energy release history, ${\,\rm d} (Q/\rho_\gamma)/{\,\rm d} z$, they can all be compactly written as \citep[e.g.,][]{Chluba2013Green, Chluba2013PCA}
\begin{subequations}
\label{eq:Greens_approx_improved}
\begin{align}
y&=\frac{1}{4}\left.\frac{\Delta \rho_\gamma}{\rho_\gamma}\right|_y = \frac{1}{4}\int_0^\infty \mathcal{J}_y(z')\,\frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z'} {\,\rm d} z'
\\
\mu&=1.401\left.\frac{\Delta \rho_\gamma}{\rho_\gamma}\right|_\mu =1.401 \int_0^\infty \mathcal{J}_\mu(z') \frac{{\,\rm d} (Q/\rho_\gamma)}{{\,\rm d} z'} {\,\rm d} z'
\end{align}
\end{subequations}
where $\left.\Delta \rho_\gamma/\rho_\gamma\right|_y$ and $\left.\Delta \rho_\gamma/\rho_\gamma\right|_\mu$ denote the effective energy release in the $y$- and $\mu$-era, respectively. The individual distortion visibility functions, $\mathcal{J}_i(z)$, determine the differences between various existing approximations. The simplest approach assumes that the transition between $\mu$ and $y$ occurs sharply at $z=z_{\mu y}\simeq \pot{5}{4}$ and that no distortions are created at $z\gtrsim z_{\rm th}$, where $z_{\rm th}$ is the thermalization redshift, which is given by \citep{Burigana1991, Hu1993}
\begin{align}
\label{eq:z_th}
z_{\rm th}\approx
\pot{1.98}{6}\left[\frac{(1-Y_{\rm p}/2)}{0.8767}\right]^{-2/5}
\!\left[\frac{\Omega_{\rm b}h^2}{0.02225}\right]^{-2/5}
\!\left[\frac{T_0}{2.726\,{\rm K}}\right]^{1/5}.
\end{align}
In this case, we have the simple approximation (`Method A')
\begin{subequations}
\begin{align}
\label{eq:M1}
\mathcal{J}_y(z)
&=
\begin{cases}
1 & \text{for}\; z_{\rm rec}\leq z \leq z_{\mu y}
\\
0 & \text{otherwise}
\end{cases}
\\
\mathcal{J}_\mu(z)
&=
\begin{cases}
1 & \text{for}\; z_{\mu y}\leq z \leq z_{\rm th}
\\
0 & \text{otherwise}.
\end{cases}
\end{align}
\end{subequations}
For the estimates of $y$ from early energy release, we will not include any contributions from after recombination, $z\lesssim 10^3=z_{\rm rec}$. These contributions will be attributed to the reionization $y$-parameter.
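For the fiducial parameters appearing in Eq.~\eqref{eq:z_th}, the thermalization redshift can be evaluated directly; the short sketch below simply implements this scaling relation.

```python
def z_th(Yp=0.2467, ob_h2=0.02225, T0=2.726):
    """Thermalization redshift, following the scaling relation given in the text."""
    return (1.98e6 * ((1 - Yp / 2) / 0.8767)**(-2/5)
                   * (ob_h2 / 0.02225)**(-2/5)
                   * (T0 / 2.726)**(1/5))

print(z_th())   # ~1.98e6 for the fiducial parameters
```

A higher baryon density makes thermalization more efficient, lowering $z_{\rm th}$, as reflected by the negative exponent of $\Omega_{\rm b}h^2$.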
The next improvement is achieved by taking into account that the thermalization efficiency does not abruptly vanish at $z\simeq z_{\rm th}$, but that even at $z>z_{\rm th}$ a small $\mu$-distortion is produced \citep{Sunyaev1970mu, Danese1982, Burigana1991, Hu1993}. With this we have (`Method B')
\begin{subequations}
\begin{align}
\label{eq:M2}
\mathcal{J}_y(z)
&=
\begin{cases}
1 & \text{for}\; z_{\rm rec}\leq z \leq z_{\mu y}
\\
0 & \text{otherwise}
\end{cases}
\\
\mathcal{J}_\mu(z)
&=
\begin{cases}
\mathcal{J}_{\rm bb}(z) & \text{for}\; z_{\mu y}\leq z
\\
0 & \text{otherwise}.
\end{cases}
\end{align}
\end{subequations}
where $\mathcal{J}_{\rm bb}(z)\approx \expf{-(z/z_{\rm th})^{5/2}}$ is the {\it distortion visibility function}.\footnote{Refined approximations for the distortion visibility function have been discussed \citep{Khatri2012b, Chluba2014}, but once higher accuracy is required it is easier to directly use the Green's function method, so we do not go into more detail here.}
\begin{figure}
\centering
\includegraphics[width=1.05\columnwidth]{./eps/Smode.pdf}
\\
\includegraphics[width=1.05\columnwidth]{./eps/Emode.pdf}
\caption{Principal component decomposition for {\it PIXIE}-like setting ($\{\nu_{\rm min}, \nu_{\rm max}, \Delta\nu\}=\{30, 1000, 15\}\,{\rm GHz}$). -- Upper panel: first two residual distortion eigenmodes, $S^{(k)}$, in comparison with the spectral shapes of temperature shift, $\mu$ and $y$-distortions. We scaled the templates by convenient factors to make them comparable in amplitude. -- Lower panel: associated energy release eigenmodes, $E^{(k)}$, and visibilities, $J_i$, of temperature shift, $\mu$ and $y$-distortions. The figures were adapted from \citet{Chluba2013PCA}.}
\label{fig:Sk}
\label{fig:Ek}
\end{figure}
The next simple approximations also include the fact that the transition between $\mu$ and $y$ distortions is not abrupt at $z\simeq z_{\mu y}$. The distortion around this redshift is mostly given by a superposition of $\mu$ and $y$, with a smaller correction in the form of the residual ($r$-type) distortion, which can be modeled numerically. By simply determining the best-fitting approximation to the distortion Green's function using only $\mu$ and $y$ one can write \citep{Chluba2013Green}
\begin{subequations}
\begin{align}
\label{eq:branching_approx_improved}
\mathcal{J}_y(z)
&\approx
\begin{cases}
\left(1+\left[\frac{1+z}{\pot{6}{4}}\right]^{2.58}\right)^{-1}
& \text{for}\; z_{\rm rec}\leq z
\\
0 & \text{otherwise}
\end{cases}
\\
\mathcal{J}_\mu(z)
&\approx\mathcal{J}_{\rm bb}(z)\,\left[1-\exp\left(-\left[\frac{1+z}{\pot{5.8}{4}}\right]^{1.88}\right)\right].
\end{align}
\end{subequations}
We shall refer to this as `Method C'; it reproduces the exact proportions of $\mu$ and $y$ only to $\simeq 10\%-20\%$ precision. To ensure full energy conservation (no leakage of energy to the $r$-distortion), one can instead use $\mathcal{J}_\mu(z)\approx [1- \mathcal{J}_y(z)]\,\mathcal{J}_{\rm bb}(z)$ (`Method D').
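The four approximations can be compared directly for a toy heating history. The sketch below assumes a constant ${\rm d}(Q/\rho_\gamma)/{\rm d}\ln z$ (an illustrative assumption only, not one of the scenarios of Table~\ref{tab:one}) and evaluates Eq.~\eqref{eq:Greens_approx_improved} with the visibility functions of methods A--D:

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal rule (self-contained, avoids NumPy version differences)."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

z_rec, z_muy, zth = 1e3, 5e4, 1.98e6
z = np.logspace(3, 7, 20000)
J_bb = np.exp(-(z / zth)**2.5)                 # distortion visibility function

# Toy heating history: constant d(Q/rho_gamma)/dln z, i.e. dQ/dz = A/(1+z)
A = 1e-8
dQdz = A / (1 + z)

# y-branching: sharp (methods A, B) and smooth (methods C, D); grid starts at z_rec
Jy_AB = ((z >= z_rec) & (z <= z_muy)).astype(float)
Jy_CD = 1.0 / (1 + ((1 + z) / 6e4)**2.58)

# mu-branching for each method
Jmu = {
    'A': ((z >= z_muy) & (z <= zth)).astype(float),
    'B': np.where(z >= z_muy, J_bb, 0.0),
    'C': J_bb * (1 - np.exp(-((1 + z) / 5.8e4)**1.88)),
    'D': (1 - Jy_CD) * J_bb,
}

y_A = 0.25 * trapz(Jy_AB * dQdz, z)            # methods A and B share this y
y_C = 0.25 * trapz(Jy_CD * dQdz, z)            # methods C and D share this y
mu = {m: 1.401 * trapz(Jmu[m] * dQdz, z) for m in 'ABCD'}
print(y_A, y_C, mu)
```

Consistent with Table~\ref{tab:one}, the smooth branching ratios (C, D) give a slightly larger $y$, while method B yields a smaller $\mu$ than the sharp cutoff of method A, and method D a smaller $\mu$ than method C.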
All the above expressions give slightly different results for the expected distortion $\mu$ and $y$-parameters. Below we will compare them with the more accurate distortion principal component decomposition \citep{Chluba2013PCA}, which optimizes the representation when simultaneously estimating $\mu$, $y$ and $\Delta=\Delta T/T_0$. At the same time, these approximations allow one to quickly estimate the expected distortion signals and their dependence on different parameters, which can be useful for order of magnitude work. We will see that a simple interpretation of the distortion in terms of $\mu$ and $y$ derived in this way differs slightly from what future measurements will recover (Sect.~\ref{sec:results}). Specifically, due to the uncertainty in the value of the CMB monopole, the projections of the distortion signals on to $\mu$ are underestimated by $\simeq 20\%-30\%$ (Table~\ref{tab:one}).
\begin{table*}
\centering
\caption{Comparison of distortion parameters for various methods and types of scenarios with cosmological parameters based on the {\it Planck} 2015 TT,TE,EE+lowP results \citep{Planck2015params}.
The uncertainties for the predictions in the dissipation scenarios are dominated by the uncertainties in the power spectrum parameters.
Assuming standard BBN ($Y_{\rm p}=0.2467$), for the adiabatic cooling distortion the small uncertainty is dominated by that of the baryon density. This contribution to the total distortion is also included in the values given for the dissipation scenarios.
For the annihilation scenario, we assumed s-wave annihilation for a Majorana dark matter particle using $p_{\rm ann}<\pot{4.1}{-28}\,{\rm cm}^3\,\sec^{-1}{\rm GeV}^{-1}$ \citep{Planck2015params} with $f_{\rm ann}=(\rho_{\rm c}^2 c^4 \Omega_{\rm cdm}^2/N_{\rm H, 0})\,p_{\rm ann}\approx \pot{8.5}{3}\,{\rm eV}\,\sec^{-1} p_{\rm ann}/[{\rm cm}^3\,\sec^{-1}{\rm GeV}^{-1}]$. All quoted error bars are for 68\% c.l. and the central values are medians.}
\begin{tabular}{cccccc}
\hline
\hline
& Parameter
& Dissipation I & Dissipation II
& Adiabatic cooling
& Annihilation (s-wave)
\\
\hline
& $\ln(10^{10} A_{\rm s})$
& $3.094\pm0.034$
& $3.103 \pm 0.036$
& $-$
& $-$
\\
& $n_{\rm S}$
& $0.9645\pm0.0049$
& $0.9639 \pm 0.0050$
& $-$
& $-$
\\
& $n_{\rm run}$
& $0$
& $\!\!\!\!-0.0057 \pm 0.0071$
& $-$
& $-$
\\
& $100\,\Omega_{\rm b} h^2$
& $2.225 \pm 0.016$
& $2.229 \pm 0.017$
& $2.225\pm0.016$
& $2.225\pm0.016$
\\
& $f_{\rm ann}$
& $-$
& $-$
& $-$
& $<\pot{3.5}{-24}\,{\rm eV} \, \sec^{-1} \,(\text{95\% c.l.})$
\\
\hline
\hline
\multirow{2}{*}{Method A}
& $y/10^{-9}$
& $3.67 ^{+ 0.17 } _{- 0.17 }$
& $3.53 ^{+ 0.25 } _{- 0.23 }$
& $-0.532^{+ 0.003} _{- 0.003}$
& $<0.091$
\\[2pt]
& $\mu/10^{-8}$
& $1.72 ^{+ 0.13 } _{- 0.12 }$
& $ 1.31 ^{+ 0.52 } _{- 0.38 } $
& $-0.296^{+ 0.002} _{- 0.002}$
& $<0.062$
\\[1pt]
\hline
\multirow{2}{*}{Method B}
& $y/10^{-9}$
& $3.67 ^{+ 0.18 } _{- 0.17 } $
& $3.54 ^{+ 0.25 } _{- 0.23 } $
& $-0.532^{+ 0.003} _{- 0.003}$
& $<0.091$
\\[2pt]
& $\mu/10^{-8}$
& $1.62 ^{+ 0.12 } _{- 0.11 }$
& $1.27 ^{+ 0.49 } _{- 0.36 } $
& $-0.277^{+ 0.002} _{- 0.002}$
& $<0.058$
\\[1pt]
\hline
\multirow{2}{*}{Method C}
& $y/10^{-9}$
& $3.83 ^{+ 0.19 } _{- 0.18 }$
& $3.66 ^{+ 0.28 } _{- 0.26 }$
& $-0.558^{+ 0.003} _{- 0.003}$
& $<0.097$
\\[2pt]
& $\mu/10^{-8}$
& $1.71 ^{+ 0.12 } _{- 0.12 }$
& $1.34 ^{+ 0.48 } _{- 0.36 }$
& $-0.290^{+ 0.002 } _{- 0.002 }$
& $<0.061$
\\[1pt]
\hline
\multirow{2}{*}{Method D}
& $y/10^{-9}$
& $3.83 ^{+ 0.19 } _{- 0.18 } $
& $3.66 ^{+ 0.29 } _{- 0.26 }$
& $-0.558^{+ 0.003} _{- 0.003}$
& $<0.097$
\\[2pt]
& $\mu/10^{-8}$
& $1.54 ^{+ 0.11 } _{- 0.11 }$
& $1.18 ^{+ 0.46 } _{- 0.33 }$
& $-0.263^{+ 0.001 } _{- 0.001 }$
& $<0.055$
\\[1pt]
\hline
\hline
\multirow{4}{*}{PCA}
& $y/10^{-9}$
& $3.63 ^{+ 0.17 } _{- 0.17 }$
& $3.49 ^{+ 0.26 } _{- 0.23 }$
& $-0.527^{+ 0.003} _{- 0.003}$
& $< 0.091$
\\[2pt]
& $\mu/10^{-8}$
& $2.00^{+ 0.14 } _{- 0.13 }$
& $1.59 ^{+ 0.54 } _{- 0.40 }$
& $-0.334^{+ 0.002} _{- 0.002}$
& $< 0.070$
\\[2pt]
& $\mu_1/10^{-8}$
& $3.81 ^{+ 0.22 } _{- 0.21 }$
& $3.39 ^{+ 0.58 } _{- 0.49 }$
& $-0.587 ^{+ 0.003} _{- 0.003 }$
& $\!\!\!< 0.12$
\\[2pt]
& $\mu_2/10^{-9}$
& $\!\!\!\!-1.19 ^{+ 0.22 } _{- 0.20}$
& $\!\!\!\!-2.79 ^{+ 2.05 } _{- 1.53 }$
& $-0.051 ^{+ 0.001} _{- 0.001 }$
& $< 0.046$
\\[1pt]
\hline
\hline
\label{tab:one}
\end{tabular}
\end{table*}
\subsection{Distortion principal component decomposition}
\label{sec:PCA}
The approximations given in the previous section are all based on simple analytical considerations. However, the CMB spectrum is given by a superposition of $\mu$, $y$ and $r$ distortions as well as the CMB monopole temperature. The situation is further complicated by the presence of large foregrounds. Also, when considering instrumental effects (number of channels, lowest and highest frequency, frequency resolution, etc.), different spectral shapes cannot be uniquely separated and project on to each other. In this case, a principal component analysis (PCA) helps to parametrize the expected distortion shapes with a small number of distortion parameters, ranked by their expected signal-to-noise ratio.
Considering an experimental setting similar to {\it PIXIE}, this decomposition was carried out previously \citep{Chluba2013PCA}, identifying new distortion parameters, $\mu_k$, to describe the $r$-type distortion.
Since a minimal distortion parameter estimation will include $\mu$, $y$ and $\Delta = \Delta T/T_0$, the $r$-type distortion is defined such that none of the $\mu_k$ is correlated with any of these.
For the technical details we refer to \citet{Chluba2013PCA}, but the primordial distortion signal in each frequency bin can then be decomposed as
\begin{subequations}
\begin{align}
\label{eq:DI_vec}
\Delta I_{i}&=\Delta I_{i}^T + \Delta I_{i}^\mu+\Delta I_{i}^y+\Delta I^R_{i}
\\[1mm]
\Delta I^R_{i}&\approx \sum_k S^{(k)}_{i} \,\mu_k,
\end{align}
\end{subequations}
where $\Delta I_{i}^T$, $\Delta I_{i}^\mu$ and $\Delta I_{i}^y$ correspond to the spectral shapes of a temperature shift, $\mu$- and $y$-distortions, respectively; $\Delta I^R_{i}$ describes the $r$-type distortion, where $\mu_k$ and $S^{(k)}_{i}$ are the amplitude and distortion signal of the $k^{\rm th}$ eigenmode, respectively (see Fig.~\ref{fig:Sk}).
The signal eigenmodes, $S^{(k)}_{i}$, are associated with a set of energy release eigenvectors, $\vek{E}^{(k)}$, in discretized redshift space, which are normalized as $\vek{E}^{(k)}\cdot \vek{E}^{(l)} =\delta_{kl}$. Any given energy-release history, $\mathcal{Q}(z)={\,\rm d} (Q/\rho_\gamma)/{\,\rm d} \ln z$, can then be written as
\begin{align}
\label{eq:eigenmodes}
\mathcal{\vek{Q}}&\approx \sum_k \vek{E}^{(k)} \,\mu_k,
\end{align}
where $\mathcal{\vek{Q}}=(\mathcal{Q}(z_0), \mathcal{Q}(z_1), ..., \mathcal{Q}(z_{n}))^{T}$ is the energy-release vector of $\mathcal{Q}(z)$ in different redshift bins.
By construction, the eigenvectors, $\vek{E}^{(k)}$, span an ortho-normal basis, while all $\vek{S}^{(k)}$ only define an orthogonal basis (generally $\vek{S}^{(k)}\cdot \vek{S}^{(l)} \geq \delta_{kl}$).
The mode amplitudes are then obtained as a simple scalar product, $\mu_k=\vek{E}^{(k)}\cdot \mathcal{\vek{Q}}$. The eigenmodes are ranked by their signal-to-noise ratio, such that modes with larger $k$ contribute less to the signal. In a similar way, we can write $\mu=1.401\,\vek{J}_\mu \cdot \mathcal{\vek{Q}}$, $4y=\vek{J}_y \cdot \mathcal{\vek{Q}}$ and $4\Delta T/T_0=\vek{J}_T \cdot \mathcal{\vek{Q}}$, where the $\vek{J}_i$ vectors play the role of effective visibilities, as in the integral versions, Eq.~\eqref{eq:Greens_approx_improved}. The first few energy release eigenmodes, $\vek{E}^{(k)}$, and visibilities, $\vek{J}_i$, are shown in Fig.~\ref{fig:Ek}. We note that again only heating at $z\geq 10^3$ is included in the estimates of the distortion parameters. This can underestimate the $y$-parameter at the $\simeq 10\%$ level (see Sect.~\ref{sec:results}).
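The projection algebra can be illustrated with a toy example. The orthonormal basis below is random (purely an assumption for illustration; the actual $\vek{E}^{(k)}$ are the signal-to-noise ranked eigenmodes discussed above):

```python
import numpy as np

# Toy illustration of the eigenmode projection mu_k = E^(k) . Q
rng = np.random.default_rng(42)
n_z, n_modes = 50, 6                      # redshift bins and number of modes kept

# Columns of E: a random orthonormal basis (stand-in for the ranked eigenmodes)
E, _ = np.linalg.qr(rng.standard_normal((n_z, n_modes)))
Q = rng.standard_normal(n_z)              # some discretized energy-release history Q(z_i)

mu_k  = E.T @ Q                           # mode amplitudes, mu_k = E^(k) . Q
Q_hat = E @ mu_k                          # truncated reconstruction of Q(z)

# The reconstruction residual is orthogonal to all retained modes
print(np.abs(E.T @ (Q - Q_hat)).max())
```

With all $n_z$ modes retained the reconstruction would be exact; truncating at a few modes keeps only the part of $\mathcal{Q}(z)$ that the distortion data can constrain.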
Although simple scattering physics suggests that distortion signals created by energy release at $z\lesssim \pot{{\rm few}}{5}$ should deviate from a pure $\mu$-distortion \citep{Hu1995PhD, Chluba2008c, Chluba2011therm, Khatri2012mix, Chluba2013Green}, a non-vanishing additional projection on to $\mu$ is found even at lower redshifts \citep{Chluba2013PCA}, as reflected by an increase of $\vek{J}_\mu$ around $z\simeq 10^5$ (see Fig.~\ref{fig:Ek}). This enhances the recovered value of $\mu$ when the parameter estimation problem for $\mu$, $y$ and $\Delta$ is solved (see below), an effect that is mainly due to the fact that the average CMB temperature has to be determined simultaneously.
One alternative approach that could mitigate this problem would be to determine the average CMB temperature using the integral properties of the $\mu$, $y$ and $r$-distortions. Since these are created by a scattering process, the number density of photons does not change. Thus, determining the effective temperature of the CMB by computing the total photon number density ($N_\gamma \propto T_{\gamma}^3$) from the measured spectrum, the distortions would not contribute under idealized assumptions. However, this procedure is not expected to work well for discretized versions of the spectra. It is furthermore complicated by the presence of foregrounds and the possibility of non-standard processes that can actually lead to non-trivial photon injection \citep{Chluba2015GreensII}. Finally, the $r$-distortion parameters would no longer remain uncorrelated, so that we do not explore this avenue any further.
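The photon-number argument can be verified explicitly for the $y$-distortion: the standard occupation-number change, $\Delta n(x)= y\,x\expf{x}(\expf{x}-1)^{-2}\,[x\coth(x/2)-4]$ with $x=h\nu/kT$, carries energy $\Delta\rho_\gamma/\rho_\gamma=4y$ but zero net photon number. A minimal numerical check:

```python
import numpy as np

x = np.linspace(1e-4, 60.0, 200001)       # x = h nu / k T
y = 1e-6                                   # arbitrary y-parameter

# Standard y-distortion change of the photon occupation number
dn = y * x * np.exp(x) / np.expm1(x)**2 * (x / np.tanh(x / 2) - 4)

def trapz(f, x):
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

dN   = trapz(x**2 * dn, x)                 # photon number change (vanishes for y)
drho = trapz(x**3 * dn, x)                 # energy change
rho  = np.pi**4 / 15                       # blackbody energy integral, int x^3/(e^x - 1) dx

print(dN, drho / (rho * y))                # ~0 and ~4: no photon injection, Delta rho/rho = 4y
```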
\subsection{Results from the different methods}
\label{sec:results}
We are now in the position to explicitly compute the $\mu$- and $y$-parameters for the different distortion scenarios discussed above. With the PCA, we are furthermore able to obtain the eigenmode amplitudes, $\mu_1$ and $\mu_2$. We stop at the second residual distortion eigenmode, since observing $\mu_2$ is already very futuristic for standard scenarios. We also mention that the values for $y$ are only used as a comparison, since the $y$-distortion from the low-redshift Universe is much larger in all cases.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./eps/M2_PCA.pdf}
\caption{Comparison of the posterior distributions for the dissipation scenario I (Table~\ref{tab:one}) obtained with method B (red contours) and the PCA (black contours). The vertical lines indicate the mean values. Method B predicts lower values for $\mu$ than the distortion eigenmode analysis.}
\label{fig:M2_PCA}
\end{figure}
In our estimates, we include the measurement uncertainties for the relevant $\Lambda$CDM parameters \citep{Planck2015params}. For the dissipation scenarios, these are mainly related to the power spectrum parameters, while for the adiabatic cooling distortion it is the baryon density (assuming standard BBN). The results are summarized in Table~\ref{tab:one}. For the dissipation scenarios, we obtained the error estimates by using the relevant covariance matrix for the {\it Planck} data, while the error for the adiabatic cooling effect was directly estimated using Gaussian error propagation.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./eps/DII_PCA.pdf}
\caption{Posterior distributions for the dissipation scenario II (Table~\ref{tab:one}) obtained with the PCA. We omitted $y$ as its posterior remains fairly Gaussian. The vertical lines indicate the mean values.}
\label{fig:DII_PCA}
\end{figure}
\begin{table}
\centering
\caption{Explicit projections of the full {\tt CosmoTherm} output using the distortion eigenmodes for {\it PIXIE}-like settings. The last column also gives the estimated $1\sigma$ errors for {\it PIXIE} in its current design \citep{Chluba2013PCA}, which degrade quickly for the $\mu_k$. In parentheses, we show estimates for the expected significance in terms of distortion measurements.}
\begin{tabular}{c cc c}
\hline
\hline
Parameter & Dissipation I & Adiabatic cooling & {\it PIXIE} $1\sigma$
\\
\hline
$ y/10^{-9}$ & $3.54 \;(\simeq 3.0\sigma)$ & $-0.623 \;(\simeq 0.5\sigma)$ & $1.20$
\\[1pt]
$ \mu/10^{-8}$ & $2.00\;(\simeq 1.5\sigma)$ & $-0.334\;(\simeq 0.2\sigma)$ & $1.37$
\\
$ \mu_1/10^{-8}$ & $3.82\;(\simeq 0.3\sigma)$ & $\,\,\,-0.588\;(\simeq 0.04\sigma)$ & $14.8$
\\[1pt]
$ \mu_2/10^{-9}$ & $\!\!\!\!-1.18\;(\simeq 0.0\sigma)$ & $-0.054\;(\simeq 0.0\sigma)$ & $761$
\\
\hline
\hline
\label{tab:two}
\end{tabular}
\end{table}
For the $y$-parameter estimates, methods A and B are equivalent and give results which are quite close to those of the PCA, which should be considered the most precise representation of what would be recovered in a distortion analysis. The methods C and D are also equivalent, but overestimate the $y$-parameter by $\simeq 5\%-10\%$ in comparison to the PCA. The recovered error bars for all methods are very comparable.
For the $\mu$-parameter, all methods are slightly different. The PCA always gives $20\%-30\%$ larger values. The best agreement with the PCA is achieved by methods A and C. Again all methods give very similar estimates for the expected errors.
In Fig.~\ref{fig:M2_PCA}, we highlight the differences in the predicted $y$ and $\mu$-parameters for the dissipation scenario I obtained with method B and the PCA\footnote{The figure was obtained using the Markov Chain Monte Carlo (MCMC) tool of the {\tt Greens} software package \citep{Chluba2013Green} available at {\tt www.Chluba.de/CosmoTherm}.}. The errors are dominated by uncertainties in the power spectrum parameters. The result for $y$ agrees quite well, while the result for $\mu$ is biased low by $\simeq 2.6\,\sigma$ with method B, a difference that needs to be taken into account when interpreting future distortion measurements. The result from method B is very close to what was recently discussed in \citet{Cabass2016} for $\mu$.
In Fig.~\ref{fig:DII_PCA}, we show the posterior for dissipation scenario II (see Table~\ref{tab:one}). This is no longer a $\Lambda$CDM case, since for the standard inflation model the running is negligible. However, it illustrates how well {\it Planck} data might constrain the running, and we will return to this case below when forecasting spectral distortion constraints.
While in the case without running the posteriors for the distortion parameters remained fairly Gaussian, when including running those for $\mu$, $\mu_1$ and $\mu_2$ become non-Gaussian. Since the {\it Planck} data prefer slightly negative running, implying less power at small scales, the median for $\mu$ decreases. Due to the long lever arm, small changes in $n_{\rm run}$ have a large effect on $\mu$ and $\mu_1$, so that their uncertainties increase significantly relative to the case without running (see Table~\ref{tab:one}). Conversely, this means that distortion measurements have constraining power in particular for $n_{\rm run}$ \citep[e.g.,][]{Chluba2012}. Again, the expected recovered value for $\mu$ is slightly larger than what \citet{Cabass2016} find, whose central value is closer to that of methods A-C. However, in this case the significance of the bias is only $\simeq 1\sigma$ due to the increased uncertainty in the prediction.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{./eps/Constraint_1P_3.4P.pdf}
\hspace{7mm}
\includegraphics[width=\columnwidth]{./eps/Constraint_1P_10P.pdf}
\caption{Posteriors for different combinations of data sets. In both panels, the red lines indicate the {\it Planck} TT,TE,EE+lowP+{\it PIXIE} ($\equiv$ basically like without {\it PIXIE}) constraints for the extended model with running. The black contours show the {\it Planck}+$3.4\times${\it PIXIE} (left panel) and {\it Planck}+$10\times${\it PIXIE} (right panel) constraints. Vertical lines indicate the fiducial values for each data set. Adding spectral distortions helps reduce the uncertainty in the value of $n_{\rm run}$ by a factor of $\simeq 3$ for {\it Planck}+$10\times${\it PIXIE}.}
\label{fig:SD_const}
\end{figure*}
\vspace{-3mm}
\subsection{Distortion parameters for full {\tt CosmoTherm} outputs}
\label{sec:COSMOTherm}
Explicitly projecting the full output of {\tt CosmoTherm} for the dissipation scenario I and the adiabatic cooling case, we find the values summarized in Table~\ref{tab:two}. The agreement with the Green's function projections that were used for the results presented in Table~\ref{tab:one} is excellent. The differences are mainly noticeable for the $y$-parameters, but even there they are $\lesssim 10\%$. The main reason for the difference is that with {\tt CosmoTherm} we include the adiabatic cooling effect all the way to redshift $z=200$, while in the Green's function approximation we stop at $z=10^3$. This gives another $y\approx -\pot{0.1}{-9}$ correction, which one can simply add by hand. It is possible to improve this in the code, but since this contribution is not distinguishable from the much larger $y$-distortion created at lower redshifts, we neglect it. We mention that we include part of the cosmology-dependence of the Green's function, but at low frequencies this is only approximate.
\begin{table*}
\centering
\caption{Improvement of constraints on the small-scale power spectrum by combining {\it Planck} with a {\it PIXIE}-like experiment for different channel sensitivities. For the spectral distortion parameters, we also show the effective significance of the signal with respect to the spectral distortion measurement. The distortion amplitude $\mu_2$ remained undetectable ($\lesssim 0.02\sigma$) with distortions alone and thus remains a {\it derived} parameter even for $10\times${\it PIXIE} sensitivity. In the last column we show the {\it Planck} $\Lambda$CDM values for comparison.}
\begin{tabular}{c cccc c}
\hline
\hline
Parameter & {\it Planck} alone & +{\it PIXIE} & +$3.4\times${\it PIXIE} & +$10\times${\it PIXIE} & {\it Planck} $\Lambda$CDM values
\\
\hline
$\ln(10^{10} A_{\rm s})$
& $3.103 ^{+ 0.036 } _{- 0.036 }$
& $3.103 ^{+ 0.037 } _{- 0.037 }$
& $3.101 ^{+ 0.037 } _{- 0.037 }$
& $3.100 ^{+ 0.036 } _{- 0.036 }$
& $3.094 ^{+ 0.034 } _{- 0.034 }$
\\[2pt]
$n_{\rm S}$
& $0.9639 ^{+ 0.0050 } _{- 0.0050 }$
& $0.9640 ^{+ 0.0050} _{- 0.0050 }$
& $0.9647 ^{+ 0.0049 } _{- 0.0048 }$
& $0.9653 ^{+ 0.0048 } _{- 0.0047 }$
& $0.9645 ^{+ 0.0049 } _{- 0.0049 }$
\\[2pt]
$10^3n_{\rm run}$
& $\!\!\!\!-5.7^{+ 7.1 } _{- 7.1 }$
& $\!\!\!\!-5.2 ^{+ 6.9 } _{- 7.2}$
& $\!\!\!\!-2.8 ^{+ 4.6 } _{- 5.1 } $
& $\!\!\!\!-0.81 ^{+ 2.4 } _{- 2.5 } $
& $0 $
\\
\hline
$ \mu/10^{-8}$
& $1.59 ^{+ 0.54 } _{- 0.40 }$
& $1.62 ^{+ 0.55 } _{- 0.42 }\;(1.2\sigma)$
& $1.81 ^{+ 0.36 } _{- 0.33 }\;(4.5\sigma)$
& $1.993 ^{+ 0.053 } _{- 0.053 }\;(15\sigma)$
& $2.00 ^{+ 0.14 } _{- 0.13 }$
\\[2pt]
$ \mu_1/10^{-8}$
& $3.39 ^{+ 0.58 } _{- 0.49 }$
& $3.43 ^{+ 0.58 } _{- 0.52 }\;(0.23\sigma)$
& $3.63 ^{+ 0.38 } _{- 0.38 }\;(0.83\sigma)$
& $3.819 ^{+ 0.044 } _{- 0.044 }\;(2.6\sigma)$
& $3.81 ^{+ 0.22 } _{- 0.20 }$
\\[2pt]
$ \mu_2/10^{-9}$
& $\!\!\!\!-2.79 ^{+ 2.05 } _{- 1.53 }$
& $\!\!\!\!-2.69 ^{+ 2.08 } _{- 1.61 }\;(0\sigma)$
& $\!\!\!\!-2.02 ^{+ 1.42 } _{- 1.31 }\;(0\sigma)$
& $\!\!\!\!-1.28 ^{+ 0.43 } _{- 0.43 }\;(0\sigma)$
& $\!\!\!\!-1.19 ^{+ 0.22 } _{- 0.20 }$
\\
\hline
\hline
\label{tab:SD_const}
\end{tabular}
\end{table*}
\vspace{-3mm}
\section{Forecast for {\it PIXIE}-like experiments}
\label{sec:forecasts}
We now discuss the prospects for detecting the different distortion signals.
First of all, the $y$-parameter contributions are all insignificant compared to the $y$-parameter caused by low-redshift processes (Sect.~\ref{sect:reion}). Although in terms of sensitivity, the contributions summarized in Table~\ref{tab:one} are significant at the level of a few $\sigma$ for {\it PIXIE} in its current design, these cannot be separated and are thus not discussed further \citep[see][]{Chluba2013fore, Chluba2013PCA}.
The $\mu$-distortion amplitude for the dissipation scenario I is detectable at the level of $\simeq 1.45\sigma$ \citep[see also][]{Chluba2013PCA}. This is a factor of $\simeq 3.4$ short of a clear $\simeq 5\sigma$ detection of the standard $\Lambda$CDM signal \citep[cf.,][]{Chluba2012}. One simple improvement in the sensitivity is achieved by halving the number of channels. For a Fourier Transform Spectrometer (FTS), such as in {\it PIXIE}, this improves the detection limits by a factor of $\simeq \sqrt{2}$, since about twice as much time can be spent on each sample while the total number of collected photons remains the same\footnote{The effective channel sensitivity improves by a factor of $2$, but half the number of channels are available to constrain the distortions so that overall only $\simeq \sqrt{2}$ is gained \citep[compare][]{Kogut2011PIXIE}.}. However, this also reduces the number of frequency channels in the CMB regime and thus degrades our ability to reject foregrounds, requiring a more detailed optimization. Similarly, changes in the distribution of channels could improve the performance of {\it PIXIE}, but also call for a more careful analysis of foreground effects. Another way to improve the sensitivity is by extending the mission duration or by changing the time spent on spectral distortions versus $B$-mode science \citep{Kogut2011PIXIE, Chluba2013fore}. Thus, factors of a few improvement in the raw spectral sensitivity seem to be within reach of current technology, although ultimate limitations by foregrounds need to be addressed carefully (Sect.~\ref{sec:foregrounds}).
One interesting question is at what effective sensitivity a spectral distortion measurement can confirm $\Lambda$CDM assuming the simplest slow-roll inflation model \citep[see][for review]{Baumann2009}, which predicts $n_{\rm run}\simeq -\frac{1}{2}(n_{\rm S}-1)^2 \simeq \pot{6}{-4}$ \citep{Starobinsky1980}. To answer this, we assume a fiducial model for the small-scale power spectrum as in dissipation scenario I, which gives $\mu=\pot{2.0}{-8}$. However, we then compute the posterior assuming {\it Planck} constraints for the case with running. Adding spectral distortions should then pull the best-fitting solution closer to $n_{\rm run}\simeq 0$. We also add a large $y$-distortion signal\footnote{The uncertainty in the value of $y$ from low redshifts removes about 2/3 of the constraining power of the dissipation signal. Without this large `foreground' one would achieve a factor of $\simeq 1.6$ improvement of the constraint on $n_{\rm run}$ with {\it PIXIE} in its default setup already.} $y=\pot{2}{-6}$ and a small shift in the CMB monopole temperature. Both parameters are estimated simultaneously with the power spectrum parameters using the MCMC tool of the {\tt Greens} software package.
In Fig.~\ref{fig:SD_const}, we illustrate this aspect for two channel sensitivities. Derived combined parameter constraints are summarized in Table~\ref{tab:SD_const}.
For the fiducial sensitivity of {\it PIXIE}, adding spectral distortions does not improve the constraints on power spectrum parameters significantly.
For $\simeq 3.4$ times higher channel sensitivity, we find an $\simeq 4.5\sigma$ detection of $\mu$ in terms of the uncertainty of the distortion measurement and a factor of $\simeq 1.4-1.5$ improvement of the error on $n_{\rm run}$, which starts to move towards the $\Lambda$CDM value. The uncertainties in the values of $A_{\rm s}$ and $n_{\rm S}$ are not improved much (see Fig.~\ref{fig:SD_const}).
At $10$ times the sensitivity of {\it PIXIE}, which is similar to the spectrometer of the {\it PRISM} concept \citep{PRISM2013WPII}, the constraints are further tightened by adding spectral distortions, reducing the error on $n_{\rm run}$ by a factor of $\simeq 3$ over {\it Planck} alone. We also find an $\simeq 15\sigma$ detection of the $\mu$-distortion parameter and a marginal $\simeq2.6\sigma$ detection of $\mu_1$ from distortions alone in this case. This sensitivity would furthermore make the CRR detectable at the level of a few $\sigma$ \citep{Vince2015}.
In the near future, ground-based observations (Stage-IV CMB) in combination with the upcoming spectroscopic galaxy surveys, eBOSS and DESI, aim at improving limits on $n_{\rm run}$ by factors of a few \citep{Abazajian2015inflation}. Another target is to obtain improved constraints on neutrino physics \citep{Abazajian2015}. As the $\mu$-distortion from the dissipation scenario is mostly sensitive to $n_{\rm run}$, adding spectral distortion measurements could help improve these limits by alleviating some existing degeneracies. However, enhanced versions of {\it PIXIE} are required in this case.
We highlight that the small-scale damping process is the dominant source of $\mu$-distortions in $\Lambda$CDM. This implies that any significant departure from the expected signal (Table~\ref{tab:two}) inevitably points towards new physics. If the signal is much lower, then the small-scale CMB power spectrum, around wavenumbers $k\simeq 740\,{\rm Mpc}^{-1}$, where most of the $\mu$-signal is created \citep[see][for illustration]{Razi2015}, either has a much lower amplitude \citep{Chluba2012, Chluba2012inflaton}, or an enhanced cooling process through the coupling of another non-relativistic particle to the CMB is required \citep[e.g.,][]{Yacine2015DM}. Conversely, if the $\mu$-distortion signal is much larger than expected, then the small-scale power spectrum could be strongly enhanced, possibly containing a localized feature \citep{Chluba2012inflaton, Chluba2015IJMPD}, or another heating mechanism \citep[e.g., a decaying particle][]{Hu1993b, Chluba2011therm, Chluba2013fore, Dimastrogiovanni2015} has to be at work. Thus, spectral distortions provide a powerful new avenue for testing $\Lambda$CDM cosmology without purely relying on an extrapolation from large ($k\lesssim 1\,{\rm Mpc}^{-1}$) to small scales ($1\,{\rm Mpc}^{-1}\lesssim k\lesssim \pot{\rm few}{4}\,{\rm Mpc}^{-1}$).
\vspace{-3mm}
\subsection{Importance of refined foreground modeling}
\label{sec:foregrounds}
It is clear that for the success of spectral distortion measurements, the name of the game will be foregrounds. The biggest challenge is that, aside from the large $y$-distortion introduced at late times, all known foregrounds are orders of magnitude larger than the primordial signals. This means that tiny effects related to the spectral and spatial variation of the foreground signals need to be taken into account. Ways to tackle this problem are i) to measure the spectrum in as many individual channels as possible, ideally with high angular resolution and sensitivity, and ii) to exploit synergies with other future or existing datasets to inform the modeling of averaged signals. In both cases, refined modeling of the foregrounds with extended parametrizations is required to capture the effects of averaging of spatially varying components across the sky.
An FTS concept like {\it PIXIE} pushes us into a qualitatively different regime in terms of its spectral capabilities, where instead of playing with a few channels we have a few hundred at our disposal. Most of these channels are at high frequencies ($\nu\gtrsim 1\,{\rm THz}$), above the CMB bands and can be used to subtract the dust and cosmic infrared background components at lower frequencies \citep{Kogut2011PIXIE}. Simple, commonly used two-temperature modified blackbody spectra \citep[e.g.,][]{Finkbeiner1999} will not provide sufficient freedom to capture all relevant properties of the averaged dust spectrum.
Existing maps from {\it Planck} \citep[e.g.,][]{Planck2013components, Planck2015components} can be used to estimate the effect of spatial variations of the dust temperature across the sky. Similar to the superposition of blackbodies with varying temperature (Sect.~\ref{sec:sup}), this will cause new spectral shapes because of the i) {\it beam average} and ii) {\it all-sky averaging}, which need to be captured by extended foreground parametrizations (Chluba et al., 2016, in preparation). The associated parameters have to be directly determined in the distortion measurement, as existing data are not expected to provide sufficient information down to the noise level of experiments like {\it PIXIE}, but can be used to guide the modeling and parametrization.
Similar comments apply to the modeling of the synchrotron and free-free components at low frequencies. Albeit spectrally quite smooth, the superposition of spatially varying power-law spectra, $I_\nu\simeq A_0 \, (\nu/\nu_0)^{\alpha}$ [where $I_\nu$ denotes the intensity], causes new spectral shapes that depend on {\it moments} of the underlying distribution functions. The common addition of curvature to the spectral index, $I_\nu\simeq A_0 \, (\nu/\nu_0)^{\alpha+\frac{1}{2}\beta \ln(\nu/\nu_0)}$ \citep[e.g., compare][]{Remazeilles2015}, is too restrictive. Extending the list of curvature parameters,
\begin{align}
\label{eq:power_sum_exp_power}
I^{\rm p}_{\nu}&\approx A_0 \, (\nu/\nu_0)^{\alpha+\frac{1}{2}\beta \ln(\nu/\nu_0)+\frac{1}{6}\gamma \ln^2(\nu/\nu_0)+\frac{1}{24}\delta \ln^3(\nu/\nu_0)+\ldots},
\end{align}
can be shown to capture all the new degrees of freedom for the superposition of power-law spectra, although this generalization does not have the best convergence properties (Chluba et al., 2016, in preparation). In Eq.~\eqref{eq:power_sum_exp_power}, the coefficients $\alpha$, $\beta$, $\gamma$ and $\delta$ are directly related to the aforementioned moments of the spectral index distribution functions.
These have to be determined with the distortion measurement, while external data sets \citep[e.g., C-BASS,][]{Irfan2015} can likely only be used to guide the modeling at the level of sensitivity targeted by future CMB spectroscopy.
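The statement that the coefficients in Eq.~\eqref{eq:power_sum_exp_power} trace moments of the spectral index distribution can be illustrated numerically. The following Python sketch (illustrative only, not from the paper; the Gaussian index distribution and the parameter values are assumptions) averages power-law spectra over a Gaussian distribution of indices and checks that the average is reproduced by a single curvature term, with $\beta$ equal to the variance of the index distribution:

```python
import numpy as np

# Averaging I_nu = (nu/nu0)^alpha over a Gaussian distribution of spectral
# indices (assumed mean abar and variance s2) yields exactly
# (nu/nu0)^(abar + 0.5*s2*ln(nu/nu0)), i.e. the curvature beta equals the
# index variance. All numbers below are illustrative choices.
nu0 = 1.0
nu = np.geomspace(0.1, 10.0, 50)            # frequencies in units of nu0
abar, s2 = -3.0, 0.04

# Gauss-Hermite quadrature for the average over the index distribution
x, w = np.polynomial.hermite.hermgauss(64)  # nodes/weights for exp(-x^2)
alpha = abar + np.sqrt(2.0 * s2) * x
I_avg = ((nu[:, None] / nu0) ** alpha[None, :] @ w) / np.sqrt(np.pi)

# moment-expansion prediction with a single curvature term beta = s2
L = np.log(nu / nu0)
I_mom = (nu / nu0) ** (abar + 0.5 * s2 * L)

assert np.max(np.abs(I_avg / I_mom - 1.0)) < 1e-8
```

For non-Gaussian index distributions the higher cumulants generate the $\gamma$ and $\delta$ terms of the expansion.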
Additional important foreground components are due to anomalous microwave emission \citep{Draine1998, Yacine2009, Planck2011AME}, various narrow molecular lines (e.g., CO, HCN, HCO, etc.), zodiacal light \citep{Planck2014zodi} and the integrated flux of CO emission throughout cosmic history \citep{Righi2008b, Mashian2016}, all of which will also affect future CMB imaging experiments. In all cases, one will have to make use of properties of the underlying physical mechanism to motivate refined foreground parametrizations.
The primordial distortion signals caused by energy release are all {\it unpolarized}, which is another important property to exploit. Systematic effects related to the absolute calibration could furthermore be tested using the motion-induced leakage of monopole signals into the CMB dipole \citep{Balashev2015}. It was also pointed out that the relativistic correction signal (see Sect.~\ref{sect:reion}) could cause significant confusion for the $r$-distortion signals \citep{Hill2015}, which has to be modeled more carefully. All the above challenges need to be addressed to demonstrate the full potential and feasibility of future spectroscopic CMB experiments, a problem we are currently investigating.
For additional recent discussion of the foreground separation problem for the CMB monopole see \citet{Mayuri2015}, \citet{Vince2015} and \citet{deZotti2015}.
\vspace{-3mm}
\section{Conclusions}
Within $\Lambda$CDM, a range of guaranteed distortion signals is expected. Here, we summarized these distortions and also provided an assessment of the expected uncertainties in their prediction (Sect.~\ref{sec:distortions_LCDM} and Fig.~\ref{fig:signals}). The largest signal is due to the formation of structure and the reionization process at late times (Sect.~\ref{sect:reion}). The signal is characterized by a $y$-type spectrum, with a $y$-parameter reaching $\simeq \pot{\rm few}{-6}$. Although the uncertainty in the amplitude of this contribution is rather large, this signal will be detectable with a {\it PIXIE}-like experiment at very high significance, even allowing us to determine the small relativistic temperature correction caused by abundant low-mass haloes with temperature $T_{\rm e} \simeq 10^6\,{\rm K}$ \citep[see][]{Hill2015}. This promises a way to solve the missing baryon problem and constrain the first sources of reionization.
The largest primordial signal expected within $\Lambda$CDM is due to the damping of small-scale acoustic modes (Sect.~\ref{sec:damp}). The $y$-distortion contribution is swamped by the low-redshift signal, but the $\mu$-distortion should be detectable with a slightly improved version of {\it PIXIE}. Using {\it Planck} data, we find $\mu\approx\pot{(2.00\pm0.14)}{-8}$ for the standard $\Lambda$CDM cosmology, where the uncertainty is dominated by those of the power spectrum parameters. This value accounts for the fact that temperature shift, $\mu$, $y$ and $r$-distortions are not uniquely separable, so that these parameters have to be determined simultaneously. As our comparison of different approximation methods shows (Sect.~\ref{sec:estimates}), simple estimates for $\mu$, based solely on scattering physics arguments, are expected to underestimate the recovered/measured value obtained with future distortion experiments by $\simeq 20\%-30\%$ (see Table~\ref{tab:one}).
Our simple forecasts (Sect.~\ref{sec:forecasts}) show that by combining CMB spectral distortion measurements with existing {\it Planck} data, one can achieve an $\simeq 40\%-50\%$ improvement of the error on $n_{\rm run}$ for $\simeq 3.4$ times the sensitivity of {\it PIXIE}. At this sensitivity, also an $\simeq 5\sigma$ detection of $\mu$ from distortion measurements alone can be expected. About $\simeq 10$ times the sensitivity of {\it PIXIE} is required for a marginal $\simeq2.6\sigma$ detection of the first $r$-distortion parameter, $\mu_1\approx \pot{(3.82 \pm 0.22)}{-8}$, assuming $\Lambda$CDM. This sensitivity would furthermore render the CRR detectable at the level of a few $\sigma$ \citep{Vince2015} and deliver a factor of $\simeq 3$ improvement of the error on $n_{\rm run}$ (Fig.~\ref{fig:SD_const} and Table~\ref{tab:SD_const}).
A combination of CMB spectral distortion measurements with upcoming ground-based experiments (e.g., Stage-IV CMB) could help to further tighten constraints on power spectrum parameters and neutrino physics; however, enhanced versions of {\it PIXIE} are required to achieve this.
In Sect.~\ref{sec:foregrounds}, we gave a brief discussion of the spectral distortion foreground challenge. Simple, physically-motivated extensions of the foreground parametrizations are required to capture the effect of averaging of spatially varying spectral components inside the instrumental beam and across the sky. While existing data can be used to inform the underlying models, at the high sensitivity targeted by future spectroscopic missions, these foreground parameters ultimately have to be determined in the measurements. FTS concepts provide many hundreds of channels, which should allow us to extend the parameter list of refined foreground models from a few to tens, promising a path towards detailed spectral distortion measurements that we will investigate in the future.
We close by mentioning that in spite of all the successes of $\Lambda$CDM, there are open puzzles, such as the nature of dark matter and dark energy, to name the obvious ones. Spectral distortions are sensitive to {\it new physics} and any departure from the expected $\Lambda$CDM predictions for the high-$z$ signals will inevitably point in this direction. For example, if at early times dark matter coupled to baryons or photons, then this will leave an effect on the CMB spectrum \citep[e.g.,][]{Yacine2015DM}, potentially diminishing the net distortion.
Also, significantly higher or significantly lower power at small scales, responsible for the $\mu$-distortion signal ($k\simeq 740\,{\rm Mpc}^{-1}$), is necessarily related to departures from simple slow-roll inflation \citep{Chluba2012inflaton, Chluba2013PCA, Clesse2014, Chluba2015IJMPD, Cabass2016}. The presence of unaccounted-for relics (e.g., gravitinos or moduli) or excited states of dark matter, decaying early enough to leave the CMB anisotropies unaffected, could furthermore play a role \citep{Hu1993b, Chluba2011therm, Chluba2013fore, Chluba2013PCA, Dimastrogiovanni2015}.
This illustrates only a few of the interesting new directions that CMB spectral distortion measurements could shed light on, and the big challenge will be to disentangle different effects to allow us to draw clear conclusions. We can only look forward to the advent of real distortion data in the future.
\small
\section*{Acknowledgements}
JC cordially thanks Nick Battaglia, Giovanni Cabass, Colin Hill, Alessandro Melchiorri and Enrico Pajer for valuable feedback on the manuscript, and Steven Gratton for helpful advice on {\it Planck} data.
JC is supported by the Royal Society as a Royal Society University Research Fellow at the University of Manchester, UK.
\small
\bibliographystyle{mn2e}
\section{Introduction}
Generalized matrix functions (GMFs) are an extension of the notion of matrix functions based on the singular value decomposition (SVD) instead of the spectral decomposition. They were introduced for the first time in \cite{hawkins1973generalized}, with the purpose of extending the definition of matrix functions to rectangular matrices. Although the introduction of GMFs dates back to the 1970s \cite{hawkins1973generalized}, they have become a more active area of research only in recent years. For instance, theoretical aspects of generalized matrix functions have been investigated in \cite{ACP16, BenziHuang19, Noferini17}, while efficient numerical methods have been developed in \cite{arrigo2016computation, aurentz2019stable}. For applications of generalized matrix functions, we direct the reader to~\cite{ACP16, arrigo2016computation, aurentz2019stable} and the references therein.
In many applications that involve standard matrix functions, it is only required to compute matrix-vector products of the form $f(A) \vec b$, where the matrix $A$ is usually large and sparse. In this case, the (expensive) computation of the whole matrix $f(A)$ can be bypassed by using methods that directly approximate the product $f(A) \vec b$, such as Krylov methods. These methods only require matrix-vector products and possibly the solution of shifted linear systems with the matrix $A$.
A similar situation arises when GMFs are involved: indeed, it is often required to compute the action of a generalized matrix function on a vector \cite{ArrigoBenzi16Edge, arrigo2016computation}, and hence it is preferable to use a method that avoids the computation of the whole GMF by means of an SVD.
This problem was recently investigated in \cite{arrigo2016computation, aurentz2019stable}, using methods based on the Golub-Kahan bidiagonalization in \cite{arrigo2016computation}, and Chebyshev polynomial interpolation in \cite{aurentz2019stable}.
In this paper, we propose a generalization of the method proposed in \cite{arrigo2016computation}, using the interpretation of the Golub-Kahan bidiagonalization in terms of Krylov subspaces.
Performing $k$ steps of the Golub-Kahan bidiagonalization of a matrix $A$ with starting vector $\vec b$ is equivalent to the simultaneous computation of orthonormal bases of the polynomial Krylov subspaces $\kryl_k(A^TA, \vec b)$ and $\kryl_k(AA^T, A \vec b)$. By replacing the polynomial Krylov subspaces with their rational counterparts, we obtain a rational Krylov method for the computation of the action of a GMF on a vector.
As can be expected by analogy with standard matrix functions, in the case of non-analytic functions and functions of low regularity these rational methods converge faster than the method based on the Golub-Kahan bidiagonalization. However, their increased effectiveness comes at the cost of having to solve a linear system at each iteration, while the methods discussed in \cite{arrigo2016computation, aurentz2019stable} only require matrix-vector products.
The Golub-Kahan bidiagonalization of a matrix can be computed with a short recurrence, which relies on the fact that the projected matrix is bidiagonal. This structure is unfortunately not preserved in the rational case that we consider here. However, we are still able to construct a short recurrence to update the rational Krylov bases and the projected matrix, using the fact that the projected matrix is a quasiseparable matrix.
We also prove error bounds that link the error of the method from \cite{arrigo2016computation} based on the Golub-Kahan bidiagonalization and the rational methods introduced in this paper with, respectively, the error of uniform polynomial and rational approximation of the function $f$. These bounds are a direct generalization of the bounds for standard matrix functions, and they can be proved with the same techniques. Although the connection of GMFs to standard matrix functions is well-known, to the best of our knowledge these error bounds for the approximation of GMFs have never appeared in previous literature.
The paper is organized as follows. In Section~\ref{sec:notation} we introduce the notation used throughout the paper.
In Section~\ref{sec:gen-mat-fun} we recall the definition of standard matrix functions and GMFs and we present some of their properties. In Section~\ref{sec-rat-kryl-methods} we briefly introduce the class of rational Krylov methods for standard matrix functions. The use of polynomial and rational Krylov methods in the context of generalized matrix functions is discussed in Section~\ref{sec:krylov-gmf}. Section~\ref{sec:error-bounds} is dedicated to the proof of the error bounds and related discussion. Some numerical experiments to compare the different methods and illustrate the error bounds are presented in Section~\ref{sec:numerical}, and Section~\ref{sec:conclusions} contains concluding remarks.
\section{Notation}
\label{sec:notation}
We denote by $\mathbb{R}^{m \times n}$ the space of $m \times n$ real matrices. We use bold letters for vectors, e.g.~$\vec v \in \mathbb{R}^n$. The entries of vector $\vec v$ are given by $v_1, \dots, v_n$, and the entries of matrix $A \in \mathbb{R}^{m \times n}$ are $a_{ij}$. We also use a MATLAB-like notation: $\diag(d_1, \dots, d_n)$ represents an $n \times n$ diagonal matrix with entries $d_1, \dots, d_n$ on the diagonal; for $i \le j$ and $h \le k$, we denote by $A(i:j, h:k)$ the submatrix of $A$ corresponding to row indices from $i$ to $j$ and column indices from $h$ to $k$.
We denote by $\triu(A)$ the upper triangular part of matrix $A$, and more generally by $\triu(A, k)$ the matrix with all zeroes below the $k$-th diagonal whose other entries coincide with those of $A$. Diagonals above the main diagonal are represented with a positive index, so that $\triu(A, 1)$ indicates the strictly upper triangular part of $A$. Similarly, $\tril(A)$ and $\tril(A,-1)$ denote the lower triangular and strictly lower triangular part of $A$, respectively. We use the same notation also for rectangular matrices.
We denote by $A^+$ the Moore-Penrose pseudoinverse of a matrix $A$.
\section{Matrix functions}
\label{sec:gen-mat-fun}
The goal of this section is to define generalized matrix functions (GMFs) and introduce their main properties. We begin by recalling some basic concepts about standard matrix functions, and then we introduce GMFs and some of their properties.
\subsection{Standard matrix functions}
The concept of matrix function is a natural way to generalize the evaluation of a function on a square matrix. For simplicity, we treat only the case of diagonalizable matrices. The general definitions and a thorough description of matrix functions can be found in the monograph~\cite{higham2008functions}.
Let $A$ be an $n\times n$ matrix. Assume that $A$ is diagonalizable, i.e.~$A=VDV^{-1}$, where $D=\diag(d_1,\dots,d_n)$. Given a function $f$ defined on the set $\{d_1,\dots,d_n\}$, the matrix function of $f$ applied on $A$ is defined as
$$ f(A)=Vf(D)V^{-1},$$
where $f(D)=\diag(f(d_1),\dots,f(d_n))$.
Equivalently, if $p(x)=\sum_{i=0}^{n-1}p_ix^i$ is a polynomial that interpolates $f$ in $d_1,\dots,d_n$, the matrix function can be defined as
$$f(A)=p(A)=\sum_{i=0}^{n-1}p_iA^i.$$
The computation of a matrix function using the first of the two definitions requires knowledge of the eigenvalues of $A$. However, if $n$ is large, finding the eigenvalues of $A$ can be unfeasible.
A widely used technique to obtain a good approximation of $f(A)$ is to find a low-degree polynomial $q$ that approximates the interpolant polynomial $p$, and to approximate $f(A)$ with $q(A)$.
Polynomial Krylov methods, presented in Section \ref{sec-rat-kryl-methods}, use a similar strategy to approximate $f(A) \vec b$, by using $A$ and $\vec b$ to construct the polynomial $q$.
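The eigendecomposition definition above can be sketched in a few lines of Python (illustrative only; the symmetric test matrix and the choice $f = \exp$ are assumptions made for the example, with SciPy's matrix exponential as an independent reference):

```python
import numpy as np
from scipy.linalg import expm

# f(A) = V f(D) V^{-1} for a diagonalizable matrix; a symmetric A is used
# here so that A = V D V^T with V orthogonal.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                     # symmetric test matrix (assumption)
d, V = np.linalg.eigh(A)              # A = V diag(d) V^T
fA = (V * np.exp(d)) @ V.T            # V f(D) V^{-1} with f = exp
assert np.allclose(fA, expm(A))       # matches the matrix exponential
```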
\subsection {Generalized matrix functions}
Generalized matrix functions were first introduced in \cite{hawkins1973generalized}, with the purpose of extending the definition of matrix functions to rectangular matrices.
They are defined similarly to standard matrix functions, but using the singular value decomposition instead of the diagonalization.
Let $A\in\mathbb{R}^{m\times n}$ and let $A=U\Sigma V^T$ be its SVD, where $U\in\mathbb{R}^{m\times m}$ and $V\in \mathbb{R}^{n\times n}$ are orthogonal and $\Sigma \in \mathbb{R}^{m\times n}$ is defined as
\begin{equation*}
\Sigma_{i,j}=
\begin{cases*}
\sigma_i & if $i=j\le r$\\
0 &otherwise,
\end{cases*}
\end{equation*}
where $r\le \min\{m,n\}$ is the rank of $A$ and $\sigma_1\ge\sigma_2\ge\dots\ge\sigma_r>0$ are the nonzero singular values of $A$.
Given a function $f$ defined on the set $\{\sigma_1,\dots,\sigma_r\}$, the generalized matrix function of $f$ applied on $A$ is defined as
\begin{equation*}
f^{\diamond}(A)=Uf^{\diamond}(\Sigma) V^T,
\end{equation*}
where
\begin{equation*}
f^\diamond(\Sigma)_{i,j}=
\begin{cases*}
f(\sigma_i) & if $i=j\le r$\\
0 &otherwise.
\end{cases*}
\end{equation*}
Observe that a GMF can be expressed in terms of the compact SVD of the matrix $A$, that is $A=U_r\Sigma_rV_r^T$, where $U_r\in\mathbb{R}^{m\times r}$ and $V_r\in \mathbb{R}^{n\times r}$ have orthonormal columns, and $\Sigma_r=\diag(\sigma_1,\dots,\sigma_r)\in\mathbb{R}^{r\times r}$. In this case, we have
\begin{equation*}
f^{\diamond}(A)=U_rf(\Sigma_r)V_r^T.
\end{equation*}
Since the definition of a GMF only depends on the values of $f$ on the nonzero singular values of $A$, we can always assume that $f$ is an odd function, and in particular that $f(0) = 0$.
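The compact-SVD formula translates directly into code. The sketch below (illustrative only; the rank tolerance is an assumption for the numerical setting) also checks the identity $A^+ = h^\diamond(A)^T$ for $h(z) = z^{-1}$, used later in this section:

```python
import numpy as np

def gmf(A, f, tol=1e-12):
    # f^\diamond(A) = U_r f(Sigma_r) V_r^T via the compact SVD;
    # tol is an assumed relative threshold for the numerical rank.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return (U[:, :r] * f(s[:r])) @ Vt[:r, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))               # generically full column rank
assert np.allclose(gmf(A, lambda t: t), A)    # f(z) = z recovers A
# h(z) = 1/z gives the Moore-Penrose pseudoinverse: A^+ = h^\diamond(A)^T
assert np.allclose(gmf(A, lambda t: 1.0 / t).T, np.linalg.pinv(A))
```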
\begin{remark}\label{remark-poly-gmf}\cite[Theorem~2.1]{aurentz2019stable}
If $p$ is a polynomial that interpolates $f$ in $\sigma_1,\dots,\sigma_r$ it holds that $f^{\diamond}(A)=p^{\diamond}(A)$. Moreover, since $\sigma_i>0$ for $i=1,\dots,r$, we can always take $p$ as an odd polynomial, i.e.~$p(z) = q(z^2)z$ for some polynomial $q$.
\end{remark}
Next, we list some properties of GMFs that will be required in the following sections. A discussion of additional properties of generalized matrix functions can be found in~\cite{arrigo2016computation}.
\begin{lemma}\label{lemma:spd-equivalence}
Let $S\in\mathbb{R}^{n\times n}$ be a symmetric matrix and let $f$ be defined on the singular values of $S$. If $S$ is positive definite, or $S$ is positive semidefinite and $f(0)=0$, it holds
\begin{equation}
f^\diamond(S)=f(S).
\end{equation}
\end{lemma}
\begin{proposition}
\label{prop:gmf-polynomial-evaluation}
Let $p$ be an odd polynomial, i.e. we can write $p(z) = q(z^2) z$ for some polynomial $q$. Then, for any matrix $A \in \mathbb{R}^{m \times n}$ it holds
\begin{equation*}
p^\diamond(A) = q(A A^T) A = A q(A^T A).
\end{equation*}
\end{proposition}
For the proof of Proposition~\ref{prop:gmf-polynomial-evaluation} we refer to \cite[Section~2.2]{aurentz2019stable}.
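Proposition~\ref{prop:gmf-polynomial-evaluation} is easy to verify numerically for a specific odd polynomial; the sketch below (illustrative, with a random rectangular test matrix) uses $p(z) = z^3$, i.e.~$q(y) = y$:

```python
import numpy as np

# p^\diamond(A) = q(A A^T) A = A q(A^T A) for the odd polynomial p(z) = z^3.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
p_gmf = (U * s**3) @ Vt                   # U diag(sigma^3) V^T
assert np.allclose(p_gmf, A @ A.T @ A)    # q(A A^T) A with q(y) = y
assert np.allclose(p_gmf, A @ (A.T @ A))  # A q(A^T A)
```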
\begin{corollary}\label{cor:gmf-rational-evaluation}
Let $r$ be a rational function with an odd numerator and an even denominator, i.e., $r(z) = \dfrac{p(z^2)}{q(z^2)} z$ for some polynomials $p$ and $q$, such that the singular values of $A \in \mathbb{R}^{m \times n}$ and $0$ are not roots of $q$. Then it holds
\begin{equation*}
r^\diamond(A) = q(A A^T)^{-1} p(A A^T) A = A q(A^T A)^{-1} p(A^T A).
\end{equation*}
\end{corollary}
\begin{proof}
The proof is practically the same as the proof of Proposition~\ref{prop:gmf-polynomial-evaluation}. We report it for completeness.
Let $A = U \Sigma V^T$ be an SVD of $A$. Then we have $AA^T = U \Sigma^2 U^T$, and more generally $s(AA^T) = U s(\Sigma^2) U^T$ for any polynomial $s$.
Since $0$ is not a root of $q$, we have $r(0) = 0$ and hence by Lemma~\ref{lemma:spd-equivalence} we have
\begin{equation*}
r^\diamond(A) = U r^\diamond(\Sigma) V^T = U r(\Sigma) V^T = U q(\Sigma^2)^{-1}p(\Sigma^2) \Sigma V^T.
\end{equation*}
Note that $q(\Sigma^2)^{-1}$ is well defined because the singular values of $A$ are not roots of $q$.
Now, multiplying by $U^TU = I$ we obtain
\begin{equation*}
r^\diamond(A) = U q(\Sigma^2)^{-1} U ^T U p(\Sigma^2) U^T U \Sigma V^T = q(AA^T)^{-1} p(AA^T) A.
\end{equation*}
The second identity can be obtained in a similar way.
\end{proof}
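Corollary~\ref{cor:gmf-rational-evaluation} can likewise be checked numerically; the sketch below (illustrative, with a random test matrix) uses $r(z) = z/(1+z^2)$, i.e.~$p(y) = 1$ and $q(y) = 1 + y$:

```python
import numpy as np

# r^\diamond(A) = q(A A^T)^{-1} p(A A^T) A = (I + A A^T)^{-1} A
# for r(z) = z / (1 + z^2).
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r_gmf = (U * (s / (1.0 + s**2))) @ Vt
assert np.allclose(r_gmf, np.linalg.solve(np.eye(5) + A @ A.T, A))
```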
The following proposition gives a formulation of the above results for general functions. See also~\cite[Theorem~10]{arrigo2016computation} for an alternative formulation of the same identity.
\begin{proposition}
\label{prop:gmf-general-equivalence}
Let $A\in\mathbb{R}^{m\times n}$ and let $f$ be a function defined on the nonzero singular values of $A$. Defining $g(z)=\frac{f(\sqrt{z})}{\sqrt{z}}$, for $z \ne 0$, it holds
\begin{equation*}
f^\diamond(A) = g^{\diamond}(A A^T) A = A g^{\diamond}(A^T A).
\end{equation*}
Moreover, if $\displaystyle\lim_{z \to 0} \tfrac{f(z)}{z} = 0$, we can define $g(0) = 0$. Then $g(AA^T)$ and $g(A^TA)$ are well defined, and we have
\begin{equation*}
f^\diamond(A) = g(A A^T) A = A g(A^T A).
\end{equation*}
\end{proposition}
\begin{proof}
Let $p(z)=q(z^2) z$ be an odd polynomial that interpolates $f$ in the nonzero singular values of $A$. From Proposition \ref{prop:gmf-polynomial-evaluation} and Remark \ref{remark-poly-gmf} we have
\begin{equation*}
f^{\diamond}(A)=p^{\diamond}(A)=q(A A^T) A = A q(A^T A).
\end{equation*}
Since $p$ interpolates $f$ in the nonzero singular values of $A$, $q$ interpolates $g$ in the squares of the nonzero singular values of $A$, which are the nonzero singular values of $AA^T$ and $A^TA$. Hence $q^\diamond(AA^T)=g^{\diamond}(AA^T)$ and $q^\diamond(A^TA)=g^{\diamond}(A^TA)$.
If $g(0) = 0$, by Lemma~\ref{lemma:spd-equivalence} we also have $g^\diamond(A^TA)=g(A^TA)$ and $g^\diamond(AA^T)=g(AA^T)$.
\end{proof}
\begin{remark}
If $AA^T$ is positive definite, by Lemma~\ref{lemma:spd-equivalence} it holds $f^\diamond(A) = g(AA^T) A$ without the assumption $\displaystyle\lim_{z \to 0} \tfrac{f(z)}{z} = 0$, and similarly if $A^TA$ is positive definite we have $f^\diamond(A) = Ag(A^TA)$. Note that we could also artificially define $g(0) = 0$ without any assumptions on $f$, since the definition of a GMF only depends on the nonzero singular values of the matrix. However, this would cause $g$ to be discontinuous at $0$ and it would not be very useful in practice.
\end{remark}
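Proposition~\ref{prop:gmf-general-equivalence} can be illustrated numerically; the sketch below (illustrative) uses $f = \sin$ and a random full-row-rank $A$, so that $AA^T$ is positive definite and $g(z) = \sin(\sqrt{z})/\sqrt{z}$ is evaluated as a standard matrix function of $AA^T$:

```python
import numpy as np

# f^\diamond(A) = g(A A^T) A with g(z) = f(sqrt(z))/sqrt(z), here f = sin.
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 6))              # full row rank => A A^T is SPD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
f_gmf = (U * np.sin(s)) @ Vt                 # f^\diamond(A)

w, Q = np.linalg.eigh(A @ A.T)               # A A^T = Q diag(w) Q^T
g = np.sin(np.sqrt(w)) / np.sqrt(w)          # g on the eigenvalues
assert np.allclose(f_gmf, (Q * g) @ Q.T @ A) # g(A A^T) A
```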
The following proposition links $f^\diamond(A)$ with $f^\diamond(A^T)$, which will be useful when $A$ is rectangular. We recall that $A^+$ denotes the Moore-Penrose pseudoinverse of $A$, which can be defined as $A^+ = h^\diamond(A^T) = h^\diamond(A)^T$ for $h(z) = z^{-1}$. The more general statement in Proposition~\ref{prop:link-gmf-a-and-gmf-at} can be seen as a generalization of~\cite[Proposition~7(iv)]{arrigo2016computation} and~\cite[Theorem~4(d)]{hawkins1973generalized}.
\begin{proposition}
\label{prop:link-gmf-a-and-gmf-at}
Let $A \in \mathbb{R}^{m \times n}$ and let $f$ be a function defined on the nonzero singular values of $A$. Then it holds
\begin{equation*}
f^\diamond(A) = (A^+)^T f^\diamond(A^T)A.
\end{equation*}
More generally, assume that $f(z) = g(z) h(z) k(z)$, where $g, h, k$ are functions defined on the nonzero singular values of $A$. Then it holds
\begin{equation*}
f^\diamond(A) = g^\diamond(A) h^\diamond(A^T) k^\diamond(A).
\end{equation*}
\end{proposition}
\begin{proof}
We directly prove the generalized version, since the first statement simply follows by taking $g(z) = z^{-1}$, $h(z) = f(z)$ and $k(z) = z$.
Let $A$ have the singular value decomposition $A = U \Sigma V^T$, where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal, and $\Sigma \in \mathbb{R}^{m \times n}$ has the singular values of $A$ on the main diagonal. We have:
\begin{align*}
g^\diamond(A) h^\diamond(A^T) k^\diamond(A) & = U g^\diamond(\Sigma) V^TV h^\diamond(\Sigma^T) U^TU k^\diamond(\Sigma) V^T \\
& = U g^\diamond(\Sigma) h^\diamond(\Sigma)^T k^\diamond(\Sigma) V^T \\
& = U f^\diamond(\Sigma) V^T = f^\diamond(A),
\end{align*}
where we used that $g^\diamond(\Sigma) h^\diamond(\Sigma)^T k^\diamond(\Sigma) = f^\diamond(\Sigma)$, which can be verified directly.
\end{proof}
\begin{remark}
The same proof of Proposition~\ref{prop:link-gmf-a-and-gmf-at} can be used to show that, if $f(z) g(z) = h(z) k(z)$, then it holds
\begin{equation*}
f^\diamond(A^T) g^\diamond(A) = h^\diamond(A^T) k^\diamond(A).
\end{equation*}
In particular we have $A^T f^\diamond(A) = f^\diamond(A^T) A$.
\end{remark}
\section{Rational Krylov methods} \label{sec-rat-kryl-methods}
The class of Krylov methods provides an efficient way to compute approximations to expressions of the form $f(A) \vec b$. The main idea behind these methods is to construct a low dimensional subspace $\mathcal{S}_k \subset \mathbb{R}^n$ for some integer $k \ll n$ using information from $A$ and $\vec b$, and then to approximate $f(A)\vec b$ with an appropriate vector from $\mathcal{S}_k$.
A popular choice for the approximation subspace $\mathcal{S}_k$ is the \emph{polynomial Krylov subspace}
\begin{equation*}
\kryl_k(A,\vec b) = \vspan\{ \vec b, A \vec b, \dots, A^{k-1} \vec b\} = \{ p_{k-1}(A) \vec b : p \in \poly_{k-1}\},
\end{equation*}
where $\poly_{k-1}$ denotes the set of polynomials of degree $\le k-1$.
More generally, using a sequence of \emph{poles} $\{ \xi_k \}_{k \ge 1} \subseteq (\mathbb{C}\cup \{\infty\}) \setminus {\sigma(A)}$, one can define the \emph{rational Krylov subspace}
\begin{equation*}
\rat_k(A,\vec b) = q_{k-1}(A)^{-1} \kryl_k(A, \vec b) = \Big\{ r_{k-1}(A) \vec b : r_{k-1}(z) = \frac{p_{k-1}(z)}{q_{k-1}(z)}, \text{with } p_{k-1} \in \poly_{k-1}\Big\},
\end{equation*}
where $q_{k-1}(z) = \displaystyle\prod_{j = 1}^{k-1}(1 - z/\xi_j)$. In the case when all poles are located at $\infty$, we have $q_{k-1}(z) \equiv 1$ and hence we recover the polynomial Krylov subspace $\kryl_k(A, \vec b)$. It is easy to verify that the Krylov subspaces $\rat_k(A, \vec b)$ form a nested sequence, and that $\dim \rat_k(A, \vec b) = k$ as long as $k$ is smaller than the \emph{invariance index} $K$ of the sequence, i.e. the smallest integer such that $\rat_K(A, \vec b) = \rat_{K+1}(A, \vec b)$ (or, equivalently, $\kryl_K(A, \vec b) = \kryl_{K+1}(A, \vec b)$).
For $k \le K$, an orthonormal basis $\{ \vec v_1, \dots, \vec v_k \}$ of $\rat_k(A, \vec b)$ can be computed with the rational Arnoldi algorithm, introduced by Ruhe in \cite{Ruhe94}.
In the basic algorithm, the first basis vector is chosen as $\vec v_1 = \vec b/\norm{\vec b}_2$.
Then, given a set of vectors $\{ \vec v_1, \dots, \vec v_j \}$ which form an orthonormal basis of $\rat_j(A, \vec b)$, the next basis vector $\vec v_{j+1}$ is computed by orthonormalizing the vector $(I - \frac1{\xi_{j+1}}A)^{-1} A \vec v_j$ against the previously computed basis vectors.
To prevent the algorithm from failing, it is required that $(I - \frac1{\xi_{j+1}}A)^{-1} A \vec v_j \in \rat_{j+1}(A, \vec b) \setminus \rat_j(A, \vec b)$; this property is almost always satisfied in practice, but there are no theoretical guarantees that it holds.
An approach for finding a vector $\vec w_j$ that guarantees $(I - \frac1{\xi_{j+1}}A)^{-1} \vec w_j \in \rat_{j+1}(A, \vec b) \setminus \rat_j(A, \vec b)$ and is (near-)optimal in a certain sense was recently discussed in \cite{BerljafaGuettel17}, using the notion of \emph{continuation pairs} $(\eta_j/\rho_j, \vec t_j)$: in general, such a vector is of the form $\vec w_j = (\rho_j A - \eta_j I) V_j \vec t_j$, where $V_j = [\vec v_1 \dots \vec v_j] \in \mathbb{C}^{n \times j}$.
Using the matrix with orthonormal columns $V_k = [\vec v_1 \dots \vec v_k ]\in \mathbb{C}^{n \times k}$, we can compute the following approximation to $f(A) \vec b$ from the subspace $\rat_k(A, \vec b)$:
\begin{equation}
\label{eqn:rational-krylov-approximation}
\barvec y_k = V_k f(V_k^*AV_k) V_k^* \vec b = \norm{\vec b}_2 V_k f(A_k) \vec e_1,
\end{equation}
where $A_k = V_k^* A V_k$ is the projection of $A$ onto the subspace $\rat_k(A, \vec b)$, and $\vec e_1$ denotes the first vector of the canonical basis of $\mathbb{C}^k$. The accuracy of the approximation $\barvec y_k$ depends strongly on the pole sequence $\{\xi_k\}_{k \ge 1}$. The problem of choosing a sequence of poles that is effective for a particular function $f$ and a given set containing the spectrum of $A$ has often been discussed in the literature: we refer, for instance, to \cite{Guettel13} and the references therein.
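To make the construction concrete, the following self-contained NumPy sketch implements the basic rational Arnoldi iteration and the approximation $\barvec y_k = \norm{\vec b}_2 V_k f(A_k)\vec e_1$ for $f = \exp$. It is an illustration only: the function names, the test matrix and the pole choice are ours, not taken from the text.

```python
import numpy as np
from scipy.linalg import expm

def rational_arnoldi(A, b, poles):
    """Basic rational Arnoldi (Ruhe): at step j, orthonormalize
    (I - A/xi_j)^{-1} A v_j against the current basis; a pole at
    numpy.inf reduces to the polynomial Arnoldi step A v_j."""
    n = len(b)
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    I = np.eye(n)
    for j, xi in enumerate(poles):
        w = A @ V[:, j]
        if np.isfinite(xi):
            w = np.linalg.solve(I - A / xi, w)
        for i in range(j + 1):                 # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

# symmetric test matrix with spectrum in [-2, 0]
rng = np.random.default_rng(0)
Qm = np.linalg.qr(rng.standard_normal((100, 100)))[0]
A = Qm @ np.diag(np.linspace(-2.0, 0.0, 100)) @ Qm.T
b = rng.standard_normal(100)

# eight polynomial steps followed by two finite poles
poles = [np.inf] * 8 + [4.0, 2.0]
V = rational_arnoldi(A, b, poles)
Ak = V.T @ A @ V                               # projection of A
yk = np.linalg.norm(b) * V @ expm(Ak)[:, 0]    # ||b|| V_k f(A_k) e_1
print(np.linalg.norm(yk - expm(A) @ b))        # small approximation error
```

Since the mixed pole sequence above contains eight poles at infinity, the constructed space contains all polynomials of degree up to eight, which already gives a very accurate approximation of the exponential on the chosen spectral interval.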
Some specific sequences of poles lead to special cases of the rational Krylov subspaces $\rat_k(A, \vec b)$: if all the poles are equal to $\xi \in \mathbb{C} \setminus \sigma(A)$, then $\rat_k(A, \vec b)$ is a \emph{Shift-and-Invert Krylov subspace},
\begin{equation*}
Q_k(A, \vec b) = \kryl_k\left(\left(I - \frac 1 \xi A\right)^{-1}, \vec b\right),
\end{equation*}
that was first investigated for the computation of matrix functions in \cite{MoretNovati04, VDEHochbruck06}. If the poles alternate between $0$ and $\infty$, we obtain the \emph{extended Krylov subspace}, introduced in \cite{DruskinKnizhnerman98}, which is of the form
\begin{equation*}
\rat_{2k}(A, \vec b) = A^{-k}\kryl_{2k}(A, \vec b) = \kryl_{2k}(A, A^{-k}\vec b).
\end{equation*}
We refer to \cite{GuettelThesis} for an extensive discussion on rational Krylov methods for the computation of matrix functions.
\section{Krylov methods for GMFs}
\label{sec:krylov-gmf}
The computation of $f^{\diamond}(A) \vec b$ via an SVD of $A$ can be infeasible when $A$ is large. A possible way to approximate the product of a generalized matrix function with a vector is to project $A$ onto a smaller space using two rectangular matrices with orthonormal columns, and then compute the generalized matrix function of the projected matrix: let $k\ll n$, let $U_k,V_k\in\mathbb{R}^{n\times k}$ have orthonormal columns, and let $B_k\in\mathbb{R}^{k\times k}$ be such that
$A\approx U_kB_kV_k^T$, then
\begin{equation}
\label{eqn:gmf-approx-polynomial-0}
f^{\diamond}(A) \vec b\approx U_k f^\diamond(B_k)V_k^T \vec b,
\end{equation}
where the matrix $f^\diamond (B_k)$ can be computed by means of the singular value decomposition.
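For a small projected matrix, evaluating a generalized matrix function through the SVD is straightforward. The following minimal NumPy sketch (ours, for illustration) follows the convention that $f$ acts only on the nonzero singular values:

```python
import numpy as np

def gmf_apply(f, B, x):
    """Compute f^diamond(B) x via a dense SVD: B = U Sigma V^T gives
    f^diamond(B) = U f(Sigma) V^T, with f applied to each nonzero
    singular value and zero singular values left at zero."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    fs = np.where(s > 0, f(s), 0.0)
    return U @ (fs * (Vt @ x))

# sanity check: for f(z) = z, the GMF of B is B itself
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 3))
x = rng.standard_normal(3)
print(np.linalg.norm(gmf_apply(lambda s: s, B, x) - B @ x))  # ~ 0
```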
In this section we describe how to compute such a projected matrix using the Golub-Kahan bidiagonalization, which is equivalent to using a polynomial Krylov method on the matrices $A^TA$ and $AA^T$. This strategy corresponds to the ``third approach'' discussed in \cite[Section~5.4]{arrigo2016computation}; the numerical results of \cite{arrigo2016computation} indicate that \eqref{eqn:gmf-approx-polynomial-0} is often more effective than the other approaches they propose, which are based on Gauss and Gauss-Radau quadrature formulas.
In Sections~\ref{subsec:rat-kry-gmf} and~\ref{subsec:short-term-rec} we generalize this approach to the rational Krylov case and we show that a short recurrence like the one of the Golub-Kahan bidiagonalization can be obtained in the rational case too.
\subsection{Golub-Kahan bidiagonalization}
The first method we describe for the computation of a truncated SVD is the Golub-Kahan bidiagonalization, introduced by Golub and Kahan in 1965.
\begin{theorem} \label{thm:householder-bidiagonalization}
Let $A\in \mathbb{R}^{m\times n}$, with $m>n$. There exist orthogonal matrices $P\in \mathbb{R}^{m\times m}, Q\in \mathbb{R}^{n\times n}$ such that
\begin{equation}
P^TAQ=B=
\begin{bmatrix}
\alpha_1 & \beta_1 & \dots & \dots & 0 \\
0 & \alpha_2 & \beta_2 & \dots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & & 0 & \alpha_{n-1} & \beta_{n-1} \\
0 & \dots & \dots & 0 & \alpha_n \\
0 & \dots & \dots & \dots & 0 \\
\vdots & & & & \vdots \\
0 & \dots & \dots & \dots & 0
\end{bmatrix} .
\end{equation}
Moreover the first column of $Q$ can be chosen arbitrarily.
\end{theorem}
The proof of Theorem~\ref{thm:householder-bidiagonalization} is constructive; the resulting procedure is usually called the Householder bidiagonalization process, and it can be found in \cite[Section 5.4]{golub2013matrix}.
In the case of large matrices the full bidiagonalization is too expensive. The goal of the Golub-Kahan bidiagonalization is to extract good approximations of singular values and singular vectors before the full bidiagonalization is completed.
Let $$P=[\vec p_1|\dots|\vec p_m],\qquad Q=[\vec q_1|\dots| \vec q_n]$$ be a column partitioning of $P$ and $Q$. From Theorem~\ref{thm:householder-bidiagonalization} it follows that $AQ=PB$ and $A^TP=QB^T$; using these relations we have
\begin{equation} \label{eqn:Golub-kahan-recurrence}
\begin{aligned}
A\vec q_k & =\alpha_k\vec p_k+\beta_{k-1}\vec p_{k-1}, \\
A^T\vec p_k & =\alpha_k\vec q_k+\beta_k\vec q_{k+1},
\end{aligned}
\end{equation}
for $k=1,\dots,n,$ with the convention that $\beta_0\vec p_0=0$ and $\beta_n\vec q_{n+1}=0$.
Let $\vec r_k=A\vec q_k-\beta_{k-1}\vec p_{k-1}$. Using the orthogonality of the columns of $P$, we have
\begin{equation}\label{eq_alpha}
\alpha_k=\pm\norm{\vec r_k}_2, \qquad \vec p_k=\frac{\vec r_k}{\alpha_k} \quad \text{(if }\alpha_k \neq 0).
\end{equation}
Similarly defining $\vec s_k=A^T\vec p_k-\alpha_k\vec q_k$ we have
\begin{equation}\label{eq_beta}
\beta_k=\pm\norm{\vec s_k}_2, \qquad \vec q_{k+1}=\frac{\vec s_k}{\beta_k} \quad \text{(if }\beta_k \neq 0).
\end{equation}
Hence, given $\vec p_{k-1},\vec q_k,\beta_{k-1}$, we can compute $\vec p_{k}$, $\vec q_{k+1}$, $\beta_{k}$ and $\alpha_k$.
Defining $P_k=[\vec p_1|\dots|\vec p_k],$ $Q_k=[\vec q_1|\dots| \vec q_k]$ and
\begin{equation*}
B_k=
\begin{bmatrix}
\alpha_1 & \beta_1 & 0 & \dots & 0 \\
0 & \alpha_2 & \beta_2 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & & 0 & \alpha_{k-1} & \beta_{k-1} \\
0 & \dots & \dots & 0 & \alpha_k \\
\end{bmatrix},
\end{equation*}
after the $k$-th step of \eqref{eq_alpha} and \eqref{eq_beta} we have
\begin{equation}
\begin{aligned}
AQ_k & =P_kB_k, \\
A^TP_k & =Q_kB_k^T+\vec s_k \vec e_k^T,
\end{aligned}
\end{equation}
assuming $\alpha_k>0$. It can be shown that
\begin{equation}
\begin{aligned}
\text{span}\{\vec q_1,\dots, \vec q_k\} & =\kryl_k(A^TA,\vec q_1), \\
\text{span}\{\vec p_1,\dots, \vec p_k\} & =\kryl_k(AA^T,A\vec q_1), \\
\end{aligned}
\end{equation}
thus the convergence of the Golub-Kahan bidiagonalization follows from the convergence of the Lanczos method applied to $A^TA$ and $AA^T$.
For a given $k$, we can approximate $A$ with $P_kB_kQ_k^T$, and hence we can compute an approximation of the SVD of $A$ by computing the SVD of $B_k$.
For further information on the Golub-Kahan bidiagonalization we refer to \cite[Chapter 10]{golub2013matrix}.
The vector $\vec y = f^\diamond(A) \vec b$ can be approximated with the expression
\begin{equation}
\label{eqn:gmf-approx-polynomial}
\barvec y_k = P_k f^\diamond(B_k) Q_k^T \vec b = \norm{\vec b}_2 P_k f^\diamond(B_k) \vec e_1 \: \in \: \kryl_k(AA^T, A \vec b).
\end{equation}
We refer to this approximation as a polynomial Krylov method for GMFs.
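The recurrences \eqref{eq_alpha}--\eqref{eq_beta} and the approximation \eqref{eqn:gmf-approx-polynomial} can be sketched in NumPy as follows. This is an illustration under our own choice of test matrix and of $f(z) = \sqrt{z}$; as in the short recurrence above, no reorthogonalization is performed.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization with q_1 = b/||b||,
    following the recurrences for alpha_k and beta_k in the text.
    Returns P_k (m x k), Q_k (n x k) and the upper bidiagonal B_k."""
    m, n = A.shape
    P, Q, B = np.zeros((m, k)), np.zeros((n, k)), np.zeros((k, k))
    Q[:, 0] = b / np.linalg.norm(b)
    beta, p_old = 0.0, np.zeros(m)
    for j in range(k):
        r = A @ Q[:, j] - beta * p_old        # r_k = A q_k - beta_{k-1} p_{k-1}
        alpha = np.linalg.norm(r)
        P[:, j] = r / alpha
        B[j, j] = alpha
        s = A.T @ P[:, j] - alpha * Q[:, j]   # s_k = A^T p_k - alpha_k q_k
        beta = np.linalg.norm(s)
        p_old = P[:, j]
        if j + 1 < k:
            B[j, j + 1] = beta
            Q[:, j + 1] = s / beta
    return P, Q, B

# polynomial Krylov approximation of f^diamond(A) b for f = sqrt
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 40))
b = rng.standard_normal(40)
P, Q, B = golub_kahan(A, b, k=30)
U, s, Vt = np.linalg.svd(B)
yk = np.linalg.norm(b) * P @ (U @ (np.sqrt(s) * Vt[:, 0]))  # ||b|| P_k f^d(B_k) e_1
# reference value from the full SVD of A
Ua, sa, Vta = np.linalg.svd(A, full_matrices=False)
y = Ua @ (np.sqrt(sa) * (Vta @ b))
print(np.linalg.norm(yk - y) / np.linalg.norm(y))  # decreases as k grows
```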
\subsection {Rational Krylov methods for GMFs} \label{subsec:rat-kry-gmf}
As we saw in the previous section, the Golub-Kahan bidiagonalization computes orthonormal bases for the polynomial Krylov subspaces $\kryl_k(A^TA, \vec b)$ and $\kryl_k(AA^T, A \vec b)$. By analogy with that approach, in this section we propose to compute an approximation to $f^\diamond(A) \vec b$ using the rational Krylov subspaces
\begin{equation}
\rat_k(A^T A,\vec b) \qquad \text{and} \qquad \rat_k(A A^T, A \vec b),
\end{equation}
for a given pole sequence $\{ \xi_j \}_{j \ge 1}$, with associated denominator $q_{k-1}(z) = \displaystyle\prod_{j=1}^{k-1}(1 - z/\xi_j)$.
Assume that we have constructed two matrices with orthonormal columns $P_k$ and $Q_k$, such that $\vspan(P_k) = \rat_k(AA^T, A\vec b)$ and $\vspan(Q_k) = \rat_k(A^TA, \vec b)$. Then, defining $B_k = P_k^T A Q_k$, by analogy with the polynomial Krylov approach we can introduce the vector
\begin{equation}\label{eqn:gmf-approx-rational}
\barvec y_k = P_k f^\diamond(B_k) Q_k^T \vec b=\norm{\vec b}_2 P_k f^\diamond(B_k) \vec e_1,
\end{equation}
which is an approximation to $f^\diamond(A) \vec b$ from the subspace $\rat_k(A A^T, A \vec b)$.
First of all, notice that it is sufficient to compute only one of the two rational Krylov subspaces: indeed, since $A \rat_k(A^TA, \vec b) = \rat_k(A A^T, A \vec b)$, we can compute an orthonormal basis of the subspace $\rat_k(AA^T, A \vec b)$ simply by orthonormalizing the columns of $A Q_k$. This is equivalent to computing a QR decomposition $A Q_k = W_k R_k$, where $W_k$ has orthonormal columns and $R_k$ is upper triangular, so we can set $P_k = W_k$. Moreover, notice that we also have
\begin{equation*}
B_k = P_k^T A Q_k = W_k^TW_k R_k = R_k,
\end{equation*}
i.e.~with the QR decomposition we also recover the matrix $B_k$, without the need to project $A$ explicitly. The basis $Q_k$ of the subspace $\rat_k(A^T A, \vec b)$ can be computed by applying the rational Arnoldi algorithm to the matrix $A^T A$ with initial vector~$\vec b$.
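The procedure just described, rational Arnoldi applied to $A^TA$ followed by a QR factorization of $AQ_k$, can be sketched as follows. This is our own illustration: the negative geometric poles are an arbitrary choice for $f(z) = \sqrt{z}$, not a recommendation from the text.

```python
import numpy as np

def rat_arnoldi(M, b, poles):
    """Orthonormal basis of the rational Krylov space R_k(M, b),
    with full Gram-Schmidt orthogonalization."""
    n = len(b)
    Q = np.zeros((n, len(poles) + 1))
    Q[:, 0] = b / np.linalg.norm(b)
    I = np.eye(n)
    for j, xi in enumerate(poles):
        w = np.linalg.solve(I - M / xi, M @ Q[:, j])
        for i in range(j + 1):
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j + 1] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 30))
b = rng.standard_normal(30)
poles = [-1.0, -4.0, -16.0, -64.0, -256.0] * 2     # finite, nonzero poles
Q = rat_arnoldi(A.T @ A, b, poles)                 # span(Q) = R_k(A^T A, b)
P, B = np.linalg.qr(A @ Q)                         # P_k = W_k and B_k = R_k
U, s, Vt = np.linalg.svd(B)
yk = np.linalg.norm(b) * P @ (U @ (np.sqrt(s) * Vt[:, 0]))
# reference: f^diamond(A) b for f = sqrt, from the full SVD
Ua, sa, Vta = np.linalg.svd(A, full_matrices=False)
y = Ua @ (np.sqrt(sa) * (Vta @ b))
print(np.linalg.norm(yk - y) / np.linalg.norm(y))
```

Note that the QR factorization yields $P_k$ and $B_k = R_k$ simultaneously, so $A$ is never projected explicitly.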
\subsection{Short recurrence for rational Golub-Kahan algorithm}\label{section:Rat-Golub_Kahan}
\label{subsec:short-term-rec}
In the Golub-Kahan bidiagonalization (without reorthogonalization), the last column of $P_k$, $Q_k$ and $B_k$ can be computed from just a few previous columns of $P_k$ and $Q_k$, by means of the equations \eqref{eqn:Golub-kahan-recurrence}. This short recurrence is possible because of the bidiagonal structure of the matrix $B_k$, which unfortunately is not preserved when we perform a rational Krylov method.
In this section we show that, if a rational Krylov method is used, the matrix~$B_k$ is a quasiseparable matrix (see Definition~\ref{def:quasiseparable-matrix}). This structure extends the bidiagonal form obtained in the Golub-Kahan bidiagonalization. Exploiting it, we build a short recurrence that updates the matrices $P_k$ and $B_k$ while avoiding the full orthogonalization required by the computation of the QR factorization of the matrix $AQ_k$.
\begin{definition}\label{def:semiseparable-matrix}
A matrix $S\in \mathbb{R}^{n\times n}$ is called a semiseparable matrix if all submatrices
taken out of the lower and upper triangular part of the matrix have rank at most 1, that is
\begin{equation*}
\rank S(i : n, 1 : i) \le 1 \quad \text{and} \quad \rank S(1 : i, i : n) \le 1,
\end{equation*}
for every $i=1,\dots,n$.
Moreover, $S$ is called a generator representable semiseparable
matrix if the lower and upper triangular parts of the matrix are derived from a rank~1 matrix, that is
\begin{equation*}
\tril(S) = \tril(\vec u\vec v^T )\quad \text{and} \quad \triu(S) = \triu(\vec p\vec q^T ),
\end{equation*}
for $\vec u,\vec v, \vec p,\vec q\in \mathbb{R}^n$.
\end{definition}
\begin{definition}\label{def:quasiseparable-matrix}
A matrix $S\in \mathbb{R}^{n\times n}$ is called a quasiseparable matrix if all the subblocks
taken out of the strictly lower triangular part of the matrix (respectively, the strictly upper triangular part) are of rank at most 1, that is
\begin{equation*}
\rank S(i + 1 : n, 1 : i) \le 1 \quad \text{and} \quad \rank S(1 : i, i + 1 : n) \le 1,
\end{equation*}
for every $i=1,\dots,n$.
\end{definition}
The following theorem from \cite[Section~1.5.2]{vandebril2007matrix} gives us a complete characterization of invertible semiseparable matrices.
\begin{theorem}\label{thm:inverse-semiseparable}
The inverse of an invertible tridiagonal matrix is a semiseparable
matrix, and vice versa. Moreover, the inverse of an invertible irreducible tridiagonal matrix is a generator representable semiseparable matrix and vice versa.
\end{theorem}
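Theorem~\ref{thm:inverse-semiseparable} is easy to check numerically. In the following sketch (ours, for illustration) we verify that every lower triangular block of the inverse of a random irreducible tridiagonal matrix has rank one:

```python
import numpy as np

# random strictly diagonally dominant (hence invertible) irreducible
# tridiagonal matrix
rng = np.random.default_rng(4)
n = 8
T = (np.diag(rng.uniform(2.0, 3.0, n))
     + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
S = np.linalg.inv(T)
# semiseparable condition: rank S(i:n, 1:i) <= 1 for every i
ranks = [np.linalg.matrix_rank(S[i - 1:, :i], tol=1e-8) for i in range(1, n + 1)]
print(ranks)  # all ones: each lower triangular block has rank exactly 1
```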
As proved in \cite[Section~5.2]{GuettelThesis}, performing a rational Krylov algorithm on a symmetric matrix yields a particular relation called a rational Arnoldi decomposition. See also \cite[Section~2]{berljafa2015generalized} and \cite[eq.~(2.2)]{PPS21}.
\begin{theorem}\label{thm:symmetric-RAD}
Given a symmetric matrix $A\in\mathbb{R}^{n\times n}$ and a vector $\vec b\in \mathbb{R}^n$, let $\rat_{k+1}(A,\vec b)$ be the rational Krylov space with nonzero poles $\{\xi_1,\dots,\xi_k\}$ and assume that $k$ is less than the invariance index of the Krylov subspace. Let $Q_{k+1}\in\mathbb{R}^{n\times (k+1)}$ be the matrix with orthonormal columns generated by the Arnoldi algorithm, such that $\vspan (Q_{k+1}) =\rat_{k+1}(A,\vec b)$. Then the following relation holds:
\begin{equation} \label{eqn:Rational-Arnoldi-decomposition}
AQ_{k+1}\underline{K}_k=Q_{k+1}\underline{H}_k, \quad \text{with} \quad \underline{K}_k=\underline{I}_k+D_k\underline{H}_k,
\end{equation}
where $\underline{H}_k\in \mathbb{R}^{(k+1)\times k}$ is a full rank tridiagonal irreducible symmetric matrix, $\underline{I}_k$ is the $(k+1)\times k$ identity matrix and $D_k=\diag(0,\frac{1}{\xi_1},\dots,\frac{1}{\xi_{k}})$, with the convention $\frac{1}{\infty}=0$.
\end{theorem}
Starting from \eqref{eqn:Rational-Arnoldi-decomposition}, we are going to prove that the projection of the symmetric matrix $A$ onto the Krylov subspace (i.e., $Q^T_{k+1}AQ_{k+1}$) is the sum of a diagonal matrix and a semiseparable matrix.
\begin{theorem}{\label{thm:semiseparable-projection}}
Let $A\in \mathbb{R}^{n\times n}$ be a symmetric matrix and let $Q_{k+1}\in\mathbb{R}^{n\times (k+1)}$ be the matrix with orthonormal columns generated by the rational Arnoldi algorithm using poles $\{\xi_1,\dots,\xi_k\}$ different from zero and infinity. Assuming that $k+1$ is less than the invariance index, we have
$$J_{k+1}:=Q_{k+1}^TAQ_{k+1}=S+\tilde D_{k},$$
where $\tilde D_{k}=\diag(0,\xi_1,\dots,\xi_{k})$ and $S$ is a symmetric generator representable semiseparable matrix.
\end{theorem}
\begin{proof}
Using $\xi_{k+1}=\infty$, from Theorem~\ref{thm:symmetric-RAD} we obtain the relation
\begin{equation}\label{1}
AQ_{k+2}\underline K_{k+1}=Q_{k+2}\underline H_{k+1},
\end{equation}
where $\underline K_{k+1}$ and $\underline H_{k+1}$ are tridiagonal, $\vec e_{k+2}^T\underline K_{k+1}=\vec 0^T$ and $ \vec e_{k+2}^T\underline H_{k+1}$ is a multiple of $\vec e_{k+1}^T$.
Hence, multiplying \eqref{1} on the left by $Q_{k+2}^T$ and taking the first $k+1$ columns and rows, we have $$J_{k+1}=H_{k+1}K_{k+1}^{-1}.$$
Let us define $\hat{D}_k=\diag(\gamma, \xi_1,\dots, \xi_k)$, for $\gamma\in \mathbb{R}$. Notice that, since the first column of $Q_{k+1}$ is not an eigenvector of $A$, the entry in position $(1,2)$ of $J_{k+1}$ must be nonzero. It follows that $J_{k+1}-\tilde D_k$ is a symmetric generator representable semiseparable matrix if and only if $J_{k+1}-\hat D_k$ is. Moreover, taking $\gamma \neq h_{1,1}$ and $\gamma \neq 0$, the matrix $J_{k+1}-\hat D_k$ is invertible (as shown at the end of the proof) and its inverse can be computed as follows:
\begin{equation*}
\begin{aligned}
\left(H_{k+1}K_{k+1}^{-1}-\hat D_k\right)^{-1} & =\left(-\hat D_k(K_{k+1}-\hat D_k^{-1}H_{k+1})K_{k+1}^{-1}\right)^{-1} \\
& =-K_{k+1}(K_{k+1}-\hat D_k^{-1}H_{k+1})^{-1}\hat D_k^{-1}.
\end{aligned}
\end{equation*}
Since $K_{k+1}={I}_{k+1}+D_k H_{k+1}$ where $D_k=\diag(0,\frac{1}{\xi_1},\dots,\frac{1}{\xi_k}),$ we have
\begin{equation*}
\begin{aligned}
\left(H_{k+1}K_{k+1}^{-1}-\hat D_k\right)^{-1} & =-K_{k+1}({I}_{k+1}+(D_k-\hat D_k^{-1})H_{k+1})^{-1}\hat D_k^{-1} \\
& =-K_{k+1}({I}_{k+1}-\frac1 \gamma \vec e_1 \vec e_1^TH_{k+1})^{-1}\hat D_k^{-1} \\
& =-K_{k+1}\Big({I}_{k+1}+\frac{1}{\gamma-h_{1,1}}\vec e_1 \vec e_1^TH_{k+1}\Big)\hat D_k^{-1} \\
& =-\Big(K_{k+1}+\frac{1}{\gamma-h_{1,1}}K_{k+1}\vec e_1 \vec e_1^TH_{k+1}\Big)\hat D_k^{-1}.
\end{aligned}
\end{equation*}
The third equality follows from the Sherman-Morrison formula, using the fact that $\gamma \ne h_{1,1}$. This also shows that the matrix $J_{k+1} - \hat D_k$ is indeed invertible.
The obtained matrix is an irreducible tridiagonal matrix since $K_{k+1}$ and $H_{k+1}$ have such structure and $\gamma \ne 0$. Hence, using Theorem~\ref{thm:inverse-semiseparable}, we have that $J_{k+1}-\hat D_k$ is a generator representable semiseparable matrix, and therefore also $J_{k+1} - \tilde D_k$ is.
\end{proof}
The following corollary generalizes the statement of Theorem~\ref{thm:semiseparable-projection} to the case with poles at $\infty$.
\begin{corollary}
If there exists a pole $\xi_i=\infty$, the projected matrix $J_{k+1}=Q_{k+1}^TAQ_{k+1}$ is still a quasiseparable matrix.
\end{corollary}
\begin{proof}
Let $\{\xi^{(j)}\}_{j\in \mathbb{N}}$ be a sequence of real numbers outside of the convex hull of $\sigma(A)$ such that
$$\lim_{j\rightarrow \infty}\xi^{(j)}=\infty,$$
and let $J_{k+1}^{(j)}$ be the projected matrix obtained by using poles equal to $$\{\xi_1,\dots,\xi_{i-1},\xi^{(j)},\xi_{i+1},\dots,\xi_k\}.$$
Since the basis computed by the rational Arnoldi process depends continuously on the poles, we have that
$$\lim_{j\rightarrow \infty}J_{k+1}^{(j)}=J_{k+1}.$$
From Theorem~\ref{thm:semiseparable-projection} we have that the matrices $J_{k+1}^{(j)}$ are quasiseparable for all $j$. Since the set of quasiseparable matrices is closed \cite[Section~1.4.1]{vandebril2007matrix}, the claim follows.
\end{proof}
Notice that, if the matrix $A$ is symmetric positive semidefinite and $k+1$ is less than the invariance index, the projected matrix $J_k=Q_k^TAQ_k$ has to be positive definite. Indeed, if there exists a vector $\vec x\neq \vec 0$ such that $J_k\vec x = \vec 0$, we have that $AQ_k \vec x = \vec 0$. In particular, since $Q_k \vec x\in q_{k-1}(A)^{-1}\kryl_k(A,\vec b)$, where $q_{k-1}$ is as defined in Section~\ref{sec-rat-kryl-methods}, there exist $\alpha_0,\dots,\alpha_{j}$, $j \le k-1$, with $\alpha_j \ne 0$ such that
\begin{equation*}
Q_k \vec x=q_{k-1}(A)^{-1} \sum_{i=0}^{j}\alpha_i A^i \vec b,
\end{equation*}
and so
\begin{equation*}
\vec 0 = AQ_k \vec x = q_{k-1}(A)^{-1}\sum_{i=0}^{j}\alpha_i A^{i+1}\vec b.
\end{equation*}
This implies that $A^{j+1} \vec b \in \kryl_{j+1}(A, \vec b)$, which is impossible because $k+1$ is less than the invariance index. In particular, this guarantees the existence of the Cholesky factorization of $J_k$ whenever $A$ is symmetric positive semidefinite.
The matrix $B_k = P_k^T A Q_k$ we are interested in when using rational Krylov methods for GMFs is exactly the transpose of the Cholesky factor of the matrix $J_k = Q_k^T (A^TA) Q_k$. Indeed $B_k$ is upper triangular and
$$J_k=(AQ_k)^TAQ_k=(P_kB_k)^T(P_kB_k)=B_k^TB_k.$$
The following proposition shows that the matrix $B_k$ is the upper triangular part of a diagonal-plus-rank-one matrix.
\begin{proposition}Let $A\in\mathbb{R}^{n\times n}$ be symmetric positive definite and let $L\in \mathbb{R}^{n\times n}$ be its Cholesky factor (i.e., $L$ is lower triangular and $A=LL^T$).
If there exist $\vec u ,\vec v\in \mathbb{R}^{n}$ such that $\tril(A,-1)=\tril(\vec v\vec u^T,-1)$, then $\tril(L,-1)=\tril(\vec v\vec x^T,-1)$ for some $\vec x\in \mathbb{R}^{n}$.
\end{proposition}
\begin{proof}
It can be easily proved that the last row of $\tril(L,-1)$ is equal to
$$v_n\cdot \begin{bmatrix}
u_1 & \dots & u_{n-1} & 0
\end{bmatrix}\begin{bmatrix}
L_{n-1}^{-T} & \\&1
\end{bmatrix},$$
where $L_{n-1}$ is the leading principal submatrix of $L$ of size $n-1$. Using this fact the thesis can be proved recursively.
\end{proof}
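A quick numerical check of the proposition (our sketch, with arbitrary data): we build an SPD matrix whose strictly lower triangular part is generated by a rank-one matrix and verify that the Cholesky factor inherits the structure.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 7
u, v = rng.standard_normal(n), rng.standard_normal(n)
low = np.tril(np.outer(v, u), -1)          # tril(A,-1) = tril(v u^T,-1)
A = low + low.T
A += np.diag(np.abs(A).sum(axis=1) + 1.0)  # strict diagonal dominance => SPD
L = np.linalg.cholesky(A)                  # A = L L^T, L lower triangular
# every strictly lower triangular block of L has rank at most 1
ranks = [np.linalg.matrix_rank(L[i:, :i], tol=1e-8) for i in range(1, n)]
print(ranks)
```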
\begin{remark}\label{rmk:nonzero_v}
Notice that if $\triu(J_k,1)=\triu(\vec u \vec v^T, 1)$, the vector $\vec v$ cannot have zero entries: indeed if there exists $s\le k$ such that $v_s=0$, then, as a consequence of the proof of Theorem \ref{thm:semiseparable-projection}, the matrix $J_s-\diag(\gamma,\xi_1,\dots,\xi_{s-1})$ has the last column equal to zero for all $\gamma\in \mathbb{R}$, but this is impossible since for an appropriate choice of~$\gamma$ this matrix has to be invertible.
\end{remark}
Exploiting the quasiseparable structure of the matrix $B_k$, we can compute the matrices $B_k$ and $P_k$ by only performing a few scalar products. Indeed, if we let
\begin{equation*}
B_k=\begin{bmatrix}
d_1 & \beta_1 & \gamma_1 \\
& d_2 & \beta_2 & \gamma_2 & \bignonz \\
& & \ddots & \ddots & \ddots \\
& \bigzero & & d_{k-2} & \beta_{k-2} & \gamma_{k-2} \\
& & & & d_{k-1} & \beta_{k-1} \\
& & & & & d_{k} \\
\end{bmatrix},
\end{equation*}
and we define $\vec x_k = [P_{k-1} \, \vec 0 ] B_k \vec e_k$, we have that
\begin{equation}\label{eqn:structured-representation_B}
A\vec q_k = AQ_k \vec e_k = P_kB_k \vec e_k = d_k\vec p_k + \vec x_k.
\end{equation}
Using the fact that the submatrix of $B_k$ formed by the last two columns and all but the last two rows has rank at most 1, we can compute $\vec x_k$ with the recursive relation
\begin{equation*}
\vec x_k = \dfrac{\gamma_{k-2}}{\beta_{k-2}} \vec x_{k-1} + \beta_{k-1} \vec p_{k-1}.
\end{equation*}
This allows us to compute $d_k, \beta_{k-1}, \gamma_{k-2}$ and $\vec p_k$ with only two scalar products. The $k$-th step of the procedure is summarized in Algorithm~\ref{algorithm:rational-Golub-Kahan}.
\begin{algorithm}
\KwInput{$A,\vec q_k, \vec p_{k-1}, \vec p_{k-2}, \vec x_{k-1};$}
\KwOutput{ $\vec p_k, d_k, \beta_{k-1}, \gamma_{k-2}, \vec x_k;$}
$\vec w \gets A\vec q_k$\\
$\beta_{k-1} \gets \vec w^T\vec p_{k-1}$\\
$\gamma_{k-2} \gets \vec w^T\vec p_{k-2}$\\
$\vec x_k \gets \frac{\gamma_{k-2}}{\beta_{k-2}}\vec x_{k-1}+\beta_{k-1}\vec p_{k-1}$\\
$\vec w \gets \vec w-\vec x_k$\\
$d_k\gets\norm{\vec w}_2$\\
$\vec p_k\gets\frac{\vec w}{d_k}$
\caption{$k$-th step of rational Golub-Kahan algorithm} \label{algorithm:rational-Golub-Kahan}
\end{algorithm}
Notice that during the $k$-th step of the procedure we do not require the first $k-1$ columns of $Q_k$. Moreover, the matrix $Q_k$ itself is not needed to form the projected solution $\barvec y_k$ defined in \eqref{eqn:gmf-approx-rational}. For this reason, the computation of $Q_k$ can be performed with a short-recurrence rational Lanczos algorithm, such as the one presented in \cite[Section~5.2]{GuettelThesis}, keeping in memory only the last two columns of $Q_k$.
After the $k$-th step of the algorithm, the $k$-th column of the matrix $B_k$ can be computed from the newly computed quantities and the previous column, by exploiting the rank structure of $B_k$.
Algorithm~\ref{algorithm:rational-Golub-Kahan} reduces to the standard Golub-Kahan bidiagonalization if all the poles are chosen equal to infinity.
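The following NumPy sketch is our own transcription of the step in Algorithm~\ref{algorithm:rational-Golub-Kahan}; the reference basis $Q_k$ is generated here with a fully orthogonalized rational Arnoldi iteration, and the pole choice is arbitrary. It checks that the short recurrence reproduces $AQ_k = P_kB_k$ with orthonormal $P_k$.

```python
import numpy as np

def rat_arnoldi(M, b, poles):
    """Reference orthonormal basis of R_k(M, b) (full Gram-Schmidt)."""
    n = len(b)
    Q = np.zeros((n, len(poles) + 1))
    Q[:, 0] = b / np.linalg.norm(b)
    I = np.eye(n)
    for j, xi in enumerate(poles):
        w = np.linalg.solve(I - M / xi, M @ Q[:, j])
        for i in range(j + 1):
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j + 1] = w / np.linalg.norm(w)
    return Q

def rational_golub_kahan(A, Q):
    """Short-recurrence computation of P_k and B_k = P_k^T A Q_k:
    step k touches only p_{k-1}, p_{k-2} and x_{k-1}."""
    m, k = A.shape[0], Q.shape[1]
    P, B = np.zeros((m, k)), np.zeros((k, k))
    x = np.zeros(m)
    for j in range(k):                        # 0-based step index
        w = A @ Q[:, j]
        if j == 1:
            B[0, 1] = w @ P[:, 0]             # beta_{k-1}
            x = B[0, 1] * P[:, 0]
        elif j >= 2:
            beta = w @ P[:, j - 1]            # beta_{k-1}
            gamma = w @ P[:, j - 2]           # gamma_{k-2}
            ratio = gamma / B[j - 2, j - 1]
            B[:j - 1, j] = ratio * B[:j - 1, j - 1]   # rank-1 fill of column k
            B[j - 1, j] = beta
            x = ratio * x + beta * P[:, j - 1]
        w -= x
        B[j, j] = np.linalg.norm(w)           # d_k
        P[:, j] = w / B[j, j]
    return P, B

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 25))
b = rng.standard_normal(25)
poles = [-1.0, -10.0, -100.0, -1.0, -10.0]    # finite, nonzero poles
Q = rat_arnoldi(A.T @ A, b, poles)
P, B = rational_golub_kahan(A, Q)
print(np.linalg.norm(P @ B - A @ Q))                    # A Q_k = P_k B_k
print(np.linalg.norm(P.T @ P - np.eye(Q.shape[1])))     # P_k orthonormal
```

For clarity the sketch stores the full matrices $P$ and $B$; in an actual implementation only $\vec p_{k-1}$, $\vec p_{k-2}$, $\vec x_{k-1}$ and the previous column of $B_k$ need to be kept, as described above.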
\begin{remark}
In the algorithm it is implicitly assumed that $\beta_i\neq 0$ for each $i$. In practice this hypothesis is always satisfied. Moreover, as observed in Remark~\ref{rmk:nonzero_v}, if $k$ is less than the invariance index there is at least one nonzero off-diagonal entry in the $k$-th column of $B_k$, so the algorithm could be modified to avoid the issue of $\beta_i=0$.
\end{remark}
\begin{remark}The algorithm presented in this section also works if some of the poles are equal to zero. However, the proof of this fact requires slightly different tools, and hence we omitted it for brevity.
\end{remark}
\section{Error bounds}
\label{sec:error-bounds}
In this section we prove some error bounds for the approximation of $f^\diamond(A) \vec b$ using the polynomial and rational Krylov methods described above. These bounds link the approximation error with the error of polynomial and rational approximation of $f$ on an interval containing the singular values of $A$. Our results are the analogue of the ones that hold for standard matrix functions, and they can be proved in a similar way.
We first find an upper and lower bound for the singular values of $B_k$. For convenience, given a matrix $A \in \mathbb{R}^{m \times n}$, throughout this section we are going to use an extended notation for singular values, defining $\sigma_j := 0$ for all $j$ such that $\min\{m,n\} < j \le \max \{m,n\}$.
\begin{lemma}
\label{lemma:interlacing-singular-values}
Let $\sigma_1$ and $\sigma_n$ be the first and $n$-th singular value of $A \in \mathbb{R}^{m \times n}$, respectively. Then the singular values of $B_k$ belong to the interval $[ \sigma_n, \sigma_1 ]$.
\end{lemma}
\begin{proof}
Let $\eta_1$ and $\eta_k$ be the largest and smallest singular value of $B_k$, respectively. Using the variational characterization of eigenvalues, we have
\begin{equation*}
\eta_k^2 = \min_{\substack{\vec v \in \mathbb{R}^k \\ \norm{\vec v}_2 = 1}} \vec v^T B_k^T B_k \vec v = \min_{\substack{\vec v \in \mathbb{R}^k \\ \norm{\vec v}_2 = 1}} \vec v^T Q_k^T A^T P_k P_k^T A Q_k \vec v.
\end{equation*}
Since the columns of $Q_k$ are an orthonormal basis of the subspace $\rat_k(A^TA, \vec b)$, we get
\begin{equation*}
\eta_k^2 = \min_{\substack{\vec w \in \rat_k(A^TA, \vec b) \\ \norm{\vec w}_2 = 1}} \vec w^T A^T P_k P_k^T A \vec w = \min_{\substack{\vec w \in \rat_k(A^TA, \vec b) \\ \norm{\vec w}_2 = 1}} \vec w^T A^T A \vec w,
\end{equation*}
where for the last equality we used the fact that $A\vec w \in \rat_k(AA^T, A \vec b)$ and therefore $P_k P_k^T A \vec w = A \vec w$. Finally, we obtain
\begin{equation*}
\eta_k^2 = \min_{\substack{\vec w \in \rat_k(A^TA, \vec b) \\ \norm{\vec w}_2 = 1}} \vec w^T A^T A \vec w \ge \min_{\substack{\vec w \in \mathbb{R}^n \\ \norm{\vec w}_2 = 1}} \vec w^T A^T A \vec w = \lambda_{\min} (A^TA) = \sigma_n^2.
\end{equation*}
With the same procedure, we also get
\begin{equation*}
\eta_1^2 = \max_{\substack{\vec w \in \rat_k(A^TA, \vec b) \\ \norm{\vec w}_2 = 1}} \vec w^T A^T A \vec w \le \max_{\substack{\vec w \in \mathbb{R}^n \\ \norm{\vec w}_2 = 1}} \vec w^T A^T A \vec w = \lambda_{\max} (A^TA) = \sigma_1^2.
\end{equation*}
\end{proof}
Observe that if $A \in \mathbb{R}^{m \times n}$ is rectangular with $n >m$, we always have $\sigma_n = 0$, and hence $B_k$ may have singular values arbitrarily close to $0$ even if $\sigma_{\min\{m,n\}}(A) > 0$.
This fact is going to affect the error bounds in Theorem~\ref{thm:polynomial-krylov-error-bound} and Theorem~\ref{thm:rational-krylov-error-bound}.
As an example, consider the $1 \times 2$ matrix $A = \begin{bmatrix}
1 & 0 \\
\end{bmatrix}$ and the vector $\vec b = \begin{bmatrix}
\epsilon \\
1
\end{bmatrix}$, for small $\epsilon > 0$. For $k = 1$, we have $Q_1 = \vec b /\norm{ \vec b}_2 = \frac{1}{\sqrt{1 + \epsilon^2}} \vec b$, and $P_1 = A \vec b / \norm{A \vec b}_2 = 1$. So we have $B_1 = P_1^T A Q_1 = \frac{\epsilon}{\sqrt{1 + \epsilon^2}} \in \mathbb{R}^{1 \times 1}$, and hence $B_1$ can have an arbitrarily small singular value even though $\sigma_1(A) = 1$.
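This computation is immediate to reproduce numerically (a quick check, with $\epsilon = 10^{-3}$):

```python
import numpy as np

eps = 1e-3
A = np.array([[1.0, 0.0]])                   # 1 x 2 matrix with sigma_1 = 1
b = np.array([eps, 1.0])
Q1 = (b / np.linalg.norm(b)).reshape(2, 1)   # basis of R_1(A^T A, b)
P1 = np.array([[1.0]])                       # A b / ||A b||_2
B1 = P1.T @ A @ Q1
print(B1[0, 0])                              # eps / sqrt(1 + eps^2), nearly 0
```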
\subsection{Polynomial error bounds}
\label{subsubsec:pol-err-bounds}
We first prove the error bounds in the polynomial case. Recall that a polynomial Krylov method computes an approximation to $\vec y = f^\diamond(A) \vec b$ from the subspace $\kryl_k(A A^T, A \vec b)$ as
\begin{equation}
\label{eqn:gmf-approx-polynomial--bounds}
\barvec y_k = P_k f^\diamond(B_k) Q_k^T \vec b = \norm{\vec b}_2 P_k f^\diamond(B_k) \vec e_1,
\end{equation}
where $B_k = P_k^T A Q_k$, and $P_k$ and $Q_k$ are the matrices computed in the Golub-Kahan bidiagonalization of $A$, satisfying $\vspan(P_k) = \kryl_k(A A^T, A \vec b)$ and $\vspan(Q_k) = \kryl_k(A^T A, \vec b)$.
A key observation for proving the bounds is the exactness of the approximation~\eqref{eqn:gmf-approx-polynomial--bounds} when $f$ is an odd polynomial, stated in the following lemma.
\begin{lemma}
\label{lemma:polynomial-krylov-exactness}
Assume that $f = p_{2k-1}$ is an odd polynomial of degree $\le 2k-1$. Then the approximation $\barvec y_k$ given by \eqref{eqn:gmf-approx-polynomial--bounds} is exact, i.e., it holds $\vec y = \barvec y_k$.
\end{lemma}
\begin{proof}
It is sufficient to prove this for $f(z) = z^{2\ell -1}$, $\ell = 1, \dots, k$. For $\ell = 1$, we have
\begin{equation*}
\vec y = A \vec b \quad \text{and} \quad \barvec y_k = P_k (P_k^T A Q_k) Q_k^T\vec b = A \vec b,
\end{equation*}
since $Q_kQ_k^T \vec b = \vec b$, and $P_k P_k^T A \vec b = A \vec b$, for all $k \ge 1$.
For $\ell > 1$, we have by Proposition~\ref{prop:gmf-polynomial-evaluation} that
\begin{equation}
\label{eqn:proof-polynomial-exactness-1}
\vec y = f^\diamond(A) \vec b = (AA^T)^{\ell-1} A \vec b
\end{equation}
and
\begin{equation}
\label{eqn:proof-polynomial-exactness-2}
\barvec y_k = P_k f^\diamond(B_k) Q_k^T \vec b = P_k (B_k B_k^T)^{\ell-1} B_k Q_k^T \vec b.
\end{equation}
Recalling the definition of $B_k$, we get
\begin{equation*}
\barvec y_k = P_k (B_kB_k^T)^{\ell-2} (P_k ^T A Q_k Q_k^T A^T P_k)(P_k^T A Q_k) Q_k^T \vec b.
\end{equation*}
We have $Q_k Q_k^T \vec b = \vec b$ since $\vec b \in \kryl_1(A^TA, \vec b)$, and similarly $P_kP_k^T A \vec b = A \vec b$ since $ A \vec b \in \kryl_1(AA^T, A \vec b)$. Hence, letting $\vec b_2 := A^TA \vec b \in \kryl_2(A^TA, \vec b)$, we obtain
\begin{equation*}
\barvec y_k = P_k (B_k B_k^T)^{\ell-2} (P_k^T A Q_k) Q_k^T \vec b_2,
\end{equation*}
which is the same situation as before, with $\ell$ replaced by $\ell-1$, and $\vec b \in \kryl_1(A^TA, \vec b)$ replaced by $\vec b_2 \in \kryl_2(A^TA, \vec b)$.
Since $k \ge \ell$, we can repeat the above procedure until we are left with
\begin{equation*}
\barvec y_k = P_k (P_k^T A Q_k) Q_k^T \vec b_\ell, \qquad \text{where} \quad \vec b_\ell = (A^T A)^{\ell-1} \vec b \in \kryl_\ell(A^TA, \vec b).
\end{equation*}
Then, because $k \ge \ell$, we have that $Q_k Q_k^T \vec b_\ell = \vec b_\ell$, and likewise $P_kP_k^T A \vec b_\ell = A \vec b_\ell$, since $A \vec b_\ell \in \kryl_\ell(AA^T, A \vec b)$, so we obtain $\barvec y_k = A (A^TA)^{\ell-1} \vec b$. By comparing with \eqref{eqn:proof-polynomial-exactness-1}, we finally get $\vec y = \barvec y_k$.
\end{proof}
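The exactness property is easy to observe in floating point arithmetic. In the sketch below (ours; the test matrix and sizes are arbitrary) the Golub-Kahan approximation with $k = 3$ reproduces $f^\diamond(A)\vec b$ for the odd monomial $f(z) = z^5$ up to rounding errors:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization (same recurrence as in
    the text; reorthogonalization omitted)."""
    m, n = A.shape
    P, Q, B = np.zeros((m, k)), np.zeros((n, k)), np.zeros((k, k))
    Q[:, 0] = b / np.linalg.norm(b)
    beta, p_old = 0.0, np.zeros(m)
    for j in range(k):
        r = A @ Q[:, j] - beta * p_old
        alpha = np.linalg.norm(r)
        P[:, j] = r / alpha
        B[j, j] = alpha
        s = A.T @ P[:, j] - alpha * Q[:, j]
        beta = np.linalg.norm(s)
        p_old = P[:, j]
        if j + 1 < k:
            B[j, j + 1] = beta
            Q[:, j + 1] = s / beta
    return P, Q, B

rng = np.random.default_rng(7)
A = rng.standard_normal((10, 6))
b = rng.standard_normal(6)
k, ell = 3, 3                      # f(z) = z^(2*ell-1) = z^5, odd of degree 2k-1
P, Q, B = golub_kahan(A, b, k)
U, s, Vt = np.linalg.svd(B)
yk = np.linalg.norm(b) * P @ (U @ (s**5 * Vt[:, 0]))     # ||b|| P_k f^d(B_k) e_1
y = np.linalg.matrix_power(A @ A.T, ell - 1) @ (A @ b)   # f^diamond(A) b
print(np.linalg.norm(yk - y) / np.linalg.norm(y))        # close to machine precision
```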
Using Lemma~\ref{lemma:polynomial-krylov-exactness}, we can prove the following theorem.
\begin{theorem}
\label{thm:polynomial-krylov-error-bound}
Let $A \in \mathbb{R}^{m \times n}$, and let $\sigma_1$, $\sigma_n$ and $\sigma_m$ be the first, $n$-th and $m$-th singular value of $A$, respectively. Let $\barvec y_k$ be the approximation to $\vec y = f^\diamond(A) \vec b$ given by \eqref{eqn:gmf-approx-polynomial--bounds}. Then the following inequality holds:
\begin{equation}
\label{eqn:gmf-polynomial-bound-1}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm{\vec b}_2 \min_{p \in \poly_{k-1}} \norm{ f(z) - p(z^2)z }_{\infty, [\sigma_n, \sigma_1]}.
\end{equation}
Moreover, if $A$ is square with $\sigma_m = \sigma_n > 0$, or if $\displaystyle\lim_{z \to 0}\tfrac{f(z)}{z} = 0$, it also holds that
\begin{equation}
\label{eqn:gmf-polynomial-bound-2}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm {A \vec b}_2 \min_{p \in \poly_{k-1}} \norm{f(z)/z - p(z^2)}_{\infty, [\sigma_{\max\{m,n\}}, \sigma_1]}.
\end{equation}
\end{theorem}
\begin{proof}
Let $p$ be a polynomial of degree $\le k-1$. Then $p_{2k-1}(z) = p(z^2) z$ is an odd polynomial of degree $\le 2k-1$, and by Lemma~\ref{lemma:polynomial-krylov-exactness} we have
\begin{equation}
\label{eqn:proof-polynomial-bound-1}
p^\diamond_{2k-1}(A) \vec b = P_k p^\diamond_{2k-1}(B_k) Q_k^T \vec b.
\end{equation}
By adding and subtracting the quantity in~\eqref{eqn:proof-polynomial-bound-1} to $f^\diamond(A) \vec b - \barvec y_k$, we get
\begin{equation}
\label{eqn:proof-polynomial-bound-2}
f^\diamond(A) \vec b - \barvec y_k = [f^\diamond(A) - p_{2k-1}^\diamond(A)] \vec b - P_k[f^\diamond(B_k) - p_{2k-1}^\diamond(B_k)]Q_k^T \vec b.
\end{equation}
By invariance of the $2$-norm under unitary transformations, we have
\begin{equation*}
\norm {f^\diamond(A) - p^\diamond_{2k-1}(A) }_2 = \norm{f - p_{2k-1}}_{\infty, \sigma_\text{sing}(A)} \le \norm{f - p_{2k-1}}_{\infty, [\sigma_{\min\{m,n\}}, \sigma_1]},
\end{equation*}
and similarly, by Lemma~\ref{lemma:interlacing-singular-values} it holds
\begin{equation*}
\norm {f^\diamond(B_k) - p^\diamond_{2k-1}(B_k) }_2 \le \norm{f - p_{2k-1}}_{\infty, [\sigma_n, \sigma_1]}.
\end{equation*}
Combining the above inequalities with \eqref{eqn:proof-polynomial-bound-2}, we get
\begin{equation*}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm{\vec b}_2 \norm{f - p_{2k-1}}_{\infty, [\sigma_n, \sigma_1]},
\end{equation*}
and by taking the minimum over $p \in \poly_{k-1}$ we obtain \eqref{eqn:gmf-polynomial-bound-1}.
To prove \eqref{eqn:gmf-polynomial-bound-2}, recall that if $\sigma_m > 0$ or $\displaystyle \lim_{z \to 0} \tfrac{f(z)}{z} = 0$, by Proposition~\ref{prop:gmf-general-equivalence} we have $f^\diamond(A) = g(AA^T)A$, where $g(z) = f(\sqrt{z})/\sqrt{z}$, and similarly, if $\sigma_n > 0$ or $\displaystyle \lim_{z \to 0} \tfrac{f(z)}{z} = 0$ it holds $f^\diamond(B_k) = g(B_k B_k^T)B_k$.
Therefore, by also using Proposition~\ref{prop:gmf-polynomial-evaluation}, we can rewrite \eqref{eqn:proof-polynomial-bound-2} in the form
\begin{equation*}
f^\diamond(A) \vec b - \barvec y_k = [ g(AA^T) - p(AA^T) ] A \vec b - P_k [g(B_k B_k^T) - p(B_k B_k^T)] B_k Q_k^T \vec b.
\end{equation*}
Given that the eigenvalues of $B_k B_k^T$ are the squares of the singular values of $B_k$, with a similar argument as before we obtain
\begin{align*}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 & \le \norm{A \vec b}_2 \left(\norm{g - p}_{\infty, [\sigma_m^2, \sigma_1^2]} + \norm{g - p}_{\infty, [\sigma_n^2, \sigma_1^2]} \right) \\
& \le 2 \norm{A \vec b}_2 \norm*{f(z)/z - p(z^2)}_{\infty, [\sigma_{\max\{m,n\}}, \sigma_1]}.
\end{align*}
As before, \eqref{eqn:gmf-polynomial-bound-2} follows by taking the minimum over $p \in \poly_{k-1}$.
\end{proof}
\begin{remark}
\label{rem:rectang-discuss-post-poly-bound}
Observe that if the matrix $A$ is not square, we have $\sigma_{\max\{m,n\}} = 0$, and hence the bound~\eqref{eqn:gmf-polynomial-bound-2} always involves a polynomial approximation over the whole interval $[0, \sigma_1]$, even when $\sigma_{\min\{m,n\}} > 0$. If $A \in \mathbb{R}^{m \times n}$ is rectangular with $m < n$, then the bound~\eqref{eqn:gmf-polynomial-bound-1} also involves the whole interval $[0, \sigma_1]$.
A possible strategy to overcome this issue is to use Proposition~\ref{prop:link-gmf-a-and-gmf-at} and write
\begin{equation*}
\vec y = f^\diamond(A) \vec b = (A^+)^T f^\diamond(A^T) A \vec b.
\end{equation*}
The vector $\vec w = f^\diamond(A^T) A \vec b$ can be approximated using a Krylov method on $A^T$, and then $\vec y$ can be recovered by solving the least squares problem
\begin{equation}
\label{eqn:gmf-approx-transp}
\vec y = (A^+)^T \vec w = \arg\min_{\vec y} \norm{A^T \vec y - \vec w}_2.
\end{equation}
By rewriting the problem in this form, if $m < n$ and $\sigma_m > 0$ we obtain a bound for the approximation of $\vec w$ that involves the smaller interval $[\sigma_m, \sigma_1]$, and this in turn translates into a bound for the approximation of $\vec y$.
\end{remark}
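As a sanity check, the identity $\vec y = (A^+)^T f^\diamond(A^T) A \vec b$ used in this remark can be verified on a small dense example (a NumPy sketch; the helper \texttt{gmf} evaluates $f^\diamond$ via the thin SVD and is ours, not a library routine):

```python
import numpy as np

def gmf(A, f, b):
    # Dense evaluation of f^{diamond}(A) b via the thin SVD,
    # f^{diamond}(A) = U f(Sigma) V^T.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ (f(s) * (Vt @ b))

rng = np.random.default_rng(0)
m, n = 40, 60                       # rectangular, m < n
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)

y = gmf(A, np.sqrt, b)              # direct evaluation

# Transposed strategy: w = f^{diamond}(A^T) A b, then recover y as the
# least squares solution of A^T y = w (A^T has full column rank here,
# so the least squares solution is unique and equals (A^+)^T w).
w = gmf(A.T, np.sqrt, A @ b)
y2, *_ = np.linalg.lstsq(A.T, w, rcond=None)
```

The two vectors \texttt{y} and \texttt{y2} agree up to roundoff; in a practical method, the dense evaluation of $f^\diamond(A^T) A \vec b$ would of course be replaced by the Krylov approximation.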
The bound \eqref{eqn:gmf-polynomial-bound-1} can be manipulated to obtain a more explicit estimate. Assume that $\sigma_n > 0$, and let $I = [\sigma_n, \sigma_1]$. The polynomial $p(z^2)z$ is odd, and we may assume that $f$ is also odd, since only the values of $f$ on the singular values of $A$ matter and $f$ can therefore be replaced by its odd extension; we thus have
\begin{align*}
\min_{p \in \poly_{k-1}}\norm{f(z) - p(z^2)z}_{\infty, I} & = \min_{p \in \poly_{k-1}}\norm{f(z) - p(z^2)z}_{\infty, (-I) \cup I} \\
& = \min_{q \in \poly_{2k-1}}\norm{f(z) - q(z)}_{\infty, (-I) \cup I},
\end{align*}
where we used the fact that the polynomial of best uniform approximation on the symmetric set $(-I) \cup I$ for an odd function is itself odd (if $q$ is a best approximant, then so is $-q(-z)$, and hence so is their average, which is an odd polynomial).
Bounds on the asymptotic rate of convergence for the polynomial approximation of a function on the union of disjoint intervals $(-I) \cup I$ have been developed in~\cite{ChuiHasson83}.
Using \cite[Theorem~8.2]{Trefethen13} in the proof of \cite[Theorem~1]{ChuiHasson83} to get explicit inequalities instead of asymptotic bounds, we obtain the following quantitative version of \cite[Theorem~1]{ChuiHasson83}.
We denote by $E_\rho$ the ellipse with foci at $\pm 1$ and vertices at $\pm \frac{1}{2}(\rho + \frac{1}{\rho})$, and by $\tilde E_\rho$ its image under the linear function that maps $[-1, 1]$ to the interval $[a^2, b^2]$. The ellipse $\tilde E_\rho$ has foci at $a^2$ and $b^2$, and vertices at $\frac{1}{2}(a^2 + b^2) \pm \frac{1}{4}(\rho + \frac{1}{\rho})(b^2 - a^2)$.
\begin{theorem}
\label{thm:chui-hasson-quantitative}
Let $0 < a < b$ and let $I = [a, b]$. Assume that $f|_{-I}$ and $f|_{I}$ are, respectively, the restrictions of a function $f_1$ analytic in the left half-plane $\Real(z) < 0$, and of a function $f_2$ analytic in the right half-plane $\Real(z) > 0$. Then, for all $k > 0$ and $1 < \rho \le \dfrac{b+a}{b-a}$ it holds
\begin{equation}
\label{eqn:chui-hasson-bound-quantitative}
\min_{p \in \poly_{2k-1}}\norm{f(z) - p(z)}_{\infty, (-I) \cup I} \le C \frac{\rho}{\rho - 1} \rho^{-k},
\end{equation}
where $C = C(\rho) = M_1 + M_2 + \frac{1}{a}(N_1 + N_2)$, with
\begin{align*}
M_1 & = \max_{-z \in \tilde E_\rho} \abs{f_1(\sqrt{-z})}, & N_1 & = \max_{-z \in \tilde E_\rho} \abs{f_1(\sqrt{-z})/\sqrt{-z}}, \\
M_2 & = \max_{z \in \tilde E_\rho} \abs{f_2(\sqrt{z})}, & N_2 & = \max_{z \in \tilde E_\rho} \abs{f_2(\sqrt{z})/\sqrt{z}},
\end{align*}
where $\tilde E_\rho$ denotes the ellipse defined above.
\end{theorem}
\begin{remark}
If we take $\rho = \dfrac{b+a}{b-a}$, the ellipse $\tilde E_\rho$ has a vertex at $0$. In this case, depending on the function $f$, we may have $N_i = + \infty$ and hence $C = +\infty$. In such a situation the bound only makes sense for $\rho < \dfrac{b+a}{b-a}$.
\end{remark}
Assuming that $\sigma_n > 0$, by plugging \eqref{eqn:chui-hasson-bound-quantitative} in the bound \eqref{eqn:gmf-polynomial-bound-1}, we get
\begin{equation}
\label{eqn:gmf-polynomial-bound-chui-hasson}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 C \norm{\vec b}_2 \frac{\rho}{\rho - 1} \rho^{-k}, \qquad \text{for }1 < \rho \le \frac{\sigma_1 + \sigma_n}{\sigma_1 - \sigma_n},
\end{equation}
where $C$ is as defined in Theorem~\ref{thm:chui-hasson-quantitative}.
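Before moving on, it may help to see the approximation $\barvec y_k = P_k f^\diamond(B_k) Q_k^T \vec b$ in code. The following is a minimal NumPy sketch (ours, not a library routine) in which $P_k$, $Q_k$ and $B_k$ are produced by Golub--Kahan bidiagonalization with full reorthogonalization, and $f^\diamond(B_k)$ is evaluated densely via the SVD:

```python
import numpy as np

def gmf_dense(M, f, v):
    # f^{diamond}(M) v via the SVD: f^{diamond}(M) = U f(Sigma) V^T
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (f(s) * (Vt @ v))

def gmf_lanczos(A, f, b, k):
    # k steps of Golub-Kahan bidiagonalization of A with starting
    # vector b (full reorthogonalization), followed by the projected
    # approximation y_k = P_k f^{diamond}(B_k) Q_k^T b.
    m, n = A.shape
    P, Q = np.zeros((m, k)), np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(max(k - 1, 1))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        p = A @ Q[:, j]
        p -= P[:, :j] @ (P[:, :j].T @ p)            # reorthogonalize
        alpha[j] = np.linalg.norm(p)
        P[:, j] = p / alpha[j]
        if j < k - 1:
            q = A.T @ P[:, j]
            q -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ q)
            beta[j] = np.linalg.norm(q)
            Q[:, j + 1] = q / beta[j]
    B = np.diag(alpha) + np.diag(beta[:k - 1], 1)   # B_k = P_k^T A Q_k
    e1 = np.zeros(k)
    e1[0] = np.linalg.norm(b)                       # Q_k^T b
    return P @ gmf_dense(B, f, e1)

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 200))
b = rng.standard_normal(200)
errs = [np.linalg.norm(gmf_dense(A, np.sqrt, b) - gmf_lanczos(A, np.sqrt, b, k))
        for k in (5, 10, 20)]                       # errors decrease with k
```

By Lemma~\ref{lemma:polynomial-krylov-exactness}, the approximation reproduces $f^\diamond(A)\vec b$ exactly for odd polynomials of degree at most $2k-1$, which provides a convenient correctness test for an implementation.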
\subsection{Rational error bounds}
\label{subsubsec:rat-err-bounds}
Next, we prove error bounds analogous to those of the previous section for the rational approximation \eqref{eqn:gmf-approx-rational}. The strategy of the proof is the same as in the polynomial case, although the proof itself is a bit more technical. We start by stating the result analogous to Theorem \ref{thm:polynomial-krylov-error-bound}. Recall that the denominator of the rational Krylov space $\rat_k(AA^T, A \vec b)$ is the polynomial $q_{k-1}(z) = \displaystyle\prod_{j = 1}^{k-1} (1 - z/\xi_j)$, where $\{ \xi_j \}_{j \ge 1}$ is a sequence of poles in $(\mathbb{C} \cup \{\infty\}) \setminus \sigma(AA^T)$. For convenience, in this section we use as denominator for $\rat_k(AA^T, A \vec b)$ the polynomial $\tilde q_{k-1}(z) = \displaystyle\prod_{j = 1}^{k-1} (z - \xi_j)$. This polynomial differs from $q_{k-1}$ only by a multiplicative constant, and hence it identifies the same rational Krylov subspace; the results of this section, such as Theorem~\ref{thm:rational-krylov-error-bound}, can be equivalently stated in terms of $q_{k-1}$ or $\tilde q_{k-1}$.
\begin{theorem}
\label{thm:rational-krylov-error-bound}
Let $A \in \mathbb{R}^{m \times n}$, and let $\sigma_1$, $\sigma_n$ and $\sigma_m$ be the first, $n$-th and $m$-th singular value of $A$, respectively. Let $\barvec y_k$ be the approximation to $\vec y = f^\diamond(A) \vec b$ from $\rat_k(AA^T, A \vec b)$ given by \eqref{eqn:gmf-approx-rational}. Then the following inequality holds:
\begin{equation}
\label{eqn:gmf-rational-bound-1}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm{\vec b}_2 \min_{p \in \poly_{k-1}} \norm{ f(z) - q_{k-1}(z^2)^{-1} p(z^2)z }_{\infty, [\sigma_n, \sigma_1]}.
\end{equation}
Moreover, if $A$ is square with $\sigma_m = \sigma_n > 0$, or if $\displaystyle \lim_{z \to 0} \tfrac{f(z)}{z} = 0$, it also holds that
\begin{equation}
\label{eqn:gmf-rational-bound-2}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm {A \vec b}_2 \min_{p \in \poly_{k-1}} \norm{f(z)/z - q_{k-1}(z^2)^{-1} p(z^2)}_{\infty, [\sigma_{\max\{m,n\}}, \sigma_1]}.
\end{equation}
\end{theorem}
Note that the same issues discussed after Theorem~\ref{thm:polynomial-krylov-error-bound} in the case of rectangular matrices also arise in the rational case, and the same approach proposed in Remark~\ref{rem:rectang-discuss-post-poly-bound} can be used to address them.
Similarly to the polynomial case, the bound \eqref{eqn:gmf-rational-bound-1} can be rewritten by exploiting the fact that $f$ can be assumed to be odd and hence that the best approximant on a symmetric interval is odd, yielding
\begin{equation}
\label{eqn:gmf-rational-bound--2-intervals}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm{\vec b}_2 \min_{p \in \poly_{2k-1}} \norm{f(z) - q_{k-1}(z^2)^{-1} p(z)}_{\infty, (-I) \cup I},
\end{equation}
where again $I = [\sigma_n, \sigma_1]$. However, rational approximation on disjoint intervals is a complicated problem, and hence this formulation might be less useful in practice.
A more practical way to rewrite the bound~\eqref{eqn:gmf-rational-bound-1} is the following:
\begin{align}
\nonumber
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 & \le 2 \norm{\vec b}_2 \min_{p \in \poly_{k-1}} \norm{ f(z) - q_{k-1}(z^2)^{-1} p(z^2)z }_{\infty, [\sigma_n, \sigma_1]} \\
\nonumber
& = 2 \norm{\vec b}_2 \min_{p \in \poly_{k-1}} \norm{ \sqrt{z} \big(f(\sqrt{z})/\sqrt{z} - q_{k-1}(z)^{-1}p(z)\big) }_{\infty, [\sigma_n^2, \sigma_1^2]} \\
\label{eqn:gmf-rational-bound--practical}
& \le 2 \sigma_1 \norm{\vec b}_2 \min_{p \in \poly_{k-1}} \norm{\big(f(\sqrt{z})/\sqrt{z} - q_{k-1}(z)^{-1}p(z)\big) }_{\infty, [\sigma_n^2, \sigma_1^2]}.
\end{align}
Although we get an additional factor $\sigma_1$, this bound relates the error with a uniform rational approximation problem on a real interval. This approximation problem is well-studied in the literature, and it is the same that appears when computing standard matrix functions with rational Krylov methods, so it can be a viable tool for selecting good poles.
Before we prove Theorem~\ref{thm:rational-krylov-error-bound}, we state and prove a few auxiliary lemmas. As in the polynomial case, the key point for the proof is the exactness of the rational approximation on functions of the form $f(z) = q_{k-1}(z^2)^{-1} p(z^2) z$, where $p$ is any polynomial in $\poly_{k-1}$. In order to prove this fact, we are going to use the following lemma, which allows us to replace $B_k$ with $A$ as we did in the proof of Lemma~\ref{lemma:polynomial-krylov-exactness}.
\begin{lemma}
\label{lemma:replace-Bk-with-A-rational}
Let $k \ge j+1$, let $\vec v \in \rat_j(A^TA, \vec b)$, and let the rational Krylov subspaces be defined by the denominator $\tilde q_{k-1}(z) = \displaystyle\prod_{i = 1}^{k-1}(z - \xi_i)$. Then:
\begin{align}
\label{eqn:replace-Bk-with-A-1}
(B_k^TB_k - \xi_j I)^{-1} Q_k^T \vec v & = Q_k^T (A^TA - \xi_{j} I )^{-1} \vec v \\
\label{eqn:replace-Bk-with-A-2}
B_k^T B_k (B_k^TB_k - \xi_j I)^{-1} Q_k^T \vec v & = Q_k^T A^TA (A^TA - \xi_j I )^{-1} \vec v
\end{align}
\end{lemma}
\begin{proof}
To prove \eqref{eqn:replace-Bk-with-A-1}, it is enough to show that
\begin{equation*}
(B_k^TB_k - \xi_j I) Q_k^T (A^TA - \xi_j I)^{-1} \vec v = Q_k^T \vec v.
\end{equation*}
Letting $\vec x = (A^TA - \xi_j I)^{-1} \vec v \in \rat_{j+1}(A^TA, \vec b)$ and observing that $Q_k Q_k^T \vec x = \vec x$, we have
\begin{align*}
(B_k^TB_k - \xi_j I) Q_k^T (A^TA - \xi_j I)^{-1} \vec v & = Q_k^T(A^T P_k P_k^TA - \xi_j I) Q_k Q_k^T \vec x \\
& = Q_k^T(A^T P_k P_k^TA \vec x - \xi_j \vec x) \\
& = Q_k^T(A^T A - \xi_j I )\vec x = Q_k^T \vec v,
\end{align*}
where we also used that $P_k P_k^T A \vec x = A \vec x$, since $A \vec x \in \rat_{j+1}(AA^T, A \vec b)$. This proves \eqref{eqn:replace-Bk-with-A-1}.
To prove \eqref{eqn:replace-Bk-with-A-2}, by using~\eqref{eqn:replace-Bk-with-A-1} we have
\begin{align*}
B_k^T B_k (B_k^TB_k - \xi_j I)^{-1} Q_k^T \vec v & = B_k^T B_k Q_k^T \vec x \\
& = Q_k^T A^T P_k P_k^T A Q_k Q_k^T \vec x.
\end{align*}
Now, observe that $\vec x \in \rat_{j+1}(A^TA, \vec b)$ and that $A \vec x \in \rat_{j+1}(AA^T, A \vec b)$, and hence it holds $Q_kQ_k^T \vec x = \vec x$ and $P_kP_k^T A \vec x = A \vec x$. With these facts, we get
\begin{equation*}
B_k^T B_k (B_k^TB_k - \xi_j I)^{-1} Q_k^T \vec v = Q_k^T A^T A \vec x,
\end{equation*}
which is equivalent to \eqref{eqn:replace-Bk-with-A-2}.
\end{proof}
\begin{lemma}
\label{lemma:rational-krylov-exactness}
Assume that $f$ is of the form $f(z) = q_{k-1}(z^2)^{-1} p(z^2)z$, where $p \in \poly_{k-1}$. Then the approximation $\barvec y_k$ from the rational Krylov subspace $\rat_k(AA^T, A \vec b)$ given by \eqref{eqn:gmf-approx-rational} is exact, i.e., it holds $\vec y = \barvec y_k$.
\end{lemma}
\begin{proof}
It is sufficient to show that $\vec y = \barvec y_k$ for $f(z) = \dfrac{z^{2\ell - 1}}{\tilde q_{k-1}(z^2)}$, for $\ell = 1, \dots, k$. By Corollary \ref{cor:gmf-rational-evaluation}, we have
\begin{equation*}
\vec y = f^\diamond(A) \vec b = A (A^TA)^{\ell - 1} \tilde q_{k-1}(A^TA)^{-1} \vec b.
\end{equation*}
Similarly, we have
\begin{align*}
\barvec y_k & = P_k f^\diamond(B_k) Q_k^T \vec b = P_k B_k (B_k^T B_k)^{\ell - 1} \tilde q_{k-1}(B_k^T B_k)^{-1} Q_k^T \vec b \\
& = P_k B_k \prod_{j=\ell}^{k-1} (B_k^T B_k - \xi_j I)^{-1} \prod_{j = 1}^{\ell-1}\left [(B_k^T B_k)(B_k^T B_k - \xi_j I)^{-1}\right ] Q_k^T \vec b.
\end{align*}
Now, define the vectors $\vec t_m \in \mathbb{C}^k$ as
\begin{equation*}
\vec t_m = \begin{cases}
\displaystyle\prod_{j = 1}^{m-1}\left [(B_k^T B_k)(B_k^T B_k - \xi_j I)^{-1}\right ] Q_k^T \vec b & \text{if $1 \le m \le \ell$,} \\
\displaystyle\prod_{j=\ell}^{m-1} (B_k^T B_k - \xi_j I)^{-1}\prod_{j = 1}^{\ell-1}\left [(B_k^T B_k)(B_k^T B_k - \xi_j I)^{-1}\right ] Q_k^T \vec b & \text{if $\ell < m \le k$}.
\end{cases}
\end{equation*}
It is straightforward to see that $\vec t_{m+1} = B_k^T B_k (B_k^T B_k - \xi_m I)^{-1} \vec t_m$ for $1 \le m < \ell$, and that $\vec t_{m+1} = (B_k^T B_k - \xi_m I)^{-1} \vec t_m$ for $\ell \le m < k$.
By Lemma~\ref{lemma:replace-Bk-with-A-rational}, we have
\begin{equation*}
\vec t_2 = B_k^T B_k (B_k^TB_k - \xi_1 I)^{-1} Q_k^T \vec b = Q_k^T A^TA (A^TA - \xi_1 I)^{-1} \vec b,
\end{equation*}
and hence $\vec t_2 = Q_k^T \vec b_2$, with $\vec b_2 = A^TA(A^TA - \xi_1 I)^{-1} \vec b \in \rat_2(A^TA, \vec b)$ (here we assumed $\ell \ge 2$; if $\ell = 1$, one uses \eqref{eqn:replace-Bk-with-A-1} instead).
By induction, assume that $\vec t_m = Q_k^T \vec b_m$, where $\vec b_m \in \rat_m(A^TA, \vec b)$. Then, if $m < \ell$, we have
\begin{equation*}
\vec t_{m+1} = B_k^TB_k(B_k^T B_k - \xi_m I)^{-1} Q_k^T \vec b_m = Q_k^T A^TA(A^TA - \xi_m I)^{-1}\vec b_m,
\end{equation*}
where we used \eqref{eqn:replace-Bk-with-A-2}. Defining $\vec b_{m+1} = A^TA(A^TA - \xi_m I)^{-1}\vec b_m$, we get that $\vec t_{m+1} = Q_k^T \vec b_{m+1}$, where $\vec b_{m+1} \in \rat_{m+1}(A^TA, \vec b)$. The case $\ell \le m < k$ is similar: by using \eqref{eqn:replace-Bk-with-A-1}, we find that $\vec t_{m+1} = Q_k^T \vec b_{m+1}$, with $\vec b_{m+1} = (A^TA - \xi_m I)^{-1} \vec b_m \in \rat_{m+1}(A^TA, \vec b)$.
Considering $m = k$, by the above discussion we have obtained that
\begin{align*}
\vec t_k = Q_k^T \vec b_k & = Q_k^T (A^TA)^{\ell-1} \displaystyle\prod_{j = 1}^{k-1}(A^TA - \xi_j I)^{-1} \vec b \\
& = Q_k^T (A^TA)^{\ell-1} \tilde q_{k-1}(A^TA)^{-1} \vec b,
\end{align*}
with $\vec b_k = (A^TA)^{\ell-1} \tilde q_{k-1}(A^TA)^{-1} \vec b \in \rat_k(A^TA, \vec b)$. Therefore we have
\begin{equation*}
\barvec y_k = P_k B_k \vec t_k = P_k P_k^T A Q_k Q_k^T \vec b_k = A \vec b_k,
\end{equation*}
where we used $Q_kQ_k^T \vec b_k = \vec b_k$ and $P_kP_k^T A \vec b_k = A \vec b_k$. Hence we conclude that $\vec y = \barvec y_k$.
\end{proof}
We now have all the elements to prove Theorem~\ref{thm:rational-krylov-error-bound}. The proof follows the same strategy as the proof of Theorem~\ref{thm:polynomial-krylov-error-bound}.
\begin{proof}[Proof of Theorem~\ref{thm:rational-krylov-error-bound}]
Let $p$ be a polynomial of degree $\le k-1$. Then the rational function $r(z) = \tilde q_{k-1}(z^2)^{-1}p(z^2) z$ satisfies the assumptions of Lemma~\ref{lemma:rational-krylov-exactness}, and hence we have
\begin{equation}
\label{eqn:proof-rational-bound-1}
r^\diamond(A) \vec b = P_k r^\diamond(B_k) Q_k^T \vec b.
\end{equation}
By adding and subtracting the quantity in~\eqref{eqn:proof-rational-bound-1} to $f^\diamond(A) \vec b - \barvec y_k$, we get
\begin{equation}
\label{eqn:proof-rational-bound-2}
f^\diamond(A) \vec b - \barvec y_k = [f^\diamond(A) - r^\diamond(A)] \vec b - P_k[f^\diamond(B_k) - r^\diamond(B_k)]Q_k^T \vec b.
\end{equation}
Since by Lemma~\ref{lemma:interlacing-singular-values} the nonzero singular values of $B_k$ are contained in the interval $[\sigma_n, \sigma_1]$, by invariance of the $2$-norm under unitary transformations, we have
\begin{align*}
\norm {f^\diamond(A) - r^\diamond(A) }_2 & \le \norm{f - r}_{\infty, [\sigma_{\min\{m,n\}}, \sigma_1]}, \\
\norm {f^\diamond(B_k) - r^\diamond(B_k) }_2 & \le \norm{f - r}_{\infty, [\sigma_n, \sigma_1]}.
\end{align*}
Combining the above inequalities with \eqref{eqn:proof-rational-bound-2}, we get
\begin{equation*}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm{\vec b}_2 \norm{f - r}_{\infty, [\sigma_n, \sigma_1]},
\end{equation*}
and by taking the minimum over $p \in \poly_{k-1}$ we obtain \eqref{eqn:gmf-rational-bound-1}.
To prove \eqref{eqn:gmf-rational-bound-2}, if $\sigma_n = \sigma_m > 0$ or $\displaystyle\lim_{z \to 0}\tfrac{f(z)}{z} = 0$, by Proposition~\ref{prop:gmf-general-equivalence} we can write $f^\diamond(A) = g(AA^T)A$ and $f^\diamond(B_k) = g(B_kB_k^T) B_k$, where $g(z) = f(\sqrt{z})/\sqrt{z}$.
Thus, using also Corollary~\ref{cor:gmf-rational-evaluation}, we can rewrite \eqref{eqn:proof-rational-bound-2} in the form
\begin{equation*}
f^\diamond(A) \vec b - \barvec y_k = h(AA^T) A \vec b - P_k h(B_kB_k^T) B_k Q_k^T \vec b,
\end{equation*}
where $h(z) = g(z) - \tilde q_{k-1}(z)^{-1}p(z)$.
Given that the eigenvalues of $B_k B_k^T$ are the squares of the singular values of $B_k$, using Lemma~\ref{lemma:interlacing-singular-values} and proceeding as above we obtain
\begin{align*}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 & \le \norm{A \vec b}_2 \left( \norm{h}_{\infty, [\sigma_m^2, \sigma_1^2]} + \norm{h}_{\infty, [\sigma_n^2, \sigma_1^2]} \right) \\
& = 2 \norm{A \vec b}_2 \norm*{f(z)/z - \tilde q_{k-1}(z^2)^{-1} p(z^2)}_{\infty, [\sigma_{\max\{m,n\}}, \sigma_1]}.
\end{align*}
As before, \eqref{eqn:gmf-rational-bound-2} follows by taking the minimum over $p \in \poly_{k-1}$.
\end{proof}
We mention that the results of this section can also be obtained by exploiting the link between GMFs of $A$ and standard functions of the matrix $\mathcal{A} = \begin{bmatrix}
0 & A \\
A^T & 0
\end{bmatrix}$.
Indeed, it was observed in \cite{arrigo2016computation} that for an odd function $f$ it holds
\begin{equation*}
f(\mathcal{A}) = \begin{bmatrix}
0 & f^\diamond(A) \\
f^\diamond(A^T) & 0
\end{bmatrix}.
\end{equation*}
If we define the orthogonal matrix $\mathcal{U}_{2k} = \begin{bmatrix}
P_k & 0 \\
0 & Q_k
\end{bmatrix}$ and the vector $\vec c = \begin{bmatrix}
0 \\
\vec b
\end{bmatrix} \in \mathbb{R}^{m+n}$, we have that $\mathcal{U}_{2k}^T \mathcal{A} \mathcal{U}_{2k} = \begin{bmatrix}
0 & B_k \\
B_k^T & 0
\end{bmatrix}$ and
\begin{align*}
f(\mathcal{A}) \vec c & = \begin{bmatrix}
f^\diamond(A) \vec b \\
0
\end{bmatrix}, \\
\mathcal{U}_{2k} f(\mathcal{U}_{2k}^T \mathcal{A} \mathcal{U}_{2k}) \mathcal{U}_{2k}^T \vec c & = \begin{bmatrix}
P_k f^\diamond(B_k) Q_k^T \vec b \\
0
\end{bmatrix}.
\end{align*}
Moreover, it can be proved that the columns of $\mathcal{U}_{2k}$ form an orthonormal basis of a rational Krylov subspace $\rat_{2k}(\mathcal{A}, \vec c)$ whose poles consist of a single pole at $\infty$ together with $\pm \theta_j$, $j = 1, \dots, k-1$, where $\theta_j^2 = \xi_j$ for each $j$. This fact is straightforward to prove in the polynomial case, where all the $\theta_j$ are equal to $\infty$.
An alternative derivation of the error bounds in Theorem \ref{thm:polynomial-krylov-error-bound} and Theorem \ref{thm:rational-krylov-error-bound} could then be obtained by combining the above fact with Lemma \ref{lemma:interlacing-singular-values} and error bounds concerning rational Krylov approximation of standard matrix functions (see, e.g., \cite[Corollary~3.4]{Guettel13}).
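The block identity above can be verified directly on a small dense example (a NumPy check of $f(\mathcal{A})\vec c$ against $f^\diamond(A)\vec b$ for an odd function, with both quantities evaluated via dense factorizations):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)
f = np.sin                          # an odd function

# f^{diamond}(A) b via the thin SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
y = U @ (f(s) * (Vt @ b))

# f(cal A) c via the eigendecomposition of the symmetric block matrix
calA = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])
w, X = np.linalg.eigh(calA)
c = np.concatenate([np.zeros(m), b])
fc = X @ (f(w) * (X.T @ c))         # upper block equals y, lower block is 0
```

Note that the oddness of $f$ is essential here: the $|m - n|$ zero eigenvalues of $\mathcal{A}$ contribute $f(0) = 0$, so the off-diagonal block structure is preserved.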
\subsection{An optimal pole for the Shift-and-Invert method}
\label{subsec:optimal-pole-SI}
In this section we use the error bounds in Theorem~\ref{thm:rational-krylov-error-bound} combined with a known result from approximation theory to find a pole that optimizes the bounds in the case of a single repeated pole (Shift-and-Invert method) located on the negative real line.
We consider the case of a nonsingular square matrix $A \in \mathbb{R}^{n \times n}$, with singular values contained in the interval $[\sigma_{\min}, \sigma_{\max}]$, with $\sigma_{\min} > 0$.
Note that with a change of variables the bound \eqref{eqn:gmf-rational-bound-2} can be rewritten in the form
\begin{equation*}
\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \le 2 \norm{A \vec b}_2 \min_{p \in \poly_{k-1}} \norm{ g(z) - q_{k-1}(z)^{-1} p(z) }_{\infty, [\sigma_{\min}^2, \sigma_{\max}^2]},
\end{equation*}
where the function $g$ is defined as $g(z) = f(\sqrt{z})/\sqrt{z}$.
In the case of a single repeated pole $\xi < 0$, we have $q_{k-1}(z) = (z - \xi)^{k-1}$ and
\begin{equation*}
\Big\{ (z - \xi)^{-k+1} p(z) \,:\, p \in \poly_{k-1} \Big\} = \Big\{ p((z-\xi)^{-1}) \,:\, p \in \poly_{k-1} \Big\}.
\end{equation*}
By defining $h(z) = g(z^{-1} + \xi)$, so that $g(z) = h((z - \xi)^{-1})$, we get
\begin{equation}
\label{eqn:min-polynomial--SI-pole}
\min_{p \in \poly_{k-1}} \norm{g(z) - (z - \xi)^{-k+1}p(z)}_{\infty, [\sigma_{\min}^2, \sigma_{\max}^2]} = \min_{p \in \poly_{k-1}} \norm{h(z) - p(z)}_{\infty, [\mu_\text{min}, \mu_\text{max}]},
\end{equation}
where $\mu_\text{min} := (\sigma_{\max}^2 - \xi)^{-1}$ and $\mu_\text{max} := (\sigma_{\min}^2 - \xi)^{-1}$. Notice that for $\xi < 0$ we indeed have $0 < \mu_\text{min} \le \mu_\text{max} \le (-\xi)^{-1}$.
The minimum in \eqref{eqn:min-polynomial--SI-pole} can be bounded using the following result in approximation theory, adapted from \cite[Proposition~3.1]{MoretNovati19}. Its proof relies on classical bounds for Faber series, see~\cite[Corollary~2.2]{Ellacott83}.
\begin{proposition}
\label{prop:polynomial-bound--SI-pole}
Let $\xi < 0$, and assume that $h(z) = g(z^{-1} + \xi)$ is analytic in the strip $0 < \Real z < (-\xi)^{-1}$ and continuous in $[0, (-\xi)^{-1}]$. Then, for any integer $k \ge 1$ it holds
\begin{equation}
\label{eqn:SI-bound--SI-pole}
\min_{p \in \poly_{k-1}} \norm{h(z) - p(z)}_{\infty, [\mu_{\min}, \mu_{\max}]} \le 2M \frac{\rho^k}{1-\rho},
\end{equation}
where $M = \norm{h(z)}_{\infty, [0, (-\xi)^{-1}]}$ and
\begin{equation*}
\rho = \max \left\{ \frac{\sqrt{\sigma_{\max}^2 - \xi} - \sqrt{\sigma_{\min}^2 - \xi}}{\sqrt{\sigma_{\max}^2 - \xi} + \sqrt{\sigma_{\min}^2 - \xi}}\, , \, \frac{\sigma_{\max} \sqrt{\sigma_{\min}^2 - \xi} - \sigma_{\min}\sqrt{\sigma_{\max}^2 - \xi}}{\sigma_{\max} \sqrt{\sigma_{\min}^2 - \xi} + \sigma_{\min}\sqrt{\sigma_{\max}^2 - \xi}} \right\}.
\end{equation*}
\end{proposition}
It follows from the analysis after \cite[Proposition~3.1]{MoretNovati19} that the bound \eqref{eqn:SI-bound--SI-pole} is optimized by choosing $\xi = -\sigma_{\min} \sigma_{\max}$. This choice leads to the following bound for the Shift-and-Invert iterates:
\begin{equation}
\label{eqn:SI-bound-chosen-pole--SI-pole}
\norm{\vec y - \barvec y_k}_2 \le 2 \norm{\vec b}_2 M \sqrt{\frac{\sigma_\text{max}}{\sigma_\text{min\phantom i\!}}} \exp\Big(-2k \sqrt \frac{\sigma_\text{min}}{\sigma_\text{max\phantom i\!}}\Big).
\end{equation}
We remark that the original result (see \cite[equation~(3.4)]{MoretNovati19}) exhibited an error decaying like $\displaystyle\exp\big(-2k \sqrt[4]{\tfrac{a}{b}}\big)$ for a symmetric matrix $A$ with spectrum in $[a, b] \subset (0, +\infty)$, when using the Shift-and-Invert method with the optimal pole $\xi = -\sqrt{a b}$. The fact that in \eqref{eqn:SI-bound-chosen-pole--SI-pole} we have $\sqrt{\dfrac{\sigma_\text{min}}{\sigma_\text{max}}}$ instead of $\sqrt[4]{\dfrac{\sigma_\text{min}}{\sigma_\text{max}}}$ is not surprising, since we are essentially applying the result from \cite{MoretNovati19} to the matrix $A^TA$, whose spectrum is contained in $[\sigma_\text{min}^2, \sigma_\text{max}^2]$.
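For the optimal pole $\xi = -\sigma_\text{min}\sigma_\text{max}$, the two expressions inside the maximum defining $\rho$ in Proposition~\ref{prop:polynomial-bound--SI-pole} coincide and simplify to $(\sqrt{\sigma_\text{max}} - \sqrt{\sigma_\text{min}})/(\sqrt{\sigma_\text{max}} + \sqrt{\sigma_\text{min}}) \le \exp\big(-2\sqrt{\sigma_\text{min}/\sigma_\text{max}}\big)$, which is how the rate in \eqref{eqn:SI-bound-chosen-pole--SI-pole} arises. A quick numerical check (the values of $\sigma_\text{min}$ and $\sigma_\text{max}$ are arbitrary):

```python
import numpy as np

smin, smax = 1e-1, 1e1           # illustrative singular value bounds
xi = -smin * smax                # optimal Shift-and-Invert pole

# The two expressions inside the max defining rho in the Proposition:
r1 = (np.sqrt(smax**2 - xi) - np.sqrt(smin**2 - xi)) / \
     (np.sqrt(smax**2 - xi) + np.sqrt(smin**2 - xi))
r2 = (smax * np.sqrt(smin**2 - xi) - smin * np.sqrt(smax**2 - xi)) / \
     (smax * np.sqrt(smin**2 - xi) + smin * np.sqrt(smax**2 - xi))
rho = max(r1, r2)

# For xi = -smin*smax both expressions collapse to the same value:
closed_form = (np.sqrt(smax) - np.sqrt(smin)) / (np.sqrt(smax) + np.sqrt(smin))
```

The collapse of the two expressions follows from $\sigma_\text{max}^2 - \xi = \sigma_\text{max}(\sigma_\text{max} + \sigma_\text{min})$ and $\sigma_\text{min}^2 - \xi = \sigma_\text{min}(\sigma_\text{min} + \sigma_\text{max})$.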
\section{Numerical results}
\label{sec:numerical}
In this section we present some numerical experiments with the purpose of illustrating the error bounds and comparing the different methods proposed in the previous sections. We test the methods on randomly generated matrices with a prescribed distribution of singular values, obtained by taking two random orthogonal matrices $U, V \in \mathbb{R}^{n \times n}$ and constructing $A = U \Sigma V^T$, where $\Sigma = \diag(\sigma_1, \dots, \sigma_n) \in \mathbb{R}^{n \times n}$.
The random orthogonal matrices are obtained by taking a matrix $B \in \mathbb{R}^{n \times n}$ with entries drawn from the normal distribution $\normal(0,1)$ and computing the QR factorization $B = QR$, normalized so that the diagonal entries of $R$ are nonnegative. With this normalization, $Q$ is a random orthogonal matrix from the Haar distribution, a natural uniform probability distribution on the manifold of $n \times n$ orthogonal matrices \cite{Stewart80}.
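This construction can be sketched as follows (a NumPy sketch; the column sign flip enforces the nonnegativity of the diagonal of $R$, and the singular value distribution \texttt{sigma} is only an example):

```python
import numpy as np

def haar_orthogonal(n, rng):
    # QR of a Gaussian matrix; flipping the sign of each column of Q
    # whose corresponding diagonal entry of R is negative makes the
    # diagonal of R nonnegative, so that Q is Haar-distributed.
    B = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(B)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
n = 200
U = haar_orthogonal(n, rng)
V = haar_orthogonal(n, rng)
sigma = np.geomspace(1e-1, 1e1, n)   # example prescribed singular values
A = (U * sigma) @ V.T                # A = U Sigma V^T
```

Without the sign normalization, the distribution of $Q$ depends on the sign conventions of the QR implementation and is not exactly Haar.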
The experiments were done with MATLAB, using the \texttt{rat\_krylov} function from the Rational Krylov Toolbox \cite{RKToolbox} for the implementation of the rational Arnoldi algorithm.
The plots display the relative 2-norm error $\norm{f^\diamond(A) \vec b - \barvec y_k}_2 \,/\, \norm{f^\diamond(A) \vec b}_2$, where $\barvec y_k$ is the approximation defined in \eqref{eqn:gmf-approx-polynomial} or \eqref{eqn:gmf-approx-rational}, depending on the Krylov method that was used.
\subsection{Error bounds}
We start by illustrating in Figure~\ref{fig:polynomial-convergence} the sharpness of the bound \eqref{eqn:gmf-polynomial-bound-chui-hasson} for the polynomial Krylov method. Under the assumptions of Theorem~\ref{thm:chui-hasson-quantitative}, the rate of convergence in the bound only depends on the interval $[\sigma_n, \sigma_1]$, and hence we can expect it to be pessimistic for most functions.
Indeed, for entire functions such as $\sinh(z)$ and $\sin(z)$ (see Figure~\ref{fig:polynomial-convergence}(b)) the convergence is much faster than the bound~\eqref{eqn:gmf-polynomial-bound-chui-hasson}; however, the bound can capture the asymptotic rate of convergence for certain functions with lower regularity, such as $\sqrt{z}$, for suitable singular value distributions (see Figure~\ref{fig:polynomial-convergence}(a)). This implies that, under the same assumptions as in Theorem~\ref{thm:chui-hasson-quantitative}, it is only possible to improve the multiplicative constant in the bound~\eqref{eqn:gmf-polynomial-bound-chui-hasson}. Note that in Figure~\ref{fig:polynomial-convergence} the multiplicative constant in the bound~\eqref{eqn:gmf-polynomial-bound-chui-hasson} was ignored for better visualization.
\begin{figure}
\makebox[\linewidth][c]{
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
x tick label style={/pgf/number format/.cd,%
scaled x ticks = false,
set thousands separator={},
fixed},
legend pos=north east,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=black, mark size=1pt, dashed] table {polybound.dat};
\addlegendentry{{Bound~(6.10)}}
\addplot[color=blue, forget plot] table {polyerr_sqrtzlogz.dat};
\addplot[only marks, color=blue,mark=*, mark size=1pt, each nth point=10, forget plot] table {polyerr_sqrtzlogz.dat};
\addplot[color=blue,mark=*, mark size=1pt, each nth point=1000] table {polyerr_sqrtzlogz.dat};
\addlegendentry{{$\sqrt{z}\log(z)$}}
\addplot[color=red, forget plot] table {polyerr_z_m025.dat};
\addplot[only marks, color=red,mark=square*, mark size=1pt, each nth point = 10, forget plot] table {polyerr_z_m025.dat};
\addplot[color=red,mark=square*, mark size=1pt, each nth point = 1000] table {polyerr_z_m025.dat};
\addlegendentry{{$z^{-1/4}$}}
\addplot[color=green, forget plot] table {polyerr_sqrtz.dat};
\addplot[only marks, color=green, mark=triangle*, mark size=1pt, each nth point = 10, forget plot] table {polyerr_sqrtz.dat};
\addplot[color=green, mark=triangle*, mark size=1pt, each nth point = 1000] table {polyerr_sqrtz.dat};
\addlegendentry{{$\sqrt{z}$}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
legend pos=north east,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue,mark=*, mark size=1pt] table {polyerr_sinh.dat};
\addlegendentry{{$\sinh(z)$}}
\addplot[color=red,mark=square*, mark size=1pt] table {polyerr_sin.dat};
\addlegendentry{{$\sin(z)$}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}}
\caption{Convergence of the polynomial Krylov method for the approximation of $f^\diamond(A) \vec b$, where $A$ is a $2000 \times 2000$ matrix whose singular values are Chebyshev points of the second kind for the interval $[10^{-1}, 10]$, and $\vec b$ is a random vector. Left: functions with an asymptotic convergence rate predicted by the bound~\eqref{eqn:gmf-polynomial-bound-chui-hasson}. Right: entire functions with fast convergence.}
\label{fig:polynomial-convergence}%
\end{figure}
In Figure~\ref{fig:rational-convergence} we compare the convergence of the rational Krylov methods and test the sharpness of the bounds~\eqref{eqn:gmf-rational-bound--2-intervals} and~\eqref{eqn:SI-bound-chosen-pole--SI-pole}. We use the Shift-and-Invert method with the pole $\xi = - \sigma_\text{min} \sigma_\text{max}$, the extended Krylov method \cite{DruskinKnizhnerman98}, which alternates poles at $\infty$ with poles at $0$, and a rational Krylov method with an asymptotically optimal pole sequence for Laplace-Stieltjes functions, developed in \cite{MasseiRobol20}.
The poles were selected using the interval $[\sigma_n^2, \sigma_1^2]$, with reference to the bound~\eqref{eqn:gmf-rational-bound--practical}. The function $f(z) = \sqrt{z}\log(1 + \sqrt{z})$ is such that the function
\begin{equation*}
\frac{f(\sqrt{z})}{\sqrt{z}} = \dfrac{\log(1 + \sqrt[4]{z})}{\sqrt[4]{z}}
\end{equation*}
is Laplace-Stieltjes, or equivalently, completely monotonic~\cite[Definition~1.3]{SSV-BernsteinFunctions}. This follows from~\cite[Theorem~3.7]{SSV-BernsteinFunctions} and the fact that $\log(1+z)/z$ is completely monotonic.
An upper approximation to the bound~\eqref{eqn:gmf-rational-bound--2-intervals} was evaluated using a quasi-optimal polynomial $p$, computed by replacing the uniform norm with the 2-norm on a discrete set of points in $(-I) \cup I$.
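This discretized minimization can be sketched as follows (our own NumPy helper, not the code used for the figures; a Chebyshev basis on $[-\sigma_1, \sigma_1]$ is used for numerical stability, and for simplicity we illustrate the polynomial case $q_{k-1} \equiv 1$ with the odd extension of $\sqrt{z}$):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def quasi_optimal_bound(f, qk, a, b, deg, npts=2000):
    # Upper bound on min_p || f(z) - qk(z^2)^{-1} p(z) ||_inf over
    # (-I) u I with I = [a, b]: the coefficients of p (degree <= deg,
    # Chebyshev basis on [-b, b]) are fitted by least squares on a
    # grid, and the uniform error of the fit on the grid is returned.
    zpos = np.linspace(a, b, npts)
    z = np.concatenate([-zpos[::-1], zpos])
    V = C.chebvander(z / b, deg) / qk(z**2)[:, None]
    coef, *_ = np.linalg.lstsq(V, f(z), rcond=None)
    return np.max(np.abs(f(z) - V @ coef))

# Polynomial case (qk = 1) for the odd extension of sqrt on I = [0.1, 10]:
# the quasi-optimal bound decreases as the degree 2k - 1 grows.
f = lambda z: np.sign(z) * np.sqrt(np.abs(z))
one = lambda t: np.ones_like(t)
errs = [quasi_optimal_bound(f, one, 0.1, 10.0, 2 * k - 1) for k in (5, 10, 20)]
```

Since the discrete least squares fit is not the best uniform approximant, the returned value only approximates the true minimum from above, which is the quantity needed for an upper bound.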
We can see in Figure~\ref{fig:rational-convergence} that the convergence of the rational Krylov method with asymptotically optimal poles closely follows the bound~\eqref{eqn:gmf-rational-bound--2-intervals}, and that the convergence rate of the Shift-and-Invert method is correctly predicted by the bound~\eqref{eqn:SI-bound-chosen-pole--SI-pole}. The convergence speed of the extended Krylov method is comparable to the one of the Shift-and-Invert method. Note that, as in the polynomial case, the bound~\eqref{eqn:SI-bound-chosen-pole--SI-pole} displayed in Figure~\ref{fig:rational-convergence} does not include the multiplicative constant.
\begin{figure}
\makebox[\linewidth][c]{
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
x tick label style={/pgf/number format/.cd,%
scaled x ticks = false,
set thousands separator={},
fixed},
legend pos=south west,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue, forget plot] table {EDS_2.dat};
\addplot[color=blue, mark=*, mark size=1pt] table {EDS_2.dat};
\addlegendentry{{Opt. Poles}}
\addplot[color=blue, mark size=1pt, dashed] table {ratbound_2.dat};
\addlegendentry{{Bound (6.13)}}
\addplot[color=red, forget plot] table {SI_2.dat};
\addplot[color=red, mark=square*, mark size=1pt] table {SI_2.dat};
\addlegendentry{{S\&I}}
\addplot[color=red, mark size=1pt, dashed] table {SIbound_2.dat};
\addlegendentry{{Bound (6.21)}}
\addplot[color=green, forget plot] table {ext_2.dat};
\addplot[color=green, mark=triangle*, mark size=1pt] table {ext_2.dat};
\addlegendentry{{Extended}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
legend pos=south west,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue, forget plot] table {EDS_1.dat};
\addplot[color=blue, mark=*, mark size=1pt] table {EDS_1.dat};
\addlegendentry{{Optimal}}
\addplot[color=blue, mark size=1pt, dashed] table {ratbound_1.dat};
\addlegendentry{{Bound (6.13)}}
\addplot[color=red, forget plot] table {SI_1.dat};
\addplot[color=red, mark=square*, mark size=1pt] table {SI_1.dat};
\addlegendentry{{S\&I}}
\addplot[color=red, mark size=1pt, dashed] table {SIbound_1.dat};
\addlegendentry{{Bound (6.21)}}
\addplot[color=green, forget plot] table {ext_1.dat};
\addplot[color=green, mark=triangle*, mark size=1pt] table {ext_1.dat};
\addlegendentry{{Extended}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}}
\caption{Convergence of different rational Krylov methods for the
approximation of $f^\diamond(A) \vec b$, where $A$ is a $2000 \times 2000$ matrix with logspaced singular values in the interval $[10^{-1}, 10]$ (left) or $[1, 10]$ (right), $f(z) = \sqrt{z} \log(1 + \sqrt{z})$, and $\vec b$ is a random vector.}
\label{fig:rational-convergence}
\end{figure}
\subsection{Rectangular case}
Next, we investigate the performance of the methods in the case of a rectangular matrix $A \in \mathbb{R}^{m \times n}$, and the effectiveness of the strategy proposed in Remark~\ref{rem:rectang-discuss-post-poly-bound} when $m < n$ to reduce the computation of $f^\diamond(A) \vec b$ to the computation of $f^\diamond(A^T) A \vec b$. We report in Figure~\ref{fig:transp-convergence} the convergence plots of the rational Krylov method with asymptotically optimal poles, for the functions $f(z) = \sqrt{z}$ and $f(z) = z\log(z)$. The convergence is similar for the function $z \log(z)$ (Figure~\ref{fig:transp-convergence}(b)), while there is a large benefit in using the alternative expression~\eqref{eqn:gmf-approx-transp} in the case of the function $f(z) = \sqrt{z}$. This is likely due to the fact that $\sqrt{z}$ has a large derivative close to zero, so roundoff errors in the smallest computed singular values of the matrix $B_k$ are greatly amplified when applying the function $f$. Indeed, we can see in Figure~\ref{fig:transp-convergence}(a) that it is not possible to get below a relative accuracy of $10^{-8}$ if we directly approximate $f^\diamond(A) \vec b$, while we can reach a relative error of about $10^{-13}$ if we use the connection with $f^\diamond(A^T) A \vec b$, since in this case the projected matrix $B_k$ has no singular values close to zero.
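The algebraic identity behind this reduction can be checked directly: writing $A = U\Sigma V^T$, one has $f^\diamond(A^T) A \vec b = V f(\Sigma)\Sigma V^T \vec b = A^T f^\diamond(A)\vec b$, so $f^\diamond(A)\vec b$ is the least-squares solution of $A^T \vec x = f^\diamond(A^T) A \vec b$. A minimal Python sketch (a dense SVD stands in for the Krylov approximation; the helper \texttt{gmf\_apply} is our own illustration):

```python
import numpy as np

def gmf_apply(A, f, b):
    # Reference computation of f^<>(A) b via a thin SVD
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ (f(s) * (Vt @ b))

rng = np.random.default_rng(1)
m, n = 30, 45                      # wide matrix, m < n
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)

# Step 1: compute c = f^<>(A^T) (A b) = V f(S) S V^T b
c = gmf_apply(A.T, np.sqrt, A @ b)
# Step 2: the least-squares solution of A^T x = c recovers x = f^<>(A) b,
# since A^T (U f(S) V^T b) = V S f(S) V^T b = c
x, *_ = np.linalg.lstsq(A.T, c, rcond=None)

assert np.allclose(x, gmf_apply(A, np.sqrt, b))
```

The benefit observed in Figure~\ref{fig:transp-convergence}(a) comes from the fact that, in the reformulated problem, the function is effectively applied to $f(\Sigma)\Sigma$ rather than $f(\Sigma)$, which tames the sensitivity near zero.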
\begin{figure}
\makebox[\linewidth][c]{
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
x tick label style={/pgf/number format/.cd,%
scaled x ticks = false,
set thousands separator={},
fixed},
legend pos=south west,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue, mark=*, mark size=1pt] table {rect_sqrtz.dat};
\addlegendentry{{$\sqrt{z}$}}
\addplot[color=red, mark=square*, mark size=1pt] table {rect_sqrtz_transp.dat};
\addlegendentry{{$\sqrt{z}$, $A^T$}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
legend pos=south west,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue, mark=*, mark size=1pt] table {rect_zlogz.dat};
\addlegendentry{{$z \log(z)$}}
\addplot[color=red, mark=square*, mark size=1pt] table {rect_zlogz_transp.dat};
\addlegendentry{{$z \log(z)$, $A^T$}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}}
\caption{Convergence of the rational Krylov methods with asymptotically optimal poles for
the approximation of $f^\diamond(A) \vec b$, where $A$ is a rectangular $1000 \times 1500$ matrix whose singular values are Chebyshev points of the second kind for the interval $[10^{-2}, 10]$.
The red line shows the convergence of the method described in Remark~\ref{rem:rectang-discuss-post-poly-bound}, which computes $f^\diamond(A) \vec b$ by first computing $f^\diamond(A^T) A \vec b$ and then solving a least squares problem.}
\label{fig:transp-convergence}
\end{figure}
\subsection{Finite precision issues}
In finite precision, one of the main practical problems of Krylov methods based on a short recurrence (such as, for instance, the Lanczos method) is the loss of orthogonality in the computed basis vectors. This phenomenon has been studied for the polynomial Lanczos case in \cite{paige1976error}. A brief study of the problem for the rational Lanczos case can be found in \cite{PPS21}.
As can be expected, the algorithm presented in Section~\ref{section:Rat-Golub_Kahan} also suffers from this numerical instability. However, our experiments show that this loss of orthogonality only slightly degrades the accuracy of the algorithm: if the poles are chosen to guarantee a moderate number of iterations for convergence, the difference between the approximations produced by the short-recurrence algorithm and by the variant with full orthogonalization remains rather small, and it stops growing after a few iterations (see Figure \ref{fig:lossOrthogonality}). This effect has already been studied in \cite{musco2018stability} for the approximation of the product between a standard matrix function and a vector by means of the polynomial Lanczos algorithm.
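The loss-of-orthogonality phenomenon is easy to reproduce already in the symmetric polynomial (Lanczos) case; the following sketch, unrelated to our actual implementation, compares the plain three-term recurrence with a variant that fully reorthogonalizes each new basis vector:

```python
import numpy as np

def lanczos(A, b, k, reorth=False):
    """k steps of Lanczos; optionally reorthogonalize each new vector
    against the whole computed basis (mimicking full orthogonalization)."""
    n = b.size
    Q = np.zeros((n, k + 1))
    Q[:, 0] = b / np.linalg.norm(b)
    beta = 0.0
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta * Q[:, j - 1]
        alpha = Q[:, j] @ w
        w -= alpha * Q[:, j]
        if reorth:
            for _ in range(2):      # two Gram-Schmidt passes ("twice is enough")
                w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta = np.linalg.norm(w)
        Q[:, j + 1] = w / beta
    return Q

n, k = 300, 80
A = np.diag(np.logspace(-1, 2, n))  # eigenvalues spread over three decades
b = np.random.default_rng(0).standard_normal(n)

Q_short = lanczos(A, b, k)
Q_full = lanczos(A, b, k, reorth=True)
loss_short = np.linalg.norm(np.eye(k + 1) - Q_short.T @ Q_short)
loss_full = np.linalg.norm(np.eye(k + 1) - Q_full.T @ Q_full)
# Orthogonality is lost once Ritz values converge, while full
# reorthogonalization keeps the basis orthonormal to machine precision
assert loss_full < 1e-10 < loss_short
```

As in Figure~\ref{fig:lossOrthogonality}, the relevant question is not whether orthogonality is lost (it is), but how much this affects the computed approximation.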
\begin{figure}
\makebox[\linewidth][c]{
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
x tick label style={/pgf/number format/.cd,%
scaled x ticks = false,
set thousands separator={},
fixed},
legend pos=north west,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue,mark=*, mark size=1pt] table {lossOrth.dat};
\addlegendentry{{$\norm{I_k-P_k^TP_k}_2$}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{.6\textwidth}
\begin{tikzpicture}
\begin{semilogyaxis}[
title = {},
xlabel = {Iteration},
ylabel = {Error},
legend pos=north east,
ymajorgrids=true,
grid style=dashed,legend style={font=\tiny}]
\addplot[color=blue,mark=*, mark size=1pt] table {ErrShortRec.dat};
\addlegendentry{{Short Rec.}}
\addplot[color=red,mark=square*, mark size=1pt] table {ErrFullOrth.dat};
\addlegendentry{{Full Orth.}}
\addplot[color=yellow,mark=triangle*, mark size=1pt] table {diffFullShort.dat};
\addlegendentry{{Diff. Short-Full}}
\end{semilogyaxis}
\end{tikzpicture}
\caption{}
\end{subfigure}}
\caption{Effects of the loss of orthogonality in the rational Golub-Kahan algorithm for the approximation of $f^\diamond(A) \vec b$, where $f(z)=\sqrt{z}$ and $A$ is a $2000 \times 2000$ matrix with logspaced singular values in the interval $[10^{-1}, 10^2]$, for the rational Krylov method with asymptotically optimal poles.
Left: loss of orthogonality when using the short recurrence. Right: comparison of the error in the approximation of $f^\diamond(A) \vec b$ when using the short recurrence or full orthogonalization of the basis vectors. The yellow curve shows the norm of the difference between the two approximations.}
\label{fig:lossOrthogonality}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have proposed the use of rational Krylov methods in the computation of the action of a generalized matrix function on a vector. We have developed an extension of the Golub-Kahan bidiagonalization to the rational case, that uses a short recurrence to compute the basis vectors of the rational Krylov subspace. We have proved error bounds for the computation of GMFs with polynomial and rational Krylov methods, that relate the error of approximating $f^\diamond(A)\vec b$ with the best uniform polynomial or rational approximation of the function $f$ on a real interval containing the singular values of $A$, and we have conducted experiments to investigate the sharpness of such bounds. The experiments we performed also show that rational Krylov methods are particularly effective compared to polynomial Krylov methods when the function $f$ or its derivatives have singularities close to the singular values of $A$.
\section*{Acknowledgements} The authors would like to thank Michele Benzi for his support and advice.
\FloatBarrier
\bibliographystyle{amsplain}
\section{Introduction} \label{sec:intro}
While detections of gravitational waves \cite[e.g.,][]{gw150914} have been made with ground-based interferometers that are sensitive to hertz-kilohertz gravitational waves, experiments that operate at lower frequencies have yet to identify a signal.
Pulsar timing array (PTA) experiments, which monitor and measure arrival times from millisecond pulsars (MSPs), have been established to search for signals in the nanohertz band.
This is the domain of the stochastic background from supermassive black hole binaries, which is expected to be the first gravitational-wave signal detected with PTAs~\citep{rosado2015properties}.
The background manifests as a temporally and spatially correlated process in the MSP arrival times.
The strain spectrum of such a background is predicted to have the power-law form
$h(f) = A (f/{\rm 1\, yr^{-1}})^{-2/3}$,
where $f$ is the gravitational-wave frequency and $A$ is the strain amplitude at $f=1~\text{yr}^{-1}$ \cite[][]{phinney_backgrounds}.
The amplitude, $A$, will depend on the demographics of the supermassive black hole population.
Astrophysical effects relating to supermassive binary black hole evolution may cause deviations from a power law \cite[e.g.,][]{ravi_gwb,sampson2015finalparsec,taylor2017env}.
A definitive detection of the gravitational-wave background requires the presence of specific spatial correlations in the arrival times~\citep{hellingsdowns1983}.
Other processes can produce signals with similar temporal properties, but with either no \citep{shannon_timingnoise} or different spatial correlations. \citet{tiburzicorrelations} and \citet{taylor2017allcorrelations} showed the challenges in distinguishing between sources that produce different spatial correlations.
Previous searches for the gravitational-wave background from supermassive black hole binaries have reported limits on the strain amplitude, $A$, ranging between $1.1\times 10^{-14}$ and $1.0 \times 10^{-15}$ at $95\%$ confidence or credibility, as appropriate \citep{jenet2006gwb,vanhaasteren2011gwb,demorest2013gwb,shannon2013gwb,lentati2015gwb,arzoumanian2015gwb9yr,shannon2015gwb,arzoumanian2018gwb11yr}.
The limits are now known to be affected by systematic uncertainties in the ephemeris of the solar system, which impacts pulsar timing because the arrival times are necessarily referenced to an inertial frame located at the solar system barycentre \cite[e.g.,][]{arzoumanian2018gwb11yr}.
In a recent analysis of the NANOGrav 12.5-year data set, a common noise process was reported having a Bayes factor greater than $10^4$ (this corresponds to $9$ on the commonly used natural logarithmic scale), with the signal persisting even when accounting for uncertainties in the solar system ephemeris \cite[][]{arzoumanian2020gwb}. We discuss the meaning of the term ``common process'' in detail later. Here we define the symbol ${\rm CP}$ to represent the common process as obtained by the hypotheses used in the NANOGrav analysis.
Evidence for Hellings-Downs correlations was insignificant.
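As a quick consistency check on the quoted numbers, the conversions between Bayes factors, their natural logarithms, and posterior odds (assuming even prior odds) can be sketched as follows:

```python
import math

# A Bayes factor greater than 10^4 corresponds to ~9 on the natural-log scale
assert round(math.log(1e4), 1) == 9.2

# Conversely, a log-Bayes factor maps back to an odds ratio under even prior
# odds; e.g. ln B = 15 corresponds to odds of roughly 3.3e6 : 1
assert math.exp(15.0) > 3.0e6
```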
We have carried out a similar analysis using the Parkes Pulsar Timing Array \cite[PPTA;][]{manchesterppta} second data release \cite[][]{kerr2020pptadr2}. The observations and methodology are described in Section~\ref{sec:data}.
In Section~\ref{sec:results} we discuss the results of the searches. In Section~\ref{sec:discussion}, we discuss limitations in the methodology.
In particular, we demonstrate through simulation that the search methods can spuriously detect a common red process in timing array data sets in which it is absent.
\section{The data set and methodology} \label{sec:data}
The PPTA project monitors an ensemble of MSPs with the 64-m Parkes radio telescope (also named {\em Murriyang}) in New South Wales, Australia. The data used, namely pulse arrival times from the observations, were acquired between 2003 and 2018 and were published as part of the second data release of the project \cite[PPTA-DR2;][]{kerr2020pptadr2}. Observations were taken at a cadence of approximately three weeks. At each epoch, data were usually recorded in bands centred at three different radio frequencies in order to correct for variations in pulsar dispersion measures \cite[][]{keithdmvariations}. Data were recorded with a series of digital processing systems, with quality having improved over the course of the project.
We analyzed the data set using methodology that was based on that applied to the NANOGrav 12.5-year data set~\citep{arzoumanian2020gwb}, which itself was based on \cite{arzoumanian2015gwb9yr} and \cite{taylor2017allcorrelations}.
Stochastic signals were modeled as being correlated (red) or uncorrelated (white) in time.
We had previously characterized the noise processes for individual pulsars in the PPTA sample \cite[][]{goncharov2020pptadr2noise}.
That analysis showed that the PPTA data sets contain a wide variety of noise processes, including instrument-dependent or band-dependent processes.
In this work we included red-noise processes in all pulsars, even for those pulsars that showed no evidence for such noise in previous analyses.
As in \cite{goncharov2020pptadr2noise}, we assume that the power spectral density of all red processes follows a power law, parameterized such that the amplitude, $A$, is in units of gravitational-wave strain at $1\,\text{yr}^{-1}$:
\begin{equation}\label{eq:powerlaw}
P(f|A,\gamma) = \Gamma(\zeta_{ab}) \frac{A^2}{12 \pi^2} \bigg(\frac{f}{\text{yr}^{-1}}\bigg)^{-\gamma} \text{yr}^3.
\end{equation}
The fluctuation frequency of the pulse arrival time power spectrum is denoted $f$ and the spectral index is $-\gamma$.
The noise terms are modeled using a Fourier series, starting with a fundamental frequency that is the inverse of the observation span corresponding to the entire pulsar data set, $T_\text{obs}$. We use $n_\text{c} = 30$ harmonics if $\gamma > 1.5$ \cite[][]{goncharov2020turnover}, otherwise we use $n_\text{c}$ from the single-pulsar analysis~\citep{goncharov2020pptadr2noise}.
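The power law of Equation~\eqref{eq:powerlaw} can be evaluated directly; a minimal sketch, with the overlap-reduction factor $\Gamma(\zeta_{ab})$ set to $1$ (i.e., the autospectrum of the common process) and frequencies expressed in $\text{yr}^{-1}$, so that the result is in units of $\text{yr}^3$:

```python
import math

def red_noise_psd(f_per_yr, log10_A, gamma):
    """Power-law timing-residual PSD of Eq. (powerlaw), in yr^3, with the
    overlap-reduction factor set to 1 (autospectrum of a common process)."""
    A = 10.0 ** log10_A
    return (A ** 2 / (12.0 * math.pi ** 2)) * f_per_yr ** (-gamma)

# At f = 1/yr the PSD reduces to A^2 / (12 pi^2), independently of gamma
for gamma in (13.0 / 3.0, 3.0):
    assert math.isclose(red_noise_psd(1.0, -14.66, gamma),
                        (10 ** -14.66) ** 2 / (12 * math.pi ** 2))
```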
The overlap reduction function \cite[$\Gamma(\zeta_{ab})$;][]{finnorf}
characterizes the spatial correlation of the signal, and depends on the angular separation, $\zeta_{ab}$, of two pulsars $a$ and $b$ with respect to the observer.
For an isotropic stochastic background from the gravitational waves of General Relativity ~\citep{hellingsdowns1983,jenet2015orf},
\begin{equation}\label{eq:hd}
\begin{split}
\Gamma_{\text{GWB}}(\zeta_{ab}) = & \frac{1}{2} - \frac{1}{4} \bigg( \frac{1 - \cos{\zeta_{ab}}}{2} \bigg) + \\
& \frac{3}{2} \bigg( \frac{1 - \cos{\zeta_{ab}}}{2} \bigg) \ln{\bigg(\frac{1 - \cos{\zeta_{ab}}}{2}\bigg)},
\end{split}
\end{equation}
when $a \neq b$.
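For concreteness, Equation~\eqref{eq:hd} can be evaluated at a few reference angles (a minimal sketch; angles in radians):

```python
import math

def hellings_downs(zeta):
    """Hellings-Downs overlap reduction function for angular separation zeta
    (radians) between two distinct pulsars, Eq. (hd)."""
    x = (1.0 - math.cos(zeta)) / 2.0
    if x == 0.0:
        return 0.5                  # limit x*log(x) -> 0 as zeta -> 0+
    return 0.5 - 0.25 * x + 1.5 * x * math.log(x)

# Co-located (but distinct) pulsars are correlated with Gamma = 1/2,
# while antipodal pulsars have Gamma = 1/2 - 1/4 = 1/4
assert math.isclose(hellings_downs(0.0), 0.5)
assert math.isclose(hellings_downs(math.pi), 0.25)
# Pulsar pairs around ~82 deg separation are anti-correlated
assert hellings_downs(math.radians(82.0)) < 0.0
```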
Our analysis proceeded through these steps:
\begin{itemize}
\item We first searched for a common power-law, red-spectrum stochastic process (${\rm CP1}$), with an identical power spectrum and unrelated temporal evolution or spatial correlation across pulsars. We emphasize that the process is assumed to have a statistically identical ensemble-average power spectrum among pulsars, as would be the case for a gravitational wave background.\footnote{This term was first introduced in \cite{arzoumanian2018gwb11yr}, and we refer the reader to that paper for further discussion of its meaning.} Throughout our analysis we marginalised over deterministic terms in the timing model \citep{2021arXiv210704609R}. This included instrument-dependent offsets (``jumps'') of unknown value, as identified by \cite{kerr2020pptadr2}. Offsets with {\em a-priori} measured values were held fixed.
We trialed both marginalising over the white-noise parameters and also holding them fixed at their maximum {\em a-posteriori} values. We obtained consistent results between these approaches.
\item Following the NANOGrav analysis we also assumed a power-law model with $\gamma = 13/3$. This value has astrophysical interest as it is the expected value for a gravitational wave background caused by supermassive binary black holes~\citep{phinney_backgrounds}. The resulting common power-law, red-noise process is here labeled as ${\rm CP2}$. Based on a factorized-likelihood approach, we performed a dropout analysis to evaluate the consistency of individual PPTA DR2 pulsars with the signal identified by ${\rm CP2}$ (see \citealt{arzoumanian2020gwb}).
\item We measured the amplitude of individual Fourier components $P_i(f_i|\rho_i) = \rho_i^2~T_\text{obs}$ of a common process, at each frequency $f_i$ separately in order to determine whether the power-law assumption of ${\rm CP1,2}$ is valid.
\item We searched for evidence that ${\rm CP2}$ exhibits spatial correlations from a gravitational wave background, a monopolar signal, MP, or a dipolar signal, DP. In this analysis we held the white-noise stochastic components fixed at their maximum {\em a-posteriori} value to reduce the number of parameters in the search and reduce computation time.
\item To assess the shape of any spatial correlations, we measured $\Gamma(\zeta_{ab})$ at seven equally separated ``node'' angles between $0$ and $180^\circ$ inclusive, using linear interpolation to determine $\Gamma(\zeta_{ab})$ for pulsar pairs between the nodes. This interpolant modeling of the PTA correlation curve was first used in \cite{taylor2013orf}.
\end{itemize}
\subsection{Comparison with the NANOGrav data set and processing methods}
The NANOGrav analysis included timing data from 45 pulsars in their 12.5-year data set~\citep{arzoumanian2020gwb}. The PPTA-DR2 analysis is based on data from 26 pulsars spanning up to 15 years.
While the data sets were obtained with different telescopes over different frequency ranges, the two data sets have $11$ pulsars in common, including some of the most precisely timed pulsars.
The sources in common are PSRs~J0613$-$0200, J1024$-$0719, J1600$-$3053, J1643$-$1224, J1713$+$0747, J1832$-$0836, J1857+0943, J1909$-$3744, J1939+2134 and J2145$-$0750.
Even for these pulsars we independently determined noise models. Our instrumental noise terms are necessarily independent of those used by NANOGrav.
However, we also included extra noise terms into the modelling for specific pulsars.
In particular for PSR~J1713+0747 the NANOGrav analysis included timing noise and dispersion measure noise terms as well as the inclusion of two exponential dips attributed to rapid dispersion measure variations.
The PPTA analysis is similar. In the second exponential dip we allowed for a different chromaticity as there is evidence that it is not caused by dispersion-measure variations~\citep{goncharov2020pptadr2noise}.
Both analyses made use of \textsc{tempo2}.
Bayesian inference was performed with \textsc{enterprise} \cite[][]{ellis2019enterprise}.
Preferred models were selected based on the Bayes factor calculated using a product-space sampling method \cite[][]{carlin1995bayesian,taylor2020productspace}.
We denote a Bayes factor for model $\text{A}$ over model $\text{B}$ as $\mathcal{B}^{\text{A}}_{\text{B}}$.
The null model, with no common or correlated noise processes in the data set, is denoted $\varnothing$.
We referenced pulse arrival times to the solar system barycenter using the ephemeris DE436, to maintain consistency with PPTA-DR2 \citep{kerr2020pptadr2} and the single-pulsar analyses~\citep{goncharov2020pptadr2noise,2021arXiv210704609R}.
The more recent DE438 ephemeris was used in the NANOGrav analysis.
\section{Results}
\label{sec:results}
Following the process described above, we obtained strong evidence for ${\rm CP1}$, with $\ln \mathcal{B}^{\text{CP1}}_{\varnothing} > 15.0$.
This implies an odds ratio in favor of CP1 of $> 3\times 10^6:1$, if both models had even odds {\em a priori}.
The results from the parameter estimation are shown in the left panel in Figure~\ref{fig:crn2}. We obtain\footnote{Unless otherwise specified, throughout the paper uncertainties provide 1-$\sigma$ credible levels.} $\log_{10}A_{\text{CP1}} = -14.55^{+0.10}_{-0.23}$ and $\gamma = 4.11^{+0.52}_{-0.41}$. The results from the NANOGrav analysis are overlaid and have significant overlap, although the NANOGrav analysis preferred a steeper spectral exponent when it was conducted with $5$ Fourier components. Unlike in \cite{arzoumanian2020gwb}, our measurements are consistent when we change the number of fluctuation frequencies used to model the common process.
\begin{figure*}
\includegraphics[width=0.43\textwidth]{log10_A_gamma.pdf}
\includegraphics[width=0.55\textwidth]{freesp.pdf}
\caption{Left: Measurements of common power-law red-noise parameters and the demonstration of their robustness to assumptions about pulsar-intrinsic noise and the number of fluctuation frequencies $n_\text{c}$. The dashed vertical line indicates $\gamma=13/3$. The solid lines represent the measurement based on $n_\text{c} = 30$. Dashed and dotted lines represent $n_\text{c} = 20$ and $n_\text{c} = 5$. The dash-dotted lines correspond to the measurement from~\cite{arzoumanian2020gwb}. Contours and shaded regions are 1-$\sigma$ and 2-$\sigma$ credible levels.
Grey lines and regions are based on the assumption of achromatic timing noise in every pulsar, whereas blue ones are based on the assumption of timing noise only in pulsars where it was reported in~\cite{goncharov2020pptadr2noise}.
Right: Common red-noise parameter estimation with the free-spectral model. Lines represent the full PPTA data, whereas filled regions represent PPTA DR2 without PSR J0437$-$4715. The black line is the inferred spectrum assuming a power-law model with $\gamma=13/3$. Vertical dotted lines represent inverse orbital periods of solar system planets.}
\label{fig:crn2}
\end{figure*}
The right panel in Figure~\ref{fig:crn2} represents the parameter estimation for the free-spectral model. It is challenging to obtain a complete noise model for the brightest MSP, PSR~J0437$-$4715~\citep{goncharov2020pptadr2noise}, and so we show this spectrum with, and without, the inclusion of this pulsar.
We overlay the astrophysically interesting spectrum corresponding to $\gamma = 13/3$, along with the frequencies corresponding to the orbital periods of the planets.
In the left-hand panel of Figure~\ref{fig:crn1}, we show the posterior distribution for $\log_{10}A$, assuming a power-law model with $\gamma = 13/3$. The measured ${\rm CP2}$ amplitude is $\log_{10}A = -14.66 \pm 0.07$ corresponding to $A = 2.2^{+0.4}_{-0.3} \times 10^{-15}$, which is consistent with that measured in the NANOGrav 12.5-year data set.
The NANOGrav analysis attempted to determine which pulsars contributed to this signal by calculating dropout factors, which for our sample are displayed in the right-hand panel of Figure~\ref{fig:crn1}. The pulsars with the smallest dropout factors (PSRs J1824$-$2452A and J1939$+$2134) are known to have strong timing noise inconsistent in strength with that of other pulsars. However the pulsars with the highest dropout factors include pulsars with high (PSR~J0437$-$4715) and low timing precision (PSR J1022$+$1001), and pulsars with shorter timing baselines (PSR~J2241$-$5236). The meaning and use of dropout factors is further discussed in Section~\ref{sec:dropout}.
\begin{figure*}[!htb]
\includegraphics[width=9cm]{log10_A_crn_snall.pdf}
\includegraphics[width=9cm]{dropout_snall.pdf}
\caption{Pulsar contributions to the common red noise, assuming a fixed power-law index of $-13/3$ (${\rm CP2}$). Left: posterior distributions for the common red-noise amplitude, $A$. The hatched blue area is the result of a joint analysis of all pulsars with fixed white-noise parameters. The thick blue line shows the distribution obtained from a factorized likelihood approach. Thin grey lines show contributions from individual pulsars to the factorized posterior. The yellow vertical line and the shaded region represent the median and 1-$\sigma$ levels of the NANOGrav measurement. Right: Dropout factors for PPTA DR2 pulsars. We interpret the dropout factors to represent the consistency of noise in a given pulsar with ${\rm CP2}$, as discussed in Section \ref{sec:dropout}.}
\label{fig:crn1}
\end{figure*}
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{orf.pdf}
& \includegraphics[width=0.48\textwidth]{log10_A_hd.pdf} \\
\end{tabular}
\caption{
Left: Inferred inter-pulsar spatial correlations in PPTA DR2 at seven node angles $\zeta$ between Earth-pulsar baselines.
The dashed line is the predicted correlation from the gravitational-wave background.
Right: Power-law amplitude of the Hellings-Downs process without auto-correlation (red) and of the common red noise (blue).
In both figures, lines represent the full PTA based on the assumptions of timing noise only in pulsars according to~\cite{goncharov2020pptadr2noise}, whereas the filled regions represent PPTA DR2 without PSR~J0437$-$4715 and timing noise terms in all pulsars.
}
\label{fig:orfhd}
\end{figure*}
The results from the model-independent parameter estimation of the overlap reduction function (obtained assuming $\gamma = 13/3$) are provided in Figure~\ref{fig:orfhd}.
The left-hand panel shows the inferred spatial correlations.
They were sampled at seven node angles, whereas spatial correlations for other angles were obtained with linear interpolation.
The Hellings-Downs relation is overplotted.
The right-hand panel shows the inferred amplitude of the Hellings-Downs spatially correlated noise and that of ${\rm CP2}$.
With the entire PPTA sample of pulsars the Bayes factor is $\ln \mathcal{B}^{\text{HD}}_{\text{CP2}}=0.3$, which provides no significant evidence for, or against, Hellings-Downs correlations. We note that if PSR~J0437$-$4715 is removed from the sample then the Bayes factor increases to $\ln \mathcal{B}^{\text{HD}}_{\text{CP2}}=1.0$. The data strongly disfavor ${\rm CP2}$ having monopole or dipole spatial correlations ($\ln \mathcal{B}^\text{MP}_\text{CP2}$ and $\ln \mathcal{B}^\text{DP}_\text{CP2}$ both $< -10$).
\section{Discussion}\label{sec:discussion}
Under the same assumptions the Bayesian analyses of both the PPTA and NANOGrav data sets show a preference for models which include a common noise process in addition to individual noise terms. The ${\rm CP1}$ model has consistent spectral index and amplitude between the data sets and therefore we can exclude this signal from being telescope dependent. Given the different strategies employed in mitigating interstellar propagation effects by the two projects (both in terms of choice of observing band and methods for correcting for dispersion-measure variations), it is also unlikely that the noise is associated with the interstellar medium.
However, we are attempting to detect a common noise process from a single realization of the process in each pulsar. The noise process is strongest at the lowest fluctuation frequencies, so it is being characterised on a time scale comparable to the typical data span.
This greatly complicates tests of the noise modelling.
Consequently there are a number of caveats for interpreting the ${\rm CP1}$ and ${\rm CP2}$ results as discussed in the following sub-sections. We conclude by discussing our search for spatial correlations in the data.
\subsection{Are the models of the intrinsic noise correct and complete?}
We have modeled the intrinsic noise to be a power-law process.
Intrinsic timing noise for millisecond pulsars is not well studied over the relevant time scales. We know that some millisecond pulsars have exhibited glitch events \cite[PSRs~J0613$-$0200 and J1824$-$2452A;][]{cognard_1824glitch,mckee_0163glitch}, so small glitches are possibly present in other pulsars.
Two of the pulsars in our sample, PSRs~J0437$-$4715 and J2241$-$5236, have reported evidence for excess non-stationary noise \cite[][]{goncharov2020pptadr2noise}.
Large-scale studies of non-millisecond pulsars have demonstrated that the amplitude of their timing noise is approximately determined by the pulsar spin-down rate~\citep{aditimingnoise}, but there is also clear evidence for discrete changes in spin frequency or frequency derivative \cite[][]{cordes_timingnoise}, which may occur at quasi-periodic intervals~\cite[][]{hobbsquasi}. It is therefore unlikely that the intrinsic pulsar timing noise is perfectly modeled via a power-law process. We know that the pulse profiles of millisecond pulsars can show secular shape changes \cite[][]{shannon_magnetosphere}, which, if unmitigated, result in a non-stationary noise process.
\subsection{What assumptions lead to the evidence of a common process?}
The assessment of whether a common red-noise process is present is based on the null hypothesis that all pulsars are affected by independent red timing noise, modeled by a power law spectrum described by amplitudes and exponents drawn from a uniform prior.
The detection hypothesis is that all pulsars also exhibit a red process with the same power-law spectrum, where amplitudes and exponents are drawn from the delta-function prior.
There is a possibility that neither of these models nor their priors are sufficient descriptions of the data, which is often referred to as model misspecification \cite[e.g.][]{specification,misspec_muller}.
In simulations we demonstrate that noise without a statistically identical spectrum between pulsars can be misinterpreted as the common red process.
We simulated 10 realizations of timing residuals for the 26 PPTA-DR2 pulsars, with a range of realistic white noise levels, and injected power-law timing noise models with amplitudes and spectral indices drawn uniformly across approximately several orders of magnitude ($\log_{10} A$ spanning approximately $-16$ to $-13$ and $\gamma$ spanning $3$ to $5$).
The power spectral densities of the simulated residuals are shown in the left-hand panel of Figure~\ref{fig:sim_psd}.
We performed model selection for a model with a common-spectrum process along with intrinsic pulsar timing noise ($\rm{CP2}$), against a model with intrinsic timing noise only ($\varnothing$), and obtained $\ln\mathcal{B}^{\rm{CP2}}_{\varnothing} > 13.5$ in all realizations, implying that our methodology can detect a ``common'' process even if the properties of the noise are only broadly similar and far from identical, with the amplitude of the noise varying by three orders of magnitude.
Figure~\ref{fig:similar_A_gamma} shows the recovered common noise spectrum and the injected timing noise models.
We continued to increase the spread in injected noise amplitudes upward from $\approx 3$ orders of magnitude, and found that common noise is disfavoured when the spread in amplitude exceeds 5 orders of magnitude.
Thus, the simulated data only favors the correct hypothesis when the range of simulated noise parameters starts to resemble the uniform red noise prior assumed in recent analyses.
It is physically likely that intrinsic pulsar timing noise has similar, but not identical properties between different pulsars \cite[][]{shannon_timingnoise}. As such, a second null hypothesis (not tested by the current analysis) would be that each pulsar has independent red noise, but the properties of that noise cluster in a similar range.
This could be examined by assuming that the amplitude and spectral index of the noise terms for the pulsars are not identical, but are drawn from a distribution.
The non-uniform noise hypothesis is distinct both from the signal hypothesis with the delta-function prior and from the noise hypothesis with the uniform prior.
For example, the new noise prior could be modeled as a Gaussian distribution with means $\mu_A$ and $\mu_\gamma$ and variances $\sigma_A^2$ and $\sigma_\gamma^2$.
If the variances are inconsistent with zero, this would suggest that the noise processes are not common but merely similar.
If they are consistent with zero, the inferred values could be used to constrain the properties of a common process.
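A toy version of this hierarchical test can be sketched as follows. The per-pulsar amplitude estimates below are invented for illustration, and they are treated as exact (no per-pulsar measurement uncertainty), which a real hierarchical analysis would not do:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pulsar noise-amplitude estimates: "similar but not
# identical" noise clustered around log10 A = -14.5 with scatter 0.3.
log10_A_hat = rng.normal(-14.5, 0.3, size=26)

# Grid-based posterior (flat hyper-priors) for the hyper-parameters (mu, sigma)
# of the proposed Gaussian prior on the per-pulsar amplitudes.
mu = np.linspace(-15.5, -13.5, 201)
sigma = np.linspace(0.01, 1.0, 200)
M, S = np.meshgrid(mu, sigma, indexing="ij")

# log-likelihood of the estimates under N(mu, sigma^2), summed over pulsars
logL = -0.5 * ((log10_A_hat[None, None, :] - M[..., None]) / S[..., None]) ** 2
logL = logL.sum(axis=-1) - len(log10_A_hat) * np.log(S)

i, j = np.unravel_index(np.argmax(logL), logL.shape)
print(f"max-likelihood hyper-parameters: mu = {mu[i]:.2f}, sigma = {sigma[j]:.2f}")
# A sigma clearly inconsistent with zero -> "similar, not common" noise.
```

Here a recovered $\sigma$ well away from zero would distinguish the "clustered" null hypothesis from both the delta-function signal prior and the fully uniform noise prior.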
As a gravitational-wave background signal will affect all pulsars, it will be apparent not only as a common spectral process, but also as a ``noise floor''. The noise level of a given pulsar should not be below this floor apart from statistical fluctuations, including instances where two noise processes cancel each other out. We updated the simulations shown in Figure~\ref{fig:sim_psd} by including timing noise with identical power-law spectral densities ($\log_{10} A = -15.51$, $\gamma = 5.5$) for 25 pulsars, and a lower, shallower-spectrum timing noise ($\log_{10} A = -14.38$, $\gamma = 1.5$) in the pulsar with the lowest white noise levels.
The power spectral densities corresponding to these simulations are shown in the right-hand panel of Figure~\ref{fig:sim_psd}. We performed model selection for $\rm{CP2}$ over $\varnothing$, and again found significant support for the common-spectrum noise process ($\ln\mathcal{B}^{\rm{CP2}}_{\varnothing} >$ $13.1$ across all realizations), which is comparable to the support found in both the NANOGrav and PPTA analyses. As the power spectral density in the lowest frequency channel for the low-noise pulsar is $4$ orders of magnitude lower than the common signal, the model selection is not explicitly identifying a noise floor.
However, a factorized-likelihood dropout analysis showed that the simulated pulsar with a low noise level was not consistent with the retrieved common-spectrum process (with a dropout factor $<1$; see the following section).
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{publ_cholspec_similar.pdf}
\includegraphics[width=0.49\textwidth]{publ_cholspec_noisefloor.pdf}
\caption{\label{fig:sim_psd}Power spectral density of the two sets of simulations that mimic a common red process. The grey lines represent the injected input noise spectra of simulated timing residuals for the 26 pulsars. The spectra were formed using a generalized least squares technique \cite[][]{coles_cholesky}. The blue lines represent the pulsar-averaged values, and the black lines indicate the recovered common spectrum. Left: Pulsars with similar, but not identical, timing noise properties.
Right: A simulation where 25 pulsars have identical timing noise properties, and one has a lower and shallower timing noise than the others (red line).}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{log10_A_gamma_similar.pdf}
\caption{\label{fig:similar_A_gamma}Simulated steep timing noise parameters and recovered common red process parameters, corresponding to the left panel in Figure~\ref{fig:sim_psd}. The contours represent posterior probability density for the common-process amplitude and spectral index. The amplitudes and spectral indices of the injected timing-noise spectra for the 26 pulsars are indicated by the orange crosses and orange dashed lines. }
\end{figure}
\subsection{Do we understand which pulsars are contributing to the evidence?}\label{sec:dropout}
The dropout factor does not explicitly assess individual pulsar contributions to a common-noise process. As shown in Figure~\ref{fig:crn1}, unexpected pulsars (such as PSR~J2241$-$5236, a relatively recent addition to the PPTA sample) have high dropout factors. Because the dropout factor is the integral of the product of an individual pulsar's posterior constraint on the common red process with the posterior distribution of the apparent common-spectrum process in the rest of the array, pulsars with uninformative posterior distributions (i.e., pulsars with relatively high white noise and no evidence of red noise) can still have significant dropout factors. Therefore, the dropout factor is not a ``contribution statistic'' but a ``consistency statistic'': pulsars with high factors have noise that is not inconsistent with the presence of a common process. If we wish to determine which pulsars contribute to the evidence for a red-noise process, we could calculate evidence values in favor of common noise, starting from the pulsar that provides the best (or worst) single-pulsar limit on the gravitational-wave background and iteratively increasing the number of pulsars, or calculate the change in evidence when removing individual pulsars from the array.
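The distinction between a ``contribution'' and a ``consistency'' statistic can be illustrated with a schematic one-dimensional calculation. The posteriors below are invented Gaussians, not PPTA results, and the factorized-likelihood machinery is reduced to a single overlap integral normalised by the prior:

```python
import numpy as np

x = np.linspace(-18.0, -12.0, 2001)            # log10 A grid
dx = x[1] - x[0]
prior = np.full_like(x, 1.0 / (x[-1] - x[0]))  # uniform prior density

def integrate(y):
    # simple trapezoidal rule on the fixed grid
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dx)

def gaussian(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

post_rest = gaussian(x, -14.7, 0.15)  # posterior from the rest of the array

def dropout_factor(post_single):
    # overlap of one pulsar's posterior with the array posterior, vs. the prior
    return integrate(post_single * post_rest) / integrate(prior * post_rest)

informative = gaussian(x, -14.7, 0.2)   # consistent and informative
uninformative = prior.copy()            # high white noise: posterior = prior
inconsistent = gaussian(x, -16.5, 0.2)  # noise well below the common level

for name, p in [("informative", informative),
                ("uninformative", uninformative),
                ("inconsistent", inconsistent)]:
    print(f"{name:13s}: dropout factor = {dropout_factor(p):.3g}")
```

An uninformative pulsar returns a factor of exactly 1 in this toy model: it cannot disfavour the common process, which is why high dropout factors alone do not identify the pulsars driving the evidence.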
\subsection{Are we affected by the choice of solar system ephemeris?}
The amplitude of ${\rm CP2}$ is higher than the two 95\% confidence upper limits previously set by NANOGrav ($A < 1.45 \times 10^{-15}$) and the PPTA ($A < 1.0 \times 10^{-15}$).
The PPTA limit was based on the DE421 Solar System ephemeris, without marginalizing over errors in the ephemeris and included data up to the beginning of 2015.
In order to carry out an initial exploration of whether the results in this paper may be affected by errors in the solar system ephemeris, we modeled a subset of the potential errors as parametrized perturbations in the ephemerides~\citep[\textsc{bayesephem,}][]{vallisneri2020sse}. We performed model selection for perturbations in (1) the masses of Mars, Jupiter, and Saturn; (2) the individual Keplerian orbital elements of these planets; and (3) the rate of rotation about the ecliptic pole. These terms, except for those of Mars and Saturn, were marginalized over by~\cite{arzoumanian2020gwb}.
The resulting log Bayes factors are negative, and hence we conclude that the presence of such errors is not favored by the data.
We also performed the same analysis using the JPL DE436, DE430 and DE421 ephemerides. Neither DE436 nor DE438 provides evidence for any errors.
Using DE430, we find a positive $\ln \mathcal{B} = 1.6$ for an error in one of Jupiter's orbital elements and $\ln \mathcal{B} = 0.6$ favoring both an error in Saturn's mass and one of Saturn's orbital elements. Using the DE421 ephemeris, the oldest one that we tested, we only find a positive $\ln \mathcal{B} = 0.3$ for an error in the mass of Saturn.
\subsection{Searching for spatial correlations}
Only a correlation analysis will provide incontrovertible evidence of a gravitational-wave background and we currently have no statistical evidence for the presence of spatial correlations.
The overlap reduction function in Figure~\ref{fig:orfhd} is consistent with the expected correlations from a gravitational-wave background (in particular if PSR~J0437$-$4715 is removed from the analysis).
We note that the bins in Figure~\ref{fig:orfhd} are interpolated and correlated, which may boost the apparent significance seen by eye.
As shown in Figure~\ref{fig:crn1}, the noise spectrum in PSR~J0437$-$4715 is consistent with ${\rm CP2}$ and hence it provides the highest dropout factor.
However, as shown in the right-hand panel of Figure~\ref{fig:orfhd}, the inclusion of PSR~J0437$-$4715 lowers the posterior probability of spatial correlations at the maximum-posterior value of $A$ of ${\rm CP2}$, further diminishing the evidence for a gravitational-wave background.
Unfortunately, the only telescope with a long timing baseline for PSR~J0437$-$4715 is Parkes and it is therefore challenging to confirm the noise modelling for this pulsar.
In the future it will be possible to compare with observations taken with, for example, MeerKAT as part of the MeerTime project \citep{bailes2020}, or with the Jansky Very Large Array as part of NANOGrav.
When evidence for Hellings-Downs correlations in pulsar timing array data sets is found, it will be important to examine the hypothesis being tested. Simulations, such as ones containing uncorrelated red noise, could be used to determine the likelihood that improperly modeled, uncommon red noise induces these spatial correlations.
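For reference, the target of such a correlation search is the Hellings-Downs curve. A minimal sketch, using the common normalization in which the correlation approaches $1/2$ at zero separation (excluding the pulsar term):

```python
import numpy as np

def hellings_downs(theta):
    """Expected GWB correlation for two pulsars separated by angle theta (radians)."""
    x = (1.0 - np.cos(theta)) / 2.0
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

# The curve dips negative near 90 degrees and recovers to +1/4 at 180 degrees.
for deg in (30, 60, 90, 120, 180):
    print(f"{deg:4d} deg: {hellings_downs(np.radians(deg)):+.3f}")
```

Monopolar (clock-like) and dipolar (ephemeris-like) systematics predict different angular signatures, which is why the shape of this curve, rather than a common spectrum alone, is the definitive discriminant.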
With the provisos given above we have no evidence that the detected ${\rm CP1}$ or ${\rm CP2}$ noise process is linked to a gravitational-wave background. However, if the signal is a bona-fide astrophysical gravitational-wave background, then the relatively high amplitude would favor high merger-rate densities, short merger timescales, and high normalisations for the black hole - galaxy bulge mass relation \citep{middleton2020ngresults}.
The near-future prospects for probing the origin of the signal and the underlying dynamics of the supermassive black hole population are discussed in~\cite{pol2020milestones}.
\cite{sesana2013gwbamp} showed that background amplitudes of $> 10^{-15}$ could be caused by the effect of overmassive black holes on black hole - host relations.
The detected amplitude is also within observationally-constrained limits based on the local supermassive black-hole mass function~\citep{zhu2019minmaxgwb}.
\section{Conclusions}
Under the assumptions of an analysis of the NANOGrav 12.5-year data set~\citep{arzoumanian2020gwb}, we have detected with high confidence a common-spectrum time-correlated signal in the timing of the 26 PPTA-DR2 millisecond pulsars.
However, as noted above, there are some important caveats that need to be addressed before the signature could be confidently attributed to a physical process common to all the pulsars in the array.
We do not confirm or rule out that the common-spectrum process is a spatially-correlated stochastic gravitational-wave signal.
However, the identified process does not possess monopole or dipole correlations and is not caused by errors in the masses and trajectories of Mars, Jupiter, and Saturn that would have resulted in deterministic timing residuals according to \textsc{bayesephem} models~\citep{vallisneri2020sse}.
Proposed follow-up work relates to (1) improving our understanding of the properties of the intrinsic timing noise in millisecond pulsars and (2) identifying the optimal model comparisons and methodologies that can determine whether the noise detected corresponds to a ``noise floor'' and is identical in all pulsars.
The PPTA project now has nearly three additional years of data obtained with a higher sensitivity wide-band system \cite[][]{hobbsuwl} that can be added with the data presented to increase timing baselines.
We will also combine the PPTA data sets with observations from other observatories as part of the International Pulsar Timing Array project \cite[][]{perera_iptadr2}, with the latter being ideal to continue this work. Such lengthened and more sensitive data sets will allow us to probe time scales significantly longer than the orbital period of Jupiter and closer to that of Saturn. We will be able to compare noise models obtained from a wide range of telescopes and maximise our chance of determining the nature of the red-noise signals that are present in our data.
These data sets will also enable more sensitive and robust searches for the Hellings-Downs spatial correlations necessary to make a definitive detection of the gravitational-wave background.
\section*{Acknowledgements} \label{sec:acknowledgements}
This work has been carried out by the Parkes Pulsar Timing Array, which is part of the International Pulsar Timing Array.
The Parkes radio telescope (Murriyang) is part of the Australia Telescope, which is funded by the Commonwealth Government for operation as a National Facility managed by CSIRO. This paper includes archived data obtained through the CSIRO Data Access Portal (\href{http://data.csiro.au}{data.csiro.au}).
We acknowledge the use of \textsc{chainconsumer}~\citep{chainconsumer}.
Parts of this research were conducted by the Australian Research Council (ARC) Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004. RMS acknowledges support through ARC future fellowship FT190100155. RS acknowledges support through the ARC Laureate fellowship FL150100148.
The author list is based on three tiers, which correspond to primary contributors, to members of the collaboration who provided feedback, and to members of the collaboration with significant observing record.
The data-processing code that was used in this work is available at \href{https://github.com/bvgoncharov/correlated_noise_pta_2020/}{github.com/bvgoncharov/correlated\_noise\_pta\_2020}.
\section{Introduction}
Several recent theoretical\cite{fu07a,qi08,zhangiop} and
experimental\cite{hsieh08,hsieh09a,hsieh09b,chen09,xia09,hor09}
works have focused on a new quantum state of matter, {\it
topological insulators} in three dimensions, which exhibit bulk
insulating gaps (mainly of spin-orbit origin) while possessing {\it
time-reversal symmetry} protected gapless surface states. One of
the intriguing properties of this new quantum state comes from those
``protected'' surface states, which provide a lab-realizable
condensed-matter analog of a two-dimensional, massless Dirac theory
with an ``odd'' number of species (Dirac cones) in the surface
Brillouin zone (SBZ)\cite{fu07a,fu07b}. The charge carriers on the
surface, the so-called (spin) {\it helical} Dirac
fermions\cite{wu06,hsieh09b}, behave like relativistic particles
whose spin is locked to the momentum, leading to the breakdown of the
spin $SU(2)$ rotational symmetry. This feature is in sharp
contrast to graphene, where the system not only possesses an even
number of Dirac cones in its spectrum, but the role of the
``locked'' spin is also played by a pseudo-spin (sublattice
symmetry), and hence each Dirac cone retains two-fold spin
degeneracy\cite{geim07}.
As a useful surface probe, recent angle-resolved photoemission
spectroscopy (ARPES) experiments have successfully demonstrated
surface band structures with an odd number of Dirac
cones\cite{chen09,hsieh08}, as well as the corresponding spin
helical structure near a Dirac
point\cite{xia09,hsieh09a,hsieh09b}. Although the nature of the
bands confirmed by ARPES suggests that the quantum state is
topologically insulating, the quest for new quantum phenomena
uniquely associated with such topology-protected surface states
remains urgent and necessary. The usual way in solid state physics
to explore the nontrivial electronic properties of helical Dirac
fermion systems would be a transport measurement on the surface
of a topological insulator\cite{fu07b}. However, such a
measurement may not be practically straightforward, since (i)
tuning the system to the topological transport regime, where the
charge density vanishes, is tricky, and (ii) the n-type doping
from vacancies (or anti-site defects), together with the fact that
the surface states surround the sample, makes it difficult to
distinguish the surface contribution from that of the
bulk\cite{chen09,peng09}.
Alternatively, the quasiparticle interference (QPI) caused by
scattering off impurities on a surface can provide a way of
revealing the topological nature of the surface
states\cite{yazdani2009,alpichshev09,zhang09,gomes09}. The concept
of QPI is elementary in quantum mechanics. For instance, due to
impurity (elastic) scattering, the interference between the incoming
and outgoing waves with momenta $\mathbf{k}_i$ and $\mathbf{k}_f$, respectively,
can give rise to an amplitude modulation in the local density of
states (LDOS) at wavevector $\mathbf{q}=\mathbf{k}_f-\mathbf{k}_i$. Such kind of
interference pattern can be observed in Fourier transform scanning
tunneling spectroscopy (FT-STS) nowadays and it has been proved
useful in determining the pairing nature of high-$T_c$
cuprates\cite{hoffman02}. By measuring the QPI patterns and
analyzing them through a convolution of ARPES data together with a
spin-dependent scattering matrix element, Roushan and et
al\cite{yazdani2009} were able to demonstrate the absence of
backscattering in the topological surface states of
$Bi_{1-x}Sb_{x'}$, a key property of helical spin liquid.
Most recently, based on symmetry analysis, a new hexagonal warping
term, which is absent in Bi$_{1-x}$Sb$_{x}$, was suggested by
Fu\cite{fu09} to explain the evolution of the Fermi surface in the
effective 2D helical Dirac model describing the surface band
structure of a family of 3D topological insulators, Bi$_2$X$_3$ (X=Se
or Te). As measured in ARPES experiments, the shape of the Fermi
surface (FS) evolves gradually from a hexagram, through a hexagon, to a
circle of shrinking volume, finally collapsing to the Dirac point
as the Fermi energy is lowered. The new term leads to a strong density
variation around the Fermi surface and also modifies the spin helical
configuration. As a result, its presence can
strongly modify the QPI. In other words, the QPI can provide
direct evidence to test the model.
In this paper we systematically investigate the interference effects
of point-impurity and edge-impurity scattering, respectively,
on the LDOS in a 2D helical Dirac fermion system. We use the $T$-matrix
approach to calculate QPI spectra at a few representative energies,
chosen to emphasize the effects of the hexagonal warping term, in the
presence of a nonmagnetic/magnetic impurity. We also investigate an
edge impurity by using a method generalized from 1D scattering
problems with a potential barrier. Several profound features are
found in this study. In a nutshell, we observe: (i) the backward
scattering by nonmagnetic point impurities is topologically
suppressed, as shown in Ref.~\cite{yazdani2009} with a
simpler empirical analysis, and the dominant interference pattern
becomes one with spatial period $2\pi/|\mathbf{q}_{35}|$ when going away
from the Dirac regime (see Fig.~\ref{fig:CCE_EW} for the definition
of $\mathbf{q}_{35}$); (ii) in the presence of a magnetic impurity, the QPI
of the charge density is very weak while that of the spin density becomes
strong. Near the Dirac regime, spin moments of fermions are flipped
when the scattering wave vector crosses over $|\mathbf{q}|=2|\mathbf{k}_F|$, as
demonstrated in the ($z$-component) spin LDOS [see
Fig.~\ref{fig:dmagz_QPI} (b)]; (iii) the mirror symmetries of the
spin LDOS in the presence of an in-plane magnetic impurity with spin
polarization fixed along the $x$ and $y$ directions can be used to
determine the symmetry of microscopic models and to verify the
presence or absence of the warping term; (iv) in the case of 1D
edge impurities, the Friedel oscillation has no universal decaying
form. Depending on the Fermi energy, we show that the
oscillation decays as $1/\sqrt{|x|}$ if the FS shape is dominated by
the warping term, and as $|x|^{-3/2}$ if the warping term is
negligible.
These special quantum phenomena, sharply in contrast to those of
conventional metals, are mainly associated with the 2D helical liquid.
\begin{figure}[th]
\begin{center}
\includegraphics[scale=0.5]{energycontours}
\caption{Contours of constant energy and the evolution of FS.}
\label{fig:cce}
\end{center}
\end{figure}
\section{The model and $T$-matrix formalism}
We now briefly introduce the formalism used below. The explicit
model we study here is written as \begin{equation}
H(\mathbf{k})=v(k_x\sigma_y-k_y\sigma_x)+\frac{k^2}{2m^*}+\frac{\lambda}{2}(k^3_+
+k^3_-)\sigma_z, \label{eq:model} \end{equation} where $k_{\pm}\equiv k_y \pm i
k_x$. $v$ and $\lambda$ denote Fermi velocity and hexagonal warping
parameter, respectively. The Pauli matrices, $\sigma_i$, act on spin
space of fermionic quasiparticles. The form of $H(\mathbf{k})$ is suitable
for describing the [111] surface band structure near $\Gamma$ point
in SBZ of a 3D topological insulator Bi$_2$X$_3$, and is fixed under
general symmetry considerations, namely, time reversal and $C_{3v}$
symmetries\cite{fu09}. Notice that we have chosen $x$ direction to
be along $\Gamma M$ in SBZ. The $k$-linear term,
$H_0=v(k_x\sigma_y-k_y\sigma_x)$, describes an isotropic 2D helical
Dirac fermions, and the $k$-square term causes particle-hole
asymmetry. More importantly, the $k$-cube warping term,
$H_w=\frac{\lambda}{2}(k^3_+ +k^3_-)\sigma_z$, leads to hexagonal
distortion of the Fermi surface. The resulting two energy bands now
touch at the Dirac point ({\it i.e.}, $\Gamma$ point in SBZ) with
dispersion relation, \begin{equation}
\epsilon_{\pm}(\mathbf{k})=\frac{k^2}{2m^*}\pm\sqrt{v^2k^2+\frac{{\lambda}^2}{4}(k_+^3+k_-^3)^2}.
\end{equation} Defining the characteristic length scale
$b\equiv\sqrt{\lambda/v}$ and energy $E^*\equiv v/b$ introduced by
the hexagonal warping parameter, we draw the contours of constant
energy (CCE) in momentum space in units of $1/b$ and single-particle
density of states (DOS) of $H(\mathbf{k})$, respectively, in
Figs.~\ref{fig:cce} and \ref{fig:dos}.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.45]{rho}
\caption{Density of states based on the model in Eq.~(\ref{eq:model}).}
\label{fig:dos}
\end{center}
\end{figure}
In the numerical evaluation, we have taken $b\equiv 1$,
$v=0.25$, and $\lambda=0.25$ such that the Fermi surface of 0.67\%
Sn-doped Bi$_2$Te$_3$ can be qualitatively reproduced, where the
measured $v=2.55$~eV\,\AA\ and $E_F=1.2E^*\approx 0.3$~eV. Unless
otherwise stated, we will assume particle-hole symmetry, {\it i.e.},
$m^*\rightarrow \infty$. As shown in Figs.~\ref{fig:cce} and
\ref{fig:dos}, when $\omega\ll 0.2$ the DOS is almost linear in
$\omega$ with a more circular FS, while when $\omega\gg 0.2$ the DOS
behaves like $\omega^{-1/3}$ with a hexagram-like FS.
In addition to the CCE, we also present the spin-resolved FS at
two representative energies used throughout this paper,
$E_D=0.05$~eV ($0.2E^*$) and $E_W=0.3$~eV ($1.2E^*$), in
Fig.~\ref{fig:sFS}. They clearly demonstrate the ``spin-helical''
nature of the 2D fermions, which is indeed essential when analyzing
the QPI spectra later. In particular, at $\omega=E_W$, non-vanishing
spin moments along the $z$ direction (out of the surface plane) are
present, mainly due to $\sigma_z$ in the warping term, which is directly
proportional to the electron's spin. Notice that the spin moment must be
in-plane along $\Gamma M$ ({\it i.e.}, at each sharp vertex of the FS),
which is a consequence of the odd parity of $\sigma_z$ under the
mirror operation $y\rightarrow -y$.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.5]{spinangles_d3}
\includegraphics[scale=0.5]{spinangles_d05_zview}
\caption{Spin textures around the Fermi surface at $\omega=0.3$~eV in
(a) and at $\omega=0.05$~eV in (b). \label{fig:sFS}}
\end{center}
\end{figure}
Next, we consider the quasiparticle scattering problem within the
$T$-matrix approach\cite{morr03}. For a general $N$-impurity
problem, the impurity-induced electronic Green's function is given
by \begin{equation} \delta G(\mathbf{r},\mathbf{r}^\prime,\omega) =
\sum_{i,j=1}^{N}G_0(\mathbf{r},\mathbf{r}_i,\omega)
T(\mathbf{r}_i,\mathbf{r}_j,\omega)G_0(\mathbf{r}_j,\mathbf{r}^\prime,\omega),
\label{eq:rspaceT} \end{equation} where the $T$-matrix obeys the
Bethe-Salpeter equation \begin{equation} T(\mathbf{r}_i,\mathbf{r}_j,\omega) =
V_{\mathbf{r}_i}\delta_{\mathbf{r}_i,\mathbf{r}_j}+V_{\mathbf{r}_i}\sum_{k=1}^{N}
G_0(\mathbf{r}_i,\mathbf{r}_k,\omega)T(\mathbf{r}_k,\mathbf{r}_j,\omega), \label{eq:BSeq} \end{equation}
and the Green's function (in momentum space) of the clean system
is \begin{equation} G_0(\mathbf{k},\omega) = [\omega+i\eta-H(\mathbf{k})]^{-1}. \label{eq:GF}
\end{equation}
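Equation~(\ref{eq:GF}) is a direct $2\times2$ matrix inversion at each $\mathbf{k}$. A minimal numerical sketch (our parameter values; the test momenta are illustrative) that also evaluates the spectral function used in Fig.~\ref{fig:CCE_EW}:

```python
import numpy as np

v, lam, eta = 0.25, 0.25, 0.01
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky):
    """2x2 Hamiltonian of Eq. (1), particle-hole symmetric case."""
    return v * (kx * sy - ky * sx) + lam * ((ky + 1j * kx) ** 3).real * sz

def G0(kx, ky, w):
    """Retarded Green's function of Eq. (4)."""
    return np.linalg.inv((w + 1j * eta) * np.eye(2) - H(kx, ky))

def spectral(kx, ky, w):
    """A(k, w) = -Im Tr G0(k, w) / pi."""
    return -np.trace(G0(kx, ky, w)).imag / np.pi

# On the constant-energy contour (|k| = w/v along Gamma-M, where the warping
# vanishes) the spectral weight is resonant; far off shell it is small.
w = 0.3
print(spectral(w / v, 0.0, w), spectral(3.0, 0.0, w))
```

The broadening $\eta$ plays the role of the small imaginary part of the energy quoted in Sec.~III.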
In the case of a single point nonmagnetic (magnetic) impurity
located at the origin, the scattering potential is simply
$V_{\mathbf{r}}=\delta_{\mathbf{r},0}V_{NI}\sigma_0$
$(\delta_{\mathbf{r},0}V_{MI}\vec{\sigma})$, where $\sigma_0$ is a $2\times
2$ identity matrix. Taking advantage of the translational
symmetry of the clean system and the momentum independence of the
scattering potential (for instance,
$V_{\mathbf{k},\mathbf{k}^\prime}=V_{NI}\sigma_0/N\equiv\hat{V}$ in the
nonmagnetic case), one can simplify the formula as \begin{equation}
T(\omega)=[1-\hat{V}\int_{\epsilon_+(\mathbf{k})<\Lambda}\frac{d^2
k}{(2\pi)^2} G_0(\mathbf{k},\omega)]^{-1}\hat{V}, \end{equation} and hence around the
impurity, spatial oscillations of the local density of states are
induced. To see the interference effects due to impurity scattering,
it is more convenient to compute the Fourier-transformed (induced)
local density of states (FT-LDOS), \begin{eqnarray}
\int d^2 r e^{i\mathbf{q}\cdot\mathbf{r}}\delta\rho(\mathbf{r},\omega) &\sim& \delta\rho(\mathbf{q},\omega) \nonumber \\
&=& \frac{i}{2\pi}\int_{\epsilon_+(\mathbf{k})<\Lambda}\frac{d^2
k}{(2\pi)^2}g(\mathbf{k},\mathbf{q},\omega), \end{eqnarray} where
$g(\mathbf{k},\mathbf{q},\omega)=\sum_{i=1}^{2}[\delta
G_{ii}(\mathbf{k},\mathbf{k}+\mathbf{q},\omega)-\delta G^*_{ii}(\mathbf{k}+\mathbf{q},\mathbf{k},\omega)]$.
In general, $\rho(\mathbf{q},\omega)$ is a complex number. If we
separately define the symmetric and antisymmetric parts of the
LDOS as
$\rho^S(x,y,\omega)=[\rho(x,y,\omega)+\rho(-x,-y,\omega)]/2$ and
$\rho^A(x,y,\omega)=[\rho(x,y,\omega)-\rho(-x,-y,\omega)]/2$, the
real and imaginary parts of $\rho(\mathbf{q},\omega)$ simply describe the
symmetric and antisymmetric parts of the LDOS respectively. In the
following discussion of the effects of non-magnetic impurities,
since the real part is at least two orders of magnitude larger
than the imaginary part, we focus on the former. In our
calculation, we have introduced an energy cutoff $\Lambda=4 E^*$
when integrating over momentum. Our main results do not
sensitively depend on the chosen $\Lambda$ as long as $\Lambda$ is
much greater than the impurity scattering strength. Moreover, the
spin-resolved FT-LDOS can be obtained if we separate each
component $i$ when evaluating function $g(\mathbf{k},\mathbf{q},\omega)$, {\it i.e.},
$i=1$ for spin-up and $i=2$ for spin-down.
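The simplified $T$-matrix above can be illustrated with a deliberately coarse momentum sum. This is a schematic stand-in for the continuum integral (the actual calculation uses a far finer grid and proper integration measure), with the cutoff $\Lambda = 4E^*$ and the impurity strength $V_0=0.05$~eV from Sec.~III:

```python
import numpy as np

v, lam, eta = 0.25, 0.25, 0.01
Lam = 4 * v  # cutoff Lambda = 4 E*, with b = 1 so E* = v
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky):
    return v * (kx * sy - ky * sx) + lam * ((ky + 1j * kx) ** 3).real * sz

def G0(kx, ky, w):
    return np.linalg.inv((w + 1j * eta) * np.eye(2) - H(kx, ky))

w, V0 = 0.3, 0.05
Vhat = V0 * np.eye(2)

# Crude average of G0 over the region eps_+(k) < Lambda (coarse 61x61 grid).
ks = np.linspace(-3.0, 3.0, 61)
acc, n = np.zeros((2, 2), dtype=complex), 0
for kx in ks:
    for ky in ks:
        if np.abs(np.linalg.eigvalsh(H(kx, ky))).max() < Lam:
            acc += G0(kx, ky, w)
            n += 1
G0avg = acc / n  # normalised sum standing in for the continuum integral

# T(w) = [1 - V int G0]^-1 V; close to Vhat for weak scattering (Born limit).
T = np.linalg.inv(np.eye(2) - Vhat @ G0avg) @ Vhat
print(np.round(T, 4))
```

For the weak $V_0$ used here the correction to the bare potential is small, but the same structure holds at arbitrary scattering strength, which is the virtue of the $T$-matrix resummation.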
In principle, for the case of edge-impurity scattering, one can
use Eqs.~(\ref{eq:rspaceT})-(\ref{eq:GF}) to compute the LDOS from
$\delta\rho(\mathbf{r},\omega)$=-Im$\sum_{i}\delta
G_{ii}(\mathbf{r},\mathbf{r},\omega)/\pi$ in a straightforward manner. However,
it is more convenient, without loss of generality, to treat this
scattering problem by analogy with the elementary one-dimensional
scattering problem with a barrier potential, working directly from
the wave-function point of view. Our method is briefly sketched in
section III C.
\section{Numerical results}
We compute the induced LDOS at selected $\omega$,
$\delta\rho(\mathbf{q},\omega)$, for the nonmagnetic/magnetic impurity
case, and $\rho(q_x,\omega)$ for the edge impurity case. Our
numerical results are reported for a representative potential
scattering strength, $V_{NI}=V_{MI}=V_0=0.05$~eV. We have checked that
the observed main features are insensitive to the chosen imaginary
part of the energy, $\eta=10$~meV. Also, in our analysis a $400 \times
400$ momentum grid is used in the $(-\pi,\pi)\times(-\pi,\pi)$ $k$ space
and $200$ discrete points are displayed within $(-\pi,\pi)$ along
each direction in $q$ space. Note that the range of the SBZ relevant
to experiments would be about 5.5 times larger than $2\pi$.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.65]{contour_d3}
\caption{The spectral function $\mathcal{A}(\mathbf{k},\omega)$ at
$\omega=0.3$~eV with the three most probable scattering wave vectors.
Note that the wave vector is in units of $\pi b^{-1}$ and brighter
regions correspond to higher spectral weight.} \label{fig:CCE_EW}
\end{center}
\end{figure}
\subsection{Nonmagnetic point impurity}
We first consider the interference patterns in a 2D helical liquid
with a nonmagnetic point impurity. We start with
$\omega=E_W=0.3$~eV, far away from the Dirac point ($\omega=0$), where
the shape of the FS is like a hexagram. This is just the energy
range that experiments may reach without subtle chemical tuning
near the surface of a 3D topological insulator. As we will see
later, this energy range indeed provides a better chance to reveal
the topological nature of the helical fermion system. In
Fig.~\ref{fig:CCE_EW}, the spectral function,
$\mathcal{A}(\mathbf{k},\omega)=-\frac{1}{\pi}\text{Im}[\text{Tr}G_0(\mathbf{k},\omega)]$
at $\omega=0.3$~eV, is plotted with the most probable scattering
vectors on top, which are expected to connect regions of high joint
DOS on a constant-energy contour.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.45]{tnonmag_cos_d3}
\includegraphics[scale=0.45]{tnonmag_cos_d05}
\caption{The real part of the Fourier transform of the local density of
states in the case of a single nonmagnetic point impurity at (a)
$\omega=0.3$~eV, and (b) $\omega=0.05$~eV.} \label{fig:nonmag_QPI}
\end{center}
\end{figure}
As shown in Fig.~\ref{fig:nonmag_QPI} (a), the interference pattern
includes six sharp peaks along $\Gamma K$ outside a complicated,
hexagon-shaped pattern centered at $\Gamma$ and other six weaker
peaks along $\Gamma M$ slightly inside the hexagon. These two sets
of peaks simply correspond to
$(\pm\mathbf{q}_{13},\pm\mathbf{q}_{35},\pm\mathbf{q}_{51})$ and
$(\pm\mathbf{q}_{12},\pm\mathbf{q}_{23},\pm\mathbf{q}_{34})$, respectively, as indicated
in Fig.~\ref{fig:CCE_EW}. However, the most prominent feature we
observe here is that the expected peaks corresponding to
$(\pm\mathbf{q}_{14},\pm\mathbf{q}_{25},\pm\mathbf{q}_{36})$ are entirely absent. This
apparent puzzle can be understood as a consequence of the absence of
backscattering between two time-reversal-connected partners, as shown
in \cite{yazdani2009}. Suppose in our scattering problem,
$|\mathbf{k},\uparrow\rangle$ is the incoming state, while its
time-reversal partner,
$|-\mathbf{k},\downarrow\rangle\propto\mathcal{T}|\mathbf{k},\uparrow\rangle$, is the
outgoing state. $\mathcal{T}$ is the time-reversal operator with the
property $\mathcal{T}^2=-1$. For any time-reversal invariant and
hermitian operator $\hat{V}$ (such as our nonmagnetic scattering
potential), we have \begin{eqnarray}
\langle-\mathbf{k},\downarrow|\hat{V}|\mathbf{k},\uparrow\rangle &=& \langle
\mathcal{T}(\mathbf{k},\uparrow)|\hat{V}(\mathbf{k},\uparrow)\rangle=\langle\mathcal{T}\hat{V}
(\mathbf{k},\uparrow)|\mathcal{T}^2(\mathbf{k},\uparrow)\rangle \nonumber \\
&=& -\langle\mathbf{k},\uparrow|\mathcal{T}\hat{V}|\mathbf{k},\uparrow\rangle^* =-\langle\mathbf{k},\uparrow|\hat{V}\mathcal{T}|\mathbf{k},\uparrow\rangle^* \nonumber \\
&=& -\langle\mathbf{k},\uparrow|\hat{V}|-\mathbf{k},\downarrow\rangle^* =
-\langle -\mathbf{k},\downarrow|\hat{V}^\dagger|\mathbf{k},\uparrow\rangle \nonumber \\
&=& -\langle -\mathbf{k},\downarrow|\hat{V}|\mathbf{k},\uparrow\rangle =0.
\label{eq:forbidden} \end{eqnarray} In other words, backward scattering
between time-reversal partners is not allowed. This naturally
explains the absence of the interference peaks corresponding to
$\mathbf{q}_{36}$ (and others of the same type). Such behavior sharply
distinguishes the 2D helical fermion system from a conventional
metal. In addition, it is worth mentioning here that the
angles of our observed interference peaks, $\mathbf{q}_{35}$, differ
from those in the experiment by Zhang {\it et
al.}\cite{zhang09}, which exhibits six peaks along $\Gamma M$,
instead of along $\Gamma K$ as displayed in Fig.~\ref{fig:nonmag_QPI}(a).
We postpone this issue to the discussion section.
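The vanishing of this matrix element can be checked numerically. The following minimal sketch (not part of the original calculation; the momentum values are illustrative) builds the positive-helicity spinor of the unwarped Dirac Hamiltonian, applies $\mathcal{T}=-i\sigma_y K$, and verifies that a spin-independent, time-reversal-invariant potential gives exactly zero backscattering amplitude, while a $\sigma_z$ (magnetic) potential does not:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def helical_state(kx, ky):
    """Positive-energy eigenspinor of the unwarped helical Hamiltonian
    H(k) = kx*sigma_y - ky*sigma_x (v = 1)."""
    w, vec = np.linalg.eigh(kx * sy - ky * sx)
    return vec[:, np.argmax(w)]

def time_reversed(psi):
    """Time reversal T = -i sigma_y K acting on a spinor (T^2 = -1)."""
    return -1j * (sy @ psi.conj())

kx, ky = 0.7, 0.3                     # illustrative incoming momentum
psi = helical_state(kx, ky)           # incoming state at k
psi_T = time_reversed(psi)            # its time-reversal partner at -k

# Time-reversal-invariant (nonmagnetic) potential: amplitude vanishes.
amp_nonmag = np.vdot(psi_T, 0.1 * I2 @ psi)

# A sigma_z (magnetic) potential breaks T: backscattering is allowed.
amp_mag = np.vdot(psi_T, 0.1 * sz @ psi)

print(abs(amp_nonmag))  # ~0, up to rounding
print(abs(amp_mag))     # nonzero
```

The nonmagnetic amplitude vanishes identically for any spinor, which is the Kramers structure behind Eq.~(\ref{eq:forbidden}).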
When the Fermi level is increased further, the vertices become sharper
while the joint DOS at fixed $\mathbf{q}_{35}$ is suppressed. As a
result, the six peaks seen in Fig.~\ref{fig:nonmag_QPI}(a) diminish
and are replaced by six other peaks at fixed $\mathbf{q}'$,
corresponding to the scattering vectors
connecting second-neighbor convex parts of the FS
(see Fig.~\ref{fig:nonmag_0375}), which were observed in recent
experiments\cite{zhang09}. On the other hand, when the Fermi level
gets closer to the Dirac point, for instance at $\omega=0.05$~eV, the
interference pattern becomes almost isotropic, with noticeably stronger
weight within a circular region, as shown in
Fig.~\ref{fig:nonmag_QPI}(b). The region can be estimated as a disk
whose radius is twice that of the corresponding circular FS of the
system. This is consistent with our CCE
picture (see Fig.~\ref{fig:cce}), in which no specific finite $\mathbf{q}$
vectors can be picked out as $\omega$ approaches the Dirac
point.
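The evolution of the constant-energy contour underlying this discussion can be illustrated with a small numerical sketch. The parameter values ($v=1$, $\lambda=4$) are illustrative rather than fitted, and we take $k_\pm=k_x\pm ik_y$ so that the warping amplitude is $\lambda k^3\cos 3\theta$: close to the Dirac point the contour is essentially circular, while at higher energy it becomes strongly anisotropic, bulging along the directions where the warping vanishes.

```python
import numpy as np

v, lam = 1.0, 4.0   # illustrative values, not fitted to a real material

def energy(k, theta):
    """Upper band of H = v(kx sy - ky sx) + lam k^3 cos(3 theta) sz,
    with theta measured from the x-axis."""
    return np.sqrt((v * k) ** 2 + (lam * k ** 3 * np.cos(3 * theta)) ** 2)

def contour_radius(omega, theta, kmax=10.0):
    """Bisection solve of energy(k, theta) = omega (energy grows with k)."""
    lo, hi = 0.0, kmax
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if energy(mid, theta) < omega:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for omega in (0.05, 0.5):
    r_max = contour_radius(omega, np.pi / 6)  # direction where warping vanishes
    r_min = contour_radius(omega, 0.0)        # direction of maximal warping
    print(omega, r_max / r_min)               # ~1 near the Dirac point, >1 at high energy
```

The ratio of the two radii quantifies the contour anisotropy responsible for the energy dependence of the dominant scattering vectors.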
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.45]{nonmag_cos_up_0375.eps}
\caption{The real part of the Fourier transform of the local density of
states in the case of a single nonmagnetic point impurity at
$\omega=0.375$~eV. \label{fig:nonmag_0375}}
\end{center}
\end{figure}
\subsection{Classical magnetic point impurity}
Next, we study the QPI induced by a time-reversal symmetry breaker,
a magnetic impurity\cite{liu09}. We focus on the effects of a
classical magnetic impurity so that the Kondo physics is ignored. In
the following, after describing general features of the QPI with a
magnetic impurity,
we will discuss the cases separately when the impurity moment is
fixed along $x$, $y$, and $z$ directions.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.4]{tmagz_cos_d3}
\includegraphics[scale=0.4]{tmagz_cos_d05}
\caption{The real part of the Fourier transform of the charge local
density of states in the case of a single magnetic point impurity
with its spin polarized along the $z$-axis at (a) $\omega=0.3$~eV,
and (b) $\omega=0.05$~eV.} \label{fig:tmagz_QPI}
\end{center}
\end{figure}
Unlike a nonmagnetic impurity, a weak magnetic impurity has
very little effect on the charge density of the system:
instead of
$\delta\rho_\uparrow(\mathbf{q},\omega)=\delta\rho_\downarrow(\mathbf{q},\omega)$
as in the nonmagnetic case, we have
$\delta\rho_\uparrow(\mathbf{q},\omega)\approx-\delta\rho_\downarrow(\mathbf{q},\omega)$.
This effect can be easily understood. Suppose we are considering
an impurity moment along the $z$-direction, then the spin-up
electrons and spin-down electrons see two scattering potentials of
opposite signs. In the lowest order of perturbation theory, the
scattering amplitude of the spin-up and spin-down electrons thus
differ by a minus sign so that the total interference pattern of the
charge density vanishes almost everywhere. The same argument no
longer holds if higher orders of perturbation are included. For the
model considered here, we can explicitly prove the above statement.
Assuming $V\ll\omega$, the approximation $T(\omega)\approx\hat{V}$
becomes sufficiently accurate. In this case (impurity moment along
$z$-direction), we have \begin{widetext}\begin{eqnarray}
\nonumber&&{\text{Tr}}[\delta{G}(\mathbf{q},\omega)]\approx\int\frac{d^2k}{(2\pi)^2}
\text{Tr}[G_0(\mathbf{k},\omega)\hat{V}G_0(\mathbf{k+q},\omega)]\\
\nonumber&=&V\int\frac{d^2k}{(2\pi)^2}\frac{\text{Tr}
[(\omega\sigma_0-k_y\sigma_x+k_x\sigma_y+\frac{\lambda}{2}
(k_+^3+k_-^3)\sigma_z)\sigma_z(\omega\sigma_0-(k_y+q_y)\sigma_x+(k_x+q_x)\sigma_y+\frac{\lambda}{2}((k+q)_+^3+(k+q)_-^3)\sigma_z)]}{((\omega+i\eta)^2-\epsilon_+^2(\mathbf{k}))((\omega+i\eta)^2-\epsilon_+^2(\mathbf{k+q}))}\\
\nonumber&=&V\int\frac{d^2k}{(2\pi)^2}\,\frac{2\left[\frac{\omega\lambda}{2}\left(k_+^3+k_-^3+(k+q)_+^3+(k+q)_-^3\right)+ik_y(k_x+q_x)-ik_x(k_y+q_y)\right]}{((\omega+i\eta)^2-\epsilon_+^2(\mathbf{k}))((\omega+i\eta)^2-\epsilon_+^2(\mathbf{k+q}))}\\
&=&0.\end{eqnarray}
\end{widetext}
The last equality is achieved by shifting the origin to
$(q_x,q_y)$, changing the integrated variables $\mathbf{k}$ to
$-\mathbf{k}$, and taking the advantage that
$\epsilon_+(\mathbf{k})=\epsilon_+(\mathbf{-k})$. Similar
derivations hold for the impurity moment along $x$ and
$y$-directions. If the second order term $\mathcal{O}(V^2)$ is included
in the $T$-matrix, the cancellation becomes no longer valid, and
there is indeed small but finite charge LDOS pattern in the system.
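The exact first-order cancellation can also be checked numerically, pair by pair: for every $\mathbf{k}$, the Born integrand at $\mathbf{k}$ and at $-\mathbf{k}-\mathbf{q}$ cancel for the magnetic ($\sigma_z$) vertex but not for the charge (identity) vertex. A minimal sketch with illustrative parameter values ($v=1$, $\lambda=0.5$, $\omega=0.3$, $\eta=0.05$), not those of any fitted material:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

v, lam, omega, eta = 1.0, 0.5, 0.3, 0.05   # illustrative parameters

def H(kx, ky):
    # H(k) = v(kx sy - ky sx) + (lam/2)(k_+^3 + k_-^3) sz,  k_pm = kx +- i ky
    warp = 0.5 * lam * ((kx + 1j * ky) ** 3 + (kx - 1j * ky) ** 3).real
    return v * (kx * sy - ky * sx) + warp * sz

def G0(kx, ky):
    return np.linalg.inv((omega + 1j * eta) * I2 - H(kx, ky))

def born_term(kx, ky, qx, qy, V):
    # First-order (Born) integrand Tr[G0(k) V G0(k+q)]
    return np.trace(G0(kx, ky) @ V @ G0(kx + qx, ky + qy))

qx, qy = 0.4, -0.2
rng = np.random.default_rng(0)
mag, chg = [], []
for _ in range(5):
    kx, ky = rng.uniform(-1, 1, 2)
    mag.append(abs(born_term(kx, ky, qx, qy, sz)
                   + born_term(-kx - qx, -ky - qy, qx, qy, sz)))
    chg.append(abs(born_term(kx, ky, qx, qy, I2)
                   + born_term(-kx - qx, -ky - qy, qx, qy, I2)))

print(max(mag))  # ~0: exact pairwise cancellation for the sigma_z vertex
print(max(chg))  # nonzero: no such cancellation for the charge vertex
```

The pairwise cancellation uses only $H(-\mathbf{k})=-H(\mathbf{k})$ and $\epsilon_+(\mathbf{k})=\epsilon_+(-\mathbf{k})$, exactly as in the derivation above.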
In Fig.~\ref{fig:tmagz_QPI}, we plot the numerical results of
$\delta\rho(\mathbf{q},\omega)$ at $\omega=0.05$ and $0.3$~eV. It is clear that the
amplitude of the charge density variation caused by magnetic impurities in
Fig.~\ref{fig:tmagz_QPI} is two orders of magnitude smaller than that
produced by nonmagnetic impurities in Fig.~\ref{fig:nonmag_QPI}.
Therefore, for the magnetic impurity case, we should choose a
time-reversal breaking observable to study the interference, and a
natural choice is the spin local density of states (SLDOS),
defined
by\begin{eqnarray}\vec{S}(\mathbf{r},\omega)=-\frac{1}{\pi}\text{Im}[\int{dt}\,\theta(t)
\langle{}c_\alpha(\mathbf{r},t)\vec{\sigma}^{\alpha\beta}c^\dag_\beta(\mathbf{r},0)
\rangle{}e^{i\omega{t}}],\end{eqnarray} where $c^\dagger_{\alpha}(\mathbf{r},t)$
creates an electron with spin polarization $\alpha$ at position
$\mathbf{r}$ and time $t$. From now on we will only focus on the FT of
the $z$-component SLDOS.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.45]{dmagz_cos_d3}
\includegraphics[scale=0.45]{dmagz_cos_d05}
\caption{The real part of the Fourier transform of the spin local
density of states in the case of a single magnetic point impurity
with its spin polarized along the $z$-axis at (a) $\omega=0.3$~eV,
and (b) $\omega=0.05$~eV.} \label{fig:dmagz_QPI}
\end{center}
\end{figure}
In the case of nonmagnetic impurity, we have demonstrated the
absence of interference between $|\mathbf{k},\uparrow\rangle$ and
$|\mathbf{-k},\downarrow\rangle$, which form a time-reversal pair.
Physically, a time-reversal breaker such as a magnetic impurity can
lift this ban on the backscattering. Similar to
Eq.~(\ref{eq:forbidden}), it is easy to show that
$\langle{-\mathbf{k}},\downarrow|\hat{V}|\mathbf{k},\uparrow\rangle\neq
0$ due to $\mathcal{T}\sigma_i\mathcal{T}^{-1}=-\sigma_i$. This feature is
universal in all of our figures for the magnetic impurity. Taking
Fig.~\ref{fig:dmagz_QPI}(a) as an example, we can compare it with
Fig.~\ref{fig:nonmag_QPI}(a) and notice that, although they share
common features, the points in the FT-SLDOS associated with the
$2\mathbf{k}_F$ backscattering vectors
($\pm\mathbf{q}_{14},\pm\mathbf{q}_{25},\pm\mathbf{q}_{36}$; see Fig.~\ref{fig:CCE_EW})
are present only in the magnetic scattering. We can also compare
Fig.~\ref{fig:dmagz_QPI}(b) for magnetic scattering with
Fig.~\ref{fig:nonmag_QPI}(b) for nonmagnetic scattering when
$\omega=0.05$eV. In the latter case, the interference strength
universally decays quickly after reaching the boundary of the
circle; while in the former case, the interference strength reaches
a negative peak across the boundary, indicating a scattering that
flips spin moments of the quasiparticles.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.4]{dmagx_cos_d3}
\includegraphics[scale=0.4]{dmagx_sin_d3}
\caption{The (a) real part and the (b) imaginary part of the
Fourier transform of the spin local density of states in the case of
a single magnetic point impurity with its spin polarized along the
$y$-axis at $\omega=0.3$~eV.} \label{fig:dmagy_QPI}
\end{center}
\end{figure}
Now, we discuss the QPI caused by magnetic impurities with in-plane
magnetic moments. In this case, a unique feature arises in the
FT-SLDOS. As shown in Fig.~\ref{fig:dmagy_QPI} and in
Fig.~\ref{fig:dmagx_QPI}, at $\omega=0.3$~eV we plot the real and
imaginary parts of the FT-SLDOS separately. As for the LDOS, the
real and imaginary parts correspond to the symmetric and
antisymmetric parts of $S_z(x,y,t)$, respectively. For a magnetic
impurity with its moment along the $z$-axis, the symmetric part
dominates and the antisymmetric part is either vanishing or orders
of magnitude smaller than the symmetric part. However, here, as shown in
Fig.~\ref{fig:dmagy_QPI}, the antisymmetric part is about three
times larger than the symmetric part. This result can be understood
as follows. An inversion transformation in the two-dimensional
plane, i.e., $(x,y)\rightarrow(-x,-y)$, takes
$\hat{\sigma}_z(x,y,t)\rightarrow \hat\sigma_z(-x,-y,t)$ and
$\hat\sigma_{x,y}(x,y,t)\rightarrow-\hat \sigma_{x,y}(-x,-y,t)$.
Therefore, under this transformation, the Hamiltonian without the
warping term, in the presence of magnetic impurities with in-plane
magnetic moments, transforms as $H(V_0)\rightarrow H(-V_0)$, where
$V_0$ is the coupling strength of the magnetic impurity. Thus, from
this symmetry, if
we consider $S_z(x,y,t)$ as a function of $V_0$ as well, we have
$S_z(x,y,t,V_0) = S_z(-x,-y,t,-V_0)$. Therefore, the first-order correction
from the scattering potential vanishes for the symmetric part. In the presence of the warping term,
there is no such exact symmetry argument; nevertheless, the symmetric part remains much smaller than the antisymmetric part. In
the following, we first focus on the antisymmetric part.
Fig.~\ref{fig:dmagy_QPI}(b) shows the (antisymmetric)
FT-interference pattern for the impurity moment along the $y$-axis
at $\omega=0.3$eV. We find that the strongest interference appears
at wave vector $\pm\mathbf{q}_{51}$ in Fig.~\ref{fig:dmagy_QPI}(b)
($q_{ij}$ is defined in Fig.~\ref{fig:CCE_EW}). Moreover,
$\mathbf{q}_{13}$ and $\mathbf{q}_{35}$ do not appear as strong
peaks, in contrast with the cases of the nonmagnetic impurity and
the magnetic impurity with spin along the $z$-axis. In addition, a remarkable
feature in the interference pattern is that
$S_z^A(\mathbf{q},\omega)$ is
zero on the line $q_y=0$.
This is caused by an exact symmetry of the
system which dictates $S_z(x,y,t)=-S_z(x,-y,t)$. This point will be
discussed at length later. Fig.~\ref{fig:dmagx_QPI}(b) shows the
(antisymmetric) FT-interference pattern for the impurity spin along
the $x$-axis at $\omega=0.3$eV. We can see that the strongest
interference is associated with the vertex-to-vertex wave vectors
$\mathbf{q}_{13}$ and $\mathbf{q}_{35}$. The strong peak at
$\mathbf{q}_{51}$ does not appear and we have $S^A_z(0,q_y,\omega)$
vanishing. This result stems from an approximate equality
$S_z(x,y,t)\approx{}S_z(x,-y,t)$, a point that will be discussed
next.
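The vanishing of the FT on a whole line follows from the real-space antisymmetry alone, and can be illustrated with synthetic data: any real field obeying $f(x,y)=-f(x,-y)$ has a Fourier transform that vanishes identically on the line $q_y=0$, because the $y$-sum of the field is zero for every $x$. A toy stand-in for $S_z$ (random data, not the actual SLDOS):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Synthetic real field standing in for S_z(x, y): rows indexed by y,
# built so that every column sums to zero, i.e. f is antisymmetric in y.
g = rng.standard_normal((n // 2, n))
f = np.concatenate([g, -g[::-1]], axis=0)   # f(x, -y) = -f(x, y)

F = np.fft.fft2(f)        # axis 0 = y, axis 1 = x
# The q_y = 0 row of the transform is the FT over x of the y-sum of f,
# which vanishes identically for an antisymmetric field.
print(np.max(np.abs(F[0, :])))  # ~0
```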
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.4]{dmagy_cos_d3}
\includegraphics[scale=0.4]{dmagy_sin_d3}
\caption{The (a) real part and the (b) imaginary part of the
Fourier transform of the spin local density of states in the case of
a single magnetic point impurity with its spin polarized along the
$x$-axis at $\omega=0.3$~eV.} \label{fig:dmagx_QPI}
\end{center}
\end{figure}
We can understand the above detailed features of the SLDOS from a
symmetry analysis of the model. The model obviously has
time-reversal symmetry and three-fold rotation symmetry.
Moreover, the model also preserves the $y\rightarrow-y$ mirror
symmetry ($m_y$) but breaks the $x\rightarrow-x$ mirror symmetry
($m_x$), as can be seen in the warping term. Explicitly, the $m_x$
operator takes $k_\pm$ to $k_\mp$ and $\sigma_z$ to $-\sigma_z$,
which changes the sign of the warping term. Now, let us consider
the system in the presence of a magnetic impurity with its spin
along $y$-axis. Since $s_y\rightarrow s_y$ under ${m_y}$, the
whole system still preserves the mirror symmetry $m_y$. This
symmetry directly leads to \begin{equation} S_z(x,y,\omega)=-S_z(x,-y,\omega).
\end{equation} This symmetry property is clearly demonstrated in
Figs.~\ref{fig:dmagy_QPI}(a) and (b). On the other hand, if the
impurity spin is fixed along the $x$-direction, the system does
NOT have $m_x$ symmetry and we have
$S_z(x,y,\omega)\neq-S_z(-x,y,\omega)$. This feature is also
demonstrated in Fig.~\ref{fig:dmagx_QPI}(a). If we had
$S_z(x,y,\omega)=-S_z(-x,y,\omega)$, we should have
$S_z^{A(S)}(q_x,q_y,\omega)=-S_z^{A(S)}(-q_x,q_y,\omega)$ or
$S_z(x,y,\omega)=S_z(-x,y,\omega)=0$. However, in
Fig.~\ref{fig:dmagx_QPI}(a), it is clear that
$S_z^{S}(q_x,q_y,\omega)=S_z^{S}(-q_x,q_y,\omega)\neq0$.
The above symmetry is a very important property of the model. In
fact, to simply account for the shape of FS, we may also
artificially make the Fermi velocity strongly angle dependent while
keeping the same spin texture where all spins on the FS are in-plane
without tilting. For instance, we can write \begin{equation}
\tilde{H}(\mathbf{k})=v(\mathbf{k})(k_x\sigma_y-k_y\sigma_x)+\frac{k^2}{2m^*},
\label{model2} \end{equation} where
$v(\mathbf{k})=\sqrt{v^2+\lambda^2k^4\sin^2(3\theta)}$, with $\theta$ being
the azimuthal angle with respect to $x$ axis ($\Gamma M$). This
model (the in-plane model) has the same dispersion as the model in
Eq.~(\ref{eq:model}), but its spin texture is purely in-plane. The
symmetries of the SLDOS can help us distinguish these two
models. For example, one can check two relations
experimentally: $S_z(x,y,\omega)=-S_z(x,-y,\omega)$ for the impurity
spin polarized along the $y$-axis, and
$S_z(x,y,\omega)=-S_z(-x,y,\omega)$ for the impurity spin polarized
along the $x$-axis. If both hold, then the in-plane model suffices;
if only one holds, an out-of-plane spin (warping)
term is needed. In Table~\ref{table1}, we list the properties of the SLDOS in the
two models, Eq.~(\ref{eq:model}) and Eq.~(\ref{model2}), in the presence
of different types of impurities and under basic symmetry
operations.
\begin{table}
\begin{tabular}{cc}
\begin{tabular}{|c|c|c|c|}
\hline
$S_z$ & $m_x$ & $m_y$ & $C_3$ \\
\hline
$s_x$ & $\times$ & $\approx1$ & $\times$ \\
$s_y$ & $\times$ & -1 & $\times$ \\
$s_z$ & $\approx1$ & $\approx1$ & 1 \\
\hline
\end{tabular}
&
\begin{tabular}{|c|c|c|c|}
\hline
$S_z$ & $m_x$ & $m_y$ & $C_3$ \\
\hline
$s_x$ & -1 & $\approx1$ & $\times$ \\
$s_y$ & $\approx1$ & -1 & $\times$ \\
$s_z$ & $\approx1$ & $\approx1$ & 1 \\
\hline
\end{tabular}\\
(a)& (b)\\
\end{tabular}
\caption{The symmetry of $S_z(x,y,t)$ under symmetry operations of
mirror-x ($m_x$), mirror-y ($m_y$) and three-fold rotation about
z-axis ($C_3$) with impurity spin along three axes. (a) is for the
model in Eq.(\ref{eq:model}) and (b) the model in
Eq.~(\ref{model2}). `$1$' means symmetric; `$-1$' means
antisymmetric; `$\times$' means neither of the above. The
`$\approx$' indicates that the symmetry (antisymmetry) holds only in
the weak-impurity-strength approximation.} \label{table1}\end{table}
\subsection{Nonmagnetic edge impurity}
Step atomic roughness on a surface may be locally idealized into
an edge impurity, that is, an infinite line with different but
uniform potential on two sides. An edge impurity in a 2D
conventional Fermi gas is known to give rise to Friedel
oscillations {\it at fixed energy} in the LDOS. This oscillation can
simply be understood as an interference pattern between the
incoming plane wave and the reflected wave by the 1D edge. The
major contribution comes from the two opposite $\mathbf{k}$-points
on the constant energy contour, $\pm\mathbf{k}_F$, and the oscillation
has the wavenumber $2|\mathbf{k}_F|$ while decaying as a form
$1/\sqrt{d}$ where $d$ is the distance from the edge
impurity\cite{crommie93}. The same picture is no longer valid if
the states at $\mathbf{k}$ and $-\mathbf{k}$ do not scatter into each other, as is
the case for the surface states of a 3D topological insulator, where
backscattering is forbidden by time-reversal symmetry.
The oscillation is then expected to decay much faster and
thus to be practically absent in an STM experiment. The `absence' of the
Friedel oscillation is considered a signature of (spin) helical
Dirac fermion systems. However, the
oscillation has been observed in STM experiments\cite{alpichshev09}. The apparent discrepancy between
theory and experiment was soon claimed to be superficial and was
explained by the hexagram shape of the FS\cite{fu09}. In this
subsection, an exact calculation is performed to test this physical
picture.
We consider an edge impurity fixed along the $y$-axis; the
system has zero potential for $x<0$ and a uniform potential $V$ for
$x>0$. A general quantum state on the left-hand side (LHS) takes
the form \begin{equation}
\psi(k_x,k_y;x,y)=\frac{\phi_0(k_x,k_y;x,y)+r\phi_0(-k_x,k_y;x,y)}
{\sqrt{1+|r|^2}}, \end{equation} and the LDOS is \begin{equation}\label{eq:edge_LDOS}
\rho(x,\omega)=\int_{k_x>0}\frac{d^2k}{(2\pi)^2}|\psi(k_x,k_y;x,y)|^2
\delta(\omega-\epsilon_+(k_x,k_y)). \end{equation} The reflection amplitude
$r$ can be obtained together with the transmission amplitude $t$
by matching the boundary condition at the edge, namely, \begin{equation}
\phi_0(k_x,k_y;0,y)+r\phi_0(-k_x,k_y;0,y)=t\phi_0(k''_x,k_y;0,y),
\end{equation} where $k''_x$ is fixed by the energy conservation
$\epsilon(k_x,k_y)=\epsilon(k''_x,k_y)-V$.
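This $2\times2$ matching problem can be solved directly. The sketch below is a minimal illustration (unwarped $\lambda=0$ Hamiltonian, illustrative values of $\omega$, $k_y$, and $V$, and the text's convention $\epsilon(k''_x,k_y)=\omega+V$): it solves for $r$ and confirms that backscattering vanishes exactly at normal incidence ($k_y=0$) but not at oblique incidence.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def spinor(kx, ky):
    """Positive-energy spinor of the unwarped Hamiltonian kx sy - ky sx."""
    w, vec = np.linalg.eigh(kx * sy - ky * sx)
    return vec[:, np.argmax(w)]

def r_amplitude(omega, ky, V):
    """Solve phi(kx) + r phi(-kx) = t phi(kx'') for the reflection
    amplitude r, with eps(k) = eps(k'') - V, i.e. |k''| = omega + V."""
    kx = np.sqrt(omega ** 2 - ky ** 2)          # incident momentum, |k| = omega
    kxpp = np.sqrt((omega + V) ** 2 - ky ** 2)  # transmitted momentum
    A = np.column_stack([spinor(-kx, ky), -spinor(kxpp, ky)])
    r, t = np.linalg.solve(A, -spinor(kx, ky))
    return r

V = -0.1
print(abs(r_amplitude(0.5, 0.0, V)))  # ~0: no backscattering at normal incidence
print(abs(r_amplitude(0.5, 0.3, V)))  # nonzero at oblique incidence
```

At $k_y=0$ the spinors at $\pm k_x$ are orthogonal, so $r$ vanishes identically; this is the wavefunction-level statement of the forbidden backscattering.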
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.45]{FT_omega_d5}
\includegraphics[scale=0.45]{contour_d5_edge}
\caption{(a) The Fourier transform of the edge-impurity ($V=-0.1$)
interference pattern. (b) The three $\mathbf{k}$'s that dominate the
interference pattern on the energy contour at $\omega=0.5$~eV.}
\label{fig:FT_edge_omega05}
\end{center}
\end{figure}
Fig.~\ref{fig:FT_edge_omega05}(a) shows the FT-LDOS for the LHS of
the edge impurity at $\omega=0.5$~eV. We can clearly identify the
two interference peaks associated with $q_x=2\mathbf{k}_{2}$ and
$q_x=2\mathbf{k}_3$, defined in Fig.~\ref{fig:FT_edge_omega05}(b). No
feature is present at $q_x=2\mathbf{k}_1$, reflecting the absence of
backscattering. The spatial dependence of the oscillation, i.e., the
real-space LDOS, is shown in Fig.~\ref{fig:real_edge}(a). A clear
beating pattern can be seen with spatial period $\sim
(\mathbf{k}_3-\mathbf{k}_2)^{-1}$. The oscillation decays like $1/|x|^{\alpha}$
where $\alpha\sim0.46$, qualitatively matching the theoretical
prediction in the large $|x|$ limit\cite{crommie93,fu09b}. When
$|x|$ is large enough, the stationary points approximation tells
us that, if the edge impurity is along the $y$-axis, the
interference pattern is dominated by the $\mathbf{k}$-points where $k_x$
reaches local minimum or maximum. In our model, $\mathbf{k}_{2(3)}$ are
the points corresponding to the minimum (maximum) of $k_x$ on the
contour of constant energy. However, the existence of such extrema
depends on $\omega$. If $\omega$ is small enough, the extrema
$\mathbf{k}_{2,3}$ disappear and we are left with only $\mathbf{k}_1$. Since
$\mathbf{k}_1$ is not allowed to scatter into its time-reversal partner,
the decay of the Friedel oscillation
becomes $|x|^{-3/2}$ at large $|x|$
[see Fig.~\ref{fig:real_edge}(b)]. There is therefore no
universal functional form for the oscillation decay; it depends
on the values of the parameters. There are two inherent length scales
in the model: $b=\sqrt{\lambda/v}$ and $b'=v/\omega$. If
$b>1.48b'$, the energy contour is a hexagram and a $1/\sqrt{|x|}$
decay of the oscillation appears, while if $b\ll{b}'$, we have a
nearly circular FS and the decay of the oscillation takes the form
$\rho(x)\sim|x|^{-3/2}$. In the intermediate range, the
oscillation varies; for example, at $b=1.2b'$ ($\omega=0.3$~eV), the oscillation
decays exponentially for $|x|<100b$ but close to $|x|^{-3/2}$ for
$|x|>200b$.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.45]{edge_omega_d5}
\includegraphics[scale=0.45]{edge_omega_d05}
\caption{The real-space interference pattern for the edge impurity
($V=-0.1$) at (a) $\omega=0.5$~eV and (b) $\omega=0.05$~eV. The
density fluctuation $\delta\rho$ is defined as
$\delta\rho=\rho-\rho_0=\rho-1$. The position $x$ is in units of $b$.} \label{fig:real_edge}
\end{center}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.3]{exp_fit}
\caption{Fitting the experimental data of
Ref.~\onlinecite{alpichshev09} using different oscillating
functions. The experimental energy $-62$~meV corresponds to
$\sim0.25$~eV in our units. In the exponential fit, $d=107$~\AA.}
\label{fig:exp_exp}
\end{center}
\end{figure}
\section{Discussion and conclusion}
The model we have solved produces interference patterns with
enough features to be associated with the topology-protected
surface states and with the effects of the hexagonal warping term in 3D
topological insulators. Before drawing conclusions, however,
several remarks are in order.
(i) In our calculations, we neglected the possibility of any
ordering due to an interaction-induced FS instability. This is valid as
long as there is no significant FS nesting vector\cite{fu09}. In
addition, we do not expect strong electron-electron interactions,
based on the following observation. In experiments on topological
insulators, the Fermi level of the sample is in general close to
the bottom of the conduction band and far away from the Dirac
point. Such a system, with a finite density of states, may provide
enough screening of the Coulomb interaction between surface
electrons. Moreover, tuning the Fermi level lower with a
metallic gate may lead to the same screening effect, rendering the
electron-electron interaction irrelevant.
(ii) In real systems, there is no `purely magnetic' impurity: a
magnetic impurity also has a nonmagnetic component. This
fact does not change our results for magnetic impurities.
In the parameter region we choose, the weak-impurity approximation
is always valid (see the Appendix for a detailed discussion);
the nonmagnetic component only leads to a charge
density modulation and has little effect on the SLDOS. That is,
the magnetic part of the impurity is solely responsible for the SLDOS.
(iii) As noted in section III A, the STM experiment by
Zhang et al.\cite{zhang09} on the [111] surface of Bi$_2$Te$_3$
exhibited six peaks in the FT-LDOS for nonmagnetic
impurities. The experimental result differs from our result shown
in Fig.~\ref{fig:nonmag_QPI}(a) by a rotation of 30 degrees.
This discrepancy can be understood by noticing that, in the
energy range where the clear interference patterns were observed
(50~meV$\sim$400~meV), the surface density of states is mixed with
bulk states along $\Gamma M$. Consequently, due to the superposition
of waves with various wavelengths, the interference patterns are
simply smeared out in these regions. Instead of arising from the full FS
considered here, the dominant interference patterns then come from
the unmixed parts of the FS, {\it i.e.}, the parts along $\Gamma K$.
(iv) In the STM experiment by Alpichshev et
al.\cite{alpichshev09}, the decay of the Friedel
oscillation was claimed to behave as $1/|x|$. However, in the case of 1D
edge impurities, our calculation shows a $1/|x|^{1/2}$ behavior if the
FS shape is dominated by the warping term, and $|x|^{-3/2}$ if the
warping term is negligible. We believe there are two possible
sources of the discrepancy. First, we note that a simple fit
to the first several periods of oscillation is not enough to
determine the decay behavior. In Fig.~\ref{fig:exp_exp}, we show
that the data of Ref.~\onlinecite{alpichshev09} can also be well
fitted by an exponentially decaying function, as opposed to the
$1/|x|$-type fit used in Ref.~\onlinecite{alpichshev09}. Second, the
experimental measurements are not a pure surface effect; bulk
electrons in the nearby conduction band can cause a
different decay behavior and complicate the issue. More
experimental measurements are necessary to resolve the issue and
test the theoretical predictions.
(v) We also note that a similar theoretical work\cite{lee09},
focusing solely on nonmagnetic impurities, was posted online recently;
it suggests that the QPI patterns are dominated by the six peaks at
fixed $\mathbf{q}$'s corresponding to the scattering vectors connecting
second-neighbor convex parts of the FS. Those results are
consistent with our calculations, since, in the energy units of our
paper, they were obtained at $\omega=0.375$~eV.
However, our results suggest that the relative strength between the
interference at $\mathbf{q}$'s connecting next-nearest-neighboring vertices
(e.g., $\mathbf{q}_{35}$) and the interference at $\mathbf{q}$'s connecting
next-nearest-neighboring arc centers (e.g., $\mathbf{q}_{2'4'}$) in the QPI
patterns is quite subtle and depends on energy. Therefore, a full
$T$-matrix calculation is necessary for computing the QPI patterns.
In conclusion, we have investigated quasiparticle scattering in
a 2D helical liquid in the presence of a nonmagnetic or magnetic point
impurity, or a nonmagnetic edge impurity. The inclusion of the
hexagonal warping term not only preserves the nature of
the $k$-linear helical liquid but also sharpens the features
mentioned above by distorting the shape of the FS. More importantly,
the warping term requires an out-of-plane spin texture, which can be
distinguished from other systems by examining the mirror symmetries when
a magnetic point impurity with in-plane spin moment is present.
The absence (presence) of spots in the FT-LDOS (FT-SLDOS) corresponding
to the backscattering interference is the essential feature that
confirms the topological nature of the helical liquid. The results of
our work, which may be detected by STM experiments, can serve as a useful
quantum signature uniquely associated with this new phase
of matter, the 3D topological insulator.
\begin{acknowledgments}
The authors thank Liang~Fu for his insights and stimulating
discussion and H.~Yao for useful conversation.
\end{acknowledgments}
|
0910.1278
|
\section{Introduction}
\label{s1}
In this paper, we will study the loop quantum cosmology (LQC)
\cite{aa-rev,mbrev} of the Bianchi type II model. These models are
of special interest to the issue of singularity resolution
because of the intuition derived from the body of results related to
the Belinskii, Khalatnikov, Lifshitz (BKL) conjecture
\cite{bkl1,bkl2} on the nature of generic, spacelike singularities
in general relativity (see, e.g., \cite{bb}). Specifically, as the
system enters the Planck regime, dynamics at any fixed spatial point
is expected to be well described by the Bianchi I evolution.
However, there are transitions in which the parameters
characterizing the specific Bianchi I space-time change and the
dynamics of these transitions mimics the Bianchi II time evolution.
In a recent paper \cite{awe2}, we studied the Bianchi I model in the
context of LQC. In this paper we will extend that analysis to the
Bianchi II model. We will follow the same general approach and use
the same notation, emphasizing only those points at which the
present analysis differs from that of \cite{awe2}.
Bianchi I and II models are special cases of type A Bianchi models
which were analyzed already in the early days of LQC (see in
particular \cite{mb-hom, bdv}). However, as is often the case with
pioneering early works, these papers overlooked some important
conceptual and technical issues. At the classical level,
difficulties faced by the Hamiltonian (and Lagrangian) frameworks in
non-compact, homogeneous space-times went unnoticed. In these cases,
to avoid infinities, it is necessary to introduce an elementary cell
and restrict all integrals to it \cite{as,abl}. The Hamiltonian
frameworks in the early works did not carry out this step. Rather,
they were constructed simply by dropping an infinite volume integral
(a procedure that introduces subtle inconsistencies). In the quantum
theory, the kinematical quantum states were assumed to be periodic
---rather than almost-periodic--- in the connection, and the quantum
Hamiltonian constraint was constructed using a ``pre-$\mu_o$''
scheme. Developments over the intervening years have shown that
these strategies have severe limitations (see, e.g.,
\cite{aps3,acs,cs1,aa-badhonef,cs2}). In this paper, they will be
overcome using ideas and techniques that have been introduced in the
isotropic and Bianchi I models in these intervening years. Thus, as
in \cite{awe2} the classical Hamiltonian framework will be based on
a fiducial cell, quantum kinematics will be constructed using almost
periodic functions of connections and quantum dynamics will use the
``$\bar\mu$ scheme.'' Nonetheless, the space-time description of
Bianchi II models in \cite{mb-hom, bdv}, tailored to LQC, will
provide the point of departure of our analysis.
New elements required in this extension from the Bianchi I model
can be summarized as follows. Recall first that the spatially
homogeneous slices $M$ in Bianchi models are isomorphic to
3-dimensional group manifolds. The Bianchi I group is the
3-dimensional group of translations. Hence the three Killing
vectors ${}^o\xi^a_i$ on $M$ ---the left invariant vector fields on
the group manifold--- commute and coincide with the right
invariant vector fields ${}^o\!e^a_i$ which constitute the fiducial
orthonormal triads on $M$. In LQC one mimics the strategy used in
LQG and spin foams and defines the curvature operator in terms of
holonomies around plaquettes whose edges are tangential to these
vector fields. The Bianchi II group, on the other hand, is
generated by the two translations and the rotation on a null
2-plane. Now the Killing vectors ${}^o\xi^a_i$ no longer commute and
neither do the fiducial triads ${}^o\!e^a_i$. Therefore we have to
follow another strategy to build the elementary plaquettes.
However, this situation was already encountered in the
k$=1$ isotropic models \cite{warsaw,apsv}. There, the desired
plaquettes can be obtained by alternating between the integral
curves of right and left invariant vector fields which do commute.
However, in the isotropic case, the gravitational connection is
given by $A_a^i = c\,\, {}^o\!\omega_a^i$, where ${}^o\!\omega_a^i$ are the covectors
dual to ${}^o\!e^a_i$ and the holonomies around these plaquettes turned
out to be almost periodic functions of the connection component
$c$ \cite{warsaw,apsv}. By contrast, in the Bianchi II model we
have three connection components $c^i$ because of the presence of
anisotropies, and, unfortunately, the holonomies around our
plaquettes are no longer almost periodic functions of $c^i$. (This
is also the case in more complicated Bianchi models.) Since the
standard kinematical Hilbert space of LQC consists of almost
periodic functions of $c^i$, these holonomy operators are not
well-defined on this Hilbert space. Thus, the strategy \cite{abl}
used so far in LQC to define the curvature operator is no longer
viable.
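For contrast, it is worth recalling why the strategy works in the isotropic case. With $A_a^i = c\,{}^o\!\omega_a^i$, the holonomy along an edge of oriented fiducial length $\mu$ in the $i$-th direction is (schematically, following \cite{abl}, with $\tau_i=-\tfrac{i}{2}\sigma_i$)
\begin{equation}
h_i^{(\mu)} \,=\, e^{\mu c\,\tau_i} \,=\, \cos\frac{\mu c}{2}\,\mathbb{I} \,+\, 2\sin\frac{\mu c}{2}\,\tau_i\, ,
\end{equation}
whose matrix elements are of the form $e^{\pm i\mu c/2}$ and hence almost periodic in $c$. It is precisely this structure that fails for the holonomies around the Bianchi II plaquettes.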
One could simply enlarge the kinematical Hilbert space to
accommodate the new holonomy functions of connections. But then the
problem quickly becomes as complicated as full LQG. To solve the
problem within the standard, symmetry reduced kinematical framework
of LQC, one needs to generalize the strategy to define the curvature
operator. Of course, the generalization must be such that, when
applied to all previous models, it is compatible with the procedure
of computing holonomies around suitable plaquettes used there. We
will carry out this task by suitably modifying ideas that have
already appeared in the literature. This generalization will enable
one to incorporate \emph{all} class A Bianchi models in the LQC
framework.
Once this step is taken, one can readily construct the quantum
Hamiltonian constraint and the physical Hilbert space, following
steps that were introduced in the analysis \cite{awe2} of the
Bianchi I model. However, because Bianchi II space-times have
spatial curvature, the spin connection compatible with the
orthonormal triad is now non-trivial. It leads to two new terms in
the Hamiltonian constraint that did not appear in the Bianchi I
Hamiltonian. We will analyze these new terms in some detail. In
spite of these differences, the big bang singularity is resolved in
the same precise sense as in the Bianchi I model \cite{awe2}: If a
quantum state is initially supported only on classically
non-singular configurations, it continues to be supported on
non-singular configurations throughout its evolution.
The paper is organized as follows. Section \ref{s2} summarizes the
classical Hamiltonian theory describing Bianchi II models. Section
\ref{s3} discusses the quantum theory. We first define a non-local
connection operator $\hat{A}_a^i$ and use it to obtain the
Hamiltonian constraint. We then show that the singularity is
resolved and the Bianchi I quantum dynamics is recovered in the
appropriate limit. In Section \ref{s4}, we introduce effective
equations for the model (with the same caveats as in the Bianchi I
case \cite{awe2}).
Finally, in Section V we summarize our results and discuss the new
elements that appear in the Bianchi II model. In Appendix A we
improve on the discussion of discrete symmetries presented in
\cite{awe2}. The results on the Bianchi I model obtained in
\cite{awe2} carry over without any change. But the change of
viewpoint is important to the LQC treatment of the Bianchi II model
and more general situations.
\section{Classical Theory}
\label{s2}
This section is divided into two parts. In the first we recall the
structure of Bianchi II space-times and in the second we summarize
the phase space formulation, adapted to LQC.
\subsection{Diagonal Bianchi II Space-times}
\label{s2.1}
Because the issue of discrete symmetries is subtle in background
independent contexts, and because it plays a conceptually important
role in the quantum theory of Bianchi II models, we will begin with
a brief summary of how various fields are defined
\cite{alrev,aa-dis}. This streamlined discussion brings out the
assumptions which are often only implicit, making the discussion of
discrete symmetries clearer.
In the Hamiltonian framework underlying loop quantum gravity (LQG),
one fixes an \emph{oriented} 3-manifold $M$ and a 3-dimensional
`internal' vector space $I$ equipped with a positive definite metric
$q_{ij}$. The internal indices $i,j,k,\ldots$ are then freely
lowered and raised by $q_{ij}$ and its inverse. A spatial triad
$e^a_i$ is an isomorphism from $I$ to tangent space at each point of
$M$ which associates a vector field $v^a:= e^a_i v^i$ on $M$ to each
vector $v^i$ in $I$.%
\footnote{Thus, in LQG one begins with non-degenerate triads and
metrics, passes to the Hamiltonian framework and then, at the end,
extends the framework to allow degenerate geometries.}
The dual co-triads are denoted by $\omega_a^i$. Given a triad, we
acquire a positive definite metric $q_{ab}:= q_{ij} \omega_a^i
\omega_b^j$ on $M$. The metric $q_{ab}$ in turn singles out a 3-form
$\epsilon_{abc}$ on $M$ which has \emph{positive orientation} and
satisfies $ \epsilon_{abc}\epsilon_{def}\, q^{ad} q^{be}q^{cf}= 3!$.
One can then define a 3-form $\epsilon_{ijk}$ on $I$ via
$\epsilon_{ijk} = \epsilon_{abc} e^a_i e^b_j e^c_k$. Note that
$\epsilon_{ijk}$ is automatically compatible with $q_{ij}$, i.e.,
$\epsilon_{ijk}\epsilon_{lmn}\, q^{il} q^{jm} q^{kn}= 3!$. If a
triad $\bar{e}^a_i$ is obtained by flipping an odd number of the
vectors in the triad $e^a_i$, then $\bar{e}^a_i$ and $e^a_i$ have
opposite orientations and the fields they define satisfy
$\bar{q}_{ab} = {q}_{ab},\, \bar\epsilon_{abc} = \epsilon_{abc}$ but
$\bar\epsilon_{ijk} = - \epsilon_{ijk}$. Had we fixed
$\epsilon_{ijk}$ once and for all on $I$, then $\epsilon_{abc}$
would have flipped sign under this operation and volume integrals on
$M$ computed with the unbarred and barred triads would have had
opposite signs. With our conventions, these volume integrals will
not change and the parity flips will be symmetries of the symplectic
structure and the Hamiltonian constraint.
The triad also determines a unique spin connection $\Gamma_a^i$ via
\begin{equation} \label{sc} D_{[a} \omega_{b]}^i\, \equiv \,
\partial_{[a}\omega_{b]}^i + \epsilon^{i}{}_{jk} \Gamma_{[a}^j
\omega_{b]}^k \, =\, 0\, .\end{equation}
The gravitational configuration variable $A_a^i$ is then given by
$A_a^i = \Gamma_a^i + \gamma K_a^i$ where $K_{ab} := K_a^i
\omega_{bi}$ is the extrinsic curvature of $M$ and $\gamma$ is the
Barbero-Immirzi parameter, representing a quantization ambiguity.
(The numerical value of $\gamma$ is fixed by the black hole entropy
calculation.) The momenta $E^a_i$ carry, as usual, density weight 1
and are given by: $E^a_i = \sqrt{q} e^a_i$. The fundamental Poisson
bracket is:
\begin{equation} \{A_a^i(x), \, E^b_j(y)\} = 8\pi G\gamma\,\, \delta_a^b\,
\delta^i_j\, \delta^3(x,y)\, .\end{equation}
In Bianchi models \cite{taub,bianchi,atu}, one restricts oneself to
those phase space variables admitting a 3-dimensional group of
symmetries which act simply and transitively on $M$. Thus, the
3-metrics $q_{ab}$ under consideration admit a 3-parameter group of
isometries and $M$ is diffeomorphic to a 3-dimensional Lie group
$G$. (However, there is no canonical diffeomorphism, so that there
is no preferred point on $M$ corresponding to the identity element
of $G$.) To avoid a proliferation of spaces and types of indices, it
is convenient to identify the internal space $I$ and the Lie-algebra
$\mathcal{L} G$ of $G$ via a fixed isomorphism. Then, there is a natural
isomorphism ${}^o\xi^a_i$ between $\mathcal{L} G \equiv I$ and Killing vector
fields on $M$: for each internal vector $v^i$, ${}^o\xi^a_i v^i$ is a
Killing field on $M$. For brevity we will refer to ${}^o\xi^a_i$ as
(left invariant) vector fields on $M$. There is a canonical triad
${}^o\!e^a_i$
---the right invariant vector fields--- which is Lie dragged by the
${}^o\xi^a_i$. This triad and the dual co-triad ${}^o\!\omega_a^i$ satisfy:
\begin{eqnarray} [ {}^o\xi_i,\, {}^o\!e_j ] &=&0, \quad\quad [{}^o\!e_i,\, {}^o\!e_j] =
- {}^o C_{ij}^k\, {}^o\!e_k,\nonumber\\
\mathcal{L}_{{}^o\xi_i}\,( {}^o\!\omega^j) &=&0, \quad\quad {\rm d}\,{}^o\!\omega^k = \frac{1}{2}\,
{}^o C_{ij}^k {}^o\!\omega^i\wedge{}^o\!\omega^j,\end{eqnarray}
where ${}^o C_{ij}^k$ denotes the structure constants of $\mathcal{L} G$. It is
convenient to use the fixed fields ${}^o\!e^a_i$ and ${}^o\!\omega_a^i$ as
\emph{fiducial} triads and co-triads.
In the case when $G$ is the Bianchi II group, we have ${}^o C_{ik}^k =0$
as in all class A Bianchi models and, furthermore, the symmetric
tensor $k^{kl}:={}^o C_{ij}^k \, \epsilon^{ijl}$ has signature +,0,0.
Therefore, we can fix, once and for all an orthonormal basis
${}^o b_1^i, {}^o b_2^i, {}^o b_3^i$ in $I$ such that the only non-zero
components of ${}^o C_{ij}^k$ are
\begin{equation} {}^o C_{23}^1 = - {}^o C_{32}^1 = \tilde{\alpha}\, ,\end{equation}
where $\tilde{\alpha}$ is a non-zero real number.%
\footnote{Without loss of generality $\tilde{\alpha}$ can be chosen to be 1. We
keep it general because we will rescale it later (see Eq.
(\ref{tilde})) and because we want to be able to pass to the Bianchi
I case by taking the limit $\tilde{\alpha}\to0$.}
We will assume that this basis is so oriented that
\begin{equation} \label{ve1} \epsilon_{123}\, :=\, \epsilon_{ijk} \, {}^o b^i_1\,
{}^o b^j_2\, {}^o b^k_3\, \, =\, \varepsilon\end{equation}
where $\varepsilon = \pm 1$ depending on whether the frame $e^a_i$ (which
determines the sign of $\epsilon_{ijk}$) is right or left handed.
Throughout this paper we will set ${}^o\xi^a_1 = {}^o\xi^a_i {}^o b^i_1,\,
{}^o\!e^a_1 = {}^o\!e^a_i{}^o b^i_1,\, {}^o\!\omega_a^1 = {}^o\!\omega_a^i{}^o b^1_i$, etc.
The form of the components of ${}^o C^k_{ij}$ in this basis implies that
$M$ admits global coordinates $x,y,z$ such that the Bianchi II
Killing vectors have the fixed form
\begin{equation} {}^o\xi_1^a = \left(\frac{\partial}{\partial x}\right)^a, \qquad
{}^o\xi^a_2 = \left(\frac{\partial}{\partial y}\right)^a, \qquad
{}^o\xi^a_3 = \tilde{\alpha} y\left(\frac{\partial} {\partial x}\right)^a+
\left(\frac{\partial}{\partial z}\right)^a. \end{equation}
These expressions bring out the fact that, if we were to attempt to
compactify the spatial slices to pass to a $\mathbb{T}^3$ topology
---as one can in the Bianchi I model--- we would no longer have
globally well-defined Killing fields. Thus, in the Bianchi II model,
we are forced to deal with the subtleties associated with
non-compactness of the spatially homogeneous slices.
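As a quick check that these Killing fields carry the Bianchi II structure constants, note that ${}^o\xi_2$ and ${}^o\xi_3$ form the only pair with a non-vanishing bracket:
$$ [{}^o\xi_2,\, {}^o\xi_3] \,=\, \Big[\frac{\partial}{\partial y},\; \tilde{\alpha} y\frac{\partial}{\partial x} + \frac{\partial}{\partial z}\Big] \,=\, \tilde{\alpha}\,\frac{\partial}{\partial x} \,=\, {}^o C^1_{23}\,\, {}^o\xi_1\, , $$
in agreement with ${}^o C_{23}^1 = \tilde{\alpha}$.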
In the $x,y,z$ chart, the right invariant triad is given by
\begin{equation} {}^o\!e^a_1 = \left(\frac{\partial}{\partial x}\right)^a, \qquad {}^o\!e^a_2
= \tilde{\alpha} z \left(\frac{\partial}{\partial
x}\right)^a+\left(\frac{\partial}{\partial y} \right)^a, \qquad {}^o\!e^a_3
= \left(\frac{\partial}{\partial z}\right)^a, \end{equation}
and the dual co-triad by
\begin{equation} {}^o\!\omega_a^1=({\rm d} x)_a-\tilde{\alpha} z({\rm d} y)_a, \qquad{}^o\!\omega_a^2=({\rm d} y)_a,
\qquad{}^o\!\omega_a^3=({\rm d} z)_a. \end{equation}
They determine a fiducial 3-metric ${}^o\!q_{ab}:= q_{ij}{}^o\!\omega_a^i{}^o\!\omega_b^j$
with Bianchi II symmetries:
\begin{equation} {}^o\!q_{ab} {\rm d} x^a {\rm d} x^b = ({\rm d} x-\tilde{\alpha} z\:{\rm d} y)^2\,+\,{\rm d}
y^2\,+\, {\rm d} z^2. \end{equation}
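One can verify directly that this co-triad satisfies the Maurer--Cartan relation ${\rm d}\,{}^o\!\omega^k = \frac{1}{2}\,{}^o C_{ij}^k\, {}^o\!\omega^i\wedge{}^o\!\omega^j$: the only non-trivial component is
$$ {\rm d}\,{}^o\!\omega^1 \,=\, {\rm d}\big({\rm d} x - \tilde{\alpha} z\,{\rm d} y\big) \,=\, -\tilde{\alpha}\,{\rm d} z\wedge{\rm d} y \,=\, \tilde{\alpha}\,\,{}^o\!\omega^2\wedge{}^o\!\omega^3\, , $$
while ${\rm d}\,{}^o\!\omega^2 = {\rm d}\,{}^o\!\omega^3 = 0$, as required by ${}^o C_{23}^1 = \tilde{\alpha}$.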
In the diagonal models, the physical triads $e^a_i$ are related to
the fiducial ones by%
\footnote{There is no sum over repeated indices when both are covariant
or both are contravariant. As usual, the Einstein summation convention
holds if a covariant index is contracted with a contravariant index.}
\begin{equation} \label{edef} \omega_a^i = a_i(\tau)\,{}^o\!\omega_a^i, \qquad \mathrm{and}
\qquad a_i(\tau) e^a_i = {}^o\!e^a_i
\end{equation}
where the $a_i$ are the three directional scale factors. Since the
physical spatial metric is given by $q_{ab} =
\omega_a^i\omega^{}_{bi}$, the space-time metric can be expressed as
\begin{equation} \label{metric} {\rm d} s^2= -N {\rm d}\tau^2 + a_1(\tau)^2\:({\rm d} x-\tilde{\alpha}
z\:{\rm d} y)^2+a_2(\tau)^2\:{\rm d} y^2+a_3(\tau)^2\:{\rm d} z^2 \end{equation}
where $N$ is the lapse function adapted to the time coordinate
$\tau$.
For later use, let us calculate the spin connection (\ref{sc})
determined by triads $e^a_i$. From the definition of $\Gamma_a^i$ it
follows that
\begin{equation} \Gamma_a^i =
-\epsilon^{ijk}\,e^b_j\,\left(\partial_{[a}\omega_{b]k}+\frac{1}{2}e^c_k
\omega^l_a\partial_{[c}\omega_{b]l}\right)\, . \end{equation}
Using (\ref{ve1}), the components of $\Gamma_a^i$ in the internal
basis ${}^o b^i_1, {}^o b^i_2, {}^o b^i_3$ can be expressed as
\begin{equation} \Gamma_a^1 = \frac{\tilde{\alpha}\varepsilon a_1^2}{2a_2a_3}\:{}^o\!\omega_a^1; \qquad \Gamma_a^2
=-\frac{\tilde{\alpha}\varepsilon a_1}{2a_3}\:{}^o\!\omega_a^2; \qquad \Gamma_a^3 = -\frac{\tilde{\alpha}\varepsilon
a_1}{2a_2}\:{}^o\!\omega_a^3. \end{equation}
Before studying the dynamics of the model, let us examine the action
of internal parity transformation $\Pi_k$ which flips the $k$th
triad vector and leaves the orthogonal vectors alone. (For details
see Appendix and \cite{aa-dis}). Under the parity transformation
$\Pi_1$, for example, we have: $e^a_1\,\to \, -e^a_1,\, e^a_2\, \to
e^a_2,\, e^a_3\, \to \, e^a_3$ and $a_1\to -a_1,\, a_2\to a_2,\, a_3
\to a_3$ whence $\Gamma_a^1 \to -\Gamma_a^1,\, \Gamma_a^2 \to
\Gamma_a^2,\, \Gamma_a^3 \to \Gamma_a^3$. Thus, both $e^a_i$ and
$\Gamma_a^i$ are \emph{proper} internal vectors. $\varepsilon$ on the other
hand is a pseudo internal scalar, $\varepsilon \to -\varepsilon$ under every
$\Pi_k$. Note that the fiducial quantities carrying a label $o$ do
not change under this transformation; it affects only the physical
quantities.
\subsection{The Bianchi II Phase space}
\label{s2.2}
As is usual in LQC, we will now use the fiducial triads and
co-triads to introduce a convenient parametrization of the phase
space variables, $E^a_i, A_a^i$. Because we have restricted
ourselves to the diagonal model and these fields are symmetric under
the Bianchi II group, from each equivalence class of gauge related
phase space variables we can choose a pair of the form
\begin{equation} \label{var} E^a_i = \tilde{p}_i\sqrt{|{}^o\!q|}\,{}^o\!e^a_i \qquad \mathrm{and}
\qquad A_a^i = \tilde{c}^i \,{}^o\!\omega_a^i, \end{equation}
where, as spelled out in footnote 3, there is no sum over $i$. Thus,
a point in the phase space is now coordinatized by six real numbers
$\tilde{p}_i,\tilde{c}^i$. One would now like to use the symplectic structure in
full general relativity to induce a symplectic structure on our
six-dimensional phase space. However, because of spatial homogeneity
and the ${\mathbb{R}}^3$ spatial topology, the integrals defining the
symplectic structure, the Hamiltonian (and the action) all diverge.
Therefore we have to introduce a fiducial cell $\mathcal{V}$ and
restrict integrals to it \cite{as,abl}. We will take the fiducial
cell to be rectangular with edges along the coordinate axes and
lengths of $L_1, L_2$ and $L_3$ with respect to the \emph{fiducial}
metric ${}^o\!q_{ab}$. It then follows that the volume of the fiducial
cell with respect to ${}^o\!q_{ab}$ is $V_o=L_1L_2L_3$. Then the non-zero
Poisson brackets are given by:
\begin{equation} \label{pb1} \{\tilde{c}^i,\, \tilde{p}_j\} \, = \, \frac{8\pi G \gamma}{V_o}\,
\delta^i_j \end{equation}
where $\gamma$ is the Barbero-Immirzi parameter. As in the Bianchi I
case, we have a 1-parameter ambiguity in the symplectic structure
because of the explicit dependence on $V_o$ and we have to make sure
that the final physical results are either independent of $V_o$ or
remain well-defined as we remove the `regulator' and take the limit
$V_o \to \infty$.
It is convenient to rescale variables to absorb this dependence in
the phase space coordinates (as was done in the treatment of Bianchi
I model in \cite{awe2}). Let us set
\begin{equation} p_1= L_2L_3\tilde{p}_1, \qquad p_2=L_3L_1\tilde{p}_2, \qquad p_3= L_1L_2\tilde{p}_3,
\end{equation}
\begin{equation} \label{tilde} c_1=L_1\tilde{c}_1, \qquad c_2=L_2\tilde{c}_2, \qquad
c_3=L_3\tilde{c}_3 \qquad \mathrm{and} \qquad \alpha =
\frac{L_2L_3}{L_1}\tilde{\alpha}\, , \end{equation}
where the last rescaling has been introduced to absorb factors of
$L_i$ which would otherwise unnecessarily obscure the expression of
the Hamiltonian constraint. The Poisson brackets between these new
phase space coordinates are given by%
\begin{equation} \label{pb2}\{c^i,\, p_j\} \, = \, 8\pi G \gamma \,\delta^i_j \,
. \end{equation}
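This can be seen in one line: the rescalings (\ref{tilde}) are designed precisely so that the factors of $L_i$ combine into $V_o$ and cancel the $V_o$ in (\ref{pb1}),
$$ \{c^1,\, p_1\} \,=\, L_1L_2L_3\,\{\tilde{c}^1,\, \tilde{p}_1\} \,=\, V_o\cdot\frac{8\pi G\gamma}{V_o} \,=\, 8\pi G\gamma\, , $$
and similarly for the other two components.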
These variables have direct physical interpretation. For example,
$p_1$ is the (oriented) area of the 2-3 face of the elementary cell
with respect to the \emph{physical} metric $q_{ab}$ and $h^{(1)} =
\exp (c_1\tau_1)$ is the holonomy of the physical connection $A_a^i$
along the first edge of the elementary cell.
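The holonomy statement can be verified by integrating the connection along that edge: the edge runs along ${}^o\!e^a_1$ with fiducial length $L_1$, and since ${}^o\!\omega^1({}^o\!e_1)=1$ while ${}^o\!\omega^2$ and ${}^o\!\omega^3$ annihilate ${}^o\!e^a_1$, the pull-back of $A_a = A_a^k\tau_k$ to the edge is just $\tilde{c}^1\tau_1$, whence
$$ h^{(1)} \,=\, \exp \int A_a\, {\rm d} x^a \,=\, \exp \big(\tilde{c}^1 L_1\, \tau_1\big) \,=\, \exp (c_1\tau_1)\, . $$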
Our choice (\ref{var}) of physical triads and connections has fixed
the internal gauge as well as the diffeomorphism freedom.
Furthermore, it is easy to explicitly verify that, thanks to
(\ref{var}), the Gauss and the diffeomorphism constraints are
automatically satisfied. Thus, as in \cite{awe2}, we are left just
with the Hamiltonian constraint
\begin{equation} \label{Hgen} \mathcal{C}_H = \int_\mathcal{V}
\Big[\frac{NE^a_iE^b_j}{16\pi G\sqrt{|q|}}
\big(\epsilon^{ij}{}_kF_{ab}{}^k-2(1+\gamma^2)K_{[a}^iK_{b]}^j \Big)
+ N \mathcal{H}_{{\rm matt}}\big]\, {\rm d}^3x\, , \end{equation}
where
\begin{equation} F_{ab}{}^k=2\partial_{[a}A_{b]}^k+\epsilon_{ij}{}^kA_a^iA_b^j \end{equation}
is the curvature of $A_a^i$ and $\mathcal{H}_{\rm matt}$ is the matter
Hamiltonian density. As in \cite{awe2}, our matter field will
consist only of a massless scalar field $T$ which will later serve
as a relational time variable \`a la Leibniz. (Additional matter
fields can be incorporated in a straightforward manner, modulo
possible intricacies of essential self-adjointness.) Thus,
\begin{equation} \mathcal{H}_{{\rm matt}} = \frac{1}{2}\frac{p_T^2}{\sqrt{|q|}}. \end{equation}
Since we want to use the massless scalar field as relational time,
it is convenient to use a harmonic-time gauge, i.e., assume that the
time coordinate $\tau$ in (\ref{metric}) satisfies $\Box \tau=0$.
The corresponding lapse function is $N=\sqrt{|p_1p_2p_3|}$. With
this choice, the Hamiltonian constraint simplifies considerably.
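The form of this lapse can be understood as follows (a standard argument, recalled here for convenience): for the homogeneous metric (\ref{metric}), the condition $\Box\tau = 0$ reduces to $\partial_\tau\big(\sqrt{|q|}/N\big)=0$, so $N$ must be proportional to $\sqrt{|q|}$. Fixing the constant of proportionality using the fiducial cell, the physical volume of $\mathcal{V}$ is
$$ V \,=\, |a_1a_2a_3|\, L_1L_2L_3 \,=\, \sqrt{|p_1p_2p_3|}\, , $$
which is precisely the choice of $N$ made above.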
Note first that the basic canonical variables can be expanded as
\begin{equation} E^a_i = \frac{p_i}{V_o}L_i\sqrt{|{}^o\!q|}{}^o\!e^a_i \qquad {\rm and} \qquad
A_a^i = \frac{c^i}{L^i}{}^o\!\omega_a^i, \end{equation}
and the extrinsic curvature is given by
$$\qquad K_a^i = \gamma^{-1} (A_a^i-\Gamma_a^i).$$
Next, using $p_1 = ({\rm sgn}a_1)\, |a_2a_3|\,L_2L_3$ etc, the
components of the spin connection become:
\begin{equation} \Gamma_a^1 = \frac{\alpha\varepsilon p_2p_3}{2p_1^2}\frac{{}^o\!\omega_a^1}{L_1}, \qquad
\Gamma_a^2= -\frac{\alpha\varepsilon p_3}{2p_1}\frac{{}^o\!\omega_a^2}{L_2}, \qquad
\Gamma_a^3=-\frac{\alpha\varepsilon p_2}{2p_1} \frac{{}^o\!\omega_a^3}{L_3} \, .\end{equation}
Collecting terms, the Hamiltonian constraint (\ref{Hgen}) becomes
\begin{align} \label{Hcl} \mathcal{C}_H&=-\frac{1}{8\pi G\gamma^2}
\Big[p_1p_2c_1c_2+p_2p_3 c_2c_3+p_3p_1c_3c_1+\alpha\varepsilon p_2p_3c_1
\nonumber \\
&\qquad\qquad-(1+ \gamma^2)\,\big(\frac{\alpha p_2p_3}{2p_1}\big)^2\Big]
+ \frac{1}{2}p_T^2 \\
& = \mathcal{C}_H^{\rm (BI)} - \frac{1}{8\pi G\gamma^2}\Big[\alpha\varepsilon
p_2p_3c_1-(1+ \gamma^2)\,\big(\frac{\alpha p_2p_3}{2p_1}\big)^2\Big],
\end{align}
where $\mathcal{C}_H^{\rm (BI)}$ is the Hamiltonian constraint
(including the matter term) for Bianchi I space-times which has
already been studied in \cite{awe2}. Note that this constraint is
recovered in the limit $\alpha\to 0$, as it must be.
Knowing the form of the Hamiltonian constraint, it is now possible to derive
the time evolution of any classical observable $\mathcal{O}$ by taking its
Poisson bracket with $\mathcal{C}_H$:
\begin{equation} \dot{\mathcal{O}} = \{\mathcal{O},\mathcal{C}_H\}\, , \end{equation}
where the `dot' stands for derivative with respect to harmonic time
$\tau$. This gives
\begin{equation} \label{ceom1} \dot{p_1}=\gamma^{-1}(p_1p_2c_2+p_1p_3c_3+\alpha\varepsilon p_2p_3), \end{equation}
\begin{equation} \dot{p_2}=\gamma^{-1}(p_2p_1c_1+p_2p_3c_3), \end{equation}
\begin{equation} \dot{p_3}=\gamma^{-1}(p_3p_1c_1+p_3p_2c_2), \end{equation}
\begin{equation}
\dot{c_1}=-\frac{1}{\gamma}\Big(p_2c_1c_2+p_3c_1c_3+\frac{1}{2p_1}(1+\gamma^2)
\big(\frac{\alpha p_2p_3}{p_1}\big)^2\Big), \end{equation}
\begin{equation} \dot{c_2}=-\frac{1}{\gamma}\Big(p_1c_2c_1+p_3c_2c_3+\alpha\varepsilon
p_3c_1-\frac{1}{2p_2} (1+\gamma^2)\big(\frac{\alpha
p_2p_3}{p_1}\big)^2\Big), \end{equation}
\begin{equation} \label{ceom2}
\dot{c_3}=-\frac{1}{\gamma}\Big(p_1c_3c_1+p_2c_3c_2+\alpha\varepsilon p_2
c_1-\frac{1}{2p_3}(1+\gamma^2)\big(\frac{\alpha p_2p_3}{p_1}\big)^2\Big).
\end{equation}
Any initial data satisfying the Hamiltonian constraint can be
evolved by using the six equations above. It is straightforward to
extend these results if there are additional matter fields.
Finally, let us consider the parity transformation $\Pi_k$ which
flips the $k$th \emph{physical} triad vector $e^a_k$. (As noted
before, this transformation does not act on any of the fiducial
quantities which carry a label $o$.) Under this map, we have:
$q_{ab} \to q_{ab}, \, \epsilon_{abc} \to \epsilon_{abc}\,$ but
$\epsilon_{ijk} \to -\epsilon_{ijk}, \, \varepsilon \to -\varepsilon$. The canonical
variables $c^i, p_i$ transform as proper internal vectors and
co-vectors: For example
\begin{equation} \Pi_1(c_1,c_2,c_3) \rightarrow (-c_1, c_2, c_3) \qquad {\rm and}
\qquad \Pi_1(p_1,p_2,p_3) \rightarrow (-p_1, p_2, p_3)\, . \end{equation}
Consequently, both the symplectic structure and the Hamiltonian
constraint are left invariant under any of the parity maps $\Pi_k$.
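The invariance of the Hamiltonian constraint can be checked term by term. For instance, under $\Pi_1$ the two terms in (\ref{Hcl}) that are special to Bianchi II transform as
$$ \alpha\varepsilon\, p_2p_3c_1 \;\to\; \alpha(-\varepsilon)\, p_2p_3(-c_1) \,=\, \alpha\varepsilon\, p_2p_3c_1\, , \qquad \Big(\frac{\alpha p_2p_3}{2p_1}\Big)^2 \;\to\; \Big(\frac{\alpha p_2p_3}{2(-p_1)}\Big)^2 \,=\, \Big(\frac{\alpha p_2p_3}{2p_1}\Big)^2\, , $$
while each Bianchi I term contains an even number of sign-flipped factors.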
This Hamiltonian description will serve as the point of departure
for loop quantization in the next section.
\section{Quantum Theory}
\label{s3}
This section is divided into three parts. In the first, we discuss
the kinematics of the model, in the second we define an operator
corresponding to the connection $A_a^i$ using holonomies and in the
third we introduce the Hamiltonian constraint operator and describe
its action on states.
\subsection{LQC Kinematics}
The kinematics for the LQC of Bianchi II models is almost identical
to that for Bianchi I models. Therefore, in this sub-section we
closely follow \cite{awe2}.
Let us begin by specifying the elementary functions on the
classical phase space which will have unambiguous analogs in the
quantum theory. As in the Bianchi I model, the elementary
variables are the momenta $p_i$ and holonomies of the
gravitational connection $A_a^i$ along the integral curves of the
right invariant vector fields ${}^o\!e^a_i$. Let $\tau_i$ be a basis of
the Lie algebra of SU(2), satisfying $\tau_i \tau_j =
\frac{1}{2}\epsilon_{ij}{}^k \tau_k- \frac{1}{4} \delta_{ij}\mathbb{I}$
where $\mathbb{I}$ is the unit $2\times2$ matrix. Consider an edge
of length $\ell L_k$ with respect to the fiducial metric
${}^o\!q_{ab}$, parallel to ${}^o\!e^a_k$. The holonomy $h_k^{(\ell)}$ along
it is given by
\begin{equation} \label{hol} h_k^{(\ell)}(c_1,c_2,c_3) = \exp\left(\ell
c_k\tau_k\right) = \cos\frac{\ell c_k}{2} \mathbb{I} + 2\sin\frac{\ell
c_k}{2}\tau_k. \end{equation}
(Note that $\ell$ depends on the fiducial cell but not on the
fiducial metric.) This family of holonomies is completely
determined by the almost periodic functions $\exp(i\ell c_k)$ of
the connection. These almost periodic functions will be our
elementary configuration variables which will be promoted
unambiguously to operators in the quantum theory.
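For completeness, the closed form (\ref{hol}) follows from $\tau_k^2 = -\frac{1}{4}\mathbb{I}$ (no sum over $k$): even and odd powers of $\ell c_k\tau_k$ in the exponential series resum separately,
$$ \exp(\ell c_k\tau_k) \,=\, \sum_{n=0}^\infty \frac{(\ell c_k\tau_k)^n}{n!} \,=\, \Big(\sum_{m} \frac{(-1)^m}{(2m)!}\big(\tfrac{\ell c_k}{2}\big)^{2m}\Big)\,\mathbb{I} \,+\, 2\Big(\sum_{m} \frac{(-1)^m}{(2m+1)!}\big(\tfrac{\ell c_k}{2}\big)^{2m+1}\Big)\,\tau_k\, , $$
which is $\cos\frac{\ell c_k}{2}\,\mathbb{I} + 2\sin\frac{\ell c_k}{2}\,\tau_k$.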
It is simplest to use the $p$-representation to specify the
gravitational sector $\mathcal{H}_{\rm kin}^{\rm grav}$ of the kinematic Hilbert space. The
orthonormal basis states $|p_1,p_2,p_3\rangle$ are eigenstates of
quantum geometry. For example, in the state $|p_1,p_2,p_3\rangle$
the face $S_{23}$ of the fiducial cell $\mathcal{V}$ (given by
$x = {\rm const}$) has area $|p_1|$.
The basis is orthonormal in the sense
\begin{equation} \langle p_1,p_2,p_3|p_1',p_2',p_3'\rangle = \delta_{p_1^{}p_1'}
\delta_{p_2^{}p_2'}\delta_{p_3^{}p_3'}\, , \end{equation}
where the right side features Kronecker symbols rather than the
Dirac delta distributions. Hence kinematical states can consist only
of \emph{countable} linear combinations
\begin{equation} |\Psi\rangle \,=\,
\sum_{p_1,p_2,p_3}\Psi(p_1,p_2,p_3)|p_1,p_2,p_3\rangle\ \end{equation}
of these basis states for which the norm
\begin{equation} \label{norm} ||\Psi ||^2\, =\, \sum_{p_1,p_2,p_3}\,
|\Psi(p_1,p_2,p_3)|^2 \end{equation}
is finite. Because the right side features a sum over a countable
number of points on ${\mathbb{R}}^3$, rather than a 3-dimensional integral,
LQC kinematics are inequivalent to those of the Schr\"odinger
approach used in Wheeler-DeWitt quantum cosmology.
Next, recall that on the classical phase space the three reflections
$\Pi_i:\,\,e^a_i\,\to\, -e^a_i$ are large gauge transformations
under which physics does not change (since both the metric and the
extrinsic curvature are left invariant). These large gauge
transformations have a natural induced action, denoted by
$\hat\Pi_i$, on the space of wave functions $\Psi(p_1,p_2,p_3)$. For
example,
\begin{equation} \hat\Pi_1\Psi(p_1,p_2,p_3)=\Psi(-p_1,p_2,p_3). \end{equation}
Since $\hat\Pi_i^2$ is the identity, for each $i$, the group of
these large gauge transformations is simply $\mathbb{Z}_2$. As in Yang-Mills
theory, physical states belong to one of its irreducible representations.
For definiteness, as in the isotropic and Bianchi I models, we will
work with the symmetric representation. It then follows that
$\mathcal{H}_{\mathrm{kin}}^{\mathrm{grav}}$ is spanned by wave
functions $\Psi(p_1,p_2,p_3)$ which satisfy
\begin{equation} \label{parity} \Psi(p_1,p_2,p_3)=\Psi(|p_1|,|p_2|,|p_3|) \end{equation}
and have a finite norm (\ref{norm}).
The action of the elementary operators on
$\mathcal{H}_{\mathrm{kin}}^{\mathrm{grav}}$ is as follows: the
momenta act by multiplication whereas the almost periodic
functions in $c_i$ shift the $i$th argument. For example,
\begin{equation} [\hat p_1 \Psi](p_1,p_2,p_3) = p_1\, \Psi(p_1,p_2,p_3) \,\quad
\mathrm{and} \,\quad \Big[\widehat{\exp(i\ell c_1)}\Psi\Big](p_1,
p_2, p_3) = \Psi(p_1-8\pi\gamma G\hbar \ell, p_2, p_3)\, . \end{equation}
The expressions for $\hat p_2, \widehat{\exp(i\ell c_2)}, \hat
p_3$ and $\widehat{\exp(i\ell c_3)}$ are analogous. Finally, we
need to define the operator $\hat{\varepsilon}$ since $\varepsilon$ features in
the expression of the Hamiltonian constraint. In the classical
theory, $\varepsilon$ is unambiguously defined on non-degenerate triads,
i.e., when $p_1p_2p_3 \not= 0$. In quantum theory, wave functions
can have support also on degenerate configurations. We will extend
the definition to degenerate triads using the basis
$|p_1,p_2,p_3\rangle$:
\begin{equation} \label{ve2} \hat{\varepsilon}\,|p_1,p_2,p_3\rangle := \begin{cases}
\phantom{-}\,|p_1,p_2,p_3 \rangle & \text{if } p_1p_2p_3 \ge 0,\\
-\,|p_1,p_2,p_3 \rangle & \text{if } p_1p_2p_3<0.
\end{cases} \end{equation}
Finally, the full kinematical Hilbert space
$\mathcal{H}_{\mathrm{kin}}$ will be the tensor product
$\mathcal{H}_{\mathrm{kin}}=\mathcal{H}_{\mathrm{kin}}^
{\mathrm{grav}}\otimes\mathcal{H}_{\mathrm{kin}}^{\mathrm{matt}}$,
where $\mathcal{H}_{\mathrm{kin}}^{\mathrm{matt}}=L^2({\mathbb{R}},dT)$ is
the matter kinematical Hilbert space for the homogeneous scalar
field. On $\mathcal{H}_ {\mathrm{kin}}^{\mathrm{matt}}$, $\hat T$
will act by multiplication and $\hat p_T:=-i\hbar \mathrm{d}_T$
will act by differentiation. As in isotropic and Bianchi I models,
our final results would remain unaffected if we were to use a ``polymer
representation'' also for the scalar field.
\subsection{The connection operator $\hat{A}_a^i$}
\label{s3.2}
To define the quantum Hamiltonian constraint, we cannot directly use
the symmetry reduced classical constraint (\ref{Hcl}) because it
contains connection components $c_k$ themselves and in LQC only
almost periodic functions of $c_k$ have unambiguous operator
analogs. Indeed, in all LQC models considered so far
\cite{abl,aps3,warsaw,apsv,kv1,ls,bp,awe2}, we were led to return to
the expression (\ref{Hgen}) in the full theory and mimic the
procedure used in LQG \cite{tt}. More precisely, the key strategy
was to follow full LQG (and spin foams) and define a ``field
strength operator'' using holonomies around suitable closed loops.
In the Bianchi I model, these closed loops were formed by following
integral curves of right invariant vector fields (which are also
left invariant). As mentioned in section \ref{s2}, in the Bianchi II
model the right invariant vector fields define the fiducial triads
${}^o\!e^a_i$, while the left invariant vector fields define the Killing
fields ${}^o\xi^a_i$. Neither constitutes a commuting set, whence their integral
curves cannot be used to form closed loops. However, as in the k=1
case \cite{warsaw,apsv}, one can hope to exploit the fact that the
right invariant vector fields do commute with the left invariant
ones and construct the closed loops by alternately following right
and left invariant vector fields. But, as mentioned in section
\ref{s1}, a new problem arises: unlike in the k=1 (or Bianchi I)
model the resulting holonomies are no longer almost periodic
functions of $c_k$, whence the Hilbert space $\mathcal{H}_{\rm kin}^{\rm
grav}$ does not support these holonomy operators. For completeness
we will first show this fact explicitly and then introduce a new
avenue to bypass this difficulty.
The problematic curvature component turns out to be $F_{yz}{}^1$. To
construct the corresponding operator, following the strategy used in
the k=1 case \cite{warsaw,apsv}, we will construct a closed loop
$\Box_{yz}$ as follows. In the coordinates $(x,y,z)$,\,\, i) Move
from $(0,0,0)$ to $(0,\bar\mu_2L_2,0)$ following ${}^o\xi^a_2$;\,\, ii)
then move from $(0,\bar\mu_2L_2,0)$ to
$(0,\bar\mu_2L_2,\bar\mu_3L_3)$ following ${}^o\!e^a_3$;\,\, iii) then
move from $(0,\bar\mu_2L_2,\bar\mu_3L_3)$ to $(0,0,\bar\mu_3L_3)$
following $-{}^o\xi^a_2$;\,\, and, finally, iv) move from
$(0,0,\bar\mu_3L_3)$ to $(0,0,0)$ following $-{}^o\!e^a_3$. The
parameters $\bar\mu_i$ which determine the `lengths' of these edges
can be fixed by the semi-heuristic correspondence between LQC and
LQG exactly as in the Bianchi I model \cite{awe2} because the
geometric considerations used in that analysis continue to hold
without any modification in all Bianchi models with $\mathbb{R}^3$ spatial
topology:
\begin{equation} \label{mubar} \bar\mu_1 =
\sqrt\frac{|p_1|\Delta\,\ell_{\mathrm{Pl}}^2}{|p_2p_3|}, \qquad \bar\mu_2 =
\sqrt\frac{|p_2|\Delta\,\ell_{\mathrm{Pl}}^2}{|p_1p_3|}, \qquad \bar\mu_3 =
\sqrt\frac{|p_3| \Delta\,\ell_{\mathrm{Pl}}^2}{|p_1p_2|} \end{equation}
where $\Delta\,\ell_{\mathrm{Pl}}^2 = 4\sqrt{3}\pi\gamma\,\ell_{\mathrm{Pl}}^2$ is the `area gap'.
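Indeed, with this choice the plaquette $\Box_{yz}$ encloses one quantum of area with respect to the \emph{physical} metric: its edges have physical lengths $\bar\mu_2L_2|a_2|$ and $\bar\mu_3L_3|a_3|$, so its physical area is
$$ \bar\mu_2\bar\mu_3\, L_2L_3\, |a_2a_3| \,=\, \bar\mu_2\bar\mu_3\, |p_1| \,=\, \Delta\,\ell_{\mathrm{Pl}}^2\, , $$
where the last equality follows by multiplying the expressions (\ref{mubar}) for $\bar\mu_2$ and $\bar\mu_3$.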
The holonomy around this closed loop $\Box_{yz}$ is given by
\begin{equation} {h}_{\Box_{yz}} = \frac{2}{c\,\,\bar\mu_2\bar\mu_3L_2L_3}\cos\left(
\frac{\bar\mu_2c_2}{2}\right)\sin\left(\frac{\bar\mu_2 c}{2}\right)
\Big(c_2\sin(\bar\mu_3c_3)+
\alpha\bar\mu_3c_1\cos(\bar\mu_3c_3)\Big) \end{equation}
where
\begin{equation} \label{c12} c = \sqrt{\alpha^2\bar\mu_3^2c_1^2+c_2^2}. \end{equation}
If we were to shrink the loop so that the area it encloses goes to
zero, we do indeed recover the classical expression of $F_{yz}{}^1$.
However, because of the presence of the term $c$, if $\alpha\not=0$ the
right side fails to be almost periodic in $c_1$ and $c_2$. Hence
this holonomy operator fails to exist on $\mathcal{H}_{\rm kin}$. It is clear
from the expression (\ref{c12}) of $c$ that the problem is
independent of the specific way $\bar\mu_i$ are fixed.
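As a consistency check on this expression, note what happens in the Bianchi I limit: setting $\alpha = 0$ gives $c = |c_2|$ and, since the sine is odd,
$$ {h}_{\Box_{yz}}\Big|_{\alpha=0} \,=\, \frac{2}{\bar\mu_2\bar\mu_3L_2L_3}\, \cos\Big(\frac{\bar\mu_2c_2}{2}\Big) \sin\Big(\frac{\bar\mu_2c_2}{2}\Big) \sin(\bar\mu_3c_3) \,=\, \frac{\sin(\bar\mu_2c_2)\,\sin(\bar\mu_3c_3)}{\bar\mu_2\bar\mu_3L_2L_3}\, , $$
which is almost periodic in $c_2$ and $c_3$, as in the Bianchi I model. The obstruction is thus entirely due to $\alpha\neq 0$.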
We will bypass this difficulty by mimicking another strategy used in
full LQG \cite{tt}: We will use holonomies along segments parallel
to ${}^o\!e^a_i$ to define an operator corresponding to the connection
itself. This is a natural strategy because holonomies along these
segments suffice to separate the Bianchi II connections (\ref{var}).
Let us set $A_a := A_a^k\tau_k$. Then we have the identity:
\begin{equation} \label{classA} A_a = \lim_{\ell_k \to 0}\, \sum_k
\,\frac{1}{2\ell_kL_k}\,\, \Big(h_k^{(\ell_k)} -
(h_k^{(\ell_k)})^{-1}\Big)\, {}^o\!\omega_a^k \end{equation}
where $h_k^{(\ell_k)}$ is given by (\ref{hol}). In the expressions
of physically interesting operators such as the Hamiltonian
constraint of full LQG, one often replaces $A_a$ with the (analog of
the) right side of (\ref{classA}). But because of the specific forms
of these operators, the limit trivializes on diffeomorphism
invariant states of LQG. In LQC, we have gauge fixed the system and
therefore cannot appeal to diffeomorphism invariance. Indeed, while
the holonomies are well-defined for each non-zero $\ell_k$, the
limit fails to exist on $\mathcal{H}_{\rm kin}^{\rm grav}$. A natural
strategy is to shrink $\ell_k$ to a judiciously chosen non-zero
value. But what would this value be? In the case of plaquettes, we
could use the interplay between LQG and LQC directly because the
argument $p_i$ of LQC quantum states refers to \emph{quantum} areas
of faces of the elementary cell $\mathcal{V}$ \cite{awe2}. For edges
we do not have such direct guidance. There is, nonetheless, a natural
principle one can adopt: Normalize $\ell_k$ such that the numerical
coefficient in front of the curvature operator constructed from the
resulting connection agrees with that in the expression of the
curvature operator constructed from holonomies around closed loops,
in all cases where the second construction is available. We will use
this strategy. Let us apply it to the Bianchi I model where
$F_{ab}{}^k = \epsilon_{ij}{}^k\, A_a^i A_b^j$. Using holonomies
around closed loops one obtains the field strength operator
\begin{equation} \hat{F}_{ab}{}^k = \epsilon_{ij}{}^k\,
\big(\frac{\sin\bar{\mu}c}{\bar{\mu}L}\, {}^o\!\omega_a\big)^i\,
\big(\frac{\sin\bar{\mu}c}{\bar{\mu}L}\, {}^o\!\omega_b\big)^j \end{equation}
where
\begin{equation} \big(\frac{\sin\bar{\mu}c}{\bar{\mu}L}\, {}^o\!\omega_a\big)^i =
\big(\frac{\sin\bar{\mu}_ic_i}{\bar{\mu}_iL_i}\, {}^o\!\omega_a^i\big) \quad\quad
\hbox{\rm (no sum over $i$)} \nonumber \end{equation}
(see Eqs (3.12) and (3.13) in \cite{awe2}). Therefore, our strategy
yields $\ell_k = 2\bar\mu_k$, that is,
\begin{equation} \label{Aop} \hat{A}_a^k =
\frac{\widehat{\sin(\bar\mu^kc^k)}}{\bar\mu^kL_k}\,\,{}^o\!\omega_a^k, \end{equation}
where there is no sum over $k$. Note that the principle stated above
leads us unambiguously to the factor $2$ in $\ell_k = 2\bar\mu_k$;
without recourse to a systematic strategy, one might naively have set
$\ell_k =\bar\mu_k$.
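To make the role of $\ell_k = 2\bar\mu_k$ fully explicit, it is instructive to carry out the $SU(2)$ algebra in (\ref{classA}). The short computation below assumes the standard LQC form of the holonomy along a segment parallel to ${}^o\!e^a_k$, namely $h_k^{(\ell_k)} = \exp(\ell_k c^k\tau_k)$ with no sum over $k$ (conventions could differ from (\ref{hol}) by factors absorbed in $\ell_k$; the identity matrix is left implicit):

```latex
% Using \tau_k^2 = -1/4, the exponential gives
%   h_k^{(\ell_k)} = \cos(\ell_k c^k/2) + 2\sin(\ell_k c^k/2)\,\tau_k ,
% so each term in (\ref{classA}) becomes
\frac{1}{2\ell_k L_k}\,\Big(h_k^{(\ell_k)} - (h_k^{(\ell_k)})^{-1}\Big)
 \,=\, \frac{\sin(\ell_k c^k/2)}{(\ell_k/2)\,L_k}\,\tau_k
 \;\;\xrightarrow{\;\ell_k = 2\bar\mu_k\;}\;\;
 \frac{\sin(\bar\mu_k c^k)}{\bar\mu_k\,L_k}\,\tau_k\, .
```

As $\ell_k\to0$ the middle expression reduces to $(c^k/L_k)\,\tau_k$, as required by (\ref{classA}), while at $\ell_k = 2\bar\mu_k$ it reproduces (\ref{Aop}) term by term.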
If we compare the expression (\ref{Aop}) of the connection operator
with the expression (\ref{var}) of the classical connection, we have
effectively defined an operator $\hat{c}$ via
\begin{equation} \hat{c}_k = \frac{\widehat{\sin(\bar\mu^kc^k)}}{\bar\mu^k} \end{equation}
where there is again no sum over $k$. In the literature such a
quantization of $c$ is often called ``polymerization.'' Our approach
is an improvement over such strategies in two respects. First, we
did not just make the substitution $c \rightarrow \sin \ell c/\ell$
by hand; a priori one could have used another substitution such as
$c \rightarrow \tan \ell c/\ell$. Rather, as in full LQG, we used
the strategy of expressing the connection in terms of holonomies,
`the elementary variables'. But this still leaves open the question
of what $\ell$ one should use. We determined this by requiring that
the overall normalization of $\hat{F}_{ab}{}^k$ constructed from
the connection operator $\hat{A}_a^k$ of (\ref{Aop}) should agree with that of
$\hat{F}_{ab}{}^k$ constructed from holonomies around appropriate
closed loops, when the second construction is possible. Therefore,
our construction is a bona-fide generalization of the previous
constructions used successfully in LQC.
This strategy has some applications beyond the Bianchi II model
studied in this paper. First, the k=$-1$ isotropic case has been
studied in detail in \cite{kv1,ls}. The analysis uses the $\bar\mu$
scheme, carries out a numerical simulation using exact LQC equations
and shows that the effective equations of the ``embedding approach"
\cite{jw,vt} (discussed in section \ref{s4}) provide an excellent
approximation to the quantum evolution. While this treatment is
essentially exhaustive, as pointed out in \cite{kv1,ls}, it has a
conceptual limitation: it builds holonomies around the closed loops
using the extrinsic curvature $K_a^i$ ---rather than $A_a^i$--- as a
``connection''. This limitation can
be overcome in a straightforward fashion using our current strategy.
More importantly, this strategy is applicable to all class A Bianchi
models, including type IX. Thus, it opens the door to the LQC
treatment of all these models in one go.
\subsection{The quantum Hamiltonian constraint}
\label{s3.3}
With the connection operator at hand, one can construct the
Hamiltonian constraint operator starting either from the general LQG
expression (\ref{Hgen}) or the symmetry reduced expression
(\ref{Hcl}). We will begin with a small change in the representation
of kinematical states which will facilitate this task.
\subsubsection{A more convenient representation}
\label{s3.3.1}
Ignoring factor ordering ambiguities for the moment, the
constraint operator $\hat{\mathcal{C}}_H$ is given by
\begin{align} \label{qHam1} \hat{\mathcal{C}}_H = -\frac{1}{8\pi
G\gamma^2\Delta\ell_{\mathrm{Pl}}^2}&\Big[p_1 p_2|p_3|\sin\bar\mu_1c_1
\sin\bar\mu_2c_2+|p_1|p_2p_3\sin\bar\mu_2c_2\sin\bar\mu_3c_3 \nonumber \\
&+p_1|p_2|p_3\sin\bar\mu_3c_3\sin\bar\mu_1c_1\Big]-
\frac{1}{8\pi G\gamma^2}\Big[\alpha\hat{\varepsilon}p_2p_3\sqrt\frac{|p_2p_3|}{|p_1|\Delta
\ell_{\mathrm{Pl}}^2}\sin\bar\mu_1c_1\nonumber \\ & -(1+\gamma^2)\left(\frac{\alpha
p_2p_3}{2p_1}\right)^2\Big]+\frac{1}{2}\hat{p}_T^2 \end{align}
where for simplicity of notation here and in what follows we have
dropped the hats on the $p_i$ and $\sin\bar\mu_ic_i$ operators.
Recall that, classically, the Bianchi II symmetry group reduces to
the Bianchi I symmetry group if we set $\alpha=0$. If one sets
$\alpha=0$ in (\ref{qHam1}), the last two terms disappear and the
operator $\hat{\mathcal{C}}_H$ reduces to that of the Bianchi I
model \cite{awe2}, showing explicitly that our construction is a
natural generalization of the strategy used there.
To obtain the action of operators corresponding to terms of the form
$\sin\bar\mu_ic_i$ we use the same strategy as in \cite{awe2}. As
shown there, it is simplest to introduce dimensionless variables
\begin{equation}
\lambda_i=\frac{\mathrm{sgn}(p_i)\sqrt{|p_i|}}{(4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{1/3}}\,
. \end{equation}
Then the kets $|\lambda_1,\lambda_2,\lambda_3\rangle$ constitute an orthonormal
basis in which the operators $p_k$ are diagonal
\begin{equation} p_k|\lambda_1,\lambda_2,\lambda_3\rangle\, =\,
[\mathrm{sgn}(\lambda_k)(4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{2/3}
\lambda_k^2]\,\,|\lambda_1,\lambda_2,\lambda_3\rangle\, . \end{equation}
Quantum states will now be represented by functions
$\Psi(\lambda_1,\lambda_2,\lambda_3)$. The operator $e^{i\bar\mu_1c_1}$ acts on
them as follows
\begin{align} \big[e^{i\bar\mu_1c_1}\,\Psi\big] (\lambda_1,\lambda_2,\lambda_3)
&= \Psi(\lambda_1- \frac{1}{|\lambda_2\lambda_3|},\lambda_2,\lambda_3) \nonumber \\
&= \Psi(\frac{v-2\mathrm{sgn}(\lambda_2\lambda_3)}{v}\cdot\, \lambda_1,\lambda_2,\lambda_3),
\end{align}
where we have introduced the variable $v=2\lambda_1\lambda_2\lambda_3$ which is
proportional to the volume of the fiducial cell:
\begin{equation} \hat{V}\,\Psi(\lambda_1,\lambda_2,\lambda_3)\, =\,
[2\pi\gamma\sqrt\Delta\,|v|\,\ell_{\mathrm{Pl}}^3]\, \Psi(\lambda_1, \lambda_2,\lambda_3). \end{equation}
(The $e^{i\bar\mu_1c_1}$ operator is well-defined in spite of the
appearance of $|\lambda_2\lambda_3|$ in the denominator; see \cite{awe2}.)
The operators $e^{i\bar\mu_2 c_2}$ and $e^{i\bar\mu_3c_3}$ have
analogous action.
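The normalization appearing in the volume operator is a direct consequence of the definition of the $\lambda_i$; as a quick check, using $|p_k| = (4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{2/3}\lambda_k^2$ one finds

```latex
V \,=\, \sqrt{|p_1p_2p_3|}
  \,=\, (4\pi\gamma\sqrt\Delta\,\ell_{\mathrm{Pl}}^3)\,|\lambda_1\lambda_2\lambda_3|
  \,=\, 2\pi\gamma\sqrt\Delta\,|v|\,\ell_{\mathrm{Pl}}^3\, ,
  \qquad v = 2\lambda_1\lambda_2\lambda_3\, .
```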
We are now ready to write the Hamiltonian constraint explicitly in
the $\lambda_i$-representation. As noted above, the three terms in the
first square bracket on the right hand side of Eq. (\ref{qHam1})
constitute the gravitational part of $\hat{\mathcal{C}}_H$ for the
LQC of the Bianchi I model%
\footnote{There are some minor changes in the action of these three
terms since $\gamma$ is no longer treated as a pseudoscalar (see
Appendix \ref{a1}), but these do not affect the discussion.}
and have been discussed in \cite{awe2}. In the next two
sub-sections we will now discuss the last two terms, which are
specific to the Bianchi II model.
\subsubsection{The fourth term in $\hat{\mathcal{C}}_H$}
\label{s3.3.2}
Using a symmetric factor ordering, the fourth term becomes
\begin{equation} \label{hc4} \hat{\mathcal{C}}_H^{(4)} = -\frac{\alpha
p_2p_3\sqrt{|p_2p_3|}} {16\pi
G\gamma^2\sqrt\Delta\ell_{\mathrm{Pl}}}\,\,\widehat{|p_1|^{-1/4}}\,(\hat{\varepsilon}\,
\sin\bar\mu_1c_1+\sin\bar\mu_1c_1\,\hat{\varepsilon})\,\widehat{|p_1|^{-1/4}}
\, . \end{equation}
(Note that $p_2$ and $p_3$ commute with the other terms in
$\hat{\mathcal{C}}_H^{(4)}$). The operator $p_1$ is self-adjoint on
$\mathcal{H}_{\rm kin}^{\rm grav}$ whence any measurable function of $p_1$ is
also a well-defined self-adjoint operator. However, since kets
$|\lambda_1=0, \lambda_2,\lambda_3\rangle$ are normalizable in $\mathcal{H}_{\rm kin}^{\rm
grav}$, the naive inverse powers of $\hat{p}_1$ fail to be densely
defined and cannot be self-adjoint. To define inverse powers, as is
usual in LQG, we will use a variation on the Thiemann inverse triad
identities \cite{tt}. Classically, we have the identity
\begin{equation} \label{class} |p_1|^{-1/4} = -\frac{i\,\mathrm{sgn}(p_1)}{2\pi G\gamma}
\sqrt\frac{|p_2p_3|}{\Delta\ell_{\mathrm{Pl}}^2}\,\,
e^{-i\bar\mu_1c_1}\,\{e^{i\bar\mu_1c_1},|p_1|^{1/4}\}\, , \end{equation}
which holds for any choice of $\bar\mu_1$. Since it is most natural
to use the same $\bar\mu_1$ that featured in the definition of the
connection operator, we will make this choice. Eq (\ref{class})
suggests a natural quantization strategy for $|p_1|^{-1/4}$. Using
it and the parity considerations, we are led to the following factor
ordering:%
\footnote{In the classical theory, $(L_2L_3)^{1/4}\,|p_1|^{-1/4}$ is
independent of the choice of the elementary cell. As pointed out in
\cite{kv1} the inverse triad operators, by contrast, depend on the
choice of the cell. However, one can verify that as we remove the
regulator, i.e., take the limit $\mathcal{V} \to \mathbb{R}^3$, the
operator $(L_2L_3)^{1/4}\,\widehat{|p_1|^{-1/4}}$ has a well-defined
limit, just as in the classical theory.}
\begin{equation} \widehat{|p_1|^{-1/4}} = - \frac{i\,\mathrm{sgn}(p_1)}{2\pi
G\gamma}\sqrt\frac{|p_2p_3|}{\Delta\ell_{\mathrm{Pl}}^2}\,\,e^{-i\bar\mu_1c_1/2}\,\,
\frac{1}{i\hbar}[e^{i\bar\mu_1c_1},|p_1|^{1/4}]\,\,
e^{-i\bar\mu_1c_1/2}\, , \end{equation}
where, as is common in LQC, $\mathrm{sgn}(p_1)$ is defined as
\begin{equation} \mathrm{sgn}(p_1) = \begin{cases} +1 & \text{if } p_1>0,\\
\phantom{+}0 & \text{if } p_1=0,\\
-1 & \text{if } p_1<0. \end{cases} \end{equation}
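As a consistency check of the classical identity (\ref{class}), one can evaluate the Poisson bracket directly, using $\{c_i,p_j\} = 8\pi G\gamma\,\delta_{ij}$ and, from (\ref{mubar}), $\bar\mu_1 = \sqrt{\Delta\ell_{\mathrm{Pl}}^2\,|p_1|/|p_2p_3|}$ (a sketch; since $|p_1|^{1/4}$ depends only on the $p$'s, only the $c_1$-dependence of the exponential contributes):

```latex
\{e^{i\bar\mu_1c_1},\,|p_1|^{1/4}\}
 \,=\, 2\pi G\gamma\; i\,\bar\mu_1\,\mathrm{sgn}(p_1)\,|p_1|^{-3/4}\,e^{i\bar\mu_1c_1}\, ,
% whence the right side of (\ref{class}) becomes
-\frac{i\,\mathrm{sgn}(p_1)}{2\pi G\gamma}\,
 \sqrt{\frac{|p_2p_3|}{\Delta\ell_{\mathrm{Pl}}^2}}\;
 e^{-i\bar\mu_1c_1}\,\{e^{i\bar\mu_1c_1},|p_1|^{1/4}\}
 \,=\, \bar\mu_1\sqrt{\frac{|p_2p_3|}{\Delta\ell_{\mathrm{Pl}}^2}}\;|p_1|^{-3/4}
 \,=\, |p_1|^{-1/4}\, ,
```

for $p_1 \neq 0$, where the last step uses the explicit form of $\bar\mu_1$.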
At first it may seem surprising that the expression of
$\widehat{|p_1|^{-1/4}}$ involves operators other than ${p_1}$. It
is therefore important to verify that it has the standard desirable
properties. First, as one would hope, it is indeed diagonal in the
eigenbasis of the operators $\hat{p}_k$:
\begin{equation} \label{inv} \widehat{|p_1|^{-1/4}}\, |\lambda_1,\lambda_2,\lambda_3\rangle =
\frac{\sqrt2 \mathrm{sgn}(\lambda_1)\,\sqrt{|\lambda_2\lambda_3|}}
{(4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{1/6}}
\left(\sqrt{|v+\mathrm{sgn}(\lambda_2\lambda_3)|}-\sqrt{|v-\mathrm{sgn}(\lambda_2\lambda_3)|}\right)\,
|\lambda_1,\lambda_2,\lambda_3\rangle. \end{equation}
Second, on eigenkets with large volume, the eigenvalue is indeed
well-approximated by $p_1^{-1/4}$, whence on semi-classical states
it behaves as the inverse of $\hat{p}^{1/4}$, just as one would
hope. Thus, (\ref{inv}) is a viable candidate for
$\widehat{|p_1|^{-1/4}}$. But there are interesting
non-trivialities in the Planck regime. In particular, although
counter-intuitive at first, the operator annihilates all states
$|\lambda_1,\lambda_2,\lambda_3\rangle$ with $v = 2\lambda_1\lambda_2\lambda_3 =0$, as is common in
LQC.
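The large-volume behavior is easy to verify. In the positive octant, using the expansion $\sqrt{v+1}-\sqrt{v-1} = v^{-1/2} + O(v^{-5/2})$ for $v\gg1$, the eigenvalue in (\ref{inv}) becomes

```latex
\frac{\sqrt2\,\sqrt{\lambda_2\lambda_3}}{(4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{1/6}}
\cdot\frac{1}{\sqrt{2\lambda_1\lambda_2\lambda_3}}
\,=\, \frac{1}{(4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{1/6}\,\sqrt{\lambda_1}}
\,=\, |p_1|^{-1/4}\, ,
```

where the last equality uses $|p_1| = (4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{2/3}\lambda_1^2$.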
Finally, note that the operator $\hat{\varepsilon}$ appearing in the
expression (\ref{hc4}) of $\hat{\mathcal{C}}_H^{(4)}$ either
operates immediately before or after $\widehat{|p_1|^{-1/4}}$. Since
$\widehat{|p_1|^{-1/4}}$ annihilates all zero volume states and
$\hat{\varepsilon}$ acts on such states as the identity operator, we only
need to consider the action of $\hat{\varepsilon}$ on states with nonzero
volume. In this case, $\hat{\varepsilon}$ acts as $\mathrm{sgn}(v)$. Therefore the
action of $\hat{\mathcal{C}}_H^{(4)}$ can be written as:
\begin{align}
\Big[\hat{\mathcal{C}}_H^{(4)}\,\Psi\Big](\lambda_1,\lambda_2,\lambda_3) =&
-\frac{i\alpha\pi\sqrt\Delta\hbar\ell_{\mathrm{Pl}}^2}{(4\pi\gamma\sqrt\Delta)^{1/3}}\,\,
\mathrm{sgn}(v)\,\, (\lambda_2\lambda_3)^4\nonumber\\
\left(\sqrt{|v+\mathrm{sgn}(\lambda_2\lambda_3)|}-\sqrt{|v-\mathrm{sgn}(\lambda_2\lambda_3)|} \right)
& \quad \Big[\Phi^+(\lambda_1,\lambda_2,\lambda_3) -\Phi^-(\lambda_1,\lambda_2,\lambda_3)\Big]
\label{c4}
\end{align}
where
\begin{align} \Phi^\pm(\lambda_1,\lambda_2,\lambda_3) =& \Big(\sqrt{\left|v\pm2\mathrm{sgn}(\lambda_2
\lambda_3)+\mathrm{sgn}(\lambda_2\lambda_3)\right|}
-\sqrt{\left|v\pm2\mathrm{sgn}(\lambda_2\lambda_3)-\mathrm{sgn}(\lambda_2\lambda_3)\right|}\, \Big)
\nonumber \\ & \quad\: \times
\big(\mathrm{sgn}(v)+\mathrm{sgn}(v\pm2 \mathrm{sgn}(\lambda_2\lambda_3))\big)\,\,
\Psi(\frac{v\pm2\mathrm{sgn}(\lambda_2\lambda_3)}{v}\lambda_1,\lambda_2,\lambda_3).\label{phi} \end{align}
Recall that in the classical theory the singularity corresponds
precisely to the phase space points at which the volume vanishes.
Therefore, as in the Bianchi I model, states with support only on
points with $v=0$ will be called `singular' and those which vanish
at points with $v=0$ will be called `regular'. The total Hilbert space
$\mathcal{H}_{\rm kin}^{\rm grav}$ is naturally decomposed as a direct sum $\mathcal{H}_{\rm kin}^{\rm grav} = \mathcal{H}^{\rm
grav}_{\rm sing}\oplus \mathcal{H}^{\rm grav}_{\rm reg}$ of singular and
regular sub-spaces. We will conclude this discussion by examining
the action of $\hat{\mathcal{C}}_H^{(4)}$ on these sub-spaces. Note
first that in the action (\ref{hc4}) of $\hat{\mathcal{C}}_H^{(4)}$,
the state is first acted upon by the operator
$\widehat{|p_1|^{-1/4}}$. Since this operator annihilates states
$|\lambda_1, \lambda_2,\lambda_3\rangle$ with $v = 2\lambda_1\lambda_2\lambda_3 =0$, singular
states are simply annihilated by $\hat{\mathcal{C}}_H^{(4)}$. In
particular this implies that the singular sub-space is mapped to
itself under this action. It is clear from (\ref{phi}) that if
$\Psi$ is regular, i.e. vanishes on all points with $v =0$,
$\Phi^\pm$ also vanish at these points. Thus the regular sub-space
is also preserved by this action. This fact will be used in the
discussion of singularity resolution in section \ref{s3.3.4}.
\emph{Remark:}\, Our definition of the operator
$\widehat{|p_1|^{-1/4}}$ is not unique; as is common with non-trivial
functions of elementary variables, there are factor ordering
ambiguities. For example, for $0<n<1/2$, we have the classical
identity
\begin{equation}
|p_1|^{n-1/2}=\frac{-i\mathrm{sgn}(p_1)\sqrt{|p_2p_3|}}{8\pi\gamma\sqrt\Delta
G\ell_{\mathrm{Pl}} n} e^{-i\bar\mu_1c_1}\left\{e^{i\bar\mu_1c_1},|p_1|^n\right\}\,
. \nonumber \end{equation}
Hence, it is possible to instead define $\widehat{p_1^{-1/4}}$ as
\begin{equation} \widehat{p_1^{-1/4}} =
\left(\widehat{|p_1|^{n-1/2}}\right)^{-1/(4n-2)} \nonumber \end{equation}
where
\begin{equation} \widehat{|p_1|^{n-1/2}} =
\frac{(4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{(2+2n)/3}} {4^n\,(8\pi\gamma
G\hbar\sqrt\Delta\,\ell_{\mathrm{Pl}})\,n}\,\mathrm{sgn}(\lambda_1)\,|\lambda_2\lambda_3|^{1-2n}\Big[|v+
\mathrm{sgn}(\lambda_2\lambda_3)|^{2n}-|v-\mathrm{sgn}(\lambda_2\lambda_3)|^{2n}\Big]. \nonumber \end{equation}
For $n\ne1/4$, this choice for the operator $\widehat{p_1^{-1/4}}$
is not equivalent to the one we chose. These two choices are both
well-defined and admit the same classical limit but they differ in
the Planck regime. It is also possible to construct other such
inequivalent $\widehat{p_1^{-1/4}}$ candidate operators. For
definiteness we have made the `simplest' choice.
\subsubsection{The fifth term in $\hat{\mathcal{C}}_H$}
\label{s3.3.3}
Let us now consider the last term in the expression of the
gravitational part of the Hamiltonian constraint
\begin{equation} \hat{\mathcal{C}}_H^{(5)} = \frac{\alpha^2}{32\pi
G\gamma^2}(1+\gamma^2)\,(p_2 p_3)^2\,\,\widehat{p_1^{-2}}. \end{equation}
This term is simpler since it only involves powers of $p_k$ and we
are working in a representation where $p_k$ are diagonal. From our
discussion of the previous sub-section, it is natural to set
\begin{equation} \widehat{p_1^{-2}}:=\left(\widehat{p_1^{-1/4}}\right)^8\, , \end{equation}
which gives
\begin{align} \label{c5} \hat{\mathcal{C}}_H^{(5)}\,\Psi(\lambda_1,\lambda_2,
\lambda_3)\, =\, & \frac{8\pi\alpha^2\Delta(1+\gamma^2)\hbar\ell_{\mathrm{Pl}}^2}{(4\pi\gamma\sqrt
\Delta)^{2/3}}\,\mathrm{sgn}(\lambda_1)^8\lambda_2^8\lambda_3^8 \nonumber \\ & \times \left(
\sqrt{|v+\mathrm{sgn}(\lambda_2\lambda_3)|}-\sqrt{|v-\mathrm{sgn}(\lambda_2\lambda_3)|}\right)^8 \Psi(\lambda_1,\lambda_2,
\lambda_3). \end{align}
Again, it is clear that if $v=0$, the wave function is annihilated
by this part of the constraint. Also, it follows by inspection that
the singular and regular subspaces are both mapped to themselves by
the action of $\hat{\mathcal{C}}_H^{(5)}$.
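The classical limit of (\ref{c5}) can be verified directly. For $v\gg1$ one has $(\sqrt{v+1}-\sqrt{v-1})^8 \approx v^{-4}$, so in the positive octant

```latex
\hat{\mathcal{C}}_H^{(5)}\,\Psi \;\approx\;
\frac{8\pi\alpha^2\Delta(1+\gamma^2)\hbar\ell_{\mathrm{Pl}}^2}
     {16\,(4\pi\gamma\sqrt\Delta)^{2/3}}\,
\frac{(\lambda_2\lambda_3)^4}{\lambda_1^4}\,\Psi
\;=\; \frac{\alpha^2(1+\gamma^2)}{32\pi G\gamma^2}\,
\left(\frac{p_2p_3}{p_1}\right)^{\!2}\frac{1}{p_1^2}\cdot p_1^2\,
\frac{\Psi}{1}
\;=\; \frac{\alpha^2(1+\gamma^2)}{32\pi G\gamma^2}\,\frac{(p_2p_3)^2}{p_1^2}\,\Psi\, ,
```

where we used $v = 2\lambda_1\lambda_2\lambda_3$, $p_i = (4\pi\gamma\sqrt\Delta\ell_{\mathrm{Pl}}^3)^{2/3}\lambda_i^2$ and $G\hbar = \ell_{\mathrm{Pl}}^2$; this is precisely the classical expression for the fifth term, as expected.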
\subsubsection{Singularity resolution}
\label{s3.3.4}
We can now determine the gravitational part $\hat{\mathcal{C}}_{\rm grav}$
of the Hamiltonian constraint by combining the results of
\cite{awe2} and Eqs. (\ref{c4}) and (\ref{c5}). We have:
\begin{equation} \label{qHam2} \hat{\mathcal{C}}_{\rm grav} = \hat{\mathcal{C}}_{\rm grav}^{\rm
(BI)} + \hat{\mathcal{C}}_H^{(4)} + \hat{\mathcal{C}}_H^{(5)} \end{equation}
where $\hat{\mathcal{C}}_{\rm grav}^{\rm (BI)}$ is the gravitational part of
the Hamiltonian constraint in the Bianchi I model \cite{awe2}. There
is however, a conceptual subtlety. In the classical theory the
Hamiltonian density $\mathcal{C}_{\rm grav}/(L_1L_2L_3)^2$ is independent of
the choice of the elementary cell (where we have to divide by
$(L_1L_2L_3)^2$ because the lapse corresponding to harmonic time
scales as $(L_1L_2L_3)$ and the Hamiltonian constraint is obtained
by integration over the elementary cell $\mathcal{V}$). As shown in
the section V of \cite{awe2}, $\hat{\mathcal{C}}_{\rm grav}^{\rm
(BI)}/(L_1L_2L_3)^2$ is again independent of the choice of the
elementary cell $\mathcal{V}$. However, the two additional terms
that are special to the Bianchi II model are not independent of
$\mathcal{V}$ because they involve the inverse-triad operators
\cite{kv1}. Nonetheless, in the limit as we take the regulator away,
i.e., $\mathcal{V} \to \mathbb{R}^3$, the operator
$\hat{\mathcal{C}}_{\rm grav}/(L_1L_2L_3)^2$ has a well-defined limit (see
footnote 5). Strictly speaking, in the discussion of Bianchi II
quantum dynamics, we have to work with this limit, rather than with
operators defined using a fixed cell.
As in the Bianchi I model, the action simplifies if we replace one
of the $\lambda_i$ by $v$. In the Bianchi I model, it does not matter
which of the $\lambda_i$ is replaced because of the additional symmetry
of that model. In the Bianchi II case, while it remains possible to
replace any of the $\lambda_i$, it is simplest to replace $\lambda_1$ by $v$
and represent quantum states as $\Psi=\Psi(\lambda_2,\lambda_3,v;T)$. This
change of variables would be nontrivial if, as in the Wheeler-DeWitt
theory, we had used the Lebesgue measure in the gravitational
sector. However, it is quite tame here because the norms are defined
using a discrete measure. The inner product on $\mathcal{H}_{\rm kin}^{\rm grav}$ is now given
by
\begin{equation} \langle\Psi_1|\Psi_2\rangle_{\rm kin} = \sum_{\lambda_2,\lambda_3,v}
\,\,\bar{\Psi}_1(\lambda_2,\lambda_3,v)\,\Psi_2(\lambda_2,\lambda_3,v) \end{equation}
and states are symmetric under the action of $\hat\Pi_k$. In
Appendix \ref{a1}, we show that, under the action of reflections
$\hat\Pi_i$, the operators $\sin\bar\mu_ic_i$ have the same
transformation properties that $c_i$ have under reflections $\Pi_i$
in the classical theory. As a consequence, $\hat{\mathcal{C}}_{\rm grav}$ is
also reflection symmetric. Therefore, its action is well defined on
$\mathcal{H}_{\rm kin}^{\rm grav}$: $\hat{\mathcal{C}}_{\rm grav}$ is a densely defined, symmetric
operator on this Hilbert space. In the isotropic case, its analog
has been shown to be essentially self-adjoint \cite{warsaw2}. In
what follows we will assume that (\ref{qHam2}) is essentially
self-adjoint on $\mathcal{H}_{\rm kin}^{\rm grav}$ and work with its self-adjoint extension.
It is now straightforward to write down the full Hamiltonian
constraint on $\mathcal{H}_{\rm kin}^{\rm grav}$:
\begin{equation} \label{qHam3} -\hbar^2\, \partial^2_T \, \Psi(\lambda_2,\lambda_3,v; T) =
\Theta\, \Psi(\lambda_2,\lambda_3,v; T)\quad {\rm where}\quad \Theta = -2
\hat{\mathcal{C}}_{\rm grav}\, . \end{equation}
As in the isotropic case \cite{aps2}, one can obtain the physical
Hilbert space $\mathcal{H}_{\rm phy}$ by a group averaging procedure and the
final result is completely analogous. Elements of $\mathcal{H}_{\rm phy}$
consist of `positive frequency' solutions to (\ref{qHam3}), i.e.,
solutions to
\begin{equation} \label{qHam4} -i\hbar \partial_T \Psi(\lambda_2,\lambda_3,v; T)\, = \,
\sqrt{|\Theta|}\, \Psi(\lambda_2,\lambda_3,v; T)\, ,\end{equation}
which are symmetric under the three reflection maps $\hat\Pi_i$,
i.e. satisfy
\begin{equation} \label{sym} \Psi(\lambda_2,\lambda_3,v;\, T) = \Psi(|\lambda_2|,|\lambda_3|,|v|;\,
T)\, . \end{equation}
The scalar product is given simply by:
\begin{eqnarray} \label{ip1} \langle \Psi_1|\Psi_2\rangle_{\rm phys} &=& \langle
\Psi_1(\lambda_2,\lambda_3, v; T_o)|\Psi_2(\lambda_2,\lambda_3,v; T_o) \rangle_{\rm
kin} \nonumber\\
&=& \sum_{\lambda_2,\lambda_3,v} \bar\Psi_1(\lambda_2,\lambda_3,v; T_o)\,
\Psi_2(\lambda_2,\lambda_3,v; T_o) \end{eqnarray}
where $T_o$ is any ``instant'' of internal time $T$.
We can now address the issue of singularity resolution using general
properties of various operators. Recall that the gravitational part
of the Hamiltonian constraint operator in the Bianchi I model shares
two properties with the fourth and the fifth terms studied above
which are specific to the Bianchi II model. First, it annihilates
singular states and, second, singular states decouple from the
regular states under its action. Therefore the full Bianchi II
Hamiltonian constraint also has these two properties. Since the
singular states decouple from regular states%
\footnote{Singular states are in the kernel of $\Theta$ and regular
states are orthogonal to the singular ones. From spectral
decomposition one expects $\sqrt{\Theta}$ to have the same property.
However, to complete this argument, one would have to establish that
$\hat{\mathcal{C}}_{\rm grav}$ is essentially self-adjoint and its self
adjoint extension also shares this property.},
an initial state in the regular sub-space cannot become singular
during evolution. It is in this precise sense that the classical
singularity is resolved. Sometimes one considers weaker forms of
singularity resolution. For example, it could happen that the
evolution of the wave function is always well defined but a regular
state can evolve to the singular sub-space. For the Bianchi I and II
models, the singularity is resolved in a stronger sense: \emph{Not
only is the evolution well defined at all times, but the singular
states (are stationary and) decouple entirely from the regular
ones.}
\subsubsection{The explicit form of the Hamiltonian constraint}
\label{s.3.5}
We will conclude by providing an explicit form of the full quantum
constraint equation that will be needed in numerical simulations.
Recall that in the Bianchi I model \cite{awe2} symmetries enabled us
to restrict our attention to the positive octant of the
3-dimensional space spanned by $(\lambda_1,\lambda_2,\lambda_3)$. This is again the
case for the Bianchi II model. More precisely, elements of $\mathcal{H}_{\rm kin}^{\rm grav}$
are invariant under the three parity maps $\hat{\Pi}_k$ and, as
shown in the Appendix \ref{a1}, the Hamiltonian constraint satisfies
$\hat{\Pi}_k\, \hat{\mathcal{C}}_{\rm grav} \hat{\Pi}_k =
\hat{\mathcal{C}}_{\rm grav}$. Therefore, knowledge of the restriction of
the image $\hat{ \mathcal{C}}_{\rm grav}\Psi$ of $\Psi$ to the positive
octant suffices to determine $\hat{\mathcal{C}}_{\rm grav}\Psi$ completely.
In the positive octant, $\mathrm{sgn}(\lambda_k)$ can only be 0 or 1 which
simplifies the action of operators. Therefore, in the remainder of
this section we will restrict the argument of $\hat{
\mathcal{C}}_H\Psi$ to the positive octant. The full action is given
simply by
\begin{equation} \big(\hat{\mathcal{C}}_{\rm grav}\Psi\big)(\lambda_2,\lambda_3, v)=\big(\hat{
\mathcal{C}}_{\rm grav}\Psi\big)(|\lambda_2|,|\lambda_3|,|v|). \end{equation}
Since the singular states are annihilated by
$\hat{\mathcal{C}}_{\rm grav}$, their evolution is trivial:
\begin{equation} \partial_T^2\, \Psi(\lambda_2,\lambda_3,v=0;T) = 0\, . \end{equation}
Non-singular states are physically more relevant. On them, the
explicit form of the full constraint is given by:
\begin{align} \partial_T^2\, \Psi(\lambda_2,\lambda_3,v;T) =& \frac{\pi G}{2}\Bigg[\sqrt{v}
\bigg((v+2)\sqrt{v+4}\,\Psi^+_4(\lambda_2,\lambda_3,v;T) - (v+2)\sqrt v\,
\Psi^+_0( \lambda_2, \lambda_3,v;T)\nonumber \\& -(v-2)\sqrt
v\,\Psi^-_0(\lambda_2,\lambda_3,v;T)+(v-2)\sqrt{|v-4|}
\,\Psi^-_4(\lambda_2,\lambda_3,v;T)\bigg)\nonumber \\ &
+\frac{2i\alpha\sqrt\Delta}{(4\pi
\gamma\sqrt\Delta)^{1/3}}(\lambda_2\lambda_3)^4\left(\sqrt{v+1}-\sqrt{|v-1|}\right)\bigg(
\Phi^--\Phi^+\bigg)(\lambda_2,\lambda_3,v;T) \nonumber\\& \label{qHamfin} +
\frac{16
\alpha^2\Delta(1+\gamma^2)}{(4\pi\gamma\sqrt\Delta)^{2/3}}(\lambda_2\lambda_3)^8\:
(\sqrt{v+1}-\sqrt{|v-1|})^8\:\Psi(\lambda_2,\lambda_3,v;T)\Bigg] \end{align}
where $\Psi^\pm_{0,4}$ are defined as follows:
\begin{align} \Psi^\pm_4(\lambda_2,\lambda_3,v;T)=& \:\Psi\left(\frac{v\pm4}{v\pm2}\cdot
\lambda_2,\frac{v\pm2}{v}\cdot\lambda_3,v\pm4;T\right)+\Psi\left(\frac{v\pm4}{v\pm2}\cdot\lambda_2,
\lambda_3,v\pm4;T\right)\nonumber\\& +\Psi\left(\frac{v\pm2}{v}\cdot\lambda_2,\frac{v\pm4}{v
\pm2}\cdot\lambda_3,v\pm4;T\right)+\Psi\left(\frac{v\pm2}{v}\cdot\lambda_2, \lambda_3,v\pm4;T
\right) \nonumber \\ & +\Psi\left(\lambda_2,\frac{v\pm2}{v}\cdot\lambda_3,v\pm4;T\right)+
\Psi\left(\lambda_2,\frac{v\pm4}{v\pm2}\cdot\lambda_3,v\pm4;T\right), \end{align}
and
\begin{align} \Psi^\pm_0(\lambda_2,\lambda_3,v;T)= & \:\Psi\left(\frac{v\pm2}{v}\cdot\lambda_2,
\frac{v}{v\pm2}\cdot\lambda_3,v;T\right)+\Psi\left(\frac{v\pm2}{v}\cdot\lambda_2,\lambda_3,v;T
\right) \nonumber \\ & +\Psi\left(\frac{v}{v\pm2}\cdot\lambda_2,\frac{v\pm2}{v}\cdot\lambda_3,
v;T\right)+\Psi\left(\frac{v}{v\pm2}\cdot\lambda_2,\lambda_3,v;T\right) \nonumber \\ & +
\Psi\left(\lambda_2,\frac{v}{v\pm2}\cdot\lambda_3,v;T\right)+\Psi\left(\lambda_2,\frac{v\pm2}{v}
\cdot\lambda_3,v;T\right)\, ,\end{align}
while $(\Phi^--\Phi^+)$ is given by
\begin{align} \big(\Phi^--\Phi^+\big)(\lambda_2,\lambda_3,v;T)\, =& \,(\sqrt{|v-2+
\mathrm{sgn}(v-2)|}-\sqrt{|v-2-\mathrm{sgn}(v-2)|}) \nonumber \\ & \qquad \times (1+\mathrm{sgn}(v-2))
\Psi(\lambda_2,\lambda_3,v-2;T) \nonumber \\ & \: -2(\sqrt{v+3}-\sqrt{v+1})
\Psi(\lambda_2,\lambda_3,v+2;T). \end{align}
(The imaginary coefficients in (\ref{qHamfin}) come from the
action of single $\sin\bar\mu_ic_i$ terms.)
Eq. (\ref{qHamfin}) immediately implies that, as in the Bianchi I
model, the steps in $v$ are uniform: the argument of the wave
function only involves $v-4, v-2, v, v+2$ and $v+4$. Thus, there is
a superselection in $v$. For each $\epsilon\in[0,2)$, let us
introduce a lattice $\mathcal{L}_\epsilon$ of points $v=2n+\epsilon$,
$n\in\mathbb{Z}$, if $\epsilon$ is 0 or 1, and of points $v=n+\epsilon$ otherwise.%
\footnote{The lattice for $\epsilon \not= 0,1$ is twice as large as
that for $\epsilon=0$ or $\epsilon=1$ due to the symmetry properties
of the wave function.}
Then the quantum evolution ---and the action of the Dirac
observables $\hat{p}_T$ and $\hat{V}|_{T}$ commonly used in
LQC--- preserves the subspaces $\mathcal{H}^\epsilon_{\mathrm{phy}}$
consisting of states with support in $v$ on $\mathcal{L}_\epsilon$.
The most interesting lattice is the one corresponding to
$\epsilon=0$
since it includes the classically singular points $v=0$.
Finally, it is obvious from (\ref{qHamfin}) that in the limit
$\alpha\to0$ quantum dynamics of the Bianchi II model reduces to
that of the Bianchi I model discussed in \cite{awe2}. In particular,
it is possible to obtain the LQC dynamics for the $k$=0 FRW
cosmology from this model by first taking $\alpha\to0$ and then
following the projection map defined in section IVA in \cite{awe2}.
\section{Effective Equations}
\label{s4}
In the isotropic models, effective equations have been introduced
via two different approaches ---the embedding and the truncation
methods. Both start by regarding the space of quantum states as an
infinite dimensional symplectic manifold ---the quantum phase
space--- which is also equipped with a K\"ahler structure that
descends from the Hermitian inner product on the Hilbert space. In
the first method, one finds a judicious embedding of the classical
phase space into the quantum phase space which is approximately
preserved by the quantum evolution vector field \cite{jw,vt}. By
projecting this vector field into the image of the embedding one
obtains quantum corrected effective equations. In the isotropic case
these effective equations provide an excellent approximation to the
full quantum evolution of states which are Gaussians at late times,
even in the $\Lambda\not=0$ and k=$\pm 1$ cases, where the
models are not exactly soluble. In the second method one uses
expectation values, uncertainties, and higher moments to define a
convenient system of coordinates on the infinite dimensional phase
space. The exact quantum evolution equations are then a set of
coupled non-linear ordinary differential equations for these
coordinates. By a judicious truncation of this system one obtains
effective equations containing quantum corrections \cite{bs}. In its
spirit the first method is analogous to the `variational principle
technique' used in perturbation theory, in that it requires a
judicious combination of art (of selecting the embedding) and
science. It is often simpler to use and can be surprisingly
accurate. The second method is more systematic, similar in our
analogy to standard, order-by-order perturbation theory. It is
also more general in the sense that it is applicable to a wide
variety of states. In this section we will use the first method to
gain qualitative insights into leading order quantum effects.
To obtain the effective equations, without loss of generality we can
restrict our attention to the positive octant of the classical phase
space (where $\varepsilon=1$). Then the quantum corrected Hamiltonian
constraint is given by the classical analogue of (\ref{qHam1}):
\begin{equation} \label{Heff}
\frac{p_T^2}{2}+\mathcal{C}^{\mathrm{eff}}_{\mathrm{grav}} = 0, \end{equation}
where
\begin{align} \mathcal{C}^{\mathrm{eff}}_{\mathrm{grav}} =&
-\frac{p_1p_2p_3}{8\pi G\gamma^2\Delta\ell_{\mathrm{Pl}}^2}
\Bigg[\sin\bar\mu_1c_1\sin\bar\mu_2c_2+\sin\bar\mu_2c_2
\sin\bar\mu_3c_3+\sin\bar\mu_3c_3\sin\bar\mu_1c_1\Bigg] \nonumber
\\ & \quad - \frac{1}{8\pi
G\gamma^2}\Bigg[\frac{\alpha(p_2p_3)^{3/2}}{\sqrt\Delta\ell_{\mathrm{Pl}}
\sqrt{p_1}}\sin\bar\mu_1c_1 -(1+\gamma^2)\left(\frac{\alpha
p_2p_3}{2p_1}\right)^2 \Bigg]. \end{align}
Using the expressions (\ref{mubar}) of $\bar\mu_k$, it is easy to
verify that far away from the classical singularity ---more
precisely in the regime in which the (gauge fixed) spin connection
and the extrinsic curvature are sufficiently small so that
$c_k\bar\mu_k \ll 1$--- the effective Hamiltonian constraint
(\ref{Heff}) is well-approximated by the classical one (\ref{Hcl}).
Since $\sin\theta$ is bounded by 1 for all $\theta$, these equations
imply that the matter density $\rho_{\mathrm{matt}}=
p_T^2/2V^2=p_T^2/2p_1p_2p_3$ satisfies
\begin{equation} \rho_{\mathrm{matt}} \le \frac{3}{8\pi\gamma^2\Delta G\ell_{\mathrm{Pl}}^2}+
\frac{1}{8\pi \gamma^2 G} \left[\frac{x}{\sqrt\Delta\ell_{\mathrm{Pl}}} -
\frac{(1+\gamma^2)x^2}{4} \right] \end{equation}
where we have introduced $x:=\alpha\sqrt{p_2p_3/p_1^3}$. The maximum
of the expression in square brackets is attained at
$x=2/\big((1+\gamma^2)\sqrt\Delta\,\ell_{\mathrm{Pl}}\big)$, whence
\begin{equation} \rho_{\mathrm{matt}} \le \frac{3+(1+\gamma^2)^{-1}}
{8\pi\gamma^2\Delta G \ell_{\mathrm{Pl}}^2} \approx 0.54 \rho_{\mathrm{Pl}}. \end{equation}
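The maximization behind this bound is elementary; writing $f(x) = x/(\sqrt\Delta\,\ell_{\mathrm{Pl}}) - (1+\gamma^2)x^2/4$ for the expression in square brackets,

```latex
f'(x) \,=\, \frac{1}{\sqrt\Delta\,\ell_{\mathrm{Pl}}} - \frac{(1+\gamma^2)}{2}\,x \,=\, 0
\quad\Longrightarrow\quad
x_{\rm max} \,=\, \frac{2}{(1+\gamma^2)\sqrt\Delta\,\ell_{\mathrm{Pl}}}\, ,
\qquad
f(x_{\rm max}) \,=\, \frac{1}{(1+\gamma^2)\,\Delta\,\ell_{\mathrm{Pl}}^2}\, ,
```

and substituting $f(x_{\rm max})$ in the previous inequality yields the bound above.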
Thus, on the constraint surface in the phase space defined by
(\ref{Heff}), the matter energy density is bounded by $0.54
\rho_{\mathrm{Pl}}$. But this bound may be far from being optimal.
In all isotropic models, the optimal bound on matter density was
found to be $0.41 \rho_{\rm Pl}$. In the Bianchi I model, available
simulations by Vandersloot (private communication) show that the
`volume bounce' occurs when matter density is \emph{lower} than
$0.41 \rho_{\rm Pl}$ because there is also energy density in
gravitational waves. It would be interesting to use numerical
simulations to find out what happens for generic solutions to the
Bianchi II effective equations.
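The maximization step behind the density bound can be verified symbolically. A minimal sketch with sympy (the numerical value $\gamma = 0.2375$ used for the final ratio is an assumption, not taken from this paper):

```python
import sympy as sp

# All symbols positive so sqrt manipulations are valid.
x, Delta, l, gamma = sp.symbols('x Delta l gamma', positive=True)

# The expression in square brackets of the density bound.
f = x/(sp.sqrt(Delta)*l) - (1 + gamma**2)*x**2/4

# Critical point: df/dx = 0.
xstar = sp.solve(sp.diff(f, x), x)[0]
assert sp.simplify(xstar - 2/((1 + gamma**2)*sp.sqrt(Delta)*l)) == 0

# Maximum value of the bracket.
fmax = sp.simplify(f.subs(x, xstar))
assert sp.simplify(fmax - 1/((1 + gamma**2)*Delta*l**2)) == 0

# Adding this to 3/(8 pi gamma^2 Delta G l^2) reproduces the quoted bound
# rho <= (3 + (1+gamma^2)^{-1})/(8 pi gamma^2 Delta G l^2).  Relative to the
# isotropic bound 3/(8 pi gamma^2 Delta G l^2) ~ 0.41 rho_Pl:
g2 = 0.2375**2                      # Barbero-Immirzi value (an assumption here)
ratio = (3 + 1/(1 + g2))/3
print(round(0.41*ratio, 2))         # 0.54, matching the text
```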
Finally, to obtain the effective equations for each variable, one
simply takes its Poisson bracket with the effective Hamiltonian
constraint. This gives the effective equations
\begin{equation} \dot{p_1} = \gamma^{-1}\left(\frac{p_1^2}{\bar\mu_1}(\sin\bar\mu_2c_2+
\sin\bar\mu_3c_3)+\alpha p_2p_3\right)\cos\bar\mu_1c_1, \end{equation}
\begin{equation} \dot{p_2} = \frac{p_2^2}{\gamma\bar\mu_2}(\sin\bar\mu_1c_1+\sin\bar\mu_3c_3)
\cos\bar\mu_2c_2, \end{equation}
\begin{equation} \dot{p_3} = \frac{p_3^2}{\gamma\bar\mu_3}(\sin\bar\mu_1c_1+\sin\bar\mu_2c_2)
\cos\bar\mu_3c_3, \end{equation}
\begin{align} \dot{c_1} &= -\frac{1}{\gamma}\Big[\frac{p_2p_3}{\Delta\ell_{\mathrm{Pl}}^2}\big(
\sin\bar\mu_1c_1\sin\bar\mu_2c_2+\sin\bar\mu_1c_1\sin\bar\mu_3c_3+\sin\bar\mu_2
c_2\sin\bar\mu_3c_3 \nonumber \\ & \qquad +\frac{\bar\mu_1c_1}{2}\cos\bar\mu_1c_1(
\sin\bar\mu_2c_2+\sin\bar\mu_3c_3)-\frac{\bar\mu_2c_2}{2}\cos\bar\mu_2c_2(\sin\bar
\mu_1c_1+\sin\bar\mu_3c_3) \nonumber \\ & \qquad -\frac{\bar\mu_3c_3}{2}\cos\bar
\mu_3c_3(\sin\bar\mu_1c_1+\sin\bar\mu_2c_2)\big)+(1+\gamma^2)\alpha^2\frac{(p_2
p_3)^2}{2p_1^3} \nonumber \\ & \qquad +\frac{\alpha}{2\sqrt\Delta\ell_{\mathrm{Pl}}}\left(
\frac{p_2p_3}{p_1}\right)^{3/2}\!\!(\bar\mu_1c_1\cos\bar\mu_1c_1-\sin\bar\mu_1c_1)
\Big], \end{align}
\begin{align} \dot{c_2} &= -\frac{1}{\gamma}\Big[\frac{p_1p_3}{\Delta\ell_{\mathrm{Pl}}^2}\big(
\sin\bar\mu_1c_1\sin\bar\mu_2c_2+\sin\bar\mu_1c_1\sin\bar\mu_3c_3+\sin\bar\mu_2
c_2\sin\bar\mu_3c_3 \nonumber \\ & \qquad -\frac{\bar\mu_1c_1}{2}\cos\bar\mu_1c_1(
\sin\bar\mu_2c_2+\sin\bar\mu_3c_3)+\frac{\bar\mu_2c_2}{2}\cos\bar\mu_2c_2(\sin\bar
\mu_1c_1+\sin\bar\mu_3c_3) \nonumber \\ & \qquad -\frac{\bar\mu_3c_3}{2}\cos\bar
\mu_3c_3(\sin\bar\mu_1c_1+\sin\bar\mu_2c_2)\big)-(1+\gamma^2)\alpha^2\frac{p_2
p_3^2}{2p_1^2} \nonumber \\ & \qquad +\frac{\alpha p_3}{2\bar\mu_1}(3\sin\bar
\mu_1c_1-\bar\mu_1c_1\cos\bar\mu_1c_1) \Big], \end{align}
\begin{align} \dot{c_3} &= -\frac{1}{\gamma}\Big[\frac{p_1p_2}{\Delta\ell_{\mathrm{Pl}}^2}\big(
\sin\bar\mu_1c_1\sin\bar\mu_2c_2+\sin\bar\mu_1c_1\sin\bar\mu_3c_3+\sin\bar\mu_2
c_2\sin\bar\mu_3c_3 \nonumber \\ & \qquad -\frac{\bar\mu_1c_1}{2}\cos\bar\mu_1c_1(
\sin\bar\mu_2c_2+\sin\bar\mu_3c_3)-\frac{\bar\mu_2c_2}{2}\cos\bar\mu_2c_2(\sin\bar
\mu_1c_1+\sin\bar\mu_3c_3) \nonumber \\ & \qquad +\frac{\bar\mu_3c_3}{2}\cos\bar
\mu_3c_3(\sin\bar\mu_1c_1+\sin\bar\mu_2c_2)\big)-(1+\gamma^2)\alpha^2\frac{p_2^2
p_3}{2p_1^2} \nonumber \\ & \qquad +\frac{\alpha p_2}{2\bar\mu_1}(3\sin\bar
\mu_1c_1-\bar\mu_1c_1\cos\bar\mu_1c_1) \Big]. \end{align}
In the ``embedding approach'' these effective equations provide the
leading-order quantum corrections to the classical equations of
motion Eqs.~(\ref{ceom1})~--~(\ref{ceom2}). It would be very
interesting to numerically test if the accuracy they display in the
isotropic case for states which are Gaussians at late times carries
over to the Bianchi II case.
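As a consistency check, the $\dot{p}_2$ equation above can be re-derived by differentiating the effective Hamiltonian constraint. The sketch below assumes the standard improved-dynamics form $\bar\mu_1=\sqrt{\Delta\ell_{\mathrm{Pl}}^2\,p_1/(p_2p_3)}$ (and cyclic) for Eq.~(\ref{mubar}), and the Poisson bracket $\{c_i,p_j\}=8\pi G\gamma\,\delta_{ij}$, neither of which is spelled out in this excerpt:

```python
import sympy as sp

G, gam, Delta, l, alpha = sp.symbols('G gamma Delta l alpha', positive=True)
p1, p2, p3 = sp.symbols('p_1 p_2 p_3', positive=True)
c1, c2, c3 = sp.symbols('c_1 c_2 c_3', real=True)

# Assumed form of mubar_k (not displayed in this excerpt).
mu1 = sp.sqrt(Delta*l**2*p1/(p2*p3))
mu2 = sp.sqrt(Delta*l**2*p2/(p1*p3))
mu3 = sp.sqrt(Delta*l**2*p3/(p1*p2))
s1, s2, s3 = sp.sin(mu1*c1), sp.sin(mu2*c2), sp.sin(mu3*c3)

# Effective gravitational Hamiltonian constraint, as displayed above.
H = (-p1*p2*p3/(8*sp.pi*G*gam**2*Delta*l**2)*(s1*s2 + s2*s3 + s3*s1)
     - (alpha*(p2*p3)**sp.Rational(3, 2)/(sp.sqrt(Delta)*l*sp.sqrt(p1))*s1
        - (1 + gam**2)*(alpha*p2*p3/(2*p1))**2)/(8*sp.pi*G*gam**2))

# With {c_i, p_j} = 8 pi G gamma delta_ij:  pdot_2 = -8 pi G gamma dH/dc_2.
p2dot = -8*sp.pi*G*gam*sp.diff(H, c2)
claimed = p2**2/(gam*mu2)*(s1 + s3)*sp.cos(mu2*c2)
residual = sp.simplify(p2dot - claimed)
assert residual == 0 or residual.equals(0)
```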
\section{Discussion}
\label{s5}
In this paper, we analyzed the ``improved'' LQC dynamics of the
Bianchi II model. As in the isotropic and Bianchi I cases, we chose
the matter source to be a massless scalar field since it continues
to serve as a viable relational time parameter in the classical as
well as the quantum theory. It is again rather straightforward to
accommodate additional matter fields in this framework.
Our broad strategy is the same as that used in the Bianchi I model
\cite{awe2}. However, because Bianchi II models have anisotropies
\emph{as well as} spatial curvature, holonomies around closed curves
are no longer guaranteed to be almost periodic functions of the
connection. Hence, one cannot use them to construct the field
strength operator on the LQC Hilbert space; a new conceptual and
technical input is necessary to define the quantum Hamiltonian
constraint operator. We overcame this difficulty by generalizing the
strategy used so far \cite{abl,aps3,warsaw,apsv,kv1,ls,bp,awe2}.
Specifically, we used holonomies around open segments parallel to
the fiducial triads ${}^o\!e^a_i$ to define a connection operator. This
strategy is also inspired by methods introduced by Thiemann in the
full theory \cite{tt}. However, because of gauge fixing LQC does not
enjoy the manifest diffeomorphism invariance of full LQG. As a
consequence, in LQC one needs a principle to fix the `length' of the
open segment along which holonomy is evaluated. We required that the
`length' be so chosen that the field strength operator constructed
from the resulting connection should agree with that constructed
from holonomies around closed loops whenever the second construction
is available. This guarantees that (apart from `tame' factor
ordering ambiguities) the new procedure reduces to the one used in
the LQC literature before. Moreover, the strategy of defining the
Hamiltonian constraint through this connection operator can be used
also in more general contexts. In particular, it enables one to
overcome a conceptual limitation of the otherwise complete treatment
of the isotropic, k=$-1$ model given in \cite{kv1,ls}. More
importantly, it extends to more general class A Bianchi models. A
systematic treatment of the Bianchi IX model along the lines of this
paper would be especially interesting.
There is a second ---but primarily technical--- difference from the
Bianchi I case: The Hamiltonian operator now contains inverse powers
of $p_1$. This was handled following a general method introduced by
Thiemann to define inverse triad operators in LQG \cite{tt}. As
usual, there is a factor ordering ambiguity. In the main discussion
we used the simplest operator which has the same symmetries with
respect to parity as its classical counterpart.
After addressing these two issues, we obtained a well defined
quantum Hamiltonian constraint and showed that the singularity in
Bianchi II models is resolved in the same precise sense as in the
FRW and Bianchi I models. The kinematical Hilbert space $\mathcal{H}_{\rm kin}^{\rm grav}$ can
be decomposed as $\mathcal{H}_{\rm kin}^{\rm grav} = \mathcal{H}_{\rm sing}^{\rm grav} \oplus \mathcal{H}^{\rm
grav}_{\rm reg}$ where states in the singular subspace have support
only on configurations with zero volume, while those in the regular
sub-space have no support on these singular configurations.
\emph{The Hamiltonian constraint annihilates states in $\mathcal{H}_{\rm
sing}^{\rm grav}$ and maps $\mathcal{H}^{\rm grav}_{\rm reg}$ to itself.} We
also provided an explicit form of the Hamiltonian constraint which
should be helpful in performing numerical simulations.
Finally, we obtained effective equations using the ``embedding
method'' introduced by Willis \cite{jw} and further developed by
Taveras \cite{vt} in the isotropic case. There, although the
assumptions made in the derivation fail in the deep Planck regime,
the final equations provide a surprisingly accurate approximation to
the full quantum evolution of states which are Gaussians at late
times. This holds not only for the exactly soluble k=0, $\Lambda=0$
model but also for the much more complicated $\Lambda\not=0$ and
k=$\pm 1$ models. It would be interesting to see if this phenomenon
extends also to Bianchi II models. Furthermore, numerical solutions
of these effective equations themselves may be of considerable
interest because the simplest upper bound on matter density they
lead to is higher than that in all other models studied so far,
including Bianchi I. Numerical simulations of effective equations
will answer several questions within this approximation. Is the
upper bound optimal, i.e., do generic solutions to effective
equations come close to saturating it? In the Bianchi I case,
numerical simulations by Vandersloot (private communication)
revealed that, unlike in the isotropic model, there are several
distinct kinds of `bounces.' Roughly, anytime a shear ---or a Weyl
curvature--- scalar enters the Planck regime, quantum geometry
repulsion comes into play in a dominant manner and `dilutes' that
scalar, preventing a blow up. How do additional terms in the Bianchi
II effective equations affect this scenario? Qualitative lessons
from numerical simulations would be valuable in developing further
intuition for various quantum geometry effects.
\section*{Acknowledgements}
We would like to thank Gianluca Calcagni, Alex Corichi, Jerzy
Lewandowski, Simone Mercuri, Tomasz Pawlowski and Hanno Sahlmann for
helpful discussions. This research was supported in part by NSF
grant PHY0854743, the George A. and Margaret M. Downsbrough
Endowment, the Eberly research funds of Penn State, Le Fonds
qu\'eb\'ecois de la recherche sur la nature et les technologies and
the Edward A. and Rosemary A. Mebus Fellowship.
\begin{appendix}
\section{Parity Symmetries}
\label{a1}
In non-gravitational physics, parity transformations are normally
taken to be discrete diffeomorphisms $x_i \rightarrow -x_i$ in the
physical space which are isometries of the flat 3-metric thereon. In
the phase space formulation of general relativity, we do not have a
flat metric ---or indeed, any fixed metric. Therefore these discrete
symmetries are no longer meaningful (except in the weak field
limit). However, if the dynamical variables have internal indices
---such as the triads and connections used in LQG--- we can use the
fact that the internal space $I$ is a vector space equipped with a
flat metric $q_{ij}$ to define parity operations on the internal
indices. Associated with any unit internal vector $v^i$, there is a
parity operator $\Pi_v$ which reflects the internal vectors across
the 2-plane orthogonal to $v^i$. This operation induces a natural
action on triads $e^a_i$, the connections $A_a^i$ and the conjugate
momenta $P^a_i =: (1/8\pi G\gamma)\, E^a_i$ (since they are internal
vectors or co-vectors).
The triads $e^a_i$ are proper internal co-vectors. In previous
references \cite{bd,awe2}, conventions were such that the spin
connection $\Gamma^a_i$ turned out to be an internal \emph{pseudo}
vector. It was then natural to regard the Barbero-Immirzi parameter
$\gamma$ to be a pseudo quantity so that the connection $A_a^i$ has
definite parity namely, it transforms as an internal pseudo-vector.
This in turn led to the conclusion that $P^a_i$ is also an internal
pseudo-vector (as one would expect because it is canonically
conjugate to $A_a^i$) \cite{awe2}. While this is all
self-consistent, these conventions lead to two undesirable
consequences. First, in the classical theory, it is not possible to
reconstruct the triads $e^a_i$ unambiguously starting from the
momenta $P^a_i$. Therefore, one cannot recover the space-time
geometry unambiguously starting from the Hamiltonian theory. Second,
the momenta $P^a_i$ are subject to a non-holonomic constraint which
obstructs the passage to quantum theory a la LQG. However, if one
sets conventions as in section \ref{s2.1}, then $\Gamma_a^i, \gamma,
A_a^i$ and $P_a^i$ are all \emph{proper} quantities and the two
difficulties disappear \cite{aa-dis}. In the main text we have used
this strategy. We now summarize the differences from the Appendix of
\cite{awe2} that it leads to.
In diagonal Bianchi models, we can restrict ourselves just to three
parity operations $\Pi_i$. Under their action, the canonical
variables $c_i,p_i$ transform as follows:
\begin{equation} \label{P1} \Pi_1 (c_1,c_2, c_3) = (-c_1, c_2, c_3), \quad \quad
\Pi_1 (p_1, p_2, p_3) = (-p_1, p_2, p_3)\, , \end{equation}
and the action of $\Pi_2, \Pi_3$ is given by cyclic permutations.
Thus, $c^i$ and $p_i$ are \emph{proper} internal vectors and
co-vectors. Under any of these maps $\Pi_i$, the symplectic
structure (\ref{pb2}), the Hamiltonian (\ref{Hcl}), and hence also
the Hamiltonian vector field, are left invariant. This is just as
one would expect because $\Pi_i$ are simply large gauge
transformations of the theory under which the physical metric
$q_{ab}$ and the extrinsic curvature $K_{ab}$ do not change. Also,
it is clear from the action of (\ref{P1}) that if one knows the
dynamical trajectories on the octant $p_i\ge 0$ of the phase space,
then dynamical trajectories on any other octant can be obtained just
by applying a suitable (combination of) $\Pi_i$. Therefore, in the
classical theory one can restrict one's attention just to the
positive octant.
Let us now turn to the quantum theory. We now have three operators
$\hat\Pi_i$. Their action on states is given by
\begin{equation} \hat\Pi_1 \Psi(\lambda_1,\lambda_2,\lambda_3) = \Psi(-\lambda_1,\lambda_2,\lambda_3)\, , \end{equation}
etc. What is the induced action on operators? Since
\begin{align} \label{P2}
\hat{\Pi}_1\hat{\lambda}_1\hat{\Pi}_1\Psi(\lambda_1,\lambda_2,\lambda_3)
&= \hat{\Pi}_1 \Big({\lambda}_1\,\Psi(-\lambda_1,\lambda_2,\lambda_3)\Big) \nonumber \\
&= -\lambda_1\Psi(\lambda_1,\lambda_2,\lambda_3), \end{align}
we have
\begin{equation} \label{P3} \hat{\Pi}_1\hat{\lambda}_1\hat{\Pi}_1 = -\hat{\lambda}_1. \end{equation}
The Hamiltonian constraint operator, modulo factor ordering which is
not important here, is given by Eq. (\ref{qHam1}). To calculate its
transformation property under parity maps, in addition to
(\ref{P3}), we also need the transformation property of the
operators $\sin \bar\mu_ic_i$ and $\hat{\varepsilon}$ and operators
corresponding to inverse powers of $p_1$. Due to the symmetries of
type A Bianchi models, to know the properties of $\sin \bar\mu_ic_i$
under parity transformations, it is sufficient to calculate
$\hat{\Pi}_i \sin \bar\mu_1c_1\hat{\Pi}_i$. We have:
\begin{align} \hat{\Pi}_1\sin\bar\mu_1c_1\hat{\Pi}_1\Psi(\lambda_1,\lambda_2,\lambda_3)
&= \frac{1}{2i}\, \hat\Pi_1\,\Big[\Psi(-\lambda_1+\frac{1}{|\lambda_2\lambda_3|},\lambda_2,\lambda_3)-
\Psi(-\lambda_1-\frac{1} {|\lambda_2\lambda_3|},\lambda_2,\lambda_3)\Big] \nonumber \\
&= \frac{1}{2i}\Big[\Psi(\lambda_1+\frac{1}{|\lambda_2\lambda_3|},\lambda_2,\lambda_3)-\Psi(\lambda_1-
\frac{1}{|\lambda_2\lambda_3|},\lambda_2,\lambda_3)\Big] \nonumber \\ &=
-\sin\bar\mu_1c_1\Psi(\lambda_1,\lambda_2,\lambda_3), \end{align}
whence
\begin{equation} \hat{\Pi}_1 \sin\bar\mu_1c_1\hat\Pi_1 = -\sin\bar\mu_1c_1. \end{equation}
An identical calculation shows that
\begin{align} \hat{\Pi}_2\sin\bar\mu_1c_1\hat{\Pi}_2\,\Psi(\lambda_1,\lambda_2,\lambda_3)
&= \frac{1}{2i}\, \hat\Pi_2\, \Big[ \Psi(\lambda_1-\frac{1}{|\lambda_2\lambda_3|},
-\lambda_2,\lambda_3)-\Psi(\lambda_1+\frac{1}{|\lambda_2\lambda_3|},-\lambda_2,\lambda_3)\Big] \nonumber\\
&= \frac{1}{2i}\Big[\Psi(\lambda_1-\frac{1}{|\lambda_2\lambda_3|},\lambda_2,\lambda_3)-
\Psi(\lambda_1+\frac{1}{|\lambda_2\lambda_3|},\lambda_2,\lambda_3)\Big] \nonumber \\
&= \sin\bar\mu_1c_1\Psi(\lambda_1,\lambda_2,\lambda_3)\, , \end{align}
and similarly for $\hat\Pi_3$. Therefore, we have:
\begin{equation} \hat{\Pi}_2 \sin\bar\mu_1c_1\hat\Pi_2 = \sin\bar\mu_1c_1, \quad
{\rm and} \quad \hat{\Pi}_3 \sin\bar\mu_1c_1\hat{\Pi}_3 =
\sin\bar\mu_1c_1.\end{equation}
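The operator identities above can be checked numerically on test functions. A minimal sketch, assuming only the shift action of $\sin\bar\mu_1c_1$ displayed above (the test function is an arbitrary smooth, asymmetric choice):

```python
import cmath

# An arbitrary smooth, asymmetric test function Psi(l1, l2, l3) -> complex.
def psi(l1, l2, l3):
    return cmath.exp(-(l1 - 0.3)**2 - 0.1*l2**2) * (l2 + 1j*l3 + 0.5)

def Pi1(f):          # parity operator: lambda_1 -> -lambda_1
    return lambda l1, l2, l3: f(-l1, l2, l3)

def Pi2(f):          # parity operator: lambda_2 -> -lambda_2
    return lambda l1, l2, l3: f(l1, -l2, l3)

def S1(f):           # sin(mubar_1 c_1) as the shift operator above
    return lambda l1, l2, l3: (f(l1 + 1/abs(l2*l3), l2, l3)
                               - f(l1 - 1/abs(l2*l3), l2, l3)) / 2j

pt = (0.7, 1.2, -0.9)
# Pi1 sin(mubar_1 c_1) Pi1 = -sin(mubar_1 c_1):
assert abs(Pi1(S1(Pi1(psi)))(*pt) + S1(psi)(*pt)) < 1e-12
# Pi2 sin(mubar_1 c_1) Pi2 = +sin(mubar_1 c_1):
assert abs(Pi2(S1(Pi2(psi)))(*pt) - S1(psi)(*pt)) < 1e-12
```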
As expected, these transformation properties of $\sin\bar\mu_1c_1$
under $\hat\Pi_i$ mirror those of $c_1$ under the three parity
operations $\Pi_i$ in the classical theory. (Note that, because of
the absolute value signs in the expressions (\ref{mubar}),
$\bar\mu_i$ do not change under any of the parity maps.) Finally,
it is clear from Eq. (\ref{ve2}) that
\begin{equation} \hat\Pi_i\,\hat{\varepsilon}\,\hat\Pi_i = \begin{cases}
\hat{\varepsilon} & \text{if } v=0, \\
-\hat{\varepsilon} & \text{otherwise,}
\end{cases} \end{equation}
and from Eq. (\ref{inv}) that
\begin{equation} \hat\Pi_i\,\widehat{|p_1|^{-1/4}}\,\hat\Pi_i =
\widehat{|p_1|^{-1/4}}. \end{equation}
(Note incidentally that this need not be the case for different
factor ordering choices in Eq. (\ref{inv}).)
We can now collect these results to study the transformation
property of the Hamiltonian constraint. Consider first the regular
subspace $\mathcal{H}_{\rm reg}^{\rm grav}$ of $\mathcal{H}_{\rm kin}^{\rm grav}$ spanned by states which
have no support on points with $v=0$. From Eq. (\ref{qHam1}) it
follows that the restriction to $\mathcal{H}_{\rm reg}^{\rm grav}$ of the
gravitational part of the Hamiltonian constraint is left invariant
under $\hat\Pi_i$. Since $\hat{p}_T^2$ is manifestly invariant, on
the regular sub-space we have
\begin{equation} \label{Pham} \hat{\Pi}_i\,\, \hat{\mathcal{C}}_H \,\,\hat{\Pi}_i
= \hat{\mathcal{C}}_H. \end{equation}
Next, since the gravitational part of the Hamiltonian constraint
annihilates the states in the singular sub-space (i.e. those with
support only on those points at which $v=0$), we have
\begin{equation} \hat{\mathcal{C}}_H\Psi=-\hbar^2\partial_T^2\Psi=\hat{\Pi}_i\,\,
\hat{\mathcal{C}}_H \,\, \hat{\Pi}_i\Psi. \end{equation}
Thus, the Hamiltonian constraint operator is left invariant by all
the parity operators, mirroring the behavior of its classical
counterpart.
This invariance implies that, given any state $\Psi \in
\mathcal{H}_{\rm kin}^{\rm grav}$, the restriction to the positive
octant of its image under $\hat{\mathcal{C}}_{\rm grav}$ determines
its image everywhere on $\mathcal{H}_{\rm kin}^{\rm grav}$. This
property simplifies calculations and was used to arrive at the form
of the Hamiltonian constraint given in (\ref{qHamfin}).
\end{appendix}
\section{Introduction}
Spatio-temporal modeling of catch or landings per unit effort data has a long history in fisheries sciences. Such models are mainly used to standardize fisheries dependent (i.e. landing and effort data from commercial fisheries) or fisheries independent (catch data by haul from scientific research cruises) information in order to derive an index of abundance~\cite{maunder_2004}. Such an index is typically used to inform population dynamic models used to assess the exploitation status of fish stocks.
Yet few applications exist in which spatio-temporal models, fitted to catch or landings and effort data, are used to forecast the spatio-temporal dynamics of fish species. Such applications could be very valuable in the context of dynamic ocean management, a new fisheries management paradigm defined as ‘management that changes rapidly in space and time in response to the shifting nature of the ocean and its users based on the integration of new biological, oceanographic, social and/or economic data in near real-time’~\cite{maxwell_2015}. A notable example of dynamic ocean management is Real Time Incentives (RTIs), a system that reduces fish by-catches by providing weekly maps with tariffs set according to the expected catches of the fishery for a given set of species~\cite{kraak_2015}. Clearly, applications of dynamic ocean management typically build on the fusion of alternative data sources, e.g. remote sensing, and advanced analytical processing and modeling techniques.
In this paper, we combine common fisheries dependent data sources, being daily landings reported by fishers through electronic logbooks and data on vessel activity provided through the Vessel Monitoring System (VMS), and environmental data (sea bottom temperature). With these data, we use a computer vision and machine learning pipeline to predict the spatio-temporal abundance of two commercially important species of the Belgian fishery, sole (\textit{Solea solea}) and plaice (\textit{Pleuronectes platessa}).
\section{Related works}
\emph{fish distribution forecasting} Different applications exist that forecast the spatio-temporal distribution of fish in relation to environmental conditions. Many of these applications build on suitable habitat models in which so-called environmental envelopes are constructed from species occurrence data or expert knowledge. These envelopes, constrained to some predefined shape (e.g. trapezoidal~\cite{kaschner_2006}, shape-constrained GAMs~\cite{citores_2020}), show the probability of species occurrence in response to a set of environmental variables and can as such be used to predict suitable habitats. Typically, the environmental envelopes are rather static and are used to make long-term predictions of changes in species distribution for a given climate scenario.
An alternative approach was developed for the EcoCast application, in which machine learning models are used to predict the daily probability of occurrence of swordfish, sea lions, leatherback turtles and blue sharks off the coast of Oregon and California~\cite{hazen_2018}. These individual species maps are combined to identify suitable areas for fishing. The approach applies boosted regression trees to historical observer (presence/absence, 1991--2014) and tracking (2001--2009) data derived from tagging experiments, combined with multiple environmental data, to model the probability of species occurrence. This model uses the most recent observations (often from the previous day) of the environmental parameters to make predictions for the current day. It thus assumes that the most recent environmental conditions are similar to the current ones, a shortcoming we aim to address in this study.
\emph{temperature forecasting with machine learning} Many approaches have been specifically proposed to predict temperature from satellite data with recurrent deep learning methods~\cite{xiao_spatiotemporal_2019,kim_sea_2020,qiao_sea_2021,nipslf}. Simultaneously, related approaches, designed to predict images from videos, have been proposed~\cite{wang_predrnn_nodate,fpred1}. To our knowledge, none of these works have been applied to the prediction of water temperature at the sea bottom.
\section{Proposed framework}
To predict the probability of fish presence at a given location at sea, we propose a framework that consists of two main building blocks. The first is a deep learning model, trained on satellite images, that predicts the sea bottom temperature. The second is a machine learning (gradient boosting) model, which uses the predicted temperature as input, together with a set of features from a dataset comprising information on landings of particular fish species at particular locations.
\subsection{Datasets used}
\subsubsection{Satellite data}
The sea bottom temperature data used in this work were collected from the Copernicus Marine Environment Monitoring Service (CMEMS). Sea bottom temperatures are not directly observed but generated by the NEMO (Nucleus for European Modelling of the Ocean) ocean model, which uses the 3DVar NEMOVAR system to assimilate observations.
The region of interest for our experiments is a rectangular area with latitude ranging from 50.07 to 55.33 and longitude ranging from 0.22 to 8.56, discretized on an 80x80 latitude-longitude grid. Figure~\ref{fig:aera} shows the location of this area. We collected data from the years 2006 to 2020, giving a total of 5295 consecutive days of bottom temperature.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{imgs/bottomT_in_clip_up.eps}
\caption{The region of interest is an area located between the north of France and the south of the UK (shown in color).}\label{fig:aera}
\centering
\end{figure}
\subsubsection{Fisheries data}
A dataset that contains species landings of the Belgian fishing fleet was compiled from electronic logbook and VMS data. The electronic logbook data comprise daily information on fish landings, a description of the fishing activity (gear and mesh size used, ICES statistical rectangle, fishing trip identifier, and day), and information on the fishing vessel (vessel identifier, vessel tonnage, engine power and length). The landing and effort data were combined according to an agreed procedure as described in~\cite{hintzen}. By merging both datasets, a final dataset with landing data at a higher spatio-temporal resolution is generated.
Data was available from 2006 up to 2020. In total the dataset comprised \num{1684560} observations.
\subsection{Predicting bottom temperature with deep learning}
The first component of our framework is a deep learning model capable of forecasting the bottom temperature from a temperature history.
\subsubsection{Problem formulation}
The bottom temperature for a given timestamp can be represented by a matrix $BT$ of dimensions $M \times N$, where $M$ denotes the resolution in longitude space and $N$ the resolution in latitude space. Therefore, $BT[i][j]$ is the numerical value of the bottom temperature at coordinate $i,j$ in degrees Celsius.
We can then construct a sequence of consecutive bottom temperatures for a given time period. Let $BT_t$ be the temperature matrix at time $t$. The sequence is then the following:
\begin{equation}
BT_{0},BT_{1}, \cdots, BT_{h-1}, BT_{h}, \cdots , BT_{h+p-1}
\label{eq:seq}
\end{equation}
The length of the sequence is $l = p + h$ with $p$ the number of matrices to predict and $h$ the number of available histories. Our goal is therefore to predict $p$ matrices of consecutive bottom temperatures given a history of $h$ measures.
\subsubsection{Creation of the sequences and processing of land areas}
The raw dataset is one long sequence of $nb_{R}=5295$ bottom temperature matrices. The sequences needed by the neural network are built by sliding a window step by step over this raw sequence. Thus, the dataset size used for training, which is the number of sequences of length $l$, can be calculated as follows: $n_{sequence} = nb_{R}-l$
\label{txt:minus5}
Then, the data are normalized by subtracting the mean value of the dataset from each value and dividing by its standard deviation. Our region of interest contains land, which corresponds to NaN values in the dataset. These NaN values are replaced, after normalization, by a large negative value ($-5$ in our implementation).
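A minimal sketch of this preprocessing (the array shapes and the toy data are illustrative assumptions):

```python
import numpy as np

def make_sequences(raw, l, land_fill=-5.0):
    """Slide a window of length l over the raw series of temperature maps.

    raw: array of shape (nb_R, M, N); land cells are NaN.
    Returns (sequences, mean, std), where sequences has shape
    (nb_R - l, l, M, N), is normalized, and has land cells set to land_fill.
    """
    mean = np.nanmean(raw)
    std = np.nanstd(raw)
    norm = (raw - mean) / std                   # normalize first ...
    norm = np.nan_to_num(norm, nan=land_fill)   # ... then fill land cells
    n_seq = raw.shape[0] - l                    # n_sequence = nb_R - l, as in the text
    seqs = np.stack([norm[t:t + l] for t in range(n_seq)])
    return seqs, mean, std

# Toy example: 10 days of 4x4 maps with one land cell.
raw = np.random.rand(10, 4, 4)
raw[:, 0, 0] = np.nan
seqs, mean, std = make_sequences(raw, l=8)
print(seqs.shape)   # (2, 8, 4, 4)
```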
\subsubsection{Recurrent neural network}
Our work is based on the PredRNN++ framework~\cite{wang_predrnn_nodate}: we build our forecasting model on the building blocks introduced in~\cite{wang_predrnn_nodate}, namely the recurrent units called Causal LSTMs and the Gradient Highway Unit.
We chose this framework because it handles short-term dependencies very efficiently and outperforms traditional approaches often used in satellite data forecasting, such as LSTM or Convolutional LSTM~\cite{xiao_spatiotemporal_2019,kim_sea_2020,moskolai_application_2021}.
\subsection{Fish prediction with gradient boosting}
The fisheries dataset, which contains a substantial history of fishing operations in the North Sea, is used to build a machine learning model that aims to predict the probability of fish presence in the sea.
Since plaice and sole are bottom-dwelling species with a narrow thermal preference range, the temperature of the sea floor is assumed to influence the location of fish. Hence, we include the output of the temperature prediction model presented in the previous section among the features used by our machine learning model. The fish predictions are therefore made on the same 80x80 grid as the temperature maps.
\subsubsection{Gradient boosting to predict potential fishing zone}
Since the fisheries dataset contains many different types of features, we chose a decision tree-based machine learning algorithm. Specifically, we used the lightGBM~\cite{ke_lightgbm_2017} framework with GPU acceleration~\cite{zhang_gpu-acceleration_nodate}. We build two models for two species of fish: one to predict the probability of presence of plaice, and one to predict the probability of presence of sole. The output of the models is then a probability of fish presence for each point of the 80x80 map.
\subsubsection{Feature selection}
From the original dataset, we build seven features to train the machine learning models: longitude ($i$), latitude ($ii$), bottom temperature ($iii$), two features for the day ($iv$, $v$) and two for the year ($vi$, $vii$). Indeed, we transform the day and the month, which are cyclic in nature, into two new features each using a sine and cosine transformation~\cite{cyclic}. Furthermore, because the fish are predicted on an 80x80 resolution grid, the latitude and longitude correspond to the coordinates of the point for which the prediction is made and are therefore integers ranging from 0 to 79.
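The cyclic sine/cosine transformation can be sketched as follows (the 365-day period is an illustrative assumption):

```python
import math

def cyclic_encode(value, period):
    """Map a cyclic quantity (e.g. day of year) to (sin, cos) features."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Day 1 and day 365 of a 365-day year end up close together in feature
# space, unlike with the raw integer encoding:
s1, c1 = cyclic_encode(1, 365)
s2, c2 = cyclic_encode(365, 365)
dist = math.hypot(s1 - s2, c1 - c2)
print(round(dist, 3))   # ~0.017
```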
\section{Experimental settings}
\subsection{Train-test-evaluation splits}
The recurrent neural network is trained and evaluated on satellite data from the sequences defined in section \ref{txt:minus5}. We follow the evaluation procedure described in \cite{xiao_spatiotemporal_2019}: the training, validation, and test sets follow each other in chronological order. In our study, the training set consists of the years 2006 to 2018, while the evaluation and test sets are formed from the years 2019 to 2020.
The plaice and sole gradient boosting models are trained on both the satellite and fisheries datasets with a similar train-test-evaluation split. We take the years 2019 to 2020 as the test and evaluation sets.
\subsection{Hyperparameters}
The hyperparameters of the recurrent neural network for temperature forecasting are as follows. The input and output lengths are set to 4, which means that we predict 4 days from a history of 4 days. The optimization is performed with Adam~\cite{DBLP:journals/corr/KingmaB14} with a learning rate of 0.001. The loss function used in our experiments is the $L1+L2$ function, which gives better performance than using the $L1$ or $L2$ functions alone. The batch size is set to 8. Following the neural network architecture introduced in \cite{wang_predrnn_nodate}, the network is built with 4 Causal LSTM layers with 128, 64, 64 and 64 channels and a 128-channel Gradient Highway Unit. In all recurrent units, the convolution filter size is set to 5.
The two lightGBM models use the standard hyperparameters of the framework~\cite{ke_lightgbm_2017} with a number of leaves set to 23. The objective is binary classification and the loss used is the log-loss.
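A hypothetical reconstruction of this configuration, using parameter names from the lightGBM API (every value other than the number of leaves, the binary objective and the log-loss is an assumption):

```python
# Hypothetical lightGBM configuration matching the description above;
# parameter names follow the official lightGBM API.
params = {
    "objective": "binary",        # binary presence/absence classification
    "metric": "binary_logloss",   # log-loss, as stated in the text
    "num_leaves": 23,             # the only non-default value reported
    "device_type": "gpu",         # GPU acceleration (assumes a GPU build)
}
# Training would then look like (sketch, not run here):
#   import lightgbm as lgb
#   booster = lgb.train(params, lgb.Dataset(X_train, y_train))
```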
\subsection{Metrics}
We evaluate the temperature forecasting model using two metrics well suited to regression problems: MAE (mean absolute error) and RMSE (root-mean-square error).
For the fish prediction models, we use the F1 score, which is well suited to binary classification problems.
\begin{equation*}
\mathit{RMSE}=\sqrt{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n (x_{ij}-y_{ij})^2}
\quad\text{,}\quad
\mathit{MAE}=\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n |x_{ij}-y_{ij}|
\end{equation*}
\begin{equation*}
\mathit{F1} = \frac{2\cdot\mathit{Precision}\cdot\mathit{Recall}}{\mathit{Precision}+\mathit{Recall}}
\end{equation*}
$m$ and $n$ correspond to the latitude and longitude resolutions ($m=n=80$). Since we have land in our study area, we make sure that, for both RMSE and MAE, the coordinates $i,j$ range only over points that are in the sea.
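The sea-masked metrics can be sketched as follows (the mask and toy data are illustrative assumptions):

```python
import numpy as np

def masked_rmse(pred, truth, sea_mask):
    """RMSE restricted to sea cells (sea_mask is True over water)."""
    d = (pred - truth)[sea_mask]
    return float(np.sqrt(np.mean(d**2)))

def masked_mae(pred, truth, sea_mask):
    """MAE restricted to sea cells."""
    d = (pred - truth)[sea_mask]
    return float(np.mean(np.abs(d)))

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Toy check on a 3x3 map with one land cell.
truth = np.zeros((3, 3))
pred = np.ones((3, 3))
pred[0, 0] = 100.0                # a wild value on the land cell ...
mask = np.ones((3, 3), dtype=bool)
mask[0, 0] = False                # ... that the mask must exclude
print(masked_rmse(pred, truth, mask), masked_mae(pred, truth, mask))  # 1.0 1.0
```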
\section{Performance evaluation}
To measure the performance of our framework, we first evaluate the performance of its two components and then that of the entire framework. For the fish prediction models, we also verify that they have captured the link between bottom temperature and fish abundance that exists in the marine environment.
\subsection{Temperature forecasting performance}
We first ensure that the forecasting performance of our model is better than simply taking the last known day as a predictor of future days' temperature.
Then, we validate our choice of setting the temperature value of land areas to a high negative value.
\subsubsection{Last known day estimator compared to the proposed forecasting model}
\paragraph{Motivation}
For a given sequence such as the one in equation~(\ref{eq:seq}), we call the last day estimator the temperature matrix $BT_{h-1}$. We can then calculate the error (RMSE and MAE) between this matrix and the matrices $BT_{h}\cdots BT_{h+p-1}$.
Similarly to \cite{nipslf}, we perform this experiment because we observed that the day-to-day variation of the temperature is extremely low. For this reason, it would be easy to develop a forecasting algorithm with a low error rate by simply taking the last known day as the prediction. It is therefore essential to ensure that our forecasting model outperforms the last known day estimator.
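A hypothetical sketch of this persistence baseline (the array layout and function name are assumptions for illustration, not the actual evaluation code):

```python
import numpy as np

def last_day_baseline_errors(sequence, p):
    """Error of the persistence baseline: use BT_{h-1} as the forecast
    for the next p days BT_h .. BT_{h+p-1}.

    sequence: (T, m, n) array of daily bottom-temperature grids (NaN on
    land); returns one (RMSE, MAE) pair per forecast horizon, computed
    over sea points only.
    """
    last = sequence[-p - 1]           # BT_{h-1}, the last known day
    sea = ~np.isnan(last)
    errors = []
    for k in range(p):                # horizons 1 .. p
        diff = (last - sequence[-p + k])[sea]
        errors.append((np.sqrt(np.mean(diff ** 2)), np.mean(np.abs(diff))))
    return errors
```

The returned per-horizon errors correspond to the orange bars of Figures~\ref{fig:mae} and \ref{fig:rmse}.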
\paragraph{Results}
The errors obtained by simply using the last known day are shown in orange in Figure~\ref{fig:mae} for MAE and Figure~\ref{fig:rmse} for RMSE.
As expected, using the last known day as the temperature estimator for the following days results in a relatively small error ($0.10^\circ C$ of MAE) for the first day. The error increases significantly for the following days (for predicting the fourth day, the error is $0.3^\circ C$).
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{imgs/mae_plot_val.eps}
\caption{MAE}
\label{fig:mae}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{imgs/rmse_plot_val.eps}
\caption{RMSE}
\label{fig:rmse}
\end{subfigure}
\caption{For both figures, the error is calculated between the forecast and the ground truth (blue bars) and between the last day estimator and the ground truth (orange bars). Lower values are better.}
\label{fig:mae_rmse}
\end{figure}
In both figures \ref{fig:mae} and \ref{fig:rmse}, we plot the error for the day predicted by our model (in blue).
We can observe that our model outperforms the last known day estimator on both RMSE and MAE for all days. Furthermore, the relative gap between the last known day estimator and our model's predictions widens as the forecast horizon increases: in terms of RMSE, the last known day estimator's error is 14.4\% higher for day 1 and 26.3\% higher for day 4.
\subsubsection{Spatial distribution of the errors}
During the preprocessing of the dataset, we replace the NaN values (which represent land) with a high negative value. To validate this approach, we compute the RMSE between the predicted temperatures and the ground truth for days 1--4 and show the RMSE for each cell of the error matrix. Results are depicted in Figures~\ref{fig:pred_1} to \ref{fig:pred_4}.
For the predicted days, the errors are fairly evenly distributed across all areas. The errors around the interface between the sea and the land are comparable to those of the other zones, which validates our approach.
It should be noted, however, that these interface areas are among those that concentrate the most errors, especially as the forecast horizon increases. This can be explained by the fact that they are shallow areas where day-to-day temperature variations are the largest. We illustrate this in Figure~\ref{fig:last-estim}, where we compute the location of the RMSE errors between the last known day estimator and the ground truth of day 4. No model prediction is involved here; we only compare ground-truth values. The largest errors correspond to the largest temperature variations, which confirms that the areas with the largest prediction errors are those where temperature variations are the most important.
\begin{figure}[h]
\centering
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\textwidth]{imgs/pred_1.eps}
\caption{day 1}
\label{fig:pred_1}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\textwidth]{imgs/pred_2.eps}
\caption{day 2}
\label{fig:pred_2}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\textwidth]{imgs/pred_3.eps}
\caption{day 3}
\label{fig:pred_3}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\textwidth]{imgs/pred_4.eps}
\caption{day 4}
\label{fig:pred_4}
\end{subfigure}
\caption{RMSE between ground truth and prediction for days 1 to 4. RMSE is computed for each point ($80\times80$) in the area of interest. The white area is the land.}
\label{fig:pred}
\end{figure}
\subsection{Influence of bottom temperature for fish prediction}
\subsubsection{Link between temperature and quality of prediction}
We train both fish prediction models with the fisheries dataset. To validate the relevance of using bottom temperature for the prediction of sole and plaice presence, we perform two experiments. In the first, we train and test a model with and without the ``bottom temperature'' feature. The results are reported in Table~\ref{tab1} under the columns ``w/ temp.'' and ``w/o temp.''. For this experiment, the bottom temperature is not predicted; we use the ground truth value from the Copernicus dataset.
According to the results presented in Table~\ref{tab1}, using bottom temperature increases the quality of the prediction for both the plaice and sole models. This justifies our approach of using bottom temperature for fish presence prediction.
\subsubsection{Link between temperature and abundance of fish}
To measure more precisely how the model has captured the link between temperature and fish presence, the following experiment was conducted. We added an offset, from $-2^\circ C$ to $7^\circ C$, to the bottom temperature of each grid point on a given day and computed the mean probability of fish presence in the study area using the sole and plaice models. Results are depicted in Figure~\ref{fig:tempincr}. As shown, when we decrease (resp. increase) the temperature, the model predicts a lower (resp. higher) occurrence probability of fish on average. The stronger positive correlation between species occurrence and seawater temperature for sole compared to plaice can be explained by the fact that sole is a Lusitanian species that prefers warmer water. Moreover, the study area comprises the northernmost habitat of the sole. In contrast, plaice is widely distributed over the entire North Sea, which may explain why the response of species occurrence to temperature is less pronounced.
\begin{table}
\caption{F1 score for plaice (PLE) and sole (SOL) models}
\label{tab1}
\centering
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Fish type & w/ temp. & w/o temp. & pred. day 1 & pred. day 2 & pred. day 3 & pred. day 4\\
\hline
SOL & 82.8 & 81.4 & 82.7& 82.7& 82.6& 82.4 \\
PLE & 82.0 & 80.6 & 81.8& 81.8& 81.8& 81.8\\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\linewidth]{imgs/last_estim_3.eps}
\captionsetup{width=0.8\linewidth}
\captionof{figure}{RMSE errors between the last known day estimator and the ground truth of day 4.}
\label{fig:last-estim}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\textwidth]{imgs/temperature_variation.eps}
\captionsetup{width=0.8\linewidth}
\captionof{figure}{Average probability of fish presence when an offset is added to bottom temperature.}
\label{fig:tempincr}
\end{minipage}
\end{figure}
\subsection{Performance of the complete framework}
The accuracy of the fish prediction is evaluated using the predicted bottom temperature as follows: for each day $d$ of the fisheries data test set, we construct four predictions. For the prediction at day 1 (pred. day 1 in Table~\ref{tab1}), we construct a sequence $BT_{d-4},BT_{d-3}, \cdots, BT_{d-1}$ from the ground truth measurements. Thus, with our bottom temperature prediction model, we can predict day $d$ by taking the first day of the predicted sequence. For the prediction of day 2, we build a sequence $BT_{d-5},BT_{d-4}, \cdots, BT_{d-2}$ and take the second predicted day. We proceed in the same way for day 3 and day 4. We then measure the quality of the prediction in terms of F1 score when we use these different predictions as the temperature for day $d$. The results are presented in Table~\ref{tab1}.
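The construction of the input sequence for the day-$k$ prediction can be sketched as follows (the container \texttt{bt}, the sequence length \texttt{h}, and the function name are illustrative assumptions about this evaluation protocol):

```python
def sequence_for_horizon(bt, d, k, h=4):
    """Ground-truth input sequence used to obtain the day-k prediction
    of day d (k = 1..4): the h consecutive days ending at BT_{d-k}.
    The forecasting model is then run on this sequence and its k-th
    predicted day is used as the temperature estimate for day d.

    bt: mapping from day index to temperature grid (here, any indexable
    container works for the sketch).
    """
    return [bt[d - k - h + 1 + i] for i in range(h)]
```

For example, with $k=1$ the sequence covers days $d-4,\ldots,d-1$, and with $k=2$ it covers days $d-5,\ldots,d-2$, matching the protocol described above.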
Our experiments show that, even though the temperature predictions contain small errors in terms of MAE and RMSE, using these predicted values yields excellent results for 1, 2, 3 and 4 days ahead. With a 4-day prediction, performance is extremely close to that obtained with the ground truth value, for both sole and plaice.
\section{Discussion and Conclusion}
Our study showed how a pipeline of advanced analytical tools and data fusion can be used to make a short term forecast of species occurrence in a highly variable environment such as the marine ecosystem.
We showed that recurrent neural network building blocks designed to predict successive frames of a video can be effective for sea bottom temperature forecasting. During the evaluation, we noticed that the day-to-day variation in temperature is small, so even using only the last known day as a predictor yields small errors. We therefore carefully ensured that our model outperforms this simple estimator. Moreover, our choice to replace the value of the coordinates containing land by a high negative value was validated.
By enriching a fisheries dependent dataset with features on environmental conditions such as sea bottom temperature, a higher predictive accuracy of sole and plaice occurrence could be achieved with a gradient boosting algorithm. This accuracy was further increased by using the predicted sea bottom temperature that was inferred from a recurrent deep learning model. Since information on environmental conditions is currently available in near real time through satellite based earth monitoring programs such as Copernicus, this may provide an opportunity for practical applications of dynamic ocean management.
Although the accuracy of sole and plaice occurrence prediction could be increased by adding an environmental feature to the fisheries dataset, the overall accuracy can still be improved. It should be noted, however, that the quality of fisheries dependent data is limited. The landing data reported by fishers are estimated with a fault tolerance of 20\%. Moreover, misreporting is known to occur, whereby species landings from one area are reported in another area. Finally, the landing data are reported on a daily basis and distributed over the GPS positions of a fishing vessel recorded during that day, following~\cite{hintzen}. This assumes an equal distribution of landings over all fishing positions, which is unlikely to hold in reality. As more data become available and alternative catch monitoring techniques (e.g., image analysis) are implemented, the predictive accuracy of fish occurrence models relying on fisheries dependent information is likely to increase.
\bibliographystyle{splncs04}
\section{Introduction}
Keyphrase Extraction (KE), a fundamental task in Natural Language Processing (NLP), aims to extract phrases related to the main points discussed in the source document. Because of their succinct and accurate expression, extracted keyphrases are helpful for a variety of applications such as information retrieval \cite{KimKCOPS13} and text summarization \cite{LiuPLL09}. Typically, keyphrase extraction methods consist of two main components: candidate keyphrase extraction and keyphrase importance estimation. Concretely, the former extracts candidate keyphrases from the source document via heuristics (e.g., the n-grams shown in Figure~\ref{example}), and the latter determines which candidates are chosen as keyphrases. In other words, keyphrase importance estimation directly affects the performance of the keyphrase extraction model in most cases.
\begin{figure}
\centering
\includegraphics[scale=0.37]{example.pdf}
\caption{Excerpt from a document in the \textit{OpenKP} dataset. For ease of presentation, we treat ``a large region of land'' as an example of a 5-gram phrase.}
\label{example}
\end{figure}
Generally, in the neural supervised keyphrase extraction model, keyphrase importance estimation can be subdivided into information representation and importance discrimination. Specifically, information representation focuses on modeling the encoding procedure, and the importance discrimination focuses on measuring and ranking the importance of candidate phrases. To represent information comprehensively, recent methods have been proposed to learn better representations via different backbones, such as Bi-LSTM \cite{catseq17}, GCNs \cite{gcn2019, gcn2020}, and pre-trained language models (e.g., ELMo \cite{xiong19} and BERT \cite{bert2020, baseline}). To distinguish the importance of candidate phrases precisely, most existing supervised models \cite{baseline, span2020, song} estimate and rank the importance of candidate phrases to extract keyphrases by using different approaches, such as classification and ranking models.
Although the methods mentioned above have achieved significant performance, the keyphrase extraction task still needs improvement, with two main issues. The first issue lies in the information representation. Typically, phrases exhibit an inherent hierarchical structure ingrained with complex syntactic and semantic information \cite{text_hyperbolic,alleman2021syntactic, ZhouLZ20}. In general, longer phrases contain more complex structures (as shown in Figure~\ref{example}, the phrase ``a large region of land'' has a more complex inherent structure than ``region'' or ``a large region''; similarly, ``a large region'' is more complex than ``region''). Beyond phrases, since linguistic ontologies are intrinsic hierarchies \cite{text_hyperbolic}, the conceptual relations between phrases and the document can also form hierarchical structures. Therefore, hierarchical structures need to be considered when representing both phrases and documents and when estimating the phrase-document relevance. However, it is difficult to capture such structural information even with infinite dimensions in Euclidean space \cite{LinialLR95}. The second issue lies in distinguishing the importance of phrases. Keyphrases are typically used to retrieve and index their corresponding document, so they should be highly related to the main points of the source document \cite{2014survey}. However, most existing supervised keyphrase extraction methods do not explicitly model the relevance between phrases and their corresponding document, resulting in biased keyphrase extraction.
Motivated by the above issues, we explore the potential of hyperbolic space for the keyphrase extraction task and propose a new hyperbolic relevance matching model (HyperMatch) for neural supervised keyphrase extraction. First, to capture hierarchical syntactic and semantic structure information, HyperMatch integrates the hidden representations from all intermediate layers of RoBERTa into adaptive contextualized word embeddings via an adaptive mixing layer based on the self-attention mechanism. Then, considering the hierarchical structure hidden in natural language content, HyperMatch encodes both phrases and documents in the same hyperbolic space via a hyperbolic phrase encoder and a hyperbolic document encoder. Meanwhile, we adopt the Poincaré distance to calculate the phrase-document relevance, taking into account the latent hierarchical structures between phrases and the document. In this setting, keyphrase extraction can be regarded as a matching problem and effectively implemented by minimizing a hyperbolic margin-based triplet loss. To the best of our knowledge, this is the first work to explore supervised keyphrase extraction in hyperbolic space. Extensive experiments on six benchmark datasets demonstrate that HyperMatch outperforms the state-of-the-art baselines in most cases.
\begin{figure*}
\centering
\includegraphics[scale=0.49]{hyper_match.pdf}
\caption{Framework of the hyperbolic relevance matching model (HyperMatch). }
\label{model}
\end{figure*}
\section{Preliminaries}
Hyperbolic space is a central concept in hyperbolic geometry and can be considered a special case of Riemannian geometry \cite{2011Riemannian}. Before presenting our model, this section briefly introduces hyperbolic space.
In a traditional sense, hyperbolic spaces are not vector spaces; one cannot use standard operations such as summation, multiplication, etc. To remedy this problem, one can utilize the formalism of M{\"o}bius gyrovector spaces allowing the generalization of many standard operations to hyperbolic spaces \cite{hyperimage}.
Similarly to the previous work \cite{NickelK17, Ganea18, TifreaBG19}, we adopt the {Poincaré ball} and use an additional hyper-parameter $c$ which modifies the curvature of Poincaré ball; it is then defined as $\mathbb{D}^n_c = \{\mathbf{x}\in \mathbb{R}^n : c\Vert\mathbf{x}\Vert^2 <1, c \geq0\}$. The corresponding conformal factor now takes the form $\lambda_{\mathbf{x}}^c:= \frac{2}{1-c\Vert \mathbf{x}\Vert^2}$. In practice, the choice of $c$ allows one to balance hyperbolic and Euclidean geometries, which is made precise by noting that when $c\rightarrow0$, all the formulas discussed below take their usual Euclidean form.
We restate the definitions of fundamental mathematical operations for the generalized Poincaré ball model. We refer readers to \cite{Ganea18} for more details. Next, we give details of the closed-form formulas of several M{\"o}bius operations.
\noindent{{\textbf{M{\"o}bius Addition.}}}
For a pair $\mathbf{x},\mathbf{y}\in \mathbb{D}^n_c$, the {M{\"o}bius addition} is defined as,
\begin{equation}
\small
\mathbf{x} \oplus_c \mathbf{y} = \frac{(1+2c\langle\mathbf{x}, \mathbf{y}\rangle+ c\Vert\mathbf{y}\Vert^2)\mathbf{x}+(1-c\Vert\mathbf{x}\Vert^2)\mathbf{y}}{1+2c\langle\mathbf{x}, \mathbf{y}\rangle+c^2\Vert\mathbf{x}\Vert^2\Vert\mathbf{y}\Vert^2}.
\end{equation}
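A direct NumPy transcription of this formula, as a sketch for a single pair of vectors (not an optimized batched implementation):

```python
import numpy as np

def mobius_add(x, y, c):
    """Möbius addition x ⊕_c y on the Poincaré ball of curvature -c.

    x, y: 1-D NumPy arrays inside the ball (c * ||.||^2 < 1).
    """
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den
```

One can check the expected algebraic properties numerically: with $c=0$ the operation reduces to ordinary vector addition, $\mathbf{x} \oplus_c \mathbf{0} = \mathbf{x}$, and $(-\mathbf{x}) \oplus_c \mathbf{x} = \mathbf{0}$.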
\noindent{{\textbf{M{\"o}bius Matrix-vector Multiplication.}}}
For a linear map $\mathbf{M}: \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $\forall\mathbf{x}\in \mathbb{D}^n_c$, if $\mathbf{Mx}\neq0$,
then the {M{\"o}bius matrix-vector multiplication} is defined as,
\begin{equation}
\small
\mathbf{M} \otimes_c \mathbf{x} = (\frac{1}{\sqrt{c}})\text{tanh}(\frac{\Vert\mathbf{Mx}\Vert}{\Vert\mathbf{x}\Vert}\text{tanh}^{-1}(\Vert\sqrt{c}\mathbf{x}\Vert))\frac{\mathbf{Mx}}{\Vert\mathbf{Mx}\Vert},
\end{equation}
where $\mathbf{M} \otimes_c \mathbf{x}=0$ if $\mathbf{Mx}=0$.
\noindent{{\textbf{Poincaré Distance.}}}
The induced distance function is defined as,
\begin{equation}\label{distance}
\small
d_c(\mathbf{x},\mathbf{y}) = \frac{2}{\sqrt{c}}\text{arctanh}(\sqrt{c}\Vert-\mathbf{x}\oplus_c\mathbf{y}\Vert).
\end{equation}
Note that with $c = 1$ one recovers the geodesic distance, while with $c\rightarrow0$ we obtain the Euclidean distance $\text{lim}_{c\rightarrow0}d_c(\mathbf{x},\mathbf{y}) = 2\Vert \mathbf{x}-\mathbf{y} \Vert$.
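The distance and its Euclidean limit can be verified numerically with a small sketch (Möbius addition is included so the block is self-contained):

```python
import numpy as np

def mobius_add(x, y, c):
    """Möbius addition x ⊕_c y on the Poincaré ball of curvature -c."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2)

def poincare_dist(x, y, c):
    """Induced distance d_c(x, y); as c -> 0 it approaches the
    Euclidean distance 2 * ||x - y||."""
    diff = mobius_add(-x, y, c)  # -x ⊕_c y
    return (2 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(diff))
```

For very small $c$ the computed distance is close to $2\Vert\mathbf{x}-\mathbf{y}\Vert$, and $d_c(\mathbf{x},\mathbf{x})=0$, as expected from the formula.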
\noindent{{\textbf{Exponential and Logarithmic Maps.}}}
To perform operations in the hyperbolic space, one first needs to define a mapping function from $\mathbb{R}^n$ to $\mathbb{D}^n_c$ to map Euclidean vectors to the hyperbolic space. Let $T_{\mathbf{x}}\mathbb{D}_c^n$ denote the tangent space of $\mathbb{D}_c^n$ at $\mathbf{x}$. The exponential map $\text{exp}^c_{\mathbf{x}}(\mathbf{\cdot}): T_{\mathbf{x}}\mathbb{D}_c^n \rightarrow \mathbb{D}_c^n$ for $\mathbf{v} \neq 0$ is defined as:
\begin{equation}
\small
\text{exp}^c_{\mathbf{x}}(\mathbf{v}) = \mathbf{x} \oplus_c (\text{tanh}(\sqrt{c}\frac{\lambda_{\mathbf{x}}^c\Vert\mathbf{v}\Vert}{2})\frac{\mathbf{v}}{\sqrt{c}\Vert\mathbf{v}\Vert}).
\end{equation}
As the inverse of $\text{exp}^c_{\mathbf{x}}(\mathbf{\cdot})$, the logarithmic map $\text{log}^c_{\mathbf{x}}(\mathbf{\cdot}): \mathbb{D}_c^n \rightarrow T_x\mathbb{D}_c^n$ for $\mathbf{y} \neq \mathbf{x}$ is defined as:
\begin{equation}
\small
\text{log}^c_{\mathbf{x}}(\mathbf{y}) = \frac{2}{\sqrt{c}\lambda_{\mathbf{x}}^c}\text{tanh}^{-1}(\sqrt{c}\Vert-\mathbf{x} \oplus_c \mathbf{y} \Vert) \frac{-\mathbf{x} \oplus_c \mathbf{y}}{\Vert -\mathbf{x} \oplus_c \mathbf{y} \Vert}
\end{equation}
\noindent{{\textbf{Hyperbolic Averaging Pooling.}}}
Average pooling, an operation common in natural language processing, averages a set of feature vectors. In the \textit{Euclidean} setting, this operation takes the following form:
\begin{equation}
\small
\text{AP}(\mathbf{x}_1, ..., \mathbf{x}_i, ..., \mathbf{x}_M) = \frac{1}{M}\sum^M_{i=1}\mathbf{x}_i.
\end{equation}
Extension of this operation to hyperbolic spaces is called the \textit{Einstein midpoint} and takes the most simple form in \textit{Klein} coordinates:
\begin{equation}\label{ap}
\small
\text{HyperAP}(\mathbf{x}_1, ..., \mathbf{x}_i, ..., \mathbf{x}_M) = \sum_{i=1}^{M}\gamma_i\mathbf{x}_i/ \sum_{i=1}^{M}\gamma_i,
\end{equation}
where $\gamma_i = \frac{1}{\sqrt{1-c\Vert\mathbf{x_i}\Vert^2}}$ is the \textit{Lorentz} factor.
Recent work \cite{hyperimage} demonstrates that the \textit{Klein} model is supported on the same space as the \textit{Poincaré ball}; however, the same point has different coordinate representations in these models. Let $\mathbf{x}_{\mathbb{D}}$ and $\mathbf{x}_{\mathbb{K}}$ denote the coordinates of the same point in the \textit{Poincaré} and \textit{Klein} models correspondingly. Then the following transition formulas hold.
\begin{equation}\label{k}
\small
\mathbf{x}_{\mathbb{D}} = \frac{\mathbf{x}_{\mathbb{K}}}{1+\sqrt{1-c\Vert\mathbf{x_{\mathbb{K}}}\Vert^2}},
\end{equation}
\begin{equation}\label{p}
\small
\mathbf{x}_{\mathbb{K}} = \frac{2\mathbf{x}_{\mathbb{D}}}{1+c\Vert\mathbf{x_{\mathbb{D}}}\Vert^2}.
\end{equation}
Therefore, given points in the \textit{Poincaré ball}, we can first map them to the \textit{Klein} model via Eq.(\ref{p}), compute the average using Eq.(\ref{ap}), and then move it back to the \textit{Poincaré} model via Eq.(\ref{k}).
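This three-step pooling (Poincaré to Klein, Einstein midpoint, back to Poincaré) can be sketched as:

```python
import numpy as np

def hyper_avg_pool(points, c):
    """Hyperbolic averaging of M points given in Poincaré coordinates:
    map to the Klein model, take the Einstein midpoint with Lorentz
    factors, and map the result back to the Poincaré ball."""
    P = np.asarray(points, dtype=float)                    # (M, d) Poincaré
    K = 2 * P / (1 + c * np.sum(P ** 2, axis=1, keepdims=True))   # to Klein
    g = 1.0 / np.sqrt(1 - c * np.sum(K ** 2, axis=1, keepdims=True))  # Lorentz
    mid = np.sum(g * K, axis=0) / np.sum(g)                # Einstein midpoint
    return mid / (1 + np.sqrt(1 - c * np.dot(mid, mid)))   # back to Poincaré
```

Sanity checks: pooling identical points returns that point, pooling symmetric points returns the origin, and with $c=0$ the operation reduces to the ordinary Euclidean mean.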
\section{HyperMatch}
Given a document $\mathcal{D}=\{w_1, ..., w_i, ..., w_M\}$, the candidate phrases are first extracted from the source document by the n-gram structures, where $M$ indicates the max length of the input document. Then, to determine which candidates are keyphrases, we design a new hyperbolic relevance matching model (HyperMatch), which consists of two main procedures: information representation and importance discrimination. Figure~\ref{model} illustrates the overall framework of HyperMatch.
\subsection{Information Representation}
Information representation is one of the essential parts of keyphrase importance estimation, which needs to represent information comprehensively. To capture rich syntactic and semantic information, HyperMatch first embeds words by the pre-trained language model RoBERTa with the adaptive mixing layer. Then, phrases and documents are embedded in the same hyperbolic space by the hyperbolic phrase encoder and hyperbolic document encoder. In the following subsections, the information representation procedure will be described in detail.
\subsubsection{Contextualized Word Encoder}
Pre-trained language models \cite{elmo, bert,roberta} have emerged as a critical technology for achieving impressive gains in natural language tasks. These models extend the idea of word embeddings by learning contextualized text representations from large-scale corpora using a language modeling objective. Thus, recent keyphrase extraction methods \cite{xiong19, baseline, 2020sota, span2020} represent words / documents by the last intermediate layer of pre-trained language models.
However, various probing tasks \cite{structure2019,2020specical} are proposed to discover linguistic properties learned in contextualized word embeddings, which demonstrates that different intermediate layers in pre-trained language models contain different linguistic properties or information. Specifically, each layer has specific specializations, so combining features from different layers may be more beneficial than selecting the last one based on the best overall performance.
Motivated by this phenomenon, we propose a new adaptive mixing layer to combine all intermediate layers of RoBERTa \cite{roberta}. First, each word in the source document $\mathcal{D}$ is represented by all intermediate layers of RoBERTa, which encodes the document into a sequence of vectors $\mathbf{H}=\{\mathbf{h}_1, ..., \mathbf{h}_i, ..., \mathbf{h}_M\}$,
\begin{equation}
\mathbf{H}= \text{RoBERTa} \normalsize \{\mathbf{w}_1, ..., \mathbf{w}_i, ..., \mathbf{w}_M\}.
\end{equation}
Specifically, $\mathbf{h}_i \in \mathbb{R}^{L*d_r}$ indicates the $i$-th contextualized word embedding of $\mathbf{w}_i$, where $L$ and $d_r$ are set to $12$ and $768$, respectively.
Then, the self-attention mechanism is adopted to aggregate multi-layer representations of each word as follows:
\begin{align}
\alpha_{i} &= \text{softmax}(\mathbf{V}_a\mathbf{h}_{i}),\\
\mathbf{\hat{h}}_{i} &= \mathbf{W}_a \alpha_{i}\mathbf{h}_{i},
\end{align}
where $\mathbf{V}_a\in\mathbb{R}^{d_r}$ and $\mathbf{W}_a\in\mathbb{R}^{d_r*d_r}$ are learnable weights. Here, $\alpha_{i}\in\mathbb{R}^{L}$ represents the adaptive mixing weights of the proposed adaptive mixing layer. In this way, the source document $\mathcal{D}$ is transformed into a sequence of vectors $\mathbf{\hat{H}}=\{\mathbf{\hat{h}}_1, ..., \mathbf{\hat{h}}_i, ..., \mathbf{\hat{h}}_M\}$.
The adaptive mixing layer allows our model to obtain more comprehensive word embeddings, capturing more meaningful information (e.g., surface, syntactic, and semantic).
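A NumPy sketch of the adaptive mixing computation for a single word (in the actual model, $\mathbf{V}_a$ and $\mathbf{W}_a$ are learned; here they are passed in as plain arrays for illustration):

```python
import numpy as np

def adaptive_mixing(h, Va, Wa):
    """Adaptive mixing over the L intermediate-layer vectors of a word.

    h: (L, d) per-layer embeddings of one word; Va: (d,) scoring
    vector; Wa: (d, d) output projection.
    alpha = softmax(Va h) weights the layers; output = Wa (alpha h).
    """
    scores = h @ Va                       # (L,) one score per layer
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax over layers
    return Wa @ (alpha @ h)               # mix layers, then project
```

With uniform scores the sketch returns the projected layer average, which matches the intuition that the attention interpolates between the $L$ intermediate representations.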
\subsubsection{Hyperbolic Phrase Encoder}
Phrases often exhibit inherent hierarchies ingrained with complex syntactic and semantic information \cite{hypertext2021}. Therefore, representing information requires sufficiently encoding semantic and syntactic information, especially for the latent hierarchical structures hidden in the natural languages. Recent studies \cite{baseline, xiong19} typically obtain phrase representations in Euclidean space, which makes it difficult to learn representations with such latent structural information even with infinite dimensions in Euclidean space \cite{LinialLR95}. On the contrary, hyperbolic spaces are non-Euclidean geometric spaces that can naturally capture the latent hierarchical structures \cite{Sarkar11,desa2018}.
Lately, the use of the hyperbolic space in NLP \cite{DhingraSNDD18, TifreaBG19, NickelK17} is motivated by the ubiquity of hierarchies (e.g., the latent hierarchical structures in phrases, sentences, and documents) in NLP tasks. Therefore, in this paper, we propose to embed phrases in the hyperbolic space. Concretely, the phrase representation of the $i$-th $n$-gram $c_i^n$ is computed as follows,
\begin{equation}
\mathbf{\hat{h}}_i^n = \text{CNN}^n (\mathbf{\hat{h}}_{i:i+n}),
\end{equation}
where $\mathbf{\hat{h}}_i^n \in \mathbb{R}^{d_h}$ represents the $i$-th $n$-gram representation, $n\in[1,N]$ indicates the length of n-grams, and $N$ is the maximum length of n-grams. Each $n$-gram has its own set of convolution filters $\text{CNN}^n$ with window size $n$ and stride $1$.
To capture the latent hierarchies of phrases, we map phrase representations to the Poincaré ball using the \textit{exponential} map,
\begin{equation}
\mathbf{\tilde{h}}_i^n = \text{exp}^c_{\mathbf{0}}(\mathbf{\hat{h}}_i^n),
\end{equation}
where $\mathbf{\tilde{h}}_i^n$ indicates the hyperbolic representation of the $i$-th $n$-gram phrase $\mathbf{\hat{h}}_i^n$. By mapping phrase representations into hyperbolic space, HyperMatch can implicitly model the latent hierarchical structure of phrases.
\subsubsection{Hyperbolic Document Encoder}
When using the source document as the query to match keyphrases, the document representation should cover its main points (important information). Meanwhile, documents are usually long text sequences with richer semantic and syntactic information than phrases. Many current BERT-based methods \cite{span2020,matchsum} in NLP obtain document representations by using the first output token (the [CLS] token) of pre-trained language models.
However, recent studies \cite{reimers2019sentencebert, LiZHWYL20} demonstrate that in many NLP tasks, document representations obtained by average pooling of word representations are better than the [CLS] token. Motivated by these findings, we use average pooling, a simple and effective operation, to encode documents. To further consider the latent hierarchical structures of documents, we map word representations to the hyperbolic space and transfer the average pooling operation there as well.
In this case, we first map word representations to the hyperbolic space via the \textit{exponential} map as follows:
\begin{equation}
\mathbf{\tilde{H}} = \{\mathbf{\tilde{h}}_1, ..., \mathbf{\tilde{h}}_i, ...,\mathbf{\tilde{h}}_M\} = \text{exp}^c_{\mathbf{0}}(\mathbf{\hat{H}}\mathbf{W}_h),
\end{equation}
where $\mathbf{W}_h\in\mathbb{R}^{d_r*d_h}$ maps the original BERT embedding space to the tangent space of the origin of the Poincaré ball. Then $\text{exp}_{\mathbf{0}}(\cdot)$ maps the tangent space inside the Poincaré ball. Next, we use the hyperbolic averaging pooling to encode the source document as follows:
\begin{equation}
\mathbf{\tilde{h}} =\text{HyperAP}(\{\mathbf{\tilde{h}}_1, ..., \mathbf{\tilde{h}}_i, ...,\mathbf{\tilde{h}}_M\}),
\end{equation}
where $\mathbf{\tilde{h}}\in \mathbb{R}^{d_h}$ indicates the hyperbolic document representation (called the \textit{Einstein midpoint} pooling vector in the Poincaré ball \cite{Gulcehre19}). The hyperbolic average pooling emphasizes semantically specific words, which usually carry more information but occur less frequently than general ones. It should be noted that points near the boundary of the {Poincaré ball} receive larger weights in the \textit{Einstein midpoint} formula and are regarded as more representative content (carrying more helpful information such as the latent hierarchies) of the source document \cite{DhingraSNDD18, hypertext2021}.
\subsection{Importance Discrimination}
Importance discrimination is one of the primary parts of keyphrase importance estimation, which measures and ranks the importance of candidate phrases to extract keyphrases. To reach this goal, we first calculate the scaled phrase-document relevance between phrases and their corresponding document via the Poincaré distance as the importance score of each candidate phrase. Then, the importance score is optimized with the margin-based triplet loss to extract keyphrases.
\subsubsection{Scaled Phrase-Document Relevance}
Besides the intrinsic hierarchies of linguistic ontologies, the conceptual relations between candidate phrases and their corresponding document can also form hierarchical structures.
Once the document representation $\mathbf{\tilde{h}}$ and the phrase representations $\mathbf{\tilde{h}}_i^n$ are obtained, it is expected that phrases and their corresponding document are embedded close to each other in terms of their geodesic distance\footnote{Note that cosine similarity \cite{WangHF17} is not appropriate to be the metric since there does not exist a clear hyperbolic inner-product for the \textit{Poincaré ball} \cite{TifreaBG19}, so the Poincaré distance is more suitable.} if they are highly relevant. Specifically, the scaled phrase-document relevance of the $i$-th $n$-gram representation $c_i^n$ can be computed as follows:
\begin{equation}\label{s}
\textit{S}(c_i^n, \mathcal{D}) = - \frac{\lambda (d_c (\mathbf{\tilde{h}}_i^n, \mathbf{\tilde{h}}))^2}{\sqrt{d_{h}}} + (1-\lambda)f_c(\mathbf{\tilde{h}}_i^n),
\end{equation}
where $\textit{S}(\cdot)$ indicates the scaled phrase-document relevance. Here, $d_c$ indicates the Poincaré distance introduced in Eq.(\ref{distance}), and $f_c$ indicates a linear transformation in the hyperbolic space. Specifically, in Eq.~(\ref{s}), the first term models the phrase-document relevance explicitly, and the second term models it implicitly. Estimating the phrase-document relevance via the Poincaré distance in hyperbolic space allows HyperMatch to model the relationships between candidate phrases and their document by simultaneously considering semantics and latent hierarchical structures, which benefits ranking keyphrases accurately. Furthermore, we find that as the representation dimension $d_h$ increases, the value of the phrase-document relevance also increases, causing model optimization to collapse with the loss value tending to infinity. To counteract this effect, we scale the phrase-document relevance by $\frac{1}{\sqrt{d_{h}}}$.
\subsubsection{Margin-based Triplet Loss}
To select phrases of higher importance, we adopt a margin-based triplet loss and optimize for margin separation in the hyperbolic space.
We first place the candidate phrases that are labelled as keyphrases in the positive set $\mathbf{P^+}$ and the remaining candidates in the negative set $\mathbf{P^-}$ to obtain the matching labels. The loss function is then calculated as follows:
\begin{equation}
\mathcal{L} = \max \left(0, \frac{\delta}{\sqrt{d_{h}}} - \textit{S}(p^+, \mathcal{D}) + \textit{S}(p^-, \mathcal{D})\right),
\end{equation}
where $\delta$ denotes the margin. This loss enforces HyperMatch to rank the candidate keyphrases $p^+$ ahead of $p^-$ within their corresponding document. Through this training objective, our model tends to extract the keyphrases that are most relevant to the source document.
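A minimal sketch of this loss, operating on two already-computed relevance scores (the averaging over all positive/negative pairs in a batch is omitted):

```python
import math

def hyperbolic_triplet_loss(pos_score, neg_score, d_h, delta=1.0):
    """max(0, delta/sqrt(d_h) - S(p+, D) + S(p-, D)): the positive phrase
    must out-score the negative by at least the dimension-scaled margin."""
    return max(0.0, delta / math.sqrt(d_h) - pos_score + neg_score)
```

Note that the margin is scaled by $1/\sqrt{d_h}$ for consistency with the scaled relevance scores it compares.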
\section{Experimental Settings}
\subsection{Benchmark Datasets}
\noindent Six benchmark keyphrase datasets are used in our experiments: \textit{OpenKP} \cite{xiong19}, \textit{KP20k} \cite{catseq17}, \textit{Inspec} \cite{Inspec}, \textit{Krapivin} \cite{Krapivin}, \textit{Nus} \cite{Nus}, and \textit{SemEval} \cite{SemEval}. Following previous work \cite{baseline}, we preprocess each dataset in the same manner.
\subsection{Implementation Details}
Implementation details of HyperMatch are summarized in Table~\ref{parameter}.
The maximum document length is 512 tokens due to RoBERTa limitations \cite{roberta} and documents are zero-padded or truncated to this length.
Our model was implemented in PyTorch 1.8\footnote{https://pytorch.org/} \cite{pytorch} using the Hugging Face implementation of RoBERTa\footnote{https://huggingface.co/transformers/index.html} \cite{transformer_pytorch} and was trained on eight NVIDIA RTX A4000 GPUs.
\begin{table}[!t]
\small
\centering
\renewcommand\tabcolsep{4pt}
\renewcommand\arraystretch{1.3}
\begin{tabular}{cc}
\hline \hline
\textbf{Hyperparameter} & \textbf{Dimension or Value} \\
\hline
RoBERTa Embedding $(\mathbb{R}^{d_c})$ & 768 \\
Hyperbolic Rank $(\mathbb{R}^{d_h})$ & 768 \\
Max Sequence Length & 512 \\
Maximum Phrase Length $(N)$ & 5 \\
\hline
$c$ & $1$ \\
$\lambda$ & $0.5$ \\
$\delta$ & 1.0 \\
Optimizer & AdamW \\
Batch Size & $72$ \\
Learning Rate & $5\times10^{-5}$ \\
Warm-Up Proportion & $10\%$ \\
\hline \hline
\end{tabular}
\caption{Parameters used for training HyperMatch.}
\label{parameter}
\end{table}
\subsection{Evaluation Metrics}
For the keyphrase extraction task, the model's performance is typically evaluated by comparing the top-$K$ predicted keyphrases with the target keyphrases (ground-truth labels).
The evaluation cutoff $K$ can be a fixed number (e.g., F1@5 compares the top-$5$ keyphrases predicted by the model with the ground-truth to compute an F1 score).
Following the previous work \cite{catseq17, gcn2019}, we adopt macro-averaged recall and F-measure (F1) as evaluation metrics, and $K$ is set to be 1, 3, 5, and 10.
In the evaluation, we apply Porter Stemmer\footnote{https://tartarus.org/martin/PorterStemmer/} \cite{stemmer} to both target keyphrases and extracted keyphrases when determining the exact match of keyphrases.
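The evaluation protocol can be sketched as follows. A toy suffix-stripping stemmer stands in for the Porter stemmer here, and `f1_at_k` performs the stemmed exact-match comparison described above:

```python
def naive_stem(word):
    """Toy stand-in for the Porter stemmer: strips a few common suffixes."""
    for suf in ("ing", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def stem_phrase(phrase):
    return " ".join(naive_stem(w) for w in phrase.lower().split())

def f1_at_k(predicted, gold, k):
    """Exact-match F1@k after stemming both predicted and gold phrases."""
    preds = [stem_phrase(p) for p in predicted[:k]]
    golds = {stem_phrase(g) for g in gold}
    tp = sum(1 for p in preds if p in golds)
    if tp == 0:
        return 0.0
    precision = tp / len(preds)
    recall = tp / len(golds)
    return 2 * precision * recall / (precision + recall)
```

For instance, a prediction list whose top-3 contains two of two gold phrases (after stemming) yields precision 2/3, recall 1, and F1@3 of 0.8.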
\begin{table*}[!htb]
\small
\centering
\renewcommand\tabcolsep{7.7pt}
\renewcommand\arraystretch{1.5}
\begin{tabular}{l|ccc|ccc|ccc}
\hline\hline
\multirow{2}{*}{\normalsize \textbf{{Model}}} & \multicolumn{9}{c}{\textbf{\textit{OpenKP}}} \\\cline{2-10}
& P@1 & {P@3} & P@5 & R@1 & {R@3} & R@5 & F1@1 & \textbf{F1@3} & F1@5 \\ \hline
\multicolumn{10}{l}{{Unsupervised Keyphrase Extraction Models}}\\\hline
\multicolumn{1}{l|}{{TFIDF}}
& {28.3} & {18.4} & {13.7} & {15.0} & {28.4} & {34.7} & {19.6}* & {22.3}* & {19.6}* \\
\multicolumn{1}{l|}{{TextRank}}
& 7.7 & 6.2 & 5.5 & 4.1 & 9.8 & 14.2 & 5.4* & 7.6* & 7.9* \\
\hline
\multicolumn{10}{l}{{Supervised Keyphrase Extraction via Classification Models}}\\\hline
\multicolumn{1}{l|}{{{BERT-Spanning-KPE}}}
& 47.6 & 28.5 & 20.9 & 25.3 & 43.6 & 52.1 & 31.8 & 33.2 & 28.9\\
\multicolumn{1}{l|}{{{BERT-Chunking-KPE}}}
& 51.1 & 30.6 & 22.5 & 27.1 & 46.4 & 55.8 & 34.0 & 35.6 & 31.1 \\
\multicolumn{1}{l|}{{{SpanBERT-Chunking-KPE}}}
& 52.3 & 32.1 & 23.5 & 27.8 & 48.6 & 58.1 & 34.8 & 37.2 & 32.4 \\
\multicolumn{1}{l|}{{{RoBERTa-Chunking-KPE}}}
& 53.3 & 32.2 & 23.5 & 28.3 & 48.6 & 58.1 & 35.5 & 37.3 & 32.4 \\
\hline
\multicolumn{10}{l}{{Supervised Keyphrase Extraction via Ranking Models}}\\\hline
\multicolumn{1}{l|}{{{BERT-Ranking-KPE}}}
& 51.3 & 32.3 & 23.5 & 27.3 & 48.9 & 58.2 & 34.2 & 37.4 & 32.5 \\
\multicolumn{1}{l|}{{{SpanBERT-Ranking-KPE}}}
& 53.0 & 32.7 & 24.0 & 28.4 & 49.7 & 59.3 & 35.5 & 38.0 & 33.1 \\
\multicolumn{1}{l|}{{{RoBERTa-Ranking-KPE}}}
& 53.8 & 33.7 & 24.4 & 29.0 & 50.9 & 60.4 & 36.1 & 39.0 & 33.7 \\
\multicolumn{1}{l|}{{\textbf{HyperMatch}}}
& \textbf{54.7} & \textbf{33.9} & \textbf{24.7} & \textbf{29.5} & \textbf{51.5} & \textbf{61.2} & \textbf{36.4} & \textbf{39.4} & \textbf{33.8}\\
\hline\hline
\end{tabular}
\caption{Model performance on the \textit{OpenKP} dataset. The best results of our model are highlighted in bold. F1@3 is the main evaluation metric (marked in bold) for this dataset \cite{xiong19, 2020sota}. * denotes results that are not reported in the original paper and are estimated from the precision and recall scores. The results of the baselines are taken from their corresponding papers.}
\label{openkp}
\end{table*}
\subsection{Baselines}
We compare against two kinds of strong baselines to give a comprehensive evaluation of HyperMatch: unsupervised keyphrase extraction models (e.g., TextRank \cite{textrank} and TFIDF \cite{tfidf}) and supervised keyphrase extraction models (e.g., classification- and ranking-based variants of BERT \cite{baseline}).
Notably, HyperMatch extracts keyphrases without using additional features on the \textit{OpenKP} dataset. For fairness, we therefore do not compare against methods \cite{xiong19, 2020sota} that use additional features to extract keyphrases. In addition, this paper mainly focuses on keyphrase extraction in hyperbolic space via a matching framework (similar to the ranking models). Hence, the baselines we compare against are keyphrase extraction methods based on classification and ranking models rather than existing studies based on integrated models \cite{19unified,2021select,2021uni} or multi-task learning \cite{song}.
\section{Results and Analysis}
In this section, we investigate the performance of HyperMatch on six widely-used benchmark keyphrase extraction datasets ({OpenKP}, {KP20k}, {Inspec}, {Krapivin}, {Nus}, and {Semeval}) from three angles. First, we demonstrate its superiority by comparing HyperMatch with recent baselines on several metrics. Second, we verify the effect of each component via ablation tests. Finally, we analyze the sensitivity of the triplet loss to different margins.
\subsection{Performance Comparison}
The experimental results are given in Table~\ref{openkp} and Table~\ref{kp20k}. Overall, HyperMatch outperforms the recent BERT-based keyphrase extraction models (whose results are reported in their respective papers) in most cases. Concretely, on the \textit{OpenKP} and \textit{KP20k} datasets, HyperMatch achieves better results than the best ranking model, RoBERTa-Ranking-KPE. We attribute this mainly to the fact that learning representations in hyperbolic space captures more latent hierarchical structure than the Euclidean space. Meanwhile, on the four zero-shot datasets (\textit{Inspec}, \textit{Krapivin}, \textit{Nus}, and \textit{Semeval}) in Table~\ref{kp20k}, HyperMatch outperforms both the unsupervised and supervised baselines. We consider the main reason to be that the scaled phrase-document relevance \textit{explicitly} models a strong connection between phrases and their corresponding document via the Poincaré distance, yielding more robust performance across datasets.
\begin{table*}[!htb]
\small
\centering
\renewcommand\tabcolsep{5pt}
\renewcommand\arraystretch{1.7}
\begin{tabular}{l|cc|cc|cc|cc|cc}
\hline\hline
\multirow{2}{*}{\normalsize \textbf{\textit{Model}}} & \multicolumn{2}{c|}{\textit{\textbf{Inspec}}} & \multicolumn{2}{c|}{\textit{\textbf{Krapivin}}}& \multicolumn{2}{c|}{\textit{\textbf{Nus}}}& \multicolumn{2}{c|}{\textit{\textbf{SemEval}}}& \multicolumn{2}{c}{\textit{\textbf{KP20k}}}\\ \cline{2-11}
& F1@5 & F1@10 & F1@5 & F1@10 & F1@5 & F1@10& F1@5 & F1@10& F1@5 & F1@10\\ \hline
\multicolumn{1}{l|}{{TFIDF}}
& 22.3 & 30.4 & 11.3 & 14.3 & 13.9 & 18.1 & 12.0 & 18.4 & 10.5 & 13.0 \\
\multicolumn{1}{l|}{{TextRank}}
& 22.9 & 27.5 & 17.2 & 14.7 & 19.5 & 19.0 & 17.2 & 18.1 & 18.0 & 15.0 \\
\hline
\multicolumn{1}{l|}{{RoBERTa-Ranking-KPE$^\dagger$}}
& 28.1 & 29.1 & 29.9 & 23.7 & 44.6 & 37.7 & 35.4 & 32.6 & 41.4 & 34.2 \\
\multicolumn{1}{l|}{{\textbf{HyperMatch}}}
& \textbf{30.4} & \textbf{32.2} & \textbf{32.8} & \textbf{26.3} & \textbf{45.8} & \textbf{41.3} & \textbf{35.7} & \textbf{36.8} & \textbf{41.6} & \textbf{34.3} \\
\hline\hline
\end{tabular}
\caption{Results of keyphrase extraction on five benchmark keyphrase datasets. F1 scores on the top $5$ and $10$ keyphrases are reported. $^\dagger$ indicates that these results are evaluated via the code which is provided by its corresponding paper. The best results are highlighted in bold.}
\label{kp20k}
\end{table*}
\subsection{Ablation Study}
\begin{table}[!htb]
\small
\centering
\renewcommand\tabcolsep{5pt}
\renewcommand\arraystretch{1.7}
\begin{tabular}{l|ccc}
\hline\hline
\multirow{2}{*}{\normalsize \textbf{{Model}}} & \multicolumn{3}{c}{\textit{\textbf{OpenKP}}} \\\cline{2-4}
& F1@1 & \textbf{F1@3} & F1@5 \\ \hline
\multicolumn{1}{l|}{{{{\textbf{HyperMatch}}}}}
& \textbf{36.4} & \textbf{39.4} & \textbf{33.7} \\
\multicolumn{1}{l|}{{{EuclideanMatch}}}
& 36.1 & 38.5 & 33.4 \\
\multicolumn{1}{l|}{{{HyperMatch w/o Relevance}}}
& 36.1 & 38.9 & 33.6 \\
\multicolumn{1}{l|}{{{{{HyperMatch w/o AML}}}}}
& 36.3 & 38.7 & 33.5 \\
\hline\hline
\end{tabular}
\caption{Ablation tests on the \textit{OpenKP} dataset. The best results are highlighted in bold. F1@3 is the main evaluation metric (marked in bold) for this dataset.}
\label{ablation}
\end{table}
In this section, we report on several ablation experiments to analyze the effect of different components. The ablation experiment on the \textit{OpenKP} dataset is shown in Table~\ref{ablation}.
To measure the effectiveness of the hyperbolic space for the keyphrase extraction task, we compare against the same model built in the Euclidean space (EuclideanMatch), which uses the Euclidean distance to explicitly model the phrase-document relevance.
As shown in Table~\ref{ablation}, HyperMatch outperforms EuclideanMatch, which shows that the hyperbolic space can capture the latent hierarchical structures more effectively than the Euclidean space.
To verify the effectiveness of the adaptive mixing layer, we evaluate HyperMatch w/o AML, a variant that omits the adaptive mixing layer and only uses the last intermediate layer of RoBERTa to embed phrases and documents.
As shown in Table~\ref{ablation}, the performance drops on all evaluation metrics without the adaptive mixing layer. These results demonstrate that combining all the intermediate layers of RoBERTa via the self-attention mechanism captures helpful information from different layers of RoBERTa.
Unlike our model, most recent keyphrase extraction methods (e.g., RoBERTa-Ranking-KPE) implicitly model relevance between candidate phrases and their corresponding document by a linear transformation layer as the phrase-document relevance.
Therefore, to verify the effectiveness of explicitly modeling the phrase-document relevance, we build HyperMatch w/o Relevance, which computes the phrase-document relevance only implicitly via a hyperbolic linear transformation layer \cite{Ganea18}.
The results of HyperMatch w/o Relevance show a drop in all evaluation metrics, indicating that explicitly considering the relevance between phrases and the document is essential for estimating the importance of candidate phrases in the keyphrase extraction task.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.46]{margin_rank.pdf}
\caption{Performance of HyperMatch with different margins ($\delta$) of the margin-based triplet loss on the \textit{OpenKP} dataset.}
\label{margin_rank}
\end{figure}
\subsection{Sensitivity of Hyperparameters}
In this section, we verify the sensitivity of HyperMatch with different margins ($\delta$) of the hyperbolic triplet loss.
For keyphrase extraction methods equipped with a margin-based triplet loss, the choice of margin significantly impacts the final result: a poorly chosen margin usually causes performance degradation. Therefore, we evaluate the effect of different margins on HyperMatch in Figure~\ref{margin_rank}. HyperMatch achieves the best results when $\delta=1$.
\section{Related Work}
This section briefly describes the related work from two fields: keyphrase extraction and hyperbolic deep learning.
\subsection{Keyphrase Extraction}
Most existing KE models are based on the two-stage extraction framework, which consists of two main parts: candidate keyphrase extraction and keyphrase importance estimation. Candidate keyphrase extraction extracts a set of candidate phrases from the source document by heuristics (e.g., essential n-gram-based phrases \cite{hulth2004,medelyan2009,xiong19,baseline,2020sota}). Keyphrase importance estimation first represents candidate phrases and documents by the pre-trained language models \cite{bert, roberta} and then estimates the phrase-document relevance implicitly as the importance scores. Finally, the candidate phrases are ranked by their importance scores, which can be learned by either unsupervised \cite{textrank, heuristic_liu_b} or supervised \cite{xiong19,baseline,span2020} ranking approaches.
Different from existing KE models, we first embed phrases and documents with RoBERTa in the Euclidean space and then map these representations into the same hyperbolic space to capture the latent hierarchical structures. Next, we adopt the Poincaré distance to explicitly model the phrase-document relevance as the importance score of each candidate phrase. Finally, a hyperbolic margin-based triplet loss is used to optimize the whole model for extracting keyphrases. To the best of our knowledge, ours is the first study to explore supervised keyphrase extraction in the hyperbolic space.
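The Euclidean-to-hyperbolic mapping mentioned above is typically realized with the exponential map at the origin of the Poincaré ball; the sketch below uses the standard closed form for curvature $-c$ (a general illustration, not the exact layer used in our implementation):

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-9):
    """Exponential map at the origin: lifts a Euclidean (tangent-space)
    vector v into the Poincare ball of curvature -c."""
    sqrt_c = np.sqrt(c)
    norm = max(float(np.linalg.norm(v)), eps)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)
```

Because $\tanh$ saturates below one, every lifted vector lands strictly inside the unit ball, while vectors near the origin are mapped almost identically.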
\subsection{Hyperbolic Deep Learning}
Recent studies on representation learning \cite{NickelK17,TifreaBG19,MathieuLMTT19} demonstrate that the hyperbolic space is more suitable than the Euclidean space for embedding symbolic data with hierarchies, since its tree-like properties \cite{hamann2018tree} make it efficient to learn hierarchical representations with low distortion \cite{desa2018,Sarkar11}. As linguistic ontologies are innately hierarchical, hierarchies are ubiquitous in natural language \cite{text_hyperbolic}. Recent studies also show the superiority of the hyperbolic space in many natural language processing tasks \cite{Gulcehre19, hypertext2021}. \citet{chen2021probing} demonstrate that mapping contextualized word embeddings (i.e., BERT-based embeddings) to hyperbolic space captures richer hierarchical structure information than the Euclidean space when encoding natural language text.
\section{Conclusions and Future Work}
A new hyperbolic relevance matching model HyperMatch is proposed to map phrase and document representations into the hyperbolic space and model the relevance between candidate phrases and the document via the Poincaré distance. Specifically, HyperMatch first combines the intermediate layers of RoBERTa via the adaptive mixing layer for capturing richer syntactic and semantic information. Then, phrases and documents are encoded in the same hyperbolic space to capture the latent hierarchical structures. Next, the phrase-document relevance is estimated explicitly via the Poincaré distance as the importance scores of all the candidate phrases. Finally, we adopt the hyperbolic margin-based triplet loss to optimize the whole model for extracting keyphrases.
In this paper, we explore the hyperbolic space to implicitly model the latent hierarchical structures when representing candidate phrases and documents. In the future, it will be interesting to introduce external knowledge (e.g., WordNet) to explicitly model these latent hierarchical structures. Our code is publicly available to facilitate other research\footnote{https://github.com/MySong7NLPer/HyperMatch}.
\section{Acknowledgments}
This work was supported in part by the National Key Research and Development Program of China under Grant 2020AAA0106800; the National Science Foundation of China under Grant 61822601 and 61773050; the Beijing Natural Science Foundation under Grant Z180006; the Fundamental Research Funds for the Central Universities (2019JBZ110).
\section{Reviewer khpK}
\begin{itemize}
\item \textbf{Question-1}:
\textit{How are the negative set and positive set generated?}
\item \textbf{Answer-1}:
\textit{We describe it in lines 372 to 375 of this paper, "we put the candidate keyphrases in the document that are labeled as keyphrases, in the positive set $\mathbf{P}+$, and the others to the negative set $\mathbf{P}-$, to obtain the matching labels".}
\item \textbf{Question-2}:
\textit{If negative set is sampled (as in KE almost 90\% of data is negative) what bearing does the result have on way/rate/strategy of selecting learning rate. Currently only the dimension of latent space is shown as impacting hyperparam.}
\item \textbf{Answer-2}:
\textit{I strongly agree with you. In the model based on metric learning, the sampling of negative sets will impact the results, which is a problem worthy of study. In this paper, we mainly explore the application of keyphrase extraction in hyperbolic space. This is also the reason why we take the dimension of the mapped hyperbolic space as a key hyperparameter.}
\end{itemize}
\begin{table*}[!htb]
\small
\centering
\renewcommand\tabcolsep{3pt}
\renewcommand\arraystretch{1.3}
\begin{tabular}{l|cc|cc|cc|cc|cc}
\hline\hline
\multirow{2}{*}{\normalsize \textbf{\textit{Model}}} & \multicolumn{2}{c|}{\textit{\textbf{Inspec}}} & \multicolumn{2}{c|}{\textit{\textbf{Krapivin}}}& \multicolumn{2}{c|}{\textit{\textbf{Nus}}}& \multicolumn{2}{c|}{\textit{\textbf{SemEval}}}& \multicolumn{2}{c}{\textit{\textbf{KP20k}}}\\ \cline{2-11}
& $F_1@5$ & $F_1@10$ & $F_1@5$ & $F_1@10$& $F_1@5$ & $F_1@10$& $F_1@5$ & $F_1@10$& $F_1@5$ & $F_1@10$\\ \hline\hline
\multicolumn{1}{l|}{{TFIDF \cite{tfidf}}}
& 22.3 & 30.4 & 11.3 & 14.3 & 13.9 & 18.1 & 12.0 & 18.4 & 10.5 & 13.0 \\
\multicolumn{1}{l|}{{TextRank \cite{textrank}}}
& 22.9 & 27.5 & 17.2 & 14.7 & 19.5 & 19.0 & 17.2 & 18.1 & 18.0 & 15.0 \\
\hline
\multicolumn{1}{l|}{{CopyRNN \cite{catseq17}}}
& 27.8 & 34.1 & 31.1 & 26.6 & 33.4 & 32.6 & 29.3 & 30.4 & 33.3 & 26.2 \\
\hline
\multicolumn{1}{l|}{{RoBERTa-JointKPE** \cite{baseline}}}
& 28.5 & 31.7 & 28.0 & 22.7 & 42.2 & 36.2 & 35.0 & 33.6 & 41.7 & 34.3 \\
\multicolumn{1}{l|}{{SKE-Large-CLS \cite{span2020}}}
& \underline{29.4} & \underline{33.4} & 30.9 & \underline{25.2} & 40.3 & 36.4 & 36.1 & \underline{35.8} & 39.2 & 33.0 \\
\hline
\multicolumn{1}{l|}{{KG-KE-KR-M \cite{19unified}}}
& 25.7 & 28.4 & 27.2 & 25.0 & 28.9 & 28.6 & 20.2 & 22.3 & 31.7 & 28.2 \\
\multicolumn{1}{l|}{{UniKeyphrase \cite{2021uni}}}
& 29.0 & - & - & - & 43.4 & - & \underline{41.6} & - & 40.8 & - \\
\multicolumn{1}{l|}{{SEG-Net \cite{2021select}}}
& 21.6 & - & 27.6 & - & 39.6 & - & 28.3 & - & 31.1 & - \\
\hline
\multicolumn{1}{l|}{\textit{{EuclideanMatch}}}
& {28.8} & {32.0} & {29.4} & {23.8} & {42.6} & {37.7} & {36.1} & {34.4} & {41.6} & {33.8} \\
\multicolumn{1}{l|}{\textit{{HyperMatch w/o Relevance}}}
& {29.1} & {32.3} & {30.9} & {24.6} & {43.8} & {39.2} & {37.5} & {35.9} & {41.8} & {34.2} \\
\multicolumn{1}{l|}{\textit{{HyperMatch w/o AML}}}
& {29.3} & {32.3} & {31.0} & {24.7} & {44.3} & {39.5} & {38.0} & {36.0} & {41.7} & {34.4} \\
\multicolumn{1}{l|}{\textit{\textbf{HyperMatch}}}
& \textbf{29.5} & {32.6} & \textbf{31.3} & \textbf{25.2} & \textbf{44.8} & \textbf{39.9} & {38.1} & \textbf{36.4} & \textbf{42.0} & \textbf{34.6} \\
\hline\hline
\end{tabular}
\caption{Results of keyphrase extraction on five benchmark keyphrase datasets. $F_1$ scores on the top $5$ and $10$ keyphrases are reported. ** indicates that these results are evaluated via the code provided by the corresponding paper. The best results of the baselines are underlined and the best results of our models are in bold.}
\label{ablation}
\end{table*}
\section{Reviewer GBP7}
\begin{itemize}
\item \textbf{Question-1}:
\textit{Is there any existing study or result supporting this claim: (L264) “is difficult to learn such structural representation even with infinite dimensions in the Euclidean space”?}
\item \textbf{Answer-1}:
\textit{We describe it in lines 372 to 375 of this paper, "we put the candidate keyphrases in the document that are labeled as keyphrases, in the positive set $\mathbf{P}+$, and the others to the negative set $\mathbf{P}-$, to obtain the matching labels".}
\item \textbf{Question-2}:
\textit{Why doesn’t Table 3 include the ablation results?}
\item \textbf{Answer-2}:
\textit{We present ablation tests on the OpenKP dataset. To save space, we did not show the results of the other five datasets. I’ve added the ablation test in Table~\ref{ablation}. As you mentioned, we will add these results in the final version.}
\item \textbf{Question-3}:
\textit{Does the baseline EuclideanMatch in Table 4 have features like Relevance and AML?}
\item \textbf{Answer-3}:
\textit{Yes}
\item \textbf{Question-4}:
\textit{I am thinking about whether the hyperparameter discussion of Sec 5.3 also relates to EuclideanMatch (model scores vary with different margin/dimension). Can you provide the results?}
\item \textbf{Answer-4}:
\textit{}
\item \textbf{Question-5}:
\textit{I wonder if it is possible to visualize the hierarchy learned by the model.
Can you elaborate on the hierarchical relationship in the right part of Figure 1? The blue triangle (phrase) is positive and the rest are negative samples?}
\item \textbf{Answer-5}:
\textit{}
\end{itemize}
\section{Reviewer v4dK}
\begin{itemize}
\item \textbf{Question-1}:
\textit{the authors should report the results of Xiong et al. and Wang et al. on OpenKP and discuss whether their model could benefit from the additional features used in these studies.}
\item \textbf{Answer-1}:
\textit{As you mentioned, we will add the results of Xiong et al. and Wang et al. on OpenKP. Actually, Xiong et al. and Wang et al. leverage additional features for web keyphrase extraction model to improve the performance. }
\item \textbf{Question-2}:
\textit{In addition, the application studied here is neither a central application in NLP nor a central application in IR (contrary to what the authors claim). The impact of the paper is thus likely to be limited.}
\item \textbf{Answer-2}:
\textit{-}
\item \textbf{Question-3}:
\textit{For evaluation, are you using exact matches between candidate and gold standard keyphrases (Section 4.2)?}
\item \textbf{Answer-3}:
\textit{-}
\item \textbf{Question-4}:
\textit{The truncation at 512 tokens may filter out a lot of important keyphrases. Have you evaluated the impact of this?}
\item \textbf{Answer-4}:
\textit{-}
\end{itemize}
\begin{figure}
\centering
\includegraphics[scale=0.4]{example.pdf}
\caption{A partial sample of an input document in \textit{OpenKP}. For ease of presentation, we take “a large region of land” as an example 5-gram phrase of the input document.}
\label{example}
\end{figure}
\section{Reviewer TBQW}
\begin{itemize}
\item \textbf{Question-1}:
\textit{The authors make the claim (more than once) that most existing keyphrase systems consider the salience of keyphrases independently, ignoring their relevance to the document. On its face, this claim seems like it couldn't possibly be true. I haven't done a full literature survey of recent keyphrase extraction work, but the first random paper I looked at [1] does indeed have an encoder for the keyphrase candidates and an encoder for the document text and a decoder that takes both representations as input.}
\item \textbf{Answer-1}:
\textit{In this paper, we focus on the supervised keyphrase extraction method. We revised our description and limited the given description to supervised methods.}
\item \textbf{Question-2}:
\textit{In the first paragraph of the Introduction (describing the two typical phases of keyphrase extraction: candidate generation and salience determination), you say that salience determination directly affects the performance of keyphrase extraction. Of course both phases do (candidate generation affects R, salience determination affects P).}
\item \textbf{Answer-2}:
\textit{In this paper, we take all n-grams as candidates (enough candidates). Therefore, in this situation, keyphrase importance estimation is particularly important in this process.}
\item \textbf{Question-3}:
\textit{In the third paragraph of the Introduction, you say that "phrases often exhibit inherent hierarchical structure (i.e., n-gram structures). What does this mean? N-grams aren't hierarchical, they're just flat sequences of n tokens (you could describe certain n-grams as a flattening of hierarchical structures such as grammatical constituents, but certainly the n-grams themselves aren't hierarchical).}
\item \textbf{Answer-3}:
\textit{Similar to the recent work \cite{ngram_hyperbolic}, we exploit the Poincaré ball embedding of words or ngrams to capture the latent hierarchies (as shown in Figure~\ref{example}) in natural language. Here, the latent hierarchies can be divided into many forms \cite{hyperimage, text_hyperbolic, ngram_hyperbolic, image_hyperbolic}.}
\item \textbf{Question-4}:
\textit{Section 3 seems to suggest that ALL n-grams in the document are considered candidates. Is this the case? If so, you should note it explicitly. It also represents a departure/simplification from other keyphrase extraction systems that have an explicit candidate generation phase, and should be mentioned again in section 6.1.}
\item \textbf{Answer-4}:
\textit{As you mentioned, we will add the description in the related work.}
\end{itemize}
\section{Introduction}
Recently, state-of-the-art reinforcement learning (RL) approaches have made several significant advances in challenging domains, such as controlling the plasma in a nuclear fusion reactor \citep{degrave_magnetic_2022}.
Despite these successes, it is clear that these approaches do not capture a remarkable aspect of human intelligence---namely, that humans can solve not just a single problem, but a massively diverse array of tasks.
Consider the \textsc{AlphaZero} agent which attained superhuman performance in the grand challenge of Go \citep{silver16}.
While this is an immensely difficult task, the input to this agent is a set of binary vectors specifying stone locations, while the output is a location at which to place a stone.
This input-output format is provided to the agent by a human designer because it captures exactly the task that must be accomplished.
However, it means that, by definition, the agent cannot solve any other task; it cannot drive a car or cook a meal. While this approach is useful for designing narrow, application-specific solutions, it falls short of the ultimate aim of generally intelligent agents.
In general, tasks in RL are formulated by human designers and provided to agents in a standardised, compact form.
Though this practice is widespread, it sidesteps an important question: \textit{where do these representations come from in the first place?}
It is obvious that this approach is infeasible in the long run: we cannot preprogram
an agent with every task it may encounter before deploying it in the real world. Nor can we require that a human designer accompany the agent throughout its lifetime, providing task representations as and when required. Clearly then, the only option is for the
agent to learn its own representation for any newly encountered task directly from its observations of the world.
If we are to design a \emph{single} agent capable of solving multiple tasks in the real world, it must necessarily have a complex sensorimotor space.
However, solving long-horizon tasks at this low level is typically infeasible.
A common approach to tackling this problem is hierarchical RL, which makes use of \textit{abstractions} to simplify the problem.
Action abstractions (also known as \textit{skills}) alleviate the need to reason over low-level actions, while state abstractions (where states are aggregated into high-level states) reduce the size of the problem.
However, if an agent’s abstractions are too high-level, it risks omitting important and necessary
details. Conversely, if it seeks to preserve every last detail of the environment, then its
representations will be too low-level and planning will once again be infeasible. The
key question is how best to construct an abstract model of an environment while
retaining only the information required for planning.
In this work, we outline a framework for learning transferable abstract representations from low-level data that can be used for long-term planning. More concretely, we extend the framework of \citet{konidaris18} so that the learned representations are portable---given a new task, an agent can reuse
the representations it has learned previously to speed up learning.
We apply our framework to learn both agent- and object-centric representations in several high-dimensional domains, and demonstrate that our approach results in agents that are i) more sample efficient; ii) able to learn their own representations; and iii) able to use their learned representations to solve a variety of tasks.
\section{Preliminaries} \label{sec:reps}
We begin by assuming that an agent is equipped with a set of skills and model tasks as semi-Markov decision processes $\mathcal{M} = \langle \mathcal{S}, \mathcal{O}, \mathcal{T}, \mathcal{R} \rangle$ where
\begin{enumerate*}[label=(\roman*)]
\item $\mathcal{S}$ is the state space;
\item $\mathcal{O}(s)$ is the set of temporally-extended actions known as \textit{options} available at state $s$;
\item $\mathcal{T}$ describes the transition dynamics, specifying the probability of arriving in state $s^\prime$ after option $o$ is executed from $s$; and
\item $\mathcal{R}$ specifies the reward for reaching state $s'$ after executing option $o$ in state $s$.
\end{enumerate*}
An option $o$ is defined by the tuple $\langle I_o, \pi_o, \beta_o \rangle$, where $I_o$ is the
\textit{initiation set} specifying the states in which the option can be executed, $\pi_o$ is the \textit{option policy} specifying the actions to execute, and $\beta_o$ gives the probability of the option terminating in each state \citep{sutton99}.
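The option tuple can be captured directly in code. The toy `walk_right` option below (integer states, move right until $x \ge 10$) is purely illustrative:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Option:
    """A temporally-extended action <I_o, pi_o, beta_o> (Sutton et al., 1999)."""
    initiation: Callable[[Any], bool]    # I_o: can the option start in s?
    policy: Callable[[Any], Any]         # pi_o: low-level action to take in s
    termination: Callable[[Any], float]  # beta_o: P(terminate | s)

# Illustrative option: keep stepping right until x >= 10.
walk_right = Option(
    initiation=lambda s: s < 10,
    policy=lambda s: +1,
    termination=lambda s: 1.0 if s >= 10 else 0.0,
)
```

An agent executing `walk_right` follows its policy from any state in the initiation set until the termination condition fires.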
We intend to learn an abstract representation suitable for planning. Prior work has shown that a sound and complete abstract representation must necessarily be able to estimate the set of initiating and terminating states for each option \citep{konidaris18}.
In classical planning, this corresponds to the \textit{precondition} and \textit{effect} of each high-level action operator.
The precondition is defined as $\text{Pre}(o) = \operatorname{\Pr}\probarg{s \in I_o}$, which is a probabilistic classifier that expresses the probability that option $o$ can be executed at state $s$.
Similarly, the effect represents the distribution of states an agent may find itself in after executing an option from states drawn from some starting distribution \citep{konidaris18}.
Since the precondition is a probabilistic classifier and the effect is a density estimator, they can be learned directly from option execution data.
We can use preconditions and effects to evaluate the probability of an arbitrary sequence of options---a plan---executing successfully.
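To make this concrete, the success probability of a plan can be estimated by chaining each option's effect samples through the next option's precondition classifier. The sketch below is a minimal Monte-Carlo version under assumed interfaces (a \texttt{precondition} returning a probability and a \texttt{sample\_effect} drawing terminating states); it is illustrative only, not the implementation used here.

```python
# Toy subgoal options: each has a precondition classifier Pr[s in I_o]
# and an effect distribution over terminating states (1-D states here).
# These interfaces are assumptions for illustration only.
class Option:
    def __init__(self, precondition, sample_effect):
        self.precondition = precondition      # state -> probability in [0, 1]
        self.sample_effect = sample_effect    # () -> terminating state

def plan_success_probability(start_state, plan, n_samples=500):
    """Monte-Carlo estimate of the probability that `plan` executes to completion."""
    total = 0.0
    for _ in range(n_samples):
        prob, state = 1.0, start_state
        for option in plan:
            prob *= option.precondition(state)   # can we execute here?
            state = option.sample_effect()       # subgoal: effect independent of start
        total += prob
    return total / n_samples

# Example: "open door" works only near the door; "walk through" needs an open door.
open_door = Option(lambda s: 1.0 if s >= 0.5 else 0.0,
                   lambda: 1.0)                  # ends with the door open (state 1.0)
walk_through = Option(lambda s: 1.0 if s >= 1.0 else 0.0,
                      lambda: 2.0)               # ends in the next room (state 2.0)

print(plan_success_probability(0.7, [open_door, walk_through]))  # -> 1.0
```

With deterministic toy options the estimate is exact; with learned classifiers and density estimators the same chaining applies sample by sample.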
\paragraph{Partitioned Options}
For large or continuous state spaces, estimating $\operatorname{\Pr}\probarg{s' | s, o}$ is difficult; however, if we assume that terminating states are independent of starting states, we can make the simplification $\operatorname{\Pr}\probarg{s' | s, o} = \operatorname{\Pr}\probarg{s' | o}$.
These \textit{subgoal} options are not overly restrictive, since they refer to options that drive an agent to some set of states with high reliability.
While many options are not subgoal options, it is often possible to \textit{partition} an option's initiation set into a finite number of subsets so that the subgoal property holds within each subset.
That is, we partition an option $o$'s start states into finite regions $\mathcal{C}$ such that $\operatorname{\Pr}\probarg{s^\prime | s, o, c} \approx \operatorname{\Pr}\probarg{s^\prime | o, c}, c \in \mathcal{C}$.
Given (partitioned) subgoal options, we can estimate their preconditions and effects using the approach outlined by \citet{konidaris18}.
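A minimal sketch of such a partitioning, assuming we have logged (start state, end state) pairs for an option; the effect-clustering rule here is deliberately crude (discretisation), and any clustering method could be substituted.

```python
from collections import defaultdict

def partition_starts(transitions, resolution=1.0):
    """Group start states by the (discretised) outcome their executions reach.

    transitions: list of (start_state, end_state) pairs for one option.
    Returns {partition_label: [start states]}, so that within each partition
    the end-state distribution is (approximately) independent of the start.
    """
    partitions = defaultdict(list)
    for start, end in transitions:
        label = round(end / resolution)   # crude effect clustering; k-means etc. also works
        partitions[label].append(start)
    return dict(partitions)

# A door option: executed left of the door it ends at the left doorway,
# executed right of the door it ends at the right doorway.
data = [(0.1, 5.0), (0.3, 5.1), (9.7, 15.0), (9.9, 14.9)]
print(partition_starts(data))  # two partitions: starts near 0 vs. starts near 10
```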
\section{Agent-Centric Abstractions} \label{sec:agent-learning}
Central to the field of artificial intelligence is the notion of the \textit{agent}.
Real-world agents, such as robots, perceive their environments through sensors and act upon them with effectors.
In practice, a human designer will usually build upon the observations produced by the agent's sensors to construct the Markov state space for the problem at hand, while discarding unnecessary perceptual information. We instead seek
to effect transfer by using the agent's sensor information---which is typically egocentric---in addition to the Markov state space.
We assume that tasks are related because they are faced by the same agent. For example, consider a robot (equipped with various sensors) that is required to perform a number of as yet unspecified tasks. The only aspect that remains constant across all these tasks is the presence of the robot and, more importantly, its sensors, which map the state space $\mathcal{S}$ to a portable, lossy and egocentric observation space $\mathcal{D}$ known as \textit{agent space}.
We can use $\mathcal{D}$ to define portable options, whose option policies, initiation sets and termination conditions are all defined egocentrically. Because $\mathcal{D}$ remains constant regardless of the underlying SMDP, these options can be transferred across tasks \citep{konidaris07}.
Having made this distinction, we can write the state space of any given task $\mathcal{M}_i$ as the tuple $\langle \mathcal{X}_i, \mathcal{D} \rangle$, where $\mathcal{D}$ is shared across tasks and $\mathcal{X}_i$ represents task-specific state variables.
Given this representation, we can follow a two-step process. The first phase uses the procedure outlined in Section~\ref{sec:reps} to learn portable abstract rules using agent-space transition data only.
The second phase uses problem-space transitions to partition options in $\mathcal{X}_i$.
Each partition is assigned a unique label, and these labels are used as parameters to ground the previously learned portable representations in the current task.
For a new task, the agent need only estimate how the partition labels change under each option execution.
Figure~\ref{fig:overview} illustrates this entire process, but see \citet{james20} for more details.
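The two-phase process can be sketched as follows; all names and interfaces below are hypothetical, chosen only to illustrate how a portable operator, once learned, is grounded in a new task by relearning its partition-label transitions.

```python
# A portable operator: precondition/effect over agent-space symbols,
# parameterised by task-specific partition labels (illustrative names only).
class PortableOperator:
    def __init__(self, name, agent_pre, agent_eff):
        self.name = name
        self.agent_pre = agent_pre    # frozenset of agent-space symbols required
        self.agent_eff = agent_eff    # frozenset of agent-space symbols produced
        self.label_map = {}           # task-specific: partition label -> next label

    def ground(self, observed_transitions):
        """Phase two: estimate how partition labels change under execution."""
        for label_before, label_after in observed_transitions:
            self.label_map[label_before] = label_after

    def applicable(self, agent_symbols, label):
        return self.agent_pre <= agent_symbols and label in self.label_map

    def apply(self, agent_symbols, label):
        return (agent_symbols - self.agent_pre) | self.agent_eff, self.label_map[label]

# Transfer: the operator itself is reused; only `ground` is re-run per task.
climb = PortableOperator("climb_ladder",
                         agent_pre=frozenset({"at_ladder"}),
                         agent_eff=frozenset({"on_platform"}))
climb.ground([(0, 1)])   # in this task, partition 0 leads to partition 1
state, label = climb.apply(frozenset({"at_ladder"}), 0)
print(sorted(state), label)  # ['on_platform'] 1
```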
We test our approach in the \textit{Treasure Game} \citep{konidaris18}, where an agent navigates a maze in search of treasure.
This domain contains ladders and doors which impede the agent. Some doors can be opened and closed with levers, while others require a key to unlock.
We first learn an abstract representation using agent-space transitions only, following the same procedure above.
Once we have learned sufficiently accurate portable abstractions, they need only be instantiated for the given task by learning the linking between partitions.
This requires far fewer samples than learning a task-specific representation from scratch.
To illustrate, we construct ten levels and gather transition samples from each task. We use these samples to build both task-specific and egocentric (portable) models.
For each level, we collect data until the model is sufficiently accurate, at which point we continue to the next task. Results are given in Figure~\ref{fig:results}.
\begin{figure}[h!]
\begin{minipage}[b]{.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{s2sdiag}
\caption{The agent learns transferable representations, which are then combined with problem-specific abstractions to form a model suitable for planning. }
\label{fig:overview}
\end{minipage}
\hfill
\begin{minipage}[b]{.45\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{final_result}
\caption{Owing to transfer, the number of samples required by the agent to learn a sufficiently accurate model decreases with the number of tasks faced.} \label{fig:results}
\end{minipage}
\end{figure}
\section{Object-Centric Abstractions} \label{sec:minecraft}
Having assumed the existence of an agent, it is natural to make another assumption---that the world consists of objects, and that similar objects are common amongst tasks.
Previously, we assumed the existence of an agent equipped with sensors, which led to the idea of agent space.
Since we are now assuming the existence of objects, a natural extension is to introduce the notion of \textit{object space}.
We adopt an object-centric formulation: in a task with $n$ objects, the state is represented by the set
$\{ \mathbf{f}_{a}, \mathbf{f}_{1}, \mathbf{f}_{2}, \ldots, \mathbf{f}_{n} \},$ where $\mathbf{f}_{a}$ is a vector of the agent's features and $\mathbf{f}_{i}$ are the features of object $i$ \citep{ugur15}.
The process to learn a grounded representation is now three-fold and is summarised by Figure~\ref{fig:object-process2}.
We first follow the same procedure outlined in Section~\ref{sec:reps} to construct a non-portable representation of a single task.
Since object space is already factored into the constituent objects, each abstraction will refer to a distribution over a particular object's state.
Next, we merge these representations where objects fall into the same ``type'' using the notion of \textit{effect equivalence} \citep{sahin07}---two objects are grouped into the same type when they undergo similar effects under the same set of options.
Finally, we once more use the problem-specific state data to construct partition labels, which are used to ground previously learned portable representations in the current task.
See \citet{james22} for more details.
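A sketch of effect equivalence, with a deliberately simplified effect profile (a changed/unchanged flag per option); a real system would compare learned effect distributions rather than this flag.

```python
from collections import defaultdict

def effect_profile(transitions):
    """Summarise how an object changes under each option.

    transitions: {option_name: list of (feature_before, feature_after)}.
    Here a profile records, per option, whether the object changed at all;
    comparing full effect distributions is the more faithful version.
    """
    return frozenset(
        (option, any(before != after for before, after in pairs))
        for option, pairs in transitions.items()
    )

def group_by_type(objects):
    """objects: {object_name: transitions}. Returns lists of same-type objects."""
    types = defaultdict(list)
    for name, transitions in objects.items():
        types[effect_profile(transitions)].append(name)
    return sorted(types.values())

objects = {
    "door_1": {"ToggleDoor": [(0, 1)], "WalkToItem": [(0, 0)]},
    "door_2": {"ToggleDoor": [(1, 0)], "WalkToItem": [(1, 1)]},
    "chest":  {"ToggleDoor": [(0, 0)], "WalkToItem": [(0, 0)]},
}
print(group_by_type(objects))  # doors grouped together, chest on its own
```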
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-15pt}
\centering
\begin{subfigure}[t]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{symbol_15.png}
\caption{\texttt{rep\_15}}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{symbol_2.png}
\caption{\texttt{rep\_2}}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[height=20mm]{psymbol_17.png}
\caption{\texttt{{\color{red}rep\_17}}}
\end{subfigure}
\par\bigskip
\begin{subfigure}[t]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{symbol_19.png}
\caption{\texttt{rep\_19}}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{symbol_20.png}
\caption{\texttt{rep\_20}}
\end{subfigure}
\caption{Abstract precondition and effect for breaking a gold block. The agent must be standing in front of a gold block (\texttt{rep\_15}) at a particular location (\texttt{rep\_17}), and the gold block must be whole (\texttt{rep\_2}). As a result, the agent finds itself in front of a disintegrated block (\texttt{rep\_20}), and the gold block is disintegrated (\texttt{rep\_19}). Only the red abstraction must be relearned for each new task.} \label{fig:attack-block}
\vspace{-25pt}
\end{wrapfigure}
We demonstrate our approach in a series of Minecraft levels, where each consists of five rooms with various items positioned throughout.
Rooms are connected with either regular doors, which can be opened by direct interaction, or puzzle doors requiring the agent to pull a lever to open.
The world is described by the state of each of the objects (given directly by each object's appearance as a $600 \times 800$ RGB image), the agent's view, and current inventory.
The agent is given high-level skills, such as \texttt{ToggleDoor} and \texttt{WalkToItem}.
To simplify learning, we downscale the images and apply PCA to greyscaled versions, preserving the top 40 principal components.
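This preprocessing step can be sketched with a plain SVD-based PCA; NumPy is assumed here purely for illustration, and the toy data stands in for the downscaled object appearances.

```python
import numpy as np

def fit_pca(images, n_components=40):
    """images: (n_samples, n_pixels) array of flattened greyscale images.
    Returns (mean, components) so that features = (x - mean) @ components.T."""
    mean = images.mean(axis=0)
    # SVD of the centred data; the rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]

def transform(images, mean, components):
    return (images - mean) @ components.T

rng = np.random.default_rng(0)
images = rng.random((100, 24 * 32))            # 100 toy images of 24x32 pixels
mean, comps = fit_pca(images, n_components=40)
features = transform(images, mean, comps)
print(features.shape)                           # (100, 40)
```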
We follow the process in Figure~\ref{fig:object-process2} to learn portable object-centric representations, and ground them with task-specific partition labels derived from the agent's $xyz$-location.
As mentioned, objects are grouped into types based on their effects, which is made easier because certain objects do not undergo effects under certain options.
For example, the chest cannot be toggled, while a door can, and thus it is immediately clear that they are not of the same type.
We investigate transferring abstractions between five procedurally-generated tasks, where each task differs in the location of the objects and doors.
For a given task, the agent transfers all operators learned from previous tasks, and continues to collect samples using uniform random exploration until it produces a model that predicts the optimal plan can be executed.
Figure~\ref{fig:attack-block} illustrates a learned abstraction, while Figure~\ref{fig:result2} shows the number of abstract option representations (operators) transferred between tasks.
\begin{figure}[b!]
\begin{minipage}[b]{.45\textwidth}
\centering
\includegraphics[width=.95\textwidth]{s2sdiag2}
\caption{Learning object-relative representations from data. Blue nodes represent problem-specific representations, while green nodes are abstractions that can be transferred between tasks.} \label{fig:object-process2}
\end{minipage}
\hfill
\begin{minipage}[b]{.45\textwidth}
\centering
\includegraphics[width=0.65\textwidth]{transfer}
\caption{Orange: number of operators that must be learned to produce a sufficiently accurate model of a task. Blue: number of operators transferred between tasks. Mean and standard deviation over 80 runs.}\label{fig:result2}
\end{minipage}
\end{figure}
\section{Hierarchies of Abstractions}
The previous approaches, whether agent- or object-centric, resulted in an abstract decision problem.
If we apply our framework repeatedly, it will discover increasingly higher order representations, which are themselves distributions over the representations at the level below; in the agent-centric setting, an abstract state space at level $i > 0$ is the tuple $\langle \mathcal{X}^{(i)}, \mathcal{D}^{(i)} \rangle$, where each $x \in \mathcal{X}^{(i)}$ and $d \in \mathcal{D}^{(i)}$ is a distribution over states in $\mathcal{X}^{(i-1)}$ and $\mathcal{D}^{(i-1)}$ respectively.
The above formulation means that, as we construct more levels in the hierarchy, the resulting representations become increasingly compact and faster to plan with, but the degree of uncertainty also grows.
Our first step is to construct an abstract representation using the approach in Section~\ref{sec:agent-learning}.
Next we must decide how best to discover higher order skills in this new representation.
We achieve this by converting our representation to a transition graph, identifying ``important'' nodes (using the \textsc{VoteRank} metric), and then constructing options to reach these \textit{subgoal} nodes using Dijkstra's algorithm.
Edges along these paths constitute our higher-order options.
Note that since all the options contain only a single node in their termination set, they are subgoal by construction.
We can then simply iterate this approach to construct an entire abstraction hierarchy.
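The skill-discovery step can be sketched on an adjacency-list graph. For brevity, node importance is ranked by degree as a stand-in for the \textsc{VoteRank} metric, and paths are found with a unit-weight Dijkstra; each returned path induces a higher-level option whose termination set is the single subgoal node.

```python
import heapq

def dijkstra_path(graph, source, target):
    """graph: {node: [neighbours]}; unit edge weights. Returns a node path or None."""
    queue, seen = [(0, source, [source])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(queue, (cost + 1, nxt, path + [nxt]))
    return None

def discover_subgoal_options(graph, n_subgoals=1):
    """Pick 'important' nodes (by degree, a stand-in for VoteRank) and
    return the shortest path to each from every other node."""
    degree = {n: len(nbrs) for n, nbrs in graph.items()}
    subgoals = sorted(degree, key=degree.get, reverse=True)[:n_subgoals]
    return {g: {s: dijkstra_path(graph, s, g) for s in graph if s != g}
            for g in subgoals}

# A small abstract transition graph: node "hub" connects two corridors.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a", "b", "c"], "c": ["hub"]}
options = discover_subgoal_options(graph)
print(options["hub"]["a"])  # ['a', 'hub']
```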
We again apply our approach to the \textit{Treasure Game} to construct portable hierarchies.
To illustrate the effect of the hierarchy, we compute the distribution of the lengths of all pairwise shortest paths for each task when using abstractions from varying levels of the hierarchy.
Results for the first task are given by Figure~\ref{fig:histos} and indicate that incorporating information at increasingly abstract levels of the hierarchy reduces the size of the graph (this trend holds across all other levels too).
Consequently, the maximum planning horizon is shortened, which greatly simplifies the planning problem.
We also investigate transfer by presenting the agent with each of the ten tasks in sequence.
Unlike previously, portable representations here consist of representations at various levels in the hierarchy.
We measure the number of samples required to learn a model of a new task, with the results illustrated by Figure~\ref{fig:sample-efficiency}.
Although the results exhibit high variance (due to the exploration strategy, the differences in tasks, and the randomised task order), sample efficiency is clearly improved when an agent is able to reuse past knowledge.
\begin{figure}[h!]
\begin{minipage}[b]{.5\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{Task1}
\caption{Distribution of optimal plan lengths in the first task when using hierarchies of varying heights.} \label{fig:histos}
\end{minipage}
\hfill
\begin{minipage}[b]{.45\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{efficiency}
\caption{Number of episodes required to learn a model of a given task, decreasing as the agent observes more tasks. Mean and variance reported over 100 runs.} \label{fig:sample-efficiency}
\end{minipage}
\end{figure}
\section{Conclusion}
We proposed a framework for autonomously learning reusable representations, and showed how to learn agent- and object-centric representations that can be used for planning. These representations can be transferred to new tasks, reducing the number of times an agent is required to interact with its environment. We also showed how to construct a portable hierarchy of abstractions that can be used to plan at different levels. We believe such transferable abstractions will be critical to scaling abstraction learning approaches to real-world tasks in the future.
\section{Introduction and Results}
The search for a quasi-local energy is one of the most prominent problems in classical relativity, with many different candidates. One of the most famous of these is the quasi-local energy described by Hawking in 1968 \cite{Hawma}, the so-called Hawking energy, given by the expression
\begin{equation}
\mathcal{E}(\Sigma) = \sqrt{\frac{|\Sigma|}{16 \pi}} \left( 1+ \frac{1}{8 \pi} \int_\Sigma \theta^+ \theta^- d\mu \right)
\end{equation}
where $\Sigma$ is a closed surface in a $4$-dimensional spacetime, $|\Sigma|$ is the area of the surface, and $\theta^+ \theta^-$ is the product of the null expansions $\theta^+$ and $\theta^-$. The Hawking energy is one of the simplest quasi-local energies available and fulfils almost all the expected properties of a quasi-local energy; however, it has the inconvenience that it is not necessarily positive: there are well-known examples of surfaces in flat space with negative Hawking energy. It is therefore of high importance to know which surfaces are appropriate for evaluating the Hawking energy. For instance, it was shown by Christodoulou and Yau in \cite{Chriyau} and by Miao, Wang and Xie in \cite{Miao} that, under some physically reasonable conditions, the Hawking energy (in the time-symmetric case) is well behaved when evaluated on constant mean curvature spheres.
Here we will work in the initial data set setting: we consider a smooth $3$-dimensional Riemannian manifold $(M,g)$ equipped with a symmetric $2$-tensor $k$, and we denote this manifold by the triple $(M,g,k)$. The motivation for considering this setting again comes from general relativity, since $(M,g,k)$ can be seen as a spacelike hypersurface with second fundamental form $k$ in a $4$-dimensional spacetime. In this setting the Hawking energy of a surface $\Sigma \subset M$ can be written as
\begin{equation}\label{hawkingmass2}
\mathcal{E}(\Sigma) = \sqrt{\frac{|\Sigma|}{16 \pi}} \left( 1- \frac{1}{16 \pi} \int_\Sigma H^2 - P^2 d\mu \right)
\end{equation}
where $H$ is the mean curvature of the surface $\Sigma$ and $P=\operatorname{tr}_{g_{\Sigma}} k$ is the trace of the tensor $k$ with respect to the metric induced in $\Sigma$, that is $P= \operatorname{tr}_\Sigma k= \operatorname{tr} k -k(\nu,\nu)$, where $\nu$ is the normal to $\Sigma$ in $M$.
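As a quick numerical sanity check of (\ref{hawkingmass2}) (illustrative only): for a round sphere of radius $r$ in flat $\mathbb{R}^3$ with $k=0$ we have $H=2/r$, $P=0$ and $|\Sigma|=4\pi r^2$, so the Hawking energy vanishes, as expected in flat space.

```python
import math

def hawking_energy_round_sphere(r, k_trace=0.0):
    """Hawking energy of a round r-sphere in flat R^3 (H = 2/r, P = k_trace)."""
    area = 4 * math.pi * r**2
    H, P = 2.0 / r, k_trace
    integral = (H**2 - P**2) * area          # integrand is constant on the sphere
    return math.sqrt(area / (16 * math.pi)) * (1 - integral / (16 * math.pi))

print(hawking_energy_round_sphere(2.0))   # 0.0 (up to rounding)
```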
From a variational point of view, studying (\ref{hawkingmass2}) is equivalent to studying the Hawking functional
\begin{equation}\label{hawfun}
\mathcal{H}(\Sigma)= \frac{1}{4} \int_\Sigma H^2 - P^2 d\mu
\end{equation}
We are interested in studying area constrained critical surfaces of this functional: fixing the area, we look for surfaces that maximize or minimize the functional. In particular, these are critical surfaces of the Hawking energy. In the case $k=0$, the so-called time-symmetric case (or a totally geodesic hypersurface), the Hawking functional reduces to the Willmore functional
$$\mathcal{W}(\Sigma)= \frac{1}{4} \int_\Sigma H^2 d\mu $$
and the critical surfaces of this functional, subject to the constraint that $|\Sigma| $ be fixed, are the area constrained Willmore surfaces, which for simplicity we here call just Willmore surfaces. These surfaces are characterized by the following Euler-Lagrange equation with Lagrange parameter $\lambda$:
\begin{equation}\label{Willeq}
0= \lambda H +\Delta^\Sigma H + H|\mathring{B}|^2+ H \mathrm{Ric}(\nu, \nu)
\end{equation}
where $\mathring{B}$ is the traceless part of the second
fundamental form $B$ of $\Sigma$ in $M$, that is $ \mathring{B} = B- \frac{1}{2} H g_\Sigma$ with norm $|\mathring{B}|^2 = \mathring{B}_{ij}\, g_\Sigma^{ip}\, g_\Sigma^{jq}\, \mathring{B}_{pq}$, $\mathrm{Ric}$ is the Ricci curvature of $M$, $\nu$ is the normal to $\Sigma$ and $\Delta^\Sigma $ is the Laplace-Beltrami operator on $\Sigma$.
Willmore surfaces have been extensively studied, and in the context of general relativity they were first introduced by Lamm, Metzger and Schulze in \cite{willflat}, where they showed that there exists a unique foliation by Willmore spheres of an asymptotically flat manifold; this foliation covers the whole manifold except a compact region, which we call a foliation at infinity. In their work they claimed that these surfaces are the optimal surfaces for evaluating the Hawking energy, since if the manifold has nonnegative scalar curvature (which means that the dominant energy condition holds) the Hawking energy is positive on these surfaces and monotonically increasing along the foliation. It was also shown by Koerber in \cite{Thomas} that the leaves of the foliation are
strict local area-preserving maximizers of the Hawking energy.
This foliation by Willmore spheres at infinity was refined by Eichmair and Koerber in \cite{eichko}, where they used a Lyapunov-Schmidt reduction procedure (a technique that we will also apply in our construction) to obtain the foliation; furthermore, in \cite{willcen} they studied the center of mass of this foliation. The non-totally geodesic case was also considered by Friedrich in his thesis \cite{Friedrich2020}, where he generalized the foliation of \cite{willflat} to critical surfaces of the Hawking functional and showed that the Hawking energy is monotonically increasing along the foliation. We will see in Theorem \ref{positivity} that, under even more general conditions, if the dominant energy condition holds then the Hawking energy is positive on these surfaces for large enough radius.
\begin{theorem*}
Assume that the dominant energy condition holds on an asymptotically flat initial data set $(M,g,k)$. Then there exists an $r_0>0$ such that for $r\geq r_0$ the following holds: if $\Sigma_r$ is a critical surface of the Hawking energy with area radius $r$ ($|\Sigma_r |=4 \pi r^2$) which is almost centered, the Lagrange parameter $ \lambda$ is positive with $ \lambda = \mathcal{O}(r^{-3})$, and the mean curvature is positive with $H = \mathcal{O}(r^{-1}) $, then the Hawking energy of $\Sigma_r$ is positive.
\end{theorem*}
This shows that, in the asymptotically flat case, these critical surfaces of the Hawking functional have the same desirable properties as Willmore surfaces and are ``optimal'' (in the sense of Lamm, Metzger and Schulze) for evaluating the Hawking energy on a spacelike hypersurface.
Here we are more interested in the local behaviour of the surfaces. In this direction, it was shown by Lamm and Metzger in \cite{ToMet}, and later by Laurain and Mondino in \cite{Laurain}, that Willmore surfaces concentrate around critical points of the scalar curvature, that is, points $p\in M$ such that $\nabla \mathrm{Sc}_p =0$. Furthermore, Lamm, Metzger and Schulze in \cite{locwill}, and Ikoma, Malchiodi and Mondino in \cite{ikoma}, showed by means of a Lyapunov-Schmidt reduction procedure that if at a point $p \in M$, $\nabla \mathrm{Sc}_p =0$ and $\nabla^2 \mathrm{Sc}_p$ is nondegenerate, then there is a local foliation by area constrained Willmore surfaces around $p$.
The first part of this paper is devoted to generalizing these local foliations to the case $k \neq 0$, obtaining the following results.
\begin{theorem*}
Let $p \in M $ be such that at $p$, $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)=0 $ and $\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2) $ is nondegenerate. Then there exist $\delta, \epsilon_0, C >0$ such that if at $p$,
$$ C |(\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2))^{-1}| \cdot \; |k|\; |\nabla k|\, (|k|^2 + |\mathrm{Ric}| ) <1 $$ then there exists a smooth foliation $\mathcal{F}= \{ S_r : r\in (0, \delta) \} $ around $p$ consisting of area constrained critical spheres of the Hawking functional, that is, surfaces satisfying equation (\ref{eulag}) for some $\lambda \in \mathbb{R}$. Furthermore, these surfaces can be expressed as normal graphs over geodesic spheres of radius $r$, and they satisfy $\mathcal{H}(S_r) < 4\pi +\epsilon_0^2$ and $|S_r|< \epsilon_0^2 $ for $r \in (0, \delta)$.
\end{theorem*}
We also obtained a uniqueness result.
\begin{theorem*}
$(i)$ Assume that at $p$, $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)=0 $, that $\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2) $ is nondegenerate, and that the foliation $ \mathcal{F}$ of the previous theorem exists, satisfying $\mathcal{H}(\Sigma) < 4\pi +\epsilon_0^2$ and $|\Sigma|< \epsilon_0^2 $ for every $\Sigma \in \mathcal{F} $, with the $\epsilon_0$ of that theorem. If $ \mathcal{F}_2$ is a foliation around $p$ by area constrained critical spheres of the Hawking functional satisfying $\mathcal{H}(\Sigma) < 4\pi +\epsilon^2$ and $|\Sigma|< \epsilon^2 $ for every $\Sigma \in \mathcal{F}_2 $ and some $\epsilon \leq \epsilon_0$, then either $\mathcal{F}$ is a restriction of $\mathcal{F}_2$ or $\mathcal{F}_2$ is a restriction of $\mathcal{F}$.
$(ii)$ Claim $(i)$ also holds if, instead of foliations, we consider surfaces concentrating around $p$ which satisfy $\mathcal{H}(\Sigma) < 4\pi +\epsilon^2$ and $|\Sigma|< \epsilon^2 $ for some $\epsilon \leq \epsilon_0$.
\end{theorem*}
\subsection{Small Sphere Limit}\label{secsmall}
For the second part of this paper we will focus on the small sphere limit of the Hawking energy. In general, any quasi-local energy must have the right asymptotics when evaluated on large and small spheres; in particular it must satisfy the small sphere limit.
Here we consider a $4$-dimensional spacetime $M^4$ and denote the geometric quantities on this manifold by an index $(\cdot)^4$. Before introducing the small sphere limit we need to define what a light cut is.
Let $p\in M^4$ and let $C_p$ be the future null cone of $p$, that is the null hypersurface generated by future null geodesics starting at $p$. Pick any future directed timelike unit vector $e_0$ at $p$. We normalize a null vector $L$ at $p$ by $\langle L,e_0 \rangle =-1$. We consider the null geodesics of the vector $L$ and let $l$ be the affine parameter of these null geodesics. We define the light cuts $\Sigma_l$ to be the family of surfaces on $C_p$ determined by the level sets of the
affine parameter $l$.
The small sphere limit tells us that, when evaluating the quasi-local energy on surfaces approaching a point $p$ in a spacetime along the light cuts of the null cone of $p$, the leading term of the quasi-local energy should recover the stress-energy tensor in spacetimes with matter fields:
$$\lim_{l \to 0} \frac{M(\Sigma_l)}{l^3} = \frac{4 \pi}{3} T(e_0,e_0) =\frac{1}{12}( \mathrm{Sc} + (\operatorname{tr} k)^2 - |k|^2 ) $$
where everything is evaluated at $p$, and for the right-hand side we used the Gauss-Codazzi
equations to obtain the energy density of the Einstein constraint equations on a hypersurface. This small sphere limit was first introduced by Horowitz and Schmidt for the Hawking energy \cite{Haw}; it must be satisfied by any reasonable notion of quasi-local energy, as can be seen for the Brown-York energy \cite{Brown}, the Kijowski-Epp-Liu-Yau energy \cite{Yu}, the Wang-Yau energy \cite{wangyau} and their higher dimensional versions \cite{Wang}, among others. In particular, we have the following expansion of the Hawking energy for light cuts $\Sigma_l$:
\begin{equation}\label{lightex}
\mathcal{E}(\Sigma_l)= \frac{1}{12}( \mathrm{Sc} + (\operatorname{tr} k)^2 - |k|^2 )l^3 + \mathcal{O}(l^5)
\end{equation}
at $p$. With this expansion in mind, when studying area constrained critical surfaces of the Hawking functional (\ref{hawfun}) in a spacelike hypersurface (an initial data set), it would be natural to expect such surfaces to concentrate around points satisfying
\begin{equation}
\nabla (\mathrm{Sc} + (\operatorname{tr} k)^2 - |k|^2)=0
\end{equation}
at $p$. However in \cite{Alex} Friedrich found that this is not the case, in fact a point having a concentration of thses surfaces must satisfy
$$\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)=0 $$
at $p$. This was an unexpected result, which we confirm with our results as well (in Theorem \ref{primfoli} and the equivalent Theorem \ref{nonexis}). It gives the impression that the local expansion of the Hawking energy depends on how one approaches the point.
\begin{figure}[h]\label{figure}
\centering
\includegraphics[width=9.35cm]{compfoli.png}
\caption{ Comparison between approaching a point along cuts on a null cone and along critical surfaces on a spacelike hypersurface.}
\end{figure}
In Section \ref{secdis} we will study this discrepancy found by Friedrich and see that it arises for purely geometric reasons; in particular, even if a priori the two ways of approaching the point may look similar, the surfaces used are quite different. Finally, in Remark \ref{excess} we will see that these results suggest that the critical surfaces of the Hawking functional induce an excess of energy in the Hawking energy.
\section{Foliations}
\subsection{Preliminaries and setting}
\begin{lemma}[First variation]
The area constrained Euler Lagrange equation for the Hawking functional (\ref{hawfun}) is
\begin{equation}\label{eulag}
\begin{split}
0=& \lambda H +\Delta^\Sigma H + H|\mathring{B}|^2+ H \mathrm{Ric}(\nu, \nu) +\frac{1}{2}H P^2+P( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu,\nu )) - 2P \operatorname{div}_\Sigma (k(\cdot, \nu))\\ &- 2k (\nabla^\Sigma P, \nu )
\end{split}
\end{equation}
Here $H$ is the mean curvature of $\Sigma$ , $\mathring{B}$ is the traceless part of the second
fundamental form $B$ of $\Sigma$ in $M$, that is
$\mathring{B}= B- \frac{1}{2} H g_\Sigma$ where $g_\Sigma$ is the induced
metric on $\Sigma$, $\mathrm{Ric} $ is the Ricci curvature of $M$, and $\nabla^\Sigma $, $\operatorname{div}_\Sigma$ and $\Delta^\Sigma$ are the covariant derivative, tangential divergence and Laplace-Beltrami operator on $\Sigma$. Finally, $\lambda \in \mathbb{R}$ plays the role of a Lagrange parameter.
\end{lemma}
\begin{proof}
Let $\Sigma \subset M$ be a surface and let $f: \Sigma \times (-\epsilon , \epsilon) \rightarrow M$ be a variation of $\Sigma$ with $f(\Sigma, s)= \Sigma_s$ and lapse $\frac{\partial f}{\partial s }_{|s=0}=\alpha \nu $.
In \cite[Section 3]{willflat} it was shown that the first variation of the Willmore functional
$$\mathcal{W}(\Sigma)= \frac{1}{4} \int_\Sigma H^2 d\mu $$ is given by
\begin{equation}
\begin{split}
\frac{1}{2} \frac{d}{d s}_{| s=0}\int_{\Sigma_s} H^2 d\mu = \int_{\Sigma} \left( -\Delta^\Sigma H - H|\mathring{B}|^2- H\mathrm{Ric}(\nu, \nu) \right) \alpha \, d\mu
\end{split}
\end{equation}
Now let us compute the variation of $\frac{1}{2} \int_\Sigma P^2 d\mu $. In \cite{Ce} it was shown that the variation of $P$ is given by
\begin{equation}
\frac{d\, P }{d s}_{| s=0}=\left( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu, \nu)\right)\alpha +2 k(\nabla \alpha, \nu)
\end{equation}
using this relation and integration by parts we have
\begin{equation}
\begin{split}
\frac{1}{2} \frac{d}{d s}_{| s=0}\int_{\Sigma_s} P^2 d\mu =& \int_{\Sigma} \frac{1}{2} P^2 H \alpha +P\left( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu, \nu)\right)\alpha +2P k(\nabla \alpha, \nu) d\mu\\
=\int_{\Sigma}\big( &\frac{1}{2} P^2 H +P\left( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu, \nu)\right) -2P \operatorname{div}_\Sigma \left( k(\cdot, \nu) \right )\\
&- 2 k(\nabla^\Sigma P, \nu ) \big) \alpha d\mu
\end{split}
\end{equation}
We consider area constrained surfaces, that is, surfaces whose variation of area is zero; this translates into the area constraint $\int_\Sigma H \alpha d\mu =0 $. Our surfaces must then satisfy the area constraint together with
\begin{equation*}
\begin{split}
&0=\frac{1}{2} \left( \frac{d}{d s}_{| s=0}\int_{\Sigma_s} H^2 d\mu - \frac{d}{d s}_{| s=0}\int_{\Sigma_s} P^2 d\mu \right) =\\
&\int_{\Sigma} \big( -\Delta^\Sigma H - H|\mathring{B}|^2- H \mathrm{Ric}(\nu, \nu) - \frac{1}{2} P^2 H -P\left( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu, \nu)\right) +2P \operatorname{div}_\Sigma \left( k(\cdot, \nu) \right )\\
&+ 2 k(\nabla^\Sigma P, \nu ) \big) \alpha d\mu
\end{split}
\end{equation*}
Combining this expression with the area constraint gives the Euler-Lagrange equation (\ref{eulag}).
\end{proof}
Note that this result is equivalent to \cite[Lemma 2.8]{Alex}, and it reduces to the Willmore equation (\ref{Willeq}) in case $k=0$.
Friedrich proved in \cite{Friedrich2020} the existence of a foliation by critical surfaces of the Hawking functional in asymptotically Schwarzschild manifolds, and also proved that the Hawking energy is monotonically increasing along the foliation. We will now show that if the dominant energy condition holds then the Hawking energy is positive on these surfaces. This holds under more general conditions than the ones considered by Friedrich (it holds assuming general asymptotic flatness). First recall that the dominant energy condition is given by
\begin{equation}
\mu \geq |J|
\end{equation}
where
\begin{equation}
\mathrm{Sc} + (\operatorname{tr} k)^2 - |k|^2=2 \mu \quad \text{and} \quad \operatorname{div} (k -(\operatorname{tr} k) g)= J
\end{equation}
are the energy density and the momentum density of the Einstein constraint equations. In particular, the dominant energy condition implies $\mu \geq 0$, which in turn implies $ \mathrm{Sc} + \frac{2}{3}(\operatorname{tr} k)^2 \geq 0 $.
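This last implication uses the pointwise Cauchy-Schwarz bound $(\operatorname{tr} k)^2 \leq 3 |k|^2$, valid for any symmetric $2$-tensor on a $3$-dimensional manifold:
\begin{equation*}
0 \leq 2\mu = \mathrm{Sc} + (\operatorname{tr} k)^2 - |k|^2 \leq \mathrm{Sc} + (\operatorname{tr} k)^2 - \tfrac{1}{3}(\operatorname{tr} k)^2 = \mathrm{Sc} + \tfrac{2}{3}(\operatorname{tr} k)^2 .
\end{equation*}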
\begin{theorem}\label{positivity}
Let $(M,g,k)$ be an asymptotically flat initial data set satisfying the dominant energy condition, where $k$ decays like $ |k|+ |\nabla k| |x| \leq C|x|^{-\frac{3}{2}-\epsilon}$ for some constant $C>0$ and $\epsilon \in (0, \frac{1}{2})$. Then there exists an $r_0>0$ such that for $r\geq r_0$ the following holds: if $\Sigma_r$ is a critical surface of the Hawking energy with area radius $r$ ($|\Sigma_r |=4 \pi r^2$) which is almost centered (the distance $|x|$ from the origin to any point of $\Sigma_r$ is comparable to $r$), whose Lagrange parameter $ \lambda$ is positive with $ \lambda = \mathcal{O}(r^{-3})$, and whose mean curvature is positive with $H = \mathcal{O}(r^{-1}) $, then the Hawking energy of $\Sigma_r$ is positive.
\end{theorem}
\begin{proof}
According to (\ref{hawkingmass2}) it is enough to see that $\int_{\Sigma_r} (H^2 -P^2) d\mu < 16 \pi$. We proceed in a similar way as in \cite[Theorem 4]{willflat}: we consider equation (\ref{eulag}), divide it by $H$, integrate by parts the term $\frac{\Delta H}{H}$ and use the Gauss equation $2\mathrm{Ric}(\nu, \nu) = \mathrm{Sc} -\mathrm{Sc}^{\Sigma_r} + H^2 -|B|^2 $ to obtain
\begin{equation}
\begin{split}
0=& \int_{\Sigma_r}\lambda+ |\nabla \log H|^2 + \frac{1}{2} |\mathring{B}|^2+ \frac{1}{2}(\mathrm{Sc} -\mathrm{Sc}^{\Sigma_r}) +\frac{1}{4}H^2 + \frac{1}{2} P^2+\frac{P}{H}( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu,\nu ))\\& - 2\frac{P}{H} \operatorname{div}_\Sigma (k(\cdot, \nu))- \frac{2}{H} k (\nabla^\Sigma P, \nu )d\mu
\end{split}
\end{equation}
We can estimate, for some constant $C$,
\begin{equation}
\int_{\Sigma_r}\lambda+ |\nabla \log H|^2 + \frac{1}{2} |\mathring{B}|^2 +\frac{1}{4}H^2 +\frac{1}{2} P^2 - \frac{C}{H}|k| |\nabla k| d\mu \leq - \int_{\Sigma_r} \frac{1}{2}(\mathrm{Sc} -\mathrm{Sc}^{\Sigma_r}) d\mu
\end{equation}
Now using the Gauss--Bonnet theorem to replace $ \mathrm{Sc}^{\Sigma_r}$ and subtracting $\frac{1}{3}(\operatorname{tr} k)^2$ on both sides we have
\begin{equation}
\begin{split}
&\int_{\Sigma_r}\lambda+ |\nabla \log H|^2 +\frac{1}{4}(H^2- P^2) +\frac{3}{4} P^2 - \frac{1}{3}(\operatorname{tr} k)^2+ \frac{1}{2} |\mathring{B}|^2 - \frac{C}{H}|k| |\nabla k| d\mu \\& \leq 4 \pi - \int_{\Sigma_r} \frac{1}{2}(\mathrm{Sc}+\frac{2}{3} (\operatorname{tr} k)^2) d\mu
\end{split}
\end{equation}
Now thanks to the dominant energy condition we have $\mathrm{Sc}+\frac{2}{3} (\operatorname{tr} k)^2 \geq 0 $, and by the decay conditions of the assumptions it is direct to see that for $r$ large enough
$$0 \leq\int_{\Sigma_r}\lambda+ \frac{3}{4} P^2 - \frac{1}{3}(\operatorname{tr} k)^2 - \frac{C}{H}|k| |\nabla k| d\mu $$
from which it follows directly that $\int_{\Sigma_r} (H^2 -P^2) d\mu < 16 \pi$.
\end{proof}
\begin{remark}
Note that the foliation constructed in \cite{Friedrich2020} satisfies the conditions of the previous result. This shows that these surfaces have the same desired properties as the Willmore surfaces in the totally geodesic case ($k=0$) when evaluating the Hawking energy.
\end{remark}
To produce our foliations we will use the fact that geodesic spheres of small radius around a point $p\in M$ form a foliation, and that this foliation can be perturbed in a suitable way. The perturbation procedure consists of a normal perturbation of the geodesic spheres together with a perturbation of their centers. For this procedure we will consider the setup of \cite{Me}, which is like the one considered in \cite{ikoma,locwill,Ye} when $k=0$.
Denote by $R_p$ the injectivity radius of $p$ and define $r_p:= \frac{1}{8} R_p$. We will also denote $\mathbb{B}_r:=\{x\in\mathbb{R}^{3}: ||x ||<r \} $ and $\mathbb{S}^2_r:=\{x\in\mathbb{R}^{3}: ||x ||=r \} $ where $||\cdot ||$ is the euclidean norm.
For $\tau \in \mathbb{R}^{3} $ with $|| \tau|| < r_p$ we define $F_\tau : \mathbb{B}_{2r_p} \rightarrow M $ by
\begin{equation}\label{coordinates}
F_\tau(x)= \exp_{c(\tau)}(x^i e^\tau_i)
\end{equation}
where $c(\tau)= \exp_p (\tau^i e_i)$, $e_i$ are an orthonormal basis of $T_p M$ and $e_i^\tau$ their parallel transport to $c(\tau)$ along the geodesic $c(t \tau )_{0\leq t \leq 1
}$. Consider also the dilation $\alpha_r(x)=rx $ for $r>0$. For each $\tau$ and $0<r<r_p$, the map $F_\tau \circ \alpha_r $ gives rise to rescaled normal coordinates centered at $c(\tau)$; in particular the metric $g$ in these coordinates satisfies
$$g_{ij}(r x)= r^2 (\delta_{ij} + \sigma_{ij}(x r)) $$
where $\delta$ denotes the euclidean metric and $\sigma$ satisfies $ |\sigma_{ij}(x)|\leq |x|^2$; we denote this by $g_{ij}(rx)= r^2 (\delta_{ij} + \mathcal{O}(|x|^2 r^2))$.
As in \cite{locwill} let $\Omega_1= \{ \varphi \in \mathcal{C}^{4, \frac{1}{2}}(\mathbb{S}^2) \; |\; ||\varphi ||_{\mathcal{C}^{4, \frac{1}{2}}(\mathbb{S}^2)} <\delta_0 \}$ with $\delta_0>0$ so small that $ S_\varphi :=\{x+ \varphi(x) \nu(x): x \in \mathbb{S}^2 \}$ is an embedded $\mathcal{C}^4$ surface in $\mathbb{R}^{3} $, and where $\nu$ is the unit normal to $\mathbb{S}^2$. Define the map $\Tilde{\Phi}: (0,r_p) \times \mathbb{B}_{2r_p} \times \Omega_1 \times \mathbb{R} \rightarrow \mathcal{C}^\frac{1}{2} (\mathbb{S}^2) $ given by
\begin{equation}
\begin{split}
\Tilde{\Phi}(r, \tau , \varphi , \lambda)=&\lambda H +\Delta^\Sigma H + H|\mathring{B}|^2+ H \mathrm{Ric}(\nu, \nu)+\frac{1}{2}H P^2+P( \nabla_\nu \operatorname{tr} k - \nabla_\nu k(\nu,\nu )) \\&- 2P \operatorname{div}_\Sigma (k(\cdot, \nu))- 2k (\nabla^\Sigma P, \nu )
\end{split}
\end{equation}
where the expression on the right is evaluated for $\Sigma= F_\tau (\alpha_r (S_\varphi))$ at $F_\tau (r(x+ \varphi(x) \nu))$ with respect to $g$. Note that this is the equation that characterizes the area constrained critical surfaces of the Hawking functional. Then in order to find a foliation we look for functions $\tau(r)$, $\varphi(r)$ and $\lambda(r)$ such that $ \Tilde{\Phi}(r, \tau(r) , \varphi(r) , \lambda(r))=0 $ for all $r \in (0, r_0)$; our surfaces $\Sigma_r= F_{\tau(r)} (\alpha_r (S_{\varphi(r)}))$ are then parameterized by $r$, and with some extra work one can see that they form a foliation.
In order to find these functions we will use the implicit function theorem, but in an auxiliary manifold $(\mathbb{B}_{2r_p}, g_{\tau,r}= r^{-2} \alpha_r^*(F_\tau^*(g)),$ $k_{\tau, r}= r^{-1} \alpha_r^*(F_\tau^*(k)) )$. This manifold is useful since its metric is conformal to $g$ in the $F_\tau \circ \alpha_r $ coordinates and, when $r=0$, $g_{\tau,0}$ is just the euclidean metric and $k_{\tau,0}=0$, allowing us to work with $r$ arbitrarily small. Furthermore we define the operator
\begin{equation}\label{resc}
\begin{split}
\Phi(r, \tau , \varphi , \lambda)=&r^2 \lambda H_{r,\tau} +\Delta_{r,\tau}^\Sigma H_{r,\tau} + H_{r,\tau}|\mathring{B}_{r,\tau}|^2+ H_{r,\tau} \mathrm{Ric}_{r,\tau}(\nu_{r,\tau}, \nu_{r,\tau})+\frac{1}{2}H_{r,\tau} P_{r,\tau}^2\\&+P_{r,\tau}( \nabla_{\nu_{r,\tau}} \operatorname{tr} k_{r,\tau} - \nabla_{\nu_{r,\tau}} k_{r,\tau}(\nu_{r,\tau},\nu_{r,\tau} )) - 2P_{r,\tau} \operatorname{div}_\Sigma (k_{r,\tau}(\cdot, \nu_{r,\tau}))\\&- 2k_{r,\tau} (\nabla^\Sigma P_{r,\tau}, \nu_{r,\tau} )
\end{split}
\end{equation}
where the right hand side is evaluated on $\Sigma= S_\varphi $ at $x+ \varphi(x) \nu(x) $ with respect to $g_{\tau,r}$ on $\mathbb{B}_2 $ (we denote this by the subindex $r,\tau$). The convenience of this operator on the auxiliary manifold is that the metric $g_{r,\tau}$ is conformal to $g$ in the coordinates $F_\tau \circ \alpha_r $ with conformal factor $r^2$, and $k_{r,\tau}$ is also conformal to $k$; using how the different terms in (\ref{resc}) transform under this conformal transformation (for instance $H_{r,\tau}= r H$, $\nu_{r,\tau}= r \nu $, $P_{r,\tau}= r P $, etc.) one obtains the relation
\begin{equation}
\begin{split}
\Phi(r, \tau , \varphi , \lambda)=&r^3 \Tilde{\Phi}(r, \tau , \varphi , \lambda)
\end{split}
\end{equation}
and therefore if we manage to find a surface satisfying $\Phi(r, \tau , \varphi , \lambda)=0$ we then have an area constrained critical surface of the Hawking functional in our original manifold.
Note that the operator (\ref{resc}) can be decomposed into two parts, one that does not depend on $k$, which we denote by $W_1$, and another that depends on $k$, which we denote by $W_2 $. Then we have $\Phi(r, \tau , \varphi , \lambda)=(W_1+ W_2)(r, \tau , \varphi, \lambda) $ where
\begin{equation}
W_1(r, \tau , \varphi, \lambda):= r^2 \lambda H_{r,\tau} +\Delta_{r,\tau}^\Sigma H_{r,\tau} + H_{r,\tau}|\mathring{B}_{r,\tau}|^2+ H_{r,\tau} \mathrm{Ric}_{r,\tau}(\nu_{r,\tau}, \nu_{r,\tau})
\end{equation}
and
\begin{equation}
\begin{split}
W_2(r, \tau , \varphi, \lambda):=& \frac{1}{2}H_{r,\tau} P_{r,\tau}^2+P_{r,\tau}( \nabla_{\nu_{r,\tau}} \operatorname{tr} k_{r,\tau} - \nabla_{\nu_{r,\tau}} k_{r,\tau}(\nu_{r,\tau},\nu_{r,\tau} )) \\ &- 2P_{r,\tau} \operatorname{div}_\Sigma (k_{r,\tau}(\cdot, \nu_{r,\tau})) - 2k_{r,\tau} (\nabla^\Sigma P_{r,\tau}, \nu_{r,\tau} )
\end{split}
\end{equation}
Note that $ W_1(r, \tau , \varphi, \lambda) $ corresponds to the Willmore operator, whose local behaviour has been studied in many papers, for instance \cite{ToMet}, \cite{locwill} and \cite{ikoma}.
Let us now examine the operator (\ref{resc}) on a geodesic sphere, that is, when $\varphi$ is equal to zero.
\begin{lemma}
In the setting above one has
\begin{equation}
W_1(r, \tau , 0, \lambda)= r^2 (2 \lambda - \frac{2}{3}\mathrm{Sc}^\tau(0) + 4 \mathrm{Ric}^\tau_{pq}(0) x^p x^q ) + r^3( 5 \mathrm{Ric}^\tau_{pq,s}(0) x^p x^q x^s -\mathrm{Sc}^\tau_{,p}(0) x^p )+ \mathcal{O}(r^4)
\end{equation}
where we denote by $A^\tau(x)$ a tensor evaluated at $F_\tau(x)$, so that $A^\tau(0)$ is the tensor evaluated at the point $c(\tau)$; if $\tau=0$ we omit the superscript, i.e. $A^0=A$. We also have
\begin{equation}\label{sphep}
\begin{split}
W_2(r, \tau , 0, \lambda)=& r^2\big( -(\operatorname{tr} k^\tau)^2 + (6\operatorname{tr} k^\tau \, k^\tau_{ij}
+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j -9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q \big)\\
&+ r^3 \big( \big(\frac{\partial_i (\operatorname{tr} k^\tau)^2}{2} - 2\partial_s ( \operatorname{tr} k^\tau \, k^\tau_{ s i}) \big) x^i +(\partial_s (\operatorname{tr} k^\tau k^\tau_{ij}) + 2 \partial_t (k^\tau_{ij} k^\tau_{ts}) )x^i x^j x^s \\
&-3 k^\tau_{ij} \, k^\tau_{pq,s} x^i x^j x^px^q x^s \big) +\mathcal{O}(r^4)
\end{split}
\end{equation}
where $k^\tau=k^\tau(rx)$. In particular $\Phi(r, \tau , 0, \lambda)=(W_1+ W_2)(r, \tau , 0, \lambda) $
\end{lemma}
\begin{proof}
In \cite[Proposition 2.3]{locwill} it was shown that
$$W_1(r, \tau , 0, \lambda)= r^2 (2 \lambda - \frac{2}{3}\mathrm{Sc}^\tau(0) + 4 \mathrm{Ric}^\tau_{pq}(0) x^p x^q ) + r^3( 5 \mathrm{Ric}^\tau_{pq,s}(0) x^p x^q x^s -\mathrm{Sc}^\tau_{,p}(0) x^p )+ \mathcal{O}(r^4) $$
In the rest of the proof we omit the superscript $\tau$ for simplicity. Now considering the rescaling we have
\begin{equation}\label{W2}
\begin{split}
W_2(r, \tau , \varphi , \lambda)=&r^3 \big(\frac{1}{2}H P^2+P( \nabla_{\nu} \operatorname{tr} k - \nabla_{\nu} k(\nu,\nu )) - 2P \operatorname{div}_\Sigma (k(\cdot, \nu)) - 2k (\nabla^\Sigma P, \nu )\big)
\end{split}
\end{equation}
where the right hand side is evaluated on the geodesic sphere $ \Sigma:=F_\tau (\alpha_r (\mathbb{S}^2))$ using the metric $g$. Consider a local frame $e_i \in TM $, $i=1,2,3$, such that $e_3= \nu$ is the normal to $\Sigma$ and $e_i \in T \Sigma$ for $i=1,2$ are two parallel tangent vectors, i.e. $\nabla^\Sigma_{e_\alpha} e_\beta =0$ for $\alpha, \beta=1,2$. We use Latin indices $ i,j,r,s,t,\ldots$ to denote the whole frame and Greek indices $\alpha, \beta$ to denote the vectors tangent to $\Sigma$. We use the Einstein summation convention.
First let us expand the last two terms of (\ref{W2}).
\begin{equation}
\begin{split}
\operatorname{div}_\Sigma (k(\cdot, \nu)) &= e_\alpha \left( k(e_\alpha, \nu) \right)= \nabla_{e_\alpha} k(e_\alpha, \nu) + k(\nabla_{e_\alpha} e_\alpha , \nu) + k(e_\alpha, \nabla_{e_\alpha} \nu)\\
&= \nabla_{e_i} k (e_i, \nu) - \nabla_{\nu} k(\nu, \nu) + k (\nabla_{e_\alpha} e_\alpha , \nu) + g^\Sigma (k, B)
\end{split}
\end{equation}
where $g^\Sigma (k, B)= g^{\Sigma\alpha \gamma} g^{\Sigma \beta \sigma } k_{\alpha \beta} B_{\gamma \sigma}$.
\begin{equation}
\nabla^\Sigma_{e_\alpha} P= e_\alpha (\operatorname{tr} k - k(\nu, \nu))= \nabla_{e_\alpha} k(e_i, e_i) - \nabla_{e_\alpha} k(\nu, \nu) + 2 k(\nabla_{e_\alpha}e_\beta , e_\beta)
\end{equation}
where all the terms are evaluated at the point $c(\tau)$. Now introducing these terms in (\ref{W2}) we have
\begin{equation}\label{W22}
\begin{split}
W_2(r, \tau , \varphi, \lambda)=&r^3 \Big(\frac{1}{2}H P^2+P( \nabla_{\nu} \operatorname{tr} k - \nabla_{\nu} k(\nu,\nu )) - 2P\big(\nabla_{e_i} k (e_i, \nu) - \nabla_{\nu} k(\nu, \nu) + g^\Sigma (k, B)\\ &
+ k (\nabla_{e_\alpha} e_\alpha , \nu)\big)- 2k_{\alpha j} \nu^j\big( \nabla_{e_\alpha} k(e_i, e_i) - \nabla_{e_\alpha} k(\nu, \nu) + 2 k(\nabla_{e_\alpha}e_\beta , e_\beta)\big) \Big)
\end{split}
\end{equation}
and using that for a geodesic sphere one has $H(r,\tau, 0, \lambda)= \frac{2}{r} - \frac{r^2}{3} \mathrm{Ric}_{ij}\, x^i x^j- \frac{r^3}{4} \mathrm{Ric}_{ij,l}\, x^i x^j x^l +\mathcal{O}(r^4) $ (this expression can be found in \cite{Ye}) where $\mathrm{Ric}$ is evaluated at $c(\tau)$, $B(r,\tau, 0, \lambda)= r^{-1} g^\Sigma + \mathcal{O}(r^2)$ and $\nabla_{e_\alpha} e_\beta = -B(e_\alpha, e_\beta) $ we have
\begin{equation}\label{W23}
\begin{split}
W_2(r, \tau , 0, \lambda)=& r^2 P^2+r^3 P( \nabla_{\nu} \operatorname{tr} k - \nabla_{\nu} k(\nu,\nu )) - 2r^3 P\big(\nabla_{e_i} k (e_i, \nu) - \nabla_{\nu} k(\nu, \nu) -2 k (\nu , \nu)\\ &
+ \frac{1}{r} P\big)- 2r^3k(e_j, \nu) \nabla_{e_j} k(e_i, e_i) +2 r^3k(\nu, \nu) \nabla_{\nu} k(e_i, e_i) +2r^3k(e_i, \nu) \nabla_{e_i} k(\nu, \nu)\\&
-2 r^3k(\nu, \nu) \nabla_{\nu} k(\nu, \nu) + 4r^2 k(e_i, \nu)\,k(e_i, \nu) -4r^2 k(\nu, \nu)\,k(\nu, \nu) +\mathcal{O}(r^4) \\
=& r^2\big( 4 k(e_i, \nu)\,k(e_i, \nu) -4 k(\nu, \nu)\,k(\nu, \nu) -P^2\big)+r^3 P\big( \nabla_{\nu} \operatorname{tr} k - \nabla_{\nu} k(\nu,\nu )\\
&- 2\nabla_{e_i} k (e_i, \nu)+2 \nabla_{\nu} k(\nu, \nu) +4 k (\nu , \nu) \big) +2 r^3 \big(k(\nu, \nu) \nabla_{\nu} k(e_i, e_i) \\&
- k(e_i, \nu) \nabla_{e_\alpha} k(e_i, e_i) +k(e_i, \nu) \nabla_{e_i} k(\nu, \nu)
-k(\nu, \nu) \nabla_{\nu} k(\nu, \nu) \big) +\mathcal{O}(r^4) \\
=&-r^2 (\operatorname{tr} k)^2 + r^3(\operatorname{tr} k \, \partial_i \operatorname{tr} k - 2\operatorname{tr} k \, k_{i s, s} - 2 k_{s i}\, \partial_s \operatorname{tr} k) x^i + r^2(2\operatorname{tr} k \, k_{ij}\\ &
+ 4 \operatorname{tr} k\, k_{ij}+ 4 k_{si} \, k_{sj})x^i x^j +r^3(2 \operatorname{tr} k \,k_{ij,s} - \operatorname{tr} k k_{ij,s} - \partial_i \operatorname{tr} k\, k_{js} +2 k_{ij} \, k_{st,t}\\ &
+2k_{ij} \, \partial_s \operatorname{tr} k+2 k_{st} \, k_{ij,t} )x^ix^jx^s -9 r^2 k_{ij}\, k_{pq} x^ix^j x^px^q -3r^3 k_{ij} \, k_{pq,s} x^i x^j x^px^q x^s\\
& +\mathcal{O}(r^4)\\
=& r^2\big( -(\operatorname{tr} k)^2 + (6\operatorname{tr} k\, k_{ij}+ 4 k_{si} \, k_{sj})x^i x^j -9 k_{ij}\, k_{pq} x^ix^j x^px^q \big)\\
&+ r^3 \big( \big(\frac{\partial_i (\operatorname{tr} k)^2}{2} - 2\partial_s ( \operatorname{tr} k \, k_{ s i}) \big) x^i +(\partial_s (\operatorname{tr} k k_{ij}) + 2 \partial_t (k_{ij} k_{ts}) )x^i x^j x^s \\
&-3 k_{ij} \, k_{pq,s} x^i x^j x^px^q x^s \big) +\mathcal{O}(r^4)
\end{split}
\end{equation}
\end{proof}
We have a result analogous to \cite[Lemma 3.2]{locwill}.
\begin{lemma}\label{3.2}
For every $\tau \in \mathbb{B}_{2r_p}$ and every $ \lambda \in \mathbb{R}$ we have that
$$\Phi_{\varphi r}(0, \tau , 0, \lambda)=0 $$
where we denote $ \Phi_{\varphi }(r, \tau , \varphi, \lambda) \varphi' = \frac{d}{dt} \Phi(r, \tau, \varphi +t \varphi' , \lambda) \vert_{t=0}.$
\end{lemma}
\begin{proof}
First we consider the terms depending on $k$, that is, expression (\ref{W22}). In \cite[Lemma 1.3]{Ye} and its proof it was shown that $H_{\varphi r}(0, \tau , 0, \lambda)=0$ and $B_{\varphi r}(0, \tau , 0, \lambda)=0$; hence the terms of the linearization that do not depend on $B_{\varphi r}$ have order at least $\mathcal{O}(r^2)$ and therefore
$$W_{2\varphi r}(0, \tau , 0, \lambda) = \frac{\partial}{\partial r} W_{2\varphi }(r, \tau , 0, \lambda)_{|r=0} =0 $$
Finally in \cite[Lemma 3.2]{locwill} it was shown that $W_{1\varphi r}(0, \tau , 0, \lambda) =0$ and as $\Phi_{\varphi r}(0, \tau , 0, \lambda)=W_{1\varphi r}(0, \tau , 0, \lambda)+W_{2\varphi r}(0, \tau , 0, \lambda) $ we have the result.
\end{proof}
In \cite[Section 3]{locwill} it was shown that when $r \to 0$ the linearization of $W_1$ reduces to
\begin{equation}
W_{1\varphi }(0, \tau , 0, \lambda) = -\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2)
\end{equation}
which is the linearization of the Willmore operator in Euclidean space. The kernel of this operator is generated by the constant functions and the first spherical harmonics, that is $K=\mathrm{Span} \{1,x^1, x^2, x^3 \}$ where $x^i$ are coordinate components of a point $x\in \mathbb{S}^2$. Now notice that by our scaling (as seen in Lemma \ref{3.2}) the operator $W_{1\varphi r}(r, \tau , 0, \lambda)$ has order $\mathcal{O}(r^2) $ and therefore we have
\begin{equation}
\Phi_{\varphi }(0, \tau , 0, \lambda) = -\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2)
\end{equation}
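As a quick sanity check (not part of the argument), one can tabulate how this limiting operator acts on spherical harmonics: since $-\Delta^{\mathbb{S}^2}$ has eigenvalue $\mu_\ell = \ell(\ell+1)$ on degree-$\ell$ harmonics, the operator $-\Delta^{\mathbb{S}^2}(-\Delta^{\mathbb{S}^2}-2)$ acts as $\mu_\ell(\mu_\ell-2)$, which vanishes exactly for $\ell=0,1$. A minimal Python sketch:

```python
# -Δ on S^2 has eigenvalue μ_ℓ = ℓ(ℓ+1) on degree-ℓ spherical harmonics,
# so -Δ(-Δ - 2) acts on them as μ_ℓ(μ_ℓ - 2); zero exactly for ℓ = 0, 1.
eigs = {l: l * (l + 1) * (l * (l + 1) - 2) for l in range(5)}
print(eigs)  # {0: 0, 1: 0, 2: 24, 3: 120, 4: 360}
```

This confirms that the kernel is spanned by the constants ($\ell=0$) and the first spherical harmonics ($\ell=1$), and that the operator is invertible on the higher modes.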
Now we define precisely what a concentration of surfaces is.
\begin{definition}
We say that a family of closed compact embedded surfaces $\{S_r: r \in I \}$, where $I$ is an interval satisfying $0 \in \Bar{I} $, is a \emph{concentration of surfaces around $p$} if
\begin{equation*}
\limsup_{r \to 0} \operatorname{diam} S_r =0
\quad \text{ and} \quad \bigcap_{r_0 \in (0,\infty)} \overline{\bigcup_{r \in I \cap (0,r_0) }S_r}= \{ p \}.
\end{equation*}
\end{definition}
Note that a foliation is a concentration of surfaces where the surfaces can be continuously parameterized by $r$ (that is, for all $r\in I$ there is a surface $S_r$) and where the surfaces do not intersect each other.
\subsection{Foliation construction}
As mentioned before, if a surface satisfies $\Phi(r, \tau , \varphi, \lambda)=0$ then it is an area constrained critical surface of the Hawking functional. The idea to construct the foliation is therefore to find, by means of the implicit function theorem, some $\tau (r)$, $\varphi(r)$ and $\lambda(r)$ such that $\Phi(r,\tau(r), \varphi(r), \lambda(r))=0$ for all $r\in (0,r_0)$. To achieve this we use that we can decompose $\mathcal{C}^{4,\frac{1}{2}}(\mathbb{S}^2)$ as $K \oplus K^\bot$, where $K$ is the kernel of $-\Delta^{\mathbb{S}^2} (-\Delta^{\mathbb{S}^2} -2)$ on euclidean space and $K^\bot$ its $L^2$ orthogonal complement. Then if one manages to show that $\Phi(r,\tau(r), \varphi(r), \lambda(r))=0$ holds on $K$ and on $K^\bot$, the equation holds on $\mathcal{C}^{4,\frac{1}{2}}(\mathbb{S}^2)$, and this is precisely what we are going to show using the implicit function theorem in each of the cases.
\begin{theorem}\label{primfoli}
Let $p \in M $ be such that at $p$, $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)=0 $ and $\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2) $ is nondegenerate. Then there exist $\delta, \epsilon_0, C >0$ such that if at $p$,
$$ C |(\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2))^{-1}| \cdot \; |k|\; |\nabla k|\, (|k|^2 + |\mathrm{Ric}| ) <1 $$ then there exists a smooth foliation $\mathcal{F}= \{ S_r : r\in (0, \delta) \} $ around $p$ of area constrained critical spheres of the Hawking functional, that is, surfaces satisfying equation (\ref{eulag}) for some $\lambda \in \mathbb{R}$. Furthermore these surfaces can be expressed as normal graphs over geodesic spheres of radius $r$, and they satisfy $\mathcal{H}(S_r) < 4\pi +\epsilon_0^2$ and $|S_r|< \epsilon_0^2 $ for $r \in (0, \delta)$.
\end{theorem}
\begin{proof}
We split the kernel $K$ into two parts, $K_0=\mathrm{Span }\{ 1\} $ and $K_1=\mathrm{Span}\{ x^1, x^2, x^3\} $.
Let $\pi_i$ for $i=0,1$ denote the orthogonal projection from $ \mathcal{C}^{0, \frac{1}{2}}( \mathbb{S}^2)$ onto $K_i$, let $T_1:K_1 \rightarrow \mathbb{R}^3 $ be the isomorphism sending $x^i_{| \mathbb{S}^2} $ to the $i$th coordinate basis vector $e_i$, and let $T_0:K_0 \rightarrow \mathbb{R} $ be the identity map. Define $ \Tilde{\pi}_i:= T_i \circ \pi_i$ for $i=0,1$. We consider the expansion
\begin{equation}\label{expan}
\begin{split}
\Phi(r, \tau, r^2 \varphi, \lambda )=& \Phi(r, \tau, 0, \lambda ) +\Phi_\varphi (0, \tau, 0, \lambda) \varphi r^2 +\Phi_{\varphi r}(0, \tau , 0, \lambda)\varphi r^3 \\
&+ r^4 \int_0^1 \int_0^1 t \Phi_{\varphi \varphi}(sr, \tau,st r^2 \varphi, \lambda ) \varphi \varphi ds dt\\
&+ r^4 \int_0^1 \int_0^1 \int_0^1 s \Phi_{\varphi r r}(usr, \tau,ust r^2 \varphi, \lambda ) \varphi du ds dt\\
&+ r^5 \int_0^1 \int_0^1 \int_0^1 s t \Phi_{\varphi \varphi r}(usr, \tau,ust r^2 \varphi, \lambda ) \varphi \varphi du ds dt.\\
\end{split}
\end{equation}
Note that $\Phi_{\varphi r}(0, \tau , 0, \lambda)\varphi =0$ by Lemma \ref{3.2}. We will study the projection of this expansion onto the kernel. For the first term we have $\Phi(r, \tau, 0, \lambda )= W_1(r, \tau, 0, \lambda )+ W_2(r, \tau, 0, \lambda )$, and in \cite[Lemma 3.1]{locwill} it was shown that
\begin{equation}\label{willker}
\begin{split}
\Tilde{\pi}_0 \left( W_1(r, \tau, 0, \lambda )\right)=& 8 \pi r^2 \left( \lambda + \frac{1}{3} \mathrm{Sc}^\tau(0) \right) + \mathcal{O}(r^4)\\
\Tilde{\pi}_1\left( W_1(r, \tau, 0, \lambda )\right)=& \frac{4 \pi}{3}r^3 \nabla_{e_i} \mathrm{Sc}^\tau(0) e_i + \mathcal{O}(r^5)
\end{split}
\end{equation}
Now using equation (\ref{sphep}) and the fact that $\int_{\mathbb{S}^2} x^i d\mu = \int_{\mathbb{S}^2} x^i x^jx^p d\mu =\int_{\mathbb{S}^2} x^ix^j x^p x^q x^s d \mu =0 $ we have
\begin{equation}
\begin{split}
\Tilde{\pi}_0\left( \frac{W_2(r, \tau, 0, \lambda )}{r^2} \right)_{|r=0}
=& \int_{\mathbb{S}^2}\big( \left(
6 \operatorname{tr} k^\tau(rx)\, k^\tau_{ij}(rx)+ 4 k^\tau_{si}(rx) \, k^\tau_{sj}(rx)\right)x^i x^j \\
&-(\operatorname{tr} k^\tau)(rx)^2 -9 k^\tau_{ij}(rx)\, k^\tau_{pq}(rx) x^ix^j x^px^q \big)d\mu_{|r=0}\\
=& \left( 6 \operatorname{tr} k^\tau(0)\, k^\tau_{ij}(0)+ 4 k^\tau_{si}(0) \, k^\tau_{sj}(0)\right) \int_{\mathbb{S}^2}x^i x^j d\mu \\
&-(\operatorname{tr} k^\tau)(0)^2\int_{\mathbb{S}^2}d\mu -9 k^\tau_{ij}(0)\, k^\tau_{pq}(0) \int_{\mathbb{S}^2}x^ix^j x^px^q d\mu\\
=& 8 \pi ( \frac{1}{5}(\operatorname{tr} k^\tau)^2 + \frac{1}{15} |k^\tau|^2)
\end{split}
\end{equation}
where as before the quantities are evaluated at the point $c(\tau)$.
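The coefficients $\frac{1}{5}$ and $\frac{1}{15}$ above can be checked numerically. The following sketch (not part of the argument; all names are ours) uses a generic symmetric matrix for $k$ at the point and the standard sphere moment identities $\int_{\mathbb{S}^2} x^i x^j d\mu = \frac{4\pi}{3}\delta^{ij}$ and $\int_{\mathbb{S}^2} x^i x^j x^p x^q d\mu = \frac{4\pi}{15}(\delta^{ij}\delta^{pq}+\delta^{ip}\delta^{jq}+\delta^{iq}\delta^{jp})$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
k = (A + A.T) / 2                      # a generic symmetric 2-tensor at the point
trk = np.trace(k)                      # tr k
normk2 = np.sum(k * k)                 # |k|^2

d = np.eye(3)
M2 = (4 * np.pi / 3) * d               # sphere moment: ∫ x^i x^j dμ
M4 = (4 * np.pi / 15) * (np.einsum('ij,pq->ijpq', d, d)
                         + np.einsum('ip,jq->ijpq', d, d)
                         + np.einsum('iq,jp->ijpq', d, d))  # ∫ x^i x^j x^p x^q dμ

# ∫ ((6 trk k_ij + 4 k_si k_sj) x^i x^j - (trk)^2 - 9 k_ij k_pq x^i x^j x^p x^q) dμ
lhs = (np.einsum('ij,ij->', 6 * trk * k + 4 * k @ k, M2)
       - 4 * np.pi * trk ** 2
       - 9 * np.einsum('ij,pq,ijpq->', k, k, M4))
rhs = 8 * np.pi * (trk ** 2 / 5 + normk2 / 15)
print(abs(lhs - rhs))  # ≈ 0
```

The two sides agree up to floating point error, confirming the projection $\Tilde{\pi}_0$ computed above.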
Note that for any $\varphi \in K^\bot$ one has $\Tilde{\pi}_i(\Phi_\varphi (0, \tau, 0, \lambda) \varphi ) =0 $. Then, taking some arbitrary $\varphi_0 \in K^\bot$ which will be fixed later, and $\lambda_0 = - \frac{1}{3}\mathrm{Sc} - \frac{1}{15} |k|^2 - \frac{1}{5} (\operatorname{tr} k)^2 $, where the geometric quantities are evaluated at $p$, we find using the expansion (\ref{expan}) that
\begin{equation}\label{k0}
\begin{split}
\Tilde{\pi}_0\left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^2} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0, \varphi=\varphi_0}} = 8 \pi ( \lambda_0 + \frac{1}{3}\mathrm{Sc} + \frac{1}{15} |k|^2 + \frac{1}{5} (\operatorname{tr} k)^2) =0
\end{split}
\end{equation}
Using again the expansion (\ref{expan}) and (\ref{willker}) we have
\begin{equation}\label{proj1}
\begin{split}
\Tilde{\pi}_1\left( \frac{\Phi(r,\tau, r^2\varphi_0,\lambda )}{r^3} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} &= \frac{4 \pi}{ 3} \mathrm{Sc}_{,i}e_i + \Tilde{\pi}_1 \left( \frac{ W_2(r, \tau, 0, \lambda )}{r^3} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}}\\
&= \frac{4 \pi}{ 3} \mathrm{Sc}_{,i}e_i + \Tilde{\pi}_1 \Big( \big(\frac{\partial_i (\operatorname{tr} k^\tau)^2}{2} - 2\partial_s ( \operatorname{tr} k^\tau \, k^\tau_{ s i}) \big) x^i +(\partial_s (\operatorname{tr} k^\tau k^\tau_{ij}) \\
&+ 2 \partial_t (k^\tau_{ij} k^\tau_{ts}) )x^i x^j x^s -3 k^\tau_{ij} \, k^\tau_{pq,s} x^i x^j x^px^q x^s \Big)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} \\
&+\frac{1}{r} \Tilde{\pi}_1 \big( (6 \operatorname{tr} k^\tau\, k^\tau_{ij}+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j-(\operatorname{tr} k^\tau)^2\\
&-9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q \big)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}}\\
\end{split}
\end{equation}
Let us examine in detail the last two terms of this expression. The second term is equal to
\begin{equation}
\begin{split}
\int_{\mathbb{S}^2} &\big(\frac{\partial_i (\operatorname{tr} k^\tau)^2}{2} - 2\partial_s ( \operatorname{tr} k^\tau \, k^\tau_{ s i}) \big) x^i x^l +(\partial_s (\operatorname{tr} k^\tau k^\tau_{ij})
+ 2 \partial_t (k^\tau_{ij} k^\tau_{ts}) )x^i x^j x^sx^l \\&
-3 k^\tau_{ij} \, k^\tau_{pq,s} x^i x^j x^px^q x^sx^l d\mu _{|\substack{r=0,\tau=0}} e_l\\
=& \big(\frac{\partial_i (\operatorname{tr} k^\tau)^2}{2} - 2\partial_s ( \operatorname{tr} k^\tau \, k^\tau_{ s i}) \big) \int_{\mathbb{S}^2}x^i x^l d\mu +(\partial_s (\operatorname{tr} k^\tau k^\tau_{ij})
+ 2 \partial_t (k^\tau_{ij} k^\tau_{ts}) ) \int_{\mathbb{S}^2}x^i x^j x^sx^l d\mu\\
&-3 k^\tau_{ij} \, k^\tau_{pq,s} \int_{\mathbb{S}^2} x^i x^j x^px^q x^sx^l d\mu e_l\\
=& \frac{92\pi}{105} \partial_l (\operatorname{tr} k)^2 e_l- \frac{64\pi}{35} \partial_s(\operatorname{tr} k\, k_{sl})e_l + \frac{64\pi}{105} \partial_t( k_{ls} \,k_{st})e_l - \frac{12 \pi}{105} \partial_l |k|^2e_l
\end{split}
\end{equation}
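The fourth and sixth order sphere moments entering the computation above can also be verified numerically. The sketch below (a sanity check, not part of the argument; all names are ours) assumes the standard identity $\int_{\mathbb{S}^2} x^{i_1}\cdots x^{i_{2m}} d\mu = \frac{4\pi}{(2m+1)!!}\sum_{\text{matchings}}\delta\cdots\delta$, and checks the displayed result against the integrand for generic $k_{ij}$ and $k_{ij,s}$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
k = rng.standard_normal((3, 3))
k = (k + k.T) / 2                          # k_{ij} at the point (symmetric)
dk = rng.standard_normal((3, 3, 3))
dk = (dk + dk.transpose(1, 0, 2)) / 2      # dk[i, j, s] = k_{ij,s}
trk, dtrk = np.trace(k), np.einsum('iis->s', dk)

def pairings(idx):
    # all perfect matchings of an even number of index slots
    if not idx:
        yield []
        return
    a, rest = idx[0], idx[1:]
    for j, b in enumerate(rest):
        for rem in pairings(rest[:j] + rest[j + 1:]):
            yield [(a, b)] + rem

def sphere_moment(n):
    # ∫_{S^2} x^{i_1}···x^{i_n} dμ = (4π/(n+1)!!) Σ_{matchings} δ···δ, n even
    norm = 4 * np.pi / np.prod(np.arange(n + 1, 0, -2))
    out = np.zeros((3,) * n)
    for idx in itertools.product(range(3), repeat=n):
        out[idx] = sum(all(idx[a] == idx[b] for a, b in P)
                       for P in pairings(list(range(n))))
    return norm * out

M2, M4, M6 = sphere_moment(2), sphere_moment(4), sphere_moment(6)

# linear coefficient: (1/2) d_i (tr k)^2 - 2 d_s(tr k k_{si})
c1 = trk * dtrk - 2 * (np.einsum('s,si->i', dtrk, k) + trk * np.einsum('sis->i', dk))
# cubic coefficient: d_s(tr k k_{ij}) + 2 d_t(k_{ij} k_{ts})
c3 = (np.einsum('s,ij->ijs', dtrk, k) + trk * dk
      + 2 * np.einsum('ijt,ts->ijs', dk, k)
      + 2 * np.einsum('ij,tst->ijs', k, dk))

lhs = (np.einsum('i,il->l', c1, M2)
       + np.einsum('ijs,ijsl->l', c3, M4)
       - 3 * np.einsum('ij,pqs,ijpqsl->l', k, dk, M6))
rhs = ((92 * np.pi / 105) * 2 * trk * dtrk
       - (64 * np.pi / 35) * (np.einsum('s,sl->l', dtrk, k)
                              + trk * np.einsum('sls->l', dk))
       + (64 * np.pi / 105) * (np.einsum('lst,st->l', dk, k)
                               + np.einsum('ls,stt->l', k, dk))
       - (12 * np.pi / 105) * 2 * np.einsum('ij,ijl->l', k, dk))
print(np.max(np.abs(lhs - rhs)))   # agreement up to floating point error
```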
For the last term of the expression note that $\Tilde{\pi}_1 \big( ( 6\operatorname{tr} k^\tau \, k^\tau_{ij} + 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j -(\operatorname{tr} k^\tau)^2 -9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q \big)_{|\substack{r=0}}=0 $ and that $\frac{\partial}{\partial r}_{|r=0} k_{ij}(rx)= k_{ij,t}(0)x^t$; then by performing a Taylor expansion around $r=0$ we find
\begin{equation*}
\begin{split}
&\Big(\frac{1}{r} \Tilde{\pi}_1 \big( (6 \operatorname{tr} k^\tau\, k^\tau_{ij}+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j
-(\operatorname{tr} k^\tau)^2 -9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q \big) \Big)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}}\\
&=\frac{\partial}{\partial r} \int_{\mathbb{S}^2} \big((6\operatorname{tr} k^\tau\, k^\tau_{ij}+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j
-(\operatorname{tr} k^\tau)^2 -9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q\big)x^l d\mu_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} e_l\\
&=\int_{\mathbb{S}^2} \big((6 \partial_p(\operatorname{tr} k\, k_{ij})+ 4 \partial_p (k_{si} \, k_{sj}))x^i x^jx^p
-\partial_i (\operatorname{tr} k)^2x^i -9 \partial_s (k_{ij}\, k_{pq}) x^ix^j x^px^qx^s\big)x^l d\mu \, e_l\\
&=-\frac{8\pi}{105} \partial_l (\operatorname{tr} k)^2 e_l+ \frac{64\pi}{35} \partial_s(\operatorname{tr} k\, k_{sl})e_l - \frac{64\pi}{105} \partial_t( k_{ls} \,k_{st})e_l + \frac{8 \pi}{21} \partial_l |k|^2e_l
\end{split}
\end{equation*}
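The cancellation of the divergence-type terms between this expression and the previous one is pure coefficient arithmetic, which can be double-checked with exact fractions (a sanity check, not part of the argument; the basis labels are ours):

```python
from fractions import Fraction as F

# coefficients (in units of π) of the two displayed contributions, with respect
# to the basis d_l(tr k)^2, d_s(tr k k_sl), d_t(k_ls k_st), d_l|k|^2
basis = ['d_l (tr k)^2', 'd_s(tr k k_sl)', 'd_t(k_ls k_st)', 'd_l |k|^2']
second_term = [F(92, 105), F(-64, 35), F(64, 105), F(-12, 105)]
last_term = [F(-8, 105), F(64, 35), F(-64, 105), F(8, 21)]
total = [a + b for a, b in zip(second_term, last_term)]
print(dict(zip(basis, total)))
```

The middle two entries cancel, and the surviving entries are $\frac{4}{5} = \frac{4}{3}\cdot\frac{3}{5}$ and $\frac{4}{15} = \frac{4}{3}\cdot\frac{1}{5}$, matching the combination $\frac{4\pi}{3}\partial_l(\frac{3}{5}(\operatorname{tr} k)^2 + \frac{1}{5}|k|^2)$ obtained below in (\ref{proj2}).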
Then putting everything back into (\ref{proj1}) we obtain
\begin{equation}\label{proj2}
\begin{split}
\Tilde{\pi}_1\left( \frac{\Phi(r,\tau, r^2\varphi_0,\lambda )}{r^3} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} &= \frac{4 \pi}{ 3} \mathrm{Sc}_{,i}e_i + \Tilde{\pi}_1 \left( \frac{ W_2(r, \tau, 0, \lambda )}{r^3} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}}\\
&= \frac{4 \pi}{ 3} \partial_l( \mathrm{Sc}+ \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2 )e_l =0
\end{split}
\end{equation}
To apply the implicit function theorem to the system of equations (\ref{k0}) and (\ref{proj2}) we need the corresponding operator to be invertible; let us find the operator. We compute the following derivatives
\begin{equation*}
\begin{split}
\frac{\partial}{\partial \lambda }\Tilde{\pi}_0\left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^2} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} =8 \pi, \quad \frac{\partial}{\partial \lambda } \Tilde{\pi}_1\left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^3} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} = 0,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\frac{\partial}{\partial \tau_\beta }\Tilde{\pi}_0\left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^2} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} =\frac{8 \pi}{3} ( \partial_\beta \mathrm{Sc} + \frac{1}{5} \partial_\beta|k|^2 + \frac{3}{5} \partial_\beta(\operatorname{tr} k)^2 )=0
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\frac{\partial}{\partial \tau_\beta } \Tilde{\pi}_1\left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^3} \right)_{|\substack{r=0,\tau=0, \lambda=\lambda_0}} &= \frac{4 \pi}{ 3} \partial_\beta \partial_l( \mathrm{Sc}+ \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2 )e_l
\end{split}
\end{equation*}
Then we need the operator
\begin{equation}
\begin{pmatrix}
8\pi & 0 \\
0 & \frac{4 \pi}{ 3} \nabla^2( \mathrm{Sc}+ \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2 )
\end{pmatrix}
\end{equation}
to be invertible at the point $p$, and this is equivalent to $\nabla^2( \mathrm{Sc}+ \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2 )$ being invertible. Then there exist functions $\tau =\tau(r,\varphi)$ and $\lambda =\lambda(r, \varphi)$ such that $\tau(0,\varphi_0)=0$, $\lambda(0,\varphi_0)=\lambda_0=- \frac{1}{3}\mathrm{Sc} -\frac{1}{15} |k|^2 - \frac{1}{5} (\operatorname{tr} k)^2 $ and $\Tilde{\pi}_i( \Phi(r,\tau, r^2\varphi,\lambda ))=0$ for $i=0,1 $ and $(r,\tau,\varphi,\lambda)$ close to $(0,0,\varphi_0, \lambda_0)$.
Let us now apply the implicit function theorem to obtain a vanishing projection onto the orthogonal complement of the kernel. First we fix the map $\varphi_0 \in K^\bot$ to be the solution to the equation
\begin{equation}\label{phi_0}
-\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2) \varphi_0= \pi^\bot\left( 9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q - (4 \mathrm{Ric}_{ij}+ 6 \operatorname{tr} k^\tau\, k^\tau_{ij}+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j
\right)
\end{equation}
where $ \pi^\bot$ is the orthogonal projection onto $K^\bot$. Projecting (\ref{expan}) onto $K^\bot$ and normalizing by $r^2$ we have
\begin{equation}\label{kort}
\begin{split}
\pi^\bot \left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^2} \right)_{|r=0,\varphi=\varphi_0}=&\pi^\bot \big( -2(\frac{1}{3}\mathrm{Sc} + \frac{1}{15} |k|^2 + \frac{1}{5} (\operatorname{tr} k)^2 ) -\frac{2}{3}\mathrm{Sc} +4 \mathrm{Ric}_{ij}x^ix^j \\ & -(\operatorname{tr} k^\tau)^2
+ (6\operatorname{tr} k^\tau \, k^\tau_{ij}
+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j -9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q\\
&- \Delta^{\mathbb{S}^2}(-\Delta^{\mathbb{S}^2} -2) \varphi_0 \big)\\
=& \pi^\bot\left( (4 \mathrm{Ric}_{ij}+ 6 \operatorname{tr} k^\tau\, k^\tau_{ij}+ 4 k^\tau_{si} \, k^\tau_{sj})x^i x^j
-9 k^\tau_{ij}\, k^\tau_{pq} x^ix^j x^px^q \right)\\
&-\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2) \varphi_0\\
=&0
\end{split}
\end{equation}
\begin{equation}
\frac{\partial}{\partial \varphi}\pi^\bot \left( \frac{\Phi(r,\tau, r^2\varphi,\lambda )}{r^2} \right)_{|r=0,\varphi=\varphi_0}= -\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2)\vert_{K^\bot}
\end{equation}
and this operator is invertible since our equation is restricted to $K^\bot$ (the $K$ part is zero). Then by the implicit function theorem there exist some $\delta>0$, $\tau=\tau(r)$, $\varphi (x)=\varphi(x,r)$ and $\lambda =\lambda(r) $ such that $\Phi(r, \tau(r), r^2 \varphi(r), \lambda(r) )=0 $ for $0<r<\delta $; this means that for each $r$ we have an area constrained critical surface of the Hawking functional. Let us now see that these surfaces form a foliation.
By construction we have the following parametrization for our surfaces.
\begin{equation}\label{parame}
G:\mathbb{R}^+ \times \mathbb{S}^2 \rightarrow M, \quad (r,x) \mapsto \exp_{c(\tau(r))}\big(rx(1+r^2 \varphi(r))\big)
\end{equation}
where we write $\varphi(r)=\varphi(r)(x) $ for simplicity. To find the lapse function of these surfaces one calculates
$ \frac{\partial G }{\partial r}_{|r=0} = \left(d_x \exp_{c(\tau(r))} \right) \big(x(1 +r^2 \varphi(r)) + rx(r^2 \varphi(r))_r\big)_{|r=0} + \left( \frac{\partial \exp_{c(\tau(r))}}{\partial r} \right) \big( rx(1 +r^2 \varphi(r))\big)_{|r=0} $
and this reduces to $ \frac{\partial G}{\partial r}_{|r=0}= x + \frac{\partial \tau^k}{\partial r}_{|r=0} e_k $
then we see that the lapse function is given by
\begin{equation}\label{lapse}
\alpha:=\langle \frac{\partial G}{\partial r}_{|r=0},\nu \rangle = 1 + \frac{\partial \tau^k}{\partial r}\langle e_k, \nu \rangle
\end{equation}
therefore we have a foliation if $\alpha>0$, and hence it suffices to show that $|\frac{\partial\tau }{\partial r}_{|r=0}|<1 $. To estimate $\frac{\partial\tau }{\partial r}_{|r=0} $ we use that the equation $ \Tilde{\pi}_1\left(\frac{\Phi(r, \tau, r^2 \varphi, \lambda )}{r^3}\right)=0 $ implies $ \frac{\partial}{\partial r} \Tilde{\pi}_1\left(\frac{\Phi(r, \tau, r^2 \varphi, \lambda )}{r^3}\right)_{|r=0}=0$, and by (\ref{expan}) this is
\begin{equation}\label{estau}
\begin{split}
0=& \frac{\partial}{\partial r} \Tilde{\pi}_1\left(\frac{\Phi(r, \tau, 0, \lambda )}{r^3}\right)_{|r=0} + \frac{1}{2} \Tilde{\pi}_1\left( \Phi_{\varphi \varphi}(0, 0,0,0 ) \varphi_0 \varphi_0 \right) +\frac{1}{2} \Tilde{\pi}_1\left( \Phi_{\varphi r r}(0, 0,0,0 ) \varphi_0 \right)\\
\end{split}
\end{equation}
Note that the second term is equal to zero. For the first term, using (\ref{proj2}) and the chain rule it is not hard to see that
\begin{equation}
\frac{\partial}{\partial r} \Tilde{\pi}_1\left(\frac{\Phi(r, \tau, 0, \lambda )}{r^3}\right)_{|r=0}= \frac{4 \pi}{ 3} \partial_\beta \partial_l( \mathrm{Sc}+ \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2 )\frac{\partial\tau^\beta }{\partial r}_{|r=0}e_l
\end{equation}
then from (\ref{estau}) and the invertibility of $\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2) $ we have
\begin{equation}\label{foli2}
|\frac{\partial\tau }{\partial r}_{|r=0}| < \frac{3}{4 \pi} |(\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2))^{-1}| \cdot | \frac{1}{2} \Tilde{\pi}_1\left( \Phi_{\varphi r r}(0, 0,0,0 ) \varphi_0 \right)|
\end{equation}
In the following we show that the right hand side of the previous
expression is less than one. The solution of the equation (\ref{phi_0}) is a function of the form $\varphi_0 =(k\ast k)_{ijpq} \;x^i x^j x^p x^q + C\cdot ( \mathrm{Ric} + k\ast k )_{ij} \; x^i x^j + C \cdot (\mathrm{Sc} + k\ast k )$, where for any tensors $A$ and $B$ we denote by $A \ast B$ any linear combination of contractions of $A$ and $B$ with the corresponding metric. In particular $\varphi_0$ is an even function. In \cite[Lemma 4.1]{locwill} it was shown that $W_{1\varphi r r}(0, 0,0,0 ) $ is an even operator, which implies that $\Tilde{\pi}_1\left( W_{1\varphi r r}(0, 0,0,0 ) \varphi_0 \right) =0$. Unfortunately, however, the operator $W_{2\varphi r r}(0, 0,0,0 ) $ is not even: it has an odd part which is proportional to $\nabla k \ast k $. Combining this with the expression for $\varphi_0$ in (\ref{foli2}) we obtain the estimate
$$ |\frac{\partial\tau }{\partial r}_{|r=0}| < C |(\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2))^{-1}| \cdot \; |k|\; |\nabla k|\, (|k|^2 + |\mathrm{Ric}| ) $$
where $C$ depends on $n$. Then if $ |(\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2))^{-1}| \cdot \; |k|\; |\nabla k|\, (|k|^2 + |\mathrm{Ric}| ) $ is small enough we have $|\frac{\partial \tau}{ \partial r}_{| r=0}| <1$ and in particular a foliation.
The leaves of the foliation are normal graphs of the map $r^3 \varphi(r)$ over geodesic spheres of radius $r$. This implies that the mean curvature of our surfaces can be estimated by the mean curvature of the geodesic sphere and $\Hessian \varphi_0$. Then, using that $|| \varphi||_{\mathcal{C}^2} < C$ with $C$ depending on the values of $\mathrm{Ric} $ and $k$ in these coordinates at $p$, we have
$$|H_{S_r}|<|H_{F_\tau(\mathbb{S}_r^n)} | + \mathcal{O}(r^2) < \frac{2}{r} + \mathcal{O}(r) $$
Then, proceeding in the same way as in \cite[Lemma 5.1]{ikoma}, we find that the Willmore energy of the surfaces satisfies
$$ \frac{1}{4} \int_{S_r} H^2 d\mu = 4 \pi + \mathcal{O}(r^2) $$
and $|S_r| = 4 \pi r^2+ \mathcal{O}(r^4)$, so it is direct to see that there exists an $\epsilon_0$ such that $$\mathcal{H}(\Sigma)= \frac{1}{4} \int_{S_r} H^2 - P^2 d\mu < 4 \pi + \epsilon_0^2$$ and $|S_r| < \epsilon_0^2 $ for any $r \in (0, \delta)$. Note that the smaller $\delta$ is, the smaller $\epsilon_0$ can be.
\end{proof}
\begin{remark}
Note that the condition $ C |(\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2))^{-1}| \cdot \; |k|\; |\nabla k|\, (|k|^2 + |\mathrm{Ric}| ) <1 $ is a sufficient but not a necessary condition for the foliation. The necessary condition is that $\alpha= 1 + \frac{\partial \tau^k}{\partial r}_{|r=0} \langle e_k, \nu \rangle >0$; if this condition is not fulfilled then we only have a regularly centered concentration of critical surfaces of the Hawking functional around $p$.
\end{remark}
\subsection{Uniqueness and nonexistence}
\begin{theorem}\label{uniq}
$(i)$ Assume that at $p$ we have $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)=0 $, that $\nabla^2( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2) $ is nondegenerate, and that the foliation $ \mathcal{F}$ of Theorem \ref{primfoli} exists and satisfies $\mathcal{H}(\Sigma) < 4\pi +\epsilon_0^2$ and $|\Sigma|< \epsilon_0^2 $ for any $\Sigma \in \mathcal{F} $, where $\epsilon_0$ is as in that theorem. If $ \mathcal{F}_2$ is a foliation around $p$ of area constrained critical spheres of the Hawking functional which satisfies $\mathcal{H}(\Sigma) < 4\pi +\epsilon^2$ and $|\Sigma|< \epsilon^2 $ for any $\Sigma \in \mathcal{F}_2 $ and some $\epsilon \leq \epsilon_0$, then either $\mathcal{F}$ is a restriction of $\mathcal{F}_2$ or $\mathcal{F}_2$ is a restriction of $\mathcal{F}$.
$(ii)$ Claim $(i)$ also holds if instead of foliations we consider concentrations of surfaces around $p$ which satisfy $\mathcal{H}(\Sigma) < 4\pi +\epsilon^2$ and $|\Sigma|< \epsilon^2 $ for any $\Sigma \in \mathcal{F}_2 $ and $\epsilon \leq \epsilon_0$.
\end{theorem}
\begin{proof}
The idea of the proof is to show that the leaves of the foliation can be expressed as normal graphs over geodesic spheres; once this is done we obtain the uniqueness of the foliation from the implicit function theorems used in Theorem \ref{primfoli}.
Consider the leaves of the foliation $\mathcal{F}_2$ parametrized by their area radius, that is, $S_r \in \mathcal{F}_2$ where $r$ satisfies $ |S_r| = 4 \pi r^2$, and consider $r$ so small that the leaves are contained inside a small geodesic sphere where we have a decomposition of the metric as in (\ref{normalcor}). By assumption the leaves satisfy $\mathcal{H}(S_r) < 4\pi +\epsilon^2$ and $|S_r|< \epsilon^2 $, therefore by considering $r$ smaller if necessary we can apply directly \cite[Proposition 3.2 and Corollary 3.3]{Alex}, obtaining that the surfaces satisfy
\begin{equation}\label{estimla}
\int_{S_r} |\nabla^2 H|^2+ H^2 |\nabla H|^2 +H^2 |\mathring{B}|^2 + H^4|\mathring{B}|^2 d\mu <C,
\end{equation}
\begin{equation}\label{estimsho}
||\mathring{B}||_{L^2(S_r)} < C| S_r|, \, \quad \, \left|\left|H - \frac{2}{r} \right|\right|_{L^\infty(S_r)} < C |S_r|^{\frac{1}{2}}
\end{equation}
where the $C$'s are constants depending on the injectivity radius at $p$, on $\epsilon$ and on the values of $\mathrm{Ric}$ and $\nabla \mathrm{Ric}$ at $p$. Note also that by using (\ref{estimla}), (\ref{estimsho}) and Lemma \ref{Sobol} one can reproduce the proof of \cite[Lemma 2.10]{Janme} in the exact same way, obtaining the estimate
$$ ||\mathring{B}||_{L^\infty(S_r)} \leq C r.$$
From (\ref{estimsho}) and by considering $r$ small enough we can apply Lemma \ref{refine2} obtaining
\begin{equation}\label{normal2}
\big|\big|\frac{y}{r} -\nu \big|\big|_{L^2(S_r)} <Cr^3
\end{equation}
where $y$ denotes the position vector in some normal coordinates centered at a point $p_0$. To see that we can express our leaves as graphs over geodesic spheres we need the normal $\nu$ to $S_r$ to satisfy, in Euclidean space, $\langle \nu, \frac{y}{r} \rangle \neq 0$, and this is true if $||\frac{y}{r} -\nu ||_{L^\infty (S_r)} $ is small.
For any tangent vector $e_i$ to $S_r$ and its tangential projection to a sphere of radius $r$ in euclidean space, $e_i^T = e_i- \delta(e_i, \frac{y}{r}) \frac{y}{r}$ we have
$$\nabla^E_{e_i} \frac{y}{r}=\frac{1}{r}\left( e_i- \delta(e_i, \frac{y}{r}) \frac{y}{r}\right) \quad \text{and} \quad \nabla_{e_i} \nu =\frac{1}{2} H e_i + \mathring{B}(e_i, \cdot) $$
then by using that $\delta(e_i, \frac{y}{r}) = (\delta-g)(e_i ,\frac{y}{r}) + g(e_i,\frac{y}{r} -\nu ) $ and the decay of the metric $g$ (like in Lemma \ref{comparison}) we obtain
\begin{equation}\label{devire}
\big|\nabla \big(\nu -\frac{y}{r}\big) \big| < C\big(|\partial g| + \big|H -\frac{2}{r}\big| +|\mathring{B}| +r^{-1}\big(|g- \delta| + \big|\frac{y}{r} -\nu\big| \big)\big)< Cr +Cr^{-1} \big|\frac{y}{r} -\nu\big|
\end{equation}
for some constant $C$. From this inequality and (\ref{normal2}) we obtain $ ||\nabla (\frac{y}{r} -\nu) ||_{L^2(S_r)} <C r^2 $. Then, using the inequality (\ref{Sobol2}) from Lemma \ref{Sobol} with $p=2$, we obtain $||\frac{y}{r} -\nu ||_{L^4(S_r)} <Cr^\frac{5}{2} $, and using (\ref{devire}) again we have $||\nabla (\frac{y}{r} -\nu) ||_{L^4(S_r)} <Cr^\frac{3}{2} $. Finally, using the Sobolev inequality (\ref{Sobolev2}) for $p=4$ we obtain
$$\big|\big|\frac{y}{r} -\nu \big|\big|_{L^\infty (S_r)} < C r^2 $$
then for $r$ small enough we can express $S_r$ as a graph over a geodesic sphere of radius $ \Tilde{r}=\Tilde{r} (r)$ centered at a point $p_r$; we can then also characterize the leaves by this radius and denote them by $S_{\Tilde{r}}$. Let us change notation and simply denote $\Tilde{r}$ by $r$; then we have $S_r= F_{\Tilde{\tau}(r)} (\alpha_r (S_{\Tilde{\varphi}}))$ for some $\Tilde{\varphi} \in \mathcal{C}^{4, \frac{1}{2}}(\mathbb{S}^2)$ and $\Tilde{\tau}(r)$ which satisfies $\Tilde{\tau}(r) \to 0 $ as $r \to 0$ and $c(\Tilde{\tau}) = \exp_p (\Tilde{\tau}^i e_i)$, where we used the notation of (\ref{coordinates}).
Denoting by $\mathbb{S}^2(a)$ the unit sphere with center $a$ in $\mathbb{R}^3$, $ S_\varphi(a) :=\{x+ \varphi(x) \nu(x): x \in \mathbb{S}^2(a) \}$ and defining $\Tilde{S}_r :=\alpha_{1/r}(F^{-1}_0(S_r))$ with Euclidean center of mass denoted by $x(r)$, the previous statement is equivalent to $\Tilde{S}_r=S_{\Bar{\varphi}(r)}(x(r)) $ for some smooth function $\Bar{\varphi}(r)$ on $\mathbb{S}^2(a)$. Furthermore, by Theorem \ref{muller} our surfaces approach uniformly a round sphere in Euclidean space as $r \to 0$, hence in particular $||\Bar{\varphi}(r) ||_{\mathcal{C}^5} \to 0$ as $r \to 0$. Note that with this we have just proved the same result as in \cite[Lemma 2.3]{Ye}, so we can apply the two results that follow after that lemma, \cite[Corollary 2.1 and Lemma 2.4]{Ye}, to our situation directly. With this we perturb the center of our spheres, obtaining a smooth function $a(r)$ with $ a(r)\in \mathbb{R}^3$ and $\lim_{r \to 0} ||a(r)|| =0$, such that $S_r=F_{r(x(r) +a(r))} (\alpha_r (S_{\varphi(r,a(r))})) $ for some smooth function $\varphi(r,a(r)) $ on $\mathbb{S}^2$ which satisfies $\pi_1(\varphi(r,a(r)))=0 $ and $||\varphi(r,a(r)) ||_{\mathcal{C}^5} \to 0$ as $r \to 0$. We want our $\varphi$ to satisfy the same conditions as the one in Theorem \ref{primfoli}, in order to use the uniqueness of the implicit function theorem; therefore we also want $ \pi_0(\varphi(r,a(r)))=0$. In order to achieve this we will have to perturb the radius of our spheres.
Denote by $m(\varphi(r)):= \pi_0(\varphi(r,a(r)) )= \frac{1}{4 \pi} \int_{\mathbb{S}^2} \varphi (r,a(r)) d \mu $ and note that $ m(\varphi(r)) \to 0 $ as $r \to 0$; then define
\begin{equation}
\varphi^* (r):= \frac{\varphi(r,a(r))- m(\varphi(r))}{ 1 + m(\varphi(r))} \quad \text{and \quad} r^*(r):=r(1+ m(\varphi(r)))
\end{equation}
we then have $ \pi_0 (\varphi^* (r))=0$ and, as $r^* x(1+ \varphi^*(r) ) = r x(1+ \varphi(r,a(r)))$ for $ x \in \mathbb{S}^2$, we have that
$$S_r = F_{\tau(r)} (\alpha_r (S_{\varphi(r,a(r))})) =F_{\tau(r)} (\alpha_{r^*} (S_{\varphi^*(r)})) $$
where $\tau(r) = r(x(r) +a(r)) $. As $ r^* \to 0$ for $r\to 0$, and for $r$ small enough the relation between $r$ and $r^*$ is injective, we can rewrite all of the previous relations in terms of $r^*$ instead of $r$, so we write $$S_{r^*}=F_{\tau(r^*)} (\alpha_{r^*} (S_{\varphi^*(r^*)})) $$ where we also have that $\tau(r^*) \to 0$ and $||\varphi^*(r^*) ||_{\mathcal{C}^5}\to 0 $ as $r^* \to 0$.
As the surfaces $ S_{r^*}$ are area constrained critical points of the Hawking functional, on the manifold $(\mathbb{B}_{2r_p}, g_{\tau,r},$ $k_{\tau, r} )$ they satisfy $\Phi(r^*,\tau(r^*), \varphi^*,\lambda(r^*) )=0$ for some constants $\lambda(r^*) $. We have that $\varphi^* = \mathcal{O}(r^*)$, so $\frac{\varphi^*}{r^*} $ is bounded. Then as in (\ref{expan}) we have
\begin{equation}\label{1b}
\begin{split}
-\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2)\varphi^* &= -W_1(r^*, \tau , 0, \lambda) -W_2(r^*, \tau , 0, \lambda) \\
&- r^{*2} \int_0^1 \int_0^1 t \Phi_{\varphi \varphi}(sr^*, \tau,st \varphi^*,\lambda ) \frac{\varphi^*}{r^*} \frac{\varphi^*}{r^*} ds dt\\
&- r^{*3}\int_0^1 \int_0^1 \int_0^1 s \Phi_{\varphi r r}(usr^*, \tau,ust \varphi^*,\lambda ) \frac{\varphi^*}{r^*} du ds dt\\
&- r^{*3} \int_0^1 \int_0^1 \int_0^1 s t \Phi_{\varphi \varphi r} (usr^*, \tau,ust \varphi^*,\lambda ) \frac{\varphi^*}{r^*} \frac{\varphi^*}{r^*} du ds dt + \mathcal{O}(r^{*2}) \\
&=:r^{*2} f(r^*)
\end{split}
\end{equation}
where $f(r^*)$ is bounded. Then $\varphi^*$ is a solution of the elliptic PDE $ -\Delta^{\mathbb{S}^2} (- \Delta^{\mathbb{S}^2} -2) \varphi =r^{* 2} f(r^*) $ in $K^\bot$, so by using Schauder estimates (and the injectivity of $L$ in $K^\bot$) we have $||\varphi^* ||_{\mathcal{C}^{2,\frac{1}{2}}} \leq C r^{*2}$. Now, considering the projection to $K_0$ as in (\ref{k0}) and dividing by $r^{*2}$, we have
\begin{equation}
\begin{split}
0=&\Tilde{\pi}_0\left( \frac{\Phi(r^*,\tau, \varphi^*,\lambda )}{r^{*2}} \right) =8 \pi ( \lambda(0) + \frac{1}{3}\mathrm{Sc}^\tau + \frac{1}{15} |k^\tau|^2 + \frac{1}{5} (\operatorname{tr} k^\tau)^2) + \mathcal{O}(r^{*2}) \\
& +\Tilde{\pi} \big( \int_0^1 \int_0^1 t \Phi_{\varphi \varphi}(sr^*, \tau,st \varphi^*, \lambda ) \frac{\varphi^*}{r^*} \frac{\varphi^*}{r^*} ds dt\\
&+ r^* \int_0^1 \int_0^1 \int_0^1 s \Phi_{\varphi r r}(usr^*, \tau,ust \varphi^*, \lambda ) \frac{\varphi^*}{r^*} du ds dt\\
&+ r^* \int_0^1 \int_0^1 \int_0^1 s t \Phi_{\varphi \varphi r} (usr^*, \tau,ust \varphi^*,\lambda ) \frac{\varphi^*}{r^*} \frac{\varphi^*}{r^*} du ds dt \big) \\
\end{split}
\end{equation}
Then as $||\frac{\varphi^*}{r^*} ||_{\mathcal{C}^{2}} \to 0$ as $r^* \to 0$, we have that
$$\lambda(0) = -\frac{1}{3}\mathrm{Sc} - \frac{1}{15} |k|^2 - \frac{1}{5} (\operatorname{tr} k)^2 $$
Finally, as $ 0=\pi^\bot\left( \Phi(r^*,\tau, \varphi^*,\lambda ) \right) $, setting $\varphi(r^*):=r^{*-2} \varphi^*(r^*)$ and considering the projection to $K^\bot$ just as in (\ref{kort}), we see that $\varphi (0)$ is given by the solution of the equation (\ref{phi_0}). Then, by the uniqueness of the implicit function theorems used in Theorem \ref{primfoli}, the functions $\varphi(r^*)$, $\tau(r^*)$ and $\lambda(r^*)$ must agree with the ones found in that theorem in a neighborhood of $r^*=0$.
For $(ii)$ note that we did not use the foliation property in the previous arguments.
\end{proof}
From the proof of the previous theorem we can also directly obtain the nonexistence result found in \cite[Theorem 1.2]{Alex}. Note that for our proof we use estimates found in \cite{Alex}.
\begin{theorem}\label{nonexis}
There exists an $\epsilon_0 >0$ such that if at a point $p \in M$ we have $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)\neq 0 $, then there exists no concentration of area constrained critical spheres of the Hawking functional by surfaces satisfying $\mathcal{H}(S_r) < 4\pi +\epsilon_0^2$ and $|S_r|< \epsilon_0^2 $.
\end{theorem}
\begin{proof}
We consider $\epsilon_0$ small enough to be in the setting of the proof of the previous theorem (so small enough to apply \cite[Proposition 3.2 ]{Alex}). Suppose that we have such surfaces and $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)\neq 0 $.
As in the proof of the previous theorem, using that on the manifold $(\mathbb{B}_{2r_p}, g_{\tau,r},$ $k_{\tau, r} )$ our surfaces satisfy $\Phi(r^*,\tau(r^*), \varphi^*,\lambda(r^*) )=0$, we can also consider the projection to $K_1$ and divide by $r^{*3}$, obtaining
\begin{equation}
\begin{split}
0=&\Tilde{\pi}_1\left( \frac{\Phi(r^*,\tau, \varphi^*,\lambda )}{r^{*3}} \right) =\frac{4 \pi}{ 3} \mathrm{Sc}^\tau_{,i}e_i + \Tilde{\pi}_1 \left( \frac{ W_2(r^*, \tau, 0, \lambda )}{r^{*3}} \right) \\
& +\Tilde{\pi} \big( \int_0^1 \int_0^1 t \Phi_{\varphi \varphi}(sr^*, \tau,st \varphi^*, \lambda ) \frac{\varphi^*}{r^{*2}} \frac{\varphi^*}{r^*} ds dt\\
&+ \int_0^1 \int_0^1 \int_0^1 s \Phi_{\varphi r r}(usr^*, \tau,ust \varphi^*, \lambda ) \frac{\varphi^*}{r^*} du ds dt\\
&+ \int_0^1 \int_0^1 \int_0^1 s t \Phi_{\varphi \varphi r} (usr^*, \tau,ust \varphi^*,\lambda ) \frac{\varphi^*}{r^*} \frac{\varphi^*}{r^*} du ds dt \big) \\
\end{split}
\end{equation}
Then, as $||\varphi^* ||_{\mathcal{C}^{2}} \leq C r^{*2}$, taking $r^* \to 0$ we find that $\frac{4 \pi}{ 3} \mathrm{Sc}_{,i}e_i + \Tilde{\pi}_1 \left( \frac{ W_2(r^*, \tau, 0, \lambda )}{r^{*3}} \right)_{|r^*=0}=0$, and proceeding as was done for ($\ref{proj2} $) we find that $\nabla( \mathrm{Sc} + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2)= 0 $, a contradiction.
\end{proof}
\section{Discrepancy of small sphere limits}\label{secdis}
Note that our critical surfaces of Theorems \ref{primfoli} and \ref{uniq} are small deformations of geodesic spheres, and the smaller the radius the closer the surface is to a geodesic sphere. Therefore, in order to understand the discrepancy mentioned in Section \ref{secsmall}, it is a good idea to study the expansion of the Hawking energy on geodesic spheres of small radius. Recall that the geodesic spheres are parameterized by
\begin{equation}
X_G:\mathbb{R}^+ \times \mathbb{S}^n \mapsto M,\quad (r,x) \mapsto \exp_{p}(rx)
\end{equation}
and that the mean curvature of the geodesic sphere can be expressed as
\begin{equation}\label{H}
H_G(x)= \frac{2}{r}-\frac{1}{3} \mathrm{Ric}_{ij}(0) x^i x^j r- \frac{1}{4} \mathrm{Ric}_{ij;k}(0)x^ix^jx^k r^2 + \mathcal{O}(r^4).
\end{equation}
where $\mathrm{Ric}$ is evaluated at $p$. One can proceed as in \cite{Fashitam} and find that in the totally geodesic case ($k=0$) the following expansion holds
\begin{equation}
\mathcal{E}(S_r) = \sqrt{\frac{|S_r|}{16 \pi}} \left( 1- \frac{1}{16 \pi} \int_{S_r} H^2 d\mu \right) = \frac{r^3}{12} \mathrm{Sc}_p + \mathcal{O}(r^5)
\end{equation}
where the Hawking energy is evaluated on the geodesic sphere $S_r$ of radius $r$ and centered on a point $p$. We can then compute as was done in Theorem \ref{primfoli} that
\begin{equation}
\begin{split}
\int_{S_r} P^2 d\mu&= 4 \pi r^2 (\operatorname{tr} k)^2 -2 r^2 \operatorname{tr} k \, k_{ij} \int_{\mathbb{S}^2} x^i x^j d\mu + r^2 k_{ij}k_{pq}\int_{\mathbb{S}^2} x^i x^j x^p x^q d\mu \\
&= \frac{8\pi}{5} r^2 (\operatorname{tr} k)^2 + \frac{8\pi}{15} r^2 |k|^2
\end{split}
\end{equation}
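As a consistency check (a routine computation, not part of the original argument), the coefficients above follow from the standard moment identities on the unit sphere,
$$ \int_{\mathbb{S}^2} x^i x^j \, d\mu = \frac{4\pi}{3}\delta^{ij}, \qquad \int_{\mathbb{S}^2} x^i x^j x^p x^q \, d\mu = \frac{4\pi}{15}\left(\delta^{ij}\delta^{pq} + \delta^{ip}\delta^{jq} + \delta^{iq}\delta^{jp}\right), $$
so that contracting the second identity with $k_{ij}k_{pq}$ gives $\frac{4\pi}{15}\left((\operatorname{tr} k)^2 + 2|k|^2\right)$, and the coefficient of $r^2 (\operatorname{tr} k)^2$ becomes $4\pi - \frac{8\pi}{3} + \frac{4\pi}{15} = \frac{8\pi}{5}$.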
with this we then get the general expansion
\begin{equation}\label{geoexp}
\mathcal{E}(S_r)=\sqrt{\frac{|S_r|}{16 \pi}} \left( 1- \frac{1}{16 \pi} \int_{S_r} H^2-P^2 d\mu \right)= \frac{r^3}{12}(\mathrm{Sc}_p + \frac{3}{5} (\operatorname{tr} k)^2 + \frac{1}{5} |k|^2 ) +\mathcal{O}(r^5)
\end{equation}
This result agrees with the one found in \cite{Alex}, which suggests that the source of the discrepancy lies in the difference between the light cut spheres and the geodesic spheres. To see this we will follow \cite{Wang} and \cite{wangyau} in order to study the light cut spheres in more detail and compare them with the geodesic spheres.
\begin{remark}
A natural idea would be to consider the small sphere limit evaluated on space time constant mean curvature (STCMC) surfaces, that is, surfaces satisfying $ H^2-P^2 =4r^{-2}= \text{Constant}$. The local behaviour of these surfaces was studied in \cite{Me}, where it was shown that they are small deformations of geodesic spheres for which, again, the smaller the radius the closer the surface is to a geodesic sphere. Therefore such a small sphere limit would also lead to (\ref{geoexp}).
\end{remark}
Let $C_p$ be the future null cone of $p$, that is the null hypersurface generated by future null geodesics starting at $p$. Pick any future directed timelike unit vector $e_0$ at $p$, then to parameterize the light cuts $\Sigma_l$ of $C_p$ we will consider the map
\begin{equation}
X_{lc}:[0, \delta) \times \mathbb{S}^2 \mapsto M^4
\end{equation}
such that for each point $x \in \mathbb{S}^2 $ and $l\in [0, \delta) $, $X_{lc}(x, l)$ is a null geodesic parameterized by the affine parameter $l$, with $X_{lc}(x, 0)=p$ and $\frac{\partial X_{lc}(x,0)}{\partial l} \in T_pM^4$ a null vector which satisfies $ \langle \frac{\partial X_{lc}(x,0)}{\partial l} , e_0 \rangle =-1$. We define $L=\frac{\partial X_{lc}}{\partial l} $ to be the null generator with $\nabla^4_L L=0$. We also choose a local
coordinate system $\{ u_a \}_{a=1,2}$ on $\mathbb{S}^2$ such that $\partial_a =
\frac{\partial X_{lc}}{\partial u_a} $, $a = 1, 2$ form a tangent basis to $\Sigma_l$. We define $\Bar{L}$ to be the null normal vector along $\Sigma_l$ such that $ \langle \Bar{L} , L \rangle =-1$. With this we can define
$$\sigma_{ab}^+ := \langle \partial_a, \nabla^4_{\partial_b} L \rangle \quad \quad \sigma_{ab}^- := \langle \partial_a, \nabla^4_{\partial_b} \Bar{L} \rangle $$
then we have that the null expansions of the null cone are given by the traces $\theta^+ = \operatorname{tr} \sigma^+ $ and $\theta^- =\operatorname{tr} \sigma^-$. In this setting, and with the help of normal coordinates ($y^0,\, y^i$, $i=1,\dots,3$, with $\frac{\partial}{\partial y^0}=e_0 $), the vectors $L$ and $\Bar{L}$ can be expressed as
$$L= e_0 + \nu + \mathcal{O}(l) \quad \quad \Bar{L}= \frac{1}{2}(e_0 - \nu) + \mathcal{O}(l) $$
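As a quick sanity check (assuming the signature convention $(-,+,+,+)$ at $p$; this verification is not in the original argument), the leading terms above satisfy the required normalizations:
$$ \langle L, L\rangle = \langle e_0,e_0\rangle + \langle \nu,\nu\rangle = 0, \qquad \langle L , e_0 \rangle =-1, \qquad \langle \Bar{L} , L \rangle = \tfrac{1}{2}\big(\langle e_0,e_0\rangle - \langle \nu,\nu\rangle\big) =-1 $$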
where $ \nu = x^i \frac{\partial}{\partial y^i}$ and $x \in \mathbb{S}^2 $. We will consider a situation as in Figure \ref{figure}, that is, we suppose that the vector $e_0$ is normal to a hypersurface $M$. Using the results obtained in \cite{Wang}, the induced metric on $\Sigma_l$ is given by
\begin{equation}\label{metriclight}
\begin{split}
g^{lc}_{ab} &= l^2 \eta_{ab} +\frac{1}{3} \mathrm{Rm}^4(e_0 + \nu,\partial_a ,\partial_b,e_0 + \nu) l^2 + \mathcal{O}(l^3)\\
\end{split}
\end{equation}
where $\eta $ is the standard metric on the sphere $\mathbb{S}^2$ and $\mathrm{Rm}^4$ is evaluated at $p$, the area of $\Sigma_l$ is given by
\begin{equation}\label{volligt}
|\Sigma_l| =4 \pi l^2 -\frac{2 \pi}{9}l^4 (4 \mathrm{Ric}^4 (e_0,e_0) + \mathrm{Sc}^4 ) + \mathcal{O}(l^6)
\end{equation}
Finally, by \cite[Lemmas 3.2 and 3.3]{Wang} we have that the expansions are
\begin{equation}
\begin{split}
\theta^+(l) =& \frac{2}{l} - \frac{1}{3} \mathrm{Ric}^4 (e_0 + \nu,e_0 + \nu) l+ \mathcal{O}(l^3)\\
\theta^-(l) =& -\frac{1}{l} - \big( \frac{2}{3}\mathrm{Ric}^4 (e_0 + \nu,\frac{1}{2}(e_0 - \nu)) - \mathrm{Rm}^4 (e_0 + \nu,\frac{1}{2}(e_0 - \nu),e_0 + \nu, \frac{1}{2}(e_0 - \nu))\\
&+ \frac{1}{6} \mathrm{Ric}^4 (e_0 + \nu,e_0 + \nu)\big) l + \mathcal{O}(l^3)\\
\end{split}
\end{equation}
and therefore using that the mean curvature of $\Sigma_l$ is given by $H= \frac{\theta^+}{2} -\theta^-$ we obtain
\begin{equation}\label{meanli}
\begin{split}
H_{lc}=& \frac{2}{l} + \left( \frac{1}{3} \mathrm{Ric}^4 (e_0 ,e_0) - \frac{1}{3} \mathrm{Ric}^4 (\nu,\nu)+\mathrm{Rm}^4 (\nu,e_0, e_0 , \nu) \right)l + \mathcal{O}(l^3)
\end{split}
\end{equation}
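As a consistency check (reading the last term in the expansion of $\theta^-$ as $\frac{1}{6} \mathrm{Ric}^4 (e_0 + \nu,e_0 + \nu)$; this verification is not in the original argument), the coefficient of $l$ in (\ref{meanli}) can be recovered directly: the $\frac{1}{6}$-terms of $\frac{\theta^+}{2}$ and $-\theta^-$ cancel, the remaining Ricci term is $\frac{1}{3}\mathrm{Ric}^4(e_0+\nu,e_0-\nu) = \frac{1}{3}\big(\mathrm{Ric}^4(e_0,e_0)-\mathrm{Ric}^4(\nu,\nu)\big)$, and since $(e_0+\nu)\wedge(e_0-\nu)=-2\,e_0\wedge\nu$ the curvature term simplifies to
$$ -\mathrm{Rm}^4\big(e_0 + \nu,\tfrac{1}{2}(e_0 - \nu),e_0 + \nu, \tfrac{1}{2}(e_0 - \nu)\big) = -\mathrm{Rm}^4(e_0,\nu,e_0,\nu) = \mathrm{Rm}^4 (\nu,e_0, e_0 , \nu) $$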
where everything is evaluated at $p$. Now we want to compare the light cuts with the geodesic spheres; for this we will consider two surfaces with the same (small) area, that is, $|S_r|=|\Sigma_l|$. First we want to find the difference between the parameters $r$ and $l$. Note that the area of a geodesic sphere of radius $r$ is given by
\begin{equation}\label{meangeo}
\begin{split}
|S_r| =&4 \pi r^2 -\frac{2 \pi}{9}r^4 \mathrm{Sc} + \mathcal{O}(r^6)\\
=& 4 \pi r^2 -\frac{2 \pi}{9}r^4( \mathrm{Sc}^4 +2 \mathrm{Ric}^4 (e_0 ,e_0) - (\operatorname{tr} k)^2 + |k|^2 )+ \mathcal{O}(r^6)
\end{split}
\end{equation}
where in the second line we used the Gauss equation $\mathrm{Sc}= \mathrm{Sc}^4 +2 \mathrm{Ric}^4 (e_0 ,e_0) - (\operatorname{tr} k)^2 + |k|^2 $ (for the Lorentzian setting). Now comparing (\ref{volligt}) and (\ref{meangeo}) we can obtain the following relation
\begin{equation}\label{r-l}
\begin{split}
r-l=& (18- (r^2 +l^2)\mathrm{Sc}^4 )^{-1} \left( \frac{r^4}{(r+l)} (|k|^2 -(\operatorname{tr} k)^2) + \frac{2 (r^4 -2l^4)}{(r+l)} \mathrm{Ric}^4 (e_0 ,e_0) + \mathcal{O}(l^5) + \mathcal{O}(r^5) \right)\\
=& \frac{1}{18} \left( \frac{r^4}{(r+l)} (|k|^2 -(\operatorname{tr} k)^2) + \frac{2 (r^4 -2l^4)}{(r+l)} \mathrm{Ric}^4 (e_0 ,e_0) + \mathcal{O}(l^5)+\mathcal{O}(r^5) \right)
\end{split}
\end{equation}
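To see where the first line of (\ref{r-l}) comes from (a routine check, not spelled out in the original argument): equating (\ref{volligt}) and (\ref{meangeo}), multiplying by $\frac{9}{2\pi}$ and using the Gauss equation, one obtains
$$ 18\,(r^2 - l^2) = (r^4 - l^4)\,\mathrm{Sc}^4 + r^4\big(|k|^2 - (\operatorname{tr} k)^2\big) + 2(r^4 - 2l^4)\, \mathrm{Ric}^4 (e_0 ,e_0) + \mathcal{O}(l^6)+\mathcal{O}(r^6), $$
and dividing by $r+l$ and absorbing the term $(r-l)(r^2+l^2)\mathrm{Sc}^4$ into the left hand side gives
$$ \big(18- (r^2 +l^2)\mathrm{Sc}^4 \big)(r-l)= \frac{r^4}{(r+l)} (|k|^2 -(\operatorname{tr} k)^2) + \frac{2 (r^4 -2l^4)}{(r+l)} \mathrm{Ric}^4 (e_0 ,e_0) + \mathcal{O}(l^5) + \mathcal{O}(r^5) $$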
where we consider $r$ and $l$ to be small. As our surfaces are both parameterized over $[0, \delta) \times \mathbb{S}^2$ for some $\delta>0$, we can compare their different geometric quantities as functions. First note that in normal coordinates the metric of the geodesic spheres can be expressed as (by using the Gauss equation)
\begin{equation}
\begin{split}
g^{G}_{ab} &= r^2 \eta_{ab} +\frac{1}{3} \mathrm{Rm}(\nu,\partial_a ,\partial_b, \nu) r^2 + \mathcal{O}(r^3)\\
&= r^2 \eta_{ab} +\frac{1}{3}\big( \mathrm{Rm}^4(\nu,\partial_a ,\partial_b, \nu)-k(\nu,\nu)k(\partial_a ,\partial_b)+ k(\nu,\partial_a)k(\nu ,\partial_b) \big) r^2 + \mathcal{O}(r^3)
\end{split}
\end{equation}
This expansion of the metric is similar to the one for the metric of the light cut (\ref{metriclight}): the first term is just the metric of the round sphere, but the second terms of the expansions are different. This suggests that the two spheres are intrinsically different, but comparing the metrics is not enough, since they are coordinate dependent quantities. We will therefore compare different scalars directly to see that both spheres are geometrically distinct. First we compare the scalar curvature of the two spheres. By \cite[Lemma 3.6]{Wang} we have that the scalar curvature of the light cuts is given by
\begin{equation}
\mathrm{Sc}_{lc} = \frac{2}{l^2} + \mathrm{Sc}^4 + \frac{8}{3}(\mathrm{Ric}^4 (e_0 ,e_0) -\mathrm{Ric}^4 (\nu ,\nu) ) -4\mathrm{Rm}^4(e_0 ,\nu,e_0,\nu) + \mathcal{O}(l^2)
\end{equation}
where $\mathrm{Sc}^4$, $\mathrm{Ric}^4$ and $\mathrm{Rm}^4$ are evaluated at $p$. For the case of a geodesic sphere, the Gauss curvature was calculated in \cite{locwill}, and from this we obtain
\begin{equation}
\begin{split}
\mathrm{Sc}_{G} =& \frac{2}{r^2} - \frac{2}{3} \mathrm{Ric}(\nu,\nu)+\mathcal{O}(r)\\
=& \frac{2}{r^2} - \frac{2}{3}\big(\mathrm{Ric}^4(\nu,\nu) + \mathrm{Rm}^4(\nu,e_0,e_0,\nu) - \operatorname{tr} k \, k(\nu,\nu) + \langle k(\nu, \cdot), k(\cdot, \nu) \rangle \big) +\mathcal{O}(r)
\end{split}
\end{equation}
where, as always, all the quantities are evaluated at the point $p$ and $ \nu = x^i \frac{\partial}{\partial y^i}$ for $x \in \mathbb{S}^2 $.
Now, as both spheres are parameterized on $[0, \delta) \times \mathbb{S}^2$, we compare the two scalar curvatures as functions over $[0, \delta) \times \mathbb{S}^2$ (assuming that they are evaluated at the same point $x\in \mathbb{S}^2$) and use (\ref{r-l}) to obtain
\begin{equation}
\begin{split}
\mathrm{Sc}_{G}- \mathrm{Sc}_{lc} =& 2T(\nu,\nu)- \frac{8}{3} \mathrm{Ric}^4 (e_0 ,e_0)
+\frac{2}{3} ( \operatorname{tr} k \, k(\nu,\nu) - \langle k(\nu, \cdot), k(\cdot, \nu) \rangle)\\ & - \frac{14}{3}\mathrm{Rm}^4(\nu,e_0,e_0,\nu) + \mathcal{O}(r) + \mathcal{O}(l^2)
\end{split}
\end{equation}
where $T(\nu,\nu)= \mathrm{Ric}^4 (\nu ,\nu) -\frac{1}{2} \mathrm{Sc}^4$. As this quantity is in general nonzero, we conclude that the spheres are intrinsically different (note that if we consider the two functions to be evaluated at two different points of $\mathbb{S}^2$, the quantity is also in general nonzero).
We continue with the mean curvature of the surfaces, which gives us a measure of their extrinsic curvature. In the case of the geodesic sphere by (\ref{H}) and the Gauss equation its mean curvature can be expressed as
\begin{equation}\label{H2}
H_G(x)= \frac{2}{r}-\frac{1}{3} ( \mathrm{Ric}^4(\nu,\nu) + \mathrm{Rm}^4(\nu,e_0,e_0,\nu) - \operatorname{tr} k \, k(\nu,\nu) + \langle k(\nu, \cdot), k(\cdot, \nu) \rangle) r + \mathcal{O}(r^4).
\end{equation}
Now we compare the two mean curvatures (\ref{meanli}) and (\ref{H2}) (considering that they are evaluated in the same point $x\in \mathbb{S}^2$) using (\ref{r-l}) obtaining after some calculations
\begin{equation}
\begin{split}
H_G- H_{lc}=& -\frac{1}{3} \Big(\frac{2}{3} \mathrm{Ric}^4 (e_0 ,e_0) +\frac{1}{6}(|k|^2 -(\operatorname{tr} k)^2) +4\mathrm{Rm}^4(\nu,e_0,e_0,\nu)
+ \langle k(\nu, \cdot), k(\cdot, \nu) \rangle\\ &- \operatorname{tr} k \, k(\nu,\nu) \Big)r+\mathcal{O}(r^2) +\mathcal{O}(l^2)
\end{split}
\end{equation}
This result is in general nonzero (as before, even if the functions are evaluated at two different points of $\mathbb{S}^2$). Thus in general the light cuts and the geodesic spheres are intrinsically and extrinsically quite different, yielding different values for the Hawking energy. However, it is direct to see that if we consider a totally geodesic hypersurface ($k=0$) then both small sphere limits agree, and if in addition we are in Minkowski space ($\mathrm{Rm}^4=0$) then the two spheres are geometrically identical.
\begin{remark}\label{excess}
Note that when comparing the local expansion of the Hawking energy along the critical surfaces (this is the expansion (\ref{geoexp}), as the surfaces converge to geodesic spheres) with the expansion along light cuts (\ref{lightex}), which in principle captures the energy in the right way, we obtain
\begin{equation}
\mathcal{E}(S_r) - \mathcal{E}(\Sigma_l)= \frac{6}{5} |\mathring{k}|^2 l^3 + \mathcal{O}(r^5) + \mathcal{O}(l^5)>0
\end{equation}
where we consider $|S_r|=|\Sigma_l|$ and used (\ref{r-l}) with $l$ and $r$ small. This suggests that the geodesic spheres and the critical surfaces of the Hawking functional induce an excess of energy as measured by the Hawking energy. This is a result to take into account when evaluating the Hawking energy on these surfaces.
\end{remark}
\begin{remark}
Note that the study of the small sphere limit for quasi local energies is not the only place where these geometric discrepancies are relevant; they are also present when studying small causal diamonds, as was done in \cite{Wang2} by Wang. The edge of a causal diamond in Minkowski space can be thought of as the intersection of two light cones, as a spacelike geodesic sphere emerging from the center of the diamond, or as a light cut of one of the two intersecting cones. When considering it to be a geodesic sphere, the Einstein tensor can be obtained by comparing the area of the edge (so the area of the geodesic sphere) in an arbitrary spacetime with the area of the edge in Minkowski spacetime. In \cite{Wang2} this property was studied for the three definitions of diamonds, in higher dimensions and also in the vacuum case, obtaining different results in each case (not always proportional to the Einstein tensor), which differ because of the geometric differences of the edges.
\end{remark}
\paragraph*{\emph{Acknowledgements.}} We would like to thank Jan Metzger and Claudio Paganini for the helpful discussions about this work, and also thank Jinzhao Wang for the interesting discussions about his results \cite{Wang2} and \cite{Wang}. This research is supported by the International Max Planck Research School for Mathematical and Physical Aspects of Gravitation, Cosmology and Quantum Field Theory.
2209.04755
\section{Introduction}
Although the gaseous nature of the Sun means that its interior can be known only through models, the outer layers, such as the photosphere, chromosphere, and corona, can be observed. Most solar phenomena occur in these upper layers, such as solar flares, which appear in the chromosphere and photosphere.
The heliographic distribution of solar flares has previously been studied with different methods. \cite{Gnevyshev,Shrivastava,Zharkova_Zharkov,Pandey,abdelsattar,Mawad} studied the latitudinal distribution. One previous study of flare locations on the solar disk found that the latitude of solar flares varies with solar activity \cite{Aschwanden}.
\cite{Jetsu, Cliver, Li, Loumou} studied the longitudinal distribution of solar flares. The latitudinal and longitudinal distributions were studied together, and it was found that solar flares occur at a specific latitude called the ``eruptive latitude'' \cite{Mawad}. Most solar flares occur near or within active regions, because flares are powered by magnetic energy.
The different layers of the Sun can only be seen under certain conditions or in certain bands, except for the innermost layer of the Sun's atmosphere, the photosphere. It is the layer from which most of the Sun's energy is radiated, and it is always visible; we can observe it directly. Despite being the highest layer, the corona cannot be seen directly. But is it possible to observe the inner layers of the Sun, such as the solar core? Is it possible to observe the impact of the inner layers on the solar surface?
The energy produced in the solar core must pass through large amounts of plasma to reach the solar surface, where it is radiated away. It is transported in two main ways, radiation and convection, with the transport switching from radiation to convection in the outer interior. The transmission of this energy to the photosphere, and hence the visibility of the interior layers, depends on the opacity of the convection zone. Geometrically, the observer's line of sight penetrates the inner layers of the Sun, so in principle they lie in view. Unfortunately, the models give a high opacity for the convection zone \cite{Turck}, so those inner layers cannot be seen directly. Modern studies such as \cite{Thompson,Turck} have used helioseismology \cite{Turck_2011} as an indication of what is inside the Sun.
It is known that the core of the Sun, as seen by the observer, lies behind the center of the solar disk. Likewise, near the solar limb, the convection zone lies behind the solar disk. Accordingly, the vertical cross-section of the Sun at a heliographic longitude of 90° can simulate the inner solar layers. Therefore, the distance of a solar flare from the center of the solar disk may simulate the inner layers of the Sun.
In this study, I examine the angular distance of solar flares from the center of the solar disk, to test whether its distribution simulates the solar inner layers.
\section{Distance Distribution}
\label{sect:dist}
As we can see in figure~\ref{fig1}~(A) of the horizontal sector of the Sun, a solar flare point (denoted F) lies on the spherical surface of the Sun. The flare point has an image on the solar disk in the background (denoted F'), after penetrating the inner layers of the Sun. Figure~\ref{fig1}~(B) shows various locations of solar flares on the solar disk.
The turquoise point is a solar flare occurring above the solar core; the turquoise line represents its central distance $D$, measured from the center of the solar disk to the flare's location. Consequently, this flare also lies above the rest of the inner layers. The black and indigo pointer lines represent the central distances of solar flares above other inner layers, such as the radiative and convection zones. The purple line is the central distance of a solar flare located at the solar limb, which equals 90°; it corresponds to the solar photosphere only and to a scalar distance of 1~\(R_\odot\), while the angular distance equals 0° for flares exactly at disk center. Now, we need to distribute the solar flares according to the central angular distance (hereafter, distance or $D$).
\begin{figure*}
\centering
\includegraphics[width=8cm]{fig_1a.png}
\includegraphics[width=8cm]{fig_1b.png}
\caption{Plot (A): Equatorial and latitudinal sectors of the Sun (horizontal sector). The green circle represents the solar latitude. The black circle represents the projection of the solar latitude on the solar disk. F is the solar flare on the spherical surface. While F’ is the projection of the solar flare on the solar disk. Plot (B): The solar disk (vertical sector) of the sun. The turquoise line represents the distance D of the flare above the solar core. Consequently, it is above the rest of the inner layers. The black and indigo lines represent the solar flares above other inner layers such as radiative and convection zone layers. The purple line is the central distance of the solar flare above the solar limb. It represents the solar photosphere only.}
\label{fig1}
\end{figure*}
\subsection{Distance calculation method}
\label{sect:method}
The main factor in simulating the solar interior layers and projecting them on the solar disk is the distance of the solar flares. The distance is estimated by assuming the Sun is a spherical body and using the laws of the spherical triangle, as shown in figure~\ref{fig2}. Applying the cosine formula, the distance is given by
\begin{equation}
D = \arccos\big[\cos(\lambda) \cos(\beta) \big]
\end{equation}
Where $D$ is the flare's angular distance between the projected center of the solar disk on the sphere and the flare's heliographic location, and $\beta$ and $\lambda$ are the flare's heliographic latitude and longitude, respectively.
We can divide the distance range into 90 slices (intervals) of 1° each, covering 1--90°, which gives us a higher accuracy. The smallest circle, closest to the center, simulates the solar core; the largest is the circle at the limb.
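The distance formula and the 1° binning above can be sketched in a few lines. This is a minimal sketch, assuming heliographic longitude and latitude given in degrees; the function names are illustrative, not from the paper's pipeline:

```python
import math

def flare_distance(lon_deg, lat_deg):
    """Angular distance D (degrees) between a flare at heliographic
    (longitude, latitude) and the center of the solar disk, via the
    spherical-triangle cosine rule: cos D = cos(lambda) * cos(beta)."""
    lam = math.radians(lon_deg)
    beta = math.radians(lat_deg)
    return math.degrees(math.acos(math.cos(lam) * math.cos(beta)))

def bin_distances(distances, width_deg=1.0):
    """Count flares in 1-degree distance slices over 0-90 degrees."""
    counts = [0] * int(90 / width_deg)
    for d in distances:
        idx = min(int(d // width_deg), len(counts) - 1)
        counts[idx] += 1
    return counts
```

A flare on the central meridian at latitude 15° then has a distance of exactly 15°, and a limb flare at longitude 90° has a distance of 90°.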
\begin{figure*}
\centering
\includegraphics[height=10cm]{fig_2.png}
\caption{The spherical triangle of the projected solar flare F on the solar surface. C denotes the center of the solar disk; this center is positioned on the solar equator (orange circle). The right green great circle is the flare's longitude circle passing through the pole P, while the left green great circle is the central meridian of the Sun. The arc CF is the angular distance between the center of the solar disk and the flare.}
\label{fig2}
\end{figure*}
\section{Results and Discussions}
\label{sect:result}
The distance was calculated for all solar X-ray (SXR) flares during the period 1975--2021 observed by $GOES$ \cite{Winter}, and the flares were counted according to their distance. $RHESSI$ solar flares during the period 2002--2021 were added for comparison.
Figure \ref{fig4}~(A) shows the calculated distance for all flares during the selected period, with the flare counts at each distance. The shape of the distance distribution clearly suggests that the flares trace the inner layers.
The central disk events, 0 < $D$ < 15\textdegree, are very few; this region reflects the solar core on the inner side. Furthermore, the number of events at the limb, 80\textdegree < $D$ < 90\textdegree, is very low, reflecting only photosphere and chromosphere events. A large number of flares occur at distances of about 15\textdegree--20\textdegree; this region corresponds to the inner side of the radiative zone, just beyond the solar core's projection. The middle area, 20\textdegree < $D$ < 80\textdegree, has a large number of X-ray events because it reflects many interior layers in the background.
\begin{figure*}
\centering
\includegraphics[width=5.4cm]{fig_4a.png}
\includegraphics[width=5.4cm]{fig_4b.png}
\includegraphics[width=5.4cm]{fig_4c.png}
\caption{The central distance distribution ($D$\textdegree) of the X-ray solar flares, by flare count, during all solar cycles (panel A) and during each cycle (panel B) for GOES. Panel C shows the RHESSI flares.}
\label{fig4}
\end{figure*}
The data are classified by solar cycle, as shown in figure~\ref{fig4}~(B), to check how solar activity affects the shape of the curve. We note that the shape remains the same during each solar cycle (cycles 21 to 24).
Also, about four peaks can be seen over the distance range 0--90°. The main and highest peak (hereafter, the core peak) is at a distance of about 15°--20° and reflects the solar core. The smaller peaks should then reflect the other interior layers, including the radiative and convection zones, which we discuss briefly below.
The core's peak moves and changes slightly with time. We notice that the largest radius occurs for cycles 21, 22, 24, and then 23. This suggests that the inferred core radius increases as the strength and activity of the solar cycle increase, and vice versa.
It is worth noting that I repeated the same graph shown in figure~\ref{fig4} with the data classified according to the flares' GOES classes (B, C, M, and X). I also examined the effect of solar activity by classifying the data into quiet and active periods, for the whole selected period and for each cycle. No significant difference was found: the curvature of figure~\ref{fig4} remains similar.
The same plot is produced with RHESSI solar flares over the same period, as shown in figure~\ref{fig4}~(C), to check how the observing band (X-ray in the current study) affects the shape of the curve. The shape remains the same at distances greater than 1°. However, the RHESSI data show a huge number of events at the center of the solar disk ($D$ = 0\textdegree), unlike GOES. About 90,000 flare events have a distance greater than 1°, but about 25,000 flare events occur exactly at the center of the solar disk.
For additional confirmation of this result, we can calculate the total importance $I_D$ of the GOES flares that occurred at the same distance:
\begin{equation}
I_D = \sum_{n} \frac{1}{f_n},
\end{equation}
Where $f_n$ is the class factor of flare $n$: 1 for the X-class, 10 for the M-class, 100 for the C-class, and 1000 for the B-class, and $n$ indexes the flare events that occurred at the same distance $D$. $I_D$ thus represents the total solar flare importance, in X-class units, at the distance $D$. Figure~\ref{fig5} shows that the total importance is compatible with the curve of the event counts, but the peaks stand out more clearly than in the count distribution. It is clear that we again have four peaks, similar to figure~\ref{fig4}.
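The importance weighting described above can be sketched as follows. This is a minimal sketch assuming the X-class-unit convention stated in the text (an X-class flare contributes 1, M-class 1/10, C-class 1/100, B-class 1/1000); the function name and input format are illustrative:

```python
# Class factors as described in the text (assumed convention:
# importance in X-class units is 1 / factor).
CLASS_FACTOR = {"X": 1.0, "M": 10.0, "C": 100.0, "B": 1000.0}

def total_importance(flares):
    """flares: iterable of (distance_deg, goes_class) pairs.
    Returns {integer distance bin: summed importance in X-class units}."""
    totals = {}
    for d, cls in flares:
        totals[int(d)] = totals.get(int(d), 0.0) + 1.0 / CLASS_FACTOR[cls]
    return totals
```

For example, one X-class and one M-class flare in the 15° bin together contribute 1.1 X-class units to that bin.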
\begin{figure*}
\centering
\includegraphics[height=8.7cm]{fig_5a.png}
\includegraphics[height=8.7cm]{fig_5b.png}
\caption{The central distance distribution ($D$°) of the X-ray solar flares, by flare importance $I$ in X-class units, during all solar cycles (panel A) and during each cycle (panel B).}
\label{fig5}
\end{figure*}
The peaks in the distance curve may correspond to the solar inner layers, or they may represent something else. Therefore, we will label the regions represented on the solar disk with names inspired by the inner layers, as follows:
\begin{itemize}
\item \emph{The core circle}: It is a projection of the solar core on the solar disk.
\item \emph{The radiative ring}: It is a projection of the radiative zone on the solar disk.
\item \emph{The convection ring}: It is a projection of the convection zone on the solar disk.
\item \emph{The limb ring}: It is a projection of the photosphere on the solar disk.
\end{itemize}
\subsection{The relationship between distance \& radius}
\label{sect:depth}
Previous studies calculated the radii of the inner layers in units of the solar radius \(R_\odot\). This differs from the measurement used in this study, which reflects the inner layers projected on the disk. Within the scope of this study, we must therefore convert the radius from a scalar distance in \(R_\odot\) to an angular distance $D$°, so that the units are unified and the current results can be compared with previous studies. Figure~\ref{fig3} depicts the sector of the Sun's great circle $CF$ shown in figure~\ref{fig2}. The black line is the projected solar diameter on the solar disk; it coincides with the solar equator if the flare lies on the equator. The distance $D$ of any interior layer with depth radius $r$ is given by
\begin{equation}
\sin(D) = r / R_\odot
\end{equation}
By putting the solar radius \(R_\odot\) = 1, then
\begin{equation}
\label{depth}
D = \arcsin(r)
\end{equation}
\begin{figure*}
\centering
\includegraphics[height=10cm]{fig_3.png}
\caption{The scheme of the great circle CF as shown in figure~\ref{fig2}.}
\label{fig3}%
\end{figure*}
\subsection{The disk rings and inner layer radii}
\label{sect:radius}
Figures~\ref{fig4} and \ref{fig5} show four peaks beyond the core's peak, including two peaks beyond the convection zone. Each peak denotes a disk ring, and each has boundaries marked by two crests. These crests appear clearly in the weak solar cycles 23 and 24, especially in solar cycle 23, while the peaks overlap during strong solar cycles such as 21 and 22. The longest distance corresponds to the radiative ring. The boundary between the core and radiative rings is not clear because the curve rises sharply within the solar core.
We already know that X-rays cannot reach the surface easily and directly from the solar core. However, the increase in the number of solar flares in the core-disk region, which simulates the core of the Sun, makes us wonder why. I will therefore compare the radii of the solar inner layers with the angular distances of the solar flares, which may or may not reveal a correlation.
The solar distance of the core--radiative zone boundary is about 0.25 \(R_\odot\) according to \cite{García,Ryan}. According to \cite{Christensen}, the radiative--convection boundary occurs at about 0.71 \(R_\odot\). Using equation \ref{depth}, hence,
\begin{equation}\label{eq2}
D_c = \arcsin(0.25) \simeq 15 ^{\circ},
\end{equation}
\begin{equation}
D_r = \arcsin(0.71) \simeq 45 ^{\circ},
\end{equation}
\begin{equation}
D_v = \arcsin(0.81) \simeq 55 ^{\circ},
\end{equation}
Where $D_c$, $D_r$, and $D_v$ are the distances from the disk center to the projected boundaries of the core, radiative, and convection zones, respectively.
This result is consistent with the current results shown in figures \ref{fig4} and ~\ref{fig5}, where the 15° distance represents the core disk (peak of the core).
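These conversions can be checked numerically with $D = \arcsin(r)$. A minimal sketch, using the literature radii quoted above (the variable names are illustrative):

```python
import math

# Projected angular distances of the inner-layer boundaries,
# D = arcsin(r), with r in units of the solar radius (R_sun = 1).
boundaries = {"core": 0.25, "radiative": 0.71, "convection": 0.81}
distances = {name: math.degrees(math.asin(r)) for name, r in boundaries.items()}
# core ~ 14.5 deg, radiative ~ 45.2 deg, convection ~ 54.1 deg
```

These values round to the 15°, 45°, and 55° quoted in the text.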
\subsection{Distance Model}
The first step is to assume that the solar surface is a spherical body. The projections of the solar interior layers on the solar disk appear as circles around the disk center. We can split the disk into 90 such circles centered on the disk center, with a suggested angular interval of 1° between them. We need to calculate the area of these projected circles on the real sphere, on the front side and in the background, including the back side. If we calculate it for the front side, we can multiply it by a number $n$ to obtain the areas of the background layers, including the back side. The projection of a pair of circles onto the real sphere is called a ``segment'', whose area I want to estimate. Each segment is bounded by two circles, upper and lower, and each circle has a central angle: $\theta$ for the upper (far) circle and $\phi$ for the lower (near) circle, measured between any point on the circle and the direction from the center of the solar disk to the Earth (in heliographic coordinates). The solar sphere and the segment whose area we want to estimate are depicted in figure \ref{fig6}.
\begin{figure*}
\centering
\includegraphics[height=10cm]{fig_6.png}
\caption{The schematic of the Sun. $C$ is the center of the Sun and of the solar disk. The black circle is the solar disk as projected for the observer. The spherical cap is the upper boundary of the spherical segment of the solar flares. The difference between the areas of the upper and lower caps gives the segment area.}
\label{fig6}
\end{figure*}
The area of a segment \cite{Donaldson_Siegel} is the difference between the areas of its two bounding caps. The area of a spherical cap of central angle $\theta$ can be written
\begin{equation}
A = 2 \pi R_{\odot}^2 \big[1-\cos(\theta)\big]
\end{equation}
Where $A$ is the area of the cap subtending the angle $\theta$. The areas of the two bounding caps, with angles $\theta$ and $(\theta+1^{\circ})$, are then
\begin{equation}
A_{\theta} = 2 \pi R_{\odot}^2 \big[1-\cos(\theta) \big]
\end{equation}
\begin{equation}
A_{(\theta+1)} = 2 \pi R_{\odot}^2 \big[1- \cos(\theta+1^{\circ}) \big]
\end{equation}
The segment area formula then becomes
\begin{equation}
A = \mid A_{\theta} - A_{(\theta+1)} \mid
= 2 \pi R_{\odot}^2 \big[\cos(\theta)-\cos(\theta+1^\circ )\big]
\end{equation}
\begin{equation}
A = 2 \pi R_{\odot}^2 \big[\cos(\theta) - \cos(\theta) \cos(1^{\circ}) + \sin(\theta) \sin(1^{\circ}) \big]
\end{equation}
But $R_{\odot}=1$ , $\sin(1^{\circ}) \approx \frac{\pi}{180}$, and $\cos(1^{\circ}) \approx 1$ then
\begin{equation}
\label{eqn:distance}
A \approx 2 \frac{\pi^2}{180} \sin(\theta)
\end{equation}
Equation \ref{eqn:distance} is a sinusoidal function of $\theta$. To integrate this segment area over all the background layers down to the back side of the photosphere, the expression becomes a sum of sinusoids (\cite{Press}), written as
\begin{equation}
\label{eqn:sinusoid}
I_D = v + \sum_{n=1}^{m} a_n \cos(D \, T_n)
\end{equation}
where $a_n$ are the amplitudes contributed by the inner layers, $T_n$ are the angular frequencies (with periods in degrees), $v$ is the offset value, and $D$ is the distance at which the flare count $n$, or the total importance $I$ in X-class units, is evaluated. I set $m = 3$ because this value gives a high correlation coefficient with the simplest equation.
The compatibility of equation \ref{eqn:sinusoid} with the experimental data was investigated, and equation \ref{eqn:sinusoid} was found to give a strong coefficient of determination $R^2$, as shown in Table 1: $R^2$ equals $0.97$ and $0.752$ for the count and the importance of the flares, respectively. The fitted sinusoid amplitudes are much greater than the single-segment value $2\pi^2/180$ of equation \ref{eqn:distance}, indicating that many layers in the background add to the front and back sides of the surface. The left panels of figures \ref{fig4} and \ref{fig5} show the fitting curve.
\begin{table}
\caption{The sum-of-sinusoids fitting parameters. $R^2$ is the coefficient of determination, $v$ is the offset value, $a$ is the amplitude, $T$ is the period, $P$ is the significance of the fit ($P$-test), and Chi-sq is the fitting error.}
\begin{tabular}{lllll}
\toprule
\multirow{2}{*}{$n$} & \multicolumn{2}{l}{Count} & \multicolumn{2}{l}{Importance} \\
 & $a$ & $T$ & $a$ & $T$\\
\midrule
1 & $-233.9$ & 4.408° & $-14.84$ & 4.408°\\
2 & $-165.8$ & 7.804° & $-12.13$ & 7.804°\\
3 & $-68.5$ & 11.053° & $-3.895$ & 11.053°\\
$v$ & \multicolumn{2}{l}{428.4} & \multicolumn{2}{l}{28.82} \\
$R^2$ & \multicolumn{2}{l}{0.97} & \multicolumn{2}{l}{0.752} \\
Chi-sq & \multicolumn{2}{l}{1.099E05} & \multicolumn{2}{l}{4059} \\
$P$ & \multicolumn{2}{l}{1.4266E-67} & \multicolumn{2}{l}{2.708E-29} \\
\bottomrule
\end{tabular}
\end{table}
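Evaluating the fitted model with the count parameters of Table 1 can be sketched as follows. This is a minimal sketch; treating $v$ as an additive offset and the product $D\,T_n$ as an angle in degrees is an assumption about the fitted model's convention:

```python
import math

# Sum-of-sinusoids model with the count-fit parameters of Table 1:
# offset v and (amplitude, period) pairs; D * T_n is taken in degrees.
V = 428.4
PARAMS = [(-233.9, 4.408), (-165.8, 7.804), (-68.5, 11.053)]

def model_count(d_deg):
    """Modelled flare count at central distance d_deg (degrees)."""
    return V + sum(a * math.cos(math.radians(d_deg * t)) for a, t in PARAMS)
```

At $D = 0$ all cosines equal 1, so the model reduces to the offset plus the summed amplitudes.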
\section{Conclusions}
\label{sect:concl}
We studied the angular distance of solar flares from the projection of the center of the Sun on the solar disk, using solar flare data obtained from GOES during the period 1975--2021 and from RHESSI during the period 2002--2021.
The solar disk was divided into 90 central rings, and the number of solar flares that occurred in each ring was compiled. It turns out that the distribution has a definite oscillating shape.
It has also been noted that the shape of the distance distribution of solar flares remains the same with the different GOES classifications, and with different observations according to the different satellites.
The shape of the curve also did not change when the total importance value of the solar flares was used instead of their number; however, this distribution shows the peaks in the curve more clearly.
A fixed number of peaks is observed, present in each cycle. The peaks do, however, sway slightly with each solar activity cycle, apparently in relation to the strength of the cycle. The number of these peaks is four.
It was noted that these peaks form central rings that simulate the inner layers of the Sun, so we divided the solar disk into specific rings in line with the solar inner layers, as these rings simulate the inner layers seen in the background.
These rings were divided according to the peaks in the curve into
\begin{itemize}
\item \emph{The core circle}: It is a projection of the solar core onto the solar disk, with flare distances in the range 0--15°. This ring has very few solar flares in GOES and many in RHESSI. In particular, no solar flares occur at $D$ = 0 except in RHESSI, which detected more than 25,000 flare events at a distance of 0.
\item \emph{The radiative ring}: it is a projection of the radiative zone on the solar disk. It has solar flare distances in the range of 15°–45°.
\item \emph{The convection ring}: it is a projection of the convection zone on the solar disk. It has solar flare distances in the range of 45°–55°.
\item \emph{The limb ring}: it is a projection of the photosphere on the solar disk. It has a very low number of flare events. It has a range of 80°–90°.
\end{itemize}
A large number of solar flares occurred in the radiative and convection rings. While we have a few events in the core and limb rings.
This result makes us wonder: why does the number of flares differ with the distance from the center? Why are there so few flares on the core disk? Why are there a few flares on the limb disk? Why are there so few flares at the center of the solar disk for GOES but a huge number with RHESSI?
We expect a large number at 15° because the distance is partly composed of the latitude, and it is known that most solar flares occur at a latitude of 15° (\cite{Mawad,abdelsattar}). Understanding the rest of the distribution requires further studies focused on the distance distribution.
\section*{Data sources}
The X-ray solar flare data were obtained by the GOES satellites from the URL: \url{https://hesperia.gsfc.nasa.gov/goes/goes_event_listings/} for the solar period 1975--2021, and by RHESSI from the URL: \url{https://hesperia.gsfc.nasa.gov/rhessi3/data-access/rhessi-data/flare-list/index.html}
\section*{Acknowledgments}
The author thanks the teams of \emph{GOES} and \emph{RHESSI} for supporting the data that helped complete this study.
\bibliographystyle{unsrtnat}
2005.04169
\subsection*{Summary}
Equilibrium Propagation (EP) \cite{EP} is a biologically inspired alternative algorithm to backpropagation (BP) for training neural networks.
It applies to RNNs fed by a static input $x$ that settle to a steady state, such as Hopfield networks.
EP is similar to BP in that in the second phase of training, an error signal propagates backwards in the layers of the network, but contrary to BP, the learning rule of EP is spatially local.
Nonetheless, EP suffers from two major limitations.
On the one hand, due to its formulation in terms of real-time dynamics, EP entails long simulation times, which limits its applicability to practical tasks.
On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time:
the synapse update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.
Our work addresses these two issues and
aims at widening the spectrum of EP from standard machine learning models to more bio-realistic neural networks.
First, we propose a discrete-time formulation of EP which simplifies the equations, speeds up training, and extends EP to CNNs. Our CNN model achieves the best performance ever reported on MNIST with EP.
Using the same discrete-time formulation, we introduce \emph{Continual Equilibrium Propagation} (C-EP):
the weights of the network are adjusted continually in the second phase of training using local information in space and time. We show that in the limit of slow changes of synaptic strengths and small nudging, C-EP is equivalent to BPTT (Theorem~1). We numerically demonstrate Theorem~1 and C-EP training on MNIST and generalize it to the bio-realistic situation of a neural network with asymmetric connections between neurons.
\vspace{-0.5cm}
\subsection*{A simplified formalism}
In the original formulation of EP \cite{EP}, the dynamics of the neurons (denoted $s_t$) is in real time and derives from an energy function: $\frac{ds_t}{dt}=-\frac{\partial E}{\partial s}(s_t)$.
By discretizing the real-time dynamics with a time-discretization parameter $\epsilon$ and defining $\Phi(s) = \frac{1}{2}\norm{s}^2 - \epsilon E(s)$, the dynamics can be rewritten as:
\begin{equation}
s_{t+1} = \frac{\partial \Phi}{\partial s}(x,s_t,\theta),
\label{eq:formalism}
\end{equation}
Our first contribution is to generalize EP to any $\Phi$.
\paragraph{Algorithm.}
EP proceeds in two phases.
In the first phase, neurons evolve freely following Eq.~(\ref{eq:formalism}) towards a first steady state $s_*$.
In the second phase, the subset of output neurons $\hat{y}$ is elastically nudged towards a target $y$ until the neurons reach a new steady state $s_*^\beta$.
More technically, $s_{t+1}^\beta = \frac{\partial \Phi}{\partial s}(x,s_t^\beta,\theta) - \beta\frac{\partial\ell}{\partial s}(s_t^\beta,y)$ with $\ell(s,y) = \frac{1}{2}\norm{\hat{y} - y}^2$ being the cost function.
The learning rule reads:
\begin{equation}
\Delta\theta = \frac{1}{\beta}\left(\frac{\partial\Phi}{\partial\theta}(x,s_*^\beta,\theta) - \frac{\partial\Phi}{\partial\theta}(x,s_*,\theta)\right)
\label{eq:learning-rule}
\end{equation}
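The two phases and the learning rule of Eq.~(\ref{eq:learning-rule}) can be sketched on a toy recurrent network. Everything below is an illustrative assumption, not the paper's model: a single symmetric-weight layer, $\tanh$ dynamics that only approximately derive from a primitive function $\Phi = \frac{1}{2}s^\top W s + s^\top x$, and a nudge applied to all neurons rather than an output subset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative sizes): symmetric weights W, input x, target y.
n, beta, lr = 5, 0.1, 0.05
W = 0.1 * rng.standard_normal((n, n)); W = (W + W.T) / 2
x, y = rng.standard_normal(n), rng.standard_normal(n)

def relax(s, nudge, steps=500):
    # nudge = 0 gives the free phase; nudge = beta elastically pulls s
    # towards the target y (here all neurons are nudged, a simplification).
    for _ in range(steps):
        s = np.tanh(W @ s + x) - nudge * (s - y)
    return s

s_free = relax(np.zeros(n), 0.0)     # first phase: free steady state
s_nudged = relax(s_free, beta)       # second phase: nudged steady state
# EP learning rule: contrast dPhi/dW = s s^T at the two steady states
dW = (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free)) / beta
W = W + lr * dW
```

Note that the update only needs quantities available at the two steady states, which is what makes the rule spatially local.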
\subsection*{Towards conventional machine learning: discrete-time CNNs}
We define our CNN model through the dynamics:
\begin{equation}
s_{t+1} = \sigma(\mathcal{P}(\theta \star s_{t})),
\label{eq:cnn-eq}
\end{equation}
where $\sigma$, $\star$ and $\mathcal{P}$ respectively denote an activation function, convolution and pooling.
We show that there exists a primitive function $\Phi$ such that $s_{t+1}\approx\frac{\partial \Phi}{\partial s}(s_t, \theta)$, which, using Eq.~(\ref{eq:learning-rule}), allows us to derive:
\begin{equation}
\Delta\theta = \frac{1}{\beta}\left(\mathcal{P}^{-1}(s^\beta_*) \star s^\beta_* - \mathcal{P}^{-1}(s_*) \star s_*\right).
\end{equation}
Our CNN model achieves $\sim 1 \%$ test error on MNIST. More generally, training fully connected architectures with equations of the form of Eq.~(\ref{eq:cnn-eq}) yields an acceleration by a factor of 5 to 8, without loss of accuracy compared to standard EP and BPTT.
\subsection*{Towards more biological plausibility: continual weight updates}
We define \emph{Continual Equilibrium Propagation} as a variant of EP where the first phase still reads as Eq.~(\ref{eq:formalism}) while during the second phase, both neurons and synapses evolve as dynamical variables:
\begin{align}
\left\{
\begin{array}{ll}
\displaystyle s_{t+1}^{\eta, \beta} &= \frac{\partial \Phi}{\partial s}(s_{t}^{\eta, \beta}) - \beta\frac{\partial \ell}{\partial s}(s_{t}^{\eta, \beta})\\
\displaystyle \theta_{t+1}^{\eta, \beta} &= \theta_{t}^{\eta, \beta} + \frac{\eta}{\beta}\left( \frac{\partial \Phi}{\partial \theta}(s_{t+1}^{\eta,\beta}) - \frac{\partial \Phi}{\partial \theta}(s_{t}^{\eta,\beta})\right).
\displaystyle
\end{array}
\right.
\label{eq:cep-dynamics}
\end{align}
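These continual updates can be sketched on a toy recurrent network; all specifics below (network form, sizes, the choice of $\Phi$, nudging all neurons) are illustrative assumptions. The key point is that each weight step uses only two consecutive states, so it is local in time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (illustrative sizes): symmetric weights W, input x, target y.
n, beta, eta = 5, 0.1, 0.01
W = 0.1 * rng.standard_normal((n, n)); W = (W + W.T) / 2
x, y = rng.standard_normal(n), rng.standard_normal(n)

s = np.zeros(n)
for _ in range(500):                  # first phase: free relaxation
    s = np.tanh(W @ s + x)

for _ in range(100):                  # second phase: continual C-EP updates
    s_next = np.tanh(W @ s + x) - beta * (s - y)
    # dPhi/dW = s s^T for Phi = s^T W s / 2 + s^T x (symmetric W)
    W = W + (eta / beta) * (np.outer(s_next, s_next) - np.outer(s, s))
    s = s_next
```

In the limit $\eta \to 0$ the accumulated weight change recovers the end-of-phase update of standard EP.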
We validate C-EP with training experiments on MNIST, whose accuracy approaches the one obtained with standard EP.
\vspace{-0.6cm}
\subsection*{Equivalence of EP and BPTT}
The BPTT approach for training is top-down:
defining the loss $\mathcal{L} = \ell(s_*,y)$,
it computes the gradients
\vspace{-0.2cm}
\begin{equation}
\nabla_{s}^{\rm BPTT}(t) = \frac{\partial \mathcal{L}}{\partial s_{T-t}}, \qquad
\nabla_{\theta}^{\rm BPTT}(t) = \frac{\partial \mathcal{L}}{\partial \theta_{T-t}},
\label{eq:bptt-grads}
\end{equation}
which in turn determine the weight updates.
In contrast, our approach is bottom-up: we first define the dynamics of Eq.~(\ref{eq:cep-dynamics}), then the EP updates
\vspace{-0.2cm}
\begin{align}
\left\{
\begin{array}{ll}
\Delta_{s}^{\rm EP}(\eta, \beta, t) & =\frac{1}{\beta}(s_{t+1}^{\eta,\beta} - s_{t}^{\eta,\beta}),\\
\Delta_{\theta}^{\rm EP}(\eta, \beta, t) & =\frac{1}{\beta}(\frac{\partial \Phi}{\partial\theta}(s_{t+1}^{\eta, \beta}) - \frac{\partial \Phi}{\partial\theta}(s_{t}^{\eta, \beta})),
\end{array}
\right.
\label{eq:ep-updates}
\end{align}
which we show to compute the gradients of the loss.
Theorem~1 states that, provided the first steady state is maintained long enough, the updates of EP are equal to the gradients obtained by BPTT:
\vspace{-0.3cm}
\begin{align}
\left\{
\begin{array}{l}
\displaystyle \lim_{\eta \to 0} \lim_{\beta \to 0} \Delta_{s}^{\rm EP}(\eta, \beta,t) = -\nabla_{s}^{\rm BPTT}(t),\\
\displaystyle \lim_{\eta \to 0} \lim_{\beta \to 0} \Delta_{\theta}^{\rm EP}(\eta, \beta,t) = -\nabla_{\theta}^{\rm BPTT}(t)
\displaystyle.
\end{array}
\right.
\label{eq:ep-bptt}
\end{align}
Theorem~1 is illustrated in Fig.~\ref{fig}~(a) with a computational graph, where each EP update on the right matches, in the same colour, the gradient provided by BPTT on the left. Fig.~\ref{fig}~(b) numerically demonstrates Theorem~1 for a system that exactly fulfils the conditions stated above, with $\eta = 0$ (`Standard EP') and $\eta > 0$ (`Continual EP'): plain and dashed lines (i.e. the $\Delta_{\theta}^{\rm EP}$ and $-\nabla_{\theta}^{\rm BPTT}$ processes) perfectly coincide, and split apart upon using a finite learning rate.
Note that the property also holds even in the non-ideal setting of our CNN model ('Discrete EP').
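As a numerical sanity check of these limits, the match between the EP update and the loss gradient can be reproduced on a toy scalar model. The sketch below is illustrative only: it assumes the energy-based formulation with $E(s)=\tfrac{1}{2}(s-\theta x)^2$ and $\ell(s)=\tfrac{1}{2}(s-y)^2$, not the models of this paper, and all constants are arbitrary.

```python
def ep_gradient_check(theta=0.7, x=1.3, y=0.5, beta=1e-4, eps=0.1, T=2000):
    """Compare the EP weight update with the true loss gradient on a
    scalar toy model: energy E(s) = 0.5*(s - theta*x)^2 and loss
    l(s) = 0.5*(s - y)^2 (illustrative constants, not the paper's models)."""
    # First phase: relax to the free steady state s_* (minimum of E).
    s = 0.0
    for _ in range(T):
        s -= eps * (s - theta * x)
    s_free = s
    # Second phase: relax with the output nudge of strength beta.
    for _ in range(T):
        s -= eps * ((s - theta * x) + beta * (s - y))
    s_nudged = s
    # EP update: -(1/beta) * (dE/dtheta at s_nudged - dE/dtheta at s_free),
    # where dE/dtheta = -(s - theta*x) * x.
    de_dtheta = lambda u: -(u - theta * x) * x
    ep_update = -(de_dtheta(s_nudged) - de_dtheta(s_free)) / beta
    # Ground truth: -dl/dtheta at the free fixed point s_* = theta*x.
    neg_grad = -(theta * x - y) * x
    return ep_update, neg_grad

ep_update, neg_grad = ep_gradient_check()
```

For small $\beta$ the two returned values agree up to $O(\beta)$, which mirrors the statement of the theorem in this degenerate one-neuron case.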
\vspace{-0.4cm}
\subsection*{Continual weight updates and asymmetric connections between neurons}
We extend C-EP to networks whose synaptic connections are asymmetric \cite{VF}, which we name \emph{Continual Vector Field EP} (C-VF).
On top of demonstrating learning with C-VF on MNIST, we further show numerically in Fig.~\ref{fig}~(c) that
the more closely a network satisfies Theorem~1 before training, the better it learns.
\vspace{-0.5cm}
\section{Introduction}
In this work, we continue a specific line of research on majority dynamics in dense random graphs \cite{Benjamini2016, Fountoulakis2020}.
We analyze the performance of a simple majority-rule protocol that solves a fundamental coordination problem in distributed computing, \emph{binary majority consensus}, under a probabilistic network model.
In the binary consensus problem, every agent is initially assigned some binary value, deterministically or probabilistically, referred to as the agent's initial opinion.
The objective of a protocol that solves consensus is to have all agents eventually decide on the same opinion, thus achieving unanimity throughout the system.
In binary majority consensus, if a majority of agents initially hold the same opinion, which is usually the case, then all agents must decide on this opinion. Majority consensus is utilized when, beyond facilitating agreement, the agreed-upon opinion itself holds importance.
Consensus is an elementary problem in distributed computing, as many other coordination problems were shown to be directly reducible to and from consensus. The list includes agreeing on what transactions to commit to a database \cite{gray2006consensus}, state machine replication \cite{antoniadis2018state}, atomic snapshots \cite{attiya1998atomic}, total ordering of concurrent events \cite{lamport2019time}, and many more.
We analyze the performance of the simple majority protocol (SMP) when the underlying network is governed by the Erd\H{o}s--R\'enyi random graph model $\mathsf{G}(n,p)$. In SMP, agents communicate in equal-length time intervals called communication rounds, or simply rounds. All messages are sent at the beginning of a communication round, and they arrive by the end of the round.
The SMP is briefly described as follows: In each round, every agent communicates its current opinion to all other agents connected to it through the underlying (random) graph. Then, it waits to receive all messages from other agents proposing their own opinions. If a majority of received messages advise the same opinion, then the agent adopts this opinion for the next round. All ties are broken by the agent readopting its own opinion. After a fixed, predefined number of communication rounds \textbf{\emph{r}}, each agent decides on its current opinion.
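The protocol just described can be simulated directly. The following sketch (assuming NumPy; the graph size, $\lambda$, and the dense adjacency representation are illustrative choices for moderate $n$) draws a fresh Erd\H{o}s--R\'enyi graph at every round, as in our model:

```python
import numpy as np

rng = np.random.default_rng(0)

def smp(n=400, lam=1.0, rounds=3):
    """Simulate the SMP with 2n agents, drawing a fresh
    G(2n, lam/sqrt(n)) graph independently at every round."""
    m = 2 * n
    p = lam / np.sqrt(n)
    x = rng.integers(0, 2, size=m)              # fair-coin initial opinions
    for _ in range(rounds):
        # Fresh symmetric Erdos--Renyi adjacency matrix, empty diagonal.
        upper = np.triu(rng.random((m, m)) < p, 1).astype(np.int64)
        adj = upper + upper.T
        ones = adj @ x                          # `1' messages received
        zeros = adj.sum(axis=1) - ones          # `0' messages received
        x = np.where(zeros > ones, 0,           # strict majority of zeros
                     np.where(ones > zeros, 1,  # strict majority of ones
                              x))               # tie: keep current opinion
    return x

final = smp()
```

For finite $n$, unanimity after three rounds holds only with high probability, so the sketch returns the final state rather than asserting consensus.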
In \cite{Benjamini2016, Fountoulakis2020}, the binary majority consensus problem was solved for a $\mathsf{G}(n,p)$ random graph with a connectivity parameter $p \geq \tfrac{\lambda}{\sqrt{n}}$, for some sufficiently large $\lambda>0$, with random initial states, which are exactly the assumptions adopted in the current work. A remarkable result was conjectured in \cite{Benjamini2016} and proved in \cite{Fountoulakis2020}, stating that majority consensus can be reached in at most four communication rounds with high probability. By allowing the underlying graph to be drawn independently anew for each round, we improve on this result and prove that the SMP with $\boldsymbol{r}=3$ reaches majority consensus with probability converging to 1 as $n$ tends to infinity.
We also show that this achievability result is tight: $\boldsymbol{r}=3$ communication rounds are necessary, since the probability of reaching unanimity with only $\boldsymbol{r}=2$ rounds converges to 0 as $n \to \infty$.
In addition, we study the exact dynamics of the system.
\subsection{Related Work}
The problem of binary majority consensus was extensively studied in many different fields and contexts including autonomous systems \cite{mustafa2001majority, moreira2004efficient, gogolev2015distributed}, distributed systems \cite{thomas1979majority, breitwieser1982distributed, kanrar2016new}, and information theory \cite{mostofi2007binary, perron2009using, cruise2014probabilistic}, for a non-exhaustive list.
Mustafa and Peke{\v{c}} \cite{mustafa2001majority} studied the requirements on the connectivity of the network such that the SMP reaches unanimity for any initial assignment of agent opinions. The main result in \cite{mustafa2001majority} is that the SMP converges to the majority consensus successfully only in highly-connected networks. Our network model assumptions rely on their result.
Our work closely resembles \cite{perron2009using, cruise2014probabilistic}. These papers proved that in a fully-connected network where agents poll a portion of their neighbors uniformly at random, the SMP converges rapidly to majority consensus, with a probability of error (in the sense that unanimity is obtained, but not on the majority opinion) that tends to zero exponentially fast as $n \to \infty$.
Yet another line of relatively recent work deserves special attention.
In \cite{PAPER1}, a local polling protocol is proposed and proved to reach consensus on the initial global majority in a random Erd\H{o}s--R\'enyi graph $\mathsf{G}(n,p)$ with $p=d/n$, where $d \geq (2+\epsilon)\log n$.
An estimate of the number of steps required to reach consensus is also provided.
In \cite{PAPER2}, similar results were given for random regular graphs. In both of these papers, it is assumed that a clear bias exists between the two initial opinions.
In \cite{PAPER3}, the binary consensus problem was tackled from a different angle. For a random Erd\H{o}s--R\'enyi graph $\mathsf{G}(n,p)$ with a connectivity parameter $p \in (0,1)$ and any given $\epsilon \in (0,1)$, this work reveals what the initial difference between the two camps should be, such that the larger camp will eventually win with probability at least as high as $1-\epsilon$.
Also for a random graph $\mathsf{G}(n,p)$ with $p=d/n$, it was proved in \cite{PAPER4} that if the probability assignment on one of the initial states behaves like $\tfrac{1}{2}+\omega\left(\tfrac{1}{\sqrt{d}}\right)$ and $d>(1+\epsilon)\log n$, then with high probability the process reaches unanimity in a constant number of rounds.
In \cite{TLS}, the performance of the SMP is analyzed in the presence of probabilistic message loss.
It is proved that in a fully-connected network, the SMP reaches consensus in only three communication rounds with probability converging to $1$ as $n \to \infty$, regardless of the initial state.
It is also proved in \cite{TLS} that if the difference between the numbers of agents that initially hold the two opinions grows at a rate of $\sqrt{n}$, then the SMP with two communication rounds reaches unanimity on the majority opinion of the network, and that if this difference grows faster than $\sqrt{n}$, then the SMP attains consensus on the majority opinion in a single round; in both cases, the probability of success approaches $1$ exponentially fast as $n \rightarrow \infty$.
The remaining part of the paper is organized as follows.
In Section \ref{sec2}, we establish notation conventions.
In Section \ref{sec3}, we formalize the model, the protocol, and the objectives of this work.
In Section \ref{sec4}, we provide and discuss the main results of this work, and in Sections \ref{sec5} and \ref{sec6}, we prove them.
\section{Notation Conventions} \label{sec2}
Throughout the paper, random variables will be denoted by capital letters, realizations will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters.
Random vectors and their realizations will be denoted,
respectively, by boldface capital and lower case letters.
Their alphabets will be superscripted by their dimensions.
The binary Kullback--Leibler divergence between two binary probability distributions with parameters $\alpha,\beta \in [0,1]$ is defined as
\begin{align} \label{DEF_Bin_DIVERGENCE}
D(\alpha \| \beta) = \alpha \log \left(\frac{\alpha}{\beta}\right) + (1-\alpha) \log \left(\frac{1-\alpha}{1-\beta}\right),
\end{align}
where logarithms, here and throughout the sequel, are understood to be taken to the natural base.
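For concreteness, the divergence in \eqref{DEF_Bin_DIVERGENCE} can be computed with the usual convention $0 \log 0 = 0$; the helper below is a minimal sketch using natural-base logarithms, as in the text.

```python
from math import log

def binary_kl(alpha, beta):
    """Binary KL divergence D(alpha || beta), in nats (natural base,
    as in the text), with the convention 0*log(0) = 0."""
    total = 0.0
    for a, b in ((alpha, beta), (1.0 - alpha, 1.0 - beta)):
        if a > 0.0:           # skip 0*log(0) terms
            total += a * log(a / b)
    return total
```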
The probability of an event ${\cal E}$ will be denoted by $\pr\{{\cal E}\}$, and the expectation operator by $\mathbb{E}[\cdot]$.
The variance of a random variable $X$ is denoted by $\text{Var}[X]$.
The indicator function of an event ${\cal A}$
will be denoted by $\mathbbm{1}\{{\cal A}\}$.
The notation $[x]_{+}$ will stand for $\max\{0,x\}$.
For $\bx = (x_{1},x_{2},\ldots,x_{n}) \in {\cal X}^{n}$ and for any $a \in {\cal X}$, let us denote
\begin{align}
N(\bx;a) = \sum_{i=1}^{n} \mathbbm{1}\{x_{i}=a\}.
\end{align}
Let us denote by $\text{Ber}(p)$ a Bernoulli random variable with a success probability $p$ and by $\text{Bin}(n,p)$ a binomial random variable with $n$ independent experiments, each one with a success probability $p$. We adopt the following convention: if an event contains at least 2 binomial random variables, then we assume that they are statistically independent.
We will be concerned with the Erd\H{o}s--R\'enyi random graph model, denoted by $\mathsf{G}(n,p)$. In this model, a graph over $n$ vertices is constructed by connecting vertices randomly: each edge is included in the graph with probability $p$, independently of every other edge.
\section{Model, Protocol, and Objectives} \label{sec3}
Assume a set of $2n$ agents, and denote their assignment of initial opinions by $\bx_{0,n} \in \{0,1\}^{2n}$. The vector $\bx_{0,n}$ is called the initial state.
We assume that each agent picks its initial opinion at random according to a $\text{Ber}(\tfrac{1}{2})$ random variable, and that all the initial opinions are independent.
Denote the numbers of zeros and ones in $\bx_{0,n}$ by $\mathsf{I}_{0}$ and $\mathsf{I}_{1}$, respectively.
At each round, the $2n$ agents are randomly connected according to a $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$ graph, where $\lambda>0$ is independent of $n$.
In every round, each agent transmits its current state over the connected network.
At round $\ell \geq 1$, assume that agent $i$ receives opinions from $c(\ell,i)$ other agents, whose indices are given by $(a_{1},a_{2},\ldots,a_{c(\ell,i)})$.
Agent $i \in \{1,2,\ldots,2n\}$ receives the (random) vector:
\begin{align}
\by_{\ell}^{i} = (y_{\ell}^{i}(a_{1}),y_{\ell}^{i}(a_{2}), \ldots, y_{\ell}^{i}(a_{c(\ell,i)})) \in \{0,1\}^{c(\ell,i)},
\end{align}
and for $b \in \{0,1\}$, it computes the counts:
\begin{align}
\mathsf{N}_{\ell,i}(b) = \sum_{j=1}^{c(\ell,i)} \mathbbm{1}\{y_{\ell}^{i}(a_{j}) = b\}.
\end{align}
In the simple majority protocol (SMP), each agent updates its value according to the more common value at hand, i.e., agent $i$ chooses:
\begin{align}
\bx_{\ell}(i)
= \left\{
\begin{array}{l l}
0 & \quad \text{if $\mathsf{N}_{\ell,i}(0) > \mathsf{N}_{\ell,i}(1)$} \\
1 & \quad \text{if $\mathsf{N}_{\ell,i}(0) < \mathsf{N}_{\ell,i}(1)$} \\
\bx_{\ell-1}(i) & \quad \text{if $\mathsf{N}_{\ell,i}(0) = \mathsf{N}_{\ell,i}(1)$}
\end{array} \right. .
\end{align}
The vector $\bx_{\ell} \in \{0,1\}^{2n}$ is called the state at the end of round $\ell$.
A specific SMP defines a priori the number of rounds until termination.
Let us denote by SMP$(r)$ the SMP with $r$ rounds of communication until termination.
We say that the SMP$(r)$ attains {\it consensus} if
\begin{align}
\bx_{r}(1) = \bx_{r}(2) = \ldots = \bx_{r}(2n),
\end{align}
and denote this event by $\mathsf{Con}(r,n)$.
Similarly, we say that the SMP$(r)$ attains {\it majority consensus} if the following holds:
\begin{align}
\mathsf{I}_{0} > \mathsf{I}_{1} ~~ &\rightarrow ~~
\bx_{r}(1) = \bx_{r}(2) = \ldots = \bx_{r}(2n) = 0, \\
\mathsf{I}_{0} < \mathsf{I}_{1} ~~ &\rightarrow ~~
\bx_{r}(1) = \bx_{r}(2) = \ldots = \bx_{r}(2n) = 1, \\
\mathsf{I}_{0} = \mathsf{I}_{1} ~~ &\rightarrow ~~
\bx_{r}(1) = \bx_{r}(2) = \ldots = \bx_{r}(2n),
\end{align}
and denote this event by $\mathsf{MCon}(r,n)$.
The objective of this work is to prove that the SMP requires only three rounds of communication in order to attain majority consensus, with probability converging to 1 as $n \to \infty$.
\section{Main Results} \label{sec4}
Our first main result in this work is as follows.
\begin{theorem} \label{Main_THEOREM}
Let $2n$ agents draw their initial opinions using independent $\mathsf{Ber}(\tfrac{1}{2})$ random variables. Assume that the $2n$ agents communicate over a graph $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$, with $\lambda>0$, which is drawn independently at each round. Then, it holds that $\pr\{\mathsf{MCon}(3,n)\} \to 1$ as $n \to \infty$.
\end{theorem}
\subsection*{Discussion}
Beyond the simple fact that consensus is attained in only three communication rounds, with a probability that converges to one as $n \to \infty$, we also study the exact dynamics of the system in each of the intermediate steps. At the beginning, each agent tosses a fair coin to determine its initial opinion. We prove in Proposition \ref{PROP_1} that with high probability, one of the two opinions will have a majority of the order of $\sqrt{n}$, e.g., about $n+\alpha\sqrt{n}$ agents will hold the opinion 0 and $n-\alpha\sqrt{n}$ agents will hold the opinion 1, for some $\alpha>0$. Then, under the assumption that the initial state has a majority of zeros, we prove in Proposition \ref{PROP_3} that with high probability, at the end of the first communication round, the agents holding the opinion zero will have a larger majority, of the order of $n^{3/4}$. In other words, about $n+\beta n^{3/4}$ agents will hold the opinion 0 and $n-\beta n^{3/4}$ agents will hold the opinion 1, for some $\beta>0$. Moving further, we prove in Proposition \ref{PROP_5} that after the second communication round, with a probability converging to 1 exponentially fast, the agents holding the opinion zero will have a linear majority, i.e., about $n+\gamma n$ agents will hold the opinion 0 and $n-\gamma n$ agents will hold the opinion 1, for some $\gamma>0$. Then, only one more communication round is required to reach consensus; we prove in Proposition \ref{PROP_6} that after the third communication round, again with a probability converging to 1 exponentially fast, all agents will hold the opinion zero (or one, if the initial majority was in favor of the ones). For clarity, the dynamics of the system are illustrated in Figure \ref{fig:Evolution} below.
It was proved by Benjamini {\it et al.} \cite[Theorem 2]{Benjamini2016} that majority consensus is attained after four communication rounds with probability slightly higher than 0.4 over the choice of the initial states and over the choice of the random graph. It was assumed in \cite{Benjamini2016} that the underlying graph is chosen only once and remains fixed. It was conjectured in \cite{Benjamini2016} that, in fact, majority consensus can be reached with high probability as $n \to \infty$, and this conjecture was recently proved by Fountoulakis {\it et al.} \cite{Fountoulakis2020}. The proof in \cite{Fountoulakis2020} consists of two main parts: the first part deals with the first two rounds and the second part with the last two rounds. It is assumed there that the underlying random graph is drawn twice, before the first round and between the second and third rounds. Here, in Theorem \ref{Main_THEOREM}, we allow the underlying network to be drawn before each communication round starts, and this enables reaching consensus faster than in \cite{Benjamini2016, Fountoulakis2020}; the entire community agrees on the initial majority after only three rounds with probability converging to 1 as $n \to \infty$. It is important to note that Theorem \ref{Main_THEOREM} holds for any constant $\lambda>0$, while the results in \cite{Benjamini2016, Fountoulakis2020} hold with $\lambda$ being a sufficiently large universal constant.
In Theorem \ref{Main_THEOREM} above, we assume that the underlying graph is drawn independently for each communication round as $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$, with $\lambda>0$. This means that each agent communicates its current state to roughly $2\lambda\sqrt{n}$ other agents. It may be questioned whether consensus can be reached when communicating over sparser networks. As long as consensus is to be attained in three rounds, only a partial relaxation can be made. By examining the proof of Theorem \ref{Main_THEOREM}, it seems that if the graph in the third round is $\mathsf{G}(2n,\tfrac{\lambda}{n^{\xi}})$, $\xi \in (\tfrac{1}{2},1)$, without altering the random graphs at the first two rounds, then consensus can still be reached in three rounds with probability converging to 1 as $n \to \infty$. We conjecture that a majority consensus can be attained when the underlying graph at all communication rounds is $\mathsf{G}(2n,\tfrac{\lambda}{n^{\xi}})$, $\xi \in (\tfrac{1}{2},1)$, but then the number of required rounds should be higher than three. We leave this point open for future research.
The result provided in Theorem \ref{Main_THEOREM} is, in fact, an achievability result, i.e., it only tells under what conditions consensus can be attained. Hence, it is worth investigating whether consensus may be attained by the SMP with even fewer communication rounds than required in Theorem \ref{Main_THEOREM}.
In the following result, which is the second main result of this work and is proved in Section \ref{sec6}, we show that three rounds of communication are not only sufficient, but also necessary.
\begin{theorem} \label{Main_THEOREM2}
Let $2n$ agents draw their initial opinions using independent $\mathsf{Ber}(\tfrac{1}{2})$ random variables. Assume that the $2n$ agents communicate over a graph $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$, with $\lambda>0$, which is drawn independently at each round. Then, it holds that $\pr\{\mathsf{Con}(2,n)\} \to 0$ as $n \to \infty$.
\end{theorem}
\begin{figure}[h!]
\definecolor{CODEWORD}{rgb}{0,0,0}
\definecolor{FULL}{rgb}{0.13,0.65,0.13}
\definecolor{EMPTY}{rgb}{1.0,0,0.22}
\begin{subfigure}[b]{0.25\columnwidth}
\centering
\begin{tikzpicture}
\draw[fill=red!50!white] (0,0) -- (0,3.4) -- (1,3.4) -- (1,0) -- (0,0);
\draw[fill=green!50!white] (1.6,0) -- (1.6,3.6) -- (2.6,3.6) -- (2.6,0) -- (1.6,0);
\node at (0.5,3.9) {$n-\alpha\sqrt{n}$};
\node at (2.1,4.1) {$n+\alpha\sqrt{n}$};
\node at (0.5,-0.5) {Ones};
\node at (2.1,-0.5) {Zeros};
\end{tikzpicture}
\caption{Initial State}
\label{fig:POPULATION1}
\end{subfigure}%
\begin{subfigure}[b]{0.25\columnwidth}
\centering
\begin{tikzpicture}
\draw[fill=red!50!white] (0,0) -- (0,3) -- (1,3) -- (1,0) -- (0,0);
\draw[fill=green!50!white] (1.6,0) -- (1.6,4) -- (2.6,4) -- (2.6,0) -- (1.6,0);
\node at (0.5,3.5) {$n-\beta n^{3/4}$};
\node at (2.1,4.5) {$n+\beta n^{3/4}$};
\node at (0.5,-0.5) {Ones};
\node at (2.1,-0.5) {Zeros};
\end{tikzpicture}
\caption{After round 1}
\label{fig:POPULATION2}
\end{subfigure}%
\begin{subfigure}[b]{0.25\columnwidth}
\centering
\begin{tikzpicture}
\draw[fill=red!50!white] (0,0) -- (0,2) -- (1,2) -- (1,0) -- (0,0);
\draw[fill=green!50!white] (1.6,0) -- (1.6,5) -- (2.6,5) -- (2.6,0) -- (1.6,0);
\node at (0.5,2.5) {$n-\gamma n$};
\node at (2.1,5.5) {$n+\gamma n$};
\node at (0.5,-0.5) {Ones};
\node at (2.1,-0.5) {Zeros};
\end{tikzpicture}
\caption{After round 2}
\label{fig:POPULATION3}
\end{subfigure}%
\begin{subfigure}[b]{0.25\columnwidth}
\centering
\begin{tikzpicture}
\draw[fill=red!50!white] (0,0) -- (0,0) -- (1,0) -- (1,0) -- (0,0);
\draw[fill=green!50!white] (1.6,0) -- (1.6,7) -- (2.6,7) -- (2.6,0) -- (1.6,0);
\node at (0.5,0.5) {$0$};
\node at (2.1,7.5) {$2n$};
\node at (0.5,-0.5) {Ones};
\node at (2.1,-0.5) {Zeros};
\end{tikzpicture}
\caption{After round 3}
\label{fig:POPULATION4}
\end{subfigure}%
\caption{Typical evolution in 3 communication rounds.}
\label{fig:Evolution}
\end{figure}
\section{Proof of Theorem \ref{Main_THEOREM}} \label{sec5}
\subsection{At the Initial State}
The following proposition shows that if every agent picks its initial opinion at random with a fair coin, then the initial state of the entire community will exhibit an imbalance of order $\sqrt{n}$, with probability arbitrarily close to one. This result is proved in Appendix A.
\begin{proposition} \label{PROP_1}
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of random initial states, where each agent tosses a fair coin to determine its initial opinion. Let $\epsilon > 0$ be given. Then, there exist $\alpha=\alpha(\epsilon)$ with $\alpha(\epsilon) \xrightarrow{\epsilon \to 0} 0$ and $M(\epsilon)$, such that for all $n \geq M(\epsilon)$,
\begin{align}
\pr \left\{ \{N(\bX_{0};0) \leq n - \alpha\sqrt{n}\}
\cup \{N(\bX_{0};0) \geq n + \alpha\sqrt{n}\} \right\} \geq 1 - \epsilon.
\end{align}
\end{proposition}
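Since $N(\bX_{0};0) \sim \text{Bin}(2n,\tfrac{1}{2})$, the probability bounded in Proposition \ref{PROP_1} is easy to probe by Monte Carlo; the sketch below (parameter values are illustrative) estimates the probability of an imbalance of at least $\alpha\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def imbalance_probability(n=10_000, alpha=0.1, trials=5_000):
    """Estimate P(|N(X_0; 0) - n| >= alpha*sqrt(n)) when each of the
    2n agents tosses a fair coin for its initial opinion."""
    zeros = rng.binomial(2 * n, 0.5, size=trials)   # N(X_0; 0) per trial
    return np.mean(np.abs(zeros - n) >= alpha * np.sqrt(n))

p_hat = imbalance_probability()
```

By the central limit theorem, the estimate should be close to $2(1-\Phi(\alpha\sqrt{2}))$, about $0.89$ for $\alpha=0.1$, consistent with the proposition for small $\alpha$.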
\subsection{At the First Round}
In the following result, which is proved in Appendix B, we show that if the difference between the number of agents holding `zero' and the number of agents holding `one' is of the order of $\sqrt{n}$, then the probability that any agent will update its state to `zero' converges to $\tfrac{1}{2}$ as $n \to \infty$, but at a relatively slow rate. This fact is important for increasing the difference between the two opinions after the first round of communication.
\begin{proposition} \label{PROP_2}
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+\alpha\sqrt{n}$ zeros and $n-\alpha\sqrt{n}$ ones and assume that the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$.
Define the constant
\begin{align}
C_{0}(\alpha,\lambda)=\frac{\pi}{8e^{4}}
\frac{([\alpha\lambda-\sqrt{2\alpha\lambda}]_{+}+1)\exp\{-4/\lambda\}}{\lambda}.
\end{align}
If an agent starts with a `0' or a `1', then its probability of updating to `0' is lower-bounded as
\begin{align}
Q_{n} \geq \frac{1}{2} + \frac{C_{0}(\alpha,\lambda)}{n^{1/4}}.
\end{align}
\end{proposition}
Based on the result of Proposition \ref{PROP_2}, we next state that if the difference between the two opinions at the initial state is of the order of $\sqrt{n}$, then with probability converging to 1 as $n \to \infty$, after one round of communication, the difference grows to the order of $n^{3/4}$. The following result, which is based on Chernoff's inequality, is proved in Appendix D.
\begin{proposition} \label{PROP_3}
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+\alpha\sqrt{n}$ zeros and $n-\alpha\sqrt{n}$ ones and assume that the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$.
Then,
\begin{align}
\pr \left\{N(\bX_{1};0) \geq n + C_{0}(\alpha,\lambda) n^{3/4} \right\}
&\geq 1 - \exp\left\{- C_{0}(\alpha,\lambda)^{2} \sqrt{n} \right\} \xrightarrow{n \to \infty} 1.
\end{align}
\end{proposition}
\subsection{At the Second Round}
In the following result, which is proved in Appendix E, we show that if the difference between the number of agents holding opinion `zero' and the number of agents holding opinion `one' is of the order of $n^{3/4}$, then the probability that an agent will update its state to `zero' is bounded away from $\tfrac{1}{2}$ for any $n$. Due to this fact, the difference between the two opinions after the second round of communication will be linear in $n$.
\begin{proposition} \label{PROP_4}
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+\beta n^{\frac{3}{4}}$ zeros and $n-\beta n^{\frac{3}{4}}$ ones and assume that the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$.
Define the constant
\begin{align}
C_{1}(\beta,\lambda) = \frac{(\pi\beta)^{\frac{3}{2}} \exp\{-(4+2\beta)/\lambda\} \exp\left\{-4\beta^{2}\lambda\right\}}{e^{6}\sqrt{\lambda}}.
\end{align}
If an agent starts with a `0' or a `1', then its probability of updating to `0' is lower-bounded as
\begin{align} \label{Prop4_Res}
Q_{n} \geq \frac{1}{2} + C_{1}(\beta,\lambda).
\end{align}
\end{proposition}
Based on the result of Proposition \ref{PROP_4}, we next state that if the difference between the two opinions is of the order of $n^{3/4}$, then with probability converging to 1 exponentially fast as $n \to \infty$, after one round of communication, the difference grows to the order of $n$.
\begin{proposition} \label{PROP_5}
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+\beta n^{\frac{3}{4}}$ zeros and $n-\beta n^{\frac{3}{4}}$ ones and assume that the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$.
Then,
\begin{align}
\pr \left\{N(\bX_{1};0) \geq n + C_{1}(\beta,\lambda) n \right\}
\geq 1 - \exp\left\{-C_{1}(\beta,\lambda)^{2} n \right\}
\xrightarrow{n \to \infty} 1.
\end{align}
\end{proposition}
We omit the proof of Proposition \ref{PROP_5}, since it is very similar to the proof of Proposition \ref{PROP_3}.
\subsection{At the Third Round}
According to Proposition \ref{PROP_5}, after the second round of communication, the number of agents holding opinion `zero' (or `one') is about $n+\gamma n$, while the number of agents holding the opposite opinion is about $n-\gamma n$, with a probability converging to 1 as $n \to \infty$. Then, in the following result, which is proved in Appendix F, we state that only one more round is required in order to reach consensus.
\begin{proposition} \label{PROP_6}
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+\gamma n$ zeros and $n-\gamma n$ ones, where $\gamma \in (0,1)$, and assume that the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$.
Then, the SMP$(1)$ reaches consensus with probability converging to 1 as $n \to \infty$. Specifically,
\begin{align}
\label{prop6_res}
\pr\left\{N(\bX_{1};0) = 2n \right\}
&\geq 1 - 2n \sqrt{\tfrac{1+\gamma}{1-\gamma}} \cdot \exp \left\{-\lambda \gamma^{2} \sqrt{n} \right\}.
\end{align}
\end{proposition}
The fact that the probability in \eqref{prop6_res} converges to 1 relatively fast suggests that consensus may be attained even if the network at the third communication round is sparser. Indeed, by examining the proof of Proposition \ref{PROP_6}, we find that if the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{n^{\xi}})$, $\xi \in (\tfrac{1}{2},1)$, then \eqref{prop6_res} generalizes to
\begin{align}
\label{prop6_res_b}
\pr\left\{N(\bX_{1};0) = 2n \right\}
&\geq 1 - 2n \sqrt{\tfrac{1+\gamma}{1-\gamma}} \cdot \exp \left\{-\lambda \gamma^{2} n^{1-\xi} \right\},
\end{align}
which still converges to 1 for any $\xi \in (\tfrac{1}{2},1)$. It is important to note that it is impossible to reach consensus with high probability in three communication rounds when the underlying graph at all rounds is $\mathsf{G}(2n,\tfrac{\lambda}{n^{\xi}})$, $\xi \in (\tfrac{1}{2},1)$. Under such conditions, the required number of rounds for attaining consensus is greater than 3, and this is left open for future research.
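To get a feel for how quickly the bound in \eqref{prop6_res_b} becomes meaningful, it can be evaluated directly; the sample values of $\gamma$, $\lambda$, and $\xi$ below are illustrative.

```python
from math import exp, sqrt

def prop6_lower_bound(n, gamma=0.1, lam=1.0, xi=0.5):
    """Lower bound of (prop6_res_b) on the probability that one round
    turns an (n + gamma*n)-vs-(n - gamma*n) split into unanimity,
    over G(2n, lam / n**xi). Sample parameter values are illustrative."""
    return 1.0 - 2.0 * n * sqrt((1.0 + gamma) / (1.0 - gamma)) \
                 * exp(-lam * gamma**2 * n ** (1.0 - xi))
```

For instance, with $\gamma=0.1$, $\lambda=1$, and $\xi=\tfrac{1}{2}$, the bound is vacuous for moderate $n$ but already exceeds $0.98$ around $n=4\cdot 10^{6}$.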
\subsection{Wrapping Up}
We are now able to prove Theorem \ref{Main_THEOREM}.
Let $\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4} > 0$ be arbitrarily small given constants.
Let $\alpha$ be as in Proposition \ref{PROP_1} corresponding to $\epsilon_{1}$ and define the following event
\begin{align}
{\cal A}_{n} = \left\{N(\bX_{0};0) \leq n - \alpha \sqrt{n} \text{~~or~~} N(\bX_{0};0) \geq n + \alpha \sqrt{n} \right\},
\end{align}
such that $\pr\{{\cal A}_{n}\} \geq 1-\epsilon_{1}$ for all sufficiently large $n$.
Next, for a given $\alpha$ and $\lambda$, choose $\beta=C_{0}(\alpha,\lambda)$ as defined in Proposition \ref{PROP_2}, and define the event
\begin{align}
{\cal B}_{n} = \left\{N(\bX_{1};0) \leq n-\beta n^{\frac{3}{4}} \text{~~or~~} N(\bX_{1};0) \geq n+\beta n^{\frac{3}{4}} \right\},
\end{align}
such that $\pr\{{\cal B}_{n}|{\cal A}_{n}\} \geq 1-\epsilon_{2}$ for all sufficiently large $n$, according to Proposition \ref{PROP_3}.
Furthermore, for a given $\beta$ and $\lambda$, choose $\gamma=C_{1}(\beta,\lambda)$ as defined in Proposition \ref{PROP_4}, and define the event
\begin{align}
{\cal C}_{n} = \left\{N(\bX_{2};0) \leq n-\gamma n \text{~~or~~} N(\bX_{2};0) \geq n+\gamma n \right\},
\end{align}
such that $\pr\{{\cal C}_{n}|{\cal B}_{n}\} \geq 1-\epsilon_{3}$ for all sufficiently large $n$, according to Proposition \ref{PROP_5}.
Then, consider the following.
\begin{align}
\pr\{\mathsf{Con}(3,n)\}
&= \pr\{ N(\bX_{3};0) = 0 \text{~or~} N(\bX_{3};0) = 2n\}\\
\label{A_To_exp1}
&= \pr\{ N(\bX_{3};0) = 0 \text{~or~} N(\bX_{3};0) = 2n| {\cal C}_{n}\} \cdot \pr\{{\cal C}_{n}\} \nn \\
&~~+ \pr\{ N(\bX_{3};0) = 0 \text{~or~} N(\bX_{3};0) = 2n| {\cal C}_{n}^{\mbox{\tiny c}}\} \cdot \pr\{{\cal C}_{n}^{\mbox{\tiny c}}\} \\
&\geq \pr\{ N(\bX_{3};0) = 0 \text{~or~} N(\bX_{3};0) = 2n| {\cal C}_{n}\} \cdot \pr\{{\cal C}_{n}\} \\
\label{A_To_exp2}
&\geq \left(1 - \epsilon_{4}\right) \cdot \pr\{{\cal C}_{n}\},
\end{align}
where \eqref{A_To_exp1} follows from the law of total probability and \eqref{A_To_exp2} holds for all large enough $n$, due to Proposition \ref{PROP_6}. Furthermore,
\begin{align}
\pr\{{\cal C}_{n}\}
\label{A_To_exp3}
&= \pr\{ {\cal C}_{n}| {\cal B}_{n}\} \cdot \pr\{{\cal B}_{n}\} + \pr\{ {\cal C}_{n}| {\cal B}_{n}^{\mbox{\tiny c}}\} \cdot \pr\{{\cal B}_{n}^{\mbox{\tiny c}}\} \\
&\geq \pr\{ {\cal C}_{n}| {\cal B}_{n}\} \cdot \pr\{{\cal B}_{n}\} \\
\label{A_To_exp4}
&\geq (1-\epsilon_{3}) \cdot \pr\{{\cal B}_{n}\} \\
\label{A_To_exp5}
&\geq (1-\epsilon_{3}) \cdot \pr\{{\cal B}_{n}|{\cal A}_{n}\} \cdot \pr\{{\cal A}_{n}\}\\
\label{A_To_exp6}
&\geq (1-\epsilon_{3}) \cdot (1-\epsilon_{2}) \cdot (1-\epsilon_{1}),
\end{align}
where \eqref{A_To_exp3} is due to the law of total probability, \eqref{A_To_exp4} follows from Proposition \ref{PROP_5} for all $n$ sufficiently large, and \eqref{A_To_exp5} follows from the law of total probability.
The passage to \eqref{A_To_exp6} follows from Propositions \ref{PROP_1} and \ref{PROP_3}, for all $n$ sufficiently large.
Substituting \eqref{A_To_exp6} back into \eqref{A_To_exp2}, we conclude that $\pr\{\mathsf{Con}(3,n)\}$ can be made arbitrarily close to 1, which proves Theorem \ref{Main_THEOREM}.
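To complement the proof, the three-round protocol can also be explored empirically. The following Monte Carlo sketch is purely illustrative and not part of the argument; it assumes the polling interpretation suggested by \eqref{ToCall0}: in each round, every agent samples each other agent independently with probability $p_{n}=\lambda/\sqrt{n}$ (a fresh $\mathsf{G}(2n,p_{n})$ draw per round), counts its own opinion once, and adopts the majority, keeping its current opinion on ties. All function names are ours.

```python
import random

def smp_round(opinions, p, rng):
    """One synchronous majority round: each agent polls every other agent
    independently with probability p, counts its own opinion once, and
    adopts the majority; ties keep the current opinion."""
    zeros = opinions.count(0)
    updated = []
    for x in opinions:
        # numbers of polled neighbors holding `0' and `1' (excluding self)
        n0 = sum(rng.random() < p for _ in range(zeros - (x == 0)))
        n1 = sum(rng.random() < p for _ in range(len(opinions) - zeros - (x == 1)))
        if x == 0:
            n0 += 1  # an agent always counts its own opinion once
        else:
            n1 += 1
        updated.append(0 if n0 > n1 else (1 if n1 > n0 else x))
    return updated

def consensus_rate(n, lam, rounds, trials, seed=0):
    """Fraction of trials in which the protocol ends in consensus
    after `rounds` rounds, starting from 2n fair random opinions."""
    rng = random.Random(seed)
    p = lam / n ** 0.5
    hits = 0
    for _ in range(trials):
        opinions = [rng.randint(0, 1) for _ in range(2 * n)]
        for _ in range(rounds):
            opinions = smp_round(opinions, p, rng)
        hits += opinions.count(0) in (0, 2 * n)
    return hits / trials
```

For moderate $n$, `consensus_rate(n, 1.0, 3, 100)` estimates $\pr\{\mathsf{Con}(3,n)\}$ and can be contrasted with one- and two-round runs; the sketch is exact only for this polling model and merely illustrates the asymptotic claim.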
\section{Proof of Theorem \ref{Main_THEOREM2}} \label{sec6}
\subsection{Main Ingredients}
In the following negative result, which is proved in Appendix G, we show that even when the number of agents holding opinion `zero' exceeds the number of agents holding opinion `one', the number of `zero' agents cannot grow too much in a single communication round.
\begin{proposition} \label{PROP_7}
Let $\{A_{n}\}_{n=1}^{\infty}$ and $\{B_{n}\}_{n=1}^{\infty}$ be two monotonically increasing sequences with $B_{n} \geq A_{n}$ for every $n$.
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+A_{n}$ zeros and $n-A_{n}$ ones and assume that if an agent starts with a `0' or a `1', then the probability to update to `0' is upper-bounded by $P_{n}$. Then, it holds that
\begin{align}
\pr \left\{N(\bX_{1};0) \geq n + B_{n}\right\} \leq \exp \left\{-2n \cdot D\left(\frac{1}{2} + \frac{B_{n}}{2n} \middle\| P_{n} \right) \right\}.
\end{align}
\end{proposition}
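As a quick numerical sanity check of Proposition \ref{PROP_7} (ours, not part of the proof): in the boundary case where every agent updates to `0' independently with probability exactly $P_{n}$, the count $N(\bX_{1};0)$ is $\text{Bin}(2n,P_{n})$, and the exact tail can be compared with the Chernoff-type bound. The parameter values below are illustrative.

```python
import math

def kl_bernoulli(a, p):
    """Binary KL divergence D(a || p) in nats."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def binom_tail(m, p, k):
    """Exact P{Bin(m, p) >= k}."""
    return sum(math.comb(m, j) * p**j * (1 - p)**(m - j) for j in range(k, m + 1))

n, P, B = 200, 0.5, 30                      # illustrative values
a = 0.5 + B / (2 * n)                       # target fraction of zeros
exact = binom_tail(2 * n, P, n + B)
chernoff = math.exp(-2 * n * kl_bernoulli(a, P))
```

Here `exact` is dominated by `chernoff`, as the proposition predicts for any $P_{n}$ upper-bounding the update probability.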
While in Proposition \ref{PROP_2} we stated a positive result, namely that if the difference between the two camps is of the order of $\sqrt{n}$, then the probability of updating to `zero' is {\it lower-bounded} by $\tfrac{1}{2}+\tfrac{C_{0}}{n^{1/4}}$, in the following result, which is proved in Appendix H, we state a negative result, which provides an upper bound on this probability. Here, the difference between the two opinion counts is allowed to be a general function of $n$.
\begin{proposition} \label{PROP_8}
Let $\{\psi_{n}\}_{n=1}^{\infty}$ be a sequence such that $\lim_{n \to \infty} \frac{\psi_{n}}{n} = 0$.
Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+\psi_{n}$ zeros and $n-\psi_{n}$ ones and assume that the underlying graph is $\mathsf{G}(2n,p_{n})$, $p_{n}=\tfrac{\lambda}{\sqrt{n}}$. Assume that $\psi_{n}p_{n}\geq 1$ for every $n$.
Define the sequence $\{\delta_{n}\}_{n=1}^{\infty}$ according to
\begin{align}
n^{\delta_{n}} = \sqrt{\log(n^{\theta})},~~\theta > 5.
\end{align}
If an agent starts with a `0' or a `1', then for all sufficiently large $n$, the probability to update to `0' is upper-bounded as
\begin{align}
P_{n} \leq \frac{1}{2} + \frac{60\psi_{n}p_{n}}{\lambda n^{1/4-\delta_{n}}}.
\end{align}
\end{proposition}
Proposition \ref{PROP_8} shows that if the difference between the two camps is again of the order of $\sqrt{n}$, then the probability of updating to `zero' is now {\it upper-bounded} by $\tfrac{1}{2}+\tfrac{k_{n}}{n^{1/4}}$, where $k_{n}$ grows at most logarithmically fast in $n$.
In the sequel, the result of Proposition \ref{PROP_8} will be instrumental when applying Proposition \ref{PROP_7}, which requires an upper bound on the probability of updating to `zero'.
In Proposition \ref{PROP_6} we stated a positive result, according to which, if the difference between the number of opinions is at the order of $n$, then the probability to reach consensus in a single round converges to 1 as $n \to \infty$.
In the following result, which is proved in Appendix J, we state a negative result on attaining consensus in a single communication round: if the difference between the two camps is too small, then consensus is reached within a single round only with vanishing probability.
\begin{proposition} \label{PROP_9}
Let $\{C_{n}\}_{n=1}^{\infty}$ be a sequence such that $\lim_{n \to \infty} \frac{C_{n}}{n} = 0$. Let $\bx_{0,n} \in \{0,1\}^{2n}$ be a sequence of initial states with $n+C_{n}$ zeros or $n+C_{n}$ ones, and assume that the underlying graph is $\mathsf{G}(2n,\tfrac{\lambda}{\sqrt{n}})$.
Then, the SMP$(1)$ satisfies
\begin{align}
\label{prop9_res}
\pr\{\mathsf{Con}(1,n)\}
\leq \exp\left\{ -\frac{n C_{n}^{2}}{2(n+C_{n})}
\exp \left\{ - 32\lambda \cdot \frac{C_{n}^{2}}{\sqrt{n}(n-C_{n})} \right\} \right\}.
\end{align}
\end{proposition}
Specifically, if the difference between the two camps grows faster than $n^{3/4}$, then the inner exponential factor on the right-hand side of \eqref{prop9_res} is close to 0, so the overall exponent is close to 0 and the bound is close to 1, which is useless. On the other hand, if the difference between the two camps grows slower than $n^{3/4}$, then the probability of reaching consensus in a single round converges to 0 as $n \to \infty$.
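The threshold at $n^{3/4}$ can be seen numerically. The following sketch (ours) evaluates the right-hand side of \eqref{prop9_res} for $C_{n}=n^{0.7}$ and $C_{n}=n^{0.8}$; the former drives the bound to 0 while the latter leaves it near 1.

```python
import math

def prop9_rhs(n, c, lam=1.0):
    """Right-hand side of the single-round consensus bound."""
    inner = math.exp(-32 * lam * c**2 / (math.sqrt(n) * (n - c)))
    return math.exp(-n * c**2 / (2 * (n + c)) * inner)

n = 10**6
slow = prop9_rhs(n, n**0.70)   # difference growing slower than n^{3/4}
fast = prop9_rhs(n, n**0.80)   # difference growing faster than n^{3/4}
```

With these illustrative values, `slow` is astronomically small while `fast` is indistinguishable from 1 in floating point.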
\subsection{Wrapping Up}
We are now in a good position to prove Theorem \ref{Main_THEOREM2}.
\begin{align}
\pr\{\mathsf{Con}(2,n)\}
&= \pr\{ N(\bX_{2};0) = 0 \text{~or~} N(\bX_{2};0) = 2n\}\\
\label{ref80}
&\leq \pr\{N(\bX_{2};0) = 0\} + \pr\{N(\bX_{2};0) = 2n\}.
\end{align}
We will prove that the second term in \eqref{ref80} converges to zero as $n \to \infty$; the proof for the first term follows along similar lines. Consider the following, where $\sigma_{n}, \tau_{n} > 0$ are sequences to be specified later:
\begin{align}
\pr\{N(\bX_{2};0) = 2n\}
&= \sum_{\ell=0}^{2n} \pr\left\{N(\bX_{2};0) = 2n,N(\bX_{1};0) = \ell\right\} \\
&= \sum_{\ell=0}^{n+\sigma_{n}} \pr\left\{N(\bX_{2};0) = 2n,N(\bX_{1};0) = \ell\right\} \nn \\
&~~~~~~+\sum_{\ell=n+\sigma_{n}}^{2n} \pr\left\{N(\bX_{2};0) = 2n,N(\bX_{1};0) = \ell\right\} \\
&\leq \sum_{\ell=0}^{n+\sigma_{n}} \pr\left\{N(\bX_{2};0) = 2n,N(\bX_{1};0) = n+\sigma_{n} \right\} \nn \\
&~~~~~~+\sum_{\ell=n+\sigma_{n}}^{2n} \pr\left\{N(\bX_{1};0) = \ell\right\} \\
&= (n+\sigma_{n}+1) \cdot \pr\left\{N(\bX_{2};0) = 2n,N(\bX_{1};0) = n+\sigma_{n} \right\} \nn \\
&~~~~~~+\pr\left\{N(\bX_{1};0) \geq n+\sigma_{n}\right\}.
\end{align}
Since
\begin{align}
&\pr\left\{N(\bX_{2};0) = 2n,N(\bX_{1};0) = n+\sigma_{n} \right\} \nn \\
&~~~= \pr\left\{N(\bX_{2};0) = 2n|N(\bX_{1};0) = n+\sigma_{n} \right\} \cdot \pr\left\{N(\bX_{1};0) = n+\sigma_{n} \right\} \\
&~~~\leq \pr\left\{N(\bX_{2};0) = 2n|N(\bX_{1};0) = n+\sigma_{n} \right\},
\end{align}
we arrive at
\begin{align}
\pr\{N(\bX_{2};0) = 2n\}
\label{ref81}
&\leq (n+\sigma_{n}+1) \cdot \pr\left\{N(\bX_{2};0) = 2n|N(\bX_{1};0) = n+\sigma_{n} \right\} \nn \\
&~~~~~~~~+\pr\left\{N(\bX_{1};0) \geq n+\sigma_{n}\right\}.
\end{align}
As for the second term in \eqref{ref81}, consider the following:
\begin{align}
\pr\{N(\bX_{1};0) \geq n+\sigma_{n}\}
&= \sum_{k=0}^{2n} \pr\left\{N(\bX_{1};0) \geq n+\sigma_{n},N(\bX_{0};0) = k\right\} \\
&= \sum_{k=0}^{n+\tau_{n}} \pr\left\{N(\bX_{1};0) \geq n+\sigma_{n},N(\bX_{0};0) = k\right\} \nn \\
&~~~~~~+\sum_{k=n+\tau_{n}}^{2n} \pr\left\{N(\bX_{1};0) \geq n+\sigma_{n},N(\bX_{0};0) = k\right\} \\
&\leq \sum_{k=0}^{n+\tau_{n}} \pr\left\{N(\bX_{1};0) \geq n+\sigma_{n},N(\bX_{0};0) = n+\tau_{n} \right\} \nn \\
&~~~~~~+\sum_{k=n+\tau_{n}}^{2n} \pr\left\{N(\bX_{0};0) = k\right\} \\
\label{ref82}
&\leq (n+\tau_{n}+1) \cdot \pr\left\{N(\bX_{1};0) \geq n+\sigma_{n}|N(\bX_{0};0) = n+\tau_{n} \right\} \nn \\
&~~~~~~+\pr\left\{N(\bX_{0};0) \geq n+\tau_{n}\right\}.
\end{align}
Upper-bounding \eqref{ref81} with \eqref{ref82} yields that
\begin{align}
\pr\{N(\bX_{2};0) = 2n\}
\label{ref83}
&\leq (n+\sigma_{n}+1) \cdot \pr\left\{N(\bX_{2};0) = 2n|N(\bX_{1};0) = n+\sigma_{n} \right\} \nn \\
&~~~~+(n+\tau_{n}+1) \cdot \pr\left\{N(\bX_{1};0) \geq n+\sigma_{n}|N(\bX_{0};0) = n+\tau_{n} \right\} \nn \\
&~~~~+\pr\left\{N(\bX_{0};0) \geq n+\tau_{n}\right\}.
\end{align}
Consider the probability in the first term in \eqref{ref83}. Let us choose $\sigma_{n}=\tfrac{n^{3/4}}{\sqrt{64\lambda}}\sqrt{\log(n^{\rho})}$, $\rho > 0$, and then, according to Proposition \ref{PROP_9}, we get that
\begin{align}
&\pr\left\{N(\bX_{2};0) = 2n|N(\bX_{1};0) = n+\sigma_{n} \right\} \nn \\
&~~~\leq \exp\left\{ -\frac{n \sigma_{n}^{2}}{2(n+\sigma_{n})}
\exp \left\{ - 32\lambda \cdot \frac{\sigma_{n}^{2}}{\sqrt{n}(n-\sigma_{n})} \right\} \right\} \\
&~~~\leq \exp\left\{ -\frac{n \sigma_{n}^{2}}{2(n+n)}
\exp \left\{ - 32\lambda \cdot \frac{\sigma_{n}^{2}}{\sqrt{n}(n-\tfrac{n}{2})} \right\} \right\} \\
&~~~= \exp\left\{ -\frac{\sigma_{n}^{2}}{4}
\exp \left\{ - 64\lambda \cdot \frac{\sigma_{n}^{2}}{n^{3/2}} \right\} \right\} \\
&~~~= \exp\left\{ -\frac{1}{4} \cdot \frac{n^{3/2}}{64\lambda}\log(n^{\rho})
\exp \left\{ -\frac{64\lambda}{n^{3/2}} \cdot \frac{n^{3/2}}{64\lambda}\log(n^{\rho}) \right\} \right\} \\
&~~~= \exp\left\{ -\frac{n^{3/2}}{256\lambda}\log(n^{\rho})
\exp \left\{-\log(n^{\rho}) \right\} \right\} \\
\label{bound1}
&~~~= \exp\left\{ -\frac{\rho}{256\lambda} \log(n) n^{3/2-\rho} \right\},
\end{align}
which converges to zero for any $\rho \in (0,\tfrac{3}{2})$.
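The chain of inequalities above can also be checked at finite $n$ (our sketch; $\lambda$ and $\rho$ are illustrative): the exact right-hand side of \eqref{prop9_res} with $C_{n}=\sigma_{n}$ is indeed dominated by the closed form in \eqref{bound1}.

```python
import math

def prop9_rhs(n, c, lam):
    inner = math.exp(-32 * lam * c**2 / (math.sqrt(n) * (n - c)))
    return math.exp(-n * c**2 / (2 * (n + c)) * inner)

lam, rho, n = 1.0, 1.0, 10**4
sigma = n**0.75 * math.sqrt(rho * math.log(n)) / math.sqrt(64 * lam)
closed_form = math.exp(-rho / (256 * lam) * math.log(n) * n**(1.5 - rho))
```

At these values `prop9_rhs(n, sigma, lam)` is far below `closed_form`, consistent with the relaxations $n+\sigma_{n} \leq 2n$ and $n-\sigma_{n} \geq n/2$ used above.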
Regarding the third term in \eqref{ref83}, let us choose $\tau_{n}=\tfrac{n^{1/2}}{\sqrt{64\lambda}}\sqrt{\log(n^{\kappa})}$, $\kappa > 0$, and then, according to Proposition \ref{PROP_7} with $P_{n}=\tfrac{1}{2}$ and Pinsker's inequality, we get that
\begin{align}
\pr\left\{N(\bX_{0};0) \geq n+\tau_{n}\right\}
&\leq \exp \left\{-2n \cdot D\left(\frac{1}{2} + \frac{\tau_{n}}{2n} \middle\| \frac{1}{2} \right) \right\} \\
&\leq \exp \left\{-4n \cdot \left(\frac{\tau_{n}}{2n}\right)^{2} \right\} \\
&=\exp \left\{-\frac{1}{n} \cdot \frac{n}{64\lambda}\log(n^{\kappa}) \right\} \\
\label{bound2}
&=\exp\left\{-\frac{\kappa}{64\lambda} \log(n) \right\},
\end{align}
which converges to zero as $n \to \infty$.
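As a finite-$n$ sanity check of this step (ours; parameter values illustrative), the exact fair-coin tail is dominated by the closed form in \eqref{bound2}:

```python
import math

def fair_tail(m, k):
    """Exact P{Bin(m, 1/2) >= k}."""
    return sum(math.comb(m, j) for j in range(k, m + 1)) / 2**m

lam, kappa, n = 1.0, 4.0, 300
tau = math.sqrt(n) * math.sqrt(kappa * math.log(n)) / math.sqrt(64 * lam)
closed_form = math.exp(-kappa / (64 * lam) * math.log(n))  # = n**(-kappa/(64*lam))
exact = fair_tail(2 * n, n + math.ceil(tau))
```

The gap between `exact` and `closed_form` reflects the slack of Pinsker's inequality at this scale.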
As for the probability in the second term in \eqref{ref83},
\begin{align}
\pr\left\{N(\bX_{1};0) \geq n+\sigma_{n}|N(\bX_{0};0) = n+\tau_{n} \right\},
\end{align}
we conclude from Proposition \ref{PROP_8} that for all sufficiently large $n$, the probability to update to `0' is upper-bounded as
\begin{align}
Q_{n}
&\leq \frac{1}{2} + \frac{60\tau_{n}p_{n}}{\lambda n^{1/4-\delta_{n}}} \\
&= \frac{1}{2} + \frac{60}{\lambda n^{1/4}} \cdot
\sqrt{\log(n^{\theta})} \cdot
\frac{\sqrt{n}}{\sqrt{64\lambda}}\sqrt{\log(n^{\kappa})} \cdot \frac{\lambda}{\sqrt{n}} \\
&= \frac{1}{2} + \frac{60\sqrt{\theta\kappa}\log(n)}{\sqrt{64\lambda} n^{1/4}},
\end{align}
and then, it follows from Proposition \ref{PROP_7} that
\begin{align}
&\pr\left\{N(\bX_{1};0) \geq n+\sigma_{n}|N(\bX_{0};0) = n+\tau_{n} \right\} \nn \\
&~~~~\leq \exp \left\{-2n \cdot D\left(\frac{1}{2} + \frac{\sigma_{n}}{2n} \middle\| \frac{1}{2} + \frac{60\sqrt{\theta\kappa}\log(n)}{\sqrt{64\lambda} n^{1/4}} \right) \right\} \\
&~~~~= \exp \left\{-2n \cdot D\left(\frac{1}{2} + \frac{\sqrt{\log(n^{\rho})}}{2\sqrt{64\lambda} n^{1/4}} \middle\| \frac{1}{2} + \frac{60\sqrt{\theta\kappa}\log(n)}{\sqrt{64\lambda} n^{1/4}} \right) \right\} \\
&~~~~\leq \exp \left\{-4n \cdot
\frac{1}{64\lambda \sqrt{n}}
\left(60\sqrt{\theta\kappa}\log(n)-\frac{1}{2}\sqrt{\log(n^{\rho})}\right)^{2} \right\} \\
\label{bound3}
&~~~~\leq \exp \left\{- \frac{\theta\kappa}{\lambda} \log^{2}(n) \sqrt{n} \right\}.
\end{align}
We now continue from \eqref{ref83}. Using the three bounds from \eqref{bound1}, \eqref{bound2}, and \eqref{bound3} leads to
\begin{align}
\pr\{N(\bX_{2};0) = 2n\}
&\leq (n+\sigma_{n}+1) \cdot \exp\left\{ -\frac{\rho}{256\lambda} \log(n) n^{3/2-\rho} \right\} \nn \\
&~~~~+(n+\tau_{n}+1) \cdot \exp \left\{- \frac{\theta\kappa}{\lambda} \log^{2}(n) \sqrt{n} \right\}
+ \exp\left\{-\frac{\kappa}{64\lambda} \log(n) \right\} \\
&\leq 2n \cdot \exp\left\{ -\frac{\rho}{256\lambda} \log(n) n^{3/2-\rho} \right\} \nn \\
&~~~~+2n \cdot \exp \left\{- \frac{\theta\kappa}{\lambda} \log^{2}(n) \sqrt{n} \right\}
+ \exp\left\{-\frac{\kappa}{64\lambda} \log(n) \right\},
\end{align}
which converges to zero as $n \to \infty$, hence the proof of Theorem \ref{Main_THEOREM2} is complete.
\section*{Appendix A - Proof of Proposition \ref{PROP_1}}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
Denote $N_{0}=N(\bX_{0};0)$.
We would like to prove that the random variable $|N_{0}-n|/\sqrt{n}$ is bounded away from zero with probability arbitrarily close to 1 for all sufficiently large $n$.
Note that
\begin{align}
N_{0} = \sum_{\ell=1}^{2n} I_{\ell},
\end{align}
where $I_{\ell} \sim \text{Ber}(\tfrac{1}{2})$, for all $\ell \in \{1,2,\ldots, 2n\}$, and all of these binary random variables are independent.
Let $\epsilon > 0$ and let $\alpha(\epsilon) > 0$ be a constant, to be specified later on, with the property that $\alpha(\epsilon) \xrightarrow{\epsilon \to 0} 0$.
Consider the following
\begin{align}
\pr \left\{\left|\frac{N_{0}-n}{\sqrt{n}}\right| \geq \alpha(\epsilon) \right\}
\label{ToRef18}
= \pr \left\{\left|\sqrt{2n}\left(\frac{1}{2n}\sum_{\ell=1}^{2n} I_{\ell} - \tfrac{1}{2}\right) \right| \geq \frac{\alpha(\epsilon)}{\sqrt{2}} \right\}.
\end{align}
In order to conclude that the normalized sum inside the probability in \eqref{ToRef18} converges in distribution to a normal random variable, we invoke the Lindeberg-L\'evy central limit theorem (CLT) \cite[p.\ 144, Theorem 3.4.1.]{DURRETT}. We have the following result.
\begin{theorem}[Lindeberg-L\'evy CLT]
Suppose $\{X_{n}\}$ is a sequence of IID random variables with $\mathbb{E}[X_{i}]=\mu$ and $\text{Var}[X_{i}]=\sigma^{2}<\infty$.
Let us denote $\bar{X}_{n}=\tfrac{1}{n}\sum_{i=1}^{n}X_{i}$.
Then as $n \to \infty$, the random variables $\sqrt{n}(\bar{X}_{n}-\mu)$ converge in distribution to a normal ${\cal N}(0,\sigma^{2})$.
\end{theorem}
Now, concerning the normalized sum inside the probability in \eqref{ToRef18}, it follows by Lindeberg-L\'evy CLT that
\begin{align}
\sqrt{2n}\left(\frac{1}{2n}\sum_{\ell=1}^{2n} I_{\ell} - \tfrac{1}{2}\right)
\xrightarrow{d} X \sim {\cal N}(0,\tfrac{1}{4}).
\end{align}
We continue from \eqref{ToRef18} and arrive at
\begin{align}
\lim_{n \to \infty} \pr \left\{\left|\frac{N_{0}-n}{\sqrt{n}}\right| \geq \alpha(\epsilon) \right\}
&= \pr \left\{\left|X\right| \geq \frac{\alpha(\epsilon)}{\sqrt{2}} \right\} \\
&= \pr \left\{\left|{\cal N}(0,\tfrac{1}{2})\right| \geq \alpha(\epsilon) \right\} \\
&= 1- \frac{\epsilon}{2},
\end{align}
which can be satisfied by a proper choice of $\alpha(\epsilon)$.
We conclude that for any $\epsilon > 0$, there exists some $M(\epsilon)$, such that for all $n \geq M(\epsilon)$,
\begin{align}
\pr \left\{\left|\frac{N_{0}-n}{\sqrt{n}}\right| \geq \alpha(\epsilon) \right\}
\geq 1- \epsilon,
\end{align}
which completes the proof of Proposition \ref{PROP_1}.
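The limiting law ${\cal N}(0,\tfrac{1}{2})$ of $(N_{0}-n)/\sqrt{n}$ can be illustrated by simulation (our sketch, not part of the proof):

```python
import random
import statistics

rng = random.Random(0)
n, trials = 200, 2000
# samples of (S - n)/sqrt(n), with S ~ Bin(2n, 1/2), i.e. the law of N0
samples = [(sum(rng.randint(0, 1) for _ in range(2 * n)) - n) / n**0.5
           for _ in range(trials)]
mean = statistics.fmean(samples)
var = statistics.pvariance(samples)   # should approach Var[N(0, 1/2)] = 1/2
```

The empirical mean is close to 0 and the empirical variance close to $\tfrac{1}{2}$, matching the CLT normalization above.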
\section*{Appendix B - Proof of Proposition \ref{PROP_2}}
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
Assume that the numbers of zeros and ones are $n+\alpha\sqrt{n}$ and $n-\alpha\sqrt{n}$, respectively.
If an agent starts with a `0', then the probability to decide in favor of `0' is lower-bounded by
\begin{align}
\label{ToCall0}
Q_{n}^{0}
&=\pr\left\{\text{Bin}\left(n+\alpha\sqrt{n}-1,p_{n}\right) + 1 \geq \text{Bin}\left(n-\alpha\sqrt{n},p_{n}\right) \right\} \\
\label{ref63}
&\geq \pr\left\{\text{Bin}\left(n+\alpha\sqrt{n}-1,p_{n}\right) + \text{Bin}\left(1,p_{n}\right) \geq \text{Bin}\left(n-\alpha\sqrt{n},p_{n}\right) \right\} \\
\label{ref64}
&= \pr\left\{\text{Bin}\left(n,p_{n}\right) + \text{Bin}\left(\alpha\sqrt{n}-1,p_{n}\right) + \text{Bin}\left(1,p_{n}\right) \geq \text{Bin}\left(n-\alpha\sqrt{n},p_{n}\right) \right\} \\
\label{ref65}
&\geq \pr\left\{\text{Bin}\left(n,p_{n}\right) + \text{Bin}\left(\alpha \sqrt{n},p_{n}\right) \geq \text{Bin}\left(n,p_{n}\right) \right\} \\
&= \pr\left\{X + Z \geq Y \right\},
\end{align}
where $X,Y \sim \text{Bin}\left(n,p_{n}\right)$ and $Z \sim \text{Bin}\left(\alpha\sqrt{n},p_{n}\right)$ are three independent random variables.
The passage to \eqref{ref63} is true since $\text{Bin}\left(1,p_{n}\right) \leq 1$ with probability 1, \eqref{ref64} holds since a sum of independent binomial random variables with the same success probability is again binomial, and \eqref{ref65} follows by increasing the number of trials in the binomial random variable on the right-hand side of the inequality inside the probability in \eqref{ref64}.
It follows from the law of total probability that
\begin{align}
\pr\left\{X + Z \geq Y \right\}
\label{ref0}
&= \sum_{m=0}^{\alpha \sqrt{n}} \pr\left\{X + m \geq Y \right\} \pr\left\{Z = m \right\}.
\end{align}
As for the probability $\pr\left\{X + m \geq Y \right\}$, we have that
\begin{align}
\pr\left\{X + m \geq Y \right\}
\label{ref3}
= \pr\left\{X \geq Y \right\} + \pr\left\{X + 1 = Y \right\} + \ldots + \pr\left\{X + m = Y \right\}.
\end{align}
It follows by symmetry that
\begin{align}
1 &= \pr\{X > Y\}+\pr\{X < Y\}+\pr\{X = Y\} \\
&= 2\pr\{X > Y\}+\pr\{X = Y\},
\end{align}
or,
\begin{align}
\pr\{X > Y\} = \frac{1}{2} - \frac{1}{2} \cdot \pr\{X = Y\},
\end{align}
which implies that
\begin{align}
\pr\{X \geq Y\}
&= \pr\{X > Y\} + \pr\{X = Y\} \\
\label{ref1}
&= \frac{1}{2} + \frac{1}{2} \cdot \pr\{X = Y\}.
\end{align}
Substituting \eqref{ref1} back into \eqref{ref3} yields
\begin{align}
\pr\left\{X + m \geq Y \right\}
&= \frac{1}{2} + \frac{1}{2} \cdot \pr\{X = Y\} + \pr\left\{X + 1 = Y \right\} + \ldots + \pr\left\{X + m = Y \right\} \\
\label{Ref0}
&\geq \frac{1}{2} + \frac{1}{2} \cdot \sum_{i=0}^{m} \pr\{X + i = Y \}.
\end{align}
Lower-bounding \eqref{ref0} with \eqref{Ref0} yields
\begin{align}
Q_{n}^{0}
&\geq \sum_{m=0}^{\alpha \sqrt{n}} \pr\left\{X + m \geq Y \right\} \pr\left\{Z = m \right\} \\
&\geq \sum_{m=0}^{\alpha \sqrt{n}} \left[ \frac{1}{2} + \frac{1}{2} \cdot \sum_{i=0}^{m} \pr\{X + i = Y \} \right] \pr\left\{Z = m \right\} \\
\label{ref50}
&= \frac{1}{2} + \frac{1}{2} \cdot \sum_{m=0}^{\alpha \sqrt{n}} \left(\sum_{i=0}^{m} \pr\{X + i = Y \}\right) \pr\left\{Z = m \right\}.
\end{align}
The following result, which is proved in Appendix C, is going to be instrumental.
\begin{lemma} \label{Lemma_Binomial1}
Let $X,Y \sim \text{Bin}\left(n,p_{n}\right)$ be two independent binomial random variables with $p_{n} = \tfrac{\lambda}{\sqrt{n}}$. Then, for any $i \in \{0,1,\ldots,\lambda \sqrt{n}\}$,
\begin{align}
\pr\{X + i = Y\}
\geq \frac{\pi}{e^{4}}
\frac{\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{i^{2}}{\lambda \sqrt{n}} \right\}.
\end{align}
\end{lemma}
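Lemma \ref{Lemma_Binomial1} can be sanity-checked against an exact computation at moderate $n$ (our sketch; $n$ and $\lambda$ are illustrative):

```python
import math

def binom_pmf(m, p, k):
    return math.comb(m, k) * p**k * (1 - p)**(m - k)

def exact_match_prob(n, lam, i):
    """Exact P{X + i = Y} for independent X, Y ~ Bin(n, lam/sqrt(n))."""
    p = lam / math.sqrt(n)
    return sum(binom_pmf(n, p, l) * binom_pmf(n, p, l + i)
               for l in range(0, n - i + 1))

def lemma_lower_bound(n, lam, i):
    return ((math.pi / math.e**4) * math.exp(-4 / lam) / (lam * n**0.25)
            * math.exp(-i**2 / (lam * math.sqrt(n))))
```

At, e.g., $n=400$ and $\lambda=1$, the exact probability exceeds the lemma's lower bound for every $i$ in the stated range, typically by a large margin (the bound is not tight).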
Recall that $Z \sim \text{Bin}\left(\alpha\sqrt{n},\tfrac{\lambda}{\sqrt{n}}\right)$ and thus
\begin{align}
\mu_{Z} &= \mathbb{E}[Z] = \alpha\lambda, \\
\sigma_{Z} &= \sqrt{\text{Var}[Z]} = \sqrt{\alpha \lambda \left(1 - \frac{\lambda}{\sqrt{n}}\right)} \leq \sqrt{\alpha\lambda}.
\end{align}
Let us define $\nu_{0}=[\alpha\lambda-\sqrt{2\alpha\lambda}]_{+}$ and $\nu_{1}=\alpha\lambda+\sqrt{2\alpha\lambda}$.
Since $n$ is assumed to be large, $\alpha\lambda+\sqrt{2\alpha\lambda} \leq \alpha\sqrt{n}$ holds, and we continue to lower-bound \eqref{ref50} as
\begin{align}
Q_{n}^{0}
&\geq \frac{1}{2} + \frac{1}{2} \cdot \sum_{m=\nu_{0}}^{\nu_{1}} \left(\sum_{i=0}^{m} \pr\{X + i = Y \}\right) \pr\left\{Z = m \right\} \\
&\geq \frac{1}{2} + \sum_{m=\nu_{0}}^{\nu_{1}} \left(\sum_{i=0}^{m} \frac{\pi}{2e^{4}}
\frac{\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{i^{2}}{\lambda \sqrt{n}} \right\} \right) \pr\left\{Z = m \right\} \\
&\geq \frac{1}{2} + \sum_{m=\nu_{0}}^{\nu_{1}} \left(\sum_{i=0}^{m} \frac{\pi}{2e^{4}}
\frac{\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{m^{2}}{\lambda \sqrt{n}} \right\} \right) \pr\left\{Z = m \right\} \\
&= \frac{1}{2} + \sum_{m=\nu_{0}}^{\nu_{1}} \frac{\pi}{2e^{4}}
\frac{\exp\{-4/\lambda\}(m+1)}{\lambda n^{1/4}} \exp \left\{ - \frac{m^{2}}{\lambda \sqrt{n}} \right\} \pr\left\{Z = m \right\} \\
\label{ref5}
&\geq \frac{1}{2} + \sum_{m=\nu_{0}}^{\nu_{1}} \frac{\pi}{2e^{4}}
\frac{\exp\{-4/\lambda\}(\nu_{0}+1)}{\lambda n^{1/4}} \exp \left\{ - \frac{\nu_{0}^{2}}{\lambda \sqrt{n}} \right\} \pr\left\{Z = m \right\} \\
\label{ref6}
&= \frac{1}{2} + \frac{\pi}{2e^{4}}
\frac{\exp\{-4/\lambda\}(\nu_{0}+1)}{\lambda n^{1/4}} \exp \left\{ - \frac{\nu_{0}^{2}}{\lambda \sqrt{n}} \right\} \pr\left\{\nu_{0} \leq Z \leq \nu_{1} \right\},
\end{align}
where \eqref{ref5} is true since the function $g(t) = (t+1) \exp\left\{-\tfrac{t^{2}}{\lambda \sqrt{n}}\right\}$ is monotonically increasing as long as $t \in \left[0, \tfrac{\sqrt{1+2\lambda \sqrt{n}}-1}{2}\right]$ and since $\sqrt{n}$ is assumed to be large, it then follows that $g(t)$ is monotonically increasing in the entire range $\left[[\alpha\lambda-\sqrt{2\alpha\lambda}]_{+},\alpha\lambda+\sqrt{2\alpha\lambda}\right]$.
As for the probability in \eqref{ref6}, we split into two cases. If $\nu_{0}>0$,
\begin{align}
\pr\left\{\nu_{0} \leq Z \leq \nu_{1} \right\}
&= \pr\left\{\alpha\lambda-\sqrt{2\alpha\lambda} \leq Z \leq \alpha\lambda+\sqrt{2\alpha\lambda} \right\} \\
&\geq \pr\left\{\mu_{Z} - \sqrt{2}\sigma_{Z} \leq Z \leq \mu_{Z} + \sqrt{2}\sigma_{Z} \right\} \\
&= \pr\left\{ |Z - \mu_{Z}| \leq \sqrt{2}\sigma_{Z} \right\} \\
\label{ref7}
&\geq \frac{1}{2},
\end{align}
where \eqref{ref7} is due to Chebyshev's inequality.
Otherwise, if $\nu_{0} = 0$, which is equivalent to $\alpha\lambda\leq \sqrt{2\alpha\lambda}$, we have that
\begin{align}
\pr\left\{Z \geq \nu_{1}\right\}
&\leq \frac{\mathbb{E}[Z]}{\nu_{1}} \\
&= \frac{\alpha\lambda}{\alpha\lambda + \sqrt{2\alpha\lambda}} \\
&\leq \frac{\alpha\lambda}{\alpha\lambda + \alpha\lambda} \\
&= \frac{1}{2},
\end{align}
and then,
\begin{align}
\pr\left\{\nu_{0} \leq Z \leq \nu_{1} \right\}
&= \pr\left\{0 \leq Z \leq \nu_{1} \right\} \\
&\geq 1 - \pr\left\{Z \geq \nu_{1} \right\} \\
\label{ref8}
&\geq \frac{1}{2}.
\end{align}
Lower-bounding the probability in \eqref{ref6} by $\tfrac{1}{2}$, we finally arrive at
\begin{align}
Q_{n}^{0}
&\geq \frac{1}{2} + \frac{\pi}{4e^{4}}
\frac{([\alpha\lambda-\sqrt{2\alpha\lambda}]_{+}+1)\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{([\alpha\lambda-\sqrt{2\alpha\lambda}]_{+})^{2}}{\lambda \sqrt{n}} \right\} \\
&\geq \frac{1}{2} + \frac{\pi}{8e^{4}}
\frac{([\alpha\lambda-\sqrt{2\alpha\lambda}]_{+}+1)\exp\{-4/\lambda\}}{\lambda n^{1/4}},
\end{align}
where the last inequality is, again, due to the fact that $\sqrt{n}$ is assumed to be large. The proof of Proposition \ref{PROP_2} is now complete.
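The final lower bound of Appendix B can be compared with an exact evaluation of \eqref{ToCall0} at moderate $n$ (our sketch; $n$, $\lambda$, $\alpha$ are illustrative):

```python
import math

def binom_pmf(m, p, k):
    # log-space evaluation to avoid overflow for large m
    return math.exp(math.lgamma(m + 1) - math.lgamma(k + 1)
                    - math.lgamma(m - k + 1)
                    + k * math.log(p) + (m - k) * math.log(1 - p))

def q_zero_exact(n, lam, alpha):
    """Exact P{Bin(n + d - 1, p) + 1 >= Bin(n - d, p)}, d = alpha*sqrt(n)."""
    p = lam / math.sqrt(n)
    d = int(alpha * math.sqrt(n))
    m0, m1 = n + d - 1, n - d
    tail0 = [0.0] * (m0 + 2)          # tail0[k] = P{Bin(m0, p) >= k}
    for k in range(m0, -1, -1):
        tail0[k] = tail0[k + 1] + binom_pmf(m0, p, k)
    return sum(binom_pmf(m1, p, b) * tail0[max(b - 1, 0)]
               for b in range(m1 + 1))

def q_zero_lower_bound(n, lam, alpha):
    nu0 = max(alpha * lam - math.sqrt(2 * alpha * lam), 0.0)
    return 0.5 + (math.pi / (8 * math.e**4)) * (nu0 + 1) \
        * math.exp(-4 / lam) / (lam * n**0.25)
```

At $n=400$, $\lambda=\alpha=1$, the exact probability is well above $\tfrac{1}{2}$, while the proven bound exceeds $\tfrac{1}{2}$ only by a term of order $n^{-1/4}$, as expected.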
\section*{Appendix C - Proof of Lemma \ref{Lemma_Binomial1}}
\renewcommand{\theequation}{C.\arabic{equation}}
\setcounter{equation}{0}
The probability $\pr\{X + i = Y \}$ can be written explicitly as
\begin{align}
\pr\{X + i = Y \} = \sum_{\ell=0}^{n} \sum_{k=0}^{n}
\binom{n}{\ell} p_{n}^{\ell} (1-p_{n})^{n-\ell}
\binom{n}{k} p_{n}^{k} (1-p_{n})^{n-k} \mathbbm{1} \{\ell + i = k\},
\end{align}
which implies that
\begin{align}
\label{ToCont0}
\pr\{X + i = Y \} \geq \sum_{\ell=1}^{n-1} \sum_{k=1}^{n-1}
\binom{n}{\ell} p_{n}^{\ell} (1-p_{n})^{n-\ell}
\binom{n}{k} p_{n}^{k} (1-p_{n})^{n-k} \mathbbm{1} \{\ell + i = k\}.
\end{align}
We continue by lower-bounding the PMF of the binomial random variable $X \sim \text{Bin}(n,p)$, which is given by
\begin{align}
\label{TermToCall4}
P_{X}(k) = \binom{n}{k} p^{k} (1-p)^{n-k},~~~k \in [0:n].
\end{align}
In order to lower-bound the binomial coefficient in \eqref{TermToCall4}, we use Stirling's bounds in
\begin{align}
\label{Stirling}
\sqrt{2 \pi n} \cdot n^{n} \cdot e^{-n}
\leq n!
\leq e \sqrt{n} \cdot n^{n} \cdot e^{-n},~~n \geq 1,
\end{align}
and get that
\begin{align}
\binom{n}{k}
&= \frac{n!}{k! \cdot (n-k)!} \\
\label{TermToCall5}
&\geq \frac{\sqrt{2\pi}}{e^{2}} \sqrt{\frac{n}{k(n-k)}}
\exp \left\{ -n \left[\frac{k}{n} \log \left(\frac{k}{n}\right)
+ \left(1-\frac{k}{n}\right) \log \left(1-\frac{k}{n}\right) \right] \right\}.
\end{align}
Substituting \eqref{TermToCall5} back into \eqref{TermToCall4} yields that for any $k=1,2,\ldots,n-1$
\begin{align}
P_{X}(k)
&\geq \frac{\sqrt{2\pi}}{e^{2}} \sqrt{\frac{n}{k(n-k)}}
\exp \left\{ -n D\left(\frac{k}{n} \middle\| p \right) \right\} \\
\label{ToRef4}
&\geq \frac{\sqrt{2\pi}}{e^{2}} \sqrt{\frac{1}{k}}
\exp \left\{ -n D\left(\frac{k}{n} \middle\| p \right) \right\},
\end{align}
where $D(\alpha \| \beta)$, for $\alpha,\beta \in [0,1]$, is defined in \eqref{DEF_Bin_DIVERGENCE}.
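Stirling's bounds in \eqref{Stirling} are easy to verify numerically (a quick check, not part of the proof):

```python
import math

def stirling_lower(n):
    return math.sqrt(2 * math.pi * n) * n**n * math.exp(-n)

def stirling_upper(n):
    return math.e * math.sqrt(n) * n**n * math.exp(-n)
```

For every tested $n$, $n!$ falls between the two bounds; the lower bound is the tighter of the two, since $e > \sqrt{2\pi}$.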
Substituting twice the lower bound of \eqref{ToRef4} into \eqref{ToCont0}, we arrive at
\begin{align}
\pr\{X + i = Y \}
&\geq
\frac{2\pi}{e^{4}}
\sum_{\ell=1}^{n-1} \sum_{k=1}^{n-1}
\sqrt{\frac{1}{\ell k}}
\exp \left\{ -n D\left(\frac{\ell}{n} \middle\| p_{n} \right) \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~\times
\exp \left\{ -n D\left(\frac{k}{n} \middle\| p_{n} \right) \right\} \mathbbm{1} \{\ell + i = k\} \\
&=
\frac{2\pi}{e^{4}}
\sum_{\ell=1}^{n-i-1}
\sqrt{\frac{1}{\ell (\ell+i)}}
\exp \left\{ -n D\left(\frac{\ell}{n} \middle\| p_{n} \right) \right\} \cdot
\exp \left\{ -n D\left(\frac{\ell + i}{n} \middle\| p_{n} \right) \right\} \\
\label{ToRef1}
&\geq
\frac{2\pi}{e^{4}}
\sum_{\ell=1}^{n-i-1}
\frac{1}{\ell+i}
\exp \left\{ -n D\left(\frac{\ell}{n} \middle\| p_{n} \right) \right\} \cdot
\exp \left\{ -n D\left(\frac{\ell + i}{n} \middle\| p_{n} \right) \right\}.
\end{align}
In order to lower-bound \eqref{ToRef1},
let $\epsilon_{n} = \tfrac{1}{n^{3/4}}$, for $n=1,2, \ldots$ and define the set of numbers
\begin{align}
{\cal N}_{n} = \left\{n(p_{n}-\epsilon_{n})-\tfrac{i}{2}+1,n(p_{n}-\epsilon_{n})-\tfrac{i}{2}+2,\ldots,np_{n}-\tfrac{i}{2},\ldots,n(p_{n}+\epsilon_{n})-\tfrac{i}{2}\right\},
\end{align}
whose cardinality is given by
\begin{align}
\label{Cardinality}
|{\cal N}_{n}| = 2 n \epsilon_{n}.
\end{align}
We now continue from \eqref{ToRef1} and arrive at
\begin{align}
\pr\{X + i = Y\}
&\geq \frac{2\pi}{e^{4}}
\sum_{\ell=1}^{n-i-1}
\frac{1}{\ell+i}
\exp \left\{ -n D\left(\frac{\ell}{n} \middle\| p_{n} \right) \right\} \cdot
\exp \left\{ -n D\left(\frac{\ell + i}{n} \middle\| p_{n} \right) \right\} \\
\label{ToRef2}
&\geq \frac{2\pi}{e^{4}}
\sum_{\ell \in {\cal N}_{n}}
\frac{1}{\ell+i}
\exp \left\{ -n D\left(\frac{\ell}{n} \middle\| p_{n} \right) \right\} \cdot
\exp \left\{ -n D\left(\frac{\ell + i}{n} \middle\| p_{n} \right) \right\}.
\end{align}
We upper-bound the divergence terms in \eqref{ToRef2} using a reverse Pinsker inequality. Recall that the total variation distance between two probability distributions $P$ and $Q$ is defined by
\begin{align} \label{DEF_TVD}
|P-Q| = \frac{1}{2} \sum_{x \in {\cal X}} |P(x)-Q(x)|,
\end{align}
and the Kullback-Leibler divergence is defined by
\begin{align}
D(P\|Q) = \sum_{x \in {\cal X}} P(x) \log \frac{P(x)}{Q(x)}.
\end{align}
Then, it holds that \cite[p.\ 5974, Eq.\ (23)]{SASON}
\begin{align} \label{Reverse_PINSKER}
D(P\|Q) \leq \left(\frac{2}{Q_{\mbox{\tiny min}}}\right) \cdot |P-Q|^{2},
\end{align}
where
\begin{align}
Q_{\mbox{\tiny min}} = \min_{x\in {\cal X}} Q(x).
\end{align}
Now, upper-bounding the divergences in \eqref{ToRef2} via \eqref{Reverse_PINSKER} further lower-bounds the probability as
\begin{align}
\pr\{X + i = Y\}
&\geq \frac{2\pi}{e^{4}}
\sum_{\ell \in {\cal N}_{n}}
\frac{1}{\ell+i}
\exp \left\{ -n \cdot \frac{2}{p_{n}} \cdot \left(\frac{\ell}{n} - p_{n} \right)^{2} \right\} \cdot
\exp \left\{ -n \cdot \frac{2}{p_{n}} \cdot \left(\frac{\ell + i}{n} - p_{n} \right)^{2} \right\} \\
&= \frac{2\pi}{e^{4}}
\sum_{\ell \in {\cal N}_{n}}
\frac{1}{\ell+i}
\exp \left\{ - \frac{2}{np_{n}} \cdot \left(\ell - np_{n} \right)^{2} \right\} \cdot
\exp \left\{ -\frac{2}{np_{n}} \cdot \left(\ell + i - np_{n} \right)^{2} \right\} \\
&= \frac{2\pi}{e^{4}}
\sum_{\ell \in {\cal N}_{n}}
\frac{1}{\ell+i}
\exp \left\{ - \frac{2}{np_{n}} \cdot \left[(\ell - np_{n})^{2} + (\ell + i - np_{n} )^{2} \right] \right\} \\
&= \frac{2\pi}{e^{4}}
\sum_{\ell \in {\cal N}_{n}}
\frac{1}{\ell+i}
\exp \left\{ - \frac{2}{np_{n}} \cdot \left[2\left(\ell + \frac{i}{2} - np_{n}\right)^{2} + \frac{i^{2}}{2} \right] \right\}.
\end{align}
Both the term $\ell+i$ and the magnitude of the exponent are maximized over ${\cal N}_{n}$ at $\ell = n(p_{n}+\epsilon_{n})-\tfrac{i}{2}$, and thus
\begin{align}
\pr\{X + i = Y\}
&\geq \frac{2\pi}{e^{4}}
\sum_{\ell \in {\cal N}_{n}}
\frac{1}{n(p_{n}+\epsilon_{n})+\tfrac{i}{2}}
\exp \left\{ - \frac{2}{np_{n}} \cdot \left( 2n^{2}\epsilon_{n}^{2} + \frac{i^{2}}{2} \right) \right\} \\
&= \frac{2\pi}{e^{4}}
\frac{|{\cal N}_{n}|}{n(p_{n}+\epsilon_{n})+\tfrac{i}{2}}
\exp \left\{ - \frac{2}{np_{n}} \cdot \left( 2n^{2}\epsilon_{n}^{2} + \frac{i^{2}}{2} \right) \right\} \\
&= \frac{2\pi}{e^{4}}
\frac{2n\epsilon_{n}}{n(p_{n}+\epsilon_{n})+ \tfrac{i}{2}} \exp \left\{ - \left( \frac{4n\epsilon_{n}^{2}}{p_{n}} + \frac{i^{2}}{np_{n}} \right) \right\}.
\end{align}
Note that
\begin{align}
4n \cdot \frac{1}{p_{n}} \cdot \epsilon_{n}^{2}
&= 4n \cdot \frac{\sqrt{n}}{\lambda} \cdot \left(\frac{1}{n^{3/4}}\right)^{2} \\
&= \frac{4}{\lambda}.
\end{align}
In addition, $i \leq \lambda\sqrt{n} = np_{n} \leq n(p_{n}+\epsilon_{n})$, since we assume that $i \in \{0,1,\ldots,\lambda \sqrt{n}\}$, and thus,
\begin{align}
\pr\{X + i = Y\}
&\geq \frac{2\pi}{e^{4}}
\frac{2n\epsilon_{n}}{n(p_{n}+\epsilon_{n})+ n(p_{n}+\epsilon_{n})} \exp \left\{ - \left( \frac{4}{\lambda} + \frac{i^{2}}{np_{n}} \right) \right\} \\
\label{ToRef3}
&= \frac{2\pi}{e^{4}}
\frac{\epsilon_{n}}{p_{n}+\epsilon_{n}} \exp \left\{ - \left( \frac{4}{\lambda} + \frac{i^{2}}{np_{n}} \right) \right\}.
\end{align}
As for the fraction $\epsilon_{n}/(p_{n}+\epsilon_{n})$,
\begin{align}
\frac{\epsilon_{n}}{p_{n}+\epsilon_{n}}
&= \frac{\frac{1}{n^{3/4}}}{\frac{\lambda}{\sqrt{n}}+\frac{1}{n^{3/4}}} \\
&\geq \frac{\frac{1}{n^{3/4}}}{\frac{\lambda}{\sqrt{n}}+\frac{\lambda}{\sqrt{n}}} \\
&= \frac{\sqrt{n}}{2\lambda n^{3/4}} \\
&= \frac{1}{2 \lambda n^{1/4}},
\end{align}
and substituting it back into \eqref{ToRef3} yields
\begin{align}
\pr\{X + i = Y\}
&\geq \frac{\pi}{e^{4}}
\frac{1}{\lambda n^{1/4}} \exp \left\{ - \left( \frac{4}{\lambda} + \frac{i^{2}}{\lambda \sqrt{n}} \right) \right\} \\
&= \frac{\pi}{e^{4}}
\frac{\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{i^{2}}{\lambda \sqrt{n}} \right\}.
\end{align}
The proof of Lemma \ref{Lemma_Binomial1} is complete.
\section*{Appendix D - Proof of Proposition \ref{PROP_3}}
\renewcommand{\theequation}{D.\arabic{equation}}
\setcounter{equation}{0}
Let us denote
\begin{align}
\label{DEF_PHI_EPSILON}
\phi_{n} = \frac{1}{2} +
\frac{C_{0}(\alpha,\lambda)}{n^{1/4}},~~~\epsilon_{n}=\frac{C_{0}(\alpha,\lambda)}{2n^{1/4}},
\end{align}
and let $Q_{n}^{0}, Q_{n}^{1}$ denote the probabilities of deciding `0' for an agent whose initial opinion is `0' or `1', respectively.
It follows from Proposition \ref{PROP_2} that $\min\{Q_{n}^{0},Q_{n}^{1}\} \geq \phi_{n}$.
We now prove that the probability of drawing a relatively small number of zeros tends to 0 as $n \to \infty$. Denote $N_{0}=N(\bX_{1};0)$ and consider the following for $s \geq 0$
\begin{align}
\pr \left\{N_{0} \leq 2n (\phi_{n}-\epsilon_{n}) \right\}
&= \pr \left\{e^{-sN_{0}} \geq e^{-2ns (\phi_{n}-\epsilon_{n})} \right\} \\
\label{B_ToExp3}
&\leq \frac{\mathbb{E} \left[e^{-sN_{0}}\right]}{e^{-2ns (\phi_{n}-\epsilon_{n})}},
\end{align}
where \eqref{B_ToExp3} is due to Markov's inequality. Since \eqref{B_ToExp3} holds for every $s \geq 0$, it follows that
\begin{align}
\label{ToRef15}
\pr \left\{N_{0} \leq 2n (\phi_{n}-\epsilon_{n}) \right\}
\leq \inf_{s > 0} \frac{\mathbb{E} \left[e^{-sN_{0}}\right]}{e^{-2ns (\phi_{n}-\epsilon_{n})}}.
\end{align}
Note that
\begin{align}
N_{0} = \sum_{\ell=1}^{n+\alpha\sqrt{n}} I_{\ell} + \sum_{k=1}^{n-\alpha\sqrt{n}} J_{k},
\end{align}
where $I_{\ell} \sim \text{Ber}(Q_{n}^{0})$, for all $\ell \in \{1,2,\ldots, n+\alpha\sqrt{n}\}$, $J_{k} \sim \text{Ber}(Q_{n}^{1})$, for all $k \in \{1,2,\ldots, n-\alpha\sqrt{n}\}$, and all of these binary random variables are independent.
We get that
\begin{align}
\mathbb{E}\left[e^{-s N_{0}}\right]
&= \mathbb{E}\left[\exp\left\{-s \left(\sum_{\ell=1}^{n+\alpha\sqrt{n}} I_{\ell} + \sum_{k=1}^{n-\alpha\sqrt{n}} J_{k}\right) \right\}\right] \\
&= \mathbb{E}\left[\prod_{\ell=1}^{n+\alpha\sqrt{n}} e^{-s I_{\ell}} \cdot \prod_{k=1}^{n-\alpha\sqrt{n}} e^{-s J_{k}} \right] \\
\label{B_ToExp4}
&= \prod_{\ell=1}^{n+\alpha\sqrt{n}} \mathbb{E}\left[ e^{-s I_{\ell}} \right] \cdot \prod_{k=1}^{n-\alpha\sqrt{n}} \mathbb{E}\left[ e^{-s J_{k}} \right] \\
&= \left[1+Q_{n}^{0}(e^{-s}-1)\right]^{n+\alpha\sqrt{n}} \cdot \left[1+Q_{n}^{1}(e^{-s}-1)\right]^{n-\alpha\sqrt{n}} \\
\label{B_ToExp5}
&\leq \left[1+\phi_{n}(e^{-s}-1)\right]^{n+\alpha\sqrt{n}} \cdot \left[1+\phi_{n}(e^{-s}-1)\right]^{n-\alpha\sqrt{n}} \\
\label{ToRef14}
&= \left[1+\phi_{n}(e^{-s}-1)\right]^{2n},
\end{align}
where \eqref{B_ToExp4} is due to the independence of all binary random variables and \eqref{B_ToExp5} is true since $\min\{Q_{n}^{0},Q_{n}^{1}\} \geq \phi_{n}$ and $e^{-s}-1 \leq 0$.
Substituting \eqref{ToRef14} back into \eqref{ToRef15} yields that
\begin{align}
\pr \left\{N_{0} \leq 2n (\phi_{n}-\epsilon_{n}) \right\}
&\leq \inf_{s > 0} \exp \left\{2n \log \left[1+\phi_{n}(e^{-s}-1)\right] + 2ns (\phi_{n}-\epsilon_{n})\right\} \\
\label{ToRef16}
&= \exp \left\{ 2n \cdot \inf_{s > 0} \{\log \left[1+\phi_{n}(e^{-s}-1)\right] + s (\phi_{n}-\epsilon_{n}) \}\right\}.
\end{align}
Upon defining
\begin{align}
\label{ToRef17}
g(s) = \log \left[1+\phi_{n}(e^{-s}-1)\right] + s (\phi_{n}-\epsilon_{n}),
\end{align}
we find that the solution to $g'(s)=0$ is given by
\begin{align}
s^{*} = \log \left(\frac{\phi_{n}[1-(\phi_{n}-\epsilon_{n})]}{(1-\phi_{n})(\phi_{n}-\epsilon_{n})}\right).
\end{align}
Substituting it back into \eqref{ToRef17} yields that
\begin{align}
g(s^{*})
&= \log\left(1+ \phi_{n}\left[\frac{(1-\phi_{n})(\phi_{n}-\epsilon_{n})}{\phi_{n}[1-(\phi_{n}-\epsilon_{n})]}-1\right]\right) \nn \\
&~~~~~~~~~~~~~+ (\phi_{n}-\epsilon_{n}) \log \left(\frac{\phi_{n}[1-(\phi_{n}-\epsilon_{n})]}{(1-\phi_{n})(\phi_{n}-\epsilon_{n})}\right) \\
&= \log \left(\frac{1-\phi_{n}}{1-(\phi_{n}-\epsilon_{n})}\right)
+(\phi_{n}-\epsilon_{n}) \log \left(\frac{\phi_{n}}{\phi_{n}-\epsilon_{n}}\right) \nn \\
&~~~~~~~~~~~~~+ (\phi_{n}-\epsilon_{n}) \log \left(\frac{1-(\phi_{n}-\epsilon_{n})}{1-\phi_{n}}\right) \\
&= - (\phi_{n}-\epsilon_{n}) \log \left(\frac{\phi_{n}-\epsilon_{n}}{\phi_{n}}\right)
- (1-(\phi_{n}-\epsilon_{n})) \log \left(\frac{1-(\phi_{n}-\epsilon_{n})}{1-\phi_{n}}\right) \\
\label{ToDoPinsker}
&= -D(\phi_{n}-\epsilon_{n} \| \phi_{n}).
\end{align}
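As a numerical sanity check (not part of the proof), the closed-form expression for $g(s^{*})$ derived above can be verified in Python for arbitrary toy values of $\phi_{n}$ and $\epsilon_{n}$; a grid search also confirms that $s^{*}$ is the minimizer:

```python
import math

def g(s, phi, eps):
    # g(s) = log(1 + phi*(e^{-s} - 1)) + s*(phi - eps)
    return math.log(1.0 + phi * (math.exp(-s) - 1.0)) + s * (phi - eps)

def binary_kl(p, q):
    # binary divergence D(p || q) in nats
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

phi, eps = 0.55, 0.03  # toy values standing in for phi_n and epsilon_n
s_star = math.log(phi * (1 - (phi - eps)) / ((1 - phi) * (phi - eps)))

# closed form: g(s*) = -D(phi - eps || phi)
assert abs(g(s_star, phi, eps) + binary_kl(phi - eps, phi)) < 1e-10
# s* attains the minimum of g over a grid of s > 0
grid_min = min(g(k / 1000.0, phi, eps) for k in range(1, 5000))
assert grid_min >= g(s_star, phi, eps) - 1e-9
```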
We upper-bound the expression in \eqref{ToDoPinsker} using Pinsker's inequality \cite{C1967,Kullback}, which asserts that
\begin{align} \label{PINSKER}
D(P\|Q) \geq 2 |P-Q|^{2}.
\end{align}
Thus, we arrive at
\begin{align}
\pr \left\{N_{0} \leq 2n (\phi_{n}-\epsilon_{n}) \right\}
&\leq \exp \left\{ -2n D(\phi_{n}-\epsilon_{n} \| \phi_{n}) \right\} \\
&\leq \exp \left\{ -4n \epsilon_{n}^{2} \right\}.
\end{align}
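The binary form of Pinsker's inequality applied in the last step can be checked numerically on a grid of interior points (a sketch, not part of the proof):

```python
import math

def binary_kl(p, q):
    # binary divergence D(p || q) in nats
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Pinsker: D(p || q) >= 2 (p - q)^2, checked on a grid of interior points
pinsker_holds = all(
    binary_kl(i / 100.0, j / 100.0) >= 2.0 * ((i - j) / 100.0) ** 2 - 1e-12
    for i in range(1, 100)
    for j in range(1, 100)
)
assert pinsker_holds
```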
Hence, we conclude that
\begin{align}
\pr \left\{N_{0} \geq 2n (\phi_{n}-\epsilon_{n}) \right\}
&\geq 1 - \exp\left\{-4n\epsilon_{n}^{2} \right\},
\end{align}
and by substituting the specific expressions of $\phi_{n}$ and $\epsilon_{n}$ from \eqref{DEF_PHI_EPSILON}, we arrive at
\begin{align}
\pr \left\{N_{0} \geq 2n \left(\frac{1}{2} +
\frac{C_{0}(\alpha,\lambda)}{n^{1/4}}-\frac{C_{0}(\alpha,\lambda)}{2n^{1/4}}\right) \right\}
&\geq 1 - \exp\left\{-4n \left(\frac{C_{0}(\alpha,\lambda)}{2n^{1/4}}\right)^{2} \right\},
\end{align}
or
\begin{align}
\pr \left\{N_{0} \geq n + C_{0}(\alpha,\lambda) n^{3/4} \right\}
&\geq 1 - \exp\left\{- C_{0}(\alpha,\lambda)^{2} \sqrt{n} \right\},
\end{align}
which converges to 1 as $n \to \infty$.
Proposition \ref{PROP_3} is now proved.
\section*{Appendix E - Proof of Proposition \ref{PROP_4}}
\renewcommand{\theequation}{E.\arabic{equation}}
\setcounter{equation}{0}
Assume that the numbers of zeros and ones are $n+\beta n^{\frac{3}{4}}$ and $n-\beta n^{\frac{3}{4}}$, respectively.
If an agent starts with a `0', then the probability to decide in favor of `0' is lower-bounded by
\begin{align}
Q_{n}^{0}
&=\pr\left\{\text{Bin}\left(n+\beta n^{\frac{3}{4}}-1,p_{n}\right) + 1 \geq \text{Bin}\left(n-\beta n^{\frac{3}{4}},p_{n}\right) \right\} \\
&\geq \pr\left\{\text{Bin}\left(n+\beta n^{\frac{3}{4}}-1,p_{n}\right) + \text{Bin}\left(1,p_{n}\right) \geq \text{Bin}\left(n-\beta n^{\frac{3}{4}},p_{n}\right) \right\} \\
&\geq \pr\left\{\text{Bin}\left(n,p_{n}\right) + \text{Bin}\left(\beta n^{\frac{3}{4}},p_{n}\right) \geq \text{Bin}\left(n,p_{n}\right) \right\} \\
&= \pr\left\{X + Z \geq Y \right\},
\end{align}
where $X,Y \sim \text{Bin}\left(n,p_{n}\right)$ and $Z \sim \text{Bin}\left(\beta n^{\frac{3}{4}},p_{n}\right)$.
It follows from the law of total probability that
\begin{align}
\pr\left\{X + Z \geq Y \right\}
\label{ref10}
&= \sum_{m=0}^{\beta n^{\frac{3}{4}}} \pr\left\{X + m \geq Y \right\} \pr\left\{Z = m \right\}.
\end{align}
As for the probability $\pr\left\{X + m \geq Y \right\}$, recall from \eqref{ref0} that
\begin{align}
\pr\left\{X + m \geq Y \right\}
\label{ref11}
&\geq \frac{1}{2} + \frac{1}{2} \cdot \sum_{i=0}^{m} \pr\{X + i = Y \}.
\end{align}
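Inequality \eqref{ref11} can be confirmed exactly for small toy binomials, computing both sides from the PMF (illustrative parameters; not part of the proof):

```python
import math

def binom_pmf(n, p, k):
    # P{Bin(n, p) = k}
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 30, 0.15  # toy parameters, illustrative only
pmf = [binom_pmf(n, p, k) for k in range(n + 1)]

def p_shift_eq(i):
    # P{X + i = Y} for independent X, Y ~ Bin(n, p)
    return sum(pmf[k] * pmf[k + i] for k in range(n + 1 - i))

def p_shift_ge(m):
    # P{X + m >= Y} = sum_k P{X = k} * P{Y <= k + m}
    return sum(pmf[k] * sum(pmf[: min(k + m, n) + 1]) for k in range(n + 1))

checks = [
    p_shift_ge(m) >= 0.5 + 0.5 * sum(p_shift_eq(i) for i in range(m + 1)) - 1e-12
    for m in range(6)
]
assert all(checks)
```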
Lower-bounding \eqref{ref10} with \eqref{ref11} provides
\begin{align}
Q_{n}^{0}
&\geq \sum_{m=0}^{\beta n^{\frac{3}{4}}} \pr\left\{X + m \geq Y \right\} \pr\left\{Z = m \right\} \\
&\geq \sum_{m=0}^{\beta n^{\frac{3}{4}}} \left[ \frac{1}{2} + \frac{1}{2} \cdot \sum_{i=0}^{m} \pr\{X + i = Y \} \right] \pr\left\{Z = m \right\} \\
\label{ref13}
&= \frac{1}{2} + \frac{1}{2} \cdot \sum_{m=0}^{\beta n^{\frac{3}{4}}} \sum_{i=0}^{m} \pr\{X + i = Y \} \pr\left\{Z = m \right\}.
\end{align}
Recall that $Z \sim \text{Bin}\left(\beta n^{\frac{3}{4}},\tfrac{\lambda}{\sqrt{n}}\right)$ and thus, it follows from \eqref{ToRef4} that
\begin{align} \label{Binomial_bound}
P_{Z}(m) \geq \frac{\sqrt{2\pi}}{e^{2}} \sqrt{\frac{1}{m}}
\exp \left\{ -\beta n^{\frac{3}{4}} D\left(\frac{m}{\beta n^{\frac{3}{4}}} \middle\| \frac{\lambda}{\sqrt{n}} \right) \right\}.
\end{align}
In order to lower-bound \eqref{ref13},
let $\{\delta_{n}\}_{n=1}^{\infty}$ be a sequence that converges to zero faster than $p_{n}=\tfrac{\lambda}{\sqrt{n}}$, and define the set of numbers
\begin{align}
{\cal S}_{n} = \left\{\beta n^{\frac{3}{4}}(p_{n}-\delta_{n}),\beta n^{\frac{3}{4}}(p_{n}-\delta_{n})+1,\ldots,\beta n^{\frac{3}{4}}p_{n},\ldots,\beta n^{\frac{3}{4}}(p_{n}+\delta_{n})\right\},
\end{align}
whose cardinality is given by
\begin{align}
\label{Cardinality2}
|{\cal S}_{n}| = 2 \beta n^{\frac{3}{4}} \delta_{n} + 1.
\end{align}
Continuing from \eqref{ref13},
\begin{align}
Q_{n}^{0}
&\geq \frac{1}{2} + \frac{1}{2} \cdot \sum_{m \in {\cal S}_{n}} \sum_{i=0}^{m} \pr\{X + i = Y \} \pr\left\{Z = m \right\} \\
\label{ref18}
&\geq \frac{1}{2} + \frac{1}{2} \cdot \sum_{m \in {\cal S}_{n}} \sum_{i=0}^{m} \frac{\pi}{e^{4}}
\frac{\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{i^{2}}{\lambda \sqrt{n}} \right\} \pr\left\{Z = m \right\} \\
&\geq \frac{1}{2} + \sum_{m \in {\cal S}_{n}} \sum_{i=0}^{m} \frac{\pi}{2e^{4}}
\frac{\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{m^{2}}{\lambda \sqrt{n}} \right\} \pr\left\{Z = m \right\} \\
\label{ref19}
&\geq \frac{1}{2} + \sum_{m \in {\cal S}_{n}} \frac{\pi}{2e^{4}}
\frac{m\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{m^{2}}{\lambda \sqrt{n}} \right\} \pr\left\{Z = m \right\},
\end{align}
where \eqref{ref18} is due to Lemma \ref{Lemma_Binomial1} (Appendix B) and \eqref{ref19} follows since the number of summands over $i$, which is $m+1$, is lower-bounded by $m$.
Lower-bounding \eqref{ref19} with \eqref{Binomial_bound} yields that
\begin{align}
Q_{n}^{0}
&\geq \frac{1}{2} + \sum_{m \in {\cal S}_{n}} \frac{\pi}{2e^{4}}
\frac{m\exp\{-4/\lambda\}}{\lambda n^{1/4}} \exp \left\{ - \frac{m^{2}}{\lambda \sqrt{n}} \right\} \cdot \frac{\sqrt{2\pi}}{e^{2}} \sqrt{\frac{1}{m}}
\exp \left\{ -\beta n^{\frac{3}{4}} D\left(\frac{m}{\beta n^{\frac{3}{4}}} \middle\| \frac{\lambda}{\sqrt{n}} \right) \right\} \\
\label{ref14}
&= \frac{1}{2} + \sum_{m \in {\cal S}_{n}} \frac{\pi \sqrt{2\pi}\exp\{-4/\lambda\}}{2e^{6}\lambda}
\frac{\sqrt{m}}{n^{1/4}} \exp \left\{ - \frac{m^{2}}{\lambda \sqrt{n}} \right\} \cdot \exp \left\{ -\beta n^{\frac{3}{4}} D\left(\frac{m}{\beta n^{\frac{3}{4}}} \middle\| \frac{\lambda}{\sqrt{n}} \right) \right\}.
\end{align}
Since $m \in {\cal S}_{n}$, the factor $\tfrac{\sqrt{m}}{n^{1/4}}$ is lower-bounded by
\begin{align}
\frac{\sqrt{m}}{n^{1/4}}
&\geq \frac{\sqrt{\beta n^{\frac{3}{4}}(p_{n}-\delta_{n})}}{n^{1/4}} \\
\label{ref20}
&\geq \frac{\sqrt{\beta n^{\frac{3}{4}}(p_{n}-\tfrac{1}{2}p_{n})}}{n^{1/4}} \\
&= \frac{\sqrt{\tfrac{1}{2}\beta n^{\frac{3}{4}}p_{n}}}{n^{1/4}} \\
&= \frac{\sqrt{\tfrac{1}{2}\beta n^{\frac{3}{4}}\frac{\lambda}{\sqrt{n}}}}{n^{1/4}} \\
\label{ref15}
&= \sqrt{\tfrac{1}{2}\beta\lambda} n^{-\frac{1}{8}},
\end{align}
where \eqref{ref20} holds for all $n$ sufficiently large, since $\delta_{n}$ converges to zero faster than $p_{n}$.
The first exponential factor in \eqref{ref14} attains its minimal value at the maximal value of $m$ in ${\cal S}_{n}$:
\begin{align}
\exp \left\{ -\frac{m^{2}}{\lambda \sqrt{n}} \right\}
&\geq \exp \left\{ - \frac{\left[\beta n^{\frac{3}{4}}(p_{n}+\delta_{n})\right]^{2}}{\lambda \sqrt{n}} \right\} \\
&= \exp \left\{-\frac{\beta^{2} n^{\frac{3}{2}}(p_{n}+\delta_{n})^{2}}{\lambda \sqrt{n}} \right\} \\
&= \exp \left\{-\frac{\beta^{2} }{\lambda} n (p_{n}+\delta_{n})^{2} \right\} \\
\label{ref21}
&\geq \exp \left\{-\frac{\beta^{2} }{\lambda} n (p_{n}+p_{n})^{2} \right\} \\
&= \exp \left\{-\frac{\beta^{2} }{\lambda} n \frac{4\lambda^{2}}{n} \right\} \\
\label{ref16}
&= \exp \left\{-4\beta^{2}\lambda \right\},
\end{align}
where \eqref{ref21} holds for all $n$ sufficiently large, since $\delta_{n}$ converges to zero faster than $p_{n}$.
As for the second exponential factor in \eqref{ref14}, since the binary divergence will be upper-bounded using the reverse Pinsker inequality, choosing either of the endpoints of ${\cal S}_{n}$ yields the minimal value. We get that
\begin{align}
\exp \left\{ -\beta n^{\frac{3}{4}} D\left(\frac{m}{\beta n^{\frac{3}{4}}} \middle\| \frac{\lambda}{\sqrt{n}} \right) \right\}
&\geq \exp \left\{ -\beta n^{\frac{3}{4}} D\left(\frac{\beta n^{\frac{3}{4}}(p_{n}+\delta_{n})}{\beta n^{\frac{3}{4}}} \middle\| \frac{\lambda}{\sqrt{n}} \right) \right\} \\
&= \exp \left\{ -\beta n^{\frac{3}{4}} D\left(\frac{\lambda}{\sqrt{n}}+\delta_{n} \middle\| \frac{\lambda}{\sqrt{n}} \right) \right\} \\
\label{ref22}
&\geq \exp \left\{ -\beta n^{\frac{3}{4}} \frac{2\sqrt{n}}{\lambda} \delta_{n}^{2} \right\} \\
\label{ref17}
&= \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \delta_{n}^{2} \right\},
\end{align}
where \eqref{ref22} follows from the reverse Pinsker inequality in \eqref{Reverse_PINSKER}.
Lower-bounding \eqref{ref14} using \eqref{ref15}, \eqref{ref16}, and \eqref{ref17} yields that
\begin{align}
Q_{n}^{0}
&\geq \frac{1}{2} + \sum_{m \in {\cal S}_{n}} \frac{\pi \sqrt{2\pi}\exp\{-4/\lambda\}}{2e^{6}\lambda}
\sqrt{\tfrac{1}{2}\beta\lambda} n^{-\frac{1}{8}} \cdot \exp \left\{-4\beta^{2}\lambda \right\} \cdot \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \delta_{n}^{2} \right\} \\
&= \frac{1}{2} + \sum_{m \in {\cal S}_{n}} \frac{\pi^{\frac{3}{2}} \exp\{-4/\lambda\} \sqrt{\beta}\exp \left\{-4\beta^{2}\lambda \right\}}{2e^{6}\sqrt{\lambda}} n^{-\frac{1}{8}} \cdot \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \delta_{n}^{2} \right\} \\
&= \frac{1}{2} + \frac{\pi^{\frac{3}{2}} \exp\{-4/\lambda\} \sqrt{\beta}\exp \left\{-4\beta^{2}\lambda \right\}}{2e^{6}\sqrt{\lambda}} \cdot |{\cal S}_{n}| \cdot n^{-\frac{1}{8}} \cdot \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \delta_{n}^{2} \right\} \\
&\geq \frac{1}{2} + \frac{\pi^{\frac{3}{2}} \exp\{-4/\lambda\} \sqrt{\beta}\exp \left\{-4\beta^{2}\lambda \right\}}{2e^{6}\sqrt{\lambda}} \cdot 2 \beta n^{\frac{3}{4}} \delta_{n} \cdot n^{-\frac{1}{8}} \cdot \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \delta_{n}^{2} \right\} \\
&= \frac{1}{2} + \frac{(\pi\beta)^{\frac{3}{2}} \exp\{-4/\lambda\} \exp \left\{-4\beta^{2}\lambda \right\}}{e^{6}\sqrt{\lambda}} \cdot n^{\frac{5}{8}} \delta_{n} \cdot \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \delta_{n}^{2} \right\}.
\end{align}
Let us choose $\delta_{n}=\frac{1}{n^{5/8}}$, and then
\begin{align}
Q_{n}^{0}
&\geq \frac{1}{2} + \frac{(\pi\beta)^{\frac{3}{2}} \exp\{-4/\lambda\} \exp\left\{-4\beta^{2}\lambda \right\}}{e^{6}\sqrt{\lambda}} \cdot n^{\frac{5}{8}} \frac{1}{n^{5/8}} \cdot \exp \left\{ -\frac{2\beta}{\lambda} n^{\frac{5}{4}} \frac{1}{n^{5/4}} \right\} \\
&= \frac{1}{2} + \frac{(\pi\beta)^{\frac{3}{2}} \exp\{-4/\lambda\} \exp\left\{-4\beta^{2}\lambda \right\}\exp \left\{ -2\beta/\lambda \right\}}{e^{6}\sqrt{\lambda}} \\
&= \frac{1}{2} + \frac{(\pi\beta)^{\frac{3}{2}} \exp\{-(4+2\beta)/\lambda\} \exp\left\{-4\beta^{2}\lambda\right\}}{e^{6}\sqrt{\lambda}},
\end{align}
which is strictly larger than $\tfrac{1}{2}$, for any $\beta>0$ and $\lambda>0$.
Proposition \ref{PROP_4} is now proved.
\section*{Appendix F - Proof of Proposition \ref{PROP_6}}
\renewcommand{\theequation}{F.\arabic{equation}}
\setcounter{equation}{0}
It follows from the union bound that
\begin{align}
\pr\left\{N(\bX_{1};0) < 2n \right\}
&= \pr \left\{\bigcup_{i=1}^{2n} \{\bX_{1}(i) = 1\} \right\} \\
\label{ToSubs1}
&\leq \sum_{i=1}^{2n} \pr \left\{ \bX_{1}(i) = 1 \right\}.
\end{align}
As before, let us denote $p_{n} = \tfrac{\lambda}{\sqrt{n}}$. Define the sequence $\{A_{n}\}_{n=1}^{\infty}$ by $A_{n}=\gamma n$, where $\gamma \in (0,1)$.
If an agent starts with a `0', then the probability to decide in favor of `1' is upper-bounded by
\begin{align}
\label{B_ToExp0}
&\pr\left\{\text{Bin}\left(n-A_{n},p_{n}\right) \geq \text{Bin}\left(n+A_{n}-1,p_{n}\right) +1+1 \right\} \\
\label{B_ToExp1}
&~~\leq \pr\left\{\text{Bin}\left(n-A_{n},p_{n}\right) \geq \text{Bin}\left(n+A_{n}-1,p_{n}\right) + \text{Bin}\left(1,p_{n}\right) \right\} \\
\label{ToCallA0}
&~~= \pr\left\{\text{Bin}\left(n-A_{n},p_{n}\right) \geq \text{Bin}\left(n+A_{n},p_{n}\right) \right\},
\end{align}
where the addition of the second 1 in \eqref{B_ToExp0} follows from the need to strictly break the tie in order to adopt `1' and \eqref{B_ToExp1} is due to the fact that $\text{Bin}\left(1,p_{n}\right) \leq 2$ with probability one.
If an agent starts with a `1', then the probability to decide `1' is upper-bounded by
\begin{align}
&\pr\left\{\text{Bin}\left(n-A_{n}-1,p_{n}\right) + 1 \geq \text{Bin}\left(n+A_{n},p_{n}\right) \right\} \nn \\
\label{ToCallA1}
&~~\leq \pr\left\{\text{Bin}\left(n-A_{n},p_{n}\right) + 1 \geq \text{Bin}\left(n+A_{n},p_{n}\right) \right\}.
\end{align}
Since \eqref{ToCallA1} cannot be smaller than \eqref{ToCallA0}, we continue with \eqref{ToCallA1}.
From now on, we prove that the probability in \eqref{ToCallA1}, to be denoted by $P_{n}$, converges to zero as $n \to \infty$.
Let
\begin{align}
X_{n} = \sum_{\ell=1}^{n-A_{n}} I_{\ell},~~~
Y_{n} = \sum_{k=1}^{n+A_{n}} J_{k},
\end{align}
where $I_{\ell} \sim \text{Ber}(p_{n})$, for all $\ell \in \{1,2,\ldots, n-A_{n}\}$, $J_{k} \sim \text{Ber}(p_{n})$, for all $k \in \{1,2,\ldots, n+A_{n}\}$, and all of these binary random variables are independent.
Now, for any $s > 0$,
\begin{align}
P_{n}
&= \pr \{X_{n}+1 \geq Y_{n}\} \\
&= \pr \left\{e^{s(X_{n}-Y_{n}+1)} \geq 1 \right\} \\
\label{ToCallA2}
&\leq \mathbb{E} \left[e^{s(X_{n}-Y_{n}+1)}\right],
\end{align}
where \eqref{ToCallA2} is due to Markov's inequality. We get that
\begin{align}
\mathbb{E} \left[e^{s(X_{n}-Y_{n}+1)}\right]
&= e^{s} \cdot \mathbb{E}\left[\exp\left\{s \left(\sum_{\ell=1}^{n-A_{n}} I_{\ell} - \sum_{k=1}^{n+A_{n}} J_{k}\right) \right\}\right] \\
&= e^{s} \cdot \mathbb{E}\left[\prod_{\ell=1}^{n-A_{n}} e^{s I_{\ell}} \cdot \prod_{k=1}^{n+A_{n}} e^{-s J_{k}} \right] \\
\label{ToCallA3}
&= e^{s} \cdot \prod_{\ell=1}^{n-A_{n}} \mathbb{E}\left[ e^{s I_{\ell}} \right] \cdot \prod_{k=1}^{n+A_{n}} \mathbb{E}\left[ e^{-s J_{k}} \right] \\
&= e^{s} \cdot \left[1+p_{n}(e^{s}-1)\right]^{n-A_{n}} \cdot \left[1+p_{n}(e^{-s}-1)\right]^{n+A_{n}} \\
\label{ToCallA4}
&\leq e^{s} \cdot \left[\exp\{p_{n}(e^{s}-1) \}\right]^{n-A_{n}} \cdot \left[\exp\{p_{n}(e^{-s}-1)\}\right]^{n+A_{n}} \\
\label{ToRef19}
&= \exp \left\{s +p_{n}(e^{s}-1)(n-A_{n}) +p_{n}(e^{-s}-1)(n+A_{n}) \right\},
\end{align}
where \eqref{ToCallA3} is due to the independence of all binary random variables and \eqref{ToCallA4} follows from the inequality $1+x \leq e^{x}$.
Since the bound in \eqref{ToRef19} is true for any $s > 0$, it holds in particular for the choice
\begin{align}
e^{s^{*}} = \sqrt{\frac{n+A_{n}}{n-A_{n}}}.
\end{align}
Substituting it back into \eqref{ToRef19} yields that
\begin{align}
P_{n}
&\leq \exp \left\{s^{*} +p_{n}(e^{s^{*}}-1)(n-A_{n}) +p_{n}(e^{-s^{*}}-1)(n+A_{n}) \right\} \\
&= \sqrt{\frac{n+A_{n}}{n-A_{n}}} \cdot \exp \left\{p_{n}\left(\sqrt{\frac{n+A_{n}}{n-A_{n}}}-1\right)(n-A_{n}) +p_{n}\left(\sqrt{\frac{n-A_{n}}{n+A_{n}}}-1\right)(n+A_{n}) \right\} \\
&= \sqrt{\frac{n+A_{n}}{n-A_{n}}} \cdot \exp \left\{p_{n}\left[\sqrt{(n+A_{n})(n-A_{n})}-n+A_{n}\right] \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~\times \exp \left\{p_{n}\left[\sqrt{(n-A_{n})(n+A_{n})}-n-A_{n}\right] \right\} \\
&= \sqrt{\frac{n+A_{n}}{n-A_{n}}} \cdot \exp \left\{2p_{n}\left(\sqrt{n^{2}-A_{n}^{2}}-n\right) \right\}.
\end{align}
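The Chernoff bound above can be compared against the exact probability for small toy values of $n$, $A_{n}$, and $\lambda$ (a sanity check, not part of the proof):

```python
import math

def binom_pmf(n, p, k):
    # P{Bin(n, p) = k}
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n, A = 50, 20                      # toy sizes: A_n = gamma * n with gamma = 0.4
lam = 1.0
p = lam / math.sqrt(n)             # p_n = lambda / sqrt(n)
pmf_x = [binom_pmf(n - A, p, k) for k in range(n - A + 1)]
pmf_y = [binom_pmf(n + A, p, k) for k in range(n + A + 1)]

# exact P{X_n + 1 >= Y_n} = sum_k P{X_n = k} * P{Y_n <= k + 1}
exact = sum(
    pmf_x[k] * sum(pmf_y[: min(k + 1, n + A) + 1]) for k in range(n - A + 1)
)
bound = math.sqrt((n + A) / (n - A)) * math.exp(
    2 * p * (math.sqrt(n * n - A * A) - n)
)
assert exact <= bound
```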
Consider the following:
\begin{align}
\sqrt{n^{2}-A_{n}^{2}}-n
&= \sqrt{n^{2} \left(1-\frac{A_{n}^{2}}{n^{2}}\right)}-n \\
&= n \sqrt{1-\frac{A_{n}^{2}}{n^{2}}}-n \\
\label{ToCallA5}
&\leq n \left(1-\frac{A_{n}^{2}}{2n^{2}}\right)-n \\
&= -\frac{A_{n}^{2}}{2n},
\end{align}
where \eqref{ToCallA5} follows from the inequality $\sqrt{1-t} \leq 1-t/2$. Continuing from \eqref{ToSubs1}, we arrive at
\begin{align}
\pr\left\{N(\bX_{1};0) < 2n \right\}
&\leq 2n \sqrt{\frac{n+A_{n}}{n-A_{n}}} \cdot \exp \left\{-p_{n} \cdot \frac{A_{n}^{2}}{n} \right\},
\end{align}
and specifically, since we defined $A_{n} = \gamma n$, $\gamma \in (0,1)$, we arrive at
\begin{align}
\pr\left\{N(\bX_{1};0) < 2n \right\}
&\leq 2n \sqrt{\frac{n+\gamma n}{n-\gamma n}} \cdot \exp \left\{- \frac{\lambda}{\sqrt{n}} \cdot \frac{\gamma^{2} n^{2}}{n} \right\} \\
&= 2n \sqrt{\frac{1+\gamma}{1-\gamma}} \cdot \exp \left\{-\lambda \gamma^{2} \sqrt{n} \right\} \xrightarrow{n \to \infty} 0,
\end{align}
which completes the proof of Proposition \ref{PROP_6}.
\section*{Appendix G - Proof of Proposition \ref{PROP_7}}
\renewcommand{\theequation}{G.\arabic{equation}}
\setcounter{equation}{0}
Let us denote $N=N(\bX_{1};0)$.
For any $\mu \geq 0$, it follows from Markov's inequality that
\begin{align}
\pr \left\{N \geq n + B_{n} \right\}
&= \pr \left\{e^{\mu N} \geq e^{\mu(n + B_{n})} \right\} \\
\label{D_ToRef1}
&\leq \frac{\mathbb{E}\left[e^{\mu N}\right]}{e^{\mu(n + B_{n})}},
\end{align}
and thus, since \eqref{D_ToRef1} holds for every $\mu \geq 0$, it follows that
\begin{align}
\label{ToRef9}
\pr \left\{N \geq n + B_{n} \right\}
&\leq \inf_{\mu > 0} \frac{\mathbb{E}\left[e^{\mu N}\right]}{e^{\mu(n + B_{n})}}.
\end{align}
Note that
\begin{align}
N = \sum_{m=1}^{n+A_{n}} I_{m} + \sum_{m=1}^{n-A_{n}} J_{m},
\end{align}
where $I_{m} \sim \text{Ber}(P_{n,0})$, for all $m \in \{1,2,\ldots, n+A_{n}\}$, and $J_{m} \sim \text{Ber}(P_{n,1})$, for all $m \in \{1,2,\ldots, n-A_{n}\}$, and all of these binary random variables are independent.
We get that
\begin{align}
\mathbb{E}\left[e^{\mu N}\right]
&= \mathbb{E}\left[\exp\left\{\mu \left(\sum_{m=1}^{n+A_{n}} I_{m} + \sum_{m=1}^{n-A_{n}} J_{m}\right) \right\}\right] \\
&= \mathbb{E}\left[\prod_{m=1}^{n+A_{n}} e^{\mu I_{m}} \cdot \prod_{m=1}^{n-A_{n}} e^{\mu J_{m}} \right] \\
\label{D_ToRef2}
&= \prod_{m=1}^{n+A_{n}} \mathbb{E}\left[ e^{\mu I_{m}} \right] \cdot \prod_{m=1}^{n-A_{n}} \mathbb{E}\left[ e^{\mu J_{m}} \right] \\
&= \left(1-P_{n,0} + P_{n,0}e^{\mu}\right)^{n+A_{n}} \cdot \left(1-P_{n,1} + P_{n,1}e^{\mu}\right)^{n-A_{n}} \\
\label{D_ToRef3}
&= \left[1+P_{n,0}(e^{\mu}-1)\right]^{n+A_{n}} \cdot \left[1+P_{n,1}(e^{\mu}-1)\right]^{n-A_{n}} \\
\label{D_ToRef5}
&\leq \left[1+P_{n}(e^{\mu}-1)\right]^{n+A_{n}} \cdot \left[1+P_{n}(e^{\mu}-1)\right]^{n-A_{n}} \\
\label{D_ToRef6}
&= \left[1+P_{n}(e^{\mu}-1)\right]^{2n},
\end{align}
where \eqref{D_ToRef2} is due to the independence of all binary random variables and \eqref{D_ToRef5} follows from the fact that the probability to update to `0' is upper-bounded by $P_{n}$.
Substituting \eqref{D_ToRef6} back into \eqref{ToRef9} yields that
\begin{align}
\pr \left\{N \geq n + B_{n} \right\}
&\leq \inf_{\mu > 0} \frac{\left[1+P_{n}(e^{\mu}-1)\right]^{2n}}{\exp\{\mu(n + B_{n})\}} \\
&= \inf_{\mu > 0} \exp \left\{2n \log \left[1+P_{n}(e^{\mu}-1)\right] - \mu(n + B_{n})\right\} \\
\label{ToRef10}
&= \exp \left\{\inf_{\mu > 0} \{2n \log \left[1+P_{n}(e^{\mu}-1)\right]-\mu(n + B_{n}) \} \right\}.
\end{align}
Upon defining
\begin{align}
f(\mu) = 2n \log \left[1+P_{n}(e^{\mu}-1)\right]-\mu(n + B_{n}),
\end{align}
we find that the solution to $f'(\mu)=0$ is given by
\begin{align}
\mu^{*}
= \log \left(\frac{\left(\frac{1}{2} + \frac{B_{n}}{2n}\right)\cdot(1-P_{n})}{\left(\frac{1}{2} - \frac{B_{n}}{2n}\right)\cdot P_{n}}\right).
\end{align}
Substituting it back into \eqref{ToRef10} provides that
\begin{align}
\pr \left\{N \geq n + B_{n} \right\}
&\leq \exp \left\{2n \log \left[1+P_{n}(e^{\mu^{*}}-1)\right]-\mu^{*}(n + B_{n}) \right\} \\
&= \exp \left\{-2n \cdot \left[\left(\frac{1}{2} + \frac{B_{n}}{2n}\right)\log\frac{\frac{1}{2} + \frac{B_{n}}{2n}}{P_{n}} + \left(\frac{1}{2} - \frac{B_{n}}{2n}\right)\log\frac{\frac{1}{2} - \frac{B_{n}}{2n}}{1-P_{n}} \right] \right\} \\
&= \exp \left\{-2n \cdot D\left(\frac{1}{2} + \frac{B_{n}}{2n} \middle\| P_{n} \right) \right\},
\end{align}
which completes the proof of Proposition \ref{PROP_7}.
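As a numerical illustration (not part of the proof), this Chernoff--Hoeffding-type bound can be checked exactly for the homogeneous case $P_{n,0}=P_{n,1}=P_{n}$, where $N \sim \text{Bin}(2n, P_{n})$, with toy values:

```python
import math

def binary_kl(p, q):
    # binary divergence D(p || q) in nats
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

n, B, P = 40, 10, 0.4  # toy values; homogeneous case P_{n,0} = P_{n,1} = P
# exact tail P{Bin(2n, P) >= n + B}
tail = sum(
    math.comb(2 * n, k) * P**k * (1 - P) ** (2 * n - k)
    for k in range(n + B, 2 * n + 1)
)
bound = math.exp(-2 * n * binary_kl(0.5 + B / (2 * n), P))
assert tail <= bound
```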
\section*{Appendix H - Proof of Proposition \ref{PROP_8}}
\renewcommand{\theequation}{H.\arabic{equation}}
\setcounter{equation}{0}
Assume that the numbers of zeros and ones are $n+\psi_{n}$ and $n-\psi_{n}$, respectively.
If an agent starts with a `0', then the probability to decide in favor of `0' is upper-bounded by
\begin{align}
&\pr\left\{\text{Bin}\left(n+\psi_{n}-1,p_{n}\right) + 1 \geq \text{Bin}\left(n-\psi_{n},p_{n}\right) \right\} \\
\label{ref40}
&~~~\leq \pr\left\{\text{Bin}\left(n+\psi_{n},p_{n}\right) + 1 \geq \text{Bin}\left(n-\psi_{n},p_{n}\right) \right\}.
\end{align}
If an agent starts with a `1', then the probability to decide in favor of `0' is upper-bounded by
\begin{align}
&\pr\left\{\text{Bin}\left(n+\psi_{n},p_{n}\right) \geq \text{Bin}\left(n-\psi_{n}-1,p_{n}\right) + 1 + 1 \right\} \\
&~~~\leq \pr\left\{\text{Bin}\left(n+\psi_{n},p_{n}\right) \geq \text{Bin}\left(n-\psi_{n}-1,p_{n}\right) + \text{Bin}\left(1,p_{n}\right) \right\} \\
\label{ref41}
&~~~= \pr\left\{\text{Bin}\left(n+\psi_{n},p_{n}\right) \geq \text{Bin}\left(n-\psi_{n},p_{n}\right) \right\}.
\end{align}
Since \eqref{ref41} cannot be larger than \eqref{ref40}, we continue with \eqref{ref40}.
From now on, we upper-bound the probability in \eqref{ref40}, to be denoted by $P_{n}$.
Note that
\begin{align}
P_{n}
&= \pr\left\{\text{Bin}\left(n+\psi_{n},p_{n}\right) + 1 \geq \text{Bin}\left(n-\psi_{n},p_{n}\right) \right\} \\
&= \pr\left\{\text{Bin}\left(n-\psi_{n},p_{n}\right) + \text{Bin}\left(2\psi_{n},p_{n}\right) + 1 \geq \text{Bin}\left(n-\psi_{n},p_{n}\right) \right\} \\
&= \pr\left\{X + Z + 1 \geq Y \right\},
\end{align}
where $X,Y \sim \text{Bin}\left(n-\psi_{n},p_{n}\right)$ and $Z \sim \text{Bin}\left(2\psi_{n},p_{n}\right)$.
It follows from the law of total probability that
\begin{align}
\pr\left\{X + Z + 1 \geq Y \right\}
\label{ref44}
&= \sum_{m=0}^{2\psi_{n}} \pr\left\{X + m + 1 \geq Y \right\} \pr\left\{Z = m \right\}.
\end{align}
As for the probability $\pr\left\{X + m + 1 \geq Y \right\}$, we have that
\begin{align}
\pr\left\{X + m + 1 \geq Y \right\}
\label{ref42}
= \pr\left\{X \geq Y \right\} + \pr\left\{X + 1 = Y \right\} + \ldots + \pr\left\{X + m + 1 = Y \right\}.
\end{align}
Substituting \eqref{ref1} back into \eqref{ref42} yields
\begin{align}
&\pr\left\{X + m + 1 \geq Y \right\} \nn \\
&~~~~~~~~= \frac{1}{2} + \frac{1}{2} \cdot \pr\{X = Y\} + \pr\left\{X + 1 = Y \right\} + \ldots + \pr\left\{X + m + 1 = Y \right\} \\
\label{ref43}
&~~~~~~~~\leq \frac{1}{2} + \sum_{i=0}^{m+1} \pr\{X + i = Y \}.
\end{align}
As for the summands in \eqref{ref43}, we have that
\begin{align}
\pr\left\{X + i = Y \right\}
&= \sum_{\ell=0}^{N-i} \pr\{X=\ell\} \cdot \pr\{Y=\ell+i\} \\
\label{ToExp0}
&\leq \sqrt{\sum_{\ell=0}^{N-i} \left(\pr\{X=\ell\}\right)^{2}}
\sqrt{\sum_{\ell=0}^{N-i} \left(\pr\{Y=\ell+i\}\right)^{2}} \\
&= \sqrt{\sum_{\ell=0}^{N-i} \left(\pr\{X=\ell\}\right)^{2}}
\sqrt{\sum_{\ell=i}^{N} \left(\pr\{Y=\ell\}\right)^{2}} \\
&\leq \sqrt{\sum_{\ell=0}^{N} \left(\pr\{X=\ell\}\right)^{2}}
\sqrt{\sum_{\ell=0}^{N} \left(\pr\{Y=\ell\}\right)^{2}} \\
&=\sum_{\ell=0}^{N} \left(\pr\{X=\ell\}\right)^{2} \\
&=\sum_{\ell=0}^{N} \pr\{X=\ell\} \cdot \pr\{Y=\ell\}\\
\label{ref46}
&= \pr\{X=Y\},
\end{align}
where \eqref{ToExp0} follows from the Cauchy-Schwarz inequality. Substituting \eqref{ref46} back into \eqref{ref43} yields that
\begin{align}
\pr\left\{X + m + 1 \geq Y \right\}
&\leq \frac{1}{2} + \sum_{i=0}^{m+1} \pr\{X = Y\} \\
\label{ref47}
&= \frac{1}{2} + (m+2)\pr\{X = Y\}.
\end{align}
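The fact that $\pr\{X+i=Y\}$ is maximized at $i=0$, which the Cauchy--Schwarz argument above establishes, can be confirmed exactly for toy binomials (not part of the proof):

```python
import math

def binom_pmf(n, p, k):
    # P{Bin(n, p) = k}
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 25, 0.3  # toy parameters, illustrative only
pmf = [binom_pmf(n, p, k) for k in range(n + 1)]
# P{X + i = Y} for independent X, Y ~ Bin(n, p), for every shift i
p_eq = [sum(pmf[k] * pmf[k + i] for k in range(n + 1 - i)) for i in range(n + 1)]
is_max = all(p_eq[i] <= p_eq[0] + 1e-15 for i in range(n + 1))
assert is_max
```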
Upper-bounding \eqref{ref44} with \eqref{ref47} yields
\begin{align}
P_{n}
&= \sum_{m=0}^{2\psi_{n}} \pr\left\{X + m + 1 \geq Y \right\} \pr\left\{Z = m \right\} \\
&\leq \sum_{m=0}^{2\psi_{n}} \left[ \frac{1}{2} + (m+2)\pr\{X = Y\} \right] \pr\left\{Z = m \right\} \\
&= \frac{1}{2} + \sum_{m=0}^{2\psi_{n}} (m+2)\pr\{X = Y\} \pr\left\{Z = m \right\} \\
&= \frac{1}{2} + \pr\{X = Y\}(\mathbb{E}[Z]+2) \\
\label{ref45}
&= \frac{1}{2} + \pr\{X = Y\}(2\psi_{n}p_{n}+2).
\end{align}
The following result, which is proved in Appendix I, is analogous to Lemma \ref{Lemma_Binomial1} in Appendix B.
\begin{lemma} \label{Lemma_Binomial2}
Let $X,Y \sim \text{Bin}\left(N,p_{n}\right)$, $N=n-\psi_{n}$, be two independent binomial random variables with $p_{n} = \tfrac{\lambda}{\sqrt{n}}$.
Define the sequence $\{\delta_{n}\}_{n=1}^{\infty}$ according to
\begin{align} \label{Delta_Def}
n^{\delta_{n}} = \sqrt{\log(n^{\theta})},~~\theta > 5.
\end{align}
Then, for all sufficiently large $n$,
\begin{align}
\pr\{X = Y\}
\leq \frac{15}{\lambda n^{1/4-\delta_{n}}}.
\end{align}
\end{lemma}
Continuing from \eqref{ref45}, we arrive at
\begin{align}
P_{n}
&\leq \frac{1}{2} + \frac{30(\psi_{n}p_{n}+1)}{\lambda n^{1/4-\delta_{n}}} \\
&\leq \frac{1}{2} + \frac{60\psi_{n}p_{n}}{\lambda n^{1/4-\delta_{n}}},
\end{align}
which proves Proposition \ref{PROP_8}.
\section*{Appendix I - Proof of Lemma \ref{Lemma_Binomial2}}
\renewcommand{\theequation}{I.\arabic{equation}}
\setcounter{equation}{0}
Consider the following:
\begin{align}
&\pr\{X = Y\} \nn \\
&~~~~=\sum_{\ell=0}^{N} \left(\pr\{X=\ell\}\right)^{2}\\
&~~~~=\sum_{\ell=0}^{N} \left[\binom{N}{\ell} p_{n}^{\ell} (1-p_{n})^{N-\ell} \right]^{2} \\
&~~~~= \left[\binom{N}{0} p_{n}^{0} (1-p_{n})^{N} \right]^{2}
+\sum_{\ell=1}^{N-1} \left[\binom{N}{\ell} p_{n}^{\ell} (1-p_{n})^{N-\ell} \right]^{2}
+ \left[\binom{N}{N} p_{n}^{N} (1-p_{n})^{0} \right]^{2}\\
\label{ref70}
&~~~~= (1-p_{n})^{2N}
+\sum_{\ell=1}^{N-1} \left[\binom{N}{\ell} p_{n}^{\ell} (1-p_{n})^{N-\ell} \right]^{2} + p_{n}^{2N}.
\end{align}
We continue by upper-bounding the PMF of the binomial random variable $X \sim \text{Bin}(N,p)$, which is given by
\begin{align}
\label{ref60}
P_{X}(k) = \binom{N}{k} p^{k} (1-p)^{N-k},~~~k \in [0:N].
\end{align}
In order to upper-bound the binomial coefficient in \eqref{ref60}, we use Stirling's bounds
\begin{align}
\sqrt{2 \pi n} \cdot n^{n} \cdot e^{-n}
\leq n!
\leq e \sqrt{n} \cdot n^{n} \cdot e^{-n},~~n \geq 1,
\end{align}
and get that
\begin{align}
\binom{N}{k}
&= \frac{N!}{k! \cdot (N-k)!} \\
\label{ref61}
&\leq \frac{e}{2\pi} \sqrt{\frac{N}{k(N-k)}}
\exp \left\{ -N \left[\frac{k}{N} \log \left(\frac{k}{N}\right)
+ \left(1-\frac{k}{N}\right) \log \left(1-\frac{k}{N}\right) \right] \right\}.
\end{align}
Since $e < 2\pi$, substituting \eqref{ref61} back into \eqref{ref60} yields that for any $k=1,2,\ldots,N-1$
\begin{align}
P_{X}(k)
\label{ref62}
&\leq \sqrt{\frac{N}{k(N-k)}}
\exp \left\{ -N D\left(\frac{k}{N} \middle\| p \right) \right\},
\end{align}
where $D(\alpha \| \beta)$, for $\alpha,\beta \in [0,1]$, is defined in \eqref{DEF_Bin_DIVERGENCE}.
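Stirling's bounds used above hold for every $n \geq 1$; a quick log-domain check in Python (not part of the proof):

```python
import math

def stirling_ok(n):
    # check sqrt(2*pi*n) n^n e^{-n} <= n! <= e sqrt(n) n^n e^{-n} in log domain
    log_fact = math.lgamma(n + 1)  # log(n!)
    log_lower = 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n
    log_upper = 1.0 + 0.5 * math.log(n) + n * math.log(n) - n
    return log_lower - 1e-9 <= log_fact <= log_upper + 1e-9

assert all(stirling_ok(n) for n in range(1, 200))
```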
As for the middle term in \eqref{ref70}, it follows from \eqref{ref62} that
\begin{align}
\sum_{\ell=1}^{N-1} \left[\binom{N}{\ell} p_{n}^{\ell} (1-p_{n})^{N-\ell} \right]^{2}
\label{ref71}
&\leq \sum_{\ell=1}^{N-1}
\frac{N}{\ell(N-\ell)}
\exp \left\{ -2N D\left(\frac{\ell}{N} \middle\| p_{n} \right) \right\}.
\end{align}
In order to upper-bound \eqref{ref71},
let $\epsilon_{n} = \tfrac{1}{n^{3/4-\delta_{n}}}$, $n=1,2, \ldots$, where $\delta_{n} \to 0$ as $n \to \infty$, according to its definition in \eqref{Delta_Def}. Note that $\epsilon_{n}$ converges to zero faster than $p_{n}=\tfrac{\lambda}{\sqrt{n}}$. Define the set of numbers
\begin{align}
{\cal N}_{n} = \{N(p_{n}-\epsilon_{n}),N(p_{n}-\epsilon_{n})+1,\ldots,Np_{n},\ldots,N(p_{n}+\epsilon_{n})\},
\end{align}
whose cardinality is given by
\begin{align}
\label{Cardinality3}
|{\cal N}_{n}| = 2 N \epsilon_{n} +1.
\end{align}
Denote ${\cal M}_{n} = \{1,2,\ldots,N-1\} \cap {\cal N}_{n}^{\mbox{\tiny c}}$.
For any $\ell \in {\cal M}_{n}$, it follows that
\begin{align}
D\left(\frac{\ell}{N} \middle\| p_{n} \right)
&\geq D\left(p_{n}+\epsilon_{n} \middle\| p_{n} \right)\\
\label{ref75}
&= (p_{n}+\epsilon_{n}) \log \left(\frac{p_{n}+\epsilon_{n}}{p_{n}}\right)
+ (1-p_{n}-\epsilon_{n}) \log \left(\frac{1-p_{n}-\epsilon_{n}}{1-p_{n}}\right) \\
\label{ref73}
&= \left(\frac{1}{\sqrt{n}}+\frac{1}{n^{3/4-\delta_{n}}}\right)\log\left(1+\frac{1}{n^{1/4-\delta_{n}}}\right) \nn \\
&~~~~~~+ \left(1-\frac{1}{\sqrt{n}}-\frac{1}{n^{3/4-\delta_{n}}}\right)\log\left(1-\frac{1}{n^{3/4-\delta_{n}}-n^{1/4-\delta_{n}}}\right),
\end{align}
where in \eqref{ref73} we have substituted $p_{n}=\tfrac{1}{\sqrt{n}}$, since the actual value of $\lambda$ is immaterial for the asymptotic behavior of \eqref{ref75} as $n \to \infty$.
In order to lower-bound \eqref{ref73}, let us use the facts that $\log(1+t) \geq t-\tfrac{t^{2}}{2}$ for all $t\geq0$ and $\log(1-t)\geq -t-t^{2}$ for all $t\geq0$ sufficiently small. We find that \eqref{ref73} is lower-bounded by
\begin{align}
&\left(\frac{1}{\sqrt{n}}+\frac{1}{n^{3/4-\delta_{n}}}\right)\left(\frac{1}{n^{1/4-\delta_{n}}}-\frac{1}{2n^{1/2-2\delta_{n}}}\right) \nn \\
&~~~~~- \left(1-\frac{1}{\sqrt{n}}-\frac{1}{n^{3/4-\delta_{n}}}\right)\left(\frac{1}{n^{3/4-\delta_{n}}-n^{1/4-\delta_{n}}} + \frac{1}{(n^{3/4-\delta_{n}}-n^{1/4-\delta_{n}})^{2}}\right),
\end{align}
which simplifies to
\begin{align}
&\frac{n^{2\delta_{n}}-n^{3\delta_{n}-1/4}-2n^{2\delta_{n}-1/2}+2n^{3\delta_{n}-3/4}+n^{2\delta_{n}-1}+n^{3\delta_{n}-5/4}}{2n - 4\sqrt{n} + 2} \\
&~~~\geq \frac{n^{2\delta_{n}}-n^{3\delta_{n}-1/4}-2n^{2\delta_{n}-1/2}}{2n - 4\sqrt{n} + 2} \\
&~~~\geq \frac{n^{2\delta_{n}}-\tfrac{1}{4}n^{2\delta_{n}}-\tfrac{1}{4}n^{2\delta_{n}}}{2n} \\
\label{ref72}
&~~~= \frac{n^{2\delta_{n}}}{4n} \\
&~~~\stackrel{\triangle}{=} \xi_{n}.
\end{align}
We now continue from \eqref{ref71} and arrive at
\begin{align}
&\sum_{\ell=1}^{N-1}
\frac{N}{\ell(N-\ell)}
\exp \left\{ -2N D\left(\frac{\ell}{N} \middle\| p_{n} \right) \right\} \nn \\
\label{C_ToExp2}
&~~~\leq
\sum_{\ell \in {\cal M}_{n}}
\frac{N}{\ell(N-\ell)}
\exp \left\{ -2N \xi_{n} \right\}
+
\sum_{\ell \in {\cal N}_{n}} \frac{N}{\ell(N-\ell)} \\
&~~~\leq
\sum_{\ell \in {\cal M}_{n}}
\frac{N}{(N-1)}
\exp \left\{ -2N \xi_{n} \right\}
+
\sum_{\ell \in {\cal N}_{n}} \frac{N}{N(p_{n}-\epsilon_{n})[N-N(p_{n}-\epsilon_{n})]} \\
\label{C_ToExp3}
&~~~\leq
N \exp \left\{ -2N \xi_{n} \right\}
+
\frac{2N \epsilon_{n} +1}{N(1-p_{n}+\epsilon_{n})(p_{n}-\epsilon_{n})} \\
\label{ToRef5}
&~~~\leq
N \exp \left\{ -2N \xi_{n} \right\}
+
\frac{2N \epsilon_{n} +N \epsilon_{n}}{N(1-\frac{1}{2})(p_{n}-\tfrac{1}{2}p_{n})} \\
\label{ToRef6}
&~~~\leq
N \exp \left\{ -2N \xi_{n} \right\}
+ \frac{12\epsilon_{n}}{p_{n}},
\end{align}
where \eqref{C_ToExp2} follows from the lower bound in \eqref{ref72} and from the fact that $D(\alpha\|\beta) \geq 0$ in general. The passage to \eqref{C_ToExp3} is due to the facts that $|{\cal M}_{n}| \leq N-1$ and \eqref{Cardinality3}. In \eqref{ToRef5}, we upper-bounded $1$ by $N\epsilon_{n}$ in the numerator, and in the denominator we lower-bounded $1-p_{n}+\epsilon_{n}$ by $\tfrac{1}{2}$ (since $p_{n} \leq \tfrac{1}{2}$) and $p_{n}-\epsilon_{n}$ by $\tfrac{1}{2}p_{n}$ (since $\epsilon_{n} \leq \tfrac{1}{2}p_{n}$).
Upper-bounding \eqref{ref70} with \eqref{ToRef6} yields that
\begin{align}
\pr\{X = Y\}
\leq (1-p_{n})^{2N}
+ N \exp \left\{ -2N \xi_{n} \right\}
+ \frac{12\epsilon_{n}}{p_{n}} + p_{n}^{2N}.
\end{align}
Substituting $N=n-\psi_{n}$ and using the fact that $\lim_{n \to \infty} \frac{\psi_{n}}{n} = 0$, we obtain that for all sufficiently large $n$,
\begin{align}
&\pr\{X = Y\} \nn \\
&~~~~\leq \left(1-\frac{\lambda}{\sqrt{n}}\right)^{2(n-\psi_{n})}
+ (n-\psi_{n}) \exp \left\{ -2(n-\psi_{n})\xi_{n} \right\}
+ \frac{12}{\lambda n^{1/4-\delta_{n}}} + \left(\frac{\lambda}{\sqrt{n}}\right)^{2(n-\psi_{n})} \\
&~~~~\leq \left(1-\frac{\lambda}{\sqrt{n}}\right)^{2(n-\tfrac{n}{2})}
+ n \exp \left\{ -2(n-\tfrac{n}{2})\xi_{n} \right\}
+ \frac{12}{\lambda n^{1/4-\delta_{n}}} + \left(\frac{\lambda}{\sqrt{n}}\right)^{2(n-\tfrac{n}{2})} \\
&~~~~= \exp\left\{n \cdot \log \left(1-\frac{\lambda}{\sqrt{n}}\right) \right\}
+ n \exp \left\{ -n\xi_{n} \right\}
+ \frac{12}{\lambda n^{1/4-\delta_{n}}} + \left(\frac{\lambda}{\sqrt{n}}\right)^{n}.
\end{align}
Substituting the expression of $\xi_{n}$ from \eqref{ref72} yields
\begin{align}
\label{ref74}
\pr\{X = Y\}
&\leq
\frac{12}{\lambda n^{1/4-\delta_{n}}}
+\exp\left\{n \cdot \log \left(1-\frac{\lambda}{\sqrt{n}}\right) \right\} + \left(\frac{\lambda}{\sqrt{n}}\right)^{n} + n \exp \left\{-\frac{n^{2\delta_{n}}}{4} \right\}.
\end{align}
For the specific choice $n^{\delta_{n}} = \sqrt{\log(n^{\theta})}$, the last term in \eqref{ref74} is given by
\begin{align}
n \exp \left\{-\frac{n^{2\delta_{n}}}{4} \right\}
&= n \exp \left\{-\frac{\log(n^{\theta})}{4} \right\} \\
&= \frac{1}{n^{\theta/4-1}}.
\end{align}
Hence, if $\theta > 5$, then for all sufficiently large $n$, the first term in \eqref{ref74} dominates the other three terms, and hence,
\begin{align}
\pr\{X = Y\}
\leq \frac{15}{\lambda n^{1/4-\delta_{n}}},
\end{align}
which completes the proof of Lemma \ref{Lemma_Binomial2}.
\section*{Appendix J - Proof of Proposition \ref{PROP_9}}
\renewcommand{\theequation}{J.\arabic{equation}}
\setcounter{equation}{0}
\subsubsection*{Step 1: A Simplification for the Consensus Probability}
Due to symmetry, we only analyze the case $\mathsf{I}_{0} > \mathsf{I}_{1}$.
It follows that
\begin{align}
\pr\{{\cal C}_{n}\}
&= \pr\{N(\bX_{1};0) = 2n\}\\
&= \pr \left\{\bigcap_{i=1}^{2n} \{\bX_{1}(i) = 0\} \right\} \\
&= \prod_{i=1}^{2n} \pr \left\{ \bX_{1}(i) = 0 \right\} \\
\label{ToSubs2}
&= \prod_{i=1}^{2n} \left(1-\pr\left\{ \bX_{1}(i) = 1 \right\}\right).
\end{align}
\subsubsection*{Step 2: A Lower Bound on $\pr\left\{ \bX_{1}(i) = 1 \right\}$}
If an agent starts with a `0', then the probability of deciding in favor of `1' is lower-bounded by
\begin{align}
&\pr\left\{\text{Bin}\left(n-C_{n},p_{n}\right) \geq \text{Bin}\left(n+C_{n}-1,p_{n}\right) +2 \right\} \nn \\
\label{ToCallE0}
&~~\geq \pr\left\{\text{Bin}\left(n-C_{n},p_{n}\right) \geq \text{Bin}\left(n+C_{n},p_{n}\right) + 2 \right\}.
\end{align}
If an agent starts with a `1', then the probability of deciding in favor of `1' is lower-bounded by
\begin{align}
&\pr\left\{\text{Bin}\left(n-C_{n}-1,p_{n}\right) + 1 \geq \text{Bin}\left(n+C_{n},p_{n}\right) \right\} \nn \\
&~~\geq \pr\left\{\text{Bin}\left(n-C_{n}-1,p_{n}\right) + \text{Bin}\left(1,p_{n}\right) \geq \text{Bin}\left(n+C_{n},p_{n}\right) \right\} \\
\label{ToCallE1}
&~~= \pr\left\{\text{Bin}\left(n-C_{n},p_{n}\right) \geq \text{Bin}\left(n+C_{n},p_{n}\right) \right\}.
\end{align}
Since \eqref{ToCallE0} cannot be larger than \eqref{ToCallE1}, we continue with \eqref{ToCallE0}.
From now on, we lower-bound the probability in \eqref{ToCallE0}, which we denote by $Q_{n}$.
This probability can be written explicitly as
\begin{align}
\label{ToCont10}
Q_{n} = \sum_{\ell=0}^{n-C_{n}} \sum_{k=0}^{n+C_{n}}
\binom{n-C_{n}}{\ell} p_{n}^{\ell} (1-p_{n})^{n-C_{n}-\ell}
\binom{n+C_{n}}{k} p_{n}^{k} (1-p_{n})^{n+C_{n}-k} \mathbbm{1} \{\ell \geq k+2\}.
\end{align}
In order to continue, recall from \eqref{ToRef4} that
\begin{align}
P_{X}(k)
\label{ToRef7}
&\geq \frac{\sqrt{2\pi}}{e^{2}} \sqrt{\frac{1}{k}}
\exp \left\{ -n D\left(\frac{k}{n} \middle\| p \right) \right\} ,
\end{align}
where $D(\alpha \| \beta)$, for $\alpha,\beta \in [0,1]$, is defined in \eqref{DEF_Bin_DIVERGENCE}.
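Assuming $D(\alpha\|\beta)$ in \eqref{DEF_Bin_DIVERGENCE} is the usual binary divergence $D(\alpha\|\beta)=\alpha\log\tfrac{\alpha}{\beta}+(1-\alpha)\log\tfrac{1-\alpha}{1-\beta}$ (natural logarithm), the Stirling-type lower bound \eqref{ToRef7} can be spot-checked numerically for $1\le k\le n-1$:

```python
import math

def binary_kl(a, b):
    """Binary divergence D(a||b), natural logarithm."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) pmf at k."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def pmf_lower_bound(n, k, p):
    """sqrt(2 pi)/e^2 * k^(-1/2) * exp(-n D(k/n || p)), as in (ToRef7)."""
    return (math.sqrt(2.0 * math.pi) / math.e ** 2 / math.sqrt(k)
            * math.exp(-n * binary_kl(k / n, p)))

# Spot-check over full ranges of k, including p_n of order 1/sqrt(n).
for n, p in ((50, 0.1), (100, 0.3), (200, 1.0 / math.sqrt(200))):
    for k in range(1, n):
        assert binom_pmf(n, k, p) >= pmf_lower_bound(n, k, p), (n, k, p)
```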
Substituting this lower bound twice into \eqref{ToCont10}, we arrive at
\begin{align}
\label{ToCont1}
Q_{n}
&\geq
\frac{2\pi}{e^{4}}
\sum_{\ell=0}^{n-C_{n}} \sum_{k=0}^{\ell-2}
\sqrt{\frac{1}{\ell}}
\exp \left\{ -(n-C_{n}) D\left(\frac{\ell}{n-C_{n}} \middle\| p_{n} \right) \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~\times
\sqrt{\frac{1}{k}}
\exp \left\{ -(n+C_{n}) D\left(\frac{k}{n+C_{n}} \middle\| p_{n} \right) \right\} \\
\label{ToExp7}
&\geq
\frac{8\pi}{e^{4}}
\sum_{\ell=(n-C_{n})p_{n}}^{(n+C_{n})p_{n}}
\sum_{k=0}^{\ell-2} \frac{1}{\sqrt{\ell k}}
\exp \left\{ -(n-C_{n}) D\left(\frac{\ell}{n-C_{n}} \middle\| p_{n} \right) \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times
\exp \left\{ -(n+C_{n}) D\left(\frac{k}{n+C_{n}} \middle\| p_{n} \right) \right\} \\
\label{ToExp8}
&\geq
\frac{8\pi}{e^{4}}
\sum_{\ell=(n-C_{n})p_{n}}^{(n+C_{n})p_{n}}
\sum_{k=\ell - C_{n}p_{n}}^{\ell-2}
\frac{1}{\sqrt{\ell(\ell-2)}}
\exp \left\{ -(n-C_{n}) D\left(\frac{\ell}{n-C_{n}} \middle\| p_{n} \right) \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times
\exp \left\{ -(n+C_{n}) D\left(\frac{k}{n+C_{n}} \middle\| p_{n} \right) \right\} \\
\label{ToExp9}
&\geq
\frac{8\pi}{e^{4}(n+C_{n})p_{n}}
\sum_{\ell=(n-C_{n})p_{n}}^{(n+C_{n})p_{n}}
\sum_{j=2}^{C_{n}p_{n}}
\exp \left\{ -(n-C_{n}) D\left(\frac{\ell}{n-C_{n}} \middle\| p_{n} \right) \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times
\exp \left\{ -(n+C_{n}) D\left(\frac{\ell-j}{n+C_{n}} \middle\| p_{n} \right) \right\} \\
\label{ToCont3}
&=
\frac{8\pi}{e^{4}(n+C_{n})p_{n}}
\sum_{m=0}^{2C_{n}} \sum_{j=2}^{C_{n}p_{n}}
\exp \left\{ -(n-C_{n}) D\left(\frac{(n-C_{n}+m)p_{n}}{n-C_{n}} \middle\| p_{n} \right) \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times
\exp \left\{ -(n+C_{n}) D\left(\frac{(n-C_{n}+m)p_{n}-j}{n+C_{n}} \middle\| p_{n} \right) \right\},
\end{align}
where \eqref{ToExp7} follows from the condition $\lim_{n \to \infty} C_{n}/n = 0$, which implies that for all large enough $n$, both $(n-C_{n})p_{n} \geq 0$ and $(n+C_{n})p_{n} \leq n-C_{n}$ hold.
The inequality in \eqref{ToExp8} follows from the fact that $\ell \geq C_{n}p_{n}$, for all sufficiently large $n$.
In \eqref{ToExp9} we changed the summation index from $k$ to $j$ according to $k=\ell-j$, with $j \in \{2,3,\ldots,C_{n}p_{n}\}$, and in \eqref{ToCont3} we changed the summation index from $\ell$ to $m$ according to $\ell=(n-C_{n}+m)p_{n}$, with $m \in \{0,1,\ldots,2C_{n}\}$.
In order to upper-bound the divergence terms in \eqref{ToCont3}, we use the reverse Pinsker inequality in \eqref{Reverse_PINSKER}.
Then, after some algebraic work, we arrive at
\begin{align}
Q_{n}
&\geq
\frac{8\pi}{e^{4}(n+C_{n})p_{n}}
\sum_{m=0}^{2C_{n}} \sum_{j=2}^{C_{n}p_{n}}
\exp \left\{ -(n-C_{n}) \cdot \frac{2}{p_{n}} \frac{p_{n}^{2} m^{2}}{(n-C_{n})^{2}} \right\} \nn \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times
\exp \left\{ -(n+C_{n}) \cdot \frac{2}{p_{n}} \frac{[p_{n}(2C_{n}-m)+j]^{2}}{(n+C_{n})^{2}} \right\} \\
&=
\frac{8\pi}{e^{4}(n+C_{n})p_{n}}
\sum_{m=0}^{2C_{n}} \sum_{j=2}^{C_{n}p_{n}}
\exp \left\{ - 2 \frac{p_{n} m^{2}}{n-C_{n}} \right\} \cdot
\exp \left\{ - \frac{2}{p_{n}} \frac{[p_{n}(2C_{n}-m)+j]^{2}}{n+C_{n}} \right\} \\
\label{ToExp10}
&\geq
\frac{8\pi}{e^{4}(n+C_{n})p_{n}}
\sum_{m=0}^{2C_{n}} \sum_{j=2}^{C_{n}p_{n}}
\exp \left\{ -2 \frac{p_{n}m^{2}}{n-C_{n}} \right\} \cdot
\exp \left\{ - \frac{2}{p_{n}} \frac{[p_{n}(2C_{n}-m)+2p_{n}C_{n}]^{2}}{n-C_{n}} \right\} \\
&\geq
\frac{4\pi C_{n}p_{n}}{e^{4}(n+C_{n})p_{n}}
\sum_{m=0}^{2C_{n}}
\exp \left\{ -2 \frac{p_{n}m^{2}}{n-C_{n}} \right\} \cdot
\exp \left\{-2 \frac{p_{n}(4C_{n}-m)^{2}}{n-C_{n}} \right\} \\
&=
\frac{4\pi C_{n}}{e^{4}(n+C_{n})}
\sum_{m=0}^{2C_{n}}
\exp \left\{ - 2p_{n} \cdot \frac{m^{2}+(4C_{n}-m)^{2}}{n-C_{n}} \right\} \\
\label{ToRef20}
&=
\frac{4\pi C_{n}}{e^{4}(n+C_{n})}
\sum_{m=0}^{2C_{n}}
\exp \left\{ - 4p_{n} \cdot \frac{(2C_{n}-m)^{2} + 4C_{n}^{2}}{n-C_{n}} \right\},
\end{align}
where \eqref{ToExp10} is true since $n-C_{n} \leq n+C_{n}$ and since $j$ is upper-bounded by $2p_{n}C_{n}$.
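The divergence bounds applied after \eqref{ToCont3} have the form $D(\alpha\|\beta)\le 2(\alpha-\beta)^{2}/\beta$. Assuming \eqref{Reverse_PINSKER} is this standard form, which for $\beta\le\tfrac12$ follows from the $\chi^{2}$ bound $D(\alpha\|\beta)\le(\alpha-\beta)^{2}/[\beta(1-\beta)]$, it can be verified numerically on a grid:

```python
import math

def binary_kl(a, b):
    """Binary KL divergence D(a||b), natural logarithm."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

# Check D(a||b) <= 2*(a-b)^2/b for b <= 1/2; this is the reverse-Pinsker-type
# bound used above, a consequence of D <= chi^2 and 1/(1-b) <= 2.
for i in range(1, 100):
    a = i / 100.0
    for j in range(1, 50):
        b = j / 100.0
        assert binary_kl(a, b) <= 2 * (a - b) ** 2 / b, (a, b)
```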
Now, the magnitude of the exponent in \eqref{ToRef20} is maximized at $m=0$, so each summand is lower-bounded by its value at $m=0$, and thus
\begin{align}
Q_{n}
&\geq \frac{4\pi C_{n}}{e^{4}(n+C_{n})}
\sum_{m=0}^{2C_{n}}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\} \\
&\geq \frac{8\pi C_{n}^{2}}{e^{4}(n+C_{n})}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\}.
\end{align}
\subsubsection*{Step 3: Wrapping Up}
Continuing from \eqref{ToSubs2}, we finally arrive at
\begin{align}
\pr\{{\cal C}_{n}\}
&\leq \prod_{i=1}^{2n} \left(1-\frac{8\pi C_{n}^{2}}{e^{4}(n+C_{n})}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\}\right) \\
&= \left(1 - \frac{8\pi C_{n}^{2}}{e^{4}(n+C_{n})}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\} \right)^{2n} \\
&= \exp\left\{ 2n \cdot \log\left(1 - \frac{8\pi C_{n}^{2}}{e^{4}(n+C_{n})}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\} \right) \right\} \\
\label{ToExp6}
&\leq \exp\left\{ - \frac{16\pi n C_{n}^{2}}{e^{4}(n+C_{n})}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\} \right\} \\
&\leq \exp\left\{ -\frac{n C_{n}^{2}}{2(n+C_{n})}
\exp \left\{ - 32p_{n} \cdot \frac{C_{n}^{2}}{n-C_{n}} \right\} \right\},
\end{align}
where \eqref{ToExp6} is due to $\log(1-y) \leq -y$.
This completes the proof of Proposition \ref{PROP_9}.
\section*{Acknowledgment}
The author is grateful to Yonatan Shadmi (Technion) for reading the manuscript and providing some valuable comments, and especially for supplying the motivation to prove Theorem \ref{Main_THEOREM2}.
\section{Introduction}
The distribution of neutral and ionized gas in the circumgalactic
environment of galaxies is known to be an important indicator of
the past and present evolution of galaxies. Both the infall of
metal-poor gas from intergalactic space and from satellite galaxies
and the outflow of metal-rich gaseous material through galactic winds
represent key phenomena that determine the spatial distribution and
the physical state of the circumgalactic gas around massive galaxies.
From observations and theoretical studies, it is known that galaxy
interactions between gas-rich galaxies can transport large
amounts of neutral and ionized gas into the circumgalactic environment of
galaxies. In the local Universe, the most massive of these extended tidal
gas features can be observed in the 21\,cm line of neutral hydrogen (H\,{\sc i}).
The most prominent nearby example of a tidal gas stream produced by
the interaction of galaxies is the Magellanic Stream (MS), a massive
($\sim 10^8-10^9\,M_{\sun}$) stream of neutral and ionized gas in
the outer halo of the Milky Way at a distance of $\sim 50-60$ kpc
(e.g., Wannier \& Wrixon 1972; Gardiner \& Noguchi 1996; Weiner \& Williams 1996;
Putman et al.\,2003; Br\"uns et al.\,2005;
Fox et al.\,2005, 2010; Koerwer 2009; Besla et al.\,2007, 2010, 2012). The Magellanic
Stream spans over 200\degr\ on the sky (e.g., Nidever et al.\,2010) and has
a (mean) metallicity that is lower than that of the Milky Way, but
comparable with the metallicity found in the SMC and LMC
($0.1-0.5$ solar; Lu et al.\,1994;
Gibson et al.\,2000; Sembach et al.\,2001; Fox et al.\,2010, 2013).
The MS also contains dust grains and diffuse molecular hydrogen
(H$_2$; Sembach et al.\,2001; Richter et al.\,2001a).
A number of theoretical studies, including tidal models and
ram-pressure stripping models, have been carried out to describe the
Stream's motion in the extended halo of the Milky Way and pinpoint
its origin in one of the two Magellanic Clouds (Gardiner \& Noguchi 1996;
Mastropietro et al.\,2005; Connors et al.\,2006; Besla et al.\,2010;
Diaz \& Bekki 2011).
The origin and fate of the Magellanic Stream is
closely related to the trajectories of LMC and SMC
(e.g., Connors et al.\,2004, 2006; Besla et al.\,2007), and
any realistic model of the MS thus needs to consider
the dynamical and physical state of the
Milky Way/Magellanic Clouds system as a whole
(see also Bland-Hawthorn et al.\,2007; Heitsch \& Putman 2009).
While early tidal models have assumed that the Magellanic Stream is
a product from the tidal interaction between LMC and SMC as they
periodically orbit the Milky Way (e.g., Gardiner \& Noguchi 1996),
more recent proper motion measurements of the Magellanic Clouds (MCs)
(Kallivayalil et al.\,2006a, 2006b, 2013) indicate that the MCs may be
on their first passage around the Milky Way. Some subsequent tidal models
(Besla et al.\,2010; Diaz \& Bekki 2011) thus favour a first-infall
scenario for the Magellanic Stream. Moreover, while many
models (e.g., Connors et al.\,2006) place the origin
of the Stream's gaseous material in the SMC, other, more
recent studies trace back at least part of the Stream's gaseous material
in the LMC (e.g., Nidever et al.\,2008). The latter study also highlights
the role of energetic blowouts from star-forming regions in the LMC
for the formation of the Stream.
Clearly, further theoretical studies and observations are
required to pinpoint the origin of the MS based on different
(independent) methods.
In the first paper in our series analyzing the chemical and physical
conditions in the Magellanic Stream (Fox et al.\,2013; hereafter
Paper\,I), we have investigated MS absorption in the UV and optical
along the lines of sight toward RBS\,144, NGC\,7714, PHL\,2525, and
HE\,0056$-$3622.
In this paper we analyze the MS using UV and optical absorption-line
spectra of the Seyfert\,1 galaxy Fairall\,9 ($z_{\rm em}=0.047$).
Located at $l=295.1\degr$ and
$b=-57.8\degr$, the Fairall\,9 sightline lies only 14.3\degr\ on the sky
from the SMC. This sightline is the best-studied of all
MS directions in absorption (Songaila 1981; York et al.\,1982;
Lu, Savage \& Sembach 1994; Gibson et al.\,2000;
Richter et al.\,2001a; Sembach et al.\,2003), largely because
Fairall\,9 is bright in both the optical and the UV and the
Stream's H\,{\sc i} column in this direction is large
(log $N$(H\,{\sc i}$)\approx 20$; see Gibson et al.\,2000).
The high column of neutral gas ensures that a wide range of low-ionization
UV metal lines are detectable in the Stream, and even molecular hydrogen
was observed in the MS toward Fairall\,9 data from the
\emph{Far Ultraviolet Spectroscopic Explorer} (\emph{FUSE}; Richter et al.\,2001a;
Wakker 2006).
Using a spectrum of Fairall\,9 obtained with the Goddard High Resolution
Spectrograph (GHRS) onboard the \emph {Hubble Space Telescope} (HST) together
with Parkes 21\,cm H\,{\sc i} data, Gibson et al.\,(2000) derived a metallicity
of the Stream toward Fairall\,9
of [S/H$]=-0.55\pm0.06^{+0.17}_{-0.21}$ ($\sim 0.3$ solar),
which represented the most accurate metallicity determination of the
Stream from UV absorption line data at that time. This metallicity is
consistent with either an SMC or LMC origin of the gas. A difficulty for
constraining the origin of the MS in one or the other Magellanic Cloud
arises from the fact that the gas in the Stream was
stripped from its parent galaxy $\sim 1-2$ Gyr ago
(e.g., Gardiner \& Noguchi 1996; Connors et al.\,2006; Nidever et al.\,2008),
but has not experienced
any further metal enrichment since then, while the parent galaxy
underwent further chemical evolution. The MS does not contain
any massive stars (e.g., Mathewson et al.\,1979), in contrast to the
Magellanic Bridge (Irwin, Kunkel \& Demers 1985). This aspect
needs to be taken into account when comparing metal abundances
in the Stream with present-day LMC and SMC abundances.
To increase the accuracy of the metallicity determination of the
Stream toward Fairall\,9 and to obtain more detailed information on the
chemical composition of the gas and dust in the Stream, more
accurate spectral data are desirable.
Because the Magellanic Stream is a massive gas cloud with complex
internal kinematics (e.g., Nidever et al.\,2008), data with
high spectral resolution and a high signal-to-noise (S/N)
ratio are required to fully resolve the Stream's velocity-component structure and to
detect weak absorption features from the various metal ions that
have their transitions in the ultraviolet (UV) and in the optical.
As part of our ongoing project to study the properties of the Magellanic Stream
in absorption along multiple lines of sight (see also Paper\,I)
we obtained high-resolution optical data of Fairall\,9 from
the Ultraviolet and Visible Echelle Spectrograph (UVES)
installed on the Very Large Telescope (VLT)
and medium-resolution UV data from the Cosmic Origins Spectrograph
(COS) onboard the \emph{HST}, both data sets providing
absorption spectra with excellent S/N ratios.
The combination of these data sets, as described in this study,
therefore provides a particularly promising strategy
to study in great detail the chemical and physical conditions in the
Magellanic Stream in this direction.
This paper is organized as follows: in Sect.\,2 we describe the observations
and the data reduction. The column density measurements and the profile
modeling are explained in Sect.\,3. In Sect.\,4 we derive
chemical and physical properties of the gas in the MS. We discuss
our results in Sect.\,5. Finally, a summary of our study is given
in Sect.\,6.
\section{Observations and spectral analysis}
\subsection{VLT/UVES observations}
Fairall\,9 was observed with the VLT/UVES spectrograph (Dekker et al.\,2010)
in 2010 under ESO program ID\,085.C-0172(A)\,(PI: A. Fox). The observations
were taken in Service Mode using Dichroic\,1 in the 390+580 setting,
a 0.6\arcsec\ slit, and no rebinning. The observations were
carried out under good seeing conditions ($<$0.8\arcsec).
The raw data were reduced with the standard UVES pipeline, using
calibration frames taken close in time to the corresponding
science frames. The reduction steps involve subtraction of the bias
level, inter-order background, sky background, night sky emission
lines, and cosmic ray hits. The frames were then flat-fielded,
optimally extracted and merged. The wavelength scale was corrected for
atmospheric dispersion and heliocentric velocity and then placed
into the local standard of rest (LSR) velocity frame. Multiple
exposures on the same target
were registered onto a common wavelength grid and then added.
The final spectra have a very high spectral resolution of
$R\approx70\,000$, corresponding to a FWHM of $4.3$ km\,s$^{-1}$.
They cover the wavelength range between $3300$ and $6800$ \AA.
The S/N ratio per resolution element is $40$ at
$3500$ \AA\,(Ti\,{\sc ii}), $65$ at $4000$ \AA\,(Ca\,{\sc ii}),
and $83$ at $6000$ \AA\,(Na\,{\sc i}). The UVES data thus
provide much higher sensitivity and substantially higher
spectral resolution than previous optical measurements of
Fairall\,9 (Songaila 1981).
\vspace{0.6cm}
\begin{figure}[t!]
\epsscale{1.20}
\plotone{f1.eps}
\caption{{\it Upper panel:} H\,{\sc i} column-density map of the
Magellanic Stream in the general direction of Fairall\,9, based on
21cm data from GASS (the angular resolution is $15\farcm6$).
The map shows the distribution of neutral gas
in the LSR velocity range between $+100$ and $+250$ km\,s$^{-1}$.
The directions to LMC and SMC are indicated with the white arrows.
{\it Lower panel:} H\,{\sc i} column-density map of the
Fairall\,9 filament in the Stream, based on
21cm data from ATCA+Parkes (the angular resolution is
$1\farcm7 \times 1\farcm4$).
}
\end{figure}
\subsection{\emph{HST}/COS observations}
Fairall 9 was observed with the \emph{HST}/COS spectrograph
(Green et al.\,2012) in 2012 under \emph{HST} program
ID 12604 (PI: A. Fox). A four-orbit visit provided a total of
5378\,s of exposure time with the G130M/1291 wavelength setting, and
6144\,s with the G160M/1589 setting. With each grating,
all four FP-POS positions were used to dither the position of the
spectrum on the detector to reduce the fixed-pattern noise.
The raw data were processed and combined with the CALCOS
pipeline (v2.17.3). For the coaddition of the individual
exposures we used interstellar absorption lines as
wavelength reference. The final, co-added spectra then
were transformed into the LSR velocity frame. The COS data
cover the UV wavelength range between $1131$ and $1767$ \AA.
The spectra have a resolution of $R\approx16\,000$
(FWHM\,$\approx 19$ km\,s$^{-1}$) and a pixel size of $\sim 7$
km\,s$^{-1}$. The S/N ratio per resolution element is
$37$ at $1200$ \AA\, and $26$ at $1550$ \AA.
In order to minimize geocoronal emission, which
contaminates the absorption lines of
O\,{\sc i} $\lambda 1302$, Si\,{\sc ii} $\lambda 1304$
and H\,{\sc i} Ly\,$\alpha$ in the velocity range
$-200$ km\,s$^{-1}\!\la\!v_{\rm LSR}\!\la\!+200$ km\,s$^{-1}$
during orbital daytime, we re-reduced the data with a night-only
extraction. For this, data were extracted from those
time intervals when the Sun's altitude was less than $20$ degrees.
\subsection{\emph{FUSE} observations}
As part of our study we also re-analyze archival
\emph{FUSE} spectra of Fairall\,9. These spectra were obtained in 2000
under \emph{FUSE} program ID P101 (PI: K.R. Sembach) with a
total exposure time of 34\,827\,s. They
show strong molecular hydrogen absorption arising in the MS,
as presented by Richter et al.\,(2001a).
The \emph{FUSE} spectra have a resolution of $\sim 20$ km\,s$^{-1}$
(FWHM), and cover the wavelength range $912-1180$ \AA. The raw data were
reduced using the CALFUSE pipeline v3.2.1. The individual exposures
were carefully coadded using interstellar lines as wavelength
reference. Unfortunately, the S/N in the FUSE data is very low,
which severely hampers the analysis of absorption lines
(see also Richter et al.\,2001a; Wakker 2006).
From a detailed inspection of the individual spectra
in the different segments we conclude that
only the data from the lithium-fluoride coated
segment 1A (LiF\,1A) can be properly coadded
without introducing large systematic errors in the flux
distribution in the spectrum. In the coadded LiF\,1A data,
that are used by us to re-analyze the H$_2$ absorption in the MS,
we measure a S/N of $\sim 6$ per resolution element at $1020$ \AA.
Only for the wavelength range $1050-1082$ \AA\, does
the S/N ratio rise to a maximum of $\sim 15$ per resolution
element due to the increased background flux from the broad, redshifted
Ly\,$\beta$ emission from Fairall\,9.
\subsection{\emph{GASS} 21cm observations}
The H\,{\sc i} 21cm data for the Fairall\,9 sightline were taken from
the Galactic All-Sky Survey (GASS; McClure-Griffiths et al.\,2009,
Kalberla et al.\,2010). The survey was observed with the 64-m radio telescope at Parkes.
The data cubes have an angular resolution of $15\farcm6$, leading to an
RMS of $57\,\mathrm{mK}$ per spectral channel ($\Delta v=0.8\,\mathrm{km\,s}^{-1}$).
This value translates to an H\,{\sc i} column density detection limit of
$N$(H\,{\sc i}$)_{\rm lim}=4.1\times10^{18}$ cm$^{-2}$,
assuming a Gaussian-like emission line with a width of
$20\,\mathrm{km\,s}^{-1}$ FWHM. Fig.\,1, upper panel, shows an H\,{\sc i} column
density map of the local environment of the Magellanic Stream
centered on Fairall\,9 based on the GASS 21cm data.
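For reference, such a limit follows from the standard optically thin 21cm relation $N$(H\,{\sc i}$) = 1.823\times10^{18}\int T_{B}\,{\rm d}v$ (with $T_{B}$ in K and $v$ in km\,s$^{-1}$). The sketch below applies it to the GASS noise figures; the significance threshold assumed for the peak brightness temperature is our choice for illustration, and the exact limit quoted above depends on the convention adopted:

```python
import math

def nhi_limit(rms_K, fwhm_kms, n_sigma=3.0):
    """N(HI) limit (cm^-2) for a Gaussian line whose peak is n_sigma * RMS.

    Optically thin conversion: N(HI) = 1.823e18 * integral T_B dv, and for a
    Gaussian line the integral is sqrt(pi/(4 ln 2)) ~ 1.064 times peak * FWHM.
    """
    integral = math.sqrt(math.pi / (4.0 * math.log(2.0))) * n_sigma * rms_K * fwhm_kms
    return 1.823e18 * integral

# GASS figures from the text: 57 mK RMS per channel, 20 km/s FWHM line.
# n_sigma = 3 is our assumed detection threshold, not taken from the survey papers.
print(f"{nhi_limit(0.057, 20.0):.2e}")
```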
\subsection{ATCA 21cm data}
We supplement our measurements with higher-resolution H\,{\sc i} data
obtained with the Australia Telescope Compact Array (ATCA). The data in the
direction of Fairall\,9 were observed in 1998 and 1999 by Mary Putman
using the 750A, 750B, and 750D configurations. The ATCA is an east-west
interferometer with six antennas. Each antenna has a diameter of 22\,m.
For the observations a correlator band width of 4\,MHz was chosen,
resulting in a velocity resolution of about $0.8$ km\,s$^{-1}$.
Since the Fairall\,9 cloud is too large to fit within a single pointing, a
mosaic consisting of three fields was made. The observing time for each
of these three fields was 12\,hours, leading to full $u$-$v$ coverage.
The FWHM of the synthesized ATCA beam is $1\farcm7 \times 1\farcm4$.
The ATCA data set was reduced by Christian Br\"uns with the MIRIAD
software. To circumvent the missing short-spacings problem, single-dish
data from the Parkes telescope were used. Image deconvolution and
combination with the single-dish data were performed with the
Miriad-Task MOSMEM. The Parkes data used for the short-spacings
correction were obtained in the framework of an H\,{\sc i} survey of the
Magellanic System (see Br\"uns et al.\,2005 for details).
\begin{figure}[t!]
\epsscale{0.90}
\plotone{f2.eps}
\caption{Optical absorption profiles of Ca\,{\sc ii}, Na\,{\sc i},
and Ti\,{\sc ii} from VLT/UVES data of Fairall\,9 are shown.
Absorption from the Magellanic Stream is seen at LSR velocities between
$+130$ and $+240$ km\,s$^{-1}$. The solid red line displays the
best Voigt-profile fit to the data. The individual
velocity components are indicated by tick marks. In the
upper two panels the H\,{\sc i} 21cm emission spectra toward
Fairall\,9 from GASS and ATCA data are plotted for comparison.
}
\end{figure}
\subsection{Spectral analysis methods}
Our strategy for the analysis of the optical and UV absorption-line
data of Fairall 9 combines different techniques to optimally account
for the different spectral resolutions and S/N in the data.
The reduced and coadded spectra from UVES, COS, and \emph{FUSE} were first
continuum-normalized using low-order polynomials that were
fit locally to the spectral regions of interest.
For the high-resolution VLT/UVES data we have used Voigt-profile
fitting to decompose the MS absorption pattern in the optical lines
of Ca\,{\sc ii}, Na\,{\sc i}, and Ti\,{\sc ii} into
individual absorption components (Voigt components) and to
derive column densities ($N$) and Doppler parameters ($b$ values) for
the individual components. For the fitting, we made use of
the {\tt FITLYMAN} package implemented in the ESO-MIDAS analysis
software (Fontana \& Ballester 1995). Laboratory wavelengths and
oscillator strengths have been taken from the compilation
of Morton (2003).
Note that with our fitting technique we are able to measure $b$ values
smaller than the instrumental resolution, as we {\it simultaneously}
fit the line doublets of Ca\,{\sc ii} and Na\,{\sc i}, so that
relative strengths of the lines are taken into account for the
determination of both $b$ and $N$. From this fitting procedure
we obtain a component model for the Fairall\,9 sightline, in which
the LSR velocity centroids of the individual absorption
components and the $b$ values for the low ions are defined.
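The reason a simultaneous doublet fit constrains $b$ below the instrumental resolution is that the ratio of the two equivalent widths is sensitive to saturation, which in turn depends on $b$. The Python sketch below illustrates this for the Ca\,{\sc ii} doublet (rest wavelengths and oscillator strengths are standard literature values, quoted here for illustration only; damping wings are neglected):

```python
import numpy as np

V = np.linspace(-60.0, 60.0, 4001)  # velocity grid (km/s)

def equiv_width(N, b, wav_A, f_osc):
    """Equivalent width (km/s) of a Gaussian-broadened absorption line.

    Central optical depth: tau0 = 1.497e-15 * lambda(A) * f * N / b
    (N in cm^-2, b in km/s); damping wings neglected.
    """
    tau0 = 1.497e-15 * wav_A * f_osc * N / b
    tau = tau0 * np.exp(-((V / b) ** 2))
    return float(np.sum(1.0 - np.exp(-tau)) * (V[1] - V[0]))

CAII_K = (3933.66, 0.6267)  # (rest wavelength A, f-value): assumed values
CAII_H = (3968.47, 0.3116)

def doublet_ratio(N, b):
    return equiv_width(N, b, *CAII_K) / equiv_width(N, b, *CAII_H)

# Optically thin limit: ratio -> f_K*lam_K / (f_H*lam_H) ~ 1.99 in velocity
# units; with increasing saturation the ratio drops toward 1, so fitting both
# lines together pins down b even when the lines are instrumentally unresolved.
print(round(doublet_ratio(1e10, 2.0), 2), round(doublet_ratio(1e13, 2.0), 2))
```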
In addition to Voigt-profile fitting, we have used the apparent
optical depth method (AOD method; Savage \& Sembach 1991) to derive
total gas column densities for the (unsaturated) optical absorption
profiles of Ca\,{\sc ii}, Na\,{\sc i}, and Ti\,{\sc ii}.
The AOD analysis was made using the custom-written
MIDAS code {\tt span} that allows us to measure equivalent widths
and AOD column densities (and their errors) in absorption spectra
from a direct pixel integration.
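Since {\tt span} is custom-written code, the following is a generic illustration of the AOD formalism of Savage \& Sembach (1991), not that code: the apparent optical depth per pixel is $\tau_{a}(v)=\ln[F_{c}(v)/F(v)]$, and the apparent column density follows from $N_{a}=3.768\times10^{14}\,(f\lambda)^{-1}\int\tau_{a}(v)\,{\rm d}v$ with $\lambda$ in \AA\ and $v$ in km\,s$^{-1}$:

```python
import numpy as np

def aod_column(v, flux, continuum, f_osc, wav_A):
    """Total apparent column density (cm^-2) by direct pixel integration.

    N_a = 3.768e14 / (f * lambda[A]) * integral ln(F_c/F) dv  [v in km/s]
    (Savage & Sembach 1991). Zero/negative pixels are clipped, so the
    result is a lower limit whenever the line is saturated.
    """
    tau_a = np.log(continuum / np.clip(flux, 1e-5 * continuum, None))
    return 3.768e14 / (f_osc * wav_A) * float(np.sum(tau_a * np.gradient(v)))

# Round trip on a synthetic Gaussian line with known column density:
v = np.linspace(-100.0, 100.0, 2001)
N_true, b, f_osc, wav = 1e12, 10.0, 0.6267, 3933.66  # Ca II K (assumed f-value)
tau0 = 1.497e-15 * wav * f_osc * N_true / b
flux = np.exp(-tau0 * np.exp(-((v / b) ** 2)))
print(f"{aod_column(v, flux, np.ones_like(v), f_osc, wav):.3e}")
```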
For the medium-resolution \emph{HST}/COS data we have used profile
{\it modeling} and the AOD method to derive column densities and
column density limits for
the various different low, intermediate, and high ions that have
detectable transitions in the COS wavelength range.
The ion transitions in the COS data considered in this study
include C\,{\sc ii} $\lambda 1334.5$,
C\,{\sc ii}$^{\star}$ $\lambda 1335.7$,
C\,{\sc iv} $\lambda\lambda 1548.2,1550.8$,
N\,{\sc i} $\lambda\lambda 1199.6,1200.2,1200.7$,
N\,{\sc v} $\lambda\lambda 1238.8,1242.8$,
O\,{\sc i} $\lambda 1302.2$,
Al\,{\sc ii} $\lambda 1670.8$,
Si\,{\sc ii} $\lambda\lambda 1190.4,1193.3,1260.4,1304.4,1526.7$,
Si\,{\sc iii} $\lambda 1206.5$,
Si\,{\sc iv} $\lambda\lambda 1393.8,1402.8$,
P\,{\sc ii} $\lambda 1152.8$,
S\,{\sc ii} $\lambda\lambda 1250.6,1253.8,1259.5$,
Fe\,{\sc ii} $\lambda\lambda 1143.2,1144.9,1608.5$,
and Ni\,{\sc ii} $\lambda 1370.1$. For the profile modeling
of neutral and singly-ionized species that trace predominantly
neutral gas in the MS we have used as input the component
model defined by the optical Ca\,{\sc ii} absorption (see above).
In this model, the LSR velocities and $b$ values
of the seven absorption components are constrained
by the Ca\,{\sc ii} fitting results, while the column
density for each component is the main free parameter
that can be varied for each ion listed above. Our previous studies of
Ca\,{\sc ii} in the Galactic halo (Richter et al.\,2005, 2009;
Ben Bekhti et al.\,2008, 2011; Wakker et al.\,2007, 2008)
and in intervening absorbers at low redshift (Richter et al.\,2011)
have demonstrated that Ca\,{\sc ii} is an excellent tracer
for the distribution of neutral and partly ionized gas and
its velocity-component structure, even in regions where
Ca\,{\sc ii} is not the dominant ionization state.
Based on the Ca\,{\sc ii} model, and using a modified
version of {\tt FITLYMAN}, we have calculated for each
individual UV line a synthetic absorption profile, for
which we have convolved an initial Voigt profile with
input parameters $(v_i,b_i,N_i)$ for $i=1...7$ with
the COS line-spread function (LSF) that is
appropriate for the wavelength of the line. For
the COS LSF we have used the improved LSF model
described by Kriss (2011). Then, the
columns $N_i$ were varied to minimize the differences
between the synthetic absorption profile and the
observed COS data. This method delivers reliable
(total) column densities for those ions that have
multiple transitions with substantially
different oscillator strengths in the COS wavelength
range and for individual lines that are not fully
saturated. More details about the accuracy of
this method are presented in Sect.\,3.1 and in
the Appendix.
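In outline, the modeling scheme fixes $(v_{i},b_{i})$ from the Ca\,{\sc ii} fit, generates an absorption profile, convolves it with the LSF, and varies the $N_{i}$ until the residual against the data is minimized. The sketch below is a generic single-component illustration (Gaussian optical-depth profile, a Gaussian stand-in for the true, non-Gaussian COS LSF, and an assumed $f$-value), not the modified {\tt FITLYMAN} code itself:

```python
import numpy as np

V = np.arange(-200.0, 200.0, 2.0)  # velocity grid (km/s)

def model_profile(logN, v0, b, f_osc, wav_A, lsf_fwhm):
    """Normalized absorption profile convolved with a Gaussian LSF."""
    tau0 = 1.497e-15 * wav_A * f_osc * 10.0 ** logN / b
    flux = np.exp(-tau0 * np.exp(-(((V - v0) / b) ** 2)))
    kern = np.exp(-0.5 * ((V - V.mean()) / (lsf_fwhm / 2.3548)) ** 2)
    kern /= kern.sum()
    # convolve the absorption (1 - flux) so the continuum stays at 1
    return 1.0 - np.convolve(1.0 - flux, kern, mode="same")

def fit_logN(observed, v0, b, f_osc, wav_A, lsf_fwhm):
    """Grid search over log N minimizing the summed squared residual."""
    grid = np.linspace(12.0, 15.0, 301)
    chi2 = [np.sum((model_profile(g, v0, b, f_osc, wav_A, lsf_fwhm) - observed) ** 2)
            for g in grid]
    return float(grid[int(np.argmin(chi2))])

# Recover a known column density from a synthetic "observation"
# (Si II 1526 with an assumed f-value; COS-like FWHM of 19 km/s):
obs = model_profile(13.40, 10.0, 6.0, 0.1278, 1526.71, 19.0)
print(round(fit_logN(obs, 10.0, 6.0, 0.1278, 1526.71, 19.0), 2))
```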
For the intermediate and
high ions (Si\,{\sc iii}, Si\,{\sc iv}, C\,{\sc iv})
the component structure and $b$ values are expected to be
different from that of the low ions (as the gas phase
traced by these ions often is spatially distinct from
the gas phase traced by the low ions), but no
information is available on the true component structure
of these ions from the optical data. Therefore,
we did not try to model the absorption profiles
of the high-ion lines, but estimated total column
densities (and limits) for these ions solely from the AOD
method. Similarly, we used solely the AOD method to determine
the column density of C\,{\sc ii}$^{\star}$.
For the analysis of H$_2$ in the Magellanic
Stream detected in the Fairall\,9 \emph{FUSE} data (Richter et al.\,2001a),
we have modeled the H$_2$ absorption using
synthetic spectra generated with {\tt FITLYMAN}. As model
input we take into account the component structure and line
widths seen in the optical Ca\,{\sc ii}/Na\,{\sc i} absorption,
together with a Gaussian LSF according to the \emph{FUSE} spectral
resolution. H$_2$ wavelengths and oscillator strengths
have been adopted from Abgrall \& Roueff (1989).
Details on the H$_2$ modeling are presented
in Sect.\,3.3.
\begin{figure}[t!]
\epsscale{1.2}
\plotone{f3.eps}
\caption{Velocity profiles of various low ions in the COS
data of Fairall\,9 are shown. The FUV data for
Si\,{\sc ii} $\lambda 1020.7$ are from \emph{FUSE}. The velocity
components related to gas in the Magellanic Stream, as
identified in the optical Ca\,{\sc ii} data, are
indicated with the tic marks. The red solid line indicates
the best-fitting absorption model for the UV absorption, as
described in Sect.\,3.2. The single, saturated lines of
C\,{\sc ii} $\lambda 1334.5$ and O\,{\sc i} $\lambda 1302.2$
are not considered in our model.
}
\end{figure}
\vspace{0.6cm}
\section{Column-density measurements}
\subsection{Metal absorption in the optical}
Optical absorption related to gas in the Magellanic Stream
in the velocity range $120-230$ km\,s$^{-1}$ (this velocity
range is defined by the H\,{\sc i} 21\,cm data of the Stream) is
detected in the UVES spectrum of Fairall\,9 in the lines of
Ca\,{\sc ii} ($\lambda\lambda 3934.8,3969.6$), Na\,{\sc i}
($\lambda\lambda 5891.6,5897.6$), and Ti\,{\sc ii} ($\lambda 3384.7$).
Fig.\,2 shows the normalized absorption
profiles of these ions plotted on the LSR velocity scale together
with the H\,{\sc i} 21cm emission profiles from GASS and ATCA.
Ca\,{\sc ii} absorption is detected in seven individual
absorption components centered at $v_{\rm LSR}=+143, +163, +172,
+188, +194, +204$, and $+218$ km\,s$^{-1}$ with logarithmic
column densities in the range log $N$(Ca\,{\sc ii}$)=11.29-11.68$
(where $N$ is in units [cm$^{-2}$] throughout the paper). The red solid
line in Fig.\,2 indicates the best Voigt-profile fit to
the data. All Ca\,{\sc ii}, Na\,{\sc i} and Ti\,{\sc ii}
column-density measurements are summarized in Table 1.
\input{tab1}
The strongest Ca\,{\sc ii}
absorption component in the MS (in terms of the absorption depth)
is component 4 at $+188$ km\,s$^{-1}$; this very narrow
component is also detected in both Na\,{\sc i} lines (Fig.\,2).
From the simultaneous fit of the Ca\,{\sc ii} and Na\,{\sc i} doublets
we obtain a very small Doppler parameter for component 4 of
$b=1.8$ km\,s$^{-1}$. The small $b$ value for this component
is confirmed by fits to the individual lines of Ca\,{\sc ii}
and Na\,{\sc i}, which all imply $b<2$ km\,s$^{-1}$.
The detection of Na\,{\sc i} together with the
small $b$ value indicates that the gas in component 4 is
relatively cold and dense and possibly is confined in a
dense core with little turbulence.
Components 3, 4, and 5 are also detected
in Ti\,{\sc ii} (Fig.\,2).
Since Ti\,{\sc ii} and H\,{\sc i} have almost identical ionization
potentials (Table 3), the detection of Ti\,{\sc ii}
suggests that most of the neutral
gas column density is contained in these three components.
This conclusion is supported by the H\,{\sc i} 21cm emission profiles
from GASS and ATCA (Fig.\,2; first two panels), which also have their maxima
in the velocity range between $+170$ and $+200$ km\,s$^{-1}$.
Weak and relatively broad ($b=7.4$ km\,s$^{-1}$) Na\,{\sc i} absorption
is also detected in component 1 at $+143$ km\,s$^{-1}$.
As mentioned in Sect.\,2.5, we also have used the AOD method to determine
the total column densities for Ca\,{\sc ii}, Na\,{\sc i} and Ti\,{\sc ii}
by integrating over the velocity range relevant for MS absorption
(i.e., $+130$ km\,s$^{-1} \leq v_{\rm LSR} \leq +230$ km\,s$^{-1}$).
The column densities derived by these two different methods agree
very well within their $1 \sigma$ error ranges (see Table 1, rows 10 and 11).
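The AOD totals quoted here rest on the Savage \& Sembach (1991) relation $N_a(v)=3.768\times10^{14}\,\tau_a(v)/[f\,\lambda(\mbox{\AA})]$ cm$^{-2}$\,(km\,s$^{-1})^{-1}$. A minimal sketch, using an illustrative unsaturated Ca\,{\sc ii}-like line with a known input column (the $f$-value and wavelength are representative, not the exact atomic data used in the paper):

```python
import numpy as np

def aod_column(v, flux, f, wave):
    """Total apparent column density N_a [cm^-2] from a normalized profile,
    N_a(v) = 3.768e14 * ln(1/F(v)) / (f * wave[A]) per km/s
    (Savage & Sembach 1991)."""
    tau = np.log(1.0 / np.clip(flux, 1e-8, 1.0))
    dv = v[1] - v[0]
    return np.sum(3.768e14 * tau / (f * wave)) * dv

# Synthetic, unsaturated Ca II K-like line with a known input column density
f, wave, b, N_true = 0.627, 3934.77, 3.0, 2.0e11
v = np.arange(-60.0, 60.0, 0.5)
tau0 = 1.497e-15 * f * wave * N_true / b       # Gaussian line-center optical depth
flux = np.exp(-tau0 * np.exp(-(v / b) ** 2))

N_aod = aod_column(v, flux, f, wave)           # recovers ~N_true (unsaturated case)
```

For saturated lines the apparent column systematically underestimates the true column, which is why the text treats strongly saturated transitions separately.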
\subsection{Metal absorption in the UV}
In Fig.\,3 we show normalized UV absorption profiles of the low
ions C\,{\sc ii}, N\,{\sc i}, O\,{\sc i}, Al\,{\sc ii}, Si\,{\sc ii},
S\,{\sc ii}, and Fe\,{\sc ii}. The absorption profiles of the
intermediate and high ions Si\,{\sc iii}, C\,{\sc iv}
and Si\,{\sc iv} are shown in Fig.\,4.
Following the procedure described in Sect.\,2.5, we have reconstructed the
absorption pattern of UV metal lines of the low ions,
using the component model defined by the optical lines of Ca\,{\sc ii}.
As already mentioned, the modeling method provides relevant results
only for those ions that have multiple transitions in the available
COS wavelength band and for single lines that are not (or at most mildly)
saturated. In our case, the modeling method could be used to determine
column densities for N\,{\sc i}, Al\,{\sc ii}, Si\,{\sc ii}, S\,{\sc ii},
and Fe\,{\sc ii}, while for the fully saturated (single) lines of
C\,{\sc ii} and O\,{\sc i} the modeling does not yield relevant
column density limits.
In Fig.\,3, the best-fitting column-density model for each ion is overlaid
with the red solid line. The best-fitting model assumes the same $b$ values
for the individual subcomponents as derived for Ca\,{\sc ii} from
the Voigt fit of the UVES data. The ion column densities for each velocity
component are listed in Table\,1 in rows $2-7$; the total column density
for each ion is given in row\,10. As can be seen in Fig.\,3,
the absorber model successfully reproduces the shape of the absorption
lines in the UV.
From Table\,1 it follows that the relative column densities in the
seven velocity components differ slightly from ion to ion. These
differences are expected, because most ions have different ionization
potentials and therefore trace slightly different gas phases within the
absorber. In addition, differential dust depletion for some elements
may also be relevant in this context.
Because of the relatively limited spectral resolution of the COS instrument
and the resulting overlap between neighbouring velocity components,
the accuracy of the column-density determination for individual
subcomponents is relatively low (typically $\sim 0.30$ dex).
Much better constrained are the {\it total} column densities (Table 1, row 10)
for the ion multiplets N\,{\sc i}, S\,{\sc ii}, Si\,{\sc ii} and Fe\,{\sc ii},
which have $1\sigma$ uncertainties $<0.05$ dex.
This is because the integrated (total) column density for each of these
ions is determined predominantly by the integrated equivalent widths
of the weaker lines and the equivalent width {\it ratios}
of the weaker and stronger transitions.
In the Appendix we present additional model plots to provide a more
detailed insight into the allowed parameter range $(b_i,N_i)$
for the individual subcomponents and their impact
on the shape of the absorption profiles and total column-density
estimates.
In addition to the absorption modeling, we have used the AOD method to determine
total column densities (or column-density limits) in the COS data for each of
the ions listed above (Table\,1, row\,11). For the unsaturated lines of
S\,{\sc ii}, Si\,{\sc ii}, and Fe\,{\sc ii} the total column densities derived
from the AOD method are in excellent agreement ($< 1\sigma$)
with the total column densities determined from the profile modeling.
Note that the absorption of C\,{\sc ii}$^{\star}$ in the MS will
be discussed separately in Sect.\,4.4.
\begin{figure}[t!]
\epsscale{1.0}
\plotone{f4.eps}
\caption{COS velocity profiles of the intermediate and
high ions Si\,{\sc iii}, C\,{\sc iv} and Si\,{\sc iv}. The
velocity structure in these ions is very different from that
of the low ions, suggesting that they trace spatially
distinct regions within the MS. High-velocity Si\,{\sc iv}
absorption between $+150$ and $+200$ km\,s$^{-1}$ in the
Si\,{\sc iv} $\lambda 1402.8$ line is possibly blended with
intergalactic absorption.}
\end{figure}
\subsection{H\,{\sc i} 21cm emission}
The GASS 21cm velocity profile shown in the upper panel of Fig.\,2
indicates H\,{\sc i} emission from gas in the Magellanic Stream
in the LSR velocity range between $+100$ and $+250$ km\,s$^{-1}$.
The MS emission shows an asymmetric pattern with a peak
near $v_{\rm LSR}\approx +180$ km\,s$^{-1}$, surrounded
by weaker satellite components at lower and higher radial
velocities. The overall shape of the emission profile
mimics that of the inverted absorption profiles of
the low-ion lines of N\,{\sc i} and S\,{\sc ii},
indicating that both the pencil-beam absorption data and
the GASS emission data, which have a beam size of $15\farcm6$,
sample the same physical structure. Integrating the GASS data
in the velocity range given above for the single pixel that
covers the position of Fairall\,9, we obtain a total H\,{\sc i}
column density in the Stream of log $N$(H\,{\sc i}$)=19.95$
($N$(H\,{\sc i}$)=(8.96\pm0.09)\times 10^{19}$ cm$^{-2}$).
The pixel value is somewhat smaller than the beam-averaged
column density, which is log $N$(H\,{\sc i}$)=19.98$.
In the second panel of Fig.\,2 we show the 21cm emission
profile for the Fairall\,9 direction from the high-resolution ATCA data
in the LSR velocity range between $+120$ and $+230$ km\,s$^{-1}$.
Because of the relatively low S/N in the ATCA data, the
spectrum shown in Fig.\,2 was binned for display purposes
to $8.3$ km\,s$^{-1}$ wide pixels.
The overall shape of the profile from the ATCA data
is roughly similar to the one from GASS, but in the ATCA
spectrum the MS 21cm emission peaks
at a somewhat higher velocity ($v_{\rm LSR}\approx +190$ km\,s$^{-1}$,
thus coinciding with the peak absorption in Ca\,{\sc ii} and
Na\,{\sc i}; see Fig.\,2) and at a slightly higher
brightness temperature. The MS emission profile from ATCA
is, however, somewhat narrower than the GASS profile. Integration
over the MS velocity range in the (unbinned) ATCA data yields a
total H\,{\sc i} column density in the Stream of
$N$(H\,{\sc i}$)=(8.6\pm1.5)\times 10^{19}$ cm$^{-2}$ or
log $N$(H\,{\sc i}$)=19.93^{+0.07}_{-0.08}$, thus in
excellent agreement with the GASS data.
For comparison, Gibson et al.\,(2000) find
log $N$(H\,{\sc i}$)=19.97$
($N$(H\,{\sc i}$)=(9.35\pm0.47)\times 10^{19}$ cm$^{-2}$)
using Parkes 21cm observations of Fairall\,9 with a
beam size of $14\farcm0$. From the
21cm data of the {\it Leiden-Argentine-Bonn Survey} (LAB;
Kalberla et al.\,2005) with $\sim 0.5\deg$ resolution we
obtain for the same velocity range an H\,{\sc i}
column density in the Stream of log $N$(H\,{\sc i}$)=19.97$.
The H\,{\sc i} column densities in the MS toward Fairall\,9
derived from different instruments with very different
angular resolutions agree with each other
within their $1\sigma$ error ranges. Our conclusion is that
beam smearing is not expected to be
a critical issue for the determination of metal
abundances in the MS toward Fairall\,9
(using 21cm data in combination with the pencil-beam
UVES and COS data). In the following, we adopt the value of
log $N$(H\,{\sc i}$)=19.95$ for the total H\,{\sc i} column
density in the Stream toward Fairall\,9 together
with an appropriate estimate of the systematic error
of $0.03$ dex that accounts for the limited spatial
resolution of the 21cm data.
As shown in
the Appendix, this H\,{\sc i} column density in the MS
is in agreement with the observed shape of the H\,{\sc i}
Ly\,$\alpha$ absorption line in the COS data of Fairall\,9,
which provides an independent (albeit less stringent) measure
for the neutral-gas column density in the Stream in
this direction (see also Wakker, Lockman, \& Brown 2011
for a detailed discussion on this topic).
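For reference, the 21cm columns quoted in this section follow from the standard optically thin conversion $N$(H\,{\sc i}$)=1.823\times10^{18}\int T_B\,dv$ (not written out in the text). A sketch with an assumed single-Gaussian emission profile whose peak and width are chosen purely for illustration, tuned so the integral ($\approx 49$ K\,km\,s$^{-1}$) reproduces log $N$(H\,{\sc i}$)=19.95$:

```python
import numpy as np

# Optically thin 21cm conversion: N(HI) [cm^-2] = 1.823e18 * Int T_B dv [K km/s]
def nhi_from_21cm(v, t_b):
    return 1.823e18 * np.sum(t_b) * (v[1] - v[0])

# Illustrative single-Gaussian emission component over the MS velocity window;
# the peak (K) and width (km/s) are assumed values, not the GASS measurement.
v = np.arange(100.0, 250.0, 1.0)
t_b = 1.30 * np.exp(-0.5 * ((v - 180.0) / 15.0) ** 2)

log_nhi = np.log10(nhi_from_21cm(v, t_b))     # ~19.95 for these parameters
```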
\subsection{Molecular hydrogen absorption}
The detection of H$_2$ absorption in the Magellanic Stream in the
\emph{FUSE} data of Fairall\,9 has been reported previously by
Richter et al.\,(2001a) and Wakker (2006).
H$_2$ absorption is seen near $v_{\rm LSR}=+190$ km\,s$^{-1}$ in the
rotational levels $J=0,1$ and $2$. Although the overall S/N in
the \emph{FUSE} data is low ($\leq15$ per resolution element;
see Sect.\,2.5), the H$_2$ absorption
is clearly visible in many lines due to the relatively high
H$_2$ column density (Richter et al.\,2001a).
\begin{figure*}[t!]
\epsscale{1.0}
\plotone{f5.eps}
\caption{Continuum-normalized \emph{FUSE} spectrum of Fairall\,9 in the wavelength
range between $1075$ and $1081$ \AA. Absorption by H$_2$ from the rotational
levels $J=0,1$ and $2$ related to gas in the Magellanic Stream are indicated
above the spectrum using the common notation scheme. The best-fitting
model for the H$_2$ absorption is overplotted with the red solid line.
}
\end{figure*}
\input{tab2}
We have reanalyzed the Fairall\,9 \emph{FUSE} data based on an improved
data reduction pipeline and the component model
discussed in the previous sections. Richter et al.\,(2001a)
have used a curve-of-growth technique to derive the column densities
$N(J)$ and $b$ values of the H$_2$ rotational states $J=0,1,2$, assuming
a single MS absorption component. In contrast, here we model the
absorption spectrum of the MS H$_2$ absorption in the \emph{FUSE} Fairall\,9
data using synthetic spectra. For our model we assume
that the H$_2$ absorption arises in the region with the highest
gas density in the Stream, which is component 4 at $v_{\rm LSR}=+188$
km\,s$^{-1}$ detected in the UVES data in
Ca\,{\sc ii}, Na\,{\sc i}, and Ti\,{\sc ii} (Table\,1).
In fact, from absorption studies of
Na\,{\sc i} and H$_2$ in interstellar gas it is well known that these
two species trace the same gas phase (i.e., cold neutral and cold
molecular gas; e.g., Welty et al.\,2006).
We adopt $b_{\rm turb}=1.8$ km\,s$^{-1}$ for the H$_2$ absorption
in component 4, as suggested by the Na\,{\sc i} and Ca\,{\sc ii}
absorption in the same component (Table\,1). We further assume
(and prove later) that the kinetic temperature of the H$_2$
absorbing gas is $T_{\rm kin}\approx100$ K in component 4.
It is usually assumed that the Doppler parameter is composed
of a turbulent and a thermal component, so that
$b^2=b_{\rm turb}^2+b_{\rm th}^2$. For light elements
and molecules (such as H$_2$) $b_{\rm th}$ must not be
neglected. We therefore adopt $b$(H$_2)=2.0$\,km\,s$^{-1}$
for our H$_2$ model spectrum.
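The adopted $b$(H$_2)=2.0$ km\,s$^{-1}$ can be verified directly from $b^2=b_{\rm turb}^2+b_{\rm th}^2$ with $b_{\rm th}=\sqrt{2kT/m}$:

```python
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant (J/K)
m_H2 = 2.0 * 1.6735575e-27    # H2 molecular mass (kg)

b_turb = 1.8                  # km/s, from the Ca II / Na I fit of component 4
T_kin = 100.0                 # K, kinetic temperature assumed in the text

b_th = np.sqrt(2.0 * k_B * T_kin / m_H2) / 1.0e3   # thermal part, ~0.9 km/s
b_total = np.hypot(b_turb, b_th)                   # sqrt(b_turb^2 + b_th^2) ~ 2.0
```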
Fig.\,5 shows our best fitting H$_2$ model (red solid line)
together with the \emph{FUSE} data for the wavelength range between
$1075$ and $1081$ \AA. Since the $b$ value is fixed to
$b$(H$_2)=2.0$\,km\,s$^{-1}$ and the velocity to
$v_{\rm LSR}=+188$ km\,s$^{-1}$, the only free parameters in
our model are the H$_2$ column densities $N(J)$. Table\,2
summarizes the best fitting H$_2$ column densities using
this method. The given errors represent the
uncertainties for log $N(J)$ under the assumption
that $b=2.0$ km\,s$^{-1}$. The resulting total H$_2$ column
density is log $N$(H$_2)=17.93^{+0.19}_{-0.16}$ and
the fraction of hydrogen in molecular form is
log $f_{\rm H_2}=$\,log\,$[2N($H$_2)/(N$(H\,{\sc i}$)+2N($H$_2))]=-1.73$.
Note that the value for log $f_{\rm H_2}$ derived in this way
represents a sightline-average; the local molecular fraction
in the H$_2$ absorbing region must be higher
(because $N$(H\,{\sc i}) in this region is smaller).
The H$_2$ column densities we derive from our spectral modeling
are substantially ($\sim 1$ dex) higher than the values presented
in the earlier studies of the Fairall\,9 \emph{FUSE} data
(Richter et al.\,2001a; Wakker 2006). The
reason for this discrepancy is the substantially lower
$b$ value that we adopt here. This more realistic $b$ value
for the H$_2$ absorbing gas is a direct consequence of the
previously unknown complex absorption pattern in the MS
toward Fairall\,9, which is visible only in the high-resolution optical
data of Ca\,{\sc ii} and Na\,{\sc i} (see Sect.\,3.1). Our study
suggests that, without the support of high-resolution and
high-S/N optical or UV data that can be used to resolve
the true sub-component structure of the absorbers and to deliver
reliable $b$ values for the subcomponents, the interpretation of
interstellar H$_2$ absorption lines in low-resolution and low-S/N data
from \emph{FUSE} can be afflicted with large systematic uncertainties.
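The strong sensitivity of saturated-line column densities to the assumed $b$ value can be illustrated on the flat part of the curve of growth: two very different columns reproduce the same equivalent width once the line core is black. The transition parameters below are illustrative placeholders, not an actual H$_2$ line:

```python
import numpy as np
from scipy.optimize import brentq

def equiv_width_kms(logN, b, f, wave):
    """Equivalent width (km/s) of a single Gaussian-tau line (no damping wings)."""
    v = np.arange(-150.0, 150.0, 0.1)
    tau0 = 1.497e-15 * f * wave * 10.0 ** logN / b
    return np.sum(1.0 - np.exp(-tau0 * np.exp(-(v / b) ** 2))) * 0.1

# Illustrative Lyman-band-like transition (assumed f-value and wavelength)
f, wave = 0.02, 1077.0

# A saturated line: log N = 17.0 absorbed with b = 2 km/s ...
w_obs = equiv_width_kms(17.0, 2.0, f, wave)

# ... is reproduced at the SAME equivalent width by a much smaller column
# when a larger b value is assumed (flat part of the curve of growth):
logN_wide_b = brentq(lambda lN: equiv_width_kms(lN, 8.0, f, wave) - w_obs,
                     12.0, 19.0)
```

Here the inferred column drops by more than a dex when $b$ is increased from 2 to 8 km\,s$^{-1}$, mirroring the $\sim 1$ dex revision discussed above.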
\section{Physical and chemical conditions in the gas}
\subsection{Ionization conditions and overall metallicity}
The presence of high, intermediate and low ions, as well as
molecular hydrogen, in the Magellanic Stream toward Fairall\,9
in seven individual velocity components implies a complex
multi-phase nature of the gas. To obtain meaningful results
for the chemical composition and physical conditions
from the measured ion column densities it is necessary to
consider in detail the ionization conditions in the
absorbing gas structure. For an element M we define
its relative gas-phase abundance compared to the
solar abundance in the standard way
[M/H]\,=\,log\,(M/H)\,$-$\,log\,(M/H)$_{\sun}$.
Solar reference abundances have been adopted from
Asplund et al.\,(2009).
The best ion to study the overall metallicity of neutral and weakly
ionized interstellar gas is O\,{\sc i}.
Neutral oxygen and neutral hydrogen have similar ionization
potentials and there is a strong charge-exchange reaction that
couples both ions in gas with sufficiently high density.
For our sightline, however, the O\,{\sc i} column density
is not well constrained because of the saturation of the
O\,{\sc i} $\lambda 1302.2$ line and the lack of other,
weaker O\,{\sc i} transitions in the COS wavelength range.
Because it is impossible to measure the O\,{\sc i} column density
in the MS using our component models and the AOD method to
a satisfactory accuracy, we do not further consider the
O\,{\sc i} $\lambda 1302.2$ line for our analysis.
For the determination of the overall metallicity of the gas in
the Magellanic Stream toward Fairall\,9 we choose instead
S\,{\sc ii}, which is another useful ion for measuring interstellar
abundances, as singly-ionized sulfur is an excellent tracer for
neutral hydrogen without being depleted into dust grains
(e.g., Savage \& Sembach 1996). We also
have a very accurate column-density determination
of log $N$(S\,{\sc ii}$)=14.77\pm 0.02$ in the Magellanic Stream,
based on the absorption modeling of the two S\,{\sc ii} lines
at $1250.6$ and $1253.8$ \AA\, and (independently) from
the AOD method (see Table 1).
Our value is slightly higher than the S\,{\sc ii} column density
of log $N$(S\,{\sc ii}$)=14.69\pm 0.03$ derived by Gibson et al.\,(2000)
from \emph{HST}/GHRS data.
From the measured (S\,{\sc ii}/H\,{\sc i}) ratio we obtain an initial
estimate of the sulfur abundance in the Stream toward Fairall\,9
of [S/H$]\approx -0.30$. Since the ionization
potential of S\,{\sc ii} (23.2 eV; see Table 3) is higher
than that of H\,{\sc i} (13.6 eV), we have to model the ionization
conditions in the gas to obtain a more precise estimate for
the sulfur abundance (and the abundances of the other detected
metals) in the Stream toward Fairall\,9. While previous studies
of Galactic halo clouds (Wakker et al.\,1999; Gibson et al.\,2000)
have demonstrated that the (S\,{\sc ii}/H\,{\sc i}) ratio is
a very robust measure for the sulfur-to-hydrogen ratio,
hardly being affected by the local ionizing radiation field,
the ionization modeling is important to obtain a reliable estimate
of the systematic uncertainty that arises from the use of
S\,{\sc ii} as a reference indicator for the overall $\alpha$
abundance in the gas. For this task, we use the
photoionization code Cloudy (v10.00; Ferland et al.\,1998),
which calculates the expected column densities for different ions
as a function of the ionization parameter $U=n_{\gamma}/n_{\rm H}$
for a gas slab with a given neutral gas column density
and metallicity, assuming that the slab is illuminated
by an external radiation field. As radiation field we adopt
a combined Milky Way plus extragalactic ionizing
radiation field based on the models by Fox et al.\,(2005)
and Bland-Hawthorn \& Maloney (1999,2002),
appropriately adjusted to the position of the MS relative
to the Milky Way disk.
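As a simple cross-check, the initial estimate of [S/H$]\approx -0.30$ quoted above follows directly from the measured columns and the Asplund et al.\,(2009) solar sulfur abundance ($12+$log\,(S/H)$_{\sun}=7.12$; this numerical value is supplied here for the check, it is not quoted in the text):

```python
logN_SII = 14.77          # log N(S II), from the profile modeling (Table 1)
logN_HI = 19.95           # log N(H I), from the GASS 21cm data (Sect. 3.3)
log_SH_sun = 7.12 - 12.0  # solar log(S/H), Asplund et al. (2009)

# [S/H] = log[N(S II)/N(H I)] - log(S/H)_sun
S_over_H = (logN_SII - logN_HI) - log_SH_sun    # ~ -0.30
```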
\begin{figure}[t!]
\epsscale{1.2}
\plotone{f6.eps}
\caption{
{\it Upper panel:} Cloudy photoionization model of low-ionization species
in the MS toward Fairall\,9. The colored lines indicate the predicted
ion column densities as a function of the ionization parameter $U$ and
gas density $n_{\rm H}$ for a gas slab with a (total) neutral hydrogen
column density of log $N$(H\,{\sc i}$)=19.95$ and a metallicity
of [Z/H$]=-0.30$. Observed ion column densities are
indicated on the right-hand side with the
filled colored circles. The gray-shaded area marks the
range for $U$ and $n_{\rm H}$ that is relevant for our study
based on the observed Ca\,{\sc ii} column density (see Sect.\,4.1).
{\it Lower panel:} differences between the observed and the predicted
ion column densities, $\Delta N=N_{\rm Cloudy} - N_{\rm obs.}$; they
are discussed in Sect.\,4.1. For Ni\,{\sc ii}, the bar and the arrow indicate an
upper limit for $\Delta N$(Ni\,{\sc ii}), while for Ca\,{\sc ii} the bar
and the double-arrow indicate the allowed range for $\Delta N$(Ca\,{\sc ii}).
}
\end{figure}
While our absorption modeling provides rough estimates for the
ion column densities in the seven sub-components, we refrain
from trying to model the ionization conditions in the
individual components. This is because the unknown H\,{\sc i}
column densities in these components, together with the unknown
geometry of the overall gas structure, would lead to large
systematic uncertainties for such models.
Instead, we have used Cloudy to obtain {\it integrated}
elemental abundances in the gas of the Magellanic Stream
toward Fairall\,9 (representing the optical-depth weighted
mean of the individual element abundances in all subcomponents).
It is, however, important to emphasize that meaningful results for
integrated elemental abundances from Cloudy can be obtained only
for those ions for which the column densities in the ionization
model do not strongly depend on the gas density (and ionization parameter),
because the latter quantities are expected to vary {\it substantially}
among the individual subcomponents.
For our Cloudy model we assume a neutral gas column density
of log $N$(H\,{\sc i}$)=19.95$, an overall metallicity of
[Z/H$]=-0.3$, and solar {\it relative} abundances of the
metals from Asplund et al.\,(2009), based on our results described above.
In Fig.\,6, upper panel, we display the expected logarithmic column
densities for low and intermediate ions as a function of log $U$
and log $n_{\rm H}$ for this model. The measured total column densities
for these ions (Table 1, row 10) are plotted on the right-hand side
of the panel. The total Ca\,{\sc ii} column density of
log $N$(Ca\,{\sc ii}$)=12.37$ sets a limit for the
(averaged) ionization parameter and density in the gas
of log $\langle U \rangle \leq -3.55$ and log $\langle n_{\rm H} \rangle
\geq -1.95$. The corresponding density range
that we consider as relevant for our Cloudy model
($-1.95 \leq$\,log $n_{\rm H}$\,$\leq 2.5$) is indicated
in Fig.\,6 with the gray-shaded area.
For this range, the expected column densities of N\,{\sc i},
Si\,{\sc ii}, Fe\,{\sc ii}, S\,{\sc ii}, Al\,{\sc ii},
Ni\,{\sc ii}, and Ti\,{\sc ii} are nearly
independent of log $U$ and log $n_{\rm H}$. For S\,{\sc ii},
in particular, the column density varies within only $0.03$ dex
in the above given density range, supporting the previous
conclusions from Wakker et al.\,(1999) and Gibson et al.\,(2000).
In the lower panel of Fig.\,6 we show the differences ($\Delta N$)
between the measured column densities and the mean column
densities predicted by Cloudy. For Ni\,{\sc ii}
and Ca\,{\sc ii}, the arrows indicate the upper limit and the
allowed range for $\Delta N$, respectively.
\input{tab3}
From the comparison between the predicted and measured
ion column densities, together with the solar reference abundances
from Asplund et al.\,(2009), we obtain the following gas-phase abundances:
[N/H$]=-1.15\pm 0.06$,
[Si/H$]=-0.57\pm 0.06$,
[Fe/H$]=-0.86\pm 0.06$,
[S/H$]=-0.30\pm 0.04$,
[Al/H$]=-0.92\pm 0.08$,
[Ti/H$]=-1.62\pm 0.07$,
[Ni/H$]\leq-0.81$, and
[Ca/H$]\leq0$.
The listed errors include the uncertainties from the
column density measurements for the metal ions and
H\,{\sc i} (Table 1), the systematic uncertainty due to the
beam size of the 21cm measurements (Sect.\,3.3), and
the systematic uncertainty for the ionization correction
from the Cloudy model (see above). The relative contributions
of these uncertainties to the total error budget are roughly
identical. Note that we here do not consider
the systematic errors that come with the solar reference
abundances (Asplund et al.\,2009; see Table 3, third row), because these
errors are irrelevant for the comparison between our results
and other abundance measurements, if the identical reference
values are used. All gas-phase abundances derived with Cloudy
are listed in Table 3, sixth row.
The measured S/H ratio of [S/H$]=-0.30\pm 0.04$ corresponds to an
overall sulfur abundance in the Magellanic Stream toward
Fairall\,9 of $0.50^{+0.05}_{-0.04}$ solar. The value for
[S/H] is $0.25$ dex higher
than the value derived for the MS toward Fairall\,9
by Gibson et al.\,(2000) based on GHRS data. Several factors
contribute to this discrepancy: (a) the somewhat higher ($+0.08$ dex)
S\,{\sc ii} column density that we derive from the COS data,
(b) the slightly lower H\,{\sc i} column density ($-0.02$ dex)
that we obtain from the GASS data, and (c) the substantially
lower ($0.15$ dex) solar reference abundance of sulfur
(Asplund et al.\,2009) that we use to derive [S/H] compared to the
value from Anders \& Grevesse (1989) used by Gibson et al.\,(2000).
Remarkably, the sulfur abundance we derive here is
{\it higher} than the present-day stellar sulfur abundances in the SMC
([S/H$]=-0.53\pm 0.15$) {\it and} in the LMC ([S/H$]=-0.42\pm 0.09$;
Russell \& Dopita 1992), but matches the present-day interstellar
S abundances found in H\,{\sc ii} regions in the Magellanic
Clouds (Russell \& Dopita 1990). This interesting result will
be further discussed in Sect.\,5. Because of the large abundance
scatter of sulfur in the Magellanic Clouds and
because S appears to be underabundant
compared to oxygen in the LMC and overabundant compared to oxygen
in the SMC (Russell \& Dopita 1992), the measured (S/H) ratio alone
does not provide a constraining parameter to pinpoint the origin of the
gaseous material in the Stream toward Fairall\,9 in either SMC
or LMC.
From our Cloudy model it follows that all other elements listed above have
gas-phase abundances (or abundance limits) that are lower than that of S.
In addition to the intrinsic nucleosynthetic MS abundance pattern, dust depletion is
another important effect that contributes to the deficiency of certain
elements (e.g., Ti, Al, Fe, Si) in the gas phase. This aspect will be discussed in
Sect.\,4.3. Note, again, that we do not attempt to constrain a single
value for log $U$ or log $n_{\rm H}$ with our Cloudy model, as our
sightline passes a complex multiphase structure that is expected to span
a substantial {\it range} in gas densities and ionization parameters.
In this paper, we do not further analyze the gas phase that
is traced by the intermediate and high ions Si\,{\sc iii},
C\,{\sc iv} and Si\,{\sc iv}, which
probably arises in the ionized boundary layer between the neutral
gas body of the Stream and the ambient hot coronal halo gas of the
Milky Way (see Fox et al.\,2010).
For completeness, we show in Fig.\,4 the velocity
profiles of the available transitions of Si\,{\sc iii},
C\,{\sc iv} and Si\,{\sc iv}
in the COS spectrum of Fairall\,9. The
velocity structure in these ions clearly is different from that
of the low ions, suggesting that they trace spatially
distinct regions in the MS. The high-ion absorption
toward Fairall\,9 will be included in a forthcoming
paper that will discuss those ions in several lines of
sight intersecting the Magellanic Stream
(A.J. Fox et al.\,2014, in preparation).
\begin{figure}[t!]
\epsscale{1.0}
\plotone{f7.eps}
\caption{
COS velocity profile of C\,{\sc ii}$^{\star}$ $\lambda 1335.7$ (upper panel).
C\,{\sc ii}$^{\star}$ absorption from the Magellanic Stream
is visible between $+120$ and $+200$ km\,s$^{-1}$. For comparison,
the velocity profile of S\,{\sc ii} $\lambda 1250.6$ is also
shown (lower panel). The component structure of the low ions is indicated
with the tic marks.}
\end{figure}
\subsection{Nitrogen abundance}
The measured gas phase abundance of nitrogen in the Magellanic Stream
toward Fairall 9 is [N/H$]=-1.15\pm 0.06$, which is $0.85$ dex lower than
the abundance of sulfur. In diffuse neutral gas the depletion of nitrogen into
dust grains is expected to be very small (Savage \& Sembach 1996).
Consequently, the low nitrogen abundance in the Stream reflects the
nucleosynthetic enrichment history of the gas and thus provides important
information about the origin of the gas.
For the SMC, Russell \& Dopita (1992) derive a mean present-day
stellar nitrogen abundance of [N/H$]=-1.20$. This value
is close to the value derived for the MS toward
Fairall\,9, although the gas in the Stream has not been enriched
with nitrogen since it was stripped from its parent galaxy.
The nitrogen abundances found in SMC H\,{\sc ii} regions and
supernova remnants span a rather wide range of
$-1.37\leq$[N/H$]\leq -0.95$ (Russell \& Dopita 1990). Thus,
the observed nitrogen abundance in the MS toward Fairall\,9
would be in accordance with an SMC origin of
the gas only if the (mean) nitrogen abundance in the SMC
has not increased substantially after the Stream was
separated $\sim 1-2$ Gyr ago.
In the LMC, in contrast, the mean present-day stellar nitrogen
abundance is found to be much higher ([N/H$]=-0.69$).
The N abundance range in LMC H\,{\sc ii} regions and supernova
remnants is [N/H$]=-0.86$ to $-0.38$, the minimum [N/H] still being
substantially higher than the nitrogen abundance in the
Stream toward Fairall\,9. Therefore, an LMC origin of
the MS is plausible only if the LMC has substantially
increased its nitrogen abundance during
the last $\sim 2$ Gyr.
In Sect.\,5.1 we will further consider these interesting aspects
and discuss the observed N abundance in the MS in the context of the
enrichment history of the Magellanic Clouds.
\subsection{Dust depletion pattern}
Heavy elements such as Al, Si, Fe, Ca, Ti, and Ni are known to be strongly
depleted into dust grains in interstellar gas in the Milky Way and
other galaxies, while other elements such as S, O, N are
only very mildly or not at all affected by dust depletion
(e.g., Savage \& Sembach 1996; Welty et al.\,1997).
To study the depletion pattern of these elements in the
Magellanic Stream we define the depletion values
from the relative abundance of each depleted element, M,
in relation to the abundance of sulfur.
We define accordingly
log $\delta$(M)=\,[M/H$]_{\rm MS}-$[S/H$]_{\rm MS}$.
Using this definition, the depletion values are
log $\delta$(Si$)=-0.27$,
log $\delta$(Fe$)=-0.56$,
log $\delta$(Ti$)=-1.32$,
log $\delta$(Al$)=-0.62$, and
log $\delta$(Ni$)\leq -0.51$ (see also Table 3, seventh row).
Such depletion values are typical for warm Galactic halo clouds,
as derived from UV absorption-line measurements (see Savage \& Sembach 1996;
their Fig.\,6).
Note that the depletion values (or limits) would be different if
the {\it relative} chemical abundances of the depleted elements
differed from the relative chemical composition of the
Sun (Asplund et al.\,2009).
In view of the abundance patterns found in the LMC and SMC
(Russell \& Dopita 1990,1992), this is actually a likely scenario,
but no further conclusions can be drawn at this point
without knowing the exact intrinsic chemical composition of the
MS gas toward Fairall\,9.
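The depletion values quoted above follow directly from the measured gas-phase abundances:

```python
# Gas-phase abundances from the Cloudy comparison (Sect. 4.1, Table 3)
abund = {"Si": -0.57, "Fe": -0.86, "Ti": -1.62, "Al": -0.92}
S_H = -0.30   # sulfur as the (essentially undepleted) reference element

# log delta(M) = [M/H]_MS - [S/H]_MS
delta = {elem: m_h - S_H for elem, m_h in abund.items()}
# e.g. delta["Ti"] ~ -1.32, matching the value quoted in the text
```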
Since the Ca\,{\sc ii} column density in MS gas is expected
to depend strongly on the local gas density and ionization parameter
(Fig.\,6), the depletion value of Ca cannot be tightly constrained,
but may lie anywhere in the range log $\delta$(Ca$) \geq -1.58$.
Under typical interstellar conditions, Ca is strongly depleted
into dust grains with large depletion values that are similar to those
of other elements that have condensation temperatures above
$T=1500$ K (e.g., Ti; Savage \& Sembach 1996).
If one {\it assumes} that
log $\delta$(Ca$) \approx$\,log $\delta$(Ti) for the MS
toward Fairall\,9, then Fig.\,6 would imply that
the average density in the neutral gas in the Stream
in this direction is log $n_{\rm H}\approx 0$ or
$n_{\rm H}\approx 1$ cm$^{-3}$.
This density, together with the neutral gas column density,
would imply a characteristic thickness of the absorbing
neutral gas layer of $d=N($H\,{\sc i}$)/n_{\rm H}\approx 30$ pc.
At first glance, one may regard this as a relatively small value.
The complex velocity-component structure of the
absorber indicates, however, that the absorber is composed
of several small cloudlets along the line of sight that
together extend over a spatial range
larger than the value indicated by the {\it mean} thickness.
In view of the relatively high metal abundance derived
for the kinematically complex absorber toward Fairall\,9,
the presence of a population of metal-enriched gas
clumps at small scales in the MS is not an unlikely scenario.
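The mean-thickness estimate above can be verified with a few lines of Python. This is only a numerical cross-check, using the column density and density adopted in the text:

```python
# Numerical check of the characteristic thickness d = N(HI) / n_H.
PC_IN_CM = 3.086e18        # 1 pc in cm

N_HI = 10**19.95           # H I column density [cm^-2], as adopted in the text
n_H = 1.0                  # mean gas density [cm^-3], from the Ca/Ti depletion argument

d_pc = N_HI / n_H / PC_IN_CM
print(f"d = {d_pc:.0f} pc")   # -> d = 29 pc, i.e. ~30 pc as quoted
```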
\subsection{Physical conditions in the H$_2$ absorbing region}
As discussed in Sect.\,3.1, the UVES spectrum of Fairall\,9 shows
the presence of Na\,{\sc i}, Ca\,{\sc ii}, and possibly Ti\,{\sc ii}
in component 4. In the same component, the detected H$_2$ absorption
is expected to arise (Sect.\,3.3), indicating the presence
of a cold, dense (and predominantly neutral) gas phase in the MS
in this direction. In the following, we want
to combine the information from the different instruments to
explore the physical and chemical conditions in the H$_2$ absorbing
gas phase in the MS toward Fairall\,9.
\subsubsection{Gas density}
While our Cloudy model does not provide any relevant information
on the {\it local} gas densities in the individual subcomponents,
we can use the observed molecular hydrogen fraction in the gas to
constrain $n_{\rm H}$ in the H$_2$ absorbing component 4, assuming
that the abundance of H$_2$ relative to H\,{\sc i} is governed by
a formation-dissociation equilibrium. In a formation-dissociation
equilibrium the neutral to
molecular hydrogen column density ratio in an interstellar
gas cloud is given by
\begin{equation}
\frac{N({\rm H\,I})}{N({\rm H}_2)} =
\frac{\langle k \rangle \,\beta}{R\,n({\rm H\,I})\,\phi},
\end{equation}
\noindent
where $\langle k \rangle \approx 0.11$ is the
probability that the molecule is dissociated after photo absorption,
$\beta$ is the photo-absorption rate per second within the
cloud, and $R$ is the H$_2$ formation rate on dust grains
in units of cm$^{3}$\,s$^{-1}$.
For low molecular hydrogen fractions we can
write $n_{\rm H} = n({\rm H\,I})+2n({\rm H}_2)
\approx n({\rm H\,I})$. The parameter $\phi \leq 1$
in equation (1)
describes the column-density fraction of the H\,{\sc i}
that is physically related to the H$_2$ absorbing gas,
i.e., the fraction of the neutral gas atoms that can
be transformed into H$_2$ molecules (see Richter et al.\,2003
for details).
Our absorption modeling of the undepleted low ions
S\,{\sc ii} and N\,{\sc i} indicates that component
4 contains $\sim 15-30$ percent of the neutral gas column,
so that we assume $\phi=0.15-0.30$ in the absence of any
more precise information.
For interstellar clouds that are optically thick in
H$_2$ (i.e., for log $N$(H$_2)\gg 14$) H$_2$ line self-shielding
needs to be considered in equation (1), because the self-shielding
reduces the photo-absorption rate ($\beta$) in the cloud interior.
Draine \& Bertoldi (1996) find that the H$_2$ self-shielding
can be expressed by the relation $\beta=S\,\beta_0$,
where $S=(N_{\rm H_2}/10^{14}$cm$^{-2})^{-0.75}<1$ is the
self-shielding factor and $\beta_0$ is the photo absorption rate
at the edge of the cloud. The parameter $\beta_0$ is directly
related to the intensity of the ambient UV radiation field
at the edge of the H$_2$-bearing clump. Compared to the
Milky Way disk, where UV bright stars dominate
the mean photo-absorption rate of
$\beta_{\rm 0,MW}=5.0\times 10^{-10}$ s$^{-1}$
(e.g., Spitzer 1978), the value for $\beta_0$
in the Magellanic Stream is expected to be substantially
smaller. This is because the solid angle of the Milky
Way disk is relatively small at 50 kpc distance and
the contribution of the extragalactic UV background to
$\beta_{\rm 0,MS}$ is expected to be (relatively) small, too.
From the model by Fox et al.\,(2005) it follows that the UV
flux between 900 and 1100 \AA\, is reduced by a factor
of $160$ compared to the mean flux within the Milky Way disk,
so that we assume $\beta_{\rm 0,MS}=3.1\times 10^{-12}$ s$^{-1}$.
With a total H$_2$ column density of $8.5\times 10^{17}$ cm$^{-2}$
in the Stream the self-shielding factor $S$ becomes $1.13 \times
10^{-3}$, so that the photo-absorption rate in the cloud core is
estimated as $\beta = 3.5\times 10^{-15}$ s$^{-1}$.
For the H$_2$ grain formation rate in the Magellanic
Stream we adopt the value derived for the SMC
based on \emph{FUSE} H$_2$ absorption-line data
(Tumlinson et al.\,2002), i.e.,
$R_{\rm MS} = 3 \times 10^{-18}$ cm$^{3}$\,s$^{-1}$. This
value is 10 times smaller than $R$ in the
disk of the Milky Way (Spitzer 1978).
If we solve equation (1) for $\phi n_{\rm H}\,=\phi n({\rm H\,I})$
and include the values given above, we obtain
a density of $\phi n_{\rm H}\approx 1$ cm$^{-3}$ or
$n_{\rm H}\approx 4-8$ cm$^{-3}$ for $\phi=0.15-0.30$. These
densities are very close to the density derived
for the H$_2$ absorbing gas in the Leading Arm of the
MS toward NGC\,3783
($n_{\rm H}\approx 10$ cm$^{-3}$; Sembach et al.\,2001).
Note that H$_2$ absorption has also been detected
in the Magellanic Bridge (Lehner 2002).
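The chain of numbers in this subsection can be reproduced with a short script. This is a sketch using only the values adopted above; the factor-160 UV reduction and the SMC grain-formation rate are the assumptions stated in the text:

```python
# Reproducing the H2 formation-dissociation equilibrium estimate (equation 1).
k_mean = 0.11                     # dissociation probability per photo absorption
beta0_MW = 5.0e-10                # Milky Way photo-absorption rate [s^-1] (Spitzer 1978)
beta0_MS = beta0_MW / 160.0       # UV field reduced by a factor 160 at the Stream

N_H2 = 8.5e17                     # total H2 column density [cm^-2]
S = (N_H2 / 1e14) ** -0.75        # self-shielding factor (Draine & Bertoldi 1996)
beta = S * beta0_MS               # photo-absorption rate in the cloud core [s^-1]

R_MS = 3e-18                      # H2 grain-formation rate [cm^3 s^-1] (SMC value)
N_HI = 10**19.95                  # H I column density [cm^-2]

# Equation (1) solved for phi * n_H:
phi_nH = k_mean * beta * N_H2 / (R_MS * N_HI)

print(f"S = {S:.2e}, beta = {beta:.1e} s^-1")
print(f"phi*n_H ~ {phi_nH:.1f} cm^-3")
for phi in (0.30, 0.15):
    print(f"  phi = {phi:.2f} -> n_H ~ {phi_nH / phi:.0f} cm^-3")
```

With the rounded inputs this yields $\phi n_{\rm H}\approx 1.2$ cm$^{-3}$ and $n_{\rm H}\approx 4-8$ cm$^{-3}$, matching the values quoted above.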
\subsubsection{Gas temperature}
\begin{figure}[t!]
\epsscale{1.2}
\plotone{f8.eps}
\caption{Rotational excitation of H$_2$ in the Magellanic
Stream. The logarithmic H$_2$ column density
for each rotational state, log $N(J)$, divided by its statistical weight,
$g_J$, is plotted against the rotational excitation energy, $E_J$.
The fits for the excitation temperatures
(assuming a Boltzmann distribution) for $J=0,1$ and
$J=2,3$ are indicated with the solid and dashed line,
respectively.
}
\end{figure}
In Fig.\,8 we plot the measured logarithmic H$_2$ column density
for each rotational state, log $N(J)$, divided by its statistical weight,
$g_J$, against the rotational excitation energy, $E_J$.
Rotational excitation energies and statistical weights have been
adopted from the compilation of Morton \& Dinerstein (1976).
The data points in Fig.\,8 can be fit by a Boltzmann
distribution (i.e., a straight line in this plot),
where the slope characterizes the (equivalent)
excitation temperature for the rotational levels considered
for the fit. For the two rotational ground states $J=0,1$
the Boltzmann distribution thus is given by the equation
$N(1)/N(0)=g_1/g_0\,{\rm exp}\,(-E_{01}/kT_{01})$.
Using this equation, together with our measured column
densities for $J=0,1$, we find an excitation temperature
of $T_{\rm 01}=93^{+149}_{-39}$ K (Fig.\,8, solid line).
Because the two rotational ground states most likely are
collisionally excited, $T_{\rm 01}$ represents a robust
measure for the kinetic temperature in the H$_2$ absorbing gas
(e.g., Spitzer 1978); however, because of the saturation of the
H$_2$ lines and the resulting relatively large errors for
$N(0,1)$, the uncertainty for $T_{\rm 01}$ is
substantial.
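The fit for $T_{01}$ can be checked numerically. The sketch below uses standard H$_2$ level data ($E_{01}/k\approx 170.5$ K, $g_1/g_0=9$); the column-density ratio $N(1)/N(0)\approx 1.44$ is not quoted in the text and is back-computed here for illustration from the fitted temperature (the measured $N(J)$ are listed in Table 2):

```python
import math

# Boltzmann relation for the two rotational ground states of H2:
#   N(1)/N(0) = (g1/g0) * exp(-E01 / (k * T01))
E01_K = 170.5          # (E1 - E0)/k in Kelvin (standard H2 level data)
g1_over_g0 = 9.0       # (2J+1)(2I+1): ortho J=1 vs. para J=0

def excitation_temperature(N1_over_N0):
    """Equivalent Boltzmann temperature from the J=0,1 column-density ratio."""
    return E01_K / math.log(g1_over_g0 / N1_over_N0)

# Illustrative ratio chosen to reproduce the fitted T01 = 93 K:
T01 = excitation_temperature(1.44)
print(f"T01 = {T01:.0f} K")   # -> T01 = 93 K
```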
In a similar manner, we derive for the rotational states
$J=2,3$ an upper limit for the
equivalent Boltzmann temperature of
$T_{\rm 23}\leq 172$ K (Fig.\,8, dashed line). This higher
excitation temperature for $J\geq2$ reflects excitation
processes other than collisions, such as UV pumping
and H$_2$ formation pumping (Spitzer 1978). However,
the relatively low temperature limit of $T_{\rm 23}\leq 172$ K
points to a low intensity of the ambient
dissociating UV radiation field at the position of the MS.
The excitation temperatures given above deviate from
our earlier estimates (Richter et al.\,2001a). This is not
surprising, however, since the values for $N(J)$ have
substantially changed, too (see Sect.\,3.3).
The derived value of $T_{\rm 01}=93$ K lies in a
temperature regime
that is typical for diffuse H$_2$ gas in the disk of the
Milky Way and in the Magellanic Clouds (e.g., Savage et al.\,1977;
Tumlinson et al.\,2002; de Boer et al.\,1998; Richter et al.\,1998;
Richter 2000).
From the density and temperature estimates in the H$_2$ absorbing
gas we are now able to provide a direct estimate of the thermal
pressure, $P/k=nT$, in the cold neutral gas in the MS. With
$n_{\rm H}= 8$ cm$^{-3}$ and $T=93$ K we obtain
a pressure of $P/k=744$ cm$^{-3}$\,K. This value is
in good agreement with previous estimates for the
thermal pressure in the cold neutral phase of the
MS from H\,{\sc i} 21cm
measurements (e.g., Wakker et al.\,2002).
The thickness of the H$_2$ absorbing structure in the MS
is $d_{\rm H_2}=\phi\,N$(H\,{\sc i}$)/n_{\rm H}
\approx 0.6-1.2$ pc. This
small dimension of the absorbing gas clump
explains the very small velocity dispersion
($b=1.8$ km\,s$^{-1}$; see Table 1) that is measured for
this component.
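A quick numerical check of the pressure and thickness estimates is given below. The pairing of $\phi$ and $n_{\rm H}$ is our own illustrative choice, consistent with $\phi n_{\rm H}\approx 1$; depending on the pairing, $d$ comes out between roughly 0.5 and 2 pc, bracketing the quoted range:

```python
PC_IN_CM = 3.086e18        # 1 pc in cm

# Thermal pressure in the cold neutral medium, P/k = n_H * T:
n_H, T = 8.0, 93.0
P_over_k = n_H * T
print(f"P/k = {P_over_k:.0f} cm^-3 K")        # -> P/k = 744 cm^-3 K

# Thickness of the H2-bearing layer, d = phi * N(HI) / n_H, for two
# illustrative (phi, n_H) pairings consistent with phi*n_H ~ 1:
N_HI = 10**19.95
for phi, n in ((0.15, 8.0), (0.30, 4.0)):
    d_pc = phi * N_HI / n / PC_IN_CM
    print(f"phi = {phi:.2f}, n_H = {n:.0f} -> d = {d_pc:.1f} pc")
```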
\subsection{Electron density and C$^{+}$cooling rate
from C\,{\sc ii}$^{\star}$ absorption}
In Fig.\,7 (upper panel) we show the LSR velocity profile of
the C\,{\sc ii}$^{\star}$
$\lambda 1335.7$ line, based on the COS data. Weak absorption
is visible at MS velocities, but the component structure is
different from that of the other low ions (e.g., S\,{\sc ii};
Fig.\,7, lower panel).
Because we could not identify any other (e.g., intergalactic)
origin for this absorption feature, we assume that it is
caused by C\,{\sc ii}$^{\star}$ in the Magellanic Stream.
The C\,{\sc ii}$^{\star}$ feature is characterized by a stronger and broader
absorption peak near $+150$ km\,s$^{-1}$, coinciding
with components 1 and 2 defined in Table\,1, and
another weak and narrow component that is seen near $+190$ km\,s$^{-1}$,
most likely associated with component 4, in which H$_2$, Na\,{\sc i},
and Ca\,{\sc ii} are detected. Therefore, the relative strengths
of the C\,{\sc ii}$^{\star}$ components appear to be inverted from
those of the ground-state species. This indicates multi-phase gas,
in which the ionization fractions and electron densities vary among the
different absorption components.
Under typical interstellar conditions, the relative population of the
fine-structure levels of ionized carbon (C$^{+}$) is governed by the balance
between collisions with electrons and the radiative decay
of the upper level into the ground state $2s^2 2p ^2P_{1/2}$.
The C\,{\sc ii}$^{\star}$ $\lambda 1335.7$ transition arises from
the $2s^2 2p ^2P_{3/2}$ state, which has an energy of $\sim 8\times 10^{-5}$
eV above the ground state. Measurements of the column-density
ratios $N$(C\,{\sc ii}$^{\star}$)/$N$(C\,{\sc ii}) thus can be
used to estimate the electron densities $n_{\rm e}$ in different
interstellar environments, including Galactic high-velocity clouds
(HVCs) and circumgalactic gas
structures (e.g., Zech et al.\,2008; Jenkins et al.\,2005).
For the C\,{\sc ii}$^{\star}$ absorption in the MS toward
Fairall\,9 we measure a total column density of
log $N$(C\,{\sc ii}$^{\star})=13.35\pm 0.07$ for the velocity
range $v_{\rm LSR}=120-220$ km\,s$^{-1}$ using the AOD
method. For the weak absorption associated with component 4
we derive log $N$(C\,{\sc ii}$^{\star})=12.63\pm 0.03$ (AOD).
The C\,{\sc ii} $\lambda 1334.5$ absorption in the MS is fully saturated
and thus does not provide a constraining limit on $N$(C\,{\sc ii})
(see Table 1). We therefore use S\,{\sc ii} as a proxy, because
the S\,{\sc ii}/C\,{\sc ii} ratio is expected to be constant
over a large density range (Fig.\,6) and neither element
is expected to be depleted into dust grains in the MS. Using
a solar (S/C) ratio (Asplund et al.\,2009), we estimate from the values listed
in Table 1 that the total C\,{\sc ii} column density in the MS
is log $N_{\rm tot}$(C\,{\sc ii}$)\approx 16.10$ and
the column density in component 4 is
log $N_{4}$(C\,{\sc ii}$)\approx 15.30$.
Because the C$^{+}$ fine-structure population depends
strongly on the gas temperature (see, e.g., Spitzer 1978),
we do not attempt to estimate a mean value for $n_{\rm e}$ from
$N_{\rm tot}$(C\,{\sc ii}$^{\star})$/$N_{\rm tot}$(C\,{\sc ii}).
We know that the temperature in this multi-phase
absorber spans a large range between $\sim 100$ K and
probably a few $1000$ K, and therefore an
estimate for $\langle n_{\rm e} \rangle$ would be meaningless.
Instead, we concentrate on component 4, for which we
know the gas temperature and density from the analysis
of the H$_2$ rotational excitation (Sect.\,4.4).
Keenan et al.\,(1986) have calculated electron excitation
rates for the C$^{+}$ fine-structure transitions for a
broad range of physical conditions in interstellar gas.
Using our temperature and density estimates for the
gas in component 4 ($T=93$ K and $n_{\rm H}=4-8$
cm$^{-3}$) together with their predicted population
rate ratios ($2s^2 2p ^2P_{3/2}$/$2s^2 2p ^2P_{1/2}$)
for $T=100$ K and $n_{\rm H}=5-10$ cm$^{-3}$ (their Fig.\,2),
the measured column-density ratio
log[$N$(C\,{\sc ii}$^{\star}$)/$N$(C\,{\sc ii})$]=-2.67$
implies that the electron density in component 4
is small compared to $n_{\rm H}$, namely
$n_{\rm e}\leq 0.05$ cm$^{-3}$. This low value
for $n_{\rm e}$ is in excellent agreement with
expectations for a stable cold, neutral medium
in gas with subsolar metallicities (Wolfire et al.\,1995).
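The column-density bookkeeping of this subsection can be sketched as follows. The solar C and S abundances are the Asplund et al.\,(2009) values; the electron-density bound itself follows from the tabulated Keenan et al.\,(1986) rates and is not recomputed here:

```python
# Solar abundances, 12 + log(X/H) (Asplund et al. 2009):
logC_sun, logS_sun = 8.43, 7.12

def logN_CII_from_SII(logN_SII):
    """Infer the C II column from S II, assuming a solar (S/C) ratio."""
    return logN_SII + (logC_sun - logS_sun)     # offset = +1.31 dex

# Component 4 (values from the text):
log_N_CII_star = 12.63     # measured via the AOD method
log_N_CII = 15.30          # inferred via the S II proxy

ratio = log_N_CII_star - log_N_CII
print(f"log[N(CII*)/N(CII)] = {ratio:.2f}")    # -> -2.67
```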
Having a robust measure for $N_{\rm tot}$(C\,{\sc ii}$^{\star}$),
it is also possible to estimate the C$^{+}$ cooling rate in
the Magellanic Stream toward Fairall\,9.
Because ionized carbon is a major cooling agent of interstellar
gas in a wide range of environments (e.g., Dalgarno \& McCray 1972),
the C$^{+}$ cooling rate is an important quantity that governs the
thermal state of diffuse gas inside and outside of galaxies.
For the hydrogen and electron densities in the Magellanic Stream
(see above) the C$^{+}$ cooling rate is governed predominantly
by spontaneous de-excitations, while collisional de-excitations
can be neglected (e.g., Spitzer 1978).
Following Lehner, Wakker \& Savage (2004),
the C$^{+}$ cooling rate per neutral hydrogen atom can be estimated as
$l_C=2.89 \times 10^{-20}\,N($C\,{\sc ii}$^{\star})/N($H\,{\sc i}$)$\,erg\,s$^{-1}$.
If we adopt log $N$(H\,{\sc i}$)=19.95$ and log $N$(C\,{\sc ii}$^{\star})=13.35$
we derive a mean (sightline average) C$^{+}$ cooling rate per neutral hydrogen atom
of log $l_C=-26.14$ for the MS toward Fairall\,9. This value is almost one
order of magnitude lower
than the cooling rate derived for the one-tenth-solar metallicity HVC Complex C
(log $l_C\approx -27$), but is relatively close to the values derived for solar-metallicity
clouds in the Milky Way disk and in the lower Galactic halo
(log $l_C\approx -26$; see Lehner, Wakker \& Savage 2004, their Table\,4).
This result underlines the importance of metal cooling for the thermal state
of metal-rich circumgalactic gas absorbers in the local Universe.
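The sightline-averaged cooling rate follows directly from the Lehner, Wakker \& Savage (2004) relation quoted above:

```python
import math

# C+ cooling rate per neutral H atom (Lehner, Wakker & Savage 2004):
#   l_C = 2.89e-20 * N(CII*) / N(HI)  [erg s^-1]
log_N_CII_star = 13.35
log_N_HI = 19.95

l_C = 2.89e-20 * 10**(log_N_CII_star - log_N_HI)
print(f"log l_C = {math.log10(l_C):.2f}")   # -> log l_C = -26.14
```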
\section{Discussion}
\subsection{Enrichment history of the Magellanic Stream}
\subsubsection{Alpha and nitrogen abundances}
One major result of our study is the surprisingly high
metallicity of the MS toward Fairall\,9. From the
measured sulfur abundance of [S/H$]=-0.30\pm 0.04$ it
follows that the $\alpha$ abundance in the Stream in
this direction ($l=295, b=-58$) is as high as $0.5$ solar, which is
$\sim 5$ times higher than the $\alpha$ abundance derived
for other sightlines passing the MS toward
NGC\,7469 ($l=347, b=-64$), RBS\,144 ($l=299, b=-66$), and
NGC\,7714 ($l=88, b=-56$; Fox et al.\,2010, 2013).
Our detailed analysis of the ionization conditions in the
gas, the comparison between H\,{\sc i} 21cm measurements
from different radio telescopes with different beam sizes,
and the analysis of the H\,{\sc i} Ly\,$\alpha$ absorption
toward Fairall\,9 (see Appendix) do not provide any
evidence that this high sulfur abundance results
from the various systematic errors that come along with
our analysis (see also the discussion in Gibson et al.\,2000
on this topic).
We are thus forced to conclude that the measured high sulfur
abundance in the gas reflects the true chemical composition
of the Stream in this direction.
As a guide to the following discussion, we have plotted
in Fig.\,9 the derived MS abundances
(black filled circles) together with
the SMC and LMC present-day stellar and nebular
abundances (Russell \& Dopita 1992; Hughes et al.\,1998;
red (SMC) and green (LMC) filled circles).
With a sulfur abundance of $0.5$ solar, the MS toward Fairall\,9
exhibits an $\alpha$ abundance that is {\it higher} than the average
present-day $\alpha$ abundance in both SMC and LMC
($\sim 0.3$ and $\sim 0.4$ solar, respectively; Russell \& Dopita 1992).
This finding has profound implications for our understanding of the
origin of the Magellanic Stream and its enrichment history.
Early tidal models suggest that the MS was stripped from the
Magellanic Clouds $\sim 2$ Gyr ago (Gardiner \& Noguchi 1996). More
recent models, which take into account that the Magellanic Clouds
are possibly on their first pass through the Milky Way halo
(see, e.g., Kallivayalil et al.\,2013),
have included other relevant mechanisms that would
help to unbind the gas, such as blowouts and tidal resonances
(Connors et al.\,2006; Nidever et al.\,2008; Besla et al.\,2010).
One would expect that
the Stream's chemical composition reflects that of its
parent galaxy at the time when the gas was separated from the
Magellanic Clouds, since the MS does not contain stars that
could have increased its metallicity since then. Following the
well-defined age-metallicity relations of the SMC and LMC (Pagel \&
Tautvaisiene 1998; Harris \& Zaritsky 2004, 2009),
one would expect that the Stream's present-day
$\alpha$ abundance is at most $\sim 0.2$ solar if originating
in the SMC and $\sim 0.25$ solar if originating
in the LMC. While the observed MS abundances toward RBS\,144,
NGC\,7469, and NGC\,7714 are in line with these abundance limits
(and actually favour an SMC origin for this part of the
Stream; see Paper\,I), the sulfur abundance in the
MS toward Fairall\,9 is at least twice as high as expected
from a simple tidal model that assumes a homogeneous pre-enrichment
of the SMC/LMC gas (e.g., Dufour 1975) before the Stream was stripped off.
A second important finding of our study, which adds to this puzzle,
is the relatively low nitrogen abundance in the MS toward
Fairall\,9. The [N/S] ratio ($=[$N/$\alpha$] ratio) is $-0.85$ dex,
which is very low for the relatively high $\alpha$ abundance of
[$\alpha$/H]$=-0.3$ when compared to nearby extragalactic H\,{\sc ii}
regions and high-redshift damped Lyman $\alpha$ (DLA) systems
(see Pettini et al.\,2008; Jenkins et al.\,2005, and references
therein). The $\alpha$-process elements O, S, and Si are believed
to be produced by Type II supernovae from massive progenitor stars,
while the production of N, as part of the CNO cycle in stars
of different masses, is less simple (and not yet fully understood).
The so-called ``primary'' nitrogen production occurs when the
seed elements C and O are produced within the star during the
helium burning phase, while for ``secondary'' nitrogen
production these seed elements were already present when the star
condensed out of the ISM (Pettini et al.\,2002;
Henry \& Prochaska 2007). Primary N is believed to be produced
predominantly by stars of intermediate masses on the asymptotic
giant branch (e.g., Henry et al.\,2000). Because of the longer
lifetime of intermediate-mass stars, primary N therefore is expected
to be released into the surrounding ISM with a time delay of
$\sim 250$ Myr compared to $\alpha$ elements. At low metallicities
less than $\sim 0.5$ solar (i.e., at LMC/SMC metallicities
$\sim 1-2$ Gyr ago, when the MS was separated from its parent galaxy)
the nitrogen production is predominantly primary (see Pettini
et al.\,2008).
The low [N/S] and high [S/H] ratios observed in the MS toward
Fairall\,9 therefore indicate an abundance pattern that is dominated by
the $\alpha$ enrichment from massive stars and Type II supernovae,
while only very little (primary) nitrogen was deposited into
the gas.
\begin{figure}[t!]
\epsscale{1.2}
\plotone{f9.eps}
\caption{Comparison of present-day metal abundances in the
SMC (red dots), LMC (green dots), and the Magellanic Stream
toward Fairall\,9 (black dots). For SMC and LMC the metallicities
are derived from stellar and nebular abundances (from Russell \&
Dopita 1992; Hughes et al.\,1998).
}
\end{figure}
\subsubsection{Origin of the gas}
The most plausible scenario to explain
such an enrichment history is that the gas that later became
the Stream was {\it locally} enriched in the Magellanic Clouds
$\sim 1-2$ Gyr ago with $\alpha$ elements by several supernova explosions in a star
cluster or OB association, and then separated/stripped from the stellar body
of the parent galaxy {\it before} the primary nitrogen was
dumped into the gas and the metals could mix into the ambient
interstellar gas. Possibly, the supernova explosions
pushed the enriched material away from the stellar disk,
so that the gas was already less gravitationally bound at the
time it was stripped and incorporated into the Magellanic Stream.
In this scenario, it is the present-day N abundance in the MS that
defines the metallicity floor of the parent galaxy and thus
provides clues to the origin of the gas.
Chemical evolution models of the Magellanic Clouds suggest that the
mean metallicities of SMC and LMC $1-2$ Gyr ago were
$0.2-0.3$ dex lower than today (Pagel \& Tautvaisiene 1998;
Harris \& Zaritsky 2004, 2009).
Therefore, and because of the substantially higher mean N abundance
in the LMC compared to the SMC (see Fig.\,9), the low
N abundance measured in the Stream toward Fairall\,9
favours an SMC origin of the gas. An LMC origin is also possible,
however, because the scatter in the
present-day N abundances in the LMC is large (see Sect.\,4.2)
and the age-metallicity relations from Pagel \& Tautvaisiene (1998)
and Harris \& Zaritsky (2004,2009) may not apply to nitrogen
because of its special production mechanism. In fact, considering
the position of the Fairall\,9 sightline (see Paper\,I, Fig.\,1) and
the radial velocity of the MS absorption in this direction,
the bifurcation model of Nidever et al.\,(2008) {\it predicts} that the
high-velocity gaseous material toward Fairall\,9 is part
of the LMC filament of the Stream and thus should have a different
chemical composition than the SMC filament traced by the other
MS sightlines (Paper\,I). Another aspect that may be of relevance
in this context is the fact that the Fairall\,9 sightline
lies only 14.3\degr\ on the sky from the SMC. The relatively high
metal abundance in the Stream toward Fairall\,9 thus may reflect
the increasing importance of continuous ram-pressure stripping of
metal-enriched SMC gas as the MCs get closer to the Milky Way.
While the enrichment scenario outlined above appears feasible to
qualitatively explain the observed trend in the abundance pattern
in the MS toward Fairall\,9, the question arises how many massive
stars and subsequent supernova explosions would then be required
to lift the [S/N] and [S/H] ratios to such a high level. To
answer this question, one would first need to have an idea about
volume and mass of the gas that is enriched in this manner.
The fact that Fox et al.\,(2010, 2013) have determined a
much lower $\alpha$ abundance in the Stream of $\sim 0.1$ solar
toward three other MS sightlines using similar data
possibly suggests that the high metallicity and low [N/S] ratio
toward Fairall\,9 represent a relatively local
phenomenon.
In Sect.\,4.3, we have estimated a thickness of the absorbing
neutral gas layer in the Stream of $d\approx30$ pc from
the mean gas density ($n_{\rm H}=1.0$ cm$^{-3}$) and
the total column density, assuming that the dust depletion values
of Ti and Ca are similar in the gas. In the following, we
regard this value as a realistic {\it lower} limit for the true
physical size of the absorbing gas region. If we adopt the
SN yields from Kobayashi et al.\,(2006) and assume a spherical
symmetry for the absorbing gas region with a diameter and density as
given above, we calculate that only a handful of massive
stars would be required to boost the $\alpha$ abundance
from initially $0.1$ solar to $0.5$ solar
in such a small volume of gas. A much higher
number of high-mass stars would be required, however,
if the volume to be enriched were substantially larger, as we
outline in the following.
Because the total mass of a spherical gas cloud can be written
as the product of cloud volume and gas density
($M\propto Vn_{\rm H}\propto d^3n_{\rm H}$), and because we use the
relation $d=N_{\rm H}/n_{\rm H}$ to estimate $d$, we can write
$M\propto n_{\rm H}^{-2}$ for a fixed (measured) hydrogen column
density in the gas. Therefore, if the mean gas density in the
MS toward Fairall\,9 were $0.1$ cm$^{-3}$ rather than the
assumed $1.0$ cm$^{-3}$, then the diameter of the cloud
would be 300 pc instead of
30 pc, and it would require 100 times as many massive stars
to enrich this larger volume to the level required.
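The $M\propto n_{\rm H}^{-2}$ scaling for a fixed measured column density can be illustrated with a few lines (values as in the text; the 30 pc quoted above corresponds to the unrounded $\sim 29$ pc):

```python
PC_IN_CM = 3.086e18
N_H = 10**19.95            # fixed measured hydrogen column density [cm^-2]

def diameter_pc(n_H):
    """d = N_H / n_H for a fixed column density."""
    return N_H / n_H / PC_IN_CM

def mass_relative(n_H, n_ref=1.0):
    """M ~ d^3 * n_H ~ N_H^3 / n_H^2, so M scales as n_H^-2 at fixed N_H."""
    return (n_ref / n_H) ** 2

for n in (1.0, 0.1):
    print(f"n_H = {n} cm^-3: d ~ {diameter_pc(n):.0f} pc, "
          f"M / M(n_H=1) = {mass_relative(n):.0f}")
```

Lowering $n_{\rm H}$ by a factor of 10 thus inflates the diameter tenfold and the enclosed mass a hundredfold.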
In view of the stellar content of the most massive star
forming regions in the Magellanic Clouds (e.g., 30\,Doradus;
Melnick 1985) and the possible presence of a major star burst
in the MCs at the time when the Magellanic Stream was formed
(Harris \& Zaritsky 2009; Weisz et al.\,2013), a size
of a few dozen up to a few hundred pc together with the
proposed enrichment scenario is fully consistent with
our understanding of the star-formation history of the
Magellanic Clouds.
In summary, the observed abundance pattern in the MS toward
Fairall\,9 suggests the presence of a high-metallicity gas filament
in the Stream in this direction, possibly originating in a
region with enhanced star-formation activity in the Magellanic Clouds
$\sim 1-2$ Gyr ago. Our study indicates that the chemical composition
and thus the enrichment history of the MS seems to be more complex
than previously thought. The Stream possibly is composed
of different gas layers that have different chemical
compositions and that originate from different regions
with local star-formation histories within the Magellanic Clouds.
Our findings therefore support (but do not require) the idea that
there is a metal-enriched filament in the Stream toward Fairall\,9 that
originates in the LMC (but not the SMC), as concluded by
Nidever et al.\,(2008) from a systematic Gaussian decomposition
of the H\,{\sc i} velocity profiles of the LAB 21cm all-sky
survey (Kalberla et al.\,2005). To further explore this
scenario it will be of great importance to identify other
sightlines that pass the supposed LMC filament in the Stream
with background QSOs that are bright enough to be observed with
\emph{HST}/COS. Such observations would also
be helpful to investigate the $\alpha$ abundances
in the LMC filament based on oxygen rather than sulfur and
to further explore element ratios that could help
to constrain the enrichment history of the gas.
Interestingly, the MS absorber toward Fairall\,9 is not the
only example of a circumgalactic gas structure in the nearby
Universe that exhibits a high overall metallicity together
with a low N/$\alpha$ abundance ratio. For example,
Jenkins et al.\,(2005) have measured [$\alpha$/H]$=-0.19$
and [N/$\alpha]<0.59$ in an intervening Lyman-limit system (LLS)
at $z=0.081$ toward the quasar PHL\,1811. This absorption system
appears to be associated with two nearby spiral galaxies at
impact parameters $\rho=34\,h_{70}^{-1}$ kpc and
$\rho=87\,h_{70}^{-1}$ kpc.
It possibly represents gaseous material that has been ejected
or stripped from these galaxies (or other galaxies nearby) and
thus possibly has an origin that is very similar to that of the
Magellanic Stream. Note that low [N/$\alpha$] ratios are
also found in low-metallicity HVCs, such as Complex C (e.g.,
Richter et al.\,2001b). Clearly, a systematic study of N/$\alpha$
ratios in circumgalactic metal absorbers at low $z$ with
\emph{HST}/COS would be an important project to investigate
whether similar abundance patterns are typical for circumgalactic
gaseous structures in the local Universe.
\subsection{Physical conditions and small-scale structure in the gas}
In addition to the very interesting chemical properties, the combined
UVES/COS/\emph{FUSE}/GASS/ATCA data set provides a deep insight into the physical
conditions in the Stream toward Fairall\,9. The data show a complex
multiphase gas structure that possibly spans a large range in gas
densities and ionization conditions.
The presence of H$_2$ absorption toward Fairall\,9
and NGC\,3783 (Sembach et al.\,2001) indicates that the MS and its
Leading Arm host a widespread (because of the
large absorption cross section/detection rate) cold neutral
gas phase, possibly structured in a large number of small,
dense clumps or filaments. Assuming a total area of the
dense clumps or filaments. Assuming a total area of the
MS and its Leading Arm of $\sim 1500$ deg$^2$, and
a typical size for these clumps of a few pc, there
could be millions of these dense structures in the
neutral gas body of the Stream, if the two H$_2$ detections
toward Fairall\,9 and NGC\,3783 reflect the true
absorption cross section of this gas phase throughout the
Stream's neutral gas body.
The fact that the MS can maintain significant amounts of H$_2$
at moderate gas densities ($n_{\rm H}=1-10$ cm$^{-3}$)
probably is a result of the relatively low intensity of
the dissociating UV radiation at the Stream's location due
to the absence of local UV sources (see also Fox et al.\,2005).
Note that the low excitation temperature of H$_2$ for $J\geq 2$
(see also Sembach et al.\,2001) provides independent evidence for a
relatively weak ambient UV radiation field, and thus supports
this scenario. In the MS, the low dissociation rate is expected to
more than compensate for the low formation rate (which is reduced by the lower
metallicity and dust content compared to the Milky Way ISM),
thus resulting in molecular gas fractions that are relatively
high for the given total gas column. With a grain-formation
rate of $R_{\rm MS} = 3 \times 10^{-18}$ cm$^{3}$\,s$^{-1}$ and
a density of $n_{\rm H}=5$ cm$^{-3}$ (see Sect.\,4.4), the
H$_2$ formation time is long, $t_{\rm form}=
(R_{\rm MS}n_{\rm H})^{-1}\approx 2$ Gyr, which matches the
tidal age of the Stream. Therefore, it seems most likely
that the H$_2$ (or at least some fraction of it) has already
formed in the parent galaxy and then survived the subsequent
tidal stripping.
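The formation-time estimate is a one-line calculation:

```python
GYR_IN_S = 3.156e16        # seconds per Gyr

R_MS = 3e-18               # H2 grain-formation rate in the Stream [cm^3 s^-1]
n_H = 5.0                  # adopted gas density [cm^-3]

t_form_gyr = 1.0 / (R_MS * n_H) / GYR_IN_S
print(f"t_form ~ {t_form_gyr:.1f} Gyr")    # -> t_form ~ 2.1 Gyr
```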
A similar conclusion was drawn from the H$_2$ observations in the Leading
Arm of the MS toward NGC\,3783 (Sembach et al.\,2001).
Elaborating on our idea that the gas was locally enriched by massive stars
shortly before the Stream was separated from its parent galaxy, one
could imagine the presence of relatively dense, compressed shells and
fragments that formed the seed structures for the formation
of H$_2$ that then were carried along with the Stream's gaseous
body.
Independent constraints on the physical conditions in the gas
come from the observed Na\,{\sc i}/Ca\,{\sc ii} ratio.
We measure a Na\,{\sc i}/Ca\,{\sc ii}
column-density ratio of $0.23$ in component 4, where the
H$_2$ is expected to reside. In the Milky Way ISM, such a ratio
is typical for a dust-bearing warm neutral medium (WNM), where
typically $n_{\rm H}\leq 10$ cm$^{-3}$ and $T=10^2-10^4$ K,
and Ca\,{\sc ii} serves as a trace species (e.g., Crawford 1992).
In such gas, and {\it without} dust depletion of Ca and Na, a nearly
constant Na\,{\sc i}/Ca\,{\sc ii} ratio of $0.025$ would be
expected from detailed ionization models of these ions
(see Crawford 1992; Welty et al.\,1996; Richter et al.\,2011).
If these numbers also applied to the conditions in the MS,
they would indicate relative dust depletions of Ca and Na
of log $\delta$(Na$)-$\,log $\delta$(Ca$)\approx 0.9$, or
log $\delta$(Ca$)\approx -0.9$ for log $\delta$(Na$)=0$. These
values are in very good agreement with the dust-depletion
estimates from our Cloudy model discussed in Sect.\,4.3.
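The implied relative depletion follows directly from the two ratios quoted above (a sketch; the $\approx 0.9$ dex in the text corresponds to the unrounded value below):

```python
import math

ratio_observed = 0.23     # Na I / Ca II column-density ratio in component 4
ratio_dust_free = 0.025   # expected ratio without dust depletion (Crawford 1992)

# Extra depletion of Ca relative to Na implied by the elevated ratio:
delta_diff = math.log10(ratio_observed / ratio_dust_free)
print(f"log d(Na) - log d(Ca) = {delta_diff:.2f} dex")   # -> 0.96, i.e. ~0.9
```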
\subsection{Relevance to intervening QSO absorbers}
The Magellanic Stream represents a prime example of
a high-column density circumgalactic tidal gas stream
in the local Universe. If the Stream were observed as a
QSO absorber, it would be classified as an LLS, sub-DLA
(sub-Damped Lyman $\alpha$ absorber), or DLA,
depending on the position of the sightline passing through
the gas. The H\,{\sc i} column-density maps presented in
Fig.\,1 provide an estimate of the
covering fractions of these column-density regimes
in the MS in the direction of Fairall\,9.
The results from our multi-sightline campaign to study
the properties of the MS are therefore also of relevance
for the interpretation of intervening absorption-line
systems at low redshift.
\subsubsection{Inhomogeneity of absorbers}
The first important result from our
study that is of relevance for QSO absorption-line studies is
that the physical conditions {\it and} the
chemical composition appear to vary {\it substantially}
within the Stream.
If such inhomogeneities were typical for tidal gas
filaments around galaxies, then the interpretation of absorption
spectra from circumgalactic gas structures around more distant
galaxies (in terms of metallicity, molecular content,
physical conditions, gas mass, origin, etc.) would be
afflicted with large systematic uncertainties. In fact,
most studies that aim at exploring the connection between
galaxies and their surrounding circumgalactic gas are
limited to single sightlines that pass the galaxy
environment at a random impact parameter (e.g.,
Thom et al.\,2011; Tumlinson et al.\,2011; Ribaudo et al.\,2011).
Physical and chemical properties derived from single-sightline
studies, however, may not be at all representative of the
conditions in the general gaseous environment
(in the same way as conditions in the MS along the Fairall\,9
sightline are {\it not} representative for the Stream as
a whole).
The inhomogeneous chemical composition of the Stream implies
that the metals are possibly not well mixed in the gas.
Former abundance variations in the parent galaxy
due to local star-formation events thus may have been frozen
into the Stream's spatial metal distribution. This would not be
too surprising, however, since the main processes that stir up and mix
the interstellar gas within galaxies (i.e., supernova explosions,
stellar winds) cannot take place in the Stream simply due to the lack
of stars.
\subsubsection{Molecular absorption}
The two detections of H$_2$ absorption in the MS with \emph{FUSE}
toward Fairall\,9 and NGC\,3783 (Sembach et al.\,2001)
indicate that tidal gas streams around galaxies may
typically host a widespread, cold gas phase that has
a substantial absorption cross section. This scenario
is supported by the recent detection of H$_2$ absorption
in another, more distant circumgalactic tidal gas stream
beyond the Local Group (Crighton et al.\,2013).
These findings also remind us that
the presence of H$_2$ in an intervening absorber with
a complex velocity structure and a high neutral gas
column density does not {\it necessarily} mean that one
traces a gaseous disk of a galaxy.
The relatively large
absorption cross section of H$_2$ in neutral gas structures
{\it around} galaxies, as found in the Milky Way halo
(see Richter 2006 and references therein), can be explained
by the circumstance that the physical conditions in these
star-less gas clouds favour the formation of diffuse
molecular structures even at relatively moderate gas
densities. In particular, the relatively low intensity
of the local UV radiation field, the efficient process
of H$_2$ line self-shielding, and the lack of the destructive
influence of massive stars probably represent
important aspects in this context.
H$_2$ observations in circumgalactic gas clouds not
only provide important information on the physical and
chemical conditions in these structures, they also are of
high relevance for our understanding of the physics
of molecular hydrogen in diffuse gas under conditions
that are very different from those in the local ISM
(see also Sembach et al.\,2001).
The transition from neutral to molecular gas, in particular,
represents one of the most important processes that govern the
evolutionary state of galaxies at low and high redshift.
Detailed measurements of H$_2$ fractions and dust-depletion
patterns in (star-less) tidal streams at low redshift
can help to constrain the critical formation rate of
molecular hydrogen in low-metallicity environments and
thus could be of great importance to better understand the
distribution and cross section of molecular gas in
and around high-redshift galaxies.
\section{Summary}
In this second paper of our ongoing series to study the Magellanic Stream
in absorption we have analyzed newly
obtained optical and UV absorption-line data from \emph{HST}/COS and
VLT/UVES together with archival \emph{FUSE} and H\,{\sc i} 21cm emission
data from GASS and ATCA to study the chemical composition and the physical
conditions in the Magellanic Stream in the direction of the quasar
Fairall\,9. Our main results are the following:\\
\\
1. Metal absorption in the Magellanic Stream (MS) is detected
in seven individual absorption components centered at
$v_{\rm LSR}=+143,+163,+182,+188,+194,+204$ and $+218$ km\,s$^{-1}$,
indicating a complex internal velocity structure of the MS
in this direction. Detected ions, atoms and molecules in
the Stream include
C\,{\sc iv}, Si\,{\sc iv}, Si\,{\sc iii}, C\,{\sc ii},
C\,{\sc ii}$^{\star}$, Al\,{\sc ii}, Si\,{\sc ii},
S\,{\sc ii}, Ca\,{\sc ii}, Ti\,{\sc ii}, Fe\,{\sc ii},
O\,{\sc i}, N\,{\sc i}, Na\,{\sc i}, and H$_2$.\\
\\
2. From the unsaturated S\,{\sc ii} absorption and a Cloudy
photoionization model we obtain an $\alpha$
abundance in the Stream of [S/H$]=-0.30\pm 0.04$ ($0.50$ solar),
which is substantially higher than that found in the Stream
along the lines of sight toward NGC\,7469, RBS\,144, and NGC\,7714
(Fox et al.\,2010, 2013).
Unfortunately, the unresolved, saturated O\,{\sc i} $\lambda 1302.2$
line cannot be used to independently constrain the
$\alpha$ abundance in the MS toward Fairall\,9. In contrast to
sulfur, we measure a very low nitrogen abundance in the gas of
[N/H$]=-1.15\pm 0.06$. The resulting [N/S] ratio is $-0.85$ dex,
which is very low compared to other environments with
similarly high $\alpha$ abundances.
The low [N/S] and high [S/H] ratios observed in the MS toward
Fairall\,9 suggest an abundance pattern that is dominated by
the $\alpha$ enrichment from massive stars and Type II supernovae,
while only very little primary nitrogen was deposited into
the gas when the Stream was separated from the
Magellanic Clouds.\\
\\
3. The detection of very narrow Na\,{\sc i} and H$_2$ absorption
(with $b\approx 2$ km\,s$^{-1}$) in the component at $v_{\rm LSR}=+188$
km\,s$^{-1}$ indicates the presence of a compact (pc-scale), cold gas structure
in the MS along this sightline. From the analysis of the
newly reduced archival \emph{FUSE} data of Fairall\,9 we measure a total
molecular hydrogen column density of log $N$(H$_2)=17.93^{+0.19}_{-0.16}$, improving
previous results from Richter et al.\,(2001a). From the analysis
of the H$_2$ rotational excitation we obtain a kinetic temperature
in the cold neutral gas phase of $T\approx 93$ K.
For the gas density we derive $n\approx 4-8$ cm$^{-3}$,
assuming that the H$_2$ gas is in a formation-dissociation
equilibrium. The resulting estimate for the thermal gas pressure
is $P/k\approx 750$ cm$^{-3}$\,K, in good agreement with values derived
from previous studies of the Stream. The detection of H$_2$ absorbing
structures in the MS, whose linear and angular sizes must be very
small ($\sim 1$ pc and $\sim 3$ arcseconds), indicates
that the neutral gas body of the Stream is highly clumped and
structured. We discuss the implications of physical and chemical
inhomogeneities in circumgalactic gas structures on our understanding
of intervening QSO absorption-line systems. \\
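The numbers in point 3 can be cross-checked with simple arithmetic; the only input not taken from the text is the adopted Stream distance of $\sim60$ kpc, which is an assumption here:

```python
# Thermal pressure from the derived density and temperature (point 3):
n_lo, n_hi = 4.0, 8.0   # cm^-3
T_kin = 93.0            # K
p_lo, p_hi = n_lo * T_kin, n_hi * T_kin
print(f"P/k = {p_lo:.0f}-{p_hi:.0f} cm^-3 K")  # 372-744, consistent with ~750

# Angular size of a ~1 pc structure at an assumed Stream distance of ~60 kpc:
L_pc, d_pc = 1.0, 60.0e3
theta_arcsec = 206265.0 * L_pc / d_pc
print(f"theta ~ {theta_arcsec:.1f} arcsec")    # ~3.4 arcsec, of order 3 arcsec
```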
\\
4. The relatively low column densities of Fe\,{\sc ii}, Ti\,{\sc ii},
Ni\,{\sc ii}, Al\,{\sc ii}, and Ca\,{\sc ii} indicate that the gas phase
abundances of these elements are affected by dust depletion. We combine our
column-density measurements for these ions with a Cloudy
photoionization model and derive dust
depletion values relative to sulfur of
log $\delta$(Si$)= -0.27$,
log $\delta$(Fe$)= -0.56$,
log $\delta$(Ni$)\leq -0.51$,
log $\delta$(Ti$)= -1.32$,
log $\delta$(Al$)= -0.62$, and
log $\delta$(Ca$)= 0$ to $-1.58$.
These depletion values are similar to those
found in warm, diffuse clouds in the lower
Milky Way halo (Savage \& Sembach 1996).\\
\\
5. Our study indicates that the enrichment history of the
Magellanic Stream as well as the physical conditions in the
Stream are more complex than previously thought.
The abundances and gas-to-dust ratios measured in the
Stream along the Fairall\,9 sightline are substantially
higher than what is found along other MS sightlines.
The high sulfur abundance in the gas possibly indicates a
substantial $\alpha$ enrichment from massive stars
in a region of enhanced star-formation $\sim 1-2$ Gyr ago
that then was stripped from the
Magellanic Clouds and incorporated into the MS
before the gas could be enriched in nitrogen from
intermediate-mass stars.
Our findings are in line with the idea that the
metal-enriched filament in the Stream toward Fairall\,9
originates in the LMC (Nidever et al.\,2008), but an
SMC origin is also possible (and slightly favoured by
the low nitrogen abundance in the gas).
\acknowledgments
We gratefully thank Gurtina Besla and Mary Putman for helpful
discussions and Christian Br\"uns for providing the
supporting ATCA 21cm data.
Support for this research was provided by NASA through
grant HST-GO-12604 from the Space Telescope Science Institute,
which is operated by the Association of Universities for Research
in Astronomy, Incorporated, under NASA contract NAS5-26555.
\newpage
\section*{REFERENCES}
\begin{footnotesize}
\noindent
Abgrall, H., \& Roueff, E. 1989, A\&AS, 79, 313
\noindent
\\
Anders, E., \& Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
\noindent
\\
Asplund, M., Grevesse, N., Jacques Sauval, A., \& Scott, P. 2009, ARA\&A, 47, 481
\noindent
\\
Ben Bekhti, N., Richter, P., Westmeier, T., \& Murphy, M.T. 2008,
A\&A, 487, 583
\noindent
\\
Ben Bekhti, N., Winkel, B., Richter, P., et al. 2011, A\&A, 542, 110
\noindent
\\
Besla, G., Kallivayalil, N., Hernquist, L., et al. 2007, ApJ, 668, 949
\noindent
\\
Besla, G., Kallivayalil, N., Hernquist, L., et al. 2010, ApJ, 721, L97
\noindent
\\
Besla, G., Kallivayalil, N., Hernquist, L., et al. 2012, MNRAS, 421, 2109
\noindent
\\
Bland-Hawthorn, J., \& Maloney, P. R. 1999, ApJ, 510, L33
\noindent
\\
Bland-Hawthorn, J., Sutherland, R., Agertz, O., \& Moore, B. 2007, ApJ, 670, L109
\noindent
\\
Bland-Hawthorn, J.,\& Maloney, P. R. 2002, in ASP Conf. Ser. 254, Extragalactic
Gas at Low Redshift, ed. J. S. Mulchaey \& J. T. Stocke (San Francisco: ASP),
267
\noindent
\\
Br\"uns, C., Kerp, J., Stavely-Smith, L., et al. 2005, A\&A, 432, 45
\noindent
\\
Crawford, I.A. 1992, MNRAS, 259, 47
\noindent
\\
Crighton, N. H. M., Bechtold, J., Carswell, R. F. et al. 2013, MNRAS, in press (astro-ph/1210.0905)
\noindent
\\
Connors, T. W., Kawata, D., Maddison, S. T., \& Gibson, B. K. 2004, PASA, 21, 222
\noindent
\\
Connors, T. W., Kawata, D., \& Gibson, B. K. 2006, MNRAS, 371, 108
\noindent
\\
Dalgarno, A., \& McCray, R. A. 1972, ARA\&A, 10, 375
\noindent
\\
de Boer, K. S., Richter, P., Bomans, D. J., Heithausen, A., \& Koornneef, J.
1998, A\&A, 338, L5
\noindent
\\
Dekker, H., D'Odorico, S., Kaufer, A., Delabre, B., \& Kotzlowski,
H. 2000, SPIE, 4008, 534
\noindent
\\
Diaz, J. D., \& Bekki, K. 2011, MNRAS, 413, 2015
\noindent
\\
Draine, B., \& Bertoldi, F. 1996, ApJ, 468, 269
\noindent
\\
Dufour, R.J. 1975, ApJ, 195, 315
\noindent
\\
Ferland, G. J., Korista, K. T., Verner, D. A. et al. 1998, PASP, 110, 761
\noindent
\\
Fontana, A., \& Ballester, P. 1995, ESO Messenger, 80, 37
\noindent
\\
Fox, A. J., Wakker, B. P., Savage, B. D., et al. 2005, ApJ, 630, 332
\noindent
\\
Fox, A.J., Wakker, B.P., Smoker, J.V., et al. 2010, ApJ, 718, 1046
\noindent
\\
Fox, A.J., Richter, P., Wakker, B.P. et al. 2013, ApJ, in press (astro-ph/1304.4240)
\noindent
\\
Gardiner, L.T. \& Noguchi, M. 1996, MNRAS, 278, 191
\noindent
\\
Gibson, B. K., Giroux, M. L., Penton, S. V., et al. 2000, AJ, 120, 1830
\noindent
\\
Green, J. C., Froning, C. S., Osterman, S., et al. 2012, ApJ, 744, 60
\noindent
\\
Harris, J., \& Zaritsky, D. 2004, AJ, 127, 1531
\noindent
\\
Harris, J., \& Zaritsky, D. 2009, AJ, 138, 1243
\noindent
\\
Heitsch, F. \& Putman, M. E. 2009, ApJ, 698, 1485
\noindent
\\
Henry, R. B. C., Edmunds, M. G., \& K\"oppen, J. 2000, ApJ, 541, 660
\noindent
\\
Henry, R. B. C. \& Prochaska, J. X. 2007, PASP, 119, 962
\noindent
\\
Hughes, J. P., Hayashi, I., \& Koyama, K. 1998, ApJ, 505, 732
\noindent
\\
Irwin, M. J., Kunkel, W. E., \& Demers, S. 1985, Nature, 318, 160
\noindent
\\
Jenkins, E. B., Bowen, D. V., Tripp, T. M., \& Sembach, K. R. 2005,
ApJ, 623, 767
\noindent
\\
Kalberla, P. M. W., Burton, W. B., Hartmann, D., et al. 2005,
A\&A, 440, 775
\noindent
\\
Kalberla, P. M. W., McClure-Griffiths, N. M., Pisano, D. J., et al.
2010, A\&A, 521, 17
\noindent
\\
Kallivayalil, N., van der Marel, R. P., Alcock, C., et al. 2006a,
ApJ, 638, 772
\noindent
\\
Kallivayalil, N., van der Marel, R. P., \& Alcock, C. 2006b, ApJ,
652, 1213
\noindent
\\
Kallivayalil, N., van der Marel, R. P., Besla, G., Anderson, J., \& Alcock, C. 2013,
ApJ, 764, 161
\noindent
\\
Keenan, F. P., Lennon, D. J., Johnson, C. T., \& Kingston, A. E. 1986,
MNRAS, 220, 571
\noindent
\\
Kobayashi, C., Umeda, H., Nomoto, K., Tominaga, N., \& Ohkubo, T. 2006,
ApJ, 653, 1145
\noindent
\\
Koerwer, J.F. 2009, AJ, 138, 1
\noindent
\\
Kriss, G. A. 2011, \emph{COS} Instrument Science Report, 1
\noindent
\\
Lehner, N. 2002, ApJ, 578, 126
\noindent
\\
Lehner, N., Wakker, B.P., \& Savage, B.D. 2004, ApJ, 615, 767
\noindent
\\
Lehner, N., Howk, J.C., Keenan, F.P., \& Smoker, J.V. 2008, ApJ 678, 219
\noindent
\\
Lu, L., Savage, B. D., \& Sembach, K. R. 1994, ApJ, 437, L119
\noindent
\\
Mastropietro, C., Moore, B., Mayer, L., Wadsley, J., \& Stadel, J. 2005, MNRAS,
363, 509
\noindent
\\
Mathewson, D. S., Ford, V. L., Schwarz, M. P., \& Murray, J. D. 1979, IAUS, 84, 547
\noindent
\\
Melnick, J. 1985, A\&A, 153, 235
\noindent
\\
Morton, D. C., \& Dinerstein, H. L. 1976, ApJ, 204, 1
\noindent
\\
Morton, D. C. 2003, ApJS, 149, 205
\noindent
\\
McClure-Griffiths, N. M., Pisano, D. J., Calabretta, M. R., et al. 2009,
ApJS, 181, 398
\noindent
\\
Nidever, D. L., Majewski, S. R., \& Burton, W. B. 2008, ApJ, 679, 432
\noindent
\\
Nidever, D. L., Majewski, S. R., Burton, W. B., \& Nigra, L. 2010,
ApJ, 723, 1618
\noindent
\\
Pagel, B. E. J., \& Tautvaisiene, G. 1998, MNRAS, 299, 535
\noindent
\\
Pettini, M., Ellison, S. L., Bergeron, J., \& Petitjean, P. 2002,
A\&A, 391, 21
\noindent
\\
Pettini, M., Zych, B. J., Steidel, C. C., \& Chaffee, F. H. 2008,
MNRAS, 385, 2011
\noindent
\\
Putman, M. E., Staveley-Smith, L., Freeman, K. C., Gibson, B. K., \& Barnes, D. G. 2003,
ApJ, 586, 170
\noindent
\\
Ribaudo, J., Lehner, N., Howk, J. C., et al. 2011,
ApJ, 743, 207
\noindent
\\
Richter, P., Widmann, H., de Boer, K. S. et al. 1998, A\&A, 338, L9
\noindent
\\
Richter, P. 2000, A\&A, 359, 1111
\noindent
\\
Richter, P., Sembach, K. R., Wakker, B. P., \& Savage, B. D., 2001a,
ApJ, 562, L181
\noindent
\\
Richter, P., Savage, B. D., Wakker, B. P., Sembach, K. R., \& Kalberla, P. M. W.
2001b, ApJ, 549, 281
\noindent
\\
Richter P., Wakker B. P., Savage B. D., \& Sembach K. R. 2003, ApJ, 586, 230
\noindent
\\
Richter, P., Westmeier, T., \& Br\"uns, C. 2005, A\&A, 442, L49
\noindent
\\
Richter, P. 2006, Rev.\,Mod.\,Astron., 19, 31
\noindent
\\
Richter, P., Charlton, J. C., Fangano, A. P. M., Ben Bekhti, N.,
\& Masiero, J. R. 2009, ApJ, 695, 1631
\noindent
\\
Richter, P., Krause, F., Fechner, C., Charlton, J. C.,
\& Murphy, M. T. 2011, A\&A, 528, A12
\noindent
\\
Russell, S. C., \& Dopita, M. A. 1990, ApJS, 74, 93
\noindent
\\
Russell, S. C., \& Dopita, M. A. 1992, ApJ, 384, 508
\noindent
\\
Savage, B. D., Drake, J. F., Budich, W., \& Bohlin, R. C. 1977, ApJ,
216, 291
\noindent
\\
Savage, B. D., \& Sembach, K. R. 1991, ApJ, 379, 245
\noindent
\\
Savage, B. D. \& Sembach, K. R. 1996, ARA\&A, 34, 279
\noindent
\\
Sembach, K. R., Howk, J. C., Savage, B. D., \& Shull, J. M. 2001,
AJ, 121, 992
\noindent
\\
Songaila, A. 1981, ApJ, 243, L19
\noindent
\\
Spitzer, L. 1978, {\it Physical processes in the interstellar medium},
(New York: Wiley-Interscience)
\noindent
\\
Thom, C., Werk, J. K., Tumlinson, J., et al. 2011, ApJ, 736, 1
\noindent
\\
Tumlinson, J., Shull, J. M., Rachford, B. L., et al. 2002, ApJ, 566, 857
\noindent
\\
Tumlinson, J., Werk, J. K., Thom, C., et al. 2011, ApJ, 733, 111
\noindent
\\
Wakker, B. P., Howk, J. C., Savage, B. D., et al. 1999, Nature, 402, 388
\noindent
\\
Wakker, B. P., Oosterloo, T. A., \& Putman, M. E. 2002, AJ, 123, 1953
\noindent
\\
Wakker, B.P. 2006, ApJS, 163, 282
\noindent
\\
Wakker, B. P., York, D. G., Howk, J. C., et al.\,2007, ApJ, 670, L113
\noindent
\\
Wakker, B. P., York, D. G., Wilhelm, R., et al. 2008, ApJ, 672, 298
\noindent
\\
Wakker, B. P., Lockman, F. J., \& Brown, J. M. 2011, ApJ, 728, 159
\noindent
\\
Wannier, P., \& Wrixon, G. T. 1972, ApJ, 173, L119
\noindent
\\
Weiner, B. J., \& Williams, T. B. 1996, AJ, 111, 1156
\noindent
\\
Weisz, D. R., Dolphin, A. E., Skillman, E. D., et al.\,2013, MNRAS, 431, 364
\noindent
\\
Welty, D. E., Morton, D. C., \& Hobbs, L.M. 1996, ApJS, 106, 533
\noindent
\\
Welty, D. E., Lauroesch, J. T., Blades, J. C., Hobbs, L.M., \& York, D. G. 1997,
ApJ, 489, 672
\noindent
\\
Welty, D. E., Federman, S. R., Gredel, R., Thorburn, J. A., \& Lambert, D. L. 2006,
ApJS, 165, 138
\noindent
\\
Wolfire, M. G., McKee, C. F., Hollenbach, D., \& Tielens, A. G. G. M.
1995, ApJ, 453, 673
\noindent
\\
York, D. G., Songaila, A., Blades, J. C., et al. 1982, ApJ, 255, 467
\noindent
\\
Zech, W. F., Lehner, N., Howk, J. C., Dixon, W. V. D., \& Brown, T. M. 2008, ApJ,
679, 460
\end{footnotesize}
\section{Introduction}
Despite the major importance of Type Ia Supernovae (SN Ia) as cosmological distance indicators in the discovery of the accelerated expansion of the Universe and of dark energy
\citep{1998AJ....116.1009R,1999ApJ...517..565P}, the unknown nature of their progenitors is still a great concern. It is a theoretical consensus that
the progenitors are binary systems with a massive C/O white dwarf (WD) which ignites when it reaches the Chandrasekhar mass limit. However, the nature of the companion star
is still under heavy debate. It can be a non-degenerate companion star in the
single degenerate (SD) channel or a WD companion in the double degenerate channel. See \citet{maoz} and \citet{2012NewAR..56..122W} for
recent reviews on the topic of SN Ia progenitors.
In the single degenerate channel, the Close Binary Supersoft Sources (CBSS) and the V Sge stars are strong candidates for SN Ia progenitors, given
their massive WDs and high accretion rates. In CBSS \citep{1997ARA&A..35...69K} the WD experiences hydrostatic
nuclear burning on its surface because of the very high ($\dot{M}\sim 10^{-7}$ M$_{\sun}~$yr$^{-1}$) accretion rate from the near main
sequence secondary star. CBSS were first discovered as supersoft X-ray sources in the Magellanic Clouds, and only two have been found in the Galaxy so far (namely QR And and MR Vel). To address
the discrepancy between the number of CBSS discovered in other galaxies and in the Milky Way, \citet{1998PASP..110..276S} proposed that the V Sge class of stars could be the Galactic
counterpart of the CBSS, not recognized as such in our Galaxy because of the absorption of the supersoft emission by the interstellar gas.
The V Sge stars are spectroscopically characterized by high-ionization emission lines of \mbox{O\,{\sc vi}} and \mbox{N\,{\sc v}} and
by the ratio of the equivalent widths of \mbox{He\,{\sc ii}} 4686 {\AA} to H$\beta$ usually greater than 2. Other characteristic features are P Cyg profiles, indicating strong wind
in the systems, and the lack or weakness of \mbox{He\,{\sc i}} lines. Their orbital periods range from 5 to 12 h and their orbital light curves are either low-amplitude sinusoidal or
high-amplitude asymmetric with primary and secondary eclipses. As in the CBSS, no spectroscopic evidence of atmospheric absorption features
from the secondary star has been found until now in the V Sge stars, although in V Sge itself the discovery of narrow Bowen fluorescence emission features of \mbox{O\,{\sc iii}}
3132 and 3444 {\AA} characterized it as a double-lined spectroscopic binary, allowing the estimate of the mass ratio \citep{1965ApJ...141..617H}.
Four members initially composed the V Sge class: V Sge \citep{1965ApJ...141..617H}, V617 Sgr \citep{1999A&A...351.1021S,2007A&A...471L..25S}, WX Cen \citep{2004MNRAS.351..685O} and DI Cru
(HD104994, WR46), but the latter has left the V Sge class being re-classified as a quasi Wolf--Rayet (qWR) star \citep{2004PASP..116..311O}.
Attempts have been made to increase the number of known
members of the V Sge class, but the number of confirmed members remains low. The candidates WR 7a \citep{2003MNRAS.346..963O} and HD 45166 \citep{2005A&A...444..895S}
have also been classified as qWR stars, as in the case of DI Cru. \citet{2007MNRAS.382.1158W} noted the spectral similarity of IPHAS J025827.88+635234.9 to V Sge and to the cataclysmic variable
QU Car, however \citet{2014Ap&SS.349..361K} claims its photometric behaviour does not fit the typical behaviour of the V Sge stars and CBSS.
Recently, \citet{2008AJ....135.1649K} (hereafter KAH08) suggested the inclusion of QU Car in the V Sge class.
QU Car, although bright ($m_v \sim 11.4$) and known since 1968, is very poorly understood. It was discovered as an irregular variable with observational features similar to Sco X-1 by
\citet{1968ApL.....1..247S}. \citet{1969ApJ...157..709S} suggested its classification as an old nova and reported flickering as large as 0.2 mag on time-scales of minutes, but found no periodicity
in the light curve that could be associated with orbital motion. This author also reported that the emission lines are weakest and the absorption lines strongest during times of
maximum flickering activity. \citet{1982ApJ...261..617G} (hereafter GP82) determined an orbital period of 10.9 h from the radial velocities of the \mbox{He\,{\sc ii}} 4686 {\AA}
emission line, a dominant optical spectral feature besides the \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex at 4630--4660 {\AA}. No spectral features of a secondary star were found in their
spectra, despite the long orbital period, leading to the suggestion that the light of the system is dominated by the WD and accretion disc. They proposed classifying QU Car as
a nova-like variable. While GP82 set a lower limit of 500 pc on the distance to QU Car, \citet{2003MNRAS.338..401D} estimated a possible distance of 2 kpc. If this is really the case, the luminosity would be
$10^{37}$ erg~s$^{-1}$ and the mass accretion rate would be close to $10^{-7}$ M$_{\sun}$~yr$^{-1}$, which are typical values of the CBSS.
Nevertheless, a comparison made by \citet{2003MNRAS.338..401D} between the optical spectrum they obtained for QU Car and CBSS published spectra, showed that the absence of \mbox{O\,{\sc vi}}
in the former indicates that the degree of ionization in QU Car is lower than in CBSS. Based on the spectrum they also argue that the secondary may be an early R-type star.
KAH08 obtained new optical spectra of QU Car and performed a radial velocity analysis of the \mbox{He\,{\sc ii}} 4686 {\AA} line in order to provide a modern ephemeris,
but surprisingly they could not find, in their data set, the 10.9 h orbital period previously determined by GP82 in data taken 27 years before, using the same emission line. KAH08
propose that line profile variations due to an erratic wind may be responsible for the non-detection of the 10.9 h periodicity. In their spectra they also found signatures of
the forbidden [\mbox{O\,{\sc iii}}] and [\mbox{N\,{\sc ii}}] emission lines, indicative of a nebula, possibly related to the presence of a strong wind.
Based on the similarity of the spectra of QU Car to that of the V Sge star WX Cen, KAH08 proposed its inclusion in the V Sge class. In addition, they analysed
a long-term AAVSO photometric time series of QU Car and discovered high and low brightness states similar to the ones presented by the V Sge stars. The same set of AAVSO
data, plus ASAS photometric monitoring, was analysed by \citet{2012MNRAS.425.1585K} (hereafter KHW12). In those data QU Car presented high states with $m_v \sim 11.5$ mag and less frequent
low states lasting for $\sim 100$ d when the system was fainter than 12 mag. KHW12 proposed to link QU Car to the V Sge class and to the Accretion Wind Evolution (AWE)
model \citep{2003ApJ...598..527H} which reproduces the high/low brightness levels as well as the off/on soft X-rays states of V Sge.
In an observational program to search for Galactic counterparts of the CBSS and new members of the V Sge class, we selected QU Car for photometric and spectroscopic studies.
In this paper we present our efforts to better understand this elusive system. In section 2 we present our data; in section 3 we discuss the extensive series of spectra,
both in terms of line-profile and radial-velocity variability, and also discuss the photometric data. Our conclusions are presented in section~4.
\section{Observations and data reduction}
QU Car was observed with all three telescopes at the Observat\'orio Pico dos Dias (OPD -- LNA/MCTI) located in southeast Brazil.
Photometric observations were carried out on ten nights during 2009, 2011 and 2012 (Table~\ref{jophoto}) at both 0.6-m Zeiss and Boller \& Chivens telescopes.
We employed three distinct thin, back-illuminated detectors: the E2V CCD47-20 (CCD S800), the SITe SI003AB (CCD 101) and the E2V CCD42-40 (CCD IkonL).
Time series of images were obtained through the Johnson V filter, with individual exposure times of 30 seconds. The timings were provided by a GPS
receiver. Bias and dome flatfield exposures were used for correction of the detector read-out noise and sensitivity using standard
\textsc{iraf}\footnote {\textsc{iraf} is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for
Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} tasks.
The differential aperture photometry was performed with the DAOPHOT II package using apertures and background annuli defined by the instantaneous
PSF measured in each image.
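The differential-photometry step can be sketched as follows; the count values below are purely hypothetical, and DAOPHOT's PSF-based aperture selection is not reproduced here:

```python
import numpy as np

# Differential aperture photometry: the magnitude difference between the
# target and a (constant) comparison star removes first-order transparency
# and airmass variations. Counts below are hypothetical.
target_counts = np.array([51200.0, 50100.0, 52400.0])       # per 30 s frame
comparison_counts = np.array([101000.0, 99000.0, 103500.0])

delta_mag = -2.5 * np.log10(target_counts / comparison_counts)
print(np.round(delta_mag, 3))
```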
\begin{table}
\centering
\caption{Journal of photometric observations of QU~Car.\label{jophoto}}
\begin{tabular}{@{}lccc@{}}
\hline
Date & Length of & Telescope & CCD \\
(UT) & observation (h) & & \\
\hline
2009 Jun 06 & 2.8 & 0.6 m B\&C & S800 \\
2011 Apr 13 & 7.7 & 0.6 m Zeiss & 101 \\
2011 Apr 22 & 6.3 & 0.6 m Zeiss & 101 \\
2011 Apr 23 & 3.5 & 0.6 m Zeiss & 101 \\
2011 Apr 24 & 3.7 & 0.6 m Zeiss & 101 \\
2011 May 11 & 1.7 & 0.6 m Zeiss & IkonL\\
2011 Jun 21 & 7.0 & 0.6 m Zeiss & IkonL\\
2012 Mar 14 & 8.1 & 0.6 m Zeiss & 101 \\
2012 Mar 25 & 9.3 & 0.6 m Zeiss & 101 \\
2012 Mar 26 & 6.0 & 0.6 m Zeiss & 101 \\
\hline
\end{tabular}
\end{table}
The spectroscopic data were obtained with the 1.6-m Perkin--Elmer and the 0.6-m Boller \& Chivens telescopes at OPD in 12 nights during 2004, 2008, 2010 and 2012.
Table~\ref{jospec} presents a journal of the spectroscopic observations. Two thin, back-illuminated Marconi detectors (CCD 098 and CCD 105) were used with the
Coud\'e or Cassegrain spectrographs. Bias and flatfield corrections were applied as usual. The width of the slit was adjusted to the seeing conditions.
We took exposures of calibration lamps after every third exposure of the star, in order to determine accurate wavelength calibration solutions.
The image reductions, spectra extractions and wavelength calibrations were executed with \textsc{iraf} standard routines.
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Journal of spectroscopic observations of QU~Car.\label{jospec}}
\begin{tabular}{@{}ccccccccc@{}}
\hline
Date & Telescope & Spectrograph & Grating & CCD & Exp. Time & Number & Resol. & Spec. range \\
(UT) & & & (l mm$^{-1}$) & & (s) & of exps. & ({\AA}) & ({\AA}) \\
\hline
2004 Feb 17 & 1.6 m P-E & Coud\'e & 600 & 098 & 600 & 5 & 0.7 & 4040--5170 \\
2004 Mar 01 & 1.6 m P-E & Coud\'e & 600 & 098 & 1200 & 8 & 0.7 & 4040--5170 \\
2004 Mar 11 & 0.6 m B\&C & Cass. & 900 & 105 & 900 & 9 & 2 & 3800--5230 \\
2008 Apr 02 & 1.6 m P-E & Coud\'e & 600 & 105 & 900 & 1 & 0.6 & 4500--5000 \\
2010 Feb 13 & 1.6 m P-E & Coud\'e & 600 & 098 & 600 & 26 & 0.6 & 4205--5340 \\
2010 Feb 14 & 1.6 m P-E & Coud\'e & 600 & 098 & 600 & 27 & 0.6 & 4205--5340 \\
2010 Feb 16 & 1.6 m P-E & Coud\'e & 600 & 098 & 600 & 17 & 0.6 & 4205--5340 \\
2012 Mar 01 & 1.6 m P-E & Coud\'e & 600 & 098 & 1200 & 4 & 0.7 & 4000--5140 \\
2012 Mar 02 & 1.6 m P-E & Coud\'e & 600 & 098 & 1200 & 8 & 0.7 & 4000--5140 \\
2012 Mar 03 & 1.6 m P-E & Coud\'e & 600 & 098 & 1200 & 1 & 0.7 & 4000--5140 \\
2012 Mar 04 & 1.6 m P-E & Coud\'e & 600 & 098 & 1200 & 6 & 0.7 & 4000--5140 \\
2012 Mar 05 & 1.6 m P-E & Coud\'e & 600 & 098 & 1200 & 21 & 0.7 & 4000--5140 \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\section[]{Data analysis and results}
\subsection{Variability in the spectral features}
The average spectrum of QU Car constructed with data obtained in 2004, 2010 and 2012, normalized to the continuum, is presented in Fig.~\ref{averspec}. It is dominated by the emission
features of \mbox{He\,{\sc ii}} 4686 {\AA}
and the \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex. The Balmer emission lines are superposed on broad absorption features. Emission lines of \mbox{N\,{\sc v}} 4603, 4619
and 4945 {\AA}, \mbox{N\,{\sc iii}} 4379 and 4515 {\AA}, \mbox{O\,{\sc ii}} 4415 {\AA}, \mbox{He\,{\sc i}} 4922 {\AA} and [\mbox{O\,{\sc iii}}]
5007 {\AA} are also present. The features in the spectra are highly variable, both in intensity and in profile. Fig.~\ref{3spec} shows the average spectra for 2004, 2010 and 2012, and Table~\ref{lines}
lists the equivalent widths and FWHM of the emission lines in the average spectra of these years.
The broad Balmer absorption troughs are variable and were also observed by GP82, \citet{2003MNRAS.338..401D} and KAH08. A possible origin for these troughs could be the
spectral features of the white dwarf atmosphere, although this would imply that the primary contributes substantially to the optical emission, in contrast to other evidence
of a high accretion rate in QU Car. A more convincing explanation is that the Balmer troughs are formed in the optically thick accretion disc. Such absorptions are observed
in the spectra of dwarf novae during eruption and in UX UMa nova-likes (so-called thick-disc CVs) such as RZ Gru \citep{Warner}. During the rise to the maximum of a dwarf nova eruption
there is a transition from an emission-line to an absorption-line spectrum, as well described for SS Cyg \citep{1984ApJ...286..747H}. An important point to note is that in such cases
the Balmer decrement is much steeper in the emission than in the absorption lines, resulting in H$\alpha$ in emission while higher series members show progressively
stronger absorption \citep{Warner}. This seems to be the case in the spectrum of QU Car presented, for instance, in fig. 1 of KAH08.
The ratio of the equivalent widths of the \mbox{He\,{\sc ii}} 4686 {\AA} to H$\beta$ emission lines, which is typically greater than 2 in CBSS and V Sge stars, varied from
$EW_{\mbox{He\,{\sc ii}}}/EW_{H\beta}=2.4$ in our 2004 data to 4 in 2010 and back to 2 in 2012, while GP82 measured $EW_{\mbox{He\,{\sc ii}}}/EW_{H\beta}=2$ in 1979
and KAH08 obtained a value
always lower than 1 in their 2006 and 2007 data.
These ratios, however, are uncertain and should be viewed with caution, since the presence of the variable Balmer absorption troughs in all observed spectra
of QU Car affects the measurements of the EW of the H$\beta$ emission. We, like KAH08, measured the EW of the Balmer lines only in their emission cores.
The ratio of the EW of the \mbox{He\,{\sc ii}} to the Bowen \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex, on the other hand,
changed from 0.5 to 0.9 and to 0.6 between 2004, 2010 and 2012.
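As an illustration of how such equivalent widths can be measured on a continuum-normalized spectrum (our own sketch, not the measurement code used here), the standard definition $EW = \int (1 - F/F_c)\,d\lambda$ with $F_c = 1$ can be evaluated numerically; emission lines then yield negative EW, matching the sign convention of Table~\ref{lines}, and for the Balmer lines the integration limits must bracket only the emission core, as described in the text:

```python
import numpy as np

# Illustrative sketch: equivalent width of a feature in a
# continuum-normalized spectrum, EW = integral of (1 - F/Fc) dlambda
# with Fc = 1. Emission gives negative EW. The integration window
# (lam_lo, lam_hi) must be chosen to isolate the feature of interest.
def equivalent_width(wavelength, flux_norm, lam_lo, lam_hi):
    w = np.asarray(wavelength, dtype=float)
    f = np.asarray(flux_norm, dtype=float)
    sel = (w >= lam_lo) & (w <= lam_hi)
    g = 1.0 - f[sel]                       # integrand (1 - F/Fc)
    # trapezoidal integration over the selected window
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(w[sel])))
```

A line-ratio such as $EW_{\mbox{He\,{\sc ii}}}/EW_{H\beta}$ is then simply the quotient of two such integrals over the respective line windows.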
In the 2012 average spectrum the \mbox{N\,{\sc v}} 4603 and 4619 {\AA} emission lines are present, while \mbox{N\,{\sc v}} 4945 {\AA} is visible in the average spectra of 2010 and 2012. These
high-ionization lines, together with the strong \mbox{He\,{\sc ii}} 4686 {\AA} line, are defining features of the V Sge stars. However, unlike
in the V Sge stars, the \mbox{He\,{\sc i}} 4922 {\AA} emission line is present in QU Car, as can be seen in its 2004 and 2012 average spectra. \citet{1998PASP..110..276S} analysed the presence
of \mbox{He\,{\sc ii}} and the absence of \mbox{He\,{\sc i}} in the V Sge stars and suggested that these lines are formed by photoionization in a matter limited region, in contrast
with the radiation limited case often found in Cataclysmic Variables. In this context, it is interesting to note the possible anti-correlation between the intensities of the \mbox{He\,{\sc ii}} 4686 {\AA}
and \mbox{He\,{\sc i}} 4922 {\AA} lines in our QU Car spectra: while in 2004 \mbox{He\,{\sc ii}} is less intense than the Bowen complex, the \mbox{He\,{\sc i}} line is quite prominent
($EW_{\mbox{He\,{\sc i}}} = -0.5$). This situation is inverted in 2010, when \mbox{He\,{\sc ii}} is more intense than the Bowen complex and the \mbox{He\,{\sc i}} line is marginally detected
($EW_{\mbox{He\,{\sc i}}} = -0.1$). In 2012 an intermediate situation occurs.
In order to examine the variability of the emission line profiles in more detail, we performed the temporal variance spectrum (TVS) analysis on our spectroscopic data. In this procedure, the temporal
variance is calculated for each wavelength pixel, from the residuals of each continuum normalized spectrum to the average spectrum. In our implementation, the temporal variance spectrum is
the square root of the variance as a function of wavelength.
A characteristic indicator of the variability of each spectral feature is the ratio between its variance and its intensity, $\sigma/I$.
The TVS can be a useful method to distinguish between different kinds of features present in a line spectrum. Telluric lines, for instance, can present very high values for $\sigma/I$, while
interstellar lines should not appear in the TVS. Also, if a line has no intrinsic profile variation but has radial velocity displacement only, its TVS should present an unambiguous double peak
profile with $\sigma/I \sim K/FWHM$.
For further details on the TVS method see \citet{1996ApJS..103..475F}.
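The TVS recipe described above can be sketched numerically. The function below is an illustrative implementation (our own sketch, not the authors' code), assuming the input spectra are already continuum normalized and resampled onto a common wavelength grid:

```python
import numpy as np

# Illustrative TVS implementation: for each wavelength pixel, take the
# square root of the temporal variance of the residuals of each
# continuum-normalized spectrum relative to the average spectrum.
# Input shape: (n_spectra, n_pixels).
def temporal_variance_spectrum(spectra):
    spectra = np.asarray(spectra, dtype=float)
    mean_spec = spectra.mean(axis=0)              # average spectrum
    residuals = spectra - mean_spec               # residuals to the average
    tvs = np.sqrt(residuals.var(axis=0, ddof=1))  # sqrt of temporal variance
    return mean_spec, tvs                         # sigma/I is tvs / mean_spec
```

The variability indicator $\sigma/I$ quoted in the text is then the ratio of the TVS to the mean intensity at each pixel.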
We constructed the TVS for the strongest spectral lines, such
as the Bowen complex, \mbox{He\,{\sc ii}} 4686 {\AA} and H$\beta$, using our 110 spectra obtained in 2010 and 2012 (the lower number of spectra obtained in 2004 precludes this analysis
for that year).
Fig.~\ref{tvs} shows the observed average intensity spectrum of QU Car and the calculated TVS. The most striking feature is the difference between the TVS profiles of the Bowen complex
and of the \mbox{He\,{\sc ii}} line. While the TVS of \mbox{He\,{\sc ii}} presents a rich and intense profile, the Bowen complex shows only marginally significant features above the 1 per cent
statistical level, representing a lower variability in this complex. In Fig.~\ref{tvs} we also show the rest positions of the strongest lines (\mbox{C\,{\sc iii}}, \mbox{N\,{\sc iii}} and
\mbox{O\,{\sc ii}}) of the Bowen blend \citep{1975ApJ...198..641M}, which coincide with some features in the respective TVS.
The \mbox{He\,{\sc ii}} emission line in the intensity spectrum has an asymmetric compound profile with a main peak at 4685 {\AA} ($\Delta v = -30$ km s$^{-1}$) and some smaller side emissions.
The TVS of \mbox{He\,{\sc ii}} is much more asymmetric, with a central dip centred at a velocity of $-120$ km s$^{-1}$. It shows strong variability at $\Delta v = -420$ km s$^{-1}$,
with $\sigma/I=6$ per cent, which seems to be associated with a blue emission component observed in the intensity spectrum. The red wing of \mbox{He\,{\sc ii}} also presents significant variability in
the TVS, with $\sigma/I=5$ per cent, which may be associated with a red emission component at $\Delta v = +240$ km s$^{-1}$. These possible associations are clearer in the TVS
constructed with separate 2010 and 2012 data, not shown in this paper. The core of the line, on the other hand, shows lower variability, with $\sigma/I=4$ per cent.
Another possible interpretation of the behaviour in the blue wing of \mbox{He\,{\sc ii}} is a variable P Cyg profile causing maximum variance in the blue.
The H$\beta$ emission line presents a situation similar to that of \mbox{He\,{\sc ii}}. In the blue wing of this emission line the TVS shows a peak displaced to $\Delta v = -270$ km s$^{-1}$ with
$\sigma/I=5$ per cent, and at this same velocity the intensity spectrum presents a clear peak. In the red wing, the TVS has a peak at $\Delta v = +220$ km s$^{-1}$ with $\sigma/I=6$ per cent, again
at the same velocity as an emission component in the intensity spectrum. However, unlike \mbox{He\,{\sc ii}}, the central core of the H$\beta$ emission exhibits high variability with
$\sigma/I=6$ per cent. An interesting fact is that the peak of the line is displaced to $\Delta v = -40$ km s$^{-1}$ in the intensity spectrum while the peak in the TVS is displaced to
$\Delta v = +60$ km s$^{-1}$. We have no interpretation for these velocities.
No statistically significant variability is evident in the blue wing of the broad absorption feature of H$\beta$, while in the red wing the variability is only marginally significant at the 1 per cent level.
The [\mbox{O\,{\sc iii}}] 5007 {\AA} forbidden line, observed before in QU Car by KAH08 and KHW12, is not accompanied by
the weaker component of the doublet at 4959~{\AA}.
In spectra obtained in 2006 and 2007, KAH08 observed this feature split into two components with velocities of $-500$ and $+370$ km s$^{-1}$, suggesting their formation on the front and
back sides of an expanding shell. In 2010 and 2011, KHW12 observed only one weak velocity component of [\mbox{O\,{\sc iii}}] 5007 {\AA}. Our data clearly show both components in
the average spectra taken in 2004, 2010 and 2012. Table~\ref{OIIIvel} presents the velocities of these components measured in the average
spectra of each year. The TVS in the spectral region of the [\mbox{O\,{\sc iii}}] 5007 {\AA} line does not exhibit statistically significant variations.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{averspec.eps}}
\vspace*{10pt}
\caption{Average normalized spectrum of QU Car constructed with 2004, 2010 and 2012 OPD data. The positions of identified features are indicated. \label{averspec}}
\end{figure}
\begin{figure}
\vspace*{10pt}
\centerline{\includegraphics[width=84mm]{3spec.eps}}
\caption{ Average normalized spectra of QU Car in 2004, 2010 and 2012. \label{3spec}}
\end{figure}
\begin{figure}
\vspace*{10pt}
\centerline{\includegraphics[width=1.1\columnwidth]{TVSstack2.eps}}
\caption{Average intensity spectrum and TVS for the \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex and \mbox{He\,{\sc ii}} 4686 {\AA} (upper panel) and H$\beta$ (bottom panel).
The TVS ordinates are represented in the right axis and give the amplitudes as percentage of the
normalized continuum. The TVS statistical significance thresholds of p = 1, 5 and 30 per cent are represented by dot-dashed, dashed and dotted horizontal lines, respectively. The vertical dashed lines
mark the position of the rest velocity for the relevant line presented in each panel. Rest positions of the lines of \mbox{C\,{\sc iii}}, \mbox{N\,{\sc iii}} and
\mbox{O\,{\sc ii}} are indicated by the bars. \label{tvs}}
\end{figure}
\begin{table*}
\centering
\caption{\label{lines}Equivalent widths (EW) and FWHM of spectral lines of QU Car in the average spectra of 2004, 2010 and 2012.
The EWs of the Balmer lines are measured only in the emission cores. For [\mbox{O\,{\sc iii}}] 5007 {\AA} we present the measurements of the
blue and red components as well as the full emission line (see text for details).}
\begin{tabular}{@{}lcccccc@{}}
\hline
Species & EW in & FWHM in & EW in & FWHM in & EW in & FWHM in \\
& 2004 ({\AA}) & 2004 (km s$^{-1}$) & 2010 ({\AA}) & 2010 (km s$^{-1}$) & 2012 ({\AA})& 2012 (km s$^{-1}$) \\
\hline
H$\delta$ & $-0.7$ & 700 & ... & ... & $-0.5$ & 590 \\
H$\gamma$ & $-0.2$ & 430 & $-0.3$ & 570 & $-0.4$ & 400 \\
\mbox{N\,{\sc iii}} 4379 {\AA} & $-0.7$ & 1330 & $-0.3$ & 1030 & $-0.8$ & 2350 \\
\mbox{O\,{\sc ii}} 4415 {\AA} & $-0.7$ & 1360 & $-0.4$ & 980 & $-0.5$ & 1160 \\
\mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}}/\mbox{O\,{\sc ii}} 4642 {\AA}& $-4.1$ & 1420 & $-2.8$ & 1350 & $-3.0$ & 1400 \\
\mbox{He\,{\sc ii}} 4686 {\AA} & $-1.9$ & 770 & $-2.4$ & 740 & $-1.8$ & 720 \\
H$\beta$ & $-0.8$ & 600 & $-0.6$ & 480 & $-0.9$ & 460 \\
\mbox{He\,{\sc i}} 4922 {\AA} & $-0.5$ & 820 & $-0.1$ & 380 & $-0.3$ & 730 \\
$[\mbox{O\,{\sc iii}}]$ 5007 {\AA} full & $-0.9$ & 2120 & $-0.5$ & 1640 & $-0.9$ & 1960 \\
$[\mbox{O\,{\sc iii}}]$ 5007 {\AA} blue & $-0.2$ & 780 & $-0.1$ & 1020 & $-0.2$ & 930 \\
$[\mbox{O\,{\sc iii}}]$ 5007 {\AA} red & $-0.5$ & 1050 & $-0.2$ & 610 & $-0.3$ & 760 \\
\hline
\end{tabular}
\end{table*}
\begin{table}
\centering
\caption{\label{OIIIvel} Velocities of the blue and red components of the $[\mbox{O\,{\sc iii}}]$ 5007 {\AA} emission line in the average spectra of 2004, 2010 and 2012.
}
\begin{tabular}{@{}ccc@{}}
\hline
Year & v$_-$ & v$_+$ \\
& (km s$^{-1}$) & (km s$^{-1}$) \\
\hline
2004 & $-370$ & $+670$ \\
2010 & $-490$ & $+340$ \\
2012 & $-560$ & $+490$ \\
\hline
\end{tabular}
\end{table}
\subsection{The spectroscopic orbital period}
We measured radial velocities of the \mbox{He\,{\sc ii}} 4686 {\AA} line by fitting a Gaussian function to the peaks of the line profiles, and used the values obtained to search for periodicities.
Fig.~\ref{lombspec} shows the Lomb--Scargle \citep{1982ApJ...263..835S} periodogram of our 2010 and 2012 radial velocity data, which correspond to the most homogeneous set of spectra, as well as the
periodograms we constructed from the GP82 and KAH08 \mbox{He\,{\sc ii}} 4686 {\AA} radial velocities.
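As an illustration of the periodogram computation (not the code actually used for Fig.~\ref{lombspec}), a Lomb--Scargle power spectrum of unevenly sampled radial velocities can be sketched with SciPy; the frequency range and grid density below are arbitrary choices:

```python
import numpy as np
from scipy.signal import lombscargle

# Illustrative sketch: Lomb-Scargle periodogram of radial velocities.
# Frequencies are in cycles per day; scipy.signal.lombscargle expects
# angular frequencies, hence the factor 2*pi.
def rv_periodogram(t_hjd, rv, f_min=0.5, f_max=10.0, n_freq=5000):
    t = np.asarray(t_hjd, dtype=float)
    y = np.asarray(rv, dtype=float) - np.mean(rv)  # remove mean (systemic) velocity
    freq_cpd = np.linspace(f_min, f_max, n_freq)
    power = lombscargle(t, y, 2.0 * np.pi * freq_cpd)
    return freq_cpd, power
```

For a 10.94 h modulation one expects the strongest peak near 2.193 d$^{-1}$, as marked in Fig.~\ref{lombspec}.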
\begin{figure}
\vspace*{10pt}
\centerline{\includegraphics[width=84mm]{triplelomb.eps}}
\caption{Lomb--Scargle periodogram of the \mbox{He\,{\sc ii}} 4686 {\AA} radial velocities from \citet{1982ApJ...261..617G} (top), \citet{2008AJ....135.1649K} (middle) and from our 2010 and 2012
spectra (bottom). The vertical line indicates the period of 10.94 h (2.193 d$^{-1}$), which is the highest peak in our power spectrum.
\label{lombspec}}
\end{figure}
The individual periodograms of the three sets of data taken in 2004, 2010 and 2012 at OPD are very similar to the periodogram of the 2010/2012 combined data set shown in Fig.~\ref{lombspec}.
This result shows that the 10.9 h orbital period determined by GP82 from 1979 and 1980 data, which was later absent in radial velocity data of KAH08 taken in 2006 and 2007,
reappears as P = 10.94 h in our spectra obtained in 2010 and 2012, being also present in our 2004 data. The spectroscopic ephemerides associated with the \mbox{He\,{\sc ii}} 4686 {\AA} emission line
from 2010 and 2012 data are shown in Table~\ref{ephem}. The zero phase is defined as the crossing of the radial velocity from positive to negative values, relative
to the systemic velocity $\gamma$ of each data set, P is the orbital period from that set and $K_1$ is the semi-amplitude of the sinusoidal fit. In Fig.~\ref{vrfase} we present the radial
velocity curves of 2010 and 2012 data, folded with the period and epoch of each respective ephemeris.
Fig.~\ref{trailedspec2010} shows the binned average spectra of the \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex,
\mbox{He\,{\sc ii}} 4686 {\AA} and H$\beta$ lines, obtained in 2010 (our largest data set), phased with the orbital period and ${T_0}$ from the 2010 ephemeris,
each spectrum being the average of five to ten spectra depending on the bin, while Fig.~\ref{greenst} presents the trailed spectrograms of the same data set.
The \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex, \mbox{He\,{\sc ii}} 4686 {\AA} and H$\beta$ lines are clearly in phase.
When comparing the structures in the line profiles of \mbox{He\,{\sc ii}} 4686 {\AA} and H$\beta$ at the same orbital phase bin, one can see that these structures are quite similar
(see, for instance, the profiles at phase 0.65), perhaps indicating that these features are produced in the same location.
The profiles of these lines are strongly variable, especially H$\beta$, which is additionally affected by the variable absorption trough. When exploring the individual spectra
we could see that, on some occasions, H$\beta$ almost completely
disappears into the noise, usually during phases 0.8 to 0.3. However, due to the low S/N ratio of the individual spectra we could not determine whether this phenomenon is associated with the
orbital phase or with a non-orbital source of variability, although the disappearance of H$\beta$ for about half an orbital cycle was also reported by GP82.
\begin{table*}
\caption{\label{ephem} Radial velocity parameters of the \mbox{He\,{\sc ii}} 4686 {\AA} emission line from 2010 and 2012 data. }
\begin{center}
\begin{tabular}{l c c c c}
\hline
Year & ${T_0}$ & P & K$_1$ & $\gamma$ \\
& (HJD) & (d) & (km s$^{-1}$) & (km s$^{-1}$) \\
\hline
2010 & 2~455~241.785 $\pm$ 0.027 & 0.450 $\pm$ 0.009 & 134 $\pm$ 28 & $-21$ \\
2012 & 2~455~992.920 $\pm$ 0.036 & 0.456 $\pm$ 0.010 & 155 $\pm$ 30 & $-63$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\vspace*{10pt}
\resizebox{\hsize}{!}{\includegraphics[clip]{vrfasestackcorr.eps}}
\caption{\mbox{He\,{\sc ii}} 4686 {\AA} radial velocity curve from 2010 (upper panel) and 2012 (lower panel) data, folded with the period and epoch given in the
respective ephemeris. The solid curve is the sinusoidal fit to the data
and the horizontal line represents the systemic velocity $\gamma$.
\label{vrfase}}
\end{figure}
\begin{figure}
\vspace*{10pt}
\resizebox{\hsize}{!}{\includegraphics[clip]{trailedSpec.eps}}
\caption{Average continuum subtracted 2010 spectra of the \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex and \mbox{He\,{\sc ii}} 4686 {\AA} (left panel) and H$\beta$
(right panel) in nine phase bins. The effective phase of each bin is indicated in the ordinate axes. The vertical dashed lines mark the position of the rest velocity of
\mbox{He\,{\sc ii}} 4686 {\AA} and H$\beta$.
\label{trailedspec2010}}
\end{figure}
\begin{figure}
\vspace*{10pt}
\resizebox{\hsize}{!}{\includegraphics[clip]{figura.eps}}
\caption{Trailed spectrograms of the \mbox{C\,{\sc iii}}/\mbox{N\,{\sc iii}} complex and \mbox{He\,{\sc ii}} 4686 {\AA} (upper panel) and H$\beta$
(lower panel) line profiles of 2010 data, binned into 0.02 phase intervals. The orbital cycle is repeated for clarity.
\label{greenst}}
\end{figure}
\subsection{The photometric orbital period}
After seeing the recovery of the 10.9 h orbital period in our spectroscopic data, we searched for periodic modulations in the photometric data.
In order to better compare our photometric data to the high/low brightness levels published by KAH08 and KHW12, we converted our differential
magnitudes to V magnitudes. For that we used as reference one of our differential comparison stars, C1, which is registered in the Tycho-2 catalogue \citep{2000A&A...355L..27H} as TYC 9212-2118-1,
with $m_v = 11.089 (\pm 0.071)$.
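The conversion is a simple zero-point shift. A minimal sketch (our own illustration, using the quoted Tycho-2 magnitude of C1 and propagating uncertainties in quadrature) is:

```python
import math

# Illustrative sketch: convert a differential magnitude (QU Car minus
# comparison star C1) to an apparent V magnitude, using the Tycho-2
# value for C1 (TYC 9212-2118-1), m_v = 11.089 +/- 0.071.
def to_apparent_V(delta_mag, sigma_delta=0.0, m_c1=11.089, sigma_c1=0.071):
    v = delta_mag + m_c1
    sigma_v = math.sqrt(sigma_delta ** 2 + sigma_c1 ** 2)  # add in quadrature
    return v, sigma_v
```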
The light curves of QU Car show many kinds of variability, with time-scales ranging from minutes to months. The most prominent one is flickering, with time-scales of
tens of minutes and amplitudes of 0.2 mag, as can be seen, for example, in the light curve of 2011 June 21 (upper light curve in Fig.~\ref{fotaltobaixo}). Superposed on this flickering there are
slow (hours) and smooth non-periodic fluctuations of tenths of a magnitude. QU Car also presents different brightness levels with an amplitude of about 0.5 mag and time-scales of
tens of days in our set of data (Fig.~\ref{fotlevels}), which were also reported by KAH08 and KHW12.
It is important to note that QU Car was never caught below visual magnitude 11.7 in our set of data, unlike the observations described in KHW12, where it
occasionally dropped below 12 mag.
Our data suggest a correlation between the
brightness level and the flickering activity of QU Car, which was also reported
by \citet{1986PASP...98.1336K}. Fig.~\ref{fotaltobaixo} shows a comparison of the light curves obtained on 2011 June 21, when the star was in the bright state, and on 2012 March 25,
when it was in the lower state. The lower brightness state,
which is about 0.3 mag fainter in this comparison, presents flickering with an amplitude ten times smaller than that observed in the higher state. On other occasions when the star was
in this lower brightness state (2011 May 11 and 2012 March 26) the same flickering attenuation was observed.
\begin{figure}
\vspace*{10pt}
\centerline{\includegraphics[width=84mm]{fotaltobaixocalib.eps}}
\caption{ Differential light curve of QU Car obtained on 2011 June 21, which is dominated by flickering (upper curve), the light curve obtained on 2012 March 25, in which flickering is
attenuated (middle curve) and the superposed light curves of the comparison star C1
for both dates (bottom curve). The vertical displacement of 0.3 mag between the light curves of QU Car is real and shows the variation in the brightness level. The
magnitudes of C1 are shifted by 0.37 mag and the runs are arbitrarily offset in time for
display purposes. The gaps in the light curves were caused by clouds.
\label{fotaltobaixo}}
\end{figure}
\begin{figure}
\vspace*{10pt}
\resizebox{\hsize}{!}{\includegraphics[clip]{fotlevelscalib.eps}}
\caption{ Photometric data of QU Car taken at OPD in 2009, 2011 and 2012 (upper points) and of the comparison star (bottom points). The stability of the comparison star brightness shows that
the variation of 0.5 mag seen in the QU Car data, on time-scales of tens of days, is real.
The arrows indicate the nights of 2011 May 11 and 2012 March 25 and 26 (see text for details).
\label{fotlevels}}
\end{figure}
We applied the Lomb--Scargle method to search for periodicities in the photometric data. The periodograms constructed with the higher brightness level data do not show any relevant peak.
On the other hand, despite the short length of the light curves, the periodogram of the lower-state photometric data presents a peak at a period of $\sim11.1$ h
(Fig.~\ref{lombfot}) which is consistent with the 10.9 h period obtained from the \mbox{He\,{\sc ii}} 4686 {\AA} line radial velocities. In Fig.~\ref{lcurve} we show the light curves of
2012 March 25 and 26 phased on the 10.94 h period. The amplitude of the modulation is about 0.15 mag. This appears to be the first photometric detection of an orbital modulation in
QU Car and, if so, it occurred when the amplitude of the flickering had reduced from its typical value of 0.2 mag in the higher brightness state to $\sim0.02$ mag in the lower state, unveiling
the 0.15 mag orbital modulation. Photometric monitoring of QU Car was carried out by \citet{1969ApJ...157..709S}, GP82 and \citet{1986PASP...98.1336K}, but no orbital modulation was detected.
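The phase folding used for light curves such as those of Fig.~\ref{lcurve} can be sketched as follows; this is an illustrative snippet (not the authors' code), with phase zero tied to an assumed epoch $T_0$ such as the spectroscopic one of Table~\ref{ephem}, and $P = 10.94\,\mathrm{h} = 10.94/24$ d:

```python
import numpy as np

# Illustrative sketch: fold a light curve on a trial period.
# t_hjd: observation times (days); period_d: trial period (days);
# t0_hjd: epoch of phase zero (an assumed input).
def fold_light_curve(t_hjd, mag, period_d, t0_hjd):
    t = np.asarray(t_hjd, dtype=float)
    phase = ((t - t0_hjd) / period_d) % 1.0   # orbital phase in [0, 1)
    order = np.argsort(phase)                 # sort points by phase
    return phase[order], np.asarray(mag, dtype=float)[order]
```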
\begin{figure}
\vspace*{10pt}
\resizebox{\hsize}{!}{\includegraphics[clip]{lombfot.eps}}
\caption{Lomb--Scargle periodogram of the photometric data taken in 2012 March 25 and 26. The vertical line indicates the orbital period of 10.94 h (2.193 d$^{-1}$) obtained from the power
spectrum of the \mbox{He\,{\sc ii}} 4686 {\AA} spectral line.
\label{lombfot}}
\end{figure}
\begin{figure}
\vspace*{10pt}
\resizebox{\hsize}{!}{\includegraphics[clip]{lcurvecalib.eps}}
\caption{Light curve for the nights 2012 March 25 (filled circles) and 2012 March 26 (open circles), folded with the orbital period of 10.94 h. The lengths of the light curves are 9.3 h and
6.0 h, respectively.
\label{lcurve}}
\end{figure}
\section{Conclusions}
An important result of this work is the recovery of the 10.9 h orbital period in the spectroscopic data of QU Car, but even more important may be the discovery of the long-sought
orbital modulation in the photometric observations. This orbital modulation has an amplitude of 0.15 mag and may ordinarily be buried in the usual 0.2 mag flickering. The attenuation
of the flickering when the system is in the lower brightness state was already noted by \citet{1986PASP...98.1336K} and is also observed in our data; it seems that this attenuation
is needed to unveil the photometric orbital modulation. It is worth noting that, in our photometry, QU Car was not seen at its lowest registered brightness level.
KAH08 suspected that the non-detection of the orbital modulation in their spectroscopic data could be related to the presence of a wind
distorting the spectral line profiles. In addition to the line profile variations -- LPV -- the wind also manifests itself in the form of observed nebular lines and in the P Cyg profiles,
and is a key ingredient of the Accretion Wind Evolution model for CBSS and V Sge stars. As a tool to investigate the LPV we applied the TVS method to our spectra, which showed
that the \mbox{He\,{\sc ii}} 4686 {\AA} line, although used to map the radial velocities of the system, is heavily affected by the profile variations in a complex way.
As suggested by \citet{2003MNRAS.338..401D} and KAH08, QU Car has many features that link it to the CBSS and V Sge classes, such as the strong wind, nebular lines,
high and low states, high accretion rate, luminosity and \mbox{He\,{\sc ii}}/H$\beta$ line ratio. However, one of the defining characteristics of the CBSS/V Sge stars is the presence of the
\mbox{O\,{\sc vi}} 3811--34 {\AA} emission lines, which are absent in the spectrum of QU Car \citep{2003MNRAS.338..401D}. The lack of detection of \mbox{O\,{\sc vi}} is consistent
with a lower degree of ionization in QU Car
when compared to other CBSS/V Sge systems, but could also be due to variability in this line, like that observed in the spectra of WX Cen \citep{2004MNRAS.351..685O}.
The TVS method has proven to be a useful tool to enhance the signal of
strongly variable \mbox{O\,{\sc vi}} lines in noise-dominated spectra of other V Sge candidates, such as WR 7a \citep{2003MNRAS.346..963O} and WX Cen \citep{2004MNRAS.351..685O}.
Unfortunately our QU Car spectra do not cover the \mbox{O\,{\sc vi}} line region (except on 2004 March 11, when noise dominated that region), so new bluer spectra
combined with the TVS analysis are needed to search for this important
feature.
In order to understand a system as variable as QU Car, extensive observations with several techniques are mandatory. Long-term photometric observations, especially in the low state,
are needed to consolidate the photometric orbital modulation found in this work, and simultaneous photometric and spectroscopic data would provide the means to investigate the
correlation between the brightness states and the behaviour of the spectral lines.
\section*{Acknowledgements}
A. S. Oliveira and H. J. F. Lima acknowledge FAPESP -- Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo -- for financial support under grants 03/12618-7 and 10/12805-5.
\section{Introduction}
A most useful property of Lebesgue measure $\lambda$ is its (inner) \emph{regularity}: For any measurable set $E$, we can find an $F_\sigma$ set $F \subseteq E$ with $\lambda(E) = \lambda(F)$.
In other words, any measurable set can be represented as an $F_\sigma$ set plus a nullset.
This means that, for measure theoretic considerations, $E$ can be replaced by an $F_\sigma$, simplifying the complicated topological structure of arbitrary measurable sets.
It is a basic result in geometric measure theory that regularity holds for $s$-dimensional \emph{Hausdorff measure} $\mathcal{H}^s$ ($s>0$), too, with one important restriction.
\begin{thm}[Besicovitch and Moran \cite{BesicMoran}]
If $E$ is $\mathcal{H}^{s}$-measurable and of finite $\mathcal{H}^{s}$-measure, then there exists an $F_\sigma$ set $F \subseteq E$ so that $\mathcal{H}^{s}(F) = \mathcal{H}^{s}(E)$.
\end{thm}
In particular, one can approximate measurable sets of finite measure from inside by closed sets.
\begin{cor}[Subsets of finite measure] \label{cor:inner_reg}
\begin{equation} \label{equ:inner_reg}
\mathcal{H}^{s}(E) = \sup \{ \mathcal{H}^{s}(C) \colon C \subseteq E \text{ closed}, \mathcal{H}^{s}(C) < \infty \}.
\end{equation}
\end{cor}
The requirement that $\mathcal{H}^{s}(E) < \infty$ is essential, since Besicovitch \cite{besicovitch:approximation_1954a} later showed that
there is a $G_\delta$ set $G$ of Hausdorff dimension $1$ so that for every $F_\sigma$ subset $F \subset G$, the Hausdorff dimension of $G\setminus F$ is $1$, too.
Nevertheless, one can ask whether \eqref{equ:inner_reg} remains true even when $\mathcal{H}^{s}(E)$ is infinite. This is indeed so. In fact, one can approximate $E$ in measure by closed sets of \emph{finite measure}, provided $E$ is \emph{analytic}.
\begin{thm} \label{BD}
The \emph{Subsets of finite measure} property \eqref{equ:inner_reg} holds for any analytic $(\pmb{\S}^1_1)$ subset of the real line.
\end{thm}
This result is one of the cornerstones of geometric measure theory, since it allows one to pass from a set of infinite measure,
which may be cumbersome to deal with since $\mathcal{H}^{s}$ is not $\sigma$-finite, to a closed set of finite measure, on which $\mathcal{H}^{s}$ is much better behaved.
The theorem was first shown for closed sets in Euclidean space by Besicovitch \cite{Besicovitch} and extended to analytic sets by Davies \cite{Davies}. We will therefore also refer to Theorem \ref{BD} as the \emph{Besicovitch-Davies Theorem}.
Besicovitch \cite{besicovitch:concentrated_1933a} had shown before that there exists a measurable set in the Euclidean plane every subset of which has $1$-dimensional measure either $0$ or $\infty$.
Hence some restrictions on the definability of $E$ are necessary for \eqref{equ:inner_reg} to hold.
Moreover, the existence of subsets of finite measure also depends on the underlying space, as well as on the nature of the dimension function.
Davies and Rogers \cite{davies-rogers:problem_1969} constructed a compact metric space $X$ and a dimension function $h$ such that $X$ has infinite $\mathcal{H}^{h}$-measure but $X$ does not contain any sets of finite positive $\mathcal{H}^{h}$-measure.
A few years before, on the other hand, Larman \cite{larman:hausdorff_1967} had shown that \eqref{equ:inner_reg} does hold for a class of compact metric spaces (those of \emph{finite dimension} in the sense of \cite{larman:theory_1967}).
Rogers \cite{Rogers} proves it for complete, separable \emph{ultrametric spaces}. Hence \eqref{equ:inner_reg} holds for Cantor space $2^\omega$ and Baire space $\omega^\omega$.
Most recently, using a quite different approach, Howroyd \cite{howroyd} was able to prove the validity of \eqref{equ:inner_reg} for any analytic subset of a complete separable metric space.
It also holds for generalized Hausdorff measures $\mathcal{H}^{h}$, provided the dimension function $h$ does not decrease to $0$ too rapidly.
In the following, we will study the complexity of finding subsets of positive measure in Cantor space $2^\omega$, endowed with the standard metric
\[
d(x,y) = \begin{cases}
2^{-\min\{n\colon x(n) \neq y(n)\}} & x \neq y \\
0 & x = y. \\
\end{cases}
\]
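Evaluated on binary prefixes, this metric can be illustrated directly (a toy sketch of our own; actual points of $2^\omega$ are infinite sequences, of which we can only inspect finite initial segments):

```python
# Illustrative sketch of the standard metric on Cantor space 2^omega,
# applied to two binary sequences given as finite prefixes:
# d(x, y) = 2^(-min{n : x(n) != y(n)}), and 0 if no disagreement is found.
def cantor_distance(x, y):
    for n, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-n)
    return 0.0
```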
The hierarchies of effective descriptive set theory allow for a further ramification of regularity properties. Any (boldface) Borel set is effectively (lightface) Borel relative to a parameter.
Hence we can, for instance, given a (lightface) $\S^0_\alpha$ set, measure how hard it is to find a $\S^0_2(y)$ subset of the same measure, by proving lower bounds on the parameter $y \in 2^\omega$.
Dobrinen and Simpson~\cite{Dobrinen-Simpson} investigated this question for $\S^0_3$ sets in Lebesgue measure and discovered an interesting connection with measure-theoretic \emph{domination properties}.
Kjos-Hanssen \cite{Kjos-Hanssen:2007a} in turn linked measure-theoretic domination properties to LR-reducibility, a reducibility concept from algorithmic randomness.
Recently, Simpson \cite{simpson} gave a complete characterization of the regularity problem for Borel sets with respect to Lebesgue measure.
One of his results states that the property that every $\S^0_{\alpha+2}$ ($\alpha$ a recursive ordinal) subset of $2^\omega$ has a $\S^0_2(Y)$ subset of the same Lebesgue measure holds if and only if $0^{(\alpha)} \leq_{LR} Y$.
His paper \cite{simpson} also contains a survey of previous results along with an extensive bibliography.
In this paper, we study the complexity of the corresponding inner regularity for Hausdorff measure on $2^\omega$, extending and refining previous work by the authors \cite{K.Reimann:10}.
We will see that, in contrast to the case of Lebesgue measure, finding subsets of positive Hausdorff measure can generally not be done with the help of a \emph{hyperarithmetic} oracle.
The core observation is that determining whe\-ther a set of reals has \emph{positive Hausdorff measure} is more similar to determining whe\-ther it is \emph{non-empty} than to determining whe\-ther it has \emph{positive Lebesgue measure}.
Determining the exact strength of the Besicovitch-Davies Theorem is not only of intrinsic interest.
A family of important problems in theoretical computer science ask some version of the question to what extent randomness (which is a useful computational tool) can be extracted from a weakly random source (which is often all that is available).
Such questions can also be expressed in computability theory. The advantage, and simultaneously the disadvantage, of doing so is that one abstracts away from considering any particular model of efficient computation.
One way to conceive of weak randomness is in terms of \emph{effective Hausdorff dimension}.
Miller \cite{M} and Greenberg and Miller \cite{GM} obtained a negative result for randomness extraction: there is a real of effective Hausdorff dimension 1 that does not Turing compute any Martin-L\"of random real.
Despite this negative result, effective Hausdorff dimension, which is a ``lightface'' form of Hausdorff dimension, has independent interest,
as it seems to offer a way to redevelop much of geometric measure theory (for example Frostman's Lemma \cite{Reimann}) in a more effective way.
Another conception of weak randomness comes from considering sets that differ from Martin-L\"of random sets only on a sparse set of bits \cite{Extracting}, or sets that are subsets of Martin-L\"of sets \cites{Kurtz, MRL, Law}.
Actually, these conceptions are related, as we will try to illustrate with the help of the set $\operatorname{BN1R}$ of all reals that bound no 1-random real in the Turing degrees,
i.e., those reals to which no Martin-L\"of random real is Turing reducible.
\begin{thm}\label{min}
The set $\operatorname{BN1R}$ has Hausdorff dimension 1.
\end{thm}
\begin{proof}
This is merely a relativization of a theorem of Greenberg and Miller \cite{GM}.
\end{proof}
Theorem \ref{min} says that high effective Hausdorff dimension is not sufficient to be able to extract randomness. It can also be used to deduce that infinite subsets of random sets are not sufficiently close to being random, either.
The set $\operatorname{BN1R}$ is Borel, so by the Besicovitch-Davies Theorem, for any $s < 1$ it has a closed subset $C$ that has non-zero $\mathcal{H}^s$-measure.
Each closed set $C$ in Cantor space is $\P^{0}_{1}(x)$ for some oracle $x$. By a reasoning similar to \cite{DK}*{Theorem 4.3}, each $x$-random closed set contains a member of $C$.
It follows by reasoning as in \cite{MRL} that each $x$-random set has an infinite subset that does not Turing compute any $1$-random (Martin-L\"of random) set.
Thus, if $x$ could be chosen recursive, we would have a positive answer to the following question.
\begin{que}\label{q1}
Does each $1$-random subset of $\omega$ have an infinite subset that computes no $1$-random sets?
\end{que}
A partial answer to this question is known, using other methods:
\begin{thm}[Kjos-Hanssen \cite{Law}]
Each $2$-random set has an infinite subset that computes no $1$-random sets.
\end{thm}
But it is easy to see that the set $x$ just referred to cannot be chosen recursive. To wit,
any $\P^0_1$ class of non-zero $\mathcal{H}^s$-measure contains a $\P^0_1$ class consisting entirely of reals of effective Hausdorff dimension $\geq s$.
By the computably enumerable degree basis theorem this subclass has a path of c.e.\ degree. But every such real, being of diagonally non-computable Turing degree, is Turing complete by Arslanov's completeness criterion,
and hence computes a $1$-random\footnote{For these basic facts, one may consult a textbook such as Downey and Hirschfeldt \cite{DH}.}.
In the present article we show that $x$ can be taken recursive in Kleene's $\mathcal{O}$, but in general, for arbitrary $\S^{1}_{1}$ classes (or even just arbitrary $\P^{0}_{2}$ classes), $x$ cannot be taken hyperarithmetical.
We expect the reader to be familiar with basic descriptive set theory and the effective part on hyperarithmetic sets and Kleene's $\mathcal{O}$.
Standard references are \cite{Rogers} and \cite{Sacks}. We also assume basic knowledge of Hausdorff measures and dimension, as can be found in \cite{CARogers} or \cite{mattila}.
\subsection*{\em The generalized join-operator}
We will frequently need to generate sets of non-zero $\mathcal{H}^s$-measure. The following ``coded-product'' construction presents a convenient method to do this.
Given two reals $x,y \in 2^\omega$ and an infinite, co-infinite $A \subseteq \omega$, we define their \emph{$A$-join} $x \join[A] y$ as follows.
Assume $A = \{ a_1 < a_2 < \dots \}$ and $\omega\setminus A = \{b_1 < b_2 < \dots \}$. Let $x \join[A] y$ be the unique real $z$ such that $z(a_n) = x(n)$ and $z(b_n) = y(n)$ for all $n$. For sets
$X, Y \subseteq 2^\omega$, we define $X \join[A] Y$ as
\[
X \join[A] Y = \{ x \join[A] y \colon x \in X, y \in Y \}.
\]
For rational $s = a/b$, $0 < s < 1$, $a,b$ relatively prime, the \emph{canonical $s$-join} is given by letting $A = \{ bn + i \colon n \in \omega, i < a \}$. In this case, we write $x \join[s] y$ and $X \join[s] Y$.
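On finite prefixes the join operator is entirely concrete. The following Python sketch (our own illustration; the names \texttt{canonical\_A} and \texttt{join} are not from the text) computes the canonical $s$-join index set and interleaves two sequences accordingly.

```python
from fractions import Fraction

def canonical_A(s, length):
    """A = {b*n + i : n in omega, i < a} for s = a/b in lowest terms,
    restricted to {0, ..., length-1}."""
    a, b = s.numerator, s.denominator
    return [k for k in range(length) if k % b < a]

def join(x, y, A_indices, length):
    """First `length` bits of the A-join of x and y: positions in A carry
    the bits of x (in order), the remaining positions carry the bits of y."""
    A = set(A_indices)
    z, xi, yi = [], 0, 0
    for k in range(length):
        if k in A:
            z.append(x(xi)); xi += 1
        else:
            z.append(y(yi)); yi += 1
    return z

s = Fraction(2, 3)
A = canonical_A(s, 12)                      # [0, 1, 3, 4, 6, 7, 9, 10]
z = join(lambda n: 1, lambda n: 0, A, 12)   # x = 111..., y = 000...
```

Note that for this $A$ we have $|A \cap \{0,\dots,n-1\}| \geq sn - c$ for a fixed constant $c$, so the density hypothesis of Proposition \ref{pro:hmeas-join} is satisfied with $r = s$.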
Measure-theoretically, the join behaves like a ``coded'' product.
\begin{pro} \label{pro:hmeas-join}
Assume $A \subseteq \omega$ is such that for some $r > 0$ and $c > 0$,
\[
|A \cap \{0,\dots, n-1\}| \geq rn - c \quad \text{ for all $n$}.
\]
If $E, F \subseteq 2^\omega$, $s,t \in [0,1]$ are such that $\mathcal{H}^s E > 0$ and $\mathcal{H}^t F > 0$, then
\[
\mathcal{H}^{rs+(1-r)t} (E \join[A] F) > 0.
\]
\end{pro}
This follows by a straightforward adaptation of the corresponding result for Euclidean spaces (see \cite{mattila}, Theorem 8.10).
\section{The Besicovitch-Davies Theorem}
In 1952, Besicovitch \cite{Besicovitch} proved the following theorem (for Euclidean space in place of $2^\omega$).
\begin{thm}
If $F \subseteq 2^\omega$ is closed and $\mathcal{H}^{s} F = \infty$, then, for any $c > 0$ there exists a closed set $C \subseteq F$ such that $c < \mathcal{H}^s C < \infty$.
\end{thm}
The version for Cantor space follows from a paper by Larman \cite{Larman}.
Two technical lemmas play a crucial role in Besicovitch's proof (both hold in Cantor space, see e.g.\ \cite{CARogers}).
\begin{enumerate}[(1)]
\item The \emph{Increasing Sets Lemma} (valid in compact metric spaces): If $\{E_n\}$ is an increasing sequence of sets, then for $E = \bigcup E_n$, for any $m$,
\[
\mathcal{H}^s_{2^{-m}} E = \lim_n \mathcal{H}^s_{2^{-m}} E_n.
\]
(Note that here $\mathcal{H}^s$ is considered as an outer measure.)
\item The \emph{Decreasing Sets Lemma}: If $\{C_n\}$ is a decreasing sequence of closed sets in $2^\omega$, then for $C = \bigcap C_n$,
\[
\mathcal{H}^s_{2^{-(m+1)}} C \geq \frac{c}{2} \lim_n \mathcal{H}^s_{2^{-m}} C_n,
\]
where $c$ is some positive, finite constant. (In Cantor space, we can choose $c = 1$.)
\end{enumerate}
In the same journal in which Besicovitch's paper appeared, Davies \cite{Davies} published a proof showing that Besicovitch's result can be extended to analytic $(\mathbf{\S}^1_1)$ sets.
We reformulate Davies' argument in Cantor space in a way suitable for our analysis.
\begin{thm} \label{Davies}
Suppose $E \subseteq 2^\omega$ is $\S^1_1$. Assume further that $E$ is not $\sigma$-finite for $\mathcal{H}^s$, i.e.\ $E$ is not a countable union of measurable sets of finite $\mathcal{H}^s$-measure.
Then there exists a closed set $C \subseteq E$ of infinite $\mathcal{H}^s$-measure.
\end{thm}
\begin{proof}
Pick a recursive relation $R(\sigma, \tau)$ such that
\[
x \in E \quad \Longleftrightarrow \quad \exists g \in \omega^\omega\: \forall n \in \omega \: R(x\Rest{n}, g\Rest{n}).
\]
We define, for $\tau \in \omega^{<\omega}$,
\[
A_\tau = \{x \colon \forall n \leq |\tau| \: R(x\Rest{n}, \tau\Rest{n}) \}.
\]
Then $\{A_\tau\}_{\tau \in \omega^{<\omega}}$ forms a \emph{regular Souslin scheme} and we have
\[
E = \bigcup_{f \in \omega^\omega} \bigcap_n A_{f\Rest{n}}.
\]
Given $\alpha,\beta \in \omega^{<\omega}\cup\omega^\omega$, we write $\alpha \leq \beta$ if $\alpha(n) \leq \beta(n)$ for all $n \in \operatorname{dom}(\alpha) \cap \operatorname{dom}(\beta)$. Put
\[
E^\sigma = \bigcup_{\substack{f \in \omega^\omega \\ f \leq \sigma}} \bigcap_n A_{f\Rest{n}}.
\]
We have $E^{\Tup{n}} \nearrow E$. Choose $m_1$ so that $\mathcal{H}^s_{2^{-m_1}} E > 1$. By the Increasing Sets Lemma, we can choose $r_1$ so that
\[
\mathcal{H}^s_{2^{-m_1}} E^{\Tup{r_1}} > 1
\]
and $E^{\Tup{r_1}}$ is not $\sigma$-finite for $\mathcal{H}^s$, in particular $\mathcal{H}^s E^{\Tup{r_1}} = \infty$.
The latter is possible since if there were no such $r_1$, as $E = \bigcup E^{\Tup{n}}$, $E$ would be $\sigma$-finite for $\mathcal{H}^s$, contradicting our assumption.
Now we can continue the construction inductively. We obtain a function $r \in \omega^\omega$ and a sequence of natural numbers $m_1 \leq m_2 \leq m_3 \leq \dots$ such that
\begin{align*}
(a) & \qquad \mathcal{H}^s_{2^{-m_i}} E^{r\Rest n} > i \text{ for } 1\leq i \leq n, \\
(b) & \qquad E^{r\Rest n} \text{ is not $\sigma$-finite for $\mathcal{H}^s$}.
\end{align*}
We define
\[
C_n = \bigcup_{\substack{|\tau| = n \\ \tau \leq r\Rest{n}}} A_{\tau} \qquad \text{and} \qquad C = \bigcap_n C_n.
\]
Note that each $C_n$, and hence $C$, is closed.
By definition of $C_n$ we have $E^{r\Rest{n}} \subseteq C_n$ for all $n$. By $(a)$, $\mathcal{H}^s_{2^{-m_n}} C_n > n$. Moreover, $C_n \supseteq C_{n+1}$ for all $n$.
We can hence apply the \emph{Decreasing Sets Lemma} and obtain $\mathcal{H}^s_{2^{-(m_n+1)}} C = \infty$ for all $n$, and thus $\mathcal{H}^s C = \infty$.
It remains to show that $C \subseteq E$. Note that if $x \in C$, then for all $n$ there exists a $\tau_n \leq r\Rest{n}$ of length $n$ such that $x \in A_{\tau_n}$.
The set of all such $\tau_n$ (for any $n$) forms an infinite, finite branching tree. Hence by König's Lemma, there exists $f \leq r$ such that $x \in \bigcap_n A_{f\Rest{n}}$, that is, $x \in E$.
\end{proof}
Note that, by the final argument of the preceding proof, we can write
\[
C = \bigcup_{g \leq r} \bigcap_n A_{g\Rest{n}} = \{ x \colon \exists g \leq r \, \forall n \: R(x\Rest{n}, g\Rest{n}) \}.
\]
Note also that if $E$ is of $\sigma$-finite $\mathcal{H}^s$-measure, then the above construction may fail, at some stage, to produce an $r_n$ such that $E^{\Tup{r_1, \dots, r_n}}$ is of infinite $\mathcal{H}^s$-measure.
But in this case $\mathcal{H}^s$ behaves like a finite Borel measure on a Borel set. We can then mimic the above construction, working directly with $\mathcal{H}^s$ instead of $\mathcal{H}^s_\delta$, and obtain a closed subset of positive $\mathcal{H}^s$-measure.
Combining both cases, we obtain the following corollary.
\begin{cor} \label{BD-cor}
For each $\S^1_{1}$ class $E$ of non-zero $\mathcal{H}^s$-measure, written in canonical form
\[
E = \{x \colon \exists g \,\forall n R(x\Rest{n},g\Rest{n})\}
\]
where $R$ is a recursive predicate, there exists a function $r\in \omega^\omega$ such that for each $f$ majorizing $r$, the class
\[
C_{f}:=\{x \colon \exists g \leq f \, \forall n \, R(x\Rest{n},g\Rest{n})\}
\]
is a $\P^0_1(f)$ subclass of non-zero $\mathcal{H}^s$-measure.
\end{cor}
\section{Index set complexity}
In this section we determine the index set complexity of the following problem:
\begin{quote}
\em Given an index of an (effectively) analytic set $E \subseteq 2^\omega$, how hard is it to decide whether $E$ has non-zero (or finite) $\mathcal{H}^s$-measure?
\end{quote}
In our analysis we will always assume that $s$ is rational. This avoids technical complications arising from non-computable $s$ (which can be addressed by working relative to an oracle representing $s$).
Initially, one may think that the computational difficulty in determining whe\-ther a set of reals has \emph{positive Hausdorff measure} could be similar to the difficulty in determining whether it has \emph{positive Lebesgue measure},
but we find that it is more similar to determining whether it is \emph{non-empty} -- and this is more difficult than the measure question.
While questions about Lebesgue measure can often be answered using an arithmetical oracle, for non-emptiness we often have to go beyond even the hyperarithmetical.
As we shall see, this level of difficulty first arises at the $G_{\delta}$ ($\mathbf{\P^{0}_{2}}$) level;
we start by going over the simpler cases of open ($\mathbf{\S^{0}_{1}}$), closed ($\mathbf{\P^{0}_{1}}$), and $F_{\sigma}$ ($\mathbf{\S^{0}_{2}}$) sets.
\begin{pro}\label{s01}
For any rational $0 < s < 1$, the following families are identical, and have $\S^0_1$-complete index sets.
\begin{enumerate}[(a)]
\item $\S^0_1$ classes that are nonempty;
\item $\S^0_1$ classes that have non-zero $s$-dimensional Hausdorff measure;
\item $\S^0_1$ classes that have non-zero Lebesgue measure.
\end{enumerate}
\end{pro}
\begin{proof}
Given a set $W_e \subseteq 2^{<\omega}$, let $\Cyl{W} = \bigcup_{\sigma \in W} \Cyl{\sigma}$, where $\Cyl{\sigma} = \{x \in 2^\omega \colon \sigma \subset x \}$.
Since any non-empty open set has positive Lebesgue measure, and having positive Lebesgue measure implies having infinite $\mathcal{H}^s$-measure for any $s< 1$, the three statements are equivalent.
Any $\S^0_1$ class is given as $\Cyl{W_e}$ for some c.e.\ set $W_e$. The corresponding index sets are c.e.\ since
$\Cyl{W_e} \neq \emptyset$ if and only if $W_e \neq \emptyset$ if and only if $\exists s,\sigma \; \varphi_{e,s}(\sigma)\downarrow$, and they are complete by Rice's Theorem.
\end{proof}
Next, we consider the case of $\P^{0}_{1}$ classes. It turns out that deciding whether a $\P^0_1$ class has positive Lebesgue or Hausdorff measure is only slightly more complicated than deciding whether it is non-empty.
In the following, we let $T_e$ be the $e$-th recursive tree,
\[
T_e = \{ \sigma \colon \forall \tau \ensuremath{\subseteq} \sigma \; \phi_{e,|\sigma|}(\tau) \uparrow \}.
\]
\begin{pro}\label{p01left}
The set of indices of $\P^{0}_{1}$ classes that are nonempty is $\P^{0}_{1}$-complete.
\end{pro}
\begin{proof}
A tree $T$ does not have an infinite path if and only if for some level $n$, no string of length $n$ is in $T$.
If $T$ is recursive, the latter event is c.e.\ and hence the set $\{e \colon [T_e] \neq\emptyset \}$ is $\P^{0}_{1}$. It is $\P^{0}_{1}$-hard by Rice's Theorem.
\end{proof}
\begin{pro}
The set of indices of $\P^{0}_{1}$ classes that have positive Lebesgue measure is $\S^{0}_{2}$-complete.
\end{pro}
\begin{proof}
Given a tree $T$, $[T]$ has positive Lebesgue measure if and only if $$\exists n \forall m \; |T^m | \geq 2^{m-n},$$ where $T^m = T \cap \{0,1\}^m$. This follows from the dominated convergence theorem.
Hence the corresponding index set is $\S^0_2$.
One can reduce the $\S^0_2$-complete set $\operatorname{Fin} = \{e \colon W_e \text{ finite} \}$ to it by effectively building, for each $e$, a tree $T_{f(e)}$ such that $[T_{f(e)}]$ has positive measure if and only if $W_e$ is finite.
This is achieved by cutting the measure in half (i.e.\ terminating an appropriate number of nodes) whenever another number enters $W_e$.
In detail, there exists (by the Church-Turing thesis) a recursive function $f$ such that $T^0_{f(e)} = \{\epsilon\}$ and
\[
T^{s+1}_{f(e)} = \{\sigma\ensuremath{\mbox{}^\frown} i \colon \sigma \in T^s_{f(e)}, \: i \leq 1 \overset{{ }_\bullet}{-} |W_{e,s+1}\setminus W_{e,s}| \}.
\]
This $f$ is a many-one reduction from $\operatorname{Fin}$ to the set $\{e \colon [T_e] \text{ has positive Lebesgue measure}\}$.
\end{proof}
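The halving construction in this reduction can be simulated for a fixed, finite stage-by-stage enumeration. In the sketch below (illustrative only; the list \texttt{new\_elements} stands in for the stages $W_{e,s+1}\setminus W_{e,s}$), the measure of the resulting class is $2^{-|W_e|}$ once the enumeration has stopped.

```python
def build_tree_levels(new_elements, depth):
    """Levels T^0, ..., T^depth: start from the empty string; at stage s+1
    every node of length s gets both children if no new element entered W_e,
    and only the child 0 otherwise (halving the measure)."""
    levels = [[""]]
    for s in range(depth):
        branch = 2 if new_elements[s] == 0 else 1
        levels.append([sigma + str(i) for sigma in levels[-1] for i in range(branch)])
    return levels

def level_measure(level):
    """Lebesgue measure of the clopen set generated by one level."""
    return sum(2.0 ** -len(sigma) for sigma in level)

# One element enters W_e at stages 2 and 5, nothing afterwards:
levels = build_tree_levels([0, 1, 0, 0, 1, 0, 0, 0], 8)  # measure 2^-2 = 0.25
```

If $W_e$ is infinite the halvings never stop and the measure of $[T_{f(e)}]$ is $0$; if $W_e$ is finite the measure stabilizes at a positive value.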
\begin{thm}\label{p01}
For any rational $0<s<1$, the set of indices of $\P^{0}_{1}$ classes of non-zero $\mathcal{H}^s$-measure is $\S^{0}_{2}$-complete.
\end{thm}
\begin{proof}
Given a tree $T$, $\mathcal{H}^s[T] = 0$ if and only if
\[
\forall m \, \exists n \: \mathcal{H}^s_{2^{-n}} [T] < 2^{-m}.
\]
By the Decreasing Sets Lemma, the latter is equivalent to
\[
\forall m \, \exists n,k \: \mathcal{H}^s_{2^{-n}} \Cyl{T^k} < 2^{-m}.
\]
The property $\mathcal{H}^s_{2^{-n}} \Cyl{T^k} < 2^{-m}$ however, is decidable:
One has to check only a finite number of covers: in case $n \leq k$, any set $U \subseteq T \cap 2^{<\omega}$ such that $\Cyl{U} \supseteq \Cyl{T^k}$ and for all $\sigma \in U$, $n \leq |\sigma| \leq k$;
and in case $n > k$, only the cover $\{\sigma \colon |\sigma| = n \text{ and } \sigma \text{ extends some } \tau \in T^k\}$. It follows that the set
\[
\{e \colon [T_e] \text{ has non-zero $\mathcal{H}^s$-measure} \}
\]
is $\S^0_2$.
We can again reduce $\operatorname{Fin}$ to this set to show it is $\S^0_2$-complete. This time the idea is to control the branching rate of a Cantor set.
Whenever a new element enters $W_e$, we delay the next branching for a long time. Let $s = a/b$, $a,b$ relatively prime.
Define a recursive set $A \subseteq \omega$ as follows: Set $l_0 = 0$ and $A\Rest{l_0} = \ensuremath{\epsilon}$. Given $A\Rest{l_s}$, let
\[
l_{s+1} = \begin{cases}
l_{s} + b & \text{if } W_{e,s+1}\setminus W_{e,s} = \emptyset, \\
2^{l_{s}} & \text{otherwise. }
\end{cases}
\]
Put $A(i) = 1$ for all $l_s \leq i < l_s + a$ and $A(j) = 0$ for $l_s + a \leq j < l_{s+1}$. Finally define
\[
C_{f(e)} = 2^\omega \join[A] \{0\},
\]
where $0$ denotes the real that is zero at all positions.
If $W_e$ is infinite, then $A$ has large gaps, and it is not hard to see that in this case $C_{f(e)}$ has $\mathcal{H}^s$-measure $0$.
If $W_e$ is finite, on the other hand, $C_{f(e)}$ is bi-Lipschitz equivalent to $2^\omega \join[s] \{0\}$, and the latter set has positive $\mathcal{H}^s$-measure by Proposition \ref{pro:hmeas-join},
a property that is preserved under bi-Lipschitz equivalence (see e.g.\ \cite{falconer:1990}).
\end{proof}
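The delayed-branching set $A$ from this proof can likewise be computed stage by stage. The sketch below is our own illustration; it assumes the delay $2^{l_s}$ already exceeds $l_s + a$, which holds once $l_s$ is large enough.

```python
def build_A(a, b, new_at_stage, stages):
    """Prefix of the set A: the block starting at l_s has a ones followed by
    zeros up to l_{s+1}; if a new element enters W_e at stage s+1, the next
    block is delayed until position 2^{l_s} (a long run of zeros)."""
    A, l = [], 0
    for s in range(stages):
        l_next = l + b if not new_at_stage[s] else 2 ** l
        A.extend([1] * a + [0] * (l_next - l - a))
        l = l_next
    return A

# s = 2/3; a new element enters W_e only at stage 4:
A = build_A(2, 3, [False, False, False, True, False], 5)
```

If $W_e$ is finite the delays eventually stop and the density of ones in $A$ returns to $s$; if $W_e$ is infinite the gaps $[l_s + a, 2^{l_s})$ force the density to drop infinitely often, which kills the $\mathcal{H}^s$-measure of $2^\omega \join[A] \{0\}$.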
Next, we look at the question whether a $\P^0_1$ class has \emph{finite} Hausdorff measure.
It turns out this question is $\S^0_3$-complete, indicating that finding closed subsets of finite measure is strictly more difficult in the case of Hausdorff measures than for Lebesgue measure.
It is crucial here that Hausdorff measures are \emph{not} $\sigma$-finite.
\begin{thm}
For any rational $0<s<1$, the set of indices of $\P^{0}_{1}$ classes of finite $\mathcal{H}^s$-measure is $\S^{0}_{3}$-complete.
\end{thm}
\begin{proof}
Given a tree $T$, $[T]$ has finite $\mathcal{H}^s$-measure if and only if
\[
\exists c \: \forall n \: \exists \text{ finite } F \subset T \; \; [\: \Cyl{F} \supseteq [T], \text{ all } \sigma \in F \text{ have length} \geq n, \text{ and } \sum_{\sigma \in F} 2^{-s|\sigma|} < c \:].
\]
(It suffices to consider only finite covers since $[T]$ is compact.) Hence
\[
\{e \colon [T_e] \text{ has finite $\mathcal{H}^s$-measure} \}
\]
is $\S^0_3$.
We show it is $\S^0_3$-complete by reducing the set $\operatorname{Cof} = \{e \colon W_e \text{ is cofinite} \}$
to it.
Suppose $s = a/b$, where $a,b$ are relatively prime. Let $A = \{ bn + i \colon n \in \omega, i < a \}$. We define the co-r.e.\ set $B$ by letting $k \in B$ if and only if $k \in A$ or there exists an $n$ such that $b(2n) \leq k < b(2n+1)$ and $n \not\in W_{e}$.
Put $C_{f(e)} = 2^\omega \join[B] \{0\}$. Since $B$ is co-r.e.\ it is straightforward to verify that $C_{f(e)}$ is $\P^0_1$.
We claim that $C_{f(e)}$ has finite $\mathcal{H}^s$-measure if and only if $W_e$ is cofinite.
To see this, note that if $W_e$ is cofinite, $C_{f(e)}$ is bi-Lipschitz equivalent to $2^\omega \join[s] \{0\}$, which has finite $\mathcal{H}^s$-measure (see, for example, \cite{mattila}).
If, on the other hand, $W_e$ has an infinite complement, then there exist infinitely many blocks of size $b$ in $B$, as opposed to just the blocks of size $a$ a priori present in $B$.
It follows that the Cantor-like set defined by $C_{f(e)}$ has finite $\mathcal{H}^h$-measure, where $\mathcal{H}^h$ is a generalized Hausdorff measure given by a dimension function
\[
h(2^{-n}) = 2^{-(sn + \alpha(n))},
\]
where $\alpha(n) \to \infty$ for $n \to \infty$. It follows that $h(2^{-n})/2^{-sn} \to 0$ for $n \to \infty$. Therefore, $C_{f(e)}$ has infinite (in fact, non-$\sigma$-finite) $\mathcal{H}^s$-measure (see \cite{falconer:1990}, \cite{mattila}).
\end{proof}
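Membership in the set $B$ of this reduction is a simple arithmetic condition, which the following sketch makes explicit (illustrative only; a finite set stands in for the enumeration of $W_e$).

```python
def in_B(k, a, b, W_e):
    """k is in B iff k is in A = {b*n + i : i < a}, or k lies in a block
    [b*(2n), b*(2n+1)) whose index n has not entered W_e."""
    if k % b < a:           # k in A
        return True
    n = k // b              # number of the length-b block containing k
    return n % 2 == 0 and (n // 2) not in W_e

# s = 2/3; suppose only 0 has entered W_e so far:
print([k for k in range(12) if in_B(k, 2, 3, {0})])  # -> [0, 1, 3, 4, 6, 7, 8, 9, 10]
```

If $W_e$ is cofinite, $B$ differs from $A$ on only finitely many blocks; otherwise infinitely many full blocks of length $b$ appear in $B$.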
\begin{pro}\label{s02left}
The set of indices of $\S^{0}_{2}$ classes that are nonempty is $\S^{0}_{2}$-complete.
\end{pro}
\begin{proof}
Let $E$ be a $\S^0_2$ class, and let $R_e$ be a recursive relation so that
\[
x \in E \quad \iff \quad \exists n \, \forall m \; R_e(x\Rest{m}, n).
\]
$E$ is non-empty if and only if one of the $\P^0_1$ classes $\{z \colon \forall m \; R_e(z\Rest{m}, n) \}$ is non-empty.
Deciding whether a $\P^0_1$ class is non-empty is $\P^0_1$-complete, as we saw in Proposition \ref{p01left}. Hence deciding, for a given $e$, whether such an $n$ exists is $\S^0_2$-complete.
\end{proof}
\begin{pro}\label{s02}
For any rational $0<s<1$, the set of indices of $\S^{0}_{2}$ classes of non-zero $\mathcal{H}^s$-measure is $\S^{0}_{2}$-complete.
\end{pro}
\begin{proof}
Assume $E = \bigcup_n F_n$, where each $F_n$ is closed. Let $E_n = \bigcup_{m\leq n} F_m$. Since $E$ is measurable and $\mathcal{H}^s$ is a Borel measure, we have $\mathcal{H}^s E = \lim_n \mathcal{H}^s E_n$.
Hence $E$ has non-zero $\mathcal{H}^s$-measure if and only if one of the $F_n$ has.
If $E$ is $\S^0_2$, then the indices of the $\P^0_1$ classes $E_n$ can be obtained effectively and uniformly. The result now follows from Theorem \ref{p01}.
\end{proof}
Passing from $\S^0_2$ to $\P^0_2$ classes, we see a significant jump in complexity.
\begin{thm}\label{p02}
For any rational $0<s<1$, the set of indices of $\P^{0}_{2}$ classes that have non-zero $\mathcal{H}^s$-measure is $\S^1_1$-complete.
\end{thm}
\begin{proof}
Suppose $E$ is a $\P^{0}_{2}$ class. Consider the $s$-join
\[
F = 2^\omega \join[s] E.
\]
By Proposition \ref{pro:hmeas-join}, $F$ has non-zero $\mathcal{H}^s$-measure if and only if $E$ is not empty.
Since the set of indices of $\P^{0}_{2}$ classes in $2^\omega$ that are nonempty is $\S^{1}_{1}$-hard, so is the set of indices of $\P^{0}_{2}$ classes that have non-zero $\mathcal{H}^s$-measure.
By Corollary \ref{BD-cor}, the set of indices of $\S^1_1$ classes that are of non-zero $\mathcal{H}^s$-measure is $\S^1_1$, since
\[
\mathcal{H}^s \{x \colon \exists g \forall n \, R(x\Rest{n},g\Rest{n})\} > 0 \quad
\Longleftrightarrow \quad \exists f \; \mathcal{H}^s C_f > 0,
\]
where $C_f$ is the $\P^0_1(f)$ class from Corollary \ref{BD-cor}.
\end{proof}
A straightforward computation shows that the set of indices of $\P^0_2$ classes that have non-zero Lebesgue measure is $\S^0_3$.
Hence at the level $\P^0_2$ it is far more complicated to determine whether a class has non-zero Hausdorff measure than whether it has non-zero Lebesgue measure.
Our results are summarized in Figure \ref{summary-table}.
\begin{figure}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Family & Nonempty? & Positive Hausdorff measure? & Positive Lebesgue measure? \\
\hline
&&&\\
$\S^0_1$ & $\S^0_1$-complete & $\S^0_1$-complete & $\S^0_1$-complete \\
&&&\\
\hline
&&&\\
$\P^0_1$ & $\P^0_1$-complete & & \\
&&$\S^0_2$-complete & $\S^0_2$-complete\\
$\S^0_2$ & $\S^0_2$-complete & & \\
&&&\\
\hline
&&&\\
$\P^0_2$ & & & $\S^0_3$ \\
& $\S^1_1$-complete & $\S^1_1$-complete &\\
$\S^1_1$ & & & \\
&&&\\
\hline
\end{tabular}
\end{center}
\caption{Index set complexity of some classes of reals. For example, the set of indices of $\P^{0}_{2}$ classes that are of non-zero Hausdorff measure is $\S^{1}_{1}$-complete, and this is shown in Theorem \ref{p02}.}
\label{summary-table}
\end{figure}
\section{Closed subsets of non-zero Hausdorff measure}\label{3}
We now turn to the question of how difficult it is to find a closed subset of non-zero Hausdorff measure.
We will measure this in terms of the recursion theoretic complexity of the parameter needed to define such a closed subset.
Corollary \ref{BD-cor} tells us that given a $\S^1_1$ class $E$ of non-zero $\mathcal{H}^s$-measure, we can find a function $r: \omega \to \omega$ such that there exists a $\P^0_1(r)$ subclass of non-zero $\mathcal{H}^s$-measure. How complex is $r$?
We will see that a few fundamental results in higher recursion theory facilitate the classification of the possible complexities.
\begin{df}
A set $B \subseteq \omega^\omega$ is called a \emph{basis} for a pointclass $\Gamma$ if each nonempty collection of reals that belongs to $\Gamma$ has a member in $B$.
\end{df}
We shall be particularly interested in the case $\Gamma=\S^{1}_{1}$. Here several bases are known.
\begin{thm}[Basis theorems for $\S^{1}_{1}$]\label{basis}
Each of the following classes is a basis for $\S^{1}_{1}$:
\begin{enumerate}
\item[$(1)$] $\{x \colon x \le_{\ensuremath{\operatorname{T}}} \mathcal O\}$, the reals recursive in some $\P^{1}_{1}$ set $($Kleene, see Rogers \cite{Rogers}*{XLII(b)}$)$;
\item[$(2)$] $\{x \colon x<_{h}\mathcal O\}$, the reals of hyperdegree strictly below $\mathcal O$ $($Gandy \cite{Gandy}; see also Rogers \cite{Rogers}*{XLIII(a)}$)$;
\item[$(3)$] $\{x \colon x\not\le_{h} z \And z\not\le_{h} x\}$, where $z$ is any given non-hyperarithmetical real $($Gandy, Kreisel, and Tait \cite{GKT}$)$.
\end{enumerate}
\end{thm}
We first show that any basis for $\S^1_1$ contains a function that specifies a subset of non-zero Hausdorff measure.
\begin{thm}\label{oh}
Let $0< s < 1$ be rational. For each set $B \subseteq \omega^\omega$ that is a basis for $\S^{1}_{1}$ and each $\S^{1}_{1}$ class $E$ of non-zero $\mathcal{H}^s$-measure,
there is some $f \in B$ such that $E$ has a $\P^{0}_{1}(f)$ subclass of non-zero $\mathcal{H}^s$-measure.
\end{thm}
\begin{proof}
Let $R$ be a recursive predicate such that
\[
E = \{x \colon \exists g \forall n \, R(x\Rest{n},g\Rest{n})\}.
\]
For any function $f: \omega \to \omega$,
\[
C_{f} =\{x \colon \exists g \leq f \, \forall n \, R(x\Rest{n},g\Rest{n})\}
\]
is a $\P^{0}_{1}(f)$ subclass of $E$.
Now consider the set
\[
\{ f\in\omega^\omega \colon \mathcal{H}^s C_{f} > 0 \}.
\]
By Theorem \ref{p01}, this is a $\S^{0}_{2}$ class in $\omega^\omega$; in particular it is $\S^{1}_{1}$, hence it has a member $f \in B$. For such an $f$, $C_{f}$ is a $\P^{0}_{1}(G_{f})$ class, where $G_f$ is the graph of $f$.
\end{proof}
In particular, $E$ always has a $\P^{0}_{1}(\mathcal O)$ subclass of non-zero Hausdorff measure.
We will see next that there are examples, even of $\P^0_2$ classes, where no hyperarithmetical real is powerful enough to define a $\P^0_1$ subclass of non-zero Hausdorff measure.
\begin{thm}\label{top}
Let $0< s < 1$ be rational. There is a $\P^{0}_{2}$ class $G$ of non-zero $\mathcal{H}^s$-measure such that the following holds:
If $x \in 2^\omega$ is such that some $\P^0_1(x)$ subclass of $G$ has non-zero $\mathcal{H}^s$-measure, then $x \geq_{\ensuremath{\operatorname{T}}} H$ for every $H \in \operatorname{HYP}$.
\end{thm}
\begin{proof}
Let $\operatorname{HYP}$ denote the collection of all hyperarithmetical reals. Note that the set
\[
E = \{ z \in 2^\omega \colon \forall H \in\operatorname{HYP}\, H \le_{\ensuremath{\operatorname{T}}} z \}
\]
is $\S^{1}_{1}$ (an observation made by Enderton and Putnam \cite{EP}).
$E$ has Hausdorff dimension $1$, since it contains the upper cone of $\mathcal O$, and Reimann \cite{reimann:phd} has shown that the upper cone of any Turing degree has Hausdorff dimension $1$. It follows that $\mathcal{H}^t E = \infty$ for any $t < 1$.
Suppose $x$ is such that there is a $\P^{0}_{1}(x)$ subclass of $E$ that is of non-zero $\mathcal{H}^s$-measure and hence non-empty.
We apply two basis theorems for $\P^0_1$ classes (or rather, their relativized versions).
By the low basis theorem, each $H \in \operatorname{HYP}$ is recursive in a real $y$ that is low relative to $x$,
and by the hyperimmune-free basis theorem, each $H \in \operatorname{HYP}$ is recursive in a real $z$ that is hyperimmune-free relative to $x$.
The reals $y$ and $z$ form a minimal pair over $x$, since no non-computable degree comparable with $x'$ can be hyperimmune-free relative to $x$, and being hyperimmune-free relative to $x$ is closed downwards in the Turing degrees.
Hence we must have that $H\le_{\ensuremath{\operatorname{T}}} x$ for every $H \in \operatorname{HYP}$.
It remains to show that we can replace $E$ by a $\P^0_2$ class with the same property. Every $\S^1_1$ class is the projection of a $\P^0_2$ class.
Instead of using the standard projection on coded pairs, we can use a projection along a $t$-join, i.e.\ there exists a $\P^0_2$ class $G \subseteq 2^\omega$ such that
\[
E = \{ z \colon \exists y \; (z \join[t] y\in G) \},
\]
where $t = s+\varepsilon < 1$ and $\varepsilon >0$ is sufficiently small.
If we let $\pi_t(z \join[t] y) = z$ be the projection of a $t$-join onto the first ``coordinate'', then for all $x, x' \in G$,
\[
d(\pi_t(x), \pi_t(x')) \leq d(x,x')^t,
\]
hence $\pi_t$ is Hölder continuous with exponent $t$. It follows that
\[
\mathcal{H}^{s} G \geq \mathcal{H}^{s/t} \pi_t(G) = \mathcal{H}^{s/t} E = \infty.
\]
On the other hand, every element of $G$ still computes every hyperarithmetic real, since every element of $G$ is the join of an element of $E$ with another real.
Hence the argument above remains valid and we get that if $x$ defines a $\P^0_1(x)$ subclass of $G$ of non-zero $\mathcal{H}^s$-measure, $x \geq_{\ensuremath{\operatorname{T}}} H$ for every $H \in \operatorname{HYP}$.
\end{proof}
We can also give a sufficient condition for hyperarithmeticity based on the ability to define a closed subset of positive measure. This follows from Solovay's characterization of hyperarithmetic reals through fast growing functions.
\begin{df}[Solovay \cite{Solovay}]
A family $F$ of infinite sets of natural numbers is said to be \emph{dense} if each infinite set of natural numbers has a subset in $F$.
A set $A$ of natural numbers is said to be \emph{recursively encodable} if the family of infinite sets in which $A$ is recursive is dense.
\end{df}
\begin{thm}[Solovay \cite{Solovay}]\label{Solo}
The recursively encodable sets coincide with the hyperarithmetic sets.
\end{thm}
\begin{thm}\label{hyponly}
Let $0< s < 1$ be rational, let $E \subseteq 2^\omega$ be a $\S^1_1$ class of non-zero $\mathcal{H}^s$-measure, and let $y \in 2^\omega$. If $y \leq_{\ensuremath{\operatorname{T}}} x$ for every $x$ such that $E$ has a $\P^{0}_{1}(x)$ subclass of non-zero $\mathcal{H}^s$-measure, then $y$ is hyperarithmetical.
\end{thm}
\begin{proof}
Suppose $y$ is recursive in each $x$ defining a $\P^0_1$ subclass of non-zero $\mathcal{H}^s$-measure of $E$. In
particular, it is recursive in any (graph of a) function $r$ as in Corollary \ref{BD-cor}, and any $f$ dominating $r$. If
$A \subseteq \omega$ is infinite, it has an infinite subset $B = \{b_0 < b_1 < b_2 < \dots \}$ so that the function
$p_B(n) = b_n$ dominates $r$ and hence defines a closed subset of non-zero $\mathcal{H}^s$-measure. It follows that $y$ is recursively encodable and thus hyperarithmetical.
\end{proof}
\section{Mass problems}\label{4}
It is beneficial to phrase the preceding results as \emph{mass problems}. Recall that a mass problem is a subset of $2^\omega$. Given a $\S^1_1$ class $E$ of non-zero $\mathcal{H}^s$-measure, $0 < s < 1$ rational, we define the mass problem
\[
S(E) = \{ x \in 2^\omega \colon \text{$E$ has a $\P^{0}_{1}(x)$ subclass of non-zero $\mathcal{H}^s$-measure} \}.
\]
For sets of reals $X$, $Y$, $X$ is called \emph{weakly (Muchnik) reducible} to $Y$, written $X \leq_w Y$, if for each $y \in Y$ there is some $x \in X$ such that $x \leq_{\ensuremath{\operatorname{T}}} y$.
Our results now read as follows.
\begin{enumerate}[(1)]
\item If $z \in 2^\omega$ is $\P^{1}_{1}$-complete then $S(E)\le_{w}\{z\}$ for any $E$. (Theorem \ref{oh})
\item There is a $\P^{0}_{2}$ class $G$ such that for each hyperarithmetical real $y$, $\{y\}\le_{w} S(G)$. (Theorem \ref{top})
\item For each real $y$, if $\{y\}\le_{w}S(E)$ for some $E$, then $y$ is hyperarithmetical. (Theorem \ref{hyponly})
\end{enumerate}
The situation is summarized in Figure \ref{fig:Ln}.
\begin{figure}[htb!]
\centering%
\includegraphics[scale=.33]{Davies_Graph.pdf}
\caption{
The relative position in the Muchnik lattice of the various mass problems $S(E)$.
At the top is Kleene's $\mathcal{O}$, according to Theorems \ref{basis}(1) and \ref{oh}. The ellipse represents the hyperarithmetical sets $\operatorname{HYP}$ with their cofinal sequence $\{0^{(\alpha)} \colon \alpha < \omega_1^{CK} \}$.
The top $S(G)$ class is located as indicated in Theorem \ref{top}. Each $S(E)$ bounds only sets in HYP, per Theorem \ref{hyponly}.
It is not known which of the classes $E$ here displayed might represent the set of Turing degrees in $\operatorname{BN1R}$.
}
\label{fig:Ln}
\end{figure}
\newpage
\begin{bibsection}
\begin{biblist}
\bib{besicovitch:concentrated_1933a}{article}{
author={Besicovitch, A. S.},
\end{biblist}
\end{bibsection}
\end{document}
1808.05754
\section{Introduction}
Computational methods for retinal disease \cite{tan2009detection,lalezary2006baseline} have been investigated extensively through different signal processing techniques. Retinal diseases are accessible to machine learning techniques because of their visual nature, in contrast to other common human diseases that require invasive techniques for diagnosis or treatment. Typically, the accuracy of diagnosing retinal diseases from clinical retinal images depends heavily on the practical experience of a physician or ophthalmologist. However, training highly skilled ophthalmologists usually takes years, and their number, especially in less-developed areas, is still far from sufficient. Therefore, developing an automatic retinal disease detection system is important, and it would broadly improve the diagnostic accuracy of retinal diseases. Moreover, in remote rural areas, where there may be no local ophthalmologists to screen for retinal disease, such a system also helps non-ophthalmologists identify patients with retinal diseases and refer them to a medical center for further treatment.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{model_flowchart-4.pdf}
\end{center}
\caption{Our proposed two-stream model. A raw retinal image is fed both to a DNN (U-Net) and to a contrast enhancement algorithm. The output of each stream is passed through a separate PCA module, and the resulting features are sent to the retinal disease classifier, an SVM, which produces the predicted retinal disease.}
\label{fig:figure1}
\end{figure}
The development of automatic disease detection (ADD) \cite{sharifi2002classified} alleviates enormous pressure on social healthcare systems. Retinal symptom analysis \cite{abramoff2010retinal} is one of the important ADD applications, as it offers a unique opportunity to improve eye care globally. The World Health Organization estimates that age-related macular degeneration (AMD) and diabetic retinopathy, two typical retinal diseases, will affect over 500 million people worldwide by 2020 \cite{pizzarello2004vision}.
Moreover, the growing number of diabetic retinopathy cases globally calls for extended efforts to develop visual tools that assist in the analysis of retinal diseases. Decision support systems for retinal ADD, such as \cite{bhattacharya2014watermarking} for non-proliferative diabetic retinopathy, have benefited from recent machine learning successes in high-dimensional image processing that feature details of the blood vessels. \cite{lin2000rotation} demonstrated an automated technique for segmenting blood vessels by tracking the vessel centers with a Kalman filter. However, these pattern-recognition-based classifiers still rely on hand-crafted features and target only a single retinal symptom. Despite extensive efforts using wavelet signal processing, retinal ADD remains a viable target for improved machine learning techniques applicable to point-of-care (POC) medical diagnosis and treatment in an aging society \cite{cochocki1993neural}.
To the best of our knowledge, clinical retinal images are scarcer than other cell imaging data, such as blood cell and cancer cell images. However, a vanilla deep learning based disease diagnosis system requires large amounts of data. Therefore, we propose a novel visual-assisted diagnosis algorithm based on an integration of a support vector machine and deep neural networks. The primary goal of this work is to automatically classify 52 specific retinal diseases with reliable clinical-assistance capability for intelligent medicine. To foster long-term visual analytics research, we also present a visual clinical label collection, EyeNet, covering several crucial symptoms such as AMD, DR, uveitis, BRVO, and BRAO.
\vspace{+0.3cm}
\noindent\textbf{Contributions.}
\begin{itemize}
\item We design a novel two-stream algorithm combining a support vector machine and deep neural networks to facilitate the medical diagnosis of retinal diseases.
\item We present a new clinical label collection, EyeNet, for ophthalmology with 52 retinal disease classes as a crucial aid to the ophthalmologist and medical informatics communities.
\item Finally, we visualize the features learned inside the DNN model with heat maps. The visualization helps in understanding the medical comprehensibility of our DNN model.
\end{itemize}
\section{Related Work}
In this section, we review works related to our proposed method, divided into three parts: medical dataset comparison, dimension reduction by feature extraction, and image segmentation by neural networks.
\noindent\textbf{2.1 Medical Dataset Comparison}
Large-scale datasets help deep learning algorithms reach human-level performance on tasks such as speech recognition \cite{hannun2014deep}, image classification and recognition \cite{deng2009imagenet}, and question answering \cite{rajpurkar2016squad,antol2015vqa,huang2017vqabq,huang2017novel,huang2017robustness}. In the medical community, large-scale medical datasets have likewise helped algorithms achieve expert-level performance on the detection of skin cancer \cite{esteva2017dermatologist}, diabetic retinopathy \cite{gulshan2016development}, heart arrhythmias \cite{rajpurkar2017cardiologist}, pneumonia \cite{rajpurkar2017chexnet}, brain hemorrhage \cite{grewal2018radnet}, lymph node metastases \cite{bejnordi2017diagnostic}, and hip fractures \cite{gale2017detecting}.
Recently, the number of openly available medical datasets has been growing. Table \ref{table:table3} summarizes publicly available medical image datasets related to ours. The recently released ChestX-ray14 \cite{wang2017chestx} is the largest, containing 112,120 frontal-view chest radiographs with up to 14 thoracic pathology labels, while the smallest, DRIVE \cite{staal2004ridge}, contains 40 retinal images. Among openly available musculoskeletal radiograph databases, the Stanford Program for Artificial Intelligence in Medicine and Imaging provides a dataset of pediatric hand radiographs annotated with skeletal age (AIMI), and the Digital Hand Atlas \cite{gertych2007bone} includes left-hand radiographs of children of various ages labeled with radiologist readings of bone age. Our proposed EyeNet contains 52 disease classes and 1747 images.
\begin{table*}
\begin{center}
\scalebox{0.65}{
\begin{tabular}{| l | l | l | l |}
\hline
\textbf{Name of Dataset} & \textbf{Study Type} & \textbf{Label} & \textbf{Number of Images}\\ \hline
\textbf{EyeNet} & \textbf{Retina} & \textbf{Labels mining of Retinal Diseases} & \textbf{1747}\\ \hline
DRIVE \cite{staal2004ridge} & Retina & Retinal Vessel Segmentation & 40\\ \hline
MURA \cite{rajpurkar2017mura} & Musculoskeletal (Upper Extremity) & Abnormality & 40,561\\ \hline
Digital Hand Atlas \cite{gertych2007bone} & Musculoskeletal (Left Hand) & Bone Age & 1,390 \\ \hline
ChestX-ray14 \cite{wang2017chestx} & Chest & Multiple Pathologies & 112,120 \\ \hline
DDSM \cite{heath2000digital} & Mammogram & Breast Cancer & 10,239 \\ \hline
\end{tabular}}
\caption{Overview of available different types of medical label collection and image datasets.}
\label{table:table3}
\end{center}
\end{table*}
\noindent\textbf{2.2 Dimension Reduction by Feature Extraction}
Feature extraction is a method that makes pattern classification and recognition easier. In image processing and pattern recognition, feature extraction is, in a sense, a special form of dimensionality reduction \cite{costa2005classification}.
The purpose of feature extraction is to exploit the most relevant information in the original data and describe it in a lower-dimensional space. For example, original medical image data, such as functional magnetic resonance imaging (fMRI) scans, are typically very large, which makes algorithms computationally inefficient. In this case, we transform the original data into a reduced set of feature representations. That is, we use a set of feature vectors to describe the original data, a process called image feature extraction. In \cite{fukunaga2013introduction}, the authors note that the extracted feature vectors should have a dimensionality that corresponds to the intrinsic dimensionality of the original data, i.e., the minimum number of parameters required to account for its properties. Moreover, the authors of \cite{jimenez1998supervised} argue that dimensionality reduction mitigates the curse of dimensionality and other undesirable properties of high-dimensional spaces. Dimensionality reduction by feature extraction has been applied in many fields, such as document verification \cite{yang2016stacked}, character recognition \cite{trier1996feature}, information extraction from sentences \cite{huang2017novel,srihari1999information,huang2017vqabq}, and machine translation \cite{somers1999example,bahdanau2014neural}.
\noindent\textbf{2.3 Image Segmentation by Neural Networks}
Typically, researchers use convolutional neural networks for image classification tasks with a single output class label. In biomedical image processing tasks, however, the output should include localization; that is, a class label is assigned to each pixel. Furthermore, training sets of thousands of images are typically beyond reach in biomedical tasks. Therefore, the authors of \cite{ciresan2012deep} train a sliding-window neural network that predicts the class label of each pixel from a small patch around it.
This model can perform localization, and the number of training samples, in the sense of patches, is much larger than the number of training images. However, \cite{ciresan2012deep} has two drawbacks. First, there is a trade-off between the use of context and localization accuracy. Second, because the model runs separately for each small patch and overlapping patches introduce much redundancy, it is computationally inefficient. More recently, the authors of \cite{hariharan2015hypercolumns,seyedhosseini2013image} have proposed approaches that achieve good localization and use of context at the same time.
In the U-Net paper \cite{ronneberger2015u}, the authors build upon an elegant neural network architecture, the so-called fully convolutional network \cite{long2015fully}. They modify the architecture so that it works with very few training images and produces more accurate segmentations. The main idea of \cite{long2015fully} is to supplement a usual contracting network with successive layers in which pooling operators are replaced by upsampling operators, so these layers increase the resolution of the output. For localization, the upsampled output is combined with high-resolution features from the contracting path, and a successive convolutional layer learns to assemble a more accurate output from this information. Because of these advantages, we modify the U-Net and incorporate it into our proposed method.
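The contracting/expanding structure with a skip connection described above can be sketched in PyTorch. This is a minimal two-level illustration, not the authors' exact modified U-Net; the layer widths and class count are illustrative.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Two-level U-Net-style sketch: one downsampling step, one
    upsampling step, and a single skip connection carrying
    high-resolution features from the contracting path."""
    def __init__(self, in_ch=3, out_ch=2, base=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        # upsampling operator in place of pooling on the expanding path
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, out_ch, 1))  # per-pixel class scores

    def forward(self, x):
        skip = self.enc(x)                  # high-resolution features
        mid = self.mid(self.down(skip))     # contracted representation
        up = self.up(mid)                   # upsample back to input size
        cat = torch.cat([up, skip], dim=1)  # skip connection: combine paths
        return self.dec(cat)                # pixel-wise logits
```

The output keeps the input's spatial size, so each pixel receives its own class scores, which is the localization property discussed above.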
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{fig1_final.png}
\end{center}
\caption{The effect of U-Net on (a) unseen clinical eyeball images with different morphological shapes; (b) is the ground truth and (c) is the output of the vessel-subtracted U-Net. Comparing (b) and (c), the results are highly similar to the ground truth.}
\label{fig:figure2}
\end{figure*}
\section{Methodology}
In this section, we present the workflow of our proposed model, referring to Figure \ref{fig:figure1}.
\noindent\textbf{3.1 U-Net}
DNNs have greatly boosted the performance of image classification owing to their power of image feature learning \cite{simonyan2014very}. Active retinal disease is characterized by exudates around retinal vessels resulting in cuffing of the affected vessels \cite{khurana2007comprehensive}. However, ophthalmology images from clinical microscopy are often overlaid with white sheathing and minor features. Segmentation of retinal images has been investigated as a critical visual-aid technique for ophthalmologists \cite{rezaee2017optimized}. U-Net \cite{ronneberger2015u} is an effective DNN architecture, especially for segmentation. Here, we propose a modified version of U-Net that reduces the copy-and-crop processes by a factor of two. This adjustment speeds up training and has been verified to preserve an adequate semantic effect on small images. We use the cross-entropy loss to evaluate the training process:
\[
\ E = \sum_{x\in \Omega }w(x)\log(p_{l}(x)) \hspace{+1.25cm} (1)
\]
where $p_{l}$ is the approximated maximum function, and the weight map is then computed as:
\[
\ w(x)=w_{c}(x)+w_{0}\cdot \exp\left(\frac{-(d_{x1}+d_{x2})^2}{2\sigma^2}\right) \hspace{+0.5cm} (2)
\]
where $d_{x1}$ denotes the distance to the border of the nearest edge and $d_{x2}$ the distance to the border of the second nearest edge. The network uses blocks of two $3\times3$ convolutions, each followed by a rectified linear unit (ReLU), and a $2\times2$ max pooling operation with stride 2 for downsampling; a layer with even x- and y-sizes is selected for each operation. For the U-Net model, we use the existing DRIVE \cite{staal2004ridge} dataset as the training segmentation masks. Our proposed model converges at the 44th epoch, when its error rate drops below $0.001$. The Jaccard similarity of our U-Net model, validated on a 20\% test set of EyeNet, is 95.59\%, as shown in Figure \ref{fig:figure2}. The model is robust and feasible for different retinal symptoms, as illustrated in Figure \ref{fig:figure3}. The area under the ROC curve is $0.979033$ and the area under the precision-recall curve is $0.910335$.
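The border weight map of Eq. (2) can be sketched as follows. This is a minimal illustration assuming a mask of labelled objects (e.g. vessel segments); the SciPy-based distance computation and the parameter values $w_c$, $w_0$, $\sigma$ are illustrative defaults, not the exact training settings.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(labels, w_c=1.0, w_0=10.0, sigma=5.0):
    """Border weight map, Eq. (2):
    w(x) = w_c(x) + w_0 * exp(-(d1(x) + d2(x))^2 / (2 sigma^2)),
    where d1/d2 are distances to the nearest and second-nearest
    labelled object. Pixels in thin gaps between two objects get
    the largest weight."""
    ids = [i for i in np.unique(labels) if i != 0]
    if len(ids) < 2:                 # need two objects for a border term
        return np.full(labels.shape, w_c, dtype=float)
    # distance from every pixel to each object
    dists = np.stack([distance_transform_edt(labels != i) for i in ids])
    dists.sort(axis=0)               # ascending per-pixel distances
    d1, d2 = dists[0], dists[1]
    return w_c + w_0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
```

In the full loss of Eq. (1), this map multiplies the per-pixel log-probability so that separating borders between touching structures is emphasized.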
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig2_dis_fi.pdf}
\end{center}
\caption{Qualitative results of the contrast enhancement algorithms: (a) original clinical images, (b) histogram equalization, and (c) contrast-limited adaptive histogram equalization.}
\label{fig:figure3}
\vspace{-0.3cm}
\end{figure*}
\noindent\textbf{3.2 Principal Component Analysis as Eigenface in the limit of Sparse Data}
\[
\lambda_{k}=\frac{1}{M}\sum_{n=1}^{M}(u^{T}_{k}\Phi_{n})^{2} \hspace{+0.5cm} (3)
\]
Eigenface \cite{turk1991face} is a classical and highly efficient image recognition technique derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. Even with a single training image, previous eigenface works \cite{lyons1999automatic,wu2002face} established robust automatic classification with confident accuracy ($85.6\%$) by combining principal component analysis (PCA) and SVM classifiers. As a biological feature, retinal images share properties with human faces that make eigenface-style recognition \cite{moghaddam1998beyond} promising, including a finite semantic layout of facial and ophthalmological features \cite{akram2011retinal}. The eigenfaces are calculated \cite{turk1991face} by maximizing Eq. $(3)$, where $\Phi_{n}$ represents the deviation of the $n$-th image from the mean, $u_{k}$ is the chosen $k$-th vector, $\lambda_{k}$ is the $k$-th eigenvalue, and $M$ is the number of training images.
In our experiments, we select $k_{UNet} = 40$ and $k_{RGB}=61$ principal components to generate the eigenfaces with the highest accuracy for the U-Net stream and the RGB stream, respectively.
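The eigen-decomposition behind Eq. (3) can be sketched with an SVD, which avoids forming the full covariance matrix; array shapes and the choice of $k$ are illustrative.

```python
import numpy as np

def eigenfaces(images, k):
    """Eigenface-style PCA, Eq. (3). `images` is an (M, D) array of
    flattened images; returns the top-k eigenvectors u_k of the
    covariance of the mean-centred data, the eigenvalues lambda_k,
    and the k-dimensional projected features fed to a classifier."""
    mean = images.mean(axis=0)
    phi = images - mean                 # Phi_n: deviation from the mean
    # SVD of Phi avoids the D x D covariance matrix explicitly
    _, s, vt = np.linalg.svd(phi, full_matrices=False)
    u = vt[:k]                          # top-k eigenvectors (eigenfaces)
    lam = (s[:k] ** 2) / phi.shape[0]   # eigenvalues, matching Eq. (3)
    return u, lam, phi @ u.T            # features: M x k
```

Each $\lambda_k$ equals the mean squared projection $(1/M)\sum_n (u_k^T \Phi_n)^2$, so the selected components are exactly the directions of maximal variance.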
\noindent\textbf{3.3 Support Vector Machine}
The support vector machine is a machine learning technique for classification, regression, and other learning tasks. Support vector classification (SVC) maps data from an input space to a high-dimensional feature space, in which an optimal separating hyperplane is established that maximizes the margin between the two classes.
The hinge loss function is shown as:
\[
\ \frac{1}{n}\left [ \sum_{i=1}^{n} \max(0,1-y_{i}(\vec{w}\cdot\vec{x_{i}}-b))\right ]+\lambda \left \| \vec{w} \right \|^2 \hspace{+0.5cm} (4)
\]
where the parameter $\lambda$ determines the trade-off between increasing the margin size and ensuring that each $\vec{x_{i}}$ lies on the correct side of the margin. We use the radial basis function (RBF) and polynomial kernels for SVC, which have been widely discussed \cite{kuo2014kernel} as kernel-based fast SVC for images.
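The objective of Eq. (4) can be evaluated directly; this sketch covers the linear case (with an RBF or polynomial kernel, the inner products would instead be replaced by kernel evaluations in the dual form).

```python
import numpy as np

def hinge_objective(w, b, X, y, lam):
    """Soft-margin SVM objective, Eq. (4):
    (1/n) * sum_i max(0, 1 - y_i (w . x_i - b)) + lam * ||w||^2,
    with labels y_i in {-1, +1}."""
    margins = y * (X @ w - b)                   # signed margins y_i (w.x_i - b)
    hinge = np.maximum(0.0, 1.0 - margins).mean()  # average hinge loss
    return hinge + lam * np.dot(w, w)           # plus the margin regularizer
```

Points with margin at least 1 contribute nothing to the hinge term, so only margin violations and the regularizer shape the optimum.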
\noindent\textbf{3.4 Contrast Enhancement}
Contrast enhancement techniques play a vital role in image processing by bringing out information that lies within a narrow dynamic range of the image. As a major clinical feature, the fundus \cite{crick2003textbook,akram2005common} structure is highly related \cite{tang2015contrast} to image contrast \cite{noyel2017superimposition}. Here, we use histogram equalization for contrast enhancement in retinal images. Compared to the original images, images after histogram equalization reveal light-colored details such as lesions, as shown in Figure \ref{fig:figure3}(b). Images after contrast-limited adaptive histogram equalization (CLAHE) reveal further features, such as areas of retinopathy, in Figure \ref{fig:figure3}(c).
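The global histogram equalization step can be sketched as follows; CLAHE additionally applies the same mapping per tile with a clip limit on the histogram, an extension omitted from this sketch.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram
    so the output levels spread over the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first occupied gray level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]                      # apply the lookup table
```

On a low-contrast fundus image, this stretches the occupied gray levels across the whole dynamic range, which is what makes faint lesions visible.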
\section{Efforts on Retinal Dataset}
The Retina Image Bank (RIB) \cite{rib} is an international clinical project launched by the American Society of Retina Specialists in 2012, which allows retina specialists and ophthalmic photographers around the world to share existing clinical cases online for medical-education purposes, benefiting patients and physicians in developing countries that lack training resources. Any researcher can join as a contributor by dedicating retinal images, or as a visitor using the medical images and labels for non-commercial purposes. Given the recent success of dataset collection efforts such as ImageNet \cite{krizhevsky2012imagenet}, we believe that sorting and mining the clinical labels from the RIB is valuable. With a more developer-friendly information pipeline, both the ophthalmology and computer vision communities can push further on analytical research in medical informatics. Our proposed label collection, EyeNet, is mainly based on the RIB and follows the RIB usage guidelines.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{cam_AAAI_2019_3.pdf}
\caption{We use figure (i, j), where $i = a, b, c$ and $j = 1, 2, 3, 4$, to demonstrate that our proposed method can capture the similar lesion areas as the ophthalmologist's manual annotations, i.e., the yellow sketches.}
\label{fig:figure4}
\end{figure*}
\section{Experiments}
In this section, we describe the implementation details and experiments we conducted to validate our proposed method.
\noindent\textbf{5.1 Label Collection}
For the experiments, EyeNet is randomly divided into three parts: 70\% for training, 10\% for validation, and 20\% for testing. All training data are processed by PCA before the SVM. All classification experiments are trained and tested on EyeNet.
\noindent\textbf{5.2 Setup}
EyeNet has been processed by U-Net to generate a subset with semantic blood-vessel features. For the DNN and transfer learning models, we directly use the RGB images from the retinal dataset. EyeNet will be published online upon acceptance. For CLAHE processing, we use the \texttt{adapthisteq} function from the MATLAB image toolbox.
\noindent\textbf{5.3 Deep Convolutional Neural Networks}
CNNs have demonstrated extraordinary performance in visual recognition tasks \cite{krizhevsky2012imagenet} and hold the state of the art on a great many vision benchmarks and challenges \cite{xie2017aggregated}. With little or no prior knowledge and human effort in feature design, they provide a general and effective method for solving various vision tasks in various domains. This development in computer vision has also shown great potential for assisting or replacing human judgment in vision problems such as medical imaging \cite{esteva2017dermatologist}, which is the topic we address in this paper. In this section, we introduce several baselines in multi-class image recognition and compare their results on EyeNet.
\noindent\textbf{Baseline1-AlexNet}
AlexNet \cite{krizhevsky2012imagenet} introduced a succinct network architecture with 5 convolutional layers, 3 fully connected layers, and ReLU activations \cite{nair2010rectified}.
\noindent\textbf{Baseline2-VGG}
The authors of VGG \cite{simonyan2014very} repeatedly stack small $3\times3$ filters to replace the large filters ($5\times5$, $7\times7$) of traditional architectures. By increasing the depth of the network, VGG achieved better results on ImageNet with fewer parameters.
\noindent\textbf{Baseline3-ResNet}
Residual networks \cite{hedeep}, among the most popular neural networks today, utilize skip connections, or short-cuts, to jump over some layers. With skip connections, the network essentially collapses into a shallower network in the initial phase, which makes it easier to train, and then expands its layers as it learns more of the feature space.
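The skip-connection idea can be sketched in a few lines. This is a toy two-layer residual branch in NumPy, not the actual ResNet18 block; weight shapes are illustrative.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Residual (skip) connection: the block learns a residual F(x)
    and adds the input back, y = relu(F(x) + x). With small or zero
    weights the block stays close to the identity, which is what
    makes deep residual networks easy to train initially."""
    relu = lambda z: np.maximum(z, 0.0)
    f = w2 @ relu(w1 @ x)      # two-layer residual branch F(x)
    return relu(f + x)         # skip connection, then nonlinearity
```

With zero weights the branch contributes nothing and the block reduces to `relu(x)`, i.e., the network behaves like a shallower identity-like stack before the residuals are learned.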
\noindent\textbf{Baseline4-SqueezeNet}
In the real world, medical imaging tasks usually require a small and efficient model to fit limited resources. While some deep neural networks cost several hundred megabytes to store, SqueezeNet \cite{iandola2016squeezenet}, which adopts model compression techniques, achieves AlexNet-level accuracy with a model roughly 500 times smaller.
\noindent\textbf{5.4 Transfer Learning}
We use a transfer learning framework from normalized ImageNet \cite{krizhevsky2012imagenet} to EyeNet to address the small-sample issue in computational retinal visual analytics. Starting from a sufficiently trained classification model, transfer learning addresses the challenge of machine learning with a minimal amount of training labels and drastically reduces the data requirements. The first few layers of DNNs learn features similar to Gabor filters and color blobs; these features appear not to be specific to any particular task or dataset and are thus applicable to other datasets and tasks \cite{yosinski2014transferable}. Our experiments show significant improvement after we apply pretrained parameters to our deep learning models; see Table \ref{table:table2} and Table \ref{table:table5}.
\begin{table}[t]
\begin{center}
\scalebox{1.0}{
\begin{tabular}{| l | l | l |}
\hline
\textbf{Hybrid-Ratio} & \textbf{RBF} & \textbf{Polyn.} \\ \hline
0\%~:~100\% & 0.8159 & 0.8391 \\ \hline
40\%~:~60\% & 0.8371 & 0.8381 \\ \hline
50\%~:~50\% & \textbf{0.9002} & 0.8632 \\ \hline
60\%~:~40\% & 0.8881 & \textbf{0.9040} \\ \hline
100\%~:~0\% & 0.8324 & 0.8241 \\
\hline
\end{tabular}}
\caption{Accuracy comparison of the two-stream model with the radial basis function (RBF) and polynomial kernels. We use the hybrid-ratio \cite{yang2018novel} of the mixed weighted voting between two multi-class SVCs trained on images processed by U-Net and CLAHE.}
\vspace{-0.8cm}
\label{table:table1}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scalebox{1.0}{
\begin{tabular}{|l|c|c|}
\hline
\textbf{Model} & \textbf{Pretrained} & \textbf{Random Init.} \\ \hline
AlexNet & 0.7912 & 0.4837 \\ \hline
VGG11 & 0.8802 & \textbf{0.7579} \\ \hline
VGG13 & 0.8721 & 0.7123 \\ \hline
ResNet18 & \textbf{0.8805} & 0.7250 \\ \hline
SqueezeNet & 0.8239 & 0.5625 \\
\hline
\end{tabular}}
\caption{Accuracy comparison of five DNN baselines on EyeNet.
\vspace{-0.3cm}}
\label{table:table2}
\end{center}
\vspace{-0.3cm}
\end{table}
\begin{table}[t]
\begin{center}
\scalebox{1.0}{
\begin{tabular}{|l|c|c|}
\hline
\textbf{Model} & \textbf{Pretrained} & \textbf{Random Init.} \\ \hline
AlexNet & 0.7952 & 0.4892 \\ \hline
VGG11 & 0.8726 & 0.7583 \\ \hline
VGG13 & \textbf{0.8885} & \textbf{0.7588} \\ \hline
ResNet18 & 0.8834 & 0.6741 \\ \hline
SqueezeNet & 0.8349 & 0.5721 \\
\hline
\end{tabular}}
\caption{Accuracy comparison of five DNN baselines on EyeNet.
\vspace{-0.8cm}}
\label{table:table5}
\end{center}
\end{table}
\noindent\textbf{5.5 Two-Streams Results}
\vspace{-0.1cm}
All SVMs are implemented in MATLAB with the libsvm \cite{chang2011libsvm} module. We separate both the original retinal dataset and the subset into three parts: a 70\% training set, a 20\% test set, and a 10\% validation set. By training two multi-class SVM models on both the original EyeNet and the subset, we implement a weighted voting method to identify the candidate retinal symptom. We have tested different weight ratios, denoted $Hybrid$-$Ratio$, between the two SVM models \{images over CLAHE : images over U-Net\}, with the resulting accuracies listed in Table \ref{table:table1}. We verified on the validation set that the model does not over-fit, with an accuracy difference of about 2.03\%.
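The weighted vote between the two streams can be sketched as follows. This is a minimal illustration, assuming each stream outputs per-class scores; the function name, score vectors, and default ratio are hypothetical, not taken from our implementation:

```python
def weighted_vote(scores_clahe, scores_unet, w_clahe=0.6):
    """Mix per-class scores from the two SVM streams with the
    hybrid-ratio weights and return the index of the winning class."""
    mixed = [w_clahe * a + (1.0 - w_clahe) * b
             for a, b in zip(scores_clahe, scores_unet)]
    # argmax over the blended scores picks the candidate symptom
    return max(range(len(mixed)), key=mixed.__getitem__)
```

With a 50:50 ratio, for example, `weighted_vote([0.2, 0.8], [0.9, 0.1], w_clahe=0.5)` blends the two streams to `[0.55, 0.45]` and selects class 0.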
\noindent\textbf{5.6 Deep Neural Networks Results}
All DNNs are implemented in PyTorch. We use identical hyperparameters for all models. The training lasts 400 epochs; the first 200 epochs use a learning rate of $10^{-4}$ and the second 200 use $10^{-5}$. In addition, we apply random data augmentation during training: in every epoch, each training sample has a $70\%$ probability of being affinely transformed by one of the operations in \{flip, rotate, transpose\}$\times$\{random crop\}. Although ImageNet and our retinal dataset are quite different, using weights pretrained on ImageNet rather than random initialization boosts the test accuracy of every model by 5 to 15 percentage points, as shown in Table \ref{table:table2}. Moreover, pretrained models tend to converge much faster than randomly initialized ones, as suggested in Figure \ref{fig:figure4}. The performance of DNNs on our retinal dataset can thus greatly benefit from knowledge of other domains.
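The augmentation policy above can be sketched in plain Python on 2-D arrays (a toy stand-in for image tensors); the helper names and the pure-Python representation are illustrative assumptions, not our actual PyTorch pipeline:

```python
import random

def flip(img):       # horizontal flip: reverse each row
    return [row[::-1] for row in img]

def rotate(img):     # rotate 90 degrees clockwise
    return [list(row) for row in zip(*img[::-1])]

def transpose(img):  # swap rows and columns
    return [list(row) for row in zip(*img)]

def random_crop(img, size):
    """Crop a random size x size window out of the image."""
    h, w = len(img), len(img[0])
    top = random.randrange(h - size + 1)
    left = random.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]

def augment(img, crop_size):
    """With 70% probability, apply one affine op followed by a random crop."""
    if random.random() < 0.7:
        op = random.choice([flip, rotate, transpose])
        img = random_crop(op(img), crop_size)
    return img
```

In practice the same policy maps directly onto `torchvision` transforms; the sketch only makes the 70\%-probability branching explicit.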
\noindent\textbf{5.7 Neuron Visualization for Medical Images}
Importantly, we verified the hypothesis that vessel-based segmentation and contrast enhancement are two coherent features for deciding the type of retinal disease. Using the class activation mapping technique introduced by \cite{zhou2015cnnlocalization}, we visualized feature maps of the final convolutional layer of ResNet18 (one of our deep learning baselines). We notice that the features learned by the deep learning models agree with our intuitions behind the two-stream machine learning model. In fact, in the clinical diagnosis process, ``vessel patterns'' and ``fundus structure'' are also the two most crucial features for identifying the symptoms of different diseases; these two types of features cover more than 80\% of retinal diseases \cite{crick2003textbook,akram2005common}.
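The class activation map computation of \cite{zhou2015cnnlocalization} reduces to weighting the final convolutional feature maps by the classifier weights of the target class. A minimal pure-Python sketch, assuming a `C x H x W` nested-list layout for the feature maps (a simplification of the actual tensor pipeline):

```python
def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weight each of the C final-layer feature maps (C x H x W) by
    the target class's classifier weight, sum over channels, keep the
    positive evidence, and normalize to [0, 1]."""
    w = fc_weights[class_idx]  # length-C weight vector for the class
    C, H, W = len(feature_maps), len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[sum(w[c] * feature_maps[c][i][j] for c in range(C))
            for j in range(W)] for i in range(H)]
    peak = max(max(max(v, 0.0) for v in row) for row in cam)
    return [[max(v, 0.0) / peak if peak > 0 else 0.0 for v in row]
            for row in cam]
```

Upsampled to the input resolution, the resulting heat map highlights the retinal regions (vessels, fundus structures) that drive the prediction.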
\section{Conclusion and Future Work}
In this work, we have designed a novel hybrid model for visually assisted diagnosis based on the SVM and U-Net. This model achieves a higher accuracy, 90.40\%, than the other pre-trained DNN models, serving as an aid for ophthalmologists. We also propose the EyeNet to benefit the medical informatics research community. Finally, since our dataset contains not only images but also text information about the images, image captioning and Visual Question Answering \cite{huang2017vqabq,huang2017novel,huang2017robustness} based on retinal images are interesting future directions. Our work may also help remote rural areas without local ophthalmologists to screen for retinal diseases in the future.
\bibliographystyle{splncs}
\section{Introduction}
For more than a century, many studies have been devoted to the dynamical structure of the \textrm{Main Asteroid Belt}, a large concentration of asteroids with semi-major axes between those of Mars and Jupiter. Daniel Kirkwood proposed and then discovered the famous gaps in the distribution of main belt asteroids, which now bear his name. The \textrm{``Kirkwood gaps''} are almost vacant ranges in the distribution of semi-major axes, corresponding to the locations of the strongest mean motion resonances with Jupiter, which occur when the ratio of the orbital motions of an asteroid and a planet (in this case Jupiter) can be expressed as the ratio of two small integers. We now know that mean motion resonances with all the planets of the solar system exist throughout the main belt, rendering it a dynamically complex region.
The gravitational interactions in the solar system also cause secular perturbations, which affect the orbits on long timescales. If the frequency, or a combination of frequencies, of the variations of the orbital elements of a small body becomes nearly commensurate to those of the planetary system, a secular resonance occurs, amplifying the effect of the perturbations. The importance of secular resonances has been pointed out already in the 19th century by \citet{Leverrier,Tisserand} and \citet{Charlier1,Charlier2}, who noticed a match between the $\nu_{\rm 6}$ secular resonance and the inner end of the main belt. A century later, thanks to the works of \citet{Froeschle1989,Morbidelli1991,Knezevic1991,Milani1992} and \citet{Michel1997} amongst others, we have a map of the locations of the most important secular resonances throughout the solar system. We thus have a clear picture of how the dynamical environment of the solar system, and consequently of the main belt, is shaped by the major planets.
Recently, in \citet{Novakovic2015a}, we reported on the role of the linear secular resonance with (1)~Ceres in the post-impact orbital evolution of asteroids belonging to the (1726) Hoffmeister family. Contrary to the previous belief that massive asteroids could influence the orbits of smaller bodies only through mutual close encounters \citep{Nesvorny2002} and perhaps low-order mean-motion resonances \citep{Christou2012}, we now have a concrete example that they can strongly affect the secular evolution of the orbits of the latter through secular resonances. In addition, \citet{Li2016} found that a secular resonance between two members of the Himalia Jovian satellite group can affect their orbital evolution, and \citet{Carruba2016} showed that secular resonances with Ceres tend to drive away asteroids in the orbital neighborhood of Ceres, giving further evidence that secular resonances in general can be important even if the perturbing body is relatively small. These results raise a number of questions regarding the potential role of such resonances in the dynamical evolution of the main asteroid belt in general.
The scope of this work is to improve our general picture of the dynamical structure of the main belt, by studying the importance of the secular perturbations caused by the two most massive asteroids (1)~Ceres and (4)~Vesta.
\section{Methodology}
Here we describe the methods we followed to obtain a general picture of the effect of secular resonances with massive asteroids across the main belt.
Our main analysis is based on numerical integrations of the orbits of test particles with initial conditions such that they cross the path of the resonance we are interested in, in order to evaluate the effect of the latter on their orbits.
To do so we first need to decide which secular resonances we should focus on.
Naturally, the first candidate for a perturbing body is (1)~Ceres, being the most massive asteroid, and having been proven to have a significant effect on the members of the Hoffmeister asteroid family through its linear nodal secular resonance\footnote{As in the usual nomenclature, $g$ \& $g_i$ stand for the proper frequency of the longitude of perihelion, and $s$ \& $s_i$ stand for the proper frequency of the longitude of the ascending node of the perturbed asteroid and the $i^{\rm {th}}$ perturbing body respectively. Throughout this paper we use the subscript ``c'' for Ceres and ``v'' for Vesta, while numbered subscripts refer to the fundamental frequencies of the Solar system.} $\nu_{\rm {1c}}=s-s_{\rm c}$ \citep{Novakovic2015a}. We also consider secular resonances with the second largest asteroid, (4)~Vesta. Although Vesta is less massive than Ceres, we wish to evaluate whether the perturbations arising from secular resonances with it are important for the dynamical evolution of asteroidal orbits, compared to other dynamical mechanisms. We focus on the linear secular resonances with (1)~Ceres and (4)~Vesta, namely $\nu_{\rm {1c}}=s-s_{\rm c}$, $\nu_{\rm {c}}=g-g_{\rm c}$ and $\nu_{\rm {1v}}=s-s_{\rm v}$, $\nu_{\rm {v}}=g-g_{\rm v}$, as these resonances are expected to give rise to the strongest perturbations on the orbits of asteroids.
The first step of our study is to locate the path of each secular resonance we are interested in, across the main belt, in order to choose initial conditions for our test particles accordingly. While the locations of the secular resonances can easily be found analytically \citep{Knezevic1991}, giving a clear overall idea, we have found that the error of this approach for high eccentricities and inclinations is too high for the needs of our study, preventing us from accurately selecting the appropriate initial conditions that ensure interaction of the asteroids with the secular resonances.
We have thus decided to proceed with a different approach, based on the synthetic proper elements \citep{Knezevic2000}, and more specifically the proper frequencies, of the main belt asteroids, as released by the AstDyS service\footnote{available at: ${http://hamilton.dm.unipi.it/astdys2/}$}. From the catalog of proper elements we extract the proper frequencies of (1)~Ceres ($g_{\rm c}=54.07''/{\rm {yr}},\,s_{\rm c}=-59.17''/{\rm {yr}}$) and (4)~Vesta ($g_{\rm v}=36.87''/{\rm {yr}},\,s_{\rm v}=-39.59''/{\rm {yr}}$). Then, for a given secular resonance that we want to visualize, we select from the catalog those asteroids with proper frequencies that satisfy the corresponding resonant equation, within some margin corresponding to the strength of each resonance. To decide on the value of this margin we benefited from the analytical work of \citet{Knezevic1991}, where they use $2''/{\rm {yr}}$ for the most powerful secular resonance $\nu_{\rm 6}$, and $0.5''/{\rm {yr}}$ for weaker, fourth-degree resonances such as the $g+s-g_{\rm 6}-s_{\rm 6}$. Expecting that the secular resonances with massive asteroids should be relatively weak, we used $0.2''/{\rm {yr}}$ as a margin\footnote{The width of $0.2''/{\rm {yr}}$ used does not necessarily correspond to the width of the librating region of each resonance, which may vary across the main belt, but rather serves as a probe to easily visualize the paths of the resonances.}. The asteroids with proper frequencies within these margins should lie along the path of the secular resonance in question. \autoref{fig:pathsv1c} shows an example of this approach for the secular resonance $\nu_{\rm {1c}}$, where the analytical solution is also plotted for comparison. Note the difference between the two methods at high eccentricity and inclination.
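The catalog selection step amounts to a simple frequency filter. A minimal sketch, assuming a list-of-dictionaries layout for the proper-elements catalog (the key names are ours, not the actual AstDyS file format):

```python
S_CERES = -59.17  # ''/yr, proper nodal frequency of (1) Ceres
MARGIN = 0.2      # ''/yr, probe width adopted in the text

def resonant_asteroids(catalog, s_perturber=S_CERES, margin=MARGIN):
    """Keep asteroids whose nodal frequency s satisfies |s - s_c| <= margin,
    i.e. those lying along the path of the nu_1c secular resonance."""
    return [ast for ast in catalog
            if abs(ast["s"] - s_perturber) <= margin]
```

The same filter, with $g$ in place of $s$ and the appropriate perturber frequency, traces the $\nu_{\rm c}$, $\nu_{\rm 1v}$ and $\nu_{\rm v}$ paths.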
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{ai_v1c.png}
\caption{The location of the $\nu_{\rm {1c}}$ secular resonance on the $(a_{\rm p},\sin{i_{\rm p}})$ plane. Solid lines represent the analytical solution for different values of the eccentricity (see legend). The colored dots show the resonant asteroids within $0.2''/{\rm {yr}}$, while the different colors correspond to values of eccentricity centered on those of the analytical solutions and spanning 0.05 in each direction.}
\label{fig:pathsv1c}
\end{center}
\end{figure}
Having obtained the location of each secular resonance, we can decide which parts of the main belt should be studied. There is no strict rule for selecting initial conditions other than proximity to the location of the secular resonance we examine in each case. We thus chose initial conditions in such a way that a wide range of the proper elements of the main belt asteroids is sampled sufficiently for each case, as we describe individually below.
After selecting which parts we want to study, we proceed in the following way: we create groups of 20 fictitious particles with similar initial conditions and integrate their orbits for $50 \,{\rm {Myrs}}$, using the Orbit9 propagator\footnote{available within the $OrbFit$ package at: ${http://adams.dm.unipi.it/\sim orbmaint/orbfit/}$}, within two dynamical models: one including as main perturbers the four giant planets, from Jupiter to Neptune, and the massive asteroid relevant for each resonance\footnote{In these simulations we used values of $4.76\times10^{-10}$ and $1.3\times10^{-10}\,M_{\odot}$ for the masses of Ceres and Vesta, respectively \citep{Baer2011,Kuzmanoski2010}.}, and another with only the four planets, which serves as a reference. Both dynamical models also incorporate the Yarkovsky effect as a secular drift in semi-major axis. This drift is expected to force the test particles to cross the resonance, causing the simulation that includes the massive asteroid as a perturber to reveal the resonant effect. We selected a value of ${{\rm d}a\over{\rm d}t}=4\cdot10^{-4}\,{\rm AU}\cdot {\rm {Myr}}^{-1}$ for the strength of the Yarkovsky-induced drift, which may be considered a typical reference value for asteroids of 1~km in diameter \citep{Vok2015}. This value allows for reasonably short integration times ($50 \,{\rm {Myrs}}$) while leaving enough time for the test particles to evolve inside the resonance so that the respective perturbations can be investigated. From the numerical integrations we obtain the time evolution of the asteroids' mean orbital elements. We then partition these in a running-window manner in order to compute the time series of the synthetic proper elements \citep{Knezevic2000} for each asteroid. The comparison of the evolution of the test particles' proper orbital elements between the two dynamical models reveals the role of the secular resonances with the massive asteroids.
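As a quick consistency check of this setup, the total semi-major axis drift accumulated over one integration follows directly from the adopted rate (a trivial sketch using the values quoted above):

```python
DA_DT = 4e-4   # AU/Myr, Yarkovsky drift rate for a ~1-km asteroid
T_INT = 50.0   # Myr, integration time span

# total semi-major axis swept by a test particle across the resonance path
total_drift = DA_DT * T_INT  # AU
```

The resulting 0.02 AU sweep is comfortably wider than the probe width of the resonances, which is why the drifting particles are guaranteed to cross them.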
\section{Results}
\subsection{Secular resonances with Ceres}
\subsubsection{The $\nu_{\rm {1c}}$ resonance}
The first secular resonance we studied is the linear nodal secular resonance $\nu_{\rm {1c}}$. \autoref{fig:v1c} shows the proper semi-major axis versus the sine of proper inclination and the proper eccentricity projections of the main asteroid belt. The resonant asteroids, the ones that satisfy the relation $|s-s_{\rm c}|\leq0.2''/{\rm {yr}}$, are highlighted, revealing the location of the resonance. Since the secular resonances are represented as surfaces in the three dimensional proper element space, we use a color code to grasp the third dimension when projecting on the plane.
We notice that the secular resonance crosses the middle $(2.5<a_{\rm p}<2.82\,{\rm {AU}})$ and outer $(2.82<a_{\rm p}<3.26\,{\rm {AU}})$ parts of the main belt. In the top panel of \autoref{fig:v1c}, we see that this resonance's projection on the $(a_{\rm p},e_{\rm p})$ plane appears as a wide strip that crosses the middle belt at an angle. This strip has a well defined lower boundary corresponding to zero inclination, while the upper boundary is due to the gap in the distribution of asteroids at $\sin{i_{\rm p}}\sim0.3$. In the outer belt the resonant asteroids are fewer and more localized: two concentrations are found in the region $2.82<a_{\rm p}<2.9$ and another two at high inclinations past $3 \,{\rm {AU}}$, corresponding to asteroid families as will be discussed in the next section.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{v1c.png}
\caption{The location of the $\nu_{\rm {1c}}$ secular resonance on the $(a_{\rm p},e_{\rm p})$ plane (top), and on the $(a_{\rm p},\sin{i_{\rm p}})$ plane (bottom). The gray dots represent all main belt asteroids, and the colored points the resonant ones for different inclinations (top panel) and eccentricities (bottom panel) according to the respective color codes given in the legend.}
\label{fig:v1c}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{pathsb.png}
\caption{The location of the groups of initial conditions for the simulations of the $\nu_{\rm {1c}}$ secular resonance on the $(a_{\rm p},e_{\rm p})$ plane. The gray dots represent all main belt asteroids, and the colored points the resonant ones for different inclinations according to the color code. Black circles denote the location and black arrows the Yarkovsky drift direction of each group of initial conditions.}
\label{fig:v1cinit}
\end{center}
\end{figure}
In order to study the effect of the resonance, we considered three regions that are crossed by the resonance. Since as discussed above this secular resonance forms a strip like shape in the middle belt on the $(a_{\rm p},e_{\rm p})$ plane, it is intuitive to choose the initial conditions for our test particles just outside this strip, so they are forced to cross the resonance by drifting in semi-major axis due to the Yarkovsky effect. This idea will also guide the selection of the initial conditions for the other secular resonances. Therefore, we can distinguish the relevant regions into the very low and moderate inclination parts of the middle belt, and the high inclination part of the outer belt.
We created a number of groups of 20 test particles for each region, as shown in \autoref{fig:v1cinit}. The initial conditions of the particles within each group were generated with a variance of the order of $10^{-3}$ in the semi-major axis, eccentricity and inclination values, and identical random angular elements. We then integrated their orbits numerically within the two dynamical models explained above. From the resulting time series of their mean orbital elements we calculated the time evolution of their synthetic proper elements \citep{Knezevic2000} using running windows of $10\,{\rm {Myrs}}$ length, shifted in $2\,{\rm {Myrs}}$ steps. As this secular resonance is a linear one involving only the proper frequency of the precession of the ascending node $(s)$ of the test particles, it only produces perturbations in their proper inclination and not in their eccentricity. Therefore we are only interested in the evolution of the proper inclination of the affected asteroids. The situation is the opposite for the secular resonances in which the proper frequency of the precession of the longitude of perihelion $(g)$ is involved, which perturb only the eccentricities and not the inclinations of the asteroids.
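The running-window partitioning can be sketched as follows. This is a simplified stand-in for the full synthetic-proper-element computation, which averages out short-period and forced oscillations within each window; here we just take the window mean:

```python
def running_windows(times, values, width=10.0, step=2.0):
    """Return (window_center, window_mean) pairs over 'width'-Myr windows
    shifted in 'step'-Myr increments, as used to build the proper-element
    time series."""
    out = []
    start = times[0]
    while start + width <= times[-1] + 1e-9:
        sel = [v for t, v in zip(times, values) if start <= t < start + width]
        out.append((start + width / 2.0, sum(sel) / len(sel)))
        start += step
    return out
```

For a 50-Myr integration this yields 21 overlapping windows, giving the time resolution needed to follow the slow resonant excursions of the proper elements.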
A representative example of the results for each region is shown in \autoref{fig:example}. The left panels show the evolution in time of the proper inclination of a single particle belonging to a group of initial conditions, plotted over the time evolution of the resonant critical angle $\sigma=\Omega-\Omega_{\rm c}$. We see that the crossing of the resonance, corresponding to the libration of the critical angle, results in excitation of the proper inclination when Ceres is included in the model as a perturber, whereas for the same initial condition the inclination of the orbit remains stable if we do not include Ceres. The right panels show the evolution in the proper semi-major axis versus sine of proper inclination plane $(a_{\rm p},\sin{i_{\rm p}})$ of the 20 particles of each group, in the two dynamical models.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{exv1c.png}
\caption{Orbital evolution due to the secular resonance $\nu_{\rm {1c}}$ for the three representative regions. Top: low inclination middle belt; middle: high inclination middle belt; bottom: high inclination outer belt. Left panels: in black, the evolution of the critical angle $\sigma=\Omega-\Omega_{\rm c}$ of a test particle. The red line shows the evolution of the sine of proper inclination of the same test particle with Ceres included in the model, and the blue line the same quantity without Ceres. Right panels: the evolution of the 20 test particles of the whole group in the two dynamical models, red with Ceres and blue without.}
\label{fig:example}
\end{center}
\end{figure}
In \citet{Novakovic2015a} we have shown that asteroids entering the resonance experience oscillations in their inclination for as long as their critical angle librates, as seen in the left panel of \autoref{fig:example}. In order to quantify the effect of the perturbation induced by Ceres through the secular resonance, we measure the maximal change in the proper inclination of the test particles, as they cross the resonance. For the low inclination middle belt we have measured an average amplitude of variations of the order of $\Delta\sin{i_{\rm p}}=7\cdot10^{-4}$ for the groups of test particles with semi-major axes close to that of Ceres $(a_{\rm {p(Ceres)}}=2.767\,{\rm {AU}})$, decreasing to $4\cdot10^{-4}$ as we move to lower semi-major axes towards $2.6\,{\rm {AU}}$ for our innermost group. For the high inclination middle belt we found an average amplitude of $3\cdot10^{-4}$. In the farther part of the outer belt ($a_{\rm p}>3\,{\rm {AU}}$), the amplitude of the oscillations is substantially smaller, around $1-2\cdot10^{-4}$, making it more difficult to separate the effect of the secular resonance from the other perturbing mechanisms that act in this region.
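The quantification above reduces to a peak-to-peak measurement on the proper-element time series, compared between the two dynamical models. A minimal sketch (the function names are ours):

```python
def max_change(series):
    """Peak-to-peak excursion of a proper-element time series,
    e.g. sin(i_p) sampled over the running windows."""
    return max(series) - min(series)

def resonant_excess(with_perturber, without_perturber):
    """Excursion attributable to the secular resonance: the change in the
    model including the massive asteroid minus that of the reference model."""
    return max_change(with_perturber) - max_change(without_perturber)
```

Applied to the $\nu_{\rm 1c}$ simulations, this is the quantity reported as $\Delta\sin{i_{\rm p}}$ in the text.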
\subsubsection{The $\nu_{\rm {c}}$ resonance}
Following the same procedure, we find the location of the $\nu_{\rm {c}}$ secular resonance by plotting the asteroids that satisfy the resonant relation $|g-g_{\rm c}|\leq0.2''/{\rm {yr}}$. The result is shown in \autoref{fig:vc}, which presents the main belt in the $(a_{\rm p},e_{\rm p})$ and $(a_{\rm p},\sin{i_{\rm p}})$ planes with the resonant asteroids highlighted in color. This secular resonance also crosses mostly the middle part of the main belt, as well as the high inclination part of the outer belt.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{vc.png}
\caption{The location of the $\nu_{\rm c}$ secular resonance on the $(a_{\rm p},e_{\rm p})$ plane (top), and on the $(a_{\rm p},\sin{i_{\rm p}})$ plane (bottom). The gray dots represent all main belt asteroids, and the colored points the resonant ones for different inclinations (top panel) and eccentricities (bottom panel) according to the respective color codes given in the legend.}
\label{fig:vc}
\end{center}
\end{figure}
Continuing our previous approach, we distinguish three regions on which to focus our study: the low eccentricity $(e_{\rm p}<0.05)$ and high eccentricity $(e_{\rm p}>0.2)$ parts of the middle belt, and the outer belt. Representative examples of the behavior of asteroid orbits in these three regions, resulting from our numerical integrations of test particles, are shown in \autoref{fig:examplevc}. We determine the strength of this resonance by the amplitude of the induced oscillations in proper eccentricity, as shown in the example of \autoref{fig:examplevc}. For the low eccentricity middle belt we found a maximum amplitude of $0.01$ for test particles close to Ceres (in terms of semi-major axis), decreasing to $0.003$ as we move further away. For the high eccentricity middle belt and inner part of the outer belt, the measured amplitude was of the order of $0.003$, while the relative strength of the perturbations from other causes increased significantly. For the farther part of the outer belt ($a_{\rm p}>3\,{\rm {AU}}$), although we have a clear signature from the critical angle that the test particles cross the secular resonance, its impact on the eccentricities of the orbits is effectively zero, as the two models give statistically indistinguishable results, as can be seen in the bottom part of \autoref{fig:examplevc}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{exvc.png}
\caption{Orbital evolution due to the secular resonance $\nu_{\rm c}$ for the three representative regions. Top: low eccentricity middle belt; middle: high eccentricity middle belt; bottom: outer belt. Left panels: in black, the evolution of the critical angle $\sigma=\varpi-\varpi_{\rm c}$ of a test particle. The red lines show the evolution of the proper eccentricity of the same test particle with Ceres included in the model, and the blue lines the same quantity without Ceres. Right panels: the evolution of the 20 test particles of the whole group in the two dynamical models, red with Ceres and blue without.}
\label{fig:examplevc}
\end{center}
\end{figure}
\subsection{Secular resonances with Vesta}
\subsubsection{The $\nu_{\rm {1v}}$ resonance}
As the asteroid (4)~Vesta is located in the inner $(2<a_{\rm p}<2.5\,{\rm {AU}})$ main belt, we expect the secular resonances involving it to predominantly affect this region. Indeed, \autoref{fig:v1v} shows the location of the $\nu_{\rm {1v}}$ secular resonance, obtained by highlighting the asteroids with proper frequencies that satisfy the relation $|s-s_{\rm v}|\leq0.2''/{\rm {yr}}$. We see that the inner belt is crossed by the resonance over a wide range of eccentricities and inclinations, while there are also some resonant asteroids with high inclinations in the middle belt.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{v1v.png}
\caption{The location of the $\nu_{\rm {1v}}$ secular resonance on the $(a_{\rm p},e_{\rm p})$ plane (top) and on the $(a_{\rm p},\sin{i_{\rm p}})$ plane (bottom). The gray dots represent all main belt asteroids, and the colored points the resonant ones for different inclinations (top panel) and eccentricities (bottom panel) according to the respective color codes given in the legend.}
\label{fig:v1v}
\end{center}
\end{figure}
The situation with this resonance is slightly different than with the ones involving Ceres. When we examine the path of the resonance in \autoref{fig:v1v}, we see that the high inclination region of the inner belt is also highly eccentric, while the high inclination resonant region of the middle belt also has a low eccentricity part. This leads to the result we present in \autoref{fig:examplev1v}: the high inclination part of the inner belt, despite being close in semi-major axis to Vesta, shows no distinctive evolution caused by the resonance, whereas the resonant region in the middle belt has a very small $(\sim0.0002)$, but identifiable, signature of inclination excitation due to the resonance. The low inclination part of the inner belt shows, as expected, the largest amplitudes of oscillations in the sine of inclination, of the order of $0.004$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{exv1v.png}
\caption{Orbital evolution due to the secular resonance $\nu_{\rm {1v}}$ for the three representative regions. Top: low inclination inner belt; middle: high inclination inner belt; bottom: high inclination middle belt. Left panels: in black, the evolution of the critical angle $\sigma=\Omega-\Omega_{\rm v}$ of a test particle. The red line shows the evolution of the sine of proper inclination of the same test particle with Vesta included in the model, and the blue line the same quantity without Vesta. Right panels: the evolution of the 20 test particles of the whole group in the two dynamical models, red with Vesta and blue without.}
\label{fig:examplev1v}
\end{center}
\end{figure}
\subsubsection{The $\nu_{\rm {v}}$ secular resonance}
The last secular resonance we studied is the one involving the precession frequency of the perihelion of (4)~Vesta, namely $\nu_{\rm {v}}$. \autoref{fig:vv} shows the location of the asteroids whose proper frequencies $g$ satisfy the relation $|g-g_{\rm v}|\leq0.2''/{\rm {yr}}$, revealing the location of the secular resonance across the main belt as in the previous cases. In the $(a_{\rm p},\sin{i_{\rm p}})$ plane we notice the clearly defined path of the resonance, crossing the inner belt from low to moderate inclinations, continuing to the high inclination part of the middle belt and on to a very high inclination range of the outer main belt, always covering a very wide range of eccentricities, as can be seen in the $(a_{\rm p},e_{\rm p})$ plane. For this resonance we focused our numerical simulations on the inner belt only. The method we use for revealing the effect of each resonance depends on the action of the Yarkovsky effect to force the test particles through the secular resonances. This makes the scheme difficult to apply if a secular resonance's path is parallel, or almost parallel, to the $a_{\rm p}$ axis, as is the case for the $\nu_{\rm {v}}$ secular resonance in the middle belt, and for this reason we could not investigate that part. In the inner belt we found oscillations in proper eccentricity with amplitudes of the order of $0.004$, as shown in \autoref{fig:exvv}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{vv.png}
\caption{The location of the $\nu_{\rm v}$ secular resonance on the $(a_{\rm p},e_{\rm p})$ plane (top), and on the $(a_{\rm p},\sin{i_{\rm p}})$ plane (bottom). The gray dots represent all main belt asteroids, and the colored points the resonant ones for different inclinations (top panel) and eccentricities (bottom panel) according to the respective color codes given in the legend.}
\label{fig:vv}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{exvv.png}
\caption{Orbital evolution due to the secular resonance $\nu_{\rm {v}}$ for the inner belt. Left panel: in black, the evolution of the critical angle $\sigma=\varpi-\varpi_{\rm v}$ of a test particle. The red line shows the evolution of the proper eccentricity of the same test particle with Vesta included in the model, and the blue line the same quantity without Vesta. Right panel: the evolution of the 20 test particles of the whole group in the two dynamical models, red with Vesta and blue without.}
\label{fig:exvv}
\end{center}
\end{figure}
The results for all the cases we investigated are summarized in \autoref{Table:1}. Where ranges are given, the largest value corresponds to asteroids with proper semi-major axes close to that of the respective perturbing body (Ceres or Vesta). We notice that the maximal changes in proper inclination and eccentricity caused by Ceres are almost twice as large as those caused by Vesta, a consequence of the fact that Ceres is approximately 3.5 times more massive than Vesta, and thus, as expected, exerts stronger perturbations.
\begin{table}[]
\centering
\caption{Summary table of the maximal changes in the proper elements of the main belt asteroids caused by the secular resonances with Ceres and Vesta.}
\label{Table:1}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|l|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Secular resonance}} & \multirow{2}{*}{Measured quantity} & \multicolumn{3}{c|}{Range in $a_{\rm p}\,({\rm {AU}})$} \\ \cline{3-5}
\multicolumn{1}{|c|}{} & & $2<a_{\rm p}<2.5$ & $2.5<a_{\rm p}<3$ & $a_{\rm p}>3$ \\ \hline
$\nu_{\rm {1c}}$ & $\Delta\sin(i_{\rm p})$ & - & $(4-7)\cdot10^{-4}$ & $2\cdot10^{-4}$ \\ \hline
$\nu_{\rm c}$ & $\Delta e_{\rm p}$ & - & $3\cdot10^{-3}-1\cdot10^{-2}$ & $3\cdot10^{-3}$ \\ \hline
$\nu_{\rm {1v}}$ & $\Delta\sin(i_{\rm p})$ & $4\cdot10^{-4}$ & $2\cdot10^{-4}$ & - \\ \hline
$\nu_{\rm {v}}$ & $\Delta e_{\rm p}$ & $4\cdot10^{-3}$ & - & - \\ \hline
\end{tabular}
}
\end{table}
\subsection{Asteroid families}
One important aspect of the action of the secular resonances with massive asteroids we have presented is the effect they may have on the orbital evolution of asteroid family members. Since asteroid families are more or less compact in the space of proper elements, the action of the secular resonances should leave a distinct signature, identifiable merely from the shape of the family member distributions in the different projections of the proper elements. Indeed, in \citet{Novakovic2015a} we have shown that the asymmetric shape of the Hoffmeister family in the proper semi-major axis versus proper inclination plane $(a_{\rm p},\sin{i_{\rm p}})$ is caused by the $\nu_{\rm {1c}}$ secular resonance with Ceres. In \citet{Novakovic2016} we have shown that this case is not unique, as the asteroid families of (1128) Astrid and (1521) Seinajoki also owe their irregular shapes in the proper elements space to the $\nu_{\rm {1c}}$ secular resonance.
Although it is beyond the scope of this work to study individual asteroid families for possible interactions with the secular resonances, as our aim is a more global view of their importance, we find it worthwhile to indicate which asteroid families are expected to be influenced. This is done in a way similar to our numerical method of finding the location of the resonances: instead of searching the whole catalog of proper elements for resonant asteroids, we search only among those asteroids that belong to asteroid families. For this we use the classification of \citet{Milani2014}. In this way we can find which families are crossed by the secular resonances we present here and which, if any, show signs of interaction with them.
\subsubsection{Asteroid families interacting with the $\nu_{\rm {1c}}$ secular resonance}
Using the method described above, we find that ten asteroid families have a significant number of their members currently in resonance\footnote{By significant we mean a number of the order of at least ten asteroids in regular, non-chaotic orbits. We make this discrimination as there may be asteroids with proper frequencies that satisfy the resonant relation, but the error in their frequency is large, resulting from other effects such as a mean motion resonance.} as shown in \autoref{fig:v1cfam}. These families are: (3) Juno, (5) Astraea, (31) Euphrosyne, (93) Minerva, (569) Misa, (847) Agnia, (1128) Astrid, (1521) Seinajoki, (1726) Hoffmeister and (3827) Zdenekhorsky.
Apart from the families of (1128) Astrid, (1521) Seinajoki and (1726) Hoffmeister, which we have already studied separately, as mentioned above, the $\nu_{\rm {1c}}$ secular resonance may be of some importance for the families of (569) Misa, (847) Agnia and (3827) Zdenekhorsky, as these are close to (1)~Ceres in terms of semi-major axis and cover ranges in the sine of proper inclination comparable to the magnitude of the induced perturbations as we measured them.
The case of (847) Agnia may be of particular interest, as this family is also crossed by the $z_1=g+s-g_6-s_6$ secular resonance. In the $(a_{\rm p},\sin{i_{\rm p}})$ plane the two resonances cross the family perpendicularly to each other, and we found some hints that the secular resonance with Ceres might be able to drive asteroids out of the $z_1$. Proving this, of course, requires further investigation that is beyond the scope of this work.
The family of (31) Euphrosyne is another example of potential interaction between resonances, as it is crossed by a multitude of them. The secular resonances with the giant planets are more powerful than $\nu_{\rm {1c}}$ in this region and play an important role in the evolution of the family \citep{Carruba2014}. Still, it is possible that even a weak perturbation by $\nu_{\rm {1c}}$ may have an amplified effect due to the interaction with them.
Finally, the family of (93) Minerva is crossed by the 3J-1S-1A three-body resonance \citep{Nesvorny1998} at the same location where the $\nu_{\rm {1c}}$ crosses it, making the effect of the latter practically indistinguishable.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{famv1c.png}
\caption{Asteroid families crossed by the secular resonance $\nu_{\rm {1c}}$ with Ceres. Gray dots represent all main belt asteroids, and blue dots those that belong to asteroid families. The red points represent resonant asteroids belonging to asteroid families (highlighted in black boxes).}
\label{fig:v1cfam}
\end{center}
\end{figure}
\subsubsection{Asteroid families interacting with the $\nu_{\rm {c}}$ secular resonance}
In the same manner we identify the asteroid families that are crossed by the $\nu_{\rm {c}}$ secular resonance, shown in \autoref{fig:vcfam}. The families crossed by this resonance are: (93) Minerva, (410) Chloris, (7744) 1986QA$_{1}$ and (10955) Harig. Of these, (410) Chloris and (7744) 1986QA$_{1}$ are narrow enough in proper eccentricity that the secular resonance could be of some importance in their evolution. (93) Minerva and (10955) Harig might also seem to be good candidates for further study, as they are large families and their shapes suggest a possible influence by the secular resonance. However, such a study is not trivial: in the case of (93) Minerva the $\nu_{\rm {c}}$ secular resonance and the 3J-1S-1A overlap, as in the previous case, and the latter dominates the perturbations in eccentricity, whereas Harig lies in a region where many secular resonances with the giant planets converge, making it impossible to distinguish the effect of Ceres.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{famvc.png}
\caption{Asteroid families crossed by the secular resonance $\nu_{\rm c}$ with Ceres. Gray dots represent all main belt asteroids, and blue dots those that belong to asteroid families. The red points represent resonant asteroids belonging to asteroid families (highlighted in black boxes).}
\label{fig:vcfam}
\end{center}
\end{figure}
\subsubsection{Asteroid families interacting with the $\nu_{\rm {1v}}$ secular resonance}
In \autoref{fig:v1vfam} we present the results for the case of the $\nu_{\rm {1v}}$ secular resonance with Vesta. We found five families that are crossed by the resonance: (4)~Vesta, (135) Hertha, (480) Hansa, (945) Barcelona and (2076) Levin. Our interest here is drawn not to the big families, where nothing special seems to happen, but to the very high inclination family of (945) Barcelona. The size of this family in the proper elements space is comparable to the magnitude of the perturbations produced by the $\nu_{\rm {1v}}$ secular resonance, and the family shows some hints of an irregular shape where it is crossed by it. Even the possibility that a secular resonance with Vesta might be important at such a high inclination in the middle belt is intriguing, and deserves further study.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{famv1v.png}
\caption{Asteroid families crossed by the secular resonance $\nu_{\rm {1v}}$ with Vesta. Gray dots represent all main belt asteroids, and blue dots those that belong to asteroid families. The red points represent resonant asteroids belonging to asteroid families (highlighted in black boxes).}
\label{fig:v1vfam}
\end{center}
\end{figure}
\subsubsection{Asteroid families interacting with the $\nu_{\rm {v}}$ secular resonance}
In \autoref{fig:vvfam} we present the asteroid families which we found to be crossed by the last secular resonance we consider here, the $\nu_{\rm {v}}$ secular resonance with Vesta. We found six such families, namely: (4)~Vesta, (31) Euphrosyne, (135) Hertha, (163) Erigone, (170) Maria and (729) Watsonia. However, we were unable to relate any specific property of these families to the existence of the resonance, as these families are either too large, in which case the perturbations cannot lead to a significant alteration of their shape, or too far away from Vesta, where the perturbations are not strong enough.
In any case, the effect of the resonances on the shapes of the families depends on whether the capture inside the resonance is long-lasting or not, as well as on the path of the resonance with respect to the family. Therefore, a precise answer to whether the asteroidal secular resonances play an important role in the dynamics of any of the families requires a detailed study of each particular family. In this respect, the families mentioned in this work should only be considered as good candidates for this kind of investigation.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{famvv.png}
\caption{Asteroid families crossed by the secular resonance $\nu_{\rm {v}}$ with Vesta. Gray dots represent all main belt asteroids, and blue dots those that belong to asteroid families. The red points represent resonant asteroids belonging to asteroid families (highlighted in black boxes).}
\label{fig:vvfam}
\end{center}
\end{figure}
\section{Discussion and conclusions}
We have found the locations of the four linear secular resonances with (1)~Ceres and (4)~Vesta using a numerical approach that identifies asteroids which according to their proper frequencies appear to be in resonance. The secular resonances with Ceres mostly cover the middle part of the main belt, with some extension to the high inclination part of the outer belt, whereas those with Vesta cover the inner belt and a moderate to high inclination part of the middle and outer belt.
Our numerical simulations have shown that the effects of these resonances on the orbits of main belt asteroids are considerable, especially when the latter have semi-major axes close to the respective perturbing massive asteroid. \citet{Milani1992,Milani1994} studied the effect of non-linear secular resonances with the giant planets on the proper elements of main-belt asteroids, and found that the proper elements of resonant asteroids undergo secular oscillations with amplitudes comparable to what we measured for the secular resonances with Ceres and Vesta. In the outer belt, which is considered to be far enough from both massive asteroids, we cannot clearly distinguish the impact of their secular resonances among the other dynamical mechanisms that act in the region. Although, as we have shown, the effect of these resonances diminishes with increasing distance from the relevant massive asteroid, it is crucial to note that in specific regions of the main belt, secular resonances with massive asteroids are equally, if not more, important than the non-linear ones with the giant planets.
Finally, we have identified which asteroid families are crossed by each resonance. There are cases where the size of the families in the proper elements space is comparable to the amplitude of the oscillations induced by the secular resonance that crosses them (e.g., (1726) Hoffmeister). In these cases the secular resonances studied here should have the most evident effect on the post-impact evolution of asteroid family members.
\section*{Acknowledgments}
This work has been supported by the European Union [FP7/2007-2013], project:
STARDUST-The Asteroid and Space Debris Network. BN also acknowledges support
by the Ministry of Education, Science and Technological Development of the Republic
of Serbia, Project 176011. Numerical simulations were run on the PARADOX-III cluster
hosted by the Scientific Computing Laboratory of the Institute of Physics Belgrade.
The authors would like to thank the referees Miroslav Bro{\v z} and Valerio Carruba for their constructive comments which helped improve the quality of this article.
\section{Introduction}
The recent worldwide health challenge caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused a great deal of fear and uncertainty. SARS-CoV-2 is a genetic variant of coronavirus that causes coronavirus disease 2019 (COVID-19). The crisis that governments faced in the early stage of this phenomenon was controlling the COVID-19 outbreak while maintaining economic balance and other aspects of governance. One of the essentials for assisting decision-makers in developing better solutions has been analyzing pandemic growth and forecasting COVID-19 cases.
Accurate prediction of COVID-19 daily cases can assist governments with macro-decisions and with controlling the pandemic. Meanwhile, artificial intelligence techniques have proven capable of accurately finding patterns in indistinct and complicated data in different phenomena, including pandemic epidemiological studies. Since the emergence of SARS-CoV-2, researchers have applied various techniques to study different aspects of the current pandemic, such as predicting the COVID-19 case growth rate.
Although multiple studies, such as \cite{Pathan2020,Arora2020}, have utilized variants of Recurrent Neural Networks (RNN) to predict daily cases, their proposed models have two shortcomings. Firstly, many studies \cite{Lee2020,Hawas2020} have chosen the framing range by assuming a fixed, arbitrary number of time steps. However, RNN and LSTM models require a validated time step for framing the sequence data: it must be long enough to guarantee sufficiently distinctive features, yet short enough to prevent excess data from misleading the model. In other words, by controlling the length of the sequence, we try to provide the model with the most informative data sequence for training without feeding it extra data that can introduce noise.
Secondly, these studies utilized customized architectures obtained by trial and error, which might still not be the best available topology. As a result of these two factors, prediction accuracy suffers considerably.
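To make the framing issue concrete, the sliding-window framing of a daily-cases series can be sketched as follows; the window length is exactly the hyperparameter whose fixed, arbitrary choice we criticize (the function name is ours, for illustration):

```python
import numpy as np

def frame_series(series, window):
    """Frame a 1-D series into supervised (X, y) pairs using a sliding
    window of `window` time steps: each window predicts the next value."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

# Six daily-case values framed with a window of 3 time steps
X, y = frame_series([1, 2, 3, 4, 5, 6], window=3)
print(X.shape, y.tolist())   # -> (3, 3) [4, 5, 6]
```

Choosing `window` too small starves the model of context, while choosing it too large dilutes the informative recent history with noise, which is why we treat it as a quantity to be optimized.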
In this paper, we have taken a deep neuroevolutionary approach, using the Binary Bat algorithm to optimize the hyperparameters of a recurrent neural network with Long Short-Term Memory (LSTM) layers to predict daily cases. Hyperparameters optimization is an NP-hard problem as the optimal solution cannot be guaranteed to be obtained unless by performing an exhaustive search in the feasible region. Therefore, we have chosen the BBA algorithm as a well-known metaheuristic technique for exploring the best set of hyperparameters in the search space.
This approach helps us obtain the optimal time step as well as the most suitable architecture for our deep learning framework. We also introduce a new feature-augmented version of the latest available public COVID-19 dataset provided by the European Centre for Disease Prevention and Control. It will be shown that the model's accuracy increases with the help of the new features, and that the model can simulate the regional pandemic behavior more precisely. To validate the framework and the final model, we have conducted various experiments that, in all cases, show the effectiveness of our approach.
In the following sections, we first review the related works and briefly cover the background. In Section 3, our proposed model is explained in detail, and we discuss why this approach was taken for forecasting COVID-19 cases. In Section 4, experimental results are presented and analyzed in detail, and finally, in Section 5, we present the conclusion and possible future works.
\section{Related works}\label{relatedworkssection}
There are many studies on applications of artificial intelligence for the Covid-19 pandemic \cite{Lalmuanawma2020, Ke2020, Tuli2020}. One of the main topics among these studies is predicting new cases to help health managers plan and develop appropriate strategies to deal with the Covid-19. Here we study some of them:
ArunKumar et al. \cite{ArunKumar2021} predicted the future trends of the cumulative fatalities of the top 10 countries in the range of 60 days, using RNN along with Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM).
\cite{Gautam2021} utilized transfer learning in LSTM networks to forecast COVID-19 cases using the early COVID infected countries such as Italy and used the learned model to predict cases in other countries. The results of the model on multiple countries showed the effectiveness of this approach.
Shastri et al. \cite{Shastri2021} proposed a nested ensemble model using LSTM to enhance the accuracy of predicting daily cases of India.
Abbasimehr et al. \cite{Abbasimehr2021} studied three different hybrid deep models, namely multi-head attention, LSTM, and CNN, optimized with a Bayesian algorithm to forecast COVID-19 cases. The results showed the superiority of their proposed model among the studied benchmark models.
\cite{Chandra2021} used LSTM, bidirectional LSTM and encoder-decoder LSTM models for multi-step forecasting of COVID-19 two-month ahead cases in India. They claimed that the deep models are promising in terms of finding the long-term prediction of cases.
Salgotra et al. \cite{Salgotra2020} utilized gene expression programming (GEP) to present a model for predicting confirmed cases (CC) and death cases (DC) in the fifteen most affected countries of the world. Two GEP models were introduced, for CC and DC, for all 15 countries. The results showed that GEP provides better results than neural network models when the total amount of experimental data is limited.
To estimate the possible spread of the Coronavirus 2 (SARS-CoV-2) in three Indian cities, a new GEP based model was presented in \cite{Salgotra2020a}. The proposed model is utilized to predict the total number of cases based on CC, DC, and the other three parameters.
In \cite{Chimmula2020}, LSTM was used to predict the trends and the possible stopping time of the COVID-19 outbreak in Canada. Since COVID-19 case counts form a time series, sequential networks are useful for extracting patterns from them. In \cite{Chimmula2020}, the internal connections of the LSTM were adjusted to improve its performance. The results show that the ending point of the COVID-19 outbreak in Canada was predicted to be June 2020.
To enhance public health management in dealing with COVID-19 in two regions with a high daily incidence of new cases and deaths, \cite{daSilva2020} applied machine learning algorithms such as quantile random forest and support vector regression to forecast COVID-19 cumulative cases one, three, and six days ahead.
\section{Background}\label{idbackgroundsection}
\subsection{Binary Bat Algorithm}
The Bat Algorithm (BA) was introduced in 2010 as a simplification and simulation of the echolocation capability of bats \cite{Yang2010}. Similar to other population-based algorithms, BA starts with randomly generated individuals. In BA, each bat represents an individual, i.e., a solution in the search space. Each bat is described by a group of vectors: frequency, velocity, and position. For the \textit{i}th bat these vectors are updated according to Eqs.~\eqref{eq:Eq.1}, \eqref{eq:Eq.2} and \eqref{eq:Eq.3}, respectively.
\begin{equation} \label{eq:Eq.1}
f_i=f_{\min}+\beta(f_{\max}-f_{\min})
\end{equation}
\begin{equation} \label{eq:Eq.2}
v_i(t)=v_i(t-1)+f_i\left(x_i(t)-x^\ast\right)
\end{equation}
\begin{equation} \label{eq:Eq.3}
x_i(t)=x_i(t-1)+v_i(t)
\end{equation}
where $f_i$ is the frequency of the $i$th bat, and $f_{max}$ and $f_{min}$ are the maximum and minimum values of the frequency, respectively. $\beta$ is a random number in the interval [0,1]. $v_i$ and $x_i$ indicate the velocity and position of the $i$th bat, and $x^\ast$ is the best position found by the entire population so far. Algorithm~\ref{alg:BA} shows the pseudo-code of the basic BA.
\begin{algorithm}
\caption{Pseudo-code of BA \cite{Yang2010}}\label{alg:BA}
\begin{algorithmic}[1]
{\footnotesize
\Procedure{BA}{}
\State Initialize the position, velocity and frequency of bats $(x_i.v_i.f_i\ \ i=1\cdots n)$.
\Repeat:
\State Update frequencies, velocities and positions of bats using Eqs.\eqref{eq:Eq.1} to \eqref{eq:Eq.3}.
\State {\bf if} $rand>r_i$
\State \hspace{10pt}Select a solution among the best solutions
\State \hspace{10pt}Generate a local solution around the best solution
\State end
\State {\bf if} $rand<A_i$ and $f(x_i)<f(x^\ast)$
\State \hspace{10pt}Accept the new solutions
\State \hspace{10pt}Modify the value of $r_i$ and $A_i$
\State end
\State Rank the bats and update the $x^\ast$
\Until{the stop criterion is satisfied}
\EndProcedure
}
\end{algorithmic}
\end{algorithm}
\vspace{-0.0em}
where \textit{n} is the number of bats (the population size) and $rand$ is a uniformly distributed random real number in the range [0,1]. $r$ is the pulse emission rate, which increases over the course of the iterations according to the following equation:
\begin{equation} \label{eq:Eq.4}
r_i(t+1)=r_i(0)\left(1-e^{-\gamma t}\right)
\end{equation}
where $\gamma$ is a constant and $r_i(0)$ is the initial pulse emission rate of the \textit{i}th bat. BA utilizes a local search (lines 5--8) to create a solution near the obtained ones:
\begin{equation} \label{eq:Eq.5}
x_{new}=x_{old}+\varepsilon\bar{A}(t)
\end{equation}
In Eq.~\eqref{eq:Eq.5}, $x_{old}$ is one of the current best solutions, selected by some selection mechanism, $\varepsilon$ is a random number in the interval [-1,1], and $\bar{A}(t)$ is the average loudness of all bats, which is calculated as follows:
\begin{equation} \label{eq:Eq.6}
A_i(t+1)=\alpha A_i(t)
\end{equation}
Based on Eq.~\eqref{eq:Eq.6}, the loudness $A_i$ decreases as the iterations proceed. $\alpha$ is similar to the cooling factor in simulated annealing \cite{Yang2010}.
The basic BA was developed for solving continuous problems \cite{Yang2010}. A binary version of BA (BBA) was developed in \cite{Mirjalili2014}. BBA employs a v-shaped transfer function to transfer all real-valued velocities to the range of [0,1] as follows:
\begin{equation} \label{eq:Eq.7}
V\left(v_{ij}\left(t\right)\right)=\left|\frac{2}{\pi}\arctan(\frac{\pi}{2}v_{ij}\left(t\right))\right|
\end{equation}
where $v_{ij}\left(t\right)$ denotes the \textit{j}th element of vector $v_i$ at iteration \textit{t}. In BBA, the rule for updating a bat's position is redefined as:
\begin{equation} \label{eq:Eq.8}
x_{ij}\left(t+1\right)=\begin{cases}\left(x_{ij}\left(t\right)\right)^{-1}&if\ rand<V\left(v_{ij}\left(t+1\right)\right)\\x_{ij}\left(t\right)&if\ rand\geq V\left(v_{ij}\left(t+1\right)\right)\end{cases}
\end{equation}
where $\left(x_{ij}\left(t\right)\right)^{-1}$ denotes the complement of the binary value $x_{ij}(t)$.
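The update rules of Eqs.~\eqref{eq:Eq.1}--\eqref{eq:Eq.3} and the binary mapping of Eqs.~\eqref{eq:Eq.7}--\eqref{eq:Eq.8} can be sketched as follows (a minimal illustration of the update steps only, not the full BBA with loudness and pulse-rate handling):

```python
import math
import random

def v_transfer(v):
    # V-shaped transfer function of Eq. (7): maps a real velocity into [0, 1)
    return abs((2.0 / math.pi) * math.atan((math.pi / 2.0) * v))

def update_velocity(v, x, x_best, f_min=0.0, f_max=2.0):
    # Eqs. (1)-(2): draw a random frequency, then pull the velocity
    # toward the best position found so far
    beta = random.random()
    f = f_min + beta * (f_max - f_min)
    return [vj + f * (xj - xb) for vj, xj, xb in zip(v, x, x_best)]

def update_position(x, v):
    # Eq. (8): flip bit j (take its complement) with probability V(v_j),
    # otherwise keep it unchanged
    return [1 - xj if random.random() < v_transfer(vj) else xj
            for xj, vj in zip(x, v)]
```

Note that with zero velocity the transfer function returns 0, so no bit is flipped, while a large $|v_{ij}|$ pushes the flip probability toward 1.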
\subsection{Deep Recurrent Networks}
Recurrent Neural Networks (RNN) were proposed as a solution to overcome simple neural networks' inability to learn sequence data. In sequential data such as signals \cite{Xiong2018}, stock prices \cite{Rather2015}, and machine translation \cite{Liu2015}, the temporal arrangement and chain dependency of samples create meaningful patterns over time. Since simple neural networks have a feed-forward structure, they cannot learn time-variant features.
To overcome this shortcoming, different varieties of feedback node connections have been proposed as RNN variants \cite{Chung2014, Schuster1997, Soltani2016}. These connections form a directed graph along the temporal direction of the sequence that can learn and extract the intrinsic temporal patterns of sequential data.
In RNNs, unlike simple neural networks, each node's output depends on the output of previous nodes. In other words, RNNs are capable of carrying previous computations over to the current state. Fig.~\ref{fig:simpleRecurrent} shows a simple recurrent network.
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{Recurrent.jpg}}
\caption{A simple schematic of recurrent network with the input, hidden and an output layer.}\label{fig:simpleRecurrent}
\end{figure}
As displayed in Fig.~\ref{fig:simpleRecurrent}, $x_t$ is the input at time step \textit{t}, $h_t$ is the hidden state at step \textit{t}, shown as the rectangular units, which plays the role of memory in the network, and $y_t$ is the output at time step \textit{t}. In this manner, recurrent networks can utilize previous computations.
Although this structure seems promising in terms of keeping track of previous states and working similarly to a memory, simple recurrent networks are not capable of memorizing more than a few earlier time steps due to the vanishing gradient problem \cite{ShivaPrakash2019}.
The vanishing gradient problem occurs when a neural network, or in this case an RNN, is trained by gradient-based learning and the backpropagation method. Backpropagation computes the gradient of the output loss with respect to the network's weights. The gradient is calculated using the chain rule and partial derivatives. As a result of the consecutive multiplications in the chain rule, the gradient value usually shrinks to a tiny number in deeper networks, and consequently the network stops learning.
This means that the network will soon be incapable of learning the complicated intrinsic features of the sequence data and discovering the long-term dependencies. In other words, it cannot remember the more complicated time-dependent sequential information that is responsible for long-term memory.
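The shrinking described above can be seen in a toy calculation: backpropagating through $T$ time steps multiplies the gradient by the recurrent weight once per step, so a weight below 1 makes the gradient decay geometrically (the values below are purely illustrative):

```python
# Toy illustration of the vanishing gradient through time: with a scalar
# recurrent weight of 0.5, twenty backpropagation steps shrink the gradient
# by a factor of 0.5**20 (about 1e-6), effectively halting learning.
w_rec, grad = 0.5, 1.0
for _ in range(20):
    grad *= w_rec
print(grad)   # -> 9.5367431640625e-07
```

Conversely, a recurrent weight above 1 would make the same product explode, which is the mirror-image "exploding gradient" problem.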
\subsection{Long Short-term Memory }
To overcome the simple RNNs' shortcomings, Long Short-Term Memory (LSTM) \cite{Hochreiter1997} was proposed. The vanilla LSTM has the same chain-like architecture as RNN, which was introduced in the previous section. However, each memory unit of LSTM has a different structure and consists of more complicated functionalities than the vanilla RNN.
In LSTM, each memory cell makes small modifications to the information flowing through it, applying simple mathematical operators such as multiplication and addition via a mechanism called the cell state. This way, the LSTM unit can selectively keep or forget information.
This information generally has three main dependencies: firstly, the previous information passed on by the memory after the last time step through the cell state; secondly, the previous cell's output, also known as the hidden state; and lastly, the input at the current time step.
Another common analogy for LSTM is that of conveyor belts as a mechanism that moves the information flow through the LSTM block. As the information is passed along the conveyor belt, it can be added to, removed, or modified by simple linear operators and sigmoid neural net layers. This way of controlling information lets this primary component, the cell state, play a key role in keeping the main information and features for each particular time step.
The generic architecture of LSTM is provided in Fig.\ref{fig:LSTM}.
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{LSTM.jpg}}
\caption{The overall architecture of an unrolled LSTM layer with a scheme of an LSTM unit's internal structure.\\
Image retrieved from http://colah.github.io/posts/2015-08-Understanding-LSTMs/}\label{fig:LSTM}
\end{figure}
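The gate mechanics described above can be sketched as a single LSTM time step (a minimal NumPy illustration of the standard LSTM equations, not the implementation used in this paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b each hold the parameters of the
    forget/input/output/candidate transformations, so the gates decide
    what the cell state keeps, forgets, and exposes."""
    (Wf, Wi, Wo, Wc), (Uf, Ui, Uo, Uc), (bf, bi, bo, bc) = W, U, b
    f = sigmoid(Wf @ x_t + Uf @ h_prev + bf)        # forget gate
    i = sigmoid(Wi @ x_t + Ui @ h_prev + bi)        # input gate
    o = sigmoid(Wo @ x_t + Uo @ h_prev + bo)        # output gate
    c_tilde = np.tanh(Wc @ x_t + Uc @ h_prev + bc)  # candidate cell state
    c_t = f * c_prev + i * c_tilde                  # "conveyor belt" update
    h_t = o * np.tanh(c_t)                          # hidden state / output
    return h_t, c_t
```

With all parameters at zero, every gate evaluates to 0.5 and the candidate is 0, so the cell state is simply halved at each step; training learns parameters that keep or discard information selectively instead.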
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{{individual.jpg}}
\caption{Each individual consists of 4 different parts with different encodings. In this image, sample individual $n$ is illustrated and split into the mentioned subcomponents.}\label{fig:individual}
\end{figure*}
\section{Forecasting COVID-19 with NAS-BBA}
\subsection{Dataset and Challenges}
SARS-CoV-2 is a newly discovered virus identified in late 2019, and the outbreak it caused was officially reported to the World Health Organization on December 31st, 2019. Therefore, to study this virus's epidemiological behavior, especially in the first stages of its outbreak, there was not much data available to analyze. It is also worth mentioning that, to have a fair epidemiological evaluation and train an accurate model of the pandemic, we must only study regions with the same culture and social behavior, since an epidemic is highly dependent on those factors. These reasons leave us with very limited data.
In this paper, we use the open geographic distribution data of COVID-19 cases worldwide, retrieved from the European Centre for Disease Prevention and Control\footnote[1]{\url{https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases}}, to build a model for forecasting Iran's daily cases on non-lockdown days. The raw version of this dataset consists of 12 features. An overview of 5 samples of the data with some of their main features is provided in Table~\ref{tab:rawdata}. We utilize two features from this dataset to combine with two new features that we will introduce shortly.
\begin{table*}[!htb]
\captionsetup{size=footnotesize}
\caption{An overview of several features from the raw data.} \label{tab:rawdata}
\setlength\tabcolsep{0pt}
\footnotesize\centering
European Centre for Disease Prevention and Control’s Dataset
\smallskip
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}cccccccc}
\toprule
dateRep & day & month & year & cases & deaths & countriesAndTerritories & Cumulative\_number \\
\midrule \vspace{2pt}
20/03/2020 & 20 & 3 & 2020 & 1046 & 149 & Iran & 17.963 \\ \vspace{2pt}
21/03/2020 & 21 & 3 & 2020 & 1237 & 146 & Iran & 17.968 \\ \vspace{2pt}
22/03/2020 & 22 & 3 & 2020 & 966 & 123 & Iran & 17.834 \\ \vspace{2pt}
23/03/2020 & 23 & 3 & 2020 & 1028 & 129 & Iran & 18.177 \\ \vspace{2pt}
24/03/2020 & 24 & 3 & 2020 & 1411 & 127 & Iran & 19.162 \\
\bottomrule
\end{tabular*}
\end{table*}
Since one of the main causes of spreading COVID-19 is human interaction and in-person communication, controlling this was one of the macro decision-makers' first concerns. As a result, by early April 2020, over one-third of the global population was under some form of movement restriction, quarantine, or COVID-19 lockdown. To prevent further economic damage, however, most countries' health organizations started considering new protocols for routine economic activities and workplaces, such as decreasing the number of employees and monitoring their health condition \cite{Cirrincione2020}. Meanwhile, research \cite{Tay2020} showed that there is another important factor that can have a serious impact on the outbreak despite all the protocols: the tendency of people to break quarantine and have in-person social communications \cite{Koh2020}. Therefore, one of the main factors we have taken into account in this paper is the impact of non-workdays, or holidays, on the pandemic case numbers.
To do so, we introduce the first augmented feature, called ``d\_type'', which determines the type of each day based on whether it is a holiday or a regular workday. We extracted the holiday status of days in Iran from the Google Calendar API and assigned the value 1 if the corresponding day was a holiday and 0 if it was a workday. The second augmented feature is derived from the holidays feature, because each holiday increases people's tendency for unnecessary gatherings during quarantine. We therefore introduce the ``gathering'' feature: each sample gets a value of 1 if it is a holiday, or if it is a non-holiday that falls between two holidays; otherwise, it gets 0. Lastly, we use an index to keep track of the sequence. A part of the new data is shown in Table~\ref{tab:AugmentedData}.
\begin{table}[h]
\renewcommand{\arraystretch}{1.1}
\tabcolsep=0.08cm
\begin{center}
\caption{An overview of the indexed new data with feature augmentation.}\label{tab:AugmentedData}
\vspace{0em}
\scalebox{1} {
\begin{tabular}{ccccc}
\hline
index & \hspace{1.1em}cases & \hspace{1.1em}c\_num & \hspace{1.1em}d\_type & \hspace{1.1em}gathering \\
\hline
128 &\hspace{1.1em} 2472 &\hspace{1.1em} 43.371 &\hspace{1.1em} 0 &\hspace{1.1em} 0 \\
129 &\hspace{1.1em} 2449 &\hspace{1.1em} 42.732 &\hspace{1.1em} 0 &\hspace{1.1em} 0 \\
130 &\hspace{1.1em} 2563 &\hspace{1.1em} 42.064 &\hspace{1.1em} 1 &\hspace{1.1em} 1 \\
131 &\hspace{1.1em} 2612 &\hspace{1.1em} 41.434 &\hspace{1.1em} 0 &\hspace{1.1em} 1 \\
132 &\hspace{1.1em} 2596 &\hspace{1.1em} 40.255 &\hspace{1.1em} 1 &\hspace{1.1em} 1 \\
\hline
\end{tabular}
}
\end{center}
\vspace{-0.0em}
\end{table}
\subsection{Neural Architecture Search with BBA}
To deal with the challenges mentioned above, we take an Evolutionary Neural Architecture Search (NAS) approach to optimize the deep model's hyperparameters. Hyperparameter optimization is an NP-hard problem: finding the true optimum would require an exhaustive search of the solution space, so in practice we approximate it with metaheuristic techniques. We have chosen BBA because it is a widely used metaheuristic algorithm \cite{Gupta2019,Nakamura2012} and, as its original paper claims, it is superior to other competitive binary algorithms. From now on, we refer to this proposed framework as NAS-BBA.
Previous research \cite{Zoph2016,Stanley2002} suggests that neural architecture search techniques can design the simplest topology for the network while also increasing the performance of the final output. The NAS approach develops a deep architecture with a sufficient, but not excessive, number of parameters. This helps the model cope with limited data and learn abstract information from its layers without overfitting. Meanwhile, training time is longer for RNN architectures than for architectures that can process the data in parallel. Moreover, adjusting the number of LSTM layers and the number of units in each layer yields a large number of candidate architectures, making manual tuning a time-consuming process that requires extensive trial and error. NAS is therefore a handy and reasonable approach for designing an efficient deep model.
Before using BBA to optimize the model, there are two main factors that we have to consider. We first have to define an encoding for the population's individuals so it can represent the problem clearly. The second thing that we have to focus on is utilizing a convenient fitness function for the problem. We will discuss these two factors and how we customized them for forecasting COVID-19 cases.
\subsubsection{Defining Individuals}
The individuals that form the population of BBA are defined using a hybrid encoding structure. Each individual consists of 4 parts. The first two, $C_l^n$ and $A_l^n$, are encoded in a binary scheme. Vector $C_l^n$ determines the existence of each layer: if element $C_3^n$ has the value 1, the third layer is activated, and if it has the value 0, the corresponding layer is absent from the model. The second binary vector, $A_l^n$, represents the activation function used in each layer; in this study, we encode the ReLU function as 1 and the Sigmoid function as 0.
The last two vectors are encoded in gray code. The third vector, $U_k^n$, can be split into \textit{k} subvectors, each of which determines the number of units in the corresponding layer. Lastly, the fourth vector, $T^n$, represents the number of timesteps used to frame the data for the sequential model. A simple representation of this encoding scheme is provided in Fig.~\ref{fig:individual}.
The overall number of elements in each individual is fixed and can be calculated as in Eq.~\eqref{eq:Eq.9}, where $l$ and $a$ are the maximum numbers of layers and activations, respectively. For instance, $l=3$ means there are three layers whose existence BBA can determine. The first logarithm term converts the maximum number of units in each layer $k$ into the number of binary digits needed to represent it; likewise, the second logarithm term converts the maximum number of timesteps into the number of gray-code digits. It is also worth mentioning that $\left\lceil x\right\rceil$ maps $x$ to the least integer greater than or equal to $x$.
\begin{equation} \label{eq:Eq.9}
L=l+a+\sum_{k=1}^{l}\left\lceil\log_2{u_k}\right\rceil+\ \left\lceil\log_2{t}\right\rceil
\end{equation}
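As a small illustrative sketch (function names are ours), the individual length of Eq.~\eqref{eq:Eq.9} and the decoding of a gray-coded segment can be computed as follows. With the settings used later in the experiments (three toggleable layers, two activation bits inferred from the architecture tables, unit maxima of 31 for the LSTM layers and 63 for the dense layers, and at most 31 timesteps), the formula yields the 32 elements per individual reported in the experiments section:

```python
import math

def individual_length(l, a, unit_maxima, t_max):
    """Length L of one individual, following Eq. (9):
    L = l + a + sum_k ceil(log2 u_k) + ceil(log2 t)."""
    return (l + a
            + sum(math.ceil(math.log2(u)) for u in unit_maxima)
            + math.ceil(math.log2(t_max)))

def gray_to_int(bits):
    """Decode a binary-reflected gray-code bit list (MSB first) to an integer."""
    value = 0
    for b in bits:
        value = (value << 1) | (b ^ (value & 1))
    return value

# 3 existence bits + 2 activation bits + (5+5+6+6) unit bits + 5 timestep bits
print(individual_length(3, 2, [31, 31, 63, 63], 31))  # → 32
print(gray_to_int([1, 1, 0]))  # gray 110 is binary 100 → 4
```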
\subsubsection{Selecting Fitness Function}
To evaluate the deep model corresponding to each individual in the population, we need to select a suitable fitness function. Since the final goal is forecasting daily COVID-19 cases, the problem is a regression task. Following common practice in the artificial neural network and deep learning literature for regression problems, we select the Mean Squared Error (MSE), defined in Eq.~\eqref{eq:Eq.10}, as BBA's fitness function.
\begin{equation} \label{eq:Eq.10}
MSE=\ \frac{1}{n}\sum_{i=1}^{n}{(y_i-{\hat{y}}_i)^{2}}
\end{equation}
\subsubsection{Training}
In the training phase, the current population is altered in each generation as in Eq.~\eqref{eq:Eq.8}. Each individual is then split into the parts explained above and mapped to the corresponding components of the deep model as a candidate solution. The deep model is trained on the training data until the specified criteria are met. Finally, the trained model is evaluated on unseen data, and the MSE value is returned to BBA as the individual's fitness. In this way, we can evaluate every generated model. The process ends when BBA's termination condition is reached, and $gbest$ is returned as the best solution.
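The fitness-evaluation loop described above can be sketched as follows. This is a toy stand-in, not our implementation: `StubModel` replaces the decoded LSTM, and all names are illustrative. It shows the essential pattern of training one candidate per individual, scoring it with the MSE of Eq.~\eqref{eq:Eq.10} on unseen data, and keeping the best individual as $gbest$:

```python
def mse(y_true, y_pred):
    """Mean Squared Error, as in Eq. (10)."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

class StubModel:
    """Stand-in for the decoded LSTM: 'predicts' a constant level."""
    def __init__(self, level):
        self.level = level
    def fit(self, X, y):
        pass  # real pipeline: train until the stopping criteria are met
    def predict(self, X):
        return [self.level] * len(X)

def evaluate_population(population, X_val, y_val):
    """Return (fitness list, gbest individual) for one generation."""
    fitnesses = []
    for individual in population:
        model = StubModel(individual)  # real pipeline: decode + build LSTM
        model.fit(None, None)
        fitnesses.append(mse(y_val, model.predict(X_val)))
    best = min(range(len(population)), key=lambda i: fitnesses[i])
    return fitnesses, population[best]

fits, gbest = evaluate_population([0.0, 1.0, 2.0], [None] * 3, [1.0, 1.0, 1.0])
print(gbest)  # the constant-1 candidate fits the validation data best → 1.0
```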
\section{Experiments}
In this section, we first introduce the deep structure used for forecasting COVID-19 cases, then we address the experimental setting and specified parameters, and finally, we discuss the experimental results.
\subsection{Deep Model Structure}
In this paper, we utilize a 5-layer deep recurrent network with vanilla LSTM units to forecast COVID-19 cases. The architecture consists of two LSTM layers, two dense layers, and the output layer. Since we need at least one LSTM layer to extract time-dependency information and one output layer to produce the predicted result, we treat the first and last layers as fixed and do not define $C_l^n$ elements for them in the individual. A simple scheme of this architecture is illustrated in Fig.~\ref{fig:DeepStructure}.
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{Deep_Structure.jpg}}
\caption{An overview of the whole deep model structure. LSTM 1 (dark green) and the output layer (dark blue) are the fixed layers of the model.}\label{fig:DeepStructure}
\end{figure}
\subsection{Experimental Setting}
\begin{itemize}
\item Dataset: In the experiments, we set the maximum number of timesteps for framing the sequence data to 31, which requires 5 gray-code elements in each individual. Before evaluating an individual, the data is first framed into $n$ samples of $t$ timesteps and $f$ features, as shown in Fig.~\ref{fig:SequenceTensor}. We split it into training and test sets with an 80:20 ratio, then normalize the data so it is rescaled to the range $\left[0,1\right]$, using the training data for the training phase and the test data to evaluate the model.\vspace{5pt}
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{SequenceTensor.jpg}}
\caption{A schematic overview of $n$ samples with $t$ timesteps and $f$ features.}\label{fig:SequenceTensor}
\end{figure}
\item Deep Model: The maximum number of units is 31 for each LSTM layer and 63 for the two dense layers, and the last layer uses a single neuron to predict the output. As the literature suggests, we add dropout with a rate of 0.8 to each LSTM layer and l2 regularization with a lambda of 0.01. For the activation elements corresponding to the dense and output layers, we encode the ReLU function as 1 and the Sigmoid function as 0. The final structure of each individual consists of 32 elements. In the experiments, we trained each model for 200, 500, and 1000 epochs when evaluating the fitness of BBA's individuals, and after obtaining the best architecture, we trained that model for 2,000 epochs. Throughout this study, we run every experiment three times and report the mean RMSE loss as the final score.
\vspace{5pt}
\item BBA: The population size, number of iterations, and input parameters of BBA are set as determined in the original paper \cite{Mirjalili2014} and are provided in Table \ref{tab:BBASetting}. Due to the high computational cost of fitness evaluation, the number of BBA iterations, the population size, and the deep models' epochs are kept limited in the experiments.
\end{itemize}
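The framing step described in the Dataset item above can be sketched as a simple sliding window (assuming the target is the next day's case count after each window of $t$ days; names and layout are illustrative):

```python
def frame_series(rows, t, target_col=0):
    """Frame a day-by-day feature table into (n, t, f) samples.

    rows: list of per-day feature lists (length f each).
    Returns X (n windows of t consecutive days) and y (the target
    feature of the day following each window).
    """
    X, y = [], []
    for i in range(len(rows) - t):
        X.append(rows[i:i + t])            # one sample: t days x f features
        y.append(rows[i + t][target_col])  # next day's case count
    return X, y

# 6 days of (cases, d_type) pairs framed with t = 3 gives n = 3 samples.
rows = [[10, 0], [12, 0], [15, 1], [14, 1], [13, 0], [16, 0]]
X, y = frame_series(rows, t=3)
print(len(X), len(X[0]), len(X[0][0]))  # → 3 3 2
print(y)  # → [14, 13, 16]
```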
\begin{table}[h]
\renewcommand{\arraystretch}{1.1}
\tabcolsep=0.08cm
\begin{center}
\caption{BBA Experimental settings.}\label{tab:BBASetting}
\vspace{0em}
\scalebox{1} {
\begin{tabular}{lc}
\hline
Parameters \hspace{1.3em} & \hspace{1.1em}Values \\
\hline
Population\hspace{1.3em} &\hspace{1.1em} 10, 20 \\
$F_{\min}$ \hspace{1.3em} &\hspace{1.1em} 0 \\
$F_{\max}$ \hspace{1.3em} &\hspace{1.1em} 1 \\
A \hspace{1.3em} &\hspace{1.1em} 0.25 \\
r \hspace{1.3em} &\hspace{1.1em} 0.5 \\
$\epsilon$ \hspace{1.3em} &\hspace{1.1em} [-1,1]\\
$\alpha$ \hspace{1.3em} &\hspace{1.1em} 0.9 \\
$\gamma$ \hspace{1.3em} &\hspace{1.1em} 0.9 \\
BBA iterations \hspace{1.3em} &\hspace{1.1em} 100\\
\hline
Model iterations \hspace{1.3em} &\hspace{1.1em} 200, 500, 1000\\
\hline
\end{tabular}
}
\end{center}
\vspace{-0.0em}
\end{table}
The model is implemented with TensorFlow 2.2.0 in Python, bound to BBA's MATLAB code retrieved from \url{https://www.mathworks.com/matlabcentral/fileexchange/44707-binary-bat-algorithm}. To obtain unbiased results, all experiments are conducted on the same PC, with the detailed configuration settings shown in Table \ref{tab:PCSetting}.
\begin{table}[h]
\renewcommand{\arraystretch}{1.1}
\tabcolsep=0.08cm
\begin{center}
\caption{Experimental environment's configuration settings.}\label{tab:PCSetting}
\vspace{0em}
\scalebox{1} {
\begin{tabular}{lc}
\hline
Name & \hspace{1.05em}Configuration Settings \\
\hline
\small \vspace{1.05em}\textit{\footnotesize\textbf{Hardware}} & \\
\footnotesize CPU &\footnotesize\hspace{1.05em} Intel Core i7-6700HQ \\
\footnotesize CPU Frequency &\footnotesize\hspace{1.1em} 2.60GHz \\
\footnotesize RAM &\footnotesize \hspace{1.05em} 32GB \\
\footnotesize GPU &\footnotesize\hspace{1.05em} NVIDIA GeForce GTX 980\\
\hline
\small \vspace{1.05em}\footnotesize\textit{\textbf{Software}} & \\
\footnotesize Operating System &\footnotesize\hspace{1.05em} Windows 10 Pro 64-bit \\
\footnotesize Python &\footnotesize\hspace{1.05em} 2.7.6 \\
\footnotesize Implementation Environment &\footnotesize\hspace{1.05em} MATLAB R2018b \\
\footnotesize Tensorflow &\footnotesize\hspace{1.05em} 2.2.0 \\
\hline
\end{tabular}
}
\end{center}
\vspace{-0.0em}
\end{table}
\subsection{Experimental Results}
To evaluate the effectiveness of the proposed approach, we conducted several experiments on the COVID-19 dataset. We first ran the framework with populations of 10 and 20 for 200 epochs and compared the two output models. We then studied the influence of the number of epochs on the final architecture by setting it to 200, 500, and 1000 (M1-M3) and compared the resulting models with five customized models (Network1-Network5).
To study the effectiveness of the introduced data features, we train and test the best model on both the initial data and the new data with augmented features (M1 vs. M4). The architectures obtained from the NAS-BBA framework and the self-defined architectures, with their corresponding detailed information, are provided in Table~\ref{tab:ModelComparisons}.
\subsubsection{Results}
As can be observed in Table~\ref{tab:ValidationLoss}, there is a meaningful improvement from M1 to M3 as the number of epochs increases from 200 to 1000. This indicates that a higher number of epochs gives the NAS-BBA framework enough time to evaluate the fitness of each individual more reliably: with the number of epochs set to 1000, the deep architecture corresponding to each individual is trained for longer and thus provides a more accurate RMSE as its fitness value, so the best individual is chosen with less error.
Also, to study the effect of the population size on the framework's accuracy, we conducted experiments on NAS-BBA with 10 and 20 individuals (M1 and M4). Due to the high computational cost of evaluating a deep model for each individual, we kept the number of epochs at 200 for the BBA fitness evaluation. As shown in Table~\ref{tab:ValidationLoss}, the mean loss value obtained by NAS-BBA with 20 individuals is a significant improvement over the 10-individual version. It is also evident in Fig.~\ref{fig:p10vsp20} that both the validation and training loss of NAS-BBA with a population of 20 (P20) keep decreasing until the last epoch. In contrast, the P10 version begins to overfit at around the 1700th epoch, after which the validation loss increases.
\begin{table}[h]
\renewcommand{\arraystretch}{1.1}
\tabcolsep=0.08cm
\begin{center}
\caption{Mean RMSE loss obtained by 3 individual runs over COVID-19 dataset with augmented features.}\label{tab:ValidationLoss}
\vspace{0em}
\scalebox{1} {
\begin{tabular}{lc}
\hline
Model & \hspace{3.1em}Mean RMSE Loss \\
\hline
M1:P10,E200 &\hspace{3.1em} 1.85e-3 \\
M2:P10,E500 &\hspace{3.1em} 1.61e-3 \\
M3:P10,E1000 &\hspace{3.1em} 1.35e-3 \\
M4:P20,E200 &\hspace{3.1em} \textbf{1.23e-3} \\
Network1 &\hspace{3.1em} 3.39e-3 \\
Network2 &\hspace{3.1em} 4.50e-3 \\
Network3 &\hspace{3.1em} 6.69e-3 \\
Network4 &\hspace{3.1em} 2.74e-3 \\
Network5 &\hspace{3.1em} 3.15e-3 \\
\hline
\end{tabular}
}
\end{center}
\vspace{-0.0em}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{p10vsp20.jpg}}
\caption{Train and Validation loss value of the model obtained by NAS-BBA with 10 and 20 individuals with 100 iteration and 200 epochs.}\label{fig:p10vsp20}
\end{figure}
\begin{table*}[!htb]
\captionsetup{size=footnotesize}
\caption{This table provides the main hyperparameters that represent each architecture used in the experiments. Models M1 to M4 are obtained from the NAS-BBA framework, and Network1 to Network5 are self-defined architectures for better evaluation of the models.} \label{tab:ModelComparisons}
\setlength\tabcolsep{0pt}
\footnotesize\centering
Deep Models' Architecture Used for Forecasting COVID-19 Cases
\smallskip
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccccc}
\toprule
Network Name & Time steps & Existence & Activation Functions & LSTM1 & LSTM2 & Dense1 & Dense2 \\
\midrule \vspace{2pt}
M1:P10,E200 & 21 & EEE & RR & 18 & 26 & 9 & 63 \\ \vspace{2pt}
M2:P10,E500 & 16 & EEE & RR & 24 & 27 & 16 & 3 \\ \vspace{2pt}
M3:P10,E1000 & 23 & EEE & RR & 12 & 29 & 16 & 2 \\ \vspace{2pt}
M4:P20,E200 & 24 & EEE & RR & 25 & 20 & 9 & 33 \\ \vspace{2pt}
Network1 & 32 & EEE & RR & 32 & 32 & 64 & 64 \\ \vspace{2pt}
Network2 & 28 & EEE & RR & 20 & 20 & 32 & 32 \\ \vspace{2pt}
Network3 & 20 & EEE & RR & 20 & 20 & 32 & 32 \\ \vspace{2pt}
Network4 & 16 & EEE & RR & 24 & 24 & 16 & 32 \\ \vspace{2pt}
Network5 & 10 & EEE & RR & 16 & 16 & 16 & 32 \\
\hline
\multicolumn{2}{l}{\scriptsize E: Existent } & \multicolumn{2}{l}{\scriptsize N: Non-Existent} & \multicolumn{2}{l}{\scriptsize R: ReLU } & \multicolumn{2}{l}{\scriptsize S: Sigmoid}\\
\bottomrule
\end{tabular*}
\end{table*}
To further demonstrate the importance of an optimized architecture for forecasting COVID-19 cases, and to show the effectiveness of NAS-BBA, we evaluated the performance of five additional networks (Network1-Network5), defined by setting their hyperparameters within the initially defined ranges. The relevant training and validation loss curves of the models are provided in Fig.~\ref{fig:BigAll}, and to better observe the differences, the training and validation loss values are plotted in Fig.~\ref{fig:LossChart}.
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{LossChart.jpg}}
\caption{Train and validation loss of the main architectures.}\label{fig:LossChart}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{{BigAll.jpg}}
\caption{The left graph shows the training and validation loss curves of the most important models, and the right graph displays the loss on the validation data only.}\label{fig:BigAll}
\end{figure*}
From the left learning-curve graph for the Network2 model in Fig.~\ref{fig:BigAll}, it is evident that although the training curve of this model keeps decreasing until the last epoch, the validation curve begins to increase sharply after around the 1000th epoch. It can also be seen that Network4's training and validation losses both keep decreasing through almost the whole training phase, but the network cannot reduce the loss further beyond the 1600th epoch; this happens due to an insufficient number of timesteps or hidden units. Lastly, Network3 also overfits shortly after around epoch 700 and shows no further improvement, despite its hyperparameters, such as hidden units and timesteps, being closer to those selected by NAS-BBA.
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{{Epoch_Loss_Two_Dataset.jpg}}
\caption{Learning curve of M4 architecture, showing the loss of training and validation on the new COVID-19 dataset with augmented features and the original data.}\label{fig:Epoch_Loss_Two_Dataset}
\end{figure}
\subsubsection{Effect of Dataset with Augmented Features}
To validate the effectiveness of the proposed COVID-19 dataset with augmented features, we train the best-generated model (M4) on the new dataset and compare it with the original one. The learning curves for the two settings are illustrated in Fig.~\ref{fig:Epoch_Loss_Two_Dataset}.
As is evident in Fig.~\ref{fig:Epoch_Loss_Two_Dataset}, the validation loss on the single-feature data does not improve much after the 1000th epoch. In contrast, when learning the new dataset, the model keeps an almost steadily decreasing training and validation loss until the last epoch, and the final validation loss of the model trained on the new data is lower than that of the model trained on the original data. The training and validation losses for the original and new datasets are provided in Table~\ref{tab:NewDataOldDataTrainValidation}.
Another important observation from Fig.~\ref{fig:Epoch_Loss_Two_Dataset} is that the best model overfits the original data shortly after about the 400th epoch. This shows that the model could not find sufficiently discriminative features in the original training data to improve its forecasting accuracy on the validation data.
\begin{table}[h]
\renewcommand{\arraystretch}{1.1}
\tabcolsep=0.08cm
\begin{center}
\caption{Mean train and validation RMSE loss obtained by 3 individual runs over the new dataset with augmented features and the original data.}\label{tab:NewDataOldDataTrainValidation}
\vspace{0em}
\scalebox{1} {
\begin{tabular}{lcc}
\hline
Model & \hspace{1.1em}Train Loss & \hspace{1.1em}Validation Loss \\
\hline
M4+Our Dataset &\hspace{1.1em} \textbf{1.34e-3} &\hspace{1.1em} \textbf{1.64e-3} \\
M4+Original Dataset &\hspace{1.1em} 3.05e-3 &\hspace{1.1em} 3.39e-3 \\
\hline
\end{tabular}
}
\end{center}
\vspace{-0.0em}
\end{table}
\section{Conclusions}\label{conclusionssection}
In this paper, we proposed a new approach and dataset to forecast daily COVID-19 cases more accurately than common approaches based on trial and error in finding an acceptable architecture. We also discussed the limitations caused by the data-hungry nature of deep learning models and investigated how the proposed approach can find an architecture that provides a promising solution despite limited data such as COVID-19 case counts. To validate our proposed approach, NAS-BBA, we presented a set of detailed experiments and compared the results of different custom architectures with those obtained by the NAS-BBA framework. In all cases, the results validate the proposed approach's effectiveness in finding the best deep architecture for forecasting COVID-19 cases. Finally, we trained the best-generated model on both the original data and the dataset proposed in this paper; the results indicate that our dataset with augmented features significantly improves the model.
Several topics arising from this paper deserve further study by the research community. First, as mentioned in the experiments section, the deep model training phase used to evaluate the fitness of BBA individuals is time-consuming; an alternative training method that is both faster and more accurate could be explored. Second, the impact of other hyperparameters, such as the learning rate, the regularization lambda coefficient, or the optimization method, on the final model can be studied, and alternative optimization methods \cite{Rahbar2020} can be utilized for tuning the hyperparameters. Lastly, in this paper we employed vanilla LSTM units to forecast COVID-19 cases; future research could study other variants of LSTM, other recurrent units, and advanced structures such as combinations of convolutional and recurrent neural networks.
\bibliographystyle{unsrtnat}
\section{Introduction}
\label{sec:intro}
Gravitational clustering and nonlinear bias induce a non-Gaussianity in the large-scale structure (LSS) of the Universe that can be quantified by higher-order correlation functions, like the 3-point correlation function (3PCF) and its Fourier counterpart, the bispectrum.
The measurement of the galaxy bispectrum in redshift surveys can contribute additional constraining power towards a wide range of science goals, improving on what is achievable using only 2-point correlations, like the power spectrum.
To date, the most precise measurements of the galaxy bispectrum and 3PCF are from the SDSS Baryon Oscillation Spectroscopic Survey \cite{Gil-Marin:2014sta,Gil-Marin:2014baa,Gil-Marin:2016wya,Slepian:2016kfz,Pearson:2017wtw,PearsonSamushia2018errata,Sugiyama:2018yzo}.
In the near future, spectroscopic galaxy surveys like
DESI,\footnote{\url{https://www.desi.lbl.gov}}
Euclid,\footnote{\url{https://www.euclid-ec.org}}
SPHEREx,\footnote{\url{https://spherex.caltech.edu}}
and the Roman Space Telescope\footnote{\url{https://wfirst.ipac.caltech.edu}}
\cite{Levi:2013gra,Laureijs:2011gra,Dore:2014cca,Spergel:2015sza}
will map galaxy distributions over larger areas of the sky, to higher redshifts, and with more precision than before, opening up new opportunities for higher-order galaxy clustering statistics to be used as stronger probes of $\Lambda$CDM (the standard cosmological model dominated by a cosmological constant called $\Lambda$ and cold dark matter), dark energy,
modified gravity theories, primordial non-Gaussianity, and massive neutrinos (e.g.~\cite{
Chan:2016ehg,Byun:2017fkz,
Song:2015gca,Gagrani:2016rfy,Yankelevich:2018uaz,Gualdi:2020ymf,Agarwal:2020lov,
Yamauchi:2017ibz,Bose:2018zpk,Bose:2019wuz,
Tellarini:2016sgp,Karagiannis:2018jdt,
Ruggeri:2017dda,Hahn:2019zob}).
Bispectrum data sets are naturally much larger than for power spectra, because they typically capture the correlation amplitudes for a large number of triangle bins, $B(k_1,k_2,k_3)$, rather than being a function of only one wavenumber, like $P(k)$.
This presents practical challenges that can potentially limit the full exploitation of the data that will be available from upcoming LSS surveys.
One particularly acute challenge for bispectrum analyses is the large number of mock catalogs that are typically necessary to accurately estimate the large data covariance matrices for galaxy clustering analyses.
One remedy to this problem has been to explore the accuracy of fast, approximate simulation codes that reduce the computational resources needed for generating mocks \cite{Monaco:2016pys,Colavincenzo:2018cgf}.
Alternatively, it may be possible to obtain equivalent covariance matrices using fewer or smaller volume mocks (e.g.~\cite{
Joachimi:2016xhk,
FriedrichEifler2018,Hall:2018umb,
Pearson:2015gca,
Howlett:2017vwp})
or even no mocks, if an accurate theoretical model of the covariance matrix is available (e.g.~\cite{Mohammed:2016sre,Sugiyama:2019ike,Wadekar:2019rdu,Taruya:2020qoy}).
A third strategy that has been pursued in tandem is to develop methods that compress the information contained in the bispectrum into smaller, more manageable data sets.
Much work to date falls into this last category of bispectrum compression methods.
Using the standard bispectrum estimator, choosing to use wider wavenumber bins reduces the total number of triangle bins, but at the same time erases some of the triangle-dependence that encodes cosmological information.
Other compression methods and compressed bispectrum observables include:
Karhunen-Lo\`{e}ve compression of the standard bispectrum estimator \cite{Gualdi:2017iey,Gualdi:2018pyw}
(which is similar to the MOPED algorithm \cite{Heavens:1999am,Heavens:2020spq}),
subspace projection of the standard bispectrum estimator \cite{Philcox:2020zyp},
binning triangles based on their geometrical properties \cite{Gualdi:2019ybt,Gualdi:2019sfc},
skew-spectra \cite{Pratten:2011kh,Schmittfull:2014tca,MoradinezhadDizgah:2019xun},
position-dependent power spectra (also called integrated bispectra) \cite{Chiang:2014oga,Chiang:2015eza,Chiang:2015pwa},
line correlation functions \cite{Obreschkow:2012yb,Wolstenhulme:2014cla,Eggemeier:2015ifa,Eggemeier:2016asq,Franco:2018yag,Ali:2018sdk,Byun:2020hun},
and the modal bispectrum.
The focus of this work is to implement and explore the last of these, and to compare it with a standard bispectrum analysis.
The modal bispectrum describes the bispectrum as a linear combination of smooth 3-dimensional basis functions, such that the observable data are the expansion coefficients over a chosen basis.
If the bispectrum is relatively smooth, and the chosen basis is suitable for describing changes in the bispectrum induced by the model parameters we wish to constrain, we expect that the modal expansion coefficients will provide an efficient compression of the cosmological information that is typically distributed over a large number of triangle bins.
The modal expansion method was originally developed in the context of primordial non-Gaussianity in the cosmic microwave background \cite{Fergusson:2009nv,Fergusson:2010dm,Ade:2013ydc} before it was adapted for the LSS bispectrum \cite{Fergusson:2010ia,Regan:2011zq,Schmittfull:2012hq}.
It has been used to test and develop theoretical models of the matter bispectrum \cite{Schmittfull:2012hq,Lazanu:2015rta,Lazanu:2015bqo} and compare the matter bispectra measured from different dark matter simulation codes \cite{Hung:2019ygc} and mock-making prescriptions \cite{Hung:2019nma}.
These previous works have been in real space, and an extension of the modal expansion method to redshift space was outlined in \cite{Regan:2017vgi}.
The first direct comparison between the modal bispectrum and the standard bispectrum estimators was in \cite{Byun:2017fkz}, where Fisher-forecasted constraints on $\Lambda$CDM cosmological parameters and galaxy bias from the real-space matter modal bispectrum and standard bispectrum estimators were shown to be equivalent.
For this work, we have implemented a new modal bispectrum analysis pipeline that uses the real-space halo bispectrum to constrain galaxy bias and shot noise parameters.
This builds on previous work by implementing the modal bispectrum method in a Markov chain Monte Carlo (MCMC) analysis pipeline for the first time.
Where possible, we have adhered closely to the standard bispectrum analysis in \cite{Oddo:2019run}, so the two estimators can be rigorously compared.
In the process of implementing the new modal bispectrum pipeline, we have explored several technical details of the modal method's implementation that have not previously been presented in the literature, and we discuss how the modal estimator pipeline is sensitive (or not) to these details.
The main message of this work is that the modal bispectrum estimator provides an extremely efficient compression of the information contained within the halo bispectrum, resulting in parameter constraints that are at least as strong as the standard bispectrum estimator.
Depending on the specific settings within the modal bispectrum pipeline, we find that as few as 10 modal expansion coefficients are necessary for galaxy bias and shot noise parameter constraints to converge when the smallest scale is given by $k_{\rm max} \approx 0.10 \, h\,{\rm Mpc}^{-1}$.
While the modal decomposition method does require some additional calculations and machinery, we show that this overhead is small, especially when compared to the benefits of having a highly compressed data set.
For example, in this work we use the modal bispectrum pipeline to show how bispectrum constraints can depend on the number of mock catalogs that are used to estimate the covariance matrix---a result that has not been attempted using the standard bispectrum estimator due to the extremely large number of mocks that would be required.
The outline of the paper is as follows.
In Section \ref{sec:method} we review the modal decomposition method.
In Section \ref{sec:data_and_analysis} we describe the data that we use, our likelihood modeling, and MCMC simulation details.
We discuss our new results in Section \ref{sec:results} and summarize our main conclusions in Section \ref{sec:conclusions}.
\section{Modal decomposition method}
\label{sec:method}
The basic premise of the modal decomposition method is that the bispectrum is a relatively smooth function of Fourier-space triangles, so the bispectrum that is normally measured in a large number of triangle bins, $N_{\rm triangles}$, is very well approximated by a linear combination of a smaller number, $N_{\rm modes}$, of basis mode functions.
We write this as
\begin{equation}
w(k_1,k_2,k_3)B(k_1,k_2,k_3) \approx \sum_{n=0}^{N_{\rm modes}-1} \beta^Q_n \, Q_n(k_1,k_2,k_3),
\label{eq:wB_expansion}
\end{equation}
where $w(k_1,k_2,k_3)$ is a weighting function, $B(k_1,k_2,k_3)$ is the bispectrum, $\beta^Q_n$ are a set of modal expansion coefficients, and $Q_n(k_1,k_2,k_3)$ are the modal basis functions.
The modal coefficients therefore correspond to the amplitudes of template bispectra in the data.
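As a toy illustration of Eq.~\eqref{eq:wB_expansion} (the basis below is a simple symmetric polynomial set chosen purely for illustration, not the actual $Q_n$ basis used in the analysis), the modal coefficients of a weighted bispectrum that lies in the span of the modes can be recovered by solving the normal equations of a discrete inner product over triangle bins:

```python
import itertools

# Three symmetric "modes" Q_n(k1, k2, k3), chosen for illustration only.
modes = [
    lambda k1, k2, k3: 1.0,
    lambda k1, k2, k3: k1 + k2 + k3,
    lambda k1, k2, k3: k1 * k2 + k2 * k3 + k3 * k1,
]
true_beta = [2.0, 0.5, -0.3]  # coefficients to recover

# Triangle bins: ordered wavenumbers obeying the triangle condition.
ks = [0.1 * i for i in range(1, 6)]
triangles = [t for t in itertools.combinations_with_replacement(ks, 3)
             if t[2] <= t[0] + t[1]]

# Weighted bispectrum wB built exactly from the modes.
wB = [sum(b * Q(*t) for b, Q in zip(true_beta, modes)) for t in triangles]

# Normal equations G beta = rhs, with the inner product a sum over triangles.
n = len(modes)
G = [[sum(modes[i](*t) * modes[j](*t) for t in triangles) for j in range(n)]
     for i in range(n)]
rhs = [sum(modes[i](*t) * y for t, y in zip(triangles, wB)) for i in range(n)]

# Gaussian elimination with partial pivoting (small dense system).
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(G[r][col]))
    G[col], G[piv] = G[piv], G[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, n):
        f = G[r][col] / G[col][col]
        for c in range(col, n):
            G[r][c] -= f * G[col][c]
        rhs[r] -= f * rhs[col]
beta = [0.0] * n
for r in range(n - 1, -1, -1):
    beta[r] = (rhs[r] - sum(G[r][c] * beta[c]
                            for c in range(r + 1, n))) / G[r][r]

print([round(b, 6) for b in beta])  # → [2.0, 0.5, -0.3]
```

In practice the basis must of course be flexible enough to capture the triangle dependence induced by the model parameters; the point here is only that the coefficients play the role of template amplitudes.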
The optimal estimator for the amplitude of a single bispectrum template for an isotropic and statistically homogeneous density field in the limit of weak non-Gaussianity is \cite{Babich:2005en}
\begin{equation}
\hat{\varepsilon} = \frac{1}{N_\varepsilon}
\int_{\mathbf{k}_1}
\int_{\mathbf{k}_2}
\int_{\mathbf{k}_3}
(2\pi)^3 \delta_D(\mathbf{k}_{123})
B^{\rm template}(k_1,k_2,k_3)
\frac{\delta_{\mathbf{k}_1} \delta_{\mathbf{k}_2} \delta_{\mathbf{k}_3}}
{P(k_1)P(k_2)P(k_3)},
\end{equation}
where we have introduced the shorthand notations $\int_{\mathbf{k}} \equiv \int \frac{{\rm d}^3k}{(2\pi)^3}$ and $\mathbf{k}_{123} \equiv \mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3$ to make our expressions more compact.
$\delta_\mathbf{k}$ is the observed density field on a discretized Fourier-space grid, $P(k_i)$ is the total power spectrum (including the shot noise), and $N_\varepsilon$ is a normalization constant.\footnote{
We choose our Fourier transform convention so that the forward and backward transforms are
$\delta(\mathbf{k}) = \int {\rm d}^3x \, \delta(\mathbf{x}) e^{i \mathbf{k} \cdot \mathbf{x}}$ and
$\delta(\mathbf{x}) = (2\pi)^{-3} \int {\rm d}^3k \, \delta(\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}}$.
}
After defining the quantity
\begin{equation}
\hat{\mathcal{B}}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) \equiv \frac{\delta_{\mathbf{k}_1} \delta_{\mathbf{k}_2} \delta_{\mathbf{k}_3}}{V} \mathbf{1}_{\mathbf{k}_{123}},
\end{equation}
where $V$ is the survey volume and $\mathbf{1}_{\mathbf{k}_{123}}$ is a Kronecker symbol that is unity if $\mathbf{k}_{123}=0$ and zero otherwise,
we see that the amplitude estimator is effectively a normalized weighted inner product between $B^{\rm template}$ and $\hat{\mathcal{B}}$,
\begin{equation}
\hat{\varepsilon} = \frac{V}{N_\varepsilon} \llangle wB^{\rm template} | w\hat{\mathcal{B}} \rrangle,
\end{equation}
where the definition of the inner product is
\begin{equation}
\llangle wB^{\rm template} | w\hat{\mathcal{B}} \rrangle \equiv
\int_{\mathbf{k}_1}
\int_{\mathbf{k}_2}
\int_{\mathbf{k}_3}
(2\pi)^3 \delta_D(\mathbf{k}_{123})
\frac{wB^{\rm template}(k_1,k_2,k_3) w\hat{\mathcal{B}}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3)}
{k_1k_2k_3}
\label{eq:innerprod}
\end{equation}
and the weighting function is
\begin{equation}
w(k_1,k_2,k_3) \equiv \frac{\sqrt{k_1k_2k_3}}{\sqrt{P(k_1)P(k_2)P(k_3)}}.
\label{eq:w}
\end{equation}
Then the normalization constant must be $N_\varepsilon \equiv V \llangle wB^{\rm template} | wB^{\rm template} \rrangle$ so that the ensemble average of the estimated amplitude $\langle\hat{\varepsilon}\rangle \rightarrow 1$ if $\langle\hat{\mathcal{B}}\rangle \rightarrow B^{\rm template}$.
If the ensemble average $\langle \hat{\mathcal{B}}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) \rangle = B^{\rm obs}(k_1,k_2,k_3)$, we can perform the angular integrals in the inner product analytically, following steps detailed in \cite{Fergusson:2010ia}, which we also review here.
We first use
\begin{equation}
\delta_D(\mathbf{k}_{123}) = \frac{1}{(2\pi)^3} \int {\rm d}^3x \, e^{i\mathbf{k}_{123}\cdot\mathbf{x}}
\label{eq:delta}
\end{equation}
and rewrite the exponential part as
\begin{equation}
e^{i\mathbf{k}\cdot\mathbf{x}} =
4\pi \sum_{\ell m} i^\ell j_\ell(kx)
Y_{\ell m}(\hat{\mathbf{k}}) Y_{\ell m}^*(\hat{\mathbf{x}})
\end{equation}
to get
\begin{align}
\llangle wB^{\rm template} | wB^{\rm obs} \rrangle =
\int {\rm d}^3x (4\pi)^3
&\left[
\int_{\mathbf{k}_1}
\sum_{\ell_1 m_1} i^{\ell_1} j_{\ell_1}(k_1x)
Y_{\ell_1 m_1}(\hat{\mathbf{k}}_1) Y_{\ell_1 m_1}^*(\hat{\mathbf{x}})
\right] \nonumber \\
\times &\left[
\int_{\mathbf{k}_2}
\sum_{\ell_2 m_2} i^{\ell_2} j_{\ell_2}(k_2x)
Y_{\ell_2 m_2}(\hat{\mathbf{k}}_2) Y_{\ell_2 m_2}^*(\hat{\mathbf{x}})
\right] \nonumber \\
\times &\left[
\int_{\mathbf{k}_3}
\sum_{\ell_3 m_3} i^{\ell_3} j_{\ell_3}(k_3x)
Y_{\ell_3 m_3}(\hat{\mathbf{k}}_3) Y_{\ell_3 m_3}^*(\hat{\mathbf{x}})
\right] \nonumber \\
\times &
\frac{wB^{\rm template}(k_1,k_2,k_3) wB^{\rm obs}(k_1,k_2,k_3)}
{k_1k_2k_3}.
\label{eq:inner product with jYY}
\end{align}
The integral over $\hat{\mathbf{k}}_i$ inside each pair of square brackets is\footnote{Our spherical harmonics are normalized such that $\int {\rm d}\Omega_{\mathbf{k}} |Y_{\ell m}(\hat{\mathbf{k}})|^2 = 1$ and $Y_{00} = 1/\sqrt{4\pi}$.}
\begin{equation}
\int {\rm d}\Omega_{\mathbf{k}_i} Y_{\ell_i m_i}(\hat{\mathbf{k}}_i) = \sqrt{4\pi}\delta_{\ell_i0}\delta_{m_i0},
\end{equation}
which forces all $\ell_i$ and $m_i$ in eq.~\eqref{eq:inner product with jYY} to be zero, giving
\begin{align}
\llangle wB^{\rm template} | wB^{\rm obs} \rrangle = \int {\rm d}^3x \frac{(4\pi)^{9/2}}
{(2\pi)^9} & \left[ \int {\rm d}k_1 \, k_1^2 \, j_0(k_1x) Y_{00}(\hat{\mathbf{x}}) \right] \nonumber \\
\times & \left[ \int {\rm d}k_2 \, k_2^2 \, j_0(k_2x) Y_{00}(\hat{\mathbf{x}}) \right] \nonumber \\
\times & \left[ \int {\rm d}k_3 \, k_3^2 \, j_0(k_3x) Y_{00}(\hat{\mathbf{x}}) \right] \nonumber \\
\times & \frac{wB^{\rm template}(k_1,k_2,k_3) wB^{\rm obs}(k_1,k_2,k_3)}{k_1k_2k_3}.
\label{eq: inner product with jY}
\end{align}
In the final step, integration over $\mathbf{x}$ using
\begin{align}
\int {\rm d}x \,x^2 j_0(k_1x)j_0(k_2x)j_0(k_3x) &= \frac{\pi}{4k_1k_2k_3} \\
\int {\rm d}\Omega_{\mathbf{x}} Y_{00}(\hat{\mathbf{x}})^3 &= \frac{1}{\sqrt{4\pi}},
\end{align}
where the first integral holds when $(k_1,k_2,k_3)$ can form a closed triangle and vanishes otherwise,
shows that the inner product is
\begin{eqnarray}
\llangle wB^{\rm template} | wB^{\rm obs} \rrangle = \frac{1}{8\pi^4}
\int_{\mathcal{V}_T} {\rm d}k_1 \, {\rm d}k_2 \, {\rm d}k_3
\, wB^{\rm template}(k_1,k_2,k_3) wB^{\rm obs}(k_1,k_2,k_3),
\label{eq:inner product tetrapyd}
\end{eqnarray}
where the subscript $\mathcal{V}_T$ signifies that the 3-dimensional integral covers only the volume, sometimes called a \textit{tetrapyd}, in which $(k_1,k_2,k_3)$ can form a closed triangle.
Therefore, estimating the amplitude of a given template bispectrum is closely related to calculating a weighted inner product between the template and observed bispectrum over the tetrapyd space.
In previous literature on the modal bispectrum, the inner product in eq.~\eqref{eq:inner product tetrapyd} is sometimes written in terms of $(x_1,x_2,x_3)$ instead of $(k_1,k_2,k_3)$, where $x_i \equiv (k_i-k_{\rm min})/(k_{\rm max}-k_{\rm min})$, such that the allowed $(x_1,x_2,x_3)$ form a tetrapyd that fits inside of a unit cube.
We will sometimes use a different notation to define the inner product over this unit tetrapyd,
\begin{equation}
\langle f | g \rangle \equiv \int_{\mathcal{V}_T} {\rm d}x_1 \, {\rm d}x_2 \, {\rm d}x_3 \, f(x_1,x_2,x_3) g(x_1,x_2,x_3).
\end{equation}
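Since the tetrapyd is simply the subset of the unit cube where the triangle conditions hold, this inner product is straightforward to approximate numerically. The following Python sketch (an illustration only, not one of the integration methods used later in this work) evaluates $\langle f | g \rangle$ with a midpoint rule; as a sanity check, $\langle 1 | 1 \rangle$ returns the volume of the unit tetrapyd, which is exactly $1/2$.

```python
import numpy as np

def tetrapyd_inner_product(f, g, n=100):
    """Midpoint-rule estimate of <f|g> over the unit tetrapyd:
    the region of [0,1]^3 where (x1, x2, x3) can close a triangle."""
    x = (np.arange(n) + 0.5) / n
    x1, x2, x3 = np.meshgrid(x, x, x, indexing="ij")
    # Triangle condition: no side may exceed the sum of the other two.
    mask = (x1 <= x2 + x3) & (x2 <= x1 + x3) & (x3 <= x1 + x2)
    return np.sum(f(x1, x2, x3) * g(x1, x2, x3) * mask) / n**3

one = lambda x1, x2, x3: np.ones_like(x1)
vol = tetrapyd_inner_product(one, one)   # volume of the unit tetrapyd: 1/2
```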
\subsection{Modal estimator}
\label{subsec:estimator}
In this work, it is not the amplitude of one template that we are interested in, but the expansion coefficients of a general bispectrum on a chosen set of basis functions.
In this case, we make the replacement $wB \rightarrow w\hat{\mathcal{B}}$ in eq.~\eqref{eq:wB_expansion} and take the inner product of both sides with $Q_m$ to obtain
\begin{equation}
\llangle Q_m | w\hat{\mathcal{B}} \rrangle = \sum_{n=0}^{N_{\rm modes}-1} \hat{\beta}^Q_n \, \gamma_{nm},
\label{eq:modal_linear_eq}
\end{equation}
where we have defined the positive-definite symmetric matrix $\gamma_{nm} \equiv \llangle Q_n | Q_m \rrangle$.\footnote{In \cite{Byun:2017fkz}, the inner product over the unit tetrapyd, $\overline{\gamma} \equiv \langle Q|Q \rangle$, was also defined and used to convert between $\gamma$ matrices calculated over different $k$-ranges, given by $k_{\rm min}$ and $k_{\rm max}$.
This was motivated by the assumption that $\overline{\gamma}$ could be computed once, and thereafter $\gamma$ over a general $k$-range could be computed as $\gamma = (k_{\rm max}-k_{\rm min})^3/(8\pi^4) \overline{\gamma}$.
However, we note that this was not quite correct; this rescaling can only be done when $k_{\rm min}=0$, because in general $\gamma$ depends on $k_{\rm min}$ and $k_{\rm max}$ in a way that cannot be factored out.
We thank Dionysios Karagiannis for noticing this.}
To estimate the modal coefficients, $\hat{\beta}^Q_n$, we first measure the $N_{\rm modes}$ inner products, $\llangle Q_n | w\hat{\mathcal{B}} \rrangle$ on the left-hand side, and solve the linear matrix equation in eq.~\eqref{eq:modal_linear_eq}.
To make the measurement of
\begin{eqnarray}
\llangle Q_n | w\hat{\mathcal{B}} \rrangle
&=& \frac{1}{V}
\int_{\mathbf{k}_1}
\int_{\mathbf{k}_2}
\int_{\mathbf{k}_3}
(2\pi)^3 \delta_D(\mathbf{k}_{123})
\frac{Q_n(k_1,k_2,k_3) \, \delta_{\mathbf{k}_1} \delta_{\mathbf{k}_2} \delta_{\mathbf{k}_3}}
{\sqrt{k_1k_2k_3}\sqrt{P(k_1)P(k_2)P(k_3)}}
\label{eq:QwB estimator}
\end{eqnarray}
computationally tractable, we require that the $Q_n$ basis functions can be written in separable form as a product of three 1-dimensional functions,
\begin{equation}
Q_n(k_1,k_2,k_3) = q_{\{p}(k_1) q_r(k_2)q_{s\}}(k_3).
\end{equation}
The $p$, $r$, and $s$ subscripts on the right side index the different 1-dimensional functions that we have chosen, and the curly brackets require that the $Q_n$ functions are invariant to permutations of $k_1$, $k_2$, and $k_3$.
In Appendix \ref{app:1d qn}, we give additional details on how we compute the $q_n(k)$ from either normal or Legendre polynomials and how we have chosen the mapping between $\{ prs \} \leftrightarrow n$.
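The symmetrization over $\{prs\}$ can be sketched as follows; here we take the average over the $3!$ orderings (whether the curly-bracket notation absorbs the $1/3!$ factor is a normalization convention), and the 1-dimensional functions shown are simple placeholders, not our actual basis.

```python
import numpy as np
from itertools import permutations

def make_Q(q_p, q_r, q_s):
    """Separable, permutation-symmetric mode built from three 1-d
    functions, averaged over the 3! orderings of (k1, k2, k3)."""
    def Q(k1, k2, k3):
        ks = (k1, k2, k3)
        return np.mean([q_p(ks[i]) * q_r(ks[j]) * q_s(ks[l])
                        for i, j, l in permutations(range(3))])
    return Q

# Placeholder 1-d basis functions, for illustration only.
Q = make_Q(lambda k: 1.0, lambda k: k, lambda k: k**2)
vals = [Q(0.1, 0.2, 0.3), Q(0.3, 0.1, 0.2), Q(0.2, 0.3, 0.1)]  # all equal
```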
Taking advantage of the separability of $Q_n$ and using eq.~\eqref{eq:delta} to rewrite the delta function in its exponential form,
the final expression for $\llangle Q_n | w\hat{\mathcal{B}} \rrangle$ simplifies into a computationally tractable expression written concisely as
\begin{equation}
\llangle Q_n | w\hat{\mathcal{B}} \rrangle = \frac{1}{V} \int {\rm d}^3x \, M_{\{p}(\mathbf{x})M_r(\mathbf{x})M_{s\}}(\mathbf{x}),
\label{eq:QwB estimator with Mrx}
\end{equation}
where
\begin{eqnarray}
M_r(\mathbf{x}) \equiv \int \frac{{\rm d}^3k}{(2\pi)^3} \frac{e^{i\mathbf{k}\cdot\mathbf{x}}}{\sqrt{kP(k)}} \, q_r(k) \, \delta_{\mathbf{k}}.
\label{eq:Mrx}
\end{eqnarray}
Therefore, provided the basis of $Q_n$ functions is multiplicatively separable, the inner product can be computed very efficiently using fast Fourier transform (FFT) routines (such as the FFTW\footnote{\url{http://www.fftw.org}} or Intel MKL\footnote{\url{https://software.intel.com/content/www/us/en/develop/tools/math-kernel-library.html}} libraries).
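The identity underlying this efficiency is that a sum over closed triangles, $\mathbf{k}_{123}=0$, becomes a real-space integral of a product of Fourier transforms. The toy check below (a discrete analogue on a tiny grid, with arbitrary complex data standing in for the weighted density field) verifies this against a brute-force $O(N^6)$ sum.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
f = rng.normal(size=(N, N, N)) + 1j * rng.normal(size=(N, N, N))

# Brute force: sum f(k1) f(k2) f(k3) over closed triangles k1+k2+k3 = 0 (mod N).
idx = [(i, j, l) for i in range(N) for j in range(N) for l in range(N)]
direct = 0.0 + 0.0j
for k1 in idx:
    for k2 in idx:
        k3 = tuple((-(a + b)) % N for a, b in zip(k1, k2))
        direct += f[k1] * f[k2] * f[k3]

# FFT version: M(x) = sum_k f(k) exp(+ik.x) = N^3 ifftn(f),
# then the constrained sum is sum_x M(x)^3 / N^3.
M = N**3 * np.fft.ifftn(f)
fft_version = np.sum(M**3) / N**3
```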
We note that this estimator requires minimal modifications to the standard bispectrum estimator, which takes the form
\begin{align}
\hat{B}(k_1,k_2,k_3) =& \frac{V}{N_\triangle(k_1,k_2,k_3)}
\int_{\mathbf{q}_1}
\int_{\mathbf{q}_2}
\int_{\mathbf{q}_3}
(2\pi)^3 \delta_D(\mathbf{q}_{123})
\tilde{\Pi}_{k_1}(\mathbf{q}_1)
\tilde{\Pi}_{k_2}(\mathbf{q}_2)
\tilde{\Pi}_{k_3}(\mathbf{q}_3)
\delta_{\mathbf{q}_1} \delta_{\mathbf{q}_2} \delta_{\mathbf{q}_3} \nonumber \\
=& \frac{V}{N_\triangle(k_1,k_2,k_3)} \int {\rm d}^3x
\, \mathcal{D}_{k_1}(\mathbf{x}) \mathcal{D}_{k_2}(\mathbf{x}) \mathcal{D}_{k_3}(\mathbf{x}),
\label{eq:B estimator}
\end{align}
where
\begin{equation}
\mathcal{D}_k(\mathbf{x}) \equiv \int \frac{{\rm d}^3q}{(2\pi)^3} \, e^{i \mathbf{q} \cdot \mathbf{x}} \, \tilde{\Pi}_k(\mathbf{q}) \, \delta_{\mathbf{q}},
\end{equation}
and $\tilde{\Pi}_k(\mathbf{q})$ is a binning function that is 1 if $|\mathbf{q}| \in [ k - \Delta k/2, k + \Delta k/2 ]$ and zero otherwise.
$N_\triangle$ is the number of $(\mathbf{q}_1,\mathbf{q}_2,\mathbf{q}_3)$ triangles that are averaged inside a $(k_1,k_2,k_3)$ triangle bin,
\begin{align}
N_\triangle(k_1,k_2,k_3) &\equiv V^2
\int_{\mathbf{q}_1}
\int_{\mathbf{q}_2}
\int_{\mathbf{q}_3}
\delta_D(\mathbf{q}_{123}) \,
\tilde{\Pi}_{k_1}(\mathbf{q}_1)
\tilde{\Pi}_{k_2}(\mathbf{q}_2)
\tilde{\Pi}_{k_3}(\mathbf{q}_3) \nonumber \\
&= \frac{V^2}{(2\pi)^3} \int {\rm d}^3x \, \Pi_{k_1}(\mathbf{x}) \Pi_{k_2}(\mathbf{x}) \Pi_{k_3}(\mathbf{x}),
\end{align}
where $\Pi_k(\mathbf{x})$ is the inverse Fourier transform of $\tilde{\Pi}_k(\mathbf{q})$.
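The normalization $N_\triangle$ admits the same FFT treatment: counting the closed triangles with each leg in its bin is a triple product of the binning functions. A toy cross-check on a tiny grid, with a single illustrative bin, confirms the count against a direct loop.

```python
import numpy as np

N = 6
q = np.fft.fftfreq(N) * N                  # integer wavenumbers on the grid
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")
q2 = qx**2 + qy**2 + qz**2
Pi = ((q2 >= 1.0) & (q2 < 6.25)).astype(float)   # one example |q| bin

# FFT count: with Pi_x(x) = N^3 ifftn(Pi), sum_x Pi_x(x)^3 / N^3 counts triangles.
Pi_x = N**3 * np.fft.ifftn(Pi)
n_fft = np.real(np.sum(Pi_x**3)) / N**3

# Brute-force count over all wavevector pairs, with q3 fixed by closure.
idx = [(i, j, l) for i in range(N) for j in range(N) for l in range(N)]
n_direct = sum(Pi[q1] * Pi[q2]
               * Pi[tuple((-(a + b)) % N for a, b in zip(q1, q2))]
               for q1 in idx for q2 in idx)
```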
Comparing the standard bispectrum estimator with the estimator for $\llangle Q_n | w\hat{\mathcal{B}} \rrangle$, we see that they both require very similar computational steps, and the modal estimator recovers the bispectrum estimator by making the replacement $q_r(k) / \sqrt{kP(k)} \rightarrow \tilde{\Pi}_k (\mathbf{q})$.
The critical difference is that, for a single realization, the bispectrum estimator must be evaluated once per $(k_1,k_2,k_3)$ triangle bin, whereas the modal estimator is evaluated once per basis function $Q_n$.
The memory requirements also differ: the bispectrum estimator needs a full Fourier grid for each $k_i$ bin to store the corresponding $\mathcal{D}_{k_i}(\mathbf{x})$, while the modal estimator needs one grid per 1-dimensional basis function $q_r$ to store $M_r(\mathbf{x})$.
Since $N_{\rm modes}$ is typically much smaller than the number of triangle bins, the modal estimator is usually more computationally efficient than the standard bispectrum estimator.
Once the $\llangle Q_n | w\hat{\mathcal{B}} \rrangle$ have been measured, we estimate $\hat{\beta}^Q_n$ by numerically solving the linear equation in eq.~\eqref{eq:modal_linear_eq}.
Using the $\hat{\beta}^Q$ coefficients, we can also calculate a reconstructed bispectrum as
\begin{equation}
B_{\rm rec} = \frac{1}{w} \sum_n \hat{\beta}^Q_n \, Q_n.
\label{eq:Brec}
\end{equation}
Later, in Section \ref{subsec:Brec}, we compare this bispectrum, $B_{\rm rec}$, with the bispectrum measured using the standard estimator, $\hat{B}$.
We note that the $\gamma$ matrix only needs to be computed once for a desired wavenumber range $(k_{\rm min},k_{\rm max})$ and choice of $Q_n$ basis functions.
Different methods for calculating the inner products in $\gamma$ have been discussed to date in the literature, and one of the goals of this work is to compare these methods.
In the next section, we summarize the inner product methods that we implement and compare in this work.
\subsection{Inner product methods}
\label{subsec:gamma_methods}
Here, we briefly describe the four different methods we have implemented in this work for calculating the inner product matrix
\begin{eqnarray}
\gamma_{nm} \equiv \llangle Q_n | Q_m \rrangle = \frac{1}{8\pi^4}
\int_{\mathcal{V}_T} {\rm d}k_1 \, {\rm d}k_2 \, {\rm d}k_3
\, Q_n(k_1,k_2,k_3) \, Q_m(k_1,k_2,k_3).
\label{eq:QnQm_tetrapyd}
\end{eqnarray}
\subsubsection*{Monte Carlo integration}
This method uses the Monte Carlo algorithm called Vegas included in the public Cuba library for multidimensional numerical integration \cite{Hahn:2004fe,Hahn:2014fua} to calculate the inner product via random sampling of the 3-dimensional tetrapyd space.
The free parameters for this method are the convergence tolerance and the maximum number of samples.
This method converges very slowly and, despite different choices of the integration parameters, we find it quite challenging to avoid a non-positive-definite $\gamma$, which cannot be used in the modal analysis pipeline.
\subsubsection*{Voxels}
This method divides the cubic volume of $(k_1,k_2,k_3)$ from $k_{\rm min}$ up to $k_{\rm max}$ into a grid of smaller cubes, called \textit{voxels}, and calculates eq.~\eqref{eq:QnQm_tetrapyd} by integrating over each voxel using tri-linear interpolation of the integrand within each voxel (as described in Appendix A2 of \cite{Byun:2017fkz}).
Because of the shape of the tetrapyd volume, some care must be taken to properly integrate over voxels that intersect with the tetrapyd boundary.
The only free parameter of this method is the grid resolution set by the number of individual voxels, $N_v$, spanning the chosen $k$-range in each of the three dimensions.
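A stripped-down version of this approach (midpoint sampling of the tetrapyd, without the sub-voxel boundary treatment of the full method, and with simple placeholder modes rather than our actual basis) illustrates how a small $\gamma$ matrix is assembled and checked for positive definiteness via a Cholesky factorization.

```python
import numpy as np

kmin, kmax, Nv = 0.01, 0.1, 40
k = kmin + (np.arange(Nv) + 0.5) * (kmax - kmin) / Nv
k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
mask = (k1 <= k2 + k3) & (k2 <= k1 + k3) & (k3 <= k1 + k2)   # tetrapyd
dV = ((kmax - kmin) / Nv) ** 3

# Three placeholder symmetric modes (illustrative only), scaled to O(1).
modes = [np.ones_like(k1),
         (k1 + k2 + k3) / (3 * kmax),
         (k1 * k2 * k3) / kmax**3]

gamma = np.array([[np.sum(Qn * Qm * mask) * dV / (8 * np.pi**4)
                   for Qm in modes] for Qn in modes])

chol = np.linalg.cholesky(gamma)  # raises LinAlgError if not positive definite
```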
\subsubsection*{3D FFT}
This method calculates the inner product in the same way that the modal estimator does: we take the expression for $\llangle Q_n | w\hat{\mathcal{B}} \rrangle $ in eq.~\eqref{eq:QwB estimator} and make the replacement $w\hat{\mathcal{B}} \rightarrow Q_m$ to find
\begin{eqnarray}
\llangle Q_n | Q_m \rrangle
&=&
\int_{\mathbf{k}_1}
\int_{\mathbf{k}_2}
\int_{\mathbf{k}_3}
(2\pi)^3 \delta_D(\mathbf{k}_{123})
\frac{Q_n(k_1,k_2,k_3) \, Q_m(k_1,k_2,k_3)}{k_1 k_2 k_3} \\
&=& \int {\rm d}^3x \int_{\mathbf{k}_1} \int_{\mathbf{k}_2} \int_{\mathbf{k}_3}
e^{i \mathbf{k}_{123} \cdot \mathbf{x}}
\,
\frac{q_{\{p}(k_1) q_r(k_2)q_{s\}}(k_3)
\, q_{\{a}(k_1) q_b(k_2)q_{c\}}(k_3)}{k_1 k_2 k_3}.
\end{eqnarray}
Then, similarly to eq.~\eqref{eq:QwB estimator with Mrx}, we write this in a compact form as \cite{Hung:2019ygc}
\begin{eqnarray}
\llangle Q_n | Q_m \rrangle = \frac{1}{6} \int {\rm d}^3x
&& \{M_{pa}(\mathbf{x}) \left[ M_{rb}(\mathbf{x}) M_{sc}(\mathbf{x}) + M_{rc}(\mathbf{x}) M_{sb}(\mathbf{x}) \right] \nonumber \\
&&+ M_{pb}(\mathbf{x}) \left[ M_{rc}(\mathbf{x}) M_{sa}(\mathbf{x}) + M_{ra}(\mathbf{x}) M_{sc}(\mathbf{x}) \right] \nonumber \\
&&+ M_{pc}(\mathbf{x}) \left[ M_{ra}(\mathbf{x}) M_{sb}(\mathbf{x}) + M_{rb}(\mathbf{x}) M_{sa}(\mathbf{x}) \right] \},
\end{eqnarray}
where we have defined
\begin{equation}
M_{pa}(\mathbf{x}) \equiv \int_\mathbf{k} e^{i\mathbf{k}\cdot\mathbf{x}} \, \frac{q_p(k) \, q_a(k)}{k}.
\end{equation}
Like the modal estimator that we have already discussed, this expression for $\llangle Q_n | Q_m \rrangle$ can be computed quickly using existing FFT software.
The free parameters of this method are, as with any discrete Fourier transform, the real-space volume and the FFT grid resolution.
\subsubsection*{1D FFT\footnote{We thank Dionysios Karagiannis for suggesting the 1D FFT method.}} This method computes the inner product using 1-dimensional FFTs by evaluating the expression in eq.~\eqref{eq: inner product with jY} after the replacements $wB^{\rm template} \rightarrow Q_n$ and $wB^{\rm obs} \rightarrow Q_m$.
In this case, using $j_0(k_ix) = \sin(k_ix)/k_ix$ and $Y_{00} = 1/\sqrt{4\pi}$, the inner product becomes
\begin{eqnarray}
\llangle Q_n|Q_m \rrangle = \frac{1}{12\pi^5} \int {\rm d}x \, \frac{1}{x}
&& \{ F_{pa}(x) [ F_{rb}(x) F_{sc}(x) + F_{rc}(x) F_{sb}(x)] \nonumber \\
&&+ F_{pb}(x) \left[ F_{rc}(x) F_{sa}(x) + F_{ra}(x) F_{sc}(x) \right] \nonumber \\
&&+ F_{pc}(x) \left[ F_{ra}(x) F_{sb}(x) + F_{rb}(x) F_{sa}(x) \right] \},
\label{eq:1dfft QQ}
\end{eqnarray}
where
\begin{equation}
F_{pa}(x) \equiv \int {\rm d}k \,q_p(k)\,q_a(k)\,\sin(kx).
\label{eq:1dfft Fx}
\end{equation}
The integral over $k$ in $F_{pa}(x)$ can be performed using 1-dimensional FFTs, as described in Section 13.9 of \textit{Numerical Recipes} \cite{Press1996}, while the outermost 1-dimensional integral over $x$ can be done using standard numerical integration methods (in our case, the Cuhre routine included in the Cuba library).
We have deferred the numerical details of this calculation to Appendix \ref{app:1dfft}.
This method has two free parameters corresponding to the grid resolutions in $k$ and $x$, and we find that the resulting $\gamma$ is positive-definite only when these resolutions are sufficiently high.
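As a simple sanity check of the definition of $F_{pa}(x)$ in eq.~\eqref{eq:1dfft Fx} (using plain quadrature rather than the FFT evaluation used in the actual pipeline), one can take the illustrative case $q_p(k)\,q_a(k)=1$ on $[0,1]$, for which $F(x)=\int_0^1 \sin(kx)\,{\rm d}k = (1-\cos x)/x$.

```python
import numpy as np

k = np.linspace(0.0, 1.0, 20001)
errs = []
for x in (0.3, 1.7, 6.0):
    integrand = np.sin(k * x)
    # Trapezoidal rule for F(x) = int_0^1 sin(k x) dk.
    F_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    errs.append(abs(F_numeric - (1.0 - np.cos(x)) / x))
```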
\subsubsection*{Summary}
We have described four different methods for computing the inner product matrix, $\gamma \equiv \llangle Q|Q \rrangle$.
The Monte Carlo routine (Vegas in the Cuba library) fails to converge to a positive-definite $\gamma$, so we do not use it in the remainder of this work.
The other three methods are implemented independently and give numerically different results for $\gamma$, because each method makes different assumptions and approximations about the inner product.
The voxel and 1D FFT methods assume that the $\mathbf{k}_i$ wavevectors are sampled very finely, such that the inner product is effectively a continuous integral.
These two methods are still completely different in their numerical implementation.
In contrast, the 3D FFT method assumes that each $\mathbf{k}_i$ is discretely sampled in three dimensions, and so it is the only method that accounts explicitly for the discrete sampling of Fourier space, treating this sampling in the same way as the modal estimator applied to the data through $\llangle Q_n | w\hat{\mathcal{B}} \rrangle$ in eq.~\eqref{eq:QwB estimator with Mrx}.
Our benchmark constraints use the 3D FFT method, and part of Section \ref{subsec:checks} investigates whether the methods described here have an impact on the resulting parameter constraints.
\subsection{Bispectrum model and custom modes}
\label{subsec:model}
We use a tree-level standard perturbation theory (SPT) model for the real-space halo bispectrum that matches the modeling in \cite{Oddo:2019run}:
\begin{eqnarray}
B_h(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) = && b_1^3 B_m(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) + b_2 b_1^2 \Sigma(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) + 2 \gamma_2 b_1^2 K(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) \nonumber \\
&& + \frac{1+\alpha_1}{\overline{n}} b_1^2 [P_L(k_1)+P_L(k_2)+P_L(k_3)]
+ \frac{1+\alpha_2}{\overline{n}^2},
\label{eq:Bh}
\end{eqnarray}
where the tree-level matter bispectrum is
\begin{align}
B_m(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) = 2[&F_2(\mathbf{k}_1,\mathbf{k}_2) P_L(k_1) P_L(k_2) \nonumber \\
+ &F_2(\mathbf{k}_1,\mathbf{k}_3) P_L(k_1) P_L(k_3) \nonumber \\
+ &F_2(\mathbf{k}_2,\mathbf{k}_3) P_L(k_2) P_L(k_3)]
\end{align}
and $P_L(k)$ is the linear matter power spectrum.
$b_1$ and $b_2$ are the linear and quadratic bias parameters, while $\gamma_2$ is the tidal bias.
The shot noise terms in the second row of eq.~\eqref{eq:Bh} are parametrized by $\alpha_1$ and $\alpha_2$, such that $\alpha_1=\alpha_2=0$ corresponds to Poissonian shot noise.
The kernel definitions are
\begin{eqnarray}
F_2(\mathbf{k}_1,\mathbf{k}_2) \equiv && \frac{5}{7} + \frac{1}{2} \frac{\mathbf{k}_1 \cdot \mathbf{k}_2}{k_1k_2} \left( \frac{k_1}{k_2} + \frac{k_2}{k_1} \right) + \frac{2}{7} \left( \frac{\mathbf{k}_1 \cdot \mathbf{k}_2}{k_1k_2} \right)^2 \label{eq:F2} \\
\Sigma(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) \equiv && P_L(k_1)P_L(k_2) + P_L(k_1)P_L(k_3) + P_L(k_2)P_L(k_3) \\
K(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3) \equiv && \left[ (\hat{\mathbf{k}}_1 \cdot \hat{\mathbf{k}}_2)^2 - 1\right] P_L(k_1)P_L(k_2) \nonumber \\
&+&\left[ (\hat{\mathbf{k}}_1 \cdot \hat{\mathbf{k}}_3)^2 - 1\right] P_L(k_1)P_L(k_3) \nonumber \\
&+&\left[ (\hat{\mathbf{k}}_2 \cdot \hat{\mathbf{k}}_3)^2 - 1\right] P_L(k_2)P_L(k_3).
\end{eqnarray}
In this work, we consider the M5 model in \cite{Oddo:2019run}, where the cosmological parameters are fixed and we only vary the bias and shot noise parameters, $(b_1,b_2,\gamma_2,\alpha_1,\alpha_2)$.
This allows the theory predictions for the halo bispectrum to be computed very quickly, as a linear combination of precomputed terms corresponding to the $B_m$, $\Sigma$, $K$, and $P_L$ terms in eq.~\eqref{eq:Bh} after accounting for binning effects.
Similarly, in this work we require fast theoretical predictions for the modal coefficients, $\beta^Q_n$, and we achieve this by taking advantage of \textit{custom modes}.
First proposed in \cite{Hung:2019ygc}, custom modes are a set of four separable basis functions that by design reproduce exactly the tree-level matter bispectrum.
We can see that this should be possible by rewriting the $F_2$ perturbation theory kernel in eq.~\eqref{eq:F2} as
\begin{equation}
F_2(k_1,k_2,k_3) = \frac{5}{7} + \frac{1}{2} \left(\frac{k_3^2-k_1^2 - k_2^2}{2k_1k_2}\right)\left(\frac{k_1}{k_2} + \frac{k_2}{k_1}\right) + \frac{2}{7}\left(\frac{k_3^2-k_1^2 - k_2^2}{2k_1k_2}\right)^2,
\end{equation}
where, after expanding this expression, we see that each term will be separable in $k_1$, $k_2$, and $k_3$.
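This separability can be checked numerically: on a closed triangle, $\mathbf{k}_1\cdot\mathbf{k}_2 = (k_3^2-k_1^2-k_2^2)/2$, so the angle-dependent form of the kernel in eq.~\eqref{eq:F2} and the side-length form above must agree.

```python
import numpy as np

rng = np.random.default_rng(1)
k1v, k2v = rng.normal(size=3), rng.normal(size=3)
k3v = -(k1v + k2v)                       # enforce k1 + k2 + k3 = 0
k1, k2, k3 = (np.linalg.norm(v) for v in (k1v, k2v, k3v))

mu = np.dot(k1v, k2v) / (k1 * k2)        # angle-dependent form
F2_vec = 5/7 + 0.5 * mu * (k1/k2 + k2/k1) + (2/7) * mu**2

mu_sep = (k3**2 - k1**2 - k2**2) / (2 * k1 * k2)   # side-length form
F2_sep = 5/7 + 0.5 * mu_sep * (k1/k2 + k2/k1) + (2/7) * mu_sep**2
```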
More specifically, the weighted bispectrum, $wB_m$, can be written as a linear combination of four modes,
\begin{eqnarray}
Q_0^{\rm tree}(k_1,k_2,k_3) &=& q_{\{0}^{\rm tree}(k_1) q_1^{\rm tree}(k_2) q_{1\}}^{\rm tree}(k_3) \\
Q_1^{\rm tree}(k_1,k_2,k_3) &=& q_{\{0}^{\rm tree}(k_1) q_2^{\rm tree}(k_2) q_{3\}}^{\rm tree}(k_3) \\
Q_2^{\rm tree}(k_1,k_2,k_3) &=& q_{\{1}^{\rm tree}(k_1) q_3^{\rm tree}(k_2) q_{4\}}^{\rm tree}(k_3) \\
Q_3^{\rm tree}(k_1,k_2,k_3) &=& q_{\{3}^{\rm tree}(k_1) q_3^{\rm tree}(k_2) q_{5\}}^{\rm tree}(k_3),
\end{eqnarray}
where the custom 1-dimensional basis functions are
\begin{eqnarray}
q_0^{\rm tree}(k) &=& \sqrt{\frac{k}{P(k)}} \frac{5}{14} \label{eq:q0tree} \\
q_1^{\rm tree}(k) &=& \sqrt{\frac{k}{P(k)}} P_L(k) \\
q_2^{\rm tree}(k) &=& -\,\sqrt{\frac{k}{P(k)}} P_L(k) k^2 \\
q_3^{\rm tree}(k) &=& \sqrt{\frac{k}{P(k)}} \frac{P_L(k)}{k^2} \\
q_4^{\rm tree}(k) &=& \sqrt{\frac{k}{P(k)}} \frac{3}{14} k^2 \\
q_5^{\rm tree}(k) &=& \sqrt{\frac{k}{P(k)}} \frac{1}{14} k^4. \label{eq:q5tree}
\end{eqnarray}
We note that it is important to distinguish $P_L(k)$, the linear power spectrum appearing in the tree-level matter bispectrum model, from $P(k)$ (without a subscript), the power spectrum appearing in the definition of the weighting function, $w(k_1,k_2,k_3)$.
In addition to the four custom modes above, in this work we add two more,
\begin{eqnarray}
Q_4^{\rm tree}(k_1,k_2,k_3) &=& q_{\{0}^{\rm tree}(k_1) q_0^{\rm tree}(k_2) q_{1\}}^{\rm tree}(k_3) \\
Q_5^{\rm tree}(k_1,k_2,k_3) &=& q_{\{0}^{\rm tree}(k_1) q_0^{\rm tree}(k_2) q_{0\}}^{\rm tree}(k_3),
\end{eqnarray}
to model the shot noise terms.
With these definitions for six custom modes in total, $Q_n^{\rm tree}$ for $n=0,...,5$, we can reproduce the tree-level halo bispectrum model in eq.~\eqref{eq:Bh} corresponding to values of the parameters $(b_1,b_2,\gamma_2,\alpha_1,\alpha_2)$ by choosing the modal coefficients, $\beta_n^{\rm tree}$, as shown in Table \ref{tab:betaQcustom}, i.e.
\begin{equation}
wB_h(k_1,k_2,k_3) = \sum_{n=0}^{5} \beta_n^{\rm tree} Q_n^{\rm tree}(k_1,k_2,k_3).
\end{equation}
The fact that we can write the model predictions for $\beta_n^{\rm tree}$ as trivial functions of $(b_1,b_2,$ $\gamma_2,\alpha_1,\alpha_2)$ means that, as in the standard bispectrum analysis of \cite{Oddo:2019run}, the calculation of the model predictions is very fast, allowing for MCMC simulations to run quickly.
We state for emphasis that this expansion of the halo bispectrum model is \textit{exact}; it is not an approximation.
In our subsequent analysis, unless explicitly mentioned otherwise, we always use a basis whose first six functions are these $Q_n^{\rm tree}$; from the seventh function onwards, we use the $Q_n$ introduced earlier, constructed from 1-dimensional functions $q_n(k)$ based on either normal or Legendre polynomials.
For a general model of the halo bispectrum or a general cosmological parameter set, it may not be possible to predict theoretical values of $\beta^Q_n$ as quickly as what we use here.
Finding a general strategy to manage this problem is outside the scope of this work, but it is an interesting challenge for future work.
We note that this computational bottleneck has a counterpart in the standard bispectrum pipeline, where it is necessary to quickly calculate predictions for $B(k_1,k_2,k_3)$ in all triangle bins, accounting for the bin size, for a general bispectrum model and parameter set.
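As a concrete cross-check of the exactness of this expansion, the sketch below implements eq.~\eqref{eq:Bh} directly and compares it with $\frac{1}{w}\sum_n \beta_n^{\rm tree} Q_n^{\rm tree}$ using the coefficients of Table \ref{tab:betaQcustom}. The power spectra and parameter values are arbitrary toy choices (the identity holds for any closed triangle), and the curly brackets are taken to denote the average over the $3!$ orderings.

```python
import numpy as np
from itertools import permutations

PL = lambda k: k**-1.5        # toy linear power spectrum
P = lambda k: 1.0 / k         # toy total power spectrum (enters via the weight w)
b1, b2, g2, a1, a2, nbar = 1.9, 0.5, -0.2, 0.1, -0.3, 3e-4

def F2(ka, kb, kc):           # F2 kernel in side-length (closed-triangle) form
    mu = (kc**2 - ka**2 - kb**2) / (2 * ka * kb)
    return 5/7 + 0.5 * mu * (ka/kb + kb/ka) + (2/7) * mu**2

def Bh(k1, k2, k3):           # tree-level halo bispectrum of eq. (Bh)
    P1, P2, P3 = PL(k1), PL(k2), PL(k3)
    Bm = 2 * (F2(k1, k2, k3)*P1*P2 + F2(k1, k3, k2)*P1*P3 + F2(k2, k3, k1)*P2*P3)
    Sig = P1*P2 + P1*P3 + P2*P3
    mu2 = lambda a, b, c: ((c**2 - a**2 - b**2) / (2*a*b))**2
    K = ((mu2(k1, k2, k3) - 1)*P1*P2 + (mu2(k1, k3, k2) - 1)*P1*P3
         + (mu2(k2, k3, k1) - 1)*P2*P3)
    return (b1**3*Bm + b2*b1**2*Sig + 2*g2*b1**2*K
            + (1 + a1)/nbar * b1**2 * (P1 + P2 + P3) + (1 + a2)/nbar**2)

# Custom 1-d basis functions, eqs. (q0tree)-(q5tree).
q = [lambda k: np.sqrt(k/P(k)) * 5/14,
     lambda k: np.sqrt(k/P(k)) * PL(k),
     lambda k: -np.sqrt(k/P(k)) * PL(k) * k**2,
     lambda k: np.sqrt(k/P(k)) * PL(k) / k**2,
     lambda k: np.sqrt(k/P(k)) * 3/14 * k**2,
     lambda k: np.sqrt(k/P(k)) * k**4 / 14]

def Qsym(p, r, s, k1, k2, k3):   # average over the 3! orderings
    ks = (k1, k2, k3)
    return np.mean([q[p](ks[i]) * q[r](ks[j]) * q[s](ks[l])
                    for i, j, l in permutations(range(3))])

patterns = [(0, 1, 1), (0, 2, 3), (1, 3, 4), (3, 3, 5), (0, 0, 1), (0, 0, 0)]
beta = [6*b1**3 + 42/5*b1**2*b2 - 42/5*b1**2*g2,   # Table coefficients
        6*b1**3 - 42/5*b1**2*g2,
        6*b1**3 - 28*b1**2*g2,
        6*b1**3 + 21*b1**2*g2,
        588/25 * b1**2 * (1 + a1)/nbar,
        2744/125 * (1 + a2)/nbar**2]

k1, k2, k3 = 0.05, 0.07, 0.09                      # a closed triangle
w = np.sqrt(k1*k2*k3 / (P(k1)*P(k2)*P(k3)))
B_modal = sum(b * Qsym(pp, rr, ss, k1, k2, k3)
              for b, (pp, rr, ss) in zip(beta, patterns)) / w
```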
\begin{center}
\begin{table}
\begin{center}
{\renewcommand{\arraystretch}{1.5}%
\begin{tabular}{lccccc}
\hline
$\beta_0^{\rm tree}=$ & $6b_1^3$ & $+\frac{42}{5}b_1^2b_2$ & $-\frac{42}{5}b_1^2\gamma_2$ & & \\
$\beta_1^{\rm tree}=$ & $6b_1^3$ & & $-\frac{42}{5}b_1^2\gamma_2$ &&\\
$\beta_2^{\rm tree}=$ & $6b_1^3$ & & $-28b_1^2\gamma_2$ &&\\
$\beta_3^{\rm tree}=$ & $6b_1^3$ & & $+21b_1^2\gamma_2$ &&\\
$\beta_4^{\rm tree}=$ & & & & $\frac{588}{25} b_1^2 \left( \frac{1+\alpha_1}{\overline{n}} \right)$&\\
$\beta_5^{\rm tree}=$ & & & & & $\frac{2744}{125} \left( \frac{1+\alpha_2}{\overline{n}^2} \right)$ \\
\hline
\end{tabular}}
\end{center}
\caption[Theoretical predictions for modal coefficients]{For an input model with free parameters $(b_1,b_2,\gamma_2,\alpha_1,\alpha_2)$, we list the modal coefficients for the six custom modes, $Q_n^{\rm tree}$ for $n=0,...,5$, that reproduce the tree-level halo bispectrum model in eq.~\eqref{eq:Bh}.}
\label{tab:betaQcustom}
\end{table}
\end{center}
\subsection{Orthonormal basis}
\label{subsec:orthonormal}
Finally, in this section we introduce one more set of basis functions that are rotations of any general set of separable $Q_n$ (which may include custom modes).
We label this new basis $R_n$ and call it the orthonormal basis because it satisfies
\begin{equation}
\llangle R_n | R_m \rrangle = \frac{(k_{\rm max}-k_{\rm min})^3}{8\pi^4} \delta_{nm}.
\label{eq:RR equals delta}
\end{equation}
We comment that the factor of $(k_{\rm max}-k_{\rm min})^3/(8\pi^4)$ on the right hand side only changes the overall amplitude of all $R_n$ by a constant factor, and it is only present here because we made the arbitrary choice to require that the $R_n$ basis is orthonormal in the unit tetrapyd space, i.e.~$\langle R_n | R_m \rangle = \delta_{nm}$.
Then we require that $R_n$ is a linear combination of the $Q_n$ basis as
\begin{equation}
R_n \equiv \sum_m \lambda^{-1}_{nm} \, Q_m,
\label{eq:R equals lambda Q}
\end{equation}
and by computing $\llangle R|R \rrangle$ and imposing eq.~\eqref{eq:RR equals delta}, we see that $\gamma$ and $\lambda$ are related by
\begin{equation}
\gamma = \frac{(k_{\rm max}-k_{\rm min})^3}{8\pi^4} \lambda \cdot \lambda^T,
\end{equation}
where $\lambda$ is the lower triangular matrix resulting from the Cholesky decomposition.
Fig.~\ref{fig:QnRn} shows the six custom modes $Q_n^{\rm tree}$ and the first six $R_n$ for $k_{\rm max} \approx 0.10 \, h \,{\rm Mpc}^{-1}$, with the 4-dimensional data represented in a plot similar in style to \cite{Fergusson:2009nv}.\footnote{The $R_n$ shown in the figure were calculated using the default modal settings described in detail at the beginning of Section \ref{sec:results}.}
The three axes in each plot are the $x_i$ defined by $(k_i - k_{\rm min})/(k_{\rm max} - k_{\rm min})$, where the origin is in the lower left corner.
To focus on the overall triangle-dependence of each basis function, we have normalized each one to equal unity when $x_1=x_2=x_3=1$ in the upper right corner.
We have also removed the half of the tetrapyd with $x_1 > x_2$ (with $x_3$ on the vertical axis) to reveal the interior of the tetrapyd region.
We notice that the $Q^{\rm tree}_1$, $Q^{\rm tree}_2$, and $Q^{\rm tree}_3$ modes are similar and peak (in red) at the edges of the tetrapyd corresponding to squeezed triangles, while the other three custom modes are largest at equilateral triangles with $k_1=k_2=k_3=k_{\rm max}$.
The plots showing $Q^{\rm tree}_0$ and $R_0$ are identical because $R_0 \propto Q^{\rm tree}_0$ by construction.
Unlike the $Q_n^{\rm tree}$, however, all of the $R_n$ look dissimilar because they have been defined to be orthogonal to each other, i.e. $\langle R_n | R_m \rangle = \delta_{nm}$.
\begin{figure}[th]
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.33\textwidth]{Q0.pdf}%
\includegraphics[width=0.33\textwidth]{Q1.pdf}%
\includegraphics[width=0.33\textwidth]{Q2.pdf}%
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.33\textwidth]{Q3.pdf}%
\includegraphics[width=0.33\textwidth]{Q4.pdf}%
\includegraphics[width=0.33\textwidth]{Q5.pdf}%
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.33\textwidth]{R0.pdf}%
\includegraphics[width=0.33\textwidth]{R1.pdf}%
\includegraphics[width=0.33\textwidth]{R2.pdf}%
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.33\textwidth]{R3.pdf}%
\includegraphics[width=0.33\textwidth]{R4.pdf}%
\includegraphics[width=0.33\textwidth]{R5.pdf}%
\end{subfigure}
\caption[$Q_n$ and $R_n$ basis functions]{Plots of the six custom $Q_n$ basis functions (first two rows) and the first six $R_n$ basis functions (bottom two rows) for $k_{\rm max} \approx 0.10 \, h\,{\rm Mpc}^{-1}$.
Each function is plotted over the unit tetrapyd of allowed $(x_1,x_2,x_3)$ combinations, with the origin in the lower left corner, and for easier readability is normalized to unity at $x_1=x_2=x_3=1$ in the upper right corner.
Only half of the tetrapyd is plotted, to reveal more of the interior region.}
\label{fig:QnRn}
\end{figure}
Using the fact that the weighted bispectrum must be the same regardless of the basis,
\begin{equation}
\sum_n \beta^Q_n \, Q_n = \sum_m \beta^R_m \, R_m,
\end{equation}
we see that the two sets of coefficients are related by
\begin{equation}
\beta^R = \lambda^T \cdot \beta^Q.
\end{equation}
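Because $\lambda^{-1}$ is triangular and $\gamma = c\,\lambda\lambda^T$ with $c$ the constant above, this rotation leaves the weighted bispectrum unchanged. A small linear-algebra sketch (with random stand-in vectors for the modes, so that the inner product is a simple average over sample triangles, and $c=1$ for simplicity) verifies both the orthonormality of the $R_n$ and the coefficient relation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_modes, n_tri = 5, 40
Q = rng.normal(size=(n_modes, n_tri))     # rows: stand-in modes on sample triangles
gamma = Q @ Q.T / n_tri                   # positive-definite "gamma" (here c = 1)

lam = np.linalg.cholesky(gamma)           # gamma = lam @ lam.T
R = np.linalg.solve(lam, Q)               # R_n = sum_m (lam^-1)_nm Q_m

beta_Q = rng.normal(size=n_modes)
beta_R = lam.T @ beta_Q                   # coefficient rotation

wB_from_Q = beta_Q @ Q                    # sum_n beta^Q_n Q_n
wB_from_R = beta_R @ R                    # sum_m beta^R_m R_m (identical)
```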
In the modal pipeline of this work, it is never necessary to calculate $R_n$ itself directly; we only use its definition to work with the $\beta^R$ coefficients, rather than the $\beta^Q$, as our data.
Whether we do our MCMC analysis in terms of $\beta^Q$ or $\beta^R$ does not matter, but the $\beta^R$ have the advantage that the numerical values of the $\beta^R_n$ coefficients for $n < N_{\rm modes}$ will not change if the size of the basis, given by $N_{\rm modes}$, is increased.
This is because $\lambda^{-1}$ in eq.~\eqref{eq:R equals lambda Q} is lower triangular, so $R_n$ only depends on $Q_m$ with $m \leq n$, and $\beta^R$ can also be expressed as $\llangle R_n | w\mathcal{B} \rrangle = (k_{\rm max}-k_{\rm min})^3/(8\pi^4) \, \beta^R_n$.
On the other hand, we can see from $\llangle Q | w\mathcal{B} \rrangle = \gamma \cdot \beta^Q$ that all numerical values of $\beta^Q$ will change as the basis set is increased.
The definition of $R_n$ in eq.~\eqref{eq:RR equals delta} corresponds to defining $\beta^R$ that are orthogonal in the limit of Gaussian covariance.
We note that the $\beta^R_n$ can be written as
\begin{eqnarray}
\beta^R_n &=& \frac{8\pi^4}{(k_{\rm max}-k_{\rm min})^3} \llangle R_n | w\mathcal{B} \rrangle \\
&=& \frac{8\pi^4}{(k_{\rm max}-k_{\rm min})^3} \frac{1}{V}
\int_{\mathbf{k}_1}
\int_{\mathbf{k}_2}
\int_{\mathbf{k}_3}
(2\pi)^3 \delta_D(\mathbf{k}_{123})
\frac{R_n(k_1,k_2,k_3) \, \delta_{\mathbf{k}_1} \delta_{\mathbf{k}_2} \delta_{\mathbf{k}_3}}
{\sqrt{k_1k_2k_3}\sqrt{P(k_1)P(k_2)P(k_3)}},
\end{eqnarray}
such that the covariance $\left< \beta^R_n \beta^R_m \right>$ requires evaluating $\langle \delta_{\mathbf{k}_1} \delta_{\mathbf{k}_2} \delta_{\mathbf{k}_3} \delta_{\mathbf{k}_1'} \delta_{\mathbf{k}_2'} \delta_{\mathbf{k}_3'} \rangle$. The leading-order Gaussian contribution to this is
\begin{equation}
\langle \delta_{\mathbf{k}_1} \delta_{\mathbf{k}_2} \delta_{\mathbf{k}_3} \delta_{\mathbf{k}_1'} \delta_{\mathbf{k}_2'} \delta_{\mathbf{k}_3'} \rangle_G = 6 (2\pi)^9 \delta_D(\mathbf{k}_1+\mathbf{k}_1') \delta_D(\mathbf{k}_2+\mathbf{k}_2') \delta_D(\mathbf{k}_3+\mathbf{k}_3') P(k_1)P(k_2)P(k_3),
\label{eq:gaussian 6pt correlator}
\end{equation}
such that the Gaussian covariance for the modal coefficients is
\begin{align}
\left< \beta^R_n \beta^R_m \right>_G &= \frac{6}{V} \left[ \frac{8\pi^4}{(k_{\rm max}-k_{\rm min})^3} \right]^2 \llangle R_n | R_m \rrangle \\
&= \frac{6}{V}\frac{8\pi^4}{(k_{\rm max}-k_{\rm min})^3} \delta_{nm}.
\label{eq:betaR gaussian covariance}
\end{align}
Non-Gaussian contributions to the covariance will generally couple orthonormal modal coefficients with different $n$ and $m$.
Later, in Section \ref{subsec:cov}, we evaluate the impact of assuming the Gaussian covariance on the parameter constraints.
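The construction above can be sketched numerically: given a symmetric positive-definite inner-product matrix $\gamma$, the Cholesky factor of $\gamma$ (rescaled by the normalization constant) yields the $\lambda$ that makes the $R_n$ orthonormal and the Gaussian covariance of $\beta^R$ diagonal. The following minimal Python sketch is not the pipeline code of this work; the basis size, the toy $\gamma$, and the numerical value of $c = (k_{\rm max}-k_{\rm min})^3/(8\pi^4)$ are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6      # toy basis size (assumption for this sketch)
c = 0.5    # stands in for (k_max - k_min)^3 / (8*pi^4)

# Toy symmetric positive-definite inner-product matrix gamma_{nm} = <<Q_n|Q_m>>.
A = rng.normal(size=(N, N))
gamma = A @ A.T + N * np.eye(N)

# Orthonormality <<R_n|R_m>> = c*delta_nm with R_n = sum_m (lam^{-1})_{nm} Q_m
# requires gamma = c * lam @ lam.T, so lam is the Cholesky factor of gamma/c.
lam = np.linalg.cholesky(gamma / c)

# lam^{-1} is lower triangular, so R_n depends only on Q_m with m <= n.
lam_inv = np.linalg.inv(lam)
assert np.allclose(lam_inv, np.tril(lam_inv))

# <<R_n|R_m>> = lam^{-1} . gamma . lam^{-T} = c * identity.
assert np.allclose(lam_inv @ gamma @ lam_inv.T, c * np.eye(N))

# Coefficient transformation beta^R = lam^T . beta^Q; the Gaussian covariance
# of beta^R, proportional to <<R|R>>, is then diagonal.
beta_Q = rng.normal(size=N)
beta_R = lam.T @ beta_Q
# Round trip: the weighted bispectrum is the same in either basis.
assert np.allclose(lam_inv.T @ beta_R, beta_Q)
```

Because the Cholesky factor of a leading principal submatrix of $\gamma$ is the leading block of the full Cholesky factor, enlarging the basis leaves the existing rows of $\lambda$ unchanged, which is why the low-$n$ values of $\beta^R_n$ are stable as $N_{\rm modes}$ grows.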
\subsection{Summary}
\label{subsec:method summary}
Here we collect the practical steps required to implement the modal method, and summarize the expressions used to estimate and model the modal coefficients, $\beta^Q$ and $\beta^R$, with or without custom modes included.
In the first step, we choose a $k$-range, bounded by $k_{\rm min}$ and $k_{\rm max}$, and we choose the 1-dimensional basis functions $q_n(k)$ that are combined to get the separable basis of $Q_n$, which may or may not include custom modes, depending on the choice of the user and the bispectrum model.
After the $Q_n$ have been defined, $\gamma$ is computed for this basis, using a chosen method---we have discussed four options in Section \ref{subsec:gamma_methods}.
Then we use the Cholesky decomposition to numerically calculate $\lambda$, which defines the orthonormal $R_n$ basis.
In the second step, we obtain our measurements from simulations. This is done by measuring $\llangle Q | w\mathcal{B} \rrangle$, and then solving
\begin{align}
\llangle Q | w\mathcal{B} \rrangle &= \gamma \cdot \hat{\beta}^Q \\
\llangle Q | w\mathcal{B} \rrangle &= \frac{(k_{\rm max}-k_{\rm min})^3}{8\pi^4} \lambda \cdot \hat{\beta}^R
\end{align}
to get the modal coefficients.
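These two linear solves can be sketched as follows, with a toy inner-product matrix and measurement vector standing in for the actual $\llangle Q | Q \rrangle$ and $\llangle Q | w\mathcal{B} \rrangle$ of the pipeline; the dimensions and the constant $c$ are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8      # toy number of modes (assumption)
c = 0.5    # stands in for (k_max - k_min)^3 / (8*pi^4)

# Toy inner-product matrix and a toy measured vector <<Q | wB>>.
A = rng.normal(size=(N, N))
gamma = A @ A.T + N * np.eye(N)
QwB = rng.normal(size=N)

# Solve  gamma . beta_Q = <<Q|wB>>  for the raw coefficients.
beta_Q = np.linalg.solve(gamma, QwB)

# Solve  c * lam . beta_R = <<Q|wB>>  for the orthonormal coefficients;
# lam (Cholesky factor of gamma/c) is lower triangular, so a forward
# substitution would suffice here.
lam = np.linalg.cholesky(gamma / c)
beta_R = np.linalg.solve(c * lam, QwB)

# Consistency with the basis-change relation beta^R = lam^T . beta^Q.
assert np.allclose(beta_R, lam.T @ beta_Q)
```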
In the last step, we need a function for predicting $\beta^Q(\theta)$ and $\beta^R(\theta)$.
When custom modes are used, we set the modal coefficients to be
\begin{align}
\beta^Q_n(\theta)&=\left\{
\begin{array}{@{}ll@{}}
\beta^{\rm tree}_n(\theta) & \text{if}\ 0\leq n \leq 5 \\
0 & \text{otherwise}
\end{array}\right. \\
\beta^R(\theta) &= \lambda^T \cdot \beta^Q(\theta).
\end{align}
When custom modes are \textit{not} used, we have to estimate the $\beta^Q$ in
\begin{align}
\sum_{n=0}^5 \beta^{\rm tree}_n(\theta) \, Q_n^{\rm tree} \approx \sum_m \beta^Q_m \, Q_m,
\end{align}
where the sum over $m$ on the right side does not include any custom modes.
Taking the inner product of this with $Q$ on both sides (where $Q$ again does not include any custom modes), we find that we need to solve
\begin{align}
\llangle Q | Q^{\rm tree} \rrangle \cdot \beta^{\rm tree}(\theta) &= \gamma \cdot \beta^Q(\theta) \\
\llangle Q | Q^{\rm tree} \rrangle \cdot \beta^{\rm tree}(\theta) &= \frac{(k_{\rm max}-k_{\rm min})^3}{8\pi^4} \lambda \cdot \beta^R(\theta)
\end{align}
for $\beta^Q$ and/or $\beta^R$.
Therefore, when custom modes are not included in the modal basis, we need to precompute the rectangular matrix $\llangle Q|Q^{\rm tree} \rrangle$ as a means of obtaining theory predictions quickly.
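A minimal sketch of this theory-prediction step is given below; the matrices are random stand-ins for the precomputed $\llangle Q|Q^{\rm tree} \rrangle$ and $\gamma$, and the `beta_theory` helper is our own hypothetical name, not part of any released code.

```python
import numpy as np

rng = np.random.default_rng(2)
N, N_tree = 12, 6   # basis size (toy) and six tree-level shapes (from the text)
c = 0.5             # stands in for (k_max - k_min)^3 / (8*pi^4)

A = rng.normal(size=(N, N))
gamma = A @ A.T + N * np.eye(N)       # toy <<Q|Q>>
M = rng.normal(size=(N, N_tree))      # toy precomputed <<Q | Q^tree>>
lam = np.linalg.cholesky(gamma / c)

def beta_theory(beta_tree):
    """Map tree-level coefficients beta^tree(theta) to beta^Q and beta^R."""
    rhs = M @ beta_tree
    beta_Q = np.linalg.solve(gamma, rhs)
    beta_R = np.linalg.solve(c * lam, rhs)  # equivalently lam.T @ beta_Q
    return beta_Q, beta_R

bQ, bR = beta_theory(rng.normal(size=N_tree))
assert np.allclose(bR, lam.T @ bQ)
```

The expensive pieces ($\gamma$, $\lambda$, and $M$) are computed once; each likelihood evaluation during the MCMC then only costs one matrix-vector product and one linear solve.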
\section{Data and analysis}
\label{sec:data_and_analysis}
\subsection{Simulations and mock halo catalogs}
\label{subsec:data}
We use the same simulation and halo catalog data as in \cite{Oddo:2019run}, since the aim of our work is to compare the modal bispectrum constraints with the results from the standard bispectrum in that work.
The data consist of two sets of simulations.
The first is a suite of 298 $N$-body simulations, called Minerva, created using the Gadget-2 code and first presented in \cite{Grieb:2015bia}.
Each realization is an $L=1500 \,h^{-1}\,{\rm Mpc}$ box evolved to $z=1$ based on the same fiducial flat $\Lambda$CDM cosmology.
Halos are identified using a friends-of-friends algorithm such that the minimum halo mass is $1.12 \times 10^{13} \, h^{-1}\,M_\odot$, and the mean number density is $\overline{n} = 2.13 \times 10^{-4} \, h^3\,{\rm Mpc}^{-3}$.
The measurements from these Minerva simulations are the data that we fit in our analysis.
We also use a set of 10,000 mock halo catalogs generated using the approximate $N$-body code Pinocchio \cite{Monaco:2001jg,Monaco:2013qta,Munari:2016aut}, which in \cite{Oddo:2019run} were used to obtain bispectrum covariance matrices.
298 of these realizations have initial conditions that match those of the Minerva realizations.
The halos in the Pinocchio simulations were chosen with a different mass threshold (compared to the Minerva data) such that the large-scale amplitude of the total halo power spectrum matches what is measured in the Minerva $N$-body simulations, because in the Gaussian limit the bispectrum covariance depends directly on the total power spectrum \cite{Oddo:2019run}.
For the modal estimator, as with the bispectrum measurements in \cite{Oddo:2019run}, we map the halo positions to the grid of halo density values using the 4th-order interlacing method in \cite{Sefusatti:2015aex} obtained with the public PowerI4\footnote{\url{https://github.com/sefusatti/PowerI4}} code and run the estimator using a FFT grid size of $N_g = 256$.
We note that by simultaneously fitting to 298 Minerva realizations, the results we present in Section \ref{sec:results} correspond to a total volume of $\approx 1{,}000 \, h^{-3}\,{\rm Gpc}^3$, which is much larger than any real galaxy survey planned for the near future.
Therefore the results we present should be interpreted as a proof of principle of the modal bispectrum method, while the exact numerical values of the parameter constraints will change for more realistic survey scenarios in smaller volumes.
\subsection{Likelihood and MCMC}
\label{subsec:likelihood}
We implement two likelihood functions: a Gaussian likelihood, which we take to be our benchmark case, and the likelihood proposed by Sellentin and Heavens in \cite{Sellentin:2015waz} (SH in what follows).
The two likelihoods differ in how they account for errors in the estimated covariance matrix, due to the fact that it is estimated from a finite number of mocks, but they both assume that the observable data is Gaussian distributed.
We have checked that the probability distribution functions of the $\beta^R_n$ modal coefficients measured in the $N$-body simulations and mock catalogs do not show any strong indications of non-Gaussianity.
In Section \ref{subsec:shlike}, we compare results from the two likelihoods.
The Gaussian likelihood for our analysis is
\begin{equation}
\ln \mathcal{L} = - \frac{1}{2} \sum_n \sum_m \delta\beta^R_n \; \hat{C}^{-1}_{nm} \; \delta\beta^R_m
\equiv - \frac{1}{2} (\delta\beta^R)^T \cdot \hat{C}^{-1} \cdot \delta\beta^R,
\label{eq:lnL gaussian}
\end{equation}
where $\delta\beta^R_n \equiv \beta^R_n(\theta) - \beta^{R, \,{\rm sims}}_n$.
The covariance matrix estimated from $N_s$ mock catalogs is
\begin{equation}
\widetilde{C}_{nm} \equiv \frac{1}{N_s-1} \sum_i^{N_s} (\beta^{R(i)}_n - \overline{\beta}^R_n)
(\beta^{R(i)}_m - \overline{\beta}^R_m).
\end{equation}
Though $\widetilde{C}_{nm}$ is an unbiased estimator of the covariance matrix, its inverse is a biased estimate of the precision matrix, which can be statistically debiased by a multiplicative factor \cite{Kaufmann1967,Anderson2003,Hartlap2007},
\begin{equation}
\hat{C}^{-1} = \Gamma \, \widetilde{C}^{-1},
\end{equation}
where
\begin{equation}
\Gamma \equiv \frac{N_s - N_{\rm bins} -2}{N_s-1}
\end{equation}
and $N_{\rm bins}$ is the number of data bins.
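As a concrete illustration, the sample covariance and the debiased precision matrix can be computed as below; the values of $N_s$ and $N_{\rm bins}$ and the toy mock data are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(3)
N_s, N_bins = 400, 50   # number of mocks and data bins (toy values)

# Toy mock measurements: N_s draws of an N_bins-dimensional data vector.
true_cov = np.diag(np.linspace(1.0, 2.0, N_bins))
samples = rng.multivariate_normal(np.zeros(N_bins), true_cov, size=N_s)

# Unbiased sample covariance estimator C~ (eq. above).
d = samples - samples.mean(axis=0)
C_tilde = d.T @ d / (N_s - 1)

# Debiased precision matrix via the multiplicative factor Gamma.
Gamma = (N_s - N_bins - 2) / (N_s - 1)
C_inv_hat = Gamma * np.linalg.inv(C_tilde)

print(f"Gamma = {Gamma:.3f}")  # < 1; approaches 1 when N_s >> N_bins
```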
We note however that any one particular $\widetilde{C}$ will have statistical noise such that applying the $\Gamma$ factor may actually bring the final parameter constraints closer to, or further away from, what we would have obtained using the true precision matrix.
In other words, the $\Gamma$ factor does not take into account the statistical nature of the estimated precision matrix---the fact that it is still a noisy estimate of an unknown quantity.
Instead, SH derives a likelihood that is the Gaussian likelihood marginalized over the unknown covariance matrix, conditioned on our estimate of it, to arrive at
\begin{equation}
\ln \mathcal{L} = -\frac{N_s}{2} \ln \left[ 1 + \frac{(\delta\beta^R)^T \cdot \widetilde{C}^{-1} \cdot \delta\beta^R}{N_s-1} \right] + \ln\left(\frac{\overline{c}_p}{\sqrt{\det \widetilde{C}}}\right).
\label{eq:lnL ng}
\end{equation}
Here $\overline{c}_p$ is a constant that depends only on $N_s$ and $N_{\rm bins}$; since we assume the covariance matrix does not depend on the parameters, we drop the second $\ln(...)$ term.
For one particular estimate of the covariance matrix, this SH likelihood will also have errors that are too large or too small, and be biased.
However, on average the SH likelihood should yield parameter constraints that are closer to the true one.
The two likelihoods should equal each other, and approach the true answer, in the limit that the covariance matrix is well-estimated by a sufficiently large $N_s$.
Conversely, they will show differences when $N_s$ is small, e.g. $\Gamma \ll 1$.
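Writing $\chi^2 \equiv (\delta\beta^R)^T \cdot \widetilde{C}^{-1} \cdot \delta\beta^R$ and dropping parameter-independent constants, the limiting behavior of the two likelihoods can be checked in a few lines; the $\chi^2$ value and mock counts below are illustrative, not taken from our analysis.

```python
import numpy as np

def lnL_gaussian(chi2, Gamma):
    """Gaussian log-likelihood with the Hartlap-debiased precision matrix."""
    return -0.5 * Gamma * chi2

def lnL_SH(chi2, N_s):
    """Sellentin-Heavens log-likelihood, up to the constant ln(c_p/sqrt(det C~))."""
    return -0.5 * N_s * np.log(1.0 + chi2 / (N_s - 1))

chi2, N_bins = 30.0, 50   # toy values
for N_s in (100, 1000, 100000):
    Gamma = (N_s - N_bins - 2) / (N_s - 1)
    print(N_s, lnL_gaussian(chi2, Gamma), lnL_SH(chi2, N_s))
# As N_s grows, Gamma -> 1 and ln(1 + chi2/(N_s-1)) -> chi2/N_s, so both
# log-likelihoods approach -chi2/2: the two agree in the limit of a
# well-estimated covariance matrix, and differ when N_s is small.
```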
As in \cite{Oddo:2019run}, we simultaneously fit all 298 Minerva realizations, which means that our total likelihood is
\begin{equation}
\ln \mathcal{L}_{\rm total} = \sum_i^{N_R} \ln \mathcal{L}_i,
\end{equation}
where $\ln \mathcal{L}_i$ is the likelihood for one realization, and equal to eq.~\eqref{eq:lnL gaussian} or eq.~\eqref{eq:lnL ng}.
We use wide, uniform priors for all parameters, and we compare our results with those from \cite{Oddo:2019run} using their `broad' priors, which were $b_1 \in [0.5,5]$, $b_2,\gamma \in [-5,5]$, $\alpha_1 \in [-10,10]$, and $\alpha_2 \in [-100,100]$.
The analysis in \cite{Oddo:2019run} also explored the effect of narrower priors on $\alpha_1$ and $\alpha_2$, as well as fitting the simulation data with models with fewer than five parameters.
However, in this work we only consider the five parameter model (called M5 in \cite{Oddo:2019run}) with broad priors, because we expect that a modal estimator pipeline fitting for more parameters with less informative priors will present a stronger test of the modal method.
Our MCMC simulations are run using the code emcee\footnote{\url{https://emcee.readthedocs.io}} \cite{ForemanMackey:2012ig}.
Each chain uses 100 walkers that are started within a small sphere around an approximate maximum likelihood point.
Our convergence criteria are that the integrated autocorrelation time $\tau$ is stable to within 1\% and that the chain is at least $50\tau$ long.
After the chains have converged, we use the getdist\footnote{\url{https://getdist.readthedocs.io}} package to analyze the chains and produce contour plots of the posteriors \cite{Lewis:2019xzd}.
\section{Results}
\label{sec:results}
Unless otherwise stated, the results we present use the following default settings for the modal estimator pipeline.
The first six $Q_n$ basis functions are the six custom modes defined in Section \ref{subsec:model}, and the rest of the $Q_n$ basis functions are constructed from 1-dimensional $q_n$ functions that are normal polynomials.
The basis functions are defined over the wavenumber range with $k_{\rm min}=1.5 \, k_f \approx 0.006 \, h\,{\rm Mpc}^{-1}$ and one of two choices of $k_{\rm max}$, $13.5\,k_f \approx 0.06 \, h\,{\rm Mpc}^{-1}$ or $24.5\,k_f \approx 0.10 \, h\,{\rm Mpc}^{-1}$, where $k_f = 2\pi/L$ is the fundamental wavenumber and $L=1500 \,h^{-1}\,{\rm Mpc}$ is the size of a simulation box.
We use the 3D FFT method to compute the inner product matrix $\gamma$, where the grid resolution is $N_g=256$ and the size of the real-space Fourier volume matches that of the simulation boxes.
The power spectrum that appears in the weighting which defines the basis is the average total halo power spectrum $P_h$ measured in the Minerva simulations.
We present our data consisting of the measured modal coefficients in Section \ref{subsec:mean and cov}.
Sections \ref{subsec:Brec} and \ref{subsec:benchmark} compare the modal bispectrum and standard bispectrum estimators, first by considering the similarities and differences in the bispectra that are measured, and then by comparing their resulting parameter constraints.
The subsequent sections then focus exclusively on the modal bispectrum.
In Section \ref{subsec:checks}, we explore how the parameter constraints from the modal bispectrum can depend on a variety of settings within the modal bispectrum pipeline, while in Section \ref{subsec:correlators} we discuss other ways that the convergence of the modal expansion could be estimated.
Sections \ref{subsec:cov} and \ref{subsec:shlike} quantify how our results depend on the estimated covariance matrices and likelihood modeling that are used in our analysis.
\subsection{Mean and covariance of modal coefficients}
\label{subsec:mean and cov}
In Fig.~\ref{fig:minerva}, we show the means, $\overline{\beta}^R_n$, and errors, $\Delta\beta^R_n$, of modal coefficients that are measured in the 298 realizations of Minerva $N$-body simulations for $k_{\rm max} = 13.5 \, k_f$ and $24.5 \, k_f$, where $k_{\rm min}=1.5 \, k_f$ in both cases.
The dashed gray horizontal lines show the Gaussian predictions for the error in eq.~\eqref{eq:betaR gaussian covariance}, which depends on $k_{\rm max}$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{betaR_Minerva_mean_and_err_kmax13pt5.pdf}
\includegraphics[width=\textwidth]{betaR_Minerva_mean_and_err_kmax24pt5.pdf}
\hfill
\caption[Modal expansion coefficients measured from Minerva $N$-body simulations]{Means $\overline{\beta}^R_n$ and errors $\Delta\beta^R_n$ of modal expansion coefficients measured from 298 Minerva $N$-body simulations. The two panels show the measurements for $k_{\rm max} = 13.5 \, k_f$ (top, blue circles) and $24.5 \, k_f$ (bottom, red circles). Filled (empty) circles correspond to positive (negative) values. The gray dashed lines correspond to the Gaussian predictions for the error given in eq.~\eqref{eq:betaR gaussian covariance}.}
\label{fig:minerva}
\end{figure}
The $\overline{\beta}^R_n$ for the first few $n$ are more easily detected, and a higher $k_{\rm max}$ increases the number of low-$n$ modes with a high signal-to-noise ratio, $\overline{\beta}^R_n / \Delta\beta^R_n$: for $k_{\rm max} = 13.5 \, k_f$ ($24.5 \, k_f$), two (five) modes have a signal-to-noise ratio greater than 1.
The errors on the coefficients measured from simulations are in good agreement with the Gaussian predictions, but there are some deviations, particularly for higher $k_{\rm max}$ and small $n$.
These observations are consistent with the fact that clustering is more non-linear on smaller scales, so the case with higher $k_{\rm max}$ has larger $\overline{\beta}^R_n$ and more non-Gaussian $\Delta\beta^R_n$.
The fact that these effects are concentrated at the low $n$ modes can be explained by how the orthonormal basis of $R_n$ functions have been defined.
Since each $R_n$ is by definition a linear combination of $Q_0$, ..., $Q_n$ that is orthogonal to (i.e. in the Gaussian limit, has no covariance with) all previous $R_m$ for $m < n$, we generally expect higher $R_n$ modes to have smaller amplitudes in the data, making the Gaussian error approximation more accurate.
In Fig.~\ref{fig:pinocchio}, we compare the means and errors of the modal expansion coefficients from the 298 matched Pinocchio realizations to the $N$-body measurements.
The errors measured from Pinocchio are always within 10\% of the errors from Minerva.
For the first few modes, we note that the mean $\beta^R$ measured from Pinocchio can differ from the $N$-body result by more than $\sim 0.1 \, \sigma$, but this is not unexpected.
We recall that the halos in the mock catalogs were selected such that the total power spectrum, or the bispectrum variance, matched that of Minerva.
Thus, by construction, the bispectrum variance from the mocks agrees with that of the $N$-body simulations, while the mean bispectrum from the mocks is suppressed relative to the $N$-body mean \cite{Oddo:2019run}.
This overall suppression is what causes the $\beta^R_0$ from Pinocchio to be less than what is measured in Minerva.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{betaR_mocks_mean_and_err_kmax13pt5.pdf}
\includegraphics[width=\textwidth]{betaR_mocks_mean_and_err_kmax24pt5.pdf}
\hfill
\caption[Modal expansion coefficients measured from Pinocchio mock halo catalogs]{Means and errors of modal expansion coefficients measured from 298 realizations of Pinocchio mock halo catalogs are compared with the same measurements from the Minerva $N$-body simulations with matching initial conditions. The two panels show the measurements for $k_{\rm max} = 13.5 \, k_f$ (top, blue circles) and $24.5 \, k_f$ (bottom, red circles). Filled (empty) circles correspond to positive (negative) values. The bottom subplots show that the errors from the mocks are always within 10\% of the $N$-body simulations.}
\label{fig:pinocchio}
\end{figure}
In Fig.~\ref{fig:covariance}, we compare the correlation matrices from Minerva vs the 10,000 Pinocchio mocks.
As in the bispectrum case \cite{Oddo:2019run}, we find that the correlation coefficients are generally closer to zero when 10,000 mocks are used, compared to 298 Minerva simulations.
When only the 298 Pinocchio mocks with matched initial conditions are compared, the correlation coefficients agree very well (though we do not show this in a plot).
Modal coefficients are more correlated when $k_{\rm max}$ is higher.
We compare how the different estimates of the covariance matrix (using 298 Minerva simulations, 298 matched Pinocchio mocks, or all 10,000 Pinocchio mocks) impact the parameter constraints in Section \ref{subsec:cov}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{bR_cov.pdf}
\hfill
\caption[Correlation matrices for $\beta^R$]{Correlation matrices for 108 $\beta^R$ modal coefficients at $k_{\rm max} = 13.5 \, k_f$ (left) and $24.5 \, k_f$ (right). Within each matrix, the lower triangular elements are from the 298 Minerva simulations while the upper triangular elements are from the full set of 10,000 Pinocchio realizations. Correlation coefficients tend to be closer to zero for lower $k_{\rm max}$ and when more realizations are used.}
\label{fig:covariance}
\end{figure}
\subsection{Standard bispectrum vs reconstructed bispectrum}
\label{subsec:Brec}
In this section, we look at the relationship between the standard bispectrum estimator in eq.~\eqref{eq:B estimator} and the reconstructed bispectrum in eq.~\eqref{eq:Brec}
at the level of an individual realization, the mean of many simulations, and the resulting covariance.
Both estimators capture information about the bispectrum, but they are not equivalent, and in this section we examine which properties of the two estimators are the same or not.
For the comparisons in this section, we include scales up to $k_{\rm max}=13.5 \, k_f$, and adopt triangle bins of width $\Delta k = s \, k_f$ with $s=1$ for the standard bispectrum estimator.
In this case, the standard bispectrum estimator measures the bispectrum in 294 triangle bins.
Within each bin, we compare the standard estimator measurement with the bin-averaged value of $B_{\rm rec}$ in eq.~\eqref{eq:Brec}.
We have also done the comparison using $B_{\rm rec}$ evaluated on effective triangles with sorted side lengths, which was shown in \cite{Oddo:2019run} to provide a good approximation to the true bin average, and the results that follow are not changed.
Fig.~\ref{fig:B vs Brec 1sim} compares the standard bispectrum estimate $B$ with $B_{\rm rec}$ for a single realization, showing that the two do not measure the same value in each bin.
This is not unexpected, as the two estimators perform rather different operations on the same density grid in Fourier space in order to produce these bispectrum estimates.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{B_vs_Brec_1sim_kmax13pt5_binavg.pdf}
\hfill
\caption[Standard bispectrum vs reconstructed bispectrum from 1 realization]{Comparison of the standard bispectrum vs reconstructed bispectrum on one Minerva simulation for $k_{\rm max}=13.5 \, k_f$.
In the top panel, filled (empty) markers signify positive (negative) values.
The standard bispectrum measurement and the reconstructed bispectra do not obtain the same values in each bin, though the scatter between them is typically within two times the $1\,\sigma$ error of the standard estimator, $\Delta B$.
}
\label{fig:B vs Brec 1sim}
\end{figure}
Then, in Fig.~\ref{fig:B vs Brec allsims}, we extend the comparison to the suite of all 298 Minerva simulations.
In this case, the mean $B$ and $B_{\rm rec}$ are in much better agreement on a bin-by-bin basis, with differences typically $\lesssim 10\%$ of the error on the standard bispectrum estimate (middle panel).
The remaining difference is one of scatter: the standard bispectrum estimates fluctuate more from realization to realization, while the reconstructed bispectrum, built from a smaller number of modes, has less scatter.
In the bottom panel of Fig.~\ref{fig:B vs Brec allsims}, we show one key difference between the standard bispectrum and modal bispectrum estimates: the error on the reconstructed bispectrum is typically suppressed relative to the error on the standard bispectrum, and it depends on the number of modes used in the reconstruction.
This is shown also for the correlation matrices in Fig.~\ref{fig:B vs Brec corr}, where we see that the triangle bins are much more correlated (and anti-correlated) when the reconstructed bispectrum is used, especially with fewer modes.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{B_vs_Brec_allsims_kmax13pt5_binavg.pdf}
\hfill
\caption[Standard bispectrum vs reconstructed bispectrum averaged over all Minerva simulations]{Comparison of the standard bispectrum vs reconstructed bispectrum means and errors over all 298 Minerva simulations for $k_{\rm max}=13.5\,k_f$.
While the means agree to within $\sim 10\%$ of the error of the standard bispectrum estimator (middle panel), the error in the reconstructed bispectrum is generally suppressed relative to the error in the standard bispectrum measurement (bottom panel).
This suppression is larger when fewer modes are used in the reconstruction.
}
\label{fig:B vs Brec allsims}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{B_vs_Brec_corr_kmax_13pt5_binavg.pdf}
\hfill
\caption[Standard bispectrum vs reconstructed bispectrum correlation matrices]{Comparison of correlation matrices from the standard bispectrum and reconstructed bispectra for 294 triangle bins and $k_{\rm max}=13.5 \, k_f$.
The noticeable differences in the correlation matrices for $B$ and $B_{\rm rec}$ indicate that the covariance matrix for the standard bispectrum estimator cannot be recovered by the modal bispectrum.
This discrepancy is especially pronounced when fewer modes are used in the reconstruction.
}
\label{fig:B vs Brec corr}
\end{figure}
The underlying reason for this discrepancy is that the full $N_{\rm tri}$-dimensional Gaussian distribution that is captured by the standard bispectrum covariance cannot be compressed into an $N_{\rm modes}$-dimensional one, unless $N_{\rm modes} = N_{\rm tri}$, in which case there is no practical compression of the data.
This can be illustrated by calculating the covariance of the reconstructed bispectrum as
\begin{align}
{\rm Cov} \left[ B_{\rm rec}(\Delta_i), B_{\rm rec}(\Delta_j) \right] &=
\frac{1}{w(\Delta_i)w(\Delta_j)} \sum_n \sum_m {\rm Cov} \left[ \beta^R_n, \beta^R_m \right] R_n(\Delta_i)R_m(\Delta_j).
\end{align}
In the limit of Gaussian covariance for $\Delta_i = \Delta_j$,
\begin{align}
{\rm Var} \left[ B_{\rm rec} (\Delta_i) \right] &= \frac{1}{w(\Delta_i)^2} \sum_n {\rm Var} \left[ \beta^R_n \right] R_n(\Delta_i)^2,
\end{align}
such that adding more modes to the modal expansion will always increase the variance on $B_{\rm rec}$, and conversely, using fewer modes will suppress the variance on $B_{\rm rec}$ (as seen in the bottom panel of Fig.~\ref{fig:B vs Brec allsims}).
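This monotonic behavior follows directly from the expression above, since each additional mode contributes a non-negative term ${\rm Var}[\beta^R_n] \, R_n(\Delta_i)^2$; a toy numerical check (random stand-ins for $R_n(\Delta_i)$ and the weight $w(\Delta_i)$):

```python
import numpy as np

rng = np.random.default_rng(4)
N_modes = 59
var_beta = np.full(N_modes, 0.5)       # Gaussian limit: equal variance per mode (toy value)
R_at_tri = rng.normal(size=N_modes)    # toy values of R_n(Delta_i) at one triangle
w = 1.3                                # toy weight w(Delta_i)

# Partial sums of Var[B_rec(Delta_i)] as modes are added to the expansion.
var_Brec = np.cumsum(var_beta * R_at_tri**2) / w**2

# Each term is non-negative, so adding modes never decreases the variance.
assert np.all(np.diff(var_Brec) >= 0)
```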
The modal method produces errors on $B_{\rm rec}$ that are suppressed relative to errors from the standard bispectrum estimator.
This implies that measurements of the modal coefficients cannot give estimates of bispectrum errors that are directly relevant for standard bispectrum pipelines.
For example, errors on $B_{\rm rec}$ should not be used to judge how well theoretical model predictions for the bispectrum would work in a pipeline using the standard bispectrum estimator.
\subsection{Benchmark comparison: modal bispectrum vs standard bispectrum}
\label{subsec:benchmark}
Here we compare parameter constraints from the modal bispectrum and the standard bispectrum for the default modal pipeline settings detailed at the beginning of Section \ref{sec:results}.
In subsequent sections, we demonstrate how the modal bispectrum constraints can depend on these settings.
To put the bispectrum constraints in context, we also compare them with an independent constraint of $b_1 = 2.7081 \pm 0.0012$, obtained from a chi-squared fit to the ratio of cross-power spectra $P_{hm}/P_{mm}$ from the Minerva simulations \cite{Oddo:inprep}.
The fitting function is $P_{hm}(k)/P_{mm}(k) = b_1 + {\rm coefficient} \times k^2$ up to $k_{\rm max}=0.044 \,h\,{\rm Mpc}^{-1}$, where the $k^2$ term is used to account for scale-dependent loop corrections, which for the small error bars corresponding to the total volume of the Minerva simulations can be important even at scales as large as $k \approx 0.02 \, h\,{\rm Mpc}^{-1}$.
The benchmark constraints are shown in Fig.~\ref{fig:benchmark} for $k_{\rm max} = 13.5 \, k_f \approx 0.06 \, h\,{\rm Mpc}^{-1}$.
This $k_{\rm max}$ was chosen such that we conservatively consider scales over which the tree-level halo bispectrum model has been shown to accurately describe the data, and at this $k_{\rm max}$ we are able to compare with standard bispectrum results from all three bin widths in \cite{Oddo:2019run}, $s=1$, 2, and 3.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_comparison_kmax_13pt5kf_benchmark.pdf}
\hfill
\caption[Modal bispectrum vs standard bispectrum benchmark comparison for $k_{\rm max} = 13.5 \, k_f$]{Benchmark comparison between modal bispectrum and standard bispectrum constraints for $k_{\rm max} = 13.5 \, k_f \approx 0.06 \, h \,{\rm Mpc}^{-1}$. The vertical dashed black line represents the constraint on $b_1$ from the large-scale ratio of cross-power spectra $P_{hm}/P_{mm}$.
Standard bispectrum constraints with three different bin sizes are shown: $s=1$ (red), 2 (green), and 3 (gray).
The modal bispectrum constraints with six modes (dark blue) and 59 modes (light blue) are identical, indicating that the modal bispectrum constraints have already converged with six modes.}
\label{fig:benchmark}
\end{figure}
The modal constraints using six modes and 59 modes have the same posteriors, with the contours overlapping to the point of making the 59 modes case almost invisible, indicating that the parameter constraints have fully converged with the six custom modes.
This is fully consistent with the analysis in \cite{Oddo:2019run} which showed that the tree-level bispectrum model is a good fit to the data up to this $k_{\rm max}$.
We find that the modal bispectrum constraints are consistent with, though not identical to, the standard bispectrum constraints.
We note that the modal bispectrum estimator, while it is a measure of the bispectrum, is an independent estimation of it, so we do not require (in the sense of a test) that the constraints be identical.
Although all bin choices for the standard bispectrum estimator lead to consistent outcomes, the smallest bin width ($s=1$) takes better advantage of the shape-dependence of the bispectrum leading to slightly narrower constraints.
In comparison, the modal decomposition accounts for the shape-dependence of the bispectrum without loss of information due to the binning of wavenumbers.
In Fig.~\ref{fig:kmax} we perform the same comparison except with the $k$-range extended to $k_{\rm max} = 24.5 \, k_f \approx 0.10 \, h\,{\rm Mpc}^{-1}$.
In this case, by comparing the modal bispectrum constraints with different numbers of modes, we see that the six custom modes are no longer sufficient, but the constraints with 10 modes have already converged, as there is no further benefit to using 59 modes.
The fact that there is further information in four additional modes, on top of the six custom modes, is a sign that the halo bispectrum deviates from the tree-level prediction on scales within this $k$-range, as studied in much more detail in \cite{Oddo:2019run}.
Although we know that the tree-level halo bispectrum model is no longer a good model for the data to these scales, we can still compare the constraints with those from the standard bispectrum.
At this $k_{\rm max}$, we can only compare the standard bispectrum constraints with the $s=1$ binning.
Fig.~\ref{fig:kmax} shows that the results from the modal bispectrum and standard bispectrum agree very well, with only a small bias relative to each other (most visible in the 1-dimensional posterior for $b_2$ which is biased by $\sim 0.5\,\sigma$).
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_comparison_kmax_24pt5kf_benchmark.pdf}
\hfill
\caption[Modal bispectrum vs standard bispectrum benchmark comparison for $k_{\rm max} = 24.5 \, k_f$]{Comparison of modal bispectrum and standard bispectrum constraints at $k_{\rm max} = 24.5 \, k_f \approx 0.10 \, h\,{\rm Mpc}^{-1}$. At this $k_{\rm max}$, we can only compare with the $s=1$ binning of the standard bispectrum estimator (red).
By comparing the constraints using six custom modes (dark blue), 10 modes (gray), and 59 modes (light blue), we find that 10 modes are sufficient for the modal constraints to converge, as there is no further change when 59 modes are used.
}
\label{fig:kmax}
\end{figure}
Finally, we comment on the amount of compression that has been achieved in these benchmark comparisons. At $k_{\rm max} = 13.5 \, k_f$, the standard bispectrum estimator used 294 triangle bins for $s=1$, 49 for $s=2$, and 19 for $s=3$, while the modal bispectrum constraints converged using only the six custom modes.
At $k_{\rm max} = 24.5 \, k_f$, the standard bispectrum used 1,585 bins with $s=1$, yielding constraints that were very similar to the modal bispectrum using 10 modes.
This shows that the modal bispectrum is able to efficiently compress the information contained in the bispectrum into a data set that is 3 to 160 times smaller, while preserving most of the important cosmological information that we are interested in.
\subsection{Robustness checks in the modal implementation}
\label{subsec:checks}
In this section, we vary the settings in the modal analysis pipeline away from the benchmark settings to see how the modal constraints are sensitive to these choices.
\subsubsection*{Normal vs Legendre polynomials}
The six custom modes are defined to fit the tree-level bispectrum, and so their form is independent of whether we choose normal or (shifted) Legendre polynomials as our $q_n(k)$.
Therefore, we take $k_{\rm max} = 24.5 \, k_f$ and 10 modes, and compare the constraints between choosing normal polynomials vs Legendre polynomials.
The constraints from the Legendre polynomials are identical to the modal constraints with 10 modes and normal polynomials shown in Fig.~\ref{fig:kmax}; if plotted together, only one set of posteriors would be visible, so to save space we do not show this comparison in a figure.
\subsubsection*{Custom modes}
What is the impact of including the custom modes?
To see this, we repeat the benchmark analysis for $k_{\rm max} = 13.5\, k_f$ and $24.5\, k_f$, but without the six custom modes, and examine how the convergence of the modal expansion is affected.
The results for $k_{\rm max} = 13.5\, k_f$ are shown in Fig.~\ref{fig:custom_kmax13pt5}, and those for $k_{\rm max} = 24.5\, k_f$ in Fig.~\ref{fig:custom_kmax24pt5}.
For lower $k_{\rm max}$, we find that 16 modes are sufficient for the parameter errors to agree to within 8\% with the result from six custom modes, and the parameter means are shifted by $\lesssim 0.25 \, \sigma$ compared to the six custom modes case.
For higher $k_{\rm max}$, the expansion converges with 31 modes, compared to only 10 modes when six of these are the custom modes.
(The expansion appears to converge already with 16 modes, but this is not stable: the constraints change again when 23 modes are used.)
The means are shifted by $\lesssim 0.3 \, \sigma$ and the errors are consistent to within 1\% compared to the benchmark case with custom modes.
This shows that the inclusion of custom modes which are constructed based on an informative theoretical model for the bispectrum can help to compress the data into the most efficient basis.
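As an implementation aside, in modal methods the coefficients in the orthonormalized basis (such as our $\beta^R_n$) are typically obtained by rotating the raw mode basis with $\Lambda = L^{-1}$, where $\gamma = LL^{T}$ is a Cholesky factorization; this is also why $\gamma$ must be positive-definite. The following Python sketch (our own toy illustration on a 1D grid, not pipeline code; all names are hypothetical) shows the rotation:

```python
import numpy as np

def orthonormalize(Q, weights):
    """Rotate raw modes Q (n_modes x n_points, sampled on a grid) into
    combinations R that are orthonormal under the weighted inner product
    <f|g> = sum_i weights[i] f_i g_i.
    Returns (R, Lam) with R = Lam @ Q and R @ diag(weights) @ R.T = I."""
    gamma = (Q * weights) @ Q.T          # gamma_nm = <Q_n | Q_m>
    L = np.linalg.cholesky(gamma)        # gamma = L L^T (requires positive-definite gamma)
    Lam = np.linalg.inv(L)               # rotation matrix Lambda = L^{-1}
    return Lam @ Q, Lam

# toy example: non-orthogonal low-order polynomial modes on a 1D k-grid
k = np.linspace(0.01, 0.1, 200)
w = np.full_like(k, (k[-1] - k[0]) / k.size)   # uniform quadrature weights
Q = np.array([np.ones_like(k), k, k**2])
R, Lam = orthonormalize(Q, w)
assert np.allclose((R * w) @ R.T, np.eye(3))   # orthonormal to machine precision
```

Since $\Lambda \gamma \Lambda^{T} = L^{-1} (LL^{T}) L^{-T} = \mathbb{1}$, the rotated modes are exactly orthonormal with respect to the chosen inner product.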
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_kmax_13pt5kf_custom_modes.pdf}
\hfill
\caption[Impact of custom modes for $k_{\rm max} = 13.5 \, k_f$]{\label{fig:custom_kmax13pt5} Comparison of modal bispectrum constraints at $k_{\rm max} = 13.5 \, k_f \, \approx 0.06 \, h \,{\rm Mpc}^{-1}$ without and with custom modes.
We compare constraints without custom modes using 7 modes (gray), 11 modes (green), and 16 modes (red) with the converged benchmark constraints that used 6 custom modes (dark blue).
}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_kmax_24pt5kf_custom_modes.pdf}
\hfill
\caption[Impact of custom modes for $k_{\rm max} = 24.5 \, k_f$]{\label{fig:custom_kmax24pt5} Comparison of modal bispectrum constraints at $k_{\rm max} = 24.5 \, k_f \approx 0.10 \, h \,{\rm Mpc}^{-1}$ without and with custom modes.
We compare constraints without custom modes using 11 modes (gray), 16 modes (light blue), 23 modes (green), and 31 modes (red) with the converged benchmark constraints that used 10 modes, six of which are custom modes (dark blue).
}
\end{figure}
\subsubsection*{Inner product methods}
The benchmark results used a $\gamma$ matrix computed with the 3D FFT method using $N_g=256$ and $L=1500 \, h^{-1}\,{\rm Mpc}$.
In this section, we compare constraints that have used three different numerical methods for computing $\gamma$: 3D FFT, voxels, and 1D FFT.
(As mentioned in Section \ref{subsec:gamma_methods}, we do not include in this comparison the $\gamma$ computed using the Vegas routine in Cuba, as this results in matrices that are not positive-definite and therefore cannot be used.)
The three methods are independent implementations, each with its own free parameters, which roughly correspond to the resolution of the inner product integration in $k$-space.
By comparing constraints obtained with $\gamma$ matrices computed from each method, we find that both the method and how its free parameters are set can affect the resulting constraints.
Figs.~\ref{fig:gamma_kmax13pt5} and \ref{fig:gamma_kmax24pt5} compare the methods for $k_{\rm max} = 13.5 \, k_f$ and $24.5 \, k_f$, respectively.
The $N_v$ shown for the voxel method is the number of voxel cells in each dimension. The $N$ and $M$ resolution parameters for the 1D FFT method (defined in Appendix \ref{app:1dfft}) are converged: doubling these values did not change the resulting constraints.
When $k_{\rm max}$ is small, we find that the voxel and 1D FFT methods converge on the same posterior for sufficiently high resolutions, but they strongly disagree with the 3D FFT result in both the position and size of the posterior.
For higher $k_{\rm max}$, the different methods produce posterior contours that agree in their size, but with non-negligible shifts (biases) between them.
The observation that the voxel and 1D FFT methods converge to each other for both $k_{\rm max}$, while disagreeing more strongly with the 3D FFT calculation at lower $k_{\rm max}$, reflects the fact that the inner product calculation captured by $\gamma$ must be treated using the same discretization scheme as the measurements in order to obtain correct constraints that are consistent with the standard bispectrum analysis.
The voxel and 1D FFT methods are ways of calculating the inner product assuming it is a smooth continuous integral, and it is not straightforward to adapt these methods such that they account for the same discretization effects as the measurements, while the 3D FFT method is, by construction, computing the inner product in the same way that the measurements are taken.
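To make the discretization point concrete, a $\gamma$-like matrix consistent with a grid-based estimator is obtained by summing mode products over the closed integer-grid Fourier triangles, i.e. the same discrete set of configurations the estimator averages over, rather than integrating a smooth tetrapyd. A toy Python sketch (tiny grid, hypothetical names, brute-force enumeration purely for illustration):

```python
import numpy as np
from itertools import product

def discrete_gamma(modes, n_max):
    """Toy discrete inner product: gamma_nm = sum over closed integer-grid
    Fourier triangles (n1 + n2 + n3 = 0, 0 < |ni| <= n_max) of
    Q_n(k1,k2,k3) * Q_m(k1,k2,k3), with ki = |ni| in units of k_f.
    This mimics summing over the same grid modes the estimator uses."""
    vecs = [np.array(v) for v in product(range(-n_max, n_max + 1), repeat=3)]
    vecs = [v for v in vecs if 0 < np.linalg.norm(v) <= n_max]
    G = np.zeros((len(modes), len(modes)))
    for n1 in vecs:
        for n2 in vecs:
            n3 = -(n1 + n2)
            k3 = np.linalg.norm(n3)
            if 0 < k3 <= n_max:
                ks = (np.linalg.norm(n1), np.linalg.norm(n2), k3)
                q = np.array([Q(*ks) for Q in modes])
                G += np.outer(q, q)   # accumulate gamma_nm over triangles
    return G

# two simple symmetric toy modes Q(k1, k2, k3)
modes = [lambda a, b, c: 1.0, lambda a, b, c: (a + b + c) / 3.0]
G = discrete_gamma(modes, n_max=3)
```

When $n_{\rm max}$ (our $k_{\rm max}/k_f$) is small, this discrete sum runs over few triangles and can differ appreciably from a continuous integral, which is the regime where we observe the disagreement between methods.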
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_kmax_13pt5kf_gamma_method.pdf}
\hfill
\caption[Comparison of inner product methods for $k_{\rm max} = 13.5 \, k_f$]{Comparison of modal bispectrum constraints at $k_{\rm max} = 13.5 \, k_f \approx 0.06 \, h\,{\rm Mpc}^{-1}$ using different inner product methods. The voxel (green) and 1D FFT (red) methods have converged towards each other, and both disagree with the 3D FFT result (gray) that agreed with the standard bispectrum constraint in Fig.~\ref{fig:benchmark}. The 3D FFT case with $L=3000 \, h^{-1}\,{\rm Mpc}$ (dark blue) is also biased, illustrating that getting the discretization as in the measurements is important.}
\label{fig:gamma_kmax13pt5}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_kmax_24pt5kf_gamma_method.pdf}
\hfill
\caption[Comparison of inner product methods for $k_{\rm max} = 24.5 \, k_f$]{Comparison of modal bispectrum constraints at $k_{\rm max} = 24.5 \, k_f \approx 0.10 \, h\,{\rm Mpc}^{-1}$ using different inner product methods. As in Fig.~\ref{fig:gamma_kmax13pt5}, the voxel (green) and 1D FFT (red) methods have converged towards each other, and both are biased relative to the correct 3D FFT result (gray). The 3D FFT case with $L=3000 \, h^{-1}\,{\rm Mpc}$ (dark blue) is also biased, illustrating that getting the discretization as in the measurements is important.}
\label{fig:gamma_kmax24pt5}
\end{figure}
When $k_{\rm max}$ is low, the discretization of the Fourier grid during the estimation is more important to take into account, because fewer triangle configurations are being averaged (i.e. the tetrapyd is more sparsely sampled).
This interpretation can also be confirmed in another way: within the 3D FFT method, the resolution is increased if the size of the FFT box, $L$, is larger. If we increase the resolution by making $L$ twice as large as the simulation box, we find in both Figs.~\ref{fig:gamma_kmax13pt5} and \ref{fig:gamma_kmax24pt5} that the resulting posterior becomes more similar to the voxel and 1D FFT cases, as we would expect.
It may be that, for a higher $k_{\rm max}$ than we have used in this work, the different methods for computing $\gamma$ yield results that are similar enough to be interchangeable.
This is expected because more cosmological information is contained in non-linear scales and the continuous integration of the voxel and 1D FFT methods becomes a better approximation to the discretized inner product when $k_{\rm max}$ is higher.
However, we emphasize that for the range of scales that we have used in this work, $k_{\rm max} \lesssim 0.10 \, h\,{\rm Mpc}^{-1}$, the different methods for computing $\gamma$ are \textit{not} interchangeable, and the 3D FFT method is the only one which treats the inner product identically to how the measurements are performed, resulting in correct constraints.
The 3D FFT method is also preferable for the speed and ease of its calculation.
The 3D FFTs are performed very quickly using the same FFT routines which are already necessary for the modal bispectrum (and standard bispectrum) measurements, while the other voxel and 1D FFT methods require different algorithms which must be coded independently and, in our implementation, are not as fast.
As $\gamma$ must only be computed once for a fixed $k_{\rm max}$, we do not anticipate that the computation of $\gamma$ which is unique to the modal bispectrum analysis increases the computational cost of using the modal method by a significant amount.
\subsubsection*{Dependence on FFT grid resolution}
Depending on $k_{\rm max}$, the FFT grid on which the 3D FFTs are calculated can have a configuration-space grid resolution much smaller than our default value of $N_g = 256$.
Since $L$ fixes the resolution of the Fourier grid to be $k_f \equiv 2\pi/L$, increasing $N_g$ increases the $k_{\rm max}$ that can be probed without too much aliasing contamination.
\cite{Jeong2010} and \cite{Sefusatti:2015aex} have suggested that the standard FFT bispectrum estimator, which takes a form very similar to the modal estimator, can probe up to $k_{\rm max} = k_f N_g/3 = 2 k_{\rm Ny}/3$, where $k_{\rm Ny}$ is the Nyquist wavenumber, unlike the power spectrum estimator which is valid up to $k_{\rm max} = k_f N_g/2 = k_{\rm Ny}$.
This is because the factor of $e^{i \mathbf{k}_{123} \cdot \mathbf{x}}$ in the estimator is invariant under shifts (in one dimension) of each $k_i$ to $k_i+k_f N_g/3$ \cite{Sefusatti:2015aex}.
On the other hand, the opposite has been argued by \cite{Watkinson:2017zbs} for the FFT bispectrum estimator and \cite{Hung:2019ygc} for the modal estimator---that these estimators are valid up to $k_{\rm max} = k_{\rm Ny}$.
Here we fix our $k$-range to have $k_{\rm max} = 13.5 \, k_f$ and test which criterion for $N_g$, either $N_g > 3 k_{\rm max}/k_f \approx 41$ or $N_g > 2 k_{\rm max}/k_f = 27$, is sufficient to return the same constraints from the modal estimator pipeline as the benchmark value of $N_g = 256$.
We note that changing $N_g$ requires the pipeline to be run from the beginning, starting with the construction of the configuration-space density grid, and including the calculation of $\gamma$ with the 3D FFT method.
Fig.~\ref{fig:gamma_kmax13pt5_Ng} compares the constraints for different values of $N_g = 34$, 42, and 256.
We find that using $N_g = 34$ leads to constraints that strongly disagree with the benchmark case of $N_g = 256$, while $N_g = 42$ gives identical constraints to the benchmark case, showing that the modal estimator is valid only up to $2k_{\rm Ny}/3$.
If this result is explained by the argument in \cite{Sefusatti:2015aex}, then we would expect this result to also hold for the FFT-based standard bispectrum estimator, which also has a factor of $e^{i \mathbf{k}_{123} \cdot \mathbf{x}}$.
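The shift-invariance argument can be checked numerically in one dimension: on grid points $x_j = jL/N_g$, shifting each integer wavenumber by $N_g/3$ shifts their sum by $N_g$, and $e^{2\pi i N_g j/N_g} = 1$, so the phase factor is unchanged on the grid. This is the degeneracy responsible for the aliasing. A minimal check (our own toy numbers):

```python
import numpy as np

Ng = 12                      # grid points per dimension (divisible by 3)
j = np.arange(Ng)            # grid sites x_j = j L / Ng, so k_f x_j = 2*pi*j/Ng
n1, n2, n3 = 2, 3, -5        # integer wavenumbers forming a closed triangle

def phase(n123):
    # e^{i k_123 x_j} evaluated on the grid, with k_123 = n123 * k_f
    return np.exp(2j * np.pi * n123 * j / Ng)

# shifting each n_i by Ng/3 shifts the sum by Ng, and
# e^{2*pi*i*Ng*j/Ng} = 1 on every grid site
shifted = (n1 + Ng // 3) + (n2 + Ng // 3) + (n3 + Ng // 3)
assert np.allclose(phase(n1 + n2 + n3), phase(shifted))
```

Because the estimator cannot distinguish the shifted and unshifted configurations, modes above $2k_{\rm Ny}/3$ are contaminated, consistent with what we find in Fig.~\ref{fig:gamma_kmax13pt5_Ng}.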
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_kmax_13pt5kf_dependence_on_Ng.pdf}
\hfill
\caption[Dependence on FFT grid resolution $N_g$]{Comparison of modal bispectrum constraints at $k_{\rm max} = 13.5 \, k_f \approx 0.06 \, h\,{\rm Mpc}^{-1}$ using different FFT grid resolutions, $N_g$. The case with $N_g=42$ (dark blue) yields identical results to the benchmark case with $N_g=256$ (gray, hidden underneath the dark blue contours), because both satisfy $N_g >3 k_{\rm max}/k_f \approx 41$, while $N_g=34$ (red) is insufficient and leads to incorrect constraints.}
\label{fig:gamma_kmax13pt5_Ng}
\end{figure}
\subsubsection*{Weighting}
In the benchmark analysis, the bispectrum was weighted by $w$ in eq.~\eqref{eq:w}, where the power spectrum $P(k_i)$ was the average total halo power spectrum measured from the Minerva simulations.
What is the effect of using a different weighting function?
We note that changing the weighting only changes two parts of the modal pipeline.
First, the $q_n^{\rm tree}$ functions in eqs.~\eqref{eq:q0tree}--\eqref{eq:q5tree} will change, such that the factor of $\sqrt{k/P(k)}$ in each one will be different.
This will, however, not change the fact that the six custom modes, $Q_n^{\rm tree}$, are able to reconstruct the tree-level halo bispectrum model exactly.
The second change is that when $\llangle Q | w\hat{\mathcal{B}} \rrangle$ in eq.~\eqref{eq:QwB estimator} is estimated from simulations, the factor of $[ \sqrt{k_1 k_2 k_3}\sqrt{P(k_1)P(k_2)P(k_3)}]^{-1/2}$ in the integrand will change to $w/[k_1k_2k_3]$.
We have considered the case where $w$ takes the same form as in eq.~\eqref{eq:w}, but $P(k_i)$ is set to the linear \textit{matter} power spectrum $P_L$.
This weighting differs from the benchmark in that the power spectrum contains no halo bias, non-linearities, or shot noise.
Therefore, we use this situation to reflect an analysis where the halo power spectrum in the weight is not perfectly modeled or measured.
We find that this difference does not have any effect on the resulting parameter constraints, implying that the modal bispectrum constraints are not strongly affected by the particular power spectrum that is used for the weighting.
In particular, it does not change how quickly the modal expansion converges.
Still, there is a reason to prefer the optimal weighting with the non-linear total halo power spectrum, which is that it is in this case that the covariance of the $\beta^R$ is best approximated by the Gaussian covariance expression in eq.~\eqref{eq:betaR gaussian covariance}.
If the linear power spectrum is used in the weighting, the $P(k_1)P(k_2)P(k_3)$ that appears in eq.~\eqref{eq:gaussian 6pt correlator} does not cancel out with the power spectra in the weight, such that the Gaussian covariance expression for $\beta^R$ is not eq.~\eqref{eq:betaR gaussian covariance}.
This result does not necessarily mean that the parameter constraints are totally immune to especially sub-optimal choices for $w$.
We have checked that when $w=1$ is adopted (i.e.\ no weighting at all is used), the information in the bispectrum is extracted less efficiently.
This is shown in Fig.~\ref{fig:weighting_kmax13pt5}, which compares the constraints in the `no weight' case with the benchmark results for $k_{\rm max}=13.5 \, k_f$.
With no weighting, we find that the constraints using six custom modes are much weaker than, though still consistent with, the benchmark case of six custom modes with default weighting.
This difference must originate from the choice of weighting, because in both cases the six custom modes, by construction, can exactly reproduce the tree-level bispectrum model that is a good description of the bispectrum up to this $k_{\rm max}$.
The fact that the no weighting constraints are weaker is consistent with the fact that less optimal weighting of the $(k_1,k_2,k_3)$ Fourier triangles should lead to less information being extracted.
However, even in this case, the modal expansion method can compensate for the less-than-optimal weighting if a larger number of modes are included.
For this $k_{\rm max}$, Fig.~\ref{fig:weighting_kmax13pt5} shows that the benchmark constraints can be recovered if 21 modes are included.\footnote{
If $w=1$, we note that both the last custom mode $Q_5^{\rm tree}$ and $Q_0$ will be constants that do not have any dependence on $(k_1,k_2,k_3)$, so a modal basis that includes both of them will correspond to having a non-positive definite $\gamma$ matrix.
For this reason, when $w=1$ and custom modes are included, the basis sets with more than six modes leave out the $Q_0$ mode, but otherwise have the same ordering of $Q_n$ functions as in the rest of this work.
}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{M5_comparison_kmax_13pt5kf_weighting.pdf}
\hfill
\caption[Dependence on weighting function]{Comparison of modal bispectrum constraints at $k_{\rm max} = 13.5 \, k_f \approx 0.06 \, h\,{\rm Mpc}^{-1}$ using the benchmark weighting function in eq.~\eqref{eq:w} (green) vs no weight, where $w=1$.
When the six custom modes are used in both cases, $w=1$ (red) leads to weaker parameter constraints, but this sub-optimal choice of weighting can be fully compensated for by using more modes.
For this $k_{\rm max}$, 12 modes (dark blue) is not sufficient, but 21 modes (gray, overlapping exactly with the green contours) are able to recover the same constraints as the more optimal weighting.
}
\label{fig:weighting_kmax13pt5}
\end{figure}
\subsection{Modal expansion correlators}
\label{subsec:correlators}
One way of measuring the accuracy of the modal expansion is to define and compute so-called \textit{correlators} that quantify different aspects of the accuracy of the reconstruction.
For example, the shape correlator $\mathcal{S}$, amplitude correlator $\mathcal{A}$, and total correlator $\mathcal{T}$ are \cite{Lazanu:2015rta,Hung:2019ygc}
\begin{eqnarray}
\mathcal{S}(B_i,B_j) &\equiv& \dfrac{[B_i,B_j]}{\sqrt{[B_i,B_i][B_j,B_j]}} \\
\mathcal{A}(B_i,B_j) &\equiv& \dfrac{\sqrt{[B_i,B_i]}}{\sqrt{[B_j,B_j]}} \\
\mathcal{T}(B_i,B_j) &\equiv& 1 - \sqrt{1-2\mathcal{S}(B_i,B_j)\mathcal{A}(B_i,B_j)+\mathcal{A}(B_i,B_j)^2},
\label{eq:correlators}
\end{eqnarray}
where the square brackets notation above from \cite{Hung:2019ygc} is
\begin{equation}
[B_i,B_j] \propto \sum_n \beta^{R(i)}_n \, \beta^{R(j)}_n,
\end{equation}
such that $[B_i,B_j]$ is proportional to our $\llangle wB_i | wB_j \rrangle$.
The shape correlator $\mathcal{S}$ takes values between $-1$ and $1$ and is insensitive to constant multiplicative factors that change the bispectrum amplitude, while the amplitude correlator $\mathcal{A}$ can take any positive value.
The total correlator $\mathcal{T}$ is sensitive to both the shape and the amplitude of the bispectrum, such that if both the shape and amplitude are perfectly matched, then $\mathcal{S}=\mathcal{A}=\mathcal{T}=1$, and if either the shape or the amplitude are not perfectly reproduced then $\mathcal{T} < 1$.
These correlators have an intuitive quantitative meaning in cases where an amplitude parameter (like $f_{\rm NL}$, the amplitude of primordial non-Gaussianity, for example) is measured with Gaussian data covariances \cite{Hung:2019ygc}.
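The correlator definitions above reduce to simple operations on the modal coefficients; a minimal Python sketch (our own helper names, toy coefficient values):

```python
import numpy as np

def correlators(beta_i, beta_j):
    """Shape, amplitude, and total correlators between two bispectra
    represented by their orthonormalized modal coefficients beta^R,
    using [B_i, B_j] proportional to sum_n beta_i[n] * beta_j[n]."""
    ii = np.dot(beta_i, beta_i)
    jj = np.dot(beta_j, beta_j)
    ij = np.dot(beta_i, beta_j)
    S = ij / np.sqrt(ii * jj)                        # shape correlator
    A = np.sqrt(ii / jj)                             # amplitude correlator
    T = 1.0 - np.sqrt(1.0 - 2.0 * S * A + A**2)      # total correlator
    return S, A, T

beta = np.array([1.0, 0.5, -0.2, 0.1])
S, A, T = correlators(beta, beta)
assert np.allclose([S, A, T], 1.0)        # identical shape and amplitude
S2, A2, T2 = correlators(2.0 * beta, beta)
assert np.isclose(S2, 1.0) and T2 < 1.0   # same shape, amplitude mismatch
```

As the second example shows, rescaling the amplitude leaves $\mathcal{S}=1$ but drives $\mathcal{T}$ below unity, which is the sense in which $\mathcal{T}$ is sensitive to both shape and amplitude.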
We show plots of $\mathcal{S}$ and $\mathcal{T}$ between the reconstructed bispectrum with $n$ modes, $B_i = B_{\rm rec}(N_{\rm modes}=n)$, and the reconstructed bispectrum with 108 modes, $B_j = B_{\rm rec}(N_{\rm modes}=108)$, in Fig.~\ref{fig:correlators}.
The figure is for $k_{\rm max}=24.5\,k_f \approx 0.10\, h\,{\rm Mpc}^{-1}$ and considers the mean modal bispectrum from 298 Minerva simulations.
The vertical gray lines at $N_{\rm modes}=10$ and $31$ mark the numbers of modes that we previously found in Sections \ref{subsec:benchmark} and \ref{subsec:checks} to be sufficient for converged parameter constraints, with and without custom modes, respectively.
Both panels show that the correlators are already $\mathcal{S}, \mathcal{T} \gtrsim 0.99$ with six custom modes, and further improvements are gained slowly as more modes are accumulated.
The correlators approach unity more slowly in the absence of custom modes, but these too show only small improvements beyond $\sim 25$ modes.
This behavior implies that the $\mathcal{S}$ and $\mathcal{T}$ correlators are not good indicators for predicting how many modes would be sufficient to use the modal bispectrum to constrain cosmological parameters of interest.
Given only the information in Fig.~\ref{fig:correlators}, it is not obvious how many modes will be needed for any specific purpose.
This is partly because the correlators do not take into account other information that will influence the number of sufficient modes, such as which parameters the modal pipeline will be used to measure.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{correlators.pdf}
\caption[Correlators as a function of $n_{\rm max}$]{Shape correlator $\mathcal{S}$ (top) and total correlator $\mathcal{T}$ (bottom) for $k_{\rm max}=24.5\,k_f$.
Smaller values of $1-\mathcal{S}$ and $1-\mathcal{T}$ correspond to smaller differences in the reconstructed bispectrum compared to the case where the maximum number of modes are used.
The vertical gray lines mark $N_{\rm modes}=10$ and $31$, which we previously found was sufficient to obtain robust parameter constraints when custom modes are included or excluded, respectively.
}
\label{fig:correlators}
\end{figure}
In this work, we determined a sufficient number of modes by checking that parameter posteriors had converged.
However, this requires many steps, including measuring modal coefficients from a large number of simulations and running MCMC simulations multiple times for different $N_{\rm modes}$ to validate our results.
In the absence of these, one may consider calculating Fisher forecasts to estimate the number of modes that would be needed, to check that the modal bispectrum still provides a good compression of the information in the bispectrum.
Commonly in Fisher matrix analyses, the fiducial parameter values are kept fixed, and parameter errors are forecasted.
However, in this work we found that even though very few modes are needed to reproduce the size and degeneracy directions of the parameter contours, more modes are typically needed to reduce the bias in the positions of the contours in parameter space.
(This implies that the modes are better at capturing the derivatives of the bispectrum with respect to our chosen parameters than they are at capturing the mean bispectrum.)
Therefore, it is also prudent to estimate the bias using the Fisher formalism \cite{Tegmark1997,Knox1998}.
The Fisher matrix corresponding to the modal pipeline is
\begin{equation}
F_{ij} = \sum_{n,m}^{N_{\rm modes}-1} \frac{\partial \beta^R_n}{\partial \theta_i} \; \hat{C}(N_{\rm modes})^{-1}_{nm} \; \frac{\partial \beta^R_m}{\partial \theta_j},
\end{equation}
where the partial derivatives are evaluated for our tree-level model at a chosen fiducial.
The parameter covariance matrix is then $F^{-1}$.
The bias in the parameters due to an unaccounted for systematic error can be estimated as (e.g.~\cite{Amara2008})
\begin{eqnarray}
b(\theta_i) &=& (F^{-1})_{ij} \, b_j \\
b_j &=& \sum_{n,m}^{{\rm max}\;N_{\rm modes}-1} \beta^{R,{\rm sys}}_n \; \hat{C}({\rm max}\;N_{\rm modes})^{-1}_{nm} \; \frac{\partial \beta^R_m}{\partial \theta_j}
\label{eq:bias bj}
\end{eqnarray}
where $\beta^{R,{\rm sys}}_n$ is a source of residual systematic uncertainty. In our case, to calculate the bias due to truncating at a certain number of modes,
\begin{eqnarray}
\beta^{R, {\rm true}}_n &=& \beta^{R, {\rm obs}}_n \quad \ \ {\rm for}\ n < {\rm max}\;N_{\rm modes} \\
\beta^{R, {\rm truncated}}_n &=&
\begin{cases}
\beta^{R, {\rm obs}}_n & {\rm for}\ n < N_{\rm modes} \\
0 & \text{otherwise}
\end{cases} \\
\beta^{R, {\rm sys}}_n &=& \beta^{R, {\rm true}}_n - \beta^{R, {\rm truncated}}_n \nonumber \\
&=& \begin{cases}
0 & {\rm for}\ n < N_{\rm modes} \\
\beta^{R, {\rm obs}}_n & {\rm for}\ n \geq N_{\rm modes},
\end{cases}
\end{eqnarray}
where $\beta^{R, {\rm obs}}_n$ is the average measured from the Minerva simulations.
We set ${\rm max}\;N_{\rm modes}=108$ if custom modes are included, and ${\rm max}\;N_{\rm modes}=102$ if they are not.
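The truncation-bias forecast described above can be sketched compactly; the following Python illustration follows the equations for $F_{ij}$ and $b_j$, but uses hypothetical toy inputs in place of our measured derivatives, covariance, and coefficients:

```python
import numpy as np

def fisher_truncation_bias(dbeta, C, beta_obs, n_keep):
    """Fisher forecast of the parameter bias from truncating the modal
    expansion at n_keep modes.
    dbeta:    (n_params, n_max) derivatives d beta^R_n / d theta_i
    C:        (n_max, n_max) covariance of the beta^R coefficients
    beta_obs: (n_max,) mean measured coefficients."""
    Cinv = np.linalg.inv(C)
    F = dbeta @ Cinv @ dbeta.T        # Fisher matrix F_ij
    beta_sys = beta_obs.copy()        # systematic residual: zero below the
    beta_sys[:n_keep] = 0.0           # cut, observed coefficients above it
    b = dbeta @ Cinv @ beta_sys       # b_j
    return np.linalg.inv(F) @ b       # bias b(theta_i) = (F^{-1})_ij b_j

# toy numbers (hypothetical): 2 parameters, 5 modes
rng = np.random.default_rng(0)
dbeta = rng.normal(size=(2, 5))
C = np.eye(5)
beta_obs = rng.normal(size=5)
bias_full = fisher_truncation_bias(dbeta, C, beta_obs, n_keep=5)
assert np.allclose(bias_full, 0.0)    # keeping all modes leaves no bias
```

With all modes kept, $\beta^{R,{\rm sys}}_n$ vanishes identically and the forecasted bias is zero, as required.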
Taking the case where $k_{\rm max}=24.5 \, k_f$, we show the Fisher forecasted errors and bias in Fig.~\ref{fig:fisher}.
Explicitly, we show $|\Delta\sigma|$ and $|\Delta\theta|$, where
\begin{eqnarray}
\Delta\theta &\equiv& \frac{b(\theta)}{\sigma_\theta({\rm max}\;N_{\rm modes})}
\label{eq:Delta theta Nmodes} \\
\Delta\sigma &\equiv& \frac{\sigma_\theta(N_{\rm modes})}{\sigma_\theta({\rm max}\;N_{\rm modes})}-1,
\label{eq:Delta sigma Nmodes}
\end{eqnarray}
and $\sigma_\theta$ is the Fisher forecasted error for parameter $\theta$.
To keep the plot simple, at each $N_{\rm modes}$ we plot the largest $|\Delta\sigma|$ and $|\Delta\theta|$ among the five parameters.
This illustrates a simple case where we already have MCMC simulations, and we are simply verifying, after the fact, that Fisher forecasts can provide similar indications of modal expansion convergence.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{fisher_bias.pdf}
\caption[Fisher forecasts for modal convergence]{Results from Fisher forecasts on the parameter error (top) and bias (bottom), defined in eqs.~\eqref{eq:Delta sigma Nmodes} and \eqref{eq:Delta theta Nmodes}, as a function of total modes used.
The vertical gray lines mark $N_{\rm modes}=10$ and $31$, which we previously found was sufficient to obtain robust parameter constraints when custom modes are included or excluded, respectively.
The shaded gray regions are where the error is within 10\%, and the bias is within $0.1\,\sigma$, of the result with max $N_{\rm modes}$.
}
\label{fig:fisher}
\end{figure}
However, we note that the bias due to a truncation of the modal expansion is a purely non-Gaussian effect that cannot be estimated without the non-Gaussian covariance matrix.
This is because the Gaussian covariance is diagonal, such that the $b_j$ in eq.~\eqref{eq:bias bj} would only be sensitive to the systematics in those $\beta^{R,{\rm sys}}_n$ modes that also explicitly vary with the $\theta$ parameters of the modeling.
Still, once a non-Gaussian covariance matrix is obtained, the Fisher formalism may allow for a forecast of how many modes are necessary for the errors and bias to converge to a desired level.
This may be useful if a non-Gaussian covariance matrix is available, but one wants to estimate roughly how many modes may be needed without running potentially expensive MCMC simulations for multiple scenarios.
\subsection{Covariances}
\label{subsec:cov}
All of the results we have discussed so far used covariance matrices estimated from the full set of 10,000 Pinocchio mocks.
In this section, we explore how the modal bispectrum constraints are sensitive to the covariance matrix that is used.
Unless otherwise mentioned explicitly, the results in this subsection use $k_{\rm max} = 24.5 \, k_f \approx 0.10\, h \,{\rm Mpc}^{-1}$ and 10 modes.
In the limit of Gaussian covariance, the covariance matrix for the $\beta^R_n$ modal coefficients is diagonal, with the same variance for each $n$.
When $k_{\rm max}=13.5 \, k_f$ with six modes, we find that the Gaussian covariance approximation is very accurate, giving the same constraints as the fully non-Gaussian covariance matrix estimated from 10,000 Pinocchio mocks.
However, when $k_{\rm max}=24.5 \, k_f$ with 10 modes, as shown in Fig.~\ref{fig:covariance comparison}, the Gaussian covariance underestimates the parameter errors by up to 20\%, and the constraints are biased by up to $2\,\sigma$, depending on the parameter.
Fig.~\ref{fig:covariance comparison} also shows that constraints using covariance matrices estimated from 298 Minerva simulations, 298 Pinocchio mocks with matched initial conditions, and the full set of 10,000 Pinocchio mocks are in good agreement: parameter errors agree to within 10\% and biases are small, less than $\sim 0.4\,\sigma$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{covariance_comparison.pdf}
\caption[Impact of different covariance matrix estimates]{Comparison of parameter constraints when different covariance matrices are used to obtain constraints at $k_{\rm max}=24.5 \, k_f \approx 0.10\, h\,{\rm Mpc}^{-1}$ using 10 modes.
The Gaussian covariance matrix (green) leads to biased and underestimated parameter constraints, but the covariance matrices estimated from 298 Minerva simulations (dark blue) and 298 Pinocchio simulations with matched initial conditions (red) are in good agreement with the benchmark modal constraints that used the full set of 10,000 Pinocchio simulations (gray, nearly identical to the dark blue and red contours).}
\label{fig:covariance comparison}
\end{figure}
In Fig.~\ref{fig:covariance convergence}, we show how the parameter means and errors can depend on the number of mocks used to estimate the covariance matrix.
From the 10,000 Pinocchio mocks, we take subsets of the mocks divided into groups of $N_s$ and compare the resulting constraints.
This is along the lines of \cite{Blot:2015cvj}, which considered the impact of covariance matrix errors on cosmological parameter constraints from the power spectrum.
Similarly to eqs.~\eqref{eq:Delta theta Nmodes} and \eqref{eq:Delta sigma Nmodes}, we define and show
\begin{eqnarray}
\Delta\theta &\equiv& \frac{\overline{\theta}(N_s) - \overline{\theta}(N_s=10^4)}{\sigma_\theta(N_s=10^4)}
\label{eq:Delta theta Ns} \\
\Delta\sigma &\equiv& \frac{\sigma_\theta(N_s)}{\sigma_\theta(N_s=10^4)}-1
\label{eq:Delta sigma Ns}.
\end{eqnarray}
Each subset of mocks corresponds to a single gray circle in each panel of Fig.~\ref{fig:covariance convergence}, while the red points and error bars show the mean and standard deviation of the gray circles at one value of $N_s$.
This comparison shows that the parameter errors are recovered to within 10\% with only 300 mocks, but many more mocks are typically necessary to reduce the bias to the same level. For example, $N_s > 2000$ would be needed to reduce the bias to $\lesssim 0.1\,\sigma$.
One caveat to this result, however, is that the results at different $N_s$ are not independent, since they divide up the same 10,000 realizations into groups of different sizes.
This will tend to make the different $N_s$ appear more consistent with the reference case that uses all 10,000 mocks.
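The subsampling procedure can be sketched as follows; here we use toy Gaussian "mocks" rather than the actual Pinocchio measurements (all numbers are illustrative), so only the qualitative scaling of the scatter with $N_s$ carries over:

```python
import numpy as np

def split_covariances(mocks, n_s):
    """Divide the mock measurements (n_mocks x n_bins) into disjoint groups
    of n_s realizations and return the sample covariance of each group."""
    n_groups = mocks.shape[0] // n_s
    return [np.cov(mocks[g * n_s:(g + 1) * n_s], rowvar=False)
            for g in range(n_groups)]

# toy mocks: 10,000 Gaussian draws of 10 "modal coefficients"
rng = np.random.default_rng(1)
mocks = rng.normal(size=(10_000, 10))
covs = split_covariances(mocks, n_s=300)
full = np.cov(mocks, rowvar=False)

# scatter of the per-group variance estimates around the full-sample result
rel = np.array([np.diag(c) / np.diag(full) - 1.0 for c in covs])
assert rel.std() < 0.15   # few-hundred-mock groups recover variances to ~10%
```

The relative scatter of a sample variance from $N_s$ realizations scales as $\sqrt{2/(N_s-1)}$, about 8\% for $N_s=300$, which is consistent with the $\sim 10\%$ agreement in the parameter errors we quote above.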
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,keepaspectratio]{covariance_convergence.pdf}
\caption[Convergence of the covariance matrix]{Comparison of the bias and change in parameter errors, for $k_{\rm max}=24.5 \, k_f$ with 10 modes, as a function of how many Pinocchio mocks, $N_s$, are used to estimate the covariance.
$\Delta\theta$ and $\Delta\sigma$ are defined in eqs.~\eqref{eq:Delta theta Ns} and \eqref{eq:Delta sigma Ns}.
The gray circles mark the result from each set of $N_s$ mocks, while the red error bars are the $1\,\sigma$ scatter of the gray circles. The gray shaded bands mark the regions where deviations are within $0.1\,\sigma$ of the benchmark result using $N_s=10^4$.}
\label{fig:covariance convergence}
\end{figure}
\subsection{Gaussian vs Sellentin-Heavens likelihood}
\label{subsec:shlike}
We compared the two likelihoods when all $N_s = 10^4$ mocks are used to estimate the covariance, and we find that they result in indistinguishable parameter constraints, which is the expected behavior when $N_s$ is very large.
For some smaller value of $N_s$, we expect that the two likelihoods will show different results.
For $k_{\rm max}=24.5 \, k_f$ with 10 modes, we show this comparison for two values of $N_s$, $N_s=20$ and 300, in Fig.~\ref{fig:shlike}.
For $N_s=20$, we simulate 50 analyses, and for $N_s=300$ we simulate 33.
$\Delta\theta$ and $\Delta\sigma$ are defined in eqs.~\eqref{eq:Delta theta Ns} and \eqref{eq:Delta sigma Ns}, and for each MCMC simulation, we plot five points for $\Delta\theta$ and $\Delta\sigma$, one point for each parameter.
\begin{figure}[t]
\centering
\includegraphics[height=0.8\textheight,keepaspectratio]{gaussian_vs_sh_likelihood.pdf}
\caption[Gaussian vs Sellentin-Heavens likelihood]{Comparison of the bias (left column) and errors (right column) obtained with the Gaussian and SH likelihoods when covariance matrices are estimated using $N_s=300$ (top row) or $N_s=20$ (middle row) mocks.
$\Delta\theta$ and $\Delta\sigma$ are defined in eqs.~\eqref{eq:Delta theta Ns} and \eqref{eq:Delta sigma Ns}.
The bottom row shows histograms of posterior biases and sizes for the likelihoods when $N_s=20$.
The solid lines are Gaussian fits to the histograms and show that the Sellentin-Heavens likelihood on average is slightly closer to the answer given by the full set of $10^4$ mocks.
}
\label{fig:shlike}
\end{figure}
The top row of Fig.~\ref{fig:shlike} shows that $N_s=300$ is sufficiently high to make differences between the two likelihoods negligible when constraints are compared between individual sets of 300 mocks.
When $N_s=20$ (middle row), which is much closer to the number of data bins, $N_{\rm modes}=10$, the two likelihoods can produce posteriors that are noticeably different.
For any one analysis using 20 mocks, the two posteriors will be biased relative to each other and can produce parameter errors that are too big or too small relative to the true answer, which we assume is the result from $10^4$ mocks.
On average, though, both likelihoods yield posteriors that are unbiased relative to the true answer, but the parameter errors from both likelihoods will be larger than in the case with $N_s = 10^4$.
This is expected, because the parameter constraints should be worse when the covariance is less well estimated from fewer mocks.
However, with the SH likelihood, the result on average is closer to the truth: the scatter in the bias $|\Delta \theta|$ is smaller (bottom left panel), the average parameter errors are closer to the truth (bottom right panel), and the distribution of parameter errors is more tightly scattered around the true value (also bottom right panel).
Despite this, we find that in practice it does not matter which likelihood is implemented, because in either case enough mocks would have to be used to ensure that the parameter constraints are not dominated by the covariance matrix error.
In this work, we find that once $N_s$ is large enough for either likelihood to be stable to within a few tens of per cent (as shown in Fig.~\ref{fig:covariance convergence}),
the two likelihoods will produce identical results.
\section{Conclusions}
\label{sec:conclusions}
In this work, we have implemented an MCMC analysis using the compressed modal bispectrum for the first time.
By using the same data, modeling, and analysis choices as \cite{Oddo:2019run}, we are able to rigorously compare the constraints from the standard bispectrum estimator and the modal bispectrum estimator within a controlled setting.
Specifically, we use the real-space tree-level halo bispectrum model to constrain the halo bias and shot noise parameters measured in the Minerva $N$-body simulations, which represents an idealized survey with volume $\approx 1{,}000 \, h^{-3}\,{\rm Gpc}^3$.
Our key result is that the modal bispectrum provides a very efficient compression of the information in the bispectrum, while requiring minimal new calculations compared to the standard bispectrum analysis;
the critical components of the pipeline are the modal estimator in eqs.~\eqref{eq:QwB estimator with Mrx} and \eqref{eq:Mrx} and the inner product matrix, $\gamma$, both of which can be computed with minor modifications to the standard bispectrum estimator.
We find that for $k_{\rm max} = 13.5 \, k_f \approx 0.06\, h\,{\rm Mpc}^{-1}$ $(24.5\, k_f \approx 0.10\, h\,{\rm Mpc}^{-1})$, the constraints on halo bias and shot noise parameters converge with only 6 (10) modal coefficients, yielding very similar constraints compared to the standard bispectrum analysis in \cite{Oddo:2019run} that used $\sim 20$ to 1,600 triangle bins.
We showed that this convergence of the constraints with $N_{\rm modes}$ is only qualitatively reflected by the shape and total correlators (in Section \ref{subsec:correlators}), but Fisher forecasts can estimate the $N_{\rm modes}$ needed for parameter constraints to converge to a desired level.
We tested the robustness of the modal bispectrum constraints to different user choices within the modal pipeline implementation.
We find that the choice between the normal polynomials or shifted Legendre polynomials for constructing the $Q_n$ basis functions has no impact on the results, but using a near-optimal weighting function $w$ to weight Fourier triangles and including some customized basis functions, like $Q_n^{\rm tree}$, that are more tuned to the parameters being constrained can help minimize the number of modes needed for more efficient compression.
We also compared different methods of computing the inner product matrix, $\gamma_{nm} \equiv \llangle Q_n|Q_m \rrangle$, a critical piece of the pipeline, and find that only the 3D FFT method is always correct, though other methods appear to be approximately correct for higher $k_{\rm max}$.
This is because the 3D FFT method calculates the inner product between basis functions, $\llangle Q_n|Q_m \rrangle$, on the Fourier-space grid in the same way that the modal estimator calculates the inner product between basis functions and the data, $\llangle Q|w\mathcal{B} \rrangle$, treating data and theory in the most consistent way possible.
The voxel and 1D FFT methods, on the other hand, take the continuous limit of the inner product, which becomes a good approximation when the Fourier grid is very fine.
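The distinction between a grid-sum and a continuum-limit inner product can be illustrated in a toy 1D setting (a sketch of ours, not the actual 3D pipeline): on a coarse grid the discrete sum can miss or alias an oscillatory mode entirely, while the two definitions agree once the grid is fine:

```python
import numpy as np

def grid_inner(f, g, n_pts):
    """Inner product as a sum over a finite midpoint grid on [0, 1],
    mimicking how the 3D FFT method pairs functions on the Fourier grid."""
    x = (np.arange(n_pts) + 0.5) / n_pts
    return np.sum(f(x) * g(x)) / n_pts

def continuum_inner(f, g, n_quad=10_000):
    """Continuum-limit inner product, approximated with a very fine
    quadrature, mimicking the voxel / 1D FFT approach."""
    x = (np.arange(n_quad) + 0.5) / n_quad
    return np.sum(f(x) * g(x)) / n_quad

f = lambda x: np.cos(12 * np.pi * x)  # oscillatory mode

coarse = grid_inner(f, f, 12)    # mode vanishes at every coarse midpoint: sum is ~0
fine = grid_inner(f, f, 4096)    # fine grid recovers the continuum value 1/2
exact = continuum_inner(f, f)    # integral of cos^2(12 pi x) over [0, 1] is 1/2
print(coarse, fine, exact)
```

On the 12-point grid the discrete inner product evaluates to essentially zero even though the continuum value is $1/2$; on the 4096-point grid the two agree, which is the 1D analogue of why the discrete and continuum methods only coincide when the Fourier grid is very fine.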
We also noted that, while the modal bispectrum and standard bispectrum estimators are both summary statistics of the true bispectrum, they are performing different operations on the density grid in Fourier space, $\delta(\mathbf{k})$.
Thus they are not always interchangeable and care should be taken when comparing the two.
To illustrate this, we have shown that they agree on the mean bispectrum averaged over many simulations (i.e.~the mean standard bispectrum estimator vs the mean reconstructed bispectrum), but for one realization they give different answers for the triangle-dependence of the measured bispectrum.
Additionally, the two estimators have different error properties, and the covariance of $B_{\rm rec}$ cannot be used as a substitute for the covariance of measurements made with the standard bispectrum estimator.
The highly efficient compression achieved by the modal bispectrum and the large number of simulations available have allowed us to explore how the modal estimator constraints depend on the number of simulations used to estimate the covariance, $N_s$, and whether $N$-body simulations or Pinocchio approximate mocks are used.
Such calculations can usually only be done in a limited way for bispectrum data sets using the standard estimator because of its much larger size.
We find that the covariance matrices from 298 $N$-body simulations, 298 Pinocchio mocks with matched initial conditions, and the full set of 10,000 mocks lead to constraints that are biased by up to $\sim 0.4\,\sigma$ relative to each other, and to reduce this bias to $\lesssim 0.1\,\sigma$ would require $N_s > 2{,}000$ mocks.
We also show that the Gaussian and Sellentin-Heavens likelihood functions only show different results when $N_s$ is extremely low.
However, since $N_s$ should be large enough such that the error in the covariance matrix estimate is subdominant with either likelihood function (at which point the two likelihoods give identical parameter constraints), in practice either likelihood could be used, though in principle the Sellentin-Heavens likelihood is more theoretically motivated from a Bayesian perspective.
This work has shown that the modal method remains a promising avenue for accessing cosmological information in the bispectrum through a compressed data set, and we have developed a better understanding of how to implement and interpret the estimator and its results.
However, the modal bispectrum pipeline presented here would require further work before it could be applied to a realistic galaxy catalog: redshift-space distortions must be included, and a larger range of scales (higher $k_{\rm max}$) may be probed, where a theoretical model beyond the tree-level SPT bispectrum would be necessary.
A larger $k_{\rm max}$ will most likely require more modes, ideally including more custom modes that would be theoretically motivated.
If a theoretical model beyond the separable tree-level SPT one is used, computational methods in the pipeline will need to be adapted so that the inner product $\llangle Q| wB^{\rm theory} \rrangle$ could still be computed quickly.
(The analogous problem in the standard bispectrum analysis is handled by evaluating the theoretical bispectrum at an effective triangle in each triangle bin \cite{Oddo:2019run}.)
It is likely that if $k_{\rm max}$ is sufficiently high, the calculation of the inner product could be well-approximated by another method that does not require evaluating the theory on the Fourier-space grid.
Additionally, the modal bispectrum method presented here would need to be extended to capture anisotropies coming from redshift-space distortions \cite{Regan:2017vgi}, which are always present in real observations and also act as a source of more cosmological information.
We plan to investigate these outstanding issues in future work.
\acknowledgments
We are grateful to Pierluigi Monaco for providing the Pinocchio mock halo catalogs and to Claudio Dalla Vecchia and Ariel S\'{a}nchez for providing the Minerva $N$-body simulations.
We thank Dionysios Karagiannis for suggesting the use of the 1D FFT inner product method.
We also wish to thank the Institute for Fundamental Physics of the Universe (IFPU) in Trieste, Italy for hosting the workshop of the Euclid Galaxy Clustering Higher-order Statistics Work Package where part of this work was done.
JB is supported by the Sinergia Grant No.~173716 from the Swiss National Science Foundation. ES acknowledges support from PRIN MIUR 2015 Cosmology and Fundamental Physics: illuminating the Dark Universe with Euclid.
The modal bispectrum analysis was performed on the Baobab cluster at the University of Geneva.
\bibliographystyle{JHEP}
\section{Introduction}
Let $\Omega$ be a bounded polyhedral Lipschitz domain in ${\mathbb R}^d$, $d\ge2$.
We consider a homogeneous Dirichlet boundary value problem for a certain non-linear
second-order elliptic partial differential equation (PDE)
\begin{subequations}
\label{intro:modelproblem}
\begin{align}\label{intro:L}
\operator{L} u(x):= -\textrm{div}\big(\matrix{A}(x,\nabla u)\big) + g(x,u,\nabla u) &= f(x)
\quad\text{in }\Omega,\\
\label{intro:boundary}
u&=0
\quad\quad\;\,\text{on }\Gamma:=\partial\Omega.
\end{align}
\end{subequations}
The differential operator $\operator{L} = \operator{A} + \operator{K}$ is
split into a principal part
$\operator{A}u = -\textrm{div}\big(\matrix{A}(\cdot,\nabla u)\big)$ and a compact perturbation
$\operator{K}u = g(\cdot,u,\nabla u)$, see Subsection~\ref{section:nonlin} for the precise regularity assumptions.
This framework also includes the case of general linear second-order elliptic operators
\begin{align}\label{intro:linearL}
\operator{L} u:= -\textrm{div}(\matrix{A}\nabla u) + \vector{b}\cdot\nabla u + cu.
\end{align}
We consider a common
adaptive mesh-refining algorithm which iterates the following loop
\begin{align}\label{intro:afem}
\boxed{\texttt{ solve }}
\quad\longrightarrow\quad
\boxed{\texttt{ estimate }}
\quad\longrightarrow\quad
\boxed{\texttt{ mark }}
\quad\longrightarrow\quad
\boxed{\texttt{ refine }}
\end{align}
The module \texttt{solve} computes a piecewise polynomial finite element
approximation $U_\ell$ of $u$ with respect to a given mesh ${\mathcal T}_\ell$.
For \texttt{estimate}, we use a residual error estimator, see
e.g.~\cite{ao00,v96}. Next, the D\"orfler marking criterion~\cite{doerfler}
is used to single out elements for refinement. Finally, \texttt{refine}
leads to a locally refined and improved mesh ${\mathcal T}_{\ell+1}$ by means of
the newest vertex bisection algorithm (NVB).
So far, available results on
convergence and quasi-optimality of adaptive finite element methods (AFEM)
from the literature essentially dealt with the linear, symmetric, and elliptic case~\eqref{intro:linearL} with
$\vector{b}=0$ and $c\ge0$, see
e.g.~\cite{bdd,bn,ckns,doerfler,ks,stevenson07}
and the references therein. As far as the linear and non-symmetric
case $\vector{b}\neq0$ is concerned, we are only aware of the works~\cite{cn,mn}
which, however, considered the
special situation $\textrm{div}\,\vector{b}=0$ and $c\ge0$. Moreover, their
analysis requires the interior node property for the refinement at least after
a fixed number of steps, which has been introduced in~\cite{mns} to guarantee
a discrete lower bound for the error. Finally, the proofs of convergence
and quasi-optimality in~\cite{cn,mn} assume the initial mesh ${\mathcal T}_0$ to be
sufficiently fine although the assumption $\textrm{div}\,\vector{b}=0$ already
ensures ellipticity of the associated bilinear form $b(\cdot,\cdot)$ in the
weak formulation of~\eqref{intro:modelproblem}, i.e.\ the operator $\operator{L}$ in~\eqref{intro:linearL}
is uniformly elliptic.
All this is different to the present work, and the advances over the state
of the art, see e.g.~\cite{ckns,cn,ks}, are fourfold:
\begin{itemize}
\item[(i)] In the linear case~\eqref{intro:linearL}, our assumptions on the data $\matrix{A} = \matrix{A}(x)$,
$\vector{b}=\vector{b}(x)$, and $c=c(x)$ only ensure that the bilinear form
$b(\cdot,\cdot)$ of the weak formulation of~\eqref{intro:modelproblem} is
continuous and satisfies a G\r{a}rding inequality on $H^1_0(\Omega)$.
\item[(ii)] As for the symmetric case~\cite{ckns}, we only rely on standard
newest vertex bisection, and the interior node property is avoided.
\item[(iii)] If $b(\cdot,\cdot)$ is elliptic, we avoid any assumption on the
initial mesh ${\mathcal T}_0$. If $b(\cdot,\cdot)$ satisfies a G\r{a}rding inequality, we
require the same assumption on the initial mesh as~\cite{cn,mn} to ensure
well-posedness of the finite element formulations.
\item[(iv)] To the best of the authors' knowledge and besides~\cite{bdk} for the particular $p$-Laplace problem, this work provides the first quasi-optimality result for a class of non-linear problems.
\end{itemize}
From a technical point of view, our analytical argument works as follows and is illustrated for the linear operator $\operator{L}$ from~\eqref{intro:linearL} with induced bilinear form $b(\cdot,\cdot)$:
First, the estimator reduction
\begin{align}\label{intro:estconv}
\eta_{\ell+1}^2 \le q\,\eta_\ell^2 + C\,\enorm{U_{\ell+1}-U_\ell}^2
\end{align}
together with a C\'ea-type quasi-optimality already implies convergence
$U_\ell \to u$ as $\ell\to\infty$ (Proposition~\ref{prop:conv}),
see also~\cite{estconv} for this \emph{estimator reduction principle}.
Here, $0<q<1$ and $C>0$ are generic constants, and
$\enorm\cdot$ denotes the energy quasi-norm induced by $b(\cdot,\cdot)$.
Second, the novel contribution in our analysis is that this additional
knowledge allows us to prove a quasi-Pythagoras theorem
\begin{align}\label{intro:orthogonality}
\enorm{U_{\ell+1}-U_\ell}^2
+ \enorm{u-U_{\ell+1}}^2
\leq \frac{1}{1-\varepsilon}\,\enorm{u-U_\ell}^2
\end{align}
for all $\varepsilon>0$ and $\ell\ge\ell_0(\varepsilon)$ sufficiently large
(Proposition~\ref{prop:quasiqo})
which unlike~\cite{cn,mn} avoids any additional assumption on the mesh-size
of ${\mathcal T}_\ell$. With estimator reduction~\eqref{intro:estconv}
and quasi-orthogonality~\eqref{intro:orthogonality}
at hand, we next observe $R$-linear convergence
\begin{align}\label{intro:linear}
\eta_{\ell+k} \le Cq^k\eta_\ell
\quad\text{for all }\ell,k\in{\mathbb N}
\end{align}
of the error estimator (Theorem~\ref{thm:rconv}) with further generic constants
$C>0$ and $0<q<1$.
Finally, the $R$-linear convergence~\eqref{intro:linear} suffices to follow
the paths of~\cite{stevenson07,ckns}
to prove even quasi-optimal convergence rates in the sense of
\begin{align}\label{intro:optimal}
(u,f) \in \mathbb A_s
\quad\Longleftrightarrow\quad
\eta_\ell \le C\,(\#{\mathcal T}_\ell-\#{\mathcal T}_0)^{-s}
\quad\text{for all }\ell\in{\mathbb N},
\end{align}
i.e.\ each theoretically possible convergence order ${\mathcal O}(N^{-s})$ for the
error estimator will asymptotically be achieved by AFEM. The approximation
class $\mathbb A_s$ involved in~\eqref{intro:optimal} is defined in Section~\ref{section:optimality}.
By means of reliability and efficiency of the error estimator $\eta_\ell$ used,
this quasi-optimality result can equivalently be stated in terms of error plus
oscillations as is done in~\cite{ckns,cn,ks,stevenson07}. As has first been
observed in~\cite{dirichlet3d}, our approach and proof of~\eqref{intro:optimal}, however, fully
avoids the use of lower bounds for the error, i.e.\ all constants are
independent of the efficiency estimate.
For the nonlinear problem~\eqref{intro:modelproblem}, we observe that estimator reduction~\eqref{intro:estconv}, $R$-linear convergence~\eqref{intro:linear}, as well as quasi-optimality~\eqref{intro:optimal} do not hinge on linearity of $\operator{L}$. We thus bootstrap the arguments developed for the linear case to prove a quasi-Pythagoras theorem~\eqref{intro:orthogonality} for nonlinear $\operator{L}$ (Proposition~\ref{prop:nlquasiqo}), and may derive convergence of AFEM with quasi-optimal algebraic rates.
The remainder of this paper is organized as follows:
For the sake of a clear presentation, we first consider the linear case~\eqref{intro:linearL} with elliptic bilinear form $b(\cdot,\cdot)$ corresponding to the weak formulation of~\eqref{intro:modelproblem}. This case already includes the main ideas of how
to cope with compact perturbations.
In Section~\ref{section:modelproblem}, we explicitly state the assumptions on
the differential operator $\operator{L}$ from~\eqref{intro:linearL}, recall the
continuous and discrete variational formulation of~\eqref{intro:modelproblem},
and give the necessary details on the four modules of~\eqref{intro:afem}.
Section~\ref{section:convergence} then provides the estimator
reduction~\eqref{intro:estconv}, which follows as in~\cite{ckns}, and
the quasi-Galerkin orthogonality~\eqref{intro:orthogonality} which relies
on the convergence of AFEM and compactness arguments.
The short Section~\ref{section:contraction} proves $R$-linear
convergence~\eqref{intro:linear} of the error estimator by use
of~\eqref{intro:estconv}--\eqref{intro:orthogonality}.
We stress that, so far, the analysis does neither hinge on the precise
mesh-refinement used, nor on the adaptivity parameter chosen.
By use of intrinsic properties of NVB, we then prove quasi-optimal convergence
rates~\eqref{intro:optimal} in Section~\ref{section:optimality}.
A final Section~\ref{section:extensions} is concerned with extensions of
our analysis. Amongst other topics, we discuss other boundary conditions
than~\eqref{intro:boundary} as well as changes of our analysis if the bilinear form $b(\cdot,\cdot)$
satisfies only a G\r{a}rding inequality. Subsection~\ref{section:nonlin} bootstraps the arguments of the previous sections and incorporates the non-linear case~\eqref{intro:L} into the analysis.
In all statements, the constants involved and their dependencies are explicitly stated. In proofs, however, we use the symbol $\lesssim$ to abbreviate $\leq$ up to a multiplicative constant. Moreover, $\simeq$ abbreviates that both estimates $\lesssim$ and $\gtrsim$ hold.
\section{Model Problem \& Adaptive Algorithm}
\label{section:modelproblem}
This section is devoted to stating the model problem~\eqref{intro:modelproblem} with the linear differential operator~\eqref{intro:linearL} in weak form and to collecting all ingredients needed to formulate the adaptive algorithm. The problem presented is not the most general case to which the developed theory applies, but it allows for a rather simple presentation and illustrates the main difficulties. We refer to Section~\ref{section:extensions} for possible extensions and generalizations.
\subsection{Variational formulation}
For a given right-hand side $f\in L^2(\Omega)$, we consider the elliptic boundary value problem~\eqref{intro:modelproblem} with linear operator $\operator{L}$ from~\eqref{intro:linearL}.
For the weak formulation, the error estimator, and to prove optimal convergence rates, we require some regularity assumptions on the coefficients. We assume that
$\matrix{A}=\matrix{A}(x) \in{\mathbb R}^{d\times d}$ with $\matrix{A}\in \big(W_1^\infty(\Omega)\big)^{d\times d}$ is a symmetric matrix, $\vector{b}=\vector{b}(x) \in {\mathbb R}^d$ with $\vector{b}\in \big(L^\infty(\Omega)\big)^d$ is a vector, and $c=c(x)\in{\mathbb R}$ with $c\in L^\infty(\Omega)$ is a scalar. Here, $W_1^\infty(\Omega):=\set{a \in L^\infty(\Omega)}{\nabla a \in \big(L^\infty(\Omega)\big)^d\text{ in the weak sense}}$ coincides with the space of Lipschitz continuous functions.
This allows to write down the weak formulation of~\eqref{intro:modelproblem}: Find $u\in H^1_0(\Omega):=\set{v\in H^1(\Omega)}{v|_{\Gamma}=0\text{ in the sense of traces}}$ such that
\begin{align}\label{eq:weakform}
b(u,v):= \int_\Omega \matrix{A}\nabla u\cdot\nabla v + \vector{b}\cdot\nabla u\,v + cu v\,dx = \int_\Omega fv\,dx\quad\text{for all }v\in H^1_0(\Omega).
\end{align}
According to Sobolev's embedding theorem, there holds $H^1_0(\Omega)\subset L^{2d/(d-2)}(\Omega)$ for $d\geq3$, resp.\ $H^1_0(\Omega)\subset L^{q}(\Omega)$ for all $q<\infty$ if $d=2$. The bilinear form $b(\cdot,\cdot)$ is therefore well-defined and bounded
with
\begin{align}\label{eq:continuous}
|b(u,v)|\leq \c{continuous}\normLtwo{\nabla u}{\Omega}\normLtwo{\nabla v}{\Omega}\quad\text{for all }u,v\in H^1_0(\Omega),
\end{align}
where the constant $\setc{continuous}:=C_{\Omega}\big(\norm{\matrix{A}}{L^\infty(\Omega)} + \norm{\vector{b}}{L^{d}(\Omega)} + \norm{c}{L^{d/2}(\Omega)}\big)$ depends only on the coefficients of $\operator{L}$ as well as the Poincar\'e constant $C_\Omega>0$ of $\Omega$.
Additionally, we assume that the coefficients ensure that $b(\cdot,\cdot)$ is elliptic, i.e.
\begin{align}\label{eq:elliptic}
b(u,u)\geq \c{elliptic} \normLtwo{\nabla u}{\Omega}^2\quad\text{for all }u\in H^1_0(\Omega)
\end{align}
for some constant $\setc{elliptic}>0$ which may also depend on $C_\Omega>0$, see Section~\ref{section:extensions} if $b(\cdot,\cdot)$ satisfies only a G\r{a}rding inequality.
Now, the Lax-Milgram lemma guarantees unique solvability of~\eqref{eq:weakform} for all $f\in L^2(\Omega)$ and proves continuous dependence $\normLtwo{\nabla u}{\Omega}\lesssim \norm{f}{ H^{-1}(\Omega)}\leq \normLtwo{f}{\Omega}$. Here, $H^{-1}(\Omega):= H^1_0(\Omega)^\star$ denotes the dual space of $H^1_0(\Omega)$, and duality is understood with respect to the extended $L^2$-scalar product, i.e.
\begin{align*}
\normHme{f}{\Omega} := \sup_{v\in H^1_0(\Omega)\setminus\{0\}} \frac{\int_\Omega fv\,dx}{\normLtwo{\nabla v}{\Omega}}.
\end{align*}
Moreover, the bilinear form $b(\cdot,\cdot)$ defines a \textit{quasi}-norm $\enorm{\cdot}:=b(\cdot,\cdot)^{1/2}$, i.e.\ $\enorm{\cdot}$ is definite and homogeneous, but satisfies the triangle inequality only up to some multiplicative constant. Due to ellipticity and continuity of $b(\cdot,\cdot)$, it holds
\begin{align}\label{eq:normequiv}
\c{norm}^{-1}\normLtwo{\nabla v}{\Omega}\leq\enorm{v}\leq \c{norm} \normLtwo{\nabla v}{\Omega}\quad\text{for all } v\in H^1_0(\Omega)
\end{align}
for a constant $\setc{norm}=\max\{\c{continuous}^{1/2},\c{elliptic}^{-1/2}\}>0$.
\subsection{Discrete formulation}
For any regular triangulation ${\mathcal T}_\ell$ of $\Omega$ (see Section~\ref{sec:mesh} below) and $p\geq 1$, we consider the piecewise polynomials
\begin{align*}
{\mathcal P}^p({\mathcal T}_\ell):=\set{V_\ell\in L^2(\Omega)}{\text{for all }T\in{\mathcal T}_\ell,\,V_\ell|_T\text{ is a polynomial of degree at most }p}
\end{align*}
as well as the conforming ansatz and test-space
\begin{align*}
{\mathcal S}^p_0({\mathcal T}_\ell):={\mathcal P}^p({\mathcal T}_\ell)\cap H^1_0(\Omega)\subset \mathcal{C}(\overline{\Omega}).
\end{align*}
Now, the discrete formulation of~\eqref{eq:weakform} reads: Find $U_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)$ such that
\begin{align}\label{eq:discrete}
b(U_\ell,V_\ell)=\int_\Omega f\, V_\ell\, dx \quad\text{for all }V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell).
\end{align}
As in the continuous case~\eqref{eq:weakform}, existence and uniqueness of $U_\ell$ follow from the Lax-Milgram lemma. Moreover, there holds the C\'ea lemma
\begin{align}\label{eq:cea}
\normLtwo{\nabla(u-U_\ell)}{\Omega}\leq\frac{\c{continuous}}{\c{elliptic}}\,\min_{V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)} \normLtwo{\nabla(u-V_\ell)}{\Omega}.
\end{align}
\subsection{Error estimator}
We use the standard weighted-residual error estimator with the local contributions
\begin{align*}
\eta_\ell(T)^2:= |T|^{2/d}\normLtwo{\operator{L}|_TU_\ell-f}{T}^2 + |T|^{1/d}\normLtwo{[\matrix{A}\nabla U_\ell\cdot n]}{\partial T\cap\Omega}^2\quad\text{for all }T\in{\mathcal T}_\ell,\,\ell\in{\mathbb N}.
\end{align*}
Here, $|T|$ is the $d$-dimensional volume of $T\in{\mathcal T}_\ell$, and $[\matrix{A}\nabla U_\ell\cdot n]|_E:= \big(\matrix{A}\nabla U_\ell|_{T_1}\big)\cdot n_{T_1} + \big(\matrix{A}\nabla U_\ell|_{T_2}\big)\cdot n_{T_2}$ denotes the conormal jump over the facet $E:=T_1\cap T_2 $ for all $T_1,T_2\in{\mathcal T}_\ell$, where $n_{T_1},\,n_{T_2}$ denote the outward-pointing unit normal vectors on the respective element boundaries. Note that due to the regularity assumptions on the coefficients, there holds $\operator{L}|_TU_\ell \in L^2(T)$ for all $T\in{\mathcal T}_\ell$. The error estimator $\eta_\ell$ is defined as the $\ell_2$-sum of the elementwise contributions
\begin{align*}
\eta_\ell^2 := \sum_{T\in{\mathcal T}_\ell}\eta_\ell(T)^2.
\end{align*}
As shown in e.g.~\cite{ao00,v96}, the error estimator is reliable, i.e. for all regular triangulations ${\mathcal T}_\ell$ and corresponding solutions $U_\ell$ of~\eqref{eq:discrete}, it holds
\begin{align}\label{eq:reliable}
\normLtwo{\nabla(u-U_\ell)}{\Omega}\leq \c{reliable}\eta_\ell
\end{align}
for a constant $\setc{reliable}>0$.
Moreover, $\eta_\ell$ is also efficient, i.e.
\begin{subequations}\label{eq:efficient}
\begin{align}\label{eq:efficienta}
\c{efficient}^{-1}\eta_\ell \leq \normLtwo{\nabla(u-U_\ell)}{\Omega} + {\rm osc}_\ell(U_\ell)
\end{align}
for a constant $\setc{efficient}>0$ and oscillation terms
\begin{align}\label{eq:efficientb}
{\rm osc}_\ell(U_\ell)^2:=\sum_{T\in{\mathcal T}_\ell} |T|^{2/d}\normLtwo{(1-\Pi_\ell^{p-1})(\operator{L}|_TU_\ell-f)}{T}^2,
\end{align}
\end{subequations}%
where $\Pi_\ell^{p-1}:\,L^2(\Omega)\to{\mathcal P}^{p-1}({\mathcal T}_\ell)$ denotes the $L^2$-orthogonal projection.
The constants $\c{reliable},\c{efficient}>0$ depend only on the $\gamma$-shape regularity of ${\mathcal T}_\ell$ (see Section~\ref{sec:mesh} below), the polynomial degree $p\geq 1$, and on $\Omega$. We stress that, unlike in~\cite{ckns,cn,ks}, efficiency~\eqref{eq:efficient} is not used anywhere in our analysis.
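For $p=1$, the projection $\Pi_\ell^{0}$ is simply the elementwise mean, so the oscillation term~\eqref{eq:efficientb} can be evaluated by quadrature. The following 1D sketch is ours (illustrative names, midpoint quadrature); it shows that an elementwise-constant residual carries no oscillation:

```python
import numpy as np

def osc2_elementwise(nodes, r, n_quad=64):
    """Oscillation osc_T^2 = h_T^2 * ||(1 - Pi^0) r||_{L2(T)}^2 for p = 1 in 1D,
    where Pi^0 r is the elementwise mean of the residual r."""
    t = (np.arange(n_quad) + 0.5) / n_quad          # midpoint quadrature on [0, 1]
    osc2 = np.zeros(len(nodes) - 1)
    for i, (a, b) in enumerate(zip(nodes[:-1], nodes[1:])):
        h = b - a
        vals = r(a + h * t)
        vals = vals - vals.mean()                   # subtract Pi^0 r (elementwise mean)
        osc2[i] = h**2 * (h * np.mean(vals**2))     # h_T^2 * ||r - mean||_{L2(T)}^2
    return osc2

nodes = np.linspace(0.0, 1.0, 9)
print(osc2_elementwise(nodes, lambda x: np.ones_like(x)))  # constant residual: zero
print(osc2_elementwise(nodes, np.sin).sum() > 0)           # varying residual: positive
```

This makes concrete why oscillations measure only the part of the residual that the discrete polynomial space cannot resolve.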
\subsection{Adaptive algorithm}
Now, we are in the position to formulate the adaptive algorithm~\eqref{intro:afem} in detail.
\begin{algorithm}\label{algorithm}
\textsc{Input:} Initial triangulation ${\mathcal T}_0$ and adaptivity parameter $0<\theta\leq 1$.\\
\textbf{Loop: }For $\ell=0,1,2,\ldots$ do ${\rm (i)}-{\rm(iv)}$
\begin{itemize}
\item[\rm(i)] Compute discrete solution $U_\ell$ of~\eqref{eq:discrete}.
\item[\rm(ii)] Compute refinement indicators $\eta_\ell(T)$ for all $T\in{\mathcal T}_\ell$.
\item[\rm(iii)] Determine set ${\mathcal M}_\ell\subseteq{\mathcal T}_\ell$ of minimal cardinality such that
\begin{align}\label{eq:doerfler}
\theta\,\eta_\ell^2 \le \sum_{T\in{\mathcal M}_\ell}\eta_\ell(T)^2.
\end{align}
\item[\rm(iv)] Refine (at least) the marked elements $T\in{\mathcal M}_\ell$ to obtain the triangulation ${\mathcal T}_{\ell+1}$.
\end{itemize}
\textsc{Output:} Approximate solutions $U_\ell$ and error estimators
$\eta_\ell$ for all $\ell\in{\mathbb N}$.
\end{algorithm}
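To make the four modules of Algorithm~\ref{algorithm} concrete, the following self-contained sketch runs the loop for the simplest instance, the 1D Poisson problem $-u''=f$ on $(0,1)$ with $P^1$ elements. This is our own illustrative stand-in: interval bisection replaces newest vertex bisection, and the indicators are the 1D analogue of $\eta_\ell(T)$.

```python
import numpy as np

def solve(nodes, f):
    """P1 FEM for -u'' = f on (0, 1) with u(0) = u(1) = 0 on the given nodes."""
    h = np.diff(nodes)
    n = len(nodes)
    A, b = np.zeros((n, n)), np.zeros(n)
    for i, hi in enumerate(h):  # element stiffness and midpoint-rule load
        A[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / hi
        b[i:i+2] += 0.5 * hi * f(0.5 * (nodes[i] + nodes[i+1]))
    A[0, :] = A[-1, :] = 0.0; A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

def estimate(nodes, U, f):
    """Indicators eta_T^2 = h_T^2 ||f||_{L2(T)}^2 + h_T * (jumps of U')^2."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    dU = np.diff(U) / h
    eta2 = h**3 * f(mids)**2            # h_T^2 * ||f||^2 (midpoint rule; U'' = 0)
    jump2 = (dU[1:] - dU[:-1])**2       # squared jumps of U' at interior nodes
    eta2[:-1] += 0.5 * h[:-1] * jump2   # split each jump between its neighbors
    eta2[1:] += 0.5 * h[1:] * jump2
    return eta2

def mark(eta2, theta=0.5):
    """Doerfler marking: smallest set capturing a theta-fraction of eta^2."""
    order = np.argsort(eta2)[::-1]
    cum = np.cumsum(eta2[order])
    return order[: int(np.searchsorted(cum, theta * cum[-1])) + 1]

def refine(nodes, marked):
    """Bisect marked elements (1D stand-in for newest vertex bisection)."""
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])
    return np.sort(np.concatenate([nodes, mids]))

nodes = np.linspace(0.0, 1.0, 5)
f = lambda x: np.ones_like(x)
for _ in range(6):  # solve -> estimate -> mark -> refine
    U = solve(nodes, f)
    eta2 = estimate(nodes, U, f)
    nodes = refine(nodes, mark(eta2))
print(len(nodes), np.sqrt(eta2.sum()))
```

For $f\equiv1$ the exact solution is $u(x)=x(1-x)/2$, and the $P^1$ Galerkin solution is nodally exact in 1D, which makes the sketch easy to verify.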
\subsection{Mesh refinement}\label{sec:mesh}
Given an initial mesh ${\mathcal T}_0$ which is regular in the sense of Ciarlet, we construct the subsequent meshes ${\mathcal T}_\ell$ by local refinement with the newest vertex bisection for simplicial meshes in ${\mathbb R}^d$, $d\geq 2$, see e.g.~\cite[Chapter 4]{v96} resp.~\cite{stevenson08}.
Consequently, the set of meshes which can be obtained reads
\begin{align}\label{eq:triangulations}
\mathbb T := \set{{\mathcal T}_\ell}{{\mathcal T}_\ell\text{ is a refinement of }{\mathcal T}_0}.
\end{align}
The finite subset of meshes with at most $N\in{\mathbb N}$ elements more than the initial mesh is defined as
\begin{align*}
\mathbb T_N := \set{{\mathcal T}_\ell\in\mathbb T}{\#{\mathcal T}_\ell-\#{\mathcal T}_0\le N}.
\end{align*}
The meshes ${\mathcal T}_\ell\in\mathbb T$ are regular in the sense of Ciarlet and $\gamma$-shape regular in the sense of
\begin{align}\label{refinement:shaperegular}
\gamma^{-1}\, |T|^{1/d}&\leq {\rm diam}(T)\leq \gamma\,|T|^{1/d}
\end{align}
for some $\gamma\geq 1$ which depends only on ${\mathcal T}_0$.
A refined element $T\in{\mathcal T}_\ell$ is split into at least two sons, i.e.\ we have
\begin{align}\label{refinement:sons}
\#({\mathcal T}_\star\setminus{\mathcal T}_\ell) \leq \#{\mathcal T}_\star-\#{\mathcal T}_\ell
\end{align}
for all refinements ${\mathcal T}_\star\in\mathbb T$ of ${\mathcal T}_\ell\in\mathbb T$.
As a key property for the optimality proof, the meshes generated by Algorithm~\ref{algorithm} satisfy the crucial closure estimate
\begin{align}\label{refinement:closure}
\#{\mathcal T}_\ell - \#{\mathcal T}_0
\le\c{mesh}\,\sum_{j=0}^{\ell-1}\#{\mathcal M}_{j}\quad\text{for all }\ell\in{\mathbb N}
\end{align}
with some constant $\setc{mesh}>0$ which depends only on ${\mathcal T}_0$. For $d\geq3$, ${\mathcal T}_0$ has to satisfy a certain condition on the reference edges, cf.~\cite{bdd,stevenson08}, while this assumption can be dropped for $d=2$, see the recent work~\cite{kpp}.
Finally, for two meshes ${\mathcal T}_\ell,{\mathcal T}_\star\in\mathbb T$ there is a coarsest common refinement
${\mathcal T}_\ell\oplus{\mathcal T}_\star\in\mathbb T$ which satisfies
\begin{align}\label{refinement:overlay}
\#({\mathcal T}_\ell\oplus{\mathcal T}_\star)\le \#{\mathcal T}_\ell + \#{\mathcal T}_\star - \#{\mathcal T}_0,
\end{align}
see~\cite{ckns,stevenson07}. We stress that newest-vertex bisection is a binary refinement rule, and the coarsest common refinement ${\mathcal T}_\ell\oplus{\mathcal T}_\star$ is just the overlay of both meshes.
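In 1D, where a mesh is just a sorted node vector and bisection inserts midpoints, the overlay and the estimate~\eqref{refinement:overlay} can be checked directly. The sketch below is ours and only illustrates the counting, not the NVB machinery:

```python
import numpy as np

def overlay(nodes_a, nodes_b):
    """Overlay (coarsest common refinement) of two 1D refinements of the
    same initial mesh: simply the union of their node sets."""
    return np.unique(np.concatenate([nodes_a, nodes_b]))

nodes0 = np.linspace(0.0, 1.0, 5)                    # initial mesh T_0: 4 elements
a = np.unique(np.concatenate([nodes0, [0.125]]))     # bisect first element of T_0
b = np.unique(np.concatenate([nodes0, [0.875]]))     # bisect last element of T_0
ab = overlay(a, b)

n_el = lambda nodes: len(nodes) - 1                  # number of elements
print(n_el(ab), n_el(a) + n_el(b) - n_el(nodes0))    # overlay estimate: 6 <= 6
```

Here the overlay estimate holds with equality, since the two refinements touch disjoint parts of the initial mesh.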
\section{Convergence \& Quasi-Orthogonality}
\label{section:convergence}
The aim of this section is to prove convergence of Algorithm~\ref{algorithm} without relying on symmetry properties of $\operator{L}$, which is done by use of the estimator reduction concept from~\cite{estconv}.
To that end, we define the subspace ${\mathcal S}^p_0({\mathcal T}_\infty)$ of $H^1_0(\Omega)$ which is \textit{theoretically} affected by Algorithm~\ref{algorithm} as
\begin{align}\label{eq:xinfty}
{\mathcal S}^p_0({\mathcal T}_\infty):= \overline{\bigcup_{\ell\in{\mathbb N}} {\mathcal S}^p_0({\mathcal T}_\ell)},
\end{align}
where the closure is taken with respect to the $H^1$-norm. With convergence $U_\ell\to u$ and hence $u\in{\mathcal S}^p_0({\mathcal T}_\infty)$ at hand, we are then able to prove a novel quasi-Galerkin orthogonality estimate~\eqref{eq:quasiqo}, which is sufficient to prove linear convergence~\eqref{eq:rconv} as well as optimal convergence rates~\eqref{eq:optimality}.
\subsection{Convergence}
The following result is proved in~\cite{ckns} for symmetric $\operator{L}$ and shows that the error estimator $\eta_\ell$ is contractive up to a certain perturbation.
\begin{lemma}\label{lem:estred}
There exist constants $0<\setq{estred}<1$ and $\setc{estred}>0$, such that there holds
\begin{align}\label{eq:estred}
\eta_{\ell+1}^2 \leq \q{estred} \eta_\ell^2 + \c{estred}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\quad\text{for all }\ell\in{\mathbb N}.
\end{align}
The constants $\q{estred}$ and $\c{estred}$ depend only on $\theta$, $\gamma$-shape regularity of ${\mathcal T}_{\ell+1}$, the polynomial degree $p\in{\mathbb N}$, and on $\Omega$.
\end{lemma}
\begin{proof}
The proof follows verbatim the proof of~\cite[Corollary~3.4]{ckns}. Therefore, we give a rough sketch only.
The application of the Young-type inequality $(a+b)^2\leq(1+\delta)a^2+(1+\delta^{-1})b^2$ proves for all $\delta>0$
\begin{align*}
\begin{split}
\eta_{\ell+1}^2&\leq (1+\delta)\sum_{T^\prime\in{\mathcal T}_{\ell+1}} \Big(|T^\prime|^{2/d}\normLtwo{\operator{L}|_{T^\prime}U_\ell -f}{T^\prime}^2+|T^\prime|^{1/d}\normLtwo{[\matrix{A}\nabla U_\ell\cdot n]}{\partial T^\prime\cap \Omega}^2\Big)\\
&\qquad+(1+\delta^{-1})\sum_{T^\prime\in{\mathcal T}_{\ell+1}} \Big(|T^\prime|^{2/d}\normLtwo{\operator{L}|_{T^\prime}(U_{\ell+1}-U_\ell)}{T^\prime}^2\\
&\qquad\qquad+|T^\prime|^{1/d}\normLtwo{[\matrix{A}\nabla (U_{\ell+1}-U_\ell)\cdot n]}{\partial T^\prime\cap \Omega}^2\Big).
\end{split}
\end{align*}
By use of the regularity assumption on the coefficients and standard inverse estimates as well as the Poincar\'e inequality, we obtain
\begin{align}\label{eq:stable}
\begin{split}
\eta_{\ell+1}^2&\leq(1+\delta)\sum_{T^\prime\in{\mathcal T}_{\ell+1}} \Big(|T^\prime|^{2/d}\normLtwo{\operator{L}|_{T^\prime}U_\ell -f}{T^\prime}^2+|T^\prime|^{1/d}\normLtwo{[\matrix{A}\nabla U_\ell\cdot n]}{\partial T^\prime\cap \Omega}^2\Big)\\
&\qquad\qquad\qquad\qquad+(1+\delta^{-1})\c{inv}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2.
\end{split}
\end{align}
The constant $\setc{inv}>0$ depends only on the $\gamma$-shape regularity of ${\mathcal T}_{\ell+1}$, the norms
$\norm{\matrix{A}}{W_1^\infty(\Omega)}^2, \norm{\vector{b}}{L^\infty(\Omega)}^2, \norm{c}{L^\infty(\Omega)}^2$, and on the polynomial degree $p\in{\mathbb N}$.
Next, the sum is split into two sums over $T^\prime \in {\mathcal T}_\ell\cap{\mathcal T}_{\ell+1}$ and $T^\prime \in{\mathcal T}_{\ell+1}\setminus{\mathcal T}_\ell$. We use the reduction of the element size $|T^\prime|\leq |T|/2$ for $T^\prime\subset T$ being a son of a refined element $T\in{\mathcal T}_\ell\setminus{\mathcal T}_{\ell+1}$. Since ${\mathcal M}_\ell\subseteq {\mathcal T}_\ell\setminus{\mathcal T}_{\ell+1}$, one ends up with
\begin{align*}
\eta_{\ell+1}^2&\leq (1+\delta)\Big(2^{-1/d}\hspace{-3mm}\sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_{\ell+1}} \eta_\ell(T)^2+\hspace{-4mm}\sum_{T\in{\mathcal T}_\ell\cap{\mathcal T}_{\ell+1}}\eta_\ell(T)^2\Big)
+(1+\delta^{-1})\c{inv}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\\
&\leq(1+\delta)\Big(2^{-1/d}\sum_{T\in{\mathcal M}_\ell} \eta_\ell(T)^2 +\sum_{T\in{\mathcal T}_\ell\setminus{\mathcal M}_\ell}\eta_\ell(T)^2\Big)
+(1+\delta^{-1})\c{inv}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\\
&\leq(1+\delta)\Big((2^{-1/d}-1)\sum_{T\in{\mathcal M}_\ell} \eta_\ell(T)^2
+\eta_\ell^2\Big)+(1+\delta^{-1})\c{inv}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2.
\end{align*}
Finally, D\"orfler marking~\eqref{eq:doerfler}, i.e.\ $\theta\eta_\ell^2\leq\sum_{T\in{\mathcal M}_\ell}\eta_\ell(T)^2$, together with $2^{-1/d}-1<0$ proves~\eqref{eq:estred} with
\begin{align*}
\q{estred}=\big(1-\theta(1-2^{-1/d})\big)(1+\delta)\in (0,1)\quad\text{and}\quad\c{estred}=(1+\delta^{-1})\c{inv}
\end{align*}
for $\delta>0$ sufficiently small.
\end{proof}
Adaptive algorithms of the type of Algorithm~\ref{algorithm} with nested ansatz spaces ${\mathcal S}^p_0({\mathcal T}_\ell)\subseteq {\mathcal S}^p_0({\mathcal T}_{\ell+1})$ always guarantee
\textit{a~priori} convergence of the Galerkin solutions. This has already been observed in the early work~\cite{bv} and has later been used in~\cite{msv} to prove a general plain convergence result for AFEM.
\begin{lemma}\label{lem:apriori}
The sequence of Galerkin approximations $U_\ell$ of Algorithm~\ref{algorithm} is convergent in $H^1_0(\Omega)$, i.e.\ there exists $u_\infty\in{\mathcal S}^p_0({\mathcal T}_\infty)$ with
\begin{align}\label{eq:apriori}
U_\ell \to u_\infty \quad\text{as }\ell\to\infty.
\end{align}
\end{lemma}
\begin{proof}
The space ${\mathcal S}^p_0({\mathcal T}_\infty)$ is a closed subspace of $H^1_0(\Omega)$ and therefore the Lax-Milgram lemma guarantees existence and uniqueness of a solution $u_\infty\in{\mathcal S}^p_0({\mathcal T}_\infty)$ of~\eqref{eq:discrete} with test space ${\mathcal S}^p_0({\mathcal T}_\infty)$ instead of ${\mathcal S}^p_0({\mathcal T}_\ell)$. The Galerkin approximations $U_\ell$ are also Galerkin approximations of $u_\infty$, since ${\mathcal S}^p_0({\mathcal T}_\ell)\subseteq {\mathcal S}^p_0({\mathcal T}_\infty)$ for all $\ell\in{\mathbb N}$. Therefore, the C\'ea lemma shows
\begin{align*}
\normLtwo{\nabla(u_\infty-U_\ell)}{\Omega}\lesssim \min_{V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)} \normLtwo{\nabla (u_\infty-V_\ell)}{\Omega}\to 0
\end{align*}
as $\ell \to \infty$, where convergence of the best approximation error follows from the definition~\eqref{eq:xinfty} of ${\mathcal S}^p_0({\mathcal T}_\infty)$ together with the nestedness ${\mathcal S}^p_0({\mathcal T}_\ell)\subseteq{\mathcal S}^p_0({\mathcal T}_{\ell+1})$ of the discrete spaces.
\end{proof}
The combination of estimator reduction~\eqref{eq:estred} and a~priori convergence~\eqref{eq:apriori} yields convergence of Algorithm~\ref{algorithm}.
\begin{proposition}\label{prop:conv}
Algorithm~\ref{algorithm} is convergent in $H^1_0(\Omega)$, i.e.
\begin{align}\label{eq:conv}
U_\ell \to u \in H^1_0(\Omega)\quad\text{as }\ell\to\infty.
\end{align}
In particular, this implies $u=u_\infty\in{\mathcal S}^p_0({\mathcal T}_\infty)$.
\end{proposition}
\begin{proof}
According to Lemma~\ref{lem:apriori}, the estimator reduction~\eqref{eq:estred} of Lemma~\ref{lem:estred} takes the form
\begin{align*}
\eta_{\ell+1}^2\leq \q{estred}\eta_\ell^2 + \alpha_\ell
\end{align*}
with $\alpha_\ell\geq 0$ and $\lim_{\ell\to\infty}\alpha_\ell=0$. From this, elementary calculus proves $\lim_{\ell\to\infty} \eta_\ell=0$, see e.g.~\cite{estconv}. Finally, reliability~\eqref{eq:reliable} of $\eta_\ell$ concludes the proof.
\end{proof}
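\begin{remark}
For the reader's convenience, we sketch the elementary calculus argument used in the preceding proof: With $a_\ell:=\eta_\ell^2$, the recursion $a_{\ell+1}\leq \q{estred}\,a_\ell+\alpha_\ell$ together with $0<\q{estred}<1$ and $\sup_{\ell\in{\mathbb N}}\alpha_\ell<\infty$ first shows that the sequence $(a_\ell)_{\ell\in{\mathbb N}}$ is bounded. Taking the limit superior on both sides of the recursion, we infer
\begin{align*}
a^\star:=\limsup_{\ell\to\infty}a_{\ell+1}\leq \q{estred}\limsup_{\ell\to\infty}a_\ell+\lim_{\ell\to\infty}\alpha_\ell=\q{estred}\,a^\star.
\end{align*}
Since $0<\q{estred}<1$, this yields $a^\star=0$ and hence $\lim_{\ell\to\infty}\eta_\ell=0$.
\end{remark}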
\subsection{Quasi-Galerkin orthogonality}\label{section:quasiqo}
The standard proof of the Pythagoras theorem $\enorm{u-U_{\ell+1}}^2+\enorm{U_{\ell+1}-U_\ell}^2=\enorm{u-U_\ell}^2$ relies on Galerkin orthogonality and symmetry of $b(\cdot,\cdot)$. The following lemmata provide a workaround for our case of a non-symmetric bilinear form $b(\cdot,\cdot)$. We stress that the quasi-orthogonality proof makes explicit use of the fact that we already have convergence $U_\ell\to u$ in $H^1_0(\Omega)$ and $u\in {\mathcal S}^p_0({\mathcal T}_\infty)$.
\begin{lemma}\label{lem:compact}
The operators $\operator{A}, \operator{K}:\, H^1_0(\Omega) \to H^{-1}(\Omega)$ are bounded. Moreover, $\operator{A}$ is symmetric and $\operator{K}$ is compact.
\end{lemma}
\begin{proof}
The symmetry of $\operator{A}$ is obvious, and both operators $\operator{A}$ and $\operator{K}$ are also bounded, i.e.
\begin{align*}
\normHme{\operator{A}v}{\Omega}&\leq \norm{\matrix{A}}{L^\infty(\Omega)} \normLtwo{\nabla v}{\Omega},\\
\normHme{\operator{K}v}{\Omega}&\leq \normLtwo{\operator{K}v}{\Omega}\lesssim \big(\norm{\vector{b}}{L^\infty(\Omega)}+ \norm{c}{L^\infty(\Omega)}\big) \normLtwo{\nabla v}{\Omega},
\end{align*}
for all $v\in H^1_0(\Omega)$, where the hidden constant in the second estimate involves the Poincar\'e constant of $\Omega$.
It remains to prove that $\operator{K}$ is compact.
The Rellich compactness theorem shows that the embedding $\iota:\,H^1_0(\Omega)\hookrightarrow L^2(\Omega)$ is a compact operator. Therefore, according to Schauder's theorem, see e.g.~\cite[Theorem~4.19]{rudin}, the adjoint operator $\iota^\star:\,L^2(\Omega)\to H^{-1}(\Omega)$ is also compact. Obviously, $\iota^\star:\,L^2(\Omega)\to H^{-1}(\Omega)$ coincides with the natural embedding, and we may write
\begin{align*}
\operator{K} = \iota^\star \circ \operator{K} : H^1_0(\Omega)\to L^2(\Omega)\to H^{-1}(\Omega).
\end{align*}
Therefore, $\operator{K}$ is the composition of a bounded operator and a compact operator and hence compact. This concludes the proof.
\end{proof}
\begin{lemma}\label{lem:weakconv}
The sequences $(e_\ell)_{\ell\in{\mathbb N}}$ and $(E_\ell)_{\ell\in{\mathbb N}}$ defined by
\begin{align*}
e_\ell:=\begin{cases}\frac{u-U_\ell}{\normLtwo{\nabla(u-U_\ell)}{\Omega}},& \text{ for }u\neq U_\ell,\\
0, &\text{ else,}\end{cases}\quad\text{and}\quad
E_\ell:=\begin{cases}\frac{U_{\ell+1}-U_\ell}{\normLtwo{\nabla(u-U_\ell)}{\Omega}},&\text{ for }U_{\ell+1}\neq U_\ell,\\
0, &\text{ else,}\end{cases}
\end{align*}
converge to zero, weakly in $ H^1_0(\Omega)$.
\end{lemma}
\begin{proof}
We prove weak convergence of $e_\ell$ to zero. The weak convergence of $E_\ell$ follows with the same arguments.
Let $(e_{\ell_j})$ be a subsequence of $(e_\ell)$. Due to boundedness $\normLtwo{\nabla e_{\ell_j}}{\Omega}\leq 1$ for all $j\in{\mathbb N}$, we may extract a weakly convergent subsequence $(e_{\ell_{j_k}})$ of $(e_{\ell_j})$ with
\begin{align*}
e_{\ell_{j_k}} \rightharpoonup w \in H^1_0(\Omega).
\end{align*}
First, note that $u,U_\ell\in{\mathcal S}_0^p({\mathcal T}_\infty)$ implies $e_\ell\in {\mathcal S}_0^p({\mathcal T}_\infty)$ and hence $w\in{\mathcal S}_0^p({\mathcal T}_\infty)$. Second, for all $\ell_{j_k}\geq \ell$ with $e_{\ell_{j_k}}\neq 0$ and all $V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)$, it holds
\begin{align*}
b(e_{\ell_{j_k}},V_\ell) = \normLtwo{\nabla(u-U_{\ell_{j_k}})}{\Omega}^{-1} b(u-U_{\ell_{j_k}}, V_\ell) = 0.
\end{align*}
For any $\ell\in{\mathbb N}$, $V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)$, and $\varepsilon>0$, there exists $k_0\in{\mathbb N}$ such that for all $k\geq k_0$, it holds
\begin{align*}
|b(w,V_\ell) |=| \dual{w}{\operator{L}^\star V_\ell} | \leq \varepsilon + |\dual{e_{\ell_{j_k}}}{\operator{L}^\star V_\ell} |= \varepsilon + |b(e_{\ell_{j_k}},V_\ell)| = \varepsilon,
\end{align*}
since $k_0$ is chosen large enough such that ${\ell_{j_k}}\geq \ell$. Therefore
\begin{align*}
b(w,V_\ell)=0 \quad\text{for all }V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell) \text{ and } \ell\in{\mathbb N}.
\end{align*}
Due to definiteness of $b(\cdot,\cdot)$ and $w\in{\mathcal S}^p_0({\mathcal T}_\infty):=\overline{\bigcup_{\ell\in{\mathbb N}}{\mathcal S}^p_0({\mathcal T}_\ell)}$, this implies $w=0$. Altogether, we have now shown that each subsequence of $e_\ell$ has a subsequence which converges weakly to zero. This immediately implies weak convergence $e_\ell \rightharpoonup 0$ as $\ell\to\infty$.
\end{proof}
The previous lemma shows that although $(E_\ell)_{\ell\in{\mathbb N}}$ is not an orthonormal sequence, it shares the property of weak convergence to zero with orthonormal systems. Note that our proof already used the convergence $U_\ell\to u$ as $\ell\to\infty$ in the sense that we required $u-U_\ell\in {\mathcal S}^p_0({\mathcal T}_\infty)$. This suffices to prove the following quasi-Pythagoras theorem.
\begin{proposition}\label{prop:quasiqo}
For any $0<\varepsilon<1$, there exists $\ell_0\in{\mathbb N}$ such that
\begin{align}\label{eq:quasiqo}
\enorm{U_{\ell+1}-U_\ell}^2\leq \frac{1}{1-\varepsilon}\,\enorm{u-U_\ell}^2-\enorm{u-U_{\ell+1}}^2
\end{align}
for all $\ell\geq \ell_0$.
\end{proposition}
\begin{proof}
Lemma~\ref{lem:weakconv} shows that $e_\ell,E_\ell\rightharpoonup 0$ as $\ell\to \infty$. Due to Lemma~\ref{lem:compact}, $\operator{K}$ is compact. Therefore, we have strong convergence $\operator{K}e_\ell, \operator{K} E_\ell \to 0$ in $H^{-1}(\Omega)$ as $\ell\to\infty$. This shows
\begin{align*}
\dual{\operator{K}(u-U_{\ell+1})}{U_{\ell+1}-U_\ell} &= \dual{\operator{K}e_{\ell+1}}{U_{\ell+1}-U_\ell} \normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\\
&\leq \normHme{\operator{K}e_{\ell+1}}{\Omega}
\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}
\end{align*}
as well as
\begin{align*}
\dual{\operator{K}(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}} &= \dual{\operator{K}E_\ell}{u-U_{\ell+1}} \normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\\
&\leq \normHme{\operator{K}E_\ell}{\Omega}
\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}.
\end{align*}
For any $\delta>0$, this may be employed to obtain some $\ell_0\in{\mathbb N}$ such that for all $\ell\geq \ell_0$, it holds
\begin{align*}
|\dual{\operator{K}(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}}|
&+ |\dual{\operator{K}(u-U_{\ell+1})}{U_{\ell+1}-U_\ell}|\\
&\leq \delta \normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}.
\end{align*}
Together with Galerkin orthogonality
\begin{align}\label{eq:galorth}
0=b(u-U_{\ell+1},V_{\ell+1})=\dual{\operator{L}(u-U_{\ell+1})}{V_{\ell+1}}\quad\text{for all }V_{\ell+1}\in{\mathcal S}_0^p({\mathcal T}_{\ell+1}),
\end{align}
we estimate
\begin{align}\label{eq:key}
\begin{split}
|\dual{\operator{L}(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}}|&= |\dual{\operator{A}(u-U_{\ell+1})}{U_{\ell+1}-U_\ell} + \dual{\operator{K}(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}}|\\
&\leq |\dual{\operator{L}(u-U_{\ell+1})}{U_{\ell+1}-U_\ell}| + |\dual{\operator{K}(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}}|\\
&\qquad\qquad + |\dual{\operator{K}(u-U_{\ell+1})}{U_{\ell+1}-U_\ell}|\\
&\leq \delta\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}.
\end{split}
\end{align}
The definition of $\enorm{\cdot}$ and Galerkin orthogonality~\eqref{eq:galorth} yield
\begin{align*}
\enorm{u-U_{\ell+1}}^2 + \enorm{U_{\ell+1}-U_\ell}^2 + \dual{\operator{L}(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}}=\enorm{u-U_\ell}^2,
\end{align*}
whence
\begin{align*}
\enorm{U_{\ell+1}-U_\ell}^2\leq \enorm{u-U_\ell}^2-\enorm{u-U_{\ell+1}}^2 + \delta\c{norm}^2\enorm{u-U_{\ell+1}}\enorm{U_{\ell+1}-U_\ell}.
\end{align*}
The application of Young's inequality $2ab\leq a^2 +b^2 $ and the choice $\varepsilon=\delta\c{norm}^2/2$ conclude the proof.
\end{proof}
\section{Contraction}\label{section:contraction}
The quasi-Pythagoras theorem~\eqref{eq:quasiqo} from Proposition~\ref{prop:quasiqo} allows us to prove $R$-linear convergence of the error estimator $\eta_\ell$.
Compared with the analysis of the symmetric case~\cite{ckns}, this is a weaker result. However, $R$-linear convergence is still sufficient to prove quasi-optimal convergence rates in Section~\ref{section:optimality}.
\begin{theorem}\label{thm:rconv}
There exist constants $0<\setq{rconv}<1$ and $\setc{rconv}>0$ such that for all $\ell,k\in{\mathbb N}$, there holds
\begin{align}\label{eq:rconv}
\eta_{\ell+k}^2 \leq \c{rconv}\q{rconv}^k\, \eta_\ell^2.
\end{align}
The constants $\q{rconv}$ and $\c{rconv}$ depend only on $\q{estred}$, $\c{estred}$, $\c{norm}$, and $\c{reliable}$.
\end{theorem}
\begin{proof}
We employ the estimator reduction~\eqref{eq:estred} and reliability~\eqref{eq:reliable} to obtain for $N\geq \ell+1$ and $\alpha<1-\q{estred}$
\begin{align*}
\sum_{k=\ell+1}^N \eta_k^2 &\leq \sum_{k=\ell+1}^N \big(\q{estred}\eta_{k-1}^2 +\c{estred}\normLtwo{\nabla(U_k-U_{k-1})}{\Omega}^2\big)\\
&\leq \sum_{k=\ell+1}^N \Big((\q{estred}+\alpha)\eta_{k-1}^2 +\c{estred}\big(\normLtwo{\nabla(U_k-U_{k-1})}{\Omega}^2-\alpha\c{reliable}^{-2}\c{estred}^{-1}\normLtwo{\nabla(u-U_{k-1})}{\Omega}^2\big)\Big).
\end{align*}
Rearranging the terms in the above estimate, we end up with
\begin{align*}
(1-\q{estred}-\alpha) \sum_{k=\ell+1}^N \eta_k^2 &
\leq (1+\q{estred}+\alpha)\eta_{\ell}^2 + \c{estred}\c{norm}^2\sum_{k=\ell+1}^N \big(\enorm{U_k-U_{k-1}}^2- \delta\enorm{u-U_{k-1}}^2\big),
\end{align*}
where $\delta= \alpha\c{reliable}^{-2}\c{estred}^{-1}\c{norm}^{-4}$.
Next, we aim at proving that the sum on the right-hand side is bounded above by $\eta_\ell^2$ for all $N\in{\mathbb N}$.
To that end, we employ Proposition~\ref{prop:quasiqo} with $\varepsilon>0$ such that $1/(1-\varepsilon) \leq 1+\delta$.
This gives a number $\ell_0\in {\mathbb N}$ such that for all $N>\ell\geq \ell_0$, we may estimate
\begin{align}\label{eq:rconvhelp2}
\sum_{k=\ell+1}^N \big(\enorm{U_{k}-U_{k-1}}^2 -\delta\enorm{u-U_{k-1}}^2\big)
&\leq\sum_{k=\ell+1}^N \big((\frac{1}{1-\varepsilon}-\delta)\enorm{u-U_{k-1}}^2-\enorm{u-U_{k}}^2\big)\nonumber\\
&\leq\sum_{k=\ell+1}^N \big(\enorm{u-U_{k-1}}^2-\enorm{u-U_{k}}^2\big)\\
&\leq \enorm{u-U_\ell}^2\leq \c{norm}^2\c{reliable}^2\eta_\ell^2.\nonumber
\end{align}
For all $\ell<\ell_0$, we first observe that $\enorm{u-U_\ell}=0$ implies $\enorm{U_k-U_{k-1}}=0$ for all $k\geq \ell+1$, since $U_k=u=U_{k-1}$. Therefore, we obtain with the convention $\infty\cdot 0=0$
\begin{align*}
C_{\rm sup}:=\sup_{\ell\in\{1,\ldots,\ell_0\}} \Big(\enorm{u-U_\ell}^{-2}\sum_{k=\ell+1}^{\ell_0}\enorm{U_{k}-U_{k-1}}^2\Big)<\infty.
\end{align*}
In combination with~\eqref{eq:rconvhelp2}, we thus see
\begin{align*}
\sum_{k=\ell+1}^N& \big(\enorm{U_k-U_{k-1}}^2 -\delta\enorm{u-U_{k-1}}^2\big)\leq (1+C_{\rm sup})\c{norm}^2\c{reliable}^2\eta_\ell^2\quad\text{for all }\ell\in{\mathbb N},\,N> \ell.
\end{align*}
Plugging everything together, we have so far shown
\begin{align}\label{eq:rconvfinal}
\sum_{k=\ell+1}^\infty \eta_k^2 \leq\c{help}\eta_\ell^2\quad\text{for all }\ell\in{\mathbb N},
\end{align}
for some constant $\setc{help}>0$ which depends only on $\q{estred}$, $\c{estred}$, $\c{norm}$, and $\c{reliable}$.
Therefore, we get
\begin{align*}
(1+\c{help}^{-1}) \sum_{k=\ell+1}^\infty \eta_k^2 \leq \sum_{k=\ell+1}^\infty \eta_k^2 +\eta_\ell^2 = \sum_{k=\ell}^\infty \eta_k^2,
\end{align*}
and hence by induction
\begin{align*}
\eta_{\ell+j}^2\leq \sum_{k=\ell+j}^\infty \eta_k^2\leq (1+\c{help}^{-1})^{-j}\sum_{k=\ell}^\infty \eta_k^2\leq (1+\c{help})(1+\c{help}^{-1})^{-j}\eta_\ell^2
\quad\text{for all }\ell,j\in{\mathbb N}.
\end{align*}
This concludes the proof with $\q{rconv}=1/(1+\c{help}^{-1})$ and $\c{rconv}=(1+\c{help})$.
\end{proof}
\begin{remark}
Note that the $R$-linear convergence of Theorem~\ref{thm:rconv} holds for arbitrary adaptivity parameters $0<\theta<1$. Moreover, the result is independent of NVB in the sense that the proof only requires $|T^\prime|\leq q|T|$ for some fixed $0<q<1$ and all sons $T^\prime\subset T$ of refined elements $T\in{\mathcal T}_{\ell}\setminus{\mathcal T}_{\ell+1}$. This property holds for any feasible mesh-refinement strategy and, in particular, for NVB with $q=2^{-1/d}$. Finally, the minimal cardinality of the set ${\mathcal M}_\ell$ of marked elements has not been used yet. Instead, Theorem~\ref{thm:rconv} holds as long as the set ${\mathcal M}_\ell\subseteq {\mathcal T}_\ell$ satisfies the D\"orfler marking~\eqref{eq:doerfler} and, in particular, for ${\mathcal M}_\ell={\mathcal T}_\ell$.
\end{remark}
\begin{remark}
Note that the proof of Theorem~\ref{thm:rconv} uses neither linearity nor uniform ellipticity of $\operator{L}$. Instead, we only require reliability~\eqref{eq:reliable}, estimator reduction~\eqref{eq:estred}, quasi-Galerkin orthogonality~\eqref{eq:quasiqo} as well as equivalence~\eqref{eq:normequiv} of the norm $\normLtwo{\nabla(\,\cdot\,)}{\Omega}$ and the energy quasi-norm $\enorm{\cdot}$ on $H^1_0(\Omega)$. With these ingredients, our analysis thus also covers certain nonlinear problems as discussed in Section~\ref{section:nonlin}.
\end{remark}
\section{Optimal Convergence Rates}\label{section:optimality}
With Theorem~\ref{thm:rconv} at hand, we are in the position to prove quasi-optimal convergence rates for the sequence of Galerkin solutions obtained from Algorithm~\ref{algorithm}. First, however, we have to clarify which convergence rate is the best possible one to aim at. To that end, we follow e.g.~\cite{ckns} and define the approximation class $\mathbb A_s$ by
\begin{subequations}\label{eq:approxclasstotalerror}
\begin{align}\label{eq:approxclasstotalerrora}
(u,f)\in\mathbb A_s\quad\overset{\rm def}{\Longleftrightarrow}\quad \norm{(u,f)}{\mathbb A_s}:=\sup_{N\in{\mathbb N}}N^s\sigma(N;u,f)<\infty
\end{align}
for all $s>0$, where
\begin{align}\label{eq:approxclasstotalerrorb}
\sigma(N;u,f):=\inf_{{\mathcal T}_\star\in\mathbb T_N}\inf_{V_\star\in{\mathcal S}^p_0({\mathcal T}_\star)}\big(\normLtwo{\nabla(u-V_\star)}{\Omega}^2+{\rm osc}_\star(V_\star)^2\big)^{1/2}
\end{align}
\end{subequations}
and ${\rm osc}_\star$ is the oscillation term from~\eqref{eq:efficient} corresponding to the mesh ${\mathcal T}_\star$. We refer to~\cite{bddp,gm} for a characterization of approximation classes in terms of Besov regularity. However, in this work, we follow~\cite{dirichlet3d} and use an equivalent definition of $\mathbb A_s$, which involves the error estimator $\eta_\ell$ only. This equivalence is part of the next lemma which is also implicitly contained in~\cite[Lemma~5.2]{ckns}.
\begin{lemma}\label{lem:totalerror}
There exists a constant $\setc{totalerror}>0$ such that for all ${\mathcal T}_\star\in\mathbb T$ there holds
\begin{align}\label{eq:totalerror}
\c{totalerror}^{-1}\eta_\star^2\leq\inf_{V_\star\in{\mathcal S}^p_0({\mathcal T}_\star)}\big(\normLtwo{\nabla(u-V_\star)}{\Omega}^2+{\rm osc}_\star(V_\star)^2\big)\leq\c{totalerror} \eta_\star^2.
\end{align}
Hence, $\mathbb A_s$ from~\eqref{eq:approxclasstotalerror} can equivalently be characterized as
\begin{align}\label{eq:approxclass}
(u,f)\in\mathbb A_s\quad\Longleftrightarrow\quad \sup_{N\in{\mathbb N}}\inf_{{\mathcal T}_\star\in\mathbb T_N}\,N^s\eta_\star<\infty
\end{align}
for all $s>0$. The constant $\c{totalerror}$ depends only on $\c{continuous},\c{elliptic}$, the $\gamma$-shape regularity of ${\mathcal T}_\star$ and the polynomial degree $p\in{\mathbb N}$.
\end{lemma}
\begin{proof}
First, we prove~\eqref{eq:totalerror}. To that end, we observe $ \normLtwo{\nabla(u-U_\star)}{\Omega}^2+{\rm osc}_\star(U_\star)^2\simeq \eta_\star^2$, which follows from reliability~\eqref{eq:reliable}, efficiency~\eqref{eq:efficient} as well as ${\rm osc}_\star(U_\star)\leq \eta_\star$. Moreover, the lower bound
\begin{align}\label{eq:totalerrorhelp}
\inf_{V_\star\in{\mathcal S}^p_0({\mathcal T}_\star)}\big(\normLtwo{\nabla(u-V_\star)}{\Omega}^2+{\rm osc}_\star(V_\star)^2\big)\leq \normLtwo{\nabla(u-U_\star)}{\Omega}^2+{\rm osc}_\star(U_\star)^2
\end{align}
holds since $U_\star\in{\mathcal S}^p_0({\mathcal T}_\star)$. To prove the converse estimate in~\eqref{eq:totalerrorhelp}, we argue as in Lemma~\ref{lem:estred} and use a standard inverse estimate as well as the Poincar\'e inequality, to see
\begin{align*}
{\rm osc}_\star(U_\star)^2&=\sum_{T\in{\mathcal T}_\star}|T|^{2/d}\normLtwo{(1-\Pi_\star^{p-1})(\operator{L}|_TU_\star-f)}{T}^2\\
&\lesssim\sum_{T\in{\mathcal T}_\star}|T|^{2/d}\normLtwo{(1-\Pi_\star^{p-1})\operator{L}|_T(U_\star-V_\star)}{T}^2 + {\rm osc}_\star(V_\star)^2\\
&\lesssim\big(\norm{\matrix{A}}{W_1^\infty(\Omega)}^2+\norm{\vector{b}}{L^\infty(\Omega)}^2+\norm{c}{L^\infty(\Omega)}^2\big)\sum_{T\in{\mathcal T}_\star}|T|^{2/d}\norm{U_\star-V_\star}{H^2(T)}^2 + {\rm osc}_\star(V_\star)^2\\
&\lesssim \normLtwo{\nabla(U_\star-V_\star)}{\Omega}^2 + {\rm osc}_\star(V_\star)^2.
\end{align*}
Finally, by use of the C\'ea lemma, we end up with
\begin{align*}
{\rm osc}_\star(U_\star)^2&\lesssim \normLtwo{\nabla(u-U_\star)}{\Omega}^2 +\normLtwo{\nabla(u-V_\star)}{\Omega}^2 + {\rm osc}_\star(V_\star)^2\\
&\lesssim \normLtwo{\nabla(u-V_\star)}{\Omega}^2 + {\rm osc}_\star(V_\star)^2.
\end{align*}
The combination of the last three estimates proves~\eqref{eq:totalerror}. The characterization~\eqref{eq:approxclass} follows with~\eqref{eq:totalerror} and the definition of $\sigma(N;u,f)$ in~\eqref{eq:approxclasstotalerrorb}.
\end{proof}
In our opinion, this characterization allows for a clearer presentation of the proof of the following quasi-optimality theorem and, in particular, we shall see that unlike the analysis of~\cite{ckns,cn,ks,stevenson07}, the upper bound for optimal adaptivity parameters $0<\theta<1$ does not depend on the efficiency constant $\c{efficient}$. The following result is the main theorem of this section.
\begin{theorem}\label{thm:optimal}
Define $\theta_\star:= (1+\c{inv}\c{drel})^{-1}$ with the constants $\c{drel}>0$ from Lemma~\ref{lem:drel} and $\c{inv}>0$ from the proof of Lemma~\ref{lem:estred}.
Then, for all adaptivity parameters $0<\theta<\theta_\star$ and all $s>0$, there exists a constant $\setc{optimal}>0$ such that
\begin{align}\label{eq:optimality}
(u,f)\in\mathbb A_s\quad\Longleftrightarrow\quad \eta_\ell \leq \c{optimal}\norm{(u,f)}{\mathbb A_s}(\#{\mathcal T}_\ell-\#{\mathcal T}_0)^{-s}\quad\text{for all }\ell\in{\mathbb N}.
\end{align}
The constant $\c{optimal}$ depends only on $\theta$, $s$, $\q{rconv}$, $\c{rconv}$, $\c{efficient}$, and $\c{mesh}$, and the proof relies on the properties~\eqref{refinement:shaperegular}--\eqref{refinement:overlay} of NVB.
\end{theorem}
For the proof of the quasi-optimality theorem, we need a refined reliability property of the error estimator $\eta_\ell$.
\begin{lemma}[discrete reliability]\label{lem:drel}
There exists a constant $\setc{drel}>0$ such that for all refinements ${\mathcal T}_\star\in\mathbb T$ of a triangulation ${\mathcal T}_\ell\in\mathbb T$, it holds
\begin{align}\label{eq:drel}
\normLtwo{\nabla(U_\star-U_\ell)}{\Omega}^2\leq \c{drel}\sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_\star}\eta_\ell(T)^2.
\end{align}
The constant $\c{drel}$ depends only on the $\gamma$-shape regularity of ${\mathcal T}_0$, the polynomial degree $p\in{\mathbb N}$, and on $\Omega$.
\end{lemma}
\begin{proof}
The statement is proven for $\vector{b}=0$ and $c\geq 0$ in~\cite[Lemma~3.6]{ckns}. The proof for the present case follows verbatim.
\end{proof}
So far, we have observed that D\"orfler marking~\eqref{eq:doerfler} implies contraction of $\eta_\ell$ (Theorem~\ref{thm:rconv}). Now, we prove, in some sense, the converse. We follow the concept of proof of~\cite{dirichlet3d} and stress that, unlike e.g.~\cite{ckns,cn,ks,stevenson07}, our proof does not use efficiency~\eqref{eq:efficient} of $\eta_\ell$.
\begin{lemma}[Optimality of D\"orfler marking]\label{lem:doerfler}
Let $0<\theta < \theta_\star:=(1+\c{inv}\c{drel})^{-1}$. Then, there exists $0<\setq{doerfler}<1$ such that for all refinements ${\mathcal T}_\star\in\mathbb T$ of a triangulation ${\mathcal T}_\ell\in\mathbb T$ the following implication holds:
\begin{align}
\eta_\star^2 \leq \q{doerfler}\eta_\ell^2 \quad\implies\quad \theta\eta_\ell^2 \leq \sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_\star} \eta_\ell(T)^2.
\end{align}
\end{lemma}
\begin{proof}
Analogously to~\eqref{eq:stable}, we estimate for $\delta>0$
\begin{align}\label{eq:mon}
\begin{split}
\eta_\ell^2 &= \sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_\star}\eta_\ell(T)^2+\sum_{T\in{\mathcal T}_\ell\cap{\mathcal T}_\star} \eta_\ell(T)^2\\
&\leq \sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_\star}\eta_\ell(T)^2+(1+\delta^{-1})\sum_{T\in{\mathcal T}_\ell\cap{\mathcal T}_\star} \eta_\star(T)^2 + (1+\delta)\c{inv}\normLtwo{\nabla(U_\star-U_\ell)}{\Omega}^2\\
&\leq \sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_\star}\eta_\ell(T)^2+(1+\delta^{-1})\q{doerfler}\eta_\ell^2+(1+\delta)\c{inv}\normLtwo{\nabla(U_\star-U_\ell)}{\Omega}^2.
\end{split}
\end{align}
Rearranging the terms and employing the discrete reliability~\eqref{eq:drel}, we end up with
\begin{align*}
\frac{1-(1+\delta^{-1})\q{doerfler}}{1+(1+\delta)\c{inv}\c{drel}}\,\eta_\ell^2\leq\sum_{T\in{\mathcal T}_\ell\setminus{\mathcal T}_\star}\eta_\ell(T)^2.
\end{align*}
Since $\theta<(1+\c{inv}\c{drel})^{-1}$, we may finally choose $\delta>0$ and $0<\q{doerfler}<1$ sufficiently small to ensure
\begin{align*}
\theta \leq \frac{1-(1+\delta^{-1})\q{doerfler}}{1+(1+\delta)\c{inv}\c{drel}}< \frac{1}{1+\c{inv}\c{drel}}.
\end{align*}
This concludes the proof.
\end{proof}
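\begin{remark}
One admissible choice of the parameters in the preceding proof reads as follows: Since $\theta(1+\c{inv}\c{drel})<1$, we may first fix $\delta>0$ such that $2\tau:=1-\theta\big(1+(1+\delta)\c{inv}\c{drel}\big)>0$ and then set $\q{doerfler}:=\tau/(1+\delta^{-1})\in(0,1)$. This guarantees
\begin{align*}
\theta\big(1+(1+\delta)\c{inv}\c{drel}\big)=1-2\tau\leq 1-(1+\delta^{-1})\q{doerfler}.
\end{align*}
\end{remark}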
Now, we are in the position to prove Theorem~\ref{thm:optimal}. We stress that the concept of proof goes back to~\cite{stevenson07} and has been adopted by~\cite{ckns} and all succeeding works. We put emphasis on the fact that, first, efficiency~\eqref{eq:efficient} of $\eta_\ell$ is not needed and that, second, $R$-linear convergence~\eqref{eq:rconv} instead of plain contraction in each step of the adaptive loop is sufficient.
\begin{proof}[Proof of Theorem~\ref{thm:optimal}]
Let $\lambda>0$ denote a free parameter which is fixed later on. Without loss of generality, we may assume $\eta_\ell>0$, since otherwise the claim is trivial.
The definition of the approximation class $\mathbb A_s$ then allows us to choose, for given $\varepsilon^2:=\lambda \eta_\ell^2>0$, a mesh ${\mathcal T}_\varepsilon\in\mathbb T$ such that
\begin{align*}
\eta_\varepsilon \leq \varepsilon\quad\text{and}\quad \#{\mathcal T}_\varepsilon-\#{\mathcal T}_0 \lesssim \norm{(u,f)}{\mathbb A_s}^{1/s} \varepsilon^{-1/s}.
\end{align*}
Now, consider the overlay ${\mathcal T}_\star:={\mathcal T}_\varepsilon\oplus{\mathcal T}_\ell$ and argue similarly to~\eqref{eq:stable} to see
\begin{align*}
\eta_\star^2 \lesssim \eta_\varepsilon^2 + \normLtwo{\nabla(U_\star-U_\varepsilon)}{\Omega}^2 \lesssim \eta_\varepsilon^2 \leq \lambda \eta_\ell^2,
\end{align*}
where we used the definition of $\varepsilon>0$. We choose $\lambda>0$ sufficiently small such that Lemma~\ref{lem:doerfler} is applicable and conclude that
${\mathcal T}_\ell\setminus{\mathcal T}_\star$ satisfies the D\"orfler marking~\eqref{eq:doerfler}. By definition of step~(iii) of Algorithm~\ref{algorithm}, the set ${\mathcal M}_\ell$ of marked elements is a set of minimal cardinality which satisfies the D\"orfler marking. Therefore, we obtain by use of~\eqref{refinement:sons} and~\eqref{refinement:overlay}
\begin{align}\label{eq:marked}
\begin{split}
\#{\mathcal M}_\ell\leq \#({\mathcal T}_\ell\setminus{\mathcal T}_\star)\leq \#{\mathcal T}_\star-\#{\mathcal T}_\ell\leq \#{\mathcal T}_\varepsilon - \#{\mathcal T}_0
&\lesssim \norm{(u,f)}{\mathbb A_s}^{1/s} \varepsilon^{-1/s}\\
&\lesssim \norm{(u,f)}{\mathbb A_s}^{1/s}\eta_\ell^{-1/s}
\end{split}
\end{align}
for all $\ell\in{\mathbb N}$.
Finally, the closure estimate~\eqref{refinement:closure} and the contraction~\eqref{eq:rconv} of Proposition~\ref{thm:rconv} yield
\begin{align*}
\#{\mathcal T}_\ell-\#{\mathcal T}_0\lesssim \sum_{j=0}^{\ell-1}\#{\mathcal M}_j\lesssim \norm{(u,f)}{\mathbb A_s}^{1/s}\sum_{j=0}^{\ell-1}\eta_j^{-1/s}\lesssim \norm{(u,f)}{\mathbb A_s}^{1/s}\eta_\ell^{-1/s}\sum_{j=0}^{\ell-1}\q{rconv}^{(\ell-j)/(2s)}.
\end{align*}
Exploiting the convergence of the geometric series, we end up with
\begin{align*}
\eta_\ell\lesssim \norm{(u,f)}{\mathbb A_s} (\#{\mathcal T}_\ell-\#{\mathcal T}_0)^{-s}\quad\text{for all }\ell\in{\mathbb N}.
\end{align*}
Altogether, this proves that each theoretically possible convergence rate for the estimator is, in fact, asymptotically achieved by the adaptive algorithm. The converse implication in~\eqref{eq:optimality} is obvious. This concludes the proof.
\end{proof}
\begin{remark}
We stress that the proof of Theorem~\ref{thm:optimal} depends only on properties~\eqref{refinement:shaperegular}--\eqref{refinement:overlay} of NVB, $R$-linear convergence~\eqref{eq:rconv} of the estimator used, and the discrete reliability~\eqref{eq:drel}. In particular, there is no explicit use of the properties of the differential operator $\operator{L}$, i.e.\ neither linearity nor uniform ellipticity is required.
\end{remark}
\section{Extensions}\label{section:extensions}
In this section, we want to discuss some possible extensions of our analysis.
\subsection{Minimal cardinality of marked elements}
The choice of the set of marked elements ${\mathcal M}_\ell$ in step~(iii) of Algorithm~\ref{algorithm} as a set of minimal cardinality which satisfies the D\"orfler marking~\eqref{eq:doerfler} requires sorting the set $\set{\eta_\ell(T)}{T\in{\mathcal T}_\ell}$, which takes at least $\mathcal{O}\big(\#{\mathcal T}_\ell\log(\#{\mathcal T}_\ell)\big)$ operations. In comparison to the $\mathcal{O}(\#{\mathcal T}_\ell)$ operations of iterative solvers on sparse matrices, marking thus becomes the bottleneck of Algorithm~\ref{algorithm}. To overcome this problem, we may allow the set ${\mathcal M}_\ell$ to be of \emph{almost} minimal cardinality in the sense of
\begin{align}\label{eq:almost}
\#{\mathcal M}_\ell \leq \c{almost} \#\widetilde{\mathcal M}_\ell\quad\text{for all }\ell\in{\mathbb N},
\end{align}
where $\widetilde {\mathcal M}_\ell$ is a set of minimal cardinality which satisfies the D\"orfler marking and $\setc{almost}>0$ is an arbitrary but fixed constant. All proofs then remain valid up to the additional factor $\c{almost}$, which enters in~\eqref{eq:marked}. The relaxation~\eqref{eq:almost} allows us to apply an inexact sorting algorithm based on binning of the data (see e.g.~\cite{ms}), which performs in $\mathcal{O}(\#{\mathcal T}_\ell)$ operations.
\subsection{Other mesh-refinement strategies}
Instead of simple \emph{newest-vertex} bisection, one can consider other mesh-refinement strategies which satisfy~\eqref{refinement:sons}--\eqref{refinement:overlay}, since no other property of the mesh refinement strategy is used throughout this paper. In particular, one could use up to $m$ newest vertex bisections per marked element, where $m\in{\mathbb N}$ is a fixed number, cf.\ e.g.~\cite{ks}.
This includes the strategy proposed in~\cite{cn} which uses additional bisections every $n$-th step to ensure the interior node property and hence to obtain a discrete lower bound on the error. Moreover, one can relax the regularity of the triangulations used and allow a fixed number of hanging nodes in each triangle $T\in{\mathcal T}_\ell$~\cite{bn}.
\subsection{Inhomogeneous Dirichlet data}
Let ${\mathcal S}^p({\mathcal T}_\ell):={\mathcal P}^p({\mathcal T}_\ell)\cap H^1(\Omega)$ with discrete trace space ${\mathcal S}^p({\mathcal T}_\ell|_\Gamma):=\set{V_\ell|_\Gamma}{V_\ell\in{\mathcal S}^p({\mathcal T}_\ell)}$.
We consider inhomogeneous Dirichlet data $g\in H^{1/2}(\Gamma)$ and an $H^{1/2}$-stable projection $P_\ell:\, H^{1/2}(\Gamma)\to {\mathcal S}^p({\mathcal T}_\ell|_\Gamma)$, for instance the Scott-Zhang projection~\cite{sz} for $p\geq 1$ or the $L^2$-projection for $p=1$ (see~\cite{kpp} for $H^1$-stability on NVB refined meshes).
The continuous problem we want to solve now reads: Find $u\in H^1(\Omega)$ with $u|_{\partial \Omega}=g$ such that
\begin{align}\label{eq:continuousinhom}
\dual{\operator{L}u}{v}=b(u,v)=\int_\Omega fv\,dx \quad\text{for all }v\in H^1_0(\Omega).
\end{align}
The corresponding discrete formulation reads: Find $U_\ell \in{\mathcal S}^p({\mathcal T}_\ell)$ with $U_\ell|_\Gamma=P_\ell g$ such that
\begin{align}\label{eq:discreteinhom}
b(U_\ell,V_\ell)=\int_\Omega f V_\ell\,dx \quad\text{for all }V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell).
\end{align}
Well-posedness of~\eqref{eq:continuousinhom}--\eqref{eq:discreteinhom} is well-known and discussed, e.g., in~\cite{dirichlet3d,bcd,sv}.
The approximation error introduced by $g\approx P_\ell g$ gives rise to an additional error quantity. We assume the additional regularity $g\in H^1(\Gamma)$ and define the Dirichlet data oscillations
\begin{align*}
{\rm osc}_{g,\ell}^2:=\sum_{E\in{\mathcal T}_\ell|_\Gamma} \text{diam}(E)\normLtwo{\nabla_{\Gamma}(1-P_\ell)g}{E}^2,
\end{align*}
where $\nabla_{\Gamma}(\,\cdot\,)$ denotes the surface gradient on $\Gamma=\partial\Omega$.
Since the ansatz spaces are no longer nested, i.e. $U_{\ell+1}-U_\ell\notin {\mathcal S}^p_0({\mathcal T}_\ell)$, we have to rely on a modified marking strategy proposed in~\cite{stevenson07}. We replace the D\"orfler marking~\eqref{eq:doerfler}
by the following separate marking strategy with adaptivity parameters $0<\theta,\vartheta<1$:
\begin{itemize}
\item If ${\rm osc}_{g,\ell}^2\leq\vartheta \eta_\ell^2$, determine ${\mathcal M}_\ell\subseteq {\mathcal T}_\ell$ as a set of minimal cardinality which satisfies~\eqref{eq:doerfler}.
\item If ${\rm osc}_{g,\ell}^2>\vartheta \eta_\ell^2$, determine ${\mathcal M}_\ell\subseteq {\mathcal T}_\ell$ as a set of minimal cardinality which satisfies
\begin{align}\label{eq:sdoerfler2}
\theta {\rm osc}_{g,\ell}^2\leq \sum_{T\in{\mathcal M}_\ell} {\rm osc}_{g,\ell}(T)^2.
\end{align}
\end{itemize}
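A minimal sketch of one step of this separate marking strategy reads as follows; this is our own illustration, using exact sorting for both cases:

```python
def doerfler_minimal(ind_sq, theta):
    """Minimal-cardinality Doerfler marking by exact sorting (O(N log N)).

    ind_sq: squared local indicators; theta: marking parameter in (0, 1).
    """
    order = sorted(range(len(ind_sq)), key=lambda i: ind_sq[i], reverse=True)
    marked, acc, goal = [], 0.0, theta * sum(ind_sq)
    for i in order:
        if acc >= goal:
            break
        marked.append(i)
        acc += ind_sq[i]
    return marked

def separate_marking(eta_sq, osc_sq, theta, vartheta):
    """One step of the separate marking strategy: mark for the estimator
    while the Dirichlet data oscillations are dominated, otherwise mark
    for the oscillations alone."""
    if sum(osc_sq) <= vartheta * sum(eta_sq):
        return doerfler_minimal(eta_sq, theta)   # case osc^2 <= vartheta eta^2
    return doerfler_minimal(osc_sq, theta)       # case osc^2 >  vartheta eta^2
```

The point of the case distinction is that only one quantity is marked for at a time, which is what makes the optimality analysis of the non-nested setting work.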
Now, the analysis of~\cite{dirichlet3d} can easily be transferred to the present problem, where $\eta_\ell$ in~\eqref{eq:estred},~\eqref{eq:rconv}, and~\eqref{eq:approxclass}--\eqref{eq:optimality} is replaced by $\rho_\ell:=\eta_\ell+{\rm osc}_{g,\ell}$. For the usual choices of $P_\ell$ mentioned above, one obtains convergence of AFEM by means of the estimator reduction principle~\cite[Theorem~4]{dirichlet3d}. Moreover, for arbitrary $P_\ell$ and sufficiently small marking parameters $0<\vartheta,\theta<1$, we obtain the optimality result of Theorem~\ref{thm:optimal}, cf.~\cite[Theorem~6]{dirichlet3d}.
For $d=2$, one may even use nodal interpolation to discretize the inhomogeneous Dirichlet data. Then, the combined D\"orfler marking~\eqref{eq:doerfler} for $\rho_\ell:=\eta_\ell+{\rm osc}_{g,\ell}$ instead of $\eta_\ell$ yields the contraction result of Theorem~\ref{thm:rconv}. Moreover, for sufficiently small $0<\theta<1$, Theorem~\ref{thm:optimal} remains valid. We refer to~\cite{fpp} in case of symmetric $\operator{L}=-\Delta$ and stress that the analysis can easily be transferred to the present setting.
\subsection{Coercive but not uniformly elliptic bilinear forms}
Assume that instead of ellipticity~\eqref{eq:elliptic}, there holds a G\r{a}rding inequality
\begin{align}\label{eq:garding}
b(u,u)+\c{garding}\normLtwo{u}{\Omega}^2\geq \rr{garding}\normLtwo{\nabla u}{\Omega}^2\quad\text{for all }u \in H^1(\Omega)
\end{align}
with constants $0<\setrr{garding}<1$ and $\setc{garding}>0$.
We have to assume that $b(\cdot,\cdot)$ is definite on the continuous level as well as on the discrete limit space ${\mathcal S}^p_0({\mathcal T}_\infty)$, i.e.\ there holds
\begin{subequations}\label{eq:injective}
\begin{align}
b(v,w)&=0\quad\text{for all } w\in H^1_0(\Omega) \quad\implies\quad v=0,\\
\label{eq:injectiveinfty} b(v_\infty,w_\infty)&=0\quad\text{for all } w_\infty\in {\mathcal S}_0^p({\mathcal T}_\infty) \quad\implies\quad v_\infty=0.
\end{align}
\end{subequations}
This together with Fredholm's alternative already guarantees the unique solvability of~\eqref{eq:continuous} and~\eqref{eq:discrete} with test and ansatz space ${\mathcal S}^p_0({\mathcal T}_\infty)$ instead of ${\mathcal S}^p_0({\mathcal T}_\ell)$.
\begin{remark}
Usually, the conditions~\eqref{eq:injective} are guaranteed under the assumption that the mesh-size of the initial mesh ${\mathcal T}_0$ is sufficiently small and that the solution $w\in H^1_0(\Omega)$ of the dual problem
\begin{align*}
b(v,w)=\int_\Omega f v\,dx\quad\text{for all }v\in H^1_0(\Omega)
\end{align*}
satisfies some regularity estimate
\begin{align*}
\norm{w}{H^{1+s}(\Omega)}\lesssim \norm{f}{L^2(\Omega)}\quad\text{for some }s>0,
\end{align*}
see e.g.~\cite[Theorem~5.7.6]{bs}.
\end{remark}
Now, we may apply~\cite[Theorem~4.2.9]{ss} to obtain the following result.
\begin{lemma}
There exists an index $\ell_0\in{\mathbb N}$ such that for all $\ell\geq\ell_0$ the discrete formulation~\eqref{eq:discrete} is uniquely solvable, and it holds
\begin{align}\label{eq:ceagarding}
\normLtwo{\nabla(u_\infty -U_\ell)}{\Omega}\leq \c{cea}\min_{V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)}\normLtwo{\nabla(u_\infty-V_\ell)}{\Omega},
\end{align}
where $u_\infty\in{\mathcal S}^p_0({\mathcal T}_\infty)$ denotes the unique solution of~\eqref{eq:discrete} with ${\mathcal S}^p_0({\mathcal T}_\infty)$ instead of ${\mathcal S}^p_0({\mathcal T}_\ell)$.
\end{lemma}
\begin{proof}
Since~\eqref{eq:garding} states that $b(u,v)+ \c{garding}\dual{u}{v}_{L^2(\Omega)}$ is elliptic and $\dual{\cdot}{\cdot}_{L^2(\Omega)}$ is a compact perturbation, we apply~\cite[Theorem~4.2.9]{ss} on the Hilbert space ${\mathcal S}^p_0({\mathcal T}_\infty)$ and the dense sequence of subspaces ${\mathcal S}^p_0({\mathcal T}_\ell)$ for $\ell\to\infty$.
\end{proof}
The above lemma allows us to prove the a~priori convergence of Lemma~\ref{lem:apriori} and consequently the convergence $U_\ell\to u$ in $H^1_0(\Omega)$ as well as $u\in{\mathcal S}^p_0({\mathcal T}_\infty)$. Moreover, Lemma~\ref{lem:weakconv} still holds true, since we assumed definiteness of $b(\cdot,\cdot)$ on ${\mathcal S}^p_0({\mathcal T}_\infty)$ in~\eqref{eq:injectiveinfty}.
\begin{lemma}\label{lem:ellipticgarding}
There exists an index $\ell_1\in{\mathbb N}$ such that for all $\ell\geq\ell_1$ there holds
\begin{align*}
\normLtwo{\nabla(u-U_\ell)}{\Omega}\leq \c{norm}\enorm{u-U_\ell} \quad\text{and}\quad\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\leq \c{norm}\enorm{U_{\ell+1}-U_\ell}.
\end{align*}
\end{lemma}
\begin{proof}
With~\eqref{eq:garding} and $b(v,v)=\enorm{v}^2$, we may estimate
\begin{align*}
\normLtwo{\nabla(u-U_{\ell})}{\Omega}^2&\lesssim \enorm{u-U_\ell}^2 +\normLtwo{u-U_{\ell}}{\Omega}^2\\
&=\enorm{u-U_\ell}^2 +\normLtwo{e_\ell}{\Omega}^2\normLtwo{\nabla(u-U_\ell)}{\Omega}^2.
\end{align*}
Lemma~\ref{lem:weakconv} shows weak convergence $e_\ell\rightharpoonup 0$ in $H^1_0(\Omega)$. The Rellich compactness theorem thus implies strong convergence $e_\ell\to 0$ in $L^2(\Omega)$. Therefore, there exists an index $\ell_1\in{\mathbb N}$ such that
there holds
\begin{align*}
\normLtwo{\nabla(u-U_{\ell})}{\Omega}^2\lesssim \enorm{u-U_\ell}^2\quad\text{for all }\ell\geq\ell_1.
\end{align*}
The statement for $U_{\ell+1}-U_\ell$ follows analogously.
\end{proof}
Lemma~\ref{lem:weakconv} together with Lemma~\ref{lem:ellipticgarding} allows us to prove the quasi-Galerkin orthogonality of Proposition~\ref{prop:quasiqo} and consequently also the $R$-linear convergence of Theorem~\ref{thm:rconv}. Therefore, all the results from Section~\ref{section:optimality} hold and, in particular, we obtain the optimality result of Theorem~\ref{thm:optimal}.
\subsection{Non-linear operators $\operator{L}$}\label{section:nonlin}
We consider the following \emph{non-linear} operator
\begin{align*}
\operator{L}u(x):= -\text{div} \matrix{A}(x,\nabla u(x)) + g(x,u(x),\nabla u(x)),
\end{align*}
for functions $\matrix{A}:\,\Omega\times{\mathbb R}^d \to {\mathbb R}^d$ and $g:\, \Omega\times{\mathbb R}\times {\mathbb R}^d \to {\mathbb R}$. We assume that $\matrix{A}(\cdot,\nabla u),g(\cdot,u,\nabla u)\in L^2(\Omega)$ for all $u\in H^1_0(\Omega)$. Then, the weak formulation of~\eqref{intro:modelproblem} reads: Find $u\in H^1_0(\Omega)$ such that
\begin{align}\label{eq:continuousnonlin}
\dual{\operator{L}u}{v}=\int_\Omega \matrix{A}(x,\nabla u(x))\cdot\nabla v(x) + g(x,u(x),\nabla u(x)) v(x)\,dx = \int_\Omega fv\,dx
\end{align}
for all $v\in H^1_0(\Omega)$. We define two auxiliary operators $\operator{A},\operator{K}:\,H^1_0(\Omega)\to H^{-1}(\Omega)$ as
\begin{align*}
\operator{A}v:=-\text{div}\matrix{A}(\cdot,\nabla v)\quad\text{and}\quad\operator{K}v:=g(\cdot,v,\nabla v)\quad\text{for all }v\in H^1_0(\Omega).
\end{align*}
We formally define the residual error estimator for a mesh ${\mathcal T}_\ell$ by
\begin{align}\label{eq:nlest}
\eta_\ell^2:= \sum_{T\in{\mathcal T}_\ell}\big(|T|^{2/d}\normLtwo{\operator{L}|_TU_\ell - f}{T}^2 + |T|^{1/d} \normLtwo{[\matrix{A}(\cdot,\nabla U_\ell)\cdot n]}{\partial T\cap\Omega}^2\big).
\end{align}
The solvability and uniqueness of~\eqref{eq:continuousnonlin} as well as the regularity assumptions needed such that~\eqref{eq:nlest} is well-defined are part of the subsequent sections.
\subsubsection{Regularity assumptions}\label{section:nlreg}
We consider the framework of strongly monotone operators and require the following regularity assumptions on $\operator{L}$:
\begin{subequations}\label{eq:smon1}
\begin{align}\label{eq:smon1a}
\norm{\operator{A}w-\operator{A}v}{H^{-1}(\Omega)}&\leq \c{nllip}\normLtwo{\nabla(w-v)}{\Omega},\\
\normLtwo{\operator{K}w-\operator{K}v}{\Omega}&\leq \c{nllip}\normLtwo{\nabla(w-v)}{\Omega}\label{eq:smon1b}
\end{align}
\end{subequations}
for all $w,v\in H^1_0(\Omega)$ and some constant $\setc{nllip}>0$ as well as
\begin{align}\label{eq:smon2}
\dual{\operator{L}w-\operator{L}v}{w-v}\geq \c{nlelliptic} \normLtwo{\nabla(w-v)}{\Omega}^2
\end{align}
for all $w,v\in H^1_0(\Omega)$ and some constant $\setc{nlelliptic}>0$.
These assumptions, in particular, allow us to apply the main theorem on strongly monotone operators~\cite[Theorem~26.A]{zeidler} and to obtain the unique solvability of~\eqref{eq:continuousnonlin} as well as of~\eqref{eq:discrete}. Additionally,~\eqref{eq:smon1}--\eqref{eq:smon2} guarantee that the norms of the residual and of the error are equivalent, i.e.
\begin{align}\label{eq:resequiv}
\normHme{\operator{L}u-\operator{L}U_\ell}{\Omega}\simeq \normLtwo{\nabla(u-U_\ell)}{\Omega}\quad\text{for all }\ell\in{\mathbb N}.
\end{align}
We also obtain the C\'ea lemma~\eqref{eq:cea} with the constant $2\c{nllip}/\c{nlelliptic}$.
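For completeness, we sketch where the constant $2\c{nllip}/\c{nlelliptic}$ comes from; this is the standard computation for strongly monotone operators, where we assume that the Friedrichs constant arising from the $\operator{K}$-part is absorbed into $\c{nllip}$:

```latex
\c{nlelliptic}\normLtwo{\nabla(u-U_\ell)}{\Omega}^2
  \leq \dual{\operator{L}u-\operator{L}U_\ell}{u-U_\ell}
  =    \dual{\operator{L}u-\operator{L}U_\ell}{u-V_\ell}
  \leq 2\c{nllip}\,\normLtwo{\nabla(u-U_\ell)}{\Omega}\,\normLtwo{\nabla(u-V_\ell)}{\Omega}.
```

Here, the first estimate is strong monotonicity~\eqref{eq:smon2}, the equality is the Galerkin orthogonality $\dual{\operator{L}u-\operator{L}U_\ell}{V_\ell-U_\ell}=0$ for arbitrary $V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)$, and the last estimate splits $\operator{L}=\operator{A}+\operator{K}$ and applies~\eqref{eq:smon1a}--\eqref{eq:smon1b}. Dividing by $\normLtwo{\nabla(u-U_\ell)}{\Omega}$ and taking the minimum over $V_\ell$ yields~\eqref{eq:cea}.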
Moreover, we require that~\eqref{eq:nlest} is well-defined and that there holds the estimator reduction~\eqref{eq:estred} from Lemma~\ref{lem:estred}. For possible non-linearities $\matrix{A}$ which allow for~\eqref{eq:estred}, we refer to Lemma~\ref{lem:nlwell} below.
We assume that $\operator{L}:\, H_0^1(\Omega)\to H^{-1}(\Omega)$ as well as $\operator{A}:\, H_0^1(\Omega)\to H^{-1}(\Omega)$ are twice Fr\'echet differentiable, i.e. there exist
\begin{align}\label{eq:frechet}
\begin{split}
D\operator{L},D\operator{A}:&\, H^1_0(\Omega)\to L(H^1_0(\Omega),H^{-1}(\Omega)),\\
D^2\operator{L},D^2\operator{A}:&\, H^1_0(\Omega)\to L\big(H^1_0(\Omega),L(H^1_0(\Omega),H^{-1}(\Omega))\big).
\end{split}
\end{align}
The second derivative should be bounded locally around the solution $u$ of~\eqref{eq:continuousnonlin}, i.e., there exists $\varepsilon_{\rm loc}>0$ with \definec{nlbound}
\begin{align}\label{eq:nlbounded}
\begin{split}
\c{nlbound}:=\sup_{\normLtwo{\nabla(u-v)}{\Omega}<\varepsilon_{\ell oc}}\Big(\norm{&D^2\operator{L}(v)}{L\big(H^1_0(\Omega),L(H^1_0(\Omega),H^{-1}(\Omega))\big)}\\
&+\norm{D^2\operator{A}(v)}{L\big(H^1_0(\Omega),L(H^1_0(\Omega),H^{-1}(\Omega))\big)}\Big)<\infty.
\end{split}
\end{align}
Finally, we assume that $D\operator{A}(v):\,H^1_0(\Omega)\to H^{-1}(\Omega)$ is symmetric for all $v\in H^1_0(\Omega)$, i.e.\ for all $w_1,w_2\in H^1_0(\Omega)$ there holds
\begin{align*}
\dual{D\operator{A}(v)(w_1)}{w_2}=\dual{D\operator{A}(v)(w_2)}{w_1}.
\end{align*}
\begin{remark}
Note that if $\matrix{A}:\,\Omega\times{\mathbb R}^d \to {\mathbb R}^d$ and $g:\, \Omega\times{\mathbb R}\times {\mathbb R}^d \to {\mathbb R}$ are twice differentiable, and if the Jacobian
$J_y\matrix{A}(x,y)\in {\mathbb R}^{d\times d}$ is additionally a symmetric matrix, then $\operator{L}$ and $\operator{A}$ satisfy~\eqref{eq:frechet} as well as~\eqref{eq:nlbounded}. Moreover, $D\operator{A}(v)$ is symmetric for all $v\in H^1_0(\Omega)$, since for all $w\in H^1_0(\Omega)$ there holds
\begin{align*}
D\operator{A}(v)(w)=-{\rm div}\Big(\big(J_y\matrix{A}(\cdot,\nabla v(\cdot))\big) \big(\nabla w(\cdot)\big)\Big),
\end{align*}
where $J_y\matrix{A}(x,y)$ denotes the Jacobian of $\matrix{A}$ with respect to $y$.
\end{remark}
\begin{example}
We stress that the assumptions posed on $\operator{A}$ and $\operator{L}$ cover, for instance, non-linear material laws in magnetostatics, where $\matrix{A}(\cdot,\cdot)$ takes e.g.\ the form
\begin{align*}
\matrix{A}(x,\nabla u(x)) = \Big(1+\frac{1}{1+|\nabla u(x)|^2}\Big)\nabla u(x).
\end{align*}
For $d=2$, e.g., the Jacobian $J_y\matrix{A}(x,y)$ reads
\begin{align*}
J_y\matrix{A}(x,y):=\left(\begin{array}{cc}
\frac{-2y_1^2}{(1+|y|^2)^2} & \frac{-2y_1y_2}{(1+|y|^2)^2}\\
\frac{-2y_1y_2}{(1+|y|^2)^2} &\frac{-2y_2^2}{(1+|y|^2)^2}
\end{array}\right)
+
\left(\begin{array}{cc}
1+\frac{1}{1+|y|^2} & 0\\
0 &1+\frac{1}{1+|y|^2}
\end{array}\right).
\end{align*}
We refer to e.g.~\cite{rzmp} for further examples.
\end{example}
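As a sanity check of this example, one may compare the stated Jacobian with a finite-difference approximation; the following self-contained sketch (the function names are ours) also confirms its symmetry numerically:

```python
def A(y):
    """Material law from the example: A(y) = (1 + 1/(1+|y|^2)) y."""
    n2 = y[0] ** 2 + y[1] ** 2
    s = 1.0 + 1.0 / (1.0 + n2)
    return (s * y[0], s * y[1])

def J_exact(y):
    """Closed form: (1 + 1/(1+|y|^2)) I - 2/(1+|y|^2)^2 * y y^T."""
    n2 = y[0] ** 2 + y[1] ** 2
    s = 1.0 + 1.0 / (1.0 + n2)
    c = -2.0 / (1.0 + n2) ** 2
    return [[s + c * y[0] * y[0], c * y[0] * y[1]],
            [c * y[1] * y[0], s + c * y[1] * y[1]]]

def J_fd(y, h=1e-6):
    """Central finite differences; column j approximates dA/dy_j."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        yp, ym = list(y), list(y)
        yp[j] += h
        ym[j] -= h
        Ap, Am = A(yp), A(ym)
        for i in range(2):
            J[i][j] = (Ap[i] - Am[i]) / (2.0 * h)
    return J
```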
\begin{lemma}\label{lem:nlwell}
Sufficient regularity assumptions in addition to~\eqref{eq:smon1b} and~\eqref{eq:smon2} to guarantee that the error estimator~\eqref{eq:nlest} is well-defined and satisfies the estimator reduction~\eqref{eq:estred} are, for instance, either of the following conditions $(i)$ and $(ii)$:
\begin{itemize}
\item[(i)] $\matrix{A}(\cdot,\cdot):\,\Omega\times{\mathbb R}^d\to{\mathbb R}^d$ is Lipschitz continuous and there exists a constant $\setc{nlwell1}>0$ such that for all $\ell\in{\mathbb N}$ and all $V_\ell,W_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)$ there holds ${\rm div}\matrix{A}(\cdot, \nabla V_\ell(\cdot))\in L^2(\Omega)$ as well as
\begin{align}\label{eq:nlwell1}
\normLtwo{{\rm div}|_T\big(\matrix{A}(\cdot,\nabla V_\ell(\cdot))-\matrix{A}(\cdot, \nabla W_\ell(\cdot))\big)}{T}\leq\c{nlwell1}\norm{V_\ell-W_\ell}{H^2(T)}\quad\text{for all }T\in{\mathcal T}_\ell.
\end{align}
\item[(ii)] There holds $p=1$ (lowest-order case) as well as
\begin{align*}
\matrix{A}(x,y)=\matrix{A}(y)\quad\text{for all }x\in\Omega,\,y\in {\mathbb R}^d,
\end{align*}
and additionally $\matrix{A}(\cdot):\,{\mathbb R}^d\to{\mathbb R}^d$ is Lipschitz continuous.
\end{itemize}
\end{lemma}
\begin{proof}
The jump terms in~\eqref{eq:nlest} are well-defined in both cases $(i)$ and $(ii)$ since $\matrix{A}(\cdot,\nabla U_\ell(\cdot))$ is a piecewise Lipschitz continuous function. Moreover, this shows that ${\rm div}\matrix{A}(\cdot,\nabla U_\ell(\cdot))\in L^\infty(T)\subset L^2(T)$ for all $T\in{\mathcal T}_\ell$. Therefore,~\eqref{eq:nlest} is well-defined.
Given $T_+,T_-\in{\mathcal T}_\ell$ as well as $W_\ell,V_\ell\in{\mathcal S}^p_0({\mathcal T}_\ell)$, the Lipschitz continuity also proves the following pointwise estimate for all $x\in T_+\cap T_-$
\begin{align*}
|[(\matrix{A}(x,\nabla W_\ell(x))&-\matrix{A}(x,\nabla V_\ell(x)))\cdot n]|\\
&\lesssim \Big|\big(\matrix{A}(x,(\nabla W_\ell)|_{T_+}(x))-\matrix{A}(x,(\nabla V_\ell)|_{T_+}(x))\big)\cdot n|_{T_+}\\
&\qquad+
\big(\matrix{A}(x,(\nabla W_\ell)|_{T_-}(x))-\matrix{A}(x,(\nabla V_\ell)|_{T_-}(x))\big)\cdot n|_{T_-}\Big|\\
&\lesssim \Big|(\nabla W_\ell)|_{T_+}(x)-(\nabla V_\ell)|_{T_+}(x) \Big|+ \Big|(\nabla W_\ell)|_{T_-}(x)-(\nabla V_\ell)|_{T_-}(x) \Big|.
\end{align*}
Combining the estimate above with the trace inequality for polynomials, we obtain
\begin{align}\label{eq:nlwellhelp1}
|T_+|^{1/d} \normLtwo{[(\matrix{A}(\cdot,\nabla W_\ell)-\matrix{A}(\cdot,\nabla V_\ell))\cdot n]}{T_+\cap T_-}^2\lesssim
\normLtwo{\nabla(W_\ell-V_\ell)}{T_+\cup T_-}^2.
\end{align}
The hidden constant depends only on the polynomial degree $p\in{\mathbb N}$, the Lipschitz continuity of $\matrix{A}(\cdot,\cdot)$, and the $\gamma$-shape regularity of ${\mathcal T}_\ell$.
It remains to prove a similar estimate for the volume residual in~\eqref{eq:nlest}, i.e.
\begin{align}\label{eq:nlwellhelp2}
|T|^{2/d}\normLtwo{\operator{L}|_TW_\ell - \operator{L}|_T V_\ell}{T}^2\lesssim \normLtwo{\nabla(W_\ell-V_\ell)}{T}^2\quad\text{for all }T\in{\mathcal T}_\ell.
\end{align}
In case of $(i)$, this follows immediately from the combination of~\eqref{eq:nlwell1} and~\eqref{eq:smon1b} together with a standard inverse estimate.
In case of $(ii)$, we observe that $\nabla V_\ell$ is piecewise constant for any $V_\ell\in{\mathcal S}^1_0({\mathcal T}_\ell)$. Therefore, $\matrix{A}(\nabla V_\ell)$ is also piecewise constant and hence ${\rm div}|_T\matrix{A}(\nabla V_\ell(\cdot)) = 0$ for all $T\in{\mathcal T}_\ell$. Thus, $\operator{L}|_TV_\ell=(\operator{K}V_\ell)|_T$, and it suffices to apply~\eqref{eq:smon1b} to prove~\eqref{eq:nlwellhelp2}.
With the estimates~\eqref{eq:nlwellhelp1}--\eqref{eq:nlwellhelp2}, the proof of Lemma~\ref{lem:estred} still holds true with the obvious modifications. This concludes the proof.
\end{proof}
\subsubsection{Auxiliary results}
This section provides some technical lemmata, which are used to transfer the results from the linear case to the present non-linear case.
\begin{lemma}\label{lem:nlrel}
The residual error estimator satisfies reliability~\eqref{eq:reliable} as well as discrete reliability~\eqref{eq:drel}.
Moreover, there holds convergence
\begin{align}\label{eq:nlconv}
\normLtwo{\nabla(u-U_\ell)}{\Omega}\to 0\quad\text{as }\ell\to\infty.
\end{align}
\end{lemma}
\begin{proof}
The residual error estimator $\eta_\ell$ is well-defined by assumption in Section~\ref{section:nlreg}. With the equivalence~\eqref{eq:resequiv}, the standard arguments apply to prove reliability~\eqref{eq:reliable} and also the proof of discrete reliability~\eqref{eq:drel} follows analogously to~\cite{ckns}.
The estimator reduction holds by assumption in Section~\ref{section:nlreg} and therefore Proposition~\ref{prop:conv} holds true and proves~\eqref{eq:nlconv}.
\end{proof}
\begin{lemma}\label{lem:aux1}
The operator $(D\operator{L})|_{{\mathcal S}^p_0({\mathcal T}_\infty)}u:\,{\mathcal S}^p_0({\mathcal T}_\infty)\to {\mathcal S}^p_0({\mathcal T}_\infty)^\star$ is injective.
\end{lemma}
\begin{proof}
With~\eqref{eq:smon2} and the definition of the Fr\'echet derivative, there holds for all $v\in {\mathcal S}^p_0({\mathcal T}_\infty)$ with $\normLtwo{\nabla v}{\Omega}=1$
\begin{align*}
\dual{((D\operator{L})|_{{\mathcal S}^p_0({\mathcal T}_\infty)}u)(v)}{v}&=\lim_{\delta\to 0} \delta^{-2} \dual{\operator{L}(u+\delta v)-\operator{L}u}{u+\delta v- u}\\
&\gtrsim \lim_{\delta\to 0}\delta^{-2}\normLtwo{\nabla(\delta v)}{\Omega}^2 = \normLtwo{\nabla v}{\Omega}^2 = 1.
\end{align*}
Hence, we have $((D\operator{L})|_{{\mathcal S}^p_0({\mathcal T}_\infty)}u)(v)\neq 0$ in ${\mathcal S}^p_0({\mathcal T}_\infty)^\star$ for all $v\in {\mathcal S}^p_0({\mathcal T}_\infty)\setminus\{0\}$. This concludes the proof.
\end{proof}
\begin{lemma}[Taylor]\label{lem:taylor}
For all $v,w\in H^1_0(\Omega)$ with $\normLtwo{\nabla(u-v)}{\Omega}+\normLtwo{\nabla(u-w)}{\Omega}\leq \varepsilon_{\rm loc}$, there holds
\begin{subequations}\label{eq:taylor}
\begin{align}\label{eq:taylor1}
\normHme{\operator{L}w-\operator{L}v- D\operator{L}(w)(w-v)}{\Omega}&\leq \c{nlbound}\normLtwo{\nabla(w-v)}{\Omega}^2,\\
\normHme{\operator{A}w-\operator{A}v- D\operator{A}(w)(w-v)}{\Omega}&\leq \c{nlbound}\normLtwo{\nabla(w-v)}{\Omega}^2.\label{eq:taylor2}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
The local boundedness~\eqref{eq:nlbounded} together with~\cite[Theorem 6.5]{cp} applied to the operators $\operator{L}$ and $\operator{A}$ proves the statement.
\end{proof}
\subsubsection{Quasi-orthogonality}
Following the steps of Section~\ref{section:quasiqo}, we derive a similar result for the present, non-linear case.
\begin{lemma}\label{lem:nlweakconv}
The sequence $(e_\ell)_{\ell\in{\mathbb N}}$ defined by
\begin{align*}
e_\ell:=\begin{cases}\frac{u-U_\ell}{\normLtwo{\nabla(u-U_\ell)}{\Omega}},& \text{ for }u\neq U_\ell,\\
0, &\text{ else}\end{cases}
\end{align*}
converges to zero, weakly in $ H^1_0(\Omega)$.
\end{lemma}
\begin{proof}
With Galerkin-orthogonality and the convention $\infty\cdot 0 = 0$, we obtain
\begin{align*}
\lim_{\ell\to\infty}\frac{\dual{\operator{L}u-\operator{L}U_\ell}{V_k}}{\normLtwo{\nabla(u-U_\ell)}{\Omega}} = 0\quad\text{for all }V_k\in {\mathcal S}^p_0({\mathcal T}_k)\text{ and }k\in{\mathbb N}.
\end{align*}
By continuity of the duality brackets and density of $\bigcup_{k\in{\mathbb N}}{\mathcal S}^p_0({\mathcal T}_k)$ in ${\mathcal S}^p_0({\mathcal T}_\infty)$, this results in convergence for all $v\in{\mathcal S}^p_0({\mathcal T}_\infty)$
\begin{align*}
\frac{\dual{\operator{L}u-\operator{L}U_\ell}{v}}{\normLtwo{\nabla(u-U_\ell)}{\Omega}}\to 0\quad\text{as }\ell\to\infty.
\end{align*}
By use of~\eqref{eq:taylor1}, we observe for all $v\in{\mathcal S}^p_0({\mathcal T}_\infty)$
\begin{align*}
\frac{|\dual{\operator{L}u-\operator{L}U_\ell}{v}|}{\normLtwo{\nabla(u-U_\ell)}{\Omega}}&\geq \frac{|\dual{(D\operator{L}u)(u-U_\ell)}{v}|}{\normLtwo{\nabla(u-U_\ell)}{\Omega}}
-\c{nlbound}\normLtwo{\nabla(u-U_\ell)}{\Omega}\normLtwo{\nabla v}{\Omega}.
\end{align*}
With the convergence $U_\ell \to u$ in $H^1_0(\Omega)$ from~\eqref{eq:nlconv}, this immediately implies
\begin{align}\label{eq:nlhelp}
\frac{|\dual{u-U_\ell}{((D\operator{L})|_{{\mathcal S}^p_0({\mathcal T}_\infty)}u)^\star v}|}{\normLtwo{\nabla(u-U_\ell)}{\Omega}}\to 0\quad\text{as }\ell\to\infty\quad\text{for all }v\in{\mathcal S}^p_0({\mathcal T}_\infty).
\end{align}
According to Lemma~\ref{lem:aux1}, $(D\operator{L})|_{{\mathcal S}^p_0({\mathcal T}_\infty)}u$ is injective. Therefore, its adjoint $((D\operator{L})|_{{\mathcal S}^p_0({\mathcal T}_\infty)}u)^\star$ is surjective onto ${\mathcal S}^p_0({\mathcal T}_\infty)^\star$. Hence,~\eqref{eq:nlhelp} is equivalent to $e_\ell\rightharpoonup 0$ as $\ell\to\infty$. This concludes the proof.
\end{proof}
To abbreviate notation, we define the quasi-metric
\begin{align*}
{\rm d}\hspace{-0.6mm}{\rm l}(w,v)^2:=\dual{\operator{L}w-\operator{L}v}{w-v}\quad\text{for all }w,v\in H^1_0(\Omega).
\end{align*}
Note that due to~\eqref{eq:smon1}--\eqref{eq:smon2}, there holds
\begin{align}\label{eq:dequiv}
\c{norm}^{-1}\normLtwo{\nabla(w-v)}{\Omega}\leq {\rm d}\hspace{-0.6mm}{\rm l}(w,v)\leq\c{norm}\normLtwo{\nabla(w-v)}{\Omega}\quad\text{for all } w,v\in H^1_0(\Omega)
\end{align}
with $\c{norm}=\max\{2\c{nllip},\c{nlelliptic}^{-1}\}>0$.
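The two-sided bound~\eqref{eq:dequiv} can be illustrated in the simplest conceivable setting, a strongly monotone scalar map; the following is a toy example of ours with monotonicity constant $1$ and Lipschitz constant $3$:

```python
import math

def L_op(u):
    """Toy strongly monotone scalar map: L'(u) = 2 + cos(u) lies in [1, 3]."""
    return 2.0 * u + math.sin(u)

def d_sq(w, v):
    """Quasi-metric d(w, v)^2 = <Lw - Lv, w - v>; here a plain product."""
    return (L_op(w) - L_op(v)) * (w - v)

# The mean value theorem gives the sandwich
#   |w - v|^2 <= d(w, v)^2 <= 3 |w - v|^2   for all w, v.
checks = [(0.3, -1.2), (2.0, 1.9), (-4.0, 5.0)]
ok = all((w - v) ** 2 <= d_sq(w, v) <= 3.0 * (w - v) ** 2 for w, v in checks)
```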
\begin{proposition}\label{prop:nlquasiqo}
For any $\varepsilon>0$, there exists $\ell_0\in{\mathbb N}$ such that
\begin{align}\label{eq:nlquasiqo}
{\rm d}\hspace{-0.6mm}{\rm l}(U_{\ell+1},U_\ell)^2\leq \frac{1}{1-\varepsilon}\,{\rm d}\hspace{-0.6mm}{\rm l}(u,U_\ell)^2-{\rm d}\hspace{-0.6mm}{\rm l}(u,U_{\ell+1})^2
\end{align}
for all $\ell\geq \ell_0$.
\end{proposition}
\begin{proof}
Due to the convergence $U_\ell\to u$ in $H^1_0(\Omega)$, see~\eqref{eq:nlconv}, there exists $\ell_1\in{\mathbb N}$ such that for all $\ell\geq\ell_1$ we may apply~\eqref{eq:taylor2} to obtain
\begin{align*}
|\dual{\operator{A}U_{\ell+1}-\operator{A}U_\ell}{u-U_{\ell+1}}|&\leq |\dual{D\operator{A}(U_{\ell+1})(U_{\ell+1}-U_\ell)}{u-U_{\ell+1}}|\\
&\qquad\qquad+\c{nlbound}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}.
\end{align*}
Using the symmetry of $D\operator{A}(U_{\ell+1})$, we conclude
\begin{align*}
|\dual{\operator{A}U_{\ell+1}-\operator{A}U_\ell}{u-U_{\ell+1}}|&\leq |\dual{D\operator{A}(U_{\ell+1})(u-U_{\ell+1})}{U_{\ell+1}-U_\ell}|\\
&\qquad\qquad+\c{nlbound}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\\
&\leq |\dual{\operator{A}u-\operator{A}U_{\ell+1}}{U_{\ell+1}-U_\ell}|\\
&\qquad\qquad+\c{nlbound}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\\
&\qquad\qquad+\c{nlbound}\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}^2.
\end{align*}
Analogously to the estimate above, we obtain a lower estimate. For any $\delta>0$, we may thus use convergence $U_\ell\to u$ as $\ell\to\infty$ to find an index $\ell_0\in {\mathbb N}$ such that
\begin{align*}
\big||\dual{\operator{A}U_{\ell+1}-\operator{A}U_\ell}{u-U_{\ell+1}}|&-|\dual{\operator{A}u-\operator{A}U_{\ell+1}}{U_{\ell+1}-U_\ell}|\big|\\
&\qquad\qquad\leq\delta\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}
\end{align*}
for all $\ell\geq\ell_0$.
Since $e_\ell$ converges to zero weakly in $H^1_0(\Omega)$, the Rellich compactness theorem yields strong convergence $e_\ell\to 0$ in $L^2(\Omega)$ as $\ell\to\infty$. This together with the Lipschitz continuity~\eqref{eq:smon1b} allows us to estimate
\begin{align*}
|\dual{\operator{K}U_{\ell+1}-\operator{K}U_\ell}{u-U_{\ell+1}}|\lesssim \normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{e_{\ell+1}}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}
\end{align*}
and hence
\begin{align*}
| \dual{\operator{K}U_{\ell+1}-\operator{K}U_\ell}{u-U_{\ell+1}}|\leq \delta \normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}
\end{align*}
for all $\ell\geq\ell_1$. The adjoint term follows analogously, since
\begin{align*}
|\dual{\operator{K}u-\operator{K}U_{\ell+1}}{U_{\ell+1}-U_\ell}|\leq |\dual{\operator{K}u-\operator{K}U_{\ell+1}}{U_{\ell+1}-u}|+|\dual{\operator{K}u-\operator{K}U_{\ell+1}}{u-U_\ell}|.
\end{align*}
So far, we end up with
\begin{align*}
| \dual{\operator{K}U_{\ell+1}-\operator{K}U_\ell}{u-U_{\ell+1}}|&+|\dual{\operator{K}u-\operator{K}U_{\ell+1}}{U_{\ell+1}-U_\ell}|\\
&\leq \delta \big(\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\\
&\qquad +\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}^2 \\
&\qquad + \normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\normLtwo{\nabla(u-U_\ell)}{\Omega}\big)\\
&\leq \delta/2\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2+2\delta\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}^2\\
&\qquad\qquad+\delta/2\normLtwo{\nabla(u-U_\ell)}{\Omega}^2
\end{align*}
by use of Young's inequality. Putting everything together, we obtain
\begin{align*}
|\dual{(\operator{A}+\operator{K})U_{\ell+1}&-(\operator{A}+\operator{K})U_\ell}{u-U_{\ell+1}}|\\
&\leq
|\dual{\operator{A}u-\operator{A}U_{\ell+1}}{U_{\ell+1}-U_\ell}|
+\delta\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\\
&\qquad+|\dual{\operator{K}U_{\ell+1}-\operator{K}U_\ell}{u-U_{\ell+1}}|\\
&\leq |\dual{(\operator{A}+\operator{K})u-(\operator{A}+\operator{K})U_{\ell+1}}{U_{\ell+1}-U_\ell}|\\
&\qquad+\delta\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}\\
&\qquad+|\dual{\operator{K}U_{\ell+1}-\operator{K}U_\ell}{u-U_{\ell+1}}|
+|\dual{\operator{K}u-\operator{K}U_{\ell+1}}{U_{\ell+1}-U_\ell}|\\
&\leq 3\delta\big(\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2
+\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}^2
+\normLtwo{\nabla(u-U_\ell)}{\Omega}^2\big),
\end{align*}
where we used Galerkin orthogonality $\dual{(\operator{A}+\operator{K})u-(\operator{A}+\operator{K})U_{\ell+1}}{U_{\ell+1}-U_\ell}=0$ to obtain the last estimate.
With that at hand, we obtain similarly to~\eqref{eq:key}
\begin{align*}
{\rm d}\hspace{-0.6mm}{\rm l}(U_{\ell+1},U_\ell)^2&\leq {\rm d}\hspace{-0.6mm}{\rm l}(u,U_\ell)^2 - {\rm d}\hspace{-0.6mm}{\rm l}(u,U_{\ell+1})^2 + |\dual{(\operator{A}+\operator{K})U_{\ell+1}-(\operator{A}+\operator{K})U_\ell}{u-U_{\ell+1}}|\\
&\leq {\rm d}\hspace{-0.6mm}{\rm l}(u,U_\ell)^2 - {\rm d}\hspace{-0.6mm}{\rm l}(u,U_{\ell+1})^2 +3\delta\big(\normLtwo{\nabla(U_{\ell+1}-U_\ell)}{\Omega}^2\\
&\qquad\qquad+\normLtwo{\nabla(u-U_{\ell+1})}{\Omega}^2
+\normLtwo{\nabla(u-U_\ell)}{\Omega}^2\big).
\end{align*}
With the equivalence~\eqref{eq:dequiv}, we conclude
\begin{align*}
(1-3\c{norm}\delta){\rm d}\hspace{-0.6mm}{\rm l}(U_{\ell+1},U_\ell)^2&\leq (1+3\c{norm}\delta){\rm d}\hspace{-0.6mm}{\rm l}(u,U_\ell)^2 - (1-3\c{norm}\delta){\rm d}\hspace{-0.6mm}{\rm l}(u,U_{\ell+1})^2
\end{align*}
for all $\ell\geq\ell_0$. Finally, we choose $\delta>0$ sufficiently small such that $ (1+3\c{norm}\delta)/(1-3\c{norm}\delta)\leq 1/(1-\varepsilon)$ and conclude the proof.
\end{proof}
Together with the estimator reduction~\eqref{eq:estred}, which holds by assumption in Section~\ref{section:nlreg}, the quasi-Galerkin orthogonality~\eqref{eq:nlquasiqo} of Proposition~\ref{prop:nlquasiqo} allows us to prove the $R$-linear convergence of Theorem~\ref{thm:rconv}, if one replaces $\enorm{u-U_{\ell+1}}$ and $\enorm{U_{\ell+1}-U_\ell}$ by ${\rm d}\hspace{-0.6mm}{\rm l}(u,U_{\ell+1})$ and ${\rm d}\hspace{-0.6mm}{\rm l}(U_{\ell+1},U_\ell)$, respectively. Therefore, all the results from Section~\ref{section:optimality} hold (cf.\ the remarks after Theorem~\ref{thm:rconv} and the proof of Theorem~\ref{thm:optimal}) and, in particular, we obtain the optimality result of Theorem~\ref{thm:optimal}.
cond-mat/9912007
\section{Introduction}
\label{introduction}
The core level spectral function $A_d(\epsilon)$ of a localized core
orbital immersed in a conduction electron sea, as observed in the
photoemission of electrons after X-ray absorption, has long been known
to show nonanalytic threshold behavior characterized by fractional
power laws $A_d(\epsilon) \propto \epsilon^{-\alpha_d}$ in the frequency
distance to the threshold, $\epsilon = \omega - E_0$. As shown by
Anderson \cite{Anderson67}, this can be understood by considering that
the sudden creation of a deep hole in the electronic core of an ion in
a metal (or the filling of an empty core state) disturbs the Fermi sea
of the conduction electrons so strongly that the subsequent relaxation
into the new ground state follows a fractional power law in time
rather than the usual exponential dependence. This is due to the fact
that the ground states of the initial state and the final state are
orthogonal in the limit of an infinite system (``orthogonality
catastrophe''). At finite, but small $\epsilon$ the relaxation
process involves excitation of a large number of particle-hole pairs
out of the Fermi sea of conduction electrons. A similar situation arises
at the X-ray absorption threshold. There it has been argued that,
in addition to the above, an excitonic effect appears, as first discussed
by Mahan \cite{Mahan67}. A theoretical
description requires the use of infinite order perturbation theory.
The problem is in some sense the simplest situation in which strong
electron correlations are generated by a sudden change of electron
occupations of a level coupled to a Fermi sea. The same generic
problem is at the heart of the Kondo problem, or generally speaking,
of quantum impurity problems, which can be understood as a succession
of X-ray edge problems generated by successive flips of the impurity
spin or pseudospin. In an even more general context, such problems
arise in lattice models of correlated electrons, when the hopping of
an electron from one site to the next changes the occupation of these
sites, causing a corresponding rearrangement of the whole Fermi
system. Given the existing evidence that high temperature
superconductors, heavy fermion compounds and other metallic systems
are governed by strong electron correlation effects, which are at
present only poorly understood, there is an urgent need for generally
applicable theoretical methods capable of dealing with these complex
situations.
A powerful method of many-body physics, which directly addresses the
consequences of a change in occupation number of a local level is the
pseudoparticle representation \cite{Barnes76,Cole84}. Within this
framework one introduces pseudoparticles for each of the states of
occupation of a given energy level, i.e.~fermions for the singly
occupied level and bosons for the empty level. It is well known that a
representation of this type for the infinite $U$ Anderson model of a
magnetic impurity in a metal can give surprisingly good results
already in second-order self-consistent perturbation theory
[``non-crossing-approximation'' (NCA)] in the hybridization of local
level and conduction band \cite{NCA}. However, at low temperatures
and low energies the NCA fails to control the infrared singular
behavior of the pseudoparticle spectral functions at threshold.
Application of the NCA to the problem of the core hole spectral
function gives a threshold exponent $\alpha_d$ independent of the
occupation of the core state, in contradiction with the exactly known
result.
We have recently developed an approximation scheme, which appears to
overcome the difficulties of NCA \cite{Kroha97,Kroha98}. It is based
on the idea of including singular behavior emerging in any of the
two-particle channels. There are two relevant channels, the
pseudofermion-conduction electron and the slave boson-conduction
electron channel. In both channels the ladder diagrams are summed, the
resulting $T\/$-matrices are self-consistently included in the
self-energies, as is required within a conserving approximation
scheme. The main results of this conserving $T\/$-matrix
approximation (CTMA) are: ($i$) the (exactly known) infrared threshold
exponents of the pseudoparticle spectral functions are recovered \cite{Kroha97},
($ii$) the thermodynamic quantities spin susceptibility and specific
heat show local Fermi liquid behavior in the single channel
case \cite{Kroha05} and
($iii$) in the multi channel case, non-Fermi liquid behavior is
found \cite{Kroha05},
in quantitative agreement with exact results available in certain
limiting cases.
One of the most stringent tests of a many-body method is the
calculation of the core hole spectral function. In this paper we
report the results of an application of the CTMA to this problem.
The organization of the paper is as follows. In section
\ref{physical_model}, we summarize the most important results of the
exact solution of the X-ray model \cite{Noz3,Schotte69}, notably those
for the threshold exponents for the photoemission and the X-ray
absorption. Then, in section \ref{representation}, we recall the
pseudoparticle representation of a spinless Anderson impurity
Hamiltonian \cite{Menge88} and point out its equivalence to the X-ray
model in the infrared limit. The conserving pseudoparticle approximation
up to infinite order in the hybridization $V$ is discussed in section
\ref{theory} and compared with the parquet equation approach of
Nozi{\`e}res {\em et al.\/} \cite{Noz2} in section \ref{comparison}.
The numerical results are discussed in section \ref{results}. In
appendix \ref{ctma_equations} we give explicitly the self-consistent
equations which determine the auxiliary particle self-energies within
the CTMA.
\section{Physical model}
\label{physical_model}
The absorption of an X-ray photon by a deep level core electron and
the subsequent emission of the electron leaves a core hole, which is
seen by the conduction electrons as a suddenly created screened
Coulomb potential. The simplest model Hamiltonian describing this
situation is given by \cite{Mahan67,Noz3,Noz2,Ohtaka90,Noz1}
\begin{equation}
\label{Xray_Hamiltonian}
H=\sum_{\V{k}\sigma} \left( \epsilon^{\phantom{\dag}}_{\V{k}}
-\mu \right) c^\dag_{\V{k}\sigma} c^{\phantom{\dag}}_{\V{k}\sigma}
+ E^{\phantom{\dag}}_d d^\dag d + V_d \sum_{\sigma}
c^\dag_{0\sigma} c^{\phantom{\dag}}_{0\sigma} d d^\dag_{\phantom 0} \;,
\end{equation}
where $c^{\phantom{\dag}}_{\V{k}\sigma}$ ($c^\dag_{\V{k}\sigma}$) are
the conduction electron field operators for momentum and spin
eigenstates $|\V{k} \sigma \rangle$, with energy $\epsilon_{\V{k}}$
and chemical potential $\mu$. The energy of the deep level is $E_d$,
and $V_d$ is the screened Coulomb interaction between the conduction
electrons at the site of the hole ($c^\dag_{0\sigma}$,
$c^{\phantom{\dag}}_{0\sigma}$) and the hole (with operators $d^\dag$,
$d$; the spin state of the hole is irrelevant here). We assume that
the hole is localized and does not have internal structure, i.e.~we
neglect the finite life time of the hole due to Auger effect as well
as a possible recoil of the hole. The Coulomb interaction between the
conduction electrons is absorbed into a quasiparticle renormalization.
\paragraph*{Photoemission.---}
The spectral function of the hole, $A_d(\epsilon)$, which can be
measured in photoemission experiments, is obtained from the
one-particle core hole Green's function $G_d(t)=-i \langle
T[d(t)d^\dag(0)] \rangle$, subjected to the initial condition that the
core hole occupation number $d^\dag d = 0$ for times $t<0$ (before the
photoemission process), by taking the imaginary part of its Fourier
transform, $A_d(\omega)=(1/\pi)\mbox{Im}G_d(\omega-i0)$. The initial
condition is equivalent to the trace $\langle \cdots \rangle$ in the
definition of $G_d(t)$ being taken only over states with hole
occupation equal to zero. It is this restriction which implies the
non-trivial dynamics of the X-ray problem. $A_d(\omega)$ is
proportional to the spectral weight of processes, where a photon is
absorbed by the metal, subsequently emitting the deep level core
electron. The energy $\omega$ required for this process is bounded
from below by the threshold energy $E_0=E_F-E_{\mbox{\scriptsize
core}}-\Delta E$, where $E_{\mbox{\scriptsize core}}$ and
$E_F$ are the core level energy and the Fermi energy, respectively,
and $\Delta E$ is a renormalization due to core hole-conduction
electron interactions. In the following we will choose the zero of
energy such that $E_0=0$ (i.e.~$\epsilon=\omega-E_0$). The spectral
function $A_d(\epsilon)$ then shows singular threshold behavior
\begin{equation}
A_d(\epsilon)=\frac{C_d}{\epsilon^{\alpha_d}} \qquad (\epsilon \to 0^+) \; .
\end{equation}
In a landmark paper Nozi{\`e}res and De Dominicis \cite{Noz3}
showed that the exponent $\alpha_d$ depends only on the scattering phase
shift $\eta$ of the conduction electrons off
the core hole and calculated it as ($s\/$-wave-scattering)
\begin{eqnarray}
\alpha_d = 1-\left( \frac{\eta}{\pi} \right)^2
= 1-n^2_d \; ,
\end{eqnarray}
where Friedel's sum rule $\eta=\pi n_d$ has been used to express
$\eta$ in terms of the occupation number of the core level, $n_d$.
\paragraph*{X--ray absorption.---}
The X-ray absorption cross section is given by the two particle
Green's function $G_2(t)=-i\Theta(t) \langle [ d^\dag_{\phantom 0}(t)
c^{\phantom{\dag}}_{0\sigma}(t), c^\dag_{0\sigma}(0) d(0) ] \rangle$
as $d\sigma/d\epsilon \propto \mbox{Im} G_2(\epsilon-i0)$. The absorption
cross section is finite for $\epsilon > 0$ and again shows singular
threshold behavior
\begin{equation}
\frac{d\sigma}{d\epsilon} = \frac{C_a}{\epsilon^{\alpha_a}}
\qquad (\epsilon \to 0^+) \; .
\end{equation}
The exponent $\alpha_a$ has been calculated by Nozi{\`e}res and De
Dominicis \cite{Noz3} with the result
\begin{eqnarray}
\alpha_a = \frac{2\eta}{\pi} - \left( \frac{\eta}{\pi} \right)^2
= 2n_d - n^2_d \; .
\label{absorption_exponent}
\end{eqnarray}
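Since both exponents are elementary functions of $n_d$, their behavior across the full range of core level occupations is easy to tabulate. A minimal numerical sketch (the function names are ours, introduced only for illustration):

```python
# Threshold exponents of the X-ray problem as functions of the core level
# occupation n_d, using Friedel's sum rule eta = pi * n_d.
def alpha_d(n_d):
    """Photoemission exponent, alpha_d = 1 - n_d^2 (Nozieres-De Dominicis)."""
    return 1.0 - n_d**2

def alpha_a(n_d):
    """X-ray absorption exponent, alpha_a = 2 n_d - n_d^2."""
    return 2.0 * n_d - n_d**2

for n_d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"n_d = {n_d:4.2f}:  alpha_d = {alpha_d(n_d):5.3f},"
          f"  alpha_a = {alpha_a(n_d):5.3f}")
```

Note that $\alpha_a(n_d)=\alpha_d(1-n_d)$, so the two exponents are exchanged under a particle-hole transformation, and both equal $3/4$ at half filling $n_d=1/2$.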
\section{Pseudoparticle representation of the X-ray model}
\label{representation}
As will be seen below, it is useful to formulate the core hole problem in
terms of pseudoparticles in order to impose the initial condition. We define
fermion operators $f^+$ ($f$) and boson operators $b^+$ ($b$) creating
(annihilating) the occupied or empty core level. The transition amplitude $V$
of an electron from the core level into the conduction band describes the
hybridization of these two systems. The Hamiltonian of this system takes the
form of an Anderson impurity Hamiltonian for spinless particles (spin
degeneracy $N=1$):
\begin{eqnarray}
\label{Anderson_Hamiltonian}
H&=&\sum_{\V{k}} \left( \epsilon^{\phantom{\dag}}_{\V{k}} - \mu \right)
c^\dag_{\V{k}} c^{\phantom{\dag}}_{\V{k}} \\
&+& E^{\phantom{\dag}}_d f^\dag f + V \left(
f^{\dag}_{\phantom{0}}bc^{\phantom{\dag}}_0
+ \mbox{h.~c.} \right)
+ \lambda Q \; , \nonumber
\end{eqnarray}
where $c_0=\sum_{\V{k}}c_{\V{k}}$ annihilates a conduction electron at
the impurity site. The constraint $Q=f^{\dag}f+b^{\dag}b=1$ ensuring
that the core level is either empty or occupied is implemented by
adding the last term in (\ref{Anderson_Hamiltonian}), where $\lambda$
is associated with the operator constraint $Q=1$ and may be
interpreted as the negative of a chemical potential for the
pseudoparticles \cite{Cole84}. As has been shown previously
\cite{Cole84,Kroha98,Costi96}, the limit
$\lambda \to \infty$ imposes the constraint exactly
and is equivalent to taking all expectation values of pseudoparticle
operators in the Hilbert subspace with
$Q=0$ (no core hole present). Thus, in the present context,
it implements exactly
the X-ray initial condition of sudden creation of the core
hole. The auxiliary particle Green's functions are expressed in terms of
their self-energies as
$G^{-1}_{f}(i\omega _n) = \left[ G^0_{f}(i\omega _n) \right]^{-1} -
\Sigma_{f}(i\omega _n)$,
$G^{-1}_{b}(i\nu _m) = \left[ G^0_{b}(i\nu _m) \right]^{-1} -
\Sigma_{b}(i\nu _m)$, where $G^0_f (i\omega _n) =1/(i\omega _n -E_d)$ and
$G^0_b (i \nu _m) =1/i\nu _m $ are the
respective non-interacting Green's functions and
$\omega _n=(2n+1)\pi/\beta$, $\nu _m=2m\pi/\beta$ denote
the fermionic and bosonic Matsubara frequencies.
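These Dyson equations are simple to set up on the Matsubara axis; a short sketch (the inverse temperature, level position, and toy self-energy below are our own illustrative choices, not values from the text):

```python
import numpy as np

# Matsubara frequencies and free pseudoparticle propagators, as defined above.
beta, E_d = 50.0, -1.0                  # illustrative parameters (ours)
n = np.arange(0, 64)
omega_n = (2*n + 1) * np.pi / beta      # fermionic frequencies
nu_m = 2*(n + 1) * np.pi / beta         # bosonic frequencies (nu_m = 0 omitted)

G0_f = 1.0 / (1j*omega_n - E_d)         # G^0_f(i omega_n) = 1/(i omega_n - E_d)
G0_b = 1.0 / (1j*nu_m)                  # G^0_b(i nu_m)    = 1/(i nu_m)

# Dyson equation G_f^{-1} = (G_f^0)^{-1} - Sigma_f, with a toy self-energy:
Sigma_f = -0.1j * np.ones_like(omega_n)
G_f = 1.0 / (1.0/G0_f - Sigma_f)
```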
In the model (\ref{Anderson_Hamiltonian}) one may distinguish two
distinct regimes, where the impurity occupation number $n_d$
at infinitely long time after suddenly switching on the interaction
is large ($n_d \to 1$, $E_d<0$) or small ($n_d \to 0$, $E_d>0$).
Since, due to the hybridization, $n_d$ is equal and opposite in sign
to the change of the conduction electron number (i.e. screening charge)
induced by the presence of the impurity, $n_d=-\Delta n_c$,
these regimes correspond via the Friedel sum rule to large
($\eta \to \pi$) and small ($\eta \to 0$) scattering phase shifts,
respectively (see detailed discussion below), and may, therefore,
be termed the strong and the weak coupling regions.
We now show the formal equivalence between the X-ray model
Eq.~(\ref{Xray_Hamiltonian}) and the slave particle Hamiltonian
Eq.~(\ref{Anderson_Hamiltonian}) at low energies
both in the weak and in the strong coupling regions.
In the strong coupling region, an effective low-energy model
is derived from the Anderson Hamiltonian (\ref{Anderson_Hamiltonian})
by integrating out the slave boson degree of freedom
(or, equivalently, by means of a Schrieffer-Wolff transformation
\cite{SchriefferWolff} onto the part of the Hilbert space
involving only states with the core level occupied).
The interaction term in the resulting effective action reads
\begin{eqnarray}
\label{Seffint}
S_{\mbox{\scriptsize int}}&=&-V^2\frac{1}{\beta^3}
\sum_{i\omega_n,i\omega'_n,i\nu_m} G_b^0(i\nu_m)\\
&\times& c^\dag_{0}(i\omega'_n-i\nu_m) c^{\phantom{\dag}}_{0}(i\omega_n)
f(i\omega'_n) f^\dag(i\omega_n+i\nu_m) \; , \nonumber
\end{eqnarray}
where, in addition, the projection onto the physical
Hilbert space
is imposed by taking $\lambda \rightarrow \infty$.
At low
\setlength{\unitlength}{1mm}
\begin{figure}
\epsfxsize7.7cm
\centering\leavevmode\epsfbox{figure1.eps}
\vspace*{0.5cm}
\caption{Diagrammatic representation of the effective
low-energy interaction in the strong coupling regime,
Eq.~(\ref{Seffint}), and its contraction to a density-density
interaction at low excitation energies.
Solid, dashed and wiggly lines correspond here and in the following to
conduction electron, pseudo\-fermion and slave boson propagators,
respectively.}
\label{fig1}
\end{figure}
\noindent
excitation energy relative to the core level, i.e. when the conduction electron
energies after analytical continuation are
$|\omega |$, $|\omega'-\nu | \ll |E_d|$ and the
pseudofermions have energies $\omega'$, $\omega+\nu \approx E_d$
(see Fig.~\ref{fig1}), the non-interacting slave boson Green's
function in Eq.~(\ref{Seffint})
is taken at $\nu \approx E_d$ and thus reduces to $1/E_d$.
The resulting effective Hamiltonian is thus
given by Eq.~(\ref{Xray_Hamiltonian}), with electron operators $d^\dag$, $d$
replaced by pseudofermions $f^\dag$, $f$, interacting with
the conduction electrons via the repulsive, {\it instantaneous}
potential $V_d = -V^2/E_d >0$.
In order to derive the effective low-energy Hamiltonian in the
weak coupling domain ($n_d \to 0$, $E_d>0$),
it is useful to observe that the model Eq. (\ref{Anderson_Hamiltonian})
is in the physical Hilbert space
invariant under the special particle-hole transformation
$f\leftrightarrow b$, $c\leftrightarrow c^\dag$ and $E_d \to -E_d$.
Integrating out the high energy states, i.e. the fermionic degrees of
freedom in this case, and then performing this particle-hole transformation,
the resulting low-energy Hamiltonian is again given
by Eq.~(\ref{Xray_Hamiltonian}), with the replacement $d^\dag$, $d$ $\to$
$f^\dag$, $f$, and the attractive interaction potential
$V_d = -V^2/E_d <0$ between conduction electrons and local pseudofermions.
Having, thus, established the formal connection between the original
X-ray model Eq. (\ref{Xray_Hamiltonian}) and the auxiliary particle
Hamiltonian (\ref{Anderson_Hamiltonian})
in the weak and in the strong coupling regions, we now turn
to showing that the photoemission and X-ray absorption spectra are
given by the slave boson and the pseudofermion spectral functions,
respectively.
\paragraph*{Photoemission.---}
The retarded Green's function $G_b^R(t)=-i\Theta(t) \langle [ b(t),
b^\dag(0) ]_- \rangle$ describes the propagation of the empty
$d\/$-level in time. The corresponding spectral function after
projection onto the physical sector $Q=1$, $A_b^+(\omega) = -
\lim_{\lambda \to \infty} \mbox{Im} G_b^R(\omega)/\pi$ can be
represented in terms of the exact eigenstates of the system without
the $d\/$-level, $|0,n\rangle$, and with the $d\/$-level,
$|1,n\rangle$, as \cite{Costi96,Kroha98}
\begin{eqnarray}
A_b(\omega)&=&\\
\frac{1}{Z_{Q=0}}&\sum_{m,n}&|\langle1,m|b^+|0,n\rangle|^2
e^{-\beta \epsilon_{0,n}}\delta(\omega+\epsilon_{0,n}
-\epsilon_{1,m}) \; . \nonumber
\end{eqnarray}
At zero temperature ($\beta=1/T=\infty$), $A_b(\epsilon)$ is zero for
$\epsilon = \omega -E_0 < 0$, where $E_0=\epsilon_{1,0}-\epsilon_{0,0}$ is the
difference of the ground state energies for the $Q=1$ and $Q=0$
systems. Near the threshold, $\epsilon \gtrsim 0$, $A_b(\epsilon)$ has
a power law singularity (infrared divergence), $A_b(\epsilon) \propto
\epsilon^{-\alpha_b}$, for exactly the same reason as the hole
spectral function $A_d(\epsilon)$ considered above: the states
$|0,n\rangle$ (free Fermi sea) and $|1,n\rangle$ (Fermi sea in
presence of a potential scattering center) are orthogonal, giving rise
to the orthogonality catastrophe \cite{Anderson67}.
The exponent $\alpha_b$ is therefore given in terms of the phase
shift $\eta _b$ (for $s\/$-wave scattering) as
$\alpha_b=1-\left( {\eta _b}/{\pi} \right)^2 \;$. Using the Friedel
sum rule and the fact that in the photoemission process (boson propagator)
the impurity occupation number changes from initially $0$ to $n_d>0$ in the
final state, we obtain the characteristic dependence on $n_d$,
\begin{equation}
\label{boson_exponent}
\alpha_b=1-n_d^2 \; .
\end{equation}
We may conclude that the threshold behavior of the physical hole
spectral function $A_d(\epsilon)$ and the slave boson spectral
function $A_b(\epsilon)$ is governed by the same exponent,
$\alpha_d=\alpha_b$, provided the scattering phase shift is the same.
\paragraph*{X-ray absorption.---}
In a similar way, the threshold behavior of the X-ray absorption
cross section $d\sigma/d\epsilon$ may be obtained from the pseudofermion
Green's function. As shown in section \ref{physical_model},
$d\sigma/d\epsilon$ is proportional to the imaginary part of the two
particle Green's function $G_2(t)=-i\Theta(t) \langle [ d^\dag_{\phantom
0}(t) c^{\phantom{\dag}}_{0\sigma}(t), c^\dag_{0\sigma}(0) d(0) ]
\rangle$. The corresponding quantity here is the slave
boson-conduction electron correlation function
\begin{equation}
G_{bc}(t)=-i\Theta(t) \langle [ b(t)c^{\phantom{\dag}}_0(t),
c^\dag_{0\sigma}(0) b^{\dag}_{\phantom{0}}(0)] \rangle \; ,
\end{equation}
which is given in terms of the pseudofermion Green's function
$G_f(\epsilon)$ (after Fourier transformation) as
\begin{equation}
G_{bc}(\epsilon)=\frac{1}{V^2}\left[ \left( G_f^0(\epsilon) \right)^{-1}
G_f(\epsilon) - 1 \right] \left( G_f^0(\epsilon) \right)^{-1} \; .
\end{equation}
It follows that the spectral functions are related by
$A_{bc}(\epsilon) \propto A_f(\epsilon) \sim \epsilon^{-\alpha_f}$,
i.e. the X-ray absorption exponent is identical to the pseudofermion threshold
exponent $\alpha _f$.
The latter is again determined by the orthogonality
catastrophe argument, considering that the initial state of the system
is now the conduction electron Fermi sea plus the filled $d\/$-level.
The phase shift $\eta_f$, again given
via the Friedel sum rule as the change of the occupation number
from the initial to the final state, is now different,
$\eta_f=(n_d -1)\pi$, leading to the expression
\begin{equation}
\label{fermion_exponent}
\alpha_f=2n_d-n^2_d \; .
\end{equation}
Comparison with (\ref{absorption_exponent}) again shows that the infrared
behavior of the pseudofermion spectral function is indeed identical to that of
the two particle Green's function $G_2$, as expected.
It should be mentioned that in the intermediate coupling or
``mixed valence'' domain, $\pi N(0) V^2 \approx |E_d|$
($n_d \approx 1/2$), a Schrieffer-Wolff type projection
is no longer valid because of large level occupancy fluctuations.
The formal derivation of the X-ray
model (\ref{Xray_Hamiltonian}) from the pseudoparticle model
(\ref{Anderson_Hamiltonian}) in the ``mixed valence'' regime
involves a retarded effective interaction, in contrast to
Eq. (\ref{Xray_Hamiltonian}). However, since
the Hamiltonian Eq.~(\ref {Anderson_Hamiltonian})
is a faithful representation of a {\it non-interacting} system
(via the identification $d^{\dag} = f^{\dag}b$), where the
constraint $Q=f^{\dag}f+b^{\dag}b = 1$ merely serves to implement
the X-ray initial condition of sudden switching on the interaction
between localized states and the conduction electrons (see above),
the system is described by single-particle wave functions
even in the valence fluctuation regime of this spinless model.
The analysis of the pseudoparticle threshold exponents
$\alpha _b$, $\alpha _f$ in terms of the corresponding scattering phase
shifts $\eta _b$, $\eta _f$ and the Friedel sum rule, as given above,
then also applies in the valence fluctuation regime.
It has been verified explicitly by a numerical renormalization group
calculation of the pseudoparticle threshold exponents that their
$n_d$ dependence, given in Eqs. (\ref{boson_exponent}),
(\ref{fermion_exponent}), is valid over the complete
range of the core level occupation number $n_d$ \cite{Costi94}.
The preceding analysis shows explicitly
that in the auxiliary particle representation
the threshold exponents of both the
X-ray photoemission and absorption are determined by the infrared
behavior of single-particle propagators, involving the physics of the
orthogonality catastrophe for auxiliary bosons or pseudofermions only
\cite{hopfield69,schotte2.69}.
{\it There is no separation into single particle effects and excitonic effects.}
\section{Conserving theory}
\label{theory}
In the previous section we reformulated the core hole problem by
introducing auxiliary particles and showed on general grounds
that the threshold exponents of X-ray absorption and
photoemission spectra can be extracted from one particle properties, namely
the auxiliary fermion and slave boson Green's functions respectively.
In this section a systematic self-consistent approximation is formulated
to calculate these functions.
As a minimal requirement the constraint $Q=1$ has to be fulfilled in any
approximate theory. The constraint is closely related to the invariance of the
system under a simultaneous local (in time) gauge transformation $f(\tau) \to
e^{\Theta(\tau)}f(\tau)$, $b(\tau) \to e^{\Theta(\tau)}b(\tau)$. The
Lagrange multiplier $\lambda$ assumes the role of a local gauge field and
transforms as $\lambda \to \lambda + i \partial \Theta / \partial \tau$. Any
approximate scheme respecting the gauge symmetry will preserve the charge $Q$
in time. The simultaneous transformations $f(\tau) \to
e^{\Theta(\tau)}f(\tau)$, $c_{\V{k}}(\tau) \to e^{\Theta(\tau)}c_{\V{k}}(\tau)$,
$\mu(\tau) \to \mu(\tau)+i\partial \Theta / \partial \tau$ lead to the
conservation of the total fermion number $n_f+\sum_{\V{k}} c^\dag_{\V{k}}
c^{\phantom{\dag}}_{\V{k}} = \mbox{const.}$ where $\mu$ is the chemical potential of
the conduction electrons (we choose $\mu=0$). Any theory which
preserves these symmetries is called conserving and may be generated by
functional derivation from a generating functional $\Phi$ of closed skeleton
diagrams \cite{Baym61}.
\paragraph*{NCA. ---}
We are interested in the limit of weak hybridization $V$. So let us
first consider the lowest order approximation. The conserving
approximation scheme requires the self-energies to be determined
self-consistently,
\begin{figure}
\epsfxsize7.7cm
\centering\leavevmode\epsfbox{figure2.eps}
\vspace*{0.5cm}
\caption{(a) Diagrammatic representation of the NCA generating functional.
(b) and (c) display the pseudo\-fermion and slave boson self-energies
derived from the NCA functional by functional derivation.}
\label{fig2}
\end{figure}
\noindent which amounts to an infinite resummation of
perturbation theory even if only the lowest order skeleton diagram is
kept (which is known as the ``non-crossing-approximation'' (NCA)
\cite{NCA}, see Fig.~\ref{fig2}). The NCA is known to yield good
results in the absence of or sufficiently far away from a Fermi liquid
fixed point \cite{Cox93,Kroha98}. Hence the NCA is not appropriate in
the X-ray problem. The reason is that no parquet diagrams (see
Fig.~\ref{fig5}) are included in the lowest order approximation.
By functional derivation of $\Phi$ one obtains for
the slave particle self-energies $\Sigma_f=\delta \Phi / \delta G_f$,
$\Sigma_b=\delta \Phi / \delta G_b$ which are diagramatically given in
Fig.~\ref{fig2} and yield the set of coupled integral equations
\begin{eqnarray}
\label{NCA_Gleichungen}
\Sigma_f(\epsilon)&=&V^2\int_{-\infty}^{\infty} \frac{du}{\pi} \,
G_b(\epsilon+u)A_c(-u)f(u) \nonumber \\
\Sigma_b(\epsilon)&=&V^2\int_{-\infty}^{\infty} \frac{du}{\pi} \,
G_f(u+\epsilon)A_c(u)f(u) \;
\end{eqnarray}
where $A_c(\epsilon)$ is the non-interacting local conduction electron
spectral density. At zero temperature $T=0$ the integral equations
can be rewritten as ordinary differential equations (with a constant
density of states for the conduction electrons and for $\epsilon \to
0$) \cite{MH84}
\begin{eqnarray}
\frac{\partial}{\partial \epsilon} \frac{1}{A_f(\epsilon)}
&\sim& N(0)V^2A_b(\epsilon) \nonumber \\
\frac{\partial}{\partial \epsilon} \frac{1}{A_b(\epsilon)}
&\sim& N(0)V^2A_f(\epsilon) \; .
\end{eqnarray}
The solution displays the well-known infrared singularities
$A_{f,b}(\epsilon) \propto \epsilon^{-\alpha_{f,b}}\quad(\epsilon \to
0)$ where $\alpha_{f,b}=1/2$. These exponents obviously differ from
the exact results discussed before [Eqs.~(\ref{boson_exponent}) and
(\ref{fermion_exponent})].
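The NCA exponent $1/2$ can be checked directly by integrating the two coupled differential equations numerically; a sketch, where we set $N(0)V^2=1$ and start from the symmetric power-law solution:

```python
import numpy as np

# Integrate d(1/A_f)/d(eps) = c A_b,  d(1/A_b)/d(eps) = c A_f,  c = N(0) V^2 = 1,
# by forward Euler on a logarithmic energy grid.
c = 1.0
eps = np.logspace(-6, -1, 2001)
u = np.empty_like(eps)                   # u = 1/A_f
v = np.empty_like(eps)                   # v = 1/A_b
u[0] = v[0] = np.sqrt(2.0 * c * eps[0])  # symmetric initial condition
for i in range(len(eps) - 1):
    de = eps[i+1] - eps[i]
    u[i+1] = u[i] + de * c / v[i]        # u' = c A_b = c / v
    v[i+1] = v[i] + de * c / u[i]        # v' = c A_f = c / u

# Log-log slope of A_f = 1/u at low energies gives the threshold exponent.
slope = np.polyfit(np.log(eps[1000:]), np.log(1.0 / u[1000:]), 1)[0]
print(f"fitted exponent: {-slope:.3f}")
```

The fitted exponent comes out close to $1/2$, reproducing the $n_d$-independent NCA result discussed above.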
Hence the NCA is not even in qualitative agreement with the exact
Fermi liquid properties of the model; it shows no dependence of the
exponents on the filling factor $n_d$ of the deep level. This is due to
the lack of vertex corrections which have to be included in infinite
orders of perturbation theory, because it can be shown by
power-counting arguments that there are no corrections to the NCA
exponents in any finite order \cite{Cox93}.
\begin{figure}
\epsfxsize7.7cm
\centering\leavevmode\epsfbox{figure3.eps}
\vspace*{0.5cm}
\caption{Diagramatic representation of the Bethe-Salpeter equations for the
$T\/$-matrices in Eqs.~(\ref{T_fc-channel}) and (\ref{T_bc-channel}),
respectively. The analytically continued equations, which are calculated
numerically, are discussed in appendix \ref{ctma_equations}.}
\label{fig3}
\end{figure}
\paragraph*{CTMA. ---}
We have to include the major singularities in each order of self-consistent
perturbation theory. These singularities emerge in the conduction electron and
pseudofermion $T\/$-matrix ($T_{fc}$) as well as in the conduction electron
and slave boson $T\/$-matrix ($T_{bc}$). In order to preserve gauge
invariance, self-consistency has to be imposed: the self-energies are
functionals of the Green's functions which in turn are expressed in terms of
self-energies, closing the set of self-consistent equations. The summation of
the corresponding ladder diagrams can be performed by solving the integral
equations for the $T\/$-matrices for the pseudofermions (see Fig.~\ref{fig3})
\cite{Kroha97}
\widetext
\top{-2.8cm}
\begin{eqnarray}
T_{fc}(i\omega_n,i\omega'_n;i\Omega_m) &=& V^2 G_b(i\omega_n+i\omega'_n-i\Omega_m)
\nonumber\\
&-&\frac{V^2}{\beta}\sum_{i\omega''_n}G_b(i\omega_n+i\omega''_n-i\Omega_m)
G_f(i\omega''_n)G_c(i\Omega_m-i\omega''_n)
T_{fc}(i\omega''_n,i\omega'_n;i\Omega_m) \;,
\label{T_fc-channel}
\end{eqnarray}
and the slave-bosons
\begin{eqnarray}
T_{bc}(i\nu_m,i\nu'_m;i\Omega_n) &=& V^2 G_f(i\nu_m+i\nu'_m-i\Omega_n)
\nonumber\\
&-&\frac{V^2}{\beta}\sum_{i\nu''_m}G_f(i\nu_m+i\nu''_m-i\Omega_n)
G_b(i\nu''_m)G_c(-i\Omega_n-i\nu''_m)
T_{bc}(i\nu''_m,i\nu'_m;i\Omega_n) \;.
\label{T_bc-channel}
\end{eqnarray}
\narrowtext
Here $\omega_n, \omega'_n, \omega''_n$ are fermionic frequencies
($\omega_n=(2n+1)\pi/\beta$), $\nu_m, \nu'_m, \nu''_m$ are bosonic
frequencies ($\nu_m=2m\pi/\beta$), and the center of mass frequency
$\Omega_{m,n}$ is bosonic in the case of $T_{fc}$ and fermionic for
$T_{bc}$. The self-energies $\Sigma_f$ and $\Sigma_b$
\begin{eqnarray}
\Sigma_f(i\omega_n) &=& \Sigma_f^{\mbox{\scriptsize{NCA}}}(i\omega_n) + \Sigma_f^{fc}(i\omega_n)
+ \Sigma_f^{bc}(i\omega_n) \\
\Sigma_b(i\nu_m) &=& \Sigma_b^{\mbox{\scriptsize{NCA}}}(i\nu_m) + \Sigma_b^{fc}(i\nu_m)
+ \Sigma_b^{bc}(i\nu_m)
\end{eqnarray}
calculated from $T_{fc}$ and $T_{bc}$, then follow from a generating
functional $\Phi$ (see Fig.~\ref{fig4}) by functional derivation. The
explicit expressions are given in appendix \ref{ctma_equations}.
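The structure of these ladder summations, a geometric series generated by a Bethe-Salpeter kernel, can be illustrated by a toy discretization on a frequency grid of the same size as used in our numerics below; the kernel and source here are random stand-ins, not the actual CTMA integrands:

```python
import numpy as np

# Toy Bethe-Salpeter equation  T = S + K @ T  on a grid of N frequency points,
# solved by matrix inversion, as done for the T-matrix equations in the text.
rng = np.random.default_rng(0)
N = 200                                              # frequency grid size
K = 0.3 * rng.standard_normal((N, N)) / np.sqrt(N)   # toy ladder kernel (contractive)
S = rng.standard_normal(N)                           # toy source (V^2 G_b analogue)

T = np.linalg.solve(np.eye(N) - K, S)                # T = (1 - K)^{-1} S
# Equivalently, T resums the ladder series S + K S + K^2 S + ...
```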
\begin{figure}
\vspace*{0.4cm}
\epsfxsize7.7cm
\centering\leavevmode\epsfbox{figure4.eps}
\vspace*{0.5cm}
\caption{Diagrammatic representation of the CTMA generating
functional. The free energy diagram with two
conduction electron lines does not appear,
since it is not a skeleton diagram.}
\label{fig4}
\end{figure}
\section{Comparison with renormalized parquet equations}
\label{comparison}
The CTMA is closely related to the parquet equation approach by
Nozi{\`e}res {\em et al.\/} In Ref.~\cite{Noz1} these authors investigate the
X-ray model (\ref{Xray_Hamiltonian}) by the methods of perturbation
theory. Even to the lowest order one must sum the so-called parquet
diagrams, in close analogy with the Abrikosov theory of the Kondo
effect \cite{Abrikosov65}. In this approximation Mahan's prediction
\cite{Mahan67} of the singularity in the X-ray absorption spectrum was
first confirmed. In a succeeding paper \cite{Noz2} the many-body
approach was generalized to include self-energy and vertex
renormalization in a self-consistent fashion. This self-consistent
formalism describes the reaction of divergent fluctuations on
themselves, and should, therefore, be useful in other more complicated
problems, such as the Kondo effect.
\begin{figure}
\epsfxsize7.5cm
\centering\leavevmode\epsfbox{figure5.eps}
\vspace*{0.5cm}
\caption{(a) Vertex renormalization and self-energy reproduced from
the parquet equation approach \protect\cite{Noz2}. These diagrams are
obtained from the corresponding ones in (b) by contracting the boson-lines
1, 2 and 3. The CTMA, therefore, contains the parquet contributions
of Ref.~\protect\cite{Noz2} as a diagrammatic subclass.}
\label{fig5}
\end{figure}
In Ref.~\cite{Noz2} it is shown that the significant contributions in
logarithmic accuracy to the renormalized interaction and the deep level
self-energy are given by the diagrams reproduced in Fig.~\ref{fig5} (a).
Both graphs are included in the CTMA (see Fig.~\ref{fig5} (b)):
By collapsing the boson lines into points, i.e. by integrating out
the high energy bosonic degree of freedom in the strong coupling region
($n_d \to 1$) as done in section \ref{representation}, it is seen that
the X-ray interaction kernel (Fig. \ref{fig5} (a), left) can be
extracted from the $T_{bc}\/$-matrix, and
the deep level self-energy (Fig. \ref{fig5} (a), right)
is already included in the NCA. For weak coupling
($n_d \to 0$) analogous results are obtained by integrating out
the pseudofermionic degree of freedom and then interchanging bosons and
fermions, compare section \ref{representation}. The {\em self-consistent}
evaluation of these diagrams represents the renormalized
parquet analysis for the pseudoparticles.
{\em The advantage of our formulation is
that it is valid both in the weak coupling and in the strong coupling regime,
with symmetrical expressions in these two regions}.
The symmetry between weak and strong coupling is also visible in the
results for the threshold exponents (Fig.~\ref{fig7}).
Since the CTMA is not
restricted to parquet diagrams (which give the right asymptotic behaviour
only for $V \to 0$), but goes beyond the parquet approximation, one may
expect that its validity extends beyond the weak and the strong coupling
limits and interpolates correctly between these regimes.
This will be seen in the following section.
\section{Numerical results}
\label{results}
The self-consistent solutions are obtained by first solving the linear
Bethe-Salpeter equations (\ref{T_fc_matrix}) and (\ref{T_bc_matrix})
for the $T\/$-matrices by matrix inversion on a grid of 200 frequency
points. First we insert NCA Green's functions into the $T\/$-matrix
equations. From the $T\/$-matrices the auxiliary particle
self-energies $\Sigma_f$ and $\Sigma_b$ are calculated corresponding
to Eqs.~(\ref{Sigma_f_fc}) and (\ref{Sigma_b_fc}), which give the
respective Green's functions. This process is iterated until
\begin{figure}
\epsfxsize7.7cm
\centering\leavevmode\hspace*{-0.4cm}\epsfbox{figure6.eps}
\vspace*{0.5cm}
\caption{Auxiliary particle spectral functions $A_f$ and $A_b$ in the weak
coupling regime in a logarithmic plot.
The energies are in units of the
half band width $D$. The slopes of the dashed lines indicate the exact
threshold exponents.}
\label{fig6}
\end{figure}
convergence is reached \cite{convergence}. The $T\/$-matrices show
nonanalytic behavior in the infrared limit.
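The iteration described above is a standard fixed-point scheme. As a minimal illustrative sketch (in Python; the callables standing in for the Dyson and $T$-matrix steps are hypothetical placeholders, not part of the actual CTMA code):

```python
import numpy as np

def self_consistent_loop(sigma_from_green, green_from_sigma, sigma0,
                         tol=1e-8, max_iter=500, mix=0.5):
    """Generic fixed-point iteration of the type used for the CTMA equations:
    Green's functions -> T-matrices/self-energies -> new Green's functions,
    repeated until the self-energies stop changing."""
    sigma = sigma0
    for it in range(max_iter):
        G = green_from_sigma(sigma)          # Dyson-equation step
        sigma_new = sigma_from_green(G)      # T-matrix / self-energy step
        if np.max(np.abs(sigma_new - sigma)) < tol:
            return G, sigma_new, it
        sigma = mix * sigma_new + (1 - mix) * sigma  # linear mixing for stability
    raise RuntimeError("no convergence")
```

The linear mixing step is a common stabilization device for such loops; the slow or absent convergence reported below for intermediate $n_d$ is a property of the physical equations, not of this toy driver.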
As can be seen from Fig.~\ref{fig6} the fermion and boson spectral
functions display power law behaviour at low frequencies
\cite{small_ed}. The power law behavior emerges in the infrared
limit, i.e.~for energies smaller than the low energy scale (which is
$E_d$). At even smaller frequencies there is always a deviation from the
power law behaviour due to finite temperature. The exponents
extracted from the spectral functions at low but finite temperature
for various values of the deep
level filling $n_d$ in Fig.~\ref{fig7} are in good numerical agreement with the
exact results in the regions $n_d \in [0.0,0.3]$ and $n_d \in [0.7,1.0]$.
Note that in contrast to the $n_d$-dependent exponents within the CTMA the
NCA spectral functions always diverge with $n_d$-independent exponents
$\alpha_f=\alpha_b=1/2$. For intermediate coupling, $n_d \in
[0.3,0.7]$, the convergence of the self-consistent
scheme is very slow, and we find no
stable numerical solution. It remains to be seen whether
this is due to numerical instabilities or possibly due to the
importance of further vertex corrections beyond the CTMA.
\begin{figure}
\vspace*{0.4cm}
\epsfxsize7.9cm
\centering\leavevmode\hspace*{-0.4cm}\epsfbox{figure7.eps}
\vspace*{0.5cm}
\caption{Auxiliary particle threshold exponents extracted from spectra as
in Fig.~\ref{fig6} for a number of deep level fillings $n_d$. The solid
lines represent the exact values derived in Eqs.~(\ref{boson_exponent})
and (\ref{fermion_exponent}).}
\label{fig7}
\end{figure}
A comparison of the CTMA results with the weak-coupling treatment,
which corresponds to $n_d \to 0$ in our model, shows that for finite
interaction strength renormalization effects are important (see
Fig.~\ref{fig8}). The connection between $n_d$ and $E_d/\Gamma$ is
exactly given by Friedel's sum rule $n_d=1/2-\arctan(E_d/\Gamma)/\pi$.
Again we mention the $n_d$ dependence of the exponent $\alpha_f$ in
contrast to the NCA result: To recover the Fermi liquid properties of
the model one thus has to go far beyond the lowest order
self-consistent approximation.
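The Friedel sum rule quoted below connecting $n_d$ and $E_d/\Gamma$ can be evaluated directly; a trivial Python helper (purely illustrative, not part of the original calculation):

```python
import math

def n_d(E_d, Gamma):
    """Deep-level filling from Friedel's sum rule:
    n_d = 1/2 - arctan(E_d / Gamma) / pi."""
    return 0.5 - math.atan(E_d / Gamma) / math.pi
```

The weak-coupling limit $n_d \to 0$ corresponds to $E_d \gg \Gamma$, the strong-coupling limit $n_d \to 1$ to $E_d \ll -\Gamma$, with $n_d = 1/2$ at $E_d = 0$.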
\begin{figure}
\vspace*{0.4cm}
\epsfxsize7.9cm
\centering\leavevmode\epsfbox{figure8.eps}
\vspace*{0.5cm}
\caption{Comparison of the CTMA results and the weak-coupling calculation
\protect \cite{Noz2,Noz1} for the threshold exponent of X-ray absorption spectra.}
\label{fig8}
\end{figure}
\section{Conclusion}
In summary, we have calculated the exponents of threshold singularities
in the X-ray photoemission and absorption spectra, using a standard
many-body technique, where the empty and the singly occupied core level
are represented by separate fields, auxiliary bosons and pseudofermions,
respectively, coupled to the conduction electrons via a hybridization
interaction. In this formulation, the X-ray problem is described by a
spinless Anderson impurity model in pseudoparticle representation, and
the initial condition of sudden creation of the
impurity potential is implemented by the constraint
that all expectation values of local fermion or boson fields must be
calculated in the Hilbert subspace with pseudoparticle number
$Q=0$. The latter can be fulfilled exactly.
It was further shown that the X-ray photoemission cross section or core
level spectral function is given by the boson spectral function,
while the X-ray absorption cross section is proportional to the total
fermion hybridization vertex. Therefore, the X-ray photoemission and
absorption threshold exponents are identical to the infrared exponents
of the auxiliary boson and pseudofermion spectral functions, respectively.
It follows that both X-ray photoemission and
absorption are solely governed by the orthogonality catastrophe, and
there is no separation into single particle and excitonic effects.
In a more general context, the generalized SU($N$)$\times$SU($M$) Anderson
impurity models, classified by the spin degeneracy $N$ of
the local orbital and the number $M$ of degenerate conduction electron channels,
may be considered as standard models to describe strong
correlations induced by the restriction of no double occupancy of sites.
Depending on their symmetry, these models display Fermi
($N=M=1$ or $N\geq M+1$) or non-Fermi liquid behavior ($2\leq N\leq M$) at low
temperature \cite{Cox93}.
The present case of the spinless Anderson impurity model in slave boson
representation ($N=1$, $M=1$), Eq. (\ref{Anderson_Hamiltonian}),
may be considered as the most stringent test case for the development of new
methods for strongly correlated systems. This is because
for this case earlier approximation schemes like the
non-crossing approximation (NCA) fail in the most pronounced way to
even qualitatively describe the low-energy Fermi liquid behavior of this model,
i.e. the $n_d$ dependence of the infrared threshold exponents, while
in the non-Fermi liquid case the NCA gives the correct exponents at least
in the Kondo limit of these models \cite{Cox93}.
In the present paper we have applied a recently developed approximation scheme,
the conserving $T\/$-matrix approximation (CTMA) to the
$N=1$, $M=1$ Anderson impurity model to calculate
the X-ray photoemission and absorption threshold exponents on a
common footing. The CTMA includes the complete
subclass of diagrammatic contributions which,
in the limits of weak ($n_d \rightarrow 0$) and strong ($n_d \rightarrow 1$)
impurity scattering potential, reduce to the renormalized parquet diagrams,
which have been shown by Nozi{\`e}res et al.~\cite{Noz2} to
describe the exact infrared singular behavior in the
weak coupling regime of the X-ray problem. As a result, the CTMA recovers
the correct X-ray photoemission and absorption exponents in a wide region
around weak as well as strong coupling. In connection with earlier
results \cite{Kroha97} on the spin $1/2$ Anderson impurity model
($N=2$, $M=1$), this makes the CTMA the first standard many-body technique
to correctly describe the Fermi liquid regime of the Anderson impurity models
in a systematic way, including the smooth crossover to the high temperature
behavior.
We are grateful for discussions with J.~Brinkmann, T. A. Costi and T.~Kopp.
T.S.~acknowledges the support of the DFG-Graduiertenkolleg ``Kollektive
Ph{\"a}nomene im Festk{\"o}rper''. This work was supported in part by
SFB 195 of the Deutsche Forschungsgemeinschaft.
Computer support was provided by the
John-von-Neumann Institute for Computing, J{\"u}lich.
astro-ph/9912286
\section{Introduction: Have we detected primeval galaxies?}
This paper is directed towards the question: have we already detected primeval galaxies?
The characteristics of a primeval galaxy might be
\begin{itemize}
\item high redshift ($z > 1$)
\item undergoing a major episode of star formation
(to form $10^{11} M_{\odot}$ in $\leq 10^{9}$ yrs, we need a star formation rate
$\dot{M}_{*} \geq 10^{2} M_{\odot} yr^{-1}$)
\item high gas fraction, say $M_{gas} \sim 10^{11} M_{\odot}$
\item evidence of interactions, mergers, dynamical youth.
\end{itemize}
First efforts to find such galaxies centred on spectroscopic searches for
Ly$\alpha$-emitting galaxies (see eg the review by Djorgovski and Thompson 1992). While
examples of such galaxies are now being found, these early surveys suggested
that either star formation must be a more protracted process, occurring in smaller
bursts (as expected in many bottom-up scenarios), or that dust extinction must play
a large role.
Steidel et al (1996) have shown that star-forming galaxies at z $>$ 3 can be found
through deep ground-based photometry in the U, G and R bands. The high redshift galaxies manifest
themselves as U-band 'dropouts' as the Lyman limit absorption is redshifted into the U
band. Over 500 spectroscopically confirmed high redshift galaxies have now been found
by this technique. Many have weak or non-existent Lyman $\alpha$ emission, which accounts
for the lack of success of the spectroscopic surveys. The role of dust in these galaxies
has been discussed by Pettini et al (1997, 1998a,b), Meurer et al (1997, 1998), Calzetti (1998),
Dickinson (1998) and Steidel et al (1998). Even at redshift 3 it appears that dust
extinction may be appreciable. However star formation rates in these galaxies are not
exceptional, typically 1-30 $M_{\odot}/yr$.
The first evidence for galaxies with very high rates of star formation came from
infrared surveys. Starburst galaxies had been first identified by Weedman (1975) from their
ultraviolet excesses and characteristic emission-line spectra. Prior to
the launch of IRAS, balloon and airborne measurements had demonstrated
that $L_{fir} > L_{opt}$ for several starburst galaxies (see review by Sanders and Mirabel 1996).
Joseph et al (1984) proposed that interactions and mergers might play
a role in triggering starbursts.
One of the major discoveries of the IRAS mission was the existence of ultraluminous
infrared galaxies,
galaxies with $L_{fir} > 10^{12} h_{50}^{-2} L_{\odot}$ ($h_{50} = H_{o}/50$).
The peculiar Seyfert 2 galaxy Arp 220 was recognised as having an exceptional far infrared
luminosity early in the mission (Soifer et al 1984).
The conversion from far infrared luminosity to star formation rate has been discussed
by many authors (eg Scoville and Young 1983, Thronson and Telesco 1986, Rowan-Robinson et al 1997).
Rowan-Robinson (1999) has given an updated estimate of how the star-formation
rate can be derived from the far infrared luminosity, finding
$\dot{M}_{*,all} /[L_{60}/L_{\odot}] = 2.2\,(\phi/\epsilon) \times 10^{-10}$,
where $\phi$ takes account of the uncertainty in the IMF (= 1, for a standard
Salpeter function) and $\epsilon$ is the fraction of uv light absorbed by dust, estimated
to be 2$/$3 for starburst galaxies (Calzetti 1998).
We see that the star-formation rates in
ultraluminous galaxies are $ > 10^{2} M_{\odot} yr^{-1}$. However the time-scale
of luminous starbursts may be in the range $10^7 - 10^8$ yrs (Goldader et al 1997), so the
total mass of stars formed in the episode may typically be only 10$\%$ of the mass of a galaxy.
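Plugging numbers into the Rowan-Robinson (1999) relation above illustrates the scales involved (an illustrative Python sketch; the default parameter values follow those quoted in the text):

```python
def sfr_from_L60(L60_solar, phi=1.0, epsilon=2.0 / 3.0):
    """Star formation rate in M_sun/yr from the 60 micron luminosity in L_sun:
    M_dot = 2.2 (phi/epsilon) x 1e-10 L60, with phi = 1 for a standard
    Salpeter IMF and epsilon = 2/3 the fraction of uv light absorbed by dust
    in starburst galaxies."""
    return 2.2 * (phi / epsilon) * 1e-10 * L60_solar

# An ultraluminous galaxy with L60 = 1e12 L_sun forms ~330 M_sun/yr;
# a hyperluminous one with L60 = 1e13 L_sun forms ~3300 M_sun/yr.
```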
In this paper I discuss an even more extreme class of infrared galaxy, hyperluminous infrared
galaxies, which I define to be those with rest-frame infrared (1-1000 $\mu$m) luminosities, $L_{bol,ir}$,
in excess of $10^{13.22} h_{50}^{-2} L_{\odot}$ (=$10^{13.0} h_{65}^{-2} L_{\odot}$).
For a galaxy with an M82-like starburst spectrum this corresponds to
$L_{60} \geq 10^{13} h_{50}^{-2} L_{\odot}$, since the bolometric correction at 60 $\mu$m is 1.63.
Sanders and Mirabel (1996) have a slightly
more stringent criterion, $L_{bol,ir} \geq 10^{13} h_{75}^{-2} L_{\odot}$, but in practice they
use an estimate of $L_{bol}$ based on IRAS fluxes, which results in a demarcation almost identical
to that adopted here. While the emission at
rest-frame wavelengths 3-30 $\mu$m in these galaxies is often due to an AGN dust torus (see below),
I argue that their emission at rest-frame wavelengths
$\geq 50 \mu$m is primarily due to extreme starbursts, implying star formation rates
in excess of 1000 $M_{\odot}\,yr^{-1}$. These then are excellent candidates for being primeval galaxies, galaxies
undergoing a very major episode of star formation.
A preliminary version of these arguments was given by Rowan-Robinson (1996). Granato et al (1996)
modelled the seds of 4 hyperluminous galaxies (F10214, H1413, P09104 and F15307) in terms
of an AGN dust torus model. Hughes et al (1997)
argued that hyperluminous galaxies can not be thought of as primeval galaxies on the basis of their
estimates of the gas mass and star formation rates in these galaxies. However I show below that
their arguments are not compelling. Fabian et al (1994, 1998)
argued from X-ray evidence that 09104+4109 and 15307+325 are obscured AGN and this
interpretation is
supported for these objects by the non-detection of CO and submm continuum radiation
(Yun and Scoville 1999, Evans et al 1999). However the very severe upper limits on X-ray emission set by
Wilman et al (1999) for several hyperluminous infrared galaxies led the latter to conclude
that the objects might be powered by starbursts. McMahon et al (1999)
interpret the submillimetre emission from high redshift quasars and other hyperluminous infrared
galaxies as powered, at least in part, by the AGN rather than by star formation.
On the other hand Frayer et al (1999a,b)
favour a starburst interpretation for 14011+0252 and 02399-0136.
I discuss these arguments further below and try to resolve the question of what fraction of the
far infrared luminosity is powered by a starburst or AGN. The use of model spectral energy
distributions derived from accurate radiative transfer codes is a significant advance on some
previous work.
I assume throughout that $H_o$ = 50, $\Omega_o$ = 1. With lower $\Omega_o$, more galaxies,
especially at higher redshifts, would satisfy the definition adopted.
\section{Properties of ultraluminous infrared galaxies}
Sanders et al (1988) discussed the properties of 10
IRAS ultraluminous galaxies with 60 $\mu$m fluxes $>$ 5 Jy and concluded
that (a) all were interacting, merging or had peculiar morphologies, (b) all
had AGN line spectra. On the other hand Leech et al (1989) found that only 2 of their
sample of 6 ultraluminous
IRAS galaxies had an AGN line spectrum. Leech et al (1994) found that 67
$\%$ of a much larger sample (42) of
ultraluminous galaxies were interacting, merging or peculiar. Lawrence et al (1989) had
found a much lower fraction
amongst galaxies of high but less extreme infrared luminosity. The
incidence of interacting, merging or peculiar galaxies by ir luminosity is summarised in
Fig 1 of Rowan-Robinson (1991): the proportion of galaxies which are peculiar, interacting
or merging increases steadily from 10-20$\%$ at low ir luminosities to $> 80\%$ for
ultraluminous ir galaxies.
The situation on point (b) remains controversial, though, since Lawrence et al (1999)
find only a fraction
21$\%$ of 81 ultraluminous galaxies in the QDOT sample to have AGN spectra (but on the basis
only of low-resolution spectra). Veilleux et al (1995)
find for a smaller sample (21 galaxies) that 33$\%$ of ultraluminous galaxies are Seyfert 1 or 2,
with an additional 29$\%$ having liner spectra, which they also classify as AGN. Veilleux et al
(1999a) find that 24$\%$ of 77 galaxies with $10^{12} < L_{ir} < 10^{12.3}$ are Seyfert 1 or 2,
increasing to 49$\%$ of 31 galaxies with $10^{12.3} < L_{ir} < 10^{12.8}$ (for $H_o = 75$). Veilleux et al
(1999b) find, from near-ir imaging studies, no evidence that liners should be considered to be AGN.
Sanders (1999) reports that most nearby ultraluminous ir galaxies contain an AGN at some level.
Rowan-Robinson and Crawford (1989) found that their standard starburst galaxy model gave
an excellent fit to the far
infrared spectrum of Mk 231, an archetypal ultraluminous ir galaxy. However their models
for Arp 220 appeared to
require a much higher optical depth in dust than the typical starburst galaxy. Condon et
al (1991) showed that
the radio properties of most ultraluminous ir galaxies were consistent with a starburst
model and argued that many of these
galaxies required an exceptionally high optical depth. This suggestion was confirmed by
the detailed models of
Rowan-Robinson and Efstathiou (1993) for the far infrared spectra of the Condon et al
sample.
Quasars and Seyfert galaxies, on the other hand, tend to show a characteristic mid
infrared continuum, broadly flat
in $\nu S_{\nu}$ from $3-30 \mu$m. This component was modelled by
Rowan-Robinson and Crawford (1989) as dust in the
narrow-line region of the AGN with a density distribution $n(r) \propto r^{-1}$.
More realistic models of this
component based on a toroidal geometry are given by Pier and Krolik
(1992), Granato and Danese (1994), Rowan-Robinson (1995), Efstathiou and
Rowan-Robinson (1995). Rowan-Robinson
(1995) suggested that most quasars contain both (far ir) starbursts and (mid ir)
components due to (toroidal) dust
in the narrow line region.
Rowan-Robinson and Crawford (1989) were able to fit the IRAS colours and spectral
energy distributions of galaxies detected in all 4 IRAS bands with a mixture of 3 components,
emission from interstellar dust ('cirrus'), a starburst and an AGN dust torus. Recently
Xu et al (1998) have shown that the same 3-component approach can be used to fit
the ISO-SWS spectra of a large sample of galaxies. To accommodate the Condon et al (1991)
and Rowan-Robinson and Efstathiou (1993) evidence for higher optical depth starbursts,
Ferrigno et al (1999) have extended the Rowan-Robinson and Crawford (1989) analysis
to include a fourth component, an Arp220-like, high optical depth starburst, for
galaxies with log $L_{60} >$ 12. Efstathiou et al (1999) have given improved
radiative transfer models for starbursts as a function of the age of the starburst,
for a range of initial dust optical depths.
Sanders et al (1988) proposed, on the basis of spectroscopic arguments for a sample of 10 objects,
that all ultraluminous infrared galaxies contain an AGN and that the far infrared emission is
powered by this. Sanders et al (1989) proposed a specific model, in the context of
a discussion of the infrared emission from PG quasars, that
the far infrared emission comes from the outer parts of
a warped disk surrounding the AGN. This is a difficult hypothesis to disprove,
because if an arbitrary density distribution of dust is allowed at large distances from the
AGN, then any far infrared spectral energy distribution could in fact be generated.
In this paper I consider whether the AGN dust torus model of Rowan-Robinson (1995)
can be extended naturally to explain the far infrared and submillimetre emission
from hyperluminous infrared galaxies, but conclude that in many cases this does
not give a satisfactory explanation. I also place considerable weight on
whether molecular gas is detected in the objects via CO lines.
Rigopoulou et al (1996) observed a sample of ultraluminous infrared galaxies from
the IRAS 5 Jy sample at submillimetre wavelengths, with the JCMT, and at X-ray wavelengths,
with ROSAT. They found that most of the far infrared and submillimetre spectra were fitted well
with the starburst model of Rowan-Robinson and Efstathiou (1993). The ratio of bolometric
luminosities at 1 keV and 60 $\mu$m lies in the range $10^{-5} - 10^{-4}$ and is
consistent with a starburst interpretation of the X-ray emission in almost all cases.
Even more conclusively, Lutz et al (1996) and Genzel et al (1998) have used ISO-SWS spectroscopy to show that
the majority of ultraluminous ir galaxies are powered by a starburst rather than an AGN.
\section{Hyperluminous Infrared Galaxies}
In 1988 Kleinmann et al identified P09104+4109 with a z = 0.44 galaxy, implying a total
far infrared luminosity of
$1.5\times 10^{13} L_{\odot}$, a factor 3 higher than any other ultraluminous galaxy seen to that date.
In 1991, as part of a
programme of systematic identification and spectroscopy of a sample of
3400 IRAS Faint Source Survey (FSS) sources, Rowan-Robinson et al discovered IRAS F10214+4724,
an IRAS galaxy with z = 2.286 and a far
infrared luminosity of $3\times 10^{14} h_{50}^{-2} L_{\odot}$. This object
appeared to presage an entirely new class of infrared galaxies. The detection of a huge
mass of CO by Brown and Vanden Bout (1991), $10^{11} h_{50}^{-2} M_{\odot}$, confirmed by the
detection
of a wealth of molecular lines (Solomon et al 1992), and of submillimetre
emission at wavelengths 450-1250 $\mu$m (Rowan-Robinson et al 1991,1993, Downes
et al 1992), implying a huge mass of dust, $10^{9} h_{50}^{-2} M_{\odot}$,
confirmed that this was an exceptional object. Early models suggested this might be
a giant elliptical galaxy
in the process of formation (Elbaz et al 1992). Simultaneously with the growing evidence
for an exceptional starburst
in F10214, the Seyfert 2 nature of the emission line spectrum
(Rowan-Robinson et al 1991, Elston et al 1994a) was supported by the evidence for very
strong optical polarisation
(Lawrence et al 1993, Elston et al 1994b). Subsequently it has become clear that F10214
is a gravitationally lensed system
(Graham and Liu 1995, Broadhurst and Lehar 1995, Serjeant et al 1995, Eisenhardt et al 1996)
with a magnification
of about 10 at far infrared wavelengths, but not much greater than that (Green and
Rowan-Robinson 1996). Even when
the magnification of 10 is allowed for, F10214 is still an exceptionally luminous far ir source.
In 1992 Barvainis et al successfully detected submillimetre emission from the z=2.546
'clover-leaf' gravitationally
lensed QSO, H1413+117, which suggested that H1413 is of similar luminosity to F10214.
Subsequently (Barvainis et al 1995) they realized that the galaxy was an IRAS FSC source.
Here I want to place emphasis on the hyperluminous galaxies detected as a result of
unbiased surveys at far infrared (and submillimetre) wavelengths.
The program of follow-up of IRAS FSS sources which led to the discovery of F10214
(Rowan-Robinson et al 1991, Oliver et al 1996) has also resulted in the discovery
of a further 6 galaxies or quasars with far ir luminosities $> 10^{13.22} h_{50}^{-2} L_{\odot}$
(McMahon et al 1999). Four galaxies from the PSCz survey (Saunders et al 1996) of IRAS galaxies
brighter than 0.6 Jy at 60 $\mu$m fall into the hyperluminous category (a further two are blazars,
3C345 and 3C446, and these are not considered further here), and a further example has been
found by Stanford et al (1999) in a survey based on a comparison of the IRAS FSS with the VLA
FIRST radio survey. Two galaxies detected in submillimetre surveys at 850 $\mu$m with SCUBA
also fall into the hyperluminous category (but one of these only because of the effect of
gravitational lensing).
Cutri et al (1994) reported a search for IRAS FSS
galaxies with 'warm' 25/60 $\mu$m colours, which yielded the
z = 0.93 Seyfert 2 galaxy, F15307+3252 (see also Hines et al 1995). Wilman et al (1999)
have reported a further 2 hyperluminous galaxies from a more recent search for 'warm'
galaxies by Cutri et al (1999, in preparation).
Dey and van Breugel (1995)
reported a comparison of the Texas radio survey
with the IRAS FSS catalogue, which resulted in 5 galaxies with far ir
luminosities $> 10^{13} h_{50}^{-2} L_{\odot}$. However three of these are present only in the
FSS Reject Catalogue and have not been able to be confirmed as far infrared sources to date. The
other two are discussed below. Four PG quasars from the list of Sanders et al (1989)
(two of which are part of the study of Rowan-Robinson (1995)), fall
into the hyperluminous category. Recently Irwin et al (1999) have found a z = 3.91
quasar which is associated with IRAS FSS source F08279+5255, the highest redshift IRAS object
to date.
Finally, inspired by the success in finding highly redshifted submillimetre continuum and
molecular line emission
in F10214, several groups have studied an ad hoc selection of very high redshift quasars
and radio-galaxies, with
several notable successes (Andreani et al 1993, Dunlop et al 1995, Isaak et al 1995, McMahon
et al 1995b, Ojik et al 1995, Ivison 1995, Omont et al 1997, Hughes et al 1997, McMahon et al 1999).
Many of these detections
imply far ir luminosities $> 10^{13.22} h_{50}^{-2} L_{\odot}$ , assuming that
the far ir spectra are typical starbursts.
In all there are now
39 hyperluminous infrared galaxies known, which are listed in Tables 1-3 according to whether they are
(1) the result of unbiased 60 $\mu$m (or submm) surveys, (2) found from comparison of known quasar and
radio-galaxy lists with 60 $\mu$m catalogues, (3) found through submillimetre observations of
known high redshift AGN. Table 4 lists some luminous infrared galaxies which do not quite
meet my criteria, but have far infrared luminosities $>10^{13.0} L_{\odot}$. But to set these in perspective
there are a further 20 PSCz galaxies which have 13.00 $< log_{10} L_{ir}/L_{\odot} <$ 13.22
(for $H_o$ = 50).
From the surveys summarised in Table 1 we can estimate that the number of hyperluminous galaxies per
sq deg brighter than 200 mJy at 60 $\mu$m is 0.0027-0.0043, which would imply that there
are 100-200 hyperluminous IRAS galaxies over the whole sky, 25 of which are listed in Tables
1 and 2.
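The all-sky estimate above is a direct scaling of the measured surface density (illustrative Python; the full sky covers 41253 square degrees):

```python
WHOLE_SKY_SQ_DEG = 41253  # area of the full sky in square degrees

def all_sky_count(per_sq_deg):
    """Scale a source surface density (per square degree) to the whole sky."""
    return per_sq_deg * WHOLE_SKY_SQ_DEG

# 0.0027-0.0043 per sq deg brighter than 200 mJy at 60 microns gives
# roughly 110-180 hyperluminous IRAS galaxies over the whole sky,
# consistent with the quoted 100-200.
```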
\section{Models for Hyperluminous Infrared Galaxies}
For a small number of these galaxies we have reasonably detailed continuum spectra from
radio to uv wavelengths.
Figures 1-17 show the infrared continua of these hyperluminous galaxies, with fits using radiative
transfer models (specifically the standard M82-like starburst model and
an Arp220-like high optical depth starburst model from Efstathiou et al
(1999), and the standard AGN dust torus model of Rowan-Robinson (1995)). More than half of those
shown have measurements at at least 9 independent wavelengths.
We now discuss the individual objects (and a few not plotted) in turn:
$\bf{F10214+4724}$
The continuum emission from F10214 was the subject of a detailed discussion by Rowan-Robinson
et al (1993).
Green and Rowan-Robinson (1995) have discussed starburst and AGN dust tori models for F10214.
Fig 1 shows M82-like and Arp 220-like starburst
models fitted to the sumbillimetre data for this galaxy. The former gives a
good fit to the latest data. The 60 $\mu$m flux requires an AGN dust torus
component. To accommodate the upper limits at 10 and 20 $\mu$m, it is necessary
to modify the Rowan-Robinson (1995) AGN dust torus model so that the maximum temperature
of the dust is 1000 K rather than 1600 K. I have also shown the effect of allowing the
dust torus to extend a further factor 3.5 in radius. This still does not account for the amplitude
of the submm emission. The implied extent of the narrow-line region for this extended AGN
dust torus model, which we
use for several other objects, would be 326 $(L_{bol}/10^{13} L_{\odot})^{1/2}$ pc,
consistent with the 60-600 $(L_{bol}/10^{13} L_{\odot})^{1/2}$ pc quoted by Netzer (1990).
Evidence for a strong starburst contribution to the ir emission from F10214
is given by Kroker et al (1996) and is supported by the high gas mass detected via
CO lines (see section 6). Granato et al (1996) attempt to model the whole sed of F10214
with an AGN dust torus model, but still do not appear to be able to account for the 60 $\mu$m
emission.
$\bf{F0023+1024}$
A starburst model fits the IRAS and ISO data (Verma et al 1999) well and there is a strong limit
on any AGN dust torus contribution.
$\bf{SMMJ02399-0136}$
A starburst model fits the submm data well and the ISO detection at 15 $\mu$m
gives a very severe constraint on any AGN dust torus component. The starburst
interpretation of the submm emission is supported by the gas mass estimated from CO detections
(Frayer et al 1999b).
$\bf{F1421+3845}$
A starburst model is the most likely interpretation of the IRAS and ISO 60-180
$\mu$m data, but there are discrepancies. There is a strong limit
on any AGN dust torus component.
$\bf{TX0052+4710}$
There is little evidence for a starburst contribution to the sed of this galaxy.
An extended AGN dust torus model fits the data reasonably well, apart from the ISO detection
by Verma et al (1999) at 180 $\mu$m.
$\bf{F08279+5255}$
An M82-like starburst is a good fit to the submm data and an AGN dust torus model is a good
fit to the 12-100 $\mu$m data. The high gas mass detected
via CO lines (Downes et al 1998) supports a starburst interpretation, though the ratio of
$L_{sb}/M_{gas}$ is on the high side (see section 6). The submm data can also
be modelled by an extension of the outer radius of the AGN dust torus and in this case the
starburst luminosities given in Table 2 will be upper limits.
$\bf{P09104+4109}$
The 100 $\mu$m upper limit places a limit on the starburst contribution and there is
no detection of CO emission in this galaxy. The 12-60 $\mu$m data can be fitted
by the extended AGN dust torus model.
$\bf{PG1148+549}$
A starburst is a natural explanation of the 100 $\mu$m excess compared to the AGN dust torus
fit to the 25 and 60 $\mu$m data, but detection in the submm and in CO would be important
for confirming this interpretation.
$\bf{PG1206+459}$
The IRAS 12-60 $\mu$m data and the ISO 12-200 $\mu$m data (Haas et al 1998) are well-fitted
by the extended AGN dust torus model and there is no evidence for a starburst.
$\bf{H1413+117}$
The submm data is well fitted by an M82-like starburst and the gas mass implied by the
CO detections (Barvainis et al 1994) supports this interpretation. The extended AGN dust torus model
discussed above does not account for the submm emission. However Granato et al
(1996) model the whole sed of H1413 in terms of an AGN dust torus model.
$\bf{15307+325}$
A starburst model gives a natural explanation for the 60-180 $\mu$m excess compared
to the AGN dust torus model required for the 6.7 and 14.3 $\mu$m emission (Verma et al
1999), but the non-detection of CO poses a problem for a starburst interpretation.
$\bf{PG1634+706}$
The IRAS 12-100 $\mu$m data and the ISO 150-200 $\mu$m data (Haas et al 1998) are well-fitted
by the extended AGN dust torus model
and an upper limit can be placed on any starburst component. The non-detection of CO is
consistent with this upper limit.
$\bf{4C0647+4134}$
An M82-like starburst model gives a reasonable fit to the submm data (although
the 1250 $\mu$m flux seems very weak). Observations
in the mid-ir are needed to constrain the AGN dust torus. Since the AGN is not
seen directly we have no constraints on its optical luminosity. We have used the
non-detection of this source by IRAS (which we take to imply S(60) $<$ 250 mJy)
to set a limit on $L_{tor}$. This limit is not strong enough to rule out an
AGN dust torus interpretation of the submm data.
$\bf{BR1202-0725}$
The submm data are well-fitted by an M82-like starburst model. The QSO is
seen directly, although with strong self-absorption in the lines
(Storrie-Lombardi et al 1996), so we can probably not use the QSO bolometric
luminosity to set a limit on $L_{tor}$.
Wilkes et al (1998) have detected this galaxy at 7, 12 and 25 $\mu$m
with ISO, which would imply that the QSO is undergoing very strong extinction.
An AGN dust torus model is capable of accounting for the whole spectrum but the starburst
interpretation of the submm emission is supported by the gas mass estimated from CO detections
(Ohta et al 1996, Omont et al 1996).
$\bf{PG1241+176}$
The ISO data of Haas et al (1999) can be fitted with an AGN dust torus model and
there is no evidence for a starburst component. The 1.3 mm flux is probably an
extrapolation of the radio continuum.
$\bf{PG1247+267}$
The ISO data of Haas et al (1999) can be fitted with an AGN dust torus model and
there is no evidence for a starburst component.
$\bf{PG1254+047}$
The ISO data of Haas et al (1999) can be fitted with an AGN dust torus model and
there is only weak evidence for a starburst component.
$\bf{BRI1335-0417}$
The submm data can be fitted with a starburst model and since the QSO
is seen directly we can use its estimated bolometric luminosity to set a limit
on $L_{tor}$, which makes it unlikely that the submm emission is from an AGN
dust torus. The starburst
interpretation is supported by gas mass estimated from CO detections
(Guilloteau et al 1997).
$\bf{PC2047+0123}$
The limit on $L_{tor}$ from the bolometric luminosity of the QSO makes it unlikely
that an AGN dust torus is responsible for the 350 $\mu$m emission. However
a starburst model can not fit both the 350 and 1250 $\mu$m observed fluxes.
Observations at other submm wavelengths may help to clarify the situation.
\medskip
For the remaining objects we have only 60 $\mu$m or single submillimetre
detections and for these we estimate their far infrared luminosity, and other properties,
using the standard
starburst model of Efstathiou et al (1999). Tables 1-4 give the luminosities inferred in
the starburst ($L_{sb}$) (and fits of the A220 model in brackets) and AGN dust torus ($L_{tor}$)
components, or limits on these,
with an indication, from the row position of the estimate, of which wavelength the estimate
is made at. In Fig 18 we show the far infrared
luminosity derived for an assumed starburst component, versus
redshift, for hyperluminous galaxies, with lines indicating observational
constraints at 60, 800 and 1250 $\mu$m. Three of the sources with
(uncorrected) total bolometric luminosities
above $10^{14} h_{50}^{-2} L_{\odot}$ are strongly gravitationally
lensed. IRAS F10214+4724
was found to be lensed with a magnification which ranges from 100 at optical wavelengths
to 10 at far infrared wavelengths (Eisenhardt et al 1996, Green and Rowan-Robinson 1996).
The `clover-leaf' lensed system H1413+117 has been found to have a magnification of
10 (Yun et al 1997). Downes et al (1998) report a magnification of 14 for F08279+5255.
Also, Ivison et al estimate a magnification of 2.5 for SMMJ02399-0136 and Frayer et al (1999a)
quote a magnification of 2.75 $\pm$0.25 for SMMJ14011+0252.
These magnifications have to be corrected for in estimating luminosities (and dust and
gas masses) and these corrections are indicated in Fig 18. It appears to be
a reasonable assumption that if a starburst luminosity in excess of $10^{14} L_{\odot}$
is measured then the source is likely to be lensed, so F14218 and TX1011 merit further
more careful study.
\begin{figure}
\epsfig{file=fig1.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for F10214, modelled with M82-type starburst
(solid curve), Arp 220-type starburst (broken curve), AGN dust torus (dotted curve, modified AGN
dust torus model - long-dashed curve).}
\end{figure}
\begin{figure}
\epsfig{file=fig2.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for F0023, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig3.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for SMMJ02399, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig4.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for F1421, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig5.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for TX0052, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig6.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for F08279, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig7.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for P09104, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig8.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for PG1148, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig9.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for PG1206, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig10.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for H1413, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig11.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for F15307, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig12.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for PG1634, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig13.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for 4C0647, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig14.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for BR1202, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig15.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for PG1241+176, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig16.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for PG1247+267, notation as for Fig 1.}
\end{figure}
\begin{figure}
\epsfig{file=fig17.ps,angle=0,width=8cm}
\caption{
Observed spectral energy distribution for PG1254+047, notation as for Fig 1.}
\end{figure}
On the other hand there is strong evidence for a population of galaxies with far ir
luminosities in the
range $1{-}3\times10^{13} h_{50}^{-2} L_{\odot}$. I have argued that in most cases the
rest-frame radiation longward
of 50 $\mu$m comes from a starburst component. The luminosities are such
as to require star formation rates in the range $3{-}10\times10^{3} h_{50}^{-2} M_{\odot}$/yr,
which would in turn generate most of the heavy elements in a $10^{11} M_{\odot}$
galaxy in $10^{7} -10^{8}$ yrs. Most of
these galaxies can therefore be considered to be undergoing their most significant
episode of star formation, ie to be in the process of `formation'.
\section{The role of AGN}
It appears to be significant that a large fraction of these objects are Seyferts,
radio-galaxies or QSOs. For the
galaxies in Tables 2 and 3, this is a selection effect in
that
these objects are deliberately selected to be, or to be biased towards, AGN.
For the population of objects
found from direct optical follow-up of IRAS samples or 850 $\mu$m surveys (Table 1), out of 12 objects,
5 are QSOs or Seyfert 1,
1 is Seyfert 2, and 6 are narrow-line objects. Thus in at least 50 $\%$ of cases,
this phase of exceptionally high far ir luminosity is accompanied by AGN activity at
optical and uv wavelengths. This proportion might increase if high resolution
spectroscopy were available for all the galaxies. For comparison the proportion of
ultraluminous galaxies which contain AGN has also been estimated as 49 $\%$ (Veilleux
et al 1999). However despite the high proportion
of ultraluminous and hyperluminous galaxies which contain AGN, this does not prove that an
AGN is the source of the rest-frame far infrared radiation. The ISO-LWS mid-infrared
spectroscopic programme of
Genzel et al (1998), Lutz et al (1998), has shown that the far infrared radiation of
most ultraluminous galaxies is powered by a starburst, despite the presence of an AGN
in many cases. Wilman et al (1999) have shown that the X-ray emission from several
hyperluminous galaxies is too weak for them to be powered by a typical AGN.
In the Sanders et al (1989) picture, the far infrared and submillimetre emission would
simply come from the outer
regions of a warped disk surrounding the AGN. Some weaknesses of this picture
as an explanation of the far
infrared emission from PG quasars have been highlighted by Rowan-Robinson (1995).
A picture in which both a
strong starburst and the AGN activity are triggered by the same interaction or merger
event is far more likely
to be capable of understanding all phenomena (cf Yamada 1994, Taniguchi et al 1999).
Where hyperluminous galaxies are detected at rest-frame wavelengths in the range 3-30 $\mu$m
(and this can correspond to observed wavelengths up to 150 $\mu$m), the infrared spectrum is often found
to correspond well to emission from a dust torus surrounding an AGN (eg Figs 1, 6-10, 12).
This emission often contributes
a substantial fraction of the total infrared (1-1000 $\mu$m) bolometric luminosity. For
the 12 ir-selected objects of Table 1, the luminosity in the dust torus component
exceeds that in the starburst for 5 of the galaxies (42$\%$). The
advocacy of this paper for luminous starbursts relates only to the rest-frame emission at
wavelengths $\geq 50 \mu$m. Figure 19 shows the correlation between the
luminosity in the starburst component, $L_{sb}$, and the AGN dust torus
component, $L_{tor}$, for hyperluminous infrared galaxies, PG quasars
(Rowan-Robinson 1995), and IRAS galaxies detected in all 4 bands
(Rowan-Robinson and Crawford 1989) (this extends Fig 8 of Rowan-Robinson 1995). The range of
the ratio between these
quantities, with 0.1 $\leq L_{sb}/L_{tor} \leq$ 10, is similar over a very wide range of infrared
luminosity (5 orders
of magnitude), showing that the proposed separation into these
two components for hyperluminous ir galaxies is not at all implausible.
\begin{figure*}
\epsfig{file=fig18.ps,angle=0,width=12cm}
\caption{
Bolometric luminosity in starburst component for galaxies with luminosities $> 10^{13} L_{\odot}$
(Tables 1-4).
The galaxies labelled L are known to be lensed. Loci corresponding to the limits
set by S(60) = 200 mJy, S(800) = 20 mJy, and S(1250) = 2 mJy are shown.}
\end{figure*}
\begin{figure*}
\epsfig{file=fig19.ps,angle=0,width=12cm}
\caption{
Bolometric luminosity in AGN dust torus component versus bolometric luminosity
in starburst component: filled circles, hyperluminous ir galaxies (this
paper); open triangles, PG quasars (Rowan-Robinson 1995); crosses, IRAS
galaxies detected in 4 bands (Rowan-Robinson and Crawford 1989, galaxies with
only upper limits on $L_{tor}$ omitted).}
\end{figure*}
\section {Dust and gas masses}
The radiative transfer models can be used to derive dust masses and hence, via an
assumed gas-to-dust ratio, gas masses. For the M82-like starburst model used here the
appropriate conversion is $M_{dust} = 10^{-4.6} L_{sb}$, in solar units
(Green and Rowan-Robinson 1996). These estimates have been converted into
estimates of gas mass assuming $M_{gas}/M_{dust}$ = 300 (Tables 1-4, col 9, bracketed values).
However these estimates will not assist in deciding
the plausibility of the starburst models, because the radiative transfer
models are automatically self-consistent models of massive star-forming molecular clouds.
Estimates derived from $\nu^{\beta} B_{\nu}(T_d)$ fits to the spectral energy
distributions are even less physically illuminating.
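The bookkeeping behind these dust and gas mass estimates is simple enough to state explicitly. The following sketch is my own illustration (the function name and the example luminosity are not from the paper); it applies the $M_{dust} = 10^{-4.6} L_{sb}$ relation and the assumed gas-to-dust ratio of 300:

```python
def gas_mass_from_starburst(L_sb, gas_to_dust=300.0):
    """Gas mass (solar masses) implied by a starburst luminosity L_sb
    (solar luminosities), via M_dust = 10**-4.6 * L_sb for the M82-like
    starburst model and an assumed gas-to-dust ratio of 300."""
    M_dust = 10**-4.6 * L_sb
    return gas_to_dust * M_dust

# a 10^13 L_sun starburst implies ~2.5e8 M_sun of dust, ~7.5e10 M_sun of gas
M_gas = gas_mass_from_starburst(1e13)
```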
Far more valuable are the cases where direct estimates of gas mass can be derived from
molecular line (generally CO) observations. Where available, these estimates have been
given in Tables 1-4, col 9, taken from Frayer et al 1999a (and references therein),
Barvainis et al 1998, Evans et al (1999), Yun and Scoville (1999).
Figure 20 shows a comparison of the
estimates of $L_{sb}$ derived here with estimates of $M_{gas}$ derived from CO observations.
Also shown are results for ultraluminous ir galaxies (Solomon et al 1997) and for
more typical IRAS galaxies (Sanders et al 1991) (after rationalisation of some objects in
common).
The appropriate conversion factor from CO luminosity to gas mass is a
matter of some controversy. For luminous infrared galaxies, Sanders et al (1991)
used a characteristic value for molecular clouds in our Galaxy,
4.78 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$. Solomon et al (1997) found that such a value led
to gas mass estimates for ultraluminous infrared galaxies a factor of 3 or more in
excess of the dynamical masses and concluded that a value of 1.4 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$
was more appropriate for these galaxies. Downes and Solomon (1998) studied several
ultraluminous infrared galaxies in detail in CO 2-1 and 1-0 with the IRAM interferometer,
deriving an even lower value of 0.8 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$ on the basis
of radiative transfer models for the CO lines. However their gas masses are on average only 1/6th
of the (revised) inferred dynamical masses. In their detailed model for Arp 220, Scoville et al (1997)
found a conversion factor 0.45 times the Galactic value, ie 2.15 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$.
Combining this with an estimated ratio for T(3-2)/T(1-0) of 0.6, Frayer et al (1999a)
justify a value of 4 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$ for gas mass estimates of
hyperluminous galaxies derived from CO 3-2 observations.
In Tables 1-4 and Fig 20, I have followed Frayer et al (1999a) in using a conversion factor
of 4 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$ for hyperluminous galaxies. For other galaxies
in Fig 20 with luminosities $> 10^{11.5} L_{\odot}$, I have used a conversion factor of
2 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$ (in line with Scoville et al 1997, but a factor of 2 or so
higher than advocated by Downes and Solomon 1998); for galaxies with luminosities $< 10^{11.5} L_{\odot}$,
I have used the standard Galactic value, 4.78 $M_{\odot}\,({\rm K\,km\,s^{-1}\,pc^{2}})^{-1}$.
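The adopted scheme can be restated compactly as follows. This is an illustrative sketch of my own: the function name and the $10^{13} L_{\odot}$ boundary used to label a galaxy "hyperluminous" are my reading of the text, not the paper's code.

```python
def gas_mass_from_co(L_co_prime, L_ir):
    """Gas mass (M_sun) from a CO line luminosity L'_CO (K km/s pc^2),
    using the luminosity-dependent conversion factors adopted in the
    text: 4 for hyperluminous galaxies, 2 above 10**11.5 L_sun,
    otherwise the standard Galactic value 4.78."""
    if L_ir > 1e13:
        alpha = 4.0       # Frayer et al (1999a), hyperluminous galaxies
    elif L_ir > 10**11.5:
        alpha = 2.0       # in line with Scoville et al (1997)
    else:
        alpha = 4.78      # standard Galactic value (Sanders et al 1991)
    return alpha * L_co_prime
```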
\begin{figure*}
\epsfig{file=fig20.ps,angle=0,width=12cm}
\caption{
Bolometric luminosity in starburst component, versus mass in gas, deduced
from CO observations. Filled circles, hyperluminous ir galaxies (Frayer et al 1999,
Yun and Scoville 1999); open triangles, ultraluminous ir galaxies (Solomon et al
1997); crosses, IRAS galaxies (Sanders et al 1991).}
\end{figure*}
The range of ratios of $L_{sb}/M_{gas}$ for hyperluminous galaxies is consistent with that derived for
ultraluminous starbursts. For cases where we have estimates of gas mass both from
CO lines and from dust emission, the agreement is remarkably good (within a factor of 2).
There is a tendency for the time-scale for gas-consumption, assuming a star formation
rate given by eqn (1), to be shorter for the more luminous objects, in the range
$10^7 - 10^8$ yrs (alternatively this could indicate a higher value for the low-mass
cutoff in the IMF).
The cases where a strong limit can be set on $M_{gas}$ are also, generally, those where
the seds do not support the presence of a starburst component. After correction for the
effects of gravitational lensing, gas masses ranging up to $1{-}3\times10^{11} M_{\odot}$
are seen in most hyperluminous galaxies, comparable with the total stellar mass of an $L_*$
galaxy ($10^{11.2} (M/4L) h_{50}^{-2}$). In fact 24/39 hyperluminous galaxies in Tables 1-3
have gas masses estimated either from CO or from dust emission $> 10^{11} M_{\odot}$
(after correction for effects of lensing, where known).
Hughes et al (1997) argue that a star-forming
galaxy can not be considered primeval unless it contains a total gas mass of
$10^{12} M_{\odot}$, but this seems to neglect the fact that 90 $\%$ of the mass of
galaxies resides in the dark (probably non-baryonic) halo.
\section{Conclusions}
(1) About 50 $\%$ of hyperluminous infrared galaxies selected in unbiased infrared
surveys have AGN optical spectra. This is not in fact higher than the proportion seen in
ultraluminous ir galaxies by Veilleux et al (1999). For about half of the galaxies in this sample,
the AGN dust torus is the dominant contribution to the total ir (1-1000 $\mu$m) bolometric luminosity,
while for the other half a starburst seems to be the dominant contributor.
(2) Both AGN dust torus and starburst components are needed to understand most
seds of hyperluminous ir galaxies (29/39).
Measured gas masses support, in most cases, the starburst interpretation of the rest-frame
far-infrared and submm ($\lambda_{em} \geq 50 \mu$m) emission.
(3) There is a broad correlation between the luminosities of starburst and AGN dust torus
components (Fig 19). This may imply that there is a physical link between
the triggering of star formation and the feeding of a massive black hole. Taniguchi et al
(1999) have argued that during a merger giving rise to a luminous starburst, a pre-existing
black hole of $10^{7} M_{\odot}$ may grow into a large one $>10^{8} M_{\odot}$ and hence
form a quasar. Alternatively they suggest that a large black hole might be formed out of star
clusters with compact remnants.
(4) There is no evidence in most objects that an AGN powers a significant fraction of radiation
at rest-frame wavelengths $\geq$ 50 $\mu$m. For P09104 and PG1634, the non-detection of CO
emission is consistent with the absence of evidence in the sed for a starburst component. In
F08279, the mass of CO detected suggests a limit on the starburst luminosity which implies that
the observed submm radiation may simply be the long-wavelength tail of its AGN dust torus emission.
F15307 poses a problem: the sed can be understood as radiation from both an AGN dust torus and
an Arp 220-like starburst, but the upper limit on the molecular mass from the non-detection
of CO would then imply a very extreme ir-luminosity to gas-mass ratio.
(5) After correction for the effects of lensing, star-formation rates in excess of 2000 $M_{\odot}$/yr
are inferred in many of these galaxies (for a Salpeter IMF). This would be sufficient to exhaust the observed
reservoir of gas in $10^8$ yrs. These galaxies are undergoing extremely major episodes of star
formation, but we can not yet establish whether this is their first major burst of star formation.
(6) Further submm continuum and molecular line observations can provide a strong test of the models
for the seds proposed here.
astro-ph/9912340
\section{Introduction}
In recent years a number of active galaxies have been found to have
powerful H$_2$O maser emission in their nuclei (e.g.~Braatz, Wilson,
\& Henkel 1994; 1996). It is known that the H$_2$O megamaser
phenomenon is associated with nuclear activity since all such
megamaser sources are in either Seyfert 2 or LINER nuclei. The
standard model for Seyfert galaxies involves a central engine (black
hole and accretion disk) producing ionizing radiation, and an
``obscuring torus'' which shadows the ionizing radiation into
bi-conical beams along its rotation axis (see Antonucci 1993 for a
review). This beaming is readily seen in some Seyferts as bi-conical
emission-line structures (e.g. Pogge 1989). Extended radio emission,
when present, is usually aligned with the emission-line gas
(e.g. Wilson \& Tsvetanov 1994). Detailed studies also indicate a
strong interaction between the radio ejecta and the optically visible
ionized gas (Capetti et al.~1996; Falcke et al.~1996; Falcke, Wilson,
\& Simpson 1998; Ferruit et al.~1999).
It appears reasonable to infer that the masers trace molecular
material associated with the obscuring torus or an accretion disk that
feeds the nucleus. This notion was confirmed in great detail by VLBI
observations of the megamaser in NGC 4258 (Miyoshi et al. 1995;
Greenhill et al. 1995). The positions and velocities of the H$_2$O
maser lines show that the masing region is a thin disk in Keplerian
rotation around a central mass of $3.9\cdot10^7M_\odot$ at a distance
of $\approx$0.16 pc from that mass (Herrnstein et al. 1999).
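As a consistency check (my own arithmetic, not taken from the papers cited), the quoted enclosed mass and radius imply Keplerian orbital speeds of the order of $10^3$ km s$^{-1}$ at the masing radius:

```python
import math

# Keplerian speed at the quoted NGC 4258 maser radius: v = sqrt(G M / r)
G     = 6.674e-11                                # m^3 kg^-1 s^-2
M_sun = 1.989e30                                 # kg
pc    = 3.0857e16                                # m
v = math.sqrt(G * 3.9e7 * M_sun / (0.16 * pc))   # m/s
v_km_s = v / 1e3                                 # of order 10^3 km/s
```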
Although plausible scenarios for the megamaser phenomenon exist
(e.g. Neufeld \& Maloney 1995), it is by no means clear how the
material which obscures the nucleus (the ``obscuring torus'') and
the masing disk are related. The masing disk may
be part of a geometrically thin, molecular accretion disk at smaller
radii than the torus, or the thin, central plane of a thick
torus in which the column density is high enough for strong
amplification. Alternatively, the whole structure could be a warped thin
disk, so the masing gas might be misaligned with the central accretion
disk. The most straightforward picture consistent with current data
would, however, have the masing disk, obscuring torus and any more
extended molecular cloud distribution as one coherent accretion
structure feeding the central engine, with the ionized thermal and
non-thermal radio plasma roughly along the rotation axis.
We have therefore started a program to observe the narrow-line regions
(NLR) of all known megamaser galaxies with the Hubble-Space-Telescope
(HST) to establish this often suggested link between the molecular
disk responsible for the maser emission and the obscuring torus
responsible for the ionization cones. We are also obtaining continuum
color images to search for the obscuring material directly.
The most luminous known H$_2$O maser source is found in the galaxy
TXS2226-184\footnote{The name used by Koekemoer et al.~(1995) does not
follow the suggested and by now accepted convention used later in the Texas
survey (Douglas et al.~1996).} (IRAS F22265-1826; Koekemoer et
al. 1995), at a redshift of z=0.025 (luminosity distance D=101 Mpc
for $H_0=75$ km sec$^{-1}$ Mpc$^{-1}$ and $q_0=0.5$; in the images
0\farcs1 correspond to 46 pc). Koekemoer et al.~(1995) referred to
this object as a gigamaser in view of its isotropic
luminosity in the 1.3 cm water line of $6100\pm900\,L_{\sun}$. In this
paper, we present H$\alpha$+[\ion{N}{2}]$\lambda\lambda$6548,6583 and
broad-band continuum observations of TXS2226-184, obtained with the HST
and the VLA. Our results indeed show a linear H$\alpha$+[\ion{N}{2}]
structure along the radio axis and perpendicular to a dust lane. This
supports the connection between megamaser emission, dusty disk,
obscuring torus, and the narrow-line region discussed above. We also
classify the host galaxy as a spiral.
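The quoted distance and angular scale follow from the stated cosmology. As a check of my own, using the standard $q_0=0.5$ luminosity-distance relation $D_L = (2c/H_0)\,(1+z-\sqrt{1+z})$ and $D_A = D_L/(1+z)^2$:

```python
import math

# Luminosity distance for q0 = 0.5, then the projected scale of 0.1"
c_km_s, H0, z = 2.998e5, 75.0, 0.025
D_L = 2 * c_km_s / H0 * (1 + z - math.sqrt(1 + z))   # ~101 Mpc
D_A = D_L / (1 + z)**2                               # Mpc
arcsec = math.pi / (180 * 3600)                      # one arcsec in radians
scale_pc = 0.1 * arcsec * D_A * 1e6                  # ~46 pc per 0.1"
```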
\section{Observations and Data Reduction}
\subsection{HST Observations}
TXS2226-184 was observed with the Planetary Camera (PC) on board the
HST (pixel scale is 0\farcs0455/pixel) in three filters: F814W (red
continuum); F547M (green continuum); and F673N (redshifted
H$\alpha$+[\ion{N}{2}]$\lambda\lambda$6548,6583). The total integration
times were 120 sec, 320 sec, and 1200 sec respectively, all exposures
being split into two or three integrations to allow cosmic ray
rejection. All observations were performed within one orbit on 1998
December 6.
\subsection{HST Data Reduction}
The images were processed through the standard Wide-Field and
Planetary Camera 2 (WFPC2) pipeline data reduction at the Space
Telescope Science Institute. Further data reduction was done in IRAF
and included: cosmic ray rejection, flux calibration, and rotation to
the cardinal orientation. The zero of magnitude for each continuum
filter was determined from the HST data handbook in the
VEGAMAG\footnote{A system in which Vega has magnitude zero in all HST
filters. The zero-points of the canonical Johnson-Cousins system
differ from the corresponding HST filters by up to 0.02 magnitudes for
closely matched filters and up to 0.2 mag for the rest.}
system. Sometimes we will refer to the red and green continuum filters
as I and V, respectively, even though F547M is not a good match to
Johnson-Cousins V; an error of 0.2 mag can be expected. For the
continuum filters a constant background level was determined in an
emission-free region of the PC (to represent sky brightness) and
subtracted from the image. This correction is mainly important for
obtaining good color information in faint regions. The galaxy
continuum near the H$\alpha$+[\ion{N}{2}] line was determined by
combining the red and green continuum images, scaled to the filter
width of F673N and weighted by the relative offset of their mean
wavelengths from the redshifted H$\alpha$+[\ion{N}{2}] emission. The
continuum was then subtracted from the on-band image to obtain an
image of H$\alpha$+[\ion{N}{2}]. We did not apply any shifts between
the images because they were all taken within one orbit and at the
same position on the PC chip. From the two broad-band images, we
constructed a color map by dividing the green by the red filter image,
including only pixels where the flux was at least five times the
average noise level in each frame. To increase the signal-to-noise at
larger radii, we also computed color maps in which the original image
was block averaged by $2\times2$ or $4\times4$ pixels. Each of these
maps was also clipped at its 5 $\sigma$ level and sampled at the PC
pixel scale. The three maps were then combined, with each image being
weighted by its inverse blocking size. This allows one to have a
composite color map in which the bright center is shown at full
resolution and the outer, low-surface brightness regions (which were
clipped in the full resolution map) are seen at lower resolution.
This is similar to an unsharp mask technique.
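The continuum interpolation step can be sketched as below. This is a simplified illustration of my own: the scaling to the F673N filter width is omitted, and the function name and any wavelengths passed to it are placeholders, not the measured filter parameters.

```python
def continuum_at(lam_on, lam_g, lam_r, S_g, S_r):
    """Interpolate the continuum to the on-band wavelength lam_on from
    green (lam_g) and red (lam_r) broad-band fluxes, weighting each
    image by the other filter's offset from the line wavelength, so the
    closer filter gets the larger weight."""
    w_g = (lam_r - lam_on) / (lam_r - lam_g)
    w_r = (lam_on - lam_g) / (lam_r - lam_g)
    return w_g * S_g + w_r * S_r
```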
\subsection{VLA Observations and Data Reduction}
We observed the galaxy with the VLA in A-configuration at 8.46 GHz and
15 GHz on 1999 August 01 in snapshot mode for 5 mins, and at 4.85 GHz
on 1998 May 21 for 10 mins. We observed a phase calibrator at the
beginning and end of the scan and 3C 48 as a flux density
calibrator. Using the AIPS software, the data were
self-calibrated and maps were produced.
\section{Results}
\subsection{Radio Map}
A slightly super-resolved map of TXS 2226-184 at 8.46 GHz using a
circular restoring beam of 0.2\arcsec{} is shown in Figure
\ref{radiomap} (bottom) where we have subtracted the central point
source to show the extended emission more clearly. The source is
resolved with a peak flux density of 15 mJy and a total flux density
of 23 mJy. The emission is elongated in PA $-37^\circ$ towards the NW
and in PA $146^\circ$ towards the SE. No further extended emission was
detected in our maps. This is also true for lower-resolution maps (VLA
C- \& B-configuration) at 5 and 8 GHz where the flux densities agree with
ours (Golub \& Braatz 1998). The total fluxes at 4.85 GHz and 14.94
GHz are 37 and 13 mJy respectively. At these frequencies the source is
extended in the same direction as at 8.46 GHz. If we compare our total
flux densities with the flux density the galaxy had in the Texas
survey at 365 MHz (198 mJy; Douglas et al.~1996), we find the spectrum
to be steepening from $\alpha=-0.65$ ($S_\nu\propto\nu^\alpha$)
between 365 MHz and 4.85 GHz to $\alpha=-1$ between 8.46 GHz and 14.94
GHz. Because of the compact structure this steepening is most likely
not due to resolution effects. The position of the central radio
component is $\alpha=22^h26^m30\fs07$,
$\delta=-18\arcdeg26^\prime09\farcs6$ (B1950).
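The quoted spectral indices follow directly from the flux densities given above; a worked check of my own:

```python
import math

def spectral_index(S1, S2, nu1, nu2):
    """Two-point spectral index alpha, with S_nu proportional to nu**alpha."""
    return math.log(S2 / S1) / math.log(nu2 / nu1)

# Texas survey (365 MHz) vs VLA total flux densities quoted in the text
low  = spectral_index(198, 37, 0.365, 4.85)   # ~ -0.65
high = spectral_index(23, 13, 8.46, 14.94)    # ~ -1.0
```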
\subsection{HST Images}
Our HST images are shown in Figure~\ref{images}. The continuum map,
which is the combination of the red and green filters used also for
off-band subtraction, reveals a highly elongated galaxy along PA
$55^\circ$. The inner region (1\arcsec{} diameter) is bisected by a
dark band, presumably a nuclear dust lane. We have fitted an
elliptical Gaussian function
to the inner region to locate the centroid of the
continuum emission. The centroid thus found is marked with a cross in
Fig.~\ref{images} and we shall refer to this position as the
``nucleus'' of the galaxy. It is in the middle of the supposed dust
lane. The presence of this dust lane is further strengthened by the
color map, which shows a region of high reddening along PA 60$^\circ$
extending roughly 1\arcsec{} across the nucleus. We also see higher
reddening on the NW side of the galaxy than on the SE which, for a
disk galaxy, would indicate that the NW side is the nearer side of the
galaxy disk (Hubble 1943).
The H$\alpha$+[\ion{N}{2}] map shows a highly elongated structure
roughly along PA $-40^\circ\pm5^\circ$, i.e. in the same direction as
the radio emission, with a bright spot 0\farcs2 NW of the supposed
nucleus. The emission extends further towards the SE, with a broad,
``wiggly'' structure near the nucleus and a ``plume'' 1\farcs5 from
the nucleus. As in the continuum image, the adopted nucleus is not
very bright in H$\alpha$+[\ion{N}{2}], presumably because of
obscuration by the dust lane.
The adopted nucleus in the HST images is within 1\farcs5---the typical
error in absolute HST astro\-metry---of the radio nucleus. Therefore
we have assumed that the optical and radio nuclei coincide and shifted
the HST images accordingly (see Falcke et al.~1998 for a discussion of
VLA/HST registration and errors). The coordinates given in
Fig.~\ref{images} are after this shift has been performed.
In the larger field of view of all four WFPC2 chips, we find a number
of faint, extended sources around TXS2226-184 which are probably
galaxies. In particular, there is a highly elongated galaxy only
17\farcs2 SW (PA = --120$^\circ$) of the nucleus of TXS2226-184 at
$\alpha=22^h26^m29\fs0$, $\delta=-18\arcdeg26^\prime18\farcs3$
(B1950).
\begin{figure*}[htb]
\setcounter{figure}{1}
\centerline{\psfig{figure=f2.ps,width=0.65\textwidth,bbllx=3.5cm,bburx=14.6cm,bblly=8.8cm,bbury=19.3cm}}
\caption[]{\label{fit}Surface brightness $\mu$ of TXS2226-184 in the F814W
filter (I band) as a function of the semi-major axis in
arcseconds. The solid line is a disk galaxy fit with both bulge and
disk components, as described in the text. The dashed line is a fit
with only a bulge component.}
\end{figure*}
\subsection{Isophotes and Radial Profile Fitting}
We have fitted elliptical isophotes to the red continuum image of the
galaxy ignoring the innermost few pixels which are heavily affected by
the dust lane. The center was fixed at the adopted nucleus (see
Sec. 3.2). The ellipticity is close to zero at $R\simeq0\farcs5$,
below which it is strongly affected by the dust lane, and approaches a
constant value of around 0.6 beyond $R\ga3$\arcsec{}. Similarly, the
PA of the semi-major axis changes rapidly from $140^\circ$ to a value
of 65$^\circ$ at 0\farcs5, and stays essentially constant (at
50\arcdeg-60\arcdeg) at larger radii. The colors are relatively red
in the inner region, dropping from V--I$\sim$1.65 to around 1.35 at
the outer isophotes.
Figure \ref{fit} shows the azimuthally averaged surface brightness of
the isophotes as a function of $R$. This profile was fitted in IRAF
with a) an exponential disk profile,
\begin{equation}
S_{\rm disk} = S_0\cdot\exp\left(-{R\over R_0}\right),
\end{equation}
plus a bulge component (de Vaucouleurs 1948),
\begin{equation}
S_{\rm bulge} = S_{\rm e}\cdot\exp\left(-7.688\cdot\left(\left({R\over R_{\rm e}}\right)^{1/4} - 1\right)\right),
\end{equation}
to represent a spiral or S0 galaxy, and b) with a bulge component
(Eq.~2) only to represent an elliptical galaxy. While the fitting was
done using surface brightness, $S$, weighted by the inverse errors, we
give the results in the more conventional form of surface brightness
$\mu$ (in mag arcsec$^{-2}$).
For the disk + bulge model (a) we obtained a good fit (reduced
$\chi^2=0.86$) with the parameters $\mu_0=18.0$ mag arcsec$^{-2}$,
$R_0=2\farcs4$ (1.1 kpc), $\mu_{\rm e}=19.7$ mag arcsec$^{-2}$, and
$R_{\rm e}=0\farcs6$ (0.29 kpc). For a bulge component only (b),
i.e. an elliptical galaxy profile, the fit is much worse (reduced
$\chi^2=4.9$) and at large radii lies consistently above the data
(Fig.~\ref{fit}). The parameters we get here are $\mu_{\rm e}=22.8$
mag arcsec$^{-2}$ and $R_{\rm e}=22\farcs7$ (10.5 kpc). The results
clearly favor a spiral over an elliptical galaxy. The ellipticity of
TXS2226-184 ($e=1-b/a=0.61$ at 2\farcs7$<R<$6\farcs0) indicates an
inclination of the galaxy to the line of sight of 70$^\circ$ (using
$i=\arcsin{\sqrt{(1-(b/a)^2)/0.96}}$, e.g.~Whittle 1992). The details
of the fitting depend somewhat on how much of the inner region is
excluded, while the preference of a disk + bulge model over a
bulge-only model does not.
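Both the fitted profile and the quoted inclination can be evaluated numerically. This is my own sketch: the parameter values are the best-fit numbers quoted above, and the 0.96 in the inclination formula corresponds to an assumed intrinsic disk flattening of $q_0 = 0.2$.

```python
import math

def mu_model(R, mu0=18.0, R0=2.4, mu_e=19.7, R_e=0.6):
    """Disk + bulge surface brightness (mag/arcsec^2) at semi-major axis
    R (arcsec): exponential disk (Eq. 1) plus de Vaucouleurs bulge
    (Eq. 2), summed in flux and converted back to magnitudes."""
    S_disk  = 10**(-0.4 * mu0)  * math.exp(-R / R0)
    S_bulge = 10**(-0.4 * mu_e) * math.exp(-7.688 * ((R / R_e)**0.25 - 1.0))
    return -2.5 * math.log10(S_disk + S_bulge)

# inclination from the measured ellipticity e = 1 - b/a = 0.61
ba = 1.0 - 0.61
i_deg = math.degrees(math.asin(math.sqrt((1.0 - ba**2) / 0.96)))  # ~70 deg
```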
The difference between the magnitudes of the integrated bulge and the
galaxy as a whole in our spiral galaxy model (see Simien \& De
Vaucouleurs 1986) is $\Delta m_{\rm t}=1.9$ if we integrate along
elliptical isophotes with $e=0.61$. To correct for the inclination
dependent absorption (e.g. Tully et al.~1998) we would have to add
$\sim$0.5 mag to obtain the face-on value of this difference. Figure 2
and Eq.~4 of Simien \& De Vaucouleurs (1986) then would formally
indicate that TXS2226-184 is probably an Sb/c (RC2 Hubble type
$T=$4-5). However, this determination of the relative bulge luminosity
and the Hubble type classification is very uncertain. Still, our data
should be good enough to indicate that TXS2226-184 is later than S0.
The fact that we are measuring at I (Simien \& De Vaucouleurs use B)
strengthens this point, since one would expect the bulge to be more
prominent relative to the disk at I than at B. If we integrate our
surface luminosity profile to infinity the total I magnitude of disk
and bulge is 15.1 mag. The uncertainty in the cut-off radius due to a
low signal-to-noise in the outer isophotes may allow an increase of
this value by up to 0.4 mag.
\section{Discussion \& Summary}
Koekemoer et al.~(1995) have classified this galaxy as an elliptical
or S0 and speculated whether the unusually broad line-width of the
megamaser emission seen in this galaxy and in NGC1052 might be typical
of elliptical galaxies. Our HST images reveal that TXS2226-184 is
almost certainly not an elliptical, so NGC1052 is the only known
megamaser in an elliptical galaxy (Braatz, Wilson \& Henkel 1994). On
the other hand, the high inclination of TXS2226-184 strengthens the
tentative conclusion of Braatz, Wilson, \& Henkel (1997) that
megamasers are preferentially found in highly inclined galaxies. Six
out of fourteen spirals in their detected megamaser sample now have an
inclination $i>69^\circ$. This excess suggests that nuclear and large
scale dust disks in many active spiral galaxies are indeed related.
The NLR in TXS2226-184 is very elongated and reminiscent of the
jet-like NLR seen in many Seyfert galaxies, as imaged by HST
(e.g.~Capetti et al.~1996; Falcke et al.~1998). These gaseous
structures are believed to be produced in the interaction between
outflowing radio ejecta and the ISM (e.g. Falcke et al.~1998; Ferruit
et al.~1999). The fact that our radio map is elongated along exactly
the same direction as the NLR supports this view.
In addition to the NLR and radio jet, we find a dust lane in the
nucleus which aligns with the galaxy major axis and presumably
represents its normal interstellar medium. The elongation of the NLR
and the radio source perpendicular to the NE-SW dust lane suggests
that the nuclear accretion disk and the obscuring torus are more or
less coplanar with the stellar disk in TXS2226-184. Preliminary
results of VLBA observations of the masers in this galaxy indeed seem
to roughly show a NE-SW orientation along PA 20$^\circ$ (Greenhill
1999). How to interpret this structure and whether this indicates a
warp in the gas disk going from tens of pc to pc scales is unclear at
present. Further VLBI observations of masers and the continuum in this
and other maser sources together with HST observations of the host
galaxies could help to clarify the nature of the obscuring
torus/masing disk and its connection to the large scale molecular gas
structure of the AGN host galaxy.
\acknowledgements We thank Stacy McGaugh for helpful discussions on
galaxy classifications and Lincoln Greenhill for providing information
on unpublished VLBA observations of TXS2226-184. This research was
supported by NASA under grant NAG8-1027 and HST GO 7278 and by NATO
grant SA.5-2-05 (GRG 960086)318/96. HF is supported by DFG grant
358/1-1\&2.
hep-ph/9912479
\section{Introduction}
It has been well established that QCD is the correct theory of
the strong interactions.
The vast body of evidence from lattice QCD simulations
at low and intermediate energies \cite{DeTar} is complemented by
perturbative calculations which become reliable at high energies where the
renormalized coupling constant is small due to asymptotic freedom.
In this lecture we will focus on another limit where QCD simplifies.
Because of spontaneous breaking
of chiral symmetry, QCD at low energy reduces to a theory of weakly
interacting Goldstone bosons. Although this theory cannot be derived from
QCD by means of an ab-initio calculation, its Lagrangian
is determined uniquely by chiral symmetry and Lorentz invariance.
The validity of this low-energy theory is based on the presence of a mass-gap
which is a highly nontrivial and nonperturbative feature of QCD.
One of the questions we wish to
address is to what extent the existence of an effective
low-energy theory imposes consistency conditions on the original theory.
This question was first posed in \cite{LS} for the
mass dependence of the QCD partition function. They argued that for
small quark masses it can be obtained both from the effective
partition function and from QCD. Since the mass dependence of the QCD partition
function is given by the average of the fermion determinant this imposes
consistency conditions on the average properties of the
eigenvalues of the QCD Dirac operator.
Some of the properties of the Dirac eigenvalues are
robust against {\it large} deformations of the gauge field action well
outside the scaling window.
This type of spectral universality has been investigated
primarily within the context
of Random Matrix Theory \cite{Hack,ADMN}.
What has been found is that correlations
of eigenvalues on the scale of the average level spacing are
universal, i.e. they are robust against
deformations of the probability distribution of the matrix elements.
The low-energy effective action is also insensitive to
{\it large} deformations of the gauge field action.
The reason is the existence of a mass
gap. In the next section we will relate this property
to spectral universality.
\section{Valence Quarks and the Low-Energy Limit of QCD}
Because the same mass occurs both in the
quark propagator and in the fermion determinant
the average Dirac spectrum cannot be obtained directly from the QCD
partition function. In order to access the
spectrum of the Euclidean Dirac operator $D$
one has to introduce the valence quark mass dependence
of the chiral condensate defined by \cite{Vplb,Osbornprl,OTV}
\begin{eqnarray}
\Sigma(m_v) = \langle {\rm Tr} \frac 1 {m_v + D} \rangle.
\end{eqnarray}
The average is over the Euclidean QCD action
which includes a fermion determinant
that depends on the sea-quark masses. The spectral density
per unit space-time volume $V= L^4$ of the Dirac operator $D$
is directly related to $\Sigma(m_v)$,
\begin{eqnarray}
\rho(\lambda)/V = \frac 1{2\pi} \left (\Sigma(i\lambda +\epsilon) -
\Sigma(i\lambda -\epsilon) \right ).
\end{eqnarray}
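This discontinuity relation can be illustrated with a toy spectrum (a numerical sketch of our own, not QCD data): for eigenvalue pairs $\pm i\lambda_k$ one has ${\rm Tr}\,(m+D)^{-1} = \sum_k 2m/(m^2+\lambda_k^2)$, and evaluating this at $m = i\lambda \pm \epsilon$ recovers the smeared spectral density:

```python
import numpy as np

# Toy check of rho(lambda) = (1/2pi)[Sigma(i*lambda + eps) - Sigma(i*lambda - eps)]
# with 2000 evenly spaced "eigenvalues" lambda_k in (0,1).
N = 2000
lam_k = (np.arange(N) + 0.5) / N

def Sigma(m):
    # Tr 1/(m + D) for an anti-Hermitian D with eigenvalue pairs +/- i*lambda_k
    return np.sum(2.0 * m / (m**2 + lam_k**2))

lam, eps = 0.5, 0.01
rho = ((Sigma(1j * lam + eps) - Sigma(1j * lam - eps)) / (2.0 * np.pi)).real
# rho/N ~ 1: the smeared discontinuity reproduces the eigenvalue density
```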
The generating function for $\Sigma(m_v)$
is defined by \cite{pqChPT,OTV,DOTV}
\begin{eqnarray}
Z^{\rm pq}(m_v,J) ~=~ \int\! [dA]
~\frac{\det(D +m_v+J)}{\det(D +m_v)}\prod_{f=1}^{N_{f}}
\det(D + m_f) ~e^{-S_{YM}[A]} ~.
\label{pqQCDpf}
\end{eqnarray}
In addition to the usual quarks this partition function contains
valence quarks and their bosonic superpartners.
The chiral condensates for the different (super-)flavors
are given by the same expression in terms of the eigenvalues
of the Dirac operator. We thus have a maximum breaking of
the axial flavor symmetry.
The low-energy modes are then given by the Goldstone modes associated
with the spontaneous breaking of this symmetry.
Their quark content can be either
two sea-quarks, one sea quark and one valence quark, or two valence quarks.
The low-energy effective partition function
follows from the flavor supersymmetry, its breaking, and
Lorentz invariance, as is the case for the usual
chiral Lagrangian \cite{pqChPT,OTV,DOTV}.
One major difference is that in this case the Goldstone manifold is
a super-manifold with both compact and non-compact
degrees of freedom \cite{Martin,OTV,DOTV}.
\section{Scales in the Dirac Spectrum}
For a nonzero value of the chiral condensate $\Sigma$ we can identify
three important scales in the Dirac spectrum. The
first scale is the smallest nonzero eigenvalue of the Dirac operator
given by $\lambda_{\rm min} =1/\rho(0)=\pi/\Sigma V$.
The second scale is the valence quark mass for which the Compton wavelength of
the associated Goldstone bosons is
equal to the size of the box. Using the Gell-Mann-Oakes-Renner relation
we obtain as Thouless energy\cite{Altshuler,GL,Vplb,Osbornprl}
\begin{eqnarray}
m_c = \frac {F^2}{\Sigma L^2},
\end{eqnarray}
where $F$ is the pion decay constant.
A third scale is given by a typical hadronic mass scale. The three scales
are ordered as $\lambda_{\rm min} \ll m_c \ll \Lambda$.
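For orientation, the three scales can be evaluated with typical assumed numbers, $\Sigma = (250\,{\rm MeV})^3$, $F = 93$ MeV, $\Lambda = 300$ MeV and a box of $L = 6$ fm; this is only an order-of-magnitude sketch:

```python
import numpy as np

# Order-of-magnitude sketch of the three scales (assumed typical values):
hbar_c = 0.1973                          # GeV fm
L = 6.0 / hbar_c                         # box size in GeV^{-1}
Sigma, F, Lam = 0.250**3, 0.093, 0.300   # GeV^3, GeV, GeV

lam_min = np.pi / (Sigma * L**4)         # pi/(Sigma V), with V = L^4
m_c = F**2 / (Sigma * L**2)              # Thouless energy F^2/(Sigma L^2)
# lam_min < m_c < Lam, with m_c/lam_min = (F L)^2/pi growing as L^2
```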
For valence quark masses $m_v \ll m_c$ the kinetic term in the effective
action can be neglected and the low-energy partition function can be
reduced to a zero dimensional integral.
However, any partition function with a mass gap and
the same pattern of chiral symmetry breaking as in QCD can be reduced this way.
The simplest such theory is chiral Random Matrix Theory (chRMT)
\cite{SVR}. In that case spontaneous breaking of chiral
symmetry arises in the limit of infinite matrices.
The advantage of working with chRMT is that it is relatively simple to derive
the distribution of the smallest eigenvalues.
Of course, the results for $\Sigma(m_v)$
obtained from the partially quenched partition function and from chRMT
coincide \cite{OTV,DOTV}.
The kinetic term is also determined uniquely by chiral symmetry and Lorentz
invariance which allows us to calculate the Dirac spectrum in the domain
$m_v \ll \Lambda$. This results in the slope of the Dirac spectrum at
$\lambda = 0$ which for $N_f$ massless flavors is given by \cite{OTV,TV}
\begin{eqnarray}
\frac{\rho'(0)}{\rho(0)} =
\frac {(N_f-2)(N_f+\beta)}
{16\pi \beta}\frac{\Sigma_0 }{F^4}.
\end{eqnarray}
Here, $\beta$ denotes the Dyson index of the Dirac operator. For QCD with
fundamental fermions and three or more colors (with $\beta =2$) this result
was first derived in \cite{Smilstern}. The other two
possibilities, $\beta =1 $ and $\beta =4$, refer to QCD with fundamental
fermions and two colors and QCD with adjoint fermions and two or more colors,
respectively.
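As a small numerical aside (our own sketch of the formula above), the slope prefactor vanishes for $N_f = 2$ and is positive for $N_f > 2$ in all three Dyson classes:

```python
import numpy as np

# Prefactor (N_f - 2)(N_f + beta) / (16 pi beta) multiplying Sigma_0/F^4
def slope_prefactor(nf, beta):
    return (nf - 2) * (nf + beta) / (16.0 * np.pi * beta)

two_flavor = [slope_prefactor(2, b) for b in (1, 2, 4)]    # all zero
three_flavor = [slope_prefactor(3, b) for b in (1, 2, 4)]  # all positive
```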
The domain below $m_c$ has been investigated extensively
by means of lattice QCD
simulations and agreement with the chRMT results has been found
\cite{Vplb,Tilo,Hip,Heller,hiplat99,Tiloval,Poulval,Karlval,Damtopo}.
A somewhat
surprising result is that the lattice QCD data reproduce the analytical
result for zero topological charge. This will be explained
in the next section.
\section{Approach to the Continuum Limit for Staggered Fermions}
The low-energy limit of QCD and the small Dirac
eigenvalues are described by the same partition function.
Since the staggered Dirac operator lacks the continuum $U_A(1)$
symmetry, its smallest eigenvalues have to approach their
continuum limit in order for this symmetry to be recovered.
The staggered Dirac operator can be written as
\begin{eqnarray}
D_{KS} = D_C + a^2 \Lambda^2 D_R,
\end{eqnarray}
where $D_C$ coincides with the continuum Dirac operator at low energies,
$ a$ is the lattice spacing
and $\Lambda$ is a typical hadronic mass scale. The condition that
the Dirac spectrum of $D_{KS}$ approaches that of $D_C$ is
(with $|| \cdot ||$ the norm of an operator)
\begin{eqnarray}
||a^2 \Lambda^2 D_R|| \ll \Delta \lambda a.
\label{condition}
\end{eqnarray}
With $\Sigma \sim \Lambda^{d-1}$,
the spacing of the eigenvalues near zero in lattice units is given by
$\Delta \lambda a \sim 1/\rho(0) \sim 1/N\Lambda^{d-1} a^{d-1}$.
Since $||D_R|| \sim O(1)$, the
condition (\ref{condition}) can be rewritten as
\begin{eqnarray}
a^{d+1} \Lambda^{d+1} \ll \frac 1N \qquad {\rm or} \qquad
L \Lambda = N^{1/d} a \Lambda \ll N^{\frac 1d - \frac 1{d+1}}.
\end{eqnarray}
But we also require a sufficiently large lattice with $L\Lambda \gg 1$
resulting in
\begin{eqnarray}
N^{\frac 1{d(d+1)}} \gg 1.
\end{eqnarray}
Our {\it naive} estimate for the total number of lattice points for staggered
fermions to approach the continuum limit is given by $ N \approx (3^{d+1})^d$.
In two dimensions we need lattices of the order of $27^2$ to be reasonably
close to the continuum limit. This number is consistent with
simulations of the Schwinger model with staggered fermions \cite{Hip}.
In four dimensions the situation is much worse.
According to the same estimate continuum physics is only seen on lattices
as large as $243^4$, which explains why today's staggered lattices show
agreement with chRMT results in the sector of zero
topological charge \cite{Tilo,Heller,Damtopo,Phil}.
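The estimate is trivial to tabulate (our own numerical sketch of $N \approx (3^{d+1})^d$, i.e.\ a linear size of $3^{d+1}$ points per direction):

```python
# Linear lattice size 3^(d+1) implied by the naive estimate N ~ (3^{d+1})^d
def linear_size(d):
    return 3 ** (d + 1)

sizes = {d: linear_size(d) for d in (2, 4)}
# d = 2: 27  (lattices of order 27^2, Schwinger model)
# d = 4: 243 (lattices of order 243^4)
```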
\section{Conclusions}
We have argued that the distribution of the smallest
eigenvalues of the Dirac operator is a signature for the
pattern of chiral symmetry breaking of the QCD
partition function. Both the correlations of the smallest
eigenvalues and the slope of the Dirac spectrum
have been obtained from a chiral Lagrangian.
The intercept of the Dirac spectrum
determines the chiral condensate whereas its slope
fixes the pion decay constant. However, it takes very large staggered lattices
to perform such an analysis reliably.
\vskip 0.5cm
\noindent {\bf Acknowledgments.}
I gratefully acknowledge all my collaborators in this
project. D. Toublan is thanked for a critical reading of the manuscript.
This work was partially supported by the US DOE grant
DE-FG-88ER40388.
hep-th/9912094
\section{Introduction}
\label{sec:intro}
\quad
Noncommutative Yang-Mills (NCYM) theory
has attracted much interest since it emerged
in the BPS solution of Matrix theory \cite{Connes,Aoki_NCYM},
and is realized as the effective theory
around one of the perturbative vacua
of superstring theory with constant $B$ background
\cite{Douglas_B}.
Prior to these recent developments,
noncommutative geometry and
the construction of field theories on it
had already been developed \cite{Connes2}.
Although the noncommutative geometry
appearing in the context of perturbative superstring theory
does not seem to have a connection with quantum gravity,
the general noncommutative geometry
may accommodate the essential ingredients of quantum geometry
(which we do not know at present)
by its new machinery.
Assuming the latter fact
we are naturally inclined to investigate
the quantum mechanical aspect of the theory
defined on the noncommutative geometry
in order to argue whether it really reflects
the microscopic structure of our world.
But the first thing to do
is to analyze the simplest toy model and
to establish that the theory is well defined
even in the ultraviolet region.
It is the most important subject
to find out how the infrared physics is affected,
and whether it provides a more desirable foundation for
constructing concrete models.
\\
\quad
NCYM theory is widely discussed as it appears in string theory.
The realization of such a theory
on the set of ordinary functions
replaces the product of two functions
by the ``star'' (hereafter ``$*$''-)
product.
Here we consider how the matter field
couples to the Yang-Mills field in such a manner that
the theory respects this rule of product
as well as the gauge symmetry.
The simplest candidate would be
a noncommutative counterpart of QED,
in which an electron with a definite charge is present.
As will be shown in Sec. \ref{sec:NC-QED}
the inclusion of matter does not allow many varieties.
Even though the base manifold considered here
is topologically trivial,
the charge is limited to three values: $0$, $+1$ and $-1$,
the precise meaning of which is defined
in Sec. \ref{sec:NC-QED}.
This is a stronger requirement
than the charge quantization on a compact manifold
such as a torus.
We expect that there would be a Hilbert space realization
for the theory with fields of charge $\pm 1$
on noncommutative geometry
and a mapping rule to
the function space on the usual coordinates
as was found in NCYM theory.
Thus we call the field with charge $+1$
the ``electron''
and the theory including such an object and the ``photon''
noncommutative QED (NC-QED) in this paper.
Note that
massive scalar fields receive quadratic divergences,
which make us lose control of the ultraviolet (UV) divergence
unless supersymmetry forbids their appearance.
In contrast,
the usual fermion mass receives only a logarithmic divergence
in four-dimensional QED or QCD.
Nonlocal generalization of the interactions
is then expected not to drastically
modify the divergent structure
as experienced in NCYM system,
which also needs further clarification.
Thus here the system involving fermions with charges
$\pm 1$ is examined primarily.
\\
\quad
Although NCYM system is related to
ordinary Yang-Mills system by a rather complex map
\cite{SW_NC},
there would be nontrivial quantum mechanical dynamics in NCYM
without maximal supersymmetry.
One aspect of NCYM theory
has been argued to describe
the large N ordinary Yang-Mills theory with
a fixed 't Hooft coupling constant
in the high momentum region \cite{Bigatti,Ishibashi_NCYM}.
It relies on
the analysis of diagrams and
the pattern of momentum-dependent phase factors
appearing in them.
A more direct connection through the correlators of
the gauge invariant operators would be welcome,
in spite of the curious nature of general Wilson loop operators
in noncommutative Yang-Mills theory
\cite{Ishibashi_NCYM,Maldacena_NCYM,Alishahiha_NCYM}.
But it is beyond our present scope.
\\
\quad
In order to access the infrared side,
the first thing we can do is to investigate
the perturbative aspects.
The perturbative analysis of the UV structure of NCYM theory
has been done in Refs. \cite{Martin_NCYM,Krajewski}.
The infrared side on NCYM as well as NC-QED
is our primary concern.
Naive continuation of the asymptotically free behavior
\cite{Martin_NCYM} indicates
the existence of
a dynamical scale analogous to $\Lambda_{\rm QCD}$
at which the coupling constant diverges,
while the naive commutative limit reduces the theory
to an abelian gauge theory.
The renormalization refers to the structures much higher than
the noncommutative energy scale.
Thus the commutative limit of the renormalized quantum theory
does not reduce it to its commutative counterpart
even in the low momentum region.
We would like to begin in this paper
with seeking the dynamics
which could not be reached in local field theory,
by first examining a simple model, NC-QED.
\\
\quad
The paper is organized as follows:
Sec. \ref{sec:NC-QED} is concerned with the construction
of the classical action involving the matter fields and showing
that the allowed choice is quite limited.
In Sec. \ref{sec:infra} noncommutative QED theory
is quantized and the anomalous magnetic dipole moment
is calculated
to see whether the radiative effect from
noncommutative extension appears in the finite quantities.
There the infrared behavior is further investigated
by observing the finite part of the vacuum polarization of photon.
The potentially appearing $1/\tilde{q}^2$-singularity
cancels among the diagrams,
but the logarithmic infrared singularity is found,
which also exists in NCYM theory.
Such a structure in the infrared side
is connected to the UV side.
The extension to the chiral gauge theory is also
examined in Sec.\ref{sec:chiral},
but it is found
that there is {\it no} chiral gauge theory.
Sec. \ref{sec:conc} is devoted to
the discussion and conclusion of the present paper.
\section{Noncommutative QED}
\label{sec:NC-QED}
\quad
The pure noncommutative U(1) Yang-Mills action
\begin{equation}
S_{YM} = \int d^d x\,
\left( -\frac{1}{4 g^2} \right)
F_{\mu\nu} *F^{\mu\nu}
\, ,
\label{eq:NYM_action}
\end{equation}
(The space-time dimension $d$ is finally set equal to four.)
is nothing but the one obtained from
the ordinary SU(N) Yang-Mills action
by replacing the matrix multiplication
with the ``star'' (hereafter referred to as $*$-)product
\begin{equation}
A * B(x) \equiv
\left.
e^{\frac{1}{2i} C^{\mu\nu}
\partial^{(\xi)}_\mu \partial^{(\eta)}_\nu}\,
A(x+\xi)\, B(x+\eta)
\right|_{\xi,\eta \rightarrow 0} \, ,
\end{equation}
with an antisymmetric matrix $C^{\mu\nu}$
which characterizes noncommutativity of space-time
by modifying the algebra of functions.
Even in U(1) case $A_\mu$ couples to itself
since the field strength $F_{\mu\nu}$ of $A_\mu$
has the nonlinear term
\begin{equation}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
-i [A_\mu,\, A_\nu]_{\rm M}\, ,
\end{equation}
where $[A,\, B]_{\rm M}$ denotes Moyal bracket:
\begin{equation}
[A,\, B]_{\rm M} = A * B - B * A\, .
\end{equation}
The $*$-product obeys the associative law
which is also satisfied by the matrix algebra
so that the manipulation in this case
resembles
the one experienced in matrix calculus.
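For polynomial functions the $*$-product series terminates, so its basic properties can be checked symbolically. The following sketch (our own illustration using sympy, with $C^{12}=\theta=-C^{21}$ the only nonzero entries) verifies both the noncommutativity of the coordinates and the associativity just mentioned:

```python
import sympy as sp

x, y = sp.symbols('x y')
theta = sp.Symbol('theta', real=True)   # theta plays the role of C^{12} = -C^{21}

def d(expr, nx, ny):
    # nx derivatives in x followed by ny derivatives in y
    for _ in range(nx):
        expr = sp.diff(expr, x)
    for _ in range(ny):
        expr = sp.diff(expr, y)
    return expr

def star(A, B, order=6):
    """Moyal *-product of polynomials in (x, y), with the convention
    exp[(1/2i) C^{mu nu} d_mu d_nu] used in the text; the series
    terminates once 'order' exceeds the polynomial degree."""
    total = sp.S(0)
    for n in range(order + 1):
        Pn = sum(sp.binomial(n, k) * (-1) ** k
                 * d(A, n - k, k) * d(B, k, n - k)
                 for k in range(n + 1))
        total += (1 / (2 * sp.I)) ** n / sp.factorial(n) * theta ** n * Pn
    return sp.expand(total)

commutator = star(x, y) - star(y, x)   # Moyal bracket [x, y]_M -> -I*theta
assoc = sp.expand(star(star(x, x * y), y) - star(x, star(x * y, y)))  # -> 0
```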
Thus it is quite simple to see
the action (\ref{eq:NYM_action}) is invariant
under
\begin{equation}
A_\mu(x) \rightarrow A^\prime(x)
= U(x) *A_\mu(x) *U^{-1}(x) + i U(x) *\partial_\mu U^{-1}(x)\, ,
\label{eq:gauge_A}
\end{equation}
where $U(x) = (e^{i\theta(x)})_*$ is defined
by an infinite series expansion of multiple $*$-product of
scalar function $\theta(x)$.
$U^{-1}(x) = (e^{-i\theta(x)})_*$
is similarly defined and plays the role
of the inverse of $U(x)$.
Here and hereafter we assume
that the fields decrease so promptly at infinity
that the space-time integral of a Moyal bracket
(which corresponds to the trace of the commutator
in the ``matrix language'') vanishes.
\\
\quad
The coupling of the ``electron'' to the gauge field
in noncommutative U(1) Yang-Mills theory
receives a severe restriction.
The gauge transformation (\ref{eq:gauge_A}) for the gauge
field shows that
the simple candidates for the interaction,
$\psi(x) * A_\mu(x)$ or $A_\mu(x) * \hat{\psi}(x)$,
imply that $\psi$ and $\hat{\psi}$ must transform as
\begin{equation}
\psi(x) \rightarrow \psi^\prime(x) = U(x) *\psi(x)\, ,
\quad
\hat{\psi}(x) \rightarrow \hat{\psi}^\prime(x)
= \hat{\psi}(x) *U(x)^{-1}\, ,
\end{equation}
respectively in order for each product
with gauge field to transform in the same
way as the original field.
The covariant derivative
\begin{equation}
D_\mu \psi = \partial_\mu \psi - i A_\mu *\psi, \quad
D_\mu \hat{\psi} = \partial_\mu \hat{\psi}
+ i \hat{\psi} *A_\mu\, ,
\end{equation}
also transforms covariantly in the same way
as the original objects.
Since the commutative limit leads to
the fields with $+1$ charge and $-1$ charge respectively
in ordinary QED,
we call the field $\psi$ in the above
a field with $+1$ charge, referred to hereafter as the ``electron''
(opposite to the usual convention).
Then the action
\begin{equation}
S_{\rm matter} = \int d^d x
\left(
\bar{\psi} *\gamma^\mu iD_\mu \psi - m \bar{\psi} *\psi
\right)
\, ,
\label{eq:matter_action}
\end{equation}
is invariant under local U(1) symmetry
since $\bar{\psi}$
\footnote{
To compute the form factor
of the on-shell electron coupling to photon
we consider a theory in Minkowski space.
Thus $\bar{\psi} = \psi^\dagger \gamma^0$.
}
behaves in the same manner as $\hat{\psi}$.
The field with charge $+1$($-1$) in the noncommutative case
would correspond to the (anti-)fundamental representation
in ordinary nonabelian gauge theory.
It is also reminiscent of the feature that
noncommutative gauge theory carries
the internal degrees of freedom by embedding them
into the space-time geometry itself.
This is the reverse process of the reduction
of the space-time degrees of freedom
into the internal ones in the large N gauge theory
\cite{Kawai_RM}.
When we pursue this correspondence further,
we are inclined to guess that
the higher-rank representation of SU(N) gauge theory
may convert into some matter fields
in noncommutative gauge theory.
It would be the counterpart of the fields
of integral multiple of unit charge
from the view point of
noncommutative generalization of U(1) gauge theory.
Actually the adjoint representation
corresponds to a field $\chi(x)$ with zero charge in total
but transforming by conjugation
\begin{equation}
\chi(x) \rightarrow \chi^\prime(x) =
U(x) *\chi(x) *U^{-1}(x)\, .
\end{equation}
Its covariant derivative is given by Moyal bracket.
However we cannot find the counterpart
of the second-rank antisymmetric representation, etc,
of SU(N) gauge theory.
The $*$-product admits only the fields with
charge $0$, $+1$ or $-1$.
As a by-product the vacuum expectation value
of the Wilson loop operator for a rectangular loop becomes
associated with the ground state energy
acting between two sources of charges $+1$ and $-1$,
as usual.
\section{Perturbative Aspects in Infrared Side}
\label{sec:infra}
\quad
We are interested in the quantum mechanical aspect of
the theory defined by the sum of (\ref{eq:NYM_action})
and (\ref{eq:matter_action})
\begin{equation}
S_{\rm NC-QED}
= \int d^d x\,
\left(
-\frac{1}{4 g^2} F_{\mu\nu} *F^{\mu\nu}
+
\bar{\psi} *\gamma^\mu iD_\mu \psi - m \bar{\psi} *\psi
\right)
\, .
\label{eq:NC-QED}
\end{equation}
Here we consider the theory on Minkowski space
with nonzero $C^{23}$ but vanishing $C^{01}$
in the canonical basis of antisymmetric matrix $C^{\mu\nu}$
for the later purpose of calculating
the anomalous magnetic dipole moment.
\\
\quad
Perturbation theory begins with rescaling
$A_\mu \rightarrow g A_\mu$ and gauge fixing.
BRST quantization as in ordinary QCD theory
leads to the gauge-fixing and Faddeev-Popov (FP) terms
\begin{equation}
S_{\rm GF} =
\int d^d x
\left(
-\frac{1}{2\alpha} \partial_\mu A^\mu *\partial_\nu A^\nu
+
\frac{1}{2}
\left(
i\bar{c} *\partial^\mu D_\mu c -
i\partial^\mu D_\mu c * \bar{c}
\right)
\right)\, .
\label{eq:NC-QED-GF}
\end{equation}
Quantization is defined by the perturbative expansion
based on the Feynman rules, as was done for NCYM theory
in Ref. \cite{Martin_NCYM},
but the rules now derive from the actions (\ref{eq:NC-QED})
and (\ref{eq:NC-QED-GF}).
\\
\quad
Extra ultraviolet divergent contributions arise
in addition to those already appearing in NCYM theory.
As we concentrate on reporting on the infrared phenomena
in this short article,
we postpone the detailed description
of the one-loop ultraviolet divergences
to a future extended article \cite{Hayakawa},
and state the results only briefly:
All the one loop ultraviolet divergence
can be subtracted by the local counterterms
with maintaining the equalities among
various $Z$ factors
(wave function renormalization constant, etc.)
required from gauge invariance.
For $N_F$ copies of the electron field,
the $\beta$ function becomes
\begin{equation}
\beta(g) = \frac{1}{g} Q \frac{dg}{dQ}
= -\left(
\frac{22}{3} - \frac{4}{3} N_F
\right)\, \frac{g^2}{16\pi^2}\, .
\label{eq:beta}
\end{equation}
The contribution $\frac{22}{3}$ is due to a structure similar
to that of nonabelian dynamics,
SU(2) Yang-Mills theory \cite{Martin_NCYM}.
However the matter contribution is that found
in ordinary QED theory
with unit charge, {\it not} that of the quarks belonging to
the fundamental representation of SU(2) gauge theory
(which would give $\frac{2}{3}$ per flavor instead of the $\frac{4}{3}$
found in (\ref{eq:beta})).
\\
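The coefficient in (\ref{eq:beta}) admits a tiny numerical comparison (our own illustrative sketch): asymptotic freedom ($b_0 > 0$) persists for $N_F \le 5$, and each electron flavor contributes twice as much as an SU(2) fundamental quark would:

```python
# One-loop coefficients b0, defined by beta = -b0 * g^2/(16 pi^2):
def b0_ncqed(nf):
    # NC-QED: gauge part 22/3, each electron flavor contributes 4/3
    return 22.0 / 3.0 - 4.0 / 3.0 * nf

def b0_su2_fund(nf):
    # ordinary SU(2) with nf fundamental quarks: 2/3 per flavor
    return 22.0 / 3.0 - 2.0 / 3.0 * nf

# asymptotic freedom (b0 > 0) survives in NC-QED for N_F <= 5
nf_max = max(nf for nf in range(1, 20) if b0_ncqed(nf) > 0)
```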
\quad
In this analysis
there is an important feature that should be kept in mind
for the analysis of the infrared aspects of the theory.
Only {\it planar} diagrams can have overall divergence.
Noncommutativity of the theory manifests itself
in the form of a momentum-dependent phase
associated with the vertex in Feynman rule.
A ``planar'' diagram is
a portion of the contributions
which has a definite phase factor
that depends only on the external momenta,
not on any loop momenta.
Once a loop momentum enters the phase factor,
a suppression factor which depends
on the external momentum through $C^{\mu\nu}$
is always induced.
This is the same feature already shared
by NCYM theory \cite{Filk,Bigatti,Ishibashi_NCYM}.
An explicit two-loop computation similar to that of
Ref. \cite{Arefeva} in $\phi^4$ theory would be welcome
to reveal the more detailed structure
of the present theory.
\\
\quad
To observe the infrared aspects of the theory,
the leading correction to the magnetic dipole coupling
is estimated.
The extraction of the dipole coupling from the
$\psi\bar{\psi}A_\mu$ vertex function yields
\begin{eqnarray}
i\left. \Gamma^\mu(p_I, p_F, q)\right|_{\rm dipole}
&=&
i g^3
\left[
e^{\frac{i}{2} p_I \cdot C \cdot p_F}\,H(1,p,q)
\right.
\nonumber \\
&& \quad \quad
+
\left.
e^{\frac{i}{2} p_I \cdot C \cdot p_F}\,H(0,p,q)
-
e^{-\frac{i}{2}p_I \cdot C \cdot p_F}\,H(1,p,q)
\right]\, mi\sigma^{\mu\nu} q_\nu \, ,
\label{eq:MDM_1}
\end{eqnarray}
where $q$ is the incoming photon momentum,
and $p$ is connected to the incoming electron momentum
$p_I$ and the outgoing electron momentum $p_F$ through
\begin{equation}
p_I = p - \frac{q}{2}, \quad p_F = p + \frac{q}{2}\, .
\end{equation}
The matrix $\sigma^{\mu\nu}$ is here
$\sigma^{\mu\nu}
= \frac{i}{2} \left[ \gamma^\mu, \gamma^\nu \right]$.
The function $H(\eta,p,q)$ appearing in (\ref{eq:MDM_1}) is
\begin{eqnarray}
H(\eta,p,q) &=& \int_0^{\infty} id\alpha_0
\int_0^{\infty} id\alpha_+ \int_0^{\infty} id\alpha_-
\frac{1}{[4\pi\beta i]^2}
\nonumber \\
&& \quad \quad
\times 2
\left(
\frac{\alpha_+ + \alpha_-}{\beta}
-
\left(
\frac{\alpha_+ + \alpha_-}{\beta}
\right)^2
\right)
\nonumber \\
&& \quad \quad
\times
\exp \left[
-i\frac{1}{\beta}
\left\{
(\alpha_+ + \alpha_-)^2 m^2
+ \alpha_+ \alpha_- (-q^2)
\right.
\right.
\nonumber \\
&& \qquad \qquad \qquad \qquad
\left.
\left.
- \eta (\alpha_+ + \alpha_-) (p \cdot \tilde{q})
+ \eta^2 \frac{\tilde{q}^2}{4}
\right\}
\right]\, ,
\label{eq:ex_H}
\end{eqnarray}
where $\beta = \alpha_0 + \alpha_+ + \alpha_-$, and
$\tilde{q}^\mu = C^{\mu\nu} q_\nu$
has the dimension of length.
$H(0,p,q)$ becomes $\frac{1}{16\pi^2m^2}$ for an on-shell photon.
$H(1,p,q)$ can be written in terms of
a modified Bessel function of the second kind $K_1(x)$
\cite{math_formula}
\begin{eqnarray}
H(1,p,q)
&=&
\frac{1}{8\pi^2}
\int_0^1 d\alpha_+ \int_0^{(1-\alpha_+)} d\alpha_-
\frac{(\alpha_+ + \alpha_-) - (\alpha_+ + \alpha_-)^2}
{(\alpha_+ + \alpha_-)^2 m^2
+ \alpha_+ \alpha_- (-q^2)}
\nonumber \\
&& \qquad \qquad \qquad \qquad \qquad \quad
\times
e^{i(\alpha_+ + \alpha_-) (p\cdot \tilde{q})}\,
x\,K_1(x)
\, ,
\end{eqnarray}
where $x = \sqrt{(-\tilde{q}^2)
\left\{
(\alpha_+ + \alpha_-)^2 m^2
+ \alpha_+ \alpha_- (-q^2)
\right\}}$.
Since $K_1(x) \sim \frac{1}{x}$ for $x\sim 0$
we can take $q^2$ and $\tilde{q}^2$
\footnote{
Since we consider the situation in which only $C^{23}$ is nonzero,
there is an on-shell photon with a finite spatial momentum.
}
to zero without encountering any singularities.
Thus, for $q^2=0$ and $\tilde{q}^2=0$,
$H(1,p,q)$ is equal to $H(0,p,q)$.
Therefore the leading correction to the magnetic dipole moment
is the same for ordinary QED and NC-QED.
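This limiting value can be cross-checked numerically (our own sketch, setting $m=1$): with $xK_1(x)\to 1$ and the phase removed, the parameter integral for $H(1,p,q)$ indeed collapses to $H(0,p,q) = 1/(16\pi^2 m^2)$:

```python
import numpy as np

# Midpoint-rule check that for q^2 -> 0 and qtilde^2 -> 0 (x K_1(x) -> 1,
# phase -> 1) the double integral reduces to H(0) = 1/(16 pi^2 m^2); m = 1.
n = 1200
g = (np.arange(n) + 0.5) / n
ap, am = np.meshgrid(g, g, indexing='ij')
s = ap + am
inside = s < 1.0                           # simplex alpha_+ + alpha_- < 1
integrand = np.where(inside, (s - s**2) / s**2, 0.0)
H_limit = integrand.sum() / n**2 / (8.0 * np.pi**2)
H0 = 1.0 / (16.0 * np.pi**2)               # H_limit agrees with H0 to ~0.1%
```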
But a non-zero photon momentum in the direction
transverse to the $(2,3)$ plane is allowed.
Eq. (\ref{eq:MDM_1}) shows
that the strength of the magnetic dipole coupling
is affected for such a photon in general.
\\
\quad
We examine the infrared behavior of
the renormalized vertex functions,
especially the vacuum polarization of the photon.
As the analysis is lacking even for NCYM theory
\footnote{
See also Ref. \cite{Seiberg}.
},
the common contributions,
i.e.\ the FP-ghost loop and the gauge boson loops, are examined here.
Taking Feynman gauge for simplicity,
they can be written in terms of Schwinger parameterization
\cite{Itzykson}
\begin{eqnarray}
&&
i\Pi^{\mu\nu}_{\rm ghost + 33}(q) =
ig^2 \int_0^\infty id\alpha_+ \int_0^\infty id\alpha_-
\frac{1}{(4\pi\beta i)^{d/2}}
\exp\left[
-i \frac{\alpha_+ \alpha_-}{\beta} (-q^2)
\right]
\nonumber \\
&& \qquad \quad \times
\left[
\left(
1 - \exp\left[ -i \frac{1}{\beta} \frac{\tilde{q}^2}{4} \right]
\right)
\times
\left\{
g^{\mu\nu} \left(
(3d-4) i\frac{1}{\beta} +
\left(
5-2\frac{\alpha_+ \alpha_-}{\beta^2}
\right) q^2
\right)
\right.
\right.
\nonumber \\
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad
+ \left.
q^\mu q^\nu
\left(
(d-6) - 4(d-2) \frac{\alpha_+\alpha_-}{\beta^2}
\right)
\right\}
\nonumber \\
&& \qquad \qquad \qquad
\left.
+ \exp\left[ -i \frac{1}{\beta} \frac{\tilde{q}^2}{4} \right]
\times \frac{1}{\beta^2} \times
\left\{
- \frac{1}{2} g^{\mu\nu} \tilde{q}^2
+ (2-d) \tilde{q}^\mu \tilde{q}^\nu
\right\}
\right]\, ,
\nonumber \\
&&
i\Pi^{\mu\nu}_{4}(q) =
2(d-1) i g^2 g^{\mu\nu}
\int_0^\infty id\alpha \frac{1}{[4\pi\alpha i]^{d/2}}
\left(
1 - \exp\left[ -i \frac{1}{\alpha}
\frac{\tilde{q}^2}{4}
\right]
\right)\, ,
\label{eq:VP}
\end{eqnarray}
where $\beta = \alpha_+ + \alpha_-$.
The first quantity in eq. (\ref{eq:VP})
comprises the contributions from the ghost loop
and the gluon loop induced through the two trilinear
gauge couplings.
The other quantity is due to
the quartic self-interaction of the gauge boson.
The evaluation is similar to that
found in Ref. \cite{Itzykson}.
The exponential factor
$\exp[-i \tilde{q}^2/(4\beta)]$ acts as the cutoff for
the ultraviolet divergence.
The latter quantity in eq. (\ref{eq:VP})
is calculated as:
\begin{equation}
i\Pi^{\mu\nu}_4(q) =
i \frac{g^2}{16\pi^2} g^{\mu\nu} \frac{-24}{-\tilde{q}^2}\, ,
\label{eq:VP_4}
\end{equation}
containing a hard singularity $1/\tilde{q}^2$.
It would be cancelled by
the term from $\Pi^{\mu\nu}_{\rm ghost + 33}(q)$.
To evaluate it we need to perform the integrals
\begin{equation}
\int_0^\infty \frac{d\rho}{\rho^{n+1}}
\exp\left( -\rho - \frac{1}{\rho} a^2 \right)
= \left( -\frac{1}{2a} \frac{d}{da} \right)^n [2K_0(2a)]\, ,
\label{eq:int_Bessel}
\end{equation}
where $a^2$ is proportional to $\tilde{q}^2 q^2$ in
the present context.
Using
the asymptotic behavior of $K_0(x)$ around $x\sim 0$
available in the mathematical literature \cite{math_formula},
we can derive the useful formulas
\begin{eqnarray}
\int_0^\infty \frac{d\rho}{\rho}
\exp\left( -\rho - \frac{1}{\rho} a^2 \right)
&=&
-{\rm ln}(a^2)
\left( 1 + a^2 + {\cal O}(a^4) \right)
- 2 \gamma_E + (-2\gamma_E + 2) a^2
\nonumber \\
&& \quad
+ {\cal O}(a^4)\, ,
\nonumber \\
\int_0^\infty \frac{d\rho}{\rho^2}
\exp\left( -\rho - \frac{1}{\rho} a^2 \right)
&=&
{\rm ln}(a^2)
\left(
1 + \frac{1}{2} a^2 + {\cal O}(a^4)
\right)
\nonumber \\
&& \quad
+ \frac{1}{a^2}
+
\left(
2 \gamma_E - 1
\right)
+ \left( \gamma_E - \frac{5}{4} \right) a^2
+ {\cal O}(a^4)\, ,
\nonumber \\
\int_0^\infty \frac{d\rho}{\rho^3}
\exp\left( -\rho - \frac{1}{\rho} a^2 \right)
&=&
-{\rm ln}(a^2)
\left(
\frac{1}{2} + \frac{1}{6} a^2 + {\cal O}(a^4)
\right)
+ \frac{1}{a^4} - \frac{1}{a^2}
\nonumber \\
&& \quad
+ \left(
-\gamma_E + \frac{3}{4}
\right)
+ \left(
-\frac{1}{3} \gamma_E + \frac{39}{36}
\right) a^2 + {\cal O}(a^4)\, .
\label{eq:int_formula}
\end{eqnarray}
where $\gamma_E$ is the Euler constant.
These formulae allow us to compute the singularity
of $\Pi^{\mu\nu}_{\rm ghost + 33}(q)$ for small $\tilde{q}^2$.
The result contains the following singular terms in the infrared:
\begin{equation}
i \Pi^{\mu\nu}_{\rm ghost + 33}(q) \sim
i \frac{g^2}{16\pi^2}
\left\{
g^{\mu\nu} \frac{24}{-\tilde{q}^2}
+
\frac{10}{3}
\left(
g^{\mu\nu} q^2 - q^\mu q^\nu
\right) {\rm ln} (q^2 \tilde{q}^2)
+ 32 \frac{\tilde{q}^\mu \tilde{q}^\nu}{\tilde{q}^4}
- \frac{4}{3} \frac{q^2}{\tilde{q}^2}
\tilde{q}^\mu \tilde{q}^\nu
\right\} \, .
\label{eq:singular}
\end{equation}
Therefore the $1/\tilde{q}^2$ singularity
cancels in the sum of (\ref{eq:VP_4}) and (\ref{eq:singular}):
\begin{equation}
i \Pi^{\mu\nu}(q) \sim
i \frac{g^2}{16\pi^2}
\left\{
\frac{10}{3}
\left(
g^{\mu\nu} q^2 - q^\mu q^\nu
\right) {\rm ln} (q^2 \tilde{q}^2)
+ 32 \frac{\tilde{q}^\mu \tilde{q}^\nu}{\tilde{q}^4}
- \frac{4}{3} \frac{q^2}{\tilde{q}^2}
\tilde{q}^\mu \tilde{q}^\nu
\right\} \, ,
\label{eq:singular_sum}
\end{equation}
which is consistent with the Slavnov--Taylor identity.
The nonplanar contribution would diverge
if the integral in eq. (\ref{eq:VP})
were evaluated with $\tilde{q}^2$ set equal to zero.
The logarithmic infrared singularity ${\rm ln}(\tilde{q}^2)$
in (\ref{eq:singular}) reflects
the fact that the UV divergence is at most logarithmic
\footnote{
D. Bigatti and L. Susskind have argued for this structure
of singularities \cite{Bigatti}.
}.
In fact the coefficient $10/3$ of ${\rm ln}(\tilde{q}^2)$
is the one that appears in the wave function renormalization
factor of the photon in NCYM theory
\begin{equation}
\left. Z_A \right|_{\rm NCYM}
= 1 +
\frac{g^2}{16\pi^2} \frac{10}{3}
\frac{1}{\varepsilon^\prime}\, ,
\label{eq:Z_A}
\end{equation}
where $1 /\varepsilon^\prime = 1 /\varepsilon + \gamma_E
- {\rm ln}(4\pi)$
for the space-time dimension $d=4 - 2\varepsilon$.
The appearance of the combination $q^2 \tilde{q}^2$ in the
logarithmic correction confirms this.
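As a numerical cross-check of the integral machinery above (an illustrative sketch, not part of the original derivation), the $n=0$ case of eq. (\ref{eq:int_Bessel}) and the corresponding small-$a$ expansion in eq. (\ref{eq:int_formula}) can be verified with SciPy; the looser tolerance on the series reflects the neglected ${\cal O}(a^4)$ terms.

```python
# Numerical check of the n = 0 proper-time integral against its
# closed form 2*K_0(2a) and against the quoted small-a expansion.
# Illustrative only; a^2 stands for the quantity proportional to
# q^2 * qtilde^2 in the text.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

a = 0.05

def integrand(rho):
    # exp(-rho - a^2/rho) / rho; vanishes rapidly as rho -> 0
    return np.exp(-rho - a**2 / rho) / rho

# A lower limit slightly above 0 avoids the removable 0/0 at the origin.
integral, _ = quad(integrand, 1e-12, np.inf, limit=200)

closed_form = 2.0 * k0(2.0 * a)

gamma_E = np.euler_gamma
series = (-np.log(a**2) * (1 + a**2)
          - 2 * gamma_E + (2 - 2 * gamma_E) * a**2)

print(integral, closed_form, series)
```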
\section{Chiral Gauge Theory}
\label{sec:chiral}
\quad
Until now all the fermions have been
assumed to be Dirac fermions.
It is natural
to pursue the extension to chiral gauge theory.
The classical analysis given in Sec. \ref{sec:NC-QED}
does not depend on the chirality of the fermion.
Thus Weyl fermions can have the charge $+1$ or $-1$.
The right-handed fermion with $+1$ charge
is easily seen to be replaceable by its CP conjugate
(left-handed) fermion also in the present context.
A chiral gauge theory
simply means that
the number of left-handed fermions with $+1$ charge
is not equal to the number with $-1$.
The question is whether such a theory can
circumvent the triangle loop anomaly
and define a consistent quantum theory.
\\
\quad
It is easy to see that the triangle loop diagram
is planar.
Once we recall the correspondence between the present theory
and an ordinary nonabelian gauge system,
in which the external momentum
plays the role of color in Yang-Mills theory,
the remaining integral
is evaluated in exactly the same manner
as in an ordinary nonabelian gauge theory
involving fundamental and/or anti-fundamental Weyl fermions.
From this observation,
the number of left-handed fermions with
$-1$ charge has to match the number of fermions
with $+1$ charge in the system
\footnote{
We require that
the triangle loop contributions cancel each other
for {\it all} momentum configurations.
This might, however, be too strong a requirement
for noncommutative SU(N) gauge theory
due to the non-factorizability of the color and phase factors,
as suggested by Y. Kitazawa.
}.
Such a theory is vector-like, i.e.,
the noncommutative QED considered in the previous sections.
\section{Conclusion}
\label{sec:conc}
\quad
In this paper we have attempted to find
the noncommutative analogue of QED
and have discussed the perturbative aspects
of its infrared dynamics.
The anomalous magnetic dipole moment
does not change at leading order
for a photon moving in the direction
along which noncommutativity is irrelevant.
However, the form factor seems to indicate
that the breaking of Lorentz invariance SO(1,3)
could be observed
by controlling the direction of the photon,
although the conventional measurement environment averages
over the photon direction.
In order to discuss quantum aspects of the theory,
it would be best
to calculate and investigate
the radiatively corrected cross sections
for electron-positron annihilation,
M{\o}ller scattering,
Compton scattering, or photon-photon scattering processes
with explicitly specified helicities of the external states.
\\
\quad
In preparation for this analysis,
the finite part of the vacuum polarization
of the photon has been calculated to study
its low-momentum behavior in more detail.
The absence of a $1/\tilde{q}^2$ term is connected
with the absence of a quadratic UV divergence.
More precisely, the magnitude of the infrared singularity
is the same as that of the logarithmic UV divergence.
\\
\quad
The requirement that the anomalous diagrams cancel
in total is too strong for a chiral gauge theory
to exist in the present context.
It is an interesting and important subject
to pursue the possibility of relaxing this requirement.
\\
\\
{\bf \Large Note added}
\\
\\
\quad
During the preparation of this paper,
we found a preprint \cite{Seiberg}, reported
a few days earlier,
which discusses
a subject partly overlapping with the present paper.
The results here coincide with those obtained there.
\\
\quad
\\
{\bf \Large Acknowledgements}
\quad
\\
\\
\quad
The author especially thanks
S. Iso for frequent discussions and suggestions
and for reading the manuscript
several times.
He also thanks L. Susskind
for pointing out mistakes in the previous version
of the manuscript;
N. Ishibashi, Y. Kitazawa, K. Okuyama and F. Sugino
for teaching him about noncommutative theory;
and his colleagues at KEK for sharing common interests
in this theory and various related topics
at a weekly informal meeting.
\section{Introduction}
The $K$-correction was originally defined in the work of \cite{Humason:1956}. As initially defined, it was limited to filter transforms from an observer frame photometric filter to the same filter in the galaxy's rest frame. Later work generalized this concept to include transforms to other rest frame observations \citep[for example, ][]{Blanton:2003K}. There is a thorough summary of the state of the art of $K$-corrections in \cite{Hogg:2002}. $K$-corrections are primarily useful when a large number of objects need to be characterized and there is not sufficient data about all of them to fully specify the spectral energy distribution (SED) of each object, or when theoretical knowledge of the objects' SEDs is deficient. Put another way, $K$-corrections are the correct approach to take when the uncertainty in the predictions of the theoretical model exceeds the uncertainty of performing a filter transformation on a small number of observations. What has been missing in the literature, thus far, is an objective specification of which observation frame filter to choose to perform this transformation when multiple close filters are available, or, even better, how to combine two or more filters to increase the signal to noise ratio (SNR) of the resulting measurement.
The answer to both of the questions above, how to combine and which filters to choose, must be informed by an approximation of the contribution of the $K$-correction process to the uncertainty of the corrected measurement. It is also important to consider how systematic differences between fluxes measured using different filters can add differing biases. In particular, the biases in photometric measurements taken at different wavelengths will usually vary because of wavelength dependent background and resolution effects. A further consideration is whether the post $K$-correction measurements need to be statistically independent (for example, for the construction of color-magnitude diagrams). Assuming systematic consistency is desirable beyond maximizing the SNR of every individual measurement, the answer to the question of which filters to $K$-correct and combine using an inverse variance weighted average is whichever filters produce $K$-corrected quantities with sufficiently high combined SNR for the data set as a whole. An example of this sort of consideration: Sloan Digital Sky Survey (SDSS) $i$ band measurements have significantly higher SNR than $z$ band ones, so it may yield more precise results for the data set as a whole to $K$-correct from observer frame $i$ to rest frame $z$ than from $z$ to $z$, even though the $z$ to $z$ correction can be smaller for a large number of galaxies.
Including information about the uncertainty added by the $K$-correction offers an improvement on the present state in the literature where filters are often chosen for $K$-correction based only on nearness of filters, regardless of whether the $K$-correction would move the flux across a spectral break with a wide range of strengths in galaxies' SEDs (for example, the 4,000\,\AA\ break).
The structure of this paper is as follows: Section~\ref{sec:thry} contains a short derivation of the propagation of errors level (Gaussian statistics) uncertainty in the $K$-correction, Section~\ref{sec:data} describes the data used to measure the SED covariance function on galaxies (overall, red, blue, and Active Galactic Nuclei [AGN]), and Section~\ref{sec:results} summarizes the results of the measurement.
The cosmology used in this paper is based on the WMAP 9 year $\Lambda$CDM cosmology \citep{Hinshaw:2013}\footnote{\url{http://lambda.gsfc.nasa.gov/product/map/dr5/params/lcdm_wmap9.cfm}}, with flatness imposed, yielding: $\Omega_M = 0.2793,\ \Omega_\Lambda = 1 - \Omega_M$, and $H_0 = 70 \operatorname{km} \operatorname{s}^{-1} \operatorname{Mpc}^{-1}$ (giving Hubble time $t_H = H_0^{-1} = 13.97\operatorname{Gyr}$, and Hubble distance $D_H = c t_H = 4.283 \operatorname{Gpc}$). All magnitudes quoted are in the AB system, unless otherwise stated.
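The derived quantities quoted above follow directly from $H_0$; as a purely arithmetic check (illustrative, using standard unit conversions), they can be reproduced as follows.

```python
# Recompute the Hubble time and Hubble distance quoted in the text
# from H0 = 70 km/s/Mpc.  This is only a check of the quoted numbers.
KM_PER_MPC = 3.0856775814913673e19   # kilometres in one megaparsec
S_PER_GYR = 3.15576e16               # seconds in one gigayear (Julian)
C_KM_S = 299792.458                  # speed of light, km/s

H0 = 70.0                            # km/s/Mpc

t_H_Gyr = (KM_PER_MPC / H0) / S_PER_GYR   # t_H = 1/H0
D_H_Gpc = (C_KM_S / H0) / 1.0e3           # D_H = c/H0, Mpc -> Gpc

print(round(t_H_Gyr, 2), round(D_H_Gpc, 3))
```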
\section{Theory} \label{sec:thry}
The general form of the $K$-correction used here, adapted from Equation~9 of \cite{Hogg:2002} by inverting a fraction and changing variables in an integral, is shown in Equation~\ref{eqn:thry:Kcorr}:
\begin{align}
K_{\mathrm{ratio}} & = \frac{1}{1+z} \left( \frac{\int \frac{\d \nu}{\nu} L_\nu(\nu)Q(\nu)}{\int \frac{\d \nu}{\nu} g^Q_\nu(\nu) Q(\nu)} \right) \nonumber \\
&\hphantom{=}\times \left( \frac{\int \frac{\d \nu}{\nu} g^R_\nu(\nu) R(\nu)}{ \int \frac{\d \nu}{\nu} L_\nu(\nu) R\left(\frac{\nu}{1+z}\right) } \right) , \label{eqn:thry:Kcorr}
\end{align}
where $R(\nu)$ is the observer frame detector's relative response to a photon of frequency $\nu$ (the Relative Photon Response [RPR]), $g^R_\nu(\nu)$ is the spectral energy distribution (SED) of the standard/zero point source of the observer's instrument, $L_\nu(\nu)$ is the rest frame luminosity SED of the source, and $Q(\nu)$ is the RPR of the instrument being $K$-corrected to (often $Q = R$). The usual definition of the $K$-correction is in terms of magnitudes, and in that case $K = 2.5 \log_{10} (K_{\mathrm{ratio}})$. The content of Equation~\ref{eqn:thry:Kcorr} can be summarized, in the notation of functional calculus, as:
\begin{align}
K_{\mathrm{ratio}} & = \frac{1}{1+z} \left( \frac{L_{Qe}[L_\nu]}{L_{Ro}[L_\nu]}\right), \label{eqn:thry:Ksumm}
\end{align}
where $e$ and $o$ are added to the subscripts to emphasize that they are calculated in emitted frame and observer frame, respectively.
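As a concreteness check (an illustrative sketch, not part of this paper's analysis), Equation~\ref{eqn:thry:Kcorr} can be evaluated numerically for a pure power-law SED, $L_\nu \propto \nu^\alpha$, with a flat (AB-like) zero-point spectrum and $Q = R$; in that case the zero-point integrals cancel and the correction reduces analytically to $K_{\mathrm{ratio}} = (1+z)^{-(1+\alpha)}$. The Gaussian filter below is an arbitrary stand-in for a real response curve.

```python
# Evaluate K_ratio for a power-law SED through a toy Gaussian filter
# and compare with the analytic result (1+z)^(-(1+alpha)).
# With Q = R and a flat g_nu, the zero-point integrals cancel.
import numpy as np
from scipy.integrate import quad

alpha, z = -0.7, 0.4
nu0, width = 1.0, 0.1            # toy filter centre and width (arbitrary)

def R(nu):
    """Relative photon response of the (assumed) filter."""
    return np.exp(-0.5 * ((nu - nu0) / width) ** 2)

# int dnu/nu L_nu(nu) Q(nu), with Q = R (rest-frame integral)
num = quad(lambda nu: nu**alpha * R(nu) / nu,
           0.05, 5.0, points=[nu0], limit=200)[0]
# int dnu/nu L_nu(nu) R(nu/(1+z)) (observer-frame integral)
den = quad(lambda nu: nu**alpha * R(nu / (1 + z)) / nu,
           0.05, 5.0, points=[(1 + z) * nu0], limit=200)[0]

K_ratio = num / den / (1 + z)
K_exact = (1 + z) ** (-(1 + alpha))
print(K_ratio, K_exact)
```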
Both $L_{Qe}[L_\nu]$ and $L_{Ro}[L_\nu]$ are what are known as `functionals' of $L_\nu$: functions that map an entire function to the real numbers. In particular, they fit into the class of linear functionals that have the general form:
\begin{align}
f[L_\nu] & = \int w(\nu)\, L_\nu(\nu) \d \nu.
\end{align}
As long as the function $w$ is non-negative, and therefore falls into the class of weighting functions, then the form and units that $w$ has dictates the interpretation of the functional $f$. If $w = 1$, then $f$ is the bolometric luminosity. If $w(\nu) \propto \delta(\nu - \nu')$, then $f$ is proportional to a spectral luminosity. Most commonly in astronomy the weighting function is a detector's response to a photon, $w(\nu) \propto R(\nu) / \nu$. The weight function can also be proportional to $r^{-2}$, the inverse square of the distance, in which case all of the aforementioned quantities are fluxes instead of luminosities.
The important part of the previous paragraph, establishing notation aside, is that the linearity of $L_{Qe}$ and $L_{Ro}$ combines with the form of Equation~\ref{eqn:thry:Ksumm} to make $K_{\mathrm{ratio}}$ completely independent of the normalization of the SED. For concreteness, we define the normalization luminosity and the normalized SED, respectively, in terms of $w_N(\nu)$ to be:
\begin{align}
L_N & \equiv \int w_N(\nu)\, L_\nu(\nu) \d \nu,\ \mathrm{and}\nonumber \\
\ell_\nu(\nu) & \equiv \frac{L_\nu(\nu)}{L_N}.
\end{align}
Because the $K$-correction in Equation~\ref{eqn:thry:Ksumm} is also a functional of the SED, it is necessary to adapt standard multi-dimensional propagation of errors to functional calculus to calculate the uncertainty in $K_{\mathrm{ratio}}$. In multiple dimensions the propagation of errors formula that relates the covariance of some quantities, $\vec{x}$, to a vector valued function of those quantities, $\vec{f}(\vec{x})$, is:
\begin{align}
\operatorname{cov}(f_i, f_j) & = \sum_{m, n} \frac{\partial f_i(\vec{x})}{\partial x_m} \frac{\partial f_j(\vec{x})}{\partial x_n} \operatorname{cov}(x_m, x_n). \label{eqn:thry:propcount}
\end{align}
Equation~\ref{eqn:thry:propcount} generalizes directly to functional calculus:
\begin{align}
\operatorname{cov}(f_i, f_j) & = \int \frac{\delta f_i[\ell_\nu]}{\delta \ell_\nu(\nu)} \frac{\delta f_j[\ell_\nu]}{\delta \ell_\nu(\nu')} \Sigma(\nu, \nu') \d \nu \d \nu', \label{eqn:thry:properrs}
\end{align}
where $\Sigma(\nu, \nu')$ is the two point function of normalized SEDs in the class of galaxies being $K$-corrected; symbolically,
\begin{align}
\mu_\nu(\nu) & \equiv \left\langle \ell_\nu(\nu) \right\rangle, \ \mathrm{and} \nonumber \\
\Sigma(\nu, \nu') & = \left\langle \left(\ell_\nu(\nu) - \mu_\nu(\nu) \right) \left(\ell_\nu(\nu') - \mu_\nu(\nu') \right) \right\rangle, \label{eqn:thry:sigdef}
\end{align}
where $\mu_\nu(\nu)$ is the mean SED.
The formula in Equation~\ref{eqn:thry:properrs} is more general than is actually required because all fluxes and luminosities are linear functions of the SED, not general ones. So, if a set of luminosities is defined by positive semi-definite weight functions, $L_i = \int w_i(\nu)\, L_\nu(\nu) \d \nu$, then:
\begin{align}
\operatorname{cov}(L_i, L_j) & = L_N^2 \int w_i(\nu) w_j(\nu') \Sigma(\nu, \nu') \d \nu \d \nu'. \label{eqn:thry:covlumas}
\end{align}
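On a discrete frequency grid, Equation~\ref{eqn:thry:covlumas} is an exact statement of how covariances transform under linear maps, which can be illustrated with synthetic SEDs (random stand-ins, not real data, with $L_N$ absorbed into $\ell_\nu$):

```python
# Discrete-grid illustration of cov(L_i, L_j) = L_N^2 int int w_i w_j Sigma:
# for luminosities that are linear functionals of the SED, the sample
# covariance of the luminosities equals the weight-contracted sample
# covariance of the SEDs (here L_N = 1; SEDs are random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
n_gal, n_bins = 500, 40
dnu = 0.1                                   # grid spacing

ell = rng.lognormal(size=(n_gal, n_bins))   # mock SEDs, one row per galaxy
w1 = rng.random(n_bins)                     # two weight functions
w2 = rng.random(n_bins)

L1 = ell @ w1 * dnu                         # L_1 = int w_1 ell dnu
L2 = ell @ w2 * dnu

Sigma = np.cov(ell, rowvar=False)           # sample SED covariance matrix
lhs = np.cov(L1, L2)[0, 1]                  # direct covariance
rhs = dnu**2 * (w1 @ Sigma @ w2)            # contracted covariance
print(lhs, rhs)
```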
All of the tools are in place to produce the covariance of multiple $K$-corrections using the propagation of errors formalism. First, the variance of a single $K$-correction is:
\begin{align}
\operatorname{var}(K_{\mathrm{ratio}} ) & = K_{\mathrm{ratio}}^2 \left(\frac{\operatorname{var}(L_{Q})}{L_{Q}^2} + \frac{\operatorname{var}(L_{R})}{L_{R}^2} \right. \nonumber \\
&\hphantom{=K_{\mathrm{ratio}}^2 (-} \left.- 2 \frac{\operatorname{cov}(L_{Q},\, L_{R})}{L_{Q}\, L_{R}} \right), \label{eqn:thry:singlvar}
\end{align}
where the variances and covariance are calculated by applying Equation~\ref{eqn:thry:covlumas}. If multiple quantities are being $K$-corrected, then the covariance matrix among the $K$-corrections takes the form:
\begin{widetext}
\begin{align}
\operatorname{cov}(K_i,\, K_j) & = K_i K_j \left( \frac{\operatorname{cov}(L_{Qi},\, L_{Qj})}{L_{Qi}\, L_{Qj}} - \frac{\operatorname{cov}(L_{Qi},\, L_{Rj})}{L_{Qi}\, L_{Rj}} - \frac{\operatorname{cov}(L_{Ri},\, L_{Qj})}{L_{Ri}\, L_{Qj}} + \frac{\operatorname{cov}(L_{Ri},\, L_{Rj})}{L_{Ri}\, L_{Rj}} \right). \label{eqn:thry:fullcovar}
\end{align}
\end{widetext}
The spectral versions of Equations~\ref{eqn:thry:singlvar} and \ref{eqn:thry:fullcovar} are:
\begin{widetext}
\begin{align}
\operatorname{var}(K_{\mathrm{ratio}} ) & = K_{\mathrm{ratio}}^2 \left(\frac{\Sigma(\nu_Q,\, \nu_Q)}{\ell_\nu(\nu_Q)^2} + \frac{\Sigma(\nu_R,\, \nu_R)}{\ell_\nu(\nu_R)^2} - 2 \frac{\Sigma(\nu_Q,\, \nu_R)}{\ell_\nu(\nu_Q)\, \ell_\nu(\nu_R)} \right),\ \mathrm{and} \\
\operatorname{cov}(K_i,\, K_j) & = K_i K_j \left( \frac{\Sigma(\nu_{Qi},\, \nu_{Qj})}{\ell_\nu(\nu_{Qi})\, \ell_\nu(\nu_{Qj})} - \frac{\Sigma(\nu_{Qi},\, \nu_{Rj})}{\ell_\nu(\nu_{Qi})\, \ell_\nu(\nu_{Rj})} - \frac{\Sigma(\nu_{Ri},\, \nu_{Qj})}{\ell_\nu(\nu_{Ri})\, \ell_\nu(\nu_{Qj})} + \frac{\Sigma(\nu_{Ri},\, \nu_{Rj})}{\ell_\nu(\nu_{Ri})\, \ell_\nu(\nu_{Rj})} \right), \label{eqn:thry:spectralcovar}
\end{align}
\end{widetext}
respectively. It is worth reinforcing that the luminosity used for normalization, $L_N$, must be the same for calculating $\ell_\nu(\nu)$ and $\Sigma(\nu,\,\nu')$, as is required for the covariance of $K$-corrections to be as independent of normalization as the $K$-correction itself is.
If either $Q$ or the observer frame $R$ are proportional to the function that defines the SED normalization luminosity, then the form of Equation~\ref{eqn:thry:singlvar} simplifies greatly:
\begin{align}
\operatorname{var}(K_{\mathrm{ratio}} ) & = K_{\mathrm{ratio}}^2 \frac{\operatorname{var}(L)}{L_N^2},
\end{align}
with a further simplification when the luminosity $L$ is a spectral luminosity at the frequency $\nu$:
\begin{align}
\operatorname{var}(K_{\mathrm{ratio}} ) & = K_{\mathrm{ratio}}^2 \Sigma(\nu,\nu).\label{eqn:thry:spectral2p4}
\end{align}
The units in Equation~\ref{eqn:thry:spectral2p4} look a little odd because it is being evaluated in the special case where $K_{\mathrm{ratio}} \propto L_\nu / L_N = \ell_\nu(\nu)$, or its multiplicative inverse, and therefore $\ell_\nu(\nu)$ is unitless by construction, making $\Sigma(\nu,\,\nu')$ unitless also.
The reason for exploring the simplified versions of the variance of $K_{\mathrm{ratio}}$ is that it highlights the centrality of $\Sigma(\nu,\nu')$ to the considerations here. Because of this, its properties and the process of measuring it merit closer examination. $\Sigma$ has the property, clear by inspection of its definition, that it is symmetric under interchange of frequencies, $\Sigma(\nu, \nu') = \Sigma(\nu', \nu)$. Less obvious is that $\Sigma$ has nodal lines that originate from the fact that all of the SEDs have to satisfy the normalization condition defined for $\ell_\nu(\nu)$. If the normalization luminosity is defined by the function $w_N(\nu)$, then the conditions imposed on the SEDs and $\Sigma$, respectively, are:
\begin{align}
1 & = \int \ell_\nu(\nu)\, w_N(\nu) \d \nu,\ \mathrm{and} \nonumber \\
0 & = \int \Sigma(\nu, \nu')\, w_N(\nu') \d \nu'.
\end{align}
If the normalization luminosity is even approximately spectral compared to the standard deviation of galaxy SEDs around frequency $\nu_N$, this condition will produce sharp sign flips on the $\nu=\nu_N$ and $\nu'=\nu_N$ axes in graphs of the correlation coefficient, $\rho(\nu,\nu') = \Sigma(\nu,\nu') / \sqrt{\Sigma(\nu,\nu)\, \Sigma(\nu',\nu')}$.
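This nodal structure can be seen directly in a synthetic discrete example (illustrative only): once each mock SED is normalized against $w_N$, the contraction of the sample covariance with $w_N$ vanishes identically.

```python
# The normalization condition int ell(nu) w_N(nu) dnu = 1 forces
# int Sigma(nu, nu') w_N(nu') dnu' = 0 -- the nodal lines discussed
# above.  The SEDs below are random stand-ins on a coarse grid.
import numpy as np

rng = np.random.default_rng(1)
n_gal, n_bins = 300, 30
dnu = 0.2

w_N = np.zeros(n_bins)
w_N[12] = 1.0 / dnu                  # (nearly) spectral normalization

raw = rng.lognormal(size=(n_gal, n_bins))
L_N = raw @ w_N * dnu                # normalization luminosity per galaxy
ell = raw / L_N[:, None]             # now int ell w_N dnu = 1 for each

Sigma = np.cov(ell, rowvar=False)
contraction = Sigma @ w_N * dnu      # discretized int Sigma w_N dnu'
print(np.max(np.abs(contraction)))
```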
As with any covariance, $\Sigma(\nu,\nu')$ can be measured by replacing the expectation brackets in the definition, Equation~\ref{eqn:thry:sigdef}, with bias corrected sample averages:
\begin{widetext}
\begin{align}
\Sigma(\nu,\nu') & \approx \frac{1}{N-1} \left(\sum_{i=1}^N \ell_{\nu,i}(\nu)\, \ell_{\nu,i}(\nu') - \frac{1}{N} \left[\sum_{i=1}^N \ell_{\nu,i}(\nu)\right] \cdot \left[\sum_{i=1}^N \ell_{\nu,i}(\nu')\right] \right). \label{eqn:thry:sampav}
\end{align}
\end{widetext}
It is usually not practical, however, to measure $\Sigma$ using full spectra. In this common case, it is possible to approximate Equation~\ref{eqn:thry:sampav} by writing each SED as a linear combination of $n_T$ template spectra, $\ell_{\nu,i}(\nu) = \sum_{j=1}^{n_T} f_{ij} \ell_{\nu,j}(\nu)$, as were produced in, for example, \cite{Assef:2010} and \cite{Rieke:2009}. Note that the templates have to be scaled to match the normalization condition, and when this is done the coefficients will satisfy $\sum_{j=1}^{n_T} f_{ij} = 1$. In terms of the template approximation, Equation~\ref{eqn:thry:sampav} becomes:
\begin{align}
\mu_j & \equiv \frac{1}{N} \sum_{i=1}^N f_{ij}, \nonumber \\
\sigma^2_{jk} & \equiv \frac{1}{N-1} \sum_{i=1}^N f_{ij} f_{ik} - \frac{N}{N-1} \mu_j \mu_k,\ \mathrm{and} \nonumber\\
\Sigma(\nu,\nu') & \approx \sum_{j,k=1}^{n_T} \sigma^2_{jk}\, \ell_{\nu,k}(\nu)\, \ell_{\nu,j}(\nu');\label{eqn:thry:sigtmp}
\end{align}
that is, the templates reduce the infinite dimensional covariance function to an $n_T \times n_T$ covariance matrix of the template fractions.
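The reduction can be sanity-checked with synthetic templates and fractions (random stand-ins, not the \cite{Assef:2010} set): building every SED as a linear combination of templates, the directly estimated covariance and the template-contracted fraction covariance agree to machine precision.

```python
# Check of the template reduction: the SED covariance estimated from
# the full (synthetic) spectra equals the template spectra contracted
# with the covariance matrix of the template fractions.
import numpy as np

rng = np.random.default_rng(2)
n_gal, n_T, n_bins = 400, 4, 25

T = rng.random((n_T, n_bins))             # template "spectra" on a grid
f = rng.dirichlet(np.ones(n_T), n_gal)    # fractions; rows sum to 1

ell = f @ T                               # each galaxy's SED

Sigma_full = np.cov(ell, rowvar=False)    # direct sample estimate
sig2 = np.cov(f, rowvar=False)            # covariance of the fractions
Sigma_tmpl = T.T @ sig2 @ T               # template-reduced form
print(np.max(np.abs(Sigma_full - Sigma_tmpl)))
```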
\section{Data and Observations} \label{sec:data}
The measurement of the galaxy SED covariance in this paper is based on the template spectra approximation outlined at the end of Section~\ref{sec:thry}. The template set used is the one defined in \cite{Assef:2010}. The set consists of four templates that correspond, roughly, to galaxies that are: red (named Elliptical), moderately star forming blue (Sbc), starburst blue (Irregular), and active galactic nuclei (AGN). The AGN template, additionally, has a dust obscuration model parametrized by $\operatorname{E}(B-V)$, the extinction excess. Graphs of the templates, normalized to the \textit{WISE}\ W1 filter at a redshift of $z=0.38$ (effective wavelength $\lambda \approx 2.4\operatorname{\mu m}$), can be found in Figure~\ref{fig:dat:templates}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.48\textwidth]{SEDtemplates.pdf}
\end{center}
\caption[\cite{Assef:2010} Template Spectra]{The template spectra used from \cite{Assef:2010}.
The red solid line is the template called ``Elliptical." The purple dashed line is ``Sbc."
The blue dash-dotted line is ``Irregular." And the green dotted line is
``AGN," unobscured. }
\label{fig:dat:templates}
\end{figure}
The presence of AGN dust obscuration as a non-linear parameter throws off the mathematics behind Equations~\ref{eqn:thry:sigtmp}. There are multiple ways of getting around this problem, including replacing the AGN SED with multiple AGN SEDs that have different fixed extinction values. In order to make the results as simple as possible to produce, we only use one extinction value: the median $\operatorname{E}(B-V)$ for galaxies which had a sufficient AGN contribution to make the measured $\operatorname{E}(B-V)$ meaningful (see below). This means that the covariance measurement presented here will be an underestimate of the true spread among SEDs, particularly on the blue side of the spectrum. Estimating how much of an underestimate it is by looking at the distribution of $\operatorname{E}(B-V)$ values will prove inexact, because the effect of the parameter on individual SEDs is non-linear and the distribution of values observed is highly asymmetric. With those caveats in mind, we examined the distribution of excess extinctions qualitatively and found it to have a width around 1 magnitude.
The scatter in observed AGN extinctions will be greater than what is imposed by dust near the black hole alone because galaxy inclination will change the amount of interstellar medium that the AGN's light has to travel through. Inclination will not just affect the measured AGN extinction, though, because the amount of dust the stellar light must travel through is also inclination dependent. The model we use in this work does not make any allowance for extinction within the target galaxy of anything but the AGN, though, so inclination effects will similarly modify the template fractions assigned to the galaxies in the fitting process. All of this has the effect of increasing the scatter of observed SEDs compared to the scatter that would be exhibited by a measure of the underlying physical properties of the galaxies. Despite these limitations, the work presented here should be sufficiently accurate for measuring a luminosity function in the near to mid-IR because of reduced dust absorption in those wavelengths.
Measuring the $f_{ij}$ of Equations~\ref{eqn:thry:sigtmp} using the \cite{Assef:2010} templates requires fitting the templates to observed photometry of a collection of galaxies, preferably with spectroscopic redshifts and a rich collection of filters. The zCOSMOS Bright 10k sample, described in \cite{Lilly:2009} and \cite{Knobel:2012}, is in the COSMOS field and, therefore, has a very rich set of publicly available photometry. The photometry we used is summarized in Table~\ref{tbl:dat:filters}. The targeting for the survey is based on photometry from Hubble Advanced Camera for Surveys (ACS) Wide Field Camera (WFC) imaging with the F814W filter, which is approximately $I$-band. The version of the data used for this analysis is Data Release 2.
The photometric surveys were cross-matched to the zCOSMOS data set based on a spatial cross-match that uniquely assigns a detection to its closest companion in zCOSMOS up to a maximum search radius that depended on the resolution of the external survey. For most surveys, the search radius was $1\ifmmode {^{\prime\prime}}\else $^{\prime\prime}$\fi$, but for AllWISE it was $3\ifmmode {^{\prime\prime}}\else $^{\prime\prime}$\fi$ (half the full width at half maximum of point sources for the \textit{WISE}\ W1 beam).
\begin{deluxetable*}{ccc}
\tablewidth{0.75\textwidth}
\tablecaption{zCOSMOS Photometry Used}
\tablehead{\colhead{Survey} & \colhead{Bands} & \colhead{Citation} }
\startdata
COSMOS & FUV, NUV, $u^*$, $B_j$, $g^+$, $V_j$, \ldots & \\
& \ldots $r^+$, F814W, $i^+$, $i^*$, $z^+$, $J$, $K_s$ & \cite{Capak:2007} \\
SDSS-DR10 & $u$, $g$, $r$, $i$, $z$ & \cite{SDSSdr10} \\
S-COSMOS-DR3 & c1, c2, c3, c4 & \cite{Sanders:2007} \\
AllWISE & W1, W2, W3, W4 & \cite{Wright:2010}
\enddata
\tablecomments{Photometric surveys used for fitting zCOSMOS sources. }
\label{tbl:dat:filters}
\end{deluxetable*}
Selecting high quality redshifts from zCOSMOS is somewhat involved because of the detailed `confidence class' (\code{cc}) system used. The recommendation in \cite{Lilly:2009} is to accept all sources with \code{cc} equal to: any 3.X, 4.X, 1.5, 2.4, 2.5, 9.3, and 9.5. Based on the description of those classes, the analysis here accepted sources that fit in the recommended classes, but also those with a leading 1 (10 was added to show broad line AGN), 18.3, 18.5 (both broad line AGN consistent with the photometric redshift), and rejected all secondary targets (2 in the tens or hundreds digit). This can be done by accepting sources for which the text string version of \code{cc} matches the regular expression ``\verb=([34]\..*)|([1289]\.5)|(2\.4)|([89]\.3)=" and doesn't match ``\verb=^2\d+\.=". Finally, the targets fell into three selection classes, column named \code{i}, and `unintended' sources are rejected by requiring $\code{i}>0$.
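The selection just described can be implemented directly (a sketch; the two regular expressions are the ones quoted above, with substring search for the acceptance pattern and an anchored search for the secondary-target rejection):

```python
# Confidence-class selection for zCOSMOS redshifts, using the two
# regular expressions quoted in the text.
import re

ACCEPT = re.compile(r"([34]\..*)|([1289]\.5)|(2\.4)|([89]\.3)")
REJECT = re.compile(r"^2\d+\.")   # secondary targets (2 in tens/hundreds)

def good_redshift(cc):
    """True if the string-valued confidence class passes the cuts."""
    return bool(ACCEPT.search(cc)) and not REJECT.search(cc)

print([c for c in ["3.5", "2.4", "18.3", "9.5", "23.5", "2.1"]
       if good_redshift(c)])
```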
In addition to good redshifts, the sources needed to have a minimum amount of quality photometry available to make the template fitting reliable. To that end, we limited the analysis to sources that meet all the following conditions: redshifts satisfy $0.05 < z \le 1.0$, the sources have at least five high quality photometric measurements (the number of free parameters in the SED fit when unconstrained, description follows), and were measured in S-COSMOS to have $F_{\mathrm{c1}} \ge 5 \operatorname{\mu Jy}$ (corresponding to an empirical SNR limit of about 30, or about $22.15\operatorname{mag}$) using a $3\ifmmode {^{\prime\prime}}\else $^{\prime\prime}$\fi$ aperture in \textit{Spitzer}'s IRAC channel 1 ($\lambda_{\mathrm{eff}} \approx 3.6\operatorname{\mu m}$). The external photometric measurements were deemed to be of sufficient quality if the photometry was not flagged as contaminated in the survey, or otherwise marked as obviously invalid by being less than or equal to zero.
The templates were constructed to be fit to fluxes using $\chi^2$ applied to a linear combination of the template fluxes with non-negative coefficients, and a search in the 1-dimensional parameter space for the best AGN extinction excess, $\operatorname{E}(B-V) = (2.5 / \ln(10)) \cdot (\tau_B - \tau_V)$. That is, the model has the form:
\begin{align}
F_\nu(\nu, z) & = a_\mathrm{E} F_\mathrm{E}(\nu, z) + a_\mathrm{S} F_\mathrm{S}(\nu, z) \nonumber \\
&\hphantom{=} + a_\mathrm{I} F_\mathrm{I}(\nu, z) + a_\mathrm{A} F_\mathrm{A}(\nu, z, \tau_B - \tau_V), \\
\chi^2 & = \sum_{i \in \{\mathrm{filters}\}} \left(\frac{F_{\mathrm{obs}\,i} - F_{\mathrm{mod}\,i}}{\sigma_i}\right)^2, \label{eqn:sedchisqr}
\end{align}
with all $a_i \ge 0$, and $12 > \tau_B - \tau_V \ge 0 $. The $a_i$ were fit using the SciPy \code{optimize} package's routine \code{nnls} (quadratic programming for non-negative least squares), and $\tau_B - \tau_V$ was fit with the routine \code{brent} with fallback to \code{fmin} (Nelder-Mead simplex).
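A minimal sketch of the non-negative fitting step (with the extinction parameter held fixed, and synthetic, noise-free template fluxes standing in for the real ones) looks like:

```python
# Non-negative least-squares fit of template coefficients.  Rows of
# the design matrix and the data are divided by sigma so that nnls
# minimizes the chi^2 defined in the text.  All values are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_filters, n_templates = 10, 4

F_templates = rng.random((n_filters, n_templates))  # model fluxes/band
a_true = np.array([0.5, 0.0, 1.2, 0.3])             # non-negative truth
sigma = np.full(n_filters, 0.01)                    # per-band errors

F_obs = F_templates @ a_true                        # mock observations

a_fit, rnorm = nnls(F_templates / sigma[:, None], F_obs / sigma)
print(a_fit, rnorm)
```

With noise-free data and a consistent non-negative truth, the fit recovers the input coefficients exactly, including the zero on the boundary of the constraint.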
There is one modification to that procedure for the fits done for this paper. The templates do not include the ability to tune dust obscuration of the galaxy's stars, so a dusty starburst that has a detection in \textit{WISE}'s $12\operatorname{\mu m}$ filter, W3, will often be best fit with a galaxy that is dominated by its Elliptical component (to satisfy optical redness) and a super-obscured AGN ($\tau_B - \tau_V > 12$) masquerading as the emission from the stellar dust component. The problem this creates is that it makes the SED fit the data more poorly in the most important range for the subsequent uses to which we intend to put this data, where $K$-corrections from observer frame W1 to $2.4\operatorname{\mu m}$ rest wavelength are performed. We used two techniques to work around this problem. First, we limited the excess in optical depth as $\tau_B - \tau_V \le 12$ (equivalently, $\operatorname{E}(B-V) \le 13.03$). Second, when the SED was badly modeled ($\chi^2 > \mathrm{max}(N_{\mathrm{df}}, 1) \times 100$) and unlikely to be an AGN ($W1 - W2 > 0.5\operatorname{Vega\ mag}$ with uncertainty, $\sigma_{W1-W2} < 0.2\operatorname{Vega\ mag}$), we used the best model with $\operatorname{E}(B-V) = 0$. The reduced $\chi^2$ criterion was determined by subjective empirical examination, and the color based selection was found in \cite{Assef:2013} to select low redshift AGN with 90\% completeness. Overall, $2,604$ galaxies were fit using the `alternate' fitting mode where the AGN template was set at $\operatorname{E}(B-V) = 0$ and $4,621$ were fit in the `main' fitting mode where the AGN extinction was allowed to vary. For the subsets the breakdown is: none of the $268$ AGN, $1,139$ of the $1,903$ Red, and $1,465$ of the $5,054$ Blue galaxies were fit in the alternate mode.
Limiting the excess optical depth, $\tau_B - \tau_V$, to be non-negative introduces a bias to the parameter estimation of the individual galaxies. It is even physically possible for a source to appear bluer than expected if the line of sight is unobscured and dust clouds are reflecting excess blue light into it (that is, the line of sight contains significant contribution from reflection nebulae in the target). Even so, applying a negative optical depth excess to dust obscuration models is not likely to produce an accurate spectrum for reflection, and the magnitude of the negative excess doesn't have to be large to cause the estimate of the maximum redshift at which the galaxy could be observed to diverge.
There is a final detail involved in dealing with AGN obscuration measurements. The impact of changes in $\tau_B - \tau_V$ on the shape of the overall SED depends on what fraction of the luminosity the AGN contributes. If a minuscule fraction of the luminosity is contributed by the AGN, then the shape of the SED is insensitive to how obscured the AGN template is, rendering the value that the fitting process assigns to $\tau_B - \tau_V$ meaningless. It is, therefore, necessary when computing statistics involving $\tau_B - \tau_V$ to limit the sample to those galaxies for which the AGN's contribution to the shape of the SED is non-negligible. The cutoff used in this work, set arbitrarily, is that the AGN's fraction of the $2.4\operatorname{\mu m}$ luminosity must be greater than $0.1\%$. The cutoff is set low for two reasons: first, the shapes of the template spectra mean that the ability to measure extinction in the AGN template depends on both what other templates are present and which wavelengths were observed; and second, we prefer to make less aggressive cuts when we have not rigorously explored their impact on the data.
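In code, any statistic on $\tau_B - \tau_V$ is computed only over the subsample passing the $0.1\%$ cut. A minimal sketch (the function and dictionary keys are our names, not the paper's software):

```python
import statistics

def median_excess_tau(galaxies, f_agn_min=1e-3):
    """Median of tau_B - tau_V, restricted to galaxies whose AGN contributes
    more than 0.1% of the 2.4 micron luminosity; below that threshold the
    fitted obscuration is unconstrained and would only add noise."""
    taus = [g["tau"] for g in galaxies if g["f_agn"] > f_agn_min]
    return statistics.median(taus)
```

The same filter would precede any other moment or histogram of the obscuration distribution.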
The resulting data set contains $7,225$ galaxies. The template fit parameters of the data set are included in this work (at figshare.com\footnote{\url{https://figshare.com/articles/zCOSMOS_Template_Fractions_tbl_gz/3804210}} with doi:10.6084/m9.figshare.3804210) in gzipped\footnote{\url{https://www.gnu.org/software/gzip/}} IPAC Table format\footnote{\url{http://irsa.ipac.caltech.edu/applications/DDGEN/Doc/ipac\_tbl.html}}, an excerpt from which is in Table~\ref{tbl:dat:fitpars}. The normalization condition chosen for the templates is the luminosity \textit{WISE}'s W1 filter would observe directly in a galaxy at redshift $z=0.38$ ($\lambda_{\mathrm{eff}} \approx 2.4\operatorname{\mu m}$), after the effect of AGN obscuration has been applied to the AGN template. The latter choice ensures that the template fractions sum to 1, and that each $f$ represents the fraction of $2.4\operatorname{\mu m}$ luminosity contributed by the corresponding component of the galaxy.
We also performed a classification of galaxies into three possible subsets for which SED means and covariances were measured: AGN, red galaxies, and blue galaxies. The scheme for this classification is outlined in the flowchart in Figure~\ref{fig:dat:class}. The dividing line for whether a galaxy is considered an AGN is whether more than 50\% of its $2.4\operatorname{\mu m}$ luminosity comes from the obscured AGN component. The dividing line for whether a galaxy is ``red'' was determined empirically by examining the smoothed $M_u - M_r$ versus $M_g$ diagram, that is, a standard rest frame color versus absolute magnitude diagram, shown in Figure~\ref{fig:dat:CMD}. The rest frame Sloan magnitudes $M_u$, $M_g$, and $M_r$ were calculated by $K$-correcting observer frame Subaru $g^+$, $r^+$, and $i^+$ fluxes, respectively, from the \cite{Capak:2007} data. We experimented with a photometric classification scheme for AGN, specifically the Stern wedge from \cite{Stern:2005}, but the reduced sensitivity of the longer wavelength IRAC data meant that the blue and AGN mean SEDs came out nearly the same. The final classification process shown in Figure~\ref{fig:dat:class} resulted in: $266$ AGN ($3.7\%$ of the sample), $1,906$ red sequence galaxies ($26.4\%$ of the sample), and $5,053$ blue cloud galaxies ($69.9\%$ of the sample).
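The flowchart reduces to a few lines of logic. This is our sketch: the function name is ours, and `red_cut_value` stands in for the empirical dividing line of the color-magnitude diagram evaluated at the galaxy's $M_g$ (the line itself is read off the figure and is not reproduced here):

```python
def classify(f_agn, u_minus_r, red_cut_value):
    """Flowchart classification: 'A' if the obscured AGN supplies more than
    half the 2.4 micron luminosity; otherwise red sequence ('R') or blue
    cloud ('B') depending on which side of the empirical CMD cut the
    galaxy's rest frame M_u - M_r color falls."""
    if f_agn > 0.5:
        return "A"
    return "R" if u_minus_r > red_cut_value else "B"
```
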
\begin{deluxetable*}{cllllllrcrcc}
\tabletypesize{\scriptsize}
\tablewidth{\textwidth}
\tablecaption{Excerpt of Fit Data}
\tablehead{\colhead{\code{ID}} & \colhead{\code{ra}} & \colhead{\code{dec}} &
\colhead{\code{f\_Ell}} & \colhead{\code{f\_Sbc}} & \colhead{\code{f\_Irr}} & \colhead{\code{f\_AGN}} & \colhead{\code{EBmV}} & \colhead{\code{ChiSqr}} & \colhead{\code{Ndf}} & \colhead{\code{FitMode}}
& \colhead{\code{class}} \\
& \colhead{$\ifmmode {^{\circ}}\else {$^\circ$}\fi$} & \colhead{$\ifmmode {^{\circ}}\else {$^\circ$}\fi$} & & & & & \colhead{mag} & & & & }
\startdata
700178 & 150.305008 & 1.876265 & 0.2611 & 0.0000 & 0.4446 & 0.2943 & 0.0000 & 1.27E+02 & 11 & main & B\\
700189 & 150.308258 & 1.916484 & 1.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 8.30E+03 & 16 & alt & B\\
700274 & 149.926743 & 1.869646 & 0.6927 & 0.0000 & 0.1911 & 0.1162 & 0.0000 & 1.98E+01 & 7 & main & B\\
700291 & 149.890167 & 1.859292 & 0.9394 & 0.0000 & 0.0000 & 0.0606 & 0.0921 & 3.50E+02 & 13 & main & R\\
700298 & 149.816711 & 1.916690 & 0.0803 & 0.6652 & 0.1738 & 0.0807 & 0.0000 & 1.02E+02 & 10 & main & B\\
700447 & 150.425690 & 2.123886 & 0.1311 & 0.5809 & 0.1246 & 0.1633 & 0.0450 & 9.10E+01 & 12 & main & B
\enddata
\tablecomments{Excerpt from the data set included with this work in IPAC Table format. The \code{ID} column is the unique identification number given to the target in the zCOSMOS survey. \code{ra} and \code{dec} are the J2000 right ascension and declination of the zCOSMOS targets, in decimal degrees. \code{f\_Ell}, \code{f\_Sbc}, \code{f\_Irr}, and \code{f\_AGN} are the fractions of $2.4\operatorname{\mu m}$ luminosity contributed by the Elliptical, Sbc, Irregular, and obscured AGN templates, respectively. \code{EBmV}$ = (2.5 / \ln(10)) \cdot (\tau_B - \tau_V)$ is the excess extinction in the AGN obscuration model. \code{ChiSqr} is the raw $\chi^2$ from the fitting process. \code{Ndf} is the net number of degrees of freedom in the fitting process (number of filters used minus 5), ignoring the way the effective dimensionality is altered by the constraints on the fitting process. \code{FitMode} is a character string that takes on one of two values: ``main'' if $\tau_B - \tau_V$ was allowed to vary in the fitting process, ``alt'' if it was set to $0$ as described in the text. \code{class} denotes the class assigned to the galaxy, and is one of `A', `R', or `B' for `AGN', `Red', and `Blue', respectively. The full table is available at: \url{https://figshare.com/articles/zCOSMOS_Template_Fractions_tbl_gz/3804210} with doi:10.6084/m9.figshare.3804210.}
\label{tbl:dat:fitpars}
\end{deluxetable*}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.48\textwidth]{ClassificationFlowchart.pdf}
\end{center}
\caption[Classification Flowchart]{Simple flowchart showing how galaxies were classified into AGN, red or blue galaxies in this work.}
\label{fig:dat:class}
\end{figure}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.75\textwidth]{zCJointColorMag.pdf}
\end{center}
\caption[Color-Magnitude Diagram]{Color-magnitude diagram showing the dividing line between the red sequence (above the line) and the blue cloud (below). The point with error bars shows the standard deviation of the smoothing kernel applied to the data in the dense region of the plot. }
\label{fig:dat:CMD}
\end{figure*}
\section{Results} \label{sec:results}
The template parameters for the mean SEDs, $\mu_j$ in Equations~\ref{eqn:thry:sigtmp}, and the median $\tau_B - \tau_V$ of the different subsamples can be found in Table~\ref{tbl:res:meanSED}. The template covariance matrices, $\sigma_{jk}$ in Equations~\ref{eqn:thry:sigtmp}, are split into Tables~\ref{tbl:res:allSEDcov}--\ref{tbl:res:BlueSEDcov} for the overall sample, AGN, red, and blue galaxies, respectively. Using the numbers in these tables with the normalized templates and obscuration models of \cite{Assef:2010} is sufficient to calculate the covariance associated with any set of $K$-corrections. It is useful to examine graphs of the diagonal elements of the covariance, $\sqrt{\Sigma(\nu,\,\nu)}$, and the correlation function, $\rho(\nu,\,\nu') = \Sigma(\nu,\,\nu') / \sqrt{\Sigma(\nu,\,\nu)\, \Sigma(\nu',\,\nu')}$, to get a feel for how they behave, and to have as a reference for quick spectral calculations of $K$-correction covariances.
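For quick estimates, the tabulated numbers can be turned into $\Sigma(\nu,\,\nu')$ directly. The sketch below rebuilds the template-fraction covariance from the standard deviations and correlations of Table~\ref{tbl:res:allSEDcov}; the assumed form $\Sigma(\nu,\,\nu') = \sum_{jk}\sigma_{jk}\,L_j(\nu)\,L_k(\nu')$ is our reading of Equations~\ref{eqn:thry:sigtmp} (defined earlier in the paper, not reproduced here), and the function name is ours.

```python
import numpy as np

# Table "Covariance Matrix of All SED Templates": standard deviations and
# correlation matrix of the template fractions (order: Ell, Sbc, Irr, AGN).
sigma = np.array([0.353, 0.325, 0.153, 0.163])
rho = np.array([
    [ 1.000, -0.727, -0.366, -0.373],
    [-0.727,  1.000, -0.209, -0.224],
    [-0.366, -0.209,  1.000,  0.268],
    [-0.373, -0.224,  0.268,  1.000],
])
cov_f = np.outer(sigma, sigma) * rho  # sigma_jk = rho_jk * sigma_j * sigma_k

def sed_cov(L_nu, L_nup):
    """Sigma(nu, nu') = sum_jk sigma_jk L_j(nu) L_k(nu'), where L_nu and L_nup
    are length-4 arrays of the normalized template luminosities evaluated at
    the two frequencies."""
    return float(np.asarray(L_nu) @ cov_f @ np.asarray(L_nup))
```

Here the $L_j(\nu)$ are the $2.4\operatorname{\mu m}$ normalized templates of \cite{Assef:2010}, with the obscuration model applied to the AGN template before evaluation.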
Graphs of $\sqrt{\Sigma(\nu,\,\nu)}$ for the $2.4\operatorname{\mu m}$ normalized SEDs can be found in Figure~\ref{fig:res:SEDVars}. The standard deviation increases with wavelength distance from $2.4\operatorname{\mu m}$, but there are dips and jumps around spectral features with a wide variety of strengths, specifically spectral breaks and lines. Further, the increase in the spread is steeper on the short wavelength side than the long one, supporting the assertion that simple wavelength distance is not sufficient to determine which observer frame bands are the best to $K$-correct from. The scaling on the graph is linear in $y$ and logarithmic in $x$, so the large nearly linear stretches in the graphs represent standard deviation growth that is logarithmic in the wavelength ratio. The final notable feature is that the spread of red galaxy SEDs, in panel~\textbf{c}, is low, as is to be expected from the comparative narrowness of the red sequence in color-magnitude diagrams like Figure~\ref{fig:dat:CMD}. The comparatively large spread in AGN SEDs, panel~\textbf{b}, is surprising because the AGN selection criterion is that most of the galaxy's light at $2.4\operatorname{\mu m}$, close to the minimum of the AGN SED, comes from the AGN. This criterion explicitly limits the range of possible values for $f_{\mathrm{AGN}}$, and implicitly limits the other fractions because they must sum to $1$. The selection criterion does affect the template covariance matrix as expected (compare the $\sigma$ column in Table~\ref{tbl:res:AGNSEDcov} to the ones in Tables~\ref{tbl:res:RedSEDcov} and \ref{tbl:res:BlueSEDcov}). The most likely culprit for the variability is that the AGN template is so different from the other three (see Figure~\ref{fig:dat:templates}).
The blue cloud galaxies, in panel~\textbf{d}, have a higher spread than any of the other types of galaxies, especially in the spectral lines, other than panel~\textbf{a}, which summarizes the standard deviation for all galaxies.
The non-monotonicity of $\sqrt{\Sigma(\nu,\, \nu)}$ is actually suppressed in Figure~\ref{fig:res:SEDVars} because the normalization luminosity lies in a range of frequencies where most galaxy SEDs don't show much variety. A clearer example of the SED variance exhibiting a broad maximum can be found in Figure~\ref{fig:res:BVars}, where $\sqrt{\Sigma(\nu,\, \nu)}$ is plotted for all galaxies with the normalization luminosity in the rest frame $B$ filter instead of $2.4\operatorname{\mu m}$. The standard deviation is pinched off by the normalization near $445\operatorname{nm}$, and the spread among the SED templates in the $2$--$5\operatorname{\mu m}$ range is intrinsically low, producing a marked peak in the standard deviation over most of the optical and near-IR. Because the uncertainty in the $K$-correction requires input from a $B$ normalized SED and a redshift, it is not possible to say, for sure, that the variance involved in correcting from $B$ to, say, $4\operatorname{\mu m}$ is smaller than correcting from $I$ to $4\operatorname{\mu m}$. Even so, real correlations, like the far-IR--radio correlation, should show a pattern like this in plots of $\sqrt{\Sigma(\nu,\, \nu)}$ that cover the relevant frequency range.
Very few astronomers are interested in $K$-correcting only to $2.4\operatorname{\mu m}$, and that's where the utility of the correlation function, shown in Figure~\ref{fig:res:covar}, comes in. As Equations~\ref{eqn:thry:singlvar} and \ref{eqn:thry:fullcovar} show, by combining the full $\Sigma(\nu,\, \nu')$ with the SED used in generating the $K$-correction (normalized to $2.4\operatorname{\mu m}$) and the filter curves, any covariance of $K$-corrections can be calculated.
The most prominent physical features in the correlation coefficient graphs, as opposed to mathematical artifacts that come purely from the choice of normalization wavelength, are the thick white lines. For points on those lines, the SED colors $L_\nu(\lambda_1) / L_\nu(2.4\operatorname{\mu m})$ and $L_\nu(\lambda_2) / L_\nu(2.4\operatorname{\mu m})$ are uncorrelated, meaning that they contain no mutual information and, therefore, provide maximally independent information about the shape of the SED. The location of those white lines is determined by where the template SEDs that dominate the sample diverge from each other. In the case of the red galaxies (Panel~\textbf{c}) this happens roughly at the $4,000$\,\AA\ break. For the other galaxies, the diversity of SEDs is broader and the divergence of the templates more gradual, so the main uncorrelated band is broader and more difficult to pin down to a single phenomenon.
The other prominent features present as horizontal and vertical striping, caused by the presence of absorption and emission lines in some templates and not others. The most prominent emission lines present in the templates are: MgII ($279.8\operatorname{nm}$), OII (doublet, $372.6$ and $372.9\operatorname{nm}$), OIII (merged $495.9$ and $500.7\operatorname{nm}$), $\operatorname{H\alpha}$, and PAH lines at $\lambda > 3\operatorname{\mu m}$. The absorption lines are primarily a feature of the Elliptical template and are responsible for the less prominent striping in the optical.
\begin{deluxetable}{clllll}
\tabletypesize{\scriptsize}
\tablewidth{0.48\textwidth}
\tablecaption{Mean SED Parameters}
\tablehead{\colhead{Subsample} & \colhead{$\langle f_{\mathrm{Ell}} \rangle$} & \colhead{$\langle f_{\mathrm{Sbc}} \rangle$} & \colhead{$\langle f_{\mathrm{Irr}} \rangle$} & \colhead{$\langle f_{\mathrm{AGN}} \rangle$} & \colhead{$\overline{\tau_B - \tau_V}$\tablenotemark{a}} }
\startdata
all & $0.490$ & $0.269$ & $0.114$ & $0.127$ & $0.023$ \\
\hline
AGN & $0.180$ & $0.076$ & $0.078$ & $0.666$ & $0.207$ \\
Red & $0.823$ & $0.131$ & $0.011$ & $0.035$ & $0.303$ \\
Blue & $0.380$ & $0.331$ & $0.155$ & $0.134$ & $0.015$
\enddata
\tablecomments{ Mean of the $2.4\operatorname{\mu m}$ luminosity template fractions, alongside the median excess extinction on the AGN. Numbers are given to three decimal places regardless of experimental uncertainty. }
\tablenotetext{a}{ $\overline{\tau_B - \tau_V}$ here means the median of $\tau_B - \tau_V$. }
\label{tbl:res:meanSED}
\end{deluxetable}
\begin{deluxetable}{lccccc}
\tabletypesize{\scriptsize}
\tablewidth{0.48\textwidth}
\tablecaption{Covariance Matrix of All SED Templates}
\tablehead{\colhead{Parameter} & \colhead{$\sigma$} & \colhead{$\delta f_{\mathrm{Ell}}$} & \colhead{$\delta f_{\mathrm{Sbc}}$} & \colhead{$\delta f_{\mathrm{Irr}}$} & \colhead{$\delta f_{\mathrm{AGN}}$} }
\startdata
$\delta f_{\mathrm{Ell}}$ & $0.353$ & $\hphantom{-}1.000$ & $-0.727$ & $-0.366$ & $-0.373$ \\
$\delta f_{\mathrm{Sbc}}$ & $0.325$ & $-0.727$ & $\hphantom{-}1.000$ & $-0.209$ & $-0.224$ \\
$\delta f_{\mathrm{Irr}}$ & $0.153$ & $-0.366$ & $-0.209$ & $\hphantom{-}1.000$ & $\hphantom{-}0.268$ \\
$\delta f_{\mathrm{AGN}}$ & $0.163$ & $-0.373$ & $-0.224$ & $\hphantom{-}0.268$ & $\hphantom{-}1.000$
\enddata
\tablecomments{The $\sigma$ column contains the standard deviations of the parameters, and the rest of the columns are the correlation matrix among the template fractions. Numbers are given to three decimal places regardless of experimental uncertainty. }
\label{tbl:res:allSEDcov}
\end{deluxetable}
\begin{deluxetable}{lccccc}
\tabletypesize{\scriptsize}
\tablewidth{0.48\textwidth}
\tablecaption{Covariance Matrix of AGN SED Templates}
\tablehead{\colhead{Parameter} & \colhead{$\sigma$} & \colhead{$\delta f_{\mathrm{Ell}}$} & \colhead{$\delta f_{\mathrm{Sbc}}$} & \colhead{$\delta f_{\mathrm{Irr}}$} & \colhead{$\delta f_{\mathrm{AGN}}$} }
\startdata
$\delta f_{\mathrm{Ell}}$ & $0.162$ & $\hphantom{-}1.000$ & $-0.459$ & $-0.388$ & $-0.417$ \\
$\delta f_{\mathrm{Sbc}}$ & $0.118$ & $-0.459$ & $\hphantom{-}1.000$ & $-0.212$ & $-0.112$ \\
$\delta f_{\mathrm{Irr}}$ & $0.136$ & $-0.388$ & $-0.212$ & $\hphantom{-}1.000$ & $-0.370$ \\
$\delta f_{\mathrm{AGN}}$ & $0.131$ & $-0.417$ & $-0.112$ & $-0.370$ & $\hphantom{-}1.000$
\enddata
\tablecomments{The $\sigma$ column contains the standard deviations of the parameters, and the rest of the columns are the correlation matrix among the template fractions. Numbers are given to three decimal places regardless of experimental uncertainty. }
\label{tbl:res:AGNSEDcov}
\end{deluxetable}
\begin{deluxetable}{lccccc}
\tabletypesize{\scriptsize}
\tablewidth{0.48\textwidth}
\tablecaption{Covariance Matrix of Red SED Templates}
\tablehead{\colhead{Parameter} & \colhead{$\sigma$} & \colhead{$\delta f_{\mathrm{Ell}}$} & \colhead{$\delta f_{\mathrm{Sbc}}$} & \colhead{$\delta f_{\mathrm{Irr}}$} & \colhead{$\delta f_{\mathrm{AGN}}$} }
\startdata
$\delta f_{\mathrm{Ell}}$ & $0.260$ & $\hphantom{-}1.000$ & $-0.941$ & $-0.111$ & $-0.229$ \\
$\delta f_{\mathrm{Sbc}}$ & $0.252$ & $-0.941$ & $\hphantom{-}1.000$ & $-0.041$ & $-0.070$ \\
$\delta f_{\mathrm{Irr}}$ & $0.043$ & $-0.111$ & $-0.041$ & $\hphantom{-}1.000$ & $-0.046$ \\
$\delta f_{\mathrm{AGN}}$ & $0.079$ & $-0.229$ & $-0.070$ & $-0.046$ & $\hphantom{-}1.000$
\enddata
\tablecomments{The $\sigma$ column contains the standard deviations of the parameters, and the rest of the columns are the correlation matrix among the template fractions. Numbers are given to three decimal places regardless of experimental uncertainty. }
\label{tbl:res:RedSEDcov}
\end{deluxetable}
\begin{deluxetable}{lccccc}
\tabletypesize{\scriptsize}
\tablewidth{0.48\textwidth}
\tablecaption{Covariance Matrix of Blue SED Templates}
\tablehead{\colhead{Parameter} & \colhead{$\sigma$} & \colhead{$\delta f_{\mathrm{Ell}}$} & \colhead{$\delta f_{\mathrm{Sbc}}$} & \colhead{$\delta f_{\mathrm{Irr}}$} & \colhead{$\delta f_{\mathrm{AGN}}$} }
\startdata
$\delta f_{\mathrm{Ell}}$ & $0.304$ & $\hphantom{-}1.000$ & $-0.728$ & $-0.215$ & $-0.189$ \\
$\delta f_{\mathrm{Sbc}}$ & $0.337$ & $-0.728$ & $\hphantom{-}1.000$ & $-0.418$ & $-0.375$ \\
$\delta f_{\mathrm{Irr}}$ & $0.162$ & $-0.215$ & $-0.418$ & $\hphantom{-}1.000$ & $\hphantom{-}0.346$ \\
$\delta f_{\mathrm{AGN}}$ & $0.128$ & $-0.189$ & $-0.375$ & $\hphantom{-}0.346$ & $\hphantom{-}1.000$
\enddata
\tablecomments{The $\sigma$ column contains the standard deviations of the parameters, and the rest of the columns are the correlation matrix among the template fractions. Numbers are given to three decimal places regardless of experimental uncertainty. }
\label{tbl:res:BlueSEDcov}
\end{deluxetable}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.98\textwidth]{MeanSEDVarPlot.pdf}
\end{center}
\caption[SED Variance]{Panel \textbf{a} shows the SED standard deviation ($\sqrt{\Sigma(\nu,\nu)}$) for all galaxies in the sample, and Panels \textbf{b}, \textbf{c}, and \textbf{d} show the same data for AGN, red, and blue galaxies, respectively. The vertical dashed line highlights the effective wavelength of the normalization luminosity, and the dotted lines show the effective rest frame wavelength of \textit{WISE}'s W1 channel for galaxies at the redshifts $z=0$ and $1$. Note that there are no units given for $\Sigma(\nu,\nu')$ because the normalization luminosity used, $L_N$, follows the standard practice in astronomy of its weighting function, $w_N(\nu)$, having the units needed to make $L_N$ a weighted mean of $L_\nu$; that is, $w_N(\nu)$ has units $[\operatorname{Hz}^{-1}]$. }
\label{fig:res:SEDVars}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.98\textwidth]{SingleMeanSEDVarPlot_Bnorm.pdf}
\end{center}
\caption[SED Variance]{The SED standard deviation ($\sqrt{\Sigma(\nu,\nu)}$) for all galaxies in the sample. The vertical dashed line highlights the effective wavelength of the normalization luminosity, which is the Johnson-Cousins $B$ filter for this plot only. Note that there are no units given for $\Sigma(\nu,\nu')$ because the normalization luminosity used, $L_N$, follows the standard practice in astronomy of its weighting function, $w_N(\nu)$, having the units needed to make $L_N$ a weighted mean of $L_\nu$; that is, $w_N(\nu)$ has units $[\operatorname{Hz}^{-1}]$. }
\label{fig:res:BVars}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.98\textwidth]{SEDcorrelationFunc.pdf}
\end{center}
\caption[SED Correlation Function]{SED correlation functions, $\rho(\nu,\,\nu') = \Sigma(\nu,\,\nu') / \sqrt{\Sigma(\nu,\,\nu) \, \Sigma(\nu',\, \nu')}$. Panel~\textbf{a} shows $\rho$ for all galaxies, while \textbf{b}, \textbf{c}, and \textbf{d} show $\rho$ for AGN, red, and blue galaxies, respectively. The dominant feature in the graphs, the sign flips across the $\lambda_{\{1,2\}} = 2.4\operatorname{\mu m}$ axes, is due to the choice of normalizing the SEDs to that wavelength. The impact of spectral features, emission and absorption lines, in the templates also stands out as horizontal and vertical striping. The final feature worth noting is the sign flip that occurs, roughly, when the two wavelengths are on opposite sides of the point where the template SEDs that dominate the population diverge, generating the thick white lines where the sign of $\rho$ flips. }
\label{fig:res:covar}
\end{figure*}
\section{Conclusion}
In this work we derived formulae for computing the uncertainty added to an observed flux when it is $K$-corrected. We also showed, by approximating the SED covariance function using template fitting data, that the choice of which observations to $K$-correct to the rest frame quantities desired should be informed by information about the variety of the SEDs of the objects in question. While the discussion in the body of this paper focused on $K$-corrections, they are just a specific type of filter transformation, and the adaptation of the formulae here to all filter transforms is trivial: just drop the factors of $(1+z)$.
An example of a filter transformation that the covariance of observer frame SEDs can inform is the transformation from broad band filter to a spectral quantity (for example: W1 to $3.4\operatorname{\mu m}$). Note how the SED standard deviation plots in Figure~\ref{fig:res:SEDVars} have a minimum near $2.4\operatorname{\mu m}$. If the normalization luminosity were actually the spectral luminosity at $2.4\operatorname{\mu m}$ that minimum would be a zero of the function. Because the normalization is actually $^{0.38}$W1, in the notation of \cite{Blanton:2003LF} and subsequent works, the function only achieves a minimum that is close to zero at a wavelength very near $2.4\operatorname{\mu m}$. A similar plot of observer frame SED standard deviation for stars would show a similar minimum at the spectral wavelength most correlated with the broad band measurement, making that wavelength a good candidate for labeling as the filter's effective wavelength.
This study of the galaxy SED correlation function, and the mean normalized SED of galaxies, is also useful in that it feeds into a generalization of the luminosity function that we call the spectroluminosity functional, $\Psi[L_\nu]$. We will explore the usefulness and mechanics of measuring $\Psi[L_\nu]$ in \cite{Lake:2016b}, and use the data from this paper and $\Psi[L_\nu]$ to measure the ordinary luminosity function, $\Phi(L)$, in \cite{Lake:2016d}.
Finally, there are definitely improvements that can be made to the techniques used here. The templates used are static, and the mean SEDs are not allowed to depend on luminosity. The latter is somewhat justified by the weak index in the power law relating $g$-band luminosity to $M_u - M_r$ color in the cut ($-0.0512$, see Figure~\ref{fig:dat:class}), but it would still be an improvement to allow for a luminosity dependence in the SED mean.
1603.07194
\section*{Introduction}
Since the beginning of the 21st century there has been a growing interest in increasing the capacity of telecommunication systems to eventually overcome the pending bandwidth crunch. Significant improvements in network transmission capacity have been achieved through the use of polarization division multiplexing (PDM) and wavelength division multiplexing (WDM) techniques, and also through implementing high-order modulation formats \cite{WDM1, WDM2, WDM3}. However, it might not be possible to satisfy the exponential global capacity demand in the near future. One potential solution to eventually cope with bandwidth issues is space division multiplexing (SDM) \cite{Richardson1,Richardson2,Xia}, and in particular the special case of mode division multiplexing (MDM), which was first suggested in the 1980s \cite{Berdague}. In MDM based communication systems, each spatial mode, from an orthogonal modal basis, can carry an independent data stream, thereby increasing the overall capacity by a factor equal to the number of modes used \cite{Li}. A particular mode basis for data communication is orbital angular momentum (OAM) \cite{Gibson}, which has become the mode of choice in many studies due to its topical nature and ease of detection with phase-only optical elements \cite{Willner}. Indeed, OAM multiplexing implementations have reported Tbit/s transmission capacity over both free space and optical fibers \cite{Bozinovic,Wang1}. More recent reports have shown free space communication with a bit rate of 1.036 Pbit/s and a spectral efficiency of 112.6-bit/s/Hz using 26 OAM modes \cite{Wang2}. 
However, taking into account the effects of atmospheric turbulence on the crosstalk and system bit error rate (BER) in an OAM multiplexed free space optics (FSO) link, experimental results have indicated that turbulence-induced signal fading will significantly deteriorate link performance and might cause link outage in the strong turbulence regime \cite{Turbulence1,Turbulence2,Turbulence3}. Recently, Zhao et al. claimed that OAM is outperformed by any conventional mode division multiplexing technique with a complete basis, or by conventional line of sight (LOS) multiple-input multiple-output (MIMO) systems \cite{Zhao}. Indeed, OAM is only a subspace of the full space of Laguerre Gaussian (LG) beams, whose modes have two degrees of freedom: an azimuthal index $\ell$ and a radial index $p$, the former responsible for the OAM. In this study, we demonstrate a new holographic tool to realise a communication link using a densely packed LG mode set incorporating both radial and azimuthal degrees of freedom. We show that it is possible to multiplex/demultiplex over 100 spatial modes on a single hologram, written to a spatial light modulator, in a manner that is independent of wavelength. Our subset of the LG modes was successfully used as information carriers over a free space link to illustrate the robustness of our technique. The information is recovered by simultaneously detecting all 100 modes employing a single hologram. Using this approach we are able to transmit several images with correlations higher than 98\%. Although our scheme is a proof-of-concept, it provides a useful basis for increasing the capacity of future optical communication systems.
\section*{Results.}
Consider a LG mode in cylindrical coordinates, at its waist plane ($z=0$), described by:
\begin{eqnarray}
& &\mathrm{LG}_{p\ell}(r,\phi) =\sqrt{\frac{2p!}{\pi w_0^2(p+|\ell|)!}}\left(\frac{\sqrt{2}r}{w_0}\right)^{|\ell|}L_p^{|\ell|}\left(\frac{2r^2}{w_0^2}\right)\nonumber \\
& &\times\exp\left(-\frac{r^2}{w_0^2}\right)\exp(i\ell \phi)
\label{eq:laguerre}
\end{eqnarray}
\noindent where $p$ and $\ell$ are the radial and azimuthal indices respectively, $(r,\phi)$ are the transverse coordinates, $L_p^{|\ell|}$ is the generalized Laguerre polynomial and $w_0$ is a scalar parameter corresponding to the Gaussian (fundamental mode) radius. The mode size is a function of the indices and is given by $w_{p\ell} = w_0 \sqrt{2p + |\ell| + 1}$. Such modes are shape invariant during propagation and reduce to the special case of the Gaussian beam when $p=\ell =0$. This full set of modes can be experimentally generated using complex-amplitude modulation. For this experiment we use the type 3 computer-generated hologram (CGH) described in \cite{Arrizon} to generate a subset of 35 $\mathrm{LG}_{p\ell}$ modes given by combinations of $p = \{0,1,2,3,4\}$ and $\ell = \{-3,-2,-1, 1,2,3,4\}$. In this way, the amplitude and phase of the $\mathrm{LG}_{p\ell}$ mode set (Eq.~\ref{eq:laguerre}) can be encoded into phase-only digital holograms and displayed on phase-only SLMs to generate any $\mathrm{LG}_{p\ell}$ mode. Moreover, the holograms can be multiplexed into a single hologram to generate multiple modes simultaneously. Figure \ref{holograms} (a) shows the holograms generated to create the desired subset of $\mathrm{LG}_{p\ell}$ modes for this experiment. Their corresponding theoretical intensity profiles can be seen in Fig. \ref{setup} (a).
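Equation~\ref{eq:laguerre} is straightforward to evaluate numerically. The sketch below is our illustration (not the experiment's software): $L_p^{|\ell|}$ is built with the standard three-term recurrence, and unit total power follows from the normalization in Eq.~\ref{eq:laguerre}.

```python
import numpy as np
from math import factorial, pi

def genlag(p, alpha, x):
    """Generalized Laguerre polynomial L_p^alpha(x) via the standard
    three-term recurrence."""
    lk_m1, lk = np.ones_like(x), 1 + alpha - x
    if p == 0:
        return lk_m1
    for k in range(1, p):
        lk_m1, lk = lk, ((2 * k + 1 + alpha - x) * lk - (k + alpha) * lk_m1) / (k + 1)
    return lk

def lg_mode(p, ell, r, phi, w0=1.0):
    """Complex field LG_{p,ell}(r, phi) at the waist plane (Eq. 1);
    normalized to unit total power."""
    norm = np.sqrt(2 * factorial(p) / (pi * w0**2 * factorial(p + abs(ell))))
    return (norm * (np.sqrt(2) * r / w0)**abs(ell)
            * genlag(p, abs(ell), 2 * r**2 / w0**2)
            * np.exp(-(r / w0)**2) * np.exp(1j * ell * phi))
```

Sampling such fields on a grid, together with a blazed grating phase, is the essence of the complex-amplitude modulation holograms used here.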
\begin{figure*}[tb]
\centering
\includegraphics[width =.9\textwidth]{Demux}
\caption{{\bf Complex amplitude modulation and spatial-multiplexing.} (a) Holograms encoded via complex-amplitude modulation to generate different $\mathrm{LG}_{p\ell}$ modes. (b) Holograms encoded with different carrier frequencies are superimposed into a single hologram to produce a spatial separation of all modes in the Fourier plane.}
\label{holograms}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width = .9\textwidth]{setup}
\caption{{\bf Schematic of our Multiplexing and Demultiplexing setup.} (a) Intensity profiles of $\mathrm{LG}_{p\ell}$ modes generated from combinations of $p = \{0,1,2,3,4\}$ and $\ell = \{-3,-2,-1, 1,2,3,4\}$. (b) Experimental setup: Three components of a multiline Ion-Argon laser, $\lambda_1=457$ nm, $\lambda_2=488$ nm and $\lambda_3=514$ nm, are separated using a grating and sent to a Spatial Light Modulator (SLM-1). (c) The SLM is split into three independent screens, and addressed with holograms to produce the set of modes shown in (a). The information is propagated through free space and reconstructed in the second stage with a modal filter. (d) The modal filter consists of a superposition of all holograms encoded in SLM-2. (e) Each mode is identified in the far field using a CCD camera and a lens.}
\label{setup}
\end{figure*}
The $\mathrm{LG}_{p\ell}$ modes generated in this way were encoded using three different wavelengths onto a single hologram, in a wavelength independent manner, and sent through free space. At the receiver, we were able to identify with high fidelity any of the 105 encoded modes in a single real time measurement, using a wavelength independent multimode correlation filter on a single SLM \cite{Flamm2013,Spangenberg}. This involves superimposing a series of single transmission functions $t_{n}(\textbf{r})$, each multiplied with a unique carrier frequency $\textbf{K}_{n}$ to produce a final transmission function $T(\textbf{r})$.
\begin{eqnarray}
T(\textbf{r}) = \sum_{n=1}^{N}t_{n}(\textbf{r})\exp(i\textbf{K}_{n}\textbf{r})
\label{eq:multiplex}
\end{eqnarray}
\noindent where $N$ is the maximum number of multiplexed modes. In the Fourier plane the carrier frequencies $\textbf{K}_{n}$ manifest as separate spatial coordinates as illustrated in Fig. \ref{setup} (e). This approach allows multiple LG modes to be generated and detected simultaneously producing a high data transmission rate. The experimentally generated $\mathrm{LG}_{p\ell}$ modes are used to encode and decode information in our multiplexing and demultiplexing scheme as shown in Fig. \ref{setup}.
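Equation~\ref{eq:multiplex} can be sketched numerically. The snippet below (our illustration, with hypothetical grid size and carrier choice) builds $T(\textbf{r})$ and confirms that a carrier of 16 cycles across the grid steers its mode to a spot 16 pixels from the centre of the Fourier plane:

```python
import numpy as np

def multiplex(fields, carriers, X, Y):
    """T(r) = sum_n t_n(r) exp(i K_n . r), Eq. (2); each carrier K_n steers
    its mode to a distinct coordinate in the Fourier (far-field) plane."""
    T = np.zeros_like(X, dtype=complex)
    for t_n, (kx, ky) in zip(fields, carriers):
        T = T + t_n * np.exp(1j * (kx * X + ky * Y))
    return T

# Hypothetical demonstration grid: one Gaussian field tagged with a carrier
# of 16 cycles across an N x N grid.
N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
t = np.exp(-(X**2 + Y**2) / (2 * 20.0**2))
T = multiplex([t], [(2 * np.pi * 16 / N, 0.0)], X, Y)
F = np.fft.fftshift(np.fft.fft2(T))
iy, ix = np.unravel_index(np.argmax(np.abs(F)), F.shape)  # peak offset = carrier
```

With several fields and distinct carriers, each mode appears at its own far-field coordinate, which is exactly how the demultiplexing filter separates the modal content in a single shot.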
To date only the azimuthal component, responsible for the OAM content of these modes, has been used for data transmission, ostensibly because the divergence is lowest for $p=0$ \cite{Gibson,Zhao}. Here we demonstrate that the propagation dynamics, divergence being one example, are governed by the beam quality factor $M^2= 2p+|\ell|+ 1$ \cite{Forbes}, and that modes with the same $M^2$ will propagate in an identical manner regardless of the radial index $p$. For example, the modes $\mathrm{LG}_{11}$ and $\mathrm{LG}_{03}$ will experience the same diffraction since both have the same value $M^2=4$. To show this, we encoded information in the set of $\mathrm{LG}_{p\ell}$ modes that incorporates both degrees of freedom, created as described before. Moreover, we multiplexed the above mentioned subset of $\mathrm{LG}_{p\ell}$ modes on three different wavelengths to increase our encoding/decoding basis set from 35 to 105. All modes were generated using a single SLM (SLM-1 in Fig. \ref{setup} (b)) and a wide range multi-line laser. The data are encoded using this mode set and transferred in free space. The information is recovered by projecting the propagated field onto a modal filter. The modal filter consists of multiplexed holograms displayed on a second SLM (SLM-2) and a CCD camera, capable of identifying with high accuracy any of the input modes (see experimental details). The intermodal crosstalk for the chosen modes, that is, the crosstalk between the input modes and the measured (output) modes, is illustrated in Fig.~\ref{crosstalk}. As can be seen, the crosstalk between the different modes is very low and is independent of the $p$ value.
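The beam quality bookkeeping above is a one-liner, and makes the pairing of equal-divergence modes easy to check: $\mathrm{LG}_{11}$ and $\mathrm{LG}_{03}$ share $M^2 = 4$, whereas $\mathrm{LG}_{04}$ has $M^2 = 5$.

```python
def m_squared(p, ell):
    """Beam quality factor M^2 = 2p + |ell| + 1 of an LG_{p,ell} mode;
    modes sharing M^2 diverge identically on propagation."""
    return 2 * p + abs(ell) + 1
```
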
\begin{figure}[h!]
\centering
\includegraphics[width=.4\textwidth]{crosstalk}
\caption{{\bf Cross Talk.} For each input mode, we measure the crosstalk with all 105 output modes. In all cases the input mode is detected with very high accuracy, above 98\%.}
\label{crosstalk}
\end{figure}
Figure \ref{cube} shows an example of an RGB image encoded pixel by pixel, as explained in the next section, and reconstructed in real time with a very high correlation coefficient ($c=0.96$). The correlation coefficient is a dimensionless measure of the similarity between two images, ranging from 0 for nonidentical images to 1 for identical images.
\begin{figure}[tb]
\centering
\includegraphics[width=.4\textwidth]{cubes}
\caption{{\bf Example of sent and received images.} A quantification of the similarity between sent and received images is done using 2D image correlation. The value of the correlation coefficient ranges from 0 for nonidentical images to 1 for identical images. The correlation coefficient for the above image is $c=0.96$. Rubik's Cube$^\circledR$ used by permission of Rubik's Brand Ltd.}
\label{cube}
\end{figure}
\subsection*{Encoding scheme}
The information encoding is performed in three different ways. In the first, applied to gray-scale images, we assign a particular mode and a particular wavelength to the gray level of each pixel forming the image. For example, the mode $\mathrm{LG_{0-3}}$ generated with $\lambda_{1}$ is assigned to the lowest gray level and the mode $\mathrm{LG_{44}}$ generated with $\lambda_{3}$ to the highest [see Fig.~\ref{coding} (a)]. In this approach we are able to reach 105 different gray levels. In the second approach, applied to colour images, each pixel is first decomposed into its three colour components (red, green and blue). The saturation level of each colour is assigned to one of the 35 different spatial modes and to a specific wavelength $\lambda_{1}$, $\lambda_{2}$ or $\lambda_{3}$ [see Fig. \ref{coding} (b)]. In this approach only 35 saturation levels can be reached, with a total number of 105 generated modes. Finally, in the third approach we implement multi-bit encoding [see Fig. \ref{coding} (c)]. In this scheme, 256 contrast levels are achieved by multiplexing eight different modes on a single hologram. Each of the 256 possible combinations of these 8 modes represents a particular gray level. Upon arrival at the detector, each combination is uniquely identified and the information is decoded from its 8-bit form to reconstruct the image. This approach was extended to high-contrast colour images by using a particular wavelength for each primary-colour intensity, achieving a total transmission rate of 24 bits per pixel. The transmission error rate, defined as the ratio of the number of wrong pixels to the total number of transmitted pixels, is very low, remaining below 1\% in the case of gray-scale images. The reliability of our technique was further tested by transmitting different complex images containing all saturation levels in each RGB component.
Here we show the results for one image (Fig.~\ref{cube}), which clearly demonstrates the very high similarity between the original and recovered images.
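A minimal sketch of the multi-bit scheme follows, assuming one bit per multiplexed mode; the specific mode labels are illustrative placeholders, not the exact modes used in the experiment.

```python
# Each 8-bit gray level switches on a subset of 8 multiplexed modes;
# the detector recovers the level from the set of identified modes.
MODES = ["LG_01", "LG_02", "LG_03", "LG_11",
         "LG_12", "LG_21", "LG_22", "LG_31"]

def encode_gray(level):
    """Return the subset of modes encoding an 8-bit gray level."""
    assert 0 <= level <= 255
    return [MODES[i] for i in range(8) if (level >> i) & 1]

def decode_gray(detected):
    """Recover the gray level from the set of detected modes."""
    return sum(1 << MODES.index(m) for m in detected)

# 2^8 = 256 distinct gray levels from only 8 multiplexed modes.
assert all(decode_gray(encode_gray(g)) == g for g in range(256))
```

With one such 8-mode hologram per wavelength, the three colour channels give the 24 bits per pixel stated above.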
\begin{figure*}[tb]
\centering
\includegraphics[width = 1\textwidth]{coding}
\caption{{\bf Encoding Configurations.} (a) Single colour channel encoding, applied to gray-scale images. (b) RGB encoding, applied to colour images. (c) Multi-bit encoding, applied to both gray-scale and colour images. Rubik's Cube$^\circledR$ used by permission of Rubik's Brand Ltd.}
\label{coding}
\end{figure*}
\section*{Discussion}
Very recently it was pointed out that OAM multiplexing is not an optimal technique for free-space information encoding and that OAM itself does not increase the bandwidth of optical communication systems \cite{Zhao}. It has also been suggested that MDM requires a complete mode set for a real bandwidth increment. Indeed, in all work to date only the azimuthal component of transverse modes, which gives rise to OAM, has been used in multiplexing schemes. Here we point out that the propagation dynamics (beam size, divergence, phase shift, etc.) in free space are entirely governed by the beam quality factor, $M^2= 2p+|\ell|+ 1$ \cite{Forbes}, with analogous relations for fibre modes. The $M^2$ can be viewed as a mode index: modes with the same index (e.g., $p=0$, $\ell=2$ and $p=1$, $\ell=0$) will propagate in an identical manner, as they have the same space-bandwidth product (see supplementary information for some examples). It is clear that one mode set will be as good as any other (at least in terms of perturbation-free communication), provided that its elements are orthogonal, regardless of whether they carry OAM or not. To demonstrate this, we create a mixed radial and azimuthal mode set from the $\mathrm{LG}_{p\ell}$ basis (with $p = \{0,1,2,3,4\}$ and $\ell = \{-3,-2,-1, 1,2,3,4\}$) and use it to transfer information over free space. Moreover, by implementing MDM on different wavelengths, we demonstrate that it is possible to expand the overall transmission capacity by several orders of magnitude. The number of carrier channels is given by the number of optical modes times the number of wavelengths. In our experiment we generated 35 optical modes and combined them with 3 different wavelengths, creating a basis set of 105 modes. These modes are used as information carriers in a proof-of-concept free-space link, capable of transmitting and recovering information in real time with very high fidelity.
Fig.~\ref{cube} is an example of the many images transmitted over our link. Each image is sent pixel by pixel; for this, the colour saturation of each pixel is encoded using our mode set. Our encoding/decoding technique is key to the implementation of our optical link. Its simplicity, combined with the versatility of SLMs, which can operate over a wide spectral range and with broadband sources, allowed us to generate customized digital holograms to encode and decode the information. Furthermore, the designed correlation filters are wavelength insensitive, which allows the technique to operate over a broad spectrum, in contrast to existing mode (de)multiplexers, such as the photonic lantern, which are extremely wavelength sensitive. This approach can be extended to a wider range of wavelengths and to a higher number of modes. Polarization could potentially provide an additional degree of freedom and could double the overall transmission capacity of the system. Even though here we have used our modes as information carriers, this experiment establishes the basis for incorporating the technique into standard communication systems. In that case, each mode would represent a channel that can be modulated and detected with conventional technology.
To conclude, we have introduced a novel holographic technique that allows over 100 modes to be encoded and decoded on a single hologram in a wavelength-independent manner across a wide spectral range. This technique allowed us to incorporate the radial component of LG beams as another degree of freedom for mode-division multiplexing. By combining both degrees of freedom, radial and azimuthal, with wavelength-division multiplexing, we are able to generate over 100 information channels using a single hologram. As a proof of concept, we implemented different encoding techniques to transmit information with very high accuracy in a free-space link that employs conventional technology such as SLMs and CCD cameras. Our approach can be implemented in both free space and optical fibres, facilitating studies towards high-bit-rate next-generation networks.
\section*{Methods}
\subsection*{Experimental details.}
The source, a multi-line linearly polarized Argon-ion laser (Laser Physics: 457--514 nm), is expanded and collimated by a telescope ($f_{1}=50$ mm and $f_{2}=300$ mm) to approximate a plane wave. Afterwards, it is decomposed into its different wavelength components by means of a grating. Three of these components, $\lambda_{1}=457$ nm, $\lambda_{2}=488$ nm and $\lambda_{3}=514$ nm, propagating almost parallel to each other, are redirected to a HoloEye Pluto Spatial Light Modulator (SLM, $1080\times1920$ pixels) with a pixel pitch of 8 $\mathrm{\mu m}$ [see Fig.~\ref{setup} (b)]. The SLM is split into three independent screens, one for each beam, each controlled independently. Each third is addressed with a hologram representing a Laguerre-Gaussian mode ($\mathrm{LG}_{p\ell}$), where $p$ is the radial index and $\ell$ the azimuthal index [see Fig.~\ref{setup} (c)]. For this experiment we use 35 different modes [see Fig.~\ref{setup} (a)], generated by combinations of $p$ = \{0, 1, 2, 3, 4\} and $\ell$ = \{-3, -2, -1, 1, 2, 3, 4\}. It should be stressed that the selection of modes was made arbitrarily and does not exclude any other combinations. These modes are encoded via complex amplitude modulation, and only the first diffracted order from each third of the SLM is used.
The information decoding is performed using modal decomposition; for this, the beams are projected onto a second SLM in a $4f$ configuration ($f_{3}=150$ mm). This SLM is also split into three independent screens, each of which is addressed with a multiplexed hologram. This hologram consists of the complex conjugates of all 35 modes, each encoded with a different spatial carrier frequency [see Fig.~\ref{setup} (d)]. To identify each mode, and therefore the gray level of each pixel, we measure the on-axis intensity of the projection in the far field. For this we use a lens with focal length $f_4=200$ mm and a CCD camera (Point Grey Flea3 Mono USB3, $1280\times960$ pixels) in a $2f$ configuration. In the detection plane (that of the camera), all 105 modes appear spatially separated in a rectangular arrangement, due to their unique carrier frequencies. In this way, an incoming mode can be unambiguously identified by detecting a high on-axis intensity [see Fig.~\ref{setup} (e)]. Although many other modes can also produce some on-axis intensity, the one that matches the incoming mode is always the brightest. In our experiment it is necessary to compensate for small spherical aberrations; this is done by digitally encoding a cylindrical lens on the second SLM, which corrects all modes simultaneously. \newline
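The on-axis detection criterion can be illustrated with a toy modal projection. This sketch keeps only the azimuthal phase $e^{i\ell\phi}$ under a shared Gaussian envelope, with no radial profile or carrier frequencies; it is a simplified illustration, not the authors' code.

```python
import numpy as np

# Toy modal decomposition: project an incoming vortex beam onto the
# conjugate of each candidate mode and keep the total-overlap power,
# which is proportional to the on-axis far-field intensity.
N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
phi = np.arctan2(Y, X)
envelope = np.exp(-(X**2 + Y**2) / 0.1)   # shared Gaussian envelope

def mode(l):
    return envelope * np.exp(1j * l * phi)

def on_axis_power(field, l_filter):
    # The far-field on-axis amplitude of field * conj(mode) is their overlap.
    return abs(np.sum(field * np.conj(mode(l_filter)))) ** 2

incoming = mode(3)
powers = {l: on_axis_power(incoming, l) for l in (-3, -2, -1, 1, 2, 3, 4)}
# The matched filter (l = 3) is by far the brightest.
assert max(powers, key=powers.get) == 3
```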
\section{Introduction}\label{sec:intro}
Humans perceive others' emotions through verbal cues such as speech~\cite{prosodic1,prosodic2}, text~\cite{text1,text2}, and non-verbal cues such as eye-movements~\cite{eye2}, facial expressions~\cite{AU}, tone of voice, postures~\cite{posture1}, walking styles~\cite{gait_psych4}, etc.
Perceiving others' emotions shapes people's interactions and experiences when performing tasks in collaborative or competitive environments~\cite{secret_life_of_brain}.
Given this importance of perceived emotions in everyday lives, there has been a steady interest in developing automated techniques for perceiving emotions from various cues, with applications in affective computing, therapy, and rehabilitation~\cite{rehab}, robotics~\cite{robotics,proxemo}, surveillance~\cite{surveillance,liarswalk}, audience understanding~\cite{audience_understanding}, and character generation~\cite{char_gen}.
While there are multiple non-verbal modalities for perceiving emotions, in our work, we only observe people's styles of walking or their gaits, extracted from videos or motion-captured data. Perceived emotion recognition using any non-verbal cues is considered to be a challenging problem in both psychology and AI, primarily because of the unreliability in the cues, arising from sources such as ``mock'' expressions~\cite{unreliability1}, expressions affected by the subject's knowledge of an observer~\cite{unreliability2}, or even self-reported emotions in certain scenarios~\cite{unreliability3}. However, gaits generally require less conscious initiation from the subjects and therefore tend to be more reliable cues. Moreover, studies in psychology have shown that observers were able to perceive the emotions of walking subjects by observing features such as arm swinging, stride lengths, collapsed upper body, etc.~\cite{gait_psych3,gait_psych4}.
Gaits have been widely used in computer vision for many applications, including action recognition~\cite{stgcn,dgnn,msg3d} and perceiving emotions~\cite{tanmay_emotions,eva,step,emoticon}. However, there are a few key challenges in terms of designing machine learning methods for emotion recognition using gaits:
\begin{itemize}[label=\textbullet]
\item Methods based on hand-crafted biomechanical features extracted from human gaits often suffer from low prediction accuracy~\cite{crenn2016body,venture2014recognizing}.
\item Fully deep-learned methods~\cite{tanmay_emotions,step} rely heavily on sufficiently large sets of annotated data. Annotations are expensive and tedious to collect due to the variations in scales and motion trajectories~\cite{discrimnet}, as well as the inherent subjectivity in perceiving emotions~\cite{step}. The benchmark dataset for emotion recognition, Emotion-Gait~\cite{step}, has around $4,000$ data points of which more than $53\%$ are unlabeled.
\item Conditional generative methods are useful for data augmentation, but current methods can only generate data for short time periods~\cite{orange_duck,video_ac_gen_2} or with relatively low diversity~\cite{quaternet,gait_ac_gen_1,video_ac_gen_1,step}.
\end{itemize}
On the other hand, acquiring poses from videos and MoCap data is cheap and efficient, leading to the availability of large-scale pose-based datasets~\cite{cmu_mocap,human3.6m,kinetics,ntu_rgbd}.
Given the availability of these unlabeled gait datasets and the sparsity of gaits labeled with perceived emotions, there is a need to develop automatic methods that can utilize these datasets for emotion recognition.
\noindent\textbf{Main Contributions:}
We present a semi-supervised network that accepts 3D pose sequences of human gaits extracted from videos or motion-captured data and predicts discrete perceived emotions, such as happy, angry, sad, and neutral. Our network consists of an unsupervised autoencoder coupled with a supervised classifier. The encoder in the unsupervised autoencoder hierarchically pools attentions on parts of the body. It learns separate intermediate feature representations for the motions on each of the human body parts (arms, legs, and torso) and then pools these features in a bottom-up manner to map them to the latent embeddings of the autoencoder. The decoder takes in these embeddings and reconstructs the motion on each joint of the body in a top-down manner.
We also perform affective mapping: we constrain the space of network-learned features to subsume the space of biomechanical affective features~\cite{aff_features} expressed from the input gaits. These affective features contain useful information for distinguishing between different perceived emotions.
Lastly, for the labeled data, our supervised classifier learns to map the encoder embeddings to the discrete emotion labels to complete the training process. To summarize, we contribute:
\begin{itemize}[label=\textbullet]
\item \textbf{A semi-supervised network}, consisting of an autoencoder and a classifier, that are trained together to predict discrete perceived emotions from 3D pose sequences of gaits of humans.
\item \textbf{A hierarchical attention pooling module} on the autoencoder to learn useful embeddings for unlabeled gaits, which improves the mean average precision (mAP) in classification by 1--17\% on the absolute compared to state-of-the-art methods in both emotion recognition and action recognition from 3D gaits on the Emotion-Gait benchmark dataset.
\item \textbf{Subsuming the affective features} expressed from the input gaits in the space of learned embeddings. This improves the mAP in classification by 7--23\% on the absolute compared to state-of-the-art methods.
\end{itemize}
We observe that the performance of our network improves linearly as more unlabeled data is used for training. More importantly, we report a 10--50\% improvement in average precision on the absolute for emotion classes that have fewer than 25\% labeled samples in the Emotion-Gait dataset~\cite{step}.
\section{Related Work}\label{sec:rw}
We briefly review prior work in classifying perceived emotions from gaits, as well as the related task of action recognition and generation from gaits.
\noindent\textbf{Detecting Perceived Emotions from Gaits.} Experiments in psychology have shown that observers were able to identify sadness, anger, happiness, and pride by observing gait features such as arm swinging, long strides, erect posture, collapsed upper body, etc.~\cite{gait_psych1,gait_psych2,gait_psych3,gait_psych4}. This, in turn, has led to considerable interest from both the computer vision and the affective computing communities in detecting perceived emotions from recorded gaits. Early works exploited different gait-based affective features to automatically detect perceived emotions~\cite{karg2010recognition,venture2014recognizing,crenn2016body,daoudi2017emotion}. More recent works combined these affective features with features learned from recurrent~\cite{tanmay_emotions} or convolutional networks~\cite{step} to significantly improve classification accuracies.
\noindent\textbf{Action Recognition and Generation.} There are large bodies of recent work on both gait-based supervised action recognition~\cite{video_ac_reg_1,video_ac_reg_2,stgcn,recurrent_action,gc_lstm,dgnn,2sagcn,msg3d}, and gait-based unsupervised action generation~\cite{video_ac_gen_1,gait_ac_gen_1,orange_duck,quaternet}. These methods make use of RNNs or CNNs, including GCNs, or a combination of both, to achieve high classification accuracies on benchmark datasets such as Human3.6M~\cite{human3.6m}, Kinetics~\cite{kinetics}, NTU RGB-D~\cite{ntu_rgbd}, and more. On top of the deep-learned networks, some methods have also leveraged the kinematic dependencies between joints and bones~\cite{dgnn}, dynamic movement-based features~\cite{2sagcn}, and long-range temporal dependencies~\cite{msg3d}, to further improve performance. A comprehensive review of recent methods in kinect-based action recognition is available in~\cite{kinect_review}.
RNN and CNN-based approaches have been extended to semi-supervised classification as well~\cite{recurrent_semisup,conv_semisup,lhd,phd}. These methods have also added constraints on limb proportions, movement constraints, and exploited the autoregressive nature of gait prediction to improve their generative and classification components.
Generative methods have also exploited full sequences of poses to directly generate full test sequences~\cite{pose_guided_gait1,pose_guided_gait2}. Other approaches have used constraints on limb movements~\cite{discrimnet}, action-specific trajectories~\cite{orange_duck}, and the structure and kinematics of body joints~\cite{quaternet}, to improve the naturalness of generated gaits.
In our work, we learn latent embeddings from gaits by exploiting the kinematic chains in the human body~\cite{human_kinematics} in a hierarchical fashion. Inspired by prior works in emotion perception from gaits, we also constrain our embeddings to contain the space of affective features expressed from gaits, to improve our average precision, especially on the rarer classes.
\section{Approach}\label{sec:approach}
Given both labeled and unlabeled 3D pose sequences for gaits, our goal is to classify all the gaits into one or more discrete perceived emotion labels, such as happy, sad, angry, etc. We use a semi-supervised approach to achieve this, by combining an autoencoder with a classifier, as shown in Fig.~\ref{fig:network}. We denote the set of trainable parameters in the encoder, decoder, and classifier with $\theta$, $\psi$, and $\phi$, respectively. We first extract the rotation per joint from the first time step to the current time step in the input sequences (details in Sec.~\ref{subsec:preprocessing}). We then pass these rotations through the encoder, denoted with $f_\theta\parens{\cdot}$, to transform the input rotations into features in the latent embedding space. We pass these latent features through the decoder, denoted with $f_\psi\parens{\cdot}$, to generate reconstructions of the input rotations. If training labels are available, we also pass the encoded features through the fully-connected classifier network, denoted with $f_\phi\parens{\cdot}$, to predict the probabilities of the labels. We define our overall loss function as
\begin{equation}
\mathcal{C}\parens{\theta, \phi, \psi} = \sum_{i=1}^M I_y^{\parens{i}}\mathcal{C}_{CL}\parens{y^{\parens{i}}, f_{\phi\circ\theta}\parens{D^{\parens{i}}}} + \mathcal{C}_{AE}\parens{D^{\parens{i}}, f_{\psi\circ\theta}\parens{D^{\parens{i}}}},
\label{eq:semisup_loss}
\end{equation}
where $f_{b\circ a}\parens{\cdot} := f_b\parens{f_a\parens{\cdot}}$ denotes the composition of functions, $I_y^{\parens{i}}$ is an indicator variable denoting whether the $\raisedth{i}$ data point has an associated label $y^{\parens{i}}$, $M$ is the number of gait samples, $\mathcal{C}_{CL}$ denotes the classifier loss detailed in Sec.~\ref{subsec:cf_loss}, and $\mathcal{C}_{AE}$ denotes the autoencoder loss detailed in Sec.~\ref{subsec:ae_loss}. For brevity of notation, we will henceforth use $\hat{y}^{\parens{i}} := f_{\phi\circ\theta}\parens{D^{\parens{i}}}$ and $\hat{D}^{\parens{i}} := f_{\psi\circ\theta}\parens{D^{\parens{i}}}$.
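The structure of the overall loss can be sketched numerically: the classifier term is included only when the indicator $I_y^{\parens{i}}=1$, while the autoencoder term is included for every sample. The loss functions below are illustrative stand-ins, not the actual network losses.

```python
def total_loss(samples, clf_loss, ae_loss):
    """Semi-supervised loss: AE term for all gaits, CL term for labeled ones."""
    total = 0.0
    for x, y in samples:          # y is None for unlabeled gaits
        if y is not None:         # indicator I_y
            total += clf_loss(x, y)
        total += ae_loss(x)
    return total

# Three gaits, two labeled: 2 classifier terms + 3 autoencoder terms.
samples = [("g1", "happy"), ("g2", None), ("g3", "sad")]
loss = total_loss(samples, clf_loss=lambda x, y: 1.0, ae_loss=lambda x: 0.5)
assert loss == 3.5
```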
\begin{table}[t]
\begin{minipage}{0.38\linewidth}
\centering
\includegraphics[width=\columnwidth]{figures/pose.jpg}
\captionof{figure}{\fontsize{8}{9.6}\selectfont\textbf{3D pose model.} The names and numbering of the $21$ joints in the pose follow the nomenclature in the ELMD dataset~\cite{elmd}.}
\label{fig:pose}
\end{minipage}
\hfill
\begin{minipage}{0.60\linewidth}
\caption{\fontsize{8}{9.6}\selectfont\textbf{Affective Features.} List of the $18$ pose affective features that we use to describe the affective feature space for our network.}
\label{tab:aff_features}
\centering\tiny
\resizebox{0.8\columnwidth}{!}{%
\begin{tabular}{L{0.9cm}L{5.4cm}}
\toprule
\multirow{11}{0.9cm}{Angles between} & shoulders at lower back\\
& hands at root\\
& left shoulder and hand at elbow \\
& right shoulder and hand at elbow \\
& head and left shoulder at neck \\
& head and right shoulder at neck \\
& head and left knee at root \\
& head and right knee at root \\
& left toe and right toe at root \\
& left hip and toe at knee \\
& right hip and toe at knee \\
\midrule
\multirow{4}{0.9cm}{Distance ratios between} & left hand index (LHI) to neck and LHI to root \\
& right-hand index (RHI) to neck and RHI to root \\
& LHI to RHI and neck to root \\
& left toe to right toe and neck to root \\
\midrule
Area($\Delta$) & $\Delta$ shoulders to lower back and $\Delta$ shoulders to root \\
ratios & $\Delta$ hands to lower back and $\Delta$ hands to root \\
between & $\Delta$ hand indices to neck and $\Delta$ toes to root \\
\bottomrule
\end{tabular}
}
\end{minipage}
\end{table}
\subsection{Representing Emotions}\label{subsec:represent_emotions}
The Valence-Arousal-Dominance (VAD) model~\cite{vad} is used for representing emotions in a continuous space. This model assumes three independent axes for valence, arousal, and dominance values, which collectively indicate an observed emotion. Valence indicates how pleasant (vs. unpleasant) the emotion is, arousal indicates how much the emotion is tied to high (vs. low) physiological intensity, and dominance indicates how much the emotion is tied to the assertion of high (vs. low) social status. For example, happy indicates high valence, medium arousal, and low dominance; angry indicates low valence, high arousal, and high dominance; and sad indicates low valence, low arousal, and low dominance.
On the other hand, these discrete emotion terms are easily understood by non-expert annotators and end-users. As a result, most existing datasets for supervised emotion classification consist of discrete emotion labels, and most supervised methods report performance on predicting these discrete emotions. Moreover, discrete emotions can be mapped back to the VAD space through various known transformations~\cite{discrete_to_continuous_1,discrete_to_continuous_2}. Given these factors, we choose to use discrete emotion labels in our work as well. We also note that human observers have been reported to be most consistent in perceiving emotions varying primarily on the arousal axis, such as happy, sad, and angry~\cite{critical_gait_features,effort_shape}. Hence we work with the four emotions happy, sad, angry, and neutral.
\subsection{Representing the Data}\label{subsec:preprocessing}
Given the 3D pose sequences for gaits, we first obtain the rotations per joint per time step. We denote a gait as $G = \braces{\parens{x_j^t, y_j^t, z_j^t}}_{j=1, t=1}^{J, T}$, consisting of the 3D positions of $J$ joints across $T$ time steps. We denote the rotation of joint $j$ from the first time step to time step $t$ as $R_j^t \in \mathbb{SO}\parens{3}$. We represent these rotations as unit quaternions $q_j^t \in \mathbb{H} \subset \mathbb{R}^4$, where $\mathbb{H}$ denotes the space of unit 4D quaternions. As stated in~\cite{quaternet}, quaternions are free of the gimbal-lock problem, unlike other common representations such as Euler angles or exponential maps~\cite{exponential_maps}. We enforce the additional unit norm constraints for these quaternions when training our autoencoder. We represent the overall input to our network as $D^{\parens{i}} := \braces{q_j^t}_{j=1, t=1}^{J, T} \in \mathbb{H}^{J\times T}$.
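As an illustration of this rotation representation, a unit quaternion can be built from an axis-angle rotation; the helper below is hypothetical, not part of the paper's pipeline.

```python
import math

# Build the unit quaternion (w, x, y, z) for a rotation by `angle`
# about `axis` -- the representation used here for each joint rotation.
def axis_angle_to_quat(axis, angle):
    n = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / n for a in axis)
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), ux * s, uy * s, uz * s)

q = axis_angle_to_quat((0.0, 0.0, 1.0), math.pi / 2)  # 90 deg about z
# The unit-norm property that training later enforces explicitly.
assert abs(sum(c * c for c in q) - 1.0) < 1e-9
```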
\subsection{Using Perceived Emotions and Constructing Classifier Loss}\label{subsec:cf_loss}
Observers' perception of emotions in others is heavily influenced by their own personal, social, and cultural experiences, making emotion perception an inherently subjective task~\cite{critical_gait_features,gait_psych4}. Consequently, we need to keep track of the differences in the perceptions of different observers. We do this by assigning multi-hot emotion labels to each input gait.
We assume that the given labeled gait dataset consists of $C$ discrete emotion classes. The raw label vector $L^{\parens{i}}$ for the $\raisedth{i}$ gait is a probability vector where the $\raisedth{l}$ element denotes the probability that the corresponding gait is perceived to have the $\raisedth{l}$ emotion. Specifically, we assume $L^{\parens{i}} \in \bracks{0, 1}^C$ to be given as $L^{\parens{i}} = \begin{bmatrix} p_1 & p_2 & \dots p_C \end{bmatrix}^\top$, where $p_l$ denotes the probability of the $\raisedth{l}$ emotion and $l=1, 2, \dots C$. In practice, we compute the probability of each emotion for each labeled gait in a dataset as the fraction of annotators who labeled the gait with the corresponding emotion. To perform classification, we need to convert each element in $L^{\parens{i}}$ to an assignment in $\braces{0, 1}$, resulting in the multi-hot emotion label $y^{\parens{i}} \in \braces{0, 1}^C$. Taking into account the subjectivity in perceiving emotions, we set an element $l$ in $y^{\parens{i}}$ to 1 if $p_l > \frac{1}{C}$, \textit{i.e.}, the $\raisedth{l}$ perceived emotion has more than a random chance of being reported, and 0 otherwise. Since our classification problem is multi-class (typically, $C > 2$) as well as multi-label (as we use multi-hot labels), we use the weighted multi-class cross-entropy loss
\begin{equation}
\mathcal{C}_{CL}\parens{y^{\parens{i}}, \hat{y}^{\parens{i}}} := -\sum_{l=1}^C w_l \parens{y_l}^{\parens{i}}\log\parens{\hat{y}_l}^{\parens{i}}
\label{eq:cl_loss}
\end{equation}
for our classifier loss, where $\parens{y_l}^{\parens{i}}$ and $\parens{\hat{y}_l}^{\parens{i}}$ denote the $\raisedth{l}$ components of $y^{\parens{i}}$ and $\hat{y}^{\parens{i}}$, respectively. We also add per-class weights $w_l = e^{-p_l}$ to make the training more sensitive to mistakes on the rarer samples in the labeled dataset.
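The label construction and the weighted loss above can be sketched as follows; the annotator fractions and predicted probabilities are made-up numbers.

```python
import math

C = 4                                # happy, sad, angry, neutral
p = [0.6, 0.3, 0.1, 0.0]             # fraction of annotators per emotion
y = [1 if pl > 1.0 / C else 0 for pl in p]   # multi-hot label
assert y == [1, 1, 0, 0]             # emotions above random chance (1/C)

w = [math.exp(-pl) for pl in p]      # w_l = e^{-p_l}: rarer labels weigh more

def weighted_ce(y, y_hat, w):
    """Weighted multi-class cross-entropy over the multi-hot label."""
    return -sum(wl * yl * math.log(yhl)
                for wl, yl, yhl in zip(w, y, y_hat))

loss = weighted_ce(y, [0.7, 0.4, 0.05, 0.1], w)
assert loss > 0.0
```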
\subsection{Using Affective Features and Constructing Autoencoder Loss}\label{subsec:ae_loss}
Our autoencoder loss consists of three constraints: affective loss, quaternion loss, and angle loss.
\noindent\textbf{Affective loss.} Prior studies in psychology report that a person's perceived emotions can be represented by a set of scale-independent gait-based affective features~\cite{crenn2016body}. We consider the poses underlying the gaits to be made up of $J = 21$ joints (Fig.~\ref{fig:pose}). Inspired by~\cite{tanmay_emotions}, we categorize the affective features as follows:
\begin{itemize}[label=\textbullet]
\item \textit{Angles} subtended by two joints at a third joint. For example, between the head and the neck (used to compute head tilt), the neck, and the shoulders (to compute slouching), root and thighs (to compute stride lengths), etc.
\item\textit{Distance ratios} between two pairs of joints. For example, the ratio between the distance from the hand to the neck, and that from the hand to the root (to compute arm swings).
\item\textit{Area ratios} formed by two triplets of joints. For example, the ratio of the area formed between the elbows and the neck and the area formed between the elbows and the root (to compute slouching and arm swings). Area ratios can be viewed as amalgamations of the angle-based and distance-ratio-based features, and serve to supplement the observations from those features.
\end{itemize}
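The feature families above can be sketched directly from 3D joint positions; the coordinates below are made-up examples, not dataset values.

```python
import math

def angle_at(a, vertex, b):
    """Angle subtended by joints a and b at joint `vertex`."""
    u = [ai - vi for ai, vi in zip(a, vertex)]
    v = [bi - vi for bi, vi in zip(b, vertex)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    return math.acos(dot / (math.hypot(*u) * math.hypot(*v)))

def dist(a, b):
    return math.hypot(*[ai - bi for ai, bi in zip(a, b)])

head, neck, shoulder = (0.0, 1.7, 0.0), (0.0, 1.5, 0.0), (-0.2, 1.45, 0.0)
hand, root = (-0.3, 0.9, 0.1), (0.0, 1.0, 0.0)

slouch = angle_at(head, neck, shoulder)        # an angle feature
swing = dist(hand, neck) / dist(hand, root)    # a distance-ratio feature
assert 0.0 < slouch < math.pi and swing > 0.0
```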
We present the full list of the $\mathcal{A} = 18$ affective features we use in Table~\ref{tab:aff_features}. We denote the set of affective features across all time steps for the $\raisedth{i}$ gait with $a^{\parens{i}} \in \mathbb{R}^{\mathcal{A}\times T}$. We then constrain a subset of the embeddings learned by our encoder to map to these affective features. Specifically, we construct our embedding space to be $\mathbb{R}^{\mathcal{E}\times T}$ such that $\mathcal{E} \geq \mathcal{A}$. We then constrain the first $\mathcal{A}\times T$ dimensions of the embedding, denoted with $\hat{a}^{\parens{i}}$ for the $\raisedth{i}$ gait, to match the corresponding affective features $a^{\parens{i}}$. This gives our affective loss constraint:
\begin{equation}
\mathcal{L}_{\textrm{aff}}\parens{a^{\parens{i}}, \hat{a}^{\parens{i}}} := \norm{a^{\parens{i}} - \hat{a}^{\parens{i}}}^2.
\label{eq:aff_loss}
\end{equation}
We use affective constraints rather than providing affective features as input because there is no consensus on the universal set of affective features, especially due to cross-cultural differences~\cite{ekman_non_verbal,critical_gait_features}. Thus, we allow the encoder of our autoencoder to learn an embedding space using both data-driven features and our affective features, to improve generalizability.
\noindent\textbf{Quaternion loss.} The decoder for our autoencoder returns rotations per joint per time step as quaternions $\parens{\hat{q}_j^t}^{\parens{i}}$. We then constrain these quaternions to have unit norm:
\begin{equation}
\mathcal{L}_{\textrm{quat}}\parens{\parens{\hat{q}_j^t}^{\parens{i}}} := \parens{\norm{\parens{\hat{q}_j^t}^{\parens{i}}} - 1}^2.
\label{eq:quat_loss}
\end{equation}
We apply this constraint instead of normalizing the decoder output, since individual rotations tend to be small, which leads the network to converge all its estimates to the unit quaternion.
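A sketch of this constraint applied to a raw decoder output follows; the values are illustrative.

```python
import math

def quat_norm_loss(q):
    """Penalty pushing a raw 4-vector toward unit norm."""
    norm = math.sqrt(sum(c * c for c in q))
    return (norm - 1.0) ** 2

assert quat_norm_loss([1.0, 0.0, 0.0, 0.0]) == 0.0   # already unit
assert quat_norm_loss([2.0, 0.0, 0.0, 0.0]) == 1.0   # norm 2 -> (2-1)^2
```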
\noindent\textbf{Angle loss.} This is the reconstruction loss for the autoencoder. We obtain it by converting the input and the output quaternions to the corresponding Euler angles and computing the mean loss between them:
\begin{equation}
\mathcal{L}_{\textrm{ang}}\parens{D^{\parens{i}}, \hat{D}^{\parens{i}}} := \norm{\parens{D_X, D_Y, D_Z}^{\parens{i}} - \parens{\hat{D}_X, \hat{D}_Y, \hat{D}_Z}^{\parens{i}}}_F^2
\label{eq:ang_loss}
\end{equation}
where $\parens{D_X, D_Y, D_Z}^{\parens{i}} \in \bracks{0, 2\pi}^{3J\times T}$ and $\parens{\hat{D}_X, \hat{D}_Y, \hat{D}_Z}^{\parens{i}} \in \bracks{0, 2\pi}^{3J\times T}$ denote the sets of Euler angles for all the joints across all the time steps for the input $D^{\parens{i}}$ and the output $\hat{D}^{\parens{i}}$, respectively, and $\norm{\cdot}_F$ denotes the Frobenius norm.
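A sketch of this reconstruction loss, assuming a particular Euler convention and a $(w, x, y, z)$ quaternion layout (the paper does not state its conventions, so both are illustrative choices):

```python
import numpy as np

def quat_to_euler(q):
    """Convert unit quaternions (..., 4) in (w, x, y, z) order to
    roll-pitch-yaw Euler angles. Convention is an illustrative assumption."""
    w, x, y, z = q[..., 0], q[..., 1], q[..., 2], q[..., 3]
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return np.stack([roll, pitch, yaw], axis=-1)

def angle_loss(q_in, q_out):
    """L_ang: squared Frobenius norm of the Euler-angle difference."""
    return float(np.sum((quat_to_euler(q_in) - quat_to_euler(q_out)) ** 2))
```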
Combining Eqs.~\ref{eq:aff_loss}, \ref{eq:quat_loss} and \ref{eq:ang_loss}, we write the autoencoder loss $\mathcal{C}_{AE}\parens{\cdot, \cdot}$ as
\begin{equation}
\mathcal{C}_{AE}\parens{D^{\parens{i}}, \hat{D}^{\parens{i}}} := \mathcal{L}_{\textrm{ang}}\parens{D^{\parens{i}}, \hat{D}^{\parens{i}}} + \lambda_{\textrm{quat}}\mathcal{L}_{\textrm{quat}}\parens{\parens{\hat{q}_j^t}^{\parens{i}}} + \lambda_{\textrm{aff}}\mathcal{L}_{\textrm{aff}}\parens{a^{\parens{i}}, \hat{a}^{\parens{i}}}
\label{eq:ae_loss}
\end{equation}
where $\lambda_{\textrm{quat}}$ and $\lambda_{\textrm{aff}}$ are the regularization weights for the quaternion loss constraint and the affective loss constraint, respectively. To keep the scales of $\mathcal{L}_{\textrm{quat}}$ and $\mathcal{L}_{\textrm{aff}}$ consistent, we also scale all the affective features to lie in $\bracks{0, 1}$.
\section{Network Architecture and Implementation}\label{sec:impl}
Our network for semi-supervised classification of discrete perceived emotions from gaits, shown in Fig.~\ref{fig:network}, consists of three components, the encoder, the decoder, and the classifier. We describe each of these components and then summarize the training routine for our network.
\subsection{Encoder with Hierarchical Attention Pooling}\label{subsec:encoder}
We first pass the sequences of joint rotations on all the joints through a two-layer Gated Recurrent Unit (GRU) to obtain feature representations for rotations at all joints at all time steps. We pass each of these representations through individual linear units. Following the kinematic chain of the human joints~\cite{human_kinematics}, we pool the linear unit outputs for the two arms, the two legs, and the torso in five separate linear layers. Thus, each of these five linear layers learns to focus attention on a different part of the human body. We then pool the outputs from these five linear layers into another linear layer, which, by construction, focuses attention on the motions of the entire body. For pooling, we perform vector addition as a way of composing the features at the different hierarchies.
Our encoder learns the hierarchy of the joint rotations in a bottom-up manner. We map the output of the last linear layer in the hierarchy to a feature representation in the embedding space of the encoder through another linear layer. In our case, the embedding space lies in $\mathbb{R}^{\mathcal{E}\times T}$ with $\mathcal{E} = 32$, which subsumes the space of affective features $\mathbb{R}^{\mathcal{A}\times T}$ with $\mathcal{A} = 18$, as discussed in Sec.~\ref{subsec:ae_loss}.
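The bottom-up pooling by vector addition can be sketched as follows. The joint groupings are illustrative (not the paper's exact 21-joint skeleton), and the $(F, F)$ matrices stand in for the learned linear layers:

```python
import numpy as np

# Illustrative joint indices per body part; the real skeleton differs.
PARTS = {
    "left_arm":  [0, 1, 2],
    "right_arm": [3, 4, 5],
    "left_leg":  [6, 7, 8],
    "right_leg": [9, 10, 11],
    "torso":     [12, 13, 14],
}

def hierarchical_pool(joint_feats, weights):
    """joint_feats: (J, F) per-joint GRU features; weights: dict of (F, F)
    matrices standing in for the per-part and whole-body linear layers.
    Returns a single whole-body feature vector."""
    part_feats = []
    for name, idx in PARTS.items():
        pooled = joint_feats[idx].sum(axis=0)      # vector-addition pooling
        part_feats.append(pooled @ weights[name])  # per-part linear layer
    body = np.sum(part_feats, axis=0)              # pool the five parts
    return body @ weights["body"]                  # whole-body linear layer
```

Each per-part layer only ever sees features pooled from its own joints, which is what lets it specialize its attention to that body part.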
\subsection{Decoder with Hierarchical Attention Un-pooling}\label{subsec:decoder}
The decoder takes in the embedding from the encoder, repeats it five times for un-pooling, and passes the repeated features through five linear layers. The outputs of these linear layers are features representing the reconstructions on the five parts: the torso, the two arms, and the two legs. We repeat each of these features for un-pooling, and then collectively feed them into a GRU, which reconstructs the rotation on every joint at a single time step. A subsequent GRU takes in the reconstructed joint rotations at that time step and successively predicts the joint rotations for the next $T-1$ time steps.
\subsection{Classifier for Labeled Data}\label{subsec:classifier}
Our classifier takes in the embedding and passes it through a series of three linear layers, flattening the features between the second and the third linear layers. The output of the final linear layer, called ``Output Labels'' in Fig.~\ref{fig:network}, provides the label probabilities. To make predictions, we set the output for a class to $1$ if the label probability for that class is more than $\frac{1}{C}$, similar to the routine for constructing input labels discussed in Sec.~\ref{subsec:cf_loss}.
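The $\frac{1}{C}$ thresholding step amounts to a one-liner (sketch; $C = 4$ emotion classes in this paper):

```python
import numpy as np

def multi_hot_predict(probs, C=4):
    """Set a class to 1 when its probability exceeds 1/C, 0 otherwise."""
    return (np.asarray(probs) > 1.0 / C).astype(int)
```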
\subsection{Training Routine}\label{subsec:train_routine}
We train using the Adam optimizer~\cite{adam} with a learning rate of $0.001$, which we decay by a factor of $0.999$ per epoch. We apply the ELU activation~\cite{elu} on all the linear layers except the output label layer, apply batch normalization~\cite{batchnorm} after every layer to reduce internal covariate shift, and apply a dropout of $0.1$ to prevent overfitting. On the second GRU in the decoder, which predicts joint rotations for $T$ successive time steps, we use a curriculum schedule~\cite{curriculum_schedule}. We start with a teacher forcing ratio of $1$ on this GRU and at every epoch $E$, we decay the teacher forcing ratio by $\beta = 0.995$, \textit{i.e.}, we either provide this GRU the input joint rotations with probability $\beta^E$, or the GRU's past predicted joint rotations with probability $1 - \beta^E$. Curriculum scheduling helps the GRU transition gently from a teacher-guided prediction routine to a self-guided prediction routine, thereby expediting the training process.
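The teacher-forcing decay can be sketched as a per-step coin flip (function name is ours):

```python
import numpy as np

def use_teacher_forcing(epoch, beta=0.995, rng=None):
    """Curriculum schedule: feed the GRU ground-truth inputs with probability
    beta**epoch, and its own past predictions otherwise."""
    rng = rng if rng is not None else np.random.default_rng()
    return bool(rng.random() < beta ** epoch)
```

At epoch 0 the ratio is $\beta^0 = 1$ (always teacher-guided); by epoch 500 it has decayed below $0.1$, so the GRU is almost always self-guided.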
We train our network for $500$ epochs, which takes around $4$ hours on an Nvidia GeForce GTX 1080Ti GPU with $12$ GB memory. We use $80\%$ of the available labeled data and all the unlabeled data for training our network, and validate its classification performance on a separate $10\%$ of the labeled data. We keep the remaining $10\%$ as the held-out test data. We also observed satisfactory performance when the weights $\lambda_\textrm{quat}$ and $\lambda_\textrm{aff}$ (in Eqn.~\ref{eq:ae_loss}) lie between $0.5$ and $2.5$. For our reported performances in Sec.~\ref{subsec:experiments}, we used a value of $2$ for both.
\section{Results}\label{sec:results}
We perform experiments with the Emotion-Gait benchmark dataset~\cite{step}. It consists of 3D pose sequences of gaits collected from a variety of sources and partially labeled with perceived emotions. We provide a brief description of the dataset in Sec.~\ref{subsec:dataset}. We list the methods we compare with in Sec.~\ref{subsec:methods}. We then summarize the results of the experiments we performed with this dataset on all these methods in Sec.~\ref{subsec:experiments}, and describe how to interpret the results in Sec.~\ref{subsec:interpretation}.
\begin{table}[t]
\begin{minipage}[t]{0.48\linewidth}
\caption{\fontsize{8}{9.6}\selectfont\textbf{Average Precision scores.} Average precision (AP) per class and the mean average precision (mAP) over all the classes achieved by all the methods on the Emotion Gait dataset. Classes are Happy (H), Sad (S), Angry (A) and Neutral (N). Higher values are better. Bold indicates best, blue indicates second best.}
\label{tab:precision}
\centering
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lccccc}
\toprule
Method & \multicolumn{4}{c}{AP} & mAP \\
\cline{2-5}
& H & S & A & N & \\
\midrule
STGCN~\cite{stgcn} \Tstrut \Bstrut & 0.98 & 0.83 & 0.42 & 0.18 & 0.61 \\
DGNN~\cite{dgnn} \Tstrut \Bstrut & 0.98 & 0.88 & 0.73 & 0.37 & 0.74 \\
MS-G3D~\cite{msg3d} \Tstrut \Bstrut & {\color{blue}0.98} & {\color{blue}0.88} & {\color{blue}0.75} & 0.44 & 0.76 \\
LSTM Network~\cite{tanmay_emotions} \Tstrut \Bstrut & 0.96 & 0.84 & 0.62 & 0.51 & 0.73 \\
STEP~\cite{step} \Tstrut \Bstrut & 0.97 & 0.88 & 0.72 & {\color{blue}0.52} & {\color{blue}0.77} \\
\midrule
\textbf{Our Method} \Tstrut \Bstrut & \textbf{0.98} & \textbf{0.89} & \textbf{0.81} & \textbf{0.71} & \textbf{0.84} \\
\bottomrule
\end{tabular}
}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\caption{\fontsize{8}{9.6}\selectfont\textbf{Ablation studies.} Comparing average precisions of ablated versions of our method. HP denotes Hierarchical Pooling, AL denotes the Affective Loss constraint. AP, mAP, H, S, A, N are reused from Table~\ref{tab:precision}. Bold indicates best, blue indicates second best.}
\label{tab:ablation}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}
\toprule
Method & \multicolumn{4}{c}{AP} & mAP \\
\cline{2-5}
& H & S & A & N & \\
\midrule
With only labeled data, no AL or HP \Tstrut \Bstrut & 0.92 & 0.81 & 0.51 & 0.42 & 0.67 \\
With only labeled data, HP and no AL \Tstrut \Bstrut & 0.93 & 0.81 & 0.63 & 0.49 & 0.72 \\
With only labeled data, AL and no HP \Tstrut \Bstrut & 0.96 & 0.86 & 0.70 & 0.51 & 0.76 \\
With only labeled data, AL and HP \Tstrut \Bstrut & 0.97 & 0.86 & 0.72 & 0.55 & 0.78 \\
\midrule
With all data, no AL or HP \Tstrut \Bstrut & 0.94 & 0.83 & 0.55 & 0.48 & 0.70 \\
With all data, HP and no AL \Tstrut \Bstrut & 0.96 & 0.85 & 0.70 & 0.60 & 0.78 \\
With all data, AL and no HP \Tstrut \Bstrut & {\color{blue}0.97} & {\color{blue}0.87} & {\color{blue}0.76} & {\color{blue}0.65} & {\color{blue}0.81} \\
\textbf{With all data, AL and HP} \Tstrut \Bstrut & \textbf{0.98} & \textbf{0.89} & \textbf{0.81} & \textbf{0.71} & \textbf{0.84} \\
\bottomrule
\end{tabular}
}
\end{minipage}
\end{table}
\subsection{Dataset}\label{subsec:dataset}
The Emotion-Gait dataset~\cite{step} consists of gaits collected from various sources of 3D pose sequence datasets, including BML~\cite{bml}, Human3.6M~\cite{human3.6m}, ICT~\cite{ict}, CMU-MoCap~\cite{cmu_mocap} and ELMD~\cite{elmd}. To maintain a uniform set of joints for the pose models collected from diverse sources, we converted all the models in Emotion-Gait to the $21$ joint pose model used in ELMD~\cite{elmd}. We clipped or zero-padded all input gaits to have $240$ time steps, and downsampled them to retain every $\raisedth{5}$ frame. We passed the resultant $48$ time steps to our network, \textit{i.e.}, $T = 48$. In total, the dataset has $3,924$ gaits of which $1,835$ have emotion labels provided by 10 annotators, and the remaining $2,089$ are not annotated. Around $58\%$ of the labeled data have happy labels, $32\%$ have sad labels, $23\%$ have angry labels, and only $14\%$ have neutral labels (more details on the project webpage).
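The clip-or-pad and downsampling step can be sketched as (shapes and function name are illustrative):

```python
import numpy as np

def preprocess_gait(seq, target=240, stride=5):
    """Clip or zero-pad a pose sequence (T0, J, 3) to `target` time steps,
    then keep every `stride`-th frame (240 / 5 = 48 steps, as in the paper)."""
    t0 = seq.shape[0]
    if t0 >= target:
        seq = seq[:target]
    else:
        pad = np.zeros((target - t0,) + seq.shape[1:])
        seq = np.concatenate([seq, pad], axis=0)
    return seq[::stride]
```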
\noindent\textbf{Histograms of Affective Features.} We show histograms of the mean values of $6$ of the $18$ affective features we use in Fig.~\ref{fig:aff_features}. The means are taken across the $T = 48$ time steps in the input gaits and differently colored for inputs belonging to the different emotion classes as per the annotations. We count the inputs belonging to multiple classes once for every class they belong to. For different affective features, different sets of classes have a high overlap of values while values of the other classes are well-separated. For example, there is a significant overlap in the values of the distance ratio between right-hand index to the neck and right-hand index to the root (Fig.~\ref{fig:aff_features}, bottom left) for gaits belonging to sad and angry classes, while the values of happy and neutral are distinct from these. Again, for gaits in happy and angry classes, there is a high overlap in the ratio of the area between hands to lower back and hands to root (Fig.~\ref{fig:aff_features}, bottom right), while the corresponding values for gaits in neutral and sad classes are distinct from these. The affective features also support observations in psychology corresponding to perceiving emotions from gaits. For example, slouching is generally considered to be an indicator of sadness~\cite{gait_psych3}. Correspondingly, we can observe that the values of the angle between the shoulders at the lower back (Fig.~\ref{fig:aff_features}, top left) are lowest for sad gaits, indicating slouching.
\begin{figure}[t]
\begin{minipage}[t]{0.56\linewidth}
\centering
\includegraphics[width=\columnwidth]{figures/affective_features.jpg}
\caption{\fontsize{8}{9.6}\selectfont\textbf{Conditional distribution of mean affective features.} Distributions of $6$ of the $18$ affective features, for the Emotion-Gait dataset, conditioned on the given classes Happy, Sad, Angry, and Neutral. Mean is taken across the number of time steps. We observe that the different classes have different distributions of peaks, indicating that these features are useful for distinguishing between perceived emotions.}
\label{fig:aff_features}
\end{minipage}
\hfill
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[width=0.8\columnwidth]{figures/data_map_increase.jpg}
\caption{\fontsize{8}{9.6}\selectfont\textbf{AP increases with adding unlabeled data.} AP achieved on each class, as well as the mean AP over the classes, increases linearly as we add more unlabeled data to train our network. The increment is most significant for the neutral class, which has the fewest labels in the dataset.}
\label{fig:data_map_increase}
\end{minipage}
\end{figure}
\subsection{Comparison Methods}\label{subsec:methods}
We compare our method with the following state-of-the-art methods for both emotion recognition and action recognition from gaits. We choose to compare with action recognition methods because similar to these methods, we aim to learn a mapping from gaits to a set of labels (emotions instead of actions).
\begin{itemize}[label=\textbullet]
\item \textbf{Emotion Recognition.} We compare with the network of~\cite{tanmay_emotions}, which combines affective features from gaits with features learned from an LSTM-based network taking pose sequences of gaits as input, to form hybrid feature vectors for classification. We also compare with STEP~\cite{step}, which trains a spatial-temporal graph convolution-based network with gait inputs and affective features obtained from the gaits, and then fine-tunes the network with data generated from a graph convolution-based variational autoencoder.
\item \textbf{Action Recognition.} We compare with recent state-of-the-art methods based on the spatial-temporal graph convolution network (STGCN)~\cite{stgcn}, the directed graph neural network (DGNN)~\cite{dgnn}, and the multi-scale graph convolutions with temporal skip connections (MS-G3D)~\cite{msg3d}. STGCN computes spatial neighborhoods as per the bone structure of the 3D poses and temporal neighborhoods according to the instances of the same joints across time steps and performs convolutions based on these neighborhoods. DGNN computes directed acyclic graphs of the bone structure based on kinematic dependencies and trains a convolutional network with these graphs. MS-G3D performs multi-scale graph convolutions on the spatial dimensions and adds skip connections on the temporal dimension to model long-range dependencies for various actions.
\end{itemize}
For a fair comparison, we retrained all these networks from scratch with the labeled portion of the Emotion-Gait dataset, following their respective reported training parameters, and the same data split of $8:1:1$ as our network.
\subsubsection{Evaluation Metric}
Since we deal with a multi-class, multi-label classification problem, we report the average precision (AP) achieved per class, which is the mean of the precision values across all values of recall between $0$ and $1$. We also report the mean AP, which is the mean of the APs achieved over all the classes.
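For concreteness, one common way to compute per-class AP from binary labels and prediction scores (a sketch; standard libraries differ slightly in edge-case handling):

```python
import numpy as np

def average_precision(y_true, scores):
    """AP: mean of the precision values at each true positive, taken in
    score-descending order."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)
    precision = tp / np.arange(1, len(y) + 1)
    return float(precision[y == 1].mean())
```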
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/pos_neg_examples.jpg}
\caption{\fontsize{8}{9.6}\selectfont\textbf{Comparing predictions with annotations.} The top row shows $4$ gaits from the Emotion-Gait dataset where the predicted labels of our network exactly matched the annotated input labels. The bottom row shows $4$ gaits where the predicted labels did not match any of the input labels. Each gait is represented by $3$ poses in temporal sequence from left to right. We observe that most of the disagreements are between either happy and angry or between sad and neutral, which is consistent with general observations in psychology.}
\label{fig:pos_neg_examples}
\end{figure*}
\subsection{Experiments}\label{subsec:experiments}
In our experiments, we ensured that the held-out test data were from sources different from the train and validation data in the Emotion-Gait dataset. We summarize the AP and the mean AP scores of all the methods in Table~\ref{tab:precision}. Our method outperforms the next best method, STEP~\cite{step}, by around $7\%$ and outperforms the lowest-performing method, STGCN~\cite{stgcn}, by $23\%$, both in absolute terms. We summarize additional results, including the interpretation of the data labels and our results in the VAD dimensions~\cite{vad}, on our project webpage.
Both the LSTM-based network and STEP consider per-frame affective features and inter-frame features such as velocities and rotations as inputs but do not explicitly model the dependencies between these two kinds of features. Our network, on the other hand, learns to embed a part of the features learned from joint rotations in the space of affective features. These embedded features, in turn, help our network predict the output emotion labels with more precision.
The action recognition methods STGCN, DGNN, and MS-G3D focus more on the movements of the leaf nodes, \textit{i.e.}, hand indices, toes, and head. These nodes are useful for distinguishing between actions such as running and jumping but do not contain sufficient information to distinguish between perceived emotions.
Moreover, given the long-tail nature of the distribution of labels in the Emotion-Gait dataset (Sec.~\ref{subsec:dataset}), all the methods we compare with achieve more than $0.95$ AP on the happy class and more than $0.80$ AP on the sad class, but perform much worse on the angry and the neutral classes. Our method, by contrast, learns to map the joint motions to the affective features, which helps it achieve around $10$--$50\%$ better AP in absolute terms on the angry and the neutral classes while maintaining similarly high AP on the happy and the sad classes.
\subsubsection{Ablation Studies}
We also perform ablation studies on our method to highlight the benefit of each of our three key components: using hierarchical pooling (HP) (Sec.~\ref{subsec:encoder}), using the affective loss constraint (AL) (Eqn.~\ref{eq:aff_loss}), and using both labeled and unlabeled data in a semi-supervised manner (Eqn.~\ref{eq:semisup_loss}). We summarize the observations of our ablation studies in Table~\ref{tab:ablation}.
First, we train our network only on the labeled dataset by removing the decoder part of our network and dropping the autoencoder loss from Eqn.~\ref{eq:semisup_loss}. Without using either AL or HP, the network achieves an AP of $0.51$ on angry and $0.42$ on neutral, the two least populous classes. We call this our baseline network. Adding only the AL increases these two APs more from the baseline than adding only the HP. This is reasonable since hierarchical pooling helps the network learn generic differences in the pose sequences of different data, while the affective loss constraint helps the network to distinguish between pose structures specific to different perceived emotions. Adding both HP and AL increases the AP from the baseline even further. From these experiments, we can confirm that using either AL or HP improves the performance from the baseline, and their collective performance is better than their individual performances.
Next, we add in the decoder and use both labeled and unlabeled data for training our network, using the loss in Eqn.~\ref{eq:semisup_loss}. Without either AL or HP, the network now achieves an AP of $0.55$ on angry and $0.48$ on neutral, showing appreciable improvements over the baseline. Also, as earlier, adding in only the AL benefits the network's performance more than adding in only the HP. Specifically, adding in only the HP produces a $1\%$ absolute improvement in mean AP over STEP~\cite{step} (row 5 in Table~\ref{tab:precision}) and a $17\%$ absolute improvement in mean AP over STGCN~\cite{stgcn} (row 1 in Table~\ref{tab:precision}). Adding in only the AL produces a $4\%$ absolute improvement in mean AP over STEP and a $20\%$ absolute improvement over STGCN. Adding in both, we get the final version of our network, which improves on the mean AP of STEP by $7\%$ and on the mean AP of STGCN by $23\%$.
\subsubsection{Performance Trend with Increasing Unlabeled Data}
In practice, it is relatively easy to collect unlabeled gaits from videos or using motion capture. We track the performance improvement of our network as we keep adding unlabeled data to our network, and summarize the results in Fig.~\ref{fig:data_map_increase}. We observe that the mean AP improves linearly as we add more data. The trend does not indicate a saturation in AP for the angry and the neutral classes even after adding all the $2,089$ unlabeled data. This suggests that the performance of our approach can increase further with more unlabeled data.
\subsection{Interpretation of the Network Predictions}\label{subsec:interpretation}
We show the qualitative results of our network in Fig.~\ref{fig:pos_neg_examples}. The top row shows cases where the predicted labels for a gait exactly matched all the corresponding annotated labels. We observe that the gaits with happy and angry labels in the annotation have more animated joint movements compared to the gaits with sad and neutral labels, which our network was able to successfully learn from the affective features. This is in line with established studies in psychology~\cite{vad}, which show that both happy and angry emotions lie high on the arousal scale, whereas neutral and sad are lower on the arousal scale. The bottom row shows cases where the predicted labels for a gait did not match any of the annotated labels. We notice that most disagreements arise either between sad and neutral labels or between happy and angry labels. This again follows the observation that both happy and angry gaits, higher on the arousal scale, often have more exaggerated joint movements, while both sad and neutral gaits, lower on the arousal scale, often have more reserved joint movements. There are also disagreements between happy and neutral labels for some gaits, where the joint movements in the happy gaits are not as exaggerated.
We also make an important distinction between the multi-hot input labels provided by human annotators and the multi-hot predictions of our network. The input labels capture the subjectivity in human perception, where different human observers perceive different emotions from the same gait based on their own biases and prior experiences~\cite{critical_gait_features}. The network, on the other hand, indicates that the emotion perceived from a particular gait best fits one of the labels it predicts for that gait. For example, in the third result from the left on the top row in Fig.~\ref{fig:pos_neg_examples}, five of the ten annotators perceived the gait to be happy, three perceived it to be angry, and the remaining two perceived it to be neutral. Following our annotation procedure in Sec.~\ref{subsec:classifier}, we annotated this gait as an instance of both happy and angry. Given this gait, our network predicts a multi-hot label with 1's for happy and angry and 0's for neutral and sad. This indicates that the network successfully focused on the arousal in this gait and found the perceived emotion to best match either happy or angry, and not neutral or sad. We present more such results on our project webpage.
\section{Limitations and Future Work}\label{sec:limitations}
Our work has some limitations. First, we consider only discrete emotions of people and do not explicitly map these to the underlying continuous emotion space given by the VAD model~\cite{vad}. Even though discrete emotions are presumably easier to work with for non-expert end-users, we plan to extend our method to work in the continuous space of emotions, \textit{i.e.}, given a gait, our network regresses it to a point in the VAD space that indicates the perceived emotions.
Second, our network only looks at gait-based features to predict perceived emotions. In the future, we plan to combine these features with cues from other modalities, such as facial expressions and body gestures, that are often expressed in tandem with gaits, to develop more robust emotion perception methods. We also plan to incorporate higher-level information, such as the presence of other people in the vicinity and the background context, which are known to influence a person's emotions~\cite{context1,context2}, to further improve the performance of our network.
% arXiv:1811.07178
\subsection{S1. One-Dimensional Nature of RbCoCl$_3$}
\begin{figure}[b]
\includegraphics[width=8.78cm]{rccsf1}
\caption{\label{figs1} Dynamical structure factors, $S(\vec{Q},\omega)$,
measured at 4 K for scattering vectors perpendicular to the chain direction.
The intensity is integrated over the windows (a) $- 1.05 < L < - 0.95$
r.l.u.~and (b) $- 0.55 < L < - 0.45$ r.l.u. The red lines are guides to the
eye.}
\end{figure}
To justify the statement made in the main text that RbCoCl$_3$ is a
very one-dimensional (1D) magnetic system, in Fig.~\ref{figs1} we show
representative cuts of our scattered intensity data for $\vec{Q}$ in the
$H$ direction with two choices of $L$. The broad lower and narrow upper
excitations correspond respectively to the continuum and lowest bound-state
features visible in Fig.~4 of the main text. The extremely flat dispersion in
both normal directions is best modelled by a constant energy, i.e.~any $H$- or
$K$-dependence of the dispersion is smaller than the instrumental resolution.
\subsection{S2. Cluster Heat Bath Monte Carlo Simulations}
\begin{figure}[b]
\includegraphics[width=8cm]{rccsf2}
\caption{\label{figs2} Snapshots of spin configurations obtained from CHB
simulations at temperatures corresponding to (a) 4 K and (b) 18 K in RbCoCl$_3$.
Each hexagon represents a site in a single plane of the triangular lattice of
antiferromagnetically coupled Ising chains, which have in addition a weak
ferromagnetic next-nearest-neighbor interaction. Black and white colors
represent opposing directions of the staggered Ising order. (c) Fully random
chain configuration.}
\end{figure}
The Cluster Heat Bath (CHB) algorithm is a Monte Carlo method that was
developed specifically for simulating quasi-1D Ising compounds. In contrast
to the Metropolis-Hastings algorithm, in which one spin is flipped at each
time step, the CHB approach belongs to the broad family of cluster methods,
where blocks of many spins may be rearranged at each step. For the coupled
Ising system, entire spin chains are flipped to achieve a configuration where
every chain is in equilibrium (according to the Boltzmann distribution) with
its environment at each step. This method has been used to reproduce closely
the magnetic phase transitions of CsCoBr$_3$ and CsCoCl$_3$ and in the present
study we have followed the work of Refs.~\cite{matsubara1997,koseki1997,
koseki2000}.
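To illustrate the heat-bath idea behind these chain-flip moves, the following is a drastically simplified toy reduction, in which each chain is represented by a single Ising variable (its staggered order parameter) and redrawn from the Boltzmann distribution given its neighboring chains. This is a sketch of the general principle only, not the full CHB algorithm of Refs.~\cite{matsubara1997,koseki1997,koseki2000}:

```python
import numpy as np

def heat_bath_sweep(chains, nbrs, J, beta, rng):
    """One sweep over the reduced chain variables: each 'chain' spin is
    resampled from exp(beta * h * s) given the effective field h from its
    neighboring chains (heat-bath, i.e. Boltzmann, update)."""
    for i in rng.permutation(len(chains)):
        h = J * sum(chains[j] for j in nbrs[i])                        # effective inter-chain field
        p_up = 1.0 / (1.0 + np.exp(np.clip(-2.0 * beta * h, -700, 700)))  # heat-bath probability of +1
        chains[i] = 1 if rng.random() < p_up else -1
    return chains
```

With a ferromagnetic coupling and low temperature this toy system orders rapidly; the real CHB move instead equilibrates an entire Ising chain, with its full internal configuration, against its environment.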
Here we use CHB simulations in order to gain qualitative insight into and
semi-quantitative comparisons with our experimental results. Specifically,
we wish to illustrate the qualitative nature of the two ordered phases,
shown in the insets of Figs.~2(c) and 3(c) of the main text, and to verify
quantitatively the population factors, obtained by fitting the dynamical
structure factor at each temperature, of chains subject to the different
possible effective staggered fields. The CHB spin structure can also be
used to simulate a quasi-elastic scattering signal for comparison with
experimental observations \cite{haenni2017}, which show both sharp and
broad temperature-dependent components. We have performed simulations, using
the interaction parameters of the main text augmented by the value $J_{nnn} =
- J_{nn}/10$, on lattices of 120$\times$120$\times$5000 spins for up to 8000
steps. During even-numbered steps, 10 conventional CHB operations took place
on randomly chosen chains (i.e.~ten individual chains were flipped), while
on odd-numbered steps flipping operations were allowed on loops of chains
(of different, random lengths). The temperature of the system was lowered
during the 8000 cycles, in a manner similar to simulated annealing, until
the target temperature was obtained.
Figures \ref{figs2}(a) and \ref{figs2}(b) show examples of equilibrated
spin configurations in a single plane obtained for temperatures corresponding
respectively to 4 and 18 K in RbCoCl$_3$. These are more extended versions of
the figures shown in the insets of Figs.~2(c) and 3(c) of the main text. It
is clear at 4 K [Fig.~\ref{figs2}(a)] that significant domains of the system
attain a well-ordered ``honeycomb'' pattern, in which 1/3 of the chains have a
maximal net staggered field ($6J_{nn}$) while on 2/3 of them the field cancels.
The contribution of other chain configurations is confined to the domain walls,
which despite the rather weak $J_{nn}$ in RbCoCl$_3$ are relatively sparse at
4 K. At intermediate temperatures [Fig.~\ref{figs2}(b)], the domains are
strongly disordered and the chains are correlated laterally only over rather
short ranges, giving a distribution in which the staggered fields $2J_{nn}$
and $4J_{nn}$ have significant representation, whereas $6J_{nn}$ becomes much
less likely. For reference we show in Fig.~\ref{figs2}(c) a completely random
planar spin configuration.
\subsection{S3. Finite-temperature DMRG}
For a microscopic understanding of temperature effects in the Ising spin
chain, which in the extended Matsubara framework we described by effective
parameters $\epsilon_1(T)$ and $\Gamma(T)$, we have performed time-dependent
DMRG calculations at finite temperatures to obtain the full spectral functions.
The spin system is represented in a basis of matrix-product states (MPS)
\cite{schollwoeck11,hubig17:_gener}, in which the mixed states at finite
temperatures are represented by a purification approach where their
grand-canonical thermal density matrices are expressed in a doubled basis
containing auxiliary variables \cite{verstraete2004,barthel09:_spect}. We
computed the time evolution of a spin-flip excitation using the two-site
time-dependent variational-principle method \cite{haegeman16:_unify} in
combination with the appropriate ``near-optimal'' auxiliary space
transformation of Ref.~\cite{barthel13:_precis}.
Time evolution by repeated application of matrix operators causes a growth
in information, and one of the primary attributes of the MPS formalism is
to allow a systematic truncation of this information to fit the available
computational resources. Physically, the problem of entanglement growth
sets the limits in both time and space where a correlation function may be
calculated with acceptable numerical precision. We have performed calculations
for chain lengths up to $L = 512$ physical sites ($1024$ including auxiliary
sites), finding these sufficient to exclude finite-size effects in the starting
state in all cases. At $T = 4$ K, the growth of entanglement is not significant
on the timescale over which excitations propagate across the finite system,
and hence it is the system size that limits the maximum attainable time.
\begin{figure}[t]
\includegraphics[width=8cm]{rccsf3}
\caption{\label{figs2p5} $S(\vec{Q},\omega)$ calculated by DMRG for the
isolated Ising chain using the $T = 0$ interaction parameters of RbCoCl$_3$
with the instrumental broadening, $\sigma = 0.32$ meV, at temperatures of 4,
18, and 35 K for (a) $L = - 1$ and (b) $L = - 0.5$.}
\end{figure}
At $T = 18$ K and $T = 35$ K, the entanglement growth is more rapid, and this
limits the maximum attainable time to $t_{\mathrm{max}} \approx 70$ in units of
$1/2J_1$ [Eq.~(1) of the main text], where we set the criterion of numerical
precision to be a maximal discarded weight of $\chi^2 = 10^{-8}$. However, one
may then extend the computed data in time, for which the linear prediction
method of Ref.~\cite{barthel09:_spect} is particularly well suited when the
finite temperature induces an exponential decay of the real-time correlators,
as in the present problem. Optimizing the parameters of this interpolation
scheme allowed us to stabilize the linear prediction for all momentum values
and hence to reach large effective times, $t_{\mathrm{max}}^{\mathrm{pred}} \approx
2000$. We then evaluated the dynamical response function, $S(\vec{Q},\omega,
T)$, by two Fourier transformations of the DMRG correlation functions in real
space and time.
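The linear-prediction step can be sketched as follows. This is a minimal illustration: the damped oscillatory test signal, the prediction order, and the time grid are stand-ins for the actual finite-temperature correlators, and the production calculation uses the optimized scheme described above.

```python
import numpy as np

def linear_prediction(x, p, n_extra):
    """Fit an order-p linear recursion x[n] ~ sum_k a[k] * x[n-1-k] by least
    squares and use it to extrapolate the series by n_extra points."""
    # design matrix: each row holds the p samples preceding a target sample
    A = np.array([x[n - p:n][::-1] for n in range(p, len(x))])
    b = x[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    y = list(x)
    for _ in range(n_extra):
        y.append(np.dot(a, y[-1:-p - 1:-1]))  # predict from the last p points
    return np.array(y)

# A damped oscillation is reproduced exactly by an order-2 recursion,
# which is why linear prediction suits exponentially decaying correlators.
t = np.arange(100)
x = np.exp(-0.05 * t) * np.cos(0.8 * t)
y = linear_prediction(x, p=2, n_extra=100)
exact = np.exp(-0.05 * np.arange(200)) * np.cos(0.8 * np.arange(200))
```

Because an exponentially damped sinusoid satisfies a low-order linear recursion exactly, the extrapolation reproduces the analytic continuation to high accuracy in this noiseless example.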
Figure \ref{figs2p5} illustrates the results of our DMRG calculations for
the case of an isolated chain (zero staggered field) at temperatures of
4, 18, and 35 K. The parameters $J_1$, $J_2$, and $\epsilon_2$ are as given
in the main text and Table S1, and we have included a broadening equivalent
to the instrumental resolution of 0.32 meV. We stress that $\epsilon_1 = 0.126$
meV is fixed to its low-temperature value in all cases, i.e.~it is not varied
with temperature as in our Matsubara procedure. Thus the fact that the peak in
the continuum contribution at $L = - 1$ moves up in energy with increasing
temperature [Fig.~\ref{figs2p5}(a)], signalling an effective band-narrowing,
is an intrinsic consequence of the DMRG calculations capturing the effects
of increased scattering processes involving thermally excited domain walls
that are fully dynamical. We note that this intrinsic band-narrowing is
smaller than the effective one providing optimal fits in the Matsubara
framework, pointing to the need for a deeper investigation of dynamical
domain-wall processes that lies beyond the scope of our present study. The
$L = - 0.5$ peak [Fig.~\ref{figs2p5}(b)] is located near the band center
and shows only a very weak downward trend with increasing temperature.
On the technical side, the weak oscillation in the 18 K data is a numerical
artifact of the limited $t_{\mathrm{max}}$ that is often suppressed by introducing
a large effective broadening (analogous to the parameter $\Gamma$ in the main
text). Here we have investigated different broadening schemes, but guided by
the physics we maintain the instrumental broadening, $\sigma = 0.32$ meV. In
this way we do not obscure the intrinsic dynamical response, particularly the
discrete Zeeman-ladder peaks in finite staggered fields shown in Figs.~2 and 3
of the main text and repeated in Sec.~S4. As a consequence, the oscillation in
the 18 K data in Fig.~\ref{figs2p5}(a) is not eliminated completely, and its
presence contributes percent-level errors in our DMRG intensity fits (Table S2)
at this temperature; however, the line shape of the 35 K data takes its
intrinsic form and this is important to our estimate of interchain
correlations above $T_{N1}$ (Sec.~S5).
\begin{figure}[t]
\includegraphics[width=8.78cm]{rccsf4}
\caption{\label{figs3} Comparison between calculations of the dynamical
structure factor, $S(\vec{Q},\omega)$, performed within the extended
Matsubara formalism (a,c) and by DMRG (b,d). Results are shown for the
two temperatures at which detailed experimental data were gathered in
the two ordered phases, namely 4 K (a,b) and 18 K (c,d).}
\end{figure}
\subsection{S4. Data and Fitting Comparisons: Ordered Phases}
Here we show in full detail the comparison between our calculations within
the extended Matsubara formalism and by DMRG, as well as the comparison of
both with the spectral data measured at low ($T < T_{N2}$) and intermediate
temperatures ($T_{N2} < T < T_{N1}$). The unique feature of the ACoX$_3$
materials is that, in both the fully ordered and the partially disordered
antiferromagnetic phases, they allow the investigation of Ising chains
subject to different effective staggered fields within a single material.
This treatment assumes that the individual Ising chains remain as
well-ordered clusters up to $T_{N1}$, leading to coherent effective
staggered fields even as interchain correlations are weakened by the
rising temperature. In Sec.~S5 we will demonstrate that this assumption,
which is also exploited in CHB simulations, is well justified at 18 K. By
contrast, at $T > T_{N1}$ the loss of chain order due to thermal domain-wall
formation becomes significant and an alternative treatment is required.
In Fig.~\ref{figs3} we show the full spectra obtained by extended Matsubara
and by DMRG calculations. We note that Figs.~\ref{figs3}(a) and \ref{figs3}(d)
are the same, respectively, as Figs.~2(b) and 3(b) of the main text. While it
is not surprising that both calculations reproduce the 4 K data rather well
[Figs.~\ref{figs3}(a) and \ref{figs3}(b)], this does demonstrate both that
the Matsubara framework captures the primary physics of the system and that
the DMRG methods are well within their numerical capabilities.
\begin{figure}[t]
\includegraphics[width=8.4cm]{rccsf5}
\caption{\label{figs4} Staggered-field contributions to the dynamical
structure factor, $S(\vec{Q},\omega)$. Measured intensities (points) and
those calculated using both the extended Matsubara model (dashed lines) and
DMRG (solid lines) are integrated over the $\vec{Q}$ windows $- 1.05 < L <
- 0.95$ r.l.u.~(a,c) and $- 0.55 < L < - 0.45$ r.l.u.~(b,d). Results are
shown at temperatures of 4 K (a,b) and 18 K (c,d). The lines changing from
blue to green indicate the individual contributions of chains subject to
each of the possible staggered fields, $m J_{nn}$ with $m = 0$, 2, 4, or 6,
and the red lines show their sum.}
\end{figure}
At 18 K, in the partially disordered antiferromagnetic phase, the level of
agreement is non-trivial. In the extended Matsubara formalism, the fit to the
data can, as discussed in the main text, be achieved by allowing only two of
the six parameters to have a temperature dependence, one affecting the band
width and one the line width. By contrast, in DMRG there are no free parameters
and both effects are intrinsic. Table \ref{tabs1} shows a comparison between
the fitting results we obtain from the two procedures: there is close agreement
on $J_1$ and $J_{nn}$, some discrepancy in the next-neighbor parameters $J_2$
and $\epsilon_2$, and of course the difference in treatment of the effective
band width, contained in $\epsilon_1$.
\begin{table}
\begin{center}
\begin{tabular}{ c || c | c }
& \;\; Matsubara \;\; & \;\; DMRG \;\; \\
\hline
$J_1$ [meV] & 5.89 & 5.86 \\
$J_2$ [meV] & $-0.518$ & $-0.576$ \\
$J_{nn}$ [meV] & 0.129 & 0.128 \\
\hline
$\epsilon_1$ at 4 K & 0.126 & 0.126 \\
\;\; $\epsilon_1$ at 18 K \;\; & 0.112 & 0.126 \\
$\epsilon_1$ at 35 K & 0.101 & 0.126 \\
\hline
$\epsilon_2$ & 0.605 & 0.559 \\
\end{tabular}
\caption{\label{tabs1} Comparison of fitting parameters obtained from the
extended Matsubara Hamiltonian and from DMRG calculations at 4 K and 18 K.
Error bars are omitted.}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ c | c || c | c | c | c }
Method & \;\; $T$ [K] \;\; & \;\; $I_{0}$ \;\; & \;\; $I_{2}$ \;\; & \;\;
$I_{4}$ \;\; & \;\; $I_{6}$ \;\; \\
\hline
\;\; Matsubara \;\; & \multirow{3}{*}{4} & 67(3) & 3(3) & 0(3) & 31(3) \\
DMRG & & 65(3) & 3(3) & 1(3) & 31(3) \\
CHB & & 61(1) & 6(1) & 5(1) & 28(1) \\
\hline
Matsubara &\multirow{3}{*}{18} & 39(5) & 35(5) & 27(5) & 0(5) \\
DMRG & & 35(5) & 27(5) & 28(5) & 10(5) \\
CHB & & 39(1) & 35(1) & 20(1) & 6(1)
\end{tabular}
\caption{\label{tab2} Comparison of chain population factors deduced from
extended Matsubara and DMRG calculations at 4 K and 18 K, and from CHB
simulations on a system of 120$\times$120 chains. $I_m$ is the percentage
of the scattered weight that may be ascribed to a chain in a staggered
field $h = m J_{nn}$. The sum of the intensities may deviate from 100\% due
to rounding effects. Because the measured $S(\vec{Q},\omega)$ datasets contain
many thousands of points, statistical errors in the fitting procedure are of
order 0.1\%. The quoted error bars for Matsubara and DMRG results represent
the estimated systematic uncertainties in the fitting process, and for CHB
in the simulations.}
\end{center}
\end{table}
These results are predicated on two subsidiary calculations, namely the
intensity contributions due to chains in the different effective staggered
fields and the weights of each chain type in the final sum; we take only the
latter as free parameters. To show clearly the contributions of the different
types of chain (i.e.~different staggered fields) to the measured intensity, we
have performed our calculations separately for staggered fields of 0, $2J_{nn}$,
$4J_{nn}$, and $6J_{nn}$. The results, displayed in Figs.~\ref{figs4}(a,b) and
\ref{figs4}(c,d), show in full detail the respective panels of Figs.~2(c,d)
and 3(c,d) of the main text. Both the energy levels of the Zeeman ladders
and the corresponding intensities computed within the extended Matsubara
description and by DMRG agree quantitatively at 4 K.
However, it is clear at 18 K that the DMRG results do not contain as much
narrowing of the band width as that optimizing the Matsubara fits, as a
result of which the Zeeman-ladder states are less strongly renormalized and
the weight factors appropriate to reproduce the measured intensity, shown
in Table \ref{tab2}, include stronger contributions from higher $m$.
Nonetheless, these discrepancies lie close to the systematic error bars
on the fitted $I_m$ percentages, and both fits are consistent with our CHB
simulations (Sec.~S2). One may certainly conclude that the treatment of
neighboring chains in terms of an effective staggered field does provide
an accurate reflection of the response of the 3D system in its ordered phases.
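The weight-factor fit underlying Table \ref{tab2} can be sketched as a linear least-squares problem. In the sketch below the Gaussian component spectra, their centers, and the mixing weights are synthetic stand-ins for the calculated $S(\vec{Q},\omega)$ components; only the fitting structure is illustrated.

```python
import numpy as np

# Synthetic stand-ins for the four staggered-field component spectra S_m(omega)
omega = np.linspace(0.0, 5.0, 400)
centers = [1.0, 1.5, 2.0, 2.5]      # illustrative peak positions only
S = np.array([np.exp(-(omega - c)**2 / (2 * 0.3**2)) for c in centers])

true_w = np.array([0.39, 0.35, 0.20, 0.06])   # e.g. the CHB values at 18 K
data = true_w @ S                              # "measured" composite spectrum

# Least-squares fit of the weight factors (columns = component spectra)
w_fit, *_ = np.linalg.lstsq(S.T, data, rcond=None)
```

In practice one would also constrain the weights to be non-negative (e.g.~with a non-negative least-squares solver) and propagate the systematic uncertainties quoted in Table \ref{tab2}.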
\subsection{S5. Data and Modelling Above $T_{N1}$}
For the discussion in Sec.~S4 we excluded the high-temperature regime. In the
magnetically disordered state, it is not appropriate to use a formalism based
on effective staggered magnetic fields, i.e.~on long segments of uniform
interchain order. Nor, however, is it appropriate to neglect all interchain
interactions at temperatures close to but above $T_{N1}$ (Fig.~4(a) of the main
text). In this regime, thermally induced randomness appears both in the chains,
in the form of single thermal domain walls, and between the chains, in a form
that can be modelled by an increasingly random effective field. In the
Matsubara framework, a single domain wall acts to terminate each chain
segment and we model the first effect by considering a static thermal
distribution of domain walls, and hence of chain segment lengths, within
a Monte Carlo approach. By applying the same approach to the neighboring
chains, we also account for the second effect. Here it is important to note
that our DMRG calculations, which are performed for a single chain at finite
temperatures, are computing the first of these two effects for fully dynamic
thermal domain walls.
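The static thermal distribution of domain walls can be sampled as in the following sketch: placing a wall on each bond independently with probability $1/\langle n \rangle$ yields geometrically distributed segment lengths with the desired mean. The chain length and wall probability below are illustrative, not the parameters of our simulations.

```python
import random

def sample_segments(n_sites, wall_prob, rng):
    """Place a thermal domain wall on each bond with probability wall_prob
    and return the lengths of the resulting chain segments."""
    lengths, current = [], 1
    for _ in range(n_sites - 1):
        if rng.random() < wall_prob:
            lengths.append(current)   # a wall terminates the current segment
            current = 1
        else:
            current += 1
    lengths.append(current)           # last (possibly partial) segment
    return lengths

rng = random.Random(1)
# wall probability 1/10 gives an average segment length <n> of about 10
lengths = sample_segments(200_000, 0.1, rng)
mean_len = sum(lengths) / len(lengths)
```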
\begin{figure}[t]
\includegraphics[width=8cm]{rccsf6}
\caption{\label{figs6} Illustration of scattered intensities obtained by
Monte Carlo modelling within the Matsubara framework for coupled chains in a
thermally random effective field with the average length, $\langle n \rangle$,
of the Ising chain segments as the adjustable parameter. All parameters are
those of the Matsubara fit with effective band-width parameter $\epsilon_1 =
0.101$. (a) $L = - 1$, $\sigma = 0.03$ meV. (b) $L = - 0.5$, $\sigma = 0.03$
meV. (c) $L = - 1$, $\sigma = 0.32$ meV. (d) $L = - 0.5$, $\sigma = 0.32$
meV. The lowest modes
of each Zeeman ladder are marked in panels (a) and (b) by ``$mJ_{nn}$'' and ZL
denotes higher Zeeman-ladder modes in panel (a).}
\end{figure}
In Figs.~\ref{figs6}(a) and \ref{figs6}(b) we illustrate the results of the
Matsubara-based procedure for the parameters of RbCoCl$_3$ with a very low
broadening ($\sigma = 0.03$ meV). For long average segment lengths, $\langle
n \rangle$, at $L = - 1$ [Fig.~\ref{figs6}(a)] the isolated-chain continuum
and the Zeeman-ladder peaks for all three finite values of $m$ are clearly
discernible, along with faint signals for $-m$. At $L = - 0.5$
[Fig.~\ref{figs6}(b)], where the modes are non-dispersive, the intensities
of the lowest modes of each Zeeman ladder, which have separation $2J_{nn}$,
approach a 1:6:15:20:15:6:1 distribution, while the higher modes of all
ladders are very weak. This regime is the basis on which, by inspection of
the data at 18 K (Fig.~3 of the main text), one may conclude that the approach
of effective staggered fields remains well justified at that temperature. As
the density of thermal domain walls increases, i.e.~as $\langle n \rangle$
decreases, it is clear at $L = - 1$ that the low-$T$ features lose weight
and that scattered-intensity contributions appear at many different energies
as chain segments of all possible lengths contribute (including those of only
one and two spins). However, at $L = - 0.5$ all of these segments continue to
contribute at the same energies.
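The 1:6:15:20:15:6:1 pattern is simply binomial, consistent with six neighboring chains each contributing $\pm J_{nn}$ with equal probability, so that the effective staggered field $h = mJ_{nn}$ occurs with weight $\binom{6}{k}$. A minimal check:

```python
from math import comb

# Six neighboring chains, each independently up or down: the number of
# configurations producing each effective field follows binomial coefficients.
weights = [comb(6, k) for k in range(7)]
print(weights)  # [1, 6, 15, 20, 15, 6, 1]
```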
\begin{figure}[t]
\includegraphics[width=8.2cm]{rccsf7}
\caption{\label{figs5} Thermal evolution of $S(\vec{Q},\omega,T)$, shown by
superposing datasets taken at all three temperatures. Measured intensities
(points) and those calculated using both the extended Matsubara model (dashed
lines) and DMRG (solid lines) are integrated over the $\vec{Q}$ windows
(a) $- 1.05 < L < - 0.95$ r.l.u.~and (b) $- 0.55 < L < - 0.45$ r.l.u., and
shown with an offset of 15 for clarity. Solid lines show the fits discussed
in Secs.~S4 and S5.}
\end{figure}
To model our 35 K data, we first restore the instrumental resolution,
$\sigma = 0.32$ meV. As Figs.~\ref{figs6}(c) and \ref{figs6}(d) make clear,
this causes all of the separate features of the response at any average
segment length to merge into a single, broad peak. Somewhat surprisingly, the
shape of this feature, which at $L = - 1$ is centered between the energies of
the $m = 0$ and $m = 2$ peaks, becomes independent of $\langle n \rangle$.
Thus although we cannot relate $\langle n \rangle$ directly to the temperature
of the system, its effects become irrelevant due to the combined effects of
the ``splitting'' $2J_{nn}$ and the instrumental broadening, which establish
the width of the broad feature. Its position is controlled by the effective
band-width parameter, $\epsilon_1$, which may therefore be fixed with
reasonable accuracy using the experimental data. Beyond the fact that the
domain walls in our modelling procedure are static rather than dynamic, which
is also accounted for crudely by the effective $\epsilon_1$, we expect the
primary inaccuracy to lie in the neglected effects of $J_2$ across a thermal
domain wall. In this sense the physics of coupled Ising-chain systems with
thermal randomness poses a quantitative challenge to more specialized
theoretical and numerical techniques.
Returning again to our experiments, it was shown clearly in the main text
that, despite $T$ exceeding $T_{N1}$, the scattered intensity at 35 K is far
from that of chains isolated from each other by strong thermal fluctuations.
To quantify the remaining interchain correlation effects, we fit the intensity
measured at 35 K to a weighted sum of the isolated-chain response, taken from
DMRG at 35 K, and the response of the thermally disordered system with a
realistic average segment length of $\langle n \rangle = 10$ sites and the
value $\epsilon_1 = 0.101(2)$ given in Table \ref{tabs1}. In this fit we also
include the actual energy and momentum steps of the experimental data binning,
which is responsible for the discrepancy in shape between the smooth model of
Fig.~\ref{figs6}(c) and the more discrete ``disordered coupled chains''
response in Fig.~4(a) of the main text.
The results of this procedure, shown by the red lines in Figs.~4(a) and 4(b)
of the main text, indicate that approximately 63\% of the measured response
can be ascribed to residual interchain correlations, with a systematic error
of order 5\%. Although this seems to be a surprisingly large fraction, it
should be borne in mind that essentially all of the chains are correlated
below $T_{N1}$, which is only 7 K lower, because with a random field there
is no longer any cancellation effect of the type determining the response
of 2/3 of the Ising chains in the FI phase. As noted in the main text,
susceptibility measurements indicate that these correlations persist up to
temperatures around 80 K in RbCoCl$_3$ while diffuse scattering measurements
\cite{haenni2017} confirm their presence up to 60 K.
We now step back to consider the physics of the system. In Fig.~\ref{figs5}
we illustrate the evolution of the dynamical structure factor with temperature
by comparing the zone-center and zone-edge intensities at all three measurement
temperatures. This highlights the rapid loss of the bound-state
(staggered-field) contributions, the broadening of both the continuum
and the remaining bound-state signals, and the upshift of the lower part of
the band that can be understood as a narrowing effect due to the scattering
of propagating domain-wall pairs on thermally excited domain walls.
Figure \ref{figs5} allows a clear visualization of the way in which the
Matsubara (domain-wall) description allows these changes to be captured by
only two thermal parameters (for line width and band width) and highlights
the power of state-of-the-art DMRG methods to compute the full response of
1D systems at finite temperatures and energies.
For perspective on our modelling of the 3D Ising system, the Matsubara
and DMRG methods are complementary in that DMRG provides the fundamental
strongly correlated quantum physics, albeit at significant computational
expense, which makes it difficult to perform iterative fits of experimental
data and prohibitive to include randomness; the extended Matsubara (effective
Hamiltonian) framework is cheap and easy to apply for iterative fitting, but
its approximate inclusion of thermal effects requires a benchmark that DMRG
can provide.
\end{document}
\section{Introduction}
The interaction of the martian atmosphere with solar radiation and the interplanetary plasma results in its evaporation due to thermal (Jeans) escape and a number of non-thermal mechanisms. The absence of an intrinsic magnetic field on Mars and its low gravitational potential make the martian atmosphere particularly susceptible to erosion \citep{1998Sci...279.1676A}. The current low atmospheric pressure can be explained well by extrapolating the escape rates over a geological time frame while accounting for changes in solar activity in the past \citep{2004P&SS...52.1039C}.
The present-day degradation of the martian atmosphere occurs mainly via non-thermal escape processes induced by ion charge-exchange, sputtering, and ionospheric outflows driven by the solar wind \citep{2004P&SS...52.1039C,2008SSRv..139..355J}.
The dissociative recombination (DR) of O$_{2}^{+}$ is a major source of hot O atoms in the upper atmosphere of Mars, responsible for the escape of oxygen and formation of martian hot corona \citep{1988Icar...76..135I,1993GeoRL..20.1747F,2005SoSyR..39...22K}.
Nascent hot O atoms collide with thermal constituents of the martian atmosphere and can eject them from the planetary gravitational field if sufficient kinetic energy is transferred. Suprathermal neutral oxygen was shown to be important for analyses of Mars' corona and the non-thermal escape of neutral atoms \citep{2005SoSyR..39...22K,2006SoSyR..40..384K,2009Icar..204..527F}. Recent calculations of the non-thermal He escape from Mars, carried out with accurate energy transfer parameters, predicted a significant He escape flux induced by hot O atoms \citep{2011GeoRL..3802203B}.
In this Letter we explore the collisional ejection of molecules from the martian atmosphere. Specifically, we report the results of a quantum-mechanical study of the energy transfer from hot O to H$_2$ molecules and their subsequent escape. Significant computational difficulties arise from the fact that molecular internal rotational and vibrational degrees of freedom can be excited in collisions. In addition, the reactive pathway leading to the production of OH molecules is energetically permitted. To account for the increased complexity, we have computed the cross sections for O($^3$P) + H$_2$ reactive collisions using a fully quantum-mechanical approach. Kinetic theory was used to calculate the rate of energy transfer, as well as the distributions of excited rotational and vibrational (RV) states of the recoiled H$_2$ molecules. The total escape flux of H$_2$ from Mars and the RV distributions of escaping molecules have been evaluated for low solar activity conditions. We have also estimated the non-thermal escape fluxes of HD and D$_2$ and compared them to the corresponding Jeans escape rates.
Finally, the dependence of the molecular ejection fluxes on the gravitational escape threshold is analyzed for conditions present on other planets, satellites, and exoplanets.
\section{Cross Sections and Energy Transfer}
The DR of O$_{2}^{+}$ with electrons proceeds via five possible dissociation pathways, producing O($^3$P), O($^1$D), and O($^1$S) \citep{1988P&SS...36...47G,2009Icar..204..527F}.
Energetic metastable O($^1$D) atoms decay via spontaneous emission and quenching in collisions with atmospheric gases into O($^3$P) atoms \citep{2005JGRA..11012305K}. The cross sections for O($^3$P) and O($^1$D) colliding with He were found to be very similar \citep{2011GeoRL..3802203B}.
For simplicity, we assumed a similar behavior for O($^3$P) + H$_2$ and O($^1$D) + H$_2$ elastic collisions.
To describe the collisional ejection of H$_2$ molecules, we have calculated elastic and inelastic cross sections for center-of-mass (CM) collision energies from 0.01 to 4.5 eV. The quantum scattering code ABC \citep{2000CoPhC.133..128S}, which can treat elastic and inelastic scattering as well as the open reactive channels of OH production, $\mathrm{O}(^3P) + \mathrm{H}_2(v,j) \rightarrow \mathrm{OH}(v'',j'') + \mathrm{H}$, where $(v,j)$ and $(v'',j'')$ indicate the initial and final RV levels of H$_2$ and OH, was used to solve the time-independent coupled-channel Schr\"odinger equation in Delves hyperspherical coordinates.
In addition, for high collision energies, the elastic and inelastic cross sections were calculated using the MOLSCAT \citep{MOLSCAT} code to ensure the convergence of the nonreactive channels\footnote{A detailed description of the scattering calculations and resulting cross sections for the two lowest potential energy surfaces will be published elsewhere.}. Extensive numerical convergence tests were carried out for both codes.
\begin{figure}[htbp]
\noindent \includegraphics[width=20pc]{fig1_v1.eps}
\caption{
Elastic and momentum transfer cross sections for H$_2(v=0,j=0,1,2)$ + O collisions. The momentum transfer cross section shown is thermally averaged over the first three rotational states.
\textit{Inset:} Total inelastic cross sections $\sigma_{vj}^{\mathrm{inel}}(E) = \sum_{v',j'}\sigma_{vj,v'j'}(E)$ for H$_{2}(v=0,j=0,1,2) + \mathrm{O} \rightarrow \mathrm{H}_{2}(v'=0,j')$ + O, and $j \neq j'$.
The reactive cross section for H$_{2}(v=0,j=0) + \mathrm{O} \rightarrow \mathrm{OH} + \mathrm{H}$ is also shown and compared with the experimental results (black squares) \citep{2003JChPh.118.1585G}.}
\label{fig1}
\end{figure}
The O($^3P$)+H$_2(v,j)$ interaction was described using the two lowest potential energy surfaces, Rogers' LEPS $^3A''$ \citep{Rogers_Kupperman_PES_2000} and Brand\~ao's BMS1 $^3A'$ \citep{2004JChPh.121.8861B}.
Partial cross sections for initial and final rotational levels $j$ and $j'$ were constructed as a statistically weighted sum of the independently calculated cross sections for the two potential surfaces, where both $^3A''$ and $^3A'$ contribute a weight factor of $1/3$ \citep{2004JChPh.121.6346B}. Elastic, inelastic, and momentum transfer partial cross sections for oxygen colliding with the hydrogen molecule in three energetically lowest rotational states are given in Figure \ref{fig1}.
We compared our reactive cross sections for OH production to the previously published results \citep{2003JChPh.119..195B,2004JChPh.121.6346B,2004JChPh.120.4316B,2010ChJCP..23..149W}
and found them to be in close agreement within the available energy range.
To determine the energy transfer rate from the suprathermal oxygen to atmospheric H$_2$ and find its escape rate, we used kinetic theory with a quantum description of internal molecular structure and realistic anisotropic cross sections. Since the reactive cross sections are an order of magnitude smaller than the elastic cross sections (Figure \ref{fig1}), and the more massive OH molecule has a considerably higher escape threshold than H$_2$, we neglected it in this study. However, note that a small fraction of produced OH molecules may be sufficiently energetic to escape.
The energy transferred from the energetic O projectile to the initially stationary H$_2$ target in the laboratory frame (LF) can be expressed as \citep{1982itam.book.....J}
\begin{equation}
T_{v',j'} = \frac{m_\mathrm{O} \, m_{\mathrm{H}_2}} {(m_\mathrm{O} + m_{\mathrm{H}_2})^2}
\left( 1 + \gamma_{v',j'} - 2\sqrt{\gamma_{v',j'}} \cos \theta \right) E,
\label{eq1}
\end{equation}
where
$m_\mathrm{O}$ and $m_{\mathrm{H}_2}$ are the masses of O and H$_2$, respectively, $E$ is the collision energy in the LF, $\gamma_{v',j'} = \epsilon_{v'j'} / \epsilon$ is the ratio of the CM translational kinetic energies after ($\epsilon_{v'j'}$) and before ($\epsilon$) the collision, and $\theta$ is the scattering angle in the CM frame.
The energies $\epsilon_{v'j'}$ were calculated quantum mechanically for the two triplet potential surfaces. Eq. (\ref{eq1}) takes into account that the energy transferred to H$_2$ molecules is spent on increasing their translational kinetic energy and exciting their internal RV degrees of freedom.
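As a consistency check on Eq.~(\ref{eq1}), a short calculation in the elastic head-on limit ($\gamma_{v',j'} = 1$, $\theta = \pi$; masses in amu) gives the maximum energy-transfer fraction and the corresponding kinematic threshold for escape:

```python
# Maximum energy transfer in an elastic head-on collision (Eq. 1 with
# gamma = 1, theta = pi): T_max = 4 m_O m_H2 / (m_O + m_H2)^2 * E
m_O, m_H2 = 16.0, 2.0            # masses in amu
f_max = 4 * m_O * m_H2 / (m_O + m_H2)**2
E_esc = 0.26                     # H2 escape energy from Mars, eV
E_threshold = E_esc / f_max      # minimum lab-frame O energy for escape
print(f_max, E_threshold)        # f_max ~ 0.395, E_threshold ~ 0.66 eV
```

The resulting threshold of about 0.66 eV is consistent with the onset of significant escape fractions above 0.7 eV found in the full calculation.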
The fraction of energized H$_2$ molecules capable of escaping can be calculated as
\begin{equation}
\Gamma_{vj}^{v'j'}(E) = \frac{\int_{\theta_{\mathrm{min}}}^{\pi} Q_{vj,v'j'}(\theta)
\left(1-\cos \theta \right) \sin \theta d \theta }
{\int_{0}^{\pi} Q_{vj,v'j'}(\theta) \left(1-\cos \theta \right) \sin \theta d \theta } ~,
\label{eq2}
\end{equation}
where $Q_{vj,v'j'}(\theta)$ is the differential cross section for scattering of H$_2$ in the initial $(v,j)$ into the final $(v',j')$ state.
The critical angle $\theta_{\mathrm{min}}$ was determined from the condition that the translational part of the transferred energy $T_{v',j'}$ is equal to the minimum energy required for H$_2$ to escape from Mars, $E_{\mathrm{esc}} = 0.26$ eV. An alternative description of the escape process could be constructed by performing Monte Carlo simulations with accurate quantum cross sections for angular distributions of the recoiled H$_2$ molecules.
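For an isotropic model differential cross section, an assumption made here only for illustration (the actual calculation uses the anisotropic quantum $Q_{vj,v'j'}(\theta)$), the angular integrals in Eq.~(\ref{eq2}) become elementary in the elastic limit:

```python
def escape_fraction_isotropic(E, E_esc=0.26, m_O=16.0, m_H2=2.0):
    """Eq. (2) for elastic scattering (gamma = 1) with an isotropic
    differential cross section Q(theta) = const; energies in eV."""
    mu = m_O * m_H2 / (m_O + m_H2)**2
    u = 1.0 - E_esc / (2.0 * mu * E)      # cos(theta_min) from Eq. (1)
    if u < -1.0:
        return 0.0                        # even a head-on collision is insufficient
    # integral of (1 - cos t) sin t dt from theta_min to pi, normalized by
    # the full-range value of 2
    return (u - 0.5 * u * u + 1.5) / 2.0

g = escape_fraction_isotropic(1.0)        # escape fraction at E = 1 eV
```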
Momentum transfer cross sections $\sigma^{\mathrm{mt}}_{v j, v' j'}$ for inelastic collisions were calculated using \citep{1978JChPh..68.1585P}
\begin{equation}
\sigma^{\mathrm{mt}}_{v j, v' j'} = 2 \pi \int_0^{\pi} \mathrm{d} \theta \sin \theta
\left( 1-\sqrt{ 1 - \gamma_{v',j'}} \cos \theta \right) Q_{vj,v'j'}(\theta) .
\label{eq3}
\end{equation}
\section{Flux and Distribution of Escaping H$_2$}
Jeans escape and collisions with hot oxygen are the major mechanisms that contribute to the escape of neutral H$_2$ molecules and their isotopes from the martian atmosphere.
Both processes are strongly dependent on the temperature and density of the upper layers of the martian atmosphere. The temperature of the exosphere (above an altitude of about 180 km), $T_{\mathrm{exo}}$, is approximately constant and estimated to be between 240 and 280 K, depending on the solar activity and gas density profiles \citep{2010Icar..207..638K,2009Icar..204..527F,2003JGRA..108.1223F}. To obtain a conservative estimate of the non-thermal flux of escaping H$_2$, we considered $T_{\mathrm{exo}} = 240$ K, corresponding to low solar activity.
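A quick order-of-magnitude check with standard reference values for Mars (the GM and radius below are textbook numbers, not taken from this Letter) reproduces the $\approx 0.26$ eV escape energy and shows that the Jeans parameter $\lambda = E_{\mathrm{esc}}/k_B T$ is large at 240 K:

```python
# Escape energy of H2 from Mars and the Jeans parameter at T_exo = 240 K.
GM_mars = 4.2828e13    # m^3 s^-2 (standard value)
r = 3.396e6            # Mars radius, m; at the exobase lambda is a few % smaller
amu = 1.6605e-27       # kg
kB = 1.3807e-23        # J/K
eV = 1.6022e-19        # J

m_H2 = 2.016 * amu
E_esc = GM_mars * m_H2 / r       # gravitational binding energy, J
lam = E_esc / (kB * 240.0)       # Jeans escape parameter
print(E_esc / eV, lam)           # ~0.26 eV, lambda ~ 12.7
```

Since the Jeans flux scales as $(1+\lambda)e^{-\lambda}$ and $\lambda$ grows linearly with molecular mass, thermal escape of HD and D$_2$ is strongly suppressed relative to H$_2$.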
Furthermore, we assumed a thermal distribution of the initial rotational states of H$_2$, where more than 95 \% of the total population is distributed between its first three rotational states, $j=0,1,2$, with corresponding population fractions equal to 0.31, 0.46, and 0.19, respectively.
Using Eqs. (\ref{eq2},\ref{eq3}) we have calculated the values of the thermally averaged momentum transfer cross sections and the fractions $\Gamma_{vj}^{v'j'}(E)$ of rovibrationally excited H$_{2}$, sufficiently energetic to escape from Mars (Figure \ref{fig2}).
The fraction $\Gamma_{vj}^{v'j'}(E)$ becomes significant at collision energies greater than 0.7 eV for H$_2(j'=0,2)$. Although higher rotational states require increasingly larger projectile energies, \textit{e.g.} H$_2(j'=16)$ can escape for $E>1.65 \; \mathrm{eV}$, their fraction in the RV distribution of the escaping molecules also becomes larger. Note that the initial population of higher vibrational levels of H$_2$ at the exobase is negligible.
Since $\Gamma_{vj}^{v'j'}(E)$ depends only on the energy transfer efficiency and the escape energy threshold, it can easily be generalized to different astronomical objects. We illustrate this for two hypothetical planets, the first corresponding in size and mass to Earth and the second to the extrasolar super-Earth Kepler-10b (3.3 Earth masses) \citep{0004-637X-729-1-27}. Note that only very energetic H$_2$, mostly in higher excited rotational levels, is able to escape (Figure \ref{fig2}).
\begin{figure}[htp]
\noindent\includegraphics[width=20pc]{fig2_v3.eps}
\caption{\textit{Top:} State-to-state momentum transfer cross sections for the H$_{2}(v',j')$. Curves for the final rotational levels, $j'=0,2,\ldots,16$ (labeled on graph), collisionally excited from the ground state of H$_{2}$, are shown.
\textit{Bottom:} Fraction $\Gamma_{v j}^{v' j'}$ of the recoiled H$_{2}(v'=0,j')$ with energies greater than the escape energy for Mars. The escaping fractions for Earth for $j'=0,4$ (dashed blue) and for the super-Earth Kepler-10b for $j'=0,8$ (light green) are given for comparison.
\label{fig2}
\end{figure}
A simple estimate of the total escape flux of H$_{2}(v',j')$ can be obtained from the exobase approximation \citep{2003JGRA..108.1223F,2004P&SS...52.1039C,2010Icar..207..638K}, using the density of exospheric H$_2$, and fractions $\Gamma_{vj}^{v'j'}(E)$ calculated above. However, such an approach neglects the hot O production below the exobase and loss of the upward flux in atmospheric collisions, resulting in a large uncertainty in the computed flux.
We constructed a more realistic 1D model of escape, analogous to the one used to describe the escape of He atoms \citep{2011GeoRL..3802203B}. In our model, the explicit consideration of energy transfer collisions is combined with the altitude-dependent rate of production of hot O atoms, $f(E,h)$, via DR channels \citep{1988P&SS...36...47G,2005JChPh.122w4311P,2011GeoRL..3802203B}. In addition, we estimated the extinction of the fluxes of suprathermal O and H$_2$ due to collisions with thermal atmospheric gases. All calculations were performed for low solar activity. We used the rate of production of hot O below 400 km by \citet{2009Icar..204..527F} and smoothly interpolated it to the rate given by \citet{1996JGR...10115765K} at higher altitudes.
The volume production rate of escaping hot H$_2(v',j')$ can be expressed as
\begin{eqnarray}
P_{v' j'}(h) & = & \frac{1}{2} \int_{0}^{\infty} \mathrm{d}E \, T_{\mathrm{H}_2}(h,E)
n_{\mathrm{H}_2}(h) \Gamma_{vj}^{v'j'}(E) \sigma_{vj,v'j'}^{\mathrm{mt}}(E) \nonumber \\
& & \times \int_{h_{2}^{\mathrm{min}}}^{h} \mathrm{d}h_2 f(E,h_2) T_\mathrm{O}(h_2,h,E) ,
\label{eq:P}
\end{eqnarray}
with the transparency factors $T_{\mathrm{H}_2}$ and $T_\mathrm{O}$ defined as
\begin{eqnarray}
T_{\mathrm{H_2}}(h,E) & = & \exp \left[-\int_{h}^{h_\mathrm{max}} \mathrm{d}h' \sum_i
\sigma_{\mathrm{H_2},i}^{\mathrm{mt}}(E) n_{i}(h') \right] \nonumber \\
T_\mathrm{O}(h_2,h,E) & = & \exp \left[ -\int_{h_2}^{h} \mathrm{d}h' \sum_i
\sigma_{\mathrm{O},i}^{\mathrm{mt}}(E) n_{i}(h') \right] .
\end{eqnarray}
\begin{figure}[htbp]
\noindent\includegraphics[width=20pc]{fig3_v2.eps}
\caption{Altitude profile of the volume production rate $P_{v',j'}(h)$ of the non-thermal flux of H$_2$ molecules escaping from Mars. The most significant rates with respect to the initial and final rotational states, $j=0$ (solid), $j=1$ (dashed) and $j=2$ (dotted), and $j'=0-10$, are shown. The curves are denoted as $jj'$.}
\label{fig3}
\end{figure}
The transparency factor $T_{\mathrm{H}_2}$ is equal to the escape probability of hot H$_2(v',j')$ produced at the altitude $h$ in collisions with incident hot O of energy $E$.
The second transparency factor, $T_\mathrm{O}$, is the probability that a hot O atom produced at the altitude $h_2$ reaches the altitude $h$ without energy loss in collisions with other atmospheric constituents.
The quantity $n_{\mathrm{H}_2}(h) \, \Gamma_{vj}^{v'j'}(E) \, \sigma_{vj,v'j'}^{\mathrm{mt}}(E)$ is the inverse mean free path for O+H$_2(v,j)$ collisions, resulting in the energy transfer greater than the H$_2$ escape threshold.
The prefactor $1/2$ accounts for the fact that, in our simplified 1D model, approximately half of the nascent energetic atoms and recoiled H$_2$ molecules are scattered towards the planet and cannot escape regardless of the energy transferred.
The summations over the flux loss of H$_2$ and O in collisions with the $i$-th atmospheric gas of density $n_i(h)$ and momentum transfer cross sections $\sigma_{\mathrm{H_2},i}^{\mathrm{mt}}(E)$ and $\sigma_{\mathrm{O},i}^{\mathrm{mt}}(E)$, respectively, include the major constituents of the martian upper atmosphere: CO$_2$, CO, N$_2$, O$_2$, H$_2$, H, Ar, and He.
The momentum transfer cross sections for H-H$_2$ \citep{1999JPhB...32.2415K}, Ar-H$_2$ \citep{2005JChPh.122b4304U}, H$_2$-H$_2$ \citep{1990JPCRD..19..653P} were used from the literature. Since no data were available in the required energy range, we used approximate mass-scaled cross sections for He-H$_2$ (from Ar-H$_2$), N$_2$-H$_2$, O$_2$-H$_2$, CO-H$_2$ (from O-H$_2$), and CO$_2$-H$_2$ (from O-N$_2$ \citep{1998JGR...10323393B}).
Using Eq. (\ref{eq:P}), we calculated the volume production rate of the escaping H$_2(v'=0,j')$ molecules induced in H$_2(v=0,j=0-2)$ + O collisions for a range of altitudes from $h_{\mathrm{min}}=130$ km to $h_{\mathrm{max}}=800$ km (Figure \ref{fig3}). Note that, by symmetry arguments, for homonuclear H$_2$ only $\Delta j=0,2,4,\ldots$ transitions are allowed \citep{1986qmv2.book.....C}.
The resulting escape rates of H$_2$ molecules are largest for elastic collisions, followed by rates about three times smaller for the first two excited rotational states, $j'=2$ and $j'=4$. The rates remain significant for final rotational levels up to $j'=10$.
\begin{figure}[htbp]
\noindent\includegraphics[width=20pc]{fig4_v3.eps}
\caption{\textit{Top:} Collisional and thermal total escape rate of H$_2(v'=0,j')$ for the first 16 rotational states $j'$. \textit{Bottom:} The same as above for H$_2$, HD, and D$_2$.}
\label{fig4}
\end{figure}
The altitude profile of the production rate of H$_2$ capable of escaping is similar to the production rate profile of He \citep{2011GeoRL..3802203B}. This was expected, since the escape of both species is driven by collisions with the nascent fast O atoms, produced mostly below 150 km for the considered atmospheric and solar conditions. The calculated altitude profile can be used to compute the non-thermal escape flux of H$_2$ molecules and estimate the accuracy of the exobase approximation.
We calculated $\phi_{j'}$, the non-thermal flux for H$_2(v'=0,j')$ molecules as
\begin{equation}
\phi_{j'} = \int_{h_{\mathrm{min}}}^{h_{\mathrm{max}}} P_{v'j'}(h) \mathrm{d}h .
\end{equation}
Total collisional and thermal fluxes were calculated as sums over all rotational levels and found to be $1.9 \times 10^5$ cm$^{-2}$ s$^{-1}$ and $1.1 \times 10^6$ cm$^{-2}$ s$^{-1}$, respectively.
A comparison of the Jeans and non-thermal escape rates of H$_2(v'=0,j')$ molecules in different rotational states $j'$ from the martian dayside is given in Figure \ref{fig4}.
To simplify the calculation we assumed the average solar conditions and neglected the latitude dependence of the production rate of hot O atoms.
The Jeans rate is about eight times greater than the non-thermal rate of the escaping H$_2$ for the lowest three rotational states, while for $j'>3$ the latter starts to dominate. The distinct character of the two RV distributions is a clear signature of different physical escape mechanisms.
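The Jeans rates quoted here follow from the standard exobase formula $\phi_J = n\,U(1+\lambda)e^{-\lambda}/(2\sqrt{\pi})$, with the most probable speed $U=\sqrt{2k_BT/m}$ and the Jeans parameter $\lambda = GMm/(k_BTr)$. The sketch below evaluates it for H$_2$, HD, and D$_2$ with illustrative martian exobase parameters (the temperature, exobase radius, and density are placeholders, not the model inputs of the paper), reproducing the steep mass ordering of the thermal rates in Table \ref{table1}.

```python
import math

k_B = 1.380649e-23       # J/K
amu = 1.66053907e-27     # kg
GM_mars = 4.2828e13      # m^3 s^-2
r_exo = 3.59e6           # m, exobase radius (~200 km altitude, placeholder)
T_exo = 200.0            # K, placeholder exobase temperature

def jeans_flux(n_exo, mass_amu):
    """Jeans escape flux [m^-2 s^-1] for an exobase density n_exo [m^-3]."""
    m = mass_amu * amu
    U = math.sqrt(2.0 * k_B * T_exo / m)          # most probable speed
    lam = GM_mars * m / (k_B * T_exo * r_exo)     # Jeans escape parameter
    return n_exo * U / (2.0 * math.sqrt(math.pi)) * (1.0 + lam) * math.exp(-lam)

fluxes = {name: jeans_flux(1e11, mass)
          for name, mass in [("H2", 2.016), ("HD", 3.022), ("D2", 4.028)]}
```

The exponential factor $e^{-\lambda}$ is what makes the thermal channel collapse for the heavier isotopologues, while the collisional channel falls off much more slowly with mass.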
\begin{table}
\centering
\begin{tabular}{r | c c c}
\hline
& H$_2$ & HD & D$_2$ \\ \hline
Jeans escape rate (s$^{-1}$) & $1.1 \times 10^6$ & 2.7 & $3.3 \times 10^{-6}$ \\
Non-thermal escape rate (s$^{-1}$) & $1.9\times 10^5$ & 74 & 0.03 \\
\hline
\end{tabular}
\caption{Total collisionally-induced escape rates of H$_2$, HD, and D$_2$ from the martian atmosphere.}
\label{table1}
\end{table}
While, in the case of H$_2$, the thermal rate is almost an order of magnitude higher than the collisionally-induced escape rate, the relative importance of the two processes reverses for the heavier isotopologues HD and D$_2$ (Table \ref{table1}). A similar scaling of the collisional and thermal escape fractions can be expected for molecular escape from more massive astronomical objects.
\section{Conclusions}
We find that the collisionally-induced outflow of H$_2$ molecules and their heavier isotopologues contributes to the evolution of the martian atmosphere. Specifically, the escape rate of molecular hydrogen induced by collisions with hot oxygen in the martian atmosphere was calculated and found to be about six times smaller than the corresponding Jeans escape rate for low solar activity.
For heavier molecules, collisional escape will dominate over thermal escape, as we have illustrated in the case of the HD and D$_2$ isotopologues. In fact, the described process of collisionally-induced molecular ejection may be one of the most important escape mechanisms for HD and D$_2$ from Mars. Consequently, the calculated escape fluxes of H$_2$ isotopologues could be important in analyses of the H/D ratio on Mars and of the evolution of water in martian history.
The described mechanism of molecular escape, where a collision provides sufficient translational energy to exceed the escape threshold and simultaneously excites internal molecular degrees of freedom, is rather general. It could be used to evaluate non-thermal escape fluxes and RV distributions of heavier molecules, such as CO, N$_2$, or CH$_4$, from Mars, other Solar system bodies, and exoplanets.
For the escape flux induced by O atoms produced in DR, the upper limit on the mass of the escaping molecule is about $30 \; u$ on Mars.
Similarly, we estimate that the non-thermal escape of H$_2$ from a planetary atmosphere is possible for planetary masses up to about 3.4 Earth masses. A number of solar system bodies as well as the lightest currently confirmed exoplanets belong in that mass range.
These limits do not include other non-thermal sources of hot atoms.
The escaping H$_2$ molecules exhibit a characteristic internal energy distribution, with a significant fraction of populated higher rotational states. Since H$_2$ molecules do not have a permanent electric dipole moment, they decay to the ground state mainly via collisions with the background gases present in an extended planetary corona. This is true for all escaping molecules that do not have a permanent dipole moment. It could be possible to indirectly detect the presence of H$_2$ or other rotationally excited light molecules in the extended martian corona from a careful analysis of the collision rates and abundances of the excited coronal species.
Finally, a significant amount of rovibrationally excited H$_2$ molecules remain in the martian atmosphere after colliding with hot O atoms. The cross sections and energy transfer parameters presented in this study can be used to determine non-thermal translational and rovibrational distributions of hot H$_2$ gas in the upper atmosphere of Mars.
\begin{acknowledgments}
We are grateful to D. Wang, A. Kuppermann, and J. Brand\~ao for providing Fortran subroutines for constructing potential energy surfaces, and to N. Lewkow for reading and suggestions. M.G. and V.K. were supported by NASA grants NNX09AF13G and NNX10AB88G.
\end{acknowledgments}
\bibliographystyle{agu08}
\section{Introduction}
A major breakthrough in the understanding of superconductivity is due to the wave function ansatz proposed by Bardeen, Cooper, and Schrieffer \cite{BCS}. The great advantage of this ansatz is that it leads to results in agreement with experiments through calculations that are quite easy to perform analytically. Its standard form in the grand canonical ensemble, as a sum of states with different particle numbers, makes the Pauli exclusion principle between paired up- and down-spin electrons straightforward to handle. Calculations in the grand canonical ensemble, however, mask the key role played by the Pauli exclusion principle in superconductivity. Indeed, due to the very peculiar form of the reduced BCS potential --- an up-spin electron $(\v k)$ can interact with a down-spin electron $(-\v k)$ only --- Pauli blocking is the only way two correlated electron pairs can feel each other (see Fig. \ref{fig:shiva}). This in particular explains why Cooper pairs can strongly overlap without dissociating, in contrast with excitons, which dissociate into an electron-hole plasma \cite{CN} through a Mott transition when overlap starts.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{shiva}
\caption{Two Cooper pairs made of the repeated interactions of an up spin electron $\v k$ and a down spin electron $-\v{k}$ cannot interact by the reduced BCS potential because this would impose $\v{k}'_1=\v{k}'_2$. The two up spin electrons would then have the same momentum which is impossible due to the Pauli exclusion principle. \label{fig:shiva}}
\end{center}
\end{figure}
To better understand the key role played by the Pauli exclusion principle in the physics of BCS superconductors, it is necessary to stay in the canonical ensemble, with a fixed pair number and a fixed number of states available for pairing. However, exactly handling Pauli blocking between a fixed number of paired fermions, such as excitons or Cooper pairs, is far from easy.
There were, in the past, several discussions of the exact particle-conserving solution for the reduced BCS potential using the Richardson-Gaudin procedure \cite{Richardson1,Richardson2,Richardson3,Gaudin,CobosonBcsRich,ortiz,JETPLett} and of its difference with the BCS ansatz \cite{Bog,Bardeen,bang,hasegawa,Roman2002}. Most discussions of the ground state, however, focus on recovering the correct energy or some physical quantities, like the integrals of motion \cite{ortiz}, instead of the wave function itself. Yet, the BCS ansatz wave function is strongly linked to the picture people commonly have of superconductivity.
During the last decade, we developed a many-body formalism appropriate to composite bosons made of fermion pairs\cite{CobosonPhysicsReports, combescotBCS, CobosonBcsRich,motheaten}. Up to now, we have extensively studied excitons and developed a formalism adapted to their many-body physics. Because the long-range Coulomb interaction between electrons and holes leads to a Mott dissociation of the exciton gas into an electron-hole plasma \cite{CN} when the density increases, the relevant exciton regime is the dilute regime. By contrast, the relevant regime in BCS superconductivity is the dense regime with Cooper pair wave functions strongly overlapping. As a result, the many-body physics of composite bosons like Cooper pairs is expected to have some similarities with the one of excitons, but with a few major differences.
In section II, we develop a commutator formalism for paired electrons capable of handling the Pauli exclusion principle within an $N$-pair condensate in an exact way.
In section III, we come back to the BCS ansatz in its grand canonical form and we briefly rederive through the standard grand canonical procedure, some textbook results \cite{Tinkham,Fetter} on quantities we are going to consider in the canonical ensemble, namely, the $\v k$-state population, the pair number mean value, its fluctuations, and the two-pair correlation function.
In section IV, we turn to the canonical ensemble, with a fixed number of pairs. Through a direct study of the probability distribution for $N$-pair states in the BCS ansatz, we prove that this ansatz indeed corresponds to a distribution very much peaked on a particular value $N^\ast$ of the pair number, as a result of the ``moth-eaten effect'' induced on Cooper pairs by the Pauli exclusion principle\cite{motheaten}. The standard derivation of this result, through the fluctuations of the pair number around its mean value\cite{Tinkham}, completely hides the microscopic origin of this maximum.
In section V, we calculate the fraction of the $\v k$ electron state occupied in a $N$-pair state. We do
show that it is identical to the one calculated within the grand canonical version of the ansatz for $N=N^\ast$. This is also true for the pair operator mean value associated to what is often called ``pair wave function''.
In section VI, we come back to what should be called ``Cooper pair wave function''.
In section VII, we conclude.
\section{Composite boson formalism for condensed pairs}
The goal of this section is to develop a formalism capable of handling Pauli blocking within an $N$-fermion-pair condensate in an exact way.
For that, we follow our previous works \cite{ CobosonBcsRich,motheaten} and introduce the generalized creation operator for correlated pairs,
\begin{equation}
B_n^\dag=\sum_{\v k}|\varphi_{\v k}^2|^n\varphi_{\v k}\beta_{\v k}^\dag\ ,
\end{equation}
with $n=(0,1,2\cdots)$. The operator $\beta_{\v k}^\dag=a_{\v k}^\dag b_{-\v k}^\dag$ creates a pair of free fermions with zero total momentum. In the case of BCS superconductivity, these fermions are up and down spin electrons.
We first note that free fermion pair creation operators commute, $[\beta_{\v k'}^\dag,\beta_{\v k}^\dag]=0$, while
\begin{equation}
[\beta_{\v k'},\beta_{\v k}^\dag]=\delta_{\v k',\v k}-D_{\v k'\v k}\ ,
\end{equation}
with $D_{\v k'\v k}=\delta_{\v k',\v k}(a_{\v k}^\dag a_{\v k}+b_{-\v k}^\dag b_{-\v k})$, so that $D_{\v k'\v k}|0\rangle=0$. We also have
\begin{equation}
[a_{\v p}^\dag a_{\v p},\beta_{\v k}^\dag]=\delta_{\v p,\v k}\beta_{\v k}^\dag=[b_{-\v p}^\dag b_{-\v p},
\beta_{\v k}^\dag]\ .
\end{equation}
It is then easy to show that
\begin{equation}\label{BBcomm}
[B_m,B_n^\dag]=\tau_{m+n}-D_{m+n}\ ,
\end{equation}
where the ``deviation from boson operator'' of these generalized correlated pairs is given by $D_m=\sum_{\v k}|\varphi_{\v k}^2|^{m+1}(a_{\v k}^\dag a_{\v k}+b_{-\v k}^\dag b_{-\v k})$. The scalar $\tau_m$, defined as
\begin{equation}
\tau_m=\sum_{\v k}|\varphi_{\v k}^2|^{m+1},
\end{equation}
is the $(m+1)$-th moment of the $\v k$ state distribution in the correlated pair at hand. To possibly relate this moment to the correlated pair wave function, we are led to normalize the $\varphi_{\v k}$ distribution as $\tau_0=\sum_{\v k}|\varphi_{\v k}^2|=1$. We then have, with $|0\rangle$ denoting the vacuum state,
\begin{equation}
\langle0|B_0^{}B_0^{\dag}|0\rangle=\tau_0=1
\end{equation}
In order to easily handle the Pauli exclusion principle within a $B_0^{\dag N}|0\rangle$ condensate, we also need
\begin{equation}
[D_m,B_n^\dag]=2B_{m+n+1}^\dag\ .
\end{equation}
Using it and
\begin{eqnarray}
\left[D_m,B_0^{\dag N}\right]=\left[D_m,B_0^\dag\right]B_0^{\dag N-1}\hspace{1cm}\nonumber\\
+B_0^\dag\left[D_m,B_0^{\dag N-1}\right],
\end{eqnarray}
we get by iteration
\begin{equation}
\left[D_m,B_0^{\dag N}\right]=2NB_{m+1}^\dag B_0^{\dag N-1}\ .\label{DBcomm}
\end{equation}
In the same way, Eqs.(\ref{BBcomm}) and (\ref{DBcomm}), along with
\begin{eqnarray}
\left[B_m,B_0^{\dag N}\right]=\left[B_m,B_0^\dag\right]B_0^{\dag N-1}\hspace{2cm}\nonumber\\
+B_0^\dag\left[B_m,B_0^{\dag N-1}\right],\label{Bcomm}
\end{eqnarray}
allow us to rewrite the RHS of the above equation as
\begin{equation}\label{BcommRHS}
NB_0^{\dag N-1}(\tau_m-D_m)-N(N-1)B_{m+1}^\dag B_0^{\dag N-2}.
\end{equation}
One important quantity for a condensate made of $N$ composite bosons $B_0^\dag$ is its normalization factor. Let us write it as
\begin{equation}
\langle 0|B_0^NB_0^{\dag N}|0\rangle=N!F_N\ .
\end{equation}
If $B_0^\dag$ were an elementary boson creation operator, we would have $F_N=1$. For composite bosons, $F_N$, equal to 1 for $N=1$, decreases when $N$ increases, due to what we called the ``moth-eaten effect''\cite{JETPLett}: more and more free pair states are missing in the $B_0^\dag$ operators of $B_0^{\dag N}|0\rangle$ due to the Pauli exclusion principle between these $N$ pairs, as if $N$ little moths had eaten these free states.
To calculate $F_N$, we first note that $\langle 0|B_0^N B_0^{\dag N}|0\rangle$ also reads $\langle 0|B_0^{N-1}B_0B_0^{\dag N}|0\rangle$. We then use Eqs. (\ref{Bcomm}, \ref{BcommRHS}). For $\tau_0=1$, we find
\begin{equation}\label{FN}
F_N=F_{N-1}-\frac{1}{(N-2)!}\langle 0|B_0^{N-1}B_1^\dag B_0^{\dag N-2}|0\rangle,
\end{equation}
and we iterate using Eqs.(\ref{Bcomm}, \ref{BcommRHS}). This shows that the $F_N$'s are linked by
\begin{eqnarray}
F_N&=&F_{N-1}-(N-1)\tau_1F_{N-2}\hspace{2.3cm}\nonumber\\
&&\hspace{0.5cm}+(N-1)(N-2)\tau_2F_{N-3}+\cdots\nonumber\\
&&\hspace{1cm}+(-1)^{N-1}(N-1)!\tau_{N-1}F_0
\end{eqnarray}
Eq.(\ref{FN}) also shows that $F_N$ is a decreasing function of $N$: The moth-eaten effect gets larger and larger when $N$ increases. Indeed, the last matrix element is positive as seen by expanding it on free pair operators; this matrix element then reads
\begin{equation}
\sum_{\v k_1\cdots \v k_{N-1}}^{\neq}|\varphi_{\v k_1}^4||\varphi_{\v k_2}^2|\cdots|\varphi_{\v k_{N-1}}^2|,
\end{equation}
the sum being taken over different $(\v k_1,\cdots,\v k_{N-1})$ due to the Pauli exclusion principle.
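The recursion above is easy to check numerically: since $B_0^{\dag N}|0\rangle$ expands over sets of $N$ distinct pair states, $F_N$ equals $N!$ times the $N$-th elementary symmetric polynomial of the weights $|\varphi_{\v k}^2|$. The sketch below, with an arbitrary illustrative weight distribution, verifies the recursion against this exact expression and the monotonic decrease of $F_N$.

```python
import math
from itertools import combinations

x = [0.4, 0.3, 0.15, 0.1, 0.05]   # |varphi_k|^2, normalized so that tau_0 = 1
tau = [sum(xi ** (m + 1) for xi in x) for m in range(len(x))]

def F_recursive(N):
    """F_N from the alternating recursion, with F_0 = 1."""
    if N == 0:
        return 1.0
    total, prefactor = 0.0, 1.0
    for m in range(1, N + 1):
        total += (-1) ** (m - 1) * prefactor * tau[m - 1] * F_recursive(N - m)
        prefactor *= (N - m)          # builds (N-1)(N-2)...(N-m+1) stepwise
    return total

def F_exact(N):
    """F_N = N! * e_N(x), e_N being the N-th elementary symmetric polynomial."""
    return math.factorial(N) * sum(math.prod(c) for c in combinations(x, N))
```

For instance, $F_2=1-\tau_1$ comes out of both routes, and the computed sequence decreases steadily with $N$, in line with the moth-eaten effect.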
\section{BCS ansatz}
Let us introduce the \emph{unnormalized} correlated pair creation operator
\begin{equation}\label{C}
C^\dag=\sum_{\v p}\phi_{\v p}\beta_{\v p}^\dag\ .
\end{equation}
with $\langle 0|CC^{\dag }|0\rangle$ possibly different from $1$. From it, we construct the following linear combination of $N$-pair states\cite{Tinkham,Fetter,Leggett}
\begin{equation}
|\phi\rangle=\sum_{N=1}^{+\infty}\frac{1}{N!}C^{\dag N}|0\rangle
\end{equation}
This state also reads
\begin{equation}
|\phi\rangle=e^{C^\dag}|0\rangle=\Pi_{\v p}(1+\phi_{\v p}\beta_{\v p}^\dag)|0\rangle.
\end{equation}
since $\beta_{\v p}^{\dag 2}=0$ due to the Pauli exclusion principle. By writing $\phi_{\v p}$ as $v_{\v p}/u_{\v p}$ with $|u_{\v p}^2|+|v_{\v p}^2|=1$, we get the usual form of the \emph{normalized} BCS state as product of $\v p$ operators
\begin{equation}
|\phi_{BCS}\rangle=\gamma|\phi\rangle
=\prod_{\v p}(u_{\v p}+v_{\v p}\beta_{\v p}^\dag)|0\rangle,
\end{equation}
where the normalization factor reads $\gamma=\prod_{\v p}u_{\v p}$.
Using this $\v p$ product, the $\v k$ electron distribution in the $|\phi_{BCS}\rangle$ state is easy to find as
\begin{eqnarray}
\langle\hat{N}_{\v k}\rangle&=&\langle\phi_{BCS}|a_{\v k\uparrow}^\dag a_{\v k\uparrow}|\phi_{BCS}\rangle\nonumber\\
&=&|v_{\v k}^2|=\frac{|\phi_{\v k}^2|}{1+|\phi_{\v k}^2|}\ .\label{Nk}
\end{eqnarray}
As a result, the $\phi_{\v k}$ distribution of the correlated pair operator $C^\dag$ defined in Eq.(\ref{C}) is related to the mean value $\langle\hat{N}\rangle$ of the number of up \emph{or} down spin electrons $\hat{N}=\sum_{\v k}a_{\v k\uparrow}^\dag a_{\v k\uparrow}$ in the $|\phi_{BCS}\rangle$ state through
\begin{eqnarray}
\langle\hat{N}\rangle&=&\sum_{\v k}\langle\hat{N}_{\v k}\rangle=\sum_{\v{k}}|v_{\v k}^2|
=\sum_{\v k}\frac{|\phi_{\v k}^2|}{1+|\phi_{\v k}^2|}\ .
\end{eqnarray}
Turning to the fluctuation of this mean value, we find that it reads
\begin{equation}
\frac{\langle\hat{N}^2\rangle-\langle\hat{N}\rangle^2}{\langle\hat{N}\rangle^2}=\frac{\sum_{\v k}|u_{\v k}^2v_{\v k}^2|}{\Big{(}\sum_{\v k}|v_{\v k}^2|\Big{)}^2}.
\end{equation}
Since each sum over $\v k$ is proportional to the sample volume, the above ratio goes to zero in the large sample limit as 1 over the volume. So, this fluctuation is indeed very small which proves that the $N$ distribution in the $|\phi_{BCS}\rangle$ state is very much peaked on its mean value $\langle\hat{N}\rangle$.
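The $1/\mathrm{volume}$ scaling of the fluctuation can be illustrated with a toy BCS-like occupation $|v_{\v k}^2|=\frac{1}{2}\big(1-\xi_{\v k}/\sqrt{\xi_{\v k}^2+\Delta^2}\big)$ (an arbitrary illustrative choice with placeholder units, not a solved gap equation): doubling the number of $\v k$ states, i.e., the sample volume, halves the relative variance.

```python
import numpy as np

def bcs_stats(n_states):
    """Mean pair number and relative variance for a toy BCS occupation."""
    xi = np.linspace(-5.0, 5.0, n_states)       # reduced kinetic energies (placeholder)
    v2 = 0.5 * (1.0 - xi / np.hypot(xi, 1.0))   # usual BCS v_k^2 with Delta = 1
    u2 = 1.0 - v2
    mean_N = v2.sum()                           # <N> = sum_k v_k^2
    rel_var = (u2 * v2).sum() / mean_N ** 2     # (<N^2>-<N>^2)/<N>^2
    return mean_N, rel_var

mean1, var1 = bcs_stats(1000)
mean2, var2 = bcs_stats(2000)
```

Both sums grow linearly with the number of states, so the ratio shrinks as one over the volume, as stated above.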
We can also consider the mean value of the two-pair operator $\beta_{\v k}\beta_{\v k'}^\dag$ in the $|\phi_{BCS}\rangle$ state. For $\v k\neq\v k'$, we find
\begin{equation}\label{betaBCS}
\langle\phi_{BCS}|\beta_{\v k}\beta_{\v k'}^\dag|\phi_{BCS}\rangle=u_{\v k}^\ast v_{\v k}u_{\v k'}v_{\v k'}^\ast=F_{\v k}F_{\v k'}^\ast\
\end{equation}
where $F_{\v k}$, often called ``pair wave function'', is defined as
\begin{equation}\label{fn}
F_{\v k}=u_{\v k}^\ast v_{\v k}=\frac{\phi_{\v k}}{1+|\phi_{\v k}^2|}\ .
\end{equation}
\section{Direct calculation of the $N$-pair state probability in the BCS ansatz}
The above calculation of the $\langle\hat{N}\rangle$ fluctuation using the grand canonical form $|\phi_{BCS}\rangle$ of the BCS ansatz as a $\v p$ product is quite convincing to conclude that the $N$-pair state distribution in this ansatz is very much peaked on the $N$ mean value. However, a direct calculation of the probability distribution, using the $N$ sum $|\phi\rangle$, is of interest to understand the microscopic origin of this peaked value.
If we only consider the $(1/N!)$ prefactor in the $|\phi\rangle$ sum, we could na\"{\i}vely conclude that the $N=1$ state dominates the sum. However, in order for the prefactors of the $N$-pair state in the $|\phi\rangle$ expansion to have some physical meaning, this $N$-pair state must be normalized.
To do it, we first introduce the normalized correlated pair operator
\begin{equation}
B_0^\dag=\sum_{\v p}\varphi_{\v p}\beta_{\v p}^\dag=\frac{C^\dag}{\sqrt{S}}\ ,
\end{equation}
where $\varphi_{\v p}=\phi_{\v p}/\sqrt{S}$ and $S=\sum_{\v p}|\phi_{\v p}^2|$, in order to have $\langle 0|B_0B_0^\dag|0\rangle=1$. The normalized $N$-pair state associated to the $C^\dag$ correlated pair operator then reads
\begin{equation}\label{psiN}
|\psi_N\rangle=\frac{B_0^{\dag N}|0\rangle}{\sqrt{N!F_N}}=\frac{C^{\dag N}|0\rangle}{\sqrt{N!F_NS^N}}\
\end{equation}
with $F_N$ defined as in Eq.(12), in order to have $\langle\psi_N|\psi_N\rangle=1$.
The above equation allows us to rewrite $|\phi_{BCS}\rangle$ given in Eq.(19) as
\begin{equation}\label{phiBCS}
|\phi_{BCS}\rangle=\gamma\sum_{N}\sqrt{\frac{F_NS^N}{N!}}|\psi_N\rangle
\equiv\sum_{N}\lambda_N|\psi_N\rangle\ .
\end{equation}
We then note that $\sum_N |\lambda_N^2|=1$ since $\langle\phi_{BCS}|\phi_{BCS}\rangle=1$ and $\langle\psi_N|\psi_N\rangle=1$; so, $|\lambda_N^2|$ indeed is the probability distribution of the $N$-pair states in the BCS ansatz.
To show that this $N$ distribution is peaked, we first note that $1/N!$ and $F_N$ both decrease from $1$ when $N$ increases, so the ratio $F_N/N!$ also decreases with $N$. To show that this decrease is compensated by the growth of $S^N$, so that $\lambda_N$ can be peaked, we note that the sum $S=\sum_{\v p}|v_{\v p}^2|/|u_{\v p}^2|$ is larger than $\sum_{\v p}|v_{\v p}^2|=\langle\hat{N}\rangle$ since $|u_{\v p}^2|+|v_{\v p}^2|=1$ implies $|u_{\v p}^2|<1$. As a result, $S>1$ and $S^N$ increases with $N$. This $S^N$ increase first dominates the $1/N!$ decrease since, by the Stirling formula, $S^N/N!\simeq(S/N)^N$. So, in the absence of the $F_N$ factor, the $\lambda_N$ probability distribution would be peaked at $N^{\ast\ast}\simeq S$, which is far larger than the pair number mean value $\langle\hat{N}\rangle$.
The moth-eaten effect induced by Pauli blocking on Cooper pairs is responsible for bringing the $\lambda_N$ peak back to $\langle\hat{N}\rangle$, through the $F_N$ decrease it induces. To show it, we first note that this $F_N$ decrease does not affect the $\lambda_N$ behavior so much for small $N$ since, as seen from Eq.(\ref{FN}), $F_N/F_{N-1}$ stays close to 1 for small $N$. By contrast, $F_N$ is going to play a key role for large $N$'s by changing the peak of the probability distribution from $N^{\ast\ast}=S$ to $N^\ast$ which, as a definition of the $\lambda_N$ peak, must be such that
$\lambda_{N^\ast-1}\simeq \lambda_{N^\ast}$. Eq.(\ref{phiBCS}) then gives
\begin{equation}\label{xn}
1\simeq\frac{F_{N^\ast-1}}{F_{N^\ast}}\ \frac{N^\ast}{S}\equiv x_{N^\ast}^2\
\end{equation}
If Cooper pairs were elementary bosons, $F_N$ would stay equal to 1 for all $N$ and the peak of the $\lambda_N$ distribution would take place for a pair number equal to $S$, which is far larger than $\langle\hat{N}\rangle$. For composite bosons, the $F_N/F_{N-1}$ ratio is smaller than 1 since $F_N$ decreases with $N$, as previously shown; so, $\lambda_N$ is peaked at an $N$ value smaller than $S$.
If the $N^\ast$ peak were in the dilute regime, the $F_N/F_{N-1}$ ratio would be close to 1 and $N^\ast$ would be close to $S\gg\langle\hat{N}\rangle$. This shows that the solution of Eq.(\ref{xn}) lies in the dense regime, with $F_N/F_{N-1}$ substantially smaller than 1. This has to be contrasted with excitons, for which the $F_N/F_{N-1}$ ratios always stay close to 1: at large density, excitons dissociate into an electron-hole plasma\cite{CN}.
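The peaked character of the distribution can be checked directly: with $F_N=N!\,e_N(|\varphi_{\v p}^2|)$, Eq.(\ref{phiBCS}) gives $|\lambda_N^2|=e_N(|\phi_{\v p}^2|)/\prod_{\v p}(1+|\phi_{\v p}^2|)$, i.e., the normalized coefficients of the polynomial $\prod_{\v p}(1+|\phi_{\v p}^2|\,t)$. The sketch below, with an arbitrary illustrative $\phi_{\v p}$ distribution, shows the peak sitting at the grand canonical mean $\langle\hat N\rangle=\sum_{\v p}|\phi_{\v p}^2|/(1+|\phi_{\v p}^2|)$, well below $S=\sum_{\v p}|\phi_{\v p}^2|$.

```python
import numpy as np

rng = np.random.default_rng(0)
phi2 = rng.uniform(0.1, 4.0, size=60)     # |phi_p|^2 for 60 pair states (arbitrary)

poly = np.array([1.0])
for xp in phi2:                           # coefficients of prod_p (1 + |phi_p|^2 t)
    poly = np.convolve(poly, [1.0, xp])

lam2 = poly / poly.sum()                  # |lambda_N|^2 for N = 0..60; sums to 1
N_star = int(np.argmax(lam2))             # peak of the pair-number distribution
mean_N = np.sum(phi2 / (1.0 + phi2))      # grand canonical <N> = sum_k |v_k|^2
S = phi2.sum()                            # naive peak N** ~ S without F_N
```

Dropping the $e_N$ factor (i.e., setting $F_N=1$) would instead peak the distribution near $S$, which illustrates how the moth-eaten effect pulls the peak back to $\langle\hat N\rangle$.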
More demanding is to go further and to precisely relate the pair number $N^*$ corresponding to the $\lambda_N$ maximum to the pair number mean value $\langle\hat{N}\rangle$ calculated within the $\v p$ product form of the BCS ansatz. With this goal in mind, let us first calculate the $\v k$ electron state population in the $B_0^{\dag N}$ condensate.
\section{$\v k$-electron population in $N$-pair state}
The $\v k$ electron distribution is straightforward to obtain in the $|\phi_{BCS}\rangle$ state; the result is given in Eq.(\ref{Nk}). To calculate the $\v k$ electron distribution in a $N$-pair state $|\psi_N\rangle$ is not as easy.
As a first idea, we could perform a brute force calculation of
\begin{equation}\label{NkK}
\langle\hat{N}_{\v{k}}\rangle_{N}=\langle\psi_N|a_{\v k\uparrow}^\dag a_{\v k\uparrow}|\psi_N\rangle
\end{equation}
in the normalized state $|\psi_N\rangle$ given in Eq.(\ref{psiN}), using the commutator formalism developed in section II. To do it, we start with $[a_{\v k},B_0^\dag]=\varphi_{\v k}b_{-\v k}^\dag$ which gives by iteration
\begin{equation}
[a_{\v k},B_0^{\dag N}]=N\varphi_{\v k} b_{-\v k}^\dag B_0^{\dag N-1}\ .
\end{equation}
This allows us to rewrite Eq.(\ref{NkK}) as
\begin{equation}
\langle\hat{N}_{\v{k}}\rangle_{N}=\frac{N^2|\varphi_{\v k}^2|}{N!F_N}\langle 0|B_0^{N-1}b_{-\v k}b_{-\v k}^\dag B_0^{\dag N-1}|0\rangle\ .
\end{equation}
We then use $b_{-\v k}b_{-\v k}^\dag=1-b_{-\v k}^\dag b_{-\v k}$ and iterate the process. This gives $ \langle\hat{N}_{\v{k}}\rangle_{N}$ through the following $F_N$ expansion
\begin{eqnarray}
\langle\hat{N}_{\v{k}}\rangle_{N}=\frac{N}{F_N}\Big{[}|\varphi_{\v k}^2|F_{N-1}-(N-1)|\varphi_{\v k}^4|F_{N-2}
\nonumber\\
+(N-1)(N-2)|\varphi_{\v k}^6|F_{N-3}\cdots\Big{]}.
\end{eqnarray}
Eq.(14) shows that the sum over $\v k$ of the above bracket reduces to $F_N$; so we do have
\begin{equation}
\sum_{\v k} \langle\hat{N}_{\v{k}}\rangle_{N}=N,
\end{equation}
as expected. However, through this $F_N$ expansion, it is not easy to show that $ \langle\hat{N}_{\v{k}}\rangle_{N}$ indeed reduces to $ \langle\hat{N}_{\v{k}}\rangle$ for $N$ equal to the peak value $N^\ast$.
A better way to make such a link is to follow Leggett \cite{Leggett} and to introduce the operator $C^\dag$ defined in Eq.(\ref{C}) with the $\v k$ state removed from the sum, namely,
\begin{equation}
C_{\v k}^\dag=\sum_{\v p\neq\v k}\phi_{\v p}\beta_{\v p}^\dag.
\end{equation}
We then construct its normalized form $B_{\v k}^\dag=C_{\v k}^\dag/\sqrt{S_{\v k}}$ with $S_{\v k}=\sum_{\v p\neq\v k}|\phi_{\v p}^2|$ in order to have $\langle 0|B_{\v k}B_{\v k}^\dag|0\rangle=1$. The associated $N$-pair normalized state then reads
\begin{equation}
|\psi_{N,\v k}\rangle=\frac{B_{\v k}^{\dag N}|0\rangle}{\sqrt{N!F_{N,\v k}}}=
\frac{C_{\v k}^{\dag N}|0\rangle}{\sqrt{N!F_{N,\v k}S_{\v k}^N}}\ ,
\end{equation}
where, as for $F_N$, the normalization factor $F_{N,\v k}$ is defined as $\langle 0|B_{\v k}^NB_{\v k}^{\dag N}|0\rangle=N!F_{N,\v k}$ in order to have $\langle\psi_{N,\v k}|\psi_{N,\v k}\rangle=1$.
By writing $C^{\dag N}$ as $(C_{\v k}^\dag+\phi_{\v k}\beta_{\v k}^\dag)^N$ and by noting that $(\beta_{\v k}^\dag)^2=0$ due to the Pauli exclusion principle, we easily find that the normalized $N$-pair state $|\psi_N\rangle$ defined in Eq.(\ref{psiN}) also reads
\begin{equation}
|\psi_N\rangle=\frac{|\psi_{N,\v k}\rangle+x_{N,\v k}\phi_{\v k}\beta_{\v k}^\dag|\psi_{N-1,\v k}\rangle}{\sqrt{1+x_{N,\v k}^2
|\phi_{\v k}^2|}}\ ,
\end{equation}
with $x_{N,\v k}$ defined as $x_{N}$ in Eq. (\ref{xn}), namely
\begin{equation}
x_{N,\v k}^2=\frac{F_{N-1,\v k}}{F_{N,\v k}}\,\frac{N}{S_{\v k}}\ .
\end{equation}
Using this new expression of $|\psi_N\rangle$, it becomes easy to write the population of the $\v k$ electron state in the $N$ pair condensate in a compact form as
\begin{equation}
\langle\hat{N}_{\v{k}}\rangle_{N}=\frac{x_{N,\v k}^2|\phi_{\v k}^2|}{1+x_{N,\v k}^2|\phi_{\v k}^2|}\ .
\end{equation}
When compared to $\langle\hat{N}_{\v{k}}\rangle$ calculated in the $|\phi_{BCS}\rangle$ state, as given in Eq.(\ref{Nk}), we see that $\langle\hat{N}_{\v{k}}\rangle_{N}$ calculated in an $N$-pair condensate reduces to $\langle\hat{N}_{\v{k}}\rangle$ for $x_{N,\v k}^2=1$. We then note that, for large samples, the number of $\v k$ states is very large; so, the $S$ sum does not change very much when one state is removed, i.e., $S\simeq S_{\v k}$. In the same way, $F_N\simeq F_{N,\v k}$. As a result, the condition $x_{N,\v k}^2=1$ also reads $1\simeq x_N^2=NF_{N-1}/SF_N$. This is fulfilled for $N=N^\ast$ defined in Eq.(\ref{xn}). This shows that $\langle\hat{N}_{\v{k}}\rangle_{N^\ast}=\langle\hat{N}_{\v{k}}\rangle$: as expected, the $\v k$-electron state population calculated in an $N$-pair state is equal to the one calculated in the grand canonical state $|\phi_{BCS}\rangle$, provided that $N$ corresponds to the maximum value $N^\ast$ of the $N$-pair distribution in the BCS state.
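The compact form of $\langle\hat N_{\v k}\rangle_N$ can be verified exactly for a small system: expanding $B_0^{\dag N}|0\rangle$ over sets of $N$ distinct pair states gives $\langle\hat N_{\v k}\rangle_N=|\phi_{\v k}^2|\,e_{N-1}(\{|\phi_{\v p}^2|\}_{\v p\neq\v k})/e_N(\{|\phi_{\v p}^2|\})$, with $e_n$ the elementary symmetric polynomials. The sketch below, with arbitrary illustrative weights, checks that the $x_{N,\v k}^2$ formula and the sum rule $\sum_{\v k}\langle\hat N_{\v k}\rangle_N=N$ hold to machine precision.

```python
from itertools import combinations
from math import prod

phi2 = [0.5, 1.2, 0.8, 2.0, 0.3, 1.5]    # |phi_p|^2 for 6 pair states (arbitrary)
N = 3

def e_sym(vals, n):
    """n-th elementary symmetric polynomial of the weights."""
    return sum(prod(c) for c in combinations(vals, n))

def pop_exact(k):
    """<N_k>_N from direct expansion over sets of N distinct pair states."""
    others = [x for i, x in enumerate(phi2) if i != k]
    return phi2[k] * e_sym(others, N - 1) / e_sym(phi2, N)

def pop_compact(k):
    """Compact formula x^2|phi_k|^2/(1 + x^2|phi_k|^2), with
    x^2 = (F_{N-1,k}/F_{N,k}) N/S_k rewritten through symmetric polynomials."""
    others = [x for i, x in enumerate(phi2) if i != k]
    x2phi = phi2[k] * e_sym(others, N - 1) / e_sym(others, N)
    return x2phi / (1.0 + x2phi)
```

The agreement is exact, not approximate, because removing a single $\v k$ state from the sums is treated exactly in the decomposition of $|\psi_N\rangle$ above.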
To get a better understanding of the link between $|\phi_{BCS}\rangle$ and its $N$-pair state projection $|\psi_N\rangle$, we can also calculate the mean value of the two-pair operator $\beta_{\v k}^{}\beta_{\v k'}^\dag$ in this $|\psi_N\rangle$ state. As for $a_{\v k\uparrow}^\dag a_{\v k\uparrow}$, a brute force calculation of this mean value would give it as an $F_N$ expansion, not easy to compare with Eqs.(\ref{betaBCS}, \ref{fn}). We can instead follow Leggett \cite{Leggett} and introduce the operator $C^\dag$ in which the $\v k$ and $\v k'$ states are missing, namely,
\begin{equation}
C_{\v k\v k'}^\dag=\sum_{\v p\neq (\v k,\v k')}\phi_{\v p}\beta_{\v p}^\dag\ .
\end{equation}
We again construct its normalized form $B_{\v k\v k'}^\dag=C_{\v k\v k'}^\dag/\sqrt{S_{\v k\v k'}}$ where $S_{\v k\v k'}=\sum_{\v p\neq (\v k,\v k')}|\phi_{\v p}^2|$ and the associated $N$-pair normalized state
\begin{equation}
|\psi_{N,\v k\v k'}\rangle=\frac{B_{\v k\v k'}^{\dag N}|0\rangle}{\sqrt{N!F_{N,\v k\v k'}}}=
\frac{C_{\v k\v k'}^{\dag N}|0\rangle}{\sqrt{N!F_{N,\v k\v k'}S_{\v k\v k'}^N}},
\end{equation}
with $F_{N,\v k\v k'}$ such that $\langle 0|B_{\v k\v k'}^NB_{\v k\v k'}^{\dag N}|0\rangle=N!F_{N,\v k\v k'}$.
From $C^\dag=C_{\v k\v k'}^\dag+\phi_{\v k}\beta_{\v k}^\dag+\phi_{\v k'}\beta_{\v k'}^\dag$, it is then easy to show that the normalized $N$-pair state $|\psi_N\rangle$ defined in Eq.(\ref{psiN}) also reads
\begin{eqnarray}
|\psi_N\rangle=\frac{1}{\mathcal{N}}\Big{[}|\psi_{N,\v k\v k'}\rangle\hspace{4cm}\nonumber\\
+x_{N,\v k\v k'}(\phi_{\v k}\beta_{\v k}^\dag+
\phi_{\v k'}\beta_{\v k'}^\dag)|\psi_{N-1,\v k\v k'}\rangle\hspace{1cm}\nonumber\\
+x_{N,\v k\v k'}x_{N-1,\v k\v k'}\phi_{\v k}\phi_{\v k'}\beta_{\v k}^\dag\beta_{\v k'}^\dag|\psi_{N-2,\v k\v k'}\rangle\Big{]},
\label{psiNexpand}
\end{eqnarray}
where $x_{N,\v k\v k'}$ has a similar form as $x_{N,\v k}$ with now two states excluded instead of one, namely,
\begin{equation}
x_{N,\v k\v k'}^2=\frac{F_{N-1,\v k\v k'}}{F_{N,\v k\v k'}}\,\frac{N}{S_{\v k\v k'}}\ .
\end{equation}
In order to still have $\langle\psi_N|\psi_N\rangle=1$, the normalization factor $\mathcal{N}$ in Eq.(\ref{psiNexpand}) must be equal to
\begin{eqnarray}
\mathcal{N}=\Big{[}1+x_{N,\v k\v k'}^2\left(|\phi_{\v k}^2|+|\phi_{\v k'}^2|\right)\hspace{2cm}\nonumber\\
+x_{N,\v k\v k'}^2x_{N-1,\v k\v k'}^2|\phi_{\v k}^2||\phi_{\v k'}^2|\Big{]}^{1/2}\ .
\end{eqnarray}
If we now note that, for $N$ large, $x_{N,\v k\v k'}$ and $x_{N-1,\v k\v k'}$ are very close, this normalization factor reduces to a product
$\Big{[}1+x_{N,\v k\v k'}^2|\phi_{\v k}^2|\Big{]}^{1/2}\Big{[}1+x_{N,\v k\v k'}^2|\phi_{\v k'}^2|\Big{]}^{1/2}$. It is then easy to show, using Eq.(\ref{psiNexpand}) for $|\psi_N\rangle$, that
\begin{eqnarray}
\langle\psi_N|\beta_{\v k}\beta_{\v k'}^\dag|\psi_N\rangle=\hspace{3cm}\nonumber\\
\frac{x_{N,\v k\v k'}\phi_{\v k}}{1+x_{N,\v k\v k'}^2|\phi_{\v k}^2|}\
\frac{x_{N,\v k\v k'}\phi_{\v k'}}{1+x_{N,\v k\v k'}^2|\phi_{\v k'}^2|}\ .
\end{eqnarray}
When compared to the mean value of $\beta_{\v k}^{}\beta_{\v k'}^\dag$ calculated in the $|\phi_{BCS}\rangle$ state, as given in Eqs.(\ref{betaBCS}, \ref{fn}), we find that these two results are identical provided that $1=x_{N,\v k\v k'}^2$. We then note that $x_{N,\v k\v k'}^2$ and $ x_N^2$ again are very close, because the missing $(\v k , \v k')$ states do not much affect $F_N$ and the $S$ sum in the large sample limit, when the number of $\v k$'s is very large; so $F_{N,\v k\v k'}\simeq F_N$ and $S_{\v k\v k'}\simeq S$. We thus end with the same $\beta_{\v k}^{}\beta_{\v k'}^\dag$ mean values in the $N$-pair state $|\psi_N\rangle$ and in the $|\phi_{BCS}\rangle$ state, provided that $N$ is equal to the maximum $N^\ast$ of the $\lambda_N$ distribution of $N$-pair states in the BCS ansatz, which also is the mean value of the particle number $\langle\hat{N}\rangle$ in the $|\phi_{BCS}\rangle$ state.
\section{Cooper pair wave function}
We wish to end this work by reconsidering what should be called the ``pair wave function'', since $F_{\v{k}}$ defined in Eq.~(\ref{fn}) is often given this name.
Cooper pairs like excitons are composite bosons made of two fermions. Their creation operators thus read as a sum of free fermion pair creation operators $\beta_{\v k}^\dag$. To properly identify what should be called ``Cooper pair wave function'', it is enlightening to compare their creation operator with the exciton creation operator.
We commonly distinguish two types of excitons: Wannier excitons\cite{Wannier} and Frenkel excitons\cite{Frenkel,frenkel}, the latter having closer similarity with Cooper pairs. Indeed, Wannier excitons are constructed on free electrons and free holes, so that they are ``double index'' composite bosons. Their creation operators read
\begin{equation}\label{Bi}
B_i^\dag=\sum_{\v k_e,\v k_h}a_{\v k_e}^\dag b_{\v k_h}^\dag \langle\v k_h,\v k_e|i\rangle\ ,
\end{equation}
with $i=(\v Q_i,\nu_i)$, where $\v Q_i$ is the exciton center-of-mass momentum and $\nu_i$ its relative motion index. The prefactor $\langle\v k_h,\v k_e|i\rangle$ of the $i$ exciton expansion on free electron-hole pairs, is the $i$ exciton wave function in momentum space.
By contrast, Frenkel excitons are made of atomic excitations for atoms on a regular lattice. They thus are ``single index'' composite bosons, their creation operators reading as
\begin{equation}
B_{\v Q}^\dag=\sum_n\frac{e^{i\v Q.\v R_n}}{\sqrt{N_s}} a_n^\dag b_n^\dag.
\end{equation}
$\v R_n$ is the position of the excited atom and $N_s$ the number of atomic sites.
Cooper pairs also are ``single index'' composite bosons since an up spin electron $\v k$ is coupled to one down spin electron $(-\v k)$ only by the reduced BCS potential. The (normalized) creation operator of one Cooper pair in the BCS condensate reads, as quoted above,
\begin{equation}\label{B0}
B_0^\dag=\sum_{\v k}\varphi_{\v k}a_{\v k}^\dag b_{-\v k}^\dag,
\end{equation}
where $\varphi_{\v k}=\phi_{\v k}/\sqrt{S}$ with $\phi_{\v k}=v_{\v k}/u_{\v k}$ and $S=\sum_{\v k}|\phi_{\v k}^2|$.
In view of Eqs.(\ref{Bi}-\ref{B0}), $\varphi_{\v k}$ must be taken as the Cooper pair wave function, without any ambiguity, even though other quantities, like $F_{\v k}=u_{\v k}^\ast v_{\v k}$, are often quoted as the ``pair wave function'' in the literature.
Although Cooper pairs and Frenkel excitons both are ``single index'' composite bosons, they however have a few major differences. One of them comes from the fact that the Frenkel exciton wave function is just a phase, so that the distribution of the excited site $n$ in a Frenkel exciton is flat. By contrast, the Cooper pair wave function is given by $\varphi_{\v k}=\phi_{\v k}/\sqrt{S}$ with
\begin{equation}\label{phi}
\phi_{\v k}=\frac{v_{\v k}}{u_{\v k}}=\sqrt{\frac{E_{\v k}-\xi_{\v k}}{E_{\v k}+\xi_{\v k}}}=\frac{\Delta}{E_{\v k}+\xi_{\v k}}
\end{equation}
with $\xi_{\v k}=\epsilon_{\v k}-\mu$ and $E_{\v k}^2=\xi_{\v k}^2+\Delta^2$, the electron $\v k$ energy being $\epsilon_{\v k}=\v k^2/2m$, while $\mu$ is the chemical potential of the $|\phi_{BCS}\rangle$ state in the grand canonical ensemble. In the usual BCS configuration, with a reduced BCS potential extending symmetrically on a phonon energy scale on both sides of the normal electron Fermi sea, $\mu$ lies in the middle of the layer over which the potential acts, this layer extension $\Omega$ being twice the phonon energy (see Fig. \ref{potfig}). The gap $\Delta$ then reads, in the small potential limit, $\Delta\simeq \Omega e^{-1/\rho_0V}$, where $\rho_0$ is the density of states, taken as constant over the layer where the potential acts. As a result, $\phi_{\v k}$ has three different scales: $\phi_{\v k}\simeq 1$ for $\epsilon_{\v k}$ very close to $\mu$ on the $\Delta$ scale, $\phi_{\v k}\simeq 2e^{1/\rho_0V}(\mu-\epsilon_{\v k})/\Omega$ for $\epsilon_{\v k}$ far below $\mu$ on the $\Delta$ scale, and $1/\phi_{\v k}\simeq 2e^{1/\rho_0V}(\epsilon_{\v k}-\mu)/\Omega$ for $\epsilon_{\v k}$ far above $\mu$ on this scale.
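These three scales are easy to verify numerically. The following minimal sketch (with an assumed coupling $\rho_0V=0.2$ and $\Omega$ taken as the energy unit, both illustrative values) evaluates $\phi_{\v k}=\Delta/(E_{\v k}+\xi_{\v k})$ in each regime:

```python
# Numerical sketch of the three scales of phi_k = Delta / (E_k + xi_k).
# Illustrative (assumed) parameters: rho0*V = 0.2, Omega = 1 sets the energy unit.
import numpy as np

rho0V = 0.2                                 # dimensionless coupling rho_0 V (assumed)
Omega = 1.0                                 # potential layer width (energy unit)
Delta = Omega * np.exp(-1.0 / rho0V)        # BCS gap in the small-coupling limit

def phi(eps, mu):
    """phi_k = v_k/u_k = Delta / (E_k + xi_k) with xi_k = eps - mu."""
    xi = eps - mu
    E = np.sqrt(xi**2 + Delta**2)
    return Delta / (E + xi)

mu = 0.5 * Omega                            # chemical potential mid-layer

# Regime 1: eps close to mu on the Delta scale -> phi ~ 1
phi_mid = phi(mu, mu)
# Regime 2: eps far below mu -> phi ~ 2 e^{1/rho0V} (mu - eps)/Omega (large)
eps_low = mu - 0.3 * Omega
phi_low = phi(eps_low, mu)
approx_low = 2.0 * np.exp(1.0 / rho0V) * (mu - eps_low) / Omega
# Regime 3: eps far above mu -> 1/phi ~ 2 e^{1/rho0V} (eps - mu)/Omega
eps_high = mu + 0.3 * Omega
phi_high = phi(eps_high, mu)
approx_high = Omega / (2.0 * np.exp(1.0 / rho0V) * (eps_high - mu))
```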
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.3\textwidth]{potential}
\caption{The reduced BCS potential extends over an energy $\Omega$ of the order of twice the phonon energy, above a non-interacting electron Fermi sea with Fermi energy $\epsilon_{F_0}$. The dashed line indicates the chemical potential $\mu$. \label{potfig}}
\end{center}
\end{figure}
If we now turn to the normalized distribution, i.e., the Cooper pair wave function in momentum space, it reads $\varphi_{\v k}=\phi_{\v k}/\sqrt{S}$ where, using Eq. (\ref{phi})
\begin{equation}
S=\sum_{\v k}|\phi_{\v k}^2|=N_{\Omega}(1+\frac{\Omega^2}{6\Delta^2})\simeq \frac{N_{\Omega}}{6}\,e^{2/\rho_0V},
\end{equation}
Here $N_{\Omega}=\rho_0\Omega$ is the total number of pair states in the potential layer, for a constant density of states $\rho_0$. Since $e^{-1/\rho_0V}$ is very small in the small coupling limit, the Cooper pair wave function $\varphi_{\v{k}}$ is sizeable between $\epsilon_{F_0}$ and $\mu-\Delta/2$ only, where $\epsilon_{F_0}$ is the Fermi energy of the non-interacting electrons (see Fig. \ref{potfig}). $\varphi_{\v{k}}$ then scales, within irrelevant numerical prefactors, as
\begin{equation}
\varphi_{\v{k}}^{(1)}\simeq \frac{1}{\sqrt{N_{\Omega}}}\frac{\mu-\epsilon_{\v k}}{\Omega}.
\end{equation}
It has a small tail (see Fig. \ref{wavfig}) of the order of
\begin{equation}
\varphi_{\v{k}}^{(2)}\simeq \frac{e^{-1/\rho_0V}}{\sqrt{N_{\Omega}}}
\end{equation}
for electron energies in the range $\pm \Delta$ around $\mu$. For higher energy, i.e., for $\epsilon_{\v k}$ between $\mu+\Delta$ and $\epsilon_{F_0}+\Omega$, the wave function is even smaller, being of the order of
\begin{equation}
\varphi_{\v{k}}^{(3)}\simeq \frac{e^{-2/\rho_0V}}{\sqrt{N_{\Omega}}}\frac{\Omega}{\epsilon_{\v k}-\mu}.
\end{equation}
This shows that the sizeable part of the Cooper pair wave function $\varphi_{\v k}$, which is the normalized form of $\phi_{\v k}=v_{\v k}/u_{\v k}$, is a linearly decreasing function of $\epsilon_{\v k}$ between the non-interacting electron Fermi sea $\epsilon_{F_0}$ and the normal electron Fermi sea $\epsilon_{F}=\epsilon_{F_0}+\Omega/2$, in the case of a potential extending symmetrically on both sides of this Fermi sea. The number of pair states with sizeable weight in the Cooper pair distribution thus is of the order of $N_\Omega/2$. This understanding has to be contrasted with what is often called the ``pair wave function'', namely $F_{\v k}=v_{\v k}^*u_{\v k}$, which is highly peaked at $\epsilon_{F}$ (see insert of Fig. \ref{wavfig}). $F_{\v k}$ is physically related to the excitation of electron-hole pairs in the BCS condensate, while $\varphi_{\v k}$ is associated with the ground state of up and down spin electrons added to the non-interacting Fermi sea $\epsilon_{F_0}$ and paired by the reduced BCS potential.
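The normalization sum $S$ can also be checked numerically, replacing $\sum_{\v k}$ by $\rho_0\int d\epsilon$ over the layer; all parameter values below are illustrative:

```python
# Numerical check of S = sum_k |phi_k|^2 = N_Omega (1 + Omega^2/(6 Delta^2)),
# replacing the discrete sum by rho0 * integral over a layer of width Omega
# centered on mu. Parameter values are illustrative (assumed).
import numpy as np

rho0V = 0.2
Omega = 1.0
Delta = Omega * np.exp(-1.0 / rho0V)
mu = 0.0                                  # measure energies from the layer center
rho0 = 1.0e6                              # states per unit energy (assumed large)

eps = np.linspace(mu - Omega / 2, mu + Omega / 2, 200001)
xi = eps - mu
E = np.sqrt(xi**2 + Delta**2)
phi2 = (Delta / (E + xi))**2

deps = eps[1] - eps[0]
integral = deps * (phi2[1:-1].sum() + 0.5 * (phi2[0] + phi2[-1]))  # trapezoid rule
S_numeric = rho0 * integral
N_Omega = rho0 * Omega
S_formula = N_Omega * (1.0 + Omega**2 / (6.0 * Delta**2))
```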
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{wave}
\caption{The Cooper pair wave function $\varphi_{\v{k}}$ essentially has a linear decrease between $\epsilon_{F_0}$ and $\epsilon_{F_0}+\frac{\Omega}{2}\simeq\mu$ and a very small tail between $\mu$ and $\epsilon_{F_0}+\Omega$. \\ Insert: the ``pair wave function'' defined as $F_{\v{k}}=u_{\v{k}}v_{\v{k}}^*$ is concentrated on a $\Delta$ scale around $\epsilon_{F_0}+\frac{\Omega}{2}$.\label{wavfig}}
\end{center}
\end{figure}
For completeness, in addition to Cooper pairs with creation operator $B_0^\dag$ making the BCS condensate, we must mention the ``single Cooper pair'' creation operator derived by Cooper when studying a single pair of up and down spin electrons added to the $\epsilon_{F_0}$ Fermi sea. Its (unnormalized) creation operator reads
\begin{equation}
B_{N=1}^\dag=\sum_{\v k}\frac{1}{2\epsilon_{\v k}-E_1}a_{\v k}^\dag b_{-\v k}^\dag.
\end{equation}
The sum is taken over $\epsilon_{F_0}<\epsilon_{\v{k}}<\epsilon_{F_0}+\Omega$. The single pair binding energy is $E_1=2\epsilon_{F_0}-\epsilon_{c}$, where $\epsilon_{c}\simeq2\Omega e^{-2/\rho_0V}$ for small $V$. The above equation shows that the wave function of the $B_{N=1}^\dag|0\rangle$ state is concentrated on a $\epsilon_{c}$ scale above $\epsilon_{F_0}$, so that the number of pair states with sizeable weight in $B_{N=1}^\dag$ is of the order of $N_c=\rho_0\epsilon_{c}$, far smaller than the number of pairs in the $B_0^\dag$ operator making the BCS condensate.
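As a check on this standard Cooper result: for a constant density of states, the single-pair equation $1=(\rho_0V/2)\ln[(2\Omega+\epsilon_c)/\epsilon_c]$ has the exact solution $\epsilon_c=2\Omega/(e^{2/\rho_0V}-1)$, which reduces to $2\Omega e^{-2/\rho_0V}$ at small coupling. A minimal sketch, with an assumed coupling:

```python
# Sketch: single-pair (Cooper) binding scale for a constant density of states
# rho0 over a layer of width Omega. The textbook equation
#   1 = (rho0 V / 2) ln((2 Omega + eps_c)/eps_c)
# has exact solution eps_c = 2 Omega / (e^{2/rho0V} - 1) ~ 2 Omega e^{-2/rho0V}.
# Parameter values are illustrative (assumed).
import math

rho0V = 0.2                      # dimensionless coupling rho_0 V (assumed)
Omega = 1.0                      # layer width (energy unit)

eps_c_exact = 2.0 * Omega / (math.exp(2.0 / rho0V) - 1.0)
eps_c_approx = 2.0 * Omega * math.exp(-2.0 / rho0V)

# verify that eps_c_exact indeed solves the single-pair equation
lhs = 0.5 * rho0V * math.log((2.0 * Omega + eps_c_exact) / eps_c_exact)
```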
\section{Conclusion}
In this work, we address the BCS condensate of Cooper pairs in the canonical ensemble. To easily handle the Pauli exclusion principle between a given number of paired electrons, we first develop a formalism appropriate to Cooper pairs which is based on a set of commutators. We then use it to, in particular, show that the Pauli exclusion principle between Cooper pairs is fully responsible for the correct value of the probability distribution peak for $N$-pair states in the BCS wave function ansatz. The standard grand canonical ensemble approach tends to mask the key role played by Pauli blocking in this problem. We end by reconsidering what should be called ``pair wave function'' through a comparison with other composite bosons like Wannier and Frenkel excitons.
\textbf{Acknowledgments}
We wish to thank Tony Leggett for very many discussions on the canonical ensemble approach to Cooper pairs. M. C. wants to thank the Institute of Condensed Matter Physics of the University of Illinois at Urbana-Champaign for various invitations during which most of this work has been performed. G. Z. is supported partly by USA NSF under grant No. DMR 09-06921.
\section{introduction}
Measurements of the arrival times of muon neutrinos from the CERN CNGS beam at the OPERA detector
730~km away suggest that they travel superluminally with
$(v-c)/c = (2.37 \pm 0.32 (\rm{stat.}) \pm 0.29 (\rm{sys.})) \times 10^{-5}$~\cite{:2011zb}. Many interpretations of this
stunning result have been proposed~\cite{opera-papers}.
Inspired by the OPERA anomaly, we present a specific realization of a class of models that may be viewed either as
superluminal travel of a gauge-singlet sterile neutrino
via extra-dimensional shortcuts~\cite{Pas:2005rb} or alternatively as Lorentz violation for sterile neutrinos as viewed from our
four-dimensional spacetime~\cite{Coleman:1997xq}. We emphasize at the outset that it is {\it not} our intention to explain
the OPERA data, but to simply provide a concrete model of superluminal neutrinos.
As described at length in Ref.~\cite{Pas:2005rb},
a superluminal sterile neutrino is well-motivated within the context of
brane-world phenomenology.
The active neutrinos, carrying electroweak gauge charge like all other Standard Model (SM)
fields, are described as open string excitations with their string endpoints confined to
our $3+1$ dimensional brane.
On the other hand, the sterile neutrino, carrying no gauge charge, is
characterised by a closed string, free to roam the extra-dimensional bulk as well as the brane
(in the fashion of the ``gauge-singlet'' graviton).
Thus, its geodesic between two points on the brane will include travel in the bulk.
The net result in general will be a shorter transit distance; such shortcuts were proposed a decade ago for
gravitons in Ref.~\cite{Chung:1999zs}.
From the point of view of our brane, the sterile neutrino will appear to travel superluminally.
An analogy would be a comparison of the light transit time and distance when confined
within a curved optical fiber,
and the light transit time and distance when traveling the straight path between the fiber endpoints.
As proposed in Ref.~\cite{Pas:2005rb},
the shorter distance through the bulk could be a result of brane fluctuations within the bulk.
These fluctuations could be thermal, gravitational, or quantum mechanical in origin.
Also, the difference in the limiting velocities between active and sterile neutrinos $\delta v$ is related to
the geometry of the brane fluctuation. The relation $\delta v=(\frac{Ak}{2})^2$ was found,
where $A$ is the (classical) amplitude of the brane fluctuation in the bulk direction,
and $k$ is the wave number of the brane fluctuation along the brane direction.
Thus, $\delta v$ is essentially the square of the dimensionless aspect ratio of the brane fluctuation.
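To make the scale concrete, here is a minimal numerical sketch of this relation; the amplitude-to-wavelength ratio is a purely hypothetical value, chosen only so that $\delta v$ lands near the $10^{-5}$ scale quoted above for OPERA, and is not fitted to any data:

```python
# Illustrative scale estimate for delta_v = (A k / 2)^2.
# A_over_lambda is a hypothetical aspect ratio A/lambda, not a fitted value.
import math

A_over_lambda = 1.55e-3                    # brane fluctuation amplitude / wavelength (assumed)
Ak = 2.0 * math.pi * A_over_lambda         # A k, with k = 2*pi/lambda
delta_v = (Ak / 2.0)**2                    # ends up near the 1e-5 scale
```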
This article is organized as follows.
We derive the oscillation probabilities when only
one active-sterile mixing angle is nonzero. In doing so, we
extend the results of Ref.~\cite{Pas:2005rb},
from two to three active neutrinos, plus one sterile neutrino.
We then briefly mention aspects of the OPERA data in the context of our model.
Finally, we conclude.
\section{formalism}
The quantum mechanics of the model is simple.
The flavor-oscillation amplitude for a propagating neutrino is
\beq{oscamp1}
A(\nu_{\alpha}\rightarrow\nu_{\beta})=\langle\nu_{\beta} | \,e^{-iHt} |\nu_{\alpha}\rangle\,.
\end{equation}
A component of $Ht$ that is proportional to the identity
cannot affect flavor change, and can be subtracted. We write the remainder as
$\delta (Ht)= (\delta H)t+H(\delta t)$ under the assumption that it is small.
We are left with
\beq{oscamp2}
A(\nu_{\alpha}\rightarrow\nu_{\beta})=\langle\nu_{\beta} | \,e^{-i[(\delta H)t+H(\delta t)]} |\nu_{\alpha}\rangle\,.
\end{equation}
As in standard oscillations, $\delta H$ is diagonal in the mass-basis,
and at lowest order is equal to
\beq{deltaH}
\delta H=\frac{1}{2E}\,{\rm diag}(m^2_1,m^2_2,\cdots)\,.
\end{equation}
Upon inserting complete sets of mass eigenstates before and after
$e^{-i(\delta H)t}$ in Eq.~\rf{oscamp2}, the first term there becomes
$\sum_j\,U_{\alpha j}^*\,U_{\beta j}\,e^{-i\frac{m^2_j t}{2E}}$;
the usual definition of the bases-mixing matrix,
\beq{Umatrix}
U_{\alpha j}=\langle\nu_{\alpha}\,|\,\nu_j\rangle, \quad
{\rm or\ equivalently},\ |\nu_{\alpha}\rangle=U_{\alpha j}^*\,|\nu_j\rangle\,,
\end{equation}
has been employed.
A nonvanishing value for the second term in Eq.~\rf{oscamp2} is
unconventional, and occurs if the propagation times for the neutrino states
are not universal. Such a theory assigns different ``light-cones'' to
different states, thereby breaking Lorentz invariance.
Conversely, a large class of models with Lorentz Invariance Violation (LIV)
has been shown
to be phenomenologically equivalent to state-dependent limiting
velocities~\cite{Coleman:1997xq}.
We note that with differing velocities, one has
$\delta t=\delta(L/v)=-L\,\delta v/v^2$, which is $-L\,\delta v$ to lowest order (in natural units).
In the picture where gauge-singlet states are closed strings free to roam the bulk,
the limiting velocities are assigned to the flavor eigenstates
rather than to the mass eigenstates.\footnote
{An alternative model arises if one assigns the limiting velocities to the mass eigenstates.
In such a model, the mass-squared matrix and the $\delta v$ matrix are diagonal in the same basis,
and so there is no brane-bulk resonance arising from diagonalization
of the Hamiltonian of Eq.~\rf{Heff1} below.
Yet another possibility is to assign limiting velocities to velocity eigenstates.
}
Such a choice affects the equivalence between a sterile flavor traveling the shortened geodesics
available in the bulk, and a Lorentz-violating, superluminal limiting velocity for the sterile state as viewed from the brane.
The second term in \rf{oscamp2} as written is already in a diagonal basis,
and
\beq{deltat}
\delta t={\rm diag}(\delta t_{\alpha},\delta t_{\beta},\cdots)=-L\,{\rm diag}(\delta v_{\alpha},\delta v_{\beta},\cdots)\,.
\end{equation}
It is conventional to put the physics
into a Hamiltonian framework.
The effective neutrino Hamiltonian in the flavor basis is
\beq{Heff1}
H_{(F)} =\frac{1}{2E}\,\, U\,
\left(
\begin{array}{cccc}
m^2_1 & 0 & 0 & 0 \\
0 & m^2_2 & 0 & 0 \\
0 & 0 & m^2_3 & 0 \\
0 & 0 & 0 & m^2_4 \\
\end{array}
\right)
\,U^\dag
-E\,
\left(
\begin{array}{cccc}
\delta v_1 & 0 & 0 & 0 \\
0 & \delta v_2 & 0 & 0 \\
0 & 0 & \delta v_3 & 0 \\
0 & 0 & 0 & \delta v_4 \\
\end{array}
\right)\,.
\end{equation}
In general, the $4\times4$ mixing matrix $U$ consists of six angles (the number of planes
in four dimensions) and four phases. To simplify the analysis,
we neglect the three new phases, and for now, set to zero the
rotation angles in the $4-2$ and $4-1$ planes.
By keeping the $\theta_{34}$ angle in $R_{34}$ nonzero,
we retain the basic features of the model.
We have
\beq{U4x4}
U=
\left(
\begin{array}{cc}
V & 0 \\
0 & 1 \\
\end{array}
\right)
\times
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & R_{34} \\
\end{array}
\right)\,,
\end{equation}
in the absence of the new term proportional to $\delta v$'s.
Here, $V$ is the usual PMNS mixing-matrix among the
three active-flavor neutrinos,
and
\beq{R34}
R_{34}=
\left(
\begin{array}{cc}
\cos\theta_{34} & \sin\theta_{34} \\
-\sin\theta_{34} & \cos\theta_{34} \\
\end{array}
\right)\,.
\end{equation}
We next write $\Delta\equiv m^2_4 -m^2_3$, and neglect the
light masses $m^2_j, j=1,2,3$ relative to $m^2_4$.
We assume that the active neutrino flavors have the usual limiting velocity $c$, whereas
the sterile flavor has a limiting velocity
$\delta v\equiv \delta v_4>0$.
This seems to us to be the most economic and intuitive
application of possibly-differing limiting-velocities.
The sterile state is qualitatively different from active states in that it has no
gauge interactions, and therefore is unconstrained by gauge symmetries.
We provide more discussion of a qualitatively different sterile neutrino below.
With these assumptions, the effective Hamiltonian in~\rf{Heff1} may be written as
\beq{Heff2}
H_{(F)} =
\left(
\begin{array}{cc}
V & 0 \\
0 & 1 \\
\end{array}
\right)
\left[
\frac{1}{2E}\,
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & R_{34} \\
\end{array}
\right)
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \Delta \\
\end{array}
\right)
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & R^T_{34} \\
\end{array}
\right)
-E\delta v
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}
\right)
\right]
\left(
\begin{array}{cc}
V^\dag & 0 \\
0 & 1 \\
\end{array}
\right)\,.
\end{equation}
The qualitative features of $H_{(F)}$ in Eq.~\rf{Heff2} provide for an
interesting discussion.
At sufficiently low energies, the first term on the right-hand-side
of $H_{(F)}$ dominates,
and oscillations proceed in the standard way.
The second term on the right-hand-side, diagonal
in the flavor basis, has an analogy with the famous MSW matter-term.
At sufficiently high energies,
the eigenstates of the Hamiltonian are nearly flavor states, and
oscillations are very suppressed.
At some intermediate value of energy, the two terms are comparable,
and resonance enhancement of the mixing angles may occur (if the mixing
angle can reach the maximal-mixing value of $45^\circ$, as discussed below).
The matrix in brackets in Eq.~\rf{Heff2} is equal to
\beq{matrix}
\frac{\Delta}{2E}\,
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & s^2_{34} & s_{34}\,c_{34} \\
0 & 0 & s_{34}\,c_{34} & \left(c^2_{34}-\frac{2E^2\delta v}{\Delta}\right)\\
\end{array}
\right)\,,
\end{equation}
and is diagonalized by the rotation $R_{34}$ through an angle
${\tilde \theta}_{34}$ given by
\beq{thtilde1}
\tan 2{\tilde \theta}=\frac{\sin2\theta_{34}}{\cos2\theta_{34}-2E^2\delta v/\Delta}\,,
\end{equation}
or equivalently, by
\beq{thtilde2}
\sin^2 2{\tilde \theta} = \frac{\sin^2 2\theta_{34}}{\sin^2 2\theta_{34}
+\left(\cos2\theta_{34}-2E^2\delta v/\Delta\right)^2}\,.
\end{equation}
Because of the second term on the right-hand-side of Eq.~\rf{Heff2},
we are led to a diagonalization matrix
of the form given in Eqs.~\rf{U4x4} and \rf{R34}, but with $\theta_{34}$ in Eq.~\rf{R34}
replaced by ${\tilde \theta}$.
Thus, the matrix which diagonalizes the full Hamiltonian $H_{(F)}$ is
\beq{Utilde}
{\tilde U}=
\left(
\begin{array}{cc}
V & 0 \\
0 & 1 \\
\end{array}
\right)
\times
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & R_{34}({\tilde \theta}) \\
\end{array}
\right)
=
\left(
\begin{array}{cccc}
V_{e1} & V_{e2} & \;V_{e3}\,\cos{\tilde \theta} & \;V_{e3}\,\sin{\tilde \theta} \\
V_{\mu 1} & V_{\mu 2} & \;V_{\mu 3}\,\cos{\tilde \theta} & \;V_{\mu 3}\,\sin{\tilde \theta} \\
V_{\tau 1} & V_{\tau 2} & \;V_{\tau 3}\,\cos{\tilde \theta} & \;V_{\tau 3}\,\sin{\tilde \theta} \\
0 & 0 & -\sin{\tilde \theta} & \cos{\tilde \theta} \\
\end{array}
\right)\,.
\end{equation}
Resonant mixing occurs when the two diagonal elements in Eq.~\rf{matrix} are equal,
{\it i.e.}, when
\beq{Eres}
E_{\rm R}=\sqrt{\frac{\Delta\cos 2\theta_{34}}{2\delta v}}\,.
\end{equation}
In terms of $E_{\rm R}$, equations~\rf{thtilde1} and \rf{thtilde2} may be written as
\beq{thtilde3}
\tan 2{\tilde \theta}=\frac{\tan2\theta_{34}}{1-\left(\frac{E}{E_{\rm R}}\right)^2}\,,
\end{equation}
and
\beq{thtilde4}
\sin^2 2{\tilde \theta} = \frac{\sin^2 2\theta_{34}}{\sin^2 2\theta_{34}
+\cos^2 2\theta_{34}\left(1-\left(\frac{E}{E_{\rm R}}\right)^2\right)^2}\,.
\end{equation}
The energy-dependent angle ${\tilde \theta}$ is obtained by taking the inverse sine of Eqs.~\rf{thtilde2} or~\rf{thtilde4},
or the inverse tangent of Eq.~\rf{thtilde1} or~\rf{thtilde3}.
Care must be taken to ensure that ${\tilde \theta}$ is chosen in the first octant for $E<E_{\rm R}$,
and in the second octant for $E>E_{\rm R}$.
The functions $\sin{\tilde \theta}$ and $\cos{\tilde \theta}$ are then readily obtained.
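The octant bookkeeping above can be handled automatically with a two-argument arctangent. The following sketch (with an assumed, illustrative vacuum angle $\theta_{34}=10^\circ$) returns ${\tilde \theta}$ in the correct octant and cross-checks Eq.~\rf{thtilde4}:

```python
# Sketch: energy-dependent angle theta_tilde with the correct octant, via
# arctan2(sin 2*th34, cos 2*th34 * (1 - (E/E_R)^2)). Below the resonance
# theta_tilde < 45 deg, at E = E_R it is exactly 45 deg, above it > 45 deg.
# The vacuum angle is an illustrative (assumed) value.
import numpy as np

theta34 = np.deg2rad(10.0)                 # assumed vacuum value of theta_34

def theta_tilde(E, E_R, theta=theta34):
    num = np.sin(2.0 * theta)
    den = np.cos(2.0 * theta) * (1.0 - (E / E_R)**2)
    return 0.5 * np.arctan2(num, den)

def sin2_2theta_tilde(E, E_R, theta=theta34):
    """Closed form of sin^2(2 theta_tilde) for cross-checking."""
    s2, c2 = np.sin(2.0 * theta)**2, np.cos(2.0 * theta)**2
    return s2 / (s2 + c2 * (1.0 - (E / E_R)**2)**2)
```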
Since $\cos 2\theta_{34}$ is positive for small $\theta_{34}$,
resonance can occur only if $\Delta$ and $\delta v$ have the same sign.
Cosmological limits on neutrino masses disallow
$\sum_{j=1}^3 m_j \ge 3\sqrt{|\Delta|} \sim 3$~eV,
so $\Delta$ must be positive.
Thus, resonance is possible only if $\delta v_4 > 0$.
One possibility is to have limiting velocities $v_4=c$, $v_i < c$ ($i<4$).
The other possibility, more natural in the brane-bulk scenario
and assumed above, is to have $v_i=c$ ($i<4$) and $v_4>c$~\cite{Dent:2007rk}.
This latter possibility is discussed more below.
There are two distinct qualitative differences between the LIV resonance
inherent in Eq.~\rf{Heff2}, and the MSW matter-resonance.
Firstly, the LIV term here grows with energy, whereas
the matter term in the MSW Hamiltonian does not.
Consequently, the LIV resonance will be narrower than an MSW resonance.
In other words, a measurement of the full width at half maximum (FWHM)
may be a signature of the LIV resonance.
Secondly, the LIV resonance does not violate CPT,
whereas the MSW resonance necessarily does; the LIV resonance will occur
identically in both neutrino and antineutrino channels,
in contrast to the MSW resonance.
The eigenvalues of $H_{(F)}$ are
\beq{eigvals}
\lambda_1=\lambda_2=0,\quad
\lambda_{4/3}\equiv\lambda_\pm=\frac{\Delta}{4E}\,\left(1-\cos2\theta_{34}\left(\frac{E}{E_{\rm R}}\right)^2
\pm\sqrt{\sin^2 2\theta_{34} +\cos^2 2\theta_{34}\,\left[1-\left(\frac{E}{E_{\rm R}}\right)^2\right]^2}\,\right)\,,
\end{equation}
and the eigenvalue differences $\delta H_{kj}\equiv \lambda_k -\lambda_j$ are
\bea{deltaHkj}
\delta H_{43} &=& \lambda_+ -\lambda_- = \frac{\Delta}{2E}
\sqrt{\sin^2 2\theta_{34} +\cos^2 2\theta_{34}\,\left[1-\left(\frac{E}{E_{\rm R}}\right)^2\right]^2}\nonumber\\
\delta H_{42}&=&\delta H_{41}=\lambda_+ \nonumber\\
\delta H_{32}&=&\delta H_{31} = \lambda_- \nonumber\\
\delta H_{21}&=&0\,.
\end{eqnarray}
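As a consistency check, the eigenvalues above can be compared with a direct numerical diagonalization of the $2\times2$ sub-block of Eq.~\rf{matrix}; the parameter values below are illustrative:

```python
# Consistency check with illustrative parameters: diagonalizing the 2x2
# sub-block of the bracketed matrix directly reproduces lambda_±.
import numpy as np

Delta, E, dv, th34 = 1.0, 0.7, 0.3, np.deg2rad(15.0)   # all values assumed
E_R = np.sqrt(Delta * np.cos(2 * th34) / (2 * dv))

s, c = np.sin(th34), np.cos(th34)
H22 = (Delta / (2 * E)) * np.array([
    [s * s, s * c],
    [s * c, c * c - 2 * E**2 * dv / Delta],
])
lam_minus_num, lam_plus_num = np.linalg.eigvalsh(H22)  # ascending order

r = (E / E_R)**2
root = np.sqrt(np.sin(2 * th34)**2 + np.cos(2 * th34)**2 * (1 - r)**2)
lam_plus = (Delta / (4 * E)) * (1 - np.cos(2 * th34) * r + root)
lam_minus = (Delta / (4 * E)) * (1 - np.cos(2 * th34) * r - root)
```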
\noindent
With these eigenvalue differences
and the mixing matrix ${\tilde U}$,
we have all the ingredients to obtain all possible oscillation probabilities.\footnote
{In the three-neutrino model of Ref.~\cite{Pas:2005rb}, consisting of two active and one
sterile neutrino, the ``1'' state is absent and so
the $\nu_1$~row and column of $U$ is absent, and $V$ is effectively replaced by the
$2\times 2$ matrix $R_{23}(\theta_*)$.
Consequently in Ref.~\cite{Pas:2005rb}, $\delta H_{34}$, $\delta H_{42}$, and $\delta H_{32}$,
are all of the same magnitude in the resonance region.}
Furthermore, in the model as presented, there are just three parameters
beyond the standard three-neutrino parameters.
These are $\Delta$, $\theta_{34}$, and $E_{\rm R}$.\footnote
{More general LIV scenarios lead to dispersion relations of the form
\begin{equation}
\label{disprel}
E\sim |\vec{p}| + \frac{m^2}{2 |\vec{p}|} \pm \delta v \frac{|\vec{p}|^n}{E_0^{n-1}},
\end{equation}
where $E_0$ denotes some typical energy scale. The case under discussion corresponds to $n=1$.
For arbitrary $n$,
\begin{equation}\label{Eres}
E_{\rm R} = \bigg({\frac{\Delta\,E_0^{n-1}\cos 2\theta_{34}}{2\delta v}}\bigg)^{1\over{n+1}}\,,
\end{equation}
and the corresponding $\sin^2 2{\tilde \theta}$ and $\delta H_{kj}$ are obtained by replacing
$(E/E_{\rm R})^2$ by $(E/E_{\rm R})^{n+1}$ in Eqs.~\rf{thtilde3}-\rf{deltaHkj}.
}
The general oscillation formulae are
\beq{oscprob1}
P(\nu_{\alpha}\rightarrow\nu_{\beta})=\delta_{\alpha\beta}
-4\sum_{j<k} \Re\{{\tilde U}_{\beta j}\,{\tilde U}_{\beta k}^*\,{\tilde U}_{\alpha j}^*\,{\tilde U}_{\alpha k}\}\,\sin^2\left(\frac{L\,\delta H_{kj}}{2}\right)
+2 \sum_{j<k} \Im\{{\tilde U}_{\beta j}\,{\tilde U}_{\beta k}^*\,{\tilde U}_{\alpha j}^*\,{\tilde U}_{\alpha k}\}\,\sin\left(L\,\delta H_{kj}\right) \,,
\end{equation}
which on ignoring phases in $U$ becomes
\beq{oscprob1.5}
P(\nu_{\alpha}\rightarrow\nu_{\beta}) = \delta_{\alpha\beta}
-4\sum_{j<k} {\tilde U}_{\beta j}\,{\tilde U}_{\beta k}\,{\tilde U}_{\alpha j}\,{\tilde U}_{\alpha k}\,\sin^2\left(\frac{L\,\delta H_{kj}}{2}\right)\,.
\end{equation}
For the present case we get
\beq{oscprob2}
P(\nu_{\alpha}\rightarrow\nu_{\beta}) =\delta_{\alpha\beta}-4\times
\left\{
\begin{array}{ll}
\sin^2\left(\frac{L\,(\lambda_+ -\lambda_-)}{2}\right)&{\tilde U}_{\beta 3}\,{\tilde U}_{\beta 4}\,{\tilde U}_{\alpha 3}\,{\tilde U}_{\alpha 4}\\
+\sin^2\left(\frac{L\,\lambda_+ }{2}\right) &\sum_{j=1,2}\,{\tilde U}_{\beta j}\,{\tilde U}_{\beta 4}\,{\tilde U}_{\alpha j}\,{\tilde U}_{\alpha 4}\\
+\sin^2\left(\frac{L\,\lambda_- }{2}\right) &\sum_{j=1,2}\,{\tilde U}_{\beta j}\,{\tilde U}_{\beta 3}\,{\tilde U}_{\alpha j}\,{\tilde U}_{\alpha 3}\,.
\end{array}
\right.
\end{equation}
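These probabilities can be assembled numerically. The sketch below implements Eq.~\rf{oscprob1.5} for a real mixing matrix, using an illustrative real unitary $V$, a ${\tilde \theta}$ rotation in the $3$-$4$ plane, and arbitrary (hypothetical) eigenvalues; rows summing to unity and the active-to-sterile closed form provide checks:

```python
# Sketch of the real-U oscillation formula: P = delta_ab
#   - 4 sum_{j<k} U_bj U_bk U_aj U_ak sin^2(L dH_kj / 2).
# V, theta_tilde, and the eigenvalues below are illustrative (assumed) values.
import numpy as np

def osc_prob(U, lam, L):
    n = U.shape[0]
    P = np.eye(n)
    for j in range(n):
        for k in range(j + 1, n):
            v = U[:, j] * U[:, k]
            P -= 4.0 * np.outer(v, v) * np.sin(L * (lam[k] - lam[j]) / 2.0)**2
    return P

V3 = np.array([[2.0, np.sqrt(2.0), 0.0],
               [-1.0, np.sqrt(2.0), np.sqrt(3.0)],
               [-1.0, np.sqrt(2.0), -np.sqrt(3.0)]]) / np.sqrt(6.0)

tht = 0.6                                   # illustrative theta_tilde
Utilde = np.eye(4)
Utilde[:3, :3] = V3
R34 = np.eye(4)
R34[2:, 2:] = [[np.cos(tht), np.sin(tht)], [-np.sin(tht), np.cos(tht)]]
Utilde = Utilde @ R34

lam = np.array([0.0, 0.0, -0.2, 0.5])       # (0, 0, lambda_-, lambda_+), assumed
P = osc_prob(Utilde, lam, L=3.0)
```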
A relevant variable for neutrino oscillations is $L/E$.
Just as $E_{\rm R}$ sets the energy scale for the resonance,
the length scale for the $n$-th maximum at resonance is set by an interplay of the various eigenvalue differences, through
\beq{Lres1}
L_{{\rm R}(n)}\equiv \frac{\pi\,(2n-1)}{|\delta H_{jk}|}\,.
\end{equation}
Substituting $E=E_{\rm R}$ into Eqs.~\rf{eigvals} and~\rf{deltaHkj}, we find
the values of $L/E$ at $E=E_{\rm R}$ for the three contributing amplitudes to be
\beq{Lres2}
\left(\frac{L_{{\rm R}(n)}^{+-}}{E_{\rm R}}\right) \equiv
\frac{\pi\,(2n-1)}{E_{\rm R}\,(\lambda_+ -\lambda_-)}=\frac{2\pi\,(2n-1)}{\Delta\,\sin2\theta_{34}}\,,
\end{equation}
and
\beq{Lres3}
\left(\frac{L_{{\rm R}(n)}^\pm}{E_{\rm R}}\right) \equiv
\frac{\pi\,(2n-1)}{E_{\rm R}\,|\lambda_{\pm}|}=\frac{4\pi\,(2n-1)}{\Delta\,(\sin2\theta_{34}\pm2\sin^2\theta_{34})}
\approx 2\,\left( \frac{L_{{\rm R}(n)}^{+-}}{E_{\rm R}} \right)\,.
\end{equation}
Note that the $1-2$ submatrix of ${\tilde U}$ is the same as that of $V$.
The matrix $V$, like ${\tilde U}$, is unitary.
Thus, $\sum_{j=1,2} {\tilde U}_{\alpha j}\,{\tilde U}_{\beta j} = \delta_{\alpha\beta} -V_{\alpha 3}\,V_{\beta 3}$.
Making this replacement, and using the explicit matrix entries in the third and
fourth columns of Eq.~\rf{Utilde},
we arrive at simpler expressions for the three relevant cases: active neutrino survival,
active-to-active neutrino conversion, and active-to-sterile conversion.
Denoting the sterile neutrino by $\nu_s$ and active flavors by $\nu_a, \nu_b,\cdots$,
the active neutrino survival probability is given by
\bea{active_survival}
P(\nu_a\rightarrow\nu_a) &=&1-4\,V^2_{a3}\times
\left\{
\begin{array}{ll}
\sin^2\left(\frac{L\,(\lambda_+ -\lambda_-)}{2}\right)&\sin^2{\tilde \theta}\,\cos^2{\tilde \theta}\;\;V^2_{a3}\\
+\sin^2\left(\frac{L\,\lambda_+ }{2}\right) &\sin^2{\tilde \theta}\;\;(1-V^2_{a3})\\
+\sin^2\left(\frac{L\,\lambda_- }{2}\right) &\cos^2{\tilde \theta}\;\;(1-V^2_{a3})\,.
\end{array}
\right.
\end{eqnarray}
The active-to-(different) active neutrino conversion probability is given by
(and mind the minus sign on the first term in brackets)
\bea{active_conversion}
P(\nu_a\rightarrow\nu_b) &=& 4\,V^2_{a3}\,V^2_{b3}\times
\left\{
\begin{array}{ll}
-\sin^2\left(\frac{L\,(\lambda_+ -\lambda_-)}{2}\right)&\sin^2{\tilde \theta}\,\cos^2{\tilde \theta}\\
+\sin^2\left(\frac{L\,\lambda_+ }{2}\right) &\sin^2{\tilde \theta}\\
+\sin^2\left(\frac{L\,\lambda_- }{2}\right) &\cos^2{\tilde \theta}\,.
\end{array}
\right.
\end{eqnarray}
The active-to-sterile conversion probability is given by
\beq{active_sterile}
P(\nu_a\rightarrow\nu_s)=V^2_{a3}\,\sin^2 2{\tilde \theta}\,\sin^2\left(\frac{L\,(\lambda_+ -\lambda_-)}{2}\right)\,.
\end{equation}
Note that correct limits are respected here.
Far above the resonance, $\cos^2{\tilde \theta}$ and $\lambda_+$ approach zero
(while $\sin^2{\tilde \theta}$ and $\lambda_-$ do not).
Thus, each term in the above probabilities vanishes far above $E_{\rm R}$,
and the sterile state effectively decouples, as it must.
The analytic formalism presented here fails if more than one $\theta_{j4}$ is taken to be nonzero,
for then the eigenvalues must be derived from a matrix larger than the
$2\times 2$ subblock given in Eq.~\rf{matrix}.
However, the formalism goes through when a $\theta_{j4}$ other than
$\theta_{34}$ is taken to be nonzero.
We have really described three models here,
characterized by a nonzero $\theta_{34}$, $\theta_{24}$, or $\theta_{14}$.
In Eqs.~\rf{active_survival}--\rf{active_sterile},
one need only replace the subscript ``3'' by ``2'' or ``1'' to
obtain the $\theta_{24}$ and $\theta_{14}$ models, respectively.
For the $\theta_{34}$ model, we have
$4\,V^2_{e3}\,V^2_{\mu 3}= \sin^2 (2\theta_{13})\,\sin^2 \theta_{23}$,
$4\,V^2_{e3}\,V^2_{\tau 3}= \sin^2 (2\theta_{13})\,\cos^2 \theta_{23}$,
and
$4\,V^2_{\mu 3}\,V^2_{\tau 3}= \sin^2 (2\theta_{23})\,\cos^4 \theta_{13}$,
for the prefactors to $P(\nu_e\leftrightarrow\nu_\mu)$,
$P(\nu_e\leftrightarrow\nu_\tau)$,
and $P(\nu_\mu\leftrightarrow\nu_\tau)$, respectively.
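These prefactor identities follow from the standard PMNS parameterization $V_{e3}=\sin\theta_{13}$, $V_{\mu 3}=\sin\theta_{23}\cos\theta_{13}$, $V_{\tau 3}=\cos\theta_{23}\cos\theta_{13}$ (with the CP phase set to zero). A quick numerical verification, with arbitrary test angles:

```python
import numpy as np

th13, th23 = 0.15, 0.76   # arbitrary test values for the mixing angles
Ve3 = np.sin(th13)
Vmu3 = np.sin(th23) * np.cos(th13)
Vtau3 = np.cos(th23) * np.cos(th13)
# prefactors of P(nu_e <-> nu_mu), P(nu_e <-> nu_tau), P(nu_mu <-> nu_tau)
assert np.isclose(4 * Ve3**2 * Vmu3**2, np.sin(2 * th13)**2 * np.sin(th23)**2)
assert np.isclose(4 * Ve3**2 * Vtau3**2, np.sin(2 * th13)**2 * np.cos(th23)**2)
assert np.isclose(4 * Vmu3**2 * Vtau3**2, np.sin(2 * th23)**2 * np.cos(th13)**4)
```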
The formalism needs to be extended to be relevant for data away from the resonance.
Continuing with the simple model with just one nonzero $R_{j4}$, we see that
a factorization occurs between the squared elements of $V$ and $R$ in $U$, {\it viz}.
(with no sum on $j$ implied)
\bea{factorize}
U^2_{\alpha k} &=& [ \sum_p V_{\alpha p}\,(R_{j4})_{pk}]^2
=\sum_{p,q} V_{\alpha p}\,(R_{j4})_{pk}\,V_{\alpha q}\,(R_{j4})_{qk}= [ V_{\alpha j} ]^2\, [(R_{j4})_{jk} ]^2 \,.
\end{eqnarray}
The final form results because only the $j^{th}$ active flavor and the sterile
state appear in the $R_{j4}$ matrix.
This result amounts to simple matrix multiplication of the matrix of $V$-squared
elements with the matrix of $R$-squared elements.
It is useful to present the squared elements of the $V$ and $R$ matrices.
The $R$-squared elements are
\beq{Rsquared}
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \cos^2{\tilde \theta} & \sin^2{\tilde \theta} \\
0 & 0 & \sin^2{\tilde \theta} & \cos^2{\tilde \theta} \\
\end{array}
\right)\,,
\end{equation}
with obvious generalizations for the other choices of nonzero $R_{j4}$.
We next discuss the $V$-squared matrix.
A PMNS matrix consistent with most neutrino data
is the tribimaximal matrix~\cite{Harrison:2002er}, which extended to the $4\times4$ case is:
\beq{tribimax1}
V_{4\times 4}=\frac{1}{\sqrt{6}}\,
\left(
\begin{array}{rrrr}
2 & \sqrt{2} & 0 & 0 \\
-1 & \sqrt{2} & \sqrt{3} & 0 \\
-1 & \sqrt{2} & -\sqrt{3} & 0 \\
0 & 0 & 0 & \sqrt{6} \\
\end{array}
\right)\,.
\end{equation}
The $V$-squared elements of this matrix are
\beq{tribisquared}
\frac{1}{6}\,
\left(
\begin{array}{cccc}
4 & 2 & 0 & 0 \\
1 & 2 & 3 & 0 \\
1 & 2 & 3 & 0 \\
0 & 0 & 0 & 6 \\
\end{array}
\right)\,.
\end{equation}
For example, in the $\theta_{34}$-model,
$U^2_{\mu 3}=[V_{\mu 3}]^2\,[(R_{34})_{33}]^2= \frac{1}{2}\cos^2{\tilde \theta}$.
This can also be seen from Eq.~\rf{Utilde}.
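The factorization of squared elements can be verified numerically by building $U=V\,R_{34}$ from the tribimaximal matrix and comparing with the product of the element-wise squared matrices (the rotation angle value below is illustrative):

```python
import numpy as np

th = 0.7                      # matter mixing angle ~theta (illustrative value)
c, s = np.cos(th), np.sin(th)
# 4x4 tribimaximal matrix, Eq. (tribimax1)
V = np.array([[ 2, np.sqrt(2),           0,          0],
              [-1, np.sqrt(2),  np.sqrt(3),          0],
              [-1, np.sqrt(2), -np.sqrt(3),          0],
              [ 0,          0,           0, np.sqrt(6)]]) / np.sqrt(6)
# rotation in the 3-4 plane
R34 = np.array([[1, 0,  0, 0],
                [0, 1,  0, 0],
                [0, 0,  c, s],
                [0, 0, -s, c]])
U = V @ R34
# squared elements factorize: (U^2) = (V^2)(R^2) as a matrix product, Eq. (factorize)
assert np.allclose(U**2, (V**2) @ (R34**2))
assert np.isclose(U[1, 2]**2, 0.5 * c**2)   # U^2_{mu 3} = cos^2(~theta)/2
```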
A unitary extension of the tribimaximal matrix to nonzero $U_{e3}$ is given in
Ref.~\cite{triminimal}.
We find that the conversion probability for $\nu_\mu\rightarrow\nu_e$, to lowest order in $|U_{e3}|^2$, is
\beq{mu2e_conversion}
P(\nu_\mu\rightarrow\nu_e) =
\left\{
\begin{array}{ll}
4/9 & \ {\rm for}\ j=1,2\\
2\,|U_{e3}|^2 & \ {\rm for}\ j=3
\end{array}
\right\}
\times
\left\{
\begin{array}{ll}
-\sin^2\left(\frac{L\,(\lambda_+ -\lambda_-)}{2}\right)&\sin^2{\tilde \theta}\,\cos^2{\tilde \theta}\\
+\sin^2\left(\frac{L\,\lambda_+ }{2}\right) &\sin^2{\tilde \theta}\\
+\sin^2\left(\frac{L\,\lambda_- }{2}\right) &\cos^2{\tilde \theta}\,.
\end{array}
\right.
\end{equation}
Note that the different choices of the nonzero $\theta_{j4}$ result in
just a change in the overall magnitude of the flavor-changing probability.
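The $j=1,2$ prefactor of $4/9$ quoted above can be checked directly from the tribimaximal $V$-squared elements of Eq.~\rf{tribisquared}:

```python
import numpy as np

# squared tribimaximal elements, Eq. (tribisquared); rows e, mu, tau, s
V2 = np.array([[4, 2, 0, 0],
               [1, 2, 3, 0],
               [1, 2, 3, 0],
               [0, 0, 0, 6]]) / 6
for j in (0, 1):   # columns j = 1, 2 in the text's labeling
    assert np.isclose(4 * V2[0, j] * V2[1, j], 4 / 9)   # prefactor of P(nu_mu -> nu_e)
```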
To study the influence of the new mass-squared scale $\Delta$ on
long-baseline and atmospheric data, the atmospheric mass-squared
scale $\Delta_{\rm atm}\equiv |m^2_3 -m^2_1|$
must be entered into the probability formulae.
This is done simply in the following way:
In Eq.~\rf{Heff2}, when $\theta_{34}$ is chosen for mixing
the diagonal mass matrix is replaced by
\beq{atmos_scale1}
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \Delta \\
\end{array}
\right)
\rightarrow
\left(
\begin{array}{cccc}
\mp\Delta_{\rm atm} & 0 & 0 & 0 \\
0 & \mp\Delta_{\rm atm} & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \Delta \\
\end{array}
\right)\,,
\end{equation}
and when $\theta_{24}$ or $\theta_{14}$ are chosen for mixing,
the diagonal mass matrix is replaced by
\beq{atmos_scale2}
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \Delta \\
\end{array}
\right)
\rightarrow
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & \pm\Delta_{\rm atm} & 0 \\
0 & 0 & 0 & \Delta \\
\end{array}
\right)\,.
\end{equation}
These choices ensure that $\Delta_{\rm atm}$ is not mixed into the resonance condition,
making the extraction of the four eigenvalues simple.
We neglect effects of order $\Delta_{\rm atm}/\Delta$.
The two sign choices correspond to normal and inverted hierarchies, respectively,
for the active mass spectrum.
Notice that because one of the three active states is distinguished from the other two,
due to its mixing with $\nu_4$, the two hierarchies yield different
physics. This is analogous to the MSW situation in matter.
For the $\theta_{34}$ model, the ordered eigenvalues of the full Hamiltonian are
$\lambda_j=\{\mp\Delta_{\rm atm}/2E,\mp\Delta_{\rm atm}/2E,\lambda_-,\lambda_+\}$;
there are three distinct nonzero eigenvalue differences $\delta H_{kj}=\lambda_k-\lambda_j$,
as in the $\Delta_{\rm atm}=0$ case discussed earlier.
To get the oscillation probabilities,
these eigenvalue differences $\delta H_{kj}=\lambda_k-\lambda_j$ are
inserted into Eq.~\rf{oscprob1.5}.
For the $\theta_{24}$ and $\theta_{14}$ models, the eigenvalues are
$\lambda_j=\{0$ and $\lambda_-$ in the appropriate order, followed by $\pm\Delta_{\rm atm}/2E$ and $\lambda_+\}$;
here there are six distinct eigenvalue differences.
After insertion into Eq.~\rf{oscprob1.5}, one obtains the oscillation probabilities
after some straightforward but tedious algebra.
\section{OPERA}
OPERA has inferred a mean travel time for muon neutrinos $\sim 60$~ns shorter than the theoretical light-travel time
over the 730~km pathlength.
The short-bunch (3 ns) beam data newly acquired by OPERA provide a stringent constraint on models in which
neutrino flavor states or mass states travel with different limiting velocities.
This is because the new data show an arrival time rms of $\sim 16$~ns about the mean,
with {\it no events} arriving within 37~ns of the theoretical light-travel time.
It appears that there is but one neutrino speed in the data.
A fit to these new data~\cite{Winter:2011zf}
requires the fraction of superluminal neutrinos to be at least 80\% at the $3\sigma$ confidence level.
(Resonant conversion offers the possibility of a 50\% mixture on average,
while adiabatic conversion and back-conversion can theoretically attain 100\% conversion to the fast species.)
This implies that atmospheric neutrinos oscillate primarily into sterile neutrinos, a conclusion that is excluded
by Super-Kamiokande.
Another daunting constraint
comes from the $L/E$ distribution of Super-Kamiokande's atmospheric muon neutrino events~\cite{SKatmos}.
A large scale (compared to $2.5\times 10^{-3}$~eV$^2$) in the difference of eigenvalues of the Hamiltonian, as occurs in active-plus-sterile neutrino models (cf. Eqs.~\rf{eigvals}, \rf{deltaHkj} and \rf{active_survival})
would provide an energy-independent, averaged contribution for the $L/E$ distribution,
in violation of these data, unless the active-sterile mixing angle is small.
However, the OPERA data tell us that the mixing in an overlapping $L/E$ range cannot be small.
Thus, the simple model we have presented does not simultaneously account for OPERA and
atmospheric data.
On the positive side, additional constraints on models of OPERA data seem easy to meet
with our class of models.
The coincident time of arrival of neutrinos and photons from SN 1987A is easily accommodated
by making $\theta_{13}$ sufficiently small so as to suppress
$\nu_e \to \nu_s$ oscillations; see Eq.~\rf{active_sterile}.
Also, the analogue of
Cherenkov radiation for superluminal neutrinos, pointed out in Ref.~\cite{Cohen:2011hx}, does not
apply, at least not to the extra-dimensional variety
of our model since all propagation is subluminal locally, and the apparent superluminal behavior
is simply a consequence of the bulk shortcut.
(Also, with SM particles confined to the brane,
there can be no Cherenkov radiation of SM particles from sterile neutrinos traveling in the bulk.)
\section{Conclusions}
In light of the alleged evidence for superluminal propagation of neutrinos in OPERA data, and drawing
on our prior work on superluminal neutrinos,
we have provided herein a concrete model that accommodates superluminal neutrinos.
We find that oscillations between active neutrinos and a sterile neutrino
with a shorter geodesic path through extra dimensions
(or equivalently, obeying a modified dispersion relation as seen from the brane)
do not provide a simple explanation of the OPERA anomaly because of conflicts with
atmospheric neutrino data. This does not invalidate the model in a broader context.
Introducing additional sterile neutrinos and/or mixing angles into the framework may
produce consistency with data, but we have not explored this possibility.
In one variation of the model, the resonance may have an $(LE)$-dependence~\cite{Hollenberg:2009ws}
(instead of a simple $E$-dependence),
thus affording greater flexibility in addressing the tension with atmospheric data.
It does seem, however, that an explanation of the OPERA anomaly using neutrino oscillations
is likely to be contrived.
Finally, we remark on the generality of our results.
Since the bulk-shortcut model for sterile neutrinos appears from the vantage point of the brane as a LIV model,
our conclusions apply in generality to the larger class of models in which a sterile flavor state
is superluminal, and transmits its greater speed to active flavor states via mixing and oscillations.
\vspace{0.5cm}
{\it{Acknowledgments.}}
We thank J.~Kumar and J.~Learned for discussions.
DM thanks the University of Hawaii for its hospitality while
this work was in progress. HP thanks the University of Hawaii and the Universidad Tecnica Federico Santa Maria,
Valparaiso, Chile for their hospitality.
This research was supported by US DoE Grants DE-FG02-04ER41308,
DE-FG05-85ER40226 and DE-FG02-04ER41291,
by US NSF Grant PHY-0544278 and by DFG Grant No. PA 803/5-1.
\section{Andreev reflection phenomena}
In an isotropic non-magnetic superconductor the normal-state single-particle excitation spectrum $\varepsilon_\vec{k}$ is modified in the superconducting state to $E_\vec{k}=[(\varepsilon_\vec{k}-\mu)^2+\Delta^2]^{1/2}$, acquiring a gap $\Delta$ around the electrochemical potential $\mu $, and the density of states is characterized by a coherence peak just above the gap, accounting for the missing sub-gap states. This spectral signature, predicted by Bardeen-Cooper-Schrieffer (BCS) theory \cite{Cooper56,Bardeen57}, was first observed by infrared absorption spectroscopy \cite{Glover56,Tinkham56} and by tunneling spectroscopy \cite{Giaever60}.
The spectral features above the gap may carry information about electron-phonon interaction (or about interaction with some other low-energy bosonic modes, e.g. spin fluctuations \cite{Eschrig06}), or may exhibit geometric interference patterns.
Features due to electron-phonon interaction, predicted by
Migdal-Eliashberg theory \cite{Migdal58,Eliashberg60} and studied in detail by Scalapino {\it et al.} \cite{Scalapino66}, were measured early in tunneling experiments by Giaever {\it et al.} \cite{Giaever62}. They are a consequence of electronic particle-hole coherence in a superconductor
and build the basis for the McMillan-Rowell inversion procedure for determining the Eliashberg effective interaction spectrum $\alpha^2F(\omega )$ \cite{McMillan65}.
Geometric interference effects include oscillations
\end{fmtext}
\maketitle
\noindent
in the density of states, such as
Tomasch oscillations in N-I-S and N-I-S-N' tunnel structures with a superconductor of thickness $d_{\rm S}$ (and with transverse Fermi velocity $v_{\rm F,S}$), giving rise to voltage peaks at ${e}V_n=[(2\Delta)^2+(n\pi\hbar v_{\rm F,S}/d_{\rm S})^2]^{1/2}$ ($n$ integer)
\cite{Tomasch65,McMillan66,Wolfram68}; and Rowell-McMillan oscillations in N-I-N'-S tunnel structures with a normal metal N' of thickness $d_{\rm N'}$ (transverse Fermi velocity $v_{\rm F,N'}$), giving rise to voltage peaks at ${e}V_n=n\pi \hbar v_{\rm F,N'}/2d_{\rm N'}$ \cite{Rowell66}.
Possible offsets due to spatial variation of the gap $\Delta $ may occur \cite{Nedellec71}.
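The peak positions of both interference series are straightforward to evaluate. The sketch below uses illustrative material parameters (gap, Fermi velocity, layer thickness) that are not tied to any specific experiment:

```python
import numpy as np

HBAR = 1.0545718e-34      # J*s
E_CH = 1.602176634e-19    # C

def tomasch_peaks(delta_eV, v_f, d_s, nmax=5):
    """Tomasch peaks eV_n = [(2*Delta)^2 + (n*pi*hbar*v_F/d_S)^2]^(1/2), in volts."""
    n = np.arange(1, nmax + 1)
    geom = n * np.pi * HBAR * v_f / d_s / E_CH      # geometric term in eV
    return np.sqrt((2 * delta_eV)**2 + geom**2)

def rowell_mcmillan_peaks(v_f, d_n, nmax=5):
    """Rowell-McMillan peaks eV_n = n*pi*hbar*v_F/(2*d_N), in volts."""
    n = np.arange(1, nmax + 1)
    return n * np.pi * HBAR * v_f / (2 * d_n) / E_CH

# illustrative numbers (not from the text): Delta = 1 meV, v_F = 1e6 m/s, d = 1 um
vt = tomasch_peaks(1e-3, 1e6, 1e-6)
assert np.all(np.diff(vt) > 0) and vt[0] > 2e-3   # peaks above 2*Delta, increasing in n
```

Note that the Tomasch series approaches equal spacing at large $n$, while the Rowell-McMillan series is strictly linear in $n$.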
Inhomogeneous superconducting states exhibit also features at energies inside the gap. Surface bound states
in a normal metal overlayer on a superconductor were predicted first by de Gennes and Saint-James \cite{deGennes63,James64} and measured by Rowell \cite{Rowell73} and Bellanger {\it et al.} \cite{Bellanger73}.
These de Gennes-Saint-James bound states have a natural explanation in terms of the so-called Andreev reflection process, an extremely fruitful physical picture suggested in 1964 by Andreev \cite{Andreev64}.
For example, geometric resonances above the gap appear due to Andreev reflection at N-S interfaces, which describes (for normal impact) scattering of a particle at wavevector $k_{{\rm F},x}+(E^2-\Delta^2)^{1/2}/\hbar v_{{\rm F},x}$ into a hole at wavevector
$k_{{\rm F},x}-(E^2-\Delta^2)^{1/2}/\hbar v_{{\rm F},x}$ or vice versa.
Below the gap Andreev reflections lead
to subharmonic gap structure due to multiple Andreev reflections at voltages $V_n=2\Delta/en$ ($n$ integer) in SIS junctions \cite{Klapwijk82} (for a general treatment in diffusive systems see \cite{Cuevas06}), and
control electrical and thermal resistance of a superconductor/normal-metal interface and the Josephson current in a superconductor/normal-metal/superconductor junction.
The Andreev mechanism also gives rise to bound states in various other systems with inhomogeneous superconducting order parameter, which are named in the general case Andreev bound states.
Transport through an N-S contact is strongly influenced by Andreev scattering, and is described in the single-channel case by the theory of Blonder, Tinkham, and Klapwijk \cite{Blonder82}, generalized to the multi-channel case by Beenakker \cite{Beenakker92}. Andreev scattering at N-S interfaces is the cause of the superconducting proximity effect \cite{deGennes63a,Werthamer63}.
Interference effects in transport appear also as the result of impurity disorder.
In contrast to unconventional superconductors, where normal impurities are pair breaking,
isotropic $s$-wave superconductors are insensitive to scattering from normal impurities for not too high impurity concentration, which is the content of a theorem by Abrikosov and Gor'kov \cite{Abrikosov58,Abrikosov59}, and by Anderson \cite{Anderson59a}.
In strongly disordered superconductors (weak localization regime) the superconducting transition temperature $T_{\rm c}$ is reduced \cite{Finkelstein87}, accompanied by
localized tail states (similar to Lifshitz tail states in semiconductors \cite{Lifshitz64}) just below
the gap edge \cite{Larkin71,Balatsky97}.
An interference effect is the
so-called reflectionless tunneling \cite{Kastalsky91,Wees92,Beenakker92,Zaitsev90,Volkov92}, which leads to a zero-bias conductance peak in a diffusive N-I-S structure. It results from multiple scattering of Andreev-reflected coherent particle-hole pairs at impurities, and from the resulting backscattering to the interface barrier, making the barrier effectively transparent near the electrochemical potential for a pair current even in the tunneling regime.
Abrikosov and Gor'kov developed in 1960 a theory for pair-breaking by paramagnetic impurities, showing that at a critical value for the impurity concentration superconductivity is destroyed, and that gapless superconductivity can exist in a narrow region below this critical value \cite{Abrikosov60}.
Yu \cite{Luh65}, Shiba \cite{Shiba68},
and Rusinov \cite{Rusinov69} (who happened to work isolated from each other in China, Japan, and Russia) independently discovered within the framework of a full $t$-matrix treatment of the problem that local Andreev bound states (now called the Yu-Shiba-Rusinov states) are present within the BCS energy gap due to multiple scattering between conduction electrons and paramagnetic impurities.
Andreev bound states also exist in the cores of vortices in type II superconductors. These are called Caroli-de Gennes-Matricon bound states \cite{Caroli64}, and carry current in the core region of a vortex \cite{Rainer96}. Their dynamics plays a crucial role in the absorption of electromagnetic waves \cite{Eschrig99,Eschrig02,Sauls09}.
In an S-N-I or S-N-S junction, Andreev bound states appear in the normal metal region at energies below the gaps of the superconductors.
The number and distribution of these bound states depend on details such as interface transmission, mean free path, and length of the normal metal $d_{\rm N}$. In general, there is a characteristic energy, the Thouless energy \cite{Thouless72}
(related to the dwell time between Andreev reflections), given by $\hbar v_{\rm F,N}/d_{\rm N}$ for the clean limit, and by $\hbar D_{\rm N}/d_{\rm N}^2$ for the diffusive limit, with Fermi velocity $v_{\rm F,N}$ and diffusion constant $D_{\rm N}$ of the normal metal. In the diffusive limit, the Andreev states build a quasi-continuum below the superconducting gap, whereas in the case of ballistic junctions bands of Andreev bound states arise.
For the case that no superconducting phase gradient is present in the system, however,
a low-energy gap always arises in the spectrum of Andreev states in the normal metal.
This so-called minigap scales for sufficiently thick normal metal layers approximately with its Thouless energy
and with the transmission probability (possibly further reduced by inelastic scattering processes). It was found first by McMillan \cite{McMillan68} and can be probed by scanning tunneling microscopy \cite{Gueron96}.
In chaotic Andreev billiards \cite{Kosztin95}, where disorder is restricted to boundaries,
a second time scale, the Ehrenfest time, competes with the dwell time to set the minigap \cite{Lodder98}.
The importance of Andreev bound states in S-N-S Josephson junctions for current transport was first discussed by Kulik in 1969 \cite{Kulik69}. In a sufficiently long normal region, Andreev bound states form that are doubly degenerate (carrying current in opposite directions) for zero
phase difference between the superconducting banks. For a finite phase difference, this degeneracy is lifted.
The gap or minigap in a Josephson structure is reduced and eventually closes when a supercurrent flows across the junction. This is a result of a ``dispersion'' of the energy $E_{\rm b.s.}$ of the Andreev bound states as function of phase difference $\Delta \chi$ between the superconducting banks \cite{Kulik69,Zhou98,leSeur08}.
The contribution of the bound states to the supercurrent is given by $(2e/\hbar) \partial E_{\rm b.s.}/\partial \Delta \chi $, with $e<0$ the electron charge.
Apart from the current carried by the Andreev bound states, there is also a contribution from continuum states above the gap \cite{Kulik69}.
For a single-channel weak link between two superconductors with normal-state transmission probability $\tau^2 $ there is one pair of Andreev bound states with dispersion $E_\pm=\pm\Delta [1-\tau^2 \sin^2(\Delta \chi/2)]^{1/2}$.
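The bound-state dispersion and its supercurrent contribution $(2e/\hbar)\,\partial E_{\rm b.s.}/\partial \Delta\chi$ can be illustrated numerically; the gap and transmission values below are arbitrary:

```python
import numpy as np

DELTA, TAU2 = 1.0, 0.6   # gap and normal-state transmission probability (illustrative)

def e_bound(dchi):
    """Occupied Andreev branch E_- = -Delta*sqrt(1 - tau^2 sin^2(dchi/2))."""
    return -DELTA * np.sqrt(1.0 - TAU2 * np.sin(dchi / 2)**2)

def supercurrent(dchi, h=1e-7):
    """Bound-state contribution, proportional to dE/d(dchi) (in units of 2e/hbar)."""
    return (e_bound(dchi + h) - e_bound(dchi - h)) / (2 * h)

# current vanishes at phase differences 0 and pi, and flows in between
assert abs(supercurrent(0.0)) < 1e-6 and abs(supercurrent(np.pi)) < 1e-6
assert supercurrent(np.pi / 2) > 0
```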
The large size of a Cooper pair in conventional superconductors leads to a pronounced non-locality of Andreev reflection processes. This allows for interference effects due to
crossed Andreev reflection, in which the particle and hole involved in the process enter different normal-state (typically spin-polarized) terminals, which are both simultaneously accessible to one Cooper pair \cite{Byers95,Deutscher00}. This effect was first observed experimentally by Beckmann {\it et al.} \cite{Beckmann04}.
Finally, an important role is played by Andreev zero modes as topological surface states. Examples are zero-bias states at the surface of a $d$-wave superconductor \cite{Hu94,Tanaka95}, and Majorana zero modes in topological superconductors \cite{Volovik99,Kitaev01} and superfluids \cite{Volovik03}.
\renewcommand{\i}{{\rm i}}
\section{Andreev bound states at magnetically active interfaces}
\label{ABS}
\subsection{Spin-dependent interface scattering phase shifts}
The importance of spin-dependent interface scattering phase shifts for superconducting phenomena has been pioneered in the work of Tokuyasu, Sauls, and Rainer in 1988 \cite{Tokuyasu88}.
Consider an interface between a normal metal (N) at $x<0$ and a ferromagnetic insulator (FI) or a half-metallic ferromagnet (HM) at $x> 0$.
For simplicity, let us model the FI (or HM) by a single electronic band with energy gap $V_\downarrow$ for spin-down particles and an energy gap $V_\uparrow=V_\downarrow-2J$ for spin-up particles, where $J>0$ denotes an effective exchange field.
The exchange field can be related to an effective magnetic field via $ \mu\vec{B}_{\rm eff}=
\vec{J}$ (for free electrons the magnetic moment is $\mu=\mu_{\rm e}<0$).
Let us assume an incoming Bloch electron with energy $0<E<V_\uparrow $ and spin $\sigma\in\left\{\uparrow,\downarrow\right\}$, reflected back from the interface with amplitude $r_\sigma $.
It is described by a wave function
$\Psi_\sigma (x,\vec{r}_{\parallel })=e^{\i\vec{k}_{\parallel }\vec{r}_{\parallel }} (e^{\i kx}+r_\sigma\;e^{-\i kx})$ at $x<0$ and
$\Psi_\sigma (x, \vec{r}_{\parallel })= t_\sigma e^{\i\vec{k}_{\parallel }\vec{r}_{\parallel}} e^{-\kappa_\sigma x}$ at $x>0$.
For the normal metal
$\hbar k(E)=[2mE-(\hbar k_{\parallel })^2]^{1/2}$.
For the FI
$\hbar \kappa_\sigma(E)=[2m(V_\sigma-E)+(\hbar k_{\parallel })^2]^{1/2}$. The reflection scattering matrix is
\begin{eqnarray}
{\bf S}= \left(\begin{array}{cc} e^{\i\vartheta_\uparrow} &0 \\ 0& e^{\i\vartheta_\downarrow} \end{array}\right), \qquad
e^{\i \vartheta_\uparrow }= r_\uparrow=\frac{k-\i\kappa_\uparrow }{k+\i\kappa_\uparrow}, \quad
e^{\i \vartheta_\downarrow} = r_\downarrow=\frac{k-\i\kappa_\downarrow }{k+\i\kappa_\downarrow}\,.
\label{Smatrix}
\end{eqnarray}
In the range $V_\uparrow<E<V_\downarrow $ the spin-up electron can be transmitted for sufficiently small $k_{\parallel}$ with amplitude $t_\uparrow=2\sqrt{kk_\uparrow }/(k+k_\uparrow)$, where $\hbar k_\uparrow(E)=[2m(E-V_\uparrow)-(\hbar k_{\parallel })^2]^{1/2}$. In this case, the reflection amplitude is real as well, and equal to $r_\uparrow=(k-k_\uparrow)/(k+k_\uparrow)$. The reflection phase is $-\pi$ for $k<k_\uparrow$ and zero for $k>k_\uparrow$. In figure \ref{Phasedelay}(a),
$V_\downarrow =3E_{\rm F}$ is fixed, $V_\uparrow $ is varied from $-E_{\rm F}$ to $3E_{\rm F}$, and $k_\parallel =0$.
A relative phase shift between the reflected waves of the two spin projections appears.
\begin{figure}[t]
\centering{(a) \hspace{1.6in} (b) \hspace{1.6in} (c) \hspace{1.5in}$\;$}\\
\centering{
\includegraphics[width=1.7in]{Phasedelay_compressed.pdf}
\includegraphics[width=1.7in]{Theta_J03_compressed.pdf}
\includegraphics[width=1.7in]{Theta_J08_compressed.pdf}
}
\caption{
(a) Spin dependent scattering phase shifts for Bloch waves with energy $E_{\rm F}$ reflected from an N-FI or N-HM interface (at $x=0$).
Here $V_\downarrow =3E_{\rm F}$ is fixed and $V_\uparrow $ varied from $-E_{\rm F} $ to $3E_{\rm F}$, $k_\parallel =0$.
For $V_\uparrow>E_{\rm F}$ this describes an FI, for $V_\uparrow<E_{\rm F}$ a HM. For $V_\uparrow<0$ the Fermi surface of the spin-up band in the HM becomes larger than in N.
Shown are for $x<0$ the (normalized) reflected wave, quantified by $-{\rm Im} (r_\sigma\;e^{-\i kx})/|r_\sigma|$, and for $x>0$ the transmitted wave, quantified by $-{\rm Im} (t_\sigma e^{-\kappa_\sigma x})$ (for real $\kappa_\sigma$) or $-{\rm Im}(t_\uparrow e^{\i k_\uparrow x})$ (for real $k_\uparrow$, i.e. $V_\uparrow<E_{\rm F}$). The spin-up wave is shown in orange, the spin-down wave in blue.
(b) Spin mixing angle $\vartheta $ as function of parallel momentum $k_\parallel /k_{\rm F}$ and $\nu=(V_\uparrow+V_\downarrow)/2$ for $J=0.3E_{\rm F}$ (FI for $\nu>1.3$, HM for $\nu>0.7$); (c) the same for $J=0.8E_{\rm F}$ (FI for $\nu>1.8$, HM for $\nu>0.2$).
}
\label{Phasedelay}
\end{figure}
It stems from the well-known effect that
reflection from an insulating region results in
a {\it phase-delay} of the reflected wave with respect to the case of an infinite interface potential. This phase delay appears due to the quantum mechanical penetration of the wave function into the classically forbidden region. The range $V_\uparrow-E_{\rm F}>0$ in figure \ref{Phasedelay} (a) corresponds to an N-FI interface, where both spin-projections are evanescent in FI. Here, the reflected spin-up wave trails that of the spin-down wave, and the effect increases when $V_\uparrow-E_{\rm F}$ approaches zero. The phase $\vartheta =\vartheta_\uparrow-\vartheta_\downarrow $ of the parameter $r_\uparrow r_\downarrow^\ast = |r_\uparrow r_\downarrow| e^{\i\vartheta}$ is called {\it spin-mixing angle} \cite{Tokuyasu88}, or {\it spin-dependent scattering phase shift} \cite{Cottet05}. It is an important parameter for superconducting spintronics. For the N-FI model interface it is given by
\begin{equation}
\tan \frac{\vartheta}{2} =
\tan \frac{\vartheta_\uparrow-\vartheta_\downarrow}{2}= \frac{k(\kappa_\downarrow-\kappa_\uparrow )}{k^2+\kappa_\uparrow\kappa_\downarrow },
\end{equation}
which is positive due to $\kappa_\downarrow>\kappa_\uparrow$.
The range $V_\uparrow-E_{\rm F}<0$
corresponds to a N-HM interface, with the spin-up band itinerant in HM. Here, as long as the spin-up Fermi surface in the HM is smaller than that in N (for $-1<(V_\uparrow-E_{\rm F})/E_{\rm F}<0$), the reflection phase in $r_\uparrow $ is zero, and
\begin{equation}
\tan \frac{\vartheta}{2} =
\tan \frac{-\vartheta_\downarrow}{2}= \frac{\kappa_\downarrow}{k},
\end{equation}
which can acquire large values. Finally, in the range $(V_\uparrow-E_{\rm F})/E_{\rm F}<-1$ the Fermi surface in the HM is larger than in N, which leads to a reflection phase of $\pi$ for spin-up particles, and
\begin{equation}
\tan \frac{\vartheta}{2} =
\tan \frac{\pi-\vartheta_\downarrow}{2}= -\frac{k}{\kappa_\downarrow},
\end{equation}
which now is negative.
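For the N-FI regime, the spin-mixing angle obtained directly from the reflection amplitudes of Eq.~\rf{Smatrix} can be checked against the closed-form expression above. A minimal sketch, with illustrative band parameters and $\hbar=m=1$:

```python
import numpy as np

def mixing_angle(E, V_up, V_dn, m=1.0, hbar=1.0, kpar=0.0):
    """Spin-mixing angle theta = arg(r_up * conj(r_dn)) for the single-band model
    in the N-FI case (both spin bands evanescent, E < V_up < V_dn)."""
    k = np.sqrt(2 * m * E - (hbar * kpar)**2) / hbar
    def refl(V):
        kappa = np.sqrt(2 * m * (V - E) + (hbar * kpar)**2) / hbar
        return (k - 1j * kappa) / (k + 1j * kappa)
    return np.angle(refl(V_up) * np.conj(refl(V_dn)))

# check against tan(theta/2) = k*(kappa_dn - kappa_up)/(k^2 + kappa_up*kappa_dn)
E, V_up, V_dn = 1.0, 2.0, 3.0
k, kap_up, kap_dn = np.sqrt(2 * E), np.sqrt(2 * (V_up - E)), np.sqrt(2 * (V_dn - E))
theta_analytic = 2 * np.arctan(k * (kap_dn - kap_up) / (k**2 + kap_up * kap_dn))
assert np.isclose(mixing_angle(E, V_up, V_dn), theta_analytic)
```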
In ballistic structures, the spin-mixing angle depends on the momentum $\hbar \vec{k}_\parallel$ parallel to the interface, as illustrated in figure \ref{Phasedelay} (b) and (c) for varying Fermi surface geometry in the ferromagnet, here parameterized by varying $(V_\uparrow+V_\downarrow)/2$ keeping $J$ fixed.
If both spin-bands are itinerant in the ferromagnet (F), then the spin-mixing angle is zero
(if $k>k_\uparrow, k_\downarrow $ or $k< k_\uparrow, k_\downarrow $)
or $-\pi$ (if $k_\uparrow>k> k_\downarrow $, red areas in figure \ref{Phasedelay} b and c),
unless an interface potential exists, rendering the reflection amplitudes complex valued. In general, the spin-mixing angle should be considered as a material parameter, which in addition depends on the impact angle of the incoming electron or on transport channel indices.
Note that the parameter $r_\uparrow r_\downarrow^\ast $ has become well-known in the spintronics community, as it governs the {\it spin mixing conductance} \cite{Brataas00} in spintronics devices.
It is also instructive to study an incoming Bloch-electron polarized in a direction different from the magnetization direction in the ferromagnet. Let us consider the case of a FI.
For a Bloch electron polarized in a direction $\vec{n}(\alpha, \phi)$, parameterized by polar and azimuthal angles, $\alpha $ and $\phi $,
\begin{equation}
\uparrow_{\alpha,\phi} e^{\i \vec{k}_\parallel \vec{r}_\parallel } e^{\i kx} = \left[\cos \frac{\alpha}{2} e^{-\i\frac{\phi}{2}} \uparrow_z + \sin \frac{\alpha}{2} e^{\i\frac{\phi}{2}} \downarrow_z
\right] e^{\i \vec{k}_\parallel \vec{r}_\parallel } e^{\i kx}
\end{equation}
the reflected wave will have the form
\begin{equation}
\left[\cos \frac{\alpha}{2} e^{-\i\frac{\phi-\vartheta}{2}} \uparrow_z + \sin \frac{\alpha}{2} e^{\i\frac{\phi-\vartheta}{2}} \downarrow_z \right]
e^{\i \frac{\vartheta_\uparrow+\vartheta_\downarrow}{2}}
e^{\i \vec{k}_\parallel \vec{r}_\parallel } e^{-\i kx}
\equiv \uparrow_{\alpha,\phi-\vartheta}
e^{\i \bar\vartheta }
e^{\i \vec{k}_\parallel \vec{r}_\parallel } e^{-\i kx}
\label{SpinRot}
\end{equation}
with $\bar\vartheta=(\vartheta_\uparrow+\vartheta_\downarrow)/2$. Similarly, $\downarrow_{\alpha,\phi}$ scatters into $\downarrow_{\alpha,\phi-\vartheta} e^{\i \bar\vartheta}$.
This
means that scattering leads, apart from an unimportant spin-independent phase factor $e^{\i \bar\vartheta}$, to a precession of the spin around the magnetization axis \cite{Tokuyasu88}. The direction of precession depends on the Fermi surface geometries, and is determined by the sign of the spin-mixing angle $\vartheta$.
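The spin rotation of Eq.~\rf{SpinRot} can be verified by applying the reflection matrix of Eq.~\rf{Smatrix} to an arbitrary incoming spinor (all angles below are illustrative test values):

```python
import numpy as np

th_up, th_dn = 0.9, 0.3              # reflection phases (illustrative values)
alpha, phi = 1.1, 0.5                # polar/azimuthal angles of the incoming spin
spinor_in = np.array([np.cos(alpha / 2) * np.exp(-1j * phi / 2),
                      np.sin(alpha / 2) * np.exp(+1j * phi / 2)])
S = np.diag([np.exp(1j * th_up), np.exp(1j * th_dn)])   # Eq. (Smatrix)
out = S @ spinor_in

# Eq. (SpinRot): same spinor with phi -> phi - theta, times an overall phase
theta, bar = th_up - th_dn, (th_up + th_dn) / 2
expect = np.exp(1j * bar) * np.array(
    [np.cos(alpha / 2) * np.exp(-1j * (phi - theta) / 2),
     np.sin(alpha / 2) * np.exp(+1j * (phi - theta) / 2)])
assert np.allclose(out, expect)      # spin precesses by theta about the z axis
```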
\subsection{Andreev reflection in an S-N-FI structure}
\label{ABS1}
An important consequence of spin-mixing phases is the appearance of Andreev bound states at magnetically active interfaces, predicted theoretically \cite{Fogelstrom00,Eschrig03,Krawiec04,Zhao04,Annett06,Lofwander10,Metalidis10}, and verified experimentally \cite{Huebler12}.
Consider a superconductor near an interface with a ferromagnetic insulator. Let us assume that the superconducting order parameter is suppressed to zero in a layer of thickness $d$ next to the FI
interface, such that the structure can be described as an S-N-FI junction with identical normal state parameters in S and N. For simplicity I consider here a spatially constant order parameter in S (extending to the half space $x<0$). The FI ($x>d$) will be parameterized by reflection phases $\vartheta_\uparrow$ and $\vartheta_\downarrow $, with spin-mixing angle $\vartheta =\vartheta_\uparrow-\vartheta_\downarrow $. Solving the corresponding Bogoliubov-de Gennes equations in the superconductor ($\sigma_i$ are spin Pauli matrices, $\sigma_0$ a 2$\times$2 unit spin matrix)
\begin{align}\label{BdG}
\left(\begin{array}{cc}
(-\frac{\hbar^2 \nabla^2}{2m}-\mu) \sigma_0 & \Delta \i\sigma_2\\ -\Delta^\ast \i \sigma_2 & (\frac{\hbar^2 \nabla^2}{2m}+\mu) \sigma_0
\end{array}\right)
\left(\begin{array}{c}
u \\v
\end{array}\right)
=\varepsilon
\left(\begin{array}{c}
u \\v
\end{array}\right)
\end{align}
with spinors $u$ and $v$,
the (still unnormalized) eigenvectors for given energy $\varepsilon $ and $\vec{k}_\parallel=0$ are
\begin{align}\label{sols}
\left(\begin{array}{r}
1\\0\\0\\\tilde \gamma
\end{array}\right) e^{\pm \i k_+x},
\left(\begin{array}{r}
0\\1\\-\tilde \gamma\\0
\end{array}\right) e^{\pm \i k_+x},
\left(\begin{array}{r}
0\\ \gamma\\1\\0
\end{array}\right) e^{\pm \i k_-x},
\left(\begin{array}{r}
-\gamma\\0\\0\\1
\end{array}\right) e^{\pm \i k_-x},
\end{align}
where to first order in $|\Delta|/E_{\rm F}$ the wavevectors are
$k_\pm (\varepsilon) = k_{\rm F} \pm \i \sqrt{|\Delta|^2 - \varepsilon^2}/(\hbar v_{\rm F})$
and
\begin{align}\label{sols1}
\gamma (\varepsilon) = -\frac{\Delta }{ \varepsilon+\i \sqrt{|\Delta|^2 - \varepsilon^2}}, \quad
\tilde\gamma (\varepsilon) = +\frac{\Delta^\ast }{ \varepsilon+\i \sqrt{|\Delta|^2 - \varepsilon^2}}.
\end{align}
For $|\varepsilon | > |\Delta |$ the expression $\i \sqrt{|\Delta|^2 - \varepsilon^2}$
is replaced by $\varepsilon\sqrt{1-(|\Delta|/\varepsilon)^2}$ (which corresponds to $\varepsilon \to \varepsilon+\i 0^+$ with infinitesimally small positive $0^+$).
In the N layer the solutions are obtained by setting $\Delta=0$. In the FI only evanescent solutions of the form $e^{-\kappa_\uparrow x}$ and $e^{-\kappa_\downarrow x}$ are allowed. The reflection coefficients connect incoming ($e^{\i k_+x}$, $e^{-\i k_-x}$) solutions with outgoing ($e^{-\i k_+x}$, $e^{\i k_-x}$) solutions.
For scattering from electron-like to electron-like Bogoliubov quasiparticles and for electron-like to hole-like Bogoliubov quasiparticles in leading order in $|\Delta|/E_{\rm F}$ they are
\begin{align}\label{refl}
r_{e\uparrow \to e\uparrow} = \frac{e^{2\i d k_{\rm F}}e^{2\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{\i \vartheta_\uparrow}(1+\gamma \tilde \gamma )}{1+\gamma \tilde \gamma e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{\i \vartheta }}, &\quad
r_{e\uparrow \to h\downarrow} = \frac{-\tilde \gamma (1-e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{\i \vartheta })}{1+\gamma \tilde \gamma e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{\i \vartheta }},\\
r_{h\uparrow \to h\uparrow} = \frac{e^{-2\i d k_{\rm F}}e^{2\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{-\i \vartheta_\uparrow}(1+\gamma \tilde \gamma )}{1+\gamma \tilde \gamma e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{-\i \vartheta }}, &\quad
r_{h\uparrow \to e\downarrow} = \frac{-\gamma (1-e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{-\i \vartheta })}{1+\gamma \tilde \gamma e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{-\i \vartheta }},
\end{align}
and similar relations hold for $\uparrow \leftrightarrow \downarrow $ and simultaneously $\vartheta \to -\vartheta $, $\gamma \to -\gamma $, $\tilde \gamma \to -\tilde \gamma $.
These relations have a simple interpretation.
The {\it coherence functions} $\gamma $ and $\tilde \gamma $ represent probability amplitudes for hole-to-particle conversion ($-\gamma $) or particle-to-hole conversion ($-\tilde \gamma $), whereas the factors $e^{\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}$ represent the electron-hole dephasing when crossing the N layer. Thus, the factors $\gamma \tilde \gamma e^{4\i d\frac{ \varepsilon}{\hbar v_{\rm F}}}e^{\i \vartheta }$ represent a Rowell-McMillan process of four times crossing N with two reflections from FI (once as particle and once as hole, contributing $e^{\i \vartheta_\uparrow }$ and $e^{-\i \vartheta_\downarrow }$), and two Andreev conversions.
When this factor equals $-1$, which happens for energies below the gap,
a bound state appears in N due to constructive interference between particles and holes.
Note that for $|\varepsilon |\le |\Delta|$ the coherence functions have unit modulus: $|\gamma|=|\tilde \gamma| = 1$, such that with $\Delta =|\Delta|e^{\i\chi}$ one can write
$\gamma = \i e^{\i \Psi(\varepsilon) }e^{\i \chi} $ and
$\tilde\gamma = -\i e^{\i \Psi(\varepsilon) } e^{-\i \chi}$, with $\sin \Psi = \varepsilon/|\Delta|$, $\cos \Psi >0$.
For $|\varepsilon |< |\Delta|$ only outgoing wavevectors $-k_+$ and $k_-$ lead to normalizable solutions in the superconductor, which are restricted to the bound state energies, given by the solution of $\varepsilon = |\Delta| \cos(\frac{ 2d\varepsilon}{\hbar v_{\rm F}} \pm \frac{\vartheta}{2})\mbox{sign}[\sin(\frac{ 2d\varepsilon}{\hbar v_{\rm F}} \pm \frac{\vartheta}{2})] $.
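This bound-state condition can be cross-checked numerically: below the gap the denominator in \eqref{refl} is $1+e^{\i\phi(\varepsilon)}$ with $\phi = 2\Psi + 4d\varepsilon/\hbar v_{\rm F} + \vartheta$, so a bound state sits at $\phi=\pi$ (mod $2\pi$). The sketch below (units $\hbar v_{\rm F}=|\Delta|=1$; the values of $d$ and $\vartheta$ are arbitrary illustrative choices) locates one such root by bisection and compares it with the closed-form condition quoted above:

```python
import numpy as np

# units: hbar*v_F = 1 and |Delta| = 1; d and theta are arbitrary
# illustrative choices, not tied to the figure
Delta, d, theta = 1.0, 0.5, np.pi / 2

def phase(eps):
    """Phase of the Rowell-McMillan factor gamma*gamma_t*e^{4i d eps}*e^{i theta}.

    Below the gap gamma*gamma_t = e^{2i Psi} with sin(Psi) = eps/|Delta|,
    so the denominator 1 + e^{i*phase} vanishes when phase = pi (mod 2 pi).
    """
    return 2 * np.arcsin(eps / Delta) + 4 * d * eps + theta

# phase(eps) increases monotonically, so bisect for phase = pi
lo, hi = -Delta + 1e-12, Delta - 1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phase(mid) < np.pi else (lo, mid)
eps_b = 0.5 * (lo + hi)

# closed-form bound-state condition quoted in the text ('+' branch)
arg = 2 * d * eps_b + theta / 2
eps_closed = Delta * np.cos(arg) * np.sign(np.sin(arg))
```

Further subgap states (the "$\pm$" branch and roots with the phase shifted by multiples of $2\pi$) can be located analogously; for $\vartheta=\pi$ the midgap solution $\varepsilon=0$ is recovered for any $d$.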
\begin{figure}[t]
\centering{(a) \hspace{1.5in} (b) \hspace{1.5in} (c) \hspace{1.5in}$\;$}\\
\centering{\includegraphics[width=1.7in]{Reh_th0_compressed.pdf}
\includegraphics[width=1.7in]{Reh_th05Pi_compressed.pdf}
\includegraphics[width=1.7in]{Reh_th1Pi_compressed.pdf}
}
\centering{(d) \hspace{1.5in} (e) \hspace{1.5in} (f) \hspace{1.5in}$\;$}\\
\centering{\includegraphics[width=1.75in]{Reh_d0_compressed.pdf}
\includegraphics[width=1.75in]{Reh_d1_compressed.pdf}
\includegraphics[width=1.75in]{Reh_d2_compressed.pdf}
}
\caption{Imaginary part of Andreev reflection amplitudes for
spin-up Bogoliubov quasiparticle to spin-down Bogoliubov quasihole,
Im$(r_{e\uparrow \to h\downarrow })$, at normal impact, for an S-N-FI structure with a normal region of thickness $d$, as function of energy $\varepsilon$, and of $d$ (in units of
$\xi_0=\hbar v_{F,z}/\Delta $ with $v_{F,z}$ the projection of the Fermi velocity on the surface normal).
The reflection amplitude at the N-FI interface is for spin-up $e^{\i\vartheta_\uparrow}$ and for spin-down $e^{\i\vartheta_\downarrow}$. The spin-mixing angle is defined as $\vartheta=\vartheta_\uparrow-\vartheta_\downarrow$. It has the values (a) $\vartheta=0$, (b) $\vartheta=\pi/2$, (c) $\vartheta=\pi$.
In (d)-(f) the thickness of the normal layer is fixed to (d) $d=0$, (e) $d=\xi_0$, (f) $d=2\xi_0$, and the spin-mixing angle $\vartheta $ is varied.
The negative reflection amplitude for spin-down quasiparticle to spin-up quasihole, $-$Im$(r_{e\downarrow \to h\uparrow })$, is obtained by inverting the energy axis, $\varepsilon \leftrightarrow -\varepsilon$.
}
\label{Reh_th}
\end{figure}
In figure \ref{Reh_th}
results for the quantity Im$(r_{e\uparrow \to h\downarrow})$ for normal impact are shown.
For energies above the gap the typical Rowell-McMillan oscillations are visible. Below the gap sharp bound states exist, the energy of which depends on both $\vartheta $ and $d$. In (a)-(c) the influence of varying $d$ is illustrated.
The special case $\vartheta=0$, shown in (a), corresponds to the classical Rowell-McMillan S-N-I structure. For $\vartheta=\pi$, shown in (c), a midgap bound state exists for all $d$.
For $0<\vartheta <\pi$ particle-hole symmetry is broken, as seen in figure \ref{Reh_th} (b) and (d)-(f).
A corresponding bound state at negative energy exists for $r_{e\downarrow \to h\uparrow}$. The two corresponding bound states in the density of states have opposite spin polarization. With increasing $d$ more and more bound states enter the sub-gap region, emerging from the continuum Rowell-McMillan resonances.
For $d=0$ a bound state exists for any nonzero $\vartheta $, as shown in (d), and as discussed in Ref. \cite{Fogelstrom00}.
The influence of $\vartheta $ is shown for various $d$ in (d)-(f).
These results can be interpreted as
one spin-polarised chiral branch crossing the gap region with increasing $\vartheta $. For $d=0$ this happens when varying $\vartheta $ from zero to $2\pi$. For $d\ne 0$ a variation exceeding $2\pi$ (possibly by multiples) is needed until the branch crosses the entire gap.
The figure shows results for normal impact, $k_\parallel=0$. In general, an integration over $k_\parallel $ will lead to Andreev bands instead of sharp bound states, similar to the case of de Gennes-Saint-James bound states in S-N-I structures.
Finally, note that with the reflection matrix \eqref{Smatrix} the resulting coherence function develops a spin-triplet component from a singlet component $\gamma^{\rm in}=\gamma_0 \i\sigma_2$:
\begin{align}
\left(\begin{array}{cc} 0 &\gamma^{\rm out}_{\uparrow \downarrow}\\ \gamma^{\rm out}_{\downarrow \uparrow} & 0 \end{array}\right) &=
\left(\begin{array}{cc} e^{\i\vartheta_\uparrow} &0 \\ 0& e^{\i\vartheta_\downarrow} \end{array}\right)
\left(\begin{array}{cc} 0 &\gamma_0\\ -\gamma_0 & 0 \end{array}\right)
\left(\begin{array}{cc} e^{-\i\vartheta_\uparrow} &0 \\ 0& e^{-\i\vartheta_\downarrow} \end{array}\right) \nonumber \\
&=
\cos(\vartheta)
\left(\begin{array}{cc} 0 &\gamma_0\\ -\gamma_0 & 0 \end{array}\right)
+ \i \sin(\vartheta )
\left(\begin{array}{cc} 0 &\gamma_0\\ \gamma_0 & 0 \end{array}\right)
\label{mixing0}
\end{align}
which implies that a singlet pair is scattered into a superposition of a singlet and a triplet pair:
\begin{align}
(\uparrow\downarrow-\downarrow\uparrow) \to
(\uparrow\downarrow e^{\i\vartheta }-\downarrow\uparrow e^{-\i\vartheta }) =
\cos(\vartheta) (\uparrow\downarrow-\downarrow\uparrow) +\i \sin(\vartheta) (\uparrow\downarrow+\downarrow\uparrow) .
\label{mixing}
\end{align}
\subsection{Andreev bound states in an S-FI-N structure}
As a next example I summarize some results from Refs. \cite{Linder09,Linder10} and section IV of Ref. \cite{Eschrig09} for an S-FI-N junction, consisting of a bulk superconductor coupled via a thin ferromagnetic insulator (such as EuO) of thickness $d_{\rm I}$ to a normal layer of thickness $d_{\rm N}$. I assume here the ballistic case and refer for the diffusive case to Refs. \cite{Linder09,Linder10,Cottet11}.
The interface is characterized by potentials $V_\uparrow $ and $V_\downarrow=V_\uparrow+2J$, such that the energy dispersion in the superconductor is $\hbar^2\vec{k}^2/2m$, in the normal metal $V_{\rm N}+\hbar^2\vec{k}^2/2m$, and in the barrier $V_\sigma+\hbar^2\vec{k}^2/2m$, $\sigma\in\left\{\uparrow,\downarrow\right\}$. The parameter $V_{\rm N}$ is used to vary the Fermi surface mismatch.
The Fermi wave vectors and Fermi velocities in S and N are $\vec{k}_{\rm F,S}$, $\vec{v}_{\rm F,S}$ and $\vec{k}_{\rm F,N}$, $\vec{v}_{\rm F,N}$, respectively. The Fermi energy is $E_{\rm F}=\hbar^2\vec{k}_{\rm F,S}^2/2m$.
For shorter notation we
introduce a directional vector for electrons moving in positive $x$-direction, $\vec{v}_{\rm F,N}=|\vec{v}_{\rm F,N}|\hat{\vec{n}}$ (i.e. $\hat n_x\ge 0$),
and the corresponding $x$-component of the Fermi velocity, $v_{{\rm F,N},x}\equiv v_{x}\ge 0$, in the normal metal (situated at $0\le x\le d_{\rm N}$).
It is convenient to define coherence functions $\gamma $ and $\tilde \gamma $ as 2$\times$2 spin matrices.
The coherence functions in the superconductor are given by $\gamma=\gamma_0 \i \sigma_2$ and $\tilde\gamma=\tilde\gamma_0 \i \sigma_2$, where $\gamma_0$ and $\tilde \gamma_0$ are given by the expressions in \eqref{sols1}.
The solutions in the normal metal are (for simplicity of notation I also suppress the arguments $\vec{k}_\parallel $ and $\varepsilon $ in $\gamma $ and $\tilde \gamma $)
\begin{align}
\gamma(\hat n_x,x)&= \gamma(\hat n_x,0) e^{2\i \varepsilon x/\hbar v_x}, \quad
\gamma(-\hat n_x,x)= \gamma(-\hat n_x,d_{\rm N}) e^{-2\i \varepsilon (x-d_{\rm N})/\hbar v_x} \\
\tilde \gamma(-\hat n_x,x)&= \tilde \gamma(-\hat n_x,0) e^{2\i \varepsilon x/\hbar v_x}, \quad
\tilde \gamma(\hat n_x,x)= \tilde \gamma(\hat n_x,d_{\rm N}) e^{-2\i \varepsilon (x-d_{\rm N})/\hbar v_x}
\end{align}
At $x=d_{\rm N}$ one obtains $\gamma(\hat n_x,d_{\rm N})=\gamma(-\hat n_x,d_{\rm N})\equiv \gamma_{\rm B}$ and
$\tilde \gamma(\hat n_x,d_{\rm N})=\tilde \gamma(-\hat n_x,d_{\rm N})\equiv \tilde \gamma_{\rm B}$,
with
\begin{align}
\gamma_{\rm B}= \left(\begin{array}{cc} 0& \gamma_+\\ -\gamma_-&0\end{array}\right),\quad
\tilde\gamma_{\rm B}= \left(\begin{array}{cc} 0& \tilde \gamma_+\\ -\tilde \gamma_-&0\end{array}\right).
\end{align}
The scattering parameters are the moduli of the transmission amplitudes, $t_\uparrow$ and $t_\downarrow$, the moduli of the reflection amplitudes, $r_\uparrow=(1-t_\uparrow^2)^{1/2}$ and $r_\downarrow=(1-t_\downarrow^2)^{1/2}$ (equal on both sides of the FI), as well as the phase factors of the scattering amplitudes (all these parameters depend on $\vec{k}_\parallel$).
The relevant energy scale in the normal metal for given direction $\hat{\vec{n}}(\vec{k}_\parallel)$ is
\begin{align}
\delta(\vec{k}_\parallel)=t_{\uparrow}(\vec{k}_\parallel)\; t_{\downarrow}(\vec{k}_\parallel ) \; \hat n_x(\vec{k}_\parallel ) \;\varepsilon_{\rm Th}, \qquad
\varepsilon_{\rm Th}=\hbar v_{\rm F,N}/2d_{\rm N},
\end{align}
with the Thouless energy $\varepsilon_{\rm Th}$.
Matching the wavefunctions at $x=0$ to the thin FI layer and the superconductor leads to $\tilde \gamma_-=-\gamma_+$ and $\tilde\gamma_+=-\gamma_-$, as well as \cite{Linder09,Linder10}
\begin{align}\label{eq:gamma}
\gamma_\sigma = - \frac{\delta }{\nu_\sigma + \i \sqrt{\delta^2-(\nu_\sigma+\i 0^+)^2}}
\end{align}
where $\sigma\in \{+,-\}$, and
the function $\nu_\sigma (\varepsilon )$ is defined as
\begin{align}\label{eq:u}
\nu _\sigma (\varepsilon) &= \hat n_x \varepsilon_{\rm Th} \left[\sin\left(\frac{\varepsilon }{\hat n_x\varepsilon_{\rm Th}} +
\sigma\vartheta_+
+ \Psi \right)
+r_\uparrow r_\downarrow \sin\left(\frac{\varepsilon }{\hat n_x\varepsilon_{\rm Th}} +
\sigma\vartheta_-
- \Psi\right)\right]
\end{align}
with $\vartheta_\pm = \frac{1}{2}(\vartheta_{\rm N}\pm \vartheta_{\rm S})$, where $\vartheta_{\rm N}$ and $\vartheta_{\rm S}$ are the spin-mixing angles for reflection at the FI-N interface and the S-FI interface, respectively, and the
variable $\sigma $ is to be understood as a factor $\pm 1$ for $\sigma=\pm$. Note that \eqref{eq:gamma} has the same form as \eqref{sols1} with the role of $\Delta $ and $\varepsilon $ taken over by
$\delta $ and $\nu_\sigma $, respectively.
Note also that $|\gamma_\sigma |=1$ for $|\nu_\sigma| <\delta $, even in the tunneling limit. This is an example of reflectionless tunneling at low energies and results from multiple reflections within the normal layer. Quasiparticles in the normal layer stay fully coherent in this energy range.
The density of states at the outer surface of the N layer is obtained as
\begin{align}\label{eq:dos}
\frac{N_{\rm B}(\varepsilon)}{N_{\rm F,N}} = \mbox{Re} \sum_\sigma \left\langle \frac{1+\gamma_\sigma^2}{1-\gamma_\sigma^2} \right\rangle=
\mbox{Re} \sum_\sigma \left\langle \frac{|\nu_\sigma(\varepsilon )|}{\sqrt{\nu_\sigma(\varepsilon)^2-\delta^2}}\right\rangle
\end{align}
where $\langle \ldots \rangle $ denotes Fermi surface averaging, and $N_{\rm F,N}$ is the density of states at the Fermi level of the bulk normal metal.
Results for this density of states are shown in figure \ref{SFIN}.
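The equality of the two expressions in \eqref{eq:dos} can be checked numerically for a single trajectory. The sketch below (normal impact $\hat n_x=1$, no Fermi-surface average; all parameter values are illustrative choices of my own, not those of the figure) implements \eqref{eq:u} and \eqref{eq:gamma} directly:

```python
import numpy as np

# single-trajectory sketch; parameters are illustrative
nx, eTh, Delta = 1.0, 1.0, 5.0              # n_x, Thouless energy, gap
t_up, t_dn = 0.8, 0.6                        # transmission amplitudes
r_up, r_dn = np.sqrt(1 - t_up**2), np.sqrt(1 - t_dn**2)
th_N, th_S = 0.4, 0.2                        # spin-mixing angles
th_p, th_m = (th_N + th_S) / 2, (th_N - th_S) / 2
delta = t_up * t_dn * nx * eTh               # effective gap scale delta(k_par)

def nu(eps, sigma):
    """nu_sigma(eps) of eq. (eq:u); sigma = +1 or -1."""
    Psi = np.arcsin(eps / Delta)
    x = eps / (nx * eTh)
    return nx * eTh * (np.sin(x + sigma * th_p + Psi)
                       + r_up * r_dn * np.sin(x + sigma * th_m - Psi))

def dos_from_gamma(eps, sigma, eta=1e-9):
    """Re[(1+gamma^2)/(1-gamma^2)] with gamma from eq. (eq:gamma)."""
    n = nu(eps, sigma) + 1j * eta            # retarded continuation
    g = -delta / (n + 1j * np.sqrt(delta**2 - n**2))
    return np.real((1 + g**2) / (1 - g**2))

def dos_closed(eps, sigma):
    """Re[|nu|/sqrt(nu^2 - delta^2)], the closed form of eq. (eq:dos)."""
    n = nu(eps, sigma)
    return np.real(np.abs(n) / np.sqrt((n**2 - delta**2) + 0j))
```

Both routes agree wherever $|\nu_\sigma|>\delta$; for $|\nu_\sigma|<\delta$ both give zero up to the bound-state poles broadened by $\eta$.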
\begin{figure}[t]
\centering{
\includegraphics[width=1.7in]{N_k05.pdf}
\includegraphics[width=1.7in]{N_k1.pdf}
\includegraphics[width=1.7in]{N_k10.pdf}
}
\caption{
Energy-resolved DOS in the normal metal for different values of the interface exchange field $J$.
The energy scale is $\delta_0=(t_{\uparrow}t_{\downarrow} \varepsilon_{\rm Th})_{\vec{k}_\parallel=0}$,
with the Thouless energy $\varepsilon_{\rm Th}=\hbar v_{\rm F,N}/2d_{\rm N}$.
The interlayer thickness is $d_\text{I}=2/k_{\rm F,S}$
and the interface potentials are $V_\uparrow=1.2 E_{\rm F}$, $V_\downarrow=V_\uparrow+2J$.
The width of the normal layer is $d_{\rm N}=\hbar v_{\rm F,N} /\Delta$.
The inset in the lower left corner of each panel illustrates the Fermi-surface mismatch:
in (a) $k_{\rm F,N}=0.5 k_{\rm F,S}$, in (b) $k_{\rm F,N}=k_{\rm F,S}$, and in (c) $k_{\rm F,N}=10 k_{\rm F,S}$. Adapted from \cite{Linder10}.
Copyright (2010) by the American Physical Society.
}
\label{SFIN}
\end{figure}
The panels (a)-(c) show examples for different Fermi surface mismatches. In (c) there are non-transmissive channels present in the normal layer ($|\vec{k}_\parallel|>k_{\rm F,S}$), leading to a large constant background density of states.
In each panel, the curve for $J=0$
corresponds to the case of a non-spin-polarized SIN junction. There is a critical value $J_{\rm crit}$ (independent of the Fermi surface mismatch and equal to $\approx 0.15E_{\rm F}$ in the figure) above which the system is in a state where no singlet correlations are present in the normal metal at the chemical potential ($\varepsilon=0$), and pure odd-frequency spin-triplet correlations remain. In this range the density of states is enhanced above its bulk normal-state value \cite{Linder10}.
On either side of this critical value the density of states decreases as function of $J$; however, for $J>J_{\rm crit}$ it always stays above $N_{\rm F,N}$.
In the diffusive limit a similar scenario arises, with a peak centered at zero energy in the density of states \cite{Linder09, Linder10}.
A zero-energy peak in the density of states has been suggested as a signature of odd-frequency spin-triplet pairing also in hybrid structures with an itinerant ferromagnet or a half-metallic ferromagnet coupled to a superconductor \cite{Yokoyama07,Asano07,Braude07}.
It is interesting to study the tunneling limit, $t_\uparrow\ll 1$, $t_\downarrow \ll 1$, for small excitation energies $\varepsilon\ll \mbox{min}(\varepsilon_{\rm Th},\Delta )$ and small spin-mixing angles $\vartheta_{\rm N}$, $\vartheta_{\rm S}$.
Then $\nu_\sigma \approx 2\varepsilon+\sigma \hat n_x \varepsilon_{\rm Th} \vartheta_{\rm N}$, i.e. $\nu_\sigma $ depends only on the spin mixing angle at the FI-N interface, which acts in this case as an (anisotropic) effective exchange field $b=\hat n_x \varepsilon_{\rm Th}\vartheta_{\rm N}/2$ on the quasiparticles. For diffusive structures, a similar connection between an effective exchange field and the spin-mixing angle has been made \cite{Hernando02}.
The parameter $\delta/2=t_\uparrow t_\downarrow \hat n_x \varepsilon_{\rm Th}/2 $ on the other hand acts as effective (anisotropic) gap function. For each direction $\hat{\vec{n}}$, the gap closes at a critical value of effective exchange field, $b=\delta/2$, which happens for $\vartheta_{\rm N}=t_\uparrow t_\downarrow $.
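This tunneling-limit expansion can be verified numerically. The sketch below (self-chosen parameter values deep in the tunneling limit; variable names are ad hoc) compares the exact $\nu_\sigma$ of \eqref{eq:u} with $2\varepsilon+\sigma \hat n_x\varepsilon_{\rm Th}\vartheta_{\rm N}$ and evaluates the effective exchange field and effective gap:

```python
import numpy as np

# tunneling limit, small energies and small spin-mixing angles;
# all parameter values are illustrative
nx, eTh, Delta = 1.0, 1.0, 1.0
t_up = t_dn = 0.05
r_up, r_dn = np.sqrt(1 - t_up**2), np.sqrt(1 - t_dn**2)
th_N, th_S = 0.02, 0.01
th_p, th_m = (th_N + th_S) / 2, (th_N - th_S) / 2

def nu_exact(eps, sigma):
    """nu_sigma(eps) of eq. (eq:u), without approximation."""
    Psi = np.arcsin(eps / Delta)
    x = eps / (nx * eTh)
    return nx * eTh * (np.sin(x + sigma * th_p + Psi)
                       + r_up * r_dn * np.sin(x + sigma * th_m - Psi))

eps = 1e-3
b = nx * eTh * th_N / 2                  # effective exchange field
nu_approx = {s: 2 * eps + s * 2 * b for s in (+1, -1)}

delta_half = t_up * t_dn * nx * eTh / 2  # effective gap delta/2
theta_crit = t_up * t_dn                 # b = delta/2  <=>  th_N = t_up*t_dn
```

For the chosen $\vartheta_{\rm N}>t_\uparrow t_\downarrow$ the effective exchange field exceeds the effective gap, i.e. the trajectory is already in the gapless regime.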
\section{Andreev bound states in Josephson junctions with strongly spin-polarized ferromagnets}
\subsection{Triplet rotation}
Interfaces with strongly spin-polarized ferromagnets polarize the superconductor in proximity with it, as shown in the previous section. However, in order for superconducting correlations to penetrate the ferromagnet, it is necessary to turn the triplet correlations of the form $\uparrow \downarrow + \downarrow \uparrow $ into equal-spin pair correlations of the form $\uparrow \uparrow $ and $\downarrow \downarrow $. The reason is that correlations involving spin-up and spin-down electrons carry a phase factor oscillating with wavevector $k_{\rm F\uparrow}-k_{\rm F\downarrow}$, which in strongly spin-polarized ferromagnets varies on a short length scale. This leads to destructive interference and allows one to neglect such pair correlations on the superconducting coherence length scale \cite{Eschrig07}.
The way to achieve this is to allow for a non-trivial magnetization profile at the interface between the ferromagnet and the superconductor. This can include for example strong spin-orbit coupling, or a misaligned (with respect to the bulk magnetization) magnetic moment in the interface region. For strongly spin-polarized ferromagnets this has been suggested in Refs. \cite{Eschrig03,Kopu04,Eschrig08}. For weakly spin-polarized ferromagnets a theory was developed in 2001 involving a spiral inhomogeneity on the scale of the superconducting coherence length \cite{Bergeret01,Kadigrobov01}. A multilayer arrangement was subsequently also suggested \cite{Volkov03,Houzet07}.
For various reviews of this field see Refs. \cite{Izyumov02,Fominov03,Eschrig04,Golubov04,Buzdin05,Bergeret05,Eschrig07,Lyuksyutov07,Eschrig11,Blamire14,Linder15,Eschrig15}.
The idea is to rotate the triplet component, once created by spin-mixing phases in the S-F interfaces, into equal-spin triplet amplitudes with respect to the bulk magnetization of the ferromagnet \cite{Eschrig11}.
This is achieved by writing a triplet component with respect to a new axis
\begin{align}
(\uparrow\downarrow+\downarrow\uparrow)_{\alpha,\phi} &=
-\sin (\alpha) \left[e^{-\i\phi} (\uparrow\uparrow)_z - e^{\i\phi} (\downarrow\downarrow)_z\right]
+ \cos (\alpha) (\uparrow\downarrow+\downarrow\uparrow)_z ,
\label{triplet_z}
\end{align}
where $\alpha $ and $\phi $ are polar and azimuthal angles of the new quantization axis.
Then, if a thin FI layer oriented along the $(\alpha,\phi)$ direction is inserted between the superconductor and the strongly spin-polarized ferromagnet with magnetization in $z$ direction, equal spin-correlations can penetrate with amplitudes $-\sin (\alpha) e^{-\i \phi} $ and $\sin (\alpha) e^{\i \phi}$, respectively. These correlations are long-range and not affected by dephasing on the short length scale associated with $k_{\rm F\uparrow}-k_{\rm F\downarrow}$.
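Relation \eqref{triplet_z} is a spin-1 rotation and can be verified by rotating both constituent spins with the same SU(2) matrix; in the sketch below the basis ordering and the angle values are arbitrary choices:

```python
import numpy as np

alpha, phi = 0.9, 0.4          # polar, azimuthal angle of the new axis
c, s = np.cos(alpha / 2), np.sin(alpha / 2)

# SU(2) rotation e^{-i phi sigma_z/2} e^{-i alpha sigma_y/2}
# taking the z axis into the (alpha, phi) direction
R = np.array([[c * np.exp(-1j * phi / 2), -s * np.exp(-1j * phi / 2)],
              [s * np.exp(+1j * phi / 2),  c * np.exp(+1j * phi / 2)]])

# two-spin basis ordering: (uu, ud, du, dd)
t0_z = np.array([0, 1, 1, 0], dtype=complex)   # (ud + du) along z
t0_rot = np.kron(R, R) @ t0_z                  # same state along (alpha, phi)

# right-hand side of eq. (triplet_z), expressed in the z basis
expected = np.array([-np.sin(alpha) * np.exp(-1j * phi),
                     np.cos(alpha), np.cos(alpha),
                     np.sin(alpha) * np.exp(+1j * phi)])
```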
\subsection{Pair amplitudes at an S-FI-F interface}
It is instructive to consider the scattering matrix of an S-FI-F interface between a superconductor and an itinerant ferromagnet, with a thin FI interlayer of width $d$, in the tunneling limit. In this case one can achieve an intuitive understanding of the various spin-mixing phases involved in the reflection and transmission processes. Denoting
wavevector components perpendicular to the interface as
$k$ in the superconductor, $q_\uparrow $ and $q_\downarrow $ in the ferromagnet (I assume $\vec{k}_\parallel $ such that both spin directions are itinerant), and imaginary wavevectors $\i\kappa_\uparrow$ and $\i\kappa_\downarrow$ in the FI, the FI magnetic moment aligned in direction $(\sin \alpha \cos \varphi , \sin \alpha \sin \varphi, \cos \alpha) $, and a F magnetization aligned with the $z$-direction in spin space ($\alpha=0$), matching of wavefunctions leads to a scattering matrix
\begin{align}
{\bf S}= \left( \begin{array}{cc}
\bar D_\varphi D_\alpha \Phi_{\rm S}^{\frac{1}{2}} &0\\ 0& \i \bar D_\varphi D_\beta \Phi_{\rm F}^{\frac{1}{2}}
\end{array} \right)
\left( \begin{array}{cc}
1& 2 \nu_{\rm S} {\cal T} \nu_{\rm F} \\
2 \nu_{\rm F} {\cal T}^\dagger \nu_{\rm S} & -1
\end{array} \right)
\left( \begin{array}{cc}
\Phi_{\rm S}^{\frac{1}{2}} D_\alpha^\dagger \bar D_\varphi^\dagger &0\\ 0& \i \Phi_{\rm F}^{\frac{1}{2}} D_\beta^\dagger \bar D_\varphi^\dagger
\end{array} \right)
\label{srot}
\end{align}
where $\Phi_{\rm S,F}$ are phase matrices which include
the spin-mixing phase factors, $\bar D_\varphi $, $D_\alpha $, $D_\beta $ are spin-rotation matrices, $\nu_{S,F}$ carry information about S-FI and FI-F wavevector mismatch, and ${\cal T}$ contains the tunneling amplitudes including wavevector mismatch between S and F.
In particular, if one denotes diagonal matrices with diagonal elements $a$, $b$ by diag$(a,b)$, then
$K=\mbox{diag}(\kappa_\uparrow/k,\kappa_\downarrow/k)$, $Q=\mbox{diag}(q_\uparrow/k,q_\downarrow/k)$,
the spin-rotation matrices $\bar D_\varphi$, $D_\alpha $ between the quantization axis in the FI and the $z$ axis, and the phase matrices $\Phi_{\rm S,F}$ are
\begin{align}
\bar D_\varphi = \left( \begin{array}{cc}
e^{\frac{\i}{2}\varphi}&0\\0& e^{-\frac{\i}{2}\varphi} \end{array} \right), \;
D_\alpha = \left( \begin{array}{cc} \cos \frac{\alpha}{2} & -\sin \frac{\alpha}{2} \\
\sin \frac{\alpha}{2} & \cos \frac{\alpha}{2} \end{array} \right), \;
\Phi_{\rm S,F}=\left( \begin{array}{cc} e^{\i\vartheta_\uparrow^{\rm S,F}} &0\\
0&e^{\i\vartheta_\downarrow^{\rm S,F}} \end{array} \right),
\end{align}
and the spin-rotation matrix $D_\beta $ at the FI-F interface results from
$Q^{-\frac{1}{2}}D_\alpha K D_\alpha^\dagger Q^{-\frac{1}{2}} = D_\beta Z D_\beta^\dagger$ with $Z=\mbox{diag}(\zeta_\uparrow,\zeta_\downarrow)$. The angle $\beta $ vanishes for $\alpha=0$, and $\zeta_\uparrow $ varies from $\kappa_\uparrow/q_\uparrow $ at $\alpha=0$ to $\kappa_\downarrow/q_\uparrow $ at $\alpha=\pi$, correspondingly $\zeta_\downarrow $ varies from $\kappa_\downarrow/q_\downarrow$ to $\kappa_\uparrow/q_\downarrow$.
Also,
$\Phi_{\rm S} =(1-\i K)/(1+\i K)$,
$\Phi_{\rm F} =(1-\i Z)/(1+\i Z)$,
$\nu_{\rm S}= \sqrt{2/(1+K^2)}$, $\nu_{\rm F}=\sqrt{2/(1+Z^2)}$, and the tunneling amplitude is
${\cal T}=VA$ with $V=\mbox{diag}(e^{-\kappa_\uparrow d},e^{-\kappa_\downarrow d})$ and the real-valued mismatch matrix
$A=KD_\alpha^\dagger Q^{-\frac{1}{2}} D_\beta= D_\alpha^\dagger Q^{\frac{1}{2}}D_\beta Z$, of which the off-diagonal elements appear for $\alpha \ne 0, \pi$ only.
One can see from equation \eqref{srot} that the spin-mixing phases, which appear in the reflection amplitudes, also enter the transmission amplitudes; in the tunneling limit each side of the interface contributes one half of the spin-mixing phase \cite{Eschrig03}. Furthermore, one should notice that the interface is described by two spin-rotation matrices: one given by the misalignment of the FI magnetic moment with the $z$ axis in spin space, and one which combines the magnetization in F and the magnetic moment in FI. The latter appears because the wave function at the FI-F interface is delocalized over the FI-F interface region on the scale of the Fermi wavelength and experiences an averaged effective exchange field, which lies in the plane spanned by the $z$ axis and the direction of the FI magnetic moment (hence the same $\bar D_\varphi $ in equation \eqref{srot}).
Pair correlation functions $f$ are related to coherence functions by $f=-2\pi \i \gamma (1-\tilde \gamma \gamma)^{-1}$, with $f$, $\gamma$ and $\tilde \gamma$ being 2$\times$2 matrices in spin space \cite{Eschrig00}.
When both are small (near $T_{\rm c}$ or induced from a reservoir by tunneling through a barrier), $f$ and $\gamma $ are proportional.
Assuming an incoming singlet coherence function $\gamma_0$ in S,
the coherence functions reflected back into S and the ones transmitted to F can be calculated to linear order in the pair tunneling amplitude according to
\begin{align}
\gamma^{\rm (S)}_{\rm out} =
{\bf S}_{11}
\left( \begin{array}{cc} 0&\gamma_0\\-\gamma_0 &0\end{array} \right)
{\bf S}_{11}^\ast
, \quad
\gamma^{\rm (F)}_{\rm out} = {\bf S}_{21}
\left( \begin{array}{cc} 0&\gamma_0\\-\gamma_0 &0 \end{array} \right)
{\bf S}_{12}^\ast .
\end{align}
For the reflected amplitude in S one obtains ($\vartheta_{\rm S}=\vartheta_\uparrow^{\rm S}-\vartheta_\downarrow^{S}$)
\begin{align}
\frac{\gamma^{\rm (S)}_{\rm out}}{\gamma_0}=\left( \begin{array}{cc} -\i\sin \vartheta_{\rm S} \sin \alpha e^{-\i\varphi}
& \cos \vartheta_{\rm S} +\i\cos \alpha \sin \vartheta_{\rm S}\\
-\cos \vartheta_{\rm S} +\i\cos \alpha \sin \vartheta_{\rm S} & \i\sin \vartheta_{\rm S} \sin \alpha e^{\i\varphi}
\end{array} \right),
\end{align}
which is just equation \eqref{mixing0} rotated in spin space by the spherical angles $\alpha $ and $\varphi $. For the equal-spin coherence functions (or pair amplitudes) in F one obtains, up to leading order in the misalignment angles $\alpha $, $\beta $ (denoting $\vartheta_{\rm F}=\vartheta^{\rm F}_\uparrow-\vartheta^{\rm F}_\downarrow $),
\begin{align}
\frac{\gamma^{\rm (F)}_{\rm out \uparrow\uparrow}}{\gamma_0}\approx
-\i C e^{-\i\varphi }
\left\{ \frac{\nu_{\rm F\uparrow}}{\nu_{\rm F\downarrow} }
\sin \left(\frac{\vartheta_{\rm S}}{2} \right)
\left[
\sqrt{\frac{q_\downarrow }{q_\uparrow}}
\sin (\alpha )- \sin (\beta )\right]
+\sin \left(\frac{\vartheta_{\rm S}+\vartheta_{\rm F}}{2} \right)
\sin (\beta )
\right\}
\label{eqspin1}
\\
\frac{\gamma^{\rm (F)}_{\rm out \downarrow\downarrow}}{\gamma_0}\approx
+\i C e^{+\i\varphi }
\left\{ \frac{\nu_{\rm F\downarrow}}{\nu_{\rm F\uparrow}}
\sin \left(\frac{\vartheta_{\rm S}}{2} \right)
\left[ \sqrt{\frac{q_\uparrow }{q_\downarrow}}
\sin (\alpha ) -\sin (\beta )\right]
+\sin \left(\frac{\vartheta_{\rm S}+\vartheta_{\rm F}}{2}\right)
\sin (\beta )
\right\}
\label{eqspin2}
\end{align}
with $C=4e^{-(\kappa_\uparrow+\kappa_\downarrow)d}(\nu_{\rm S\uparrow} \nu_{\rm S\downarrow}
\nu_{\rm F\uparrow}\nu_{\rm F\downarrow}) \kappa_\uparrow \kappa_\downarrow/[k(q_\uparrow q_\downarrow )^{\frac{1}{2}}]$.
The transmitted $\uparrow\downarrow $ and $\downarrow\uparrow $ coherence functions
$\gamma^{\rm (F)}_{\rm out \uparrow\downarrow}\approx
C\gamma_0 e^{\frac{\i }{2}(\vartheta_{\rm S}+\vartheta_{\rm F})}$
and
$\gamma^{\rm (F)}_{\rm out \downarrow\uparrow}\approx
$-C\gamma_0e^{-\frac{\i }{2}(\vartheta_{\rm S}+\vartheta_{\rm F})}$ oscillate spatially as
$e^{\i\vec{k}_\parallel \vec{r}_\parallel} e^{\pm \i (q_\uparrow-q_\downarrow )x}$ in F, and are suppressed (except in ballistic one-dimensional channels) due to dephasing within a short distance
$1/|q_\uparrow-q_\downarrow|$ from the interface.
Importantly, from \eqref{eqspin1} and \eqref{eqspin2} it is visible that the equal-spin amplitudes acquire phases $\pm \varphi $ from the azimuthal angle in spin space, which play an important role in Josephson structures with half-metallic ferromagnets \cite{Eschrig07,Eschrig08} and with strongly spin-polarized ferromagnets when two interfaces with different azimuthal angles $\varphi_1$ and $\varphi_2$ are involved \cite{Grein09,Grein13}.
The misalignment of FI with F also induces a spin-flip term during reflection on the ferromagnetic side of the interface, which creates from an amplitude $\gamma_{\uparrow\uparrow}$ incoming in F a reflected amplitude $\gamma_{\downarrow\downarrow}=\gamma_{\uparrow\uparrow}e^{2\i\varphi}\sin^2\frac{\vartheta_{\rm F}}{2} \sin^2\beta $, and from an incoming amplitude $\gamma_{\downarrow\downarrow} $ a reflected amplitude $\gamma_{\uparrow\uparrow}=\gamma_{\downarrow\downarrow}e^{-2\i\varphi}\sin^2\frac{\vartheta_{\rm F}}{2} \sin^2\beta $. In this case, twice the azimuthal angle, $\pm 2\varphi $, enters.
\subsection{Andreev bound states in S-FI-HM-FI'-S junctions}
For an S-FI-HM interface with a half-metallic ferromagnet (HM),
in which one spin band (e.g.\ spin-down) is insulating and the other itinerant, equation \eqref{eqspin1} is modified to \cite{Eschrig08}
\begin{align}
\frac{\gamma^{\rm (F)}_{\rm out \uparrow\uparrow}}{\gamma_0}\approx
-4\i e^{-\i\varphi } e^{-(\kappa_\uparrow+\kappa_\downarrow)d}
\nu_{\rm S\uparrow} \nu_{\rm S\downarrow}
\nu_{\rm F\uparrow}^2 \frac{\kappa_\uparrow \kappa_\downarrow}{kq_\uparrow}
\sin \left(\frac{\vartheta_{\rm S}}{2}\right) \sin (\alpha ) .
\label{eqspinHM}
\end{align}
The same equation also holds at an S-FI-F interface when the conserved
wavevector $\vec{k}_\parallel $ is such that only one spin projection on the magnetization axis is itinerant in F. For strongly spin-polarized F this is an appreciable contribution to the transmitted pair correlations.
In order to describe Josephson structures, it is necessary to handle the spatial variation (and possibly phase dynamics) of both coherence amplitudes and superconducting order parameter (which are coupled to each other). For this, a powerful generalization of the 2$\times$2 spin-matrix coherence functions has been introduced \cite{Eschrig00,Eschrig09}, based on previous work for spin-scalar functions in equilibrium \cite{Nagato93,Schopohl95} and nonequilibrium \cite{Eschrig99}.
The coherence functions $\gamma (\vec{n},\vec{R},\varepsilon, t)$ fulfil transport equations
\begin{align}
\label{cricc1}
\i\hbar \, \vf \! \cdot \vnabla \gamma +
2\varepsilon \gamma
&= \gamma {\, \scriptstyle \circ}\, \tilde{\Da} {\, \scriptstyle \circ}\, \gamma +
\Big( \Sigma {\, \scriptstyle \circ}\, \gamma - \gamma {\, \scriptstyle \circ}\, \tilde{\va} \Big) - \Delta, \\
\label{cricc2}
\i\hbar \, \vf \! \cdot \vnabla \tilde{\ga} -
2\varepsilon \tilde{\ga}
&= \tilde{\ga} {\, \scriptstyle \circ}\, \Delta {\, \scriptstyle \circ}\, \tilde{\ga} +
\Big( \tilde{\va} {\, \scriptstyle \circ}\, \tilde{\ga} - \tilde{\ga} {\, \scriptstyle \circ}\, \Sigma \Big) - \tilde{\Da}
\end{align}
where $\vec{n}$ is a unit vector in the direction of ${\mbf v}_F$, and $\Sigma $ and $\Delta $ comprise particle-hole diagonal and off-diagonal self-energies, mean fields (e.g.\ for impurity scattering), and external potentials; $\Delta $ includes in particular the superconducting order parameter. The time-dependent case is included by the convolution over the internal energy-time variables in Wigner coordinate representation,
\begin{equation}
(A {\, \scriptstyle \circ}\, B)(\varepsilon ,t) \equiv
e^{\frac{\i}{2} (\partial_\varepsilon ^A\partial_t^B-\partial_t^A\partial_\varepsilon ^B)} A(\varepsilon ,t) B(\varepsilon ,t).
\end{equation}
In the time-independent case it reduces to a simple spin-matrix product. Furthermore, the particle-hole conjugation operation is defined by $\tilde A(\vec{n},\vec{R},\varepsilon, t)=A^\ast(-\vec{n},\vec{R},-\varepsilon,t)$.
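In a homogeneous bulk superconductor with $\Sigma=0$ the gradient term drops and \eqref{cricc1} reduces to the algebraic relation $2\varepsilon\gamma=\gamma\tilde\Delta\gamma-\Delta$, which is solved by the bulk coherence function of \eqref{sols1}. The following sanity check assumes the singlet matrix structure $\Delta\,\i\sigma_2$, whose particle-hole conjugate is $\Delta^\ast\i\sigma_2$ (parameter values are illustrative):

```python
import numpy as np

# homogeneous bulk, Sigma = 0: eq. (cricc1) without gradient term reads
# 2*eps*gamma = gamma @ Delta_t @ gamma - Delta_m
isig2 = np.array([[0, 1], [-1, 0]], dtype=complex)

Delta = np.exp(0.5j)            # |Delta| = 1, arbitrary gap phase
eps = 0.5 + 1e-12j              # retarded continuation, |eps| < |Delta|

root = 1j * np.sqrt(np.abs(Delta)**2 - eps**2)
g  = (-Delta / (eps + root)) * isig2       # bulk gamma of eq. (sols1)
Dm = Delta * isig2                          # singlet order-parameter matrix
Dt = np.conj(Delta) * isig2                 # its particle-hole conjugate

lhs = 2 * eps * g
rhs = g @ Dt @ g - Dm
```

The factor $(\i\sigma_2)^3=-\i\sigma_2$ is what reconciles the matrix equation with the scalar relation obeyed by $\gamma_0$.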
Particle-hole diagonal [$g=-\i \pi(2{\cal V}-\sigma_0)$ and $\tilde g=\i \pi(2\tilde{\cal V}-\sigma_0)$] and off-diagonal [$f=-2\pi\i {\cal F}$ and $\tilde f=2\pi \i \tilde{\cal F}$] pair correlation functions (quasiclassical propagators) are obtained in terms of these coherence functions by solving the following algebraic (or in the time-dependent case, differential) equations
\begin{align}
{\cal V}= \sigma_0+\gamma{\, \scriptstyle \circ}\, \tilde{\ga}{\, \scriptstyle \circ}\, {\cal V}, \quad
\tilde{\cal V}= \sigma_0+\tilde{\ga}{\, \scriptstyle \circ}\, \gamma{\, \scriptstyle \circ}\, \tilde{\cal V}, \quad
{\cal F}=\gamma+\gamma{\, \scriptstyle \circ}\, \tilde{\ga}{\, \scriptstyle \circ}\, {\cal F}, \quad
\tilde{\cal F}=\tilde{\ga}+\tilde{\ga}{\, \scriptstyle \circ}\, \gamma{\, \scriptstyle \circ}\, \tilde{\cal F} .
\end{align}
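In the time-independent case these equations are solved by simple matrix inversion. The sketch below does this for a bulk singlet superconductor (subgap energy; parameter values are illustrative) and recovers the bulk expressions $g=-\pi\varepsilon/\sqrt{|\Delta|^2-\varepsilon^2}$ and $f=\pi\Delta/\sqrt{|\Delta|^2-\varepsilon^2}$ together with the normalization $g^2-f\tilde f=-\pi^2$, as follows from the definitions above:

```python
import numpy as np

isig2 = np.array([[0, 1], [-1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Delta, eps = 1.0, 0.4 + 1e-12j              # subgap energy, retarded branch
root = 1j * np.sqrt(np.abs(Delta)**2 - eps**2)
g_c  = (-Delta / (eps + root)) * isig2      # gamma (bulk, singlet)
gt_c = (np.conj(Delta) / (eps + root)) * isig2   # gamma-tilde

V  = np.linalg.inv(I2 - g_c @ gt_c)         # solves V = 1 + gamma gamma-t V
F  = V @ g_c                                # solves F = gamma + gamma gamma-t F
Vt = np.linalg.inv(I2 - gt_c @ g_c)
Ft = Vt @ gt_c

g_prop  = -1j * np.pi * (2 * V - I2)        # g  = -i pi (2 V - 1)
f_prop  = -2j * np.pi * F                   # f  = -2 pi i F
ft_prop =  2j * np.pi * Ft                  # f~ =  2 pi i F~
```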
\begin{figure}[t]
\centering{
\includegraphics[width=2.4in]{ALL2b_col.pdf}
\hspace{0.2in}
\includegraphics[width=2.4in]{ALL2b_col01.pdf}
}
\caption{
Local density of Andreev states (LDOS)
at the HM side of the S-FI-HM interface in an S-FI-HM-FI'-S Josephson structure, as a function of the phase difference $\Delta\chi $ over the junction,
for quasiparticles with normal impact, at $T = 0.05T_{\rm c}$.
All states are fully spin-polarized.
(a) and (c) Dispersion of the
maxima of the LDOS as a function of phase difference. Regions with low LDOS are
white; regions of high LDOS (bands of Andreev bound states) are shaded. The signs indicate the direction of the current
carried by the Andreev states. (b) and (d) show spectra for a fixed
phase difference, both for positive (full lines) and negative (dashed lines) propagation direction. (a)-(b) are for a large misalignment between FI and HM, and (c)-(d) for a small misalignment.
(b) and (d) from \cite{Eschrig03}.
Copyright (2003) by the American Physical Society.
}
\label{SHM}
\end{figure}
In figure \ref{SHM} an example of a fully self-consistent calculation of the spectrum of subgap Andreev states in an S-FI-HM-FI'-S junction is shown, obtained by solving equations \eqref{cricc1}, \eqref{cricc2} as well as the self-consistency equation for the superconducting order parameter in S.
For details of the calculation and parameters see \cite{Eschrig03,Eschrig04}.
The ferromagnetic insulating barriers FI and FI' are taken identical in this calculation, and the spectra are shown at the half-metallic side of the S-FI-HM interface.
The most prominent feature in these spectra is an Andreev quasiparticle band centered at zero energy \cite{Eschrig03,Eschrig09,Halterman09}. Further bands at higher excitation energies are separated by gaps.
The zero-energy band is a characteristic feature of the nature of superconducting correlations in the half metal: they are spin-triplet with a phase shift of $\pm \pi/2$ with respect to the singlet correlations they are created from.
The dispersion of the Andreev states with applied phase difference between the two S banks determines the direction of the current carried by these states. This direction is indicated in the figure by $+$ and $-$ signs. Panels (b) and (d) show, for a selected phase difference, the spectra for positive and negative propagation direction. These spectra, multiplied by the equilibrium distribution function (Fermi function), determine the contribution of the Andreev bound states to the Josephson current in the system.
These spectra are shown for normal impact direction ($\vec{k}_\parallel=\vec{0}$). An integration over $\vec{k}_\parallel$ gives the local density of states.
\begin{figure}[b]
\centering{
\includegraphics[width=1.25in]{DOS13a_2.pdf}
\includegraphics[width=1.25in]{DOS13_2.pdf}
\includegraphics[width=1.25in]{DOS12a_1.pdf}
\includegraphics[width=1.25in]{DOS12_1.pdf}
}
\caption{
Local density of states in the center of a
current biased high-transmissive symmetric Josephson junction for
(a) an S-N-S junction and (b)-(d) an S-FI-HM-FI-S junction.
In (a) and (b) the phase difference $\Delta \chi$ over the junction is varied from 0 to $\pi$.
In (b) and (c) the length of the junction is varied for a zero-junction and a $\pi$-junction.
The temperature is $T=0.1T_{\rm c}$, the coherence length of the half metal $\xi_0=\hbar |{\mbf v}_F |/2\pi T_{\rm c}$.
The FI misalignment angle is $\alpha =\pi/2$.
The transmission parameter $t$ and spin-mixing angle $\vartheta_{\rm S}$ depend on the impact angle $\Psi_{\vec{n}}$ measured from the surface normal; this is modeled here by
$t(\Psi_{\vec{n}})=t_0 \cos \Psi_{\vec{n}}/(1-t_0^2\sin^2 \Psi_{\vec{n}})^{\frac{1}{2}}$ and $\vartheta_{\rm S}=\vartheta_0 \cos \Psi_{\vec{n}}$.
Adapted from \cite{Eschrig09}.
Copyright (2009) by the American Physical Society.
}
\label{SHM1}
\end{figure}
For the case that one can neglect the variation of the order parameter $\Delta $ in S, one can derive quite a number of analytical expressions \cite{Eschrig07,Galaktionov08,Eschrig09}.
Examples of integrated spectra are shown in figure \ref{SHM1}, taken from Ref. \cite{Eschrig09}. In (a) the well-known spectrum of de Gennes-Saint James bound states is seen for an S-N-S junction \cite{Deutscher05}, showing a dispersion with phase bias $\Delta \chi $ between the two superconductors. At $\Delta \chi=\pi$ a zero energy bound state is present, which is a topological feature of the particular Andreev differential equations describing this system, for real-valued order parameters that change sign when going from the left S reservoir to the right S reservoir. The origin is the same as for the midgap state in polyacetylene \cite{Heeger88}, which is governed by similar differential equations. Such midgap states have been studied in a more general context by Jackiw and Rebbi \cite{Jackiw76} and ultimately have their deep mathematical foundation in the Atiyah-Patodi-Singer index theorem \cite{Atiyah75}.
For the S-FI-HM-FI-S junction, shown in (b)-(d), the prominent feature for all values of $\Delta \chi$ is the band of Andreev states centered around zero energy.
The width $W$ of this low-energy Andreev band depends on the parameter $P=\sin \left(\vartheta_{\rm S}/2\right) \sin (\alpha)$ and
can be calculated for the limit of short junctions ($L\to 0$) for $t=1$ as \cite{Eschrig09}
\begin{align}
W(\Delta \chi=0) = 2|\Delta | \sqrt{1-P^2}, \quad
W(\Delta \chi=\pi ) = |\Delta |(\sqrt{2-P^2}-P).
\end{align}
In the limit $P\to 0$ this gives
$W(\Delta \chi=0) = 2|\Delta |$ and $W(\Delta \chi=\pi ) = \sqrt{2} |\Delta | $.
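These limiting values are easy to check numerically; the following sketch (the function name and the choice $|\Delta |=1$ in the checks are my own) encodes the two formulas:

```python
import numpy as np

def andreev_band_width(dchi, Delta, theta_S, alpha):
    """Width W of the zero-energy Andreev band of a short (L -> 0),
    fully transmissive (t = 1) S-FI-HM-FI-S junction, for phase
    differences 0 and pi; P = sin(theta_S/2)*sin(alpha)."""
    P = np.sin(theta_S / 2) * np.sin(alpha)
    if dchi == 0:
        return 2 * abs(Delta) * np.sqrt(1 - P**2)
    if dchi == np.pi:
        return abs(Delta) * (np.sqrt(2 - P**2) - P)
    raise ValueError("formula given only for dchi = 0 or pi")
```

For $P\to 0$ this returns $2|\Delta |$ and $\sqrt{2}\,|\Delta |$, and the band narrows monotonically with increasing $P$.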
Note that compared to the S-N-S junction, the low-energy features disperse in the opposite direction when increasing $\Delta \chi$ for the S-FI-HM-FI-S junction. This means that the current flows in the opposite direction, and typically a $\pi $-junction is realized for identical interfaces. If the azimuthal interface misalignment angles $\varphi $ differ by $\pi $ in FI and FI', then this phase would add to $\Delta \chi$ according to equation \eqref{eqspinHM} and a zero-junction would be realized. In the general case, a $\phi$-junction appears, both for ballistic and diffusive structures \cite{Eschrig07,Eschrig08,Eschrig15a}.
\subsection{Spin torque in S-FI-N-FI'-S' structures}
Andreev states also play an important role in the non-equilibrium spin torque and in the spin-transfer torque in S-F structures \cite{Slonc96,Ralph08,Brataas12,Locatelli14}. Zhao and Sauls found that
in the ballistic limit the equilibrium torque is related to the spectrum of spin-polarized Andreev bound states, while the ac component, for small bias voltages, is determined by the nearly adiabatic dynamics of the Andreev bound states \cite{Zhao07,Zhao08}.
The equilibrium spin-transfer torque $\tau_{\rm eq} $ in an S-FI-N-FI'-S' structure
is related to the Josephson current $I_{\rm e}$, the phase difference between S and S', $\Delta \chi$, and the angle $\Delta \alpha $ between FI and FI', by
\cite{Waintal02}
\begin{align}
\hbar \frac{\partial I_{\rm e}}{\partial \Delta \alpha } = 2{\rm e} \frac{\partial \tau_{\rm eq}}{\partial \Delta \chi }.
\end{align}
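Since $I_{\rm e}=(2{\rm e}/\hbar)\,\partial F/\partial \Delta\chi $ and $\tau_{\rm eq}=\partial F/\partial \Delta\alpha $ both derive from the junction free energy $F(\Delta\chi ,\Delta\alpha )$, the relation above expresses the equality of mixed partial derivatives. A numerical sketch with a toy free energy (the product form of $F$ is an arbitrary assumption, chosen only for illustration):

```python
import numpy as np

hbar, e = 1.0, 1.0   # natural units for this sketch

def free_energy(dchi, dalpha, E0=1.0):
    # toy junction free energy F(dchi, dalpha); an assumed form,
    # used only to illustrate the Maxwell-type relation
    return -E0 * np.cos(dchi) * np.cos(dalpha)

def deriv(f, x, h=1e-5):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

def josephson_current(dchi, dalpha):
    # I_e = (2e/hbar) dF/d(dchi)
    return (2 * e / hbar) * deriv(lambda c: free_energy(c, dalpha), dchi)

def spin_torque(dchi, dalpha):
    # tau_eq = dF/d(dalpha)
    return deriv(lambda a: free_energy(dchi, a), dalpha)

# check: hbar d(I_e)/d(dalpha) = 2e d(tau_eq)/d(dchi)
chi, alpha = 0.7, 0.4
lhs = hbar * deriv(lambda a: josephson_current(chi, a), alpha)
rhs = 2 * e * deriv(lambda c: spin_torque(c, alpha), chi)
```

Both sides reduce to $2{\rm e}\,\partial^2 F/\partial \Delta\chi \partial \Delta\alpha $, so the relation holds for any (sufficiently smooth) $F$, not just this toy form.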
Similarly, as the dispersion of the Andreev bound states with superconducting phase difference $\Delta \chi $ yields the contribution of the bound state to the Josephson current, the dispersion of the Andreev states with $\Delta \alpha $ yields the contribution of this state to the spin current for spin polarisation in the direction of the spin torque.
The dc spin current shows subharmonic gap structure due to multiple Andreev reflections (MAR), similar to that for the charge current in voltage biased Josephson junctions \cite{Kummel85,Arnold87}. For high transmission junctions the main contribution to the dc spin current comes from consecutive spin rotations according to equation \eqref{SpinRot} when electrons and holes undergo MAR
\cite{Zhao08}.
Turning to ac effects, for a voltage $eV\ll\Delta $
the time evolution of the spin-transfer torque is governed by the nearly adiabatic dynamics of the Andreev bound states. However, the dynamics of the bound state spectrum leads to a non-equilibrium population of the Andreev bound states, for which reason the spin-transfer torque does not assume its instantaneous equilibrium value \cite{Zhao08}.
For the occupation to change, the bound state energy must evolve in time to the continuum gap edges, where it can rapidly equilibrate with the quasiparticles, as in the adiabatic limit of ac Josephson junctions \cite{Averin95}.
The effect of rough interfaces and of spin-flip scattering on spin-transfer torque in the presence of Andreev reflections has been discussed by Wang, Tang, and Xia \cite{Wang10}. For diffusive structures see \cite{Shomali11}.
Magnetization dynamics has also been addressed recently \cite{Linder11, Mai11, Linder14}.
Andreev sidebands in a system with two superconducting leads coupled by a precessing spin proved important for studying spin-transfer torques acting on the precessing spin \cite{Holmqvist11}. Spin-polarized Shapiro steps were studied in \cite{Holmqvist14}.
\section{Andreev spectroscopy in F-S and F-S-F' structures}
Andreev point contact spectra in S-F structures are modified with respect to those in S-N structures due to spin-filtering effects and the spin-sensitivity of Andreev scattering \cite{Jong95,Kashiwaya99,Zutic00,Mazin01,Kopu04,Perez04,Eschrig13}.
Spin-dependent phase shifts also crucially affect Andreev point contact spectra \cite{Lofwander10,Grein10,Piano11,Kupferschmidt11,Wilken12,Yates13,Sun15}.
Point contacts have lateral dimensions much smaller than the superconducting coherence lengths of the materials on either side of the contact.
Typically, a voltage will be applied over the contact, which makes it necessary to study, in addition to the coherence amplitudes $\gamma $ and $\tilde{\ga} $, also distribution functions. I follow the definition in Refs. \cite{Eschrig99,Eschrig00}, where 2$\times $2 distribution function spin-matrices $x $ and $\tilde{x} $ for particles and holes are introduced which obey the transport equations
\begin{align}
\label{keld1}
&\i\hbar \, (\vf \! \cdot \vnabla + \partial_t )x
-(\gamma {\, \scriptstyle \circ}\, \tilde{\Da} +\Sigma ) {\, \scriptstyle \circ}\, x +
x {\, \scriptstyle \circ}\, (\gamma {\, \scriptstyle \circ}\, \tilde{\Da} +\Sigma )^\dagger
= {\cal I}^{\rm coll}
\\
\label{keld2}
&\i\hbar \, (\vf \! \cdot \vnabla - \partial_t )\tilde{x}
-( \tilde{\ga} {\, \scriptstyle \circ}\, \Delta +\tilde{\va} ) {\, \scriptstyle \circ}\, \tilde{x} +
\tilde{x} {\, \scriptstyle \circ}\, ( \tilde{\ga} {\, \scriptstyle \circ}\, \Delta +\tilde{\va} )^\dagger
= \tilde{\cal I}^{\rm coll}
\end{align}
The distribution function matrices are hermitian, $x =x^\dagger $ and $\tilde{x} =\tilde{x}^\dagger $.
The right hand sides of equations \eqref{keld1} and \eqref{keld2} contain collision terms (see Ref. \cite{Eschrig00} for details), which vanish in ballistic structures. In general,
these distribution functions can be related to quasiclassical Keldysh propagators
$g^{\rm K}=-2\pi\i ({\cal V}{\, \scriptstyle \circ}\, x {\, \scriptstyle \circ}\, {\cal V}^\dagger - {\cal F}{\, \scriptstyle \circ}\, \tilde{x} {\, \scriptstyle \circ}\, {\cal F}^\dagger )$
and
$f^{\rm K}=-2\pi\i ({\cal V}{\, \scriptstyle \circ}\, x {\, \scriptstyle \circ}\, {\cal F}^\dagger - {\cal F}{\, \scriptstyle \circ}\, \tilde{x}{\, \scriptstyle \circ}\, {\cal V}^\dagger )$.
The Fermi distribution function for particles, $f_{\rm p}$, and holes, $f_{\rm h}$, is related to $x $ in the normal state by $f_{\rm p}=(1-x )/2$ and $f_{\rm h}=(1-\tilde{x} )/2$.
\subsection{Andreev processes in point contact geometry}
In this section I consider point contacts
of dimensions large compared to the Fermi wavelength and small compared to the superconducting coherence lengths. In this case, the wavevector $\vec{k}_\parallel $ parallel to the contact interface is approximately conserved.
The current on the ferromagnetic side of a point contact,
being directed along the interface normal,
can be decomposed into
\begin{equation}
I=I_{\rm I}-I_{\rm R} + I_{\rm AR}
\end{equation}
where the various terms are the incoming current, $I_{\rm I}$, the
normally reflected part, $I_{\rm R}$,
and the Andreev reflected part, $I_{\rm AR}$.
The sign convention here is such that a positive current denotes
a current into the superconductor. Thus, in the normal state the current $I$ is positive
when the voltage in the ferromagnet is positive.
The various currents can be expressed as
\begin{equation}
\label{def1}
I_{\rm X}= -
\frac{{\cal A} }{2\pi \hbar }
\int_{\cal A_{\rm cF}}
\frac{{\rm d}^2 S(\vec{k}_\parallel ) }{(2\pi )^2}
\int\limits_{-\infty }^{\infty } \frac{{\rm d}\varepsilon}{2}
{\rm e} \; j_{\rm X}
,
\end{equation}
where ${\rm e}=-|{\rm e}|$ is the charge of the electron, ${\cal A} $ is the contact area, and ${\cal A}_{\rm cF}$ is the projection of the Fermi surfaces in the ferromagnet on the contact plane.
For each value of $\vec{k}_\parallel $ there will be a number of (spin-polarized) Fermi surface sheets involved in the interface scattering (in the simplest case spin-up and spin-down, or only spin-up), and the dimension and structure of the scattering matrix will depend on how many Fermi surface sheets are involved. The sum over $\alpha $ and $\beta $ runs over those Fermi surface sheets $1,...,\nu $ for each given value of $\vec{k}_\parallel $. In the superconductor I assume for simplicity that only one Fermi surface sheet is involved for each $\vec{k}_\parallel $.
The reflection and transmission amplitudes for each $\vec{k}_\parallel $ are related to the scattering matrix as
\begin{equation}
{\bf S}(12;34)=\left(
\begin{array}{cc}
R_{12} & \; \; \, T_{14}\\T_{32} & -R_{34}
\end{array}
\right)
\end{equation}
where directions 1 and 2 refer to the superconductor, and 3 and 4 to the ferromagnet.
$R_{12}$ is a 2$\times $2 spin matrix, $R_{34}$ is a $\nu \times \nu$ matrix with elements $R_{\alpha \beta}$, $T_{14} $ is a $2\times \nu $ matrix with elements $T_{1\beta }$, and $T_{32}$ is a $\nu \times 2$ matrix with elements $T_{\alpha 2}$.
The spectral current densities $j_{\rm X}$ are given by
\begin{align}
j_{\rm I} &= \sum_{\beta } \delta x_{\beta},\qquad
j_{\rm R} = \sum_{\alpha \beta } |R_{\alpha \beta} - T_{\alpha 2} \; v_2 \; \gamma_2 \; \tilde R_{2 1} \; \tilde \gamma_1 \; T_{1 \beta}|^2 \delta x_{\beta}
\label{IR}
\\
j_{\rm AR}&=
\sum_{\alpha \underline\alpha} |T_{\alpha 2} \; v_2 \; \gamma_2\; \tilde T_{2 \underline\alpha } |^2 \delta \tilde x_{\underline\alpha },\qquad
v_2=(1-\gamma_2 \; \tilde R_{2 1} \; \tilde \gamma_1 \; R_{1 2} )^{-1},
\label{AR}
\end{align}
where $\delta x_\beta $ and $\delta \tilde x_\beta $ are the differences between the distribution functions in the ferromagnet and in the superconductor. If there is no spin-accumulation present, they are independent of the index $\beta $ and given by
\begin{align}
\delta x(V,T;\varepsilon) &= \tanh \left( \frac{\varepsilon -{\rm e} V}{2k_{\rm B}T} \right)
-\tanh \left( \frac{\varepsilon }{2k_{\rm B}T_{\rm S}} \right) , \qquad
\delta \tilde x (V, T; \varepsilon )= \delta x(V,T;-\varepsilon ),
\end{align}
with $T_{\rm S}$ the temperature in the superconductor.
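In code, these distribution-function differences read (units $k_{\rm B}=1$, electron charge ${\rm e}=-|{\rm e}|$; function names are my own):

```python
import numpy as np

def delta_x(V, T, eps, T_S=None, e_charge=-1.0):
    """Distribution-function difference between a lead at voltage V and
    temperature T and the superconductor at temperature T_S
    (units k_B = 1; electron charge e = -|e|)."""
    if T_S is None:
        T_S = T
    return np.tanh((eps - e_charge * V) / (2 * T)) - np.tanh(eps / (2 * T_S))

def delta_x_tilde(V, T, eps, T_S=None, e_charge=-1.0):
    # hole counterpart: delta_x_tilde(eps) = delta_x(-eps)
    return delta_x(V, T, -eps, T_S, e_charge)
```

At zero bias and equal temperatures both differences vanish; at low temperatures a finite bias opens a transport window of width $|{\rm e}V|$ in energy.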
Equations \eqref{IR}-\eqref{AR} are valid for general normal-state scattering matrices $S$, and can be applied to non-collinear magnetic structures.
For the case that all reflection and transmission amplitudes are spin-diagonal, and assuming on the superconducting side of the interface a spin-mixing angle $\vartheta $, these expressions are explicitly given by
\begin{align}
j_{\rm R} &= \left[|v_{0+}|^2 \; |r_\uparrow-r_\downarrow e^{\i\vartheta } \gamma_0^2|^2
+ |v_{0-}|^2 \; |r_\downarrow -r_\uparrow e^{-\i \vartheta} \gamma_0^2|^2\right] \delta x\\
j_{\rm I} &= 2 \delta x, \qquad
j_{\rm AR} = (t_\uparrow t_\downarrow)^2 |\gamma_0|^2 \left[|v_{0+}|^2 +|v_{0-}|^2 \right] \delta \tilde x
\end{align}
with
$\gamma_0= -\Delta /(\varepsilon+\i\Omega)$, $\Omega=\sqrt{\Delta^2-\varepsilon^2}$,
$v_{0\pm}=( 1-\gamma_0^2 r_\uparrow r_\downarrow e^{\pm \i\vartheta} )^{-1}$,
and the energy $\varepsilon $ is assumed to have an infinitesimally small positive imaginary part.
Andreev resonances arise for energies fulfilling $1=\gamma_0(\varepsilon )^2 r_\uparrow r_\downarrow e^{\pm \i \vartheta }$ (in agreement with the discussion in section \ref{ABS}\ref{ABS1} for $d=0$).
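The following sketch (my own scalar implementation of the three expressions above, per unit distribution-function difference, $\delta x=\delta \tilde x=1$) verifies subgap current conservation, $j_{\rm I}=j_{\rm R}+j_{\rm AR}$ for $|\varepsilon |<|\Delta |$, and locates the Andreev resonance, which the phase condition places near $\varepsilon =\pm |\Delta |\cos (\vartheta /2)$ for weakly transparent contacts:

```python
import numpy as np

def spectral_currents(eps, Delta, r_up, r_dn, theta, eta=1e-4):
    """j_I, j_R, j_AR for a spin-active F-S point contact with
    spin-diagonal reflection/transmission amplitudes, per unit
    distribution-function difference (delta_x = delta_x_tilde = 1)."""
    e = eps + 1j * eta
    Omega = np.sqrt(Delta**2 - e**2)
    g0 = -Delta / (e + 1j * Omega)                    # gamma_0
    t_up, t_dn = np.sqrt(1 - r_up**2), np.sqrt(1 - r_dn**2)
    v_p = 1.0 / (1 - g0**2 * r_up * r_dn * np.exp(1j * theta))   # v_{0+}
    v_m = 1.0 / (1 - g0**2 * r_up * r_dn * np.exp(-1j * theta))  # v_{0-}
    j_I = 2.0
    j_R = (abs(v_p)**2 * abs(r_up - r_dn * np.exp(1j * theta) * g0**2)**2
         + abs(v_m)**2 * abs(r_dn - r_up * np.exp(-1j * theta) * g0**2)**2)
    j_AR = (t_up * t_dn)**2 * abs(g0)**2 * (abs(v_p)**2 + abs(v_m)**2)
    return j_I, j_R, j_AR

# weakly transparent contact: subgap Andreev resonance near Delta*cos(theta/2)
theta = 1.2
eps = np.linspace(0.0, 0.999, 1000)
_, jR, jAR = spectral_currents(eps, 1.0, 0.95, 0.95, theta)
eps_res = eps[np.argmax(jAR)]
```

Subgap, where $|\gamma_0 |=1$, the identity $j_{\rm R}+j_{\rm AR}=j_{\rm I}$ follows from unitarity, $r_\sigma^2+t_\sigma^2=1$, for arbitrary $r_\uparrow \neq r_\downarrow $ and $\vartheta $.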
On the other hand, for half-metallic ferromagnets the Andreev reflection contribution is zero in collinear magnetic structures. In non-collinear structures, however,
the process of {\it spin-flip Andreev reflection} takes place, introduced in reference \cite{Grein10}, and illustrated there in figure 11.
Spin-flip Andreev reflection is the only process providing particle-hole coherence in a half-metallic ferromagnet.
Such structures are described by the theory developed in appendix C of \cite{Eschrig09}.
Application of this theory to experiments on CrO$_2$ is provided in \cite{Lofwander10,Yates13}.
A generalization to strongly spin-polarized ferromagnets with two itinerant bands is given in \cite{Grein10} with application to experiment in \cite{Piano11}.
\begin{figure}[t]
\centering{\hspace{0.8in} {\tiny (c)} \hspace{1.7in} {\tiny (d)} \hspace{1.0in}{\tiny (e)} \hspace{0.7in}$\;$}\\
\centering{
\begin{minipage}[b]{0.8in}
\includegraphics[width=0.8in]{pot.pdf}
\includegraphics[width=0.8in]{theta.pdf}
\end{minipage}
\begin{minipage}[b]{4.2in}
\includegraphics[width=1.7in]{sig.pdf}
\includegraphics[width=2.5in]{G0_vs_T.pdf}
\end{minipage}
}
\caption{
(a)
Shape function of the (spin-averaged) interface barrier potential.
The shape parameter $\sigma=0\ldots 0.7\,\lambda_{\rm F}$ increases with increasing smoothness.
(b)
Spin-mixing angle $\vartheta_{\rm S}$ as a function of $|\vec{k}_\parallel|$
for the shape functions in (a). $\vartheta_{\rm S} $ increases with increasing $\sigma $.
(c)
The differential conductance of an F-FI-S structure with various degrees of smoothness of the FI barrier, at $T=0$;
$\sigma$ increases from back to front in steps of 0.1 $\lambda_{\rm F}$.
(d)-(e)
Temperature dependence of the zero-voltage
conductance of an HM-FI-S point contact
as predicted by (d) the modified BTK model \cite{Mazin01}
and (e) the spin-active interface model \cite{Eschrig09,Grein10}.
Adapted from \cite{Grein10,Lofwander10}.
Copyright (2010) by the American Physical Society.
}
\label{PC}
\end{figure}
In figure \ref{PC} selected results are shown. In (a) and (b) it is demonstrated that the spin-mixing angle can acquire large values if a smooth spatial interface profile is used instead of an atomically sharp interface. Correspondingly, in (c) Andreev resonances are more pronounced for smoother interfaces. In (d) and (e) a comparison of the model by Mazin {\it et al.} \cite{Mazin01} for various spin polarizations $P$ with the spin-mixing model for $P=100\%$ and various spin-mixing angles $\vartheta $ shows that the two can be experimentally differentiated by studying the low-temperature behavior \cite{Lofwander10}.
In an experiment by Visani {\it et al.}
\cite{Visani12} geometric resonances (Tomasch resonances and Rowell-McMillan resonances)
in the conductance across a La$_{0.7}$Ca$_{0.3}$MnO$_3$/YBa$_2$Cu$_3$O$_7$ interface were studied, demonstrating long-range propagation of superconducting correlations across the half metal La$_{0.7}$Ca$_{0.3}$MnO$_3$.
The effect is interpreted in terms of spin-flip Andreev reflection (or, as named by the authors of \cite{Visani12}, ``equal-spin Andreev reflection'').
Spin-dependent scattering phases qualitatively affect the
zero- and finite-frequency current noise in S-F point contacts \cite{Cottet08a,Cottet08}.
It was found that, for weak transparency, noise steps appear at frequencies
or voltages determined directly by the spin dependence of the scattering phase shifts.
\subsection{Andreev bound states in non-local geometry}
A particularly interesting case is that of two F-S point contacts separated by a distance $L$ of the order of the superconducting coherence length.
This is effectively an F-S-F' system, or if barriers are included, an F-FI-S-FI'-F' system.
In this case, for a ballistic superconductor, one must consider separately trajectories connecting the two contacts \cite{Kalenkov07}. Along these trajectories the distribution function is out of equilibrium, and equations \eqref{keld1}-\eqref{keld2} must be solved. In addition, the coherence functions along these trajectories experience both ferromagnetic contacts, and are consequently different from the homogeneous solutions $\gamma_0$, $\tilde \gamma_0$ on all other quasiparticle trajectories.
The current on the ferromagnetic side of one particular interface (positive in direction of the superconductor),
can be decomposed in an exact way,
\begin{equation}
I=I_{\rm I}-I_{\rm R} + I_{\rm AR} - I_{\rm EC} + I_{\rm CAR}
\end{equation}
where the various terms are the incoming current, $I_{\rm I}$, the
normally reflected part, $I_{\rm R}$,
the Andreev reflected part, $I_{\rm AR}$, and the two non-local contributions
due to elastic co-tunneling, $I_{\rm EC}$ and crossed Andreev reflection,
$I_{\rm CAR}$.
\begin{figure}[t]
\centering{
\begin{minipage}[b]{3.0in}
\includegraphics*[width=2.1in,clip]{Setup1.pdf}
\includegraphics*[width=0.8in,clip]{Setup1a.pdf}\\
\includegraphics*[width=3.0in,clip]{Setup.pdf}
\end{minipage}
\begin{minipage}[b]{2.0in}
\includegraphics*[width=1.8in,clip]{FS1.pdf}
\end{minipage}
}
\caption{
Illustration of notation used in text.
For brevity of notation I sometimes omit the labels 3 and 4, implying that $\alpha $ then means $3\alpha $ and $\beta $ means $4\beta $. E.g.\ in the right picture
the ferromagnet has two spin Fermi surfaces (red and blue), labeled by
$\alpha \in \left\{3\uparrow,3\downarrow\right\}$ and
$\beta \in \left\{4\uparrow,4\downarrow\right\}$ etc. The superconductor's Fermi surface is drawn in green.
}
\label{setup}
\end{figure}
There will be contributions from trajectories in the superconductor which do not connect the two contacts. These are described by equations \eqref{IR}-\eqref{AR} above. Here, I will concentrate on the non-local contributions, which arise from the particular trajectories connecting the two contacts. Assuming the area of each contact is much smaller than the superconducting coherence length (though larger than the Fermi wavelength, so that the momentum component parallel to the contact interfaces is approximately conserved), one can identify all trajectories connecting the two contacts, treat only one of them, and scale the result with the contact area.
The solid angle from a point at the first contact to the area ${\cal A}'$
of the second contact
is given by $\delta \Omega= {\cal A}'_2/L^2$, where ${\cal A}'_2$ is the projection
of the area of the second contact to the plane perpendicular to the
line $2,2'$ which connects the contacts (see figure \ref{setup} for the notation).
Using the conservation of $\vec{k}_\parallel $, and that consequently
${\rm d}^2S(\vec{k}_\parallel )={\rm d}^2S(\vec{p}_{\rm F2}) |\hat{\vec{n}} \cdot \vec{v}_{\rm F2}| /|\vec{v}_{\rm F2}|
={\rm d}^2S(\vec{p}_{\rm F\alpha}) |\hat{\vec{n}}\cdot \vec{v}_{\rm F\alpha}|/|\vec{v}_{\rm F\alpha}| $ (where $\hat{\vec{n}} $ is the contact surface normal),
one can express the currents as
\begin{equation}
\label{def2}
I_{\rm X}=
\frac{{\rm d}^2 S}{{\rm d}\Omega} \Big|_{\vec{p}_{\rm F2}}
\frac{{\cal A}_2 {\cal A}'_2}{(2\pi\hbar)^3L^2}
\int\limits_{-\infty }^{\infty } \frac{{\rm d}\varepsilon}{2}
j_{\rm X}
,
\end{equation}
where
$\vec{p}_{\rm F2}$ is the particular Fermi momentum in
the superconductor corresponding to a Fermi velocity in the direction of the line $2,2'$ (I assume for simplicity that only one such Fermi momentum exists),
and ${\rm d}^2 S/{\rm d}\Omega$ is the differential fraction of the Fermi surface of the superconductor per solid angle $\Omega $ in the direction of the Fermi velocity $\vec{v}_{\rm F2}$ that connects the two contacts. Note that ${\rm d}^2 S/{\rm d}\Omega$ is the same at both contacts for superconductors with inversion symmetry, as then this quantity is equal at $\vec{p}_{\rm F2}$ and $-\vec{p}_{\rm F2}$.
Reversed directions are denoted by an overline: $\vec{p}_{\rm F\bar 2}=-\vec{p}_{\rm F2}$ etc.
Let us introduce scattering matrices ${\bf S}(12;34)$ as well as ${\bf S}(\bar 2 \bar 1, \bar 4 \bar3 )$ at the
left interface, the latter being equal to ${\bf S}(12;34) $ for materials with
centrosymmetric symmetry groups, which I consider here.
Analogously, for the right interface let us introduce the scattering matrices
${\bf S}'(2' 1';4'3')={\bf S}'(\bar 1' \bar 2';\bar 3'\bar 4')$.
The scattering matrices for holes are related to the scattering matrices for particles by
$\tilde {\bf S}(21;43)= {\bf S}(\bar 2 \bar 1,\bar 4 \bar 3)^\ast$ etc.
One obtains for this case
\begin{align}
\label{ji}
j_{\rm I} &=
\sum_\beta \delta x_{\beta } + \sum_{\bar \alpha } \delta x_{\bar \alpha }
\\
\label{jr}
j_{\rm R} &=
\sum_{\alpha \beta}
|R_{\alpha \beta } - T_{\alpha 2} \; v_2 \; \gamma_2 \; \tilde R_{2 1} \; \tilde \gamma_1 \;
T_{1 \beta } |^2 \delta x_{\beta }
+ \sum_{\bar \beta \bar \alpha }
|R_{\bar \beta \bar \alpha } - T_{\bar \beta \bar 1} \;
v_{\bar 1} \;\gamma_{\bar 1} \; \tilde R_{\bar 1 \bar 2} \; \tilde \gamma_{\bar 2}
\; T_{\bar 2 \bar \alpha } |^2 \delta x_{\bar \alpha }
\\
\label{jar}
j_{\rm AR} &=
\sum_{\alpha \underline \alpha }
|T_{\alpha 2} \; v_2 \; \gamma_2\; \tilde T_{2 \underline\alpha } |^2 \delta \tilde x_{\underline \alpha }
+
\sum_{\bar \beta \bar{\underline \beta }}
|T_{\bar \beta \bar 1} \; v_{\bar 1} \; \gamma_{\bar 1} \; \tilde T_{\bar 1 \bar{\underline\beta }} |^2\delta \tilde x_{\bar{\underline\beta }}
\\
\label{jec}
j_{\rm EC} &=
\sum_{\alpha\alpha' }
|T_{\alpha 2} \; v_2 \; u_{2 2'} \; T'_{2' \alpha'} |^2 \delta x_{\alpha'}
+
\sum_{\bar\beta \bar\beta' }
|T_{\bar \beta \bar 1} \; v_{\bar 1} \; \gamma_{\bar 1} \;
\tilde R_{\bar 1 \bar 2} \; \tilde u_{\bar 2 \bar 2'} \; \tilde R'_{\bar 2' \bar 1'} \; \tilde \gamma_{\bar 1'} \; T'_{\bar 1' \bar \beta'} |^2 \delta x_{\bar \beta'}
\qquad
\\
j_{\rm CAR} &=
\sum_{\alpha\beta' }
|T_{\alpha 2} \; v_{2}
\; u_{2 2'} \; R'_{2' 1'} \; \gamma_{1'} \; \tilde T'_{1' \beta'}|^2\delta \tilde x_{\beta'}
+
\sum_{\bar\beta \bar\alpha' }
|T_{\bar \beta \bar 1} \; v_{\bar 1} \; \gamma_{\bar 1} \;
\tilde R_{\bar 1 \bar 2} \; \tilde u_{\bar 2 \bar 2'} \; \tilde T'_{\bar 2' \bar \alpha'} |^2\delta \tilde x_{\bar \alpha'}
\label{jcar}
\end{align}
where the vertex corrections due to multiple Andreev processes are
$v_2=(1-\gamma_2 \; \tilde R_{2 1} \; \tilde \gamma_1 \; R_{1 2} )^{-1}$ and $
v_{\bar 1}=(1-\gamma_{\bar 1} \; \tilde R_{\bar 1 \bar 2} \; \tilde \gamma_{\bar 2} \; R_{\bar 2 \bar 1} )^{-1}$.
For unitary order parameters ($\Delta \tilde \Delta \sim \sigma_0$), let us define
$\Omega \, \sigma_0=[-\Delta_{k} \tilde \Delta_{k} -(\varepsilon+\i 0^+)^2 \sigma_0]^{\frac{1}{2}}$
as well as $\gamma=-\Delta/(\varepsilon +\i \Omega )$, $\tilde \gamma= \tilde \Delta/(\varepsilon +\i \Omega )$.
Using the amplitudes
\begin{eqnarray}
\Gamma_{2'}=R'_{2' 1'} \; \gamma_{1'} \; \tilde R'_{1' 2'} , \quad
\tilde \Gamma_{\bar 2'} =
\tilde R'_{\bar 2' \bar 1'} \; \tilde \gamma_{\bar 1'} \; R'_{\bar 1' \bar 2'}, \quad
\end{eqnarray}
and denoting with $L$ the distance between $2$ and $2'$,
\begin{eqnarray}
u_{2 2'}&=& \left[ c_{2'} + \i\frac{ \Gamma_{2'}
\tilde \Delta_{2'} -\varepsilon}{\Omega_{2'}} s_{2'} \right]^{-1}
,\quad
\gamma_2= u_{2 2'} \left[ \Gamma_{2'} c_{2'} + \i\frac{
\Delta_{2'}+ \Gamma_{2'} \varepsilon }{\Omega_{2'}} s_{2'}
\right]
\\
\tilde u_{\bar 2 \bar 2'}&=& \left[ c_{2'} - \i\frac{ \tilde \Gamma_{\bar 2'}
\Delta_{\bar 2'} +\varepsilon}{\tilde \Omega_{\bar 2'}} s_{2'} \right]^{-1}, \quad
\tilde \gamma_{\bar 2}= \tilde u_{\bar 2 \bar 2'} \left[ \tilde \Gamma_{\bar 2'}
c_{2'} - \i\frac{ \tilde \Delta_{\bar 2'}- \tilde \Gamma_{\bar 2'}
\varepsilon }{\tilde \Omega_{\bar 2'}} s_{2'} \right]
\end{eqnarray}
with $c_{2'}\equiv \cosh(\Omega_{2'}L/\hbar v_{\rm F,2'})$ and $s_{2'}\equiv \sinh(\Omega_{2'}L/\hbar v_{\rm F,2'})$.
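As a scalar sketch (my own; real $\Delta $ with $\tilde \Delta =-\Delta $, as implied by the convention $-\Delta \tilde \Delta =|\Delta |^2$ above) these propagation amplitudes can be coded directly:

```python
import numpy as np

def propagation_amps(eps, Delta, Gamma, L, vF=1.0, hbar=1.0, eta=1e-9):
    """Scalar version of u_{22'} and gamma_2: coherent propagation over a
    distance L towards contact 2, given the amplitude Gamma = Gamma_{2'}
    generated at the far contact. Real Delta; tilde-Delta = -Delta."""
    e = eps + 1j * eta
    Omega = np.sqrt(Delta**2 - e**2)
    x = Omega * L / (hbar * vF)
    c, s = np.cosh(x), np.sinh(x)
    tDelta = -Delta                 # singlet convention: -Delta*tDelta = |Delta|^2
    u = 1.0 / (c + 1j * s * (Gamma * tDelta - e) / Omega)
    gamma2 = u * (Gamma * c + 1j * s * (Delta + Gamma * e) / Omega)
    return u, gamma2
```

Two useful checks: for $L\to 0$ one has $u_{2 2'}\to 1$ and $\gamma_2 \to \Gamma_{2'}$, and inserting the bulk amplitude $\Gamma_{2'}=\gamma_0$ returns $\gamma_2=\gamma_0$ for any $L$, since the homogeneous solution is a fixed point of the propagation.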
For the distribution-function differences in the two leads one obtains
\begin{eqnarray}
\delta x_{\beta }=\delta x_{\bar \alpha } = \delta x(V,T;\varepsilon ),&& \quad
\delta \tilde x_{\alpha }=\delta \tilde x_{\bar \beta } = \delta x(V,T;-\varepsilon ), \\
\delta x_{\alpha'}=\delta x_{\bar \beta' } = \delta x(V',T';\varepsilon ) ,&&\quad
\delta \tilde x_{\beta'}=\delta \tilde x_{\bar \alpha' } = \delta x(V',T';-\varepsilon ).
\end{eqnarray}
Here, $T_{\rm S}$ is the temperature in the superconductor, $T$ and $V$ are temperature and
voltage in the left lead, and $T'$ and $V'$ are temperature and voltage in the right lead.
The voltages are measured with respect to the superconductor.
The expressions appearing in equations \eqref{ji}-\eqref{jcar} have an intuitive interpretation, and selected processes are illustrated in figure \ref{NLAR}.
\begin{figure}[t]
\centering{
\hspace{0.5in}{\tiny (a)} \hspace{0.95in} {\tiny (b)}
\hspace{0.95in}{\tiny (c)} \hspace{0.95in} {\tiny (d)} \hspace{1.0in}$\;$}\\
\centering{
\includegraphics*[width=0.2\linewidth,clip]{R1b.pdf}
\includegraphics*[width=0.2\linewidth,clip]{R2b.pdf}
\includegraphics*[width=0.2\linewidth,clip]{AR1.pdf}
\includegraphics*[width=0.2\linewidth,clip]{AR2.pdf}
}\\
\centering{
\hspace{0.5in}{\tiny (e)} \hspace{0.95in} {\tiny (f)}
\hspace{0.95in}{\tiny (g)} \hspace{0.95in} {\tiny (h)} \hspace{1.0in}$\;$}\\
\centering{
\includegraphics*[width=0.2\linewidth,clip]{DET1.pdf}
\includegraphics*[width=0.2\linewidth,clip]{DET2a.pdf}
\includegraphics*[width=0.2\linewidth,clip]{CAR1.pdf}
\includegraphics*[width=0.2\linewidth,clip]{CAR2a.pdf}
}
\caption{
Illustration of selected processes contributing to the expressions \eqref{ji}-\eqref{jcar}.
Andreev reflections are denoted as loops, turning particles (p, full lines) into holes (h, dashed lines) or vice versa.
(a)-(b) contributions to the reflection components, equation \eqref{jr};
(c)-(d) contributions to the Andreev reflection components, equation \eqref{jar};
(e)-(f) contributions to the coherent electron transfer components, equation \eqref{jec};
(g)-(h) contributions to the crossed Andreev reflection components, equation \eqref{jcar}.
}
\label{NLAR}
\end{figure}
These terms involve propagation of particles or holes, represented as full lines and dashed lines in the figure. Certain processes involve conversions between particles and holes, accompanied by the creation or destruction of a Cooper pair (loops in the figure), and correspond to the factors $\gamma_{\bar 1} $, $\gamma_{1'}$, $\tilde\gamma_1 $, and $\tilde\gamma_{\bar 1'}$ in \eqref{jr}-\eqref{jcar}.
Propagation of particles or holes between the left and right interface is represented in these equations by the factors $u_{22'}$ and $\tilde u_{\bar 2 \bar 2'} $. Vertex corrections $v_2$ and $v_{\bar 1}$ correspond to multiple Andreev reflections at either interface.
The factors $\gamma_2$ and $\tilde \gamma_{\bar 2}$ combine propagation between the two interfaces with Andreev reflections at the other interface.
As an example, for an isotropic singlet superconductor and collinear arrangement of the magnetization directions, one obtains
\begin{align}
\label{ji0}
j_{\rm I} &= 4 \delta x,\quad
j_{\rm R}= \left[2|v_+|^2 \; |r_\uparrow-r_\downarrow e^{\i\vartheta } \gamma_0\gamma_+|^2
+ 2|v_-|^2 \; |r_\downarrow -r_\uparrow e^{-\i \vartheta} \gamma_0 \gamma_-|^2\right] \delta x\\
j_{\rm AR} &= (t_\uparrow t_\downarrow)^2 \left[|v_+|^2 \;
(|\gamma_+|^2+|\gamma_0|^2)
+ |v_-|^2 \; (|\gamma_-|^2+ |\gamma_0|^2)\right] \delta \tilde x\\
j_{\rm EC} &= \left[ (t_\uparrow t'_\uparrow)^2
|v_+u_+|^2 \left\{ 1+|\gamma_0|^4 (r_\downarrow r'_\downarrow)^2 \right\}
+ (t_\downarrow t'_\downarrow)^2
|v_-u_-|^2 \left\{ 1+|\gamma_0|^4 (r_\uparrow r'_\uparrow)^2 \right\} \right] \delta x'\\
j_{\rm CAR} &= |\gamma_0|^2 \left[ (t_\uparrow t'_\downarrow)^2
|v_+u_+|^2 \left\{ (r'_\uparrow)^2 +(r_\downarrow)^2 \right\}
+ (t_\downarrow t'_\uparrow)^2
|v_-u_-|^2 \left\{ (r'_\downarrow)^2 +(r_\uparrow)^2 \right\} \right] \delta \tilde x'
\label{jcar0}
\end{align}
where I defined
(with $s\equiv s_{2'}$ and $c\equiv c_{2'}$)
\begin{align}
\Gamma'_\pm &= r'_\uparrow r'_\downarrow e^{\pm \i\vartheta' } \gamma_0,\quad
\gamma_\pm= u_\pm \left[ \Gamma'_\pm c +\i s(\Delta + \Gamma'_\pm \varepsilon)/\Omega \right]\\
u_\pm&=\left[c-\i s(\varepsilon+\Gamma'_\pm \Delta)/\Omega \right]^{-1}
,\quad
v_\pm= [ 1-\gamma_\pm \gamma_0 r_\uparrow r_\downarrow e^{\pm \i\vartheta} ]^{-1}.
\end{align}
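A scalar numerical sketch of these collinear expressions (my own; the parameter values are illustrative) makes the qualitative behaviour explicit: for fully spin-polarized contacts with parallel magnetizations ($t_\downarrow =t'_\downarrow =0$) crossed Andreev reflection is completely blocked while elastic co-tunneling survives, and both non-local channels decay with the contact separation $L$:

```python
import numpy as np

def nonlocal_spectral(eps, Delta, L, r, rp, theta, thetap, vF=1.0, eta=1e-6):
    """Spectral EC and CAR current densities (per unit distribution
    difference) for a collinear F-S-F structure, contacts a distance L
    apart; r = (r_up, r_dn) at the left contact, rp at the right."""
    e = eps + 1j * eta
    Omega = np.sqrt(Delta**2 - e**2)
    g0 = -Delta / (e + 1j * Omega)
    c, s = np.cosh(Omega * L / vF), np.sinh(Omega * L / vF)
    r_up, r_dn = r
    rp_up, rp_dn = rp
    t_up, t_dn = np.sqrt(1 - r_up**2), np.sqrt(1 - r_dn**2)
    tp_up, tp_dn = np.sqrt(1 - rp_up**2), np.sqrt(1 - rp_dn**2)
    u, gam, v = {}, {}, {}
    for sgn in (+1, -1):
        Gp = rp_up * rp_dn * np.exp(1j * sgn * thetap) * g0   # Gamma'_{+-}
        u[sgn] = 1.0 / (c - 1j * s * (e + Gp * Delta) / Omega)
        gam[sgn] = u[sgn] * (Gp * c + 1j * s * (Delta + Gp * e) / Omega)
        v[sgn] = 1.0 / (1 - gam[sgn] * g0 * r_up * r_dn * np.exp(1j * sgn * theta))
    j_EC = ((t_up*tp_up)**2 * abs(v[+1]*u[+1])**2 * (1 + abs(g0)**4 * (r_dn*rp_dn)**2)
          + (t_dn*tp_dn)**2 * abs(v[-1]*u[-1])**2 * (1 + abs(g0)**4 * (r_up*rp_up)**2))
    j_CAR = abs(g0)**2 * ((t_up*tp_dn)**2 * abs(v[+1]*u[+1])**2 * (rp_up**2 + r_dn**2)
          + (t_dn*tp_up)**2 * abs(v[-1]*u[-1])**2 * (rp_dn**2 + r_up**2))
    return j_EC, j_CAR

# fully polarized contacts, parallel magnetizations: CAR blocked, EC survives
jec, jcar = nonlocal_spectral(0.0, 1.0, 1.0, (0.6, 1.0), (0.6, 1.0), 0.5, 0.5)
```

Subgap ($\Omega $ real) the propagation factors $u_\pm $ decay roughly as ${\rm e}^{-\Omega L/\hbar v_{\rm F}}$, which suppresses both non-local channels for $L\gg \xi_0$.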
Early studies of nonlocal transport in F-S-F structures include Ref. \cite{Melin04}.
In Ref. \cite{Metalidis10} the non-local conductance was explained in terms of the processes discussed above for an F-S-F structure with strong spin-polarization. Andreev bound states appear at both ferromagnet-superconductor interfaces and decay through the superconductor towards the opposite contact. Parallel and antiparallel alignment of the magnetizations lead to qualitatively different Andreev spectra. The non-local processes have a natural explanation in terms of overlapping spin-polarized Andreev states.
The density of states for the trajectory connecting the two contacts is obtained from
\begin{align}
N_\uparrow(\varepsilon , x)= \frac{N_{\rm F}}{2} \mbox{Re} \frac{1-\gamma_+(\varepsilon,x)\gamma^\ast_-(-\varepsilon^\ast,L-x)}{1+\gamma_+(\varepsilon,x)\gamma^\ast_-(-\varepsilon^\ast,L-x)},
\end{align}
and for $N_\uparrow $ the same expression holds with $+$ and $-$ interchanged.
\begin{figure}[t]
\centering{
\hspace{0.3in}{\tiny (a)} \hspace{2.5in} {\tiny (b)} }\\
\centering{
\includegraphics[width=2.5in]{NL_th107_L2_compressed.pdf}
\includegraphics[width=1.9in]{NL_th07vsL_compressed.pdf}
\hspace{0.4in}$\; $
}\\
\centering{
\hspace{0.3in}{\tiny (c)} \hspace{1.2in} {\tiny (d)}
\hspace{1.2in}{\tiny (e)} \hspace{1.2in} {\tiny (f)} \hspace{0.5in}$\;$}\\
\centering{
\includegraphics[width=1.25in]{NL_th07P_L2_compressed.pdf}
\includegraphics[width=1.25in]{NL_th07_05_P_L2_compressed.pdf}
\includegraphics[width=1.25in]{NL_th07_06_AP_L2_compressed.pdf}
\includegraphics[width=1.25in]{NL_th07AP_L2_compressed.pdf}
}
\caption{
Andreev bound states in an F-S-F structure of the type shown in figure \ref{setup}.
(a):
$P\equiv
\frac{N_\uparrow - N_\downarrow}{N_\uparrow + N_\downarrow}$ as function of
$\varepsilon$ and $\vartheta_{\rm R}$ at a position in S midway between the contacts, for $L=2\xi_0$ (with the coherence length of the superconductor $\xi_0=\hbar v_{F,S}/|\Delta |$) and $\vartheta_{\rm L}=0.7\pi$.
An avoided crossing appears for equally spin-polarized Andreev states, which is absent for opposite polarization.
(b) dependence on $L/\xi_0$ for fixed $\vartheta_{\rm R}=\vartheta_{\rm L}=0.7\pi$.
(c)-(f): $P$ as function of $\varepsilon $ and $x$ for $L=2\xi_0$ and $\vartheta_{\rm L}=0.7\pi$, and
(c) $\vartheta_{\rm R}=0.7\pi$
(d) $\vartheta_{\rm R}=0.5\pi$
(e) $\vartheta_{\rm R}=-0.6\pi$
(f) $\vartheta_{\rm R}=-0.7\pi$.
At the avoided crossing all bound states have equal weight at both interfaces. In all other cases the bound states for fixed spin projection are localized at one interface only.
In all panels
$(r_\uparrow r_\downarrow)_{\rm L}=(r_\uparrow r_\downarrow)_{\rm R} =0.95$.
}
\label{NL}
\end{figure}
In figure \ref{NL} I show an example of such a setup. As can be seen, avoided crossings of Andreev bound states with equal spin polarization play an important role in such systems. At the avoided crossing the bound states have equal weight at both interfaces.
This contrasts with the case of bound states with opposite spin polarization, where no avoided crossings appear, and with the case of two S-F interfaces with markedly different spin-dependent phase shifts, where the bound states do not overlap and stay localized at one of the two interfaces only. The spin polarization and weight of the bound states at energies $\varepsilon_{\rm b}$ and $-\varepsilon_{\rm b}$ determine the magnitude of the nonlocal currents due to crossed Andreev reflection and elastic co-tunneling. A detailed discussion of how the weights, transmission probabilities, and bound-state geometries influence CAR and EC processes is given in \cite{Metalidis10}.
In diffusive S-F systems a theory for nonlocal transport was developed in \cite{Kalenkov10}.
Equations \eqref{ji0}-\eqref{jcar0}
have been applied to the study of thermoelectric effects
in non-local setups \cite{Machon13,Machon14}.
The contributions to the energy current are obtained as
\begin{equation}
I_{\varepsilon }=I_{\varepsilon \rm I}-I_{\varepsilon \rm R} + I_{\varepsilon \rm AR} - I_{\varepsilon \rm EC} + I_{\varepsilon \rm CAR} ,
\end{equation}
where the respective contributions are given by equations analogous to \eqref{def2}-\eqref{jcar}, with the charge $e$ replaced by the energy $\varepsilon $.
Spin-dependent interface scattering phases in combination with spin filtering lead to giant thermoelectric effects in F-S-F devices \cite{Machon13,Ozaeta14}.
There have been a number of experimental studies of non-local transport in S-F hybrid structures, e.g. \cite{Beckmann04,Cadden07,Brauer10}. Colci {\it et al.} investigate S-FF-S junctions with two parallel running wires \cite{Colci12}.
A very recent experiment shows a non-local inverse proximity effect in an (F-N-F')-S-N structure \cite{Flokstra15}. The inverse proximity effect transfers magnetization from the ferromagnet into the superconductor or across a superconductor. For strongly spin-polarized systems this occurs as a result of spin-mixing phases \cite{Eschrig08,Grein13}.
A combination of non-local effects in S-F structures with non-equilibrium Andreev interferometer geometries, in analogy to experiments in S-N structures \cite{Cadden09,Eschrig09NV}, seems to be another exciting avenue for future applications \cite{Noh13}.
\section{Generalized Andreev Equations}
In this section I discuss the physical interpretation of the coherence functions.
To this end, I present a generalized set of Andreev equations which is equivalent to equations \eqref{cricc1}-\eqref{cricc2}.
Let us define for each pair of Fermi momenta $\mvec{p}_F$, $-\mvec{p}_F$ and corresponding Fermi velocities $\mvec{v}_F(\mvec{p}_F)$, $\mvec{v}_F(-\mvec{p}_F)=-\mvec{v}_F(\mvec{p}_F)$
a pair of mutually conjugated trajectories
\begin{align}
\mvec{R}(\rho )=\mvec{R}_0+\hbar \mvec{v}_F(\mvec{p}_F) (\rho-\rho_0),\quad
\tilde{\mvec{R}}(\rho )=\mvec{R}_1-\hbar \mvec{v}_F(\mvec{p}_F) (\rho-\rho_1),\quad \rho_0\le \rho \le \rho_1.
\end{align}
Using $\partial_\rho \equiv \hbar \vf \! \cdot \vnabla $,
let us define the following differential operators
\begin{align}\label{M1}
\hat{{\cal D}} \equiv \left(\begin{array}{cc}
-\i\partial_\rho+\Sigma & \Delta \\ -\tilde\Delta & \i\partial_\rho-\tilde \Sigma
\end{array}\right), \quad
\tilde{\hat{{\cal D}}} \equiv \left(\begin{array}{cc}
-\i\partial_\rho+\tilde \Sigma & \tilde \Delta \\ -\Delta & \i\partial_\rho-\Sigma
\end{array}\right)
\end{align}
which fulfill $\tilde{\hat{{\cal D}}} = -\op{\tau }_1 \hat{{\cal D}} \op{\tau }_1 $.
Let us also define the adjoint operator $\adj{\hat{{\cal D}}}(\rho,\partial_\rho ) = \hc{\hat{{\cal D}}}(\rho,-\partial_\rho )$.
For a fixed conjugated trajectory pair
the set of generalized Andreev equations (retarded and advanced) is,
\begin{align}\label{A1}
\hat{{\cal D}}
{\, \scriptstyle \circ}\,
\left(\begin{array}{cc} u^\ret & \tilde v^\ret\\ v^\ret &\tilde u^\ret \end{array}\right)
= \varepsilon
\left(\begin{array}{cc} u^\ret & \tilde v^\ret\\ v^\ret &\tilde u^\ret \end{array}\right),
\begin{array}{c}
\quad v^\ret(\rho_1) = -\tilde \gamma_1 {\, \scriptstyle \circ}\, u^\ret (\rho_1)
\\
\quad \tilde v^\ret(\rho_0) = -\gamma_0 {\, \scriptstyle \circ}\, \tilde u^\ret(\rho_0)
\end{array}
\\
\label{A2}
\adj{\hat{{\cal D}}}
{\, \scriptstyle \circ}\,
\left(\begin{array}{cc} u^\adv & \tilde v^\adv \\ v^\adv & \tilde u^\adv \end{array}\right)
= \varepsilon
\left(\begin{array}{cc} u^\adv & \tilde v^\adv \\ v^\adv & \tilde u^\adv \end{array}\right),
\begin{array}{c}
\quad v^\adv(\rho_0)= -\gamma^\dagger_0{\, \scriptstyle \circ}\, u^\adv(\rho_0)
\\
\quad \tilde v^\adv(\rho_1) = -\tilde \gamma^\dagger_1 {\, \scriptstyle \circ}\, \tilde u^\adv(\rho_1)
\end{array}
\end{align}
where the boundary conditions at $\rho=\rho_0$ and $\rho=\rho_1$
for the solutions fulfill the restrictions shown on the right hand side of the equations.
Then the relation between the Andreev amplitudes $u$, $v$, $\tilde u $, and $\tilde v$ and the coherence amplitudes $\gamma $ and $\tilde \gamma $ is given along the entire trajectories by \cite{Eschrig00}
\begin{align}\label{G1}
\tilde v^\ra = -\gamma^\ra {\, \scriptstyle \circ}\, \tilde u^\ra, \quad
v^\ra = -\tilde \gamma^\ra {\, \scriptstyle \circ}\, u^\ra
\end{align}
with $\ga^{\ret}\equiv \gamma $, $\gb^{\ret}\equiv \tilde{\ga} $, $\ga^{\adv}\equiv \tilde{\ga}^\dagger $, $\gb^{\adv} \equiv \gamma^\dagger $.
It is easy to show that the following conservation law along the trajectory holds
\begin{align}
\partial_\rho \left\{
\hc{\left(\begin{array}{cc} u^\adv & \tilde v^\adv \\ v^\adv & \tilde u^\adv \end{array}\right)}
\op{\tau }_3 {\, \scriptstyle \circ}\,
\left(\begin{array}{cc} u^\ret & \tilde v^\ret\\ v^\ret &\tilde u^\ret \end{array}\right)
\right\}=0.
\end{align}
Thus, the matrix inside the curly brackets is constant along the trajectory and equal to its value at any single point.
The off-diagonal elements are zero due to the conditions in Eq.~\eqref{A1} for $\rho_0$ and $\rho_1$,
leading to $\hc{(u^\adv)} {\, \scriptstyle \circ}\, \tilde v^\ret=\hc{(v^\adv )} {\, \scriptstyle \circ}\, \tilde u^\ret $ and
$\hc{(\tilde u^\adv)} {\, \scriptstyle \circ}\, v^\ret=\hc{(\tilde v^\adv )} {\, \scriptstyle \circ}\, u^\ret $ along the entire trajectory.
The diagonal components $\partial_\rho [\hc{(u^\adv)} {\, \scriptstyle \circ}\, u^\ret - \hc{(v^\adv )} {\, \scriptstyle \circ}\, v^\ret ] =0$ and $\partial_\rho [\hc{(\tilde u^\adv)} {\, \scriptstyle \circ}\, \tilde u^\ret - \hc{(\tilde v^\adv )} {\, \scriptstyle \circ}\, \tilde v^\ret ] =0$ translate into
$\partial_\rho [ \hc{(u^\adv)} {\, \scriptstyle \circ}\, (1-\ga^{\ret} {\, \scriptstyle \circ}\, \gb^{\ret} ) {\, \scriptstyle \circ}\, u^\ret ]=0$ and
$\partial_\rho [ \hc{(\tilde u^\adv)} {\, \scriptstyle \circ}\, (1-\gb^{\ret} {\, \scriptstyle \circ}\, \ga^{\ret} ) {\, \scriptstyle \circ}\, \tilde u^\ret ]=0$. In particular, for Andreev bound states $1-\ga^{\ret} {\, \scriptstyle \circ}\, \gb^{\ret} =0$ or $1-\gb^{\ret} {\, \scriptstyle \circ}\, \ga^{\ret}=0$, and thus this property is conserved along the entire trajectory.
If one writes Eq.~\eqref{A1} formally as $\hat{\cal D}{\, \scriptstyle \circ}\, \hat U=\varepsilon \hat U$, then the conjugated equation $\tilde{\hat{\cal D}} {\, \scriptstyle \circ}\, \tilde{\hat U}=-\varepsilon \tilde{\hat U}$ holds with $\tilde {\hat U}=\hat \tau_1 \hat U \hat \tau_1$, which leads, however, to a system identical to Eq.~\eqref{A1}. The adjoint equation $\adj{\hat{\cal D}} {\, \scriptstyle \circ}\, \hat{\underline{U}}=\varepsilon \hat{\underline{U}}$ defines adjoint Andreev amplitudes (left eigenvectors) $\underline u$, $\underline{v}$, $\underline{\tilde u}$, and $\underline{\tilde v}$.
These are, however, equivalent to the advanced eigenvectors in Eq.~\eqref{A2}.
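The conservation law derived above can be verified numerically. The following Python sketch integrates the retarded and adjoint (advanced) Andreev equations along a single trajectory for the spin-scalar case with $\Sigma=\tilde\Sigma=0$ and $\tilde\Delta=\Delta^\ast$, using a made-up smooth gap profile $\Delta(\rho)$, and checks that the bracket $\hc{(u^\adv)}\, u^\ret - \hc{(v^\adv )}\, v^\ret $ stays constant along $\rho$. All profile parameters and initial amplitudes are illustrative.

```python
import cmath
import math

def delta(rho):
    # illustrative gap profile (made up): amplitude ramp, constant phase
    return math.tanh(rho) * cmath.exp(0.3j)

def rhs(rho, y, eps):
    ur, vr, ua, va = y
    D = delta(rho)
    Dc = D.conjugate()          # Delta-tilde = Delta^* for this test
    # retarded:  -i u' + D v = eps u ,  -D* u + i v' = eps v
    dur = 1j * (eps * ur - D * vr)
    dvr = -1j * (eps * vr + Dc * ur)
    # adjoint (advanced):  D-bar(rho, d_rho) = D^dagger(rho, -d_rho)
    dua = 1j * (eps * ua + D * va)
    dva = -1j * (eps * va - Dc * ua)
    return (dur, dvr, dua, dva)

def rk4_step(y, rho, h, eps):
    k1 = rhs(rho, y, eps)
    k2 = rhs(rho + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)), eps)
    k3 = rhs(rho + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)), eps)
    k4 = rhs(rho + h, tuple(a + h * b for a, b in zip(y, k3)), eps)
    return tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

def conserved(y):
    # scalar version of the bracket (u_adv)^dag u_ret - (v_adv)^dag v_ret
    ur, vr, ua, va = y
    return ua.conjugate() * ur - va.conjugate() * vr

eps, rho, h, nsteps = 0.9, -2.0, 0.002, 2000
y = (1.0, 0.3 + 0.1j, 0.7 - 0.2j, 0.4j)    # arbitrary initial amplitudes
c0 = conserved(y)
drift = 0.0
for _ in range(nsteps):
    y = rk4_step(y, rho, h, eps)
    rho += h
    drift = max(drift, abs(conserved(y) - c0))
```

The recorded drift of the bracket stays at the level of the integrator's truncation error, consistent with exact conservation along the trajectory.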
\section{Conclusion}
I have presented theoretical tools for studying Andreev reflection phenomena and Andreev bound states in superconductor-ferromagnet hybrid structures. Concentrating on ballistic heterostructures with strong spin-polarization, I have formulated theories for point contact spectroscopy and for nonlocal transport, as well as for Andreev states in Josephson structures in terms of coherence functions and distribution functions. The connection to coherence amplitudes appearing in the solutions of Andreev equations has been made explicit. The formulas for non-local transport have been given in a general form, allowing for non-collinear geometries, and using the normal-state scattering matrix as input.
\dataccess{All data can be obtained from the author.}
\ack{I acknowledge the hospitality and
the financial support from the Lars Onsager Award committee
during my stay at the Norwegian University of Science and Technology, as well as stimulating discussions within the Hubbard Theory Consortium.}
\funding{This work is supported by the Engineering and Physical Science
Research Council (EPSRC Grant No. EP/J010618/1).}
\conflict{I have no competing interests.}
|
2007.12987
|
\section{Introduction}
\label{sec:intro}
Differential Privacy \cite{DworkMNS16} has become a de facto gold
standard definition of privacy for statistical analysis. This success
is mostly due to the generality of the definition, its robustness and
compositionality.
\ifnum\full=1
These valuable properties helped researchers from
many different communities - e.g. machine learning, data analysis, and
security - in coming up with differentially private algorithms for
specific goals.
However, it was quickly understood that getting
differential privacy right in practice is a hard task.
\else
However, getting
differential privacy right in practice is a hard task.
\fi
Even privacy
experts have released fragile code subject to
\ifnum\full=1
attacks~\cite{Haeberlen11,Mironov12,AndryscoKMJLS15,EbadiAS16,Burke17}
\else
attacks~\cite{Haeberlen11,Mironov12}
\fi
and published incorrect algorithms~\cite{Lyu-2017}. This challenge
has motivated the programming language community to develop techniques to
support programmers in showing their algorithms differentially
private. Among the techniques that have been proposed there are
\ifnum\full=1
type systems~\cite{ReedP10,Gaboardi2013,BartheGAHRS15,BartheFGAGHS16,ZhangK17,NearDASGWSZSSS19,Wang19},
methods based on model checking and Markov chains~\cite{TschantzKD11,ChatzikokolakisGPX14,LiuWZ18,ChistikovMP18,ChistikovMP19,Barthe2019AutomatedMF},
and program
logics~\cite{barthe2012probabilistic,BartheGAHKS14,Barthe:2016,BartheFGGHS16,SatoBGHK19}.
\else
type systems~\cite{ReedP10,Gaboardi2013,ZhangK17,NearDASGWSZSSS19,Wang19},
methods based on model checking and Markov chains~\cite{TschantzKD11,LiuWZ18,Barthe2019AutomatedMF},
and program
logics~\cite{barthe2012probabilistic,Barthe:2016,SatoBGHK19}.
\fi
\ifnum\full=1
More recently, the formal methods community has also focused on developing techniques to find violations of differential privacy~\cite{DingWWZK18,Bichsel:2018,Barthe2019AutomatedMF}.
\else
Several works have also focused on developing techniques to find violations of differential privacy~\cite{DingWWZK18,Bichsel:2018,Barthe2019AutomatedMF}.
\fi
\ifnum\full=1
Most of these works focus on either verifying a program differentially private or finding violations of differential privacy, and they do not consider techniques supporting both kinds of reasoning. An exception is the recent work by Barthe et al.~\cite{Barthe2019AutomatedMF}, which proposes a method based on a decidable logic for a simple while
language over finite input and output domains, that can be used both for verifying
and for finding violations of differential privacy.
\else
Most of these works focus on either verifying a program differentially private or finding violations, and they do not address both kinds of reasoning. An exception addressing both is the recent work by Barthe et al.~\cite{Barthe2019AutomatedMF}, which proposes a method based on a decidable logic for a simple while
language over finite input and output domains.
\fi
Motivated by this picture, we propose a new technique based on relational symbolic execution, named Coupled Relational Symbolic Execution (\textsf{\textup{CRSE}}\xspace), which supports both proving and finding violations of differential privacy for programs.
Our technique is based on two essential ingredients: the use of a recently introduced notion of relational symbolic execution~\cite{Farina2019} and the use of approximate probabilistic couplings~\cite{Barthe:2016} to reason about differential privacy in a relational way. This approach also allows us to support reasoning over countable input and output domains.
\indent{\bf Relational Symbolic Execution.} Symbolic execution
is a classic technique used for bug finding,
testing and proving. In symbolic execution an evaluator executes the program
which consumes symbolic inputs instead of concrete ones. The
evaluator follows, potentially, all the execution paths the program
could take and collects constraints over the symbolic values,
corresponding to these paths.
\ifnum\full=1
The evaluator collects in this way a
description of the traces in terms of constraints on symbolic values
or expressions involving them. Every trace is associated with a set
of constraints and every input satisfiying these constraints
will lead the actual concrete execution along that trace.
Similarly, in relational symbolic execution~\cite{Farina2019} (\textsf{\textup{RSE}}\xspace)
one is concerned with bug finding, testing, or proving for
\emph{relational properties}. These are properties about two
executions of two potentially different programs. \textsf{\textup{RSE}}\xspace executes two
potentially different programs in a symbolic fashion. \textsf{\textup{RSE}}\xspace exploits
relational assumptions about the two inputs to the two programs in
order to reduce the number of states to analyze. This can be
particularly effective when the codes of the two programs share some
similarities, and when the property under consideration is relational
in nature, as in the case of differential privacy.
\else
Similarly, in relational symbolic execution~\cite{Farina2019} (\textsf{\textup{RSE}}\xspace)
one is concerned with bug finding, testing, or proving for
\emph{relational properties}. These are properties about two
executions of two potentially different programs. \textsf{\textup{RSE}}\xspace executes two
potentially different programs in a symbolic fashion and exploits
relational assumptions about the inputs or the programs in
order to reduce the number of states to analyze. This is
effective when the codes of the two programs share some
similarities, and when the property under consideration is relational
in nature, as differential privacy.
\fi
\ifnum\full=1
\indent{\bf Approximate Probabilistic Couplings.} Probabilistic
coupling \cite{Lindvall1992LecturesOT} is a proof technique useful to
relate two random variables through a common joint probability
distribution. Probabilistic coupling has been used in formal
verification~\cite{Jonsson0L01} to lift a relation over the joint support of two
probability distributions to a relation over the two probability
distributions themselves. This allows one to reason about relations
between probability distributions by reasoning about relations on their supports,
which can usually be done in a symbolic way.
In this approach the actual probabilistic reasoning is confined to the soundness of the verification system, rather than being spread everywhere. A relaxation of the notion of
coupling, called \emph{approximate probabilistic coupling}~\cite{barthe2012probabilistic,Barthe:2016}, has been designed to reason about differential privacy. This can be seen as a regular probabilistic coupling with an additional parameter describing how close the two probability distributions are.
\medskip
\else
\indent{\bf Approximate Probabilistic Couplings.} Probabilistic coupling is a proof technique useful to lift a relation over the joint support of two
probability distributions to a relation over the two probability
distributions themselves. This allows one to reason about relations
between probability distributions by reasoning about relations on their supports,
which can usually be done in a symbolic way.
In this approach the actual probabilistic reasoning is confined to the soundness of the verification system, rather than being spread everywhere in the verification effort. A relaxation of the notion of
coupling, called \emph{approximate probabilistic coupling}~\cite{barthe2012probabilistic,Barthe:2016}, has been designed to reason about differential privacy. This can be seen as a regular probabilistic coupling with an additional parameter describing how close the two probability distributions are.
\fi
In this work, we combine these two approaches in a framework called
Coupled Relational Symbolic Execution (\textsf{\textup{CRSE}}\xspace). In this framework, a
program is executed in a relational and symbolic way. When some
probabilistic primitive is executed, \textsf{\textup{CRSE}}\xspace introduces constraints
corresponding to the existence of an approximate probabilistic
coupling on the output. These constraints are combined with the
constraints on the execution traces generated by symbolically and
relationally executing other non-probabilistic commands. These
combined constraints can be exploited to reduce the number of states
to analyze. When the execution is concluded \textsf{\textup{CRSE}}\xspace\ checks whether
there is a coupling between the two outputs, or whether there is some
violation to the coupling. We show the soundness of this approach for
both proving and refuting differential privacy. However, for finding violations, one cannot reason only symbolically, and since
checking a coupling directly can be computationally expensive, we devise
several heuristics which can be used to
facilitate this task. Using these techniques,
\textsf{\textup{CRSE}}\xspace allows one to verify differential privacy for an interesting
class of programs, including programs working on countable input and
output domains, and to find violations for programs that are not
differentially private.
\ifnum\full=1
As we discussed at the beginning of this section, other techniques have been devised to achieve similar goals. \textsf{\textup{CRSE}}\xspace is not a replacement for them,
but it should be seen as an additional method in the toolset of
the privacy developer, one which provides a high level of
generality. Indeed, by being a totally symbolic technique, it can
leverage a plethora of current technologies such as SMT solvers, e.g.~\cite{DeMoura:2008}, algebraic solvers, e.g.~\cite{Mathematica}, and
numeric solvers, e.g.~\cite{MatlabOTB}.
Summarizing, the contributions of our work are:
\begin{itemize}
\item We combine relational symbolic execution and approximate probabilistic couplings in a new technique, named Coupled Relational Symbolic Execution (\textsf{\textup{CRSE}}\xspace).
\item We show \textsf{\textup{CRSE}}\xspace sound both for proving programs differentially private and for refuting differential privacy.
\item We devise a set of heuristics that can help a programmer in finding violations of differential privacy.
\item We show how \textsf{\textup{CRSE}}\xspace can help in proving and refuting differential privacy for an interesting class of programs.
\end{itemize}
\else
\textsf{\textup{CRSE}}\xspace is not a replacement for other techniques that have been proposed for
the same task; it should be seen as an additional method in the toolset of
the privacy developer, one which provides a high level of
generality. Indeed, by being a totally symbolic technique, it can
leverage a plethora of current technologies such as SMT solvers, algebraic solvers, and numeric solvers.
Summarizing, the contributions of our work are:
\begin{itemize}
\item We combine relational symbolic execution and approximate probabilistic couplings in a new technique: Coupled Relational Symbolic Execution.
\item We show \textsf{\textup{CRSE}}\xspace sound both for proving and for refuting differential privacy.
\item We devise a set of heuristics for finding violations of differential privacy.
\item We show how \textsf{\textup{CRSE}}\xspace proves and refutes several examples.
\end{itemize}
\fi
\section{\textsf{\textup{CRSE}}\xspace Informally}
\label{sec:highlevel}
In this section, we will motivate in an informal way \textsf{\textup{CRSE}}\xspace through
three examples of programs showing potential errors in implementations
of (supposedly) differentially private algorithms.
\ifnum\full=1
In doing this we
will also present the notation that we will use in the rest of the
paper.
\subsection{Single query with wrong noise parameter.}
\indent\emph{Differential Privacy.}
\else
\paragraph{Single query with wrong noise parameter.}
\fi
Informally, a randomized function $A$
is $\epsilon$-differentially private if it maps two databases $d_1$ and $d_2$ that differ in the
data of a single individual (denoted $d_1\sim d_2$) to output distributions that are
indistinguishable up to some value
\ifnum\full=1
$\epsilon$ - usually referred to as the privacy budget - this is formalized by requiring that the log-ratio of the two probability distributions is bounded pointwise by $\epsilon$, i.e. for every $u$,
$\Big |\log \frac{\Pr_{x\leftarrow A(d_1)}[x=u]}{\Pr_{x\leftarrow A(d_2)}[x=u]}\Big |\leq\epsilon$ - we will give the
precise definition in Section~\ref{sec:prelim}. The smaller the
$\epsilon$, the more privacy is guaranteed.
A standard way to achieve differential privacy when we are interested
in a numeric query over a dataset is to add to the query result some
noise sampled from the Laplace distribution with mean 0 and scale
proportional to the \emph{sensitivity} of the function (how far the
function maps two databases differing in the data of a single
individual) over $\epsilon$~\cite{DworkMNS16}.
\else
$\epsilon$ - usually referred to as the privacy budget. The smaller
the $\epsilon$, the more privacy is guaranteed. A standard way to
achieve differential privacy for numeric queries is to add to their
result noise sampled from the Laplace distribution with mean 0 and
scale proportional to the \emph{sensitivity} of the function (how far
the function maps two databases $d_1\sim d_2$) over $\epsilon$~\cite{DworkMNS16}.
\fi
\begin{wrapfigure}[15]{L}{0.50\textwidth}
\vspace{-1.5cm}
\begin{minipage}[t]{0.50\textwidth}
\begin{algorithm}[H]
\caption{\\\hspace{\textwidth}A buggy Laplace mechanism}
\begin{flushleft}
\hspace*{\algorithmicindent} \textbf{Input:} $q$: $\mathcal{D} \rightarrow \mathbb{Z}$, $d:\mathcal{D}, \epsilon:\mathcal{R}^{+}$\\
\hspace*{\algorithmicindent} \textbf{Output:} $o: \mathbb{Z}$\\
\hspace*{\algorithmicindent} \textbf{Required} $d_1\sim d_2\Rightarrow |q(d_1)-q(d_2)|\leq r$
\end{flushleft}
\begin{algorithmic}[1]
\State $v\gets q(d)$
\State ${\color{red}\rho\overset{\$}{\gets}\lapp{0}{\epsilon}}$
\State $o\gets v+\rho$
\State \Return $o$
\end{algorithmic}
\label{alg:wrongnoise}
\end{algorithm}
\end{minipage}
\caption{Example 1. The algorithm is not $\epsilon$-DP.}
\end{wrapfigure}
\ifnum\full=1
Algorithm \ref{alg:wrongnoise} is a wrong implementation of this
principle - more generally, it is a simple example of a program that is
implemented with the wrong noise parameters. Specifically, it takes
as input a numeric query $q$ with type $\mathcal{D} \rightarrow \mathbb{Z}$, a
database $d\in\mathcal{D}$, and the privacy budget we want to guarantee
$\epsilon\in\mathcal{R}^{+}$. It then computes the query on the database,
adds Laplace noise with scale equal to $\frac{1}{\epsilon}$ to the
result of the query\footnote{We actually use the inverse of the scale
as a parameter. That is the instruction
$\rass{x}{\lapp{0}{\epsilon}}$ denotes a sample from the Laplace
distribution with mean 0 and scale $\frac{1}{\epsilon}$. This will
help in considering $\epsilon$ as a \emph{budget to spend}.}, and
releases the result.
This program is not $\epsilon$-differentially private, because it
doesn't calibrate the Laplace noise to the sensitivity of the
query. In fact, as a precondition we assert that the query $q$ is
$r$-sensitive by the requirement
$d_1\sim d_2\Rightarrow |q(d_1)-q(d_2)|\leq r$, asserting that given
two databases $d_1$ and $d_2$ differing for the data of one individual
the query $q$ returns
two results that are at most at distance $r$. The program implementing
algorithm
\ref{alg:wrongnoise} would be $\epsilon$-differentially private if we added
noise with scale $\frac{r}{\epsilon}$ instead of
$\frac{1}{\epsilon}$, that is using the assignment
$\rho\overset{\$}{\gets}\lapp{0}{\epsilon/r}$ instead of
$\rho\overset{\$}{\gets}\lapp{0}{\epsilon}$ in line 2 of the algorithm.
To show formally that we have a privacy violation, according to the definition of differential privacy, we need to exhibit a query $q$, two databases $d_1$ and $d_2$ in the relation $d_1\sim d_2$, and a possible output $u$ making the two probability distributions distinguishable for more than $\epsilon$. Approaching this task directly is intractable~\cite{GaboardiNP20}.
Instead, in order to do this, \textsf{\textup{CRSE}}\xspace\ will execute the program in a relational
symbolic fashion and it will try to prove that in two runs of the program the output variable has the same value and the privacy budget spent is at
most $\epsilon$. Technically, this is implemented by considering the
postcondition $o_1=o_2 \land \epsilon_c\leq \epsilon$, where
$\epsilon_c$ is a distinguished variable recording the privacy budget
spent. If \textsf{\textup{CRSE}}\xspace\ succeeds, then the program is
$\epsilon$-differentially private. If there is an execution that
invalidates this post-condition, then we will have a candidate for a
witness of the violation.
\else
Algorithm \ref{alg:wrongnoise} is a wrong implementation of this
principle - more generally, it is a simple example of a program that is
implemented with the wrong noise parameters. It takes
a numeric query $q$, a database $d$, and the privacy budget
$\epsilon$. It then computes $q(d)$
and adds to it Laplace noise with scale\footnote{The instruction
$\rass{x}{\lapp{0}{\epsilon}}$ denotes a sample from the Laplace
distribution with mean 0 and scale $\frac{1}{\epsilon}$. Using the inverse of the scale as the parameter makes reasoning about the \emph{budget} more direct.} $\frac{1}{\epsilon}$, and
releases the result.
This program is not $\epsilon$-differentially private, because it
doesn't calibrate the noise to the sensitivity of the
query. In fact, as a precondition we assert that the query $q$ is
$r$-sensitive by the requirement
$d_1\sim d_2\Rightarrow |q(d_1)-q(d_2)|\leq r$. The program implementing
algorithm
\ref{alg:wrongnoise} would be $\epsilon$-differentially private if we added
noise with scale $\frac{r}{\epsilon}$ instead of
$\frac{1}{\epsilon}$, that is using $\lapp{0}{\epsilon/r}$ in line 2.
To show formally that we have a privacy violation, we need to witness
a query $q$, two databases $d_1$ and $d_2$ in the relation
$d_1\sim d_2$, and a possible output $u$ making the two probability
distributions distinguishable for more than $\epsilon$. Approaching
this task directly is intractable~\cite{GaboardiNP20}. Instead, in
order to do this, \textsf{\textup{CRSE}}\xspace\ will execute the program in a relational
symbolic fashion and it will try to prove that in two runs of the
program the output variable has the same value and the privacy budget
spent is at most $\epsilon$. Technically, this is implemented by
considering the postcondition $o_1=o_2 \land \epsilon_c\leq \epsilon$,
where $\epsilon_c$ is a distinguished variable recording the privacy
budget spent. If \textsf{\textup{CRSE}}\xspace\ succeeds, then the program is
$\epsilon$-differentially private.
\fi
To avoid resorting to sampling, when \textsf{\textup{CRSE}}\xspace executes the command for Laplace (as in line 2), following the approximate probabilistic coupling idea from~\cite{Barthe:2016}, it couples the
samples ($\rho_1, \rho_2$) in the two runs, and adds the
constraint $\rho_1+k=\rho_2$, for some $k$. It also tracks the budget spent with the constraint
$\epsilon_c=\lvert k \rvert \cdot\epsilon$.
The intuition behind this
constraint is that we can ensure the two samples to be at some
distance if we \emph{pay} enough budget. From this we can see that if
$o_1$ is to be equal to $o_2$ then $k$ needs to be necessarily equal
to $v_1-v_2$. Since $q(d_1)=v_1$ and $q(d_2)=v_2$, the difference
$v_1-v_2$ is bounded in absolute value by $r$, and we get that, in the worst case,
$\epsilon_c=r\epsilon$. This means that in order to achieve equality
of the output variables and hence, $\epsilon$ differential privacy, we
need to spend at least $r$ times the budget $\epsilon$. So, if we are trying
to use less budget, the constraints will give us a candidate for a
witness of the violation.
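The budget arithmetic above can be replayed concretely with the closed-form Laplace density. In the Python sketch below (using the paper's inverse-scale convention, so $\lapp{0}{b}$ has scale $\frac{1}{b}$), the pointwise privacy loss of Algorithm \ref{alg:wrongnoise} is computed on a worst-case pair for an $r$-sensitive query; the numeric values are illustrative.

```python
import math

def lap_logpdf(x, mean, b):
    # log-density of lapp(mean, b): Laplace with mean `mean` and scale 1/b
    return math.log(b / 2.0) - b * abs(x - mean)

def privacy_loss(v1, v2, b, u):
    # |log Pr[A(d1) = u] / Pr[A(d2) = u]|  for  A(d) = q(d) + lapp(0, b)
    return abs(lap_logpdf(u, v1, b) - lap_logpdf(u, v2, b))

eps, r = 0.1, 3.0
v1, v2 = 0.0, r                       # worst-case pair of query answers
outputs = [x / 10.0 for x in range(-100, 101)]

# buggy parameter eps: the loss climbs to r * eps > eps
buggy = max(privacy_loss(v1, v2, eps, u) for u in outputs)
# corrected parameter eps / r: the loss stays within eps
fixed = max(privacy_loss(v1, v2, eps / r, u) for u in outputs)
```

The buggy parameter $\epsilon$ yields a loss reaching $r\epsilon$, while the parameter $\epsilon/r$ keeps the loss within $\epsilon$, matching the coupling-based accounting $\epsilon_c=|k|\cdot b$ with $|k|\leq r$.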
\ifnum\full=1
\subsection{Two buggy Sparse Vector implementations.}
\else
\paragraph{Two buggy Sparse Vector implementations.}
\fi
\begin{figure*}
\vspace{-1.5cm}
\begin{minipage}[t]{0.46\textwidth}
\begin{algorithm}[H]
\caption{A buggy Above Threshold}
\begin{flushleft}
\hspace*{\algorithmicindent} \textbf{Input:} $t,\epsilon \in\mathbb{R},d\in\mathcal{D}, q[i]:\mathcal{D}\rightarrow \mathbb{N}$ \\
\hspace*{\algorithmicindent} \textbf{Output:} $o: [\bot^{i}, z,\bot^{n-i-1}]$\\
\hspace*{\algorithmicindent} \textbf{Required} $d_1\sim d_2\Rightarrow |q[i](d_1)-q[i](d_2)|\leq 1$
\end{flushleft}
\begin{algorithmic}[1]
\State $o \gets \bot^{n}; r \gets n+1$
\State $\hat{t} \gets \lapp{t}{\frac{\epsilon}{2}}$\\
{\tt for}~($i$~{\tt in~}1{:}$n$)~{\tt do}
\State\ \ \ \ $\hat{s}\gets \lapp{q[i](d)}{\frac{\epsilon}{4}}$\\
\ \ \ \ {\tt if}~$\hat{s}>\hat{t}\wedge r=n+1$~{\tt then}
\State \ \ \ \ \ \ \ \ ${\color{red}o[i]\gets \hat{s}}; r\gets i$
\State \Return \emph{o}
\end{algorithmic}
\label{alg:wrongsvt-1}
\end{algorithm}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\begin{algorithm}[H]
\caption{Another buggy Above Threshold}
\begin{flushleft}
\hspace*{\algorithmicindent} \textbf{Input:} $t,\epsilon \in\mathbb{R},d\in\mathcal{D}, q[i]:\mathcal{D}\rightarrow \mathbb{N}$\\
\hspace*{\algorithmicindent} \textbf{Output:} $o\in \{\bot,\top\}^{n}$ \\
\hspace*{\algorithmicindent} \textbf{Required} $d_1\sim d_2\Rightarrow |q[i](d_1)-q[i](d_2)|\leq 1$
\end{flushleft}
\begin{algorithmic}[1]
\State $\hat{t} \gets \lapp{t}{\frac{\epsilon}{2}}$\\
{\tt for}~($i$~{\tt in~}1{:}$n$)~{\tt do}\\
\ \ \ \ {\tt if}~${\color{red}q[i](d)}\geq \hat{t}$~{\tt then}
\State \ \ \ \ \ \ \ \ $o[i]\gets \top$\\
\ \ \ \ {\tt else}
\State \ \ \ \ \ \ \ \ $o[i]\gets \bot$
\State \Return \emph{o}
\end{algorithmic}
\vspace{.6mm}
\label{alg:wrongsvt-2}
\end{algorithm}
\end{minipage}
\end{figure*}
The next two examples are variations of the same algorithm: \emph{above threshold}, a component of the
\emph{sparse vector} technique~\cite{Lyu-2017}.
Given a numeric
threshold, an array of $n$ numeric queries, and a dataset, this algorithm returns the
index of the first query whose result exceeds the
threshold (and possibly also the value of that query).
This should be done in a way that preserves differential privacy. To do this
correctly, a program should add noise to the threshold (even though the threshold is not sensitive data), add noise to each query result, compare the noisy values, and return the index of the first query for which this comparison succeeds. The analysis of this algorithm is rather subtle: it uses the noise on the threshold to pay only once for all the queries that are below the threshold, and the noise on the queries to pay for the first and only query that is above the threshold, if any. Due to this complex analysis, this algorithm has been a benchmark for tools for reasoning about differential privacy~\cite{Barthe:2016,ZhangK17,Barthe2019AutomatedMF}.
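For concreteness, the correct recipe above can be sketched in Python. This is our own illustration, not the \textsf{\textup{CRSE}}\xspace implementation: we read $\lapp{v}{e}$ as $v$ plus Laplace noise of scale $1/e$ (as in the Laplace-mechanism lemma of Section~\ref{sec:prelim}), and we pass the precomputed query results rather than the queries themselves.

```python
import math, random

def laplace(mean, scale):
    # Inverse-CDF sampling of the continuous Laplace distribution.
    u = random.random() - 0.5
    u = max(u, -0.5 + 1e-12)          # guard the log singularity
    return mean - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def above_threshold(query_results, t, eps):
    """Return the index of the first noisy query result above the noisy
    threshold, or None.  The threshold gets budget eps/2 (scale 2/eps)
    and the query comparisons share the remaining budget (scale 4/eps)."""
    t_hat = laplace(t, 2.0 / eps)
    for i, v in enumerate(query_results):
        s_hat = laplace(v, 4.0 / eps)
        if s_hat >= t_hat:
            return i                  # index only: never reveal s_hat
    return None

random.seed(1)
res = above_threshold([0.1, 0.2, 5.0, 0.1], t=1.0, eps=1.0)
assert res is None or 0 <= res < 4
```

The key point, which both buggy variants below break, is that every query is noised and only the index crosses the privacy barrier.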
Algorithm \ref{alg:wrongsvt-1} has a bug that makes it not differentially
private for values of $n$ greater than 4.
\ifnum\full=1
The program takes as input an array of queries of
type $\mathcal{D} \rightarrow \mathbb{Z}$, a privacy budget $\epsilon$, and a threshold $t$.
\else
\fi
The program initializes an array of outputs $o$ to all bottom values, and a variable $r$ to $n+1$, which will be used
as a guard in the main loop. It then adds noise to the threshold and iterates over all the queries, adding
noise to their results. If one of the noised results is above the noisy threshold, it saves
the value in the array of outputs and updates the guard variable,
causing it to exit the main loop. Otherwise, it keeps iterating.
The bug is returning the value of the noisy query that is above the threshold, rather than only its index, as done by the instruction in red in line 6; this is not enough to guarantee differential privacy.
For $n<5$ this program can be shown $\epsilon$-differentially private by using
the composition property of differential privacy, which says that the $k$-fold
composition of $\epsilon$-DP programs is $k\epsilon$-differentially private (Section \ref{sec:prelim}). However, for $n\geq 5$ the more sophisticated analysis
we described above fails.
The proof
principle \textsf{\textup{CRSE}}\xspace will use to try to show this program $\epsilon$-differentially private is
to prove the assertion $o_1=\iota \implies o_2=\iota
\land\epsilon_c\leq \epsilon$ for every $\iota \leq n$; the soundness of this principle was proved in~\cite{Barthe:2016}. That is,
\textsf{\textup{CRSE}}\xspace will try to prove the following assertions (which would prove the bug-free program $\epsilon$-differentially private):
\begin{itemize}
\item[$\bullet$] $o_1=[\hat{s}_1,\bot,\dots,\bot] \implies o_2=[\hat{s}_1,\bot,\dots,\bot] \land\epsilon_c\leq \epsilon$
\item[$\bullet$] $o_1=[\bot,\hat{s}_1,\dots,\bot] \implies o_2=[\bot,\hat{s}_1,\dots,\bot] \land\epsilon_c\leq \epsilon$
\item[] $\dots$
\item[$\bullet$] $o_1=[\bot,\dots,\hat{s}_1] \implies o_2=[\bot,\dots,\hat{s}_1] \land\epsilon_c\leq \epsilon$
\end{itemize}
While proving the first assertion, \textsf{\textup{CRSE}}\xspace will first couple the threshold at line 3 as $\hat{t}_1+k_0=\hat{t}_2$, for $k_0>1$, where $1$ is the sensitivity of the queries; this is needed to guarantee that all the query results below the threshold in one run stay below the threshold in the other run. It will then increase the privacy budget accordingly by $k_0\frac{\epsilon}{2}$. As a second step, it will couple $\hat{s}_1+k_1=\hat{s}_2$ at line 4.
Now, the only way for the assertion $o_1=[\hat{s}_1,\bot,\dots,\bot]\implies o_2=[\hat{s}_1,\bot,\dots,\bot]$ to hold
is to guarantee that both $\hat{s}_1=\hat{s}_2$ and $\hat{s}_1\geq \hat{t}_1 \implies \hat{s}_2\geq \hat{t}_2$ hold.
But these two assertions are inconsistent with each other, because $k_0\geq 1$.
That is, the only way, using these coupling rules, to guarantee that the
right run follows the same branches as the left run (which is necessary for proving the postcondition)
is to couple the samples $\hat{s}_1$ and $\hat{s}_2$ so that they are different,
which necessarily implies the negation of the postcondition. This would not be the case if we returned only the index of the query, since then both queries can be above the threshold even though the noisy values differ. Indeed,
by substituting line 7 with $\rass{o[i]}{\top}$ the program can be proven $\epsilon$-differentially private.
So the \emph{refuting} principle \textsf{\textup{CRSE}}\xspace will use here is the one that finds a trace on the left run
such that the only way the right run can be forced to follow it is by making
the output variables different.
A second buggy variant of the above threshold algorithm is shown in
Algorithm \ref{alg:wrongsvt-2}. In this example, in the body of the
loop, the test is performed between the noisy threshold and the actual
value of the query on the database; that is, no noise is added to
the query. For this example, \textsf{\textup{CRSE}}\xspace will use another \emph{refuting}
principle, based on reachability. In particular, it will vacuously
couple the two thresholds at line 1; that is, it will not introduce any
relation between $\hat{t}_1$ and $\hat{t}_2$. \textsf{\textup{CRSE}}\xspace will then search
for a trace which is satisfiable in the first run but not in the
second one. This translates into an output event which has positive
probability in the first run but probability 0 in the second one,
leading to an unbounded privacy loss and making the algorithm not
$\epsilon$-differentially private for any finite
$\epsilon$. Interestingly, this unbounded privacy loss can be achieved
with just 2 iterations.
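The two-iteration violation can be computed exactly. Suppose (our own instantiation; the concrete numbers are illustrative) the two queries return $(5,5)$ on a database $d$ and $(4,5)$ on an adjacent $d'$. Because only the threshold is noised, the output $(\bot,\top)$ requires the noisy threshold to fall strictly between the two query values: an empty interval on $d$, a nonempty one on $d'$.

```python
import math

def lap_cdf(x, mean, scale):
    """CDF of the continuous Laplace distribution."""
    if x < mean:
        return 0.5 * math.exp((x - mean) / scale)
    return 1.0 - 0.5 * math.exp(-(x - mean) / scale)

t, eps = 3.0, 1.0
scale = 2.0 / eps        # only the threshold is noised
q_d  = [5.0, 5.0]        # query results on a database d
q_d2 = [4.0, 5.0]        # results on an adjacent d' (1-sensitive queries)

def prob_bot_top(q):
    # P[output = (bot, top)] = P[q[0] < t_hat <= q[1]] for one draw of
    # the noisy threshold t_hat; the query values themselves are exact.
    return max(0.0, lap_cdf(q[1], t, scale) - lap_cdf(q[0], t, scale))

p_d, p_d2 = prob_bot_top(q_d), prob_bot_top(q_d2)
# The event is impossible on d but has positive probability on d',
# so the privacy loss is unbounded for every finite epsilon.
assert p_d == 0.0 and p_d2 > 0.0
```

This is exactly the kind of satisfiable-on-one-side-only trace that the reachability-based refuting principle searches for.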
\section{Preliminaries}
\label{sec:prelim}
\ifnum\full=1
\subsubsection*{Discrete Probability Distributions}
Let $A$ be a
denumerable set. A \emph{subdistribution} over $A$ is a function
$\mu:A\to [0,1]$ with weight $\sum_{a\in A}\mu(a)$ less than or equal to
1. We can think about subdistributions as functions assigning to each
subset of $A$ a probability mass. We denote the set of subdistributions
over $A$ by $\sdistr{A}$. When a subdistribution has weight equal to
1, we call it a \emph{distribution}. We denote the set of
distributions over $A$ by $\distr{A}$.
An example of a subdistribution that we will use in the sequel is the \emph{null} subdistribution $\mu_0:A\to[0,1]$, assigning to every element of $A$ mass 0. Another example is the Dirac's distribution $\textbf{\textup{unit}}(a):A\to[0,1]$, defined for $a\in A$ as
\[
\textbf{\textup{unit}}(a)(x)\equiv \left \{
\begin{array}{rcl}
1 && \text{if}\ x=a \cr
0 && \text{otherwise}
\end{array}
\right .\
\]
This is a distribution assigning all the mass to the element $a\in A$. The set of subprobability distributions can be given the structure of a \emph{monad}, with unit the function $\textbf{\textup{unit}}$ associating with each element its Dirac distribution; this is why we chose this notation.
We also have a function $\textbf{\textup{bind}}:\sdistr{A}\rightarrow
(A\rightarrow \sdistr{B})\rightarrow\sdistr{B}$ allowing us to compose subdistributions (as we compose monadic computations). This is defined as
$\textbf{\textup{bind}}\equiv \lambda \mu.\lambda f.\lambda
a.\displaystyle\sum_{b\in A}\mu(b)\cdot f(b)(a)$. We will use these constructions to give a semantics to our language in Section~\ref{sec:conf_to_distr}.
We will also use the following notion of $\epsilon$-divergence to define a notion of approximate coupling at the end of this section.
\begin{definition}
Let $\epsilon\geq 0$. The \emph{$\epsilon$-divergence} between two
subdistributions $\mu_1, \mu_2\in\sdistr{A}$, denoted by
$\divergence{\epsilon}{\mu_1}{\mu_2}$,
is defined as:
\[
\divergence{\epsilon}{\mu_1}{\mu_2}\equiv
\sup_{E\subseteq A}\bigg (\mu_1(E) -\exp(\epsilon)\cdot \mu_2(E)\bigg)
\]
\end{definition}
\else
\paragraph{Discrete Probability Distributions}
Let $A$ be a denumerable set. A \emph{subdistribution} over $A$ is a
function $\mu:A\to [0,1]$ with weight $\sum_{a\in A}\mu(a)$ less than or
equal to 1. We denote the set of
subdistributions over $A$ by $\sdistr{A}$. When a subdistribution has
weight equal to 1, we call it a \emph{distribution}. We denote
the set of distributions over $A$ by $\distr{A}$.
The \emph{null} subdistribution $\mu_0:A\to[0,1]$ assigns to every element of $A$ mass 0. The Dirac distribution $\textbf{\textup{unit}}(a):A\to[0,1]$ is defined as $\textbf{\textup{unit}}(a)(x)\equiv 1$ if $x=a$, and $\textbf{\textup{unit}}(a)(x)\equiv 0$ otherwise. The set of subprobability distributions can be given the structure of a \emph{monad}, with unit the function $\textbf{\textup{unit}}$.
We also have a function $\textbf{\textup{bind}}\equiv \lambda \mu.\lambda f.\lambda
a.\displaystyle\sum_{b\in A}\mu(b)\cdot f(b)(a)$ allowing us to compose subdistributions (as we compose monadic computations).
We will use the notion of $\epsilon$-divergence $\divergence{\epsilon}{\mu_1}{\mu_2}$ between two
subdistributions $\mu_1, \mu_2\in\sdistr{A}$ to define approximate couplings; it is defined as $
\divergence{\epsilon}{\mu_1}{\mu_2}\equiv
\sup_{E\subseteq A}\big (\mu_1(E) -\exp(\epsilon)\cdot \mu_2(E)\big)
$.
\fi
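The $\unit{\cdot}$ and $\bind{\cdot}{\cdot}$ constructions can be modelled very concretely. The following Python sketch is our own illustration (it represents finite subdistributions as dictionaries from outcomes to mass; this representation is not part of the formal development):

```python
def unit(a):
    """Dirac distribution centered at a, as a dict from outcomes to mass."""
    return {a: 1.0}

def bind(mu, f):
    """Compose a subdistribution mu with a kernel f mapping each outcome
    to a subdistribution: bind(mu, f)(a) = sum_b mu(b) * f(b)(a)."""
    out = {}
    for b, p in mu.items():
        for a, q in f(b).items():
            out[a] = out.get(a, 0.0) + p * q
    return out

# A fair coin, post-processed by adding 1: bind reweights and sums masses.
coin = {0: 0.5, 1: 0.5}
d = bind(coin, lambda b: unit(b + 1))
assert d == {1: 0.5, 2: 0.5}
assert abs(sum(d.values()) - 1.0) < 1e-12   # total mass is preserved here
```

Note that with a subdistribution kernel (one that drops mass, like the conditioning cases in the translation of Section~\ref{sec:conf_to_distr}), `bind` returns a subdistribution of weight strictly less than 1.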
\ifnum\full=1
\subsubsection*{Differential Privacy}
\label{sec:dp}
Differential privacy intuitively guarantees that computations over any two inputs differing in the data of one individual result in close distributions over outputs. Formally, it is defined as follows.
\begin{definition}[Differential Privacy\cite{DworkMNS16}]
Let $\epsilon\geq 0$ and $0\leq \delta\leq 1$. Let $\sim\subseteq \mathcal{D}\times\mathcal{D}$.
An algorithm $\mathcal{A}:\mathcal{D}\rightarrow \distr{\mathcal{O}}$ is $(\epsilon,\delta)$-differentially private w.r.t $\sim$ iff
$\forall D\sim D'.\forall o\subseteq\mathcal{O}. \Pr[\mathcal{A}(D)\in o] \leq e^{\epsilon} \Pr[\mathcal{A}(D')\in o] + \delta$.
\end{definition}
The
relation $\sim$ models which pairs of input databases should be
considered sensitive, i.e., what data should be nearly
indistinguishable for an adversary. In this work we will mostly
consider the vanilla definition of differential privacy, where
$\delta=0$. Differential privacy enjoys a number of useful
properties. Here we describe the ones most relevant to this work.
\begin{lemma}[Sequential Composition\cite{DworkMNS16}]
\label{lem:sequential}
Given algorithms $A_1$ and $A_2$, respectively $(\epsilon_1,\delta_1)$-dp and $(\epsilon_2,\delta_2)$-dp,
their sequential composition $A(d)\equiv A_2(\langle A_1(d),d\rangle)$ is $(\epsilon_1+\epsilon_2, \delta_1+\delta_2)$-dp.
\end{lemma}
In the specific case of $A_2$ being $0$-dp, for instance when $A_2$
ignores or does not depend on $d$, the property of sequential
composition is called \emph{post-processing}. It intuitively means
that an $(\epsilon,\delta)$-differentially private answer remains such
when arbitrarily post-processed, as long as the post-processing does
not depend on the data. Any differentially private version of a
numeric query necessarily has to hide the difference in output between
two adjacent inputs~\cite{Vadhan827361}. This difference in output is
captured by the following notion of sensitivity of a function.
\begin{definition}
Let $\sim\subseteq \mathcal{D}\times\mathcal{D}$, and $f:\mathcal{D}\rightarrow\mathbb{Z}$. Then $f$ is $k$-sensitive if
$\lvert f(x)-f(y)\rvert\leq k$ for all $x\sim y$.
\end{definition}
The following lemma provides the first differentially private primitive.
\begin{lemma}[Laplace Mechanism\cite{DworkMNS16}]
\label{lem:laplace}
Let $\epsilon>0$, and assume that $f:\mathcal{D}\rightarrow \mathbb{Z}$ is a $k$-sensitive function
with respect to $\sim\subseteq \mathcal{D}\times\mathcal{D}$. Then the randomized algorithm
mapping $D$ to $f(D)+\nu$, where $\nu$ is sampled from the Laplace distribution with
scale $\frac{1}{\epsilon}$, is $k\epsilon$-differentially private w.r.t.\ $\sim$.
\end{lemma}
\else
\paragraph{Differential Privacy}
\label{sec:dp}
Formally, differential privacy~\cite{DworkMNS16} is a property of programs, defined as follows:
\begin{definition}
Let $\epsilon\geq 0$ and $\sim\subseteq \mathcal{D}\times\mathcal{D}$.
A program $\mathcal{A}:\mathcal{D}\rightarrow \distr{\mathcal{O}}$ is $\epsilon$-differentially private with respect to $\sim$ if and only if
$\forall D\sim D'.\forall o\in\mathcal{O}. \Pr[\mathcal{A}(D)=o] \leq e^{\epsilon} \Pr[\mathcal{A}(D')=o]$\footnote{We use the vanilla definition of differential privacy for simplicity of explanation, but allowing a $\delta>0$ would not change
how \textsf{\textup{CRSE}}\xspace works.}.
\end{definition}
The adjacency
relation $\sim$ models which pairs of input databases should be
indistinguishable to an adversary. Differentially private programs can be composed~\cite{DworkMNS16}:
given programs $A_1$ and $A_2$, respectively $\epsilon_1$- and $\epsilon_2$-differentially private,
their sequential composition $A(d)\equiv A_2(\langle A_1(d),d\rangle)$ is $(\epsilon_1+\epsilon_2)$-differentially private.
The following lemma is a classical result in differential privacy~\cite{DworkMNS16}.
\begin{lemma}
\label{lem:laplace}
Let $\epsilon>0$, and assume that $f:\mathcal{D}\rightarrow \mathbb{Z}$ is a $k$-sensitive function
with respect to $\sim\subseteq \mathcal{D}\times\mathcal{D}$. Then the randomized algorithm
mapping $D$ to $f(D)+\nu$, where $\nu$ is sampled from a discrete version of the Laplace distribution with
scale $\frac{1}{\epsilon}$, is $k\epsilon$-differentially private w.r.t.\ $\sim$.
\end{lemma}
Here a program $f:\mathcal{D}\rightarrow\mathbb{Z}$ is $k$-sensitive if
$\lvert f(x)-f(y)\rvert\leq k$ for all $x\sim y$.
\fi
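The composition and Laplace-mechanism lemmas can be checked concretely on a discrete Laplace with parameter $\epsilon$ (scale $1/\epsilon$). The following Python sketch is our own illustration (the query values, budgets, and truncated support are arbitrary choices): it computes the exact worst-case pointwise loss of two independent noisy releases on adjacent databases.

```python
import math

def dlap_pmf(z, mean, eps):
    """Discrete Laplace pmf with parameter eps (scale 1/eps),
    proportional to exp(-eps * |z - mean|)."""
    t = math.exp(-eps)
    return (1 - t) / (1 + t) * t ** abs(z - mean)

# Two 1-sensitive queries evaluated on adjacent databases.
f_d, f_d2 = (7, 3), (8, 2)
e1, e2 = 0.4, 0.7

def joint(z, f):
    # Probability of the output pair z under independent Laplace noise.
    return dlap_pmf(z[0], f[0], e1) * dlap_pmf(z[1], f[1], e2)

worst = max(
    joint((z1, z2), f_d) / joint((z1, z2), f_d2)
    for z1 in range(-20, 30) for z2 in range(-20, 30)
)
# Sequential composition: the pointwise losses multiply, so the pair of
# releases is (e1 + e2)-differentially private, and the bound is tight.
assert worst <= math.exp(e1 + e2) + 1e-9
assert worst >= math.exp(e1 + e2) - 1e-9
```

Each release alone attains loss $e^{\epsilon_i}$; the product structure of the joint probabilities is exactly why the budgets add under sequential composition.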
\ifnum\full=1
\subsubsection*{Approximate Probabilistic Liftings }
In this section we give the formal definition of
approximate probabilistic liftings and
make explicit their connection with differential privacy.
\begin{definition}
Given two sub-distributions $\mu_1\in\sdistr{A}, \mu_2\in\sdistr{B}$, a relation
$\Psi\subseteq A\times B$, and $\epsilon\in\mathbb{R},\delta\in[0,1]$,
we say that $\mu_1,\mu_2$ are related by the $(\epsilon,\delta)$ approximate lifting
of $\Psi$ iff there exists $\mu_L,\mu_R\in\distr{A\times B}$ such that:
\begin{itemize}
\item $\pi_1(\mu_L)=\mu_1$ and $\pi_2(\mu_R)=\mu_2$
\item $\supp{\mu_L}\cup\supp{\mu_R}\subseteq \Psi$
\item $\divergence{\epsilon}{\mu_L}{\mu_R}\leq \delta$
\end{itemize}
\end{definition}
\begin{lemma}[Fundamental Property of Liftings~\cite{Barthe:2016}]
\label{lem:foundamental}
Let $\mu_1,\mu_2\in\distr{A}, \epsilon,\delta\geq 0$. Then $\divergence{\epsilon}{\mu_1}{\mu_2}\leq \delta$ iff $\mu_1 (=)^{\epsilon,\delta} \mu_2$.
\end{lemma}
From Lemma \ref{lem:foundamental} we can derive that an algorithm $A$
is $(\epsilon,\delta)$-dp w.r.t.\ an adjacency relation $\sim$ iff
$A(d_1) (=)^{\epsilon,\delta} A(d_2)$ for all $d_1\sim d_2$.
The following lemma states another useful proof principle.
\begin{lemma}[Pointwise Differential Privacy\cite{Barthe:2016}]
\label{lem:pointwise}
An algorithm $A:\mathcal{D}\rightarrow \distr{B}$ is $(\epsilon,\delta)$-dp w.r.t $\sim$ iff
there exists $\{\delta_b\mid \delta_b\geq 0\}_{b\in B}$ such that $\displaystyle\sum_{b\in B} \delta_b\leq \delta$
and $A(d_1) (\Psi_{b})^{\epsilon,\delta_b}A(d_2)$ for every $d_1\sim d_2$,
where $\Psi_b\equiv\{(x_1,x_2)\mid x_1=b \implies x_2=b\}\subseteq B\times B$.
\end{lemma}
The next lemma, finally, casts the Laplace mechanism in terms of couplings.
\begin{lemma}
\label{em:laplace_gen}
Let $L_{v_1,b}, L_{v_2,b}$ be two Laplace-distributed random variables with means $v_1$ and $v_2$, respectively, and scale $b$.
Then $L_{v_1, b}\ \left(\{(z_1,z_2)\in \mathbb{Z}\times\mathbb{Z}\mid z_1+k=z_2\}\right)^{\lvert k + v_1 - v_2\rvert\epsilon} L_{v_2,b}$, for all $k\in\mathbb{Z}$ and $\epsilon\geq 0$.
\end{lemma}
\else
\paragraph*{Probabilistic Liftings and Coupling}
The notion of approximate probabilistic coupling is internalized by the notion of approximate lifting. We focus here on distributions and use a simplified version of the notion presented in~\cite{Barthe:2016}, since we restrict to pure differential privacy.
\begin{definition}
Given two distributions $\mu_1\in\distr{A}, \mu_2\in\distr{B}$, a relation
$\Psi\subseteq A\times B$, and $\epsilon\in\mathbb{R}$,
we say that $\mu_1,\mu_2$ are related by the $\epsilon$ approximate lifting
of $\Psi$, denoted $\mu_1 (\Psi)^{\epsilon} \mu_2$, iff there exists $\mu_L,\mu_R\in\distr{A\times B}$ such that:
1) $\lambda a.\sum_b\mu_L(a,b)=\mu_1$ and $\lambda b.\sum_a\mu_R(a,b)=\mu_2$,
2) $\{(a,b)| \mu_L(a,b)>0 \lor \mu_R(a,b)>0\}\subseteq \Psi$,
3) $\divergence{\epsilon}{\mu_L}{\mu_R}\leq 0$.
\end{definition}
Approximate lifting satisfies the following fundamental property.
\begin{lemma}
\label{lem:foundamental}
Let $\mu_1,\mu_2\in\distr{A}, \epsilon \geq 0$. Then $\divergence{\epsilon}{\mu_1}{\mu_2}\leq 0$ iff $\mu_1 (=)^{\epsilon} \mu_2$.
\end{lemma}
From Lemma \ref{lem:foundamental} we have that an algorithm $A$
is $\epsilon$-differentially private w.r.t.\ $\sim$ iff
$A(d_1) (=)^{\epsilon} A(d_2)$ for all $d_1\sim d_2$.
The next lemma, finally, casts the Laplace mechanism in terms of couplings.
\begin{lemma}
\label{em:laplace_gen}
Let $L_{v_1,b}, L_{v_2,b}$ be two Laplace-distributed random variables with means $v_1$ and $v_2$, respectively, and scale $b$.
Then $L_{v_1, b}\, \left(\{(z_1,z_2)\in \mathbb{Z}\times\mathbb{Z}\mid z_1+k=z_2\}\right)^{\lvert k + v_1 - v_2\rvert\epsilon}\, L_{v_2,b}$, for all $k\in\mathbb{Z}$ and $\epsilon\geq 0$.
\end{lemma}
\fi
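The Laplace coupling lemma can be validated numerically: pairing each $z$ with $z+k$ yields a witness lifting for the relation $\{(z_1,z_2)\mid z_1+k=z_2\}$, whose $\epsilon$-divergence at budget $\lvert k+v_1-v_2\rvert/b$ is non-positive. The Python sketch below is our own instantiation (discrete Laplace with scale $b$, truncated support, and the lemma's $\epsilon$ read as $1/b$):

```python
import math

def dlap_pmf(z, mean, b):
    """Discrete Laplace pmf with scale b, proportional to exp(-|z-mean|/b)."""
    t = math.exp(-1.0 / b)
    return (1 - t) / (1 + t) * t ** abs(z - mean)

v1, v2, b, k = 4, 7, 2.0, 5        # couple the samples as z2 = z1 + k
eps = abs(k + v1 - v2) / b         # budget claimed by the coupling lemma

# The shift z -> z + k is a witness lifting for {(z1, z2) | z1 + k = z2}:
# its eps-divergence is the sum of the positive parts of the differences.
div = sum(
    max(0.0, dlap_pmf(z, v1, b) - math.exp(eps) * dlap_pmf(z + k, v2, b))
    for z in range(-200, 200)
)
assert div <= 1e-9                 # non-positive: the coupling is valid
```

The triangle inequality $\lvert z+k-v_2\rvert\leq\lvert z-v_1\rvert+\lvert k+v_1-v_2\rvert$ makes every summand non-positive, which is exactly the pointwise form of condition 3 in the lifting definition.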
\section{Concrete languages}
\label{sec:conc_lang}
\ifnum\full=1
In this section we describe the syntax and the semantics
of the concrete languages \textsf{\textup{PFOR}}\xspace and {\textsf{\textup{RPFOR}}}\xspace. We call them
\emph{concrete} languages, as opposed to \emph{symbolic}, as is
standard in the symbolic execution literature~\cite{King:1976,Farina2019}. The first
language, \textsf{\textup{PFOR}}\xspace, is the language in which our programs will be written,
and is a simple imperative language with for loops and random assignment.
In order to prove relational properties about such programs we define
{\textsf{\textup{RPFOR}}}\xspace, which, with a relational semantics, captures pairs of \textsf{\textup{PFOR}}\xspace
programs and their paired semantics. We start off with \textsf{\textup{PFOR}}\xspace.
\else
In this section we sketch the two \textsf{\textup{CRSE}}\xspace concrete languages, the unary one \textsf{\textup{PFOR}}\xspace and the relational one {\textsf{\textup{RPFOR}}}\xspace. More details are in the Appendix.
\fi
\ifnum\full=1
\begin{wrapfigure}{L}{0.6\textwidth}
\vspace{-0.8cm}
\fbox{
\begin{minipage}[t]{0.55\textwidth}
\begin{align*}
\mathcal{E} \ni e::= &v\mid x\mid \arracc{a}{e}\mid \len{a}\mid e \oplus e\\
\mathcal{C}\ni c ::= &{\tt skip}\mid \seq{c}{c}\mid \ass{x}{e}\mid \ass{\arracc{a}{e}}{e}\mid\\
&\rass{x}{\lapp{e}{e}}\mid \ifte{e}{c}{c}\mid\\
&\cfor{x}{e}{e}{c}
\end{align*}
\end{minipage}
}
\caption{Syntax of \textsf{\textup{PFOR}}\xspace}
\label{fig:pfor-syntax}
\end{wrapfigure}
\subsection{\textsf{\textup{PFOR}}\xspace syntax}
\label{sec:pfor}
\textsf{\textup{PFOR}}\xspace is a basic FOR-like language with arrays and probabilistic sampling from the Laplace distribution. The syntax is pretty standard and we present it in Figure
\ref{fig:pfor-syntax}. We let $n,n_1,n_2$ range
over $\mathbb{Z}$, and $e_1,e_2, e$ range over the set of
arithmetic expressions $\mathcal{E}$, which is defined inductively. Expressions include
basic values $v\in\mathcal{V}\equiv\mathbb{Z}\cup\mathcal{X}_p$, where the set $\mathcal{X}_p$
contains values denoting random expressions and will be explained further at the semantic level in the next
section. Program variables $x\in\mathbb{V}$ are also expressions, as are
arithmetic operations $e_1\oplus e_2$ where $\oplus\in\{+,-,*,/ \}$, and, finally,
array accesses $\arracc{a}{e}$ and lengths $\len{a}$, where $a$ is an array name in $\mathbb{A}$.
We assume $\mathbb{V},\mathbb{A}$, and $\mathcal{X}_p$ to be pairwise disjoint. The set
of commands $\mathcal{C}$ includes assignments, array assignments, the
${\tt skip}$ command, sequencing, branching, and a looping construct. Finally, we
also include a primitive instruction $\rass{x}{\lapp{e_1}{e_2}}$ to
model random sampling from the Laplace distribution.
\subsection{\textsf{\textup{PFOR}}\xspace Semantics}
As mentioned above, the set $\mathcal{X}_p$ contains values denoting
random expressions; we call values in $\mathcal{X}_p$ distribution values.
We will use capital letters such as $X,Y,\dots$ to
denote arbitrary elements of $\mathcal{X}_p$.
In Figure \ref{fig:pconstraints-syntax}, we introduce a grammar of
random expressions, where $X$ ranges over $\mathcal{X}_p$ and
$n,n_1,n_2\in\mathbb{Z}$. The simple constraints in the syntactic
categories $ra$ and $re$ record that a random
value is either associated with a specific distribution, or that the
computation is conditioned on some random expression being greater than
0 or less than or equal to 0. The latter constraints, as we will
see, come from branching instructions. We treat constraint lists $p,
p'$, in Figure \ref{fig:pconstraints-syntax} as lists of simple
constraints and hence, from now on, we will use the infix operators
$::$ and $@$, respectively, for appending a simple constraint to a
constraint and for concatenating two constraints. The symbol $[]$
denotes the empty list of probabilistic constraints. Environments in
the set $\mathcal{M}$, or probabilistic memories, map program variables to values in
$\mathcal{V}$, and array names to elements in
$\textbf{Array}\equiv\bigcup_{i}\mathcal{V}^i$, so the type of a
memory $m\in\mathcal{M}$ is $(\mathbb{V}\rightarrow\mathcal{V})
\cup(\mathbb{A}\rightarrow \textbf{Array})$.
We will distinguish between probabilistic concrete memories in $\mathcal{M}$ and concrete
memories in the set $\mathcal{M}_{\textup{c}}\equiv (\mathbb{V}\rightarrow \mathbb{Z}) \cup
(\mathbb{A}\rightarrow\bigcup_{i}\mathbb{Z}^i)$.
Probabilistic concrete memories are meant to denote subdistributions
over the set of concrete memories $\mathcal{M}_c$; more about this connection
in Section \ref{sec:conf_to_distr}.
\begin{wrapfigure}{L}{0.5\textwidth}
\vspace{-0.8cm}
\fbox{
\begin{minipage}{0.4\textwidth}
\begin{flalign*}
ra&::=\rass{X}{\lapp{n_1}{n_2}}&\\
re&::=n\mid X\mid re\oplus re&\\
P \ni p&::=X=re\mid re>0\mid re\leq 0\mid &\\
&\ \ \ ra \mid p::P\mid []&
\end{flalign*}
\end{minipage}
}
\caption{Concrete probabilistic constraints}
\label{fig:pconstraints-syntax}
\end{wrapfigure}
Expressions in \textsf{\textup{PFOR}}\xspace
are given meaning through a big-step evaluation semantics specified by
a judgment of the form $\econf{m}{e}{p}\downarrow_{\textup{c}} \vconf{v}{p'}$, where
$m\in\mathcal{M}, e\in\mathcal{E}, p,p'\in P, v\in\mathcal{V}$. The judgment
reads as: expression $e$ reduces to the value $v$ and probabilistic
constraints $p'$ in an environment $m$ with probabilistic concrete constraints
$p$. Commands are given meaning through a small-step evaluation
semantics specified by a judgment of the form $\conf{m}{c}{p}\rightarrow_{\textup{c}}
\conf{m'}{c'}{p'}$, where $m,m'\in\mathcal{M}, c,c'\in\mathcal{C},p,p'\in
P$. The judgment reads as: the probabilistic concrete configuration $\conf{m}{c}{p}$
steps to the probabilistic concrete configuration $\conf{m'}{c'}{p'}$. We call a
probabilistic concrete configuration of the form $\conf{m}{{\tt skip}}{p}$ final. A
set of concrete configurations $\mathscr{D}$ is called final, denoted $\final{\mathscr{D}}$, if
all its concrete configurations are final. We will use this predicate also for sets of sets of concrete configurations,
with the obvious lifted meaning. Figure
\ref{fig:pfor-selrules} shows a selection of the rules defining
these judgments.
\begin{figure}
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{if-false}]
{\econf{m}{e}{p}\downarrow_{\textup{c}} \vconf{v}{p'}\and
v\in\mathbb{Z}\and v\leq 0
}
{\conf{m}{\ifte{e}{c_1}{c_2}}{p} \rightarrow_{\textup{c}} \conf{m}{c_2}{p'}}
\inferrule[\rulestyle{if-true-prob}]
{\econf{m}{e}{p}\downarrow_{\textup{c}} \vconf{v}{p'} \and v\in\mathcal{X}_p
\and p''\equiv p'@v>0
}
{\conf{m}{\ifte{e}{c_1}{c_2}}{p} \rightarrow_{\textup{c}} \conf{m}{c_1}{p''}}
\inferrule[\rulestyle{lap-ass}]
{
\econf{m}{e_1}{p} \downarrow_{\textup{c}}\vconf{n_1}{p_1} \and
\econf{m}{e_2}{p_1} \downarrow_{\textup{c}}\vconf{n_2}{p_2} \\\\
n_2>0 \and
\fresh{X}{\mathcal{X}_p} \\\\ p' \equiv p_2@X=\lapp{n_1}{n_2}
}
{
\conf{m}{\rass{x}{\lapp{e_1}{e_2}}}{p} \rightarrow_{\textup{c}} \conf{m[x\mapsto X]}{{\tt skip}}{p'}
}
\end{mathpar}
}
\caption{\textsf{\textup{PFOR}}\xspace selected rules}
\label{fig:pfor-selrules}
\end{figure}
Most of the rules are self-explanatory, so we only describe the
non-standard ones. Rule $\rulestyle{lap-ass}$
handles random assignment. It evaluates the mean $e_1$ and the
scale $e_2$ of the distribution and checks that $e_2$ actually denotes
a positive number. The semantic predicate $\fresh{\cdot}{\cdot}$
asserts that the first argument is drawn nondeterministically from
the second argument and that it was never used before in the
computation. Notice that if one of these two expressions reduces to a
probabilistic symbolic value, the computation halts. Rule
$\rulestyle{if-true-prob}$ (and $\rulestyle{if-false-prob}$) reduces
the guard of a branching instruction to a value. If the value is a
distribution value, it nondeterministically
chooses one of the two branches, recording the choice made in the list
of probabilistic constraints. If instead the value of the guard is
a numerical constant, the branch is chosen
deterministically using the rules $\rulestyle{if-false}$ and
$\rulestyle{if-true}$ (not shown).
As is clear from the rules, a run of a \textsf{\textup{PFOR}}\xspace program can generate many
different final concrete configurations.
A different judgment of the form $\mathscr{D} \Rightarrow_{\textup{c}}
\mathscr{D}'$, where
$\mathscr{D},\mathscr{D}'\in\mathcal{P}(\mathcal{M}\times\mathcal{C}\times P)$,
and in particular its transitive and reflexive closure ($\Rightarrow_{\textup{c}}^{*}$), will help us collect
all the possible final configurations stemming from a computation.
\begin{wrapfigure}{L}{0.6\textwidth}
\vspace{-0.5cm}
\fbox{
\begin{minipage}{0.55\textwidth}
\begin{mathpar}
\inferrule[\rulestyle{Sub-distr-step}]
{
\mathscr{D}_t\equiv\{\conf{m'}{c'}{p'}\mid
\conf{m}{c}{p} \rightarrow_{\textup{c}} \conf{m'}{c'}{p'} \}\\\\
\conf{m}{c}{p} \in \mathscr{D}\\\\
\mathscr{D}'\equiv\bigg(\mathscr{D}\setminus\{\conf{m}{c}{p}\}\bigg ) \cup \mathscr{D}_t
}
{
\mathscr{D} \Rightarrow_{\textup{c}} \mathscr{D}'
}
\end{mathpar}
\end{minipage}
}
\caption{\rulestyle{Sub-distr-rule}}
\label{fig:rule_confs}
\end{wrapfigure}
The only rule that defines the judgment, $\rulestyle{Sub-distr-step}$,
is presented in Figure \ref{fig:rule_confs}.
Rule $\rulestyle{Sub-distr-step}$ nondeterministically selects
one configuration $s=\conf{m}{c}{p}$ from
$\mathscr{D}$, removes $s$ from it, and adds to $\mathscr{D}'$ all the
configurations $s'$ reachable from $s$ in one step.
\subsection{From configurations to subdistributions}
\label{sec:conf_to_distr}
In Section \ref{sec:prelim} we defined the notions of lifting,
coupling, and differential privacy using subdistributions in the form
of functions from a set of atomic events to the interval $[0,1]$. The
semantics of the languages proposed so far, though, only deals with
subdistributions represented as sets of concrete probabilistic
configurations. In this section we map the latter to the former.
We start by giving the two operators used to compose and define
subdistributions in functional form, namely $\unit{\cdot}$ and $\bind{\cdot}{\cdot}$.
In the following, we use lambda notation, and the denumerable sets $\mathcal{O}, \mathcal{O}'$
are universally quantified. The first one is defined as:
$\textbf{\textup{unit}}: \mathcal{O} \rightarrow \sdistr{\mathcal{O}} \equiv \lambda a. \lambda x. \left \{
\begin{array}{rcl}
1 && \text{if}\ x=a \cr
0 && \text{otherwise}
\end{array}
\right .\
$.
The second one, is defined as
\[
\textbf{\textup{bind}}: \sdistr{\mathcal{O}'}\rightarrow
(\mathcal{O}'\rightarrow \sdistr{\mathcal{O}})\rightarrow\sdistr{\mathcal{O}}\equiv \lambda \mu.\lambda f.\lambda
a.\displaystyle\sum_{b\in\mathcal{O}'}\mu(b)\cdot f(b)(a)
\]
In particular, $\unit{\cdot}$ takes an arbitrary element $a$ of a set $\mathcal{O}$
and returns the Dirac distribution centered at $a$, while
$\bind{\cdot}{\cdot}$
builds a new subdistribution from an initial
subdistribution and a family of conditional distributions. Using
$\unit{\cdot}$ and $\bind{\cdot}{\cdot}$ it is possible to give a monadic
structure to the semantics of the language, as is done in~\cite{barthe2012probabilistic}
for the language \textbf{\textup{pWhile}}.
In Figure \ref{fig:translation} we define a translation function
($\denotemp{\cdot}{\cdot}$), together with auxiliary functions, from
a single probabilistic concrete configuration to a subdistribution
defined using the $\unit{\cdot}/\bind{\cdot}{\cdot}$ constructs. We make use of the
constant subdistribution $\mu_0$, which maps every element to mass 0
and is usually referred to as the \emph{null} subdistribution; also, by $\lapp{n_1}{n_2}(z)$
we denote the mass at the point $z$ of the (discrete version of the) Laplace distribution centered at $n_1$ with scale $n_2$.
\begin{figure*}
\fbox{
\[
\begin{array}{lll}
\denotemp{m_s}{p}&=&\bind {\denotep{p}} {(\lambda s_o. \unit{s_o(m_s)})}\\
\denotep{[]}&=&\unit{[]}\\
\denotep{X=re::p'}&=& \bind{\denotep{p'}} {\lambda s_o. \bind{\denotere{re}{s_o}}{\lambda z_o. \unit{X=z_o::s_o}}}\\
\denotep{re>0::p'}&=& \bind{\denotep{p'}}{\lambda s_o. \bind{\denotere{re}{s_o}}{\lambda z_o. \text{if}\ (z_o>0)\ \text{then}\ \unit{z_o}\ \text{else}\ \mu_0}}\\
\denotep{re\leq 0::p'}&=& \bind{\denotep{p'}}{\lambda s_o. \bind{\denotere{re}{s_o}}{\lambda z_o. \text{if}\ (z_o\leq 0)\ \text{then}\ \unit{z_o}\ \text{else}\ \mu_0}}\\
\denotere{\lapp{n_1}{n_2}}{s}&=&\lambda z.\lapp{n_1}{n_2}(z)\\
\denotere{n}{s}&=&\unit{n}\\
\denotere{X}{s}&=&\unit{s(X)}\\
\denotere{re_1\oplus re_2}{s}&=&\bind{\denotere{re_1}{s}}{\lambda v_1.\bind{\denotere{re_2}{s}}{\lambda v_2.\unit{v_1\oplus v_2}}}
\end{array}
\]
}
\caption{Translation from configuration to $\unit{\cdot}/\bind{\cdot}{\cdot}$ representation of subdistribution}
\label{fig:translation}
\end{figure*}
The idea of the translation is that we can transform a probabilistic
concrete memory $m_s\in\mathcal{M}$ into a distribution over fully concrete memories in
$\mathcal{M}_c$ by sampling from the distributions of the
probabilistic variables defined in $m_s$, in the order in which they were
declared, which is specified by the probabilistic path constraints.
To do this we first build a substitution for the probabilistic variables,
mapping them to integers, and then apply the substitution
to $m_s$.
Given a set of probabilistic concrete memories, we can then turn them
into a subdistribution by summing up the translations of the
individual probabilistic configurations. Indeed, given two
subdistributions $\mu_1,\mu_2$ defined over the same set, we can always
define the subdistribution $\mu_1+\mu_2$ by the mapping
$(\mu_1+\mu_2)(a)=\mu_1(a)+\mu_2(a)$.
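Representing a subdistribution as a finite dictionary from outcomes to mass, the pointwise sum is immediate; the function name `dsum` is ours:

```python
def dsum(mu1, mu2):
    """Pointwise sum of two subdistributions over the same set:
    (mu1 + mu2)(a) = mu1(a) + mu2(a)."""
    out = dict(mu1)
    for a, p in mu2.items():
        out[a] = out.get(a, 0.0) + p
    return out
```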
The following lemma states an equivalence between these two
representations of probability subdistributions. Its hypothesis
involves a judgment $\match{m}{p}$, which is not specified here for lack
of space but can be found in the appendix; it deals with
well-formedness of the probabilistic path constraint $p$ with respect
to the concrete probabilistic memory $m$.
\begin{lemma}\label{thm:op_to_denot} If $\match{m}{p}$ and
$\{\conf{m}{c}{p}\}\Rightarrow_{\textup{c}}^{*}\{\conf{m_1}{{\tt skip}}{p_1},\dots,\conf{m_n}{{\tt skip}}{p_n}\}$
then
$\bind{\denotemp{m}{p}}{\cdenote{c}}=\displaystyle\sum_{i=1}^{n}\denotemp{m_i}{p_i}$
\end{lemma}
Hence, in this work, we take the following as the definition of the
full denotational semantics of a program executed in a memory:
\begin{definition}
\label{def:full_semantics} The semantics of a program $c$
executed on memory $m$ and probability path constraint $p_0$ is
$\cdenote{c}(m_0,p_0)\equiv
\displaystyle\sum_{(m,{\tt skip},p)\in\mathscr{D}}\denotemp{m}{p}$,
when $\{\conf{m_0}{c}{p_0}\}\Rightarrow_{\textup{c}}^{*}\mathscr{D}$, $\final{\mathscr{D}}$, and $\match{m_0}{p_0}$.
If $p_0=[]$ we write $\cdenote{c}(m_0)$.
\end{definition}
\else
\textsf{\textup{PFOR}}\xspace is a basic for-language with arrays and probabilistic sampling from the Laplace distribution, $\rass{x}{\lapp{e}{e}}$.
For simplicity, we assume that the parameters of the Laplace distribution are expressions which do not depend on other probability distributions; this is sufficient for the most common examples in differential privacy. The rest of the syntax is standard and we omit it here.
For executing programs, we use a distribution transformer semantics based on constraints $p$ over a set $\mathcal{X}_p$ whose elements denote
random values; we call them distribution values and use capital letters such as $X,Y,\dots$ for them.
Expressions in \textsf{\textup{PFOR}}\xspace
are given meaning through a big-step evaluation semantics specified by
a judgment of the form: $\econf{m}{e}{p}\downarrow_{\textup{c}} \vconf{v}{p'}$, where
$m\in\mathcal{M}$ is a memory potentially containing distribution values.
Commands are given meaning through a small-step evaluation
semantics specified by a judgment of the form: $\conf{m}{c}{p}\rightarrow_{\textup{c}}
\conf{m'}{c'}{p'}$. We call a
probabilistic concrete configuration of the form $\conf{m}{{\tt skip}}{p}$ final. A
set of concrete configurations $\mathscr{D}$ is called final, denoted $\final{\mathscr{D}}$, if
all its concrete configurations are final.
We also have a collecting semantics,
with judgments of the form $\mathscr{D} \Rightarrow_{\textup{c}}
\mathscr{D}'$, which can be mapped to the corresponding semantics over probability distributions.
\fi
\ifnum\full=1
\subsection{{\textsf{\textup{RPFOR}}}\xspace syntax}
\label{sec:rpfor}
\textsf{\textup{PFOR}}\xspace's semantics is unary: syntactically, the configurations it deals with
are characterized by memories mapping variable and array names to single objects;
semantically, it captures only the computation of a program over a single memory.
In order to be able to reason about a relational property, such as differential
privacy, we will build on top of it a relational language called {\textsf{\textup{RPFOR}}}\xspace with
a relational semantics dealing with pairs of traces. Intuitively,
an execution of a single {\textsf{\textup{RPFOR}}}\xspace program represents the execution of two
\textsf{\textup{PFOR}}\xspace programs. Inspired by the approach of
\cite{pottier2002information}, we extend the grammar of \textsf{\textup{PFOR}}\xspace with a
pair constructor $\pair{\cdot}{\cdot}$ which can be used at the
level of values $\pair{v_1}{v_2}$,
expressions~$\pair{e_1}{e_2}$, or commands $\pair{c_1}{c_2}$.
Notice that $c_i, e_i, v_i$ for $i\in\{1,2\}$ are commands,
expressions, and values in \textsf{\textup{PFOR}}\xspace, hence nested pairing is not
allowed. This syntactic invariant is preserved by the rules handling
the branching instruction. Pair constructs are used to indicate where
commands, values, or expressions might be different in the two unary
executions represented by a single {\textsf{\textup{RPFOR}}}\xspace execution. To define the
semantics for {\textsf{\textup{RPFOR}}}\xspace, we first extend memories to allow program
variables to map to pairs of integers, and array variables to map to
pairs of arrays.
The set of expressions and commands in {\textsf{\textup{RPFOR}}}\xspace, $\mathcal{E}_{\textup{r}}, \mathcal{C}_{\textup{r}}$
are generated by the grammars:
\begin{align*}
\mathcal{E}_{\textup{r}}\ni e_r&::=v\mid e \mid \pair{e_1}{e_2} &\mathcal{C}_{\textup{r}}\ni c_r &::=\ass{x}{e_r}\mid \rass{x}{\lapp{e_{r}}{e_{r}}} \mid c \mid \pair{c_1}{c_2}
\end{align*}
where $v\in\mathcal{V}_{\textup{r}} , e, e_1,e_2 \in \mathcal{E}, c,c_1,c_2\in\mathcal{C}$.
Values can now also be pairs of unary values, that is, $\mathcal{V}_{\textup{r}}\equiv\mathcal{V} \cup \mathcal{V}^2$.
\subsection{{\textsf{\textup{RPFOR}}}\xspace semantics}
In what follows we will use the projection functions
$\proj{i}{\cdot}$ for $i\in\{1,2\}$, which project, respectively, the
first (left) and second (right) elements of a pair construct (i.e.,
$\proj{i}{\pair{c_1}{c_2}}=c_i$, $\proj{i}{\pair{e_1}{e_2}}=e_i$ with
$\proj{i}{v}=v$ when $v\in\mathcal{V}$), and are homomorphic for other
constructs. The semantics of expressions in {\textsf{\textup{RPFOR}}}\xspace is specified
through the following judgment $\reconf{m_1}{m_2}{e}{p_1}{p_2}\downarrow_{\textup{rc}}
\rvconf{v}{p'_1}{p'_2}$, where $m_1,m_2\in\mathcal{M}, p_1,p_2,
p'_1,p'_2\in P, e\in\mathcal{E}_{\textup{r}}, v\in\mathcal{V}_{\textup{r}}$.
Similarly, for commands, we have the following judgment
$\rconf{m_1}{m_2}{c}{p_1}{p_2}\rightarrow_{\textup{rc}}\rconf{m'_1}{m'_2}{c'}{p'_1}{p'_2}$.
Again, we use the predicate $\final{\cdot}$ for configurations
$\rconf{m_1}{m_2}{c}{p_1}{p_2}$ such that $c={\tt skip}$, and lift the
predicate to sets of configurations as well.
Intuitively a relational probabilistic concrete configuration
$\rconf{m_1}{m_2}{c}{p_1}{p_2}$ denotes a pair of probabilistic
concrete states, that is a pair of subdistributions over the space of
concrete memories. In Figure \ref{fig:rpfor-selrules} a selection of the rules defining
the judgments is presented. Most of the rules are quite natural.
Notice how branching instructions combine both probabilistic and relational
nondeterminism.
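As a rough illustration of how the projections behave, the following sketch encodes relational terms as tuples whose head tags the construct, e.g. `('pair', c1, c2)`; the encoding is ours, not the paper's:

```python
def proj(i, t):
    """Project the i-th component (i in {1, 2}) of a relational term.
    Pair constructs project to one side; unary values project to
    themselves; other constructs are handled homomorphically."""
    if isinstance(t, tuple) and len(t) == 3 and t[0] == 'pair':
        return t[i]
    if isinstance(t, tuple):
        # homomorphic case: recurse into subterms, keep tags/values
        return tuple(proj(i, s) if isinstance(s, tuple) else s for s in t)
    return t
```

For instance, projecting the second component of a sequence containing a pair of assignments keeps only the right-hand assignment.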
\begin{figure}
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{r-expr-1}]
{
\econf{m_1}{\proj{1}{e}}{p_1}\downarrow_{\textup{c}} \vconf{v_1}{p'_1} \\\\
\econf{m_2}{\proj{2}{e}}{p_2}\downarrow_{\textup{c}} \vconf{v_2}{p'_2} \\\\
v_1,v_2 \in \mathbb{Z} \and v_1=v_2
}
{\reconf{m_1}{m_2}{e}{p_1}{p_2}\downarrow_{\textup{rc}}\rvconf{v_1}{p'_1}{p'_2}}
\inferrule[\rulestyle{r-expr-2}]
{
\econf{m_1}{\proj{1}{e}}{p_1}\downarrow_{\textup{c}} \vconf{v_1}{p'_1} \\\\
\econf{m_2}{\proj{2}{e}}{p_2}\downarrow_{\textup{c}} \vconf{v_2}{p'_2} \\\\ (\exists i\in\{1,2\}. v_i\not\in\mathbb{Z} \vee v_1\neq v_2)
}
{\reconf{m_1}{m_2}{e}{p_1}{p_2}\downarrow_{\textup{rc}}\rvconf{\pair{v_1}{v_2}}{p'_1}{p'_2}}
%
\inferrule[\rulestyle{r-if-conc-conc-true-false}]
{
\reconf{m_1}{m_2}{e}{p_1}{p_2} \downarrow_{\textup{rc}}\rvconf{v}{p'_1}{p'_2}\\\\
\proj{1}{v},\proj{2}{v}\in\mathbb{Z} \and \proj{1}{v}>0 \and \proj{2}{v}\leq 0
}
{
\rconf{m_1}{m_2}{\ifte{e}{c_1}{c_2}}{p_1}{p_2}\rightarrow_{\textup{rc}} \\\\ \rconf{m_1}{m_2}{\pair{\proj{1}{c_1}}{\proj{2}{c_2}}}{p'_1}{p'_2}
}
\inferrule[\rulestyle{r-if-prob-prob-true-false}]
{
\reconf{m_1}{m_2}{e}{p_1}{p_2} \downarrow_{\textup{rc}}\rvconf{v}{p'_1}{p'_2}\\\\
\proj{1}{v},\proj{2}{v}\in\mathcal{X}_p
}
{
\rconf{m_1}{m_2}{\ifte{e}{c_1}{c_2}}{p_1}{p_2}\rightarrow_{\textup{rc}}\\\\
\rconf{m_1}{m_2}{\pair{\proj{1}{c_1}}{\proj{2}{c_2}}}{\proj{1}{v}>0@p'_1}{\proj{2}{v}\leq 0@p'_2}
}
\inferrule[\rulestyle{r-pair-step}]
{
\{i,j\}=\{1,2\}
\and
\conf{\proj{i}{m}}{c_{i}}{p_i}\rightarrow_{\textup{c}}\conf{m'_{i}}{c'_{i}}{p'_i}
\\\\
c'_{j}=c_{j}\and p'_{j}=p_{j} \and
m'_{j}= \proj{j}{m}
}
{
\rconf{m_1}{m_2}{\pair{c_{1}}{c_{2}}}{p_1}{p_2}\rightarrow_{\textup{rc}} \\\\ \rconf{m'_1}{m'_2}{\pair{c'_{1}}{c'_{2}}}{p'_1}{p'_2}
}
\end{mathpar}
}
\caption{{\textsf{\textup{RPFOR}}}\xspace selected rules}
\label{fig:rpfor-selrules}
\end{figure}
So, as in the case of \textsf{\textup{PFOR}}\xspace, we
collect sets of relational configurations using the judgment
$\mathscr{R} \Rightarrow_{\textup{rc}} \mathscr{R}'$
with $\mathscr{R},\mathscr{R}'\in\mathcal{P}(\mathcal{M}\times\mathcal{M}\times \mathcal{C}_{\textup{r}}\times P\ \times P)$,
defined by only one rule presented in Figure \ref{fig:rule_pconfs}: $\tiny{\rulestyle{SUB-PDISTR-STEP}}$.
\begin{figure}
\begin{mathpar}
\inferrule[\tiny{\rulestyle{SUB-PDISTR-STEP}}]
{
\rconf{m_1}{m_2}{c}{p_1}{p_2} \in \mathscr{R} \\\\
\mathscr{R}_t\equiv\{\rconf{m'_1}{m'_2}{c'}{p'_1}{p'_2}\\\\
\rconf{m_1}{m_2}{c}{p_1}{p_2} \rightarrow_{\textup{rc}} \rconf{m'_1}{m'_2}{c'}{p'_1}{p'_2} \}\\\\
\mathscr{R}'\equiv\bigg(\mathscr{R}\setminus\{\rconf{m_1}{m_2}{c}{p_1}{p_2}\}\bigg ) \cup \mathscr{R}_t
}
{
\mathscr{R} \Rightarrow_{\textup{rc}} \mathscr{R}'
}
\end{mathpar}
\caption{The only rule defining the $\Rightarrow_{\textup{rc}}$ relation.}
\label{fig:rule_pconfs}
\end{figure}
The rule nondeterministically picks one relational configuration from the set,
removes it, and adds all the configurations that are reachable from it
in one step.
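A sketch of this collecting step in Python (configurations are assumed hashable, and `step` is an assumed function returning the set of one-step successors of a configuration, empty for final ones):

```python
def collect_step(R, step):
    """One application of the SUB-PDISTR-STEP rule: pick a configuration
    (here simply the first one iteration yields), remove it from the
    set, and add all configurations reachable from it in one step."""
    cfg = next(iter(R))          # stands in for the nondeterministic choice
    return (R - {cfg}) | step(cfg)
```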
As mentioned before, a run of a program in {\textsf{\textup{RPFOR}}}\xspace corresponds to the
execution of two runs of the program in \textsf{\textup{PFOR}}\xspace. Before making this
precise we extend projection functions to relational configurations in
the following way:
$\proj{i}{\rconf{m_1}{m_2}{c}{p_1}{p_2}}=\conf{m_i}{c}{p_i}$, for
$i\in\{1,2\}$. Projection functions extend in the obvious way to
sets of relational configurations. We are now ready to state the
following lemma:
\begin{lemma}
Let $i\in\{1,2\}$ then $\mathscr{R}\Rightarrow_{\textup{rc}}^{*}\mathscr{R}'$ iff
$\proj{i}\mathscr{R}\Rightarrow_{\textup{c}}^{*}\proj{i}{\mathscr{R}'}$.
\end{lemma}
\else
{\textsf{\textup{RPFOR}}}\xspace
extends the grammar of \textsf{\textup{PFOR}}\xspace with a
pair constructor $\pair{\cdot}{\cdot}$, which can be used at the
level of values, expressions, or commands but which cannot be nested. Intuitively,
an execution of a single {\textsf{\textup{RPFOR}}}\xspace program represents the execution of two
\textsf{\textup{PFOR}}\xspace programs and the pair construct is used to indicate where
programs might be different in two unary
executions represented by a single {\textsf{\textup{RPFOR}}}\xspace execution.
Values in {\textsf{\textup{RPFOR}}}\xspace can now be pairs of \textsf{\textup{PFOR}}\xspace values.
The semantics of expressions in {\textsf{\textup{RPFOR}}}\xspace naturally extends that of \textsf{\textup{PFOR}}\xspace, where the judgment
$\reconf{m_1}{m_2}{e}{p_1}{p_2} \downarrow_{\textup{rc}} \rvconf{v}{p'_1}{p'_2}$ reads as: starting in the environments $m_1$ and $m_2$ with
probabilistic constraints $p_1$ and $p_2$, the expression $e$ reduces to
the value $v$ (which can be a pair of values) and probabilistic
constraints $p_1'$ and $p_2'$, respectively. Similarly, for commands,
we have the following judgment
$\rconf{m_1}{m_2}{c}{p_1}{p_2}\rightarrow_{\textup{rc}}\rconf{m'_1}{m'_2}{c'}{p'_1}{p'_2}$.
Again, we use the predicate $\final{\cdot}$ for configurations
$\rconf{m_1}{m_2}{c}{p_1}{p_2}$ such that $c={\tt skip}$, and lift the
predicate to sets of configurations as well. Intuitively a relational
probabilistic concrete configuration $\rconf{m_1}{m_2}{c}{p_1}{p_2}$
denotes a pair of probabilistic concrete states, that is a pair of
subdistributions over the space of concrete memories.
We also have the corresponding collecting semantics $\mathscr{R} \Rightarrow_{\textup{rc}} \mathscr{R}'$.
\fi
\ifnum\full=1
\section{Symbolic languages}
In this section we proceed to lift the concrete languages, mentioned in Section
\ref{sec:conc_lang}, to their
symbolic versions (respectively, {\textsf{\textup{SPFOR}}}\xspace and {\textsf{\textup{SRPFOR}}}\xspace).
As is standard in the symbolic execution literature,
the first step is to extend the set of values of the concrete
languages with symbolic values $X\in\mathcal{X}$; after that, the rules
of the symbolic execution semantics are defined. We start off by doing this for the
language {\textsf{\textup{PFOR}}}\xspace.
\subsection{{\textsf{\textup{SPFOR}}}\xspace: Syntax}
\begin{wrapfigure}{L}{0.6\textwidth}
\begin{minipage}{0.55\textwidth}
\vspace{-0.8cm}
\fbox{
\[
\begin{array}{rcl}
\mathcal{E}_{\textup{s}} \ni e::= &v\mid x\mid X\mid \arracc{a}{e}\mid \len{a}\mid e \oplus e\\
\mathcal{C}_{\textup{s}}\ni c ::= &{\tt skip}\mid \seq{c}{c}\mid \ass{x}{e}\mid \ass{\arracc{a}{e}}{e}\mid\\
&\rass{x}{\lapp{e}{e}}\mid \ifte{e}{c}{c}\mid\\
&\cfor{x}{e}{e}{c}
\end{array}
\]
}
\end{minipage}
\caption[{\textsf{\textup{SPFOR}}}\xspace syntax]{{\textsf{\textup{SPFOR}}}\xspace syntax. $X\in\mathcal{X}$}
\label{fig:splang-syntax}
\end{wrapfigure}
We now extend {\textsf{\textup{PFOR}}}\xspace expressions with symbolic values
$X\in\mathcal{X}$. Syntax of {\textsf{\textup{SPFOR}}}\xspace is presented in Figure
\ref{fig:splang-syntax}.
As we can see, the syntax is similar to that of {\textsf{\textup{PFOR}}}\xspace except that the
set of expressions has been extended with symbolic values from
$\mathcal{X}$ denoting integers. We assume $\mathcal{X}_p\cap
\mathcal{X}=\emptyset$. This assumption reflects the idea that symbolic
values in $\mathcal{X}$ do not denote unknown probability
distributions, or sets thereof, but only unknown sets of integers. The set of
values is now ${\ensuremath{\textup{\textbf{V}}}_{\textup{s}}}\equiv\pvalset\cup\mathcal{X}$; we will also
need the set ${\ensuremath{\textup{\textbf{V}}}_{\textup{is}}}\equiv\mathbb{Z}\cup\mathcal{X}$. Notice that
symbolic values may appear in probabilistic expressions.
\subsection{{\textsf{\textup{SPFOR}}}\xspace: Semantics of expressions}
In order to collect constraints on symbolic values we extend
configurations with a set of constraints over integer values, drawn from
the set $\mathcal{S}$, not to be confused with probabilistic path
constraints. The former express constraints over integer values, for
instance parameters of the distributions. The grammar of constraints
over integers and array values is presented in Figure \ref{fig:splang-constraints-grammar}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\[
\begin{array}{rcl}
\cstrset_{e}\ni e &::=& n \ \mid X \mid i\mid e \oplus e \mid\store{e}{e}{e}\mid \\ && \select{e}{e} \mid | e |\\
\mathcal{S}\ni s &::=& \top\mid e\circ e\ \mid s \wedge s\mid \neg s\mid\forall i.s
\end{array}
\]
\caption[Grammar of constraints]{Grammar of constraints. $X\in\mathcal{X}, n\in\textup{\textbf{V}}$.}
\label{fig:splang-constraints-grammar}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\[
\begin{array}{rcl}
sra&::=&\rass{Y}{\lapp{c_e}{c_e}}\\
sre&::=&n\mid X\mid Y\mid sre\oplus sre\\
SP&::=&Y=sre\mid sre>0\mid sre\leq 0\mid sra
\end{array}
\]
\caption[Grammar of symbolic probabilistic constraints]{Grammar of symbolic probabilistic constraints. $c_e\in\mathcal{S}, X\in\mathcal{X}, Y\in\mathcal{X}_p$}
\label{fig:probcstr_syntax}
\end{subfigure}
\end{figure}
In particular constraint expressions include standard arithmetic
expressions with values being symbolic or integer constants, and array
selection. Actual constraints include first-order logic formulas over
arithmetic expressions. Finally, probabilistic path constraints now
can also contain symbolic integer values and hence the grammar gets
updated to what is shown in Figure \ref{fig:probcstr_syntax}.
What changes with respect to Figure \ref{fig:pconstraints-syntax} is
that now arithmetic expressions can also include symbolic integer
values. Indeed, also probabilistic path constraints now can be
symbolic. Since values have been extended, memories, now
properly symbolic, change their type to:
${\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\equiv\mathbb{V}\rightarrow{\ensuremath{\textup{\textbf{V}}}_{\textup{s}}} \cup\mathbb{A}\rightarrow \textbf{Array}_{\textup{s}}$,
where $\textbf{Array}_{\textup{s}}\equiv\{(X,v)\mid X\in \mathcal{X}, v\in{\ensuremath{\textup{\textbf{V}}}_{\textup{is}}}\}$. In particular, we
represent arrays in memory as pairs $(X,v)$, where $v$ is a (concrete
or symbolic) integer value representing the length of the array, and
$X$ is a symbolic value representing the array contents. The content
of the arrays is kept and refined in the set of constraints by means
of $\select{\cdot}{\cdot}$ and $\store{\cdot}{\cdot}{\cdot}$ relation
symbols. Evaluation judgments, both for expressions and commands,
will also include a set of constraints over integers. This is because
constraints can also be generated during evaluation of expressions.
We can now proceed with the semantics of expressions.
\begin{figure}
\vspace{-0.7cm}
\centering
\fbox{
$\speconf{m}{e}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{v}{p'}{s'}$
}
\begin{minipage}{0.7\textwidth}
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{S-P-Op-2}]
{
\speconf{m}{e_1}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'}{s'} \\\\
\speconf{m}{e_2}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p''}{s''}\\\\
v_1,v_2\in \mathcal{X}_p \and \fresh{X}{\mathcal{X}_p}
}
{\speconf{m}{e_1\oplus e_2}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{X}{p''@[X=v_1\oplus v_2]}{s''}}
\inferrule[\rulestyle{S-P-Op-5}]
{
\speconf{m}{e_1}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'}{s'} \\\\
\speconf{m}{e_2}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p''}{s''} \\\\
\{i,j\}=\{1,2\}\and v_i\in\mathbb{Z} \and v_j\in\mathcal{X} \and \fresh{X}{\mathcal{X}}
}
{\speconf{m}{e_1\oplus e_2}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{X}{p''}{s''\cup\{X=v_1\oplus v_2\}}}
\inferrule[\rulestyle{S-P-Op-6}]
{
\speconf{m}{e_1}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'}{s'} \\\\
\speconf{m}{e_2}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p''}{s''} \\\\
\{i,j\}=\{1,2\}\and v_i\in\mathcal{X} \and v_j\in\mathcal{X}_p \and \fresh{X}{\mathcal{X}_p}
}
{\speconf{m}{e_1\oplus e_2}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{X}{p''@[X=v_1\oplus v_2]}{s''}}
\end{mathpar}
}
\end{minipage}
\caption[{\textsf{\textup{SPFOR}}}\xspace: Semantics of expressions]{{\textsf{\textup{SPFOR}}}\xspace: Semantics of expressions, selected rules.}
\label{fig:sem-splang-expr}
\end{figure}
Figure \ref{fig:sem-splang-expr} shows the judgment form.
The judgment is then inductively defined.
We only show a few rules for this judgment and briefly describe them.
Rule \rulestyle{S-P-Op-2} applies
when both operands of an arithmetic operation reduce
to elements of $\mathcal{X}_p$; it appropriately updates
the list of probabilistic constraints.
Rule \rulestyle{S-P-Op-5} instead fires when one of them is an integer
and the other is a symbolic value. In this case only the
set of symbolic constraints needs to be updated.
Finally, in rule $\rulestyle{S-P-Op-6}$ one of the operands
reduces to an element in $\mathcal{X}_p$ and the other to an element in
$\mathcal{X}$. We only update the list of probabilistic constraints
appropriately, as integer constraints cannot contain symbols in
$\mathcal{X}_p$.
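These three cases can be summarized by the following sketch (our own simplification, not the formal rules; `p` is the probabilistic constraint list, `s` the integer constraint set, and `fresh` is an assumed fresh-symbol generator):

```python
import itertools

_counter = itertools.count()

def fresh(prefix):
    """Assumed fresh-symbol generator; real symbol handling differs."""
    return f"{prefix}{next(_counter)}"

def eval_op(op, v1, v2, p, s, prob_syms):
    """Combine two evaluated operands v1, v2. If either is a
    probabilistic symbol, record X = v1 op v2 on the probabilistic
    constraint list p (as in S-P-Op-2/6); otherwise record it in the
    integer constraint set s (as in S-P-Op-5)."""
    if v1 in prob_syms or v2 in prob_syms:
        X = fresh('Y')                      # fresh symbol in X_p
        return X, p + [(X, op, v1, v2)], s
    X = fresh('X')                          # fresh symbol in X
    return X, p, s | {(X, op, v1, v2)}
```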
\else
\section{Symbolic languages}
In this section we lift the concrete languages, mentioned in Section
\ref{sec:conc_lang}, to their
symbolic versions (respectively, {\textsf{\textup{SPFOR}}}\xspace and {\textsf{\textup{SRPFOR}}}\xspace) by extending them with symbolic values $X\in\mathcal{X}$.
\paragraph{{\textsf{\textup{SPFOR}}}\xspace}
expressions extend \textsf{\textup{PFOR}}\xspace expressions with symbolic values
$X\in\mathcal{X}$:
$\mathcal{E}_{\textup{s}} \ni e::= v\mid x\mid X\mid \arracc{a}{e}\mid \len{a}\mid e \oplus e$.
We assume $\mathcal{X}_p\cap
\mathcal{X}=\emptyset$; this is because we want symbolic
values in $\mathcal{X}$ to denote only unknown sets of integers, rather than sets of probability
distributions.
Commands in {\textsf{\textup{SPFOR}}}\xspace are the same as in {\textsf{\textup{PFOR}}}\xspace but now
symbolic values can appear in probabilistic expressions.
In order to collect constraints on symbolic values we extend
configurations with a set of constraints over integer values, drawn from
the set $\mathcal{S}$ (Figure~\ref{fig:splang-constraints-grammar}), not to be confused with probabilistic path
constraints (Figure~\ref{fig:probcstr_syntax}). The former express constraints over integer values, for
instance parameters of the distributions.
\begin{figure*}
\vspace{-0.5cm}
\centering
\fbox{
\begin{minipage}{0.95\textwidth}
\begin{subfigure}{0.5\textwidth}
\[
\begin{array}{rcl}
\cstrset_{e}\ni e &::=& n \ \mid X \mid i\mid e \oplus e \mid\store{e}{e}{e}\mid \\ && \select{e}{e} \mid | e |\\
\mathcal{S}\ni s &::=& \top\mid e\circ e\ \mid s \wedge s\mid \neg s\mid\forall i.s\\[-3mm]
\end{array}
\]
\caption[Symbolic constraints]{Symbolic constraints. $X\in\mathcal{X}, n\in\textup{\textbf{V}}$.}
\label{fig:splang-constraints-grammar}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\[
\begin{array}{rcl}
sra&::=&\rass{Y}{\lapp{c_e}{c_e}}\\
sre&::=&n\mid X\mid Y\mid sre\oplus sre\\
SP&::=&Y=sre\mid sre>0\mid sre\leq 0\mid sra\\[-3mm]
\end{array}
\]
\caption[Grammar of symbolic probabilistic constraints]{Prob. constraints. $c_e\in\mathcal{S}, X\in\mathcal{X}, Y\in\mathcal{X}_p$}
\label{fig:probcstr_syntax}
\end{subfigure}
\end{minipage}
}
\caption{Grammar of constraints}
\end{figure*}
In particular, constraint expressions include standard arithmetic
expressions with values being symbolic or integer constants, and array
selection. Probabilistic path constraints can now
also contain symbolic integer values, and hence can themselves be
symbolic.
Memories can now contain symbolic values and we
represent arrays in memory as pairs $(X,v)$, where $v$ is a (concrete
or symbolic) integer value representing the length of the array, and
$X$ is a symbolic value representing the array content. The content
of the arrays is kept and refined in the set of constraints by means
of the $\select{\cdot}{\cdot}$ and $\store{\cdot}{\cdot}{\cdot}$ operations.
The semantics of expressions is captured by the judgment
$\speconf{m}{e}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{v}{p'}{s'}$
including now a set of constraints over integers.
\begin{figure*}
\vspace{-0.5cm}
\centering
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{S-P-Op-2}]
{
\speconf{m}{e_1}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'}{s'} \and v_1,v_2\in \mathcal{X}_p\\\\
\speconf{m}{e_2}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p''}{s''}
\and \fresh{X}{\mathcal{X}_p}
}
{\speconf{m}{e_1\oplus e_2}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{X}{p''@[X=v_1\oplus v_2]}{s''}}
\and
\inferrule[\rulestyle{S-P-Op-5}]
{
\speconf{m}{e_1}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'}{s'}\quad \{i,j\}=\{1,2\},\,v_i\in\mathbb{Z} \\\\
\speconf{m}{e_2}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p''}{s''} \quad
v_j\in\mathcal{X},\, \fresh{X}{\mathcal{X}}
}
{\speconf{m}{e_1\oplus e_2}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{X}{p''}{s''\cup\{X=v_1\oplus v_2\}}}
\and
\inferrule[\rulestyle{S-P-Op-6}]
{
\speconf{m}{e_1}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'}{s'} \\\\
\speconf{m}{e_2}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p''}{s''} \\\\
\{i,j\}=\{1,2\}\and v_i\in\mathcal{X} \and v_j\in\mathcal{X}_p \and \fresh{X}{\mathcal{X}_p}
}
{\speconf{m}{e_1\oplus e_2}{p}{s}\downarrow_{\textup{\tiny{SP}}} \spval{X}{p''@[X=v_1\oplus v_2]}{s''}}
\end{mathpar}
}
\caption[{\textsf{\textup{SPFOR}}}\xspace: Semantics of expressions]{{\textsf{\textup{SPFOR}}}\xspace: Semantics of expressions, selected rules.}
\label{fig:sem-splang-expr}
\end{figure*}
Figure \ref{fig:sem-splang-expr} gives a few rules for this judgment, which we briefly describe.
Rule \rulestyle{S-P-Op-2} applies
when both operands of an arithmetic operation reduce
to elements of $\mathcal{X}_p$; it appropriately updates
the list of probabilistic constraints.
Rule \rulestyle{S-P-Op-5} instead fires when one of them is an integer
and the other is a symbolic value. In this case only the
set of symbolic constraints needs to be updated.
\fi
\ifnum\full=1
\subsection{{\textsf{\textup{SPFOR}}}\xspace: Semantics of commands}
We can now formalize the semantics of commands of {\textsf{\textup{SPFOR}}}\xspace.
Again, we provide a selection of the rules of the small step semantics in Figure \ref{fig:sem-splang-cmd}.
Rule \rulestyle{S-P-If-sym-true} fires when a branching instruction is
to be executed and the guard reduces to either an integer or a
value in $\mathcal{X}$. In this case we can proceed with the true
branch, recording in the set of integer constraints the fact that the
guard is greater than $0$. Notice that if the guard is an integer that is
actually less than or equal to $0$, then there will never be a ground
substitution for that set of constraints, and hence this is not
unsound.
\begin{figure}
\centering
\fbox{
$\spcconf{m}{c}{p}{s}\rightarrow_{\textup{\tiny{SP}}} \spcconf{m'}{c'}{p'}{s'}$
}
\begin{mathpar}
\inferrule[\rulestyle{S-P-If-sym-true}]
{
\speconf{m}{e}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v}{p'}{s'} \\\\ v\in{\ensuremath{\textup{\textbf{V}}}_{\textup{is}}}
}
{ \spcconf{m}{\ifte{e}{c_\mathit{tt}}{c_\mathit{ff}}}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \\\\\spcconf{m}{c_\mathit{tt}}{p'}{s'\cup\{v>0\}} }
\inferrule[\rulestyle{S-P-If-prob-false}]
{
\speconf{m}{e}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v}{p'}{s'} \\\\ v\in\mathcal{X}_p
}
{ \spcconf{m}{\ifte{e}{c_\mathit{tt}}{c_\mathit{ff}}}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \\\\ \spcconf{m}{c_\mathit{ff}}{p'@[v\leq 0]}{s'} }
\inferrule[\rulestyle{S-P-Lap-Ass}]
{
\speconf{m}{e_a}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_a}{p'}{s'} \\\\
\speconf{m}{e_b}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_b}{p''}{s''} \\\\
\fresh{X}{\mathcal{X}_p} \and v_a,v_b\in{\ensuremath{\textup{\textbf{V}}}_{\textup{is}}} \\\\
s'''=s''\cup\{v_b>0\} \and p'''=p''@[\rass{X}{\lapp{v_a}{v_b}}] \\\\
m'\equiv m[x\mapsto X]
}
{ \spcconf{m}{\rass{x}{\lapp{e_a}{e_b}}}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \\\\ \spcconf{m'}{{\tt skip}}{p'''}{s'''} }
\end{mathpar}
\caption{Semantics of {\textsf{\textup{SPFOR}}}\xspace (selected rules)}
\label{fig:sem-splang-cmd}
\end{figure}
In that case, the rule \rulestyle{S-P-If-sym-false}, shown in the
appendix, would instead lead to a satisfiable
constraint. Rule \rulestyle{S-P-If-prob-false} handles a branching
instruction which has a guard reducing to a value in $\mathcal{X}_p$. In this
case we can proceed in both branches, even though here we only show
one of the two rules, by recording the conditioning fact on the list
of probabilistic constraints. Finally, rule \rulestyle{S-P-Lap-Ass} handles
probabilistic assignment. After having reduced both the expression for the mean
and the expression for the scale to values, we check that both are
either integers or symbolic integers; if that is the case, we make sure
that the scale is greater than $0$ and we add a probabilistic constraint
recording the fact that the modified variable now points to a probabilistic
symbolic value related to a Laplace distribution.
Notice that, again, we do not handle situations where the expression for
the mean or the expression for the scale reduces to a probabilistic
symbolic value.
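The effect of \rulestyle{S-P-Lap-Ass} can be sketched as follows (a simplification in Python; the names and data representation are ours, and `fresh` is an assumed generator of fresh probabilistic symbols):

```python
def lap_assign(x, v_mean, v_scale, m, p, s, fresh):
    """Probabilistic assignment x := Lap(v_mean, v_scale): require the
    scale to be positive, append the sampling event to the probabilistic
    constraint list, and bind x to a fresh probabilistic symbol X."""
    X = fresh()
    s_new = s | {(v_scale, '>', 0)}            # scale must be positive
    p_new = p + [('lap', X, v_mean, v_scale)]  # X sampled from Laplace
    m_new = dict(m, **{x: X})
    return m_new, p_new, s_new
```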
\else
The semantics of commands of {\textsf{\textup{SPFOR}}}\xspace is described by small step semantics
judgments of the form: $\spcconf{m}{c}{p}{s}\rightarrow_{\textup{\tiny{SP}}} \spcconf{m'}{c'}{p'}{s'}$,
including a set of constraints over integers.
We provide a selection of the rules in Figure \ref{fig:sem-splang-cmd}.
Rule \rulestyle{S-P-If-sym-true} fires when a branching instruction is
to be executed and the guard is reduced to either an integer or a
value in $\mathcal{X}$. In this case we can proceed with the true
branch recording in the set of integer constraints the fact that the
guard is greater than $0$.
\begin{figure*}
\begin{minipage}{0.95\textwidth}
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{S-P-If-sym-true}]
{
\speconf{m}{e}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v}{p'}{s'} \and v\in{\ensuremath{\textup{\textbf{V}}}_{\textup{is}}}
}
{ \spcconf{m}{\ifte{e}{c_\mathit{tt}}{c_\mathit{ff}}}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \\\\\spcconf{m}{c_\mathit{tt}}{p'}{s'\cup\{v>0\}} }
\inferrule[\rulestyle{S-P-If-prob-false}]
{
\speconf{m}{e}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v}{p'}{s'} \and v\in\mathcal{X}_p
}
{ \spcconf{m}{\ifte{e}{c_\mathit{tt}}{c_\mathit{ff}}}{p}{s} \rightarrow_{\textup{\tiny{SP}}}\\\\ \spcconf{m}{c_\mathit{ff}}{p'@[v\leq 0]}{s'} }
\inferrule[\rulestyle{S-P-Lap-Ass}]
{
\speconf{m}{e_a}{p}{s} \downarrow_{\textup{\tiny{SP}}} \spval{v_a}{p'}{s'} \and
\speconf{m}{e_b}{p'}{s'} \downarrow_{\textup{\tiny{SP}}} \spval{v_b}{p''}{s''} \and
\fresh{X}{\mathcal{X}_p} \\\\ v_a,v_b\in{\ensuremath{\textup{\textbf{V}}}_{\textup{is}}} \and
s'''=s''\cup\{v_b>0\} \and p'''=p''@[\rass{X}{\lapp{v_a}{v_b}}] }
{ \spcconf{m}{\rass{x}{\lapp{e_a}{e_b}}}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \spcconf{ m[x\mapsto X]}{{\tt skip}}{p'''}{s'''} }
\end{mathpar}
}
\end{minipage}
\caption{{\textsf{\textup{SPFOR}}}\xspace: Semantics of commands (selected rules)}
\label{fig:sem-splang-cmd}
\end{figure*}
Rule \rulestyle{S-P-If-prob-false} handles a branching
instruction which has a guard reducing to a value in $\mathcal{X}_p$. In this
case we can proceed in both branches, even though here we only show
one of the two rules, by recording the conditioning fact on the list
of probabilistic constraints. Finally, rule \rulestyle{S-P-Lap-Ass} handles
probabilistic assignment. After having reduced both the expression for the mean
and the expression for the scale to values, we check that both are
either integers or symbolic integers; if that is the case, we make sure
that the scale is greater than $0$ and we add a probabilistic constraint
recording the fact that the modified variable now points to a probabilistic
symbolic value related to a Laplace distribution.
\fi
\ifnum\full=1
\subsection{{\textsf{\textup{SPFOR}}}\xspace: Collecting semantics}
The semantics of {\textsf{\textup{SPFOR}}}\xspace introduces two levels of nondeterminism: the
first one is given by branching instructions whose guard reduces to a
symbolic value; the second one comes from branching instructions whose
guard reduces to a probabilistic symbolic value. The collecting
semantics of {\textsf{\textup{SPFOR}}}\xspace, specified by judgments of the form $
\mathscr{H} \Rightarrow_{\textup{sp}} \mathscr{H}' $ (where $
\mathscr{H},
\mathscr{H}'\in\mathcal{P}({\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times\textup{\textbf{SPForCmd}}\xspace\times SP\times
\mathcal{S})$), whose only rule is given in Figure
\ref{fig:sem-splang-collecting}, takes care of both of them. Unlike in
the deterministic case of the rule $\rulestyle{Set-Step}$, where only
one configuration was chosen nondeterministically from the initial
set, here we nondeterministically select a (maximal) set of
configurations all sharing the same symbolic constraints.
\begin{figure}
\begin{mathpar}
\inferrule[\rulestyle{s-p-collect}]
{
\mathscr{D}_{[s]}\subseteq \mathscr{H} \\\\
\mathscr{H}'\equiv\{\spcconf{m'}{c'}{p'}{s'} \mid \exists \spcconf{m}{c}{p}{s}\in\mathscr{D}_{[s]} \text{ s.t. }
\spcconf{m}{c}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \spcconf{m'}{c'}{p'}{s'} \land \sat{s'} \}}
{\mathscr{H}\Rightarrow_{\textup{sp}} \bigg(\mathscr{H}\setminus \mathscr{D}_{[s]} \bigg)\cup \mathscr{H}' }
\end{mathpar}
\caption{Rule for $\Rightarrow_{\textup{sp}}$ }
\label{fig:sem-splang-collecting}
\end{figure}
The notation $\mathscr{D}_{[s]}\subseteq \mathscr{H}$ means that
$\mathscr{D}_{[s]}$ is the maximal subset of configurations in
$\mathscr{H}$ which have $s$ as their set of constraints, that is,
$\mathscr{D}_{[s]}\equiv \{\spcconf{m}{c}{p}{s}\mid
\spcconf{m}{c}{p}{s}\in\mathscr{H}\}$.
Again, we extend the notation and use $\mathscr{H}\spsetstepby{\mathscr{D}_{[s]}}\mathscr{H}'$
when we want to make explicit the set of symbolic configurations, $\mathscr{D}_{[s]}$,
that we are using to make the step.
So \rulestyle{s-p-collect} starts from a set of configurations
and reaches all of those that are reachable from it. By reachable we
mean that they have a satisfiable set of constraints and are
reachable from one of the original configurations with a single step of the
symbolic semantics.
Similarly to the deterministic case, the following coverage lemma connects {\textsf{\textup{PFOR}}}\xspace with {\textsf{\textup{SPFOR}}}\xspace.
The difference is in the use of sets of configurations instead of single configurations.
Notice that a set of constraints can contain constraints involving probabilistic symbols,
for instance when the $i$-th element of an array is associated with a random expression.
The predicate $\sat{\cdot}$ does not take into account relations involving probabilistic symbolic
constraints, but only relations involving symbolic values denoting integers.
\begin{lemma}[Probabilistic Unary Coverage]
\label{lem:plang-coverage}
If $\mathscr{H}\spsetstepby{\mathscr{D}_{[s]}} \mathscr{H}'$ and $\sigma\models_{\mathcal{I}} \mathscr{D}_{[s]}$ then
$\exists \sigma', \mathscr{D}_{[s']}\subseteq \mathscr{H}'$ such that $\sigma'\models_{\mathcal{I}} \mathscr{D}_{[s']}$, and
$\sub{\mathscr{D}_{[s]}}{\sigma} \psetstep^{*}\sub{\mathscr{D}_{[s']}}{\sigma'}$.
\end{lemma}
Intuitively, Lemma \ref{lem:plang-coverage} ensures that
a concrete execution is covered by a symbolic one.
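The following Python sketch mirrors how \rulestyle{s-p-collect} removes the selected subset $\mathscr{D}_{[s]}$ and replaces it with its satisfiable successors; the tuple encoding of configurations and the toy \texttt{step}/\texttt{sat} functions are hypothetical:

```python
# Illustrative sketch of s-p-collect: configurations are tuples
# (memory, command, prob_constraints, constraint_set); `step` enumerates
# one-step successors, `sat` checks satisfiability of a constraint set.

def collect_step(H, s, step, sat):
    D_s = {cfg for cfg in H if cfg[3] == s}               # maximal subset with constraints s
    H_new = {succ for cfg in D_s for succ in step(cfg) if sat(succ[3])}
    return (H - D_s) | H_new

# A branching configuration forks into two successors; pretend the
# false branch yields an unsatisfiable constraint set and is pruned.
def demo_step(cfg):
    m, c, p, s = cfg
    return [(m, 'skip', p, s | {'X > 0'}), (m, 'skip', p, s | {'X <= 0'})]

def demo_sat(s):
    return 'X <= 0' not in s

H = {('m0', 'if X then skip else skip', 'p0', frozenset())}
H2 = collect_step(H, frozenset(), demo_step, demo_sat)
```

Only the successor with a satisfiable constraint set survives in \texttt{H2}, matching the $\sat{s'}$ side condition of the rule.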
\else
The semantics of {\textsf{\textup{SPFOR}}}\xspace has two sources of nondeterminism: guards which reduce to
symbolic values, and guards which reduce to probabilistic symbolic values. The collecting
semantics of {\textsf{\textup{SPFOR}}}\xspace, specified by judgments of the form $
\mathscr{H} \Rightarrow_{\textup{sp}} \mathscr{H}'$ (for sets of configurations $\mathscr{H}$ and $\mathscr{H}'$), takes care of both.
The rule for this judgment form is:
\begin{mathpar}
\inferrule[\rulestyle{s-p-collect}]
{
\mathscr{D}_{[s]}\subseteq \mathscr{H} \\\\
\mathscr{H}'\equiv\{\spcconf{m'}{c'}{p'}{s'} \mid \exists \spcconf{m}{c}{p}{s}\in\mathscr{D}_{[s]} \text{ s.t. }
\spcconf{m}{c}{p}{s} \rightarrow_{\textup{\tiny{SP}}} \spcconf{m'}{c'}{p'}{s'} \\\\\land \sat{s'} \}}
{\mathscr{H}\Rightarrow_{\textup{sp}} \big(\mathscr{H}\setminus \mathscr{D}_{[s]} \big)\cup \mathscr{H}' }
\end{mathpar}
Unlike in
the deterministic case of the rule $\rulestyle{Set-Step}$, where only
one configuration was chosen nondeterministically from the initial
set, here we select nondeterministically a (maximal) set of
configurations all sharing the same symbolic constraints.
The notation $\mathscr{D}_{[s]}\subseteq \mathscr{H}$ means that
$\mathscr{D}_{[s]}$ is the maximal subset of configurations in
$\mathscr{H}$ which have $s$ as their set of constraints.
We use $\mathscr{H}\spsetstepby{\mathscr{D}_{[s]}}\mathscr{H}'$
when we want to make explicit the set of symbolic configurations, $\mathscr{D}_{[s]}$,
that we are using to make the step.
Intuitively, \rulestyle{s-p-collect} starts from a set of configurations
and reaches all of those that are reachable from it: all the configurations that have a satisfiable set of constraints and are
reachable from one of the original configurations with a single step of the
symbolic semantics.
Notice that a set of constraints can contain constraints involving probabilistic symbols, e.g.\ when the $i$-th element of an array is associated with a random expression. Nevertheless, the predicate $\sat{\cdot}$ does not need to take into account relations involving probabilistic symbolic
constraints, but only relations involving symbolic values denoting integers.
The following coverage lemma connects {\textsf{\textup{PFOR}}}\xspace with {\textsf{\textup{SPFOR}}}\xspace, ensuring that
a concrete execution is covered by a symbolic one.
\begin{lemma}[Probabilistic Unary Coverage]
\label{lem:plang-coverage}
If $\mathscr{H}\spsetstepby{\mathscr{D}_{[s]}} \mathscr{H}'$ and $\sigma\models_{\mathcal{I}} \mathscr{D}_{[s]}$ then
$\exists \sigma', \mathscr{D}_{[s']}\subseteq \mathscr{H}'$ such that $\sigma'\models_{\mathcal{I}} \mathscr{D}_{[s']}$, and
$\sub{\mathscr{D}_{[s]}}{\sigma} \psetstep^{*}\sub{\mathscr{D}_{[s']}}{\sigma'}$.
\end{lemma}
\fi
\ifnum\full=1
\subsection{{\textsf{\textup{SRPFOR}}}\xspace}
\label{sec:rsplang}
We have finally arrived at the last rung of this ladder of languages. The
language presented in this section is the symbolic extension of
the concrete language {\textsf{\textup{RPFOR}}}\xspace. It can also be seen as the relational
extension of {\textsf{\textup{SPFOR}}}\xspace. The key part of this language's semantics is
the handling of the probabilistic assignment, for which we
provide two rules instead of one. The first is the obvious one,
which carries out a standard symbolic probabilistic assignment. The
second implements a coupling semantics on the basis of Section
\ref{sec:prelim}.
We start off by providing the syntax of the language.
\subsection{{\textsf{\textup{SRPFOR}}}\xspace: Syntax}
\begin{wrapfigure}[9]{L}{0.55\textwidth}
\begin{minipage}{0.53\textwidth}
\vspace{-0.8cm}
\fbox{
\[
\begin{array}{rcl}
\mathcal{E}_{\textup{rs}}\xspace \ni e_{sr} ::= &e_{s} \mid \pair{e_{s}}{e_{s}} \mid e_{sr} \oplus e_{sr} \mid \arracc{a}{e_{sr}}\\
\mathcal{C}_{\textup{rs}}\xspace \ni c_{sr} ::= &c_{s} \mid \pair{c_{s}}{c_{s}} \mid \seq{c_{sr}}{c_{sr}} \mid \ass{x}{e_{sr}} \mid \\
&\ass{\arracc{a}{e_{sr}}}{e_{sr}}\mid \rass{x}{\lapp{e_{sr}}{e_{s}}}\mid \\
&\ifte{e_{sr}}{c_{sr}}{c_{sr}} \mid\\
& \cfor{x}{e_{sr}}{e_{sr}}{c_{sr}}
\end{array}
\]
}
\caption{{\textsf{\textup{SRPFOR}}}\xspace syntax. $e_{s}\in\mathcal{E}_{\textup{s}}, c_{s}\in\mathcal{C}_{\textup{s}}$.}
\label{fig:srplang-syntax}
\end{minipage}
\end{wrapfigure}
Figure \ref{fig:srplang-syntax} shows the syntax of the language
{\textsf{\textup{SRPFOR}}}\xspace. We extended the language ${\textsf{\textup{SPFOR}}}\xspace$ with the pairing
construct, both at the level of expressions and commands. Importantly,
only unary symbolic expressions and commands are admitted in the
pairing construct. This invariant is maintained during branching by
projection functions. The projection functions $\proj{i}{\cdot}$ for $i\in\{1,2\}$
extend to relational symbolic expressions and commands in the following way:
$\proj{i}{\pair{e_{1}}{e_{2}}}=e_{i}$, $\proj{i}{\pair{c_{1}}{c_{2}}}=c_{i}$,
and $\proj{i}{v}=v$ for $v\in{\ensuremath{\textup{\textbf{V}}}_{\textup{s}}}$; on the other constructs the projection
functions behave homomorphically.
\subsection{{\textsf{\textup{SRPFOR}}}\xspace: Semantics of expressions}
\begin{wrapfigure}[11]{L}{0.65\textwidth}
\centering
\fbox{
$\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s}\downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s'}$
}
\begin{minipage}{0.6\textwidth}
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{S-R-P-Lift}]
{\speconf{m_1}{\proj{1}{e}}{p_1}{s}\downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'_1}{s'} \\\\
\speconf{m_2}{\proj{2}{e}}{p_2}{s'}\downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p'_2}{s''} \\\\
v=\left \{
\begin{array}{rcl}
(v_1,v_2) && \text{if}\ (v_i\not\in\mathbb{Z}, i\in\{1,2\})\vee v_1\neq v_2 \cr
v_1 && \text{otherwise}
\end{array}
\right .\
}
{\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s''}}
\end{mathpar}
}
\end{minipage}
\caption{{\textsf{\textup{SRPFOR}}}\xspace: Semantics of expressions.}
\label{fig:srplang-sem-expr}
\end{wrapfigure}
As usual, we provide a big-step evaluation semantics for expressions.
The judgment form and the rule defining the judgment
are provided in Figure \ref{fig:srplang-sem-expr}. The set of values is now
${\ensuremath{\textup{\textbf{V}}}_{\textup{srp}}}\equiv {\ensuremath{\textup{\textbf{V}}}_{\textup{s}}}\cup{\ensuremath{\textup{\textbf{V}}}_{\textup{s}}}^2$.
The only rule defining the judgment $\downarrow_{\textup{\tiny{SRP}}}$ is
\rulestyle{S-R-P-Lift}. It first projects the symbolic relational expression
on the left and evaluates it to a unary symbolic value,
potentially updating the probabilistic symbolic constraints and the
symbolic constraints. It then does the same with the projection of the expression
on the right, but starting from the potentially updated
constraints. The returned value is unary only
when both evaluations return equal integers; in all
other cases an element of ${\ensuremath{\textup{\textbf{V}}}_{\textup{s}}}^2$ is returned. The
relational symbolic semantics thus leverages the unary semantics.
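The value-merging step of \rulestyle{S-R-P-Lift} can be sketched as follows, representing concrete integers as Python ints and symbolic values as strings (an encoding of our own choosing):

```python
# Merge the two unary results: collapse to a single unary value only
# when both are the same concrete integer; otherwise keep the pair,
# i.e. an element of V_s^2.

def merge(v1, v2):
    if isinstance(v1, int) and isinstance(v2, int) and v1 == v2:
        return v1          # equal integers: stay unary
    return (v1, v2)        # symbolic or differing values: relational pair
```

Equal concrete results stay unary, so later relational reasoning only pays for the positions where the two runs can actually differ.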
\else
\paragraph{{\textsf{\textup{SRPFOR}}}\xspace}
\label{sec:rsplang}
The
language presented in this section is the symbolic extension of
the concrete language {\textsf{\textup{RPFOR}}}\xspace. It can also be seen as the relational
extension of {\textsf{\textup{SPFOR}}}\xspace. The key part of this language's semantics is
the handling of the probabilistic assignment, for which we
provide two rules instead of one. The first is the obvious one,
which carries out a standard symbolic probabilistic assignment. The
second implements a coupling semantics.
The syntax of {\textsf{\textup{SRPFOR}}}\xspace extends the syntax of {\textsf{\textup{RPFOR}}}\xspace by adding symbolic values; since it is almost identical to that of {\textsf{\textup{RPFOR}}}\xspace, we omit it here.
As in the case of {\textsf{\textup{RPFOR}}}\xspace, only unary symbolic expressions and
commands are admitted in the pairing construct. This invariant is
maintained by the semantics rules.
As for the other languages, we provide a big-step evaluation semantics for expressions proving judgments of the form $\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s}\downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s'}$.
The only rule defining the judgment $\downarrow_{\textup{\tiny{SRP}}}$ is the following:
\begin{mathpar}
\inferrule[\rulestyle{S-R-P-Lift}]
{ \begin{array}{c}
\speconf{m_1}{\proj{1}{e}}{p_1}{s}\downarrow_{\textup{\tiny{SP}}} \spval{v_1}{p'_1}{s'}\cr
\speconf{m_2}{\proj{2}{e}}{p_2}{s'}\downarrow_{\textup{\tiny{SP}}} \spval{v_2}{p'_2}{s''} \end{array}\and
\hspace{-0.4cm}v=\left \{
\begin{array}{rcl}
(v_1,v_2) && \text{if}\ (v_i\not\in\mathbb{Z}, i\in\{1,2\})\vee v_1\neq v_2 \cr
v_1 && \text{otherwise}
\end{array}
\right .\
}
{\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s''}}
\end{mathpar}
The rule first projects the symbolic relational expression
on the left and evaluates it to a unary symbolic value,
potentially updating the probabilistic symbolic constraints and the
symbolic constraints. It then does the same with the projection of the expression
on the right, but starting from the potentially updated
constraints. The returned value is unary only
when both evaluations return equal integers; in all
other cases a pair of values is returned. The
relational symbolic semantics thus leverages the unary semantics.
\fi
\ifnum\full=1
\subsection{{\textsf{\textup{SRPFOR}}}\xspace: Semantics of commands}
\begin{wrapfigure}[8]{L}{0.35\textwidth}
\vspace{-0.8cm}
\begin{minipage}{0.30\textwidth}
\fbox{
\[
\begin{array}{rcl}
\mathcal{CTX}&::=& \emptyctxt{\cdot} \mid \seq{\mathcal{CTX}}{c}\\
\ctxtp&::=&\pair{\seq{\cdot}{c}}{\cdot} \mid \pair{\cdot}{\seq{\cdot}{c}} \\
&&\pair{\cdot}{\cdot} \mid \pair{\seq{\cdot}{c}}{\seq{\cdot}{c}}
\end{array}
\]
}
\end{minipage}
\caption{Grammars of evaluation contexts}
\label{fig:contexts}
\end{wrapfigure}
Before proceeding with the semantics of commands, we need to introduce the following
grammars of contexts. We use evaluation contexts to simplify the exposition of the rules, since
for specific constructs, such as relational probabilistic assignment, we will
have more than one rule that could fire.
The two grammars for the evaluation contexts are shown in Figure \ref{fig:contexts}.
Notice how $\ctxtp$ gets saturated by pairs of commands.
Before proceeding we also make a syntactic distinction between
commands. In particular, we call \emph{synchronizing} all the commands in $\mathcal{C}_{\textup{rs}}\xspace$
with the following shapes: $\rass{x}{\lapp{e_1}{e_2}}$ and $\pair{\rass{x}{\lapp{e_1}{e_2}}}{\rass{x'}{\lapp{e'_1}{e'_2}}}$.
We call commands with this structure synchronizing because
they allow the synchronization of two runs, as we will see later on.
In particular, synchronizing commands are the ones that enable
the use of coupling semantics and coupling rules.
We call all the other commands in $\mathcal{C}_{\textup{rs}}\xspace$ non-synchronizing.
The semantics of commands is again provided in small-step style.
\subsubsection{Non synchronizing commands}
In this section we provide the semantics for non synchronizing
commands. In particular we are defining a judgment for a small-step
semantics with the form in Figure \ref{fig:judg-form-srplang-sem-cmds}.
A selection of the rules inductively defining the judgment is specified in Figure
\ref{fig:srplang-sem-cmds-non-synch}.
\begin{figure}
\centering
\fbox{
$\srpcconf{m_1}{m_2}{c}{p_1}{p_2}{s}\rightarrow_{\textup{\tiny{SRP}}} \srpcconf{m'_1}{m'_2}{c'}{p'_1}{p'_2}{s'}$
}
\caption[{\textsf{\textup{SRPFOR}}}\xspace: Judgment form for semantics of commands.]{{\textsf{\textup{SRPFOR}}}\xspace: Judgment form for semantics of non synchronizing commands.
$m_1,m_2,m'_1,m'_2\in{\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}, c,c'\in\mathcal{C}_{\textup{rs}}\xspace, p,p'\in SP, s,s'\in\mathcal{S}$.}
\label{fig:judg-form-srplang-sem-cmds}
\end{figure}
\begin{figure}
\begin{mathpar}
\inferrule[\rulestyle{s-r-if-prob-prob-true-false}]
{
\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s}\downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s'} \and \proj{1}{v},\proj{2}{v}\in\mathcal{X}_p\\\\
p''_1\equiv p'_1@[\proj{1}{v}>0] \and p''_2\equiv p'_2@[\proj{2}{v}\leq 0]
}
{\srpcconf{m_1}{m_2}{\ifte{e}{c_\mathit{tt}}{c_\mathit{ff}}}{p_1}{p_2}{s}\rightarrow_{\textup{\tiny{SRP}}} \srpcconf{m_1}{m_2}{\pair{\proj{1}{c_\mathit{tt}}}{\proj{2}{c_\mathit{ff}}}}{p''_1}{p''_2}{s'}}
\inferrule[\rulestyle{s-r-if-prob-sym-true-false}]
{
\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s}\downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s'} \and \proj{1}{v}\in\mathcal{X}_p \and \proj{2}{v}\in\mathcal{X}\\\\
p''_1\equiv p'_1@[\proj{1}{v}>0] \and s'''\equiv s''\cup\{\proj{2}{v}\leq 0\}
}
{\srpcconf{m_1}{m_2}{\ifte{e}{c_\mathit{tt}}{c_\mathit{ff}}}{p_1}{p_2}{s}\rightarrow_{\textup{\tiny{SRP}}} \srpcconf{m_1}{m_2}{c_\mathit{tt}}{p''_1}{p'_2}{s'''}}
\inferrule[\rulestyle{s-r-pair-lap-skip}]
{
\spcconf{m_1}{\rass{x}{\lapp{e_a}{e_b}}}{p_1}{s} \rightarrow_{\textup{\tiny{SP}}} \spcconf{m'_1}{{\tt skip}}{p'_1}{s'}
}
{\srpcconf{m_1}{m_2}{\pair{\rass{x}{\lapp{e_a}{e_b}}}{{\tt skip}}}{p_1}{p_2}{s} \rightarrow_{\textup{\tiny{SRP}}}
\srpcconf{m'_1}{m_2}{\pair{{\tt skip}}{{\tt skip}}}{p'_1}{p_2}{s'}}
\inferrule[\rulestyle{s-r-pair-lapleft-sync}]
{
c\not\equiv\rass{x}{\lapp{e'_a}{e'_b}} \and
\spcconf{m_2}{c}{p_2}{s} \rightarrow_{\textup{\tiny{SP}}} \spcconf{m'_2}{c'}{p'_2}{s'} \and \ctxtp\equiv \pair{\cdot}{\cdot}
}
{\srpcconf{m_1}{m_2}{\ctxtp(\rass{x}{\lapp{e_a}{e_b}}, c)}{p_1}{p_2}{s} \rightarrow_{\textup{\tiny{SRP}}}
\srpcconf{m_1}{m'_2}{\pair{\rass{x}{\lapp{e_a}{e_b}}}{c'}}{p_1}{p'_2}{s'}}
\inferrule[\rulestyle{s-r-pair-ctxt-1}]
{
\rass{x}{\lapp{e_a}{e_b}}\notin\{c_1,c_2\} \and |\{c_1,c_2\}|=2 \\\\
\{1,2\}=\{i,j\} \and
m'_{i}\equiv m_i \and \spcconf{m_j}{c_j}{p_j}{s} \rightarrow_{\textup{\tiny{SP}}} \spcconf{m'_j}{c'_j}{p'_j}{s'} \\\\
c'_i\equiv c_i \and p'_i\equiv p_i \and
}
{\srpcconf{m_1}{m_2}{\ctxtp(c_1,c_2)}{p_1}{p_2}{s} \rightarrow_{\textup{\tiny{SRP}}}
\srpcconf{m'_1}{m'_2}{\ctxtp(c'_1,c'_2)}{p'_1}{p'_2}{s'}}
\inferrule[\rulestyle{s-r-pair-ctxt-2}]
{
\ctxtp\not\equiv \pair{\cdot}{\cdot}\\\\
\srpcconf{m_1}{m_2}{\pair{c_1}{c_2}}{p_1}{p_2}{s} \rightarrow_{\textup{\tiny{SRP}}} \srpcconf{m'_1}{m'_2}{\pair{c'_1}{c'_2}}{p'_1}{p'_2}{s'}
}
{
\srpcconf{m_1}{m_2}{\ctxtp(c_1,c_2)}{p_1}{p_2}{s} \rightarrow_{\textup{\tiny{SRP}}}
\srpcconf{m'_1}{m'_2}{\ctxtp(c'_1,c'_2)}{p'_1}{p'_2}{s'}
}
\end{mathpar}
\caption{{\textsf{\textup{SRPFOR}}}\xspace: Semantics of non synchronizing commands. Selected rules.}
\label{fig:srplang-sem-cmds-non-synch}
\end{figure}
An explanation of the rules follows.
Rule \rulestyle{s-r-if-prob-prob-true-false} fires when evaluating
a branching instruction whose guard evaluates, on both sides, to a
probabilistic symbolic value.
In this case the semantics can continue with the true branch
on the left run and with the false branch on the right one.
Notice that commands are projected to avoid pairing commands
appearing in a nested form.
In the case where the guard of a branching instruction evaluates to
a probabilistic symbolic value on the left run and a symbolic integer
value on the right one, rule \rulestyle{s-r-if-prob-sym-true-false}
can apply. The rule allows the computation to continue on the true branch on the left
run and on the false branch on the right one. Notice that in one case
the probabilistic list of constraints is updated, while in the
other the symbolic set of constraints is. Rule \rulestyle{s-r-pair-lap-skip}
handles the pairing command where on the left-hand side we have
a probabilistic assignment and on the right a skip instruction.
In this case there is no \emph{hope for synchronization} between the two runs,
and hence we can just perform the left probabilistic assignment unarily, relying
on the unary symbolic semantics. Rule \rulestyle{s-r-pair-lapleft-sync} instead
applies when on the left we have a probabilistic assignment and on the right
another arbitrary command. In this case we can hope to reach a situation where
another probabilistic assignment appears on the right run; hence,
it makes sense to continue the computation unarily on the right side.
Rule \rulestyle{s-r-pair-ctxt-1} applies when a pairing command is built out
of two different unary commands, neither of which is a probabilistic assignment or a sequence of commands.
In this case we can just rely on the unary semantics and execute one step
on one side.
Rule \rulestyle{s-r-pair-ctxt-2} instead applies in all the other cases, by recursively relying
on the $\rightarrow_{\textup{\tiny{SRP}}}$ semantics.
\else
For the semantics of commands we use the following evaluation contexts to simplify the exposition: $\mathcal{CTX}::= \emptyctxt{\cdot} \mid \seq{\mathcal{CTX}}{c}$ and
$\ctxtp::= \pair{\seq{\cdot}{c}}{\cdot} \mid \pair{\cdot}{\seq{\cdot}{c}}\mid \pair{\cdot}{\cdot} \mid \pair{\seq{\cdot}{c}}{\seq{\cdot}{c}}$.
Notice how $\ctxtp$ gets saturated by pairs of commands.
Moreover, we separate commands into two classes. We call \emph{synchronizing} all the commands in $\mathcal{C}_{\textup{rs}}\xspace$
with the following shapes: $\rass{x}{\lapp{e_1}{e_2}}$ and $\pair{\rass{x}{\lapp{e_1}{e_2}}}{\rass{x'}{\lapp{e'_1}{e'_2}}}$, since
they allow the synchronization of two runs using coupling rules.
We call all the other commands non-synchronizing, and relegate the explanation of their semantics to the appendix.
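The distinction is purely syntactic; a sketch with a hypothetical tuple encoding of commands:

```python
# A command is synchronizing when it is a Laplace assignment or a pair
# of Laplace assignments; every other command is non-synchronizing.

def is_lap_assign(c):
    return isinstance(c, tuple) and c[0] == 'lap-assign'

def is_synchronizing(c):
    if is_lap_assign(c):
        return True
    return (isinstance(c, tuple) and c[0] == 'pair'
            and is_lap_assign(c[1]) and is_lap_assign(c[2]))
```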
%
%
\fi
\ifnum\full=1
\subsubsection{{\textsf{\textup{SRPFOR}}}\xspace: Collecting semantics for non synchronizing commands}
Again, $\rightarrow_{\textup{\tiny{SRP}}}$ is a nondeterministic semantics. The
nondeterminism comes from the use of probabilistic symbols as guards
in branching instructions, as well as from symbolic values used as
guards. A final layer of nondeterminism is given by the relational
approach, which allows the two runs to take different branches in a branching
instruction.
So, in order to collect all the possible traces stemming from such nondeterminism,
we define a collecting semantics relating sets of configurations to sets of configurations.
The semantics is specified through a judgment of the form $ \mathscr{S}\mathscr{R} \Rightarrow_{\textup{srp}} \mathscr{S}\mathscr{R}' $,
with $ \mathscr{S}\mathscr{R}, \mathscr{S}\mathscr{R}'\in\mathcal{P}({\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times{\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times\mathcal{C}_{\textup{rs}}\xspace\times SP\times SP\times\mathcal{S})$.
The only rule defining the judgment is given in Figure \ref{fig:sem-srplang-collecting}.
\begin{figure}
\begin{mathpar}
\inferrule[\rulestyle{s-r-p-collect}]
{
\mathscr{R}_{[s]}\subseteq \mathscr{S}\mathscr{R} \\\\
\mathscr{S}\mathscr{R}'\equiv\{\srpcconf{m'_1}{m'_2}{c'}{p'_1}{p'_2}{s'} \mid
\\\\\exists \srpcconf{m_1}{m_2}{c}{p_1}{p_2}{s}\in\mathscr{R}_{[s]} \text{ s.t. }
\srpcconf{m_1}{m_2}{c}{p_1}{p_2}{s} \rightarrow_{\textup{\tiny{SRP}}} \srpcconf{m'_1}{m'_2}{c'}{p'_1}{p'_2}{s'} \land \sat{s'} \}}
{\mathscr{S}\mathscr{R}\Rightarrow_{\textup{srp}} \bigg(\mathscr{S}\mathscr{R}\setminus \mathscr{R}_{[s]} \bigg)\cup \mathscr{S}\mathscr{R}' }
\end{mathpar}
\caption{Rule for $\Rightarrow_{\textup{srp}}$ }
\label{fig:sem-srplang-collecting}
\end{figure}
The rule, and the auxiliary notation $\mathscr{R}_{[s]}$,
are very similar to those in Figure \ref{fig:sem-splang-collecting}.
The only difference is that here sets of symbolic relational probabilistic configurations
are considered instead of symbolic (unary) probabilistic configurations.
\else
Again, $\rightarrow_{\textup{\tiny{SRP}}}$ is a nondeterministic semantics. The
nondeterminism comes from the use of probabilistic symbols and symbolic values as guards, and from the relational
approach. So, in order to collect all the possible traces stemming from such nondeterminism,
we define a collecting semantics relating sets of configurations to sets of configurations.
The semantics is specified through a judgment of the form $ \mathscr{S}\mathscr{R} \Rightarrow_{\textup{srp}} \mathscr{S}\mathscr{R}' $,
with $ \mathscr{S}\mathscr{R}, \mathscr{S}\mathscr{R}'\in\mathcal{P}({\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times{\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times\mathcal{C}_{\textup{rs}}\xspace\times SP\times SP\times\mathcal{S})$.
The only rule defining it is a natural lifting of the one for the unary semantics.
We can extend the coverage lemma to the relational setting:
\begin{lemma}[Probabilistic Relational Coverage]
\label{lem:rplang-coverage}
If $\mathscr{S}\mathscr{R} \srpsetstepby{\mathscr{R}_{[s]}} \mathscr{S}\mathscr{R}'$ and $\sigma\models_{\mathcal{I}} \mathscr{R}_{[s]}$ then
$\exists \sigma', \mathscr{R}_{[s']}\subseteq\mathscr{S}\mathscr{R}'$ such that
$\sigma'\models_{\mathcal{I}} \mathscr{R}_{[s']}$, and
$\sub{\mathscr{R}_{[s]}}{\sigma} \Rightarrow_{\textup{rp}}^{*}\sub{\mathscr{R}_{[s']}}{\sigma'}$.
\end{lemma}
\fi
\ifnum\full=1
\subsubsection{Synchronizing commands}
So far, all the semantics rules we presented for ${\textsf{\textup{SRPFOR}}}\xspace$ are uniquely determined
by the syntactic construct they are associated with. Moreover, the rules of
the $\rightarrow_{\textup{\tiny{SRP}}}$ semantics, and hence also of the $\Rightarrow_{\textup{srp}}$
semantics, do not deal with synchronizing commands. For those we want
to be able to apply different rules. In order to consider all the
possibilities, we define a new judgment of the form $ \mathscr{G} \rightsquigarrow \mathscr{G}' $, with
$\mathscr{G},\mathscr{G}'\in\mathcal{P}(\mathcal{P}({\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times{\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times\mathcal{C}_{\textup{rs}}\xspace\times SP\times SP\times\mathcal{S}))$.
\begin{figure}
\begin{mathpar}
\inferrule[\rulestyle{Proof-Step-No-Sync}]
{
\mathscr{S}\mathscr{R}\in \mathscr{G} \and
\mathscr{S}\mathscr{R} \Rightarrow_{\textup{srp}} \mathscr{S}\mathscr{R}'
\and \mathscr{G}'\equiv \bigg (\mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\bigg )\cup\{\mathscr{S}\mathscr{R}'\}
}
{\mathscr{G} \rightsquigarrow \mathscr{G}'}
\inferrule[\rulestyle{Proof-Step-No-Coup}]
{
\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G} \\\\
\srpeconf{m_1}{m_2}{e_a}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}}\srpval{v_a}{p'_1}{p'_2}{s_a} \and
\srpeconf{m_1}{m_2}{e_b}{p'_1}{p'_2}{s_a} \downarrow_{\textup{\tiny{SRP}}} \srpval{v_b}{p''_1}{p''_2}{s_b} \\\\
\fresh{X_1,X_2}{\mathcal{X}_p} \and m'_1\equiv m_1[x\mapsto X_1] \and m'_2\equiv m_2[x\mapsto X_2] \\\\
p'''_1\equiv p''_1@[\rass{X_1}{\lapp{\proj{1}{v_a}}{\proj{1}{v_b}}}]\and p'''_2\equiv p''_2@[\rass{X_2}{\lapp{\proj{2}{v_a}}{\proj{2}{v_b}}}] \\\\
\mathscr{S}\mathscr{R}'\equiv \bigg (\mathscr{S}\mathscr{R}\setminus \{\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s}\}\bigg)\cup
\{\srpcconf{m'_1}{m'_2}{\mathcal{CTX}[{\tt skip}]}{p'''_1}{p'''_2}{s''}\}\\\\
\mathscr{G}'\equiv \bigg( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\bigg ) \cup \{\mathscr{S}\mathscr{R}'\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
\inferrule[\rulestyle{Proof-Step-Avoc}]
{
\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G} \\\\
\srpeconf{m_1}{m_2}{e_a}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}}\srpval{v_a}{p'_1}{p'_2}{s_a} \and
\srpeconf{m_1}{m_2}{e_b}{p'_1}{p'_2}{s_a} \downarrow_{\textup{\tiny{SRP}}} \srpval{v_b}{p''_1}{p''_2}{s_b} \\\\
\fresh{X_1,X_2}{\mathcal{X}} \and m'_1\equiv m_1[x\mapsto X_1] \and m'_2\equiv m_2[x\mapsto X_2] \\\\
\mathscr{S}\mathscr{R}'\equiv \bigg (\mathscr{S}\mathscr{R}\setminus \{\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s}\}\bigg)\cup
\{\srpcconf{m'_1}{m'_2}{\mathcal{CTX}[{\tt skip}]}{p''_1}{p''_2}{s''}\}\\\\
\mathscr{G}'\equiv \bigg( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\bigg ) \cup \{\mathscr{S}\mathscr{R}'\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
\end{mathpar}
\caption[{\textsf{\textup{SRPFOR}}}\xspace: Proof collecting semantics]{{\textsf{\textup{SRPFOR}}}\xspace: Proof collecting semantics, selected rules.}
\label{fig:proof-semantics-1}
\end{figure}
\begin{figure}[h]
\begin{mathpar}
\inferrule[\rulestyle{Proof-Step-Lap-Gen}]
{
\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G} \\\\
\srpeconf{m_1}{m_2}{e_a}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}}\srpval{v_a}{p'_1}{p'_2}{s_a} \and
\srpeconf{m_1}{m_2}{e_b}{p'_1}{p'_2}{s_a} \downarrow_{\textup{\tiny{SRP}}} \srpval{v_b}{p''_1}{p''_2}{s_b} \\\\
s'\equiv s_b\cup \{\proj{1}{v_b}=\proj{2}{v_b}, \proj{1}{v_b}>0\} \and m_1(\epsilon_c)=E'=m'_2(\epsilon_c) \\\\
\fresh{E'',X_1, X_2, K, K'}{\mathcal{X}} \and
m'_1\equiv m_1[x\mapsto X_1][\epsilon_c\mapsto E''],\and m'_2=m_2[x\mapsto X_2][\epsilon_c\mapsto E'']\\\\
m(\epsilon)=E \and
s''\equiv s'\cup \{X_1+K=X_2, K\leq K', K'\cdot E=\proj{1}{v_b}, E''=E'+|\proj{1}{v_a}-\proj{2}{v_a}|\cdot K'\}\\\\
p'''_1\equiv p''_1@[\rass{X_1}{\lapp{\proj{1}{v_a}}{\proj{1}{v_b}}}]\and p'''_2\equiv p''_2@[\rass{X_2}{\lapp{\proj{2}{v_a}}{\proj{2}{v_b}}}] \\\\
\mathscr{S}\mathscr{R}'\equiv \bigg (\mathscr{S}\mathscr{R}\setminus \{\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s}\}\bigg)\cup
\{\srpcconf{m'_1}{m'_2}{\mathcal{CTX}[{\tt skip}]}{p'''_1}{p'''_2}{s''}\}\\\\
\mathscr{G}'\equiv \bigg( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\bigg ) \cup \{\mathscr{S}\mathscr{R}'\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
\inferrule[\rulestyle{Proof-Step-If}]
{
\rho=\srpcconf{m_1}{m_2}{\mathcal{CTX}[\ifte{e}{c_1}{c_2}]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G}\\\\
\srpeconf{m_1}{m_2}{e}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}} \srpval{v}{p'_1}{p'_2}{s'}\and \{\oplus_1,\oplus_2\}=\{>,\leq\}\\\\
\models s'\implies \proj{1}{v}\oplus_{i} 0 \iff \proj{2}{v}\oplus_{i} 0\\\\
\mathscr{S}\mathscr{R} \srpsetstepby{\{\rho\}} \mathscr{S}\mathscr{R}' \and
\mathscr{G}'\equiv \bigg( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\bigg ) \cup \{\mathscr{S}\mathscr{R}'\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
\inferrule[\rulestyle{Proof-Step-Other-Cmds}]
{
\rho= \srpcconf{m_1}{m_2}{\mathcal{CTX}[c]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G}\\\\
c\neq \ifte{\cdot}{\cdot}{\cdot}\and \mathscr{S}\mathscr{R} \srpsetstepby{\{\rho\}} \mathscr{S}\mathscr{R}'\\\\
\mathscr{G}'\equiv \bigg( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\bigg ) \cup \{\mathscr{S}\mathscr{R}'\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
\end{mathpar}
\caption[{\textsf{\textup{SRPFOR}}}\xspace: Proof collecting semantics]{{\textsf{\textup{SRPFOR}}}\xspace: Proof collecting semantics, other rules}
\label{fig:proof-semantics}
\end{figure}
In Figures \ref{fig:proof-semantics-1} and \ref{fig:proof-semantics},
a selection of the
rules defining the judgment is presented. One of the rules defining
this last judgment relies on the previously defined collecting
semantics. Indeed, rule \rulestyle{Proof-Step-No-Sync} applies when
no synchronizing commands are involved, and hence no coupling rule
can be applied. Before proceeding with the
explanation of the other rules, we need to explain the
variable $\epsilon_c$, which is used in the latter rules. The variable
$\epsilon_c$ symbolically counts the privacy budget spent
in the current relational execution. It gets
increased when the rule $\rulestyle{Proof-Step-Lap-Gen}$
fires. We have chosen to omit a similar ghost counter variable for
$\delta$ to keep the rules more readable.
The variable also gets increased when the rule handling the pairing command
$\pair{\rass{x}{\lapp{e_a}{e_b}}}{\rass{x}{\lapp{e'_a}{e'_b}}}$ fires; this latter
rule is shown in the Appendix. This symbolic counter
variable is useful when trying to prove equality of certain variables
without spending more than a specific budget.

In the set of sets of configurations $\mathscr{G}$, a set of configurations $\mathscr{S}\mathscr{R}$
is chosen nondeterministically. Among the elements of $\mathscr{S}\mathscr{R}$, a configuration
is also chosen nondeterministically.
Using contexts, we check that in the selected configuration the
next command to execute is the probabilistic assignment.
After reducing both the mean and the scale expression to values,
and having verified (that is, assumed in the set of constraints)
that the scales have the same value in the two runs, rule
$\rulestyle{Proof-Step-Lap-Gen}$ adds a new element to the set of
constraints, namely
$E''=E'+|\proj{1}{v_a}-\proj{2}{v_a}|\cdot K'$, where $K, K', E''$ are
fresh symbols denoting integers and $E'$ is the symbolic integer to which the budget
variable $\epsilon_c$ maps. Notice that $\epsilon_c$ needs to
point to the same symbol in both memories, because it is a shared
variable tracking the privacy budget spent so far in both runs. This
new constraint increases the budget spent. The other constraint added
is the actual coupling relation, that is $X_1+K=X_2$, where $X_1, X_2$ are fresh in $\mathcal{X}$.
Later, $K$ will be existentially quantified in order to search for a proof of
$\epsilon$-indistinguishability. Rule $\rulestyle{Proof-Step-Avoc}$
does not use any coupling rule but treats the samples in a purely
symbolic manner. It intuitively asserts that the two samples are
drawn from the distributions and assigns to them arbitrary integers
free to vary over the whole domain of the Laplace distribution. Finally,
$\rulestyle{Proof-Step-No-Coup}$ applies to synchronizing commands as
well. It does not add any relational constraints to the samples.
This rule intuitively means that we are not correlating the two samples
in any way. Notice that since we are not using any coupling rule,
we do not need to check that the scale value is the same in the two
runs, as is required in the previous rule. We can think of this
as a way to encode the relational semantics of the program in an
expression which can later be fed as input to other tools. The main
difference with the previous rule is that here we treat the sampling
instruction symbolically, which is why the fresh symbols are in
$\mathcal{X}_p$, denoting full distributions, while in the previous rule the
fresh symbols are in $\mathcal{X}$, denoting sampled integers in
no particular relation. When the program involves a synchronizing
command, we essentially fork the execution when it is time to execute it:
the set of configurations gets to continue the
computation in different ways, one for every applicable rule.
\subsection{Coverage}
The coverage lemma can be extended to the relational setting as well. To do
that, though, we need to consider only the fragment of the $ \rightsquigarrow$
semantics that uses the rules $\rulestyle{Proof-Step-No-Sync}$ and
$\rulestyle{Proof-Step-No-Coup}$. We denote this fragment with
the notation $\xRightarrow[\sim]{}$.
The semantics that uses only the rules $\rulestyle{Proof-Step-Lap-Gen}$,
$\rulestyle{Proof-Step-Other-Cmds}$, and $\rulestyle{Proof-Step-If}$
will instead be denoted by $\xRightarrow[]{\sim}$.
A similar relational coverage Lemma holds for the $\Rightarrow_{\textup{srp}}$ semantics
and hence it trivially extends to $\xRightarrow[\sim]{}$.
\begin{lemma}[Probabilistic Relational Coverage]
\label{lem:rplang-coverage}
If $\mathscr{S}\mathscr{R} \srpsetstepby{\mathscr{R}_{[s]}} \mathscr{S}\mathscr{R}'$ and $\sigma\models_{\mathcal{I}} \mathscr{R}_{[s]}$ then
$\exists \sigma', \mathscr{R}_{[s']}\in\mathscr{S}\mathscr{R}'$ such that $\mathscr{R}_{[s']}\subseteq\mathscr{S}\mathscr{R}',
\sigma'\models_{\mathcal{I}} \mathscr{R}_{[s']}$, and
$\sub{\mathscr{R}_{[s]}}{\sigma} \Rightarrow_{\textup{rp}}^{*}\sub{\mathscr{R}_{[s']}}{\sigma'}$.
\end{lemma}
\else
\paragraph{Semantics of synchronizing commands}
We define a new judgment with form $ \mathscr{G} \rightsquigarrow \mathscr{G}' $, with
$\mathscr{G},\mathscr{G}'\in\mathcal{P}(\mathcal{P}({\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times{\ensuremath{\bm{\mathcal{M}}_{\textup{\tiny{SP}}}}}\times\mathcal{C}_{\textup{rs}}\xspace\times SP\times SP\times\mathcal{S}))$.
\begin{figure*}
\begin{minipage}{0.98\textwidth}
\fbox{
\begin{mathpar}
\inferrule[\rulestyle{Proof-Step-No-Sync}]
{
\mathscr{S}\mathscr{R}\in \mathscr{G} \and
\mathscr{S}\mathscr{R} \Rightarrow_{\textup{srp}} \mathscr{S}\mathscr{R}'
\and \mathscr{G}'\equiv \big (\mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\big )\cup\{\mathscr{S}\mathscr{R}'\}
}
{\mathscr{G} \rightsquigarrow \mathscr{G}'}
\inferrule[\rulestyle{Proof-Step-Avoc}]
{
\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G} \\\\
\srpeconf{m_1}{m_2}{e_a}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}}\srpval{v_a}{p'_1}{p'_2}{s_a} \\\\
\srpeconf{m_1}{m_2}{e_b}{p'_1}{p'_2}{s_a} \downarrow_{\textup{\tiny{SRP}}} \srpval{v_b}{p''_1}{p''_2}{s_b} \\\\
\fresh{X_1,X_2}{\mathcal{X}} \and m'_1\equiv m_1[x\mapsto X_1] \and m'_2\equiv m_2[x\mapsto X_2]\\\\
\mathscr{G}'\equiv \big( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\big ) \cup \{\mathscr{S}\mathscr{R}'\}\\\\
\mathscr{S}\mathscr{R}'\equiv \big (\mathscr{S}\mathscr{R}\setminus \{\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s}\}\big)\\\\\cup
\{\srpcconf{m'_1}{m'_2}{\mathcal{CTX}[{\tt skip}]}{p''_1}{p''_2}{s_b}\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
%
\and
\inferrule[\rulestyle{Proof-Step-Lap-Gen}]
{
\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s} \in \mathscr{S}\mathscr{R}\in \mathscr{G} \\\\
\srpeconf{m_1}{m_2}{e_a}{p_1}{p_2}{s} \downarrow_{\textup{\tiny{SRP}}}\srpval{v_a}{p'_1}{p'_2}{s_a} \\\\
\srpeconf{m_1}{m_2}{e_b}{p'_1}{p'_2}{s_a} \downarrow_{\textup{\tiny{SRP}}} \srpval{v_b}{p''_1}{p''_2}{s_b} \\\\
s'\equiv s_b\cup \{\proj{1}{v_b}=\proj{2}{v_b}, \proj{1}{v_b}>0\} \and m_1(\epsilon_c)=E'=m_2(\epsilon_c) \\\\
\fresh{E'',X_1, X_2, K, K'}{\mathcal{X}} \and
m'_1\equiv m_1[x\mapsto X_1][\epsilon_c\mapsto E''] \\\\ m'_2=m_2[x\mapsto X_2][\epsilon_c\mapsto E'']\and
m(\epsilon)=E \\\\
s''\equiv s'\cup \{X_1+K=X_2, K\leq K', K'\cdot E=\proj{1}{v_b}, E''=E'+|\proj{1}{v_a}-\proj{2}{v_a}|\cdot K'\}\\\\
p'''_1\equiv p''_1@[\rass{X_1}{\lapp{\proj{1}{v_a}}{\proj{1}{v_b}}}]\and p'''_2\equiv p''_2@[\rass{X_2}{\lapp{\proj{2}{v_a}}{\proj{2}{v_b}}}]
\\\\
\mathscr{G}'\equiv \big( \mathscr{G}\setminus\{\mathscr{S}\mathscr{R}\}\big ) \cup \{\mathscr{S}\mathscr{R}'\}
\\\\
\mathscr{S}\mathscr{R}'\equiv \big (\mathscr{S}\mathscr{R}\setminus \{\srpcconf{m_1}{m_2}{\mathcal{CTX}[\rass{x}{\lapp{e_a}{e_b}}]}{p_1}{p_2}{s}\}\big)\\\\\cup
\{\srpcconf{m'_1}{m'_2}{\mathcal{CTX}[{\tt skip}]}{p'''_1}{p'''_2}{s''}\}
}
{
\mathscr{G} \rightsquigarrow \mathscr{G}'
}
%
%
\end{mathpar}
}
\end{minipage}
\caption[{\textsf{\textup{SRPFOR}}}\xspace: Proof collecting semantics]{{\textsf{\textup{SRPFOR}}}\xspace: Proof collecting semantics, selected rules}
\label{fig:proof-semantics}
\end{figure*}
In Figure \ref{fig:proof-semantics},
we give a selection of the
rules.
Rule \rulestyle{Proof-Step-No-Sync} applies when no synchronizing
commands are involved, and hence there is no possible coupling rule to
be applied. In the other rules, we use the variable $\epsilon_c$ to
symbolically count the privacy budget in the current relational
execution. The variable gets increased when the rule
$\rulestyle{Proof-Step-Lap-Gen}$ fires.
This symbolic counter
variable is useful when trying to prove equality of certain variables
without spending more than a specific budget. In the set of sets of
configurations $\mathscr{G}$, a set of configurations,
$\mathscr{S}\mathscr{R}$, is nondeterministically chosen. Among elements in
$\mathscr{S}\mathscr{R}$ a configuration is also nondeterministically
chosen. Using contexts we check that in the selected configuration
the next command to execute is the probabilistic assignment. After
reducing both the mean and the scale expression to values, and after
verifying (that is, assuming in the set of constraints) that the
scales have the same value in the two runs, the rule adds to the set
of constraints a new element, namely
$E''=E'+|\proj{1}{v_a}-\proj{2}{v_a}|\cdot K'$,
where $K, K', E''$ are fresh symbols denoting integers and $E'$ is the
symbolic integer to which the budget variable $\epsilon_c$ maps.
Notice that $\epsilon_c$ needs to point to the same symbol in both
memories, because it is a shared variable tracking the privacy
budget spent so far in both runs. This new constraint increases the
budget spent. The other constraint added is the real coupling
relation, $X_1+K=X_2$, where $X_1, X_2$ are fresh in $\mathcal{X}$.
Later, $K$ will be existentially quantified in order to search for a
proof of $\epsilon$-indistinguishability. Rule
$\rulestyle{Proof-Step-Avoc}$ does not use any coupling rule but
treats the samples in a purely symbolic manner. It intuitively
asserts that the two samples are drawn from the distributions and
assigns to them arbitrary integers free to vary over the whole domain of
the Laplace distribution.
When the program involves a synchronizing command we essentially
fork the execution when it is time to execute it: the set of
configurations continues the computation in different ways, one for
every applicable rule.
\fi
\ifnum\full=1
\section{Derivations}
\label{sec:derivations}
The language of relational assertions is defined using first-order predicate logic formulas
involving relational program expressions and logical variables in an unspecified set $\textsf{\textup{LogVar}}$. The interpretation of
a relational assertion is naturally defined as a subset of $\mathcal{M}_{\textup{c}}\times\mathcal{M}_{\textup{c}}$,
that is, the set of pairs of memories modelling the assertion. We will let capital Greek letters
such as $\Phi, \Psi,\dots$ range over the set of relational assertions. We will also need an additional substitution
function $\tocstr{\cdot}{\cdot}$ taking an assertion and a memory in input and returning the assertion where all the
program variables have been substituted with the values to which the memory maps them.
That is, given a memory, relational or unary, $m$ and an assertion, relational or unary, $\Phi$,
$\tocstr{m}{\Phi}$ is a constraint. More details can be found in \cite{Farina2019}.
\begin{definition}
Let $\Phi, \Psi$ be relational assertions, $c\in\mathcal{C}_{\textup{r}}$, $\mathcal{I}:\textsf{\textup{LogVar}}\rightarrow\mathbb{R}$ defined on $\epsilon,\delta$.
We say that, $\Phi$ yields $\Psi$ through $c$ within $\epsilon$ and $\delta$ under $\mathcal{I}$
(and we write $\judg{c}{(\epsilon,\delta)}{\Phi}{\Psi}{\mathcal{I}}$) iff
\begin{itemize}
\item $\{\{ \{\srconf{m_{I_{1}}}{m_{I_{2}}}{c}{[]}{[]}{\tocstr{m_{I}}{\Phi}}\}\} \} \rightsquigarrow^{*}\mathscr{G}$
\item $\exists \mathcal{H}_{\textup{sr}}=\{\mathscr{H}{s_1}, \dots, \mathscr{H}{s_t}\}\in\mathscr{G}$ such that
\begin{itemize}
\item $\final{\mathcal{H}_{\textup{sr}}}$
\item $\forall \srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{s}\in\bigcup_{\mathscr{D}\in\mathcal{H}_{\textup{sr}}}\mathscr{D}.\exists \vec{k}$.
\[s\implies \tocstr{\pair{m_1}{m_2}}{\Psi\land \epsilon_c\leq \epsilon \land \delta_c\leq \delta}\]
\end{itemize}
where $m_{I}\equiv\pair{m_{I_1}}{m_{I_2}}=
\pair{m'_{I_1}[\epsilon_c\mapsto 0][\delta_c\mapsto 0]}{m'_{I_2}[\epsilon_c\mapsto 0][\delta_c\mapsto 0]}$,
$m'_{I_1}$, and $m'_{I_2}$ are fully symbolic memories, and $\vec{k}=k_1, k_2,\dots$ are the symbols generated by the rules for synchronizing commands.
\end{itemize}
\end{definition}
The idea of this definition is to automate the proof search.
When proving differential privacy we will
usually consider $\Psi$ as being equality of the output variables in
the two runs and $\Phi$ as being our preconditions.
\section{Soundness of proofs and refutations}
In this section we will connect the material presented in Section
\ref{sec:derivations} with the one presented in Section \ref{sec:prelim}.
\subsection{Soundness}
\begin{lemma}
\label{lemma:sound}
Let $c\in\mathcal{C}_{\textup{r}}$ with output variable $o$ taking values over the set $O$.
If $\judg{c}{(\epsilon,\delta)}{d_1\sim d_2}{o_1=o_2}{\mathcal{I}}$ then $c$ is ($\epsilon$,$\delta$)-differentially private.
Also, if $\judg{c}{(\epsilon,\delta)}{d_1\sim d_2}{o_1=\iota \implies o_2=\iota}{\mathcal{I}}$ for all $\iota\in O$
then $c$ is ($\epsilon$,$\delta$)-differentially private.
\end{lemma}
\subsection{Refutations through pure semantics}
\begin{lemma}
\label{lem:refutations}
If $\{\{\{\srconf{m_1}{m_2}{c}{[]}{[]}{\tocstr{\pair{m_1}{m_2}}{\Phi}}\}\}\}\hat{ \rightsquigarrow}\mathscr{G}$,
$\mathscr{H}{s}\in\mathcal{H}\in\mathscr{G}$, and $\exists \sigma\models_{\mathbb{Z}} s$ such that
$\Delta_{\epsilon}(\cdenote{\proj{1}{c}}(\sub{m_1}{\sigma}), \cdenote{\proj{2}{c}}(\sub{m_2}{\sigma}))>\delta$,
then $c$ is not $(\epsilon,\delta)$-differentially private.
\begin{proof}
It suffices to notice that the two distributions $\cdenote{\proj{1}{c}}(\sub{m_1}{\sigma})$, and $\cdenote{\proj{2}{c}}(\sub{m_2}{\sigma})$
violate the $\delta$ bound of the $\Delta_{\epsilon}$ distance. Then $\sub{m_1}{\sigma}$ and $\sub{m_2}{\sigma}$ are counterexamples.
\end{proof}
\end{lemma}
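When the two projected programs have finite output distributions, the divergence check in this lemma can be carried out directly. The following is a minimal sketch of our own (not part of \textsf{\textup{CRSE}}\xspace), assuming the distributions are given as finite dictionaries mapping outputs to probabilities:

```python
import math

def delta_eps(p, q, eps):
    """Delta_eps(p, q) = max over events S of p(S) - e^eps * q(S),
    computed pointwise for finite distributions given as dicts."""
    support = set(p) | set(q)
    return sum(max(p.get(x, 0.0) - math.exp(eps) * q.get(x, 0.0), 0.0)
               for x in support)

# A pair of output distributions in the style of randomized response:
p = {"yes": 0.75, "no": 0.25}
q = {"yes": 0.25, "no": 0.75}

loss = delta_eps(p, q, math.log(3))  # ~0: within the e^eps bound at eps = ln 3
gap = delta_eps(p, q, 0.0)           # 0.5 > 0: a violation witness at eps = 0
```

If $\Delta_{\epsilon}$ computed this way exceeds $\delta$, the pair of memories producing the two distributions is a counterexample, exactly as in the lemma.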
\else
\paragraph{\bf Metatheory}
\label{sec:derivations}
The language of relational assertions $\Phi, \Psi,\dots$ is defined using first-order predicate logic formulas
involving relational program expressions and logical variables in $\textsf{\textup{LogVar}}$. The interpretation of
a relational assertion is naturally defined as a subset of $\mathcal{M}_{\textup{c}}\times\mathcal{M}_{\textup{c}}$,
that is, the set of pairs of memories modelling the assertion. We will denote by
$\tocstr{\cdot}{\cdot}$ the substitution
function mapping the variables in an assertion to the values they have in a memory (unary or relational). More details are in \cite{Farina2019}.
\begin{definition}
Let $\Phi, \Psi$ be relational assertions, $c\in\mathcal{C}_{\textup{r}}$, $\mathcal{I}:\textsf{\textup{LogVar}}\rightarrow\mathbb{R}$ be an interpretation defined on $\epsilon$.
We say that, $\Phi$ yields $\Psi$ through $c$ within $\epsilon$ under $\mathcal{I}$
(and we write $\judg{c}{\epsilon}{\Phi}{\Psi}{\mathcal{I}}$) iff
1) $\{\{ \{\srconf{m_{I_{1}}}{m_{I_{2}}}{c}{[]}{[]}{\tocstr{m_{I}}{\Phi}}\}\} \} \rightsquigarrow^{*}\mathscr{G}$
2) $\exists \mathcal{H}_{\textup{sr}}=\{\mathscr{H}{s_1}, \dots, \mathscr{H}{s_t}\}\in\mathscr{G}$ such that $\final{\mathcal{H}_{\textup{sr}}}$ and
$\forall \srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{s}$ $\in\bigcup_{\mathscr{D}\in\mathcal{H}_{\textup{sr}}}\mathscr{D}.$ $\exists\vec{k}$.
$s\implies \tocstr{\pair{m_1}{m_2}}{\Psi\land \epsilon_c\leq \epsilon }$
where $m_{I}\equiv\pair{m_{I_1}}{m_{I_2}}=
\pair{m'_{I_1}[\epsilon_c\mapsto 0]}{m'_{I_2}[\epsilon_c\mapsto 0]}$,
$m'_{I_1}$, and $m'_{I_2}$ are fully symbolic memories, and $\vec{k}=k_1, k_2,\dots$ are the symbols generated by the rules for synchronizing commands.
\end{definition}
The idea of this definition is to automate the proof search.
When proving differential privacy we will
usually consider $\Psi$ as being equality of the output variables in
the two runs and $\Phi$ as being our preconditions.
We can now prove the soundness of our approach.
\begin{lemma}
\label{lemma:sound}
Let $c\in\mathcal{C}_{\textup{r}}$. If $\judg{c}{\epsilon}{d_1\sim d_2}{o_1=o_2}{\mathcal{I}}$ then $c$ is $\epsilon$-differentially private.
\end{lemma}
We can also prove the soundness of refutations obtained by the semantics.
\begin{lemma}
\label{lem:refutations}
If $\{\{\{\srconf{m_1}{m_2}{c}{[]}{[]}{\tocstr{\pair{m_1}{m_2}}{\Phi}}\}\}\}{ \rightsquigarrow}\mathscr{G}$,
$\mathscr{H}{s}\in\mathcal{H}\in\mathscr{G}$, and $\exists \sigma\models_{\mathbb{Z}} s$ such that
$\Delta_{\epsilon}(\cdenote{\proj{1}{c}}(\sub{m_1}{\sigma}), \cdenote{\proj{2}{c}}(\sub{m_2}{\sigma}))>0$,
then $c$ is not $\epsilon$-differentially private.
\end{lemma}
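For finite output distributions the divergence in this lemma can be checked directly. A minimal sketch of our own (not part of \textsf{\textup{CRSE}}\xspace; distributions as dicts):

```python
import math

def delta_eps(p, q, eps):
    # Pointwise epsilon-divergence of finite distributions p, q (dicts).
    return sum(max(p.get(x, 0.0) - math.exp(eps) * q.get(x, 0.0), 0.0)
               for x in set(p) | set(q))

# Distributions with disjoint supports witness an unbounded privacy loss:
# delta_eps stays positive no matter how large eps is.
disjoint = delta_eps({"a": 1.0}, {"b": 1.0}, 50.0)
```

A strictly positive value on some pair of adjacent inputs is exactly the refutation condition of the lemma.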
\fi
\ifnum\full=1
\section{Strategies for counterexample finding}
Lemma \ref{lem:refutations} is hard to use to find counterexamples
because, given two arbitrary probability distributions, computing their
$\epsilon$-divergence is hard in general. For these reasons we will now
describe three strategies that might help in reducing the effort of
counterexample finding. These strategies help in isolating traces that could
potentially lead to violations. For this we first need some notation.
Given a set of constraints $s$ we define the triple
$\Omega=\langle\Omega_1, C(\vec{k}), \Omega_2\rangle\equiv\langle
\proj{1}{s}, s\setminus(\proj{1}{s}\cup \proj{2}{s}), \proj{2}{s}
\rangle$.
Given a relational symbolic configuration we can always split its
symbolic set of constraints along the $\Omega$ pattern. Given the set
of constraints $s$ of a relational trace, its $\Omega_1$ projection is
the set of constraints generated by the branching instructions
performed by the left run, and similarly for $\Omega_2$. Finally,
$C(\vec{k})$ is the set of relational constraints coming from
preconditions, from invariants, or from the rule
$\rulestyle{Proof-Step-Lap-Gen}$. The, potentially empty, vector $\vec{k}=K_1,\dots, K_n$
is the set of fresh symbols $K$ generated by that rule. We sometimes abuse notation
and consider $\Omega$ also as a set of constraints given by the union
of its three projections. Given a set of constraints we will also
consider it as a single proposition given by the conjunction of its elements.
\subsection{A simplifying assumption on traces and events}
In this section we formalize the main assumption that we will use
in order to apply the strategies for counterexample finding presented
in the following subsections.
\begin{assumption}
\label{ass:assumption}
Let $c\in\mathcal{C}_{\textup{r}}$ with output variable $o$; then $c$ is such that $ \{\{\{\srconf{m_1}{m_2}{c}{[]}{[]}{s}\}\}\} \rightsquigarrow^{*}\mathscr{G}$ and
\[
\forall \mathscr{H}{\langle \Omega_1, C(\vec{k}), \Omega_2\rangle}\in\mathcal{H}\in\mathscr{G}.\final{\mathcal{H}}\wedge o_1=o_2 \implies \Omega_1\Leftrightarrow \Omega_2
\]
\end{assumption}
The idea of this assumption is to consider only programs for which
the output variable can assume the same value on both runs only if
the two runs follow the same branches. That is, if the two output variables
differ then the two executions must have, at some point, taken different branches.
\paragraph{} The following definition will be used to distinguish
relational traces which are reachable on one run but not on the
other. We call these traces \emph{orthogonal}.
\begin{definition} A final relational symbolic trace is orthogonal
when its set of constraints is such that $\exists\sigma.\sigma \not\models \Omega_2$
and $\sigma \models \Omega_1\wedge C(\vec{k})$. That is, a trace for which the
following formula is satisfiable: $\neg(\Omega_1\wedge C(\vec{k})\implies \Omega_2)$.
\end{definition}
\paragraph{} The next definition, instead, will be used to isolate relational
traces for which it is not possible that the left one is executed but the right one
is not. We call these traces \emph{specular}.
\begin{definition} A final relational symbolic trace
is specular when its set of constraints is such that
$\exists \vec{k}.\Omega_1\wedge C(\vec{k}) \implies \Omega_2$.
\end{definition}
The constraint $\Omega_1\wedge C(\vec{k})$ includes all the
constraints coming from the left projection's branching of the
symbolic execution and all the relational assumptions such as the
adjacency condition, and all constraints added by the potentially
fired $\rulestyle{Proof-Step-Lap-Gen}$ rule. A specular trace
is such that its left projection constraints plus the relational
assumptions imply the right projection constraints.
We will now describe three strategies that will be used to isolate
relational symbolic traces potentially leading to counterexamples.
\subsection{Strategy A}
In this strategy \textsf{\textup{CRSE}}\xspace uses only the rule
$\rulestyle{Proof-Step-Avoc}$ for sampling instructions, and it
searches for orthogonal relational traces. Under
Assumption \ref{ass:assumption}, if such a trace exists for a program then it
must be the case that the program can output one value on one run with
some probability while the same value has probability 0 of being output
on the second run. This very fact implies that for some input the
program has an unbounded privacy loss. To implement this strategy
\textsf{\textup{CRSE}}\xspace looks for orthogonal relational traces
$\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\Omega}$ such that $ \exists
\sigma. \sigma \models \Omega_1\wedge C(\vec{k})$ but $\sigma
\not\models \Omega_2$. Notice that using this strategy $\vec{k}$ will
always be empty, as the rule used for samplings does not introduce any
coupling between the two samples. Hence, we do not need to quantify
over those symbols, and we can just write $C$. The set $C$, though,
might very well be non-empty, because it will potentially include
relational assumptions, e.g., adjacency of the inputs.
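The search this strategy performs can be sketched as follows. This is hypothetical code of our own, with the SMT solver that would be used in practice replaced by brute-force enumeration over a small integer domain, and constraints modelled as Python predicates: the input symbols are fixed first, then we ask whether $\Omega_1\wedge C$ is satisfiable for some choice of the left run's samples while $\Omega_2$ is unsatisfiable for every choice of the right run's samples.

```python
from itertools import product

def orthogonal_witness(inputs, samp1, samp2, omega1, c, omega2, dom=range(-2, 3)):
    # Search for input values sigma such that Omega_1 /\ C is satisfiable
    # (for some left-run samples) while Omega_2 is unsatisfiable (for all
    # right-run samples): a witness of an orthogonal relational trace.
    for vals in product(dom, repeat=len(inputs)):
        sigma = dict(zip(inputs, vals))
        if not all(f(sigma) for f in c):
            continue
        left = any(all(f({**sigma, **dict(zip(samp1, sv))}) for f in omega1)
                   for sv in product(dom, repeat=len(samp1)))
        right = any(all(f({**sigma, **dict(zip(samp2, sv))}) for f in omega2)
                    for sv in product(dom, repeat=len(samp2)))
        if left and not right:
            return sigma
    return None

# Two branch constraints per run ("above q1, at or below q2", the shape of a
# run that goes below and then above a noisy threshold) plus adjacency:
omega1 = [lambda s: s["T1"] > s["q1d1"], lambda s: s["T1"] <= s["q2d1"]]
omega2 = [lambda s: s["T2"] > s["q1d2"], lambda s: s["T2"] <= s["q2d2"]]
c = [lambda s: abs(s["q1d1"] - s["q1d2"]) <= 1,
     lambda s: abs(s["q2d1"] - s["q2d2"]) <= 1]
witness = orthogonal_witness(["q1d1", "q2d1", "q1d2", "q2d2"],
                             ["T1"], ["T2"], omega1, c, omega2)
```

The returned assignment, when one exists, plays the role of the $\sigma$ in the definition of orthogonality.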
\subsection{Strategy B}
This strategy symbolically executes the program in order to find a
specular trace for which no matter how we relate, within the budget, the various pairs of
samples $X^{i}_1, X^{i}_2$ in the two runs - using the relational
schema $X^i_1+K_i=X^i_2$ - the postcondition is always false. That is
\textsf{\textup{CRSE}}\xspace looks for specular relational traces $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\Omega}$
such that:
\[ \forall \vec{k}.\bigg[ (\Omega_1\wedge C(\vec{k}) \implies \Omega_2 )
\wedge \tocstr{\pair{m_1}{m_2}}{\epsilon_c \leq \epsilon}\bigg ] \implies \tocstr{\pair{m_1}{m_2}}{o_1\neq o_2}
\]
\subsection{Strategy C}
This strategy looks for relational traces for which the output
variable takes the same value on the two runs but too much of the
budget was spent. That is \textsf{\textup{CRSE}}\xspace looks for traces $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\Omega}$
such that:
\[
\forall \vec{k}. \bigg[ \Omega_1 \wedge C(\vec{k}) \wedge \Omega_2 \implies \tocstr{\pair{m_1}{m_2}}{o_1=o_2} \bigg] \implies \tocstr{\pair{m_1}{m_2}}{\epsilon_c >\epsilon}
\]
\else
\section{Strategies for counterexample finding}
Lemma \ref{lem:refutations} is hard to use to find counterexamples
in practice. For these reasons we will now
describe three strategies that can help in reducing the effort of
counterexample finding. These strategies help in isolating traces that could
potentially lead to violations. For this we first need some notation.
Given a set of constraints $s$ we define the triple
$\Omega=\langle\Omega_1, C(\vec{k}), \Omega_2\rangle\equiv\langle
\proj{1}{s}, s\setminus(\proj{1}{s}\cup \proj{2}{s}), \proj{2}{s}
\rangle$. We sometimes abuse notation
and consider $\Omega$ also as a set of constraints given by the union
of its three projections, and we will also
consider a set of constraints as a single proposition given by the conjunction of its elements.
The set $C(\vec{k})$ contains the relational constraints coming from
preconditions, from invariants, or from the rule
$\rulestyle{Proof-Step-Lap-Gen}$. The, potentially empty, vector $\vec{k}=K_1,\dots, K_n$
is the set of fresh symbols $K$ generated by that rule.
In the rest of the paper we will rely on the following simplifying assumption.
\begin{assumption}
\label{ass:assumption}
Let $c\in\mathcal{C}_{\textup{r}}$ with output variable $o$; then $c$ is such that $ \{\{\{\srconf{m_1}{m_2}{c}{[]}{[]}{s}\}\}\} \rightsquigarrow^{*}\mathscr{G}$ and
$ \forall \mathscr{H}{\langle \Omega_1, C(\vec{k}), \Omega_2\rangle}\in\mathcal{H}\in\mathscr{G}.\final{\mathcal{H}}\wedge o_1=o_2 \implies \Omega_1\Leftrightarrow \Omega_2$.
\end{assumption}
This assumption allows us to consider only programs for which
the output variable can assume the same value on both runs only if
the two runs follow the same branches. That is, if the two outputs differ then the two executions must have, at some point, taken different branches.
The following definition will be used to distinguish
relational traces which are reachable on one run but not on the
other. We call these traces \emph{orthogonal}.
\begin{definition} A final relational symbolic trace is orthogonal
when its set of constraints is such that $\exists\sigma.\sigma \not\models \Omega_2$
and $\sigma \models \Omega_1\wedge C(\vec{k})$. That is, a trace for which $\neg(\Omega_1\wedge C(\vec{k})\implies \Omega_2)$ is satisfiable.
\end{definition}
The next definition, instead, will be used to isolate relational
traces for which it is not possible that the left one is executed but the right one
is not. We call these traces \emph{specular}.
\begin{definition} A final rel. symbolic trace
is specular if
$\exists \vec{k}.\Omega_1\wedge C(\vec{k}) \implies \Omega_2$.
\end{definition}
The constraint $\Omega_1\wedge C(\vec{k})$ includes all the
constraints coming from the left projection's branching of the
symbolic execution and all the relational assumptions such as the
adjacency condition, and all constraints added by the potentially
fired $\rulestyle{Proof-Step-Lap-Gen}$ rule. A specular trace
is such that its left projection constraints plus the relational
assumptions imply the right projection constraints.
We will now describe our three strategies.
\noindent\emph{Strategy A}
In this strategy \textsf{\textup{CRSE}}\xspace uses only the rule
$\rulestyle{Proof-Step-Avoc}$ for sampling instructions, and it
searches for orthogonal relational traces. Under
Assumption \ref{ass:assumption}, if such a trace exists for a program then it
must be the case that the program can output one value on one run with
some probability while the same value has probability 0 of being output
on the second run. This implies that for some input the
program has an unbounded privacy loss. To implement this strategy
\textsf{\textup{CRSE}}\xspace looks for orthogonal relational traces
$\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\Omega}$ such that: $ \exists
\sigma. \sigma \models \Omega_1\wedge C(\vec{k})$ but $\sigma
\not\models \Omega_2$. Notice that using this strategy $\vec{k}$ will
always be empty, as the rule used for samplings does not introduce any
coupling between the two samples.
\noindent\emph{Strategy B}
This strategy symbolically executes the program in order to find a
specular trace for which no matter how we relate, within the budget, the various pairs of
samples $X^{i}_1, X^{i}_2$ in the two runs - using the relational
schema $X^i_1+K_i=X^i_2$ - the postcondition is always false. That is
\textsf{\textup{CRSE}}\xspace looks for specular relational traces $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\Omega}$
such that:
$ \forall \vec{k}.[ (\Omega_1\wedge C(\vec{k}) \implies \Omega_2 )
\wedge \tocstr{\pair{m_1}{m_2}}{\epsilon_c \leq \epsilon} ] \implies \tocstr{\pair{m_1}{m_2}}{o_1\neq o_2}$.
\noindent\emph{Strategy C}
This strategy looks for relational traces for which the output
variable takes the same value on the two runs but too much of the
budget was spent. That is \textsf{\textup{CRSE}}\xspace looks for traces $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\Omega}$
such that:
$\forall \vec{k}. [ \Omega_1 \wedge C(\vec{k}) \wedge \Omega_2 \implies \tocstr{\pair{m_1}{m_2}}{o_1=o_2} ] \implies \tocstr{\pair{m_1}{m_2}}{\epsilon_c >\epsilon}
$.
Of the
presented strategies only strategy A is sound with respect to counterexample
finding, while the other two apply when the algorithm cannot be proven
differentially private by any combination of the rules. In this second
case, though, \textsf{\textup{CRSE}}\xspace provides counterexamples which agree with other
refutation-oriented results in the literature. These strategies are hence termed
\emph{useful}, because they amount to heuristics that can be applied in
some situations.
\fi
\ifnum\full=1
\section{Examples}
In this section we will review the examples presented in Section \ref{sec:highlevel}
and variations thereof to show how \textsf{\textup{CRSE}}\xspace works.
\subsection{Unsafe sparse vector implementation: Algorithm \ref{alg:wrongsvt-1}}
\label{subsec:unsafe-svt-1}
In this section we will describe in more detail how
\textsf{\textup{CRSE}}\xspace deals with Algorithm \ref{alg:wrongsvt-1}.
The algorithm is not $\epsilon$-differentially private.
An easy fix would be to add noise to the output too, that is, substitute line 7 with
$\ass{o[i]}{\lapp{q[i](D)}{\frac{\epsilon}{2}}}$, giving us a $2\epsilon$-differentially private algorithm.
\begin{wrapfigure}[15]{L}{0.45\textwidth}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{img-svt3.png}
\caption{Two runs of Algorithm \ref{alg:wrongsvt-1} for 5 iterations}
\label{img:tracessvt}
\end{minipage}
\end{wrapfigure}
Algorithm \ref{alg:wrongsvt-1} satisfies assumption
\ref{ass:assumption} because it outputs the whole array $o$ which takes
values of the form $\bot^{i},t$ or $\bot^{n}$ for $1\leq i\leq n$ and
$t\in\mathbb{R}$. The array, hence, encodes the whole trace.
So if two runs of the algorithm output the same
value it must be the case that they followed the same branching
instructions.
Let us first notice that the algorithm is trivially $\epsilon$-differentially
private, for any $\epsilon$, when the number of iterations $n$ is less than
or equal to 4. Indeed, it is enough to apply the sequential
composition theorem and get the obvious bound $\frac{\epsilon}{4}\cdot
n$. \textsf{\textup{CRSE}}\xspace can prove this by applying the rule
$\rulestyle{Proof-Step-Lap-Gen}$ $n$ times, and then
choosing $K_1,\dots, K_n$ all equal to 0. This would imply
the statement of equality of the output variables spending less than
$\epsilon$. Hence, it is obvious that if there is a potential
counterexample it can only be found after 4 iterations. In fact, a
potential counterexample can be found in 5 iterations.
If we apply strategy B to this algorithm and follow the relational
symbolic trace that applies the rule
$\rulestyle{Proof-Step-Lap-Gen}$ for all the samplings, we can
isolate the relational specular trace shown in Figure
\ref{img:tracessvt}, which corresponds to the left execution
following the false branch for the first four iterations and then
following the true branch, setting the fifth element of the array
to the sampled value.
Let's denote the respective final relational configuration by $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{ s}$.
The set of constraints is as follows: $s=\langle \Omega_1, C(\vec{k}), \Omega_2\rangle=$
\begin{align*}
\langle \{T_1>S^1_1, T_1>S^2_1, T_1>S^3_1, T_1>S^4_1, T_1&\leq S^5_1\},\\
\{T_1+k_0=T_2, S^1_1+k_1=S^1_2, S^2_1+k_2=&S^2_2, \\
S^3_1+k_3=S^3_2, S^4_1+k_4=S^4_2, S^5_1+k_5&=S^5_2, \\
E_6=k_0\frac{\epsilon}{2}+\frac{\epsilon}{4}\displaystyle\sum_{i=1}^4k_i\dots\},\\
\{T_2>S^1_2, T_2>S^2_2, T_2>S^3_2, T_2>S^4_2, T_2&\leq S^5_2\}\rangle
\end{align*}
with $m_1(\epsilon_c)=m_2(\epsilon_c)=E_6, m_1(o)=[S^1_1,\dots, S^5_1], m_2(o)=[S^1_2,\dots, S^5_2], m_1(t)=T_1, m_2(t)=T_2$.
We can see that strategy B applies, because
\begin{equation*}
\models \forall \vec{k}.\bigg[ (\Omega_1\wedge C(\vec{k}) \implies \Omega_2 )
\wedge \tocstr{\pair{m_1}{m_2}}{\epsilon_c \leq \epsilon}\bigg ] \implies \tocstr{\pair{m_1}{m_2}}{o_1\neq o_2}
\end{equation*}
holds. The probability associated with these two traces can be
expressed as:
\begin{equation*}
\Gamma_{j}(\vec{q}(D_j), \epsilon, T, o)\equiv
\int_{-\infty}^{+\infty}\textup{pdflap}^{T}_{\frac{\epsilon}{2}}(\rho)\bigg(\displaystyle\prod_{i=1}^{4}\textup{cdflap}^{q[i](D_{j})}_{\frac{\epsilon}{4}}(\rho)\Pr[\hat{s}^{5}_{j}=o \wedge \hat{s}^{5}_{j}\geq \rho]\bigg) d\rho
\end{equation*}
where $j\in\{1,2\}$ denotes which run (left or right) we are
considering, $\vec{q}$ is the vector of queries such that $\mid
q[i](D_1)-q[i](D_2)\mid \leq 1$ for adjacent databases $D_1, D_2$ and
$1\leq i\leq 5$. Also, $\textup{pdflap}^{T}_{\frac{\epsilon}{2}}(\rho)$
denotes the probability density at the point $\rho$ of a random
variable with Laplace distribution
with mean $T$ and scale $\frac{2}{\epsilon}$, and, finally, $\textup{cdflap}^{q[i](D_{j})}_{\frac{\epsilon}{4}}(\rho)$ denotes the
cumulative distributive function at the point $\rho$, of a random variable with Laplace distribution with mean $q[i](D_{j})$ and
scale $\frac{4}{\epsilon}$. Let us define $\Gamma(\vec{q}(D_1), \vec{q}(D_2), \epsilon, T, o)\equiv \frac{\Gamma_1(\vec{q}(D_1), \epsilon, T, o)}{\Gamma_2(\vec{q}(D_2), \epsilon, T, o)}$; then we can see that $\Gamma([00001],[11110], 1, 0, 0)>e^{1}$, where
\[ [q[1](D_1),q[2](D_1),q[3](D_1),q[4](D_1),q[5](D_1)]=[00001]\]
and
\[ [q[1](D_2),q[2](D_2),q[3](D_2),q[4](D_2),q[5](D_2)]=[11110].\]
This pair of traces is, in fact, the same that was found in
\cite{Lyu-2017} for a slightly more general version of Algorithm
\ref{alg:wrongsvt-1}. Strategy B selects this relational trace
because, as already noticed in \cite{Barthe:2016} for a different
version of the algorithm, in order to make sure that the traces follow
the same branches, the coupling rules necessarily enforce that the two
samples released are different, preventing \textsf{\textup{CRSE}}\xspace from proving equality
of the output variables in the two runs.
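To make the discussion concrete, the following is a minimal Python sketch of how we read Algorithm \ref{alg:wrongsvt-1} from the constraints above: a noisy threshold with parameter $\frac{\epsilon}{2}$, noisy queries with parameter $\frac{\epsilon}{4}$, and the bug of releasing the noisy query value itself once it exceeds the noisy threshold. The function names and the inverse-CDF sampler are ours, not from the paper.

```python
import math
import random

def laplace(mean, scale, rng):
    # inverse-CDF sampling of a Laplace(mean, scale) random variable
    u = rng.random() - 0.5
    return mean - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def svt_buggy(queries, T, eps, rng):
    """Sketch of the unsafe sparse vector: stop at the first noisy query
    that reaches the noisy threshold and release that noisy value itself
    (instead of just the flag 'top')."""
    t_hat = laplace(T, 2.0 / eps, rng)        # threshold noise: budget eps/2
    out = []
    for q in queries:
        s_hat = laplace(q, 4.0 / eps, rng)    # each query: budget eps/4
        if s_hat >= t_hat:
            out.append(s_hat)                 # the bug: a real value leaks
            return out
        out.append("bot")
    return out
```

Under this reading, an output has exactly the shape $\bot^{i},t$ or $\bot^{n}$, so it encodes the branch history, matching Assumption \ref{ass:assumption}.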
\subsection{Unsafe sparse vector implementation: Algorithm \ref{alg:wrongsvt-2}}
Algorithm \ref{alg:wrongsvt-2} also satisfies Assumption
\ref{ass:assumption}: that is, the output univocally encodes the whole
history of the trace, and hence every trace corresponds injectively to
an event. The algorithm is trivially $\epsilon$-differentially private
for one iteration. This is because, intuitively, adding noise to the
threshold protects the result of the query as well as the branching
instruction, but only for one iteration; after that there is no
resampling. Indeed, the algorithm is not
$\epsilon$-differentially private, for any finite $\epsilon$, already
at the second iteration, and a witness for this can be found using \textsf{\textup{CRSE}}\xspace.
We can see this using strategy B. Thanks to
this strategy we will isolate a relational orthogonal trace, similarly
to what has been found in \cite{Lyu-2017} for the same algorithm.
\textsf{\textup{CRSE}}\xspace will unfold the loop twice, and it will scan all relational traces
to see if there is an orthogonal trace. In particular, consider the relational trace that corresponds
to the output $o_1=o_2=[\bot,\top]$,
that is, the trace with set of constraints $\langle \Omega_1, C(\vec{k}), \Omega_2 \rangle=$
\begin{align*}
\langle \{T_1>q_{1d1}, T_1\leq q_{2d1}\},\\
\{|q_{1d1}- q_{1d2}|\leq 1, |q_{2d1}- q_{2d2}|\leq 1\},\\
\{T_2>q_{1d2}, T_2\leq q_{2d2}\}\rangle.
\end{align*}
Since the vector $\vec{k}$ is empty we can omit it and just write
$C$. It is easy to see now that the substitution
$\sigma\equiv[q_{1d1}\mapsto 0, q_{2d1}\mapsto 1, q_{1d2}\mapsto 1,
q_{2d2}\mapsto 0]$ proves that this relational trace is orthogonal:
that is, $\sigma \models \Omega_1\wedge C$, but $\sigma \not
\models\Omega_2$. Indeed, if we consider two inputs $D_1,D_2$ and two
queries $q_1, q_2$ such that: $q_1(D_1)=q_2(D_2)=0,
q_2(D_1)=q_1(D_2)=1$ we get that the probability of outputting the
value $o=[\bot,\top]$ is positive in the first run, but it is 0 on the second.
This implies that the algorithm can only be proven to be $\infty$-differentially private.
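This orthogonal trace can also be checked empirically. The sketch below assumes a reading of Algorithm \ref{alg:wrongsvt-2} in which the threshold is perturbed once with parameter $\frac{\epsilon}{2}$ (the concrete scale is our assumption) and the raw query values are then compared against it. It counts how often the output $[\bot,\top]$ occurs on the two inputs above: the event has positive probability on $D_1$ but probability zero on $D_2$, since there it would require $\hat{t}>1$ and $\hat{t}\leq 0$ simultaneously.

```python
import math
import random

def laplace(mean, scale, rng):
    # inverse-CDF sampling of a Laplace(mean, scale) random variable
    u = rng.random() - 0.5
    return mean - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def svt2_buggy(queries, T, eps, rng):
    """Sketch of the unsafe variant: the threshold is noised once and
    never resampled, and raw query values are compared against it."""
    t_hat = laplace(T, 2.0 / eps, rng)
    return [q >= t_hat for q in queries]

def count_bot_top(queries, trials, seed):
    # frequency of the witness output [bot, top] over repeated runs
    rng = random.Random(seed)
    return sum(svt2_buggy(queries, 0.0, 1.0, rng) == [False, True]
               for _ in range(trials))
```

On $\vec{q}(D_1)=(0,1)$ the count is positive, while on $\vec{q}(D_2)=(1,0)$ it is zero on every run, which is exactly the orthogonality witnessed by $\sigma$.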
\else
\paragraph{\bf Examples}
\emph{Unsafe sparse vector implementation: Algorithm \ref{alg:wrongsvt-1}}
\label{subsec:unsafe-svt-1}
We already discussed why this algorithm is not $\epsilon$-differentially private.
Algorithm \ref{alg:wrongsvt-1} satisfies Assumption
\ref{ass:assumption} because it outputs the whole array $o$ which takes
values of the form $\bot^{i},t$ or $\bot^{n}$ for $1\leq i\leq n$ and
$t\in\mathbb{R}$. The array, hence, encodes the whole trace.
So if two runs of the algorithm output the same
value, it must be the case that they followed the same branching
instructions.
Let's first notice that the algorithm is trivially $\epsilon$-differentially
private, for any $\epsilon$, when the number of iterations $n$ is less than
or equal to 4.
\begin{wrapfigure}[12]{L}{0.37\textwidth}
\vspace{-8mm}
\begin{minipage}[t]{0.45\textwidth}
\includegraphics[width=0.8\textwidth]{img-svt3.png}
\caption{Two runs of Alg. \ref{alg:wrongsvt-1}.}
\label{img:tracessvt}
\end{minipage}
\end{wrapfigure}
Indeed it's enough to apply the sequential
composition theorem and get the obvious bound $\frac{\epsilon}{4}\cdot
n$. \textsf{\textup{CRSE}}\xspace can prove this by applying the rule
$\rulestyle{Proof-Step-Lap-Gen}$ $n$ times, and then
choosing $K_1,\dots, K_n$ all equal to 0. This yields
the statement of equality of the output variables while spending less than
$\epsilon$. A
potential counterexample can be found in 5 iterations.
If we apply strategy B to this algorithm and follow the relational
symbolic trace that applies the rule
$\rulestyle{Proof-Step-Lap-Gen}$ for all the samplings, we can
isolate the relational specular trace shown in Figure
\ref{img:tracessvt}, which corresponds to the left execution
following the false branch for the first four iterations and then
following the true branch and setting the fifth element of the array
to the sampled value.
Let's denote the respective final relational configuration by $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{ s}$.
The set of constraints is as follows: $s=\langle \Omega_1, C(\vec{k}), \Omega_2\rangle=
\langle \{T_1>S^1_1, T_1>S^2_1, T_1>S^3_1, T_1>S^4_1, T_1\leq S^5_1\},
\{T_1+k_0=T_2, S^1_1+k_1=S^1_2, S^2_1+k_2=S^2_2,
S^3_1+k_3=S^3_2, S^4_1+k_4=S^4_2, S^5_1+k_5=S^5_2,
E_6=k_0\frac{\epsilon}{2}+\frac{\epsilon}{4}\displaystyle\sum_{i=1}^4k_i\dots\},
\{T_2>S^1_2, T_2>S^2_2, T_2>S^3_2, T_2>S^4_2, T_2\leq S^5_2\}\rangle$
with $m_1(\epsilon_c)=m_2(\epsilon_c)=E_6, m_1(o)=[S^1_1,\dots, S^5_1], m_2(o)=[S^1_2,\dots, S^5_2], m_1(t)=T_1, m_2(t)=T_2$.
We can see that strategy B applies, because we have
$
\models \forall \vec{k}.[ (\Omega_1\wedge C(\vec{k}) \implies \Omega_2 )
\wedge \tocstr{\pair{m_1}{m_2}}{\epsilon_c \leq \epsilon} ] \implies \tocstr{\pair{m_1}{m_2}}{o_1\neq o_2}$.
Computing the probability associated with these two traces we can verify that we have a counterexample.
This pair of traces is, in fact, the same that was found in
\cite{Lyu-2017} for a slightly more general version of Algorithm
\ref{alg:wrongsvt-1}. Strategy B selects this relational trace
since, in order to make sure that the traces follow
the same branches, the coupling rules necessarily enforce that the two
samples released are different, preventing \textsf{\textup{CRSE}}\xspace from proving equality
of the output variables in the two runs.
\paragraph{Unsafe sparse vector implementation: Algorithm \ref{alg:wrongsvt-2}}
This algorithm also satisfies Assumption \ref{ass:assumption}. The
algorithm is $\epsilon$-differentially private for one
iteration. This is because, intuitively, adding noise to the threshold
protects the result of the query as well as the branching instruction,
but only for one iteration. The algorithm is not $\epsilon$-differentially private, for any finite
$\epsilon$ already at the second iteration, and a witness for this can
be found using \textsf{\textup{CRSE}}\xspace. We can see this using strategy B. Thanks to
this strategy we will isolate a relational orthogonal trace, similarly
to what has been found in \cite{Lyu-2017} for the same algorithm.
\textsf{\textup{CRSE}}\xspace will unfold the loop twice, and it will scan all relational
traces to see if there is an orthogonal trace. In particular, consider the
relational trace that corresponds to the output $o_1=o_2=[\bot,\top]$,
that is, the trace with set of constraints
$\langle \Omega_1, C(\vec{k}), \Omega_2 \rangle= \langle
\{T_1>q_{1d1}, T_1\leq q_{2d1}\}, \{|q_{1d1}- q_{1d2}|\leq 1,
|q_{2d1}- q_{2d2}|\leq 1\}, \{T_2>q_{1d2}, T_2\leq q_{2d2}\}\rangle$.
Since the vector $\vec{k}$ is empty we can omit it and just write
$C$. It is easy to see now that the substitution
$\sigma\equiv[q_{1d1}\mapsto 0, q_{2d1}\mapsto 1, q_{1d2}\mapsto 1,
q_{2d2}\mapsto 0]$ proves that this relational trace is orthogonal:
that is, $\sigma \models \Omega_1\wedge C$, but $\sigma \not
\models\Omega_2$. Indeed, if we consider two inputs $D_1,D_2$ and two
queries $q_1, q_2$ such that: $q_1(D_1)=q_2(D_2)=0,
q_2(D_1)=q_1(D_2)=1$ we get that the probability of outputting the
value $o=[\bot,\top]$ is positive in the first run, but it is 0 on the second.
Hence, the algorithm can only be proven to be $\infty$-differentially private.
\fi
\ifnum\full=1
\paragraph{A safe sparse vector implementation}
As already mentioned, two variations of Algorithm \ref{alg:wrongsvt-1} can be proven secure.
The first one substitutes $\rass{o[i]}{\lapp{q[i](D)}{\frac{\epsilon}{2}}}$ in place of line 7,
while the second one substitutes $\ass{o[i]}{\top}$.
The former version can be proven $2\epsilon$-dp, and the latter $\epsilon$-dp.
We will explain a proof of this last statement for a constant $n$, for example 5.
The proof presented is based on that in \cite{Barthe:2016}, but will
have a relational symbolic execution style instead of an apRHL$^{+}$ one.
\textsf{\textup{CRSE}}\xspace will try to prove the following postconditions:
\begin{enumerate}
\item $o_1=[\top,\bot,\dots,\bot] \implies o_2=[\top,\bot,\dots,\bot] \land\epsilon_c\leq \epsilon$
\item $o_1=[\bot,\top,\dots,\bot] \implies o_2=[\bot,\top,\dots,\bot] \land\epsilon_c\leq \epsilon$
\item $o_1=[\bot,\dots,\top,\bot] \implies o_2=[\bot,\dots,\top,\bot] \land\epsilon_c\leq \epsilon$
\item $\dots$
\item $o_1=[\bot,\dots,\bot,\top] \implies o_2=[\bot,\dots,\bot,\top] \land\epsilon_c\leq \epsilon$
\end{enumerate}
When trying to prove the $i$-th postcondition, the only interesting iteration is the $i$-th one. This is because
in all the other iterations the postcondition is vacuously true; moreover, the budget spent is $k_0\frac{\epsilon}{2}$
for the threshold, while for all the other sampling instructions we can spend 0 by just setting $k_j=q[j](D_2)-q[j](D_1)$ for all
$j\neq i$, that is, by coupling the samples as $\hat{s}_1+k_j=\hat{s}_2$, with $k_j=q[j](D_2)-q[j](D_1)$, spending $\lvert k_j+ q[j](D_1)-q[j](D_2)\rvert=0$.
At the $i$-th iteration, the samples are coupled as $\hat{s}_1+k_i=\hat{s}_2$, with $k_i=1$.
So if $\hat{s}_1\geq \hat{t}_1$ then also $\hat{s}_2\geq \hat{t}_2$, and, likewise, if
$\hat{s}_1< \hat{t}_1$ then also $\hat{s}_2< \hat{t}_2$. This implies that at the $i$-th iteration
we enter the true branch on the right run iff we enter the true branch on the left one,
spending $\lvert k_i + q[i](D_1)-q[i](D_2)\rvert\frac{\epsilon}{4}\leq 2\frac{\epsilon}{4}$.
The total is at most $\epsilon$.
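The budget accounting above can be replayed numerically. In the sketch below (our own notation), the per-query coupling cost is taken as $\lvert k_j + q[j](D_1)-q[j](D_2)\rvert\frac{\epsilon}{4}$ and the threshold cost as $\lvert k_0\rvert\frac{\epsilon}{2}$, with a threshold shift $k_0=1$; the concrete query values are arbitrary adjacent inputs chosen by us.

```python
def coupling_cost(eps, k0, ks, q_d1, q_d2):
    """Total privacy budget consumed by the coupling described above:
    eps/2 per unit of threshold shift, plus eps/4 * |k_j + q_j(D1) - q_j(D2)|
    for each coupled query sample (our reading of the rule's cost)."""
    cost = abs(k0) * eps / 2.0
    for k, a, b in zip(ks, q_d1, q_d2):
        cost += abs(k + a - b) * eps / 4.0
    return cost

eps = 1.0
q_d1 = [0.3, 1.0, 0.2, 0.7, 0.5]
q_d2 = [0.9, 0.4, 1.1, 0.1, 1.2]          # adjacent: |q_j(D1)-q_j(D2)| <= 1
i = 4                                      # the iteration singled out above
ks = [b - a for a, b in zip(q_d1, q_d2)]   # zero-cost shifts for j != i
ks[i] = 1.0                                # k_i = 1 aligns the branch at step i
total = coupling_cost(eps, 1.0, ks, q_d1, q_d2)
```

With these shifts the total stays within the budget $\epsilon$, as the proof sketch requires.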
\paragraph{Unsafe Laplace mechanism: Algorithm \ref{alg:wrongnoise} }
Algorithm \ref{alg:wrongnoise} is not $\epsilon$-differentially
private for any finite $\epsilon$. The intuition is that not enough
noise is added to hide the difference between the results of a query applied
to two adjacent databases. In any potential proof based
on the coupling rules, this translates into using too much of the budget.
The program of Algorithm \ref{alg:wrongnoise} has only one possible
final relational trace: $\srconf{m_1}{m_2}{{\tt skip}}{p_1}{p_2}{\langle
\Omega_1, C(\vec{k}), \Omega_2\rangle}$. Since there are no branching
instructions, $\Omega_1=\{ \proj{1}{2E} >0\}$ and $\Omega_2=\emptyset$,
where $m_1(\epsilon)=m_2(\epsilon)=E$. Since there is one sampling
instruction, $C(\vec{k})$ will include the following set of constraints:
$\{|Q_{d1}- Q_{d2}|\leq 1, R_1+K=R_2, E_{c}=\mid K\mid\cdot 2\cdot
K'\cdot E,O_1=R_1+ Q_{d1}, O_2= R_2+ Q_{d2}, E_c=K'\cdot E\}$, with
$m_1(o)=O_1, m_2(o)=O_2, m_1(\epsilon_c)=m_2(\epsilon_c)=E_c$.
Intuitively, we can see that, given this set of constraints, if it has
to be the case that $O_1=O_2$, then $Q_{d1}-Q_{d2}=K$. But
$Q_{d1}-Q_{d2}$ can be 1 and hence $E_{c}$ is at least 2. This tells
us that if we want to equate the two output variables we need to spend
at least twice the budget. Any relational input satisfying the precondition will give us
a counterexample, provided the two projections are different.
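The factor of two can be seen directly on the Laplace densities. The sketch below assumes the bug amounts to using noise of scale $\frac{1}{2\epsilon}$ where scale $\frac{1}{\epsilon}$ would be needed for a sensitivity-1 query (the concrete scale is our assumption, not taken from Algorithm \ref{alg:wrongnoise}); the worst-case density ratio is then $e^{2\epsilon}$, exceeding the $e^{\epsilon}$ bound.

```python
import math

def lap_pdf(x, mean, scale):
    # density of a Laplace(mean, scale) random variable at x
    return math.exp(-abs(x - mean) / scale) / (2.0 * scale)

eps = 1.0
scale = 1.0 / (2.0 * eps)     # half the noise that eps-dp would require
q1, q2 = 0.0, 1.0             # query results on two adjacent databases
o = 1.0                       # a concrete output witnessing the violation
ratio = lap_pdf(o, q2, scale) / lap_pdf(o, q1, scale)
```

At this output the privacy-loss ratio equals $e^{2\epsilon}$, i.e., twice the budget is needed to equate the outputs, matching the constraint analysis.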
\paragraph{A safe Laplace mechanism}
By substituting line 2 in Algorithm \ref{alg:wrongnoise} with $\rass{\rho}{\lapp{0}{\epsilon}}$
we get an $\epsilon$-dp algorithm. Indeed, when executing that line, \textsf{\textup{CRSE}}\xspace would generate the following
constraint: $p_1+k_0=p_2 \land\mid k_0 + 0 - 0\mid\leq k_1\land o_1=v_1+p_1\land o_2=v_2+p_2$,
which, by instantiating $k_0=0,k_1=v_2-v_1$, implies $o_1=o_2\land \epsilon_c\leq \epsilon$.
\else
\paragraph{A safe sparse vector implementation}
Algorithm \ref{alg:wrongsvt-1} can be proven $\epsilon$-differentially private if we replace line 7 with $\ass{o[i]}{\top}$.
Let us consider a proof of this statement for $n=5$.
\textsf{\textup{CRSE}}\xspace will try to prove the following 5 sentences:
$o_1=[\top,\bot,\dots,\bot] \implies o_2=[\top,\bot,\dots,\bot] \land\epsilon_c\leq \epsilon$, $\dots$,
$o_1=[\bot,\dots,\bot,\top] \implies o_2=[\bot,\dots,\bot,\top] \land\epsilon_c\leq \epsilon$.
The only interesting iteration is the $i$-th one. This is because
in all the other iterations the postcondition is vacuously true; moreover, the budget spent is $k_0\frac{\epsilon}{2}$
for the threshold, while for all the other sampling instructions we can spend 0 by just setting $k_j=q[j](D_2)-q[j](D_1)$ for all
$j\neq i$, that is, by coupling the samples as $\hat{s}_1+k_j=\hat{s}_2$, with $k_j=q[j](D_2)-q[j](D_1)$, spending $\lvert k_j+ q[j](D_1)-q[j](D_2)\rvert=0$.
At the $i$-th iteration, the samples are coupled as $\hat{s}_1+k_i=\hat{s}_2$, with $k_i=1$.
So if $\hat{s}_1\geq \hat{t}_1$ then also $\hat{s}_2\geq \hat{t}_2$, and, likewise, if
$\hat{s}_1< \hat{t}_1$ then also $\hat{s}_2< \hat{t}_2$. This implies that at the $i$-th iteration
we enter the true branch on the right run iff we enter the true branch on the left one,
spending $\lvert k_i + q[i](D_1)-q[i](D_2)\rvert\frac{\epsilon}{4}\leq 2\frac{\epsilon}{4}$.
The total is at most $\epsilon$.
\fi
\ifnum\full=1
\section{Related Works}
The closest work to ours is \cite{Barthe2019AutomatedMF}, where the authors
devise a decision logic for differential privacy. The logic allows one to soundly prove
or disprove $\epsilon$ and $(\epsilon,\delta)$ differential privacy
by encoding the semantics of the program into a decidable fragment of the first-order
theory of the reals with exponentiation. The programs considered
don't allow assignments to real and integer variables inside the body
of while loops.
Two other works closely related to ours are \cite{DingWWZK18} and
\cite{Bichsel:2018}. Their approach to finding counterexamples to
differential privacy differs from ours in two main ways. First of all,
they use a statistical approach, approximating the output
distributions of the program on two related inputs and then smartly
checking whether for some events these output distributions provide an
out-of-bound ratio. Secondly, they are by nature numerical methods
providing results stating that an algorithm is not $\epsilon$-differentially
private for some concrete $\epsilon$. These kinds
of results obviously imply that the algorithm is not differentially
private for every other concrete $\epsilon'$ such that
$\epsilon'>\epsilon$, but they can only suggest that the algorithm is also not private
for $\epsilon'$ such that $\epsilon'< \epsilon$, if that
is indeed the case. Our work instead is a purely symbolic
technique which provides results stating that an algorithm is not
$\epsilon$-differentially private for any finite $\epsilon$. An
advantage of our approach is obviously the speed of the analysis, which
does not require any sampling.
In \cite{LiuWZ18} the authors add model checking to the tools for finding counterexamples
to differential privacy. The main difference with our work is that, as usual, model checking analyzes a
model of the code and not directly the code. Also, in the specific case
of the sparse vector algorithm family, their work seems to be able to handle only
a finite number of iterations.
This work can be seen as a non-trivial probabilistic extension of the
framework presented in \cite{Farina2019}, where sampling instructions
in the relational symbolic semantics are handled through an
adaptation, in a symbolic execution framework, of the apRHL$^{+}$
rules first presented in \cite{Barthe:2016}. Their logic
proves judgments implying differential privacy but does not help in
finding counterexamples when the program is not private.
This work is also close to \cite{Albarghouthi:2017}, where the authors
devised a framework to automatically discover proofs of privacy using
coupling rules, but again their work does not help in refuting the privacy of
buggy programs.
\else
\section{Related Works}
The closest works to ours are
\cite{Albarghouthi:2017,Barthe2019AutomatedMF}. In
\cite{Albarghouthi:2017} the authors devised a synthesis framework to
automatically discover proofs of privacy using coupling rules similar
to ours. Their approach is not based on symbolic execution but on
synthesis techniques. Moreover, their framework cannot be directly used
to refute the privacy of buggy programs.
In \cite{Barthe2019AutomatedMF} the authors
devise a decision logic for differential privacy which can soundly prove
or disprove differential privacy. The programs considered
don't allow assignments to real and integer variables inside the body
of while loops. While their technique is different from ours, their logic could potentially be integrated into our framework as a decision procedure.
Another related work is \cite{LiuWZ18}, where the authors study model checking as a tool for finding counterexamples
to differential privacy. The main difference with our work is in the basic technique and in the fact that model checking reasons about a
model of the code, rather than the code itself. They also consider the above-threshold example, and they are able to handle only
a finite number of iterations.
Other works have studied how to find violations of differential privacy~\cite{DingWWZK18,Bichsel:2018}. Their approach differs from ours in two ways: first,
they use a statistical approach; second, they look at concrete values of
the data and the privacy parameters. By using an approach based on symbolic
execution we are able to reason about symbolic values, and so consider
$\epsilon$-differential privacy for any finite $\epsilon$. Moreover, our
technique does not need sampling, although we still need to compute
distributions to confirm a violation.
Our work can be seen as a probabilistic extension of the
framework presented in \cite{Farina2019}, where sampling instructions
in the relational symbolic semantics are handled through rules inspired by
the logic apRHL$^{+}$~\cite{Barthe:2016}. This logic can be used to prove
differential privacy but does not directly help in
finding counterexamples when the program is not private.
\fi
\ifnum\full=1
\section{Conclusion and Future Work}
In this work we presented \textsf{\textup{CRSE}}\xspace: a symbolic execution engine framework
which integrates relational reasoning and probabilistic couplings.
The framework allows both proving differential privacy of the best
known differentially private algorithms, and refuting buggy
versions thereof which are particularly tricky to distinguish from the
correct ones. While refuting, it is also able to isolate traces and
events which lead to counterexamples to differential privacy. When
proving, \textsf{\textup{CRSE}}\xspace uses an approach similar to apRHL$^+$ but follows a
strong postcondition approach instead of the more standard weakest
precondition style, as is common in symbolic execution
style proofs. \textsf{\textup{CRSE}}\xspace uses refuting principles, or
strategies, to isolate potentially \emph{dangerous} traces. Of the
presented strategies only one is sound with respect to counterexample
finding, while the other two apply when the algorithm cannot be proven
differentially private by any combination of the rules. In this second
case, though, \textsf{\textup{CRSE}}\xspace provides counterexamples which agree with other
refutation-oriented results in the literature.
Future work includes interfacing \textsf{\textup{CRSE}}\xspace more efficiently with numeric
solvers to find maxima of ratios of probabilities of traces.
\else
\section{Conclusion}
We presented \textsf{\textup{CRSE}}\xspace: a symbolic execution engine framework
integrating relational reasoning and probabilistic couplings.
The framework allows both proving and refuting differential privacy.
When
proving, \textsf{\textup{CRSE}}\xspace can be seen as a strong postcondition calculus. When refuting, \textsf{\textup{CRSE}}\xspace
uses several strategies to isolate potentially \emph{dangerous} traces.
Future work includes interfacing \textsf{\textup{CRSE}}\xspace more efficiently with numeric
solvers to find maxima of ratios of probabilities of traces.
\fi
\bibliographystyle{splncs04}
\ifnum\full=1
quant-ph/0601051
\section{Introduction}\label{sec1}
A realistic quantum system is never isolated: it is immersed in a
surrounding environment (also called a bath or reservoir) and interacts
continuously with it. Such a system, whose coupling to
the environment cannot be ignored, is called an open quantum system. Generally, the
environment consists of a huge number of degrees of freedom; it may even be
the entire world outside the open quantum system under
consideration. In fact, we might not know the exact state of the outside
world, having only some statistical information describing it.
Nevertheless, we would like a reliable and effective theory
of open system dynamics under the influence of the environment.
The basic idea of the quantum theory of open systems is that the
open system of interest and its surrounding environment form a total
composite system; or, vice versa, a composite system can be decomposed
into an open system of interest and a surrounding environment. The
key tasks of the quantum theory of open systems are to determine the
interaction between the open system and its environment and to build
physical models of the open system and its environment. Open
system dynamics is simply the law governing how this open quantum system
evolves in time, together with its solution at any given time.
Open quantum systems and their dynamics are very important for many
branches of quantum theory such as quantum optics
\cite{Carmichael,Plenio}, condensed matter theory, quantum
information and computing \cite{Nielson,Perskill}, and, more concretely,
quantum decoherence, quantum measurement \cite{Zurek,Schlosshauer},
quantum dissipation \cite{Weiss,Strunz}, quantum transport
\cite{Haug}, quantum chaos \cite{Haake}, etc. The study of open system
dynamics is helpful for understanding some very fundamental problems
in physics, for example, the transition from the quantum to the classical
world.
A variety of different formal techniques have been developed and
used in dealing with open quantum systems. The interested reader may
find them in the reviews and books cited above. Here, we intend to
start with ``the first principle'' of quantum mechanics, that is, the
Schr\"odinger equation or the von Neumann equation, and then try to
build a theoretical formalism including the general and explicit
forms of the equation of motion, the dynamical solution, and the perturbation
theory of open systems.
It is clear that such a ``first principle'' scheme might not be
suitable for cases when one cannot clearly identify the environment
model and/or the system-environment coupling form because the
environment is too huge and too complicated. However, our
conclusions might be helpful for building models of such
systems. Moreover, one possible way to avoid this difficulty is
to use the Milburn dynamics \cite{Milburn}. That is, the environment
is separated into a near and a remote part; the Hamiltonians of
the near environment (often with finitely many degrees of freedom) and
its coupling to the open system of interest are assumed to be clearly
known, while the influence of the remote environment on the
system of interest is captured by an extra term in the Milburn equation
of motion compared with the von Neumann equation of motion. Similarly,
we successfully obtain the general and explicit solution of the Milburn
dynamics of the system of interest according to our scheme.
Because of the dissipative nature of open systems, we must turn to
the density matrix for a proper description, whether the initial
state is pure or mixed. Actually, since we are interested in the
properties of the open system only, it is appropriate to study the
time evolution of the reduced density matrix, its equation of motion, and the
solution of that equation. Here, the reduced density matrix describing the open
system is obtained by tracing out the degrees of freedom of the
environment from the total (system plus environment) density matrix.
Since the system and environment generally become entangled as the
system evolves in time, directly solving the Schr\"odinger equation
or the von Neumann equation of the total system is a formidable task
using existing methods. Traditionally, this problem is studied by
perturbation theory in the system-environment coupling. One
often takes the interaction between the system and its environment as
the perturbed part and then uses the interaction picture to derive
the master equation of open systems via physical approximations
such as the Born-Markov ones and others. If an open system is
exactly solvable, the coupling $H_{\rm SE}$ is weak, the evolution
time is short enough, and the physical approximations used are
indeed appropriate, this has been proved to be an effective method.
However, when the above conditions are not sufficiently satisfied,
the problem gets complicated and perhaps leads to some difficulties,
although some formal techniques have been developed and used in
order to overcome some possible shortcomings. For generality and
reliability in theory, we feel that we have to consider whether
these approximations are necessary: without these approximations,
can we obtain the formalism of open system dynamics? The conclusions
obtained here answer these questions.
In this paper, we will provide an amelioration of the existing
scheme of open system dynamics and try to build a theoretical
formalism using our recent investigations on quantum mechanics in
general quantum systems \cite{My1,My2}. We first obtain the exact
solution of open systems including all order approximations of
perturbation, and then give an improved form of the perturbed solution
of open systems absorbing partial contributions from the high-order,
even all-order, approximations of perturbation. Only under the
factorizing initial condition, we derive the exact master
equation and its perturbed form via the standard cut-off
approximation of perturbation. Moreover, we propose an improved
form of the perturbed master equation. In particular, based on our master
equation, we re-derive the Redfield master equation without using
the Born-Markov approximation, and we point out the differences between
our master equation and the existing ones. We also obtain the solution of
open system dynamics in the Milburn model. In order to illustrate
our open system dynamics, we study the Zurek model of a two-state open
system and its extension with two transverse fields. We believe
that our open system dynamics can be applied to more open systems given
its generality and clarity; its calculations are simpler and
more efficient, and its results are more accurate and more reliable than
those of the existing scheme.
This paper is arranged as follows. After the introduction in Sec. \ref{sec1},
in Sec. \ref{sec2}, by virtue of a system-environment
separated representation, we obtain a general and explicit
solution of open systems including all order approximations; in Sec.
\ref{sec3}, we obtain the improved form of the perturbed solution of open
systems, which absorbs partial contributions from the high-order,
even all-order, approximations of perturbation; in Sec. \ref{sec4} we
deduce the exact master equation of open systems only under the
factorizing initial condition; in Sec. \ref{sec5} we get the
perturbed form of our master equation and its amelioration; in Sec.
\ref{sec6}, based on our master equation, we re-derive the Redfield
master equation without using the Born-Markov approximation, and we
point out the differences between our master equation and the existing
ones; in Sec. \ref{sec7}, we obtain the solution of open system
dynamics in the Milburn model for Milburn-type closed
total systems, which implies that our solution and methods are
applicable to more general open systems; in Sec. \ref{sec8}, we
study the Zurek model of a two-state open system and its extension with
two transverse fields; in Sec. \ref{sec9}, we summarize our
conclusions and give some discussions.
\section{General and explicit solution of open system
dynamics}\label{sec2}
In this section, we will derive a general and explicit solution
of open systems by using our recent work on the exact solution of
general quantum systems \cite{My1}.
As is well known, if we assume that the open quantum
system of interest and its environment together form a closed (or isolated)
larger composite system, that is, a total system, we can take this
total system to obey the Schr\"odinger equation or the von
Neumann equation, respectively, for a pure state
$\ket{\Psi_{tot}(t)}$ or a mixed state ${\rho}_{tot}(t)$, that is,
\begin{eqnarray}\label{firstse} {\rm i}\frac{\partial}{\partial
t}\ket{\Psi_{tot}(t)}&=&H_{tot}\ket{\Psi_{tot}(t)},\\
\label{firstvne}
\dot{\rho}_{tot}(t)&=&-{\rm i}\left[H_{tot},\rho_{tot}(t)\right],\end{eqnarray}
where the total system Hamiltonian $H_{tot}$ that we consider here
is the sum of the Hamiltonian $H_{\rm S}$ of the open system of interest
and the Hamiltonian $H_{\rm E}$ of its surrounding environment,
plus an interaction $H_{\rm SE}$ between the system and the
environment, that is, \begin{equation}
H_{tot}=H_{\rm{S}}+H_{\rm{E}}+H_{\rm{SE}}.\end{equation} Note that the total
system Hilbert space $\mathcal{H}_{tot}$ is defined by the direct
product $\mathcal{H}_{\rm S}\otimes \mathcal{H}_{\rm E}$ of open
system Hilbert space $\mathcal{H}_{\rm S}$ and its environment
Hilbert space $\mathcal{H}_{\rm E}$. Here and in the following, we
will discuss time-independent Hamiltonian and we have taken
$\hbar=1$ for simplicity.
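As a concrete toy instance of this decomposition, the following sketch builds a two-qubit total Hamiltonian with NumPy (the specific Pauli operators and the coupling constant are our illustrative choices, not a model from the text) and checks that $H_{tot}$ acts on $\mathcal{H}_{\rm S}\otimes \mathcal{H}_{\rm E}$ and is hermitian:

```python
import numpy as np

# Pauli matrices used as illustrative building blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

h_S = 0.5 * sz                     # system Hamiltonian on H_S
h_E = 0.8 * sz                     # environment Hamiltonian on H_E
g = 0.1                            # system-environment coupling constant

H_S  = np.kron(h_S, I2)            # H_S  = h_S (x) I_E
H_E  = np.kron(I2, h_E)            # H_E  = I_S (x) h_E
H_SE = g * np.kron(sx, sx)         # bilinear interaction term
H_tot = H_S + H_E + H_SE           # H_tot = H_S + H_E + H_SE
```

The Kronecker products implement the direct-product structure of $\mathcal{H}_{tot}$ stated above.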
In an open system dynamics, a key difficulty to lead to the problem
becomes intractable is that there is the interaction between the
open system and its environment with huge degree of freedom. With
time evolution, the open system inevitably entangles with its
environment. Therefore, we starts from a system-environment
separated representation (SESR). This representation is beneficial
for obtaining the general and explicit solution of open system
dynamics as well as proposing the improved scheme of perturbed
theory \cite{My2}, because in the SESR we can conveniently trace off
the degree of freedom of environment. Introducing the SESR is a
simple and natural idea, and we will see it is also very useful. To
this purpose, we first divide $H_{tot}$ into two parts \begin{equation}
H_{tot}={H}_{tot0}+{H}_{tot1}, \end{equation} and, without loss of generality,
we denote \begin{eqnarray}\label{H0form} {H}_{tot0}&=&H_{\rm S0}+H_{\rm E0}+
H_{\rm SE0},\quad \label{H1form} {H}_{tot1}=H_{\rm S1}+H_{\rm E1}+
H_{\rm SE1}.\end{eqnarray} It is clear that \begin{eqnarray} H_{\rm S0}&=&h_{\rm
S0}\otimes I_{\rm E},\quad H_{\rm E0}=I_{\rm S}\otimes h_{\rm
E0},\end{eqnarray} while we need the coupling Hamiltonian to have the following
form \begin{eqnarray} H_{\rm SE0}&=&\sum_{m,n}c_{mn} S_{m0}\otimes B_{n0}.
\end{eqnarray} This is general enough, since we do not restrict the forms of
$S_{m0}$ and $B_{n0}$. In the above expressions, the total Hilbert
space is $\mathcal{H}_{tot}=\mathcal{H}_{\rm S}\otimes
\mathcal{H}_{\rm E}$, $I_{\rm S}$ and $I_{\rm E}$ are, respectively,
the identity operators in $\mathcal{H}_{\rm S}$ and $
\mathcal{H}_{\rm E}$, and $c_{mn}$ are coupling constants between
the open system and its environment. Note that $H_{\rm S0}$ and
$H_{\rm E0}$ are always hermitian as usual. In addition, we require
$H_{\rm{SE}0}$ to be hermitian as well. In fact, because
$H_{\rm S0}$ and $H_{\rm E0}$ commute, the SESR always exists. The
aim of adding $H_{\rm SE0}$ is to obtain better precision and to
simplify the perturbed part when passing to perturbation theory. It
must be pointed out that the general principle for dividing $H$ into
two parts is to let as many terms as possible belong to ${H}_0$,
but the precondition is that the following commutation relations hold:
\begin{equation} [h_{\rm S0},S_{m0}]=0, \quad [h_{\rm E0},\sum_{n}
c_{mn}B_{n0}]=0, \quad \mbox{or}\quad [h_{\rm S0},\sum_m
c_{mn}S_{m0}]=0, \quad [h_{\rm E0},B_{n0}]=0. \end{equation} Moreover, the
eigenvalue problem of $H_{tot0}$ is solvable. In fact, this
solvability implies that $h_{\rm S0}$ and $h_{\rm E0}$ are solvable;
then $h_{\rm S0}$ and $S_{m0}$, and $h_{\rm E0}$ and $\sum_{n} c_{mn}B_{n0}$,
have common eigenvectors, or $h_{\rm S0}$ and
$\sum_{m}c_{mn}S_{m0}$, and $h_{\rm E0}$ and $B_{n0}$, have common
eigenvectors, i.e., we have, respectively, \begin{eqnarray} h_{\rm S0}
\ket{\phi^{\gamma}}&=&E_{\gamma}\ket{\phi^{\gamma}},\quad S_{m0}
\ket{\phi^{\gamma}}=s_{m\gamma}\ket{\phi^{\gamma}},\quad h_{\rm E0}
\ket{\chi_v}=\varepsilon_v\ket{\chi_v}, \quad \sum_{n} c_{mn}B_{n0}
\ket{\chi_v}=r_{mv}\ket{\chi_v} \end{eqnarray} and \begin{eqnarray} h_{\rm S0}
\ket{\phi^{\gamma}}&=&E_{\gamma}\ket{\phi^{\gamma}},\quad
\sum_{m}c_{mn}S_{m0}
\ket{\phi^{\gamma}}=s_{n\gamma}\ket{\phi^{\gamma}},\quad h_{\rm E0}
\ket{\chi_v}=\varepsilon_v\ket{\chi_v}, \quad B_{n0}
\ket{\chi_v}=r_{nv}\ket{\chi_v}. \end{eqnarray}
They indicate that the eigenvectors of $H_{tot0}$, or the common
eigenvectors of $H_{\rm S0}$, $H_{\rm E0}$ and $H_{\rm SE0}$ are
\begin{equation} \ket{\Phi^{\gamma
v}}=\ket{\phi^{\gamma}}\otimes\ket{\chi^{v}},\end{equation} which span a
separate representation of the system and the environment, and it is
clear that \begin{equation} H_{tot0}\ket{\Phi^{\gamma v}}=E_{\gamma
v}\ket{\Phi^{\gamma v}},\end{equation} \begin{equation} E_{\gamma v}=
E_{\gamma}+\varepsilon_v+\sum_{m} s_{m\gamma} r_{mv}\quad \mbox{or}
\quad E_{\gamma v}= E_{\gamma}+\varepsilon_v+\sum_{n} s_{n\gamma}
r_{nv}. \end{equation}
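The additive form of $E_{\gamma v}$ is easy to check numerically. The following is a minimal sketch (a hypothetical two-level system and two-level environment, not from the paper, in which all operators are diagonal so the required commutators vanish trivially), confirming that the spectrum of $H_{tot0}$ is $\{E_\gamma+\varepsilon_v+c\,s_\gamma r_v\}$:

```python
import numpy as np

# Hypothetical diagonal operators: [h_S0, S0] = 0 and [h_E0, B0] = 0 hold trivially.
h_S0 = np.diag([0.0, 1.0])   # system eigenvalues E_gamma
h_E0 = np.diag([0.0, 0.5])   # environment eigenvalues eps_v
S0 = np.diag([1.0, -1.0])    # eigenvalues s_gamma
B0 = np.diag([1.0, -1.0])    # eigenvalues r_v
c = 0.1                      # coupling constant

I2 = np.eye(2)
H0 = np.kron(h_S0, I2) + np.kron(I2, h_E0) + c * np.kron(S0, B0)

# Predicted additive spectrum E_gamma + eps_v + c * s_gamma * r_v
expected = sorted(E + e + c * s * r
                  for E, s in zip(np.diag(h_S0), np.diag(S0))
                  for e, r in zip(np.diag(h_E0), np.diag(B0)))
computed = sorted(np.linalg.eigvalsh(H0))
assert np.allclose(computed, expected)
```

The same check works for any commuting choice of $(h_{\rm S0},S_{m0})$ and $(h_{\rm E0},B_{n0})$, since they can be simultaneously diagonalized.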
It must be emphasized that, in more general cases, the principle
governing the Hamiltonian split is not solvability alone. If the
cut-off approximation of perturbation theory is necessary, the
off-diagonal elements of the perturbing Hamiltonian matrix
$H_{tot1}$ in the SESR must be small enough compared with the
diagonal elements of the $H_{tot}=H_{tot0}+H_{tot1}$ matrix in the
same representation, according to our improved scheme of
perturbation theory. In addition, if there are degeneracies, the
Hamiltonian split is also restricted by the condition that the
degeneracies can be completely removed via the usual diagonalization
procedure in the degenerate subspaces together with our Hamiltonian
redivision; or, if remaining degeneracies are allowed, the
off-diagonal elements of the perturbing Hamiltonian matrix between
any two degenerate levels must always vanish, in order for our
improved scheme of perturbation theory to work well \cite{My2}. An
example is studied in Sec. \ref{sec8}.
From the formal solution of the von Neumann equation of the total
system \begin{equation} \label{gsrhos}\rho_{tot}(t)={\rm e}^{-{\rm i} H_{tot}
t}\rho_{tot}(0){\rm e}^{{\rm i} H_{tot} t},\end{equation} and our expression of the time
evolution operator \cite{My1} \begin{eqnarray} {\rm e}^{-{\rm i} H_{tot}
t}&=&\sum_{l=0}^\infty \mathcal{A}_l(t),\end{eqnarray} it immediately follows
that the time evolution of the total system density matrix
is \begin{equation} \rho_{tot}(t)=\sum_{k,l=0}^\infty
\mathcal{A}_k(t)\rho_{tot}(0)\mathcal{A}_l(-t)=\sum_{k,l=0}^\infty
\mathcal{A}_k(t)\rho_{tot}(0)\mathcal{A}^\dagger_l(t).\end{equation}
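As a sanity check (a sketch with an arbitrary $4\times 4$ Hermitian $H_{tot}$, not tied to any particular model), the unitary conjugation $\rho_{tot}(t)={\rm e}^{-{\rm i}H_{tot}t}\rho_{tot}(0){\rm e}^{{\rm i}H_{tot}t}$, which the series $\sum_l\mathcal{A}_l(t)$ resums, preserves the trace, hermiticity and purity of the density matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
H = (H + H.T) / 2                        # toy Hermitian total Hamiltonian

psi = rng.standard_normal(4)
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi)                # pure initial total state

t = 0.7
U = expm(-1j * H * t)                    # e^{-i H t}
rho_t = U @ rho0 @ U.conj().T            # rho(t) = U rho(0) U^dagger

assert np.isclose(np.trace(rho_t).real, 1.0)   # trace preserved
assert np.allclose(rho_t, rho_t.conj().T)      # hermiticity preserved
```

Truncating the sum over $k,l$ at a finite order reproduces this unitary evolution only approximately, which is the starting point of the perturbative treatment below.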
In the SESR, we have \begin{eqnarray} \label{gestos}
\rho_{tot}(t)&=&\sum_{\beta, u,\beta^\prime,u^\prime}\sum_{\gamma,
v,\gamma^\prime,v^\prime}\sum_{k,l=0}^\infty
A_k^{\gamma v,\beta u}(t)\rho_{tot}^{\beta u,\beta^\prime
u^\prime}(0)A_l^{\beta^\prime u^\prime, \gamma^\prime
v^\prime}(-t)\ket{\Phi^{\gamma v}}\bra{\Phi^{\gamma^\prime
v^\prime}}\\
&=& \sum_{\beta, u,\beta^\prime,u^\prime}\sum_{\gamma,
v,\gamma^\prime,v^\prime}\sum_{k,l=0}^\infty
A_k^{\gamma v,\beta u}(t)\rho_{tot}^{\beta u,\beta^\prime
u^\prime}(0)A_l^{\beta^\prime u^\prime, \gamma^\prime
v^\prime}(-t)\left[\ket{\phi^{\gamma}}\bra{\phi^{\gamma^\prime}}\right]
\otimes \left[\ket{\chi^v}\bra{\chi^{v^\prime}}\right],\end{eqnarray} where
\begin{eqnarray} A_l^{\gamma v,\gamma^\prime v^\prime}(t)&=&\bra{\Phi^{\gamma
v}}\mathcal{A}_l(t)\ket{\Phi^{\gamma^\prime v^\prime}},\\
\rho_{tot}^{\gamma v,\gamma^\prime v^\prime}(0)&=&\bra{\Phi^{\gamma
v}}\rho_{tot}(0)\ket{\Phi^{\gamma^\prime v^\prime}}.\end{eqnarray} In Ref.
\cite{My1}, we have found the explicit forms of $\mathcal{A}_l(t)$.
In the SESR, they read \begin{eqnarray} \label{Aldefinition} A_0^{\gamma
v,\gamma^\prime v^\prime}(t)&=&{\rm e}^{-{\rm i} E_{\gamma v}
t}\delta_{\gamma\gamma^\prime}\delta_{vv^\prime},\\ A_l^{\gamma
v,\gamma^\prime
v^\prime}(t)&=&\sum_{\gamma_1,\cdots,\gamma_{l+1}}\sum_{v_{1},\cdots,v_{l+1}}\left[
\sum_{i=1}^{l+1}(-1)^{i-1}\frac{{\rm e}^{-{\rm i} E_{\gamma_iv_i}
t}}{d_i(E[\gamma v,l])}\right]\nonumber\\
& &\times \prod_{j=1}^{l}H_{tot1}^{\gamma_j
v_{j},\gamma_{j+1}v_{j+1}} \delta_{\gamma_1\gamma}\delta_{v_{1}{v}}
\delta_{\gamma_{l+1}\gamma^\prime}\delta_{v_{l+1}v^\prime},\end{eqnarray} and
all $H_{tot1}^{\gamma_j v_{j},\gamma_{j+1}v_{j+1}}=\bra{\Phi^{\gamma_j
v_{j}}}H_{tot1}\ket{\Phi^{\gamma_{j+1}v_{j+1}}}$ form the so-called
``perturbing Hamiltonian matrix'', that is, the representation matrix
of the perturbing Hamiltonian in the unperturbed Hamiltonian
representation (SESR), while \begin{eqnarray} d_1(E[\gamma
v,l])&=&\prod_{i=1}^{l}\left(E_{\gamma_{1}v_1}
-E_{\gamma_{i+1}v_{i+1}}\right),\\
d_i(E[\gamma v,l])&=&
\prod_{j=1}^{i-1}\left(E_{\gamma_{j}v_j}
-E_{\gamma_{i}v_i}\right)\!\!\!\prod_{k=i+1}^{l+1}\left(E_{\gamma_{i}v_i}
-E_{\gamma_{k}v_k}\right),\\[-3pt] d_{l+1}(E[\gamma v,l])
&=&\prod_{i=1}^{l}\left(E_{\gamma_{i}v_i}-E_{\gamma_{l+1}v_{l+1}}\right).\end{eqnarray}
By tracing out the environmental degrees of freedom, we obtain the
explicit expression for the time evolution of the reduced density
matrix of the open system \begin{equation}\label{esos}
\rho_{\rm{S}}(t)=\sum_{k,l=0}^\infty\sum_{\beta, u,\beta^\prime,
u^\prime}\sum_{\gamma v,\gamma^\prime}A_k^{\gamma v,\beta
u}(t)\rho_{tot}^{\beta u,\beta^\prime u^\prime}(0)A_l^{\beta^\prime
u^\prime,\gamma^\prime
v}(-t)\ket{\phi^{\gamma}}\bra{\phi^{\gamma^\prime}},\end{equation} where we
have used the fact ${\rm Tr}_{\rm E}\left(\ket{{\Phi}^{\gamma
v}}\bra{{\Phi}^{\gamma^\prime v^\prime}}\right)
=\ket{{\phi}^{\gamma}}\bra{{\phi}^{\gamma^\prime}}\delta_{v
v^\prime}$, which is an advantage of the SESR.
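The trace identity used here can be sketched with a generic partial trace over the environment factor (a hypothetical $d_{\rm S}\times d_{\rm E}$ example, not specific to any model):

```python
import numpy as np

d_S, d_E = 2, 3

def partial_trace_env(rho_tot, d_S, d_E):
    """Trace out the environment factor of a (d_S*d_E)-dimensional matrix."""
    return rho_tot.reshape(d_S, d_E, d_S, d_E).trace(axis1=1, axis2=3)

# |Phi^{gamma v}> = |phi^gamma> (x) |chi^v>; the partial trace keeps the
# system dyad when v = v' and vanishes when v != v'.
phi0, phi1 = np.eye(d_S)
chi1 = np.eye(d_E)[1]
chi2 = np.eye(d_E)[2]

same_v = np.outer(np.kron(phi0, chi1), np.kron(phi1, chi1))
diff_v = np.outer(np.kron(phi0, chi1), np.kron(phi1, chi2))

assert np.allclose(partial_trace_env(same_v, d_S, d_E), np.outer(phi0, phi1))
assert np.allclose(partial_trace_env(diff_v, d_S, d_E), 0)
```

The Kronecker-product ordering (system factor first) must match the ordering used in `np.kron`, otherwise the axes to be traced are swapped.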
It is clear that in the above expression, we need to know the
concrete forms of $\ket{\phi^\gamma}$ and $\ket{\chi^v}$ in order to
obtain the explicit expressions of $A_k^{\gamma v,\beta u}(t)$. In
fact, this is the physical reason why we take $H_{tot0}$ in the form
of Eq.~(\ref{H0form}): the eigenvectors and eigenvalues of
$H_{tot0}$ are then obtainable.
Note that there are apparent divergences in the above exact
solution. For tidiness of form, we keep these apparent divergences
in our expressions, but they can be completely eliminated by a limit
process \cite{My2}. In other words, our exact solution for open
systems should be understood in the sense of a limit.
Just as pointed out above, there is, at least, an inherent SESR
(ISESR) of the total system if we take $H_{tot 0}=H_{\rm S}+H_{\rm
E}$, and we can then obtain a solution similar to Eq.
(\ref{esos}). However, the ISESR is not unique in general because,
in principle, a part of $H_{\rm S}$ and/or a part of $H_{\rm E}$ can
be absorbed into $H_{\rm SE}$ if $I_{\rm S}$ and $I_{\rm E}$ are
thought of as, respectively, a system operator and an environment
operator. In this sense, the difference between the SESR and the
ISESR is that the SESR is allowed to contain a part of $H_{{\rm
SE}0}=\sum_{m,n} c_{mn}S_{m0}\otimes B_{n0}$ in which $S_{m0}\neq
I_{\rm S}$ and $\sum_{n}c_{mn} B_{n0}\neq I_{\rm E}$ for all $m$. Of
course, if the cut-off approximation of perturbation theory is
necessary, the parts absorbed from $H_{\rm S}$ and $H_{\rm E}$ must
be small enough. Such an example is discussed in Sec. \ref{sec8}.
In addition, one of the reasons for introducing the SESR is to make
the Hamiltonian redivision, and the absorption of the perturbing
parts of $H_{\rm S}$ and $H_{\rm E}$ into the perturbing Hamiltonian
of the total system, look more natural.
Different from the general and formal solution (\ref{gsrhos}), the
coefficients of our solution (\ref{esos}) for open system dynamics
are $c$-number functions whose forms are expressed explicitly.
Because the $A_k^{\gamma v,\beta u}(t)$ include all orders of
approximation, this solution is, in fact, exact although it is an
infinite series. Our solution is general enough in form, and it can
be applied to cases in which $H_{\rm S}$ and/or $H_{\rm E}$ are not
exactly solvable. It is clear that we do not use the accustomed
approximations such as the Born-Markov approximation or the
factorization assumption for the initial state. Hence, it should be
more general and more reliable in theory. Moreover, by virtue of the
improved scheme of perturbation theory proposed by us, we can obtain
the improved perturbed solution of open system dynamics with better
precision and higher efficiency, because the contributions from the
high-order, even all-order, approximations of perturbation are
absorbed into the lower-order approximations.
\section{Improved perturbed solution of open system
dynamics}\label{sec3}
The traditional scheme of perturbation theory has been successfully
used to solve many systems. However, in our point of view, it is
still improvable; it even has a flaw, because it introduces the
perturbation parameter too early, so that the contributions from the
high-order, even all-order, approximations of the diagonal and
off-diagonal elements of the perturbing Hamiltonian matrix are,
respectively, inappropriately dropped and prematurely cut off. For
some systems, the resulting loss of calculational precision can be
non-negligible as the evolution time increases. Actually, the
traditional scheme of perturbation theory does not give a general
form for the time evolution of the expansion coefficients at an
arbitrary order of approximation, nor does it express the general
term explicitly as a $c$-number function. Thus, it is necessary to
find the perturbed solution (or perturbed energy and perturbed state
vector) order by order, from low to high, up to the order required
for a given precision. Recently, we proposed an improved scheme of
perturbation theory based on the general and explicit form of our
exact solution \cite{My1,My2}. In our improved scheme, we introduce
the approximation as late as possible, and subtly and systematically
account for the effect of the high-order approximations on the
low-order ones via the dynamical rearrangement and summation method.
This finally results in the improved form of the perturbed solution,
whose expansion coefficients reasonably contain the high-order
energy corrections. In this section, we apply our improved scheme of
perturbation theory to open systems.
It must be emphasized that, before applying our improved form of the
perturbed solution, we must first carry out the diagonalization of
the degenerate subspaces if there is degeneracy, and perform the
Hamiltonian redivision when $H_{tot1}$ has diagonal elements, in
order to completely remove possible degeneracies by this procedure.
When remaining degeneracies are allowed, it is required that the
off-diagonal elements of the perturbing Hamiltonian matrix between
any two degenerate levels always vanish. More complicated cases will
be studied in the near future.
Therefore, up to the third-order improved approximation, we have
\begin{eqnarray}\label{ipsos} \rho_{\rm{S}}(t)&=&\sum_{\stackrel{\scriptstyle
l,k=0}{k+l\leq 3}}^3\sum_{\beta, u,\beta^\prime,
u^\prime}\sum_{\gamma v,\gamma^\prime}A_{{\rm I}l}^{\gamma v,\beta
u}(t)\rho_{tot}^{\beta u,\beta^\prime u^\prime}(0)A_{{\rm
I}k}^{\beta^\prime u^\prime,\gamma^\prime
v}(-t)\ket{\phi^{\gamma}}\bra{\phi^{\gamma^\prime}}+\mathcal{O}\left(H_1^4\right),\end{eqnarray}
where
\begin{equation} A_{{\rm I}0}^{\gamma v, \gamma^\prime
v^\prime}(t)={\rm e}^{-{\rm i}\widetilde{E}_{\gamma v} t}\delta_{\gamma
\gamma^\prime}\delta_{vv^\prime},\end{equation}
\begin{eqnarray} A_{\rm I 1}^{\gamma v,\gamma^\prime
v^\prime}(t)&=&\left[\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma v}
t}}{E_{\gamma v}-E_{\gamma^\prime v^\prime}}-\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma^\prime v^\prime} t}}{E_{\gamma
v}-E_{\gamma^\prime v^\prime}}\right]g_1^{\gamma v,\gamma^\prime
v^\prime},\end{eqnarray}
\begin{eqnarray} A_{\rm I2}^{\gamma v,\gamma^\prime
v^\prime}(t)&=&\sum_{\gamma_1,v_1}\left\{-\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma v} t}-{\rm e}^{-{\rm i} \widetilde{E}_{\gamma_1 v_1}
t}}{\left(E_{\gamma v}-E_{\gamma_1 v_1}\right)^2} g_1^{\gamma
v,\gamma_1 v_1}g_1^{\gamma_1 v_1, \gamma
v}\delta_{\gamma\gamma^\prime}
\delta_{vv^\prime}+\left[\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma v}
t}}{\left(E_{\gamma v}-E_{\gamma_1 v_1}\right)
\left(E_{\gamma v}-E_{\gamma^\prime v^\prime}\right)}\right.\right.\nonumber\\
& & \left.\left.-\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma_1 v_1}
t}}{\left(E_{\gamma v}-E_{\gamma_1 v_1}\right)\left(E_{\gamma_1
v_1}-E_{\gamma^\prime v^\prime}\right)} +\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma^\prime v^\prime} t}}{\left(E_{\gamma
v}-E_{\gamma^\prime v^\prime}\right)\left(E_{\gamma_1
v_1}-E_{\gamma^\prime v^\prime}\right)}\right]g_1^{\gamma v,\gamma_1
v_1}g_1^{\gamma_1 v_1,\gamma^\prime v^\prime}\eta_{\gamma
v,\gamma^\prime v^\prime}\right\},\hskip 1.0cm\end{eqnarray}
\begin{eqnarray} A_{\rm I3}^{\gamma v,\gamma^\prime
v^\prime}(t)&=&\sum_{\gamma_1 v_1,\gamma_2 v_2}\left[-\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma v} t}}{\left(E_{\gamma v}-E_{\gamma_1
v_1}\right)\left(E_{\gamma v}-E_{\gamma_2
v_2}\right)^2}-\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma v}
t}}{\left(E_{\gamma v}-E_{\gamma_1 v_1}\right)^2\left(E_{\gamma
v}-E_{\gamma_2 v_2}\right)}
\right.\nonumber\\
& &\left.+\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma_1 v_1}
t}}{\left(E_{\gamma v}-E_{\gamma_1 v_1}\right)^2\left(E_{\gamma_1
v_1}-E_{\gamma_2 v_2}\right)}-\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma_2
v_2} t}}{\left(E_{\gamma v}-E_{\gamma_2v_2}\right)^2
\left(E_{\gamma_1 v_1}-E_{\gamma_2 v_2}\right)}\right] g_1^{\gamma
v,\gamma_1 v_1}g_1^{\gamma_1 v_1,\gamma_2 v_2}g_1^{\gamma_2
v_2,\gamma v}\delta_{\gamma\gamma^\prime}\delta_{vv^\prime}
\nonumber\\
& &-\sum_{\gamma_1,v_1}\left[\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma v}
t}}{\left(E_{\gamma v}-E_{\gamma_1 v_1}\right) \left(E_{\gamma
v}-E_{\gamma^\prime v^\prime}\right)^2}+\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma v} t}}{\left(E_{\gamma v} -E_{\gamma_1
v_1}\right)^2\left(E_{\gamma v}-E_{\gamma^\prime
v^\prime}\right)}\right]g_1^{\gamma v,\gamma_1v_1}
g_1^{\gamma_1v_1,\gamma v} g_1^{\gamma v,\gamma^\prime v^\prime}\nonumber\\
& &+\sum_{\gamma_1 v_1,\gamma_2 v_2}\left[\frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\gamma v} t}\eta_{\gamma v,\gamma_2
v_2}}{\left(E_{\gamma v}-E_{\gamma_1v_1}\right) \left(E_{\gamma
v}-E_{\gamma_2 v_2}\right) \left(E_{\gamma v}-E_{\gamma^\prime
v^\prime}\right)}-\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma_1v_1}
t}\eta_{\gamma_1v_1,\gamma^\prime v^\prime}}{\left(E_{\gamma
v}-E_{\gamma_1v_1}\right)\left(E_{\gamma_1
v_1}-E_{\gamma_2v_2}\right)
\left(E_{\gamma_1 v_1}-E_{\gamma^\prime v^\prime}\right)}\right.\nonumber\\
& &\left.+\frac{{\rm e}^{-{\rm i} \widetilde{E}_{\gamma_2v_2} t}\eta_{\gamma
v,\gamma_2v_2}}{\left(E_{\gamma v}-E_{\gamma_2v_2}\right)
\left(E_{\gamma_1v_1}-E_{\gamma_2v_2}\right)
\left(E_{\gamma_2v_2}-E_{\gamma^\prime v^\prime}\right)}\right]
g_1^{\gamma v,\gamma_1v_1}
g_1^{\gamma_1v_1,\gamma_2v_2}g_1^{\gamma_2v_2,\gamma^\prime
v^\prime}\eta_{\gamma v,\gamma^\prime v^\prime},\end{eqnarray} where
$\delta_{\gamma\gamma^\prime}$ and $\delta_{vv^\prime}$ are the
usual Kronecker deltas, while
$\eta_{\gamma\gamma^\prime}=1-\delta_{\gamma\gamma^\prime}$,
$\eta_{vv^\prime}=1-\delta_{vv^\prime}$, and $\eta_{\gamma
v,\gamma^\prime
v^\prime}=\eta_{\gamma\gamma^\prime}+\delta_{\gamma\gamma^\prime}\eta_{vv^\prime}
=\eta_{vv^\prime}+\eta_{\gamma\gamma^\prime}\delta_{vv^\prime}$.
Moreover, we have defined the so-called improved form of the
perturbed energy by \begin{equation} \label{iped} \widetilde{E}_{\gamma v}=E_{\gamma
v}+G_{\gamma v}^{(1)} +G_{\gamma v}^{(2)}+G_{\gamma
v}^{(3)}+G_{\gamma v}^{(4)}+G_{\gamma v}^{(5)}+\cdots ,\end{equation} where
$G_{\gamma v}^{(1)}=h_1^{\gamma v}$ are the diagonal elements of
$H_{tot1}$ and $g_1^{\gamma_i v_i,\gamma_j v_j}$ are the off-diagonal
elements of $H_{tot1}$ in the representation of $H_{tot0}$. In
addition, the $h_1^{\gamma v}$ include the diagonal elements obtained
after the diagonalization of the degenerate subspaces. Furthermore, \begin{equation} G_{\gamma
v}^{(2)}=\sum_{\gamma_1,v_1}\frac{1}{E_{\gamma v}-E_{\gamma_1 v_1}}
g_{1}^{\gamma v, \gamma_1 v_1}g_{1}^{\gamma_1v_1,\gamma v}, \end{equation}
\begin{equation} G_{\gamma v}^{(3)}=\sum_{\gamma_1, v_1,\gamma_2, v_2}
\frac{1}{(E_{\gamma v}-E_{\gamma_1 v_1})(E_{\gamma v}-E_{\gamma_2
v_2})} g_{1}^{\gamma
v,\gamma_1v_1}g_{1}^{\gamma_1v_1,\gamma_2v_2}g_{1}^{\gamma_2v_2,\gamma
v}, \end{equation} \begin{eqnarray} G_{\gamma
v}^{(4)}&=&\sum_{\gamma_1,\gamma_2,\gamma_3}\sum_{v_1,v_2,v_3}
\frac{g_{1}^{\gamma v,\gamma_1v_1}g_{1}^{\gamma_1v_1,\gamma_2v_2}
g_{1}^{\gamma_2v_2,\gamma_3v_3}g_{1}^{\gamma_3v_3,\gamma v}
\eta_{\gamma v,\gamma_2v_2}}{(E_{\gamma v}-E_{\gamma_1 v_1})
(E_{\gamma v}-E_{\gamma_2 v_2})(E_{\gamma v}-E_{\gamma_3 v_3})}\nonumber\\
& &-\sum_{\gamma_1,\gamma_2}\sum_{v_1,v_2}\frac{g_{1}^{\gamma
v,\gamma_1v_1} g_{1}^{\gamma_1 v_1,\gamma v}g_{1}^{\gamma v,\gamma_2
v_2} g_{1}^{\gamma_2v_2,\gamma v}}{{(E_{\gamma v}-E_{\gamma_1
v_1})}^{2}(E_{\gamma v}-E_{\gamma_2v_2})}, \end{eqnarray} \begin{eqnarray}
G_{\gamma v}^{(5)}&=&\sum_{\gamma_1,\gamma_2,\gamma_3,\gamma_4}
\sum_{v_1,v_2,v_3,v_4}\frac{g_1^{\gamma v,\gamma_1v_1}g_1^{\gamma_1
v_1,\gamma_2 v_2} g_1^{\gamma_2 v_2,\gamma_3 v_3}g_1^{\gamma_3
v_3,\gamma_4 v_4}g_1^{\gamma_4 v_4,\gamma v}\eta_{\gamma
v,\gamma_2v_2}\eta_{\gamma v,\gamma_3v_3}} {\left(E_{\gamma
v}-E_{\gamma_1 v_1}\right)\left(E_{\gamma v}-E_{\gamma_2 v_2}\right)
\left(E_{\gamma v}-E_{\gamma_3 v_3}\right)\left(E_{\gamma v}-E_{\gamma_4v_4}\right)}\nonumber\\
& & -\sum_{\gamma_1,\gamma_2,\gamma_3}\sum_{v_1,v_2,v_3}\left[
\frac{g_1^{\gamma v,\gamma_1v_1} g_1^{\gamma_1v_1,\gamma
v}g_1^{\gamma
v,\gamma_2v_2}g_1^{\gamma_2v_2,\gamma_3v_3}g_1^{\gamma_3v_3,\gamma
v}} {\left(E_{\gamma v}-E_{\gamma_1v_1}\right)^2\left(E_{\gamma
v}-E_{\gamma_2 v_2}\right) \left(E_{\gamma v}-E_{\gamma_3
v_3}\right)} \right.\nonumber\\
& &\left.+\frac{g_1^{\gamma v,\gamma_1v_1} g_1^{\gamma_1v_1,\gamma
v}g_1^{\gamma v,\gamma_2 v_2}g_1^{\gamma_2 v_2,\gamma_3
v_3}g_1^{\gamma_3 v_3,\gamma v}} {\left(E_{\gamma v}-E_{\gamma_1
v_1}\right)\left(E_{\gamma v}-E_{\gamma_2 v_2}\right)^2
\left(E_{\gamma v}-E_{\gamma_3 v_3}\right)}+\frac{g_1^{\gamma
v,\gamma_1 v_1} g_1^{\gamma_1 v_1,\gamma v}g_1^{\gamma
v,\gamma_2v_2}g_1^{\gamma_2v_2,\gamma_3v_3}g_1^{\gamma_3v_3,\gamma
v}} {\left(E_{\gamma v}-E_{\gamma_1 v_1}\right)\left(E_{\gamma
v}-E_{\gamma_2 v_2}\right) \left(E_{\gamma v}-E_{\gamma_3
v_3}\right)^2}\right].\end{eqnarray}
It must be emphasized that, based solely on the calculations
completed in Ref.~\cite{My2}, the improved perturbed energies in the
exponents of $A_{\rm I1}$, $A_{\rm I2}$ and $A_{\rm I3}$ would be
cut off at $G_{\gamma_iv_i}^{(4)}$, $G_{\gamma_iv_i}^{(3)}$ and
$G_{\gamma_iv_i}^{(2)}$, respectively. However, we conjecture that
they can be uniformly written as in the definition (\ref{iped}).
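In the nondegenerate case, the low-order terms of the improved perturbed energy coincide with the standard Rayleigh-Schr\"odinger corrections, which is easy to verify numerically (a hypothetical $3\times 3$ toy matrix, not from the paper):

```python
import numpy as np

E0 = np.array([0.0, 1.0, 2.5])        # nondegenerate unperturbed levels
rng = np.random.default_rng(1)
V = rng.standard_normal((3, 3)) * 0.01
V = (V + V.T) / 2                     # small Hermitian perturbation (off-diagonal g_1)

H = np.diag(E0) + V
exact = np.sort(np.linalg.eigvalsh(H))

# E_n + V_nn + sum_{m != n} |V_nm|^2 / (E_n - E_m): first- plus second-order terms,
# i.e. the analogues of G^(1) and G^(2) above.
pert2 = np.array([
    E0[n] + V[n, n]
    + sum(V[n, m] ** 2 / (E0[n] - E0[m]) for m in range(3) if m != n)
    for n in range(3)
])
# Agreement up to third order in the coupling
assert np.allclose(np.sort(pert2), exact, atol=1e-4)
```

Including the higher terms $G^{(3)},G^{(4)},\dots$ tightens the agreement further, order by order.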
Our improved perturbed solution inherits some features from our
exact solution; for example, it is an explicit $c$-number function,
is easy to calculate, and needs no extra approximations. In
principle, we can calculate to any order of the improved
approximation. It must be emphasized that our improved form of the
perturbed solution absorbs partial contributions from the
high-order, even all-order, approximations of perturbation. This
means that our solution has better precision and higher efficiency.
In fact, these advantages have been seen in our recent work \cite{My1,My2}.
\section{Master equation of open systems including all order approximations}\label{sec4}
Because we have obtained the general and explicit solution of the
open system dynamics when the Hamiltonians of the system, its
environment and the interaction between them are known, it is
unnecessary to derive the dynamical equation of open systems.
However, in order to understand the influence of the environment,
compare our solution with the existing equations of motion, and
reveal the improvement of our method, we would like to discuss the
equation of motion and the master equation in this section.
It is more convenient to derive the master equation in the
inherent SESR (ISESR) of open systems, that is, to take
$H_{tot0}=H_{\rm S}+H_{\rm E}$. This also makes it easier to
compare our results with the existing ones. Obviously, the bases of
the ISESR are $\ket{\psi^\gamma}\otimes\ket{\omega^v}$, that is, \begin{eqnarray}
H_{\rm S}\ket{\psi^\gamma}\otimes\ket{\omega^v}
=E_{{\rm S}\gamma}\ket{\psi^\gamma}\otimes\ket{\omega^v},\\
H_{\rm
E}\ket{\psi^\gamma}\otimes\ket{\omega^v}=\varepsilon_{{\rm E}
v}\ket{\psi^\gamma}\otimes\ket{\omega^v}.\end{eqnarray} Similarly to
Sec. \ref{sec2}, we can obtain the exact solutions $\rho_{tot}(t)$
and $\rho_{\rm S}(t)$. All we need to do is to change
$\ket{\phi^{\gamma}}\otimes \ket{\chi^v}$ to
$\ket{\psi^\gamma}\otimes\ket{\omega^v}$ and define all matrix
elements in the ISESR, for example, ${A}_k^{\gamma
v,\gamma^\prime
v^\prime}(t)=\bra{\psi^\gamma\omega^v}\mathcal{A}_k(t)
\ket{\psi^{\gamma^\prime}\omega^{v^\prime}}$. Therefore,
\begin{eqnarray}\label{estotisser}
\rho_{tot}(t)&=&\sum_{k,l=0}^\infty\sum_{\beta, u,\beta^\prime,
u^\prime}\sum_{\gamma, v,\gamma^\prime, v^\prime}{A}_k^{\gamma
v,\beta u}(t)\varrho_{tot}^{\beta u,\beta^\prime
u^\prime}(0){A}_l^{\beta^\prime u^\prime,\gamma^\prime
v^\prime}(-t)\ket{\psi^{\gamma}}\bra{\psi^{\gamma^\prime}}\otimes
\ket{\omega^{v}}\bra{\omega^{v^\prime}}. \end{eqnarray} \begin{equation}\label{essisser}
\rho_{\rm S}(t)=\sum_{\gamma v,\gamma^\prime}\sum_{k,l=0}^\infty\sum_{\beta, u,\beta^\prime,
u^\prime}{A}_k^{\gamma v,\beta u}(t)\varrho_{tot}^{\beta
u,\beta^\prime u^\prime}(0){A}_l^{\beta^\prime
u^\prime,\gamma^\prime
v}(-t)\ket{\psi^{\gamma}}\bra{\psi^{\gamma^\prime}}.\end{equation}
From the solution (\ref{estotisser}), it is easy to show that
${\rm Tr}_{\rm E}\left\{\left[H_{\rm
S},{\rho}_{tot}(t)\right]\right\}=\left[h_{\rm S},\rho_{\rm
S}(t)\right]$ and ${\rm Tr}_{\rm E}\left\{\left[H_{\rm
E},{\rho}_{tot}(t)\right]\right\}=0$. Hence, \begin{equation} \label{des0}
\dot{\rho}_{\rm S}(t)={\rm Tr}_{\rm E}\dot{\rho}_{tot}(t)=-{\rm i}{\rm Tr}_{\rm
E}\left\{\left[H_{tot},{\rho}_{tot}(t)\right]\right\}=-{\rm i}\left[h_{\rm
S},\rho_{\rm S}(t)\right]-{\rm i}{\rm Tr}_{\rm E}\left\{\left[H_{\rm
SE},{\rho}_{tot}(t)\right]\right\},\end{equation} where $h_{\rm S}$ is defined
by $H_{\rm S}=h_{\rm S}\otimes I_{\rm E}$. Denoting system operators by $S_m$ and bath operators
by $B_n^\prime$, the most general form of $H_{\rm SE}$ is \begin{equation}
\label{H1form} H_{tot1}=H_{\rm SE}=\sum_{m,n} c_{mn}S_{m}\otimes
B^\prime_n=\sum_{m}S_{m}\otimes B_m,\end{equation} where $B_m=\sum_n
c_{mn}B^\prime_n$. Substituting the above relation into Eq.
(\ref{des0}), we obtain the motion equation of open systems \begin{eqnarray}
\label{des1}\dot{\rho}_{\rm S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]-{\rm i}\sum_m \left[S_m,{\rm Tr}_{\rm E}\left\{\left(I_{\rm
S}\otimes B_m\right)\rho_{tot}(t)\right\}\right].\end{eqnarray} The second
term of its right side represents the influence of the environment
on the system.
In order to express the equation of motion (\ref{des1}) of open
systems in an explicit matrix form, we introduce the so-called
factorized initial state assumption, that is, the system and its
environment are initially uncorrelated, so that the total density
matrix is a direct product of the system and environment density
matrices, \begin{equation} \rho_{tot}(0)=\rho_{\rm{S}}(0)\otimes
\rho_{\rm{E}}(0).\end{equation} Its advantage is that the actions of the
operators on the open system space and on the environment space can
be considered separately, and the environmental degrees of freedom
can then be easily traced out. In order to use this
advantage, we introduce two new operators \begin{eqnarray}
\mathcal{A}_{L}^{(k)}(t)&=&\mathcal{A}_k(t)\mathcal{A}^{-1}_0(t)
=\sum_{\beta,\beta^\prime}\mathcal{P}_{\rm
S}(\beta,\beta^\prime)\otimes
\mathcal{A}_{{\rm E}L}^{(k)}(t,\beta,\beta^\prime),\\
\mathcal{A}_{R}^{(l)}(-t)&=&\mathcal{A}^{-1}_0(-t)
\mathcal{A}_l(-t)=\sum_{\gamma,\gamma^\prime}\mathcal{P}_{\rm
S}(\gamma,\gamma^\prime)\otimes \mathcal{A}_{{\rm
E}R}^{(l)}(-t,\gamma,\gamma^\prime), \end{eqnarray} where $\mathcal{P}_{\rm
S}(\beta,\beta^\prime)=\ket{\psi^\beta}\bra{\psi^{\beta^\prime}}$
are the basis operators of the system Hilbert space
$\mathcal{H}_{\rm S}$, while the operators $\mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)$ and $\mathcal{A}_{{\rm
E}R}^{(k)}(-t,\gamma,\gamma^\prime)$ are defined in environment
Hilbert space $\mathcal{H}_{\rm E}$ by \begin{eqnarray} \mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)&=&\sum_{u,u^\prime}{A}_k^{\beta
u,\beta^\prime u^\prime}(t){\rm e}^{{\rm i} E_{\beta^\prime u^\prime}t}
\ket{\omega^u}\bra{\omega^{u^\prime}},\\
\mathcal{A}_{{\rm
E}R}^{(l)}(-t,\gamma,\gamma^\prime)&=&\sum_{v,v^\prime}{\rm e}^{-{\rm i}
E_{\gamma v}t}{A}_l^{\gamma v,\gamma^\prime v^\prime}(-t)
\ket{\omega^v}\bra{\omega^{v^\prime}}.\end{eqnarray} Thus, we see that
$\mathcal{A}_{L}^{(k)}(t)$ and $\mathcal{A}_{R}^{(l)}(-t)$ are
decomposed into sums in which every term separates into an
open-system part and an environment part. Hence,
we obtain \begin{equation}\label{rhoinmatrix}
\rho_{tot}(t)=\sum_{k,l=0}^\infty\sum_{\beta,\beta^\prime,\gamma,\gamma^\prime}\left[\mathcal{P}_{\rm
S}(\beta,\beta^\prime)\varrho_{\rm S}(t)\mathcal{P}_{\rm
S}(\gamma,\gamma^\prime)\right]\otimes\left[\mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)\varrho_{\rm E}(t)\mathcal{A}_{{\rm
E}R}^{(l)}(-t,\gamma,\gamma^\prime)\right],\end{equation} where \begin{eqnarray}
\varrho_{\rm
S}(t)&=&{\rm e}^{-{\rm i} h_{\rm S} t}\rho_{\rm S}(0){\rm e}^{{\rm i} h_{\rm S} t},\\
\varrho_{\rm E}(t)&=&{\rm e}^{-{\rm i} h_{\rm E} t}\rho_{\rm E}(0){\rm e}^{{\rm i}
h_{\rm E} t},\end{eqnarray} and then \begin{eqnarray}
\varrho_{tot}(t)=\mathcal{A}_0(t)\rho_{tot}(0)\mathcal{A}_0(-t)=\varrho_{\rm
S}(t)\otimes\varrho_{\rm E}(t).\end{eqnarray}
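That the free evolution generated by $H_{tot0}=H_{\rm S}+H_{\rm E}$ preserves the product form of a factorized initial state can be confirmed in a small sketch (hypothetical diagonal $h_{\rm S}$, $h_{\rm E}$ and arbitrary initial factors, not from any specific model):

```python
import numpy as np
from scipy.linalg import expm

h_S = np.diag([0.0, 1.0])
h_E = np.diag([0.0, 0.5, 1.3])
rho_S0 = np.array([[0.7, 0.2], [0.2, 0.3]])   # toy system state
rho_E0 = np.diag([0.5, 0.3, 0.2])             # toy environment state

t = 0.4
I_S, I_E = np.eye(2), np.eye(3)
H0 = np.kron(h_S, I_E) + np.kron(I_S, h_E)    # free Hamiltonian H_S + H_E

U0 = expm(-1j * H0 * t)                        # A_0(t) = e^{-i H0 t}
varrho_tot = U0 @ np.kron(rho_S0, rho_E0) @ U0.conj().T

# Free evolution factorizes: varrho_tot(t) = varrho_S(t) (x) varrho_E(t)
varrho_S = expm(-1j * h_S * t) @ rho_S0 @ expm(1j * h_S * t)
varrho_E = expm(-1j * h_E * t) @ rho_E0 @ expm(1j * h_E * t)
assert np.allclose(varrho_tot, np.kron(varrho_S, varrho_E))
```

The factorization holds because $H_{\rm S}$ and $H_{\rm E}$ commute; once $H_{\rm SE}$ is switched on, the product form is lost and the correction terms in (\ref{meem}) appear.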
Substituting Eq. (\ref{rhoinmatrix}) into the motion equation
(\ref{des1}), it immediately follows that \begin{eqnarray}
\label{meem}\dot{\rho}_{\rm S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]-{\rm i}\sum_m\sum_{k,l=0}^\infty
\sum_{\beta,\beta^\prime,\gamma,\gamma^\prime}C^{m,kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)
\left[S_m,\mathcal{P}_{\rm S}(\beta,\beta^\prime)\varrho_{\rm
S}(t)\mathcal{P}_{\rm S}(\gamma,\gamma^\prime)\right],\end{eqnarray} where we have
defined \begin{equation}
C^{m,kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)={\rm Tr}_{\rm
E}\left[B_m\mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)\varrho_{\rm E}(t)\mathcal{A}_{{\rm
E}R}^{(l)}(-t,\gamma,\gamma^\prime)\right].\end{equation}
Further derivation requires expressing $\varrho_{\rm S}(t)$ in
terms of $\rho_{\rm S}(t)$. In
fact, based on Eq. (\ref{rhoinmatrix}), we have \begin{eqnarray} \varrho_{\rm
S}(t)=\rho_{\rm S}(t)-\sum_{\stackrel{\scriptstyle
k,l=0}{k+l>0}}\sum_{\beta,\beta^\prime,\gamma,\gamma^\prime}
K^{kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)
\left[\mathcal{P}_{\rm S}(\beta,\beta^\prime)\varrho_{\rm
S}(t)\mathcal{P}_{\rm S}(\gamma,\gamma^\prime)\right],\end{eqnarray} where we
define the coefficients \begin{equation}
K^{kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)={\rm Tr}_{\rm
E}\left[\mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)\varrho_{\rm E}(t)\mathcal{A}_{{\rm
E}R}^{(l)}(-t,\gamma,\gamma^\prime)\right].\end{equation} Therefore, we can
use the iterative method to rewrite it as \begin{eqnarray} \label{iterative}
\varrho_{\rm S}(t)&=&\rho_{\rm S}(t)+\sum_{M=1}^\infty(-1)^M
\left[\prod_{m=1}^M\sum_{\stackrel{\scriptstyle
k_m,l_m=0}{k_m+l_m>0}}\sum_{\beta_m,\beta_m^\prime,\gamma_m,\gamma_m^\prime}
K^{k_ml_m}_{\beta_m\beta_m^\prime,\gamma_m\gamma_m^\prime}(t)\right]\nonumber\\
& &\times \left[\prod_{i=1}^M\mathcal{P}_{\rm
S}(\beta_i,\beta_i^\prime)\right] \rho_{\rm
S}(t)\left[\prod_{j=1}^M\mathcal{P}_{\rm
S}(\gamma_j,\gamma_j^\prime)\right].\end{eqnarray} Substituting it into Eq.
(\ref{meem}), we obtain \begin{eqnarray}\label{eme} \dot{\rho}_{\rm
S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]-{\rm i}\sum_m\sum_{k,l=0}^\infty
\sum_{\beta,\beta^\prime,\gamma,\gamma^\prime}C^{m,kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)
\left[S_m,\mathcal{P}_{\rm S}(\beta,\beta^\prime)\rho_{\rm
S}(t)\mathcal{P}_{\rm S}(\gamma,\gamma^\prime)\right]\nonumber\\
& &-{\rm i}\sum_m\sum_{k,l=0}^\infty
\sum_{\beta,\beta^\prime,\gamma,\gamma^\prime}C^{m,kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)
\sum_{N=1}^\infty(-1)^N
\left(\prod_{n=1}^N\sum_{\stackrel{\scriptstyle
k_n,l_n=0}{k_n+l_n>0}}\sum_{\beta_n,\beta_n^\prime,\gamma_n,\gamma_n^\prime}
K^{k_nl_n}_{\beta_n\beta_n^\prime,\gamma_n\gamma_n^\prime}(t)\right)\nonumber\\
& &\times \left[S_m,\mathcal{P}_{\rm
S}(\beta,\beta^\prime)\left(\prod_{i=1}^N\mathcal{P}_{\rm
S}(\beta_i,\beta_i^\prime)\right) \rho_{\rm
S}(t)\left(\prod_{j=1}^N\mathcal{P}_{\rm
S}(\gamma_j,\gamma_j^\prime)\right)\mathcal{P}_{\rm
S}(\gamma,\gamma^\prime)\right]. \end{eqnarray}
Up to now, we have not introduced any approximation except for the
factorization assumption for the initial state. Since our master
equation (\ref{eme}) includes all orders of approximation, we can
say that it is an exact master equation of open systems.
\section{Perturbed master equation of open systems}\label{sec5}
In most cases, the interaction between the open system and its
environment is weak, and we can cut off the above exact master
equation at a given order of approximation. It is clear from their
definitions that, since we absorb the coupling coefficients into
$B_m$, $C^{m,nl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)$ is a
quantity of $(n+l+1)$th order and
$K^{nl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)$ is a quantity of
$(n+l)$th order. Although any given order of approximation of the
master equation can be obtained from our exact master equation
(\ref{eme}), in most cases we are only interested in up to the
second-order approximation. Because \begin{equation}
C_{\beta\beta^\prime,\gamma\gamma^\prime}^{m,00}={\rm Tr}_{\rm
E}\left[B_m\varrho_{\rm
E}(t)\right]\delta_{\beta\beta^\prime}\delta_{\gamma\gamma^\prime},\end{equation}
\begin{eqnarray} C_{\beta\beta^\prime,\gamma\gamma^\prime}^{m,0l}
&=&\delta_{\beta\beta^\prime}{\rm Tr}_{\rm E}\left[B_m\varrho_{\rm
E}(t)\mathcal{A}_{{\rm E}R}^{(l)}(-t,\gamma,\gamma^\prime)\right],\\
C_{\beta\beta^\prime,\gamma\gamma^\prime}^{m,k0} &=&{\rm Tr}_{\rm
E}\left[\mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)\varrho_{\rm E}(t)B_m\right]
\delta_{\gamma\gamma^\prime},\end{eqnarray} and
\begin{eqnarray} K_{\beta\beta^\prime,\gamma\gamma^\prime}^{0l}
&=&\delta_{\beta\beta^\prime}{\rm Tr}_{\rm E}\left[\varrho_{\rm
E}(t)\mathcal{A}_{{\rm E}R}^{(l)}(-t,\gamma,\gamma^\prime)\right],\\
K_{\beta\beta^\prime,\gamma\gamma^\prime}^{k0} &=&{\rm Tr}_{\rm
E}\left[\mathcal{A}_{{\rm
E}L}^{(k)}(t,\beta,\beta^\prime)\varrho_{\rm E}(t)\right]
\delta_{\gamma\gamma^\prime},\end{eqnarray} we have \begin{eqnarray} \label{meepf}
\dot{\rho}_{\rm S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]-{\rm i}[J(t),{\rho}_{\rm S}(t)]-{\rm i}\sum_m[S_m,{\rho}_{\rm
S}(t)C_{Rm}^{(1)}(t)+C_{Lm}^{(1)}(t){\rho}_{\rm S}(t)]\nonumber\\
& &+{\rm i}[J(t),{\rho}_{\rm S}(t)R^{(1)}(t)+L^{(1)}(t){\rho}_{\rm
S}(t)]+\mathcal{O}(H_{tot1}^3),\end{eqnarray} where \begin{equation} J(t)=\sum_m
{S}_m(t){\rm Tr}_{\rm E}\left({B}_m\varrho_{\rm E}(t)\right),\end{equation} \begin{eqnarray}
C_{Lm}^{(1)}(t)&=&{\rm Tr}_{\rm E}\left\{\left[I_{\rm S}\otimes
B_m\right]\mathcal{A}_1(t){\rm e}^{{\rm i} H_{tot0} t}\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]\right\},\\
C_{Rm}^{(1)}(t)&=&{\rm Tr}_{\rm E}\left\{\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]{\rm e}^{-{\rm i} H_{tot0}
t}\mathcal{A}_1(-t)\left[I_{\rm
S}\otimes B_m\right]\right\},\\
L^{(1)}(t)&=&{\rm Tr}_{\rm E}\left\{\mathcal{A}_1(t){\rm e}^{{\rm i} H_{tot0}
t}\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]\right\},\\
R^{(1)}(t)&=&{\rm Tr}_{\rm E}\left\{\left[I_{\rm S}\otimes\varrho_{\rm
E}(t)\right]{\rm e}^{-{\rm i} H_{tot0} t}\mathcal{A}_1(-t)\right\},\end{eqnarray} while
\begin{eqnarray} \mathcal{A}_1(t)&=&\sum_{\beta,\gamma,u,v} \frac{{\rm e}^{-{\rm i}
E_{\beta u}t}-{\rm e}^{-{\rm i} E_{\gamma v}t}}{E_{\beta u}-E_{\gamma
v}}\left(\sum_m
S_m^{\beta\gamma}B_m^{uv}\right)\ket{\psi^\beta}\bra{\psi^\gamma}\otimes
\ket{\omega^u}\bra{\omega^v}\\
&=&\sum_{\beta,\gamma,u,v}\mathcal{A}_1^{\beta u,\gamma
v}(t)\ket{\psi^\beta}\bra{\psi^\gamma}\otimes
\ket{\omega^u}\bra{\omega^v}.\end{eqnarray} In the next section, we
will see that the Redfield master equation can be obtained from this
master equation without using the Born-Markov approximation.
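As a numerical sanity check (not part of the derivation above), note that the spectral expression for $\mathcal{A}_1(t)$ is the first-order Dyson integral $-{\rm i}\int_0^t{\rm e}^{-{\rm i}H_{tot0}(t-s)}H_{tot1}{\rm e}^{-{\rm i}H_{tot0}s}\,{\rm d}s$ written in the $H_{tot0}$ eigenbasis, with the diagonal limit ($E_{\beta u}=E_{\gamma v}$) equal to $-{\rm i}t\,{\rm e}^{-{\rm i}Et}$, as one can verify by carrying out the $s$-integral. The following sketch, with an arbitrary toy spectrum and perturbation, compares the two forms:

```python
# Toy check: spectral formula for A_1(t) vs. the first-order Dyson integral.
import numpy as np

rng = np.random.default_rng(0)
dim, t = 5, 1.0

E = rng.uniform(-1.0, 1.0, dim)            # eigenvalues of H_tot0 (diagonal basis)
H1 = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H1 = 0.5 * (H1 + H1.conj().T)              # Hermitian perturbation H_tot1

# Spectral formula; the diagonal (E_b == E_g) limit is -i t exp(-i E t).
Eb, Eg = E[:, None], E[None, :]
dE = Eb - Eg
with np.errstate(divide="ignore", invalid="ignore"):
    coeff = (np.exp(-1j * Eb * t) - np.exp(-1j * Eg * t)) / dE
coeff[np.isclose(dE, 0.0)] = -1j * t * np.exp(-1j * E * t)
A1_spectral = coeff * H1

# Direct quadrature of -i * Int_0^t exp(-i H0 (t-s)) H1 exp(-i H0 s) ds.
s = np.linspace(0.0, t, 4001)
vals = np.array([np.diag(np.exp(-1j * E * (t - si))) @ H1
                 @ np.diag(np.exp(-1j * E * si)) for si in s])
ds = s[1] - s[0]
A1_integral = -1j * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * ds

print(np.max(np.abs(A1_spectral - A1_integral)))  # ~ quadrature error only
```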
In order to absorb the partial contributions from the higher order
(even all order) approximations into the lower order approximations,
we can use our improved scheme of perturbation theory. In a similar
way to that used above, we have \begin{eqnarray} \label{meeI} \dot{\rho}_{\rm
S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]+{\rm i}\left[J(t),{\rho}_{\rm
S}(t)\right]-{\rm i}\sum_{a=0}^1\sum_m\left[S_m,{\rho}_{\rm
S}(t)C_{{\rm I}Rm}^{(a)}(t)+C_{{\rm I}Lm}^{(a)}(t){\rho}_{\rm S}(t)\right]\nonumber\\
& &-{\rm i}\left[J(t),{\rho}_{\rm S}(t)R_{\rm I}^{(1)}(t)+L_{\rm
I}^{(1)}(t){\rho}_{\rm S}(t)\right]+{\rm i}\sum_m\left[S_m,C_{{\rm
I}Lm}^{(0)}(t){\rho}_{\rm S}(t)R_{\rm I}^{(1)}(t)+C_{{\rm
I}Lm}^{(0)}(t)L_{\rm I}^{(1)}(t){\rho}_{\rm
S}(t)\right.\nonumber\\
& &\left.+{\rho}_{\rm S}(t)R_{\rm I}^{(1)}(t)C_{{\rm
I}Rm}^{(0)}(t)+L_{\rm I}^{(1)}(t){\rho}_{\rm S}(t)C_{{\rm
I}Rm}^{(0)}(t)\right] +\mathcal{O}(H_{tot1}^3),\end{eqnarray} where we have
defined \begin{eqnarray} C_{{\rm I}Lm}^{(k)}(t)&=&{\rm Tr}_{\rm
E}\left\{\left[I_{\rm S}\otimes B_m\right]\mathcal{A}_{{\rm
I}k}(t){\rm e}^{{\rm i} H_{tot0} t}\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]\right\},\\
C_{{\rm I}Rm}^{(l)}(t)&=&{\rm Tr}_{\rm E}\left\{\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]{\rm e}^{-{\rm i} H_{tot0}
t}\mathcal{A}_{{\rm I}l}(-t)\left[I_{\rm
S}\otimes B_m\right]\right\},\\
L_{\rm I}^{(k)}(t)&=&{\rm Tr}_{\rm E}\left\{\mathcal{A}_{{\rm
I}k}(t){\rm e}^{{\rm i} H_{tot0} t}\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]\right\},\\
R_{\rm I}^{(l)}(t)&=&{\rm Tr}_{\rm E}\left\{\left[I_{\rm
S}\otimes\varrho_{\rm E}(t)\right]{\rm e}^{-{\rm i} H_{tot0}
t}\mathcal{A}_{{\rm I}l}(-t)\right\},\end{eqnarray} while \begin{eqnarray}
\mathcal{A}_{\rm I1}(t)&=&\sum_{\beta,\gamma,u,v} \frac{{\rm e}^{-{\rm i}
\widetilde{E}_{\beta u}t}-{\rm e}^{-{\rm i} \widetilde{E}_{\gamma
v}t}}{E^\prime_{\beta u}-E^\prime_{\gamma
v}}H_{tot1}^{\prime}{}^{\beta u,\gamma
v}\left(1-\delta_{\beta\gamma}\delta_{uv}\right)
\ket{\psi^\beta}\bra{\psi^\gamma}\otimes
\ket{\omega^u}\bra{\omega^v}\\
&=&\sum_{\beta,\gamma,u,v}\mathcal{A}_{\rm I1}^{\beta u,\gamma
v}(t)\ket{\psi^\beta}\bra{\psi^\gamma}\otimes
\ket{\omega^u}\bra{\omega^v}.\end{eqnarray} Here, $\widetilde{E}_{\gamma
v}=E_{{\rm S}\gamma}+\varepsilon_{{\rm E} v}+h_{\gamma v}+G_{\gamma
v}^{(2)}+G_{\gamma v}^{(3)}+G_{\gamma v}^{(4)}+\cdots$,
$E^\prime_{\gamma v}=E_\gamma+\varepsilon_v+h_{\gamma v}$,
$h_{\gamma v}$ are the diagonal elements of $H_{tot1}$, and the
perturbing part of the Hamiltonian in $G^{(i)}_{\gamma v}$ has been
redivided as $H_{tot1}^\prime=H_{\rm SE}-\sum_{\gamma v}h_{\gamma
v}\ket{\psi^\gamma}\bra{\psi^\gamma}\otimes\ket{\omega^v}\bra{\omega^v}$,
that is, $g_1^{\gamma v, \gamma^\prime
v^\prime}=\bra{\psi^\gamma\omega^v}H_{tot1}^\prime\ket{\psi^{\gamma^\prime}\omega^{v^\prime}}$.
It must be emphasized that the operators in the above definitions
and expressions are defined in the ISESR (which has been diagonalized
in the degenerate subspaces if degeneracies exist). However,
$\mathcal{A}_{{\rm I}k}(\pm t)$, which involve $\widetilde{E}$, have
to be calculated using $H_{tot1}^\prime$, that is, the perturbing
Hamiltonian obtained via the redivision skill. Hence, it is important
to distinguish $H_{tot0}=H_{\rm S}+H_{\rm E}$ and $H_{tot1}=H_{\rm
SE}$ from their redivided counterparts $H_{tot0}^\prime$ and
$H_{tot1}^\prime$, even though they act in the same ISESR. In
addition, for simplicity and definiteness, we assume that all
degeneracies are completely removed by the diagonalization procedure
of the degenerate subspaces and/or the Hamiltonian redivision. If
remaining degeneracies are allowed, the off-diagonal elements of the
perturbing Hamiltonian matrix between any two degenerate levels must
always vanish in order for our improved scheme of perturbation theory
to work well.
In the above derivation of our master equation, we do not use the
Born-Markov approximation, only the standard cut-off approximation of
perturbation theory. From our point of view, this is physically more
reasonable, and its precision and reliability should be better in
practical applications.
\section{Re-deduction of the Redfield master equation}\label{sec6}
In order to compare our master equation with the known master
equations and illustrate the validity of our master equation, we
deduce the Redfield master equation from our master equation
without using the Born-Markov approximation in this section. In
addition, we point out the differences between our master equation
and the existing ones, and comment on the well-known approximations
used in open system dynamics.
First, we assume that the environment is in thermal equilibrium, that
is, \begin{equation}
\rho_{\rm{E}}(0)=\frac{e^{-\beta_{\rm B} H_{\rm E}}}{{\rm Tr}\, e^{-\beta_{\rm B} H_{\rm E}}}
=\frac{1}{Z}\sum_{v}e^{-\beta_{\rm B}\varepsilon_{{\rm
E}v}}\ket{\omega^v}\bra{\omega^v}, \end{equation}
where $\beta_{\rm B}=1/k_{\rm B}T$, with $T$ the equilibrium
temperature of the bath. This assumption is justified when the
environment is ``very large". Thus, it is easy to get \begin{eqnarray}
L^{(1)}(t)&=&-R^{(1)}(t)=F^{(1)}(t)\nonumber\\
&=&\sum_{\beta,\gamma,u}\sum_{m}\frac{{\rm e}^{-{\rm i}\left(E_{{\rm
S}\beta}-E_{{\rm S}\gamma}\right)t}-1}{E_{{\rm S}\beta}-E_{{\rm
S}\gamma}}S_m^{\beta\gamma}B^{uu}\rho_{\rm
E}^u\ket{\psi^\beta}\bra{\psi^\gamma}\\
&=&-{\rm i}{\rm e}^{-{\rm i} h_{\rm S} t}\int_0^t{\rm d} \tau {\rm Tr}_{\rm
E}\left[\widehat{H}_{\rm SE}(\tau)\left(I_{\rm
S}\otimes\rho_E(0)\right)\right]{\rm e}^{{\rm i} h_{\rm S}
t}\\
&=&-{\rm i}\int_0^t{\rm d} \tau {\rm Tr}_{\rm E}\left[\overline{H}_{\rm
SE}(\tau)\left(I_{\rm S}\otimes\rho_E(0)\right)\right],\end{eqnarray} where
\begin{equation} \overline{H}_{\rm SE}(\tau)={\rm e}^{-{\rm i} H_0 \tau}{H}_{\rm SE}{\rm e}^{{\rm i}
H_0 \tau}.\end{equation}
Likewise, we have \begin{eqnarray} C_{Lm}^{(1)}(t)&=&-{\rm i}{\rm e}^{-{\rm i} h_{\rm S}
t}\int_0^t{\rm d} \tau {\rm Tr}_{\rm E}\left[\left(I_{\rm
S}\otimes\widehat{B}_m(t)\right)\widehat{H}_{\rm
SE}(\tau)\left(I_{\rm S}\otimes\rho_E(0)\right)\right]{\rm e}^{{\rm i} h_{\rm
S}
t}\\
&=&-{\rm i}\int_0^t{\rm d} \tau {\rm Tr}_{\rm E}\left[\left(I_{\rm
S}\otimes{B}_m\right)\overline{H}_{\rm SE}(\tau)\left(I_{\rm
S}\otimes\rho_E(0)\right)\right],\end{eqnarray} \begin{eqnarray}
C_{Rm}^{(1)}(t)&=&{\rm i}{\rm e}^{-{\rm i} h_{\rm S} t}\int_0^t{\rm d} \tau {\rm Tr}_{\rm
E}\left[\left(I_{\rm S}\otimes\rho_E(0)\right)\widehat{H}_{\rm
SE}(\tau)\left(I_{\rm S}\otimes\widehat{B}_m(t)\right)\right]{\rm e}^{{\rm i}
h_{\rm S}
t}\\
&=&{\rm i}\int_0^t{\rm d} \tau {\rm Tr}_{\rm E}\left[\left(I_{\rm
S}\otimes{B}_m\right)\overline{H}_{\rm SE}(\tau)\left(I_{\rm
S}\otimes\rho_E(0)\right)\right].\end{eqnarray}
Therefore, our master equation (\ref{meepf}) up to the second order
approximation can be rewritten as \begin{eqnarray} \label{meepf1}
\dot{\rho}_{\rm S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]-{\rm i}[J(t),{\rho}_{\rm S}(t)]-\int_0^t{\rm d} \tau{\rm Tr}_{\rm
E}\left\{\left[H_{\rm SE},\left[\overline{H}_{\rm
SE}(\tau),{\rho}_{\rm
S}(t)\otimes\rho_{\rm E}(0)\right]\right]\right\}\nonumber\\
& &+\left[J(t),\int_0^t{\rm d} \tau{\rm Tr}_{\rm
E}\left\{\left[\overline{H}_{\rm SE}(\tau),{\rho}_{\rm
S}(t)\otimes\rho_{\rm E}(0)\right]\right\}\right].\end{eqnarray}
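The equality between the spectral sum and the integral representation of $F^{(1)}(t)$ above can also be checked numerically. The sketch below uses assumed toy spectra, a single coupling term $S_m\otimes B_m$, and a thermal $\rho_{\rm E}(0)$ with $\beta_{\rm B}=2$ (all values are arbitrary illustrations):

```python
# Toy check: spectral vs. integral form of F^(1)(t) with a thermal rho_E(0).
import numpy as np

rng = np.random.default_rng(1)
dS, dE_, t = 3, 3, 0.8
ES = np.array([0.0, 0.5, 1.3])              # system eigenvalues E_{S beta}
EE = np.array([0.0, 0.4, 0.9])              # environment eigenvalues
def herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return 0.5 * (M + M.conj().T)
S, B = herm(dS), herm(dE_)                  # one coupling term S_m (x) B_m
w = np.exp(-2.0 * EE)
rhoE = np.diag(w / w.sum())                 # thermal rho_E(0), beta_B = 2

# Spectral form: (e^{-i(E_b-E_g)t}-1)/(E_b-E_g) S^{bg} * sum_u B^{uu} rho_E^u,
# with the diagonal (E_b == E_g) limit equal to -i t.
a = ES[:, None] - ES[None, :]
with np.errstate(divide="ignore", invalid="ignore"):
    c = (np.exp(-1j * a * t) - 1.0) / a
c[np.isclose(a, 0.0)] = -1j * t
F_spectral = c * S * np.trace(B @ rhoE)

# Integral form: -i * Int_0^t dtau Tr_E[ Hbar_SE(tau) (I_S (x) rho_E(0)) ].
H0diag = np.add.outer(ES, EE).ravel()       # diagonal of H0 = h_S + h_E
HSE = np.kron(S, B)
def Hbar(tau):
    u = np.exp(-1j * H0diag * tau)
    return (u[:, None] * HSE) * u.conj()[None, :]
def trE(M):                                 # partial trace over the environment
    return M.reshape(dS, dE_, dS, dE_).trace(axis1=1, axis2=3)
taus = np.linspace(0.0, t, 2001)
vals = np.array([trE(Hbar(tau) @ np.kron(np.eye(dS), rhoE)) for tau in taus])
F_integral = -1j * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * (taus[1] - taus[0])

print(np.max(np.abs(F_spectral - F_integral)))  # ~ quadrature error only
```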
If we introduce the interaction picture, an operator $\widehat{O}(t)$
in this picture is defined in terms of the corresponding operator $O$
in the Schr\"odinger picture by \begin{equation} \widehat{O}(t)={\rm e}^{{\rm i} H_{tot0} t}O
{\rm e}^{-{\rm i} H_{tot0} t}.\end{equation} It is clear that for an operator $F_{\rm
S}$ in the open system Hilbert space and an operator $F_{\rm E}$ in
the environment Hilbert space, we have \begin{eqnarray} \widehat{F}_{\rm
S}(t)&=&{\rm e}^{{\rm i} h_{\rm S} t}F_{\rm S} {\rm e}^{-{\rm i} h_{\rm S} t},\\
\widehat{F}_{\rm E}(t)&=&{\rm e}^{{\rm i} h_{\rm E} t}F_{\rm E} {\rm e}^{-{\rm i} h_{\rm
E} t}.\end{eqnarray} The master equation in the interaction picture then
follows immediately: \begin{eqnarray} \label{meepf2}
\frac{{\rm d}\widetilde{\rho}_{\rm S}(t)}{{\rm d}
t}&=&-{\rm i}\left[\widetilde{J}(t),\widetilde{\rho}_{\rm
S}(t)\right]-\int_0^t{\rm d} \tau{\rm Tr}_{\rm
E}\left\{\left[\widetilde{H}_{\rm SE},\left[\widetilde{H}_{\rm
SE}(\tau),\widetilde{\rho}_{\rm
S}(t)\otimes{\rho}_{\rm E}(0)\right]\right]\right\}\nonumber\\
& &+\left[\widetilde{J}(t),\int_0^t{\rm d} \tau{\rm Tr}_{\rm
E}\left\{\left[\widetilde{H}_{\rm SE}(\tau),\widetilde{\rho}_{\rm
S}(t)\otimes{\rho}_{\rm E}(0)\right]\right\}\right]. \end{eqnarray} It must
be emphasized that $\widetilde{\rho}_{\rm S}(t)$ is equal to ${\rm e}^{{\rm i}
h_{\rm S} t}\rho_{\rm S}(t){\rm e}^{-{\rm i} h_{\rm S} t}$, but not ${\rm e}^{{\rm i}
h_{\rm S} t}\rho_{\rm S}(0){\rm e}^{-{\rm i} h_{\rm S} t}$.
In particular, when we introduce the assumption \begin{equation}
\label{app1}{\rm Tr}_{\rm E}\left\{\left[\widetilde{H}_{\rm
SE},\rho_{tot}(0)\right]\right\}=0,\end{equation} we have\begin{equation}
\sum_m\sum_{\beta\beta^\prime,\gamma\gamma^\prime}
C_{\beta\beta^\prime,\gamma\gamma^\prime}^{m,00}\left[S_m,\mathcal{P}_{\rm
S}(\beta,\beta^\prime)\varrho_{\rm
S}(t)\mathcal{P}(\gamma,\gamma^\prime)\right]={\rm e}^{-{\rm i} h_{\rm
S}t}{\rm Tr}_{\rm E}\left\{\left[\widetilde{H}_{\rm
SE},\rho_{tot}(0)\right]\right\}{\rm e}^{{\rm i} h_{\rm S}t}=0.\end{equation} Thus, Eq.
(\ref{meem}) becomes \begin{eqnarray} \label{meem1}\dot{\rho}_{\rm
S}(t)&=&-{\rm i}\left[h_{\rm S},\rho_{\rm
S}(t)\right]-{\rm i}\sum_m\sum_{\stackrel{\scriptstyle
k,l=0}{k+l>0}}^\infty
\sum_{\beta,\beta^\prime,\gamma,\gamma^\prime}C^{m,kl}_{\beta\beta^\prime,\gamma\gamma^\prime}(t)
\left[S_m,\mathcal{P}_{\rm S}(\beta,\beta^\prime)\varrho_{\rm
S}(t)\mathcal{P}(\gamma,\gamma^\prime)\right],\end{eqnarray} that is, the
perturbed form of the master equation up to the second order
approximation reads \begin{equation} \label{meuseapp1}\dot{\rho}_{\rm
S}(t)=-{\rm i}\left[h_{\rm S},\rho_{\rm
S}\right]-{\rm i}\sum_m[S_m,{\rho}_{\rm
S}(t)C_{Rm}^{(1)}(t)+C_{Lm}^{(1)}(t){\rho}_{\rm S}(t)].\end{equation}
Comparing Eq. (\ref{meuseapp1}) with Eq. (\ref{meepf1}) or
(\ref{meepf2}), we see that the approximation (\ref{app1}) amounts to
dropping the terms \begin{equation} \label{app1d1}-{\rm i}[J(t),{\rho}_{\rm
S}(t)]+\left[J(t),\int_0^t{\rm d} \tau{\rm Tr}_{\rm
E}\left\{\left[\overline{H}_{\rm SE}(\tau),{\rho}_{\rm
S}(t)\otimes\rho_{\rm E}(0)\right]\right\}\right],\end{equation} or,
equivalently, in the interaction picture, \begin{equation}\label{app1d2}
-{\rm i}\left[\widetilde{J}(t),\widetilde{\rho}_{\rm
S}(t)\right]+\left[\widetilde{J}(t),\int_0^t{\rm d} \tau{\rm Tr}_{\rm
E}\left\{\left[\widetilde{H}_{\rm SE}(\tau),\widetilde{\rho}_{\rm
S}(t)\otimes{\rho}_{\rm E}(0)\right]\right\}\right].\end{equation}
Usually, this approximation is thought of as an unimportant
restriction, since one can absorb the dropped terms into the
system Hamiltonian $H_{\rm S}$. However, based on the above result,
we think that the approximation (\ref{app1}) is a real assumption,
because the second term in (\ref{app1d1}) or (\ref{app1d2}) is
nontrivial and cannot be absorbed into $H_{\rm S}$ in general.
In other words, the second order contribution to the master
equation from the second term in (\ref{app1d1}) or (\ref{app1d2})
should be considered, and the approximation (\ref{app1}) should be
rechecked for concrete open systems except in the cases where
$J(t)=0$. Actually, we think that if $J(t)$ is not zero, the
last term appearing in our master equation (\ref{meepf1}) or
(\ref{meepf2}) makes it obviously different from the existing master
equations.
It is very interesting that, when the approximation (\ref{app1})
applies to a given open system, we immediately obtain from Eq.
(\ref{meuseapp1})
\begin{eqnarray}\label{Redfieldme}\frac{{\rm d}\widetilde{\rho}_{\rm S}(t)}{{\rm d}
t}&=&-\int_0^t{\rm d} \tau{\rm Tr}_{\rm E}\left\{\left[\widetilde{H}_{\rm
SE},\left[\widetilde{H}_{\rm SE}(\tau),\widetilde{\rho}_{\rm
S}(t)\otimes{\rho}_{\rm E}(0)\right]\right]\right\}. \end{eqnarray} This is
just the well-known Redfield master equation. This conclusion
implies that the Redfield master equation remains valid without
introducing the Born-Markov approximation. Therefore, we think that
the Born-Markov approximation is unnecessary for the master equation
up to the second order perturbative approximation. From our point of
view, this is the real physical reason why one should use the Born
and Markov approximations jointly, and why one can obtain useful
conclusions even in cases without the Born-Markov approximation. In
fact, the terms dropped by the Born approximation are compensated by
the Markov approximation; in other words, based on our results, the
Born approximation plus the Markov approximation amounts to no
approximation at all.
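The step from Eq. (\ref{meuseapp1}) to the double-commutator form can also be made concrete numerically: for any $\rho_{\rm S}$, the dissipator $-{\rm i}\sum_m[S_m,\rho_{\rm S}C_{Rm}^{(1)}+C_{Lm}^{(1)}\rho_{\rm S}]$ coincides with $-\int_0^t{\rm d}\tau\,{\rm Tr}_{\rm E}\{[H_{\rm SE},[\overline{H}_{\rm SE}(\tau),\rho_{\rm S}\otimes\rho_{\rm E}(0)]]\}$, whose interaction-picture version is the right-hand side of (\ref{Redfieldme}). A toy check, with all dimensions and couplings assumed:

```python
# Toy check: C_L/C_R dissipator form == double-commutator (Redfield) form.
import numpy as np

rng = np.random.default_rng(5)
dS, dE_ = 2, 3
hS = np.diag(np.array([0.0, 1.1]))               # system eigenvalues
hE = np.diag(np.array([0.0, 0.6, 1.4]))          # environment eigenvalues
def herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return 0.5 * (M + M.conj().T)
S, B = herm(dS), herm(dE_)                       # one coupling term S_m (x) B_m
HSE = np.kron(S, B)
H0 = np.kron(hS, np.eye(dE_)) + np.kron(np.eye(dS), hE)
w = np.exp(-1.5 * np.diag(hE)); rhoE = np.diag(w / w.sum())
rhoS = herm(dS); rhoS = rhoS @ rhoS.conj().T; rhoS = rhoS / np.trace(rhoS)

t = 0.6
taus = np.linspace(0.0, t, 1501)
def Hbar(tau):                                   # e^{-iH0 tau} H_SE e^{iH0 tau}
    u = np.exp(-1j * np.diag(H0) * tau)
    return (u[:, None] * HSE) * u.conj()[None, :]
def trE(M):                                      # partial trace over environment
    return M.reshape(dS, dE_, dS, dE_).trace(axis1=1, axis2=3)
def integ(f):                                    # trapezoidal quadrature
    vals = np.array([f(tau) for tau in taus])
    return (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * (taus[1] - taus[0])

IB = np.kron(np.eye(dS), B)
IrE = np.kron(np.eye(dS), rhoE)
CL = -1j * integ(lambda tau: trE(IB @ Hbar(tau) @ IrE))
CR = 1j * integ(lambda tau: trE(IrE @ Hbar(tau) @ IB))

comm = lambda A, C: A @ C - C @ A
form1 = -1j * comm(S, rhoS @ CR + CL @ rhoS)
form2 = -integ(lambda tau: trE(comm(HSE, comm(Hbar(tau), np.kron(rhoS, rhoE)))))
print(np.max(np.abs(form1 - form2)))  # ~ quadrature error only
```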
\section{Milburn dynamics for open systems}\label{sec7}
Historically, a useful dynamical model of open systems is the Milburn
model \cite{Milburn}. It provides a way to describe so-called
``intrinsic" decoherence. However, from our point of view, this
decoherence can perhaps be attributed to a further, external
environment. That is, Milburn dynamics might be alternatively
explained as the effect of the environment of the composite system,
i.e., of the larger environment of the proper system. This
explanation is, in fact, a consequence of our belief that the von
Neumann equation is uniquely correct for a closed system. One may ask
what mechanism causes the external influence to be reflected by the
extra term in Milburn dynamics. We cannot answer this at present, but
we would like to ask in turn what condition changes the dynamics from
von Neumann's to Milburn's. If the answer is that Milburn dynamics is
an intrinsic property of closed quantum systems, then it is very
difficult to understand the free parameter $\theta_0$.
Actually, one can only act within the near environment in order to
control decoherence (for example, on the self-interaction of the
environment), and it is possible that one only knows how to
appropriately describe the dynamics of the near environment and the
interaction between the system and the near environment, while
lacking knowledge about the remote environment. Therefore, in this
section, we intend to use the Milburn model to consider the dynamics
of the composite system made up of the system and its near
environment. The conclusions obtained here imply that our solution
and methods are also applicable to more general open systems such as
the Milburn model.
Dynamics in the Milburn model replaces the usual von Neumann
equation for the density matrix by
\begin{equation}\label{MEM}
\dot{\rho}_{tot}(t)=-{\rm i}[H_{tot},\rho_{tot}(t)]-\frac{\theta_0}{2}[H_{tot},[H_{tot},\rho_{tot}(t)]]
,\end{equation}
\noindent where $\theta_0$ is a constant, meaning that there is some
minimum unitary-phase transformation. This implies that coherence is
destroyed as the physical properties of the system approach the
macroscopic level. Hence, the ``intrinsic" decoherence explanation
seems reasonable. However, the minimum unitary-phase transformation
is not made clear, and the parameter $\theta_0$ in the Milburn model
remains ``free"; in other words, $\theta_0$ is not given by the
theory. If we regard the extra term as resulting from the remote
environment, then $\theta_0$ should be determinable by experiment.
Now, we directly extend Milburn dynamics to a Milburn-type closed
quantum system consisting of the concerned system and its near
environment. The Hamiltonian in Eq. (\ref{MEM}) still reads $
H_{tot}=H_{\rm S}+H_{\rm{E}}+H_{\rm{SE}}$. Here, a Milburn-type
closed quantum system is not a truly closed system, from the point of
view that a truly closed system must obey the von Neumann equation.
Actually, an alternative explanation is that a Milburn-type closed
quantum system is still affected by the remote (larger) environment,
and this influence is represented by the extra term with the
$\theta_0$ multiplier, because one cannot know the Hamiltonian of the
remote environment or the form of its interaction with the system of
interest. Obviously, when $\theta_0=0$, Milburn dynamics reduces to
von Neumann dynamics; this corresponds to the case in which the
(very) remote environment can be ignored.
The formal solution of Milburn dynamics for the composite system can
be written as \cite{Kimm} \begin{equation} \label{fs}
\rho_{tot}(t)=\exp\left\{-{\rm i} H_{tot}t-\theta_0 H_{tot}^2
t/2\right\}\left[{\rm e}^{\mathfrak{M}t}\rho_{tot}(0)\right]\exp\left\{{\rm i}
H_{tot}t-\theta_0 H_{tot}^2 t/2\right\}= \sum_{k=0}^\infty
M_k(t)\rho_{tot}(0)M_k^\dagger(t),\end{equation}
where ${\mathfrak{M}}$ is a superoperator, i.e.,
${\mathfrak{M}}\rho_{tot}=\theta_0 H_{tot}\rho_{tot} H_{tot}$, and
the Kraus operators $M_k(t)$ are of the form \begin{equation}
M_k(t)=\sqrt{\frac{(\theta_0t)^k }{k!}}H_{tot}^k \exp\left\{-{\rm i}
H_{tot}t-\theta_0 H_{tot}^2 t/2\right\}. \end{equation} Without loss of
generality, using the notation $(A+B)^K=A^K+f^K(A,B)$ with
$f^0(A,B)=0$, we can write \begin{equation} M_k(t)=\sqrt{\frac{(\theta_0t)^k
}{k!}}\left[{H}_{tot0}^k \exp\left\{-{\rm i} {H}_{tot0}t-\theta_0
{H}_{tot0}^2 t/2\right\}+\sum_{n=0}^\infty\sum_{m=0}^n \frac{(-{\rm i}
t)^n}{n!}\left(\frac{\theta_0}{2{\rm i}}\right)^m
C_n^mf^{n+m+k}(H_{tot0},H_{tot1})\right]. \end{equation}
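A small numerical sketch (toy Hamiltonian, assumed $\theta_0$ and $t$) confirms two properties of the Kraus form above: $\sum_k M_k^\dagger(t)M_k(t)=I$, and that $\rho_{tot}(t)=\sum_k M_k(t)\rho_{tot}(0)M_k^\dagger(t)$ indeed satisfies Eq. (\ref{MEM}), checked here by a central finite difference:

```python
# Toy check of the Milburn Kraus decomposition (truncated at kmax terms).
import numpy as np

rng = np.random.default_rng(2)
d, theta0 = 4, 0.1
H = rng.normal(size=(d, d)); H = 0.5 * (H + H.T)   # arbitrary total Hamiltonian
evals, V = np.linalg.eigh(H)

def kraus(t, kmax=40):
    # exp(-i H t - theta0 H^2 t / 2) via the eigendecomposition of H
    base = (V * np.exp(-1j * evals * t - 0.5 * theta0 * evals**2 * t)) @ V.T
    Ms, Hk, fac = [], np.eye(d), 1.0
    for k in range(kmax):
        Ms.append(np.sqrt((theta0 * t) ** k / fac) * (Hk @ base))
        Hk, fac = Hk @ H, fac * (k + 1)
    return Ms

def evolve(rho0, t):
    return sum(M @ rho0 @ M.conj().T for M in kraus(t))

rho0 = np.zeros((d, d), complex); rho0[0, 0] = 1.0   # pure initial state
t, dt = 0.7, 1e-4
print(np.allclose(sum(M.conj().T @ M for M in kraus(t)), np.eye(d)))  # True

comm = lambda A, C: A @ C - C @ A
rho_t = evolve(rho0, t)
lhs = (evolve(rho0, t + dt) - evolve(rho0, t - dt)) / (2 * dt)
rhs = -1j * comm(H, rho_t) - 0.5 * theta0 * comm(H, comm(H, rho_t))
print(np.max(np.abs(lhs - rhs)))  # small finite-difference residual
```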
Just as when we found the exact solution of open systems in von
Neumann dynamics, we need a system-environment separated
representation (SESR), which has been given in Sec. \ref{sec2}.
Thus, in this SESR, based on our expansion formula for powers of
operator binomials \cite{My1}, we have \begin{eqnarray} \label{fk} f^K({H}_{tot0},H_{tot1})
&=&\sum_{l=1}^K\sum_{\gamma_1,\cdots,\gamma_{l+1}}\sum_{v_1,\cdots,v_{l+1}}
C_l^K({E}[\gamma v,l]) \left[\prod_{j=1}^lH_{tot1}^{\gamma_j
v_j,\gamma_{j+1},v_{j+1}}\right] \ket{\psi^{\gamma_1}
}\bra{\psi^{\gamma_{l+1}}}\otimes\ket{\omega^{v_1}}\bra{\omega^{v_{l+1}}}\label{fkc},\end{eqnarray}
\begin{equation} C^K_l({E}[\gamma v,l])=\sum_{i=1}^{l+1} (-1)^{i-1}
\frac{{E}_{\gamma v}^K}{d_i({E}[\gamma v,l])}, \end{equation} where
$d_i({E}[\gamma v,l])$ is defined in Sec. \ref{sec2}. Therefore, the
expression for $M_k(t)$ becomes a summation organized by the order
(or power) of $H_{tot1}$ as follows \begin{eqnarray} \label{mk}
M_k^{\gamma v,\gamma^\prime v^\prime}(t)&=&
\sqrt{\frac{(\theta_0t)^k }{k!}}{E}_{\gamma v}^k g({E}_{\gamma
v};t)\delta_{\gamma\gamma^\prime}\delta_{v v^\prime} +
\sqrt{\frac{(\theta_0t)^k }{k!}}\sum_{l=1}^\infty
\sum_{\gamma_1,\cdots,\gamma_{l+1}}\sum_{v_1,\cdots,v_{l+1}}\left[
\sum_{i=1}^{l+1}(-1)^{i-1}\frac{{E}_{\gamma_i v_i}^k g({E}_{\gamma_i
v_i};t)}{d_i({E}[\gamma v,l])}\right]\nonumber\\
& &\times \prod_{j=1}^{l}H_{tot1}^{\gamma_j
v_j,\gamma_{j^\prime}v_j^\prime}
\delta_{\gamma_1\gamma}\delta_{\gamma_{l+1}\gamma^\prime}
\delta_{v_1v}\delta_{v_{l+1}v^\prime},\end{eqnarray} where the time
evolution function $g(x;t)$, of exponential form, is defined by \begin{equation}
g(x;t)=\exp\left\{-{\rm i} xt -\theta_0 x^2 t/2\right\}. \end{equation} Obviously,
$M_k^\dagger{}^{\gamma v,\gamma^\prime v^\prime}(t)$ can be obtained
by replacing $g(x;t)$ with $g^*(x;t)$. Furthermore, we obtain the
expression for the time evolution of the reduced density matrix of
open systems, that is, a general and explicit solution of open
systems in Milburn dynamics: \begin{eqnarray} \label{se1oa}
\rho_{\mathcal{S}}(t)&=&\sum_{\beta,u,\beta^\prime,u^\prime}
\sum_{\gamma,v,\gamma^\prime,v^\prime}M_k^{\beta u,\beta
u^\prime}(t) \rho^{\beta^\prime u^\prime, \gamma^\prime v^\prime}(0)
M_k^\dagger{}^{\gamma^\prime v^\prime,\gamma v}\delta_{uv}\ket{\psi^\beta}\bra{\psi^\gamma}\nonumber\\
&=& \sum_{\beta,\gamma,v}\rho^{\beta v,\gamma v}(0)g({E}_{\beta
v}-{E}_{\gamma v};t) \ket{\psi^\beta}\bra{\psi^{\gamma}}+
\sum_{\beta,u}\sum_{\gamma^\prime,v^\prime,\gamma}\rho^{\beta
u,\gamma^\prime v^\prime}(0)
\sum_{l=1}^\infty\sum_{\gamma_1,\cdots,\gamma_{l+1}}\left[\sum_{i=1}^{l+1}(-1)^{i-1}\frac{
g({E}_{\beta u}-{E}_{\gamma_i v_i};t)}{d_k({E}[\gamma
v,l])}\right]\nonumber\\
& &\times
\left[\prod_{j=1}^{l}H_{tot1}^{\gamma_j\gamma_{j+1}}\right]
\delta_{\gamma^\prime \gamma_1}\delta_{v^\prime
v_1}\delta_{\gamma_{l+1} \gamma}\delta_{v_{l+1} u}
\ket{{\psi}^\beta}\bra{\psi^{\gamma}}+
\sum_{\beta,\beta^\prime,u^\prime}\sum_{\gamma,v}\rho^{\beta^\prime
u^\prime,\gamma v}(0) \nonumber\\
& & \times
\sum_{l=1}^\infty\sum_{\beta_1,\cdots,\beta_{l+1}}\left[\sum_{i=1}^{l+1}(-1)^{i-1}\frac{
g({E}_{\beta_i v_i}-{E}_{\gamma v};t)}{d_k({E}[\beta
u,l])}\right]\left[\prod_{j=1}^{l}H_{tot1}^{\beta_j\beta_{j+1}}\right]
\delta_{\beta \beta_1}\delta_{v,u_1}\delta_{\beta_{l+1}
\beta^\prime}\delta_{u_{l+1} u^\prime}
\ket{\psi^\beta}\bra{\psi^{\gamma}}\nonumber\\
& &+
\sum_{\beta,u,\beta^\prime,u^\prime}\sum_{\gamma,v,\gamma^\prime,v^\prime}\rho^{\beta^\prime
u^\prime,\gamma^\prime v^\prime}(0)\sum_{k=1}^\infty
\sum_{l=1}^\infty\sum_{\beta_1,\cdots,\beta_{k+1}}\sum_{\gamma_1,\cdots,\gamma_{l+1}}
\left[\sum_{i=1}^{k+1}\sum_{j=1}^{l+1}(-1)^{i+j}\frac{
g({E}_{\beta_i u_i}-{E}_{\gamma_j v_j};t)}
{d_i({E}[\beta u])d_j({E}[\gamma v,l])}\right]\nonumber\\
&
&\times\left[\prod_{i=1}^{k}H_{tot1}^{\beta_iv_i,\beta_{i+1}v_{i+1}}\right]
\left[\prod_{j=1}^{l}H_{tot1}^{\gamma_j
v_j,\gamma_{j+1}v_{j+1}}\right]
\delta_{\beta\beta_1}\delta_{\beta_{k+1}\beta^\prime}\delta_{uu_1}\delta_{u_{l+1}u^\prime}
\delta_{\gamma^\prime\gamma_1}\delta_{\gamma_{l+1}\gamma}\delta_{v^\prime
v_1}\delta_{v_{l+1}v}\delta_{uv}
\ket{\psi^{\beta}}\bra{{\psi}^{\gamma}}.\hskip 0.5cm \end{eqnarray} It is
clear that if $\theta_0=0$, this solution reduces to the solution of
von Neumann dynamics obtained in Sec. \ref{sec2}. Usually, a finite
(often low) order approximation in $H_{tot1}$ can be taken, so this
expression is cut off at finitely many terms. Similarly to the
methods used in Secs. \ref{sec3}, \ref{sec4} and \ref{sec5}, we can
study the perturbed solution and equation of motion of open systems
in Milburn dynamics. This is not difficult, so we omit it to save
space.
\section{Example and application}\label{sec8}
In order to concretely illustrate our general and explicit solution
of open system dynamics, we recall an exactly solvable two-state
open system for decoherence that was first introduced by Zurek
\cite{Zurek,Zurek1982}. In this Zurek model, the ``free"
(unperturbed) Hamiltonian $H_{\rm S0}$ and the self-interaction
(perturbing) $H_{\rm S1}$ of the two-state system of interest, as
well as the ``free" (unperturbed) Hamiltonian $H_{\rm E0}$ and the
self-interaction (perturbing) $H_{\rm E1}$ of the environment, are
all taken to be zero. The total Hamiltonian of the composite system,
made up of the system of interest plus the environment, has only the
interaction term, that is, \begin{equation} \label{zmh} H_{\rm
Zurek}=H_{\rm SE}=\sigma^z_{\rm S}\otimes B_{z\rm E},\end{equation} where the
environment operator $B_{z\rm E}$ is defined by \begin{equation} B_{z\rm E}=
\sum_{k=1}^{N_{\rm E}} \left(\bigotimes_{i=1}^{k-1}I_{{\rm
E}i}\right)\otimes\left(Z_k\sigma^{z_k}\right)\otimes\left(\bigotimes_{j=k+1}^{N_{\rm
E}}I_{{\rm E}j}\right),\end{equation} and $N_{\rm E}$ is the number of
degrees of freedom of the environment, which is very large, even
infinite.
It is clear that this Zurek model can be solved exactly. Its
eigenvectors are the so-called natural bases \begin{equation}\label{zms} \ket{n_{\rm
S}n_{\rm E}}=\ket{n_{\rm S};n_1,n_2\cdots}=\ket{n_{\rm
S}}\otimes\bigotimes_{k=1}^{N_{\rm E}}\ket{n_k},\end{equation} \vskip -0.5cm
where $n_{\rm S},n_1,n_2,\cdots=0,1$ and \vskip -0.5cm\begin{eqnarray}
\ket{0}_{\rm S}&=&\left(\begin{array}{c} 1\\0
\end{array}\right),\quad\ket{1}_{\rm S}=\left(\begin{array}{c} 0\\1
\end{array}\right),\\
\ket{0}_{k}&=&\left(\begin{array}{c} 1\\0
\end{array}\right),\quad\ket{1}_{k}=\left(\begin{array}{c} 0\\1 \end{array}\right)
.\end{eqnarray} The corresponding eigenvalues are \begin{equation} \label{zme}E_{n_{\rm
S}n_{\rm E}}=E_{n_{\rm S}n_1n_2\cdots}=(-1)^{n_{\rm
S}}\sum_{k=1}^{N_{\rm E}}(-1)^{n_k}Z_k.\end{equation} Note that we use the
simple notation $n_{\rm E}$ to denote $n_1,n_2,\cdots$ hereafter.
Now, let us solve this Zurek model by using our exact solution or
the improved form of the perturbed solution. From our point of view,
the assumption that $H_{\rm S}$ and $H_{\rm E}$ are zero is a
theoretical simplification. In fact, we can regard $H_{\rm S}$ and
$H_{\rm E}$ as constants, so that we can absorb them into the energy
eigenvalues or, equivalently, directly omit them, since such
constants do not affect the physics. Therefore, the bases of the SESR
can be taken as the natural bases (\ref{zms}).
In this SESR, $H_{\rm SE}$ is completely diagonal, that is, \begin{equation}
\bra{m_{\rm S} m_{\rm E}}H_{\rm SE}\ket{n_{\rm S}n_{\rm
E}}=(-1)^{n_{\rm S}}\sum_{k=1}^{N_{\rm E}}(-1)^{n_k}Z_k
\delta_{m_{\rm S}n_{\rm S}}\prod_{i=1}^{N_{\rm
E}}\delta_{m_in_i}.\end{equation} We should therefore use the Hamiltonian
redivision skill, which gives \begin{equation} H_{tot0}^\prime=H_{Zurek}.\end{equation} It is easy to
get \begin{eqnarray} \widetilde{E}_{n_{\rm S}n_{\rm E}}&=&\widetilde{E}_{n_{\rm
S}n_1n_2\cdots}=(-1)^{n_{\rm S}}\sum_{k=1}^{N_{\rm
E}}(-1)^{n_k}Z_k,\\
\mathcal{A}_{{\rm I}l}(t)&=&0 \quad (l>0),\end{eqnarray} where we have used the fact
$g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm E}}=0$, which follows from
$H_{tot1}^\prime=0$. This means that the parts of the perturbed
solution higher than the zeroth order approximation vanish.
Therefore, our exact solution, or the improved form of the perturbed
solution containing only the non-vanishing zeroth order part, becomes \begin{eqnarray}
\rho_{Zurek}(t)&=&\sum_{m_{\rm S},n_{\rm S}=0}^1\sum_{m_{\rm
E},n_{\rm E}}{\rm e}^{-{\rm i} \widetilde{E}_{m_{\rm S}m_{\rm
E}}t}\rho^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm E}}(0){\rm e}^{{\rm i}
\widetilde{E}_{n_{\rm S}n_{\rm E}}t}\ket{m_{\rm S}m_{\rm
E}}\bra{n_{\rm S}n_{\rm E}}={\rm e}^{-{\rm i} H_{Zurek}t}\rho(0){\rm e}^{{\rm i}
H_{Zurek}t}.\end{eqnarray} Obviously, this equals the exact solution of
the Zurek model (\ref{zmh}) obtained by solving it directly. Of
course, the solutions $\rho_{\rm S}(t)$ of this open system obtained
from our exact solution, from our improved form of the perturbed
solution, and by direct solution are all consistent. Therefore, we
can say that our improved form of the perturbed solution indeed
absorbs the contributions from all order approximations of the
perturbing Hamiltonian $H_{\rm SE}$, since it is diagonal. In
addition, we would like to point out that although there are
degeneracies in $\widetilde{E}_{n_{\rm S}n_{\rm E}}$ when $m_{\rm
S}+m_k=n_{\rm S}+n_k$, our improved form of the perturbed solution
works well since $H_{tot1}^\prime=0$.
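For a toy realization with assumed parameters ($N_{\rm E}=8$ spins, random couplings $Z_k$, and the product initial state $\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})_{\rm S}\otimes\prod_k(\alpha_k\ket{0}_k+\beta_k\ket{1}_k)$), the solution above yields, as one can check directly from the branch structure of ${\rm e}^{-{\rm i}H_{\rm Zurek}t}$, the familiar decoherence factor $r(t)=\prod_k[\cos(2Z_kt)-{\rm i}(|\alpha_k|^2-|\beta_k|^2)\sin(2Z_kt)]$ multiplying the system coherence. The following sketch verifies this:

```python
# Toy Zurek model: H_SE is diagonal in the natural basis, so
# rho(t) = e^{-iHt} rho(0) e^{iHt}; the system coherence picks up r(t).
import numpy as np

rng = np.random.default_rng(3)
NE = 8
Zk = rng.uniform(0.5, 1.5, NE)                       # couplings Z_k
alpha = rng.uniform(0.2, 0.8, NE); beta = np.sqrt(1 - alpha**2)

# Diagonal of H_Zurek in the natural basis |n_S n_1 ... n_NE>.
sz = np.array([1.0, -1.0])
BE = np.zeros(2**NE)
for k in range(NE):
    BE = BE + Zk[k] * np.kron(np.kron(np.ones(2**k), sz), np.ones(2**(NE-k-1)))
Hdiag = np.kron(sz, BE)

psi_S = np.array([1.0, 1.0]) / np.sqrt(2)
psi_E = np.array([1.0])
for k in range(NE):
    psi_E = np.kron(psi_E, np.array([alpha[k], beta[k]]))
psi0 = np.kron(psi_S, psi_E)

def rho_S01(t):                                      # coherence <0|rho_S(t)|1>
    psi_t = np.exp(-1j * Hdiag * t) * psi0
    m = psi_t.reshape(2, 2**NE)
    return (m @ m.conj().T)[0, 1]

t = 2.0
r = np.prod(np.cos(2*Zk*t) - 1j*(alpha**2 - beta**2)*np.sin(2*Zk*t))
print(abs(rho_S01(t) - 0.5 * r))            # ~ 0: matches the analytic factor
print(abs(rho_S01(t)) < abs(rho_S01(0.0)))  # True: coherence has decayed
```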
In order to reveal the advantages of our exact solution and
perturbed solution, we add two transverse fields, in the system and
in the environment respectively, that is, \begin{eqnarray}
\label{ezm}H_{tot}&=&\mu\sigma^x_{\rm S}\otimes I_{\rm E}+H_{\rm
Zurek}+\sigma^z_{\rm S}\otimes B_{x\rm E}= \mu\sigma^x_{\rm
S}\otimes I_{\rm E}+\sigma^z_{\rm S}\otimes\left(B_{x\rm E}+B_{z\rm
E}\right),\end{eqnarray} where \begin{equation} B_{x\rm E}=\sum_{k=1}^{N_{\rm
E}}\left(\bigotimes_{i=1}^{k-1}I_{{\rm
E}i}\right)\otimes\left(X_k\sigma^{x_k}_{\rm
E}\right)\otimes\left(\bigotimes_{j=k+1}^{N_{\rm E}}I_{{\rm
E}j}\right).\end{equation} The problem with only the system transverse
field was studied in Ref. \cite{Cucchietti}. The model (\ref{ezm})
is not exactly solvable unless $\mu=0$. Obviously, there are four
kinds of SESRs.
{\em Case one}: The Hamiltonian split is \begin{eqnarray} \label{hs0}
H_{tot0}=H_{\rm S}+H_{\rm E}=\mu\sigma^x_{\rm S}\otimes I_{\rm
E},\quad H_{tot1}=\sigma^z_{\rm S}\otimes\left(B_{x\rm E}+B_{z\rm
E}\right).\end{eqnarray} The bases of the unperturbed SESR are \begin{equation}
\label{isser1}\ket{\psi_{\rm S}^{n_{\rm S}}\chi^{n_{\rm
E}}}=\ket{\psi_{\rm S}^{n_{\rm S}}}\otimes\bigotimes_{k=1}^{N_{\rm
E}}\ket{\chi^{n_k}},\end{equation} \vskip -0.5cm where \vskip -0.5cm\begin{eqnarray}
\ket{\psi^{n_{\rm S}}}&=&\frac{1}{\sqrt{2}}\left[\ket{0}_{\rm
S}+(-1)^{n_{\rm
S}}\ket{1}_{\rm S}\right],\\
\ket{\chi^{n_k}}&=&\frac{1}{\sqrt{X_k^2+\left(Z_k+(-1)^{n_k}Y_k\right)^2}}
\left[\left(Z_k+(-1)^{n_k}Y_k\right)\ket{0}_k+X_k\ket{1}_k\right],\end{eqnarray}
with $Y_k=\sqrt{X_k^2+Z_k^2}$. Here, $\ket{\chi^{n_k}}$ are the
eigenvectors of the single-spin operator $X_k\sigma^x+Z_k\sigma^z$
appearing in $B_{x\rm E}+B_{z\rm E}$, and the corresponding
eigenvalues are $(-1)^{n_k}Y_k$. Thus, the eigenvalues of $H_{tot0}$
acting on $\ket{\psi_{\rm S}^{n_{\rm S}}\chi^{n_{\rm E}}}$ are \begin{equation}
\label{h0e0} E_{n_{\rm S}n_{\rm E}}=E_{n_{\rm
S},n_1n_2\cdots}=\mu(-1)^{n_{\rm S}}.\end{equation}
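One can verify directly that the states $\ket{\chi^{n_k}}$ defined above are eigenvectors of the single-spin operator $X_k\sigma^x+Z_k\sigma^z$ with eigenvalues $(-1)^{n_k}Y_k$; a minimal check with arbitrary toy values of $X_k$ and $Z_k$:

```python
# Minimal eigenvector check for |chi^{n_k}> (toy couplings X_k, Z_k).
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
Xk, Zk = 0.7, 1.1                          # arbitrary single-spin couplings
Yk = np.hypot(Xk, Zk)                      # Y_k = sqrt(X_k^2 + Z_k^2)

for nk in (0, 1):
    lam = (-1) ** nk * Yk                  # claimed eigenvalue (-1)^{n_k} Y_k
    chi = np.array([Zk + lam, Xk])
    chi = chi / np.linalg.norm(chi)        # matches the normalization above
    resid = np.linalg.norm((Xk * sx + Zk * sz) @ chi - lam * chi)
    print(nk, resid)                       # residual ~ 0 for n_k = 0 and 1
```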
{\em Case two}: The Hamiltonian split is the same as (\ref{hs0}),
and the corresponding eigenvalues of $H_{tot0}$ are then the same as
(\ref{h0e0}). But the bases of the unperturbed SESR can be taken as \begin{equation}
\label{isser2}\ket{\psi^{n_{\rm S}}n_{\rm E}}=\ket{\psi^{n_{\rm
S}}}\otimes\bigotimes_{k=1}^{N_{\rm E}}\ket{n_k}.\end{equation}
{\em Case three}: The Hamiltonian split is \begin{eqnarray} \label{hs1}
H_{tot0}= \mbox{0 or constant}, \quad H_{tot1}=\mu\sigma^x_{\rm
S}\otimes I_{\rm E}+\sigma^z_{\rm S}\otimes\left(B_{x\rm E}+B_{z\rm
E}\right).\end{eqnarray} The bases of a selected SESR are just the natural
bases $\ket{n_{\rm S}n_{\rm E}}$ defined in (\ref{zms}). Then, we
use our Hamiltonian redivision skill to obtain \begin{equation}
\label{hsp1}H_{tot0}^\prime=H_{\rm Zurek},\quad
H_{tot1}^\prime=\mu\sigma^x_{\rm S}\otimes I_{\rm E}+\sigma^z_{\rm
S}\otimes B_{x\rm E}.\end{equation} The corresponding eigenvalues of
$H_{tot0}^\prime$ are given by (\ref{zme}).
{\em Case four}: The Hamiltonian split is the same as (\ref{hs1}).
But the bases of the unperturbed SESR can be chosen as \begin{equation}
\ket{n_{\rm S}\chi^{n_{\rm E}}}=\ket{n_{\rm
S}}\otimes\bigotimes_{k=1}^{N_{\rm E}}\ket{\chi^{n_k}}.\end{equation} Then, we
use our Hamiltonian redivision skill to obtain \begin{equation}\label{hs2}
H_{tot0}^\prime=H_{\rm Zurek}+\sigma^z_{\rm S}\otimes B_{x\rm
E},\quad H_{tot1}^\prime=\mu\sigma^x_{\rm S}\otimes I_{\rm E}.\end{equation}
The corresponding eigenvalues are \begin{equation} \label{case4e0}
E^\prime_{n_{\rm S}n_{\rm E}}=E^\prime_{n_{\rm
S}n_1n_2\cdots}=(-1)^{n_{\rm S}}\sum_{k=1}^{N_{\rm
E}}(-1)^{n_k}Y_k.\end{equation}
It must be emphasized that the four kinds of choices of the SESR
correspond to different preconditions under which the cut-off
approximation of perturbation theory applies. Cases one and two are
used under the preconditions $\mu\gg Z_k$ and/or $\mu\gg X_k$, that
is, when the transverse field $\mu$ is strong. Case three is chosen
when $Z_k\gg \mu$ and $Z_k\gg X_k$; in other words, when both
transverse fields are weak. Case four is suitable for solving the
problem when $Z_k\gg \mu$ and/or $X_k\gg \mu$, that is, when the
transverse field $\mu$ is weak.
It is easy to see that in cases one and two there are two degenerate
subspaces with $N_{\rm E}$ dimensions, which cannot be completely
removed via the usual diagonalization procedure of the degenerate
subspaces and our Hamiltonian redivision. However, the conditions
that degeneracies happen are $\delta_{m_{\rm S}n_{\rm s}}$. For case
one \begin{equation} g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm E}}=\delta_{m_{\rm
S}\left(1-n_{\rm S}\right)}\left[\sum_{k=1}^{N_{\rm
E}}(-1)^{m_k}Y_k\right]\prod_{l=1}^{N_{\rm E}}\delta_{m_ln_l},\end{equation}
while for case two \begin{equation} g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}=\delta_{m_{\rm S}\left(1-n_{\rm
S}\right)}\left[\sum_{k=1}^{N_{\rm E}}(-1)^{m_k}Z_k
\left(\prod_{l=1}^{N_{\rm E}}
\delta_{m_ln_l}\right)+\sum_{k=1}^{N_{\rm
E}}\left(\prod_{i=1}^{k-1}\delta_{m_in_i}\right)
X_k\delta_{m_k\left(1-n_k\right)}\left(\prod_{j=k+1}^{N_{\rm
E}}\delta_{m_jn_j}\right)\right].\end{equation} This implies that in
both case one and case two $g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}$ vanishes between any two degenerate levels, that is,
$g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm E}}\delta_{m_{\rm S}n_{\rm
S}}=0$. Therefore, our improved scheme of perturbation theory works
well. Moreover, since the preconditions $\mu\gg Z_k$ and/or $\mu\gg
X_k$ in cases one and two are the same, we prefer the choice of case
one because its calculation is easier than that of case two in our
improved scheme of perturbation theory.
As to case three, we also cannot completely remove the degeneracies
via the usual diagonalization procedure of the degenerate subspaces
and our Hamiltonian redivision. The conditions under which
degeneracies happen are solutions of the following equation\begin{equation}
\sum_{k=1}^{N_{\rm E}}Z_k\left[(-1)^{m_{\rm S}+m_k}-(-1)^{n_{\rm
S}+n_k}\right]=0,\end{equation} while the off-diagonal elements of the
perturbing Hamiltonian matrix are \begin{equation} g_1^{m_{\rm S}m_{\rm
E},n_{\rm S}n_{\rm E}}=\mu\delta_{m_{\rm S}\left(1-n_{\rm
S}\right)}\prod_{k=1}^{N_{\rm E}}\delta_{m_k,n_k}+\delta_{m_{\rm
S}n_{\rm S}}\sum_{k=1}^{N_{\rm
E}}\left(\prod_{i=1}^{k-1}\delta_{m_in_i}\right)
X_k\delta_{m_k\left(1-n_k\right)}\left(\prod_{j=k+1}^{N_{\rm
E}}\delta_{m_jn_j}\right).\end{equation} It is clear that we cannot
guarantee, in general, that $g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}$ vanishes between any two degenerate levels. If this is indeed the
case, this SESR is not a good choice, because the remaining
degeneracies make the usual perturbation theory difficult to apply and
complicate our improved scheme. If we still intend to use the cut-off
approximation of perturbation, the results from case three will not be
accurate enough when the evolution time is long, because, in our view,
$A_{l}(t)$ or $A_{{\rm I}l}(t)$ has extra terms proportional to the
evolution time that cannot simply be absorbed into the exponential
power for $l\geq 2$.
Fortunately, case four covers the precondition $Z_k\gg \mu$ and
$Z_k\gg X_k$ of case three. Hence, we give up the choice of case three
and only use the SESR of case four. Actually, the above problems are
what originally motivated us to consider how to choose an appropriate
SESR for open systems, as has been seen in Sec. \ref{sec2}.
It is easy to get that the conditions that degeneracies happen in
case four are solutions of the following equation\begin{equation}\label{case4dc}
\sum_{k=1}^{N_{\rm E}}Y_k\left[(-1)^{m_{\rm S}+m_k}-(-1)^{n_{\rm
S}+n_k}\right]=0,\end{equation} while the off-diagonal elements of the
perturbing Hamiltonian matrix are \begin{equation} \label{case4e1}g_1^{m_{\rm
S}m_{\rm E},n_{\rm S}n_{\rm E}}=\mu\delta_{m_{\rm S}\left(1-n_{\rm
S}\right)}\prod_{k=1}^{N_{\rm E}}\delta_{m_k,n_k}.\end{equation} Our
task is to determine whether degeneracies can happen between levels
with $g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm E}}\neq 0$. Hence,
substituting $m_k=n_k$ for every $k$ into Eq. (\ref{case4dc}), we
rewrite it as\begin{equation} \left[\sum_{k=1}^{N_{\rm
E}}Y_k(-1)^{m_k}\right]\left[(-1)^{m_{\rm S}}-(-1)^{n_{\rm
S}}\right]=0.\end{equation} Its solution is $m_{\rm S}=n_{\rm S}$,
except for the special case $\sum_{k=1}^{N_{\rm E}}Y_k(-1)^{m_k}=0$.
However, this exception is excluded once we restrict our problem to
open systems, because it would mean that $H_{tot0}^\prime=0$ by Eq.
(\ref{case4e0}), or equivalently, that the total $H_{tot}$ becomes
$\mu\sigma^x_{\rm S}\otimes I_{\rm E}$. Combining this with the
expression (\ref{case4e1}) of $g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}$ in case four, we conclude that $g_1^{m_{\rm S}m_{\rm E},n_{\rm
S}n_{\rm E}}$ indeed vanishes between any two degenerate levels.
In the following discussion, we only focus on case one with the strong
transverse field $\mu$ and case four with the weak transverse field
$\mu$, in order to illustrate our exact solution and the improved form
of the perturbed solution more simply and clearly.
Let us define \begin{equation} \delta_{m_{\rm E}n_{\rm E}}=\prod_{i=1}^{N_{\rm
E}}\delta_{m_in_i},\qquad f_{n_{\rm E}}=f_{n_1n_2\cdots}=\sum_{k=1}^{N_{\rm
E}}(-1)^{n_k}Y_k.\end{equation} Thus, for
case one, we can rewrite the off-diagonal elements of the perturbing
Hamiltonian matrix as \begin{equation} g_1^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}=\delta_{m_{\rm S}\left(1-n_{\rm S}\right)}f_{m_{\rm
E}}\delta_{m_{\rm E}n_{\rm E}}.\end{equation} Substituting it into the
definition of $G_{n_{\rm S}n_{\rm E}}^{(a)}$, we obtain \begin{equation}
G_{n_{\rm S}n_{\rm E}}^{(2)}=\frac{(-1)^{n_{\rm S}}f^2_{n_{\rm
E}}}{2\mu},\quad G_{n_{\rm S}n_{\rm E}}^{(3)}=0, \quad G_{n_{\rm
S}n_{\rm E}}^{(4)}=-\frac{(-1)^{n_{\rm S}}f^4_{n_{\rm E}}}{8\mu^3}
.\end{equation} Hence, we have \begin{equation} \widetilde{E}_{n_{\rm S}n_{\rm
E}}=(-1)^{n_{\rm S}}\mu\left[1+\frac{1}{2}\left(\frac{f_{n_{\rm
E}}}{2\mu}\right)^2-\frac{1}{2}\left(\frac{f_{n_{\rm
E}}}{2\mu}\right)^4\right].\end{equation}
Similarly, for case four, from Eqs. (\ref{case4e0}) and
(\ref{case4e1}) it follows that \begin{equation} G_{n_{\rm S}n_{\rm
E}}^{(2)}=\frac{(-1)^{n_{\rm S}}\mu^2}{2f_{n_{\rm E}}}, \quad
G_{n_{\rm S}n_{\rm E}}^{(3)}=0, \quad G_{n_{\rm S}n_{\rm
E}}^{(4)}=-\frac{(-1)^{n_{\rm S}}\mu^4}{8f^3_{n_{\rm E}}}.\end{equation}
Hence, we have \begin{equation} \label{case4ei} \widetilde{E}_{n_{\rm S}n_{\rm
E}}=(-1)^{n_{\rm S}}f_{n_{\rm
E}}\left[1+\frac{1}{2}\left(\frac{\mu}{2f_{n_{\rm
E}}}\right)^2-\frac{1}{2}\left(\frac{\mu}{2f_{n_{\rm
E}}}\right)^4\right].\end{equation}
It is clear that there is a correspondence between the case of the
strong transverse field $\mu$ and the case of the weak transverse
field $\mu$: their perturbed solutions are the same under the exchange
$\mu\Leftrightarrow f_{n_{\rm E}}$. Hence, we only write down, for
case one, the zeroth, first and second order parts of the total system
density matrix at time $t$,
respectively\begin{equation} \rho_{tot}^{(0)}(t)=\sum_{m_{\rm S},m_{\rm
E}=0}^1\sum_{n_{\rm S},n_{\rm
E}=0}^1{\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{n_{\rm S}n_{\rm E}}\right)t}\rho_{tot}^{m_{\rm
S}m_{\rm E},n_{\rm S}n_{\rm E}}(0)\ket{\psi_{\rm S}^{m_{\rm
S}}}\bra{\psi_{\rm S}^{n_{\rm S}}}\otimes\ket{\chi^{m_{\rm
E}}}\bra{\chi^{n_{\rm E}}},\end{equation} \begin{eqnarray}
\rho_{tot}^{(1)}(t)&=&\sum_{m_{\rm S},m_{\rm E}=0}^1\sum_{n_{\rm
S},n_{\rm E}=0}^1\left({\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{n_{\rm S}n_{\rm
E}}\right)t}-{\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{\left(1-n_{\rm S}\right)n_{\rm
E}}\right)t}\right)\frac{(-1)^{n_{\rm S}}f_{n_{\rm E}}}{2\mu}\nonumber\\
& &\times \rho_{tot}^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}(0)\ket{\psi_{\rm S}^{m_{\rm S}}}\bra{\psi_{\rm S}^{1-n_{\rm
S}}}\otimes\ket{\chi^{m_{\rm E}}}\bra{\chi^{n_{\rm
E}}}\nonumber\\
& &+\sum_{m_{\rm S},m_{\rm E}=0}^1\sum_{n_{\rm S},n_{\rm
E}=0}^1\left({\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{n_{\rm S}n_{\rm
E}}\right)t}-{\rm e}^{-{\rm i}\left(\widetilde{E}_{\left(1-m_{\rm
S}\right)m_{\rm E}}-\widetilde{E}_{n_{\rm S}n_{\rm
E}}\right)t}\right)\frac{(-1)^{m_{\rm S}}f_{m_{\rm E}}}{2\mu}\nonumber\\
& &\times \rho_{tot}^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}(0)\ket{\psi_{\rm S}^{1-m_{\rm S}}}\bra{\psi_{\rm S}^{n_{\rm
S}}}\otimes\ket{\chi^{m_{\rm E}}}\bra{\chi^{n_{\rm E}}},\end{eqnarray} \begin{eqnarray}
\rho_{tot}^{(2)}(t)&=&-\sum_{m_{\rm S},m_{\rm E}=0}^1\sum_{n_{\rm
S},n_{\rm E}=0}^1\left({\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{n_{\rm S}n_{\rm
E}}\right)t}-{\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{\left(1-n_{\rm S}\right)n_{\rm
E}}\right)t}\right)\left(\frac{f_{n_{\rm E}}}{2\mu}\right)^2\nonumber\\
& &\times \rho_{tot}^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}(0)\ket{\psi_{\rm S}^{m_{\rm S}}}\bra{\psi_{\rm S}^{n_{\rm
S}}}\otimes\ket{\chi^{m_{\rm E}}}\bra{\chi^{n_{\rm
E}}}\nonumber\\
& &-\sum_{m_{\rm S},m_{\rm E}=0}^1\sum_{n_{\rm S},n_{\rm
E}=0}^1\left({\rm e}^{-{\rm i}\left(\widetilde{E}_{m_{\rm S}m_{\rm
E}}-\widetilde{E}_{n_{\rm S}n_{\rm
E}}\right)t}-{\rm e}^{-{\rm i}\left(\widetilde{E}_{\left(1-m_{\rm
S}\right)m_{\rm E}}-\widetilde{E}_{n_{\rm S}n_{\rm
E}}\right)t}\right)\left(\frac{f_{m_{\rm E}}}{2\mu}\right)^2\nonumber\\
& &\times \rho_{tot}^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}(0)\ket{\psi_{\rm S}^{m_{\rm S}}}\bra{\psi_{\rm S}^{n_{\rm
S}}}\otimes\ket{\chi^{m_{\rm E}}}\bra{\chi^{n_{\rm
E}}}\nonumber\\
& &+\sum_{m_{\rm S},m_{\rm E}=0}^1\sum_{n_{\rm S},n_{\rm
E}=0}^1\left({\rm e}^{-{\rm i}\widetilde{E}_{m_{\rm S}m_{\rm
E}}t}-{\rm e}^{-{\rm i}\widetilde{E}_{\left(1-m_{\rm S}\right)m_{\rm
E}}t}\right)\left({\rm e}^{-{\rm i}\widetilde{E}_{n_{\rm S}n_{\rm
E}}t}-{\rm e}^{-{\rm i}\widetilde{E}_{\left(1-n_{\rm S}\right)n_{\rm
E}}t}\right)\nonumber \\
& &\times \frac{(-1)^{m_{\rm S}+n_{\rm S}}f_{m_{\rm E}}f_{n_{\rm
E}}}{4\mu^2}\rho_{tot}^{m_{\rm S}m_{\rm E},n_{\rm S}n_{\rm
E}}(0)\ket{\psi_{\rm S}^{1-m_{\rm S}}}\bra{\psi_{\rm S}^{1-n_{\rm
S}}}\otimes\ket{\chi^{m_{\rm E}}}\bra{\chi^{n_{\rm E}}}.\end{eqnarray} It is
easy to obtain the solution for the reduced density matrix of the open
system, up to the improved form of the second order approximation, by
tracing out the environment space, that is, \begin{equation} \rho_{\rm
S}(t)={\rm Tr}_{\rm
E}\left[\rho_{tot}^{(0)}(t)+\rho_{tot}^{(1)}(t)+\rho_{tot}^{(2)}(t)\right].\end{equation}
For a given initial state, this trace is very easy to calculate, and
the explicit form of the open system solution is obtained. Then we can
discuss the decoherence and entanglement dynamics according to the
methods in, for example, Refs. \cite{Cucchietti,Ourmd}; this is
arranged in our forthcoming manuscript (in preparation) \cite{Ournew}.
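As a minimal illustration of such a partial trace, consider the pure-dephasing limit $\mu=0$, where we assume $H=\sigma^z_{\rm S}\otimes\sum_k Z_k\sigma^z_k$ (the couplings below are illustrative, not taken from the text). For a factorized initial state with every spin in $\left(\ket{0}+\ket{1}\right)/\sqrt 2$, the off-diagonal element of $\rho_{\rm S}(t)$ decays as $\frac 12\prod_k\cos\left(2Z_kt\right)$, which the sketch verifies by exact evolution followed by tracing out the environment:

```python
import numpy as np
from functools import reduce

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
kron_all = lambda ops: reduce(np.kron, ops)

Z = np.array([0.9, 1.1, 0.6, 1.3])      # illustrative couplings Z_k
NE, d = len(Z), 2 ** len(Z)

# Pure-dephasing limit (mu = 0): H = sz_S (x) sum_k Z_k sz_k.
H = sum(kron_all([sz] + [Z[k] * sz if j == k else I2 for j in range(NE)])
        for k in range(NE))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi0 = kron_all([plus] * (NE + 1))      # factorized initial state

evals, V = np.linalg.eigh(H)

def rho_S(t):
    """Evolve exactly, then trace out the environment indices."""
    psi_t = V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi0))
    rho = np.outer(psi_t, psi_t.conj()).reshape(2, d, 2, d)
    return np.einsum('aibi->ab', rho)

# The coherence decays as (1/2) prod_k cos(2 Z_k t).
for t in (0.0, 0.3, 0.7):
    expected = 0.5 * np.prod(np.cos(2 * Z * t))
    assert np.isclose(rho_S(t)[0, 1].real, expected, atol=1e-10)
    assert np.isclose(rho_S(t)[0, 1].imag, 0.0, atol=1e-10)
```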
\section{Discussion and conclusion}\label{sec9}
This paper studies open-system dynamics; it is the third in our series
of studies on quantum mechanics in general quantum systems. Its
conclusions are based on our previous two works \cite{My1,My2}.
It must be emphasized that we study open-system dynamics from first
principles, that is, from the Schr\"odinger equation and the von
Neumann equation; we do not invoke phenomenological methods and
theories. For generality, we obtain the exact solution of the open
system without using any approximation. The derivation of our exact
master equation only uses the factorizing initial condition. In
particular, we derive our perturbed master equation and its improved
form while giving up all the approximations used in the traditional
methods and formalisms except for the factorizing initial condition.
It is very interesting that we obtain the Redfield master equation
without using the Born-Markov approximation. Based on our results,
this implies that the Born-Markov approximation is unnecessary.
A simple but key idea for obtaining our exact solution of open systems
is an appropriate choice of the SESR. In fact, it is closely connected
with the Hamiltonian redivision skill \cite{My2}. In Sec. \ref{sec8},
we have clearly stated the reasons. Originally, the aim of this idea
was to break with the accustomed choice of $H_{\rm S}+H_{\rm E}$ and
to build a picture that admits the interaction between the open system
and its environment into the unperturbed representation. This makes
the Hamiltonian redivision skill look more natural.
Our exact solution and master equation of open systems are general and
explicit in form because all order approximations of the perturbing
Hamiltonian are not only completely included but also clearly
expressed, although as an infinite series. In particular, they are in
$c$-number function form rather than operator form. This means that
they inherit the same advantage as the Feynman path integral
expression. Moreover, they are power series in the perturbing
Hamiltonian, like the Dyson series in the interaction picture. This
implies that the cut-off approximation of perturbation can be made to
the precision needed for a given problem.
Based on our improved scheme of perturbation theory, the improved
forms of the perturbed solution and perturbed master equation can
absorb partial contributions from the higher-order, even all-order,
approximations of perturbation. Therefore, we can say that our
open-system dynamics is actually calculable, operationally efficient,
and more accurate.
In order to extend our method, we also discuss the Milburn model of
open systems. In fact, from our point of view, the Milburn model of
dynamics should be applied to so-called Milburn-type closed quantum
systems, made up of the open system of interest and its near
environment. A Milburn-type closed quantum system is not a really
closed system, in the sense that a really closed system must obey the
von Neumann equation. If one does not know the Hamiltonian of the
remote environment and the form of the interaction between the system
of interest and its remote environment, the Milburn model of dynamics
may be a suitable scheme for studying this kind of open system. In
this sense, the extra term with the $\theta_0$ multiplier in the
Milburn equation represents the influence from the remote environment.
Obviously, when $\theta_0=0$, Milburn dynamics reduces to von Neumann
dynamics, which implies that the (very) remote environment can be
ignored. We obtain the exact solution, which provides a general tool
to investigate those interesting and complicated open systems whose
environment model is only partially known. However, there is a free
parameter $\theta_0$ in the Milburn model. It is not given by the
theory, but it should be obtainable from experiment if we regard the
extra term in the Milburn dynamics as resulting from the remote
environment.
Note that since our open-system dynamics is derived from first
principles, it is not applicable to cases where one does not clearly
know the Hamiltonians of the open system and its environment and the
interaction between them; in such cases the Milburn model may be
suitable. How to relate our approach to phenomenological theories of
open systems will be addressed in the near future.
As examples, the Zurek model of a two-state open system and its
extension with two transverse fields are studied, in the strong- and
weak-field regimes of the field acting on the system, respectively. In
particular, we display how to choose the appropriate SESR. These
examples indicate that our open-system dynamics is a powerful theory
and tool. We believe that our open-system dynamics can be applied to
more open systems owing to its generality and clarity; the
calculations are simpler and more efficient, and the results are more
accurate and more reliable than in existing schemes.
In summary, our results can be thought of as theoretical developments
of open-system dynamics, and they are helpful for understanding the
theory of quantum mechanics and provide some powerful tools for the
calculation of decoherence, entanglement dynamics, quantum
dissipation, quantum transport, and so on, in general quantum systems.
Together with our exact solution and perturbation theory
\cite{My1,My2}, they can finally form the foundation of the
theoretical formalism of quantum mechanics in general quantum systems.
Further study on quantum mechanics of general quantum systems is in
progress.
\section*{Acknowledgments}
We are grateful to all the collaborators of our quantum theory group
at the Institute for Theoretical Physics of our university. This work
was funded by the National Fundamental Research Program of China under
No. 2001CB309310, and partially supported by the National Natural
Science Foundation of China under Grant No. 60573008.
\vskip -0.1in
math/0601584
\section{Introduction}
\setcounter{equation}{0}
The most salient feature of the class of braid matrices presented in ref. \cite{1}, setting it apart from other known
examples, is the number of free parameters. This class was obtained for $N^2\times N^2$ braid matrices for {\it odd}
\begin{equation}
N=\left(2p-1\right),\qquad \left(p=1,2,\ldots\right).
\end{equation}
Such matrices, depending on a spectral parameter $\theta$ and satisfying the braid equation
\begin{equation}
\widehat{R}_{12}\left(\theta-\theta'\right)\widehat{R}_{23}\left(\theta\right)\widehat{R}_{12}\left(\theta'\right)
=\widehat{R}_{23}\left(\theta'\right)\widehat{R}_{12}\left(\theta\right)\widehat{R}_{23}\left(\theta-\theta'\right)
\end{equation}
have $\frac 12\left(N+3\right)\left(N-1\right)$ free parameters when the overall normalization is fixed. Thus for
$N=3,\,5,\,7,\ldots$, the respective number of parameters are $6,\,16,\,30,\ldots$. These parameters appear in the
coefficients of the $N^2$ projectors (the ``{\em nested sequence}'' defined in ref. \cite{1}) providing the basis of
$\widehat{R}\left(\theta\right)$.
The projectors are defined as follows. Let $\left(ij\right)$ be the $N\times N$ matrix with a single non-zero element
1 on row $i$ and column $j$. Then $N^2$ projectors are defined, with $\epsilon=\pm$ and $\overline{i}=N-i+1$, as
\begin{eqnarray}
&& P_{pp}=(pp)\otimes (pp),\nonumber\\
&&2P_{pi\left(\epsilon\right)}=(pp)\otimes\left[(ii)+(\overline{i}\overline{i})+\epsilon\left((i\overline{i})+
(\overline{i}i)\right)\right],\nonumber\\
&&2P_{ip\left(\epsilon\right)}=\left[(ii)+(\overline{i}\overline{i})+\epsilon\left((i\overline{i})+
(\overline{i}i)\right)\right]\otimes(pp),\nonumber\\
&&2P_{ij\left(\epsilon\right)}=(ii)\otimes(jj)+(\overline{i}\overline{i})\otimes(\overline{j}\overline{j})+
\epsilon\left[(i\overline{i})\otimes(j\overline{j})+(\overline{i}i)\otimes(\overline{j}j)\right],\nonumber\\
&&2P_{i\overline{j}\left(\epsilon\right)}=(ii)\otimes(\overline{j}\overline{j})+(\overline{i}\overline{i})\otimes
(jj)+\epsilon\left[(i\overline{i})\otimes(\overline{j}j)+(\overline{i}i)\otimes(j\overline{j})\right],
\end{eqnarray}
where, from (1.1),
\begin{equation}
i=1,\,2,\ldots,\,p-1,\qquad
\overline{i}=N-i+1=2p-1,\,2p-2,\ldots,\,p+1,\qquad
p=\frac 12\left(N+1\right).
\end{equation}
The projectors satisfy (with $\alpha$, $\beta$ standing for triplets $\left(i,j,\epsilon\right)$)
\begin{equation}
P_\alpha P_\beta=\delta_{\alpha\beta}P_{\alpha},\qquad \sum_\alpha P_\alpha=I_{N^2\times N^2}.
\end{equation}
Their total number is
\begin{equation}
1+4\left(p-1\right)+4\left(p-1\right)^2=\left(2p-1\right)^2=N^2.
\end{equation}
For our class of solutions, normalizing the coefficient of $P_{pp}$ to 1, we have
\begin{equation}
\widehat{R}\left(\theta\right)=P_{pp}+\sum_{i,\epsilon}\left(e^{m_{pi}^{(\epsilon)}\theta}P_{pi(\epsilon)}
+e^{m_{ip}^{(\epsilon)}\theta}P_{ip(\epsilon)}\right)+\sum_{i,j,\epsilon}\left(e^{m_{ij}^{(\epsilon)}\theta}P_{ij(\epsilon)}
+e^{m_{i\overline{j}}^{(\epsilon)}\theta}P_{i\overline{j}(\epsilon)}\right),
\end{equation}
with the crucial constraint
\begin{equation}
m_{ij}^{(\epsilon)}=m_{i\overline{j}}^{(\epsilon)},\qquad \left(\overline{j}=N-j+1=2p-j\right).
\end{equation}
This necessary and sufficient constraint on the coefficients of $\theta$ in the exponents leaves
\begin{equation}
\frac 12\left(N+3\right)\left(N-1\right)
\end{equation}
free parameters. For $N=3$ one thus obtains, with 6 free parameters,
\begin{equation}
\widehat{R}\left(\theta\right)=\left|\begin{array}{ccccccccc}
a_+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_- \\
0 & b_+ & 0 & 0 & 0 & 0 & 0 & b_- & 0 \\
0 & 0 & a_+ & 0 & 0 & 0 & a_- & 0 & 0 \\
0 & 0 & 0 & c_+ & 0 & c_- & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & c_- & 0 & c_+ & 0 & 0 & 0 \\
0 & 0 & a_- & 0 & 0 & 0 & a_+ & 0 & 0 \\
0 & b_- & 0 & 0 & 0 & 0 & 0 & b_+ & 0 \\
a_- & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_+ \\
\end{array}\right|
\end{equation}
where
\begin{equation}
a_{\pm}=\frac 12\left(e^{m_{11}^{(+)}\theta}\pm e^{m_{11}^{(-)}\theta}\right),\qquad b_{\pm}=
\frac 12\left(e^{m_{12}^{(+)}\theta}\pm e^{m_{12}^{(-)}\theta}\right),\qquad c_{\pm}=
\frac 12\left(e^{m_{21}^{(+)}\theta}\pm e^{m_{21}^{(-)}\theta}\right).
\end{equation}
The parameters $a_{\pm}$ are each repeated in (1.10) according to (1.8), since
\begin{equation}
m_{1\overline{1}}^{(\pm)}=m_{11}^{(\pm)},\qquad \left(\overline{1}=3\right).
\end{equation}
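Two properties of (1.10) can be confirmed numerically with throwaway parameter values: the additivity $\widehat{R}\left(\theta\right)\widehat{R}\left(\theta'\right)=\widehat{R}\left(\theta+\theta'\right)$, which follows from the spectral form (1.7) over orthogonal projectors, and the braid equation (1.2) itself. A minimal sketch:

```python
import numpy as np

# Illustrative values of the six free parameters m_{ij}^{(eps)}.
m = {('11', +1): 0.9, ('11', -1): 0.5,
     ('12', +1): 0.4, ('12', -1): 0.1,
     ('21', +1): 0.7, ('21', -1): 0.2}

def R_hat(theta):
    """Assemble the 9x9 matrix (1.10) with a_pm, b_pm, c_pm from (1.11)."""
    def pm(key):
        ep, em = np.exp(m[(key, +1)] * theta), np.exp(m[(key, -1)] * theta)
        return 0.5 * (ep + em), 0.5 * (ep - em)
    (ap, am), (bp, bm), (cp, cm) = pm('11'), pm('12'), pm('21')
    R = np.zeros((9, 9))
    R[4, 4] = 1.0
    for u, v, dia, off in [(0, 8, ap, am), (2, 6, ap, am),
                           (1, 7, bp, bm), (3, 5, cp, cm)]:
        R[u, u] = R[v, v] = dia
        R[u, v] = R[v, u] = off
    return R

I3 = np.eye(3)
def R12(t): return np.kron(R_hat(t), I3)
def R23(t): return np.kron(I3, R_hat(t))

th, thp = 0.37, 0.82
# Additivity from the orthogonal-projector spectral form (1.7):
assert np.allclose(R_hat(th) @ R_hat(thp), R_hat(th + thp))
# Braid equation (1.2) on (C^3)^{x3}:
lhs = R12(th - thp) @ R23(th) @ R12(thp)
rhs = R23(thp) @ R12(th) @ R23(th - thp)
assert np.allclose(lhs, rhs)
```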
This is the case we will study mostly in the following sections. The corresponding results for $N>3$ will be indicated
briefly. For example, the generalization of the considerations below in this section to $N>3$ is entirely
straightforward. To explore the statistical model associated to (1.10), one starts by constructing explicit
representations of the monodromy matrices $t_{ij}^{(r)}\left(\theta\right)$ of successive orders $\left(r=1,2,3,\ldots
\right)$ obtained by taking coproducts of the fundamental $3\times 3$ blocks (with the same $\theta$ for each
factor)
\begin{equation}
t_{ij}^{(r)}=\sum_{j_1,\ldots,j_{r-1}}t_{ij_1}\otimes t_{j_1j_2}\otimes\cdots\otimes t_{j_{r-1}j}.
\end{equation}
For $N=3$,
\begin{equation}
t^{(r)}=\left|\begin{array}{ccccc}
t_{11}^{(r)}& & t_{12}^{(r)} & & t_{1\overline{1}}^{(r)} \\
&&&&\\
t_{21}^{(r)} && t_{22}^{(r)} && t_{2\overline{1}}^{(r)} \\
&&&&\\
t_{\overline{1}1}^{(r)} && t_{\overline{1}2}^{(r)} && t_{\overline{1}\overline{1}}^{(r)} \\
\end{array}\right|.
\end{equation}
If the $\widehat{R}tt$ equation for the blocks $t_{ij}^{(r)}$ (appendix C),
\begin{equation}
\widehat{R}\left(\theta-\theta'\right)\left(t^{(r)}\left(\theta\right)\otimes
t^{(r)}\left(\theta'\right)\right)=\left(t^{(r)}\left(\theta'\right)\otimes
t^{(r)}\left(\theta\right)\right)\widehat{R}\left(\theta-\theta'\right)
\end{equation}
is satisfied for $r=1$, then the coproduct construction (1.13) ensures that (1.15) is satisfied for all higher values
$r=2,\,3,\ldots$. The solution for $r=1$ is given by
\begin{equation}
t^{(1)}\left(\theta\right)\equiv t\left(\theta\right)=P\widehat{R}\left(\theta\right)=R\left(\theta\right),
\end{equation}
where $P$ is the permutation matrix
\begin{equation}
P=\sum_{ij}\left(ij\right)\otimes\left(ji\right)
\end{equation}
and $R\left(\theta\right)$ is the Yang-Baxter (YB) matrix. This is a standard result valid generally for solutions
of (1.2). (See appendix B of ref. \cite{1} for sources cited.)
The transfer matrix, for each order $r$, is defined to be the trace (with argument $\theta$)
\begin{equation}
T^{(r)}=t_{11}^{(r)}+t_{22}^{(r)}+t_{\overline{1}\overline{1}}^{(r)}.
\end{equation}
The properties of the model depend crucially on the eigenvalues of $T^{(r)}$. Refs. \cite{2,3,4} provide ample
information citing numerous basic sources.
So our basic task will be to construct the eigenstates and eigenvalues of $T^{(r)}\left(\theta\right)$. Remarkable
features following from (1.10) (and more generally from (1.7)) will be presented in the following sections and appendices.
We will also construct chain Hamiltonians and potentials leading to factorizable $S$-matrices starting from our class of
$\widehat{R}\left(\theta\right)$.
Concerning each aspect we will try to display the role of our multiple parameters. For all
\begin{equation}
m_{ij}^{(+)}\theta\geq m_{ij}^{(-)}\theta
\end{equation}
the elements of $\widehat{R}\left(\theta\right)$ and hence the Boltzmann weights are non-negative, consistent with
physical interpretations. For {\it definiteness} we consider the sector, say
\begin{equation}
m_{11}^{(+)}> m_{11}^{(-)}>m_{12}^{(+)}> m_{12}^{(-)}> m_{21}^{(+)}>
m_{21}^{(-)},\qquad \theta\geq 0
\end{equation}
of (1.10). The eigenvalues will be ordered differently for other
sectors. They can be considered separately.
\section{Transfer matrix, eigenvectors, eigenvalues ($N=3$): crucial features}
\setcounter{equation}{0}
We start by signalling some crucial features to be encountered below in the explicit constructions restricted (in this
section) to $N=3$.
\begin{enumerate}
\item The trace of the transfer matrix (1.18) of order $r$ will turn out to be
\begin{equation}
\hbox{tr}\left(T^{(r)}\left(\theta\right)\right)=2e^{rm_{11}^{(+)}\theta}+1.
\end{equation}
Of the six parameters $\left(m_{11}^{(\pm)},m_{12}^{(\pm)},m_{21}^{(\pm)}\right)$ of (1.11) {\it only} $m_{11}^{(+)}$
appears in the trace. A simple explanation of this fact will be given after discussing the generalization for $N>3$.
\item The eigenvalue $e^{rm_{11}^{(+)}\theta}$ is obtained exactly {\it twice} for each $r$ and the value 1 only
{\it once}.
\item The remaining $\left(3^r-3\right)$ eigenvalues occur in multiplets of {\it zero sum} due to the presence of
{\it roots of unity}. Hence they do not contribute to the trace. For $r$ a prime number there will be ``$r$-plets''
(and possibly ``$nr$-plets'', $n$ being an integer)
\begin{equation}
e^{\mu\theta}\left(1,e^{\frac{2\pi i}{r}},e^{\frac{2\pi i}{r}\cdot 2},\ldots,e^{\frac{2\pi i}{r}\cdot\left(r-1\right)}
\right),
\end{equation}
where $\mu$ is a {\it linear} combination of the parameters $m_{ij}^{(\pm)}$. When $r$ is factorizable lower order
multiplets can be present corresponding to the factors. Thus for $r=4$ one obtains both doublets and quadruplets
\begin{equation}
e^{\mu_2\theta}\left(1,-1\right),\qquad e^{\mu_4\theta}\left(1,i,-1,-i\right),
\end{equation}
with appropriate linear combinations $\mu_2$, $\mu_4$ to be displayed below.
\item Apart from possible roots of unity phase factors the modulus of each eigenvalue is a simple exponential of the
type $e^{\mu\theta}$ of (2.2). For $r=3$, for example, one obtains for $\mu$ the values (appendix A)
\begin{eqnarray}
&&3m_{11}^{(+)},\,\left(m_{11}^{(+)}+2m_{11}^{(-)}\right),\nonumber\\
&&\left(m_{11}^{(+)}+m_{12}^{(+)}+m_{21}^{(+)}\right),\,\left(m_{11}^{(+)}+m_{12}^{(-)}+m_{21}^{(-)}\right),\nonumber\\
&&\left(m_{11}^{(-)}+m_{12}^{(+)}+m_{21}^{(-)}\right),\,\left(m_{11}^{(-)}+m_{12}^{(-)}+m_{21}^{(+)}\right),\nonumber\\
&&\left(m_{12}^{(+)}+m_{21}^{(+)}\right),\,\left(m_{12}^{(-)}+m_{21}^{(-)}\right),\nonumber\\
&&0.
\end{eqnarray}
Along with roots of unity factors these provide all the 27 eigenvalues, as will be shown below.
\item The values of $\mu$ depend crucially on the subspaces, to be introduced below, which are invariant under the
action of $T^{(r)}\left(\theta\right)$, the transfer matrix.
\end{enumerate}
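Feature 1 is easy to confirm for small $r$. The sketch below encodes the fundamental blocks (2.5) (given explicitly in the next subsection) for illustrative parameter values, forms coproducts via (1.13), and checks the trace formula (2.1) for $r=1,2,3$:

```python
import numpy as np

m11p, m11m = 0.8, 0.3       # illustrative m_{11}^{(+/-)}
m12p, m12m = 0.6, 0.2       # illustrative m_{12}^{(+/-)}
m21p, m21m = 0.5, 0.1       # illustrative m_{21}^{(+/-)}
theta = 0.4

def half(mp, mm):
    return (0.5 * (np.exp(mp * theta) + np.exp(mm * theta)),
            0.5 * (np.exp(mp * theta) - np.exp(mm * theta)))

(ap, am), (bp, bm), (cp, cm) = half(m11p, m11m), half(m12p, m12m), half(m21p, m21m)

# Fundamental blocks t_ij(theta) of (2.5), row/column indices (1, 2, 1bar).
t = np.array([
    [[[ap, 0, 0], [0, 0, 0], [0, 0, am]],
     [[0, 0, 0], [cp, 0, cm], [0, 0, 0]],
     [[0, 0, am], [0, 0, 0], [ap, 0, 0]]],
    [[[0, bp, 0], [0, 0, 0], [0, bm, 0]],
     [[0, 0, 0], [0, 1, 0], [0, 0, 0]],
     [[0, bm, 0], [0, 0, 0], [0, bp, 0]]],
    [[[0, 0, ap], [0, 0, 0], [am, 0, 0]],
     [[0, 0, 0], [cm, 0, cp], [0, 0, 0]],
     [[am, 0, 0], [0, 0, 0], [0, 0, ap]]],
])

def coproduct(blocks):
    """t^{(r+1)}_{ij} = sum_k t_{ik} (x) t^{(r)}_{kj}, as in (1.13)."""
    return [[sum(np.kron(t[i][k], blocks[k][j]) for k in range(3))
             for j in range(3)] for i in range(3)]

blocks = [[t[i][j] for j in range(3)] for i in range(3)]
for r in (1, 2, 3):
    T = blocks[0][0] + blocks[1][1] + blocks[2][2]   # transfer matrix T^{(r)}
    assert np.isclose(np.trace(T), 2 * np.exp(r * m11p * theta) + 1)
    blocks = coproduct(blocks)
```

Only $m_{11}^{(+)}$ survives in the trace because every off-diagonal block $t_{ij}$ ($i\neq j$) in (2.5) is traceless.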
\subsection{Construction of $T^{(r)}\left(\theta\right)$ for $N=3$}
The standard construction of the fundamental $3\times 3$ block matrices $t_{ij}\left(\theta\right)$ implementing (1.15),
(1.16), (1.10), (1.11) leads to (for $t^{(1)}\left(\theta\right)\equiv t\left(\theta\right)$ with $\overline{1}=3$)
\begin{eqnarray}
&&t_{11}\left(\theta\right)=\left|\begin{array}{ccc}
a_+ & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & a_- \\
\end{array}\right|,\qquad t_{12}\left(\theta\right)=\left|\begin{array}{ccc}
0 & 0 & 0 \\
c_+ & 0 & c_- \\
0 & 0 & 0 \\
\end{array}\right|,\qquad t_{1\overline{1}}\left(\theta\right)=\left|\begin{array}{ccc}
0 & 0 & a_- \\
0 & 0 & 0 \\
a_+ & 0 & 0 \\
\end{array}\right|,\nonumber\\
&&t_{21}\left(\theta\right)=\left|\begin{array}{ccc}
0 & b_+ & 0 \\
0 & 0 & 0 \\
0 & b_- & 0 \\
\end{array}\right|,\qquad t_{22}\left(\theta\right)=\left|\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
\end{array}\right|,\qquad t_{2\overline{1}}\left(\theta\right)=\left|\begin{array}{ccc}
0 & b_- & 0 \\
0 & 0 & 0 \\
0 & b_+ & 0 \\
\end{array}\right|,\\
&&t_{\overline{1}1}\left(\theta\right)=\left|\begin{array}{ccc}
0 & 0 & a_+ \\
0 & 0 & 0 \\
a_- & 0 & 0 \\
\end{array}\right|,\qquad t_{\overline{1}2}\left(\theta\right)=\left|\begin{array}{ccc}
0 & 0 & 0 \\
c_- & 0 & c_+ \\
0 & 0 & 0 \\
\end{array}\right|,\qquad t_{\overline{1}\overline{1}}\left(\theta\right)=\left|\begin{array}{ccc}
a_- & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & a_+ \\
\end{array}\right|,\nonumber
\end{eqnarray}
where, from (1.11),
\begin{equation}
\left(a_+\pm a_-\right)=e^{m_{11}^{(\pm)}\theta},\qquad \left(b_+\pm b_-\right)=e^{m_{12}^{(\pm)}\theta},\qquad
\left(c_+\pm c_-\right)=e^{m_{21}^{(\pm)}\theta}.
\end{equation}
One now has to implement these in (1.13), (1.14) and (1.18) to obtain $T^{(r)}\left(\theta\right)$. Then one proceeds
to construct the eigenvalues of $T^{(r)}\left(\theta\right)$.
\subsection{Subspaces invariant under the action of $T^{(r)}\left(\theta\right)$}
We start by introducing convenient, compact notations. The state vectors of the fundamental representation (2.5) are
denoted as
\begin{equation}
\left(\left|\begin{array}{c} 1 \\ 0 \\ 0 \\\end{array}\right\rangle,\,\left|\begin{array}{c}
0 \\ 1 \\ 0 \\\end{array}\right\rangle,\,\left|\begin{array}{c} 0 \\ 0 \\ 1 \\
\end{array}\right\rangle\right)\equiv\left(\left|\begin{array}{c}
1 \\\end{array}\right\rangle,\,\left|\begin{array}{c}
2 \\\end{array}\right\rangle,\,\left|\begin{array}{c}
\overline{1} \\\end{array}\right\rangle\right).
\end{equation}
Tensor products for higher orders are constructed as
\begin{equation}
\left(\left|1\right\rangle\otimes\left|1\right\rangle,\left|1\right\rangle\otimes\left|2\right\rangle,
\left|1\right\rangle\otimes\left|\overline{1}\right\rangle,\ldots\right)\equiv\left(\left|11\right\rangle,
\left|12\right\rangle,\left|1\overline{1}\right\rangle,\ldots\right)
\end{equation}
and so on in evident continuation. The {\it order} of the labels $(1,2,\overline{1})$ will indicate the tensor product
structure. Thus, for example,
\begin{equation}
\left|1\right\rangle\otimes\left|1\right\rangle\otimes\left|2\right\rangle\otimes
\left|\overline{1}\right\rangle\otimes\left|1\right\rangle\equiv\left|112\overline{1}1\right\rangle.
\end{equation}
Corresponding to the $r$-th order coproduct, $T^{(r)}\left(\theta\right)$ acts on a space
spanned by $3^r$ states (for $N=3$). Let
\begin{equation}
S\left(r,k\right),\qquad \left(k=0,1,\ldots,r\right)
\end{equation}
denote the subspaces labeled by $k$, the {\it multiplicity} of the index $2$. The coefficients of the different
powers of $x$ in the expansion
\begin{equation}
\left(x+2\right)^r=1\cdot x^r+2rx^{r-1}+\cdots+2^{r-k}\binom r{r-k}x^k+\cdots+2^r
\end{equation}
give the number of states in the respective subspaces. Setting $x=1$ one obtains the total number
\begin{equation}
\left(1+2\right)^r=3^r.
\end{equation}
For example, for $r=3$, one obtains the subspaces,
\begin{eqnarray}
&&S\left(3,3\right):\qquad \left|222\right\rangle\nonumber\\
&&S\left(3,2\right):\qquad \left|221\right\rangle,\left|22\overline{1}\right\rangle,\left|212\right\rangle,
\left|2\overline{1}2\right\rangle,\left|122\right\rangle,\left|\overline{1}22\right\rangle,\nonumber\\
&&S\left(3,1\right):\qquad \left|211\right\rangle,\left|21\overline{1}\right\rangle,\left|2\overline{1}1\right\rangle,
\left|2\overline{1}\overline{1}\right\rangle\nonumber\\
&&\phantom{S\left(3,1\right):\qquad}\left|121\right\rangle,\left|12\overline{1}\right\rangle,
\left|\overline{1}21\right\rangle,
\left|\overline{1}2\overline{1}\right\rangle\nonumber\\
&&\phantom{S\left(3,1\right):\qquad}\left|112\right\rangle,\left|1\overline{1}2\right\rangle,
\left|\overline{1}12\right\rangle,\left|\overline{1}\overline{1}2\right\rangle\nonumber\\
&&S\left(3,0\right):\qquad \left|111\right\rangle,\left|11\overline{1}\right\rangle,\left|1\overline{1}1\right\rangle,
\left|\overline{1}11\right\rangle\nonumber\\
&&\phantom{S\left(3,0\right):\qquad}\left|\overline{1}\overline{1}\overline{1}\right\rangle,
\left|\overline{1}\overline{1}1\right\rangle,\left|\overline{1}1\overline{1}\right\rangle,
\left|1\overline{1}\overline{1}\right\rangle
\end{eqnarray}
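The binomial counting above is confirmed by direct enumeration (a throwaway check):

```python
from itertools import product
from math import comb

r = 3
# Enumerate all 3^r states over the index set (1, 2, 1bar) and group
# them by k, the multiplicity of the index '2'.
states = list(product(('1', '2', '1bar'), repeat=r))
sizes = {k: sum(1 for s in states if s.count('2') == k) for k in range(r + 1)}

# dim S(r, k) = 2^(r-k) * C(r, r-k), summing to 3^r.
assert sizes == {k: 2 ** (r - k) * comb(r, r - k) for k in range(r + 1)}
assert sum(sizes.values()) == 3 ** r
print(sizes)   # {0: 8, 1: 12, 2: 6, 3: 1}
```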
A striking and most helpful consequence of the structure of the matrices (2.5) and their coproducts is: {\it each subspace
$S\left(r,k\right)$ is invariant under the action of $T^{(r)}\left(\theta\right)$}. This considerably facilitates the
construction of eigenstates, since one works in lower-dimensional spaces.
One possible approach is as follows: One selects any one state from the $2^{r-k}\binom r{r-k}$ states of
$S\left(r,k\right)$ and computes the action of $T^{(r)}\left(\theta\right)$ on it. One gets on the r.h.s. a linear
combination of states belonging to $S\left(r,k\right)$. Thus, for example,
\begin{eqnarray}
&&T^{(4)}\left(\theta\right)\left|1111\right\rangle=\left(a_+^4+a_-^4\right)\left|1111\right\rangle+
2a_+^2a_-^2\left(\left|\overline{1}\overline{1}\overline{1}\overline{1}\right\rangle+
\left|1\overline{1}1\overline{1}\right\rangle+\left|\overline{1}1\overline{1}1\right\rangle\right)+\nonumber\\
&&\phantom{T^{(4)}\left(\theta\right)\left|1111\right\rangle=}\left(a_+^2+a_-^2\right)a_+a_-\left(\left|11\overline{1}\overline{1}\right\rangle+
\left|\overline{1}\overline{1}11\right\rangle+\left|\overline{1}11\overline{1}\right\rangle+
\left|1\overline{1}\overline{1}1\right\rangle\right),
\end{eqnarray}
where $a_{\pm}=\frac 12\left(e^{m_{11}^{(+)}\theta}\pm e^{m_{11}^{(-)}\theta}\right)$ as noted before. Next one
computes the action of $T^{(4)}$ successively on the other states appearing on the right. This continues until
one obtains the coefficients for a closed subsystem. Then one searches for linear combinations that are reproduced
to within a factor under $T^{(4)}$. Thus one systematically obtains all eigenvectors and eigenvalues for
the subspace $S\left(r,k\right)$. For our class one has to solve systems of {\it linear} equations with fairly simple
coefficients. Even the 81 eigenstates and eigenvalues for $r=4$ were obtained directly without using a computer program
and without any real difficulties.
We have thus obtained exhaustive solutions for $r=1,\,2,\,3,\,4$. The corresponding $3,\,9,\,27$ and $81$ eigenvalues
are presented in appendix A. We have also obtained explicitly all the corresponding eigenstates. For brevity they are
not presented here. The eigenvalues of the appendix A fully illustrate the crucial properties (1) to (5) signalled at
the start of this section. In the following section we indicate a related but somewhat differently formulated approach
for various comparisons.
\section{Linear constraints for eigenvectors for $N=3$ and comparison with algebraic Bethe ansatz}
\setcounter{equation}{0}
In section 2 we have noted how, exploiting the invariance of the subspaces $S\left(r,k\right)$ defined by (2.10)
one can construct step by step all the eigenstates. The comments following (2.14) indicate how the relevant linear
equations are obtained. We formulate below the approach in a systematic, explicit fashion.
Starting with (2.5) and (2.6) for $t_{ij}^{(1)}\left(\theta\right)=t_{ij}\left(\theta\right)$ we define the operators
\begin{eqnarray}
&&U=b_+\left(-\theta\right)t_{21}\left(\theta\right)+b_-\left(-\theta\right)t_{2\overline{1}}\left(\theta\right)
=\left(\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{array}\right),\nonumber\\
&&A=t_{22}\left(\theta\right)=\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0 \\
\end{array}\right),\nonumber\\
&&D=b_-\left(-\theta\right)t_{21}\left(\theta\right)+b_+\left(-\theta\right)t_{2\overline{1}}\left(\theta\right)
=\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 1 & 0 \\
\end{array}\right).
\end{eqnarray}
Then (suppressing arguments $\theta$ of $t_{ij}$)
\begin{eqnarray}
&&t_{11}\left(A,U,D\right)=\left(0,a_+U,a_-D\right)\nonumber\\
&&t_{12}\left(A,U,D\right)=\left(0,c_+A,c_-A\right)\nonumber\\
&&t_{1\overline{1}}\left(A,U,D\right)=\left(0,a_+D,a_-U\right)\nonumber\\
&&t_{21}\left(A,U,D\right)=\left(b_+U+b_-D,0,0\right)\nonumber\\
&&t_{22}\left(A,U,D\right)=\left(A,0,0\right)\nonumber\\
&&t_{2\overline{1}}\left(A,U,D\right)=\left(b_-U+b_+D,0,0\right)\nonumber\\
&&t_{\overline{1}1}\left(A,U,D\right)=\left(0,a_-D,a_+U\right)\nonumber\\
&&t_{\overline{1}2}\left(A,U,D\right)=\left(0,c_-A,c_+A\right)\nonumber\\
&&t_{\overline{1}\overline{1}}\left(A,U,D\right)=\left(0,a_-U,a_+D\right).
\end{eqnarray}
Also from (2.7) and (3.1)
\begin{equation}
U\left|2\right\rangle=\left|1\right\rangle,\qquad A\left|2\right\rangle=\left|2\right\rangle,\qquad
D\left|2\right\rangle=\left|\overline{1}\right\rangle.
\end{equation}
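The explicit $3\times 3$ matrices in (3.1) make the relations (3.3) immediate to verify numerically; the following sketch (our own, assuming the basis ordering $\left(\left|1\right\rangle,\left|2\right\rangle,\left|\overline{1}\right\rangle\right)$ of (2.7)) does so:

```python
# Sketch (ours) verifying (3.3) from the explicit matrices (3.1):
# with basis ordering (|1>, |2>, |1bar>), U, A, D act on |2> as stated.
import numpy as np

ket1, ket2, ket1bar = np.eye(3)           # |1>, |2>, |1bar>
U = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
A = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
D = np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]])

assert np.array_equal(U @ ket2, ket1)     # U|2> = |1>
assert np.array_equal(A @ ket2, ket2)     # A|2> = |2>
assert np.array_equal(D @ ket2, ket1bar)  # D|2> = |1bar>
```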
For any $r$, starting with $S\left(r,r\right)$ one obtains the basic eigenstate (trivially since $S\left(r,r\right)$
is of dimension 1),
\begin{equation}
T^{(r)}\left(\theta\right)\left|22\ldots 2\right\rangle=1\left|22\ldots 2\right\rangle.
\end{equation}
Now one moves up stepwise in $\left(r-k\right)$.
\paragraph{$\underline{S\left(r,r-1\right)\,(\dim 2r)}$:} With $2r$ coefficients $\left(u_i,d_i\right)$
$\left(i=1,\ldots,r\right)$ one can label the states as
\begin{eqnarray}
\left(\begin{array}{l}
\left(u_1U+d_1D\right)\otimes A\otimes\cdots\otimes A \\
+A\otimes\left(u_2U+d_2D\right)\otimes A\otimes\cdots\otimes A \\
\vdots \\
+ A\otimes A\otimes\cdots\otimes A\otimes\left(u_rU+d_rD\right)\\
\end{array}\right)\left|22\ldots 2\right\rangle.
\end{eqnarray}
The action of $T^{(r)}\left(\theta\right)$ on these leads to a linear system of equations in $\left(u_i,d_i\right)$
corresponding to eigenstates. For $S\left(r,r-1\right)$ the solution is particularly simple. Define
\begin{equation}
\left|\omega,\epsilon\right\rangle=\left(\begin{array}{l}
A\otimes A\otimes\cdots\otimes A\otimes\left(U+\epsilon D\right) \\
+\omega A\otimes A\otimes\cdots\otimes\left(U+\epsilon D\right)\otimes A \\
+\omega^2 A\otimes A\otimes\cdots\otimes\left(U+\epsilon D\right)\otimes A\otimes A \\
\vdots \\
+\omega^{r-1}\left(U+\epsilon D\right)\otimes A\otimes A\otimes\cdots\otimes A \\
\end{array}\right)
\left|22\ldots 2\right\rangle,
\end{equation}
where $\epsilon=\pm$ and $\omega$ can have $r$ values (as an $r$-th root of unity)
\begin{equation}
\omega=\left(1,e^{\frac{i2\pi}{r}},\ldots,e^{\frac{i2\pi}{r}\cdot\left(r-1\right)}\right).
\end{equation}
One obtains
\begin{equation}
T^{(r)}\left(\theta\right)\left|\omega,\epsilon\right\rangle=\omega^{r-1}e^{(m_{12}^{(\epsilon)}+
m_{21}^{(\epsilon)})\theta}\left|\omega,\epsilon\right\rangle.
\end{equation}
The $2r$ eigenvalues are
\begin{equation}
e^{(m_{12}^{(\epsilon)}+
m_{21}^{(\epsilon)})\theta}\left(1,e^{\frac{i2\pi}{r}},\ldots,e^{\frac{i2\pi}{r}\left(r-1\right)}\right).
\end{equation}
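A small numerical sketch (ours; the values of $m_{12}^{(\epsilon)}+m_{21}^{(\epsilon)}$ and $\theta$ below are arbitrary samples) confirms that each multiplet in (3.9), being a common exponential factor times the $r$-th roots of unity, sums to zero for $r\geq 2$ and hence contributes nothing to the trace:

```python
# Sketch (ours, sample parameters): the 2r eigenvalues (3.9) of S(r, r-1)
# are exp((m12 + m21) theta) times the r-th roots of unity, once for each
# epsilon; for r >= 2 each multiplet sums to zero, so S(r, r-1)
# contributes nothing to the trace of T^(r).
import cmath

theta = 0.7
m12_plus_m21 = {+1: 0.9, -1: 0.4}   # sample values of m12^(eps) + m21^(eps)

for r in range(2, 7):
    roots = [cmath.exp(2j * cmath.pi * j / r) for j in range(r)]
    assert abs(sum(roots)) < 1e-12
    for eps in (+1, -1):
        eig = [cmath.exp(m12_plus_m21[eps] * theta) * w for w in roots]
        assert abs(sum(eig)) < 1e-12
```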
The next step is $k=r-2$.
\paragraph{$\underline{S\left(r,r-2\right)\,(\dim 2r\left(r-1\right))}$:} A set of states spanning this subspace
is given by
\begin{equation}
\sum_{i\neq j}\left(A\otimes\cdots\otimes A\otimes\left(u_iU+d_i D\right)\otimes A\otimes\cdots\otimes
\left(u_jU+d_j D\right)\otimes A\otimes\cdots\otimes A\right)
\left|22\ldots 2\right\rangle.
\end{equation}
The parameters $\left(u,d\right)$ have to be constrained to obtain eigenstates. At each step one obtains sets of
linear constraints. The pattern is now evident: at each step one inserts, as in (3.10), $\left(r-k\right)$ factors
of the type $\left(u_iU+d_i D\right)$ at distinct positions.
Finally for $k=0$, one has $S\left(r,0\right)$ of dimension $2^r$. Here a basis spanning the subspace can be labeled
as
\begin{equation}
\left(\sum_{i}\left(u^{(i)}_1U+d^{(i)}_1 D\right)\otimes \left(u^{(i)}_2U+d^{(i)}_2 D\right)\otimes\cdots\otimes
\left(u^{(i)}_rU+d^{(i)}_r D\right)\right)
\left|22\ldots 2\right\rangle.
\end{equation}
Since there are no labels $\left|2\right\rangle$ left, it can be shown from (3.2) that $T^{(r)}\left(\theta\right)$,
acting on $S\left(r,0\right)$, simplifies to
\begin{eqnarray}
&&T^{(r)}\approx t_{11}^{(r)}+t_{\overline{1}\overline{1}}^{(r)}\approx
t_{11}\otimes t_{11}^{(r-1)}+t_{1\overline{1}}\otimes t_{\overline{1}1}^{(r-1)}+
t_{\overline{1}1}\otimes t_{1\overline{1}}^{(r-1)}+t_{\overline{1}\overline{1}}
\otimes t_{\overline{1}\overline{1}}^{(r-1)}.
\end{eqnarray}
In successive steps ($t^{(r-1)}\rightarrow t\otimes t^{(r-2)}$ and so on) only the indices $\left(1,\overline{1}\right)$
need be retained; only these give nonzero contributions. From (3.2)
\begin{eqnarray}
&&t_{11}\left(u_s^{(i)}U+d_s^{(i)}D\right)=\left(a_+u_s^{(i)}U+a_-d_s^{(i)}D\right)\equiv X_{11}^{(i)}\left(s\right),
\nonumber\\
&&t_{1\overline{1}}\left(u_s^{(i)}U+d_s^{(i)}D\right)=\left(a_-d_s^{(i)}U+a_+u_s^{(i)}D\right)\equiv
X_{1\overline{1}}^{(i)}\left(s\right),\nonumber\\
&&t_{\overline{1}1}\left(u_s^{(i)}U+d_s^{(i)}D\right)=\left(a_+d_s^{(i)}U+a_-u_s^{(i)}D\right)\equiv
X_{\overline{1}1}^{(i)}\left(s\right),\nonumber\\
&&t_{\overline{1}\overline{1}}\left(u_s^{(i)}U+d_s^{(i)}D\right)=\left(a_-u_s^{(i)}U+a_+d_s^{(i)}D\right)\equiv
X_{\overline{1}\overline{1}}^{(i)}\left(s\right).
\end{eqnarray}
Define, with indices taking values $\left(1,\overline{1}\right)$ only,
\begin{equation}
X_{ab}^{(i)}\left(1,2,\ldots,r\right)=\sum_{b_1,\ldots,b_{r-1}}X_{ab_1}^{(i)}\left(1\right)\otimes
X_{b_1b_2}^{(i)}\left(2\right)\otimes\cdots\otimes X_{b_{r-1}b}^{(i)}\left(r\right).
\end{equation}
The action of $T^{(r)}\left(\theta\right)$ on the generic state (3.11) finally reduces to
\begin{equation}
\left(\sum_{i=1}^r\left(X_{11}^{(i)}\left(1,2,\ldots,r\right)+X_{\overline{1}\overline{1}}^{(i)}\left(1,2,\ldots,r\right)
\right)\right)\left|22\ldots 2\right\rangle.
\end{equation}
It is of particular interest to see which parametrizations in (3.11) correspond to the {\it two} eigenstates (and only
two, for any $r$) that contribute to the trace.
For $r=2$,
\begin{eqnarray}
&&T^{(2)}\left(\theta\right)\left(\left|11\right\rangle+\left|\overline{1}\overline{1}\right\rangle\right)=e^{2m_{11}^{(+)}\theta}
\left(\left|11\right\rangle+\left|\overline{1}\overline{1}\right\rangle\right),\nonumber\\
&&T^{(2)}\left(\theta\right)\left(\left|1\overline{1}\right\rangle+\left|\overline{1}1\right\rangle\right)=e^{2m_{11}^{(+)}\theta}
\left(\left|1\overline{1}\right\rangle+\left|\overline{1}1\right\rangle\right).
\end{eqnarray}
Along with
\begin{eqnarray}
&&T^{(2)}\left(\theta\right)\left|22\right\rangle=\left|22\right\rangle
\end{eqnarray}
the two states of (3.16) yield
\begin{eqnarray}
&&\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=2e^{2m_{11}^{(+)}\theta}+1.
\end{eqnarray}
The other six states provide the $3$ zero-sum doublets of (A.4), (A.5), (A.6).
For $r=3$, apart from the $8$ zero-sum triplets of eigenvalues (see appendix A) and the corresponding eigenstates,
one obtains
\begin{eqnarray}
&&V_1=\left|111\right\rangle+\left|1\overline{1}\overline{1}\right\rangle+\left|\overline{1}1\overline{1}\right\rangle
+\left|\overline{1}\overline{1}1\right\rangle\\
&&V_2=\left|\overline{1}\overline{1}\overline{1}\right\rangle+\left|\overline{1}11\right\rangle+
\left|1\overline{1}1\right\rangle+\left|11\overline{1}\right\rangle\\
&&T^{(3)}\left(\theta\right)\left(V_1,V_2\right)=e^{3m_{11}^{(+)}\theta}\left(V_1,V_2\right).
\end{eqnarray}
Along with $\left|222\right\rangle$ these ensure
\begin{eqnarray}
&&\hbox{tr}\left(T^{(3)}\left(\theta\right)\right)=2e^{3m_{11}^{(+)}\theta}+1.
\end{eqnarray}
For $r=4$, the two corresponding combinations are
\begin{eqnarray}
&&V_1=\left|1111\right\rangle+\left|11\overline{1}\overline{1}\right\rangle+\left|1\overline{1}\overline{1}1\right\rangle
+\left|1\overline{1}1\overline{1}\right\rangle+\left|\overline{1}\overline{1}\overline{1}\overline{1}\right\rangle+
\left|\overline{1}\overline{1}11\right\rangle+\left|\overline{1}1\overline{1}1\right\rangle+
\left|\overline{1}11\overline{1}\right\rangle\nonumber\\
&&V_2=\left|111\overline{1}\right\rangle+\left|11\overline{1}1\right\rangle+\left|1\overline{1}11\right\rangle
+\left|\overline{1}111\right\rangle+\left|\overline{1}\overline{1}\overline{1}1\right\rangle+
\left|\overline{1}\overline{1}1\overline{1}\right\rangle+\left|\overline{1}1\overline{1}\overline{1}\right\rangle+
\left|1\overline{1}\overline{1}\overline{1}\right\rangle
\end{eqnarray}
with
\begin{equation}
T^{(4)}\left(\theta\right)\left(V_1,V_2\right)=e^{4m_{11}^{(+)}\theta}\left(V_1,V_2\right).
\end{equation}
Along with $\left|2222\right\rangle$ these ensure
\begin{eqnarray}
&&\hbox{tr}\left(T^{(4)}\left(\theta\right)\right)=2e^{4m_{11}^{(+)}\theta}+1.
\end{eqnarray}
The general pattern is now visible. Indeed the general result is that (with all relative coefficients in the sums below
being unity, as in the examples above)
\begin{eqnarray}
&&V_e=\left(\hbox{sum of states with {\it even} number of }\left|1\right\rangle\right)\nonumber\\
&&V_0=\left(\hbox{sum of states with {\it odd} number of }\left|1\right\rangle\right)
\end{eqnarray}
give
\begin{equation}
T^{(r)}\left(\theta\right)\left(V_e,V_0\right)=e^{rm_{11}^{(+)}\theta}\left(V_e,V_0\right).
\end{equation}
Along with $\left|22\ldots 2\right\rangle$ they ensure
\begin{eqnarray}
&&\hbox{tr}\left(T^{(r)}\left(\theta\right)\right)=2e^{rm_{11}^{(+)}\theta}+1.
\end{eqnarray}
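As a combinatorial aside (our own check, not from the paper): the even and odd parity classes entering $V_e$ and $V_0$ each contain $2^{r-1}$ of the $2^r$ states of $S\left(r,0\right)$, so the two combinations together involve every state of the subspace:

```python
# Sketch (ours): among the 2^r states built from |1>, |1bar> only,
# the even and odd parity classes (counting |1> factors) each contain
# 2^(r-1) states, so V_e and V_0 together cover all of S(r, 0).
from itertools import product

for r in range(1, 7):
    states = list(product((0, 1), repeat=r))      # 1 marks a |1> factor
    even = [s for s in states if sum(s) % 2 == 0]
    odd = [s for s in states if sum(s) % 2 == 1]
    assert len(even) == len(odd) == 2 ** (r - 1)
```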
All the remaining $\left(3^r-3\right)$ states are grouped into subsets giving ``zero-trace'' multiplets of eigenvalues
involving roots of unity.
This is as far as we propose to go in explicitly constructing the eigenstates. We repeat that, as mentioned in appendix
A, we have obtained exhaustively the complete sets of $3,3^2,3^3,3^4$ eigenstates for $r=1,2,3,4$.
Let us now compare our approach with that via algebraic Bethe ansatz (Refs. \cite{3,5,6} provide a
considerable number of references). For ready comparison we recapitulate the essential results for the relatively
simple and well studied case of $6$-vertex models. We follow the notation of ref. \cite{3} for the {\it ferroelectric
regime} in particular. Starting with the $4\times 4$ 6-vertex braid matrix and denoting the $N$-th order transfer
matrix blocks as ($N$, $r$ having {\it different} significance here as compared to our notations)
\begin{equation}
T^{(N)}\left(\theta\right)=\left(\begin{array}{cc}
A\left(\theta\right) & B\left(\theta\right) \\
C\left(\theta\right) & D\left(\theta\right) \\
\end{array}\right)
\end{equation}
with
\begin{equation}
\hbox{tr}\left(T^{(N)}\left(\theta\right)\right)=A\left(\theta\right)+D\left(\theta\right).
\end{equation}
The eigenvalues of this trace are extracted from the ansatz
\begin{equation}
\psi\left(\theta_1,\theta_2,\ldots,\theta_r\right)=B\left(\theta_1\right)B\left(\theta_2\right)
\cdots B\left(\theta_r\right)\left(\left|{1\atop 0}\right\rangle_1\otimes\left|{1\atop 0}\right\rangle_2\otimes\cdots\otimes
\left|{1\atop 0}\right\rangle_N\right).
\end{equation}
For the chosen regime, denoting
\begin{equation}
\lambda_j=i\left(\theta_j+\frac{\gamma}2\right)
\end{equation}
(where $\gamma$ is the single free parameter of $\widehat{R}$) the constraints on the parameters $\left(\theta_1,\ldots,\theta_r\right)$ for (3.31) to give eigenstates reduce (due
to the $Rtt$ algebra) to
\begin{equation}
\left[\frac{\sin\left(\lambda_j+i\gamma/2\right)}{\sin\left(\lambda_j-i\gamma/2\right)}\right]^N
=-\prod_{k=1}^r\frac{\sin\left(\lambda_j-\lambda_k+i\gamma\right)}{\sin\left(\lambda_j-\lambda_k-i\gamma\right)}.
\end{equation}
One has to find the solutions (in general complex) for $\left(\theta_1,\ldots,\theta_r\right)$ from this set of
nonlinear constraints.
For our case the invariance of the subspaces $S\left(r,k\right)$ defined in (2.10) under the action of $T^{(r)}
\left(\theta\right)$ (our $r$ being $N$ of (3.29)) clearly indicates the choice of
\begin{equation}
\left|22\ldots 2\right\rangle=\left|\begin{array}{c}
0 \\
1 \\
0 \\
\end{array}\right\rangle\otimes\left|\begin{array}{c}
0 \\
1 \\
0 \\
\end{array}\right\rangle\otimes\cdots\otimes\left|\begin{array}{c}
0 \\
1 \\
0 \\
\end{array}\right\rangle
\end{equation}
with eigenvalue 1 (for all successive orders $r$) as the starting point. This is the subspace $S\left(r,r\right)$
with only one state.
From (3.1) to (3.28) we have defined and implemented operators which, acting on (3.34), move stepwise through the subspaces
\begin{equation}
S\left(r,r\right),\,S\left(r,r-1\right),\ldots,\,S\left(r,1\right),\,S\left(r,0\right).
\end{equation}
For our class of higher dimensional structures, already for $N=3$, the state-labels $\left(1,2,\overline{1}\right)$
necessitate different types of actions on the index 2. Instead of a single complex $\theta$-dependent matrix
(like $B\left(\theta\right)$ of (3.31)) we have chosen and systematically implemented the operators $\left(U,A,D\right)$
to move through the sequence (3.35). At each step our formalism leads to relatively simple {\it linear} constraints,
the simplest example being (3.6), which gives the complete results for $S\left(r,r-1\right)$. The results (3.26), (3.27) and (3.28)
give the complete trace. The results in appendix A (for $r=
1,2,3,4$ for spaces of $3,9,27,81$ dimensions respectively) give a fair idea of the structure of the eigenvalues.
The fact that one finally solves only sets of linear equations with simple constant coefficients is not evident directly
from (3.13), (3.14) and (3.15) for example. But the fact that one ends up only with eigenvalues of the form $e^{\mu\theta}$
(where, as in appendix A, $\mu$ is a linear function of the parameters $m_{ij}^{(\pm)}$) leads finally to such
constraints. For the $3^r-3$ eigenvalues of zero total trace one searches for multiplets formed by roots of unity and hence
summing to zero. This also is very helpful in constructing eigenstates. The eigenvalues, for each $r$, are known to a
certain extent (though certainly not entirely) beforehand.
\section{Hamiltonians and conserved quantities ($N=3$)}
\setcounter{equation}{0}
We study here the role of our parameters in the sequence of conserved quantities, the first one in the sequence being
the chain Hamiltonian \cite{3,4}. (Many sources are cited in ref. \cite{3}.) Define
\begin{equation}
H_n=\left.\frac{\partial^n}{\partial \theta^n}\ln T^{(r)}\left(\theta\right)\right|_{\theta=0}.
\end{equation}
The commutativity of the transfer matrices $T\left(\theta\right)$, $T\left(\theta'\right)$ implies
\begin{equation}
\left[H_n,H_m\right]=0.
\end{equation}
If $H_1$ is regarded as the Hamiltonian of the system, there is an infinite set of conserved quantities.
Using standard results \cite{3,4} and taking account of our normalization and the regularity, i.e.
\begin{equation}
\widehat{R}\left(0\right)=PR\left(0\right)=I,
\end{equation}
one obtains (since $\left(P\widehat{R}\left(0\right)\right)^{-1}\left(P\partial_{\theta}\widehat{R}
\left(\theta\right)\right)_{\theta=0}=\left(\partial_{\theta}\widehat{R}\left(\theta\right)\right)_{\theta=0}$)
\begin{equation}
H_1=\left(T^{(r)}\left(0\right)\right)^{-1}\left.\frac{\partial}{\partial \theta}T^{(r)}\left(\theta\right)
\right|_{\theta=0}=\sum_{k=1}^r I\otimes\cdots\otimes\dot{\widehat{R}}\left(0\right)_{k,k+1}\otimes\cdots\otimes I.
\end{equation}
Note that due to the trace (circular boundary condition) $k+1=r+1\equiv 1$. Indeed starting with $r=2$, evaluating
directly and explicitly
\begin{equation}
\left(T^{(2)}\left(0\right)\right)^{-1}\left.\frac{\partial}{\partial\theta}T^{(2)}\left(\theta\right)\right|_{\theta=0}
\end{equation}
and setting
\begin{equation}
x_{\pm}=\frac 12\left(m_{11}^{(+)}\pm m_{11}^{(-)}\right),\qquad
y_{\pm}=\frac 12\left(m_{12}^{(+)}\pm m_{12}^{(-)}\right),\qquad
z_{\pm}=\frac 12\left(m_{21}^{(+)}\pm m_{21}^{(-)}\right),
\end{equation}
one obtains (writing $H$ for $H_1$ when $r=2$)
\begin{eqnarray}
&&H=\left(\begin{array}{ccccccccc}
2x_+ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2x_- \\
0 & y_++z_+ & 0 & 0 & 0 & 0 & 0 & y_-+z_- & 0 \\
0 & 0 & 2x_+ & 0 & 0 & 0 & 2x_- & 0 & 0 \\
0 & 0 & 0 & y_++z_+ & 0 & y_-+z_- & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & y_-+z_- & 0 & y_++z_+ & 0 & 0 & 0 \\
0 & 0 & 2x_- & 0 & 0 & 0 & 2x_+ & 0 & 0 \\
0 & y_-+z_- & 0 & 0 & 0 & 0 & 0 & y_++z_+ & 0 \\
2x_- & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2x_+ \\
\end{array}\right)\nonumber\\
&&\phantom{H}=\dot{\widehat{R}}\left(0\right)+P\dot{\widehat{R}}\left(0\right)P\nonumber\\
&&\phantom{H}=\dot{\widehat{R}}\left(0\right)_{12}+\dot{\widehat{R}}\left(0\right)_{21},
\end{eqnarray}
where, in evident notations,
\begin{eqnarray}
&&\dot{\widehat{R}}\left(0\right)_{12}=\left(x_+,\,y_+,\,x_+,\,z_+,\,0,\,z_+,\,x_+,\,y_+,\,x_+\right)_{\hbox{diag.}}+
\nonumber\\&&\phantom{\dot{\widehat{R}}\left(0\right)_{12}=}\left(x_-,\,y_-,\,x_-,\,z_-,\,0,\,z_-,\,x_-,\,y_-,
\,x_-\right)_{\hbox{anti-diag.}}
\end{eqnarray}
and
\begin{eqnarray}
&&\dot{\widehat{R}}\left(0\right)_{21}=\left(x_+,\,z_+,\,x_+,\,y_+,\,0,\,y_+,\,x_+,\,z_+,\,x_+\right)_{\hbox{diag.}}+
\nonumber\\&&\phantom{\dot{\widehat{R}}\left(0\right)_{12}=}\left(x_-,\,z_-,\,x_-,\,y_-,\,0,\,y_-,\,x_-,\,z_-,
\,x_-\right)_{\hbox{anti-diag.}}.
\end{eqnarray}
The expressions for (12) and (21) are related through the interchanges
\begin{equation}
\left(y_{\pm},z_{\pm}\right)\rightarrow \left(z_{\pm},y_{\pm}\right).
\end{equation}
The appearance of (21) in (4.7) is consistent with the remark below (4.4).
For higher derivatives one has
\begin{eqnarray}
&&\left.\frac{d^l}{d\theta^l}\widehat{R}\left(\theta\right)\right|_{\theta=0}=
\left(x_+^l,\,y_+^l,\,x_+^l,\,z_+^l,\,0,\,z_+^l,\,x_+^l,\,y_+^l,\,x_+^l\right)_{\hbox{diag.}}+\nonumber\\
&&\phantom{\left.\frac{d^l}{d\theta^l}\widehat{R}\left(\theta\right)\right|_{\theta=0}=}
\left(x_-^l,\,y_-^l,\,x_-^l,\,z_-^l,\,0,\,z_-^l,\,x_-^l,\,y_-^l,\,x_-^l\right)_{\hbox{anti-diag.}}.
\end{eqnarray}
For $H_2$ one now obtains, as compared to (4.4),
\begin{eqnarray}
&&H_2=\sum_{j\neq k} I\otimes\cdots\otimes I\otimes
\dot{\widehat{R}}\left(0\right)_{j,j+1}\otimes I\otimes\cdots\otimes I\otimes
\dot{\widehat{R}}\left(0\right)_{k,k+1}\otimes I\otimes\cdots\otimes I+\nonumber\\
&&\phantom{H_2=}\sum_{k}I\otimes\cdots\otimes I\otimes
\ddot{\widehat{R}}\left(0\right)_{k,k+1}\otimes I\otimes\cdots\otimes I.
\end{eqnarray}
Generalizations to higher orders are carried out in evident fashion.
In section 5 of ref. \cite{1}, in constructing $\theta$-expansions, the $H$ defined in (5.1) is {\it precisely}
$\dot{\widehat{R}}\left(0\right)$ of (4.8) above, generalized to all odd $N$, namely $N=3,\,5,\,7,\ldots$. There it was
noted (eq. (5.9) of ref. \cite{1}),
\begin{equation}
\left[H_{12}+H_{23},\left[H_{12},H_{23}\right]\right]=0,
\end{equation}
where $H_{12}=H\otimes I$ and $H_{23}=I\otimes H$. This vanishing double commutator is the simplest version of the
Reshetikhin condition given in eq. (3.20) of ref. \cite{2} as
\begin{equation}
\left[H_{12}+H_{23},\left[H_{12},H_{23}\right]\right]=X_{12}-X_{23},
\end{equation}
the r.h.s. being the difference of two two-point quantities. In (4.13) the r.h.s. is simply zero.
\section{Potential for factorizable $S$-matrices and Cayley transforms ($N=3$)}
\setcounter{equation}{0}
Potentials for scattering of bosons or fermions with quadratic interaction terms (sec. 3 of ref. \cite{2} and sec. 1 of ref. \cite{3}
provide more references) can correspond to factorizable $S$-matrices (factorizable into two-particle scatterings,
independently of the chosen order of the latter ones) provided that such potentials are inverse Cayley
transforms of Yang-Baxter matrices of appropriate dimensions, i.e. $V$ being the potential (for a chosen helicity fixing
the sign of $\theta$)
\begin{equation}
-iV=\left(R\left(\theta\right)-\lambda\left(\theta\right)I\right)^{-1}
\left(R\left(\theta\right)+\lambda\left(\theta\right)I\right).
\end{equation}
As compared to refs. \cite{2,3} we display explicitly a free normalization factor
\begin{equation}
\left(\lambda\left(\theta\right)\right)^{-1}R\left(\theta\right).
\end{equation}
Our multiparametric case shows clearly that though the normalization (if well-defined) trivially cancels in the YB or
the braid equation, it must be compatible with the existence of the inverse of
\begin{equation}
\left(\lambda^{-1}\left(\theta\right)R\left(\theta\right)-I\right).
\end{equation}
We will find that
\begin{equation}
\lambda\left(\theta\right)\neq
\left(1,e^{m_{11}^{(\pm)}\theta},\pm e^{\frac 12(
m_{12}^{(\pm)}+m_{21}^{(\pm)})\theta}\right).
\end{equation}
The inverse (when $\widehat{R}\left(\theta\right)$ is given by (1.10))
\begin{equation}
\left(\widehat{R}\left(\theta\right)-\lambda'\left(\theta\right)I\right)^{-1}
\end{equation}
can be shown to exist for
\begin{equation}
\lambda'\left(\theta\right)\neq
\left(1,e^{m_{11}^{(\pm)}\theta},e^{m_{12}^{(\pm)}\theta},e^{m_{21}^{(\pm)}\theta}\right).
\end{equation}
The significance of (5.6) is simple: the r.h.s. exhibits the coefficients of the projectors
\begin{equation}
\left(P_{22},P_{11}^{(\pm)},P_{12}^{(\pm)},P_{21}^{(\pm)}\right)
\end{equation}
in (1.10).
Diagonalizing $\widehat{R}\left(\theta\right)$ the situation becomes particularly transparent (eqs. (3.4), (3.5) of
ref. \cite{1}). $M$ being given by (3.5) of ref. \cite{1},
\begin{eqnarray}
&&M\left(\widehat{R}\left(\theta\right)-\lambda'\left(\theta\right)I\right)M^{-1}=\nonumber\\
&&\left(e^{m_{11}^{(+)}\theta},e^{m_{12}^{(+)}\theta},e^{m_{11}^{(+)}\theta},
e^{m_{21}^{(+)}\theta},1,e^{m_{21}^{(-)}\theta},e^{m_{11}^{(-)}\theta},e^{m_{12}^{(-)}\theta},
e^{m_{11}^{(-)}\theta}\right)_{\hbox{diag.}}-\lambda'\left(\theta\right)I.
\end{eqnarray}
When $\lambda'\left(\theta\right)$ is equal to any one of the eigenvalues (including 1) the determinant of
$\left(\widehat{R}\left(\theta\right)-\right.$ $\left.\lambda'\left(\theta\right)I\right)$ vanishes. Hence (5.6).
For (5.1) one requires invertibility of
\begin{equation}
P\widehat{R}\left(\theta\right)-\lambda\left(\theta\right)I.
\end{equation}
The action of $P$ finally leads to (5.4) rather than (5.6).
Defining $X$ through
\begin{equation}
\left(R\left(\theta\right)-\lambda\left(\theta\right)I\right)X=I,
\end{equation}
we present below the explicit form of $X$ for our $N=3$ case.
\begin{equation}
X=\left(\begin{array}{ccccccccc}
x_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x_8 \\
0 & x_2 & 0 & x_6 & 0 & x_7 & 0 & x_4 & 0 \\
0 & 0 & x_3 & 0 & 0 & 0 & x_9 & 0 & 0 \\
0 & x_{10} & 0 & x_2 & 0 & x_4 & 0 & x_{11} & 0 \\
0 & 0 & 0 & 0 & x_5 & 0 & 0 & 0 & 0 \\
0 & x_{11} & 0 & x_4 & 0 & x_2 & 0 & x_{10} & 0 \\
0 & 0 & x_9 & 0 & 0 & 0 & x_3 & 0 & 0 \\
0 & x_4 & 0 & x_7 & 0 & x_6 & 0 & x_2 & 0 \\
x_8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x_1 \\
\end{array}\right),
\end{equation}
where (writing $\lambda$ for $\lambda\left(\theta\right)$)
\begin{eqnarray}
&&x_{1}=\frac 12\left(\frac
1{e^{m_{11}^{(+)}\theta}-\lambda}+\frac
1{e^{m_{11}^{(-)}\theta}-\lambda}\right),\qquad x_8=\frac 12\left(\frac 1{e^{m_{11}^{(+)}\theta}-\lambda}-\frac
1{e^{m_{11}^{(-)}\theta}-\lambda}\right),\nonumber\\
&&x_2=\frac \lambda 2\left(\frac
1{e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}-\lambda^2}+\frac
1{e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}-\lambda^2}\right),\nonumber\\
&&x_4=\frac \lambda 2\left(\frac
1{e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}-\lambda^2}-\frac
1{e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}-\lambda^2}\right),\nonumber\\
&&x_{3}=\frac 12\left(\frac
1{e^{m_{11}^{(+)}\theta}-\lambda}-\frac
1{e^{m_{11}^{(-)}\theta}+\lambda}\right),\qquad x_9=\frac 12\left(\frac 1{e^{m_{11}^{(+)}\theta}-\lambda}+\frac
1{e^{m_{11}^{(-)}\theta}+\lambda}\right),\nonumber\\
&&x_5=\frac 1{1-\lambda},\nonumber\\
&&x_6=\frac 12\left(\frac{e^{m_{21}^{(+)}\theta}}{e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}-\lambda^2}+\frac
{e^{m_{21}^{(-)}\theta}}{e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}-\lambda^2}\right),\nonumber\\
&&x_7=\frac 12\left(\frac
{e^{m_{21}^{(+)}\theta}}{e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}-\lambda^2}-\frac
{e^{m_{21}^{(-)}\theta}}{e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}-\lambda^2}\right),\nonumber\\
&&x_{10}=\frac 12\left(\frac{e^{m_{12}^{(+)}\theta}}{e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}-\lambda^2}+\frac
{e^{m_{12}^{(-)}\theta}}{e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}-\lambda^2}\right),\nonumber\\
&&x_{11}=\frac 12\left(\frac{e^{m_{12}^{(+)}\theta}}{e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}-\lambda^2}-\frac
{e^{m_{12}^{(-)}\theta}}{e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}-\lambda^2}\right).
\end{eqnarray}
Now, from (5.1), (5.10) and (5.11),
\begin{eqnarray}
&&-iV=\left(R\left(\theta\right)-\lambda\left(\theta\right)I\right)^{-1}
\left(R\left(\theta\right)+\lambda\left(\theta\right)I\right)\nonumber\\
&&\phantom{-iV}=\left(R\left(\theta\right)-\lambda\left(\theta\right)I\right)^{-1}
\left(R\left(\theta\right)-\lambda\left(\theta\right)I+2\lambda\left(\theta\right)I\right)\nonumber\\
&&\phantom{-iV}=X\left(X^{-1}+2\lambda\left(\theta\right)I\right)\nonumber\\
&&-iV=I+2\lambda\left(\theta\right)X.
\end{eqnarray}
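The chain of identities above uses only matrix algebra, valid for any $R$ and any $\lambda$ away from its spectrum; a sketch (our own, with a random matrix standing in for $R\left(\theta\right)$) confirms $-iV=I+2\lambda X$:

```python
# Sketch (ours): the step -iV = (R - lam I)^{-1}(R + lam I) = I + 2 lam X,
# with X = (R - lam I)^{-1} as in (5.10), is a generic matrix identity.
# A random 9x9 matrix stands in for R(theta); lam is chosen well away
# from its spectrum so that X exists, mirroring condition (5.4).
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((9, 9))
lam = 10.0                                  # safely outside the spectrum here
I = np.eye(9)

X = np.linalg.inv(R - lam * I)              # the X of (5.10)
minus_iV = X @ (R + lam * I)
assert np.allclose(minus_iV, I + 2 * lam * X)
```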
From (5.12) it is evident that $X$ is well-defined only when (5.4) is satisfied. We have thus obtained explicitly,
for $N=3$, the potential leading to a factorizable $S$-matrix. The role played by our parameters is now displayed.
Note that any $\lambda\left(\theta\right)$ satisfying (5.4) can be implemented. One may choose to display this
dependence on $\lambda$ by denoting the potential as $V\left(\lambda\right)$. With our $V\left(\lambda\right)$ one now
considers the fermionic lagrangian
\begin{equation}
{\cal L}=\int dx\left(i\overline{\psi}_a\gamma_\nu\partial_\nu\psi_a-g\left(\overline{\psi}_a\gamma_\nu\psi_c\right)
V_{ab,cd}\left(\overline{\psi}_b\gamma_\nu\psi_d\right)\right),
\end{equation}
where
\begin{equation}
V=\sum_{ab,cd}\left(V_{ab,cd}\right)\left(ab\right)\otimes\left(cd\right).
\end{equation}
There is an analogous, simpler formulation for bosons. We will not further analyze the consequences of our $V$. But it
should be compared to the detailed studies of the solutions obtained in refs. \cite{7,8}.
\section{$N>3$}
\setcounter{equation}{0}
So far we have studied the case $N=3$ in detail. Now we indicate
briefly the crucial new features arising for $N>3$. Many aspects
carry over as well, as will be pointed out.
The first major feature is the generalization of (2.1). For $N=2p-1$, one obtains
\begin{equation}
\hbox{tr}\left(T^{(r)}\left(\theta\right)\right)=2\left(e^{rm_{11}^{(+)}\theta}+e^{rm_{22}^{(+)}\theta}+\cdots+
e^{rm_{p-1,p-1}^{(+)}\theta}\right)+1,\qquad p=2,\,3,\,4,\ldots
\end{equation}
There are
\begin{equation}
2\left(p-1\right)+1=2p-1=N
\end{equation}
terms. An explanation, promised below (2.1), is as follows. {\it Only the diagonal blocks $t_{ii}\left(\theta\right)$ have
diagonal terms}. Thus in (2.5) only $t_{11}\left(\theta\right)$, $t_{22}\left(\theta\right)$, $t_{\overline{1}
\overline{1}}\left(\theta\right)$ have nonzero elements on the diagonal, their sum being
\begin{equation}
\hbox{tr}\left(T^{(1)}\left(\theta\right)\right)=2\left(a_++a_-\right)+1=2 e^{m_{11}^{(+)}\theta}+1,\qquad (r=1)
\end{equation}
For $r=2$ (and $N=3$) one obtains from the coproduct structure
\begin{eqnarray}
&&\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=\hbox{tr}\left(\left(a_+a_+,0,a_+a_-\right)_{\hbox{diag.}}
+\left(0,1,0\right)_{\hbox{diag.}}+\left(a_-a_-,0,a_-a_+\right)_{\hbox{diag.}}+\right.\nonumber\\
&&\phantom{\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=}\left.\left(a_-a_+,0,a_-a_-\right)_{\hbox{diag.}}
+\left(a_+a_-,0,a_+a_+\right)_{\hbox{diag.}}\right)+\nonumber\\
&&\phantom{\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=}\hbox{tr}\left(\hbox{blocks with nondiagonal terms only}\right)
\nonumber\\
&&\phantom{\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)}=2\left(a_++a_-\right)^2+1\nonumber\\
&&\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=2e^{2m_{11}^{(+)}\theta}+1
\end{eqnarray}
and so on.
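The computation (6.4) rests only on the identity $a_++a_-=e^{m_{11}^{(+)}\theta}$ for $a_{\pm}=\frac 12\left(e^{m_{11}^{(+)}\theta}\pm e^{m_{11}^{(-)}\theta}\right)$; a numerical sketch (our own, with arbitrary sample parameters) checks $2\left(a_++a_-\right)^r+1=2e^{rm_{11}^{(+)}\theta}+1$:

```python
# Sketch (ours, sample parameters): with a_pm as defined in the text,
# a_+ + a_- = exp(m+ theta), so the coproduct computation (6.4) gives
# tr T^(r) = 2 (a_+ + a_-)^r + 1 = 2 exp(r m+ theta) + 1, as in (6.1).
from math import exp, isclose

theta, m_plus, m_minus = 0.3, 1.1, -0.6     # arbitrary sample values
a_p = 0.5 * (exp(m_plus * theta) + exp(m_minus * theta))
a_m = 0.5 * (exp(m_plus * theta) - exp(m_minus * theta))

for r in range(1, 6):
    assert isclose(2 * (a_p + a_m) ** r + 1, 2 * exp(r * m_plus * theta) + 1)
```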
For $N>3$ the basic features persist. Along with the crucial constraint (1.8)
\begin{equation}
m_{ij}^{(\epsilon)}=m_{i\overline{j}}^{(\epsilon)},\qquad (\overline{j}=2p-j)
\end{equation}
which symmetrizes the blocks on the diagonal (generalizing (1.10)), the final result is
\begin{eqnarray}
&&\hbox{tr}\left(T^{(r)}\left(\theta\right)\right)=2\sum_{i=1}^{p-1}\left(a_{ii}^{(+)}+a_{ii}^{(-)}\right)^r+1\\
&&\phantom{\hbox{tr}\left(T^{(r)}\left(\theta\right)\right)}=2\sum_{i=1}^{p-1}e^{rm_{ii}^{(+)}\theta}+1.
\end{eqnarray}
The result is obtained directly by looking closely at the structure of the matrices concerned, {\it without constructing
eigenstates and their eigenvalues}. We have checked (6.7) directly and explicitly for arbitrary $p$ (appendix A).
But this result has a profound consequence for the spectrum of the eigenvalues for each $r$. Given $\left(N,r\right)$ and
the coproduct rule,
\begin{enumerate}
\item the number of eigenvalues of $T^{(r)}\left(\theta\right)$ is $N^r$,
\item the number of eigenvalues contributing to the trace is $N$,
\item the remaining $\left(N^r-N\right)$ eigenvalues must sum to give zero contribution to the trace.
\end{enumerate}
For $N=3$ we have shown (appendix A) how this constraint is satisfied via the multiplet structures
\begin{eqnarray}
&&e^{\mu\theta}\left(1,\omega_{(l)},\omega_{(l)}^2,\ldots,\omega_{(l)}^{l-1}\right),\qquad \omega_{(l)}=e^{\frac{2\pi i}l}\nonumber\\
&&1+\omega_{(l)}+\omega_{(l)}^2+\cdots+\omega_{(l)}^{l-1}=0,
\end{eqnarray}
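A quick numerical illustration (our own sketch, not part of the original argument) that each such multiplet sums to zero, so that it drops out of the trace; the value of $\mu\theta$ below is an arbitrary sample:

```python
import cmath

def multiplet(mu_theta, l):
    """The multiplet e^{mu*theta} * (1, w, w^2, ..., w^{l-1}) with w = exp(2*pi*i/l)."""
    w = cmath.exp(2j * cmath.pi / l)
    return [cmath.exp(mu_theta) * w ** k for k in range(l)]

# For every l >= 2 the sum 1 + w + ... + w^{l-1} vanishes, hence so does the multiplet sum.
for l in (2, 3, 4, 5, 6):
    assert abs(sum(multiplet(0.83, l))) < 1e-12
```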
where $\mu$ is linear in $m_{ij}^{(\pm)}$ and for
\begin{eqnarray}
&&r=2,\qquad l=2\nonumber\\
&&r=3,\qquad l=3\nonumber\\
&&r=4,\qquad l=2,\,4
\end{eqnarray}
and so on.
This multiplet structure involving roots of unity can be shown to carry over explicitly to $N>3$ as well. Apart
from the fact that the number of eigenstates, and the number of states in the linear combinations giving eigenstates,
increase very fast, there are no other basic difficulties.
For $N=5$, for example, generalizing the basis (2.7), for $p=3$, to
\begin{equation}
\left(\left|\begin{array}{c}
1 \\
0 \\
0 \\
0 \\
0 \\
\end{array}\right\rangle,\left|\begin{array}{c}
0 \\
1 \\
0 \\
0 \\
0 \\
\end{array}\right\rangle,\left|\begin{array}{c}
0 \\
0 \\
1 \\
0 \\
0 \\
\end{array}\right\rangle,\left|\begin{array}{c}
0 \\
0 \\
0 \\
1 \\
0 \\
\end{array}\right\rangle,\left|{\begin{array}{c}
0 \\
0 \\
0 \\
0 \\
1 \\
\end{array}}\right\rangle\right)\equiv\left(\left|1\right\rangle,\left|2\right\rangle,\left|3\right\rangle,
\left|\overline{2}\right\rangle,\left|\overline{1}\right\rangle\right)
\end{equation}
one again obtains subspaces stable under the action of $T^{(r)}\left(\theta\right)$
\begin{equation}
S\left(r,k\right)\qquad \left(k=0,1,\ldots,r\right),
\end{equation}
where $k$ is now the multiplicity of the index 3. The operator structures (3.1), (3.2), (3.3) are now generalized, in
terms of operators $\left(t_{31}\left(\theta\right),t_{3\overline{1}}\left(\theta\right)\right)$, $\left(t_{32}
\left(\theta\right),t_{3\overline{2}}\left(\theta\right)\right)$, $t_{33}\left(\theta\right)$ to construct
\begin{eqnarray}
&&U_1\left|3\right\rangle=\left|1\right\rangle,\qquad U_2\left|3\right\rangle=\left|2\right\rangle,\nonumber\\
&&D_1\left|3\right\rangle=\left|\overline{1}\right\rangle,\qquad D_2\left|3\right\rangle=\left|\overline{2}\right
\rangle,\nonumber\\
&&A\left|3\right\rangle=\left|3\right\rangle.
\end{eqnarray}
The subspace $S\left(r,r\right)$ is still given by a single state
\begin{equation}
T^{(r)}\left(\theta\right)\left|33\ldots 3\right\rangle=\left|33\ldots 3\right\rangle.
\end{equation}
The subspace $S\left(r,r-1\right)$ is now spanned by $4r$ (instead of $2r$ for $N=3$) states and is easily diagonalized.
The stepwise generalization for $N=7,\,9,\ldots$ is now fairly evident. The dimension of $S\left(r,k\right)$ is given,
generalizing (2.11), by the successive coefficients in
\begin{equation}
\left(x+2\left(p-1\right)\right)^r=1\cdot x^r+2r\left(p-1\right)x^{r-1}+\cdots+
\left(2\left(p-1\right)\right)^{r-k}\binom{r}{r-k}x^k+\cdots+\left(2\left(p-1\right)\right)^{r}.
\end{equation}
For $x=1$, one gets the total dimension
\begin{equation}
\left(2p-1\right)^r=N^{r}.
\end{equation}
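The dimension count can be checked directly; the following sketch (our own, with $p$ and $r$ as above) lists the coefficients of $\left(x+2(p-1)\right)^r$ and recovers the total $N^r$:

```python
from math import comb

def dims(r, p):
    """Dimensions of the subspaces S(r,k), k = 0..r:
    coefficient of x**k in (x + 2(p-1))**r, i.e. C(r, r-k) * (2(p-1))**(r-k)."""
    return [comb(r, r - k) * (2 * (p - 1)) ** (r - k) for k in range(r + 1)]

# Setting x = 1 must give the full dimension N**r with N = 2p - 1.
for r in range(1, 7):
    for p in (2, 3, 4):          # N = 3, 5, 7
        assert sum(dims(r, p)) == (2 * p - 1) ** r
```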
The generalization of (3.26) is also fairly direct. When the order $r$ of $T^{(r)}\left(\theta\right)$ is a prime number
there is an amusing encounter with a theorem of Fermat in considering our multiplet structures. This is discussed in
appendix B.
The generalization of the structure of the Hamiltonian of sec. 4 involving $\dot{\widehat{R}}\left(0\right)$ of (4.8)
is particularly straightforward. But even for $N=5$ there are 24 non-zero terms on the diagonal and as many on the
antidiagonal.
The potentials (5.1) for $N=5,\,7,\ldots$ now involve inversion of the $N^2$-dimensional matrices $\left(R\left(\theta\right)
-\lambda\left(\theta\right)I\right)$. This is again straightforward, given our specific structure of $\widehat{R}
\left(\theta\right)$, but evidently lengthy.
Apart from such general indications as presented above, systematic studies of the cases $N>3$ are beyond the scope of this
work. In particular, possible substructures in each subspace $S\left(r,k\right)$, corresponding to multiplicities of
different indices (say $(1,\overline{1})$ and $(2,\overline{2})$ for $N=5$), should be formulated with care.
\section{Generalization of the nested sequence of projectors}
\setcounter{equation}{0}
The sequence of projectors (1.3), forming a complete orthonormal basis, admits the more general parametrization
displayed below (for odd $N$):
\begin{eqnarray}
&& P_{pp}=(pp)\otimes (pp),\nonumber\\
&&\left(u_{pi}+u_{pi}^{-1}\right)P_{pi\left(\pm\right)}=(pp)\otimes\left[u_{pi}^{\pm 1}(ii)+
u_{pi}^{\mp 1}(\overline{i}\overline{i})\pm \left(v_{pi}(i\overline{i})+
v_{pi}^{-1}(\overline{i}i)\right)\right],\nonumber\\
&&\left(u_{ip}+u_{ip}^{-1}\right)P_{ip\left(\pm\right)}=\left[u_{ip}^{\pm 1}(ii)+
u_{ip}^{\mp 1}(\overline{i}\overline{i})\pm \left(v_{ip}(i\overline{i})+
v_{ip}^{-1}(\overline{i}i)\right)\right]\otimes(pp),\\
&&\left(u_{ij}+u_{ij}^{-1}\right)P_{ij\left(\pm\right)}=u_{ij}^{\pm 1}(ii)\otimes(jj)+
u_{ij}^{\mp 1}(\overline{i}\overline{i})\otimes(\overline{j}\overline{j})
\pm\left[v_{ij}(i\overline{i})\otimes(j\overline{j})+v_{ij}^{-1}(\overline{i}i)\otimes(\overline{j}j)\right],\nonumber\\
&&\left(u_{i\overline{j}}+u_{i\overline{j}}^{-1}\right)P_{i\overline{j}\left(\pm\right)}=u_{i\overline{j}}^{\pm 1}
(ii)\otimes(\overline{j}\overline{j})+u_{i\overline{j}}^{\mp 1}(\overline{i}\overline{i})\otimes(jj)
\pm\left[v_{i\overline{j}}(i\overline{i})\otimes(\overline{j}j)+v_{i\overline{j}}^{-1}(\overline{i}i)\otimes
(j\overline{j})\right],\nonumber
\end{eqnarray}
where the supplementary parameters introduced are compatible with the orthonormality and completeness conditions (1.5).
For even $N$ an analogous parametrization can also be introduced. Thus the $6$-vertex and $8$-vertex projector bases (given
in (6.1) of ref. 1) can be generalized to
\begin{eqnarray}
&&\left(u_{11}+u_{11}^{-1}\right)P_{11\left(\pm\right)}=
\left(\begin{array}{cccc}
u_{11}^{\pm 1} & 0 & 0 & \pm v_{11} \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\pm v_{11}^{-1} & 0 & 0 & u_{11}^{\mp 1} \\
\end{array}\right)\nonumber\\
&&\left(u_{1\overline{1}}+u_{1\overline{1}}^{-1}\right)P_{1\overline{1}\left(\pm\right)}=
\left(\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & u_{1\overline{1}}^{\pm 1} & \pm v_{1\overline{1}} & 0 \\
0 & \pm v_{1\overline{1}}^{-1} & u_{1\overline{1}}^{\mp 1} & 0 \\
0 & 0 & 0 & 0 \\
\end{array}\right)
\end{eqnarray}
Braid matrices on such bases of projectors and associated statistical models can be studied systematically. Such a study
will be presented elsewhere with suitable restrictions on parameters for specific solutions. Setting $u_{ab}=v_{ab}=1$ for
all values of the indices one recovers the projectors of (1.3).
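As an illustration (our own numerical sketch, with arbitrary sample values for the parameters $u_{11},v_{11}$), one can verify that the generalized $P_{11\left(\pm\right)}$ above remain orthogonal idempotents whose sum is the identity on their $2$-dimensional block:

```python
import numpy as np

u, v = 1.7, 0.6                       # arbitrary nonzero sample parameters (our choice)
c = u + 1 / u

Pp = np.zeros((4, 4)); Pm = np.zeros((4, 4))
# Corner entries as in the displayed 4x4 matrices for P_{11(+)} and P_{11(-)}.
Pp[0, 0], Pp[0, 3], Pp[3, 0], Pp[3, 3] = u, v, 1 / v, 1 / u
Pm[0, 0], Pm[0, 3], Pm[3, 0], Pm[3, 3] = 1 / u, -v, -1 / v, u
Pp /= c; Pm /= c

assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)    # idempotent
assert np.allclose(Pp @ Pm, 0) and np.allclose(Pm @ Pp, 0)      # mutually orthogonal
assert np.allclose(Pp + Pm, np.diag([1, 0, 0, 1]))              # complete on the block
```

Setting $u=v=1$ reproduces the symmetric corner projectors of (1.3), consistent with the remark above.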
\section{Discussion}
\setcounter{equation}{0}
Our results presented above are limited to a formal study of the transfer matrix and to the construction of chain Hamiltonians
and of potentials corresponding to factorizable $S$-matrices. An adequate study of the consequences of the features obtained,
and of their deeper significance, remains to be done. The role of our parameters should be analyzed in various domains for
comparison with corresponding features of well-known statistical models \cite{3,9,10,11}.
Certain features are, of course, immediately available for our case. Thus the free energy (defined with the opposite
sign in ref. \cite{10}), is given by the maximum eigenvalue $\left(\theta>0\right)$ as
\begin{equation}
f=-\lim_{r\rightarrow \infty}\frac 1r\ln e^{rm_{11}^{(+)}\theta}=-m_{11}^{(+)}\theta
\end{equation}
if we choose, say, the order
\begin{equation}
m_{11}^{(+)}>m_{22}^{(+)}>\cdots > m_{p-1,p-1}^{(+)}.
\end{equation}
In our case results depend on the sector of the parameters selected (their ordering). The second largest eigenvalue,
also of particular interest, is again directly obtained once the ordering is fixed.
Correlation functions are of major interest and a domain of intense activity \cite{12,13}. Here our model can have quite
interesting consequences. This aspect also remains to be explored.
We intend to continue our study elsewhere. But we consider the series of remarkable features presented here to be
sufficiently rich in content. They open up a significantly different domain, as compared to standard, well known cases.
\vskip 0.5cm
\noindent{\bf Acknowledgments:} {\em It is a pleasure to thank Jean Lascoux and Alain Lascoux for discussions concerning
the theme of appendix B. One of us (BA) wants to thank Patrick Mora for a kind invitation to Ecole Polytechnique. He is
also very grateful to the members of the group for their warm hospitality. This work is supported by a grant of
``La Fondation Charles de Gaulle''.}
\newpage
\begin{appendix}
\section{\LARGE Eigenvalues of $T^{(r)}\left(\theta\right)$ for $N=3$, $r=1,2,3,4$ and direct
construction of trace for $N>3$}
\setcounter{equation}{0}
For each $r$, the eigenvalues of $T^{(r)}\left(\theta\right)$ are given systematically for the subspaces
$S\left(r,k\right)$ defined in (2.10), for $k=0,1,\ldots,r$.
\paragraph{$\underline{r=1\,(\dim\,3)}$:}
\begin{eqnarray}
&&\phantom{S\left(1,0\right)\,\left(\dim\,2\right):\qquad}\hbox{eigenvalues}\nonumber\\
&&S\left(1,0\right)\,\left(\dim\,2\right):\qquad e^{m_{11}^{(+)}\theta}\left(1,1\right)\\
&&S\left(1,1\right)\,\left(\dim\,1\right):\qquad 1\\
&&\underline{\hbox{tr}\left(T^{(1)}\left(\theta\right)\right)=2e^{m_{11}^{(+)}\theta}+1}.
\end{eqnarray}
\paragraph{$\underline{r=2\, (\dim\, 9)}$:}
\begin{eqnarray}
&&S\left(2,0\right)\,\left(\dim\,4\right):\qquad e^{2m_{11}^{(+)}\theta}\left(1,1\right)\\
&&\phantom{S\left(2,0\right)\,\left(\dim\,4\right):\qquad} e^{2m_{11}^{(-)}\theta}\left(1,-1\right)\nonumber\\
&&S\left(2,1\right)\,\left(\dim\,4\right):\qquad e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta}
\left(1,-1\right)\\
&&\phantom{S\left(2,0\right)\,\left(\dim\,4\right):\qquad} e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta}
\left(1,-1\right)\nonumber\\
&&S\left(2,2\right)\,\left(\dim\,1\right):\qquad 1\\
&&\underline{\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=2e^{2m_{11}^{(+)}\theta}+1}.
\end{eqnarray}
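The $r=2$ list can be checked numerically: with arbitrary sample values of the parameters $m_{ij}^{(\pm)}$ (our choice below), the nine listed eigenvalues sum to $2e^{2m_{11}^{(+)}\theta}+1$, the doublets $(1,-1)$ cancelling pairwise:

```python
import math
import random

random.seed(0)
theta = 0.31
m = {k: random.uniform(-1, 1) for k in ("11+", "11-", "12+", "12-", "21+", "21-")}
E = lambda x: math.exp(x * theta)

eig = ([E(2 * m["11+"])] * 2                                   # S(2,0): (1,1) doublet
       + [E(2 * m["11-"]), -E(2 * m["11-"])]                   # S(2,0): (1,-1) doublet
       + [E(m["12+"] + m["21+"]), -E(m["12+"] + m["21+"])]     # S(2,1): (1,-1)
       + [E(m["12-"] + m["21-"]), -E(m["12-"] + m["21-"])]     # S(2,1): (1,-1)
       + [1.0])                                                # S(2,2)

assert len(eig) == 3 ** 2
assert abs(sum(eig) - (2 * E(2 * m["11+"]) + 1)) < 1e-12
```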
\paragraph{$\underline{r=3\,(\dim\, 27)}$:}
\begin{eqnarray}
&&S\left(3,0\right)\,\left(\dim\,8\right):\qquad e^{3m_{11}^{(+)}\theta}\left(1,1\right)\nonumber\\
&&\phantom{S\left(3,0\right)\,\left(\dim\,8\right):\qquad} e^{(m_{11}^{(+)}+2m_{11}^{(-)})\theta}
\left(1,e^{\frac{2\pi i}3},e^{\frac{2\pi i}3\cdot 2}\right) \qquad \hbox{[2 times]}\\
&&S\left(3,1\right)\,\left(\dim\,12\right):\qquad \left(\begin{array}{c}
e^{(m_{11}^{(+)}+m_{12}^{(+)}+m_{21}^{(+)})\theta}\\
e^{(m_{11}^{(+)}+m_{12}^{(-)}+m_{21}^{(-)})\theta} \\
e^{(m_{11}^{(-)}+m_{12}^{(+)}+m_{21}^{(-)})\theta} \\
e^{(m_{11}^{(-)}+m_{12}^{(-)}+m_{21}^{(+)})\theta} \\
\end{array}\right)\left(1,e^{\frac{2\pi i}3},e^{\frac{2\pi i}3\cdot 2}\right)\\
&&S\left(3,2\right)\,\left(\dim\,6\right):\qquad\left(\begin{array}{c}
e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta} \\
e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta} \\
\end{array}\right)\left(1,e^{\frac{2\pi i}3},e^{\frac{2\pi i}3\cdot 2}\right)\\
&&S\left(3,3\right)\,\left(\dim\,1\right):\qquad 1\\
&&\underline{\hbox{tr}\left(T^{(3)}\left(\theta\right)\right)=2e^{3m_{11}^{(+)}\theta}+1}.
\end{eqnarray}
\paragraph{$\underline{r=4\, (\dim\, 81)}$:}
\begin{eqnarray}
&&S\left(4,0\right)\,\left(\dim\,16\right):\qquad e^{4m_{11}^{(+)}\theta}\left(1,1\right)\nonumber\\
&&\phantom{S\left(4,0\right)\,\left(\dim\,16\right):\qquad} e^{2(m_{11}^{(+)}+m_{11}^{(-)})\theta}
\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},e^{\frac{2\pi i}4\cdot 3}\right)\qquad\hbox{[3 times]}\nonumber\\
&&\phantom{S\left(4,0\right)\,\left(\dim\,16\right):\qquad} e^{4m_{11}^{(-)}\theta}\left(1,-1\right)\\
&&S\left(4,1\right)\,\left(\dim\,32\right):\qquad e^{(m_{11}^{(+)}+m_{11}^{(-)}+m_{12}^{(+)}+m_{21}^{(-)})
\theta}\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},e^{\frac{2\pi i}4\cdot 3}\right)\qquad\hbox{[2 times]}
\nonumber\\
&&\phantom{S\left(4,1\right)\,\left(\dim\,32\right):\qquad}e^{(m_{11}^{(+)}+m_{11}^{(-)}+m_{12}^{(-)}+m_{21}^{(+)})
\theta}\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},e^{\frac{2\pi i}4\cdot 3}\right)\qquad\hbox{[2 times]}
\nonumber\\
&&\phantom{S\left(4,1\right)\,\left(\dim\,32\right):\qquad}\left(\begin{array}{c}
e^{(2m_{11}^{(+)}+m_{12}^{(+)}+m_{21}^{(+)})\theta} \\
e^{(2m_{11}^{(+)}+m_{12}^{(-)}+m_{21}^{(-)})\theta} \\
e^{(2m_{11}^{(-)}+m_{12}^{(+)}+m_{21}^{(+)})\theta} \\
e^{(2m_{11}^{(-)}+m_{12}^{(-)}+m_{21}^{(+)})\theta} \\
\end{array}\right)
\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},e^{\frac{2\pi i}4\cdot 3}\right)\\
&&S\left(4,2\right)\,\left(\dim\,24\right):\qquad\left(\begin{array}{c}
e^{2(m_{12}^{(+)}+m_{21}^{(+)})\theta} \\
e^{2(m_{12}^{(-)}+m_{21}^{(-)})\theta} \\
\end{array}\right)\left(1,-1\right)\nonumber\\
&&\phantom{S\left(4,2\right)\,\left(\dim\,24\right):\qquad}
e^{(m_{12}^{(+)}+m_{12}^{(-)}+m_{21}^{(+)}+m_{21}^{(-)})\theta}\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},
e^{\frac{2\pi i}4\cdot 3}\right)\nonumber\\
&&\phantom{S\left(4,2\right)\,\left(\dim\,24\right):\qquad}\left(\begin{array}{c}
e^{(m_{11}^{(+)}+m_{12}^{(+)}+m_{21}^{(+)})\theta} \\
e^{(m_{11}^{(+)}+m_{12}^{(-)}+m_{21}^{(-)})\theta} \\
e^{(m_{11}^{(-)}+m_{12}^{(+)}+m_{21}^{(-)})\theta} \\
e^{(m_{11}^{(-)}+m_{12}^{(-)}+m_{21}^{(+)})\theta} \\
\end{array}\right)
\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},e^{\frac{2\pi i}4\cdot 3}\right)\\
&&S\left(4,3\right)\,\left(\dim\,8\right):\qquad\left(\begin{array}{c}
e^{(m_{12}^{(+)}+m_{21}^{(+)})\theta} \\
e^{(m_{12}^{(-)}+m_{21}^{(-)})\theta} \\
\end{array}\right)\left(1,e^{\frac{2\pi i}4},e^{\frac{2\pi i}4\cdot 2},e^{\frac{2\pi i}4\cdot 3}\right)\\
&&S\left(4,4\right)\,\left(\dim\,1\right):\qquad 1\\
&&\underline{\hbox{tr}\left(T^{(4)}\left(\theta\right)\right)=2e^{4m_{11}^{(+)}\theta}+1}.
\end{eqnarray}
Now we indicate briefly the direct construction of the trace for all $N$, without constructing the full set of eigenvalues
explicitly.
Set, for the coefficients on the diagonal and the antidiagonal respectively,
\begin{equation}
d_{ij}=\frac 12\left(e^{m_{ij}^{(+)}\theta}+e^{m_{ij}^{(-)}\theta}\right)=d_{i\overline{j}},\qquad
a_{ij}=\frac 12\left(e^{m_{ij}^{(+)}\theta}-e^{m_{ij}^{(-)}\theta}\right)=a_{i\overline{j}},
\end{equation}
where
\begin{equation}
i=1,\,2,\ldots,\,p-1\qquad \overline{i}=(2p-1),\ldots,\,(p+1)\qquad p=\frac 12\left(N+1\right).
\end{equation}
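The trace evaluation below rests on $d_{ii}+a_{ii}=e^{m_{ii}^{(+)}\theta}$ and $d_{ii}-a_{ii}=e^{m_{ii}^{(-)}\theta}$, immediate from the definitions above; a one-line numerical check (ours, with arbitrary sample values):

```python
import math

theta, m_plus, m_minus = 0.37, 1.2, -0.8    # arbitrary sample values (our choice)
d = (math.exp(m_plus * theta) + math.exp(m_minus * theta)) / 2
a = (math.exp(m_plus * theta) - math.exp(m_minus * theta)) / 2

assert abs((d + a) - math.exp(m_plus * theta)) < 1e-12
assert abs((d - a) - math.exp(m_minus * theta)) < 1e-12
```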
From (1.7) and (1.15)
\begin{eqnarray}
&&t\left(\theta\right)=\sum_i\left((pi)\otimes\left(d_{ip}(ip)+a_{ip}(\overline{i}p)\right)+
(p\overline{i})\otimes\left(a_{ip}(ip)+d_{ip}(\overline{i}p)\right)+\right.\nonumber\\
&&\phantom{t\left(\theta\right)=}\left.(ip)\otimes\left(d_{pi}(pi)+a_{pi}(p\overline{i})\right)+(\overline{i}p)\otimes\left(a_{pi}(pi)+
d_{pi}(p\overline{i})\right)\right)\nonumber\\
&&\phantom{t\left(\theta\right)=}+\sum_{i,j}\left((ji)\otimes\left(d_{ij}(ij)+a_{ij}(\overline{i}\overline{j})\right)+
(\overline{j}\overline{i})\otimes\left(a_{ij}(ij)+d_{ij}(\overline{i}\overline{j})\right)+\right.\nonumber\\
&&\phantom{t\left(\theta\right)=}\left.(j\overline{i})\otimes\left(d_{ij}(\overline{i}j)+a_{ij}(i\overline{j})\right)+
(\overline{j}i)\otimes\left(a_{ij}(\overline{i}j)+d_{ij}(i\overline{j})\right)\right)\nonumber\\
&&\phantom{t\left(\theta\right)=}+\sum_{i}\left((i\overline{i})\otimes\left(d_{ii}(\overline{i}i)+a_{ii}
(i\overline{i})\right)+(\overline{i}i)\otimes\left(a_{ii}(\overline{i}i)+d_{ii}(i\overline{i})\right)\right)\nonumber\\
&&\phantom{t\left(\theta\right)=}+\sum_{i}\left((ii)\otimes\left(d_{ii}(ii)+a_{ii}(\overline{i}\overline{i})\right)+
(\overline{i}\overline{i})\otimes\left(a_{ii}(ii)+d_{ii}(\overline{i}\overline{i})\right)\right)+\nonumber\\
&&\phantom{t\left(\theta\right)=}(pp)\otimes(pp)
\end{eqnarray}
Crucial features to be noted:
\begin{enumerate}
\item Only the diagonal blocks have nonzero terms on the diagonal;
\item In each such block there are only two non-zero terms (with only one for the $p$-th);
\item These features are iterated under successive coproducts.
\end{enumerate}
Thus
\begin{eqnarray}
&&\hbox{tr}\left(T\left(\theta\right)\right)=\hbox{tr}\left(\sum_i\left(t_{ii}\left(\theta\right)+
t_{\overline{i}\overline{i}}\left(\theta\right)\right)+t_{pp}\left(\theta\right)\right)\nonumber\\
&&\phantom{\hbox{tr}\left(T\left(\theta\right)\right)}=2\sum_i\left(d_{ii}+a_{ii}\right)+1\nonumber\\
&&\phantom{\hbox{tr}\left(T\left(\theta\right)\right)}=2\sum_ie^{m_{ii}^{(+)}\theta}+1\\
&&\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)=2\sum_i\left(d_{ii}\left(d_{ii}+a_{ii}\right)+
a_{ii}\left(d_{ii}+a_{ii}\right)\right)+1\nonumber\\
&&\phantom{\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)}=2\sum_i\left(d_{ii}+a_{ii}\right)^2+1\nonumber\\
&&\phantom{\hbox{tr}\left(T^{(2)}\left(\theta\right)\right)}=2\sum_ie^{2m_{ii}^{(+)}\theta}+1
\end{eqnarray}
and continuing stepwise
\begin{eqnarray}
&&\hbox{tr}\left(T^{(r)}\left(\theta\right)\right)=2\sum_{i=1}^{p-1}\left(d_{ii}+a_{ii}\right)^r+1
=2\sum_{i=1}^{p-1}e^{rm_{ii}^{(+)}\theta}+1
\end{eqnarray}
This is how the structure of our projector basis leads to a direct evaluation of $\hbox{tr}\left(T^{(r)}
\left(\theta\right)\right)$ for the general case without the full list of eigenvalues. The trace is given by
$2(p-1)+1=2p-1=N$ eigenvalues. The remaining $\left(N^r-N\right)$ eigenvalues give zero trace, as we have seen, due
to multiplet structures corresponding to roots of unity.
\section{\LARGE Encounter with a theorem of Fermat}
\setcounter{equation}{0}
A well-known theorem of Fermat (his ``little'' theorem) states that
\begin{equation}
N^r\equiv N\quad \left(\hbox{mod}\ r\right),
\end{equation}
where $\left(N,r\right)$ are positive integers and $r$ is a prime number. Writing it as
\begin{equation}
N^r-N=rM,
\end{equation}
we try to obtain the integer $M$ explicitly with the following purpose.
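Divisibility of $N^r-N$ by a prime $r$, i.e. (B.2) with integer $M$, is easily confirmed numerically (a sketch of our own):

```python
# Fermat's little theorem: r | (N**r - N) for every integer N when r is prime.
for r in (2, 3, 5, 7, 11, 13):
    for N in range(1, 50):
        assert (N ** r - N) % r == 0
```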
In sec. 6 we noted that, out of the $N^r$ eigenvalues of the transfer matrix $T^{(r)}\left(\theta\right)$ of order $r$,
$\left(N^r-N\right)$ eigenvalues must give zero trace when summed. We have also seen how such a zero-trace constraint is
implemented in our case through multiplets corresponding to roots of unity, as explained in (6.8), (6.9) and (6.10).
When $r$ is a prime number the minimal multiplets can consistently be ``$r$-plets'' (or ``$nr$-plets'') only if (B.2) is satisfied. But precisely
this is guaranteed by (B.1). This is the link of our multiplet structure with (B.1). When $r$ is factorizable one can have
lower multiplets $\left(r_1,r_2,\ldots\right)$ for, say, $r=r_1\cdot r_2\cdots r_n$. This is already seen for $r=4$ in
(6.10). We are interested here in odd integers $N$, but (B.2) holds also for even $N$. We now construct $M$, giving the
number of $r$-plets.
Different constructions of $M$ are certainly possible. The one particularly suitable for our purpose is as follows. One has
for $r=1,\,3,\,5,\,7,\,11,\ldots$ respectively
\begin{eqnarray}
&&N-N=0,\nonumber\\
&&N^3-N=\left(N-1\right)N\left(N+1\right),\nonumber\\
&&N^5-N=\left(N-2\right)\left(N-1\right)N\left(N+1\right)\left(N+2\right)+5\left(N-1\right)N\left(N+1\right)\nonumber\\
&&N^7-N=\left(N-3\right)\left(N-2\right)\left(N-1\right)N\left(N+1\right)\left(N+2\right)\left(N+3\right)+\nonumber\\
&&\phantom{N^7-N=}7\left(2\left(N-2\right)\left(N-1\right)N\left(N+1\right)\left(N+2\right)+
3\left(N-1\right)N\left(N+1\right)\right)\nonumber
\end{eqnarray}
Continuing thus with products of consecutive factors
\begin{eqnarray}
&&N^{11}-N=\left(N-5\right)\cdots N\cdots\left(N+5\right)+\nonumber\\
&&\phantom{N^7-N=}11\left[5\left(N-4\right)\cdots N\cdots\left(N+4\right)+57\left(N-3\right)\cdots N\cdots\left(N+3\right)
+\right.\nonumber\\
&&\phantom{N^7-N=}128\left(N-2\right)\cdots N\cdots\left(N+2\right)+31\left(N-1N\left(N+1\right)\right]
\end{eqnarray}
Thus for $N=3$, $r=3,\,5,\,7,\ldots$, one has respectively 8 triplets, 48 5-plets, 312 7-plets and so on
(unless 10-plets, 14-plets and so on are also obtained).
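The decompositions above, and the quoted counts for $N=3$, can be verified directly (our own check, with `band(N, p)` denoting the product $(N-p)\cdots N\cdots(N+p)$):

```python
from math import prod

def band(N, p):
    """Product (N-p)(N-p+1)...(N+p) of 2p+1 consecutive integers."""
    return prod(N + k for k in range(-p, p + 1))

for N in range(1, 25):
    assert N ** 3 - N == band(N, 1)
    assert N ** 5 - N == band(N, 2) + 5 * band(N, 1)
    assert N ** 7 - N == band(N, 3) + 7 * (2 * band(N, 2) + 3 * band(N, 1))
    assert N ** 11 - N == band(N, 5) + 11 * (5 * band(N, 4) + 57 * band(N, 3)
                                             + 128 * band(N, 2) + 31 * band(N, 1))

# r-plet counts M = (N**r - N)/r quoted for N = 3: 8 triplets, 48 5-plets, 312 7-plets.
assert [(3 ** r - 3) // r for r in (3, 5, 7)] == [8, 48, 312]
```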
The first term is
\begin{equation}
\left(N-\frac{r-1}2\right)\ldots \left(N+\frac{r-1}2\right)
\end{equation}
and, being a product of $r$ consecutive integers, is evidently divisible by $r$. The lower order products all carry $r$ as
an explicit factor. Hence the result. When $N\leq \frac{r-1}2$ one or more of the higher order products vanish. But though
$N^r$ is then no longer directly visible on the right, the result still holds. Thus, though $\left(N-3\right)$ makes the leading term vanish for $N=3$, $r=7$,
\begin{equation}
3^7-3=0+7\cdot (312).
\end{equation}
For completeness we give the general result below
\begin{eqnarray}
&&N^r-N=\sum_{p=1}^{\frac{r-1}2}A_p\left(r\right)\prod_{k=-p}^p\left(N+k\right)\equiv
\sum_{p=1}^{\frac{r-1}2}A_p\left(r\right)B\left(N,p\right)
\end{eqnarray}
where
\begin{eqnarray}
&&A_{\frac{r-1}2}\left(r\right)=1,\nonumber\\
&&A_{\frac{r-2k+1}2}\left(r\right)=\sum_{m_1=1}^{k-1}\sum_{m_2=1}^{m_1-1}\cdots
\sum_{m_{k-1}=1}^{m_{k-2}-1}\left[-\frac{H_{m_1}^{2k-2m_1}\left(r\right)}{\left(2k-2m_1\right)!}\right]\times
\left[-\frac{H_{m_2}^{2m_1-2m_2}\left(r\right)}{\left(2m_1-2m_2\right)!}\right]\times\cdots\times\nonumber\\
&&\phantom{A_{\frac{r-2k+1}2}\left(r\right)=}\left[-\frac{H_{m_{k-1}}^{2m_{k-2}-2m_{k-1}}\left(r\right)}
{\left(2m_{k-2}-2m_{k-1}\right)!}\right],\qquad
k=2,3,\ldots,\frac{r-1}2.
\end{eqnarray}
The elements $H_k^m$ are given by
\begin{eqnarray}
&&H_k^m\left(r\right)=\left(\sum_{p_1=-\frac{r-2k+1}2}^{\frac{r-2k+1}2}p_1\right)
\left(\sum_{p_2=-\frac{r-2k+1}2\atop p_2\neq p_1}^{\frac{r-2k+1}2}p_2\right)\cdots
\left(\sum_{{p_m=-\frac{r-2k+1}2\atop p_m\neq p_1,\ldots,p_{m-1}}}^{\frac{r-2k+1}2}p_m\right)\qquad m\neq
0\nonumber\\
&&H_k^0\left(r\right)=1.
\end{eqnarray}
For example,
\begin{eqnarray}
&&H_k^2\left(r\right)=-\frac{1}{12}\left(r-2k+1\right)\left(r-2k+2\right)\left(r-2k+3\right),\nonumber\\
&&H_k^4\left(r\right)=\frac{1}{240}\left(5r+17-10k
\right)\left(r+3-2k\right)\left(r-2k+2\right)\left(r-2k+1\right)\left(
r-2k\right)\left(r-1-2k\right),\nonumber\\
&&H_{\frac{r-2k+1}2}^{2k}\left(r\right)=\left(-1\right)^k(2k)!(k!)^2,\,\,
k=0,\ldots,\frac{r-1}2,\nonumber\\
&&H_{\frac{r-2p+1}2}^{2p+2k-r-1}\left(r\right)=\left(2p+2k-r-1\right)!\left.\frac{d^{r-2k+2}B(N,p)}{dN^{r-2k+2}}
\right|_{N=0},\,\, p=\frac{{r-2k+1}}2,\ldots,\frac{{r-1}}2,\nonumber\\
&&H_{m_1}^{2k-2m_1}\left(r\right)=\left(2k-2m_1\right)!\left.\frac{d^{2m_1+1}B(N,\frac{r-2m_1+1}2)}{dN^{2m_1+1}}
\right|_{N=0},\,\, m_1=k,\ldots,\frac{{r-1}}2
\end{eqnarray}
The above formula gives the general expression for the coefficients. But, as Alain Lascoux has pointed out, the symmetric
form of Newton's interpolation formula relevant for our case leads to complete symmetric functions as coefficients. These can be
obtained systematically and conveniently. Thus one obtains, for example,
\begin{eqnarray}
&&N^{11}-N=\left(N-5\right)\cdots N\cdots\left(N+5\right)+\nonumber\\
&&\phantom{N^7-N=}\left(N-4\right)\cdots N\cdots\left(N+4\right)\left(1^2+2^2+3^2+4^2+5^2\right)+\nonumber\\
&&\phantom{N^7-N=}\left(N-3\right)\cdots N\cdots\left(N+3\right)\left(1^4+2^4+3^4+4^4+1^2\cdot 2^2+1^2\cdot 3^2\right.\nonumber\\
&&\phantom{N^7-N=}\left.+1^2\cdot 4^2+2^2\cdot 3^2+2^2\cdot 4^2+3^2\cdot 4^2\right)+\nonumber\\
&&\phantom{N^7-N=}\left(N-2\right)\cdots N\cdots\left(N+2\right)\left(1^6+2^6+3^6+1^4\cdot 3^2+2^4\cdot 3^2\right.
\nonumber\\
&&\phantom{N^7-N=}\left.
+2^4\cdot 1^2+3^4\cdot 1^2+3^4\cdot 2^2+1^4\cdot 2^2+1^2\cdot 2^2\cdot 3^2\right)+\nonumber\\
&&\phantom{N^7-N=}\left(N-1\right)N\left(N+1\right)\left(1^8+2^8+1^6\cdot 2^2+2^6\cdot 1^2
+1^4\cdot 2^4\right)
\end{eqnarray}
This gives (B.3) in which 11 is already factorized.
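Indeed, the bracketed coefficients above are the complete homogeneous symmetric functions $h_k$ evaluated at the squares $1^2,2^2,\ldots$; a short check (ours) that they reproduce the factors $5,\,57,\,128,\,31$ of (B.3), each times $11$:

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric function of degree k in the variables xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

squares = lambda m: [i * i for i in range(1, m + 1)]

# Coefficients of the bands (N-4)...(N+4), (N-3)...(N+3), (N-2)...(N+2), (N-1)N(N+1):
assert [h(1, squares(5)), h(2, squares(4)), h(3, squares(3)), h(4, squares(2))] \
       == [11 * 5, 11 * 57, 11 * 128, 11 * 31]
```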
\section{\LARGE $\widehat{R}tt$-algebra}
\setcounter{equation}{0}
We present below a canonical formulation of the $\widehat{R}tt$ algebra \cite{14} specifically adapted to our case. The
Baxterized form with $N^2$ blocks of $N\times N$ matrices $t_{ij}\left(\theta\right)$ must satisfy
\begin{equation}
\widehat{R}\left(\theta-\theta'\right)\left(t\left(\theta\right)\otimes t\left(\theta'\right)\right)=
\left(t\left(\theta'\right)\otimes t\left(\theta\right)\right)\widehat{R}\left(\theta-\theta'\right),
\end{equation}
where (since $P^2=I$)
\begin{equation}
t\left(\theta\right)\otimes t\left(\theta'\right)=
\left(t\left(\theta\right)\otimes I\right)\left(I\otimes t\left(\theta'\right)\right)\equiv
t_1\left(\theta\right)t_2\left(\theta'\right)=
\left(t_1\left(\theta\right)P\right)\left(Pt_2\left(\theta'\right)\right).
\end{equation}
But $t_1\left(\theta\right)=Pt_2\left(\theta\right)P$, hence
\begin{equation}
t_1\left(\theta\right)P=Pt_2\left(\theta\right)\equiv \widehat{t}\left(\theta\right).
\end{equation}
Thus
\begin{equation}
\widehat{R}\left(\theta-\theta'\right)\widehat{t}\left(\theta\right)\widehat{t}\left(\theta'\right)=
\widehat{t}\left(\theta'\right)\widehat{t}\left(\theta\right)\widehat{R}\left(\theta-\theta'\right),
\end{equation}
where one has just matrix multiplication of the same matrix $\widehat{t}$ with arguments $\left(\theta,\theta'\right)$.
Now suppose that one has obtained explicitly the diagonalizer $M$ of $\widehat{R}\left(\theta\right)$. When
\begin{equation}
\widehat{R}\left(\theta\right)=\sum_{\alpha,\beta}f_{\alpha\beta}\left(\theta\right)P_{\alpha\beta},
\end{equation}
where $P_{\alpha\beta}$ form a complete basis of $\theta$-independent projectors (and the minimal polynomial
equation satisfied by $\widehat{R}\left(\theta\right)$ has no multiple roots for consistency) one can construct a
$\theta$-independent $M$ to diagonalize each $P_{\alpha\beta}$ simultaneously. For our nested sequence of projectors
(1.3) the diagonalizer is given in sec. 3 of ref. 1 as
\begin{eqnarray}
&&\sqrt{2}M=\sqrt{2}M^{-1}=\sqrt{2}(pp)\otimes (pp)+\nonumber\\
&&\phantom{\sqrt{2}M=\sqrt{2}M^{-1}=}(pp)\otimes\left(\sum_i\left( (ii)-(\overline{i}\overline{i})+(i\overline{i})+
(\overline{i}i)\right)\right)\nonumber\\
&&\phantom{\sqrt{2}M=\sqrt{2}M^{-1}=}+\left(\sum_i\left( (ii)-(\overline{i}\overline{i})+(i\overline{i})+
(\overline{i}i)\right)\right)\otimes(pp)+\\
&&\phantom{\sqrt{2}M=\sqrt{2}M^{-1}=}\sum_{i,j}\left(\left((ii)-(\overline{i}\overline{i})\right)\otimes
\left((jj)+(\overline{j}\overline{j})\right)+\left((i\overline{i})+(\overline{i}i)\right)\otimes
\left((j\overline{j})+(\overline{j}j)\right)\right),\nonumber
\end{eqnarray}
where
\begin{equation}
i=1,2,\ldots,p-1,\qquad \overline{i}=2p-1,\ldots,p+1,\qquad N=2p-1.
\end{equation}
For $N=3$ ($p=2$), one obtains (see (3.5) of ref. 1)
\begin{eqnarray}
&&\sqrt{2}M=\sqrt{2}M^{-1}=\left(1,1,1,1,\sqrt{2},-1,-1,-1,-1\right)_{\hbox{diag.}}+\nonumber\\
&&\phantom{\sqrt{2}M=\sqrt{2}M^{-1}=}\left(1,1,1,1,\sqrt{2},1,1,1,1\right)_{\hbox{anti-diag.}}
\end{eqnarray}
($\sqrt{2}$ being the common central element of the diagonal and the antidiagonal).
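That this matrix indeed satisfies $M=M^{-1}$ can be confirmed numerically (our own sketch of the $9\times 9$ case, with the shared central $\sqrt{2}$ counted once):

```python
import numpy as np

s = np.sqrt(2.0)
diag = [1, 1, 1, 1, s, -1, -1, -1, -1]
anti = [1, 1, 1, 1, s, 1, 1, 1, 1]

M = np.zeros((9, 9))
for i in range(9):
    M[i, i] += diag[i]
    if i != 4:                  # the central sqrt(2) is shared by diagonal and antidiagonal
        M[i, 8 - i] += anti[i]
M /= s

assert np.allclose(M, M.T)              # symmetric
assert np.allclose(M @ M, np.eye(9))    # M is its own inverse
```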
The general case is now evident. Defining
\begin{equation}
M\widehat{R}\left(\theta\right)M^{-1}=D\left(\theta\right)
\end{equation}
where $D\left(\theta\right)$ is a diagonal matrix, and
\begin{equation}
M\widehat{t}\left(\theta\right)M^{-1}=K\left(\theta\right)
\end{equation}
one obtains, quite generally,
\begin{equation}
D\left(\theta-\theta'\right)K\left(\theta\right)K\left(\theta'\right)=
K\left(\theta'\right)K\left(\theta\right)D\left(\theta-\theta'\right).
\end{equation}
This is our canonical formulation \cite{15}. For our present case (see (3.3), (3.4) of ref. \cite{1})
\begin{equation}
D\left(\theta\right)=\left(e^{m_{11}^{(+)}\theta},e^{m_{12}^{(+)}\theta},\ldots,e^{m_{11}^{(-)}\theta}\right).
\end{equation}
From (C.11)
\begin{equation}
D_{aa}\left(\theta-\theta'\right)K_{ac}\left(\theta\right)K_{cb}\left(\theta'\right)=
K_{ac}\left(\theta'\right)K_{cb}\left(\theta\right)D_{bb}\left(\theta-\theta'\right).
\end{equation}
For $N=3$, for our case, defining the $3\times 3$ diagonal blocks
\begin{eqnarray}
&&d_{11}\left(\theta\right)=\left(e^{m_{11}^{(+)}\theta},e^{m_{12}^{(+)}\theta},
e^{m_{11}^{(+)}\theta}\right)_{\hbox{diag;}}\nonumber\\
&&d_{22}\left(\theta\right)=\left(e^{m_{21}^{(+)}\theta},1,e^{m_{21}^{(-)}\theta}\right)_{\hbox{diag;}}\nonumber\\
&&d_{\overline{1}\overline{1}}\left(\theta\right)=\left(e^{m_{11}^{(-)}\theta},e^{m_{12}^{(-)}\theta},
e^{m_{11}^{(-)}\theta}\right)_{\hbox{diag;}}
\end{eqnarray}
and denoting $\left(K\left(\theta\right),K\left(\theta'\right),D\left(\theta-\theta'\right)\right)\equiv
\left(K,K',D''\right)$, with $d_{11}\left(\theta-\theta'\right)\equiv d_{11}''$ and so on, one obtains
\begin{equation}
\left(\begin{array}{ccc}
d_{11}''\left(KK'\right)_{11} & d_{11}''\left(KK'\right)_{12} & d_{11}''\left(KK'\right)_{1\overline{1}} \\
d_{22}''\left(KK'\right)_{21} & d_{22}''\left(KK'\right)_{22} & d_{22}''\left(KK'\right)_{2\overline{1}} \\
d_{\overline{1}\overline{1}}''\left(KK'\right)_{\overline{1}1} & d_{\overline{1}\overline{1}}''
\left(KK'\right)_{\overline{1}2} & d_{\overline{1}\overline{1}}''\left(KK'\right)_{\overline{1}\overline{1}} \\
\end{array}\right)=\left(\begin{array}{ccc}
\left(K'K\right)_{11}d_{11}'' & \left(K'K\right)_{12}d_{22}'' & \left(K'K\right)_{1\overline{1}}
d_{\overline{1}\overline{1}}'' \\
\left(K'K\right)_{21}d_{11}'' & \left(K'K\right)_{22}d_{22}'' & \left(K'K\right)_{2\overline{1}}
d_{\overline{1}\overline{1}}'' \\
\left(K'K\right)_{\overline{1}1}d_{11}'' & \left(K'K\right)_{\overline{1}2}d_{22}''
& \left(K'K\right)_{\overline{1}\overline{1}}d_{\overline{1}\overline{1}}'' \\
\end{array}\right).
\end{equation}
Substituting the explicit form of $\left(KK'\right)_{ij}$ one obtains the full set of 81 relations (for $N=3$) of the
$\widehat{R}tt$-algebra. Each $K_{ij}$ is a $3\times 3$ block whose elements are the blocks of $t_{ij}$. Thus
\begin{eqnarray}
&&2K_{11}=\left(\begin{array}{ccc}
t_{11}+t_{\overline{1}\overline{1}} & t_{12}+t_{\overline{1}2} & t_{1\overline{1}}+t_{\overline{1}1} \\
0 & 0 & 0 \\
t_{1\overline{1}}+t_{\overline{1}1} & t_{12}+t_{\overline{1}2} & t_{11}+t_{\overline{1}\overline{1}} \\
\end{array}\right)\nonumber\\
&&\phantom{2K_{11}}=\left(t_{11}+t_{\overline{1}\overline{1}}\right)\left((11)+(\overline{1}\overline{1})\right)
+\left(t_{12}+t_{\overline{1}2}\right)\left((12)+(\overline{1}2)\right)+\nonumber\\
&&\phantom{2K_{11}=}\left(t_{1\overline{1}}+t_{\overline{1}1}\right)\left((1\overline{1})+(\overline{1}1)\right).
\end{eqnarray}
In such a notation
\begin{eqnarray}
&&2K_{\overline{1}\overline{1}}=\left(t_{11}+t_{\overline{1}\overline{1}}\right)\left(-(11)+(\overline{1}\overline{1})
\right)+\left(t_{12}+t_{\overline{1}2}\right)\left(-(12)+(\overline{1}2)\right)+\nonumber\\
&&\phantom{2K_{11}=}\left(t_{1\overline{1}}+t_{\overline{1}1}\right)\left(-(1\overline{1})+(\overline{1}1)\right),\\
&&2K_{1\overline{1}}=\left(t_{1\overline{1}}-t_{\overline{1}1}\right)\left((11)-(\overline{1}\overline{1})
\right)+\left(t_{12}-t_{\overline{1}2}\right)\left((12)-(\overline{1}2)\right)+\nonumber\\
&&\phantom{2K_{11}=}\left(t_{11}-t_{\overline{1}\overline{1}}\right)\left((1\overline{1})-(\overline{1}1)\right),\\
&&2K_{\overline{1}1}=\left(t_{1\overline{1}}-t_{\overline{1}1}\right)\left((11)+(\overline{1}\overline{1})
\right)+\left(t_{12}-t_{\overline{1}2}\right)\left((12)+(\overline{1}2)\right)+\nonumber\\
&&\phantom{2K_{11}=}\left(t_{11}-t_{\overline{1}\overline{1}}\right)\left((1\overline{1})+(\overline{1}1)\right),\\
&&2K_{12}=\left(t_{11}+t_{1\overline{1}}+t_{\overline{1}1}+t_{\overline{1}\overline{1}}\right)(21)+
\sqrt{2}\left(t_{12}+t_{\overline{1}2}\right)(22)+\nonumber\\
&&\phantom{2K_{11}=}\left(t_{11}-t_{1\overline{1}}+t_{\overline{1}1}-t_{\overline{1}\overline{1}}
\right)(2\overline{1}),\\
&&2K_{\overline{1}2}=\left(t_{11}+t_{1\overline{1}}-t_{\overline{1}1}-t_{\overline{1}\overline{1}}\right)(21)+
\sqrt{2}\left(t_{12}-t_{\overline{1}2}\right)(22)+\nonumber\\
&&\phantom{2K_{11}=}\left(t_{11}-t_{1\overline{1}}-t_{\overline{1}1}+t_{\overline{1}\overline{1}}
\right)(2\overline{1}),\\
&&2K_{21}=\left(t_{21}+t_{2\overline{1}}\right)\left((11)+(1\overline{1})\right)+2t_{22}(12)
+\left(t_{21}-t_{2\overline{1}}\right)\left((\overline{1}1)-(\overline{1}\overline{1})\right),\\
&&2K_{2\overline{1}}=\left(t_{21}-t_{2\overline{1}}\right)\left(-(11)+(1\overline{1})\right)+2t_{22}(\overline{1}2)
+\left(t_{21}+t_{2\overline{1}}\right)\left((\overline{1}1)+(\overline{1}\overline{1})\right),\\
&&2K_{22}=\sqrt{2}\left(t_{21}+t_{2\overline{1}}\right)(21)+
2t_{22}(22)+\sqrt{2}\left(t_{21}-t_{2\overline{1}}\right)(2\overline{1}).
\end{eqnarray}
We have not used the $\widehat{R}tt$ constraints directly in constructing the eigenstates. But since (C.1) is the basic
equation providing the starting point, we present here the most compact approach to the full set of 81 constraints for
$N=3$.
\end{appendix}
\newpage
nucl-th/0601032
\section*{Acknowledgements} F. J. L. thanks the organizers of this
interesting meeting at the 30th BFKL birthday, where QCD, Regge theory
and experimental data were harmoniously combined, for their warm
hospitality and the opportunity to report on this work.
math/0601381
\renewcommand{\part}[1]{\frac{\partial}{\partial #1}}
\newcommand\half{\frac{1}{2}}
\newcommand\quarter{\frac{1}{4}}
\newcommand\norm[1]{||\,#1\,||}
\newcommand{\mbox{\bf R}^{n}}{\mbox{\bf R}^{n}}
\newcommand{\hk}[1]{\langle #1\rangle}
\newcommand{\mbox{\bf R}}{\mbox{\bf R}}
\newcommand{\mbox{\bf C}}{\mbox{\bf C}}
\newcommand{\mbox{\bf Z}}{\mbox{\bf Z}}
\newcommand{\mbox{\bf N}}{\mbox{\bf N}}
\newcommand{\mbox{\rm WF\,}}{\mbox{\rm WF\,}}
\newcommand{\mbox{\rm SE\,}}{\mbox{\rm SE\,}}
\newcommand{\mbox{\rm dist\,}}{\mbox{\rm dist\,}}
\newcommand{\mbox{\rm Spec\,}}{\mbox{\rm Spec\,}}
\newcommand{\mbox{\rm ad\,}}{\mbox{\rm ad\,}}
\newcommand{\mbox{\rm Tr\,}}{\mbox{\rm Tr\,}}
\renewcommand{\exp}{\mbox{\rm exp\,}}
\newcommand{\mbox{\rm supp}}{\mbox{\rm supp}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newtheorem{dref}{Definition}[section]
\newtheorem{lemma}[dref]{Lemma}
\newtheorem{theo}[dref]{Theorem}
\newtheorem{prop}[dref]{Proposition}
\newtheorem{remark}[dref]{Remark}
\newtheorem{ex}[dref]{Example}
\newtheorem{coro}[dref]{Corollary}
\renewcommand{\thedref}{\thesection.\arabic{dref}}
\newenvironment{proof}{\par\noindent{{\bf Proof:}}}{\hfill$\Box$\medskip}
\title{Eigenvalue asymptotics for randomly
perturbed non-\sa{} \op{}s}
\author{Mildred Hager\\ \lagom Dept of
Mathematics\\ \lagom University of California\\
\lagom Berkeley, CA 94720\\ \lagom mhager@math.berkeley.edu
\and Johannes
Sj{\"o}strand\\ \lagom CMLS\\ \lagom Ecole Polytechnique\\ \lagom FR 91120
Palaiseau c\'edex\\ \lagom johannes@{}math.polytechnique.fr \\ \lagom and
UMR7640--CNRS}
\date{}
\begin{document}
\maketitle
\begin{abstract} We consider quite general $h$-\pop{}s on ${\bf R}^n$ with small
random perturbations and show that in the limit $h\to 0$ the \ev{}s are
distributed according to a Weyl law with a probability that tends to 1.
The first author has previously obtained a similar result in dimension 1.
Our class of perturbations is different.\medskip
\par
\centerline{\bf R\'esum\'e}\medskip
\par Nous consid\'erons des op\'erateurs $h$-pseudodiff\'erentiels assez
g\'en\'eraux et nous montrons que dans la limite $h\to 0$, les valeurs
propres se distribuent selon une loi de Weyl, avec une probabilit\'e qui
tend vers 1. Le premier auteur a d\'ej\`a obtenu un r\'esultat semblable
en dimension 1. Notre classe de perturbations est diff\'erente.
\end{abstract}
\vskip 2mm
\noindent
{\bf Keywords and Phrases:} Non-selfadjoint, eigenvalue,
random perturbation
\vskip 1mm
\noindent
{\bf Mathematics Subject Classification 2000}: 35P20, 30D35
\tableofcontents
\section{Introduction}\label{int}
\setcounter{equation}{0}
\par This work can be viewed as a continuation of \cite{Ha2}, where one
of us studied random perturbations of non-\sa{} $h$-\pop{}s on ${\bf R}$
and showed that Weyl \asy{}s holds with a \proba{} that is very close to
1. In the present work we consider the multidimensional case and weaken
some of the assumptions in \cite{Ha2} (like independence of the
differentials and analyticity of the symbol). Our random perturbations are
slightly different, however: in \cite{Ha2} they are given by a random
potential, while here they are given by a random integral \op{}.
\par Before continuing the general discussion, we fix the framework more in
detail. We will work in the semi-classical limit on ${\bf R}^n$. Write $\rho
=(x,\xi )$ and let $m\ge 1$ be an order \fu{} on the phase space ${\bf
R}^{2n}_{x,\xi }$:
\eekv{int.1}
{\exists C_0\ge 1,\ N_0>0\hbox{ such that }m(\rho )\le C_0\langle \rho
-\mu \rangle ^{N_0}m(\mu ),}
{\forall \rho ,\mu \in{\bf R}^{2n},\ \langle \rho -\mu \rangle
=\sqrt{1+\vert \rho -\mu \vert ^2}.}
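As a quick numerical sanity check (an illustration only, not used in the paper), one can test \no{int.1} for the model order function $m(\rho )=\langle \rho \rangle ^2$, for which Peetre's inequality gives the admissible constants $C_0=2$, $N_0=2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def jb(v):
    # "Japanese bracket" <v> = sqrt(1 + |v|^2)
    return np.sqrt(1.0 + np.sum(v * v, axis=-1))

def m(v):
    # model order function m(rho) = <rho>^2
    return jb(v) ** 2

# random points rho, mu in R^{2n} with n = 2
rho = rng.normal(scale=5.0, size=(100_000, 4))
mu = rng.normal(scale=5.0, size=(100_000, 4))

# int.1 with C0 = 2, N0 = 2:  m(rho) <= 2 <rho - mu>^2 m(mu)
assert np.all(m(rho) <= 2.0 * jb(rho - mu) ** 2 * m(mu))
```

The estimate here is in fact exact: $1+\vert \rho \vert ^2\le 1+2\vert \rho -\mu \vert ^2+2\vert \mu \vert ^2\le 2(1+\vert \rho -\mu \vert ^2)(1+\vert \mu \vert ^2)$.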
The corresponding symbol space (cf \cite{DiSj}) is then
\ekv{int.2}
{
S({\bf R}^{2n},m)=\{ a\in C^\infty ({\bf R}^{2n});\, \vert \partial _\rho
^\alpha a(\rho )\vert \le C_\alpha m(\rho ),\, \rho \in {\bf R}^{2n},\,
\alpha \in{\bf N}^{2n}\}.
}
Let
\ekv{int.3}
{
P(\rho ;h)\sim p(\rho )+hp_1(\rho )+...\hbox{ in } S({\bf R}^{2n},m).
}
Assume $\exists$ $z_0\in{\bf C},\, C_0>0$ such that
\ekv{int.4}
{
\vert p(\rho )-z_0\vert \ge m(\rho )/C_0,\ \rho \in{\bf R}^{2n}.
}
Let $\Sigma $ denote the closure of $p({\bf R}^{2n})$ so that $\Sigma
=p({\bf R}^{2n})\cup \Sigma _\infty $, where $\Sigma _\infty \subset {\bf
C}$ is the set of accumulation points of $p$ as $(x,\xi
)\to \infty $.
\par For $h>0$ small enough, we also let $P$ denote the $h$-Weyl
quantization,
$$
Pu(x)=P^w(x,hD_x;h)u(x)={1\over (2\pi h)^n}\iint e^{{i\over h}(x-y)\cdot
\eta }P({x+y\over 2},\eta ;h)u(y)dyd\eta .
$$
\par Let $\Omega \Subset {\bf C}\sm \Sigma _\infty $ be open simply
connected
and not entirely contained in $\Sigma $. Then, as we shall see,
\smallskip
\par\noindent $1^o$ $\sigma (P)\cap \Omega $ is discrete for $h>0$ small enough,
\smallskip
\par\noindent $2^o$ $\forall\,\epsilon >0$, $\exists\, h(\epsilon )>0$, such that
$$
\sigma (P)\cap \Omega \subset \Sigma +D(0,\epsilon ),\ 0<h\le h(\epsilon ).
$$
Here $D(0,\epsilon )$
denotes the open disc in ${\bf C}$ with center $0$ and radius $\epsilon
>0$ and we equip the \op{} $P$ with the domain $H(m):=
(P-z_0)^{-1}(L^2({\bf R}^n))$, where the \op{} to the right is the
pseudodifferential inverse of $P-z_0$ (see \cite{DiSj} and \cite{Ha2}).
\par If $P$ is \sa{} (so that $p$ is real-valued) we have Weyl \asy{}s:
\par For every interval $I\subset \Omega $ with ${\rm vol}_{{\bf
R}^{2n}}(p\inv (\partial I))=0$, the number $N(P,I)$ of \ev{}s of $P$ in
$I$ satisfies
\ekv{int.5}
{ N(P,I)={1\over (2\pi h)^n}({\rm vol\,}(p\inv (I))+o(1)),\ h\to 0. }
This
result has been proved with increasing generality and precision by
J.~Chazarain, B.~Helffer--D.~Robert, and V.~Ivrii. (We here
follow the presentation of
\cite{DiSj} where references to original works can be found. The corresponding
developement for \sa{} partial differential \op{}s in the high energy limit
has a long and rich history starting with the work of H.~Weyl \cite{We}.)
A very simple and explicit example is given by the harmonic oscillator
$P={1\over 2}((hD)^2+x^2):L^2({\bf R})\to L^2({\bf R})$, $P(x,\xi )=p(x,\xi
)={1\over 2}(x^2+\xi ^2)$. In this case the \ev{}s are given by $\lambda
_k(h)=(k+{1\over 2})h$, $k=0,1,2,...$
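For this example the Weyl count can be checked by hand: $p^{-1}([0,E])$ is the disc $x^2+\xi ^2\le 2E$ of area $2\pi E$, so \no{int.5} predicts $N(P,[0,E])\approx E/h$, which agrees with $\#\{ k;\,(k+{1\over 2})h\le E\}$ to within one unit. A short numerical confirmation (illustration only):

```python
import numpy as np

h, E = 1e-3, 1.0
ks = np.arange(4000)                   # more than enough eigenvalue indices
count = np.sum((ks + 0.5) * h <= E)    # eigenvalues (k + 1/2) h in [0, E]
weyl = E / h                           # (2 pi E) / (2 pi h): the Weyl term
assert abs(count - weyl) <= 1
```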
\par In the non-\sa{} case, Weyl \asy{}s does not always hold. If $P$ is a
\dop{} with \an{} \coef{}s on the real line, then often the spectrum is
determined by action integrals over complex cycles, having nothing to do
with volumes of subsets of real phase space. A simple example of this is
given by the non-\sa{} harmonic oscillator,
\ekv{int.6}
{
P={1\over 2}((hD)^2+ix^2):L^2({\bf R})\to L^2({\bf R}),
}
whose spectrum is equal to $\{ e^{i\pi /4}(k+{1\over 2})h;\, k\in{\bf
N}\}$. This is easy to see by the method of complex scaling, or by
applying the general multidimensional result of \cite{Sj}. In this case, we
have $\Sigma _\infty =\emptyset $, and $\Sigma $ is the closed 1st
quadrant. Clearly the number of \ev{}s in an open set
$\Gamma \Subset {\bf C}$ intersecting the 1st quadrant, whose closure
avoids the ray given by $\arg z={\pi \over 4}$, is equal to zero while the
corresponding
Weyl \coef{} ${\rm vol}(p^{-1}(\Gamma ))$ is not. (Further results about
the non-\sa{} harmonic oscillator have been obtained by E.B. Davies and L.
Boulton, see \cite{Da} and further references given there).
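The spectrum of \no{int.6} can also be observed numerically. The following sketch (illustration only; the truncation to $[-L,L]$ with Dirichlet conditions, the grid size and the tolerance are ad hoc choices) discretizes $P$ by finite differences and recovers the first eigenvalues $e^{i\pi /4}(k+{1\over 2})h$:

```python
import numpy as np

h, L, N = 0.05, 5.0, 700
x, dx = np.linspace(-L, L, N, retstep=True)
# Dirichlet Laplacian by central finite differences
lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / dx**2
P = 0.5 * (-h**2 * lap + 1j * np.diag(x**2))        # ((hD)^2 + i x^2)/2
ev = np.linalg.eigvals(P)
ev = np.sort_complex(ev[np.argsort(np.abs(ev))[:3]])   # three smallest
exact = np.exp(1j * np.pi / 4) * (np.arange(3) + 0.5) * h
assert np.allclose(ev, exact, atol=1e-3)
```

Only the lowest modes are reliable here: the spectral instability of this non-normal operator grows quickly with the eigenvalue index.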
\par However, in this case and for quite a general class of $h$-\pop{}s in
one dimension, it was shown by one of us in \cite{Ha2} that if we replace
the \op{} $P$ by $P+\delta Q_\omega $, where $0<\delta \ll 1$
varies in a
suitable parameter range and $Q_\omega $ is a random potential of a
suitable type then we do
have Weyl \asy{} in the interior of $\Sigma $ with a \proba{} that is
close to 1. The book \cite{EmTr} of M. Embree and L.N. Trefethen as well as
the paper \cite{TrCh} by L.N. Trefethen and S.J. Chapman contain (in our
opinion) numerical examples where one can see the onset of Weyl-\asy{}s
after adding small random \pert{}s.
\par In this work we establish similar results in arbitrary dimension, which we
shall now describe. Let $0<\widetilde{m},\widehat{m}\le 1$ be
square integrable order \fu{}s on ${\bf R}^{2n}$
such that $\widetilde{m}$ or $\widehat{m}$ is integrable, and let
$\widetilde{S}\in S(\widetilde{m})$, $\widehat{S}\in S(\widehat{m})$ be
elliptic symbols. We use the same symbols to denote the $h$-Weyl
quantizations. The \op{}s $\widetilde{S}$, $\widehat{S}$ are then \hs{}
with
$$
\Vert \widetilde{S}\Vert _{{\rm HS}}, \Vert \widehat{S}\Vert _{{\rm HS}}\backsim
h^{-{n\over 2}},
$$
where $\backsim$ indicates same order of magnitude.
Let $\widetilde{e}_1,\widetilde{e}_2,...$, and
$\widehat{e}_1,\widehat{e}_2,...$ be \on{} bases for $L^2({\bf R}^n)$. Our
random perturbation will be
\ekv{int.7}
{ Q_\omega =\widehat{S}\circ \sum_{j,k}\alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*\circ \widetilde{S}, } where $\alpha _{j,k}$
are \indep{} complex ${\cal N}(0,1)$ \rv{}s, and
$\widehat{e_j}\widetilde{e}_k^*u=(u\vert \widetilde{e}_k)\widehat{e}_j$,
$u\in L^2$. In the appendix Section
\ref{ap} we show that up to a change of the set of \indep{} ${\cal
N}(0,1)$-laws, the representation \no{gp.1} is \indep{} of the choice of
bases $\widehat{e}_j$ and $\widetilde{e}_j$.
Let
\ekv{int.8}
{
M=C_1h^{-n},
}
for some $C_1\gg 1$. Then, as we shall see in Section \ref{gp}, we have
the following estimate on the \proba{} that $Q$ be large in Hilbert-Schmidt norm:
\ekv{int.9}
{
P(\Vert Q\Vert _{{\rm HS}}^2\ge M^2)\le C\exp (-h^{-2n}/C),
}
for some new constant $C>0$. In the following discussion we may restrict
our attention to the case when $\Vert Q_\omega \Vert _{{\rm HS}}\le M$. We
wish to study the \ev{} distribution of $P+\delta Q_\omega $ for $\delta $
in a suitable range.
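The mechanism behind \no{int.9} is the sharp concentration of $\sum_{j,k}\vert \alpha _{j,k}\vert ^2$, a sum of roughly $h^{-2n}$ independent terms of mean one, around its expectation. A small simulation (illustration only; $D$ stands in for the effective dimension $\sim h^{-n}$, and we use the convention ${\bf E}\vert \alpha _{j,k}\vert ^2=1$ for a complex ${\cal N}(0,1)$ variable):

```python
import numpy as np

rng = np.random.default_rng(2)
D, trials = 40, 1000
# complex N(0,1) with E|alpha|^2 = 1: each part has variance 1/2
a = (rng.normal(scale=np.sqrt(0.5), size=(trials, D, D))
     + 1j * rng.normal(scale=np.sqrt(0.5), size=(trials, D, D)))
hs2 = np.sum(np.abs(a) ** 2, axis=(1, 2))   # ||alpha||_HS^2, expectation D^2
assert abs(hs2.mean() / D**2 - 1) < 0.02    # concentrates at its mean
assert hs2.max() < 2 * D**2                 # no excursion to twice the mean
```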
\par Let $\Gamma \Subset \Omega $ be open with $C^2$ \bdy{} and assume that for every $z\in\partial
\Gamma $:
\eekv{int.10}
{&&\Sigma _z:=p\inv (z)\hbox{ is a smooth sub-\mfld{} of }T^*{\bf R}^n\hbox{
on}}
{&&\hbox{which }dp,d\overline{p}\hbox{ are linearly \indep{}
at every point.}} The following result will be established in Section
\ref{sa}.
\begin{theo}\label{int1}
Let $\Gamma \Subset \Omega $ be open with $C^2$ \bdy{} and make the
assumption \no{int.10}. For $0<h\ll 1$, let $\delta >0$ be a small parameter such that
$$
0<\delta\ll h^{3n+1/2}.
$$
For some small parameter $0<\epsilon\ll 1$ assume $h\ln \delta \inv \ll \epsilon \ll 1$ (or
equivalently $\delta \ge e^{-\epsilon /(Dh)}$ for some $D\gg 1$,
implying also that $\epsilon \ge \widetilde{C}h\ln h\inv$ for some
$\widetilde{C}>0$). Then there is a constant $C>0$ (that is independent of $h$, $\delta$ and $\epsilon$) such that the number
$N(P_\delta ,\Gamma )$ of \ev{}s of $P_\delta $ in $\Gamma $ satisfies
\ekv{int.11}
{
\vert N(P_\delta ,\Gamma )-{1\over (2\pi h)^n}{\rm vol\,}(p^{-1}(\Gamma
))\vert \le C{\sqrt{\epsilon }\over h^n}
}
with \proba{}
$$\ge 1 -{C\over \sqrt{\epsilon }}e^{-{\epsilon/2 \over (2\pi
h)^n}}.
$$
\end{theo}
This is a restatement of Theorem \ref{sa1}. In Theorem \ref{sa2} we give a
similar statement about the simultaneous Weyl \asy{}s for all $\Gamma $s
in a \fy{} of sets that satisfy the assumptions of the above
theorem \ufly{}. The lower bound on the \proba{} becomes slightly worse
but is still very close to 1 for suitable values of $\epsilon $.
\par The condition \no{int.10} says that $\partial \Gamma $ does not
intersect the set of critical values of $p$ and this is clearly not a
serious restriction when $\overline{\Gamma }$ is contained in the interior
of $\Sigma $. However, we also would like to study the \ev{} distribution
near the \bdy{} of $\Sigma $, and we then need a weaker assumption.
\par Let $\Gamma \Subset \Omega $ be open with
$C^\infty $ \bdy{}. For $z$ in a \neigh{} of $\partial \Gamma $ and
$0<s,t\ll 1$, we put
\ekv{int.12}
{
V_z(t)={\rm Vol\,}\{\rho \in{\bf R}^{2n};\, \vert p(\rho )-z\vert ^2\le
t\}.
}
Our weak assumption, replacing \no{int.10} is
\ekv{int.13}
{
\exists \kappa\in ]0,1], \hbox{ such that }V_z(t)={\cal O}(t^{\kappa}),\hbox{ \ufly{} for }z\in{\rm neigh\,}(\partial \Gamma ),\ 0\le t\ll 1.
}
Here we have written in an informal way ``${\rm neigh\,}(\partial \Gamma)$'' for
some neighbourhood of $\partial \Gamma$.
Notice that \no{int.10} implies \no{int.13} with $\kappa=1$.
\par Generically, we will have $\{ p,\{ p,\overline{p}\}\}\ne 0$ when
$p(\rho )\in\partial \Sigma $ and if we assume that
\ekv{int.14}
{\hbox{at every point of }p\inv (0),\hbox{ we have }
\{ p,\overline{p}\}\ne 0 \hbox{ or } \{ p,\{ p,\overline{p}\}\} \ne 0,}
then as shown in Example \ref{gc0}, we have \no{int.13} with $\kappa
=3/4$. (When $p$ is analytic, we believe that \no{int.13} will always
hold with some $\kappa>0$ but have not consulted with experts in
analytic geometry.) Under this more general assumption, we have
\begin{theo}\label{int2} Assume \no{int.13} and let $\delta $ satisfy
$$ 0<\delta \ll h^{3n+1/2}. $$ Let
$N(P+\delta Q_\omega ,\Gamma )$ be the number of \ev{}s of $P+\delta
Q_\omega $ in $\Gamma $. Then for every fixed $K>0$ and for $0<r\ll 1$:
\eekv{int.15}
{
&&\vert N(P+\delta Q_\omega ,\Gamma )-{1\over (2\pi h)^n}\iint_{p\inv
(\Gamma )}dxd\xi \vert \le} {&&{C\over h^n}\Big({\epsilon \over r}+C_K\big(r^K+\ln
({1\over r})\iint_{p\inv (\partial \Gamma +D(0,r))} dxd\xi \big) \Big),\ 0<r\ll 1,
}
with \proba{}
\ekv{int.16}
{
\ge 1-{C\over r}e^{-{\epsilon\over 2}(2\pi
h)^{-n}}} provided that
\ekv{int.17}{
h^{\kappa}\ln{1\over \delta }
\ll \epsilon \ll 1,
}
or equivalently,
$$
e^{-{\epsilon \over Ch^{\kappa}}}\le \delta ,\ C\gg 1,\ \epsilon \ll 1,
$$
implying that $\epsilon \ge \widetilde{C}h^{\kappa}\ln{1\over h}$, for
some $\widetilde{C}>0$.
\end{theo}
\par This is a restatement of Theorem \ref{gc2} and as explained after that
theorem, when
$\kappa>1/2$, the integral in the \rhs{} of \no{int.15} is ${\cal
O}(r^{2\kappa-1})$ and it follows that we have Weyl \asy{}s with \proba{}
close to 1, if we let $r$ be a suitable power of $\epsilon $. To have the same conclusion when $\kappa\le 1/2$ we can
assume that the integral is ${\cal O}(r^{\alpha _0})$ for some $\alpha _0>0$.
\par Again we have a similar theorem about the simultaneous \asy{}s for
$N(P+\delta Q_\omega ,\Gamma )$ when $\Gamma $ varies in a bounded family
of domains satisfying all the assumptions \ufly{}. See Theorem \ref{gc3}.
\par The proofs follow the same general strategy as those of \cite{Ha2}
with some essential differences:
\smallskip
\par\noindent We do not use any non-vanishing assumption on the Poisson
bracket ${1\over i}\{ p,\overline{p}\}$.
Instead we work systematically with the \op{}s $P^*P$ and $PP^*$ and their
\ef{}s in order to set up a Grushin-\pb{}.
\smallskip
\par\noindent As in \cite{Ha2} we reduce ourselves to the study of a
random \hol{} \fu{}, but in the present work this \fu{} appears as the
determinant of the full \op{} (essentially) and we need to make some
estimates for determinants of random matrices, and especially to prove
that such a determinant is not too small with a \proba{} close to 1. Those
estimates were \sufly{} elementary to be carried out by hand, but we think
that future generalizations and improvements will require a careful study
of the existing results on random determinants and possibly the
derivation of new results in that direction. See the book \cite{Gi} of
V.I.~Girko.
\medskip
\par \noindent
{\it Acknowledgements.} We are grateful to F. Klopp for helping us to find
some references. The first author was supported by a postdoctoral
fellowship from Ecole Polytechnique. She also thanks Y.~Colin de
Verdi\`ere for an interesting discussion around random functions. The second
author is grateful to the Japan Society for the Promotion of Science and
to the Dept of Mathematics of Tokyo University for offering excellent
working conditions during the month of July, 2005. He also thanks
E.~Servat for a very interesting remark. We also thank the referee for
many detailed remarks that have helped to improve the presentation.
\section{Determinants and Grushin \pb{}s}\label{dg}
\setcounter{equation}{0}
Here we mainly follow \cite {SjZw} and give a more explicit formulation of
one of the results there. Let ${\cal H}_1$, ${\cal H}_2$, ${\cal G}_1$, ${\cal
G}_2$ be complex Hilbert spaces and let $A_{j,k}:\,{\cal H}_k\to {\cal G}_j$
be \bdd{} \op{}s depending in a $C^1$ fashion on the real parameter $t\in
]a,b[$. We also assume that the $\dot{A}_{j,k}$ are of trace class and
depend continuously on $t$ in the space of such \op{}s. Here ``over-dot''
means derivative with respect to $t$.
\begin{prop}\label{dg1} (\cite{SjZw}) Assume in addition that
$A=(A_{j,k}):{\cal H}_1\times {\cal H}_2\to {\cal G}_1\times {\cal G}_2$
has a \bdd{} inverse $B=(B_{j,k})$, and that $A_{2,2}$ and $B_{1,1}$
are invertible. (The invertibility of one of $A_{2,2}$, $B_{1,1}$ implies
that of the other.)
Then
\ekv{dg.1}
{{\rm tr\,}\dot{A}B={\rm tr\,}\dot{A}_{2,2}A_{2,2}^{-1}-{\rm
tr\,}B_{1,1}^{-1}\dot{B}_{1,1}.}
\end{prop}
\begin{proof}
We expand
$$
\dot{B}_{j,k}=-\sum_\nu \sum_\mu B_{j,\nu } \dot{A}_{\nu ,\mu }B_{\mu ,k},
$$
which are then also of trace class.
In particular,
$$
-\dot{B}_{1,1}=B_{1,1}\dot{A}_{1,1}B_{1,1}+B_{1,1}\dot{A}_{1,2}B_{2,1}+
B_{1,2}\dot{A}_{2,1}B_{1,1}+B_{1,2}\dot{A}_{2,2}B_{2,1}.
$$
Rewrite the \rhs{} of \no{dg.1}:
\begin{eqnarray*}
&&{\rm tr\,}\dot{A}_{2,2}A_{2,2}^{-1}-{\rm tr\,}B_{1,1}^{-1}\dot{B}_{1,1}\\
&=&\tr \dot{A}_{2,2}A_{2,2}^{-1}+\tr \dot{A}_{1,1}B_{1,1}+\tr
\dot{A}_{1,2}B_{2,1} +\tr B_{1,1}^{-1}B_{1,2} \dot{A}_{2,1}B_{1,1}+
\tr B_{1,1}^{-1}B_{1,2}\dot {A}_{2,2}B_{2,1}\\
&=&\tr \dot{A}_{2,2}A_{2,2}^{-1}+\tr \dot{A}_{1,1}B_{1,1}+\tr
\dot{A}_{1,2}B_{2,1} +\tr \dot{A}_{2,1}B_{1,2} +
\tr B_{1,1}^{-1}B_{1,2}\dot{A}_{2,2}B_{2,1}\\
&=&\tr \dot{A}_{2,2}(A_{2,2}^{-1}+B_{2,1}B_{1,1}^{-1}B_{1,2})+\tr
\dot{A}_{1,1}B_{1,1}+\tr \dot{A}_{1,2}B_{2,1}+\tr \dot{A}_{2,1}B_{1,2}\\
&=&\tr \dot{A}B.
\end{eqnarray*}
Here we used the cyclicity of the trace and for the last equality the fact
that
\ekv{dg.2}
{
B_{2,2}=A_{2,2}^{-1}+B_{2,1}B_{1,1}^{-1}B_{1,2}.
}
To check \no{dg.2} we proceed by equivalences:
\begin{align*}
\hbox{\no{dg.2}}
&\iff
A_{2,2}^{-1}=B_{2,2}-B_{2,1}B_{1,1}^{-1}B_{1,2}\\
&\iff
1 = A_{2,2}B_{2,2}-A_{2,2}B_{2,1}B_{1,1}^{-1}B_{1,2}\\
&\iff
1=(1-A_{2,1}B_{1,2})+A_{2,1}B_{1,1}B_{1,1}^{-1}B_{1,2}\\
&\iff
1= 1.
\end{align*}
Here and in the following, we often denote the identity operator by $1$ when the
meaning is clear from the context.
\end{proof}
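The identity \no{dg.2} is purely algebraic and can be checked directly in finite dimension (an illustrative test with random matrices; the block sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 5, 3
A = rng.normal(size=(n1 + n2, n1 + n2))   # generic: relevant blocks invertible
B = np.linalg.inv(A)
B11, B12 = B[:n1, :n1], B[:n1, n1:]
B21, B22 = B[n1:, :n1], B[n1:, n1:]
A22 = A[n1:, n1:]
# dg.2:  B22 = A22^{-1} + B21 B11^{-1} B12
rhs = np.linalg.inv(A22) + B21 @ np.linalg.inv(B11) @ B12
assert np.allclose(B22, rhs)
```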
\par Consider the case ${\cal H}_1={\cal G}_1={\cal H}$, ${\cal
H}_2={\cal G}_2={\bf C}^N$,
$$
A={\cal P}=\begin{pmatrix}P &R_-\cr R_+ &0\end{pmatrix}.
$$
Assume also that $P$, ${\cal P}$ are invertible.
(In the proposition we can permute the
indices $1$ and $2$ and think of $P$ as $A_{2,2}$.) We look for
$$
\widetilde{{\cal P}}=\begin{pmatrix}P &\widetilde{R}_-\cr \widetilde{R}_+
&\widetilde{R}_{+-}\end{pmatrix}:{\cal H}\times {\bf C}^N\to {\cal H}\times {\bf C}^N,
$$
that is also invertible, i.e.\ we should be able to solve uniquely
\ekv{dg.3}
{
\begin{cases}Pu+\widetilde{R}_-\widetilde{u}_-=v,\cr
\widetilde{R}_+u+\widetilde{R}_{+-}\widetilde{u}_-=\widetilde{v}_+.
\end{cases}
}
Let
$$
\begin{pmatrix}E &E_+\cr E_- &E_{-+}\end{pmatrix}=\begin{pmatrix}P &R_-\cr R_+ &0\end{pmatrix}^{-1}.
$$
Rewrite the first \e{} in \no{dg.3} as
$Pu=v-\widetilde{R}_-\widetilde{u}_-$. The general solution to that \e{} is
$$
u=E(v-\widetilde{R}_-\widetilde{u}_-)+E_+v_+,
$$
where $v_+$, $\widetilde{u}_-$ should be determined so that
\ekv{dg.4}
{
0=E_-(v-\widetilde{R}_-\widetilde{u}_-)+E_{-+}v_+.
}
\par The second \e{} in \no{dg.3} becomes
\ekv{dg.5}
{
\widetilde{v}_+=\widetilde{R}_+E(v-\widetilde{R}_-\widetilde{u}_-)+
\widetilde{R}_+E_+v_++\widetilde{R}_{+-}\widetilde{u}_-.
}
Hence we get the following system that is equivalent to \no{dg.3}:
\ekv{dg.6}
{
\begin{cases}
E_{-+}v_+-E_-\widetilde{R}_-\widetilde{u}_-=-E_-v,\cr
\widetilde{R}_+E_+v_++(\widetilde{R}_{+-}-\widetilde{R}_+E\widetilde{R}_-)\widetilde{u}_-
=\widetilde{v}_+-\widetilde{R}_+Ev,
\end{cases}
}
so \no{dg.3} is well-posed iff
\ekv{dg.7}
{
\begin{pmatrix}E_{-+}&-E_-\widetilde{R}_-\cr \widetilde{R}_+E_+
&\widetilde{R}_{+-}-\widetilde{R}_+E\widetilde{R}_-\end{pmatrix}:{\bf C}^{2N}\to{\bf
C}^{2N}
}
is invertible.
\par Choose $\widetilde{R}_+=tR_+$, $\widetilde{R}_-=sR_-$,
$\widetilde{R}_{+-}=r{\rm id}_{{\bf C}^N}$ with $s,t,r\in{\bf C}$, and use
that $R_+E_+=1$, $E_-R_-=1$, $R_+E=0$, to see that the matrix \no{dg.7} is
equal to
\ekv{dg.8}
{
\begin{pmatrix}E_{-+} &-s\cr t & r\end{pmatrix}.
}
This matrix is invertible precisely when $(s,t,r)$ belongs to the set
\ekv{dg.9}
{
\{ (s,t,0);\, st\ne 0\}\cup \{ (s,t,r);\, r\ne 0,\, -{st\over r}\not\in
\sigma (E_{-+})\} .
}
\par Since $P$ is invertible, we know that $0\not\in \sigma (E_{-+})$. We
can therefore find a $C^1$-curve
$$
[0,1]\ni \tau \mapsto (s(\tau ),t(\tau ),r(\tau ))\in\,\hbox{the set
\no{dg.9}},
$$
with
$$(s(0),t(0),r(0))=(1,1,0),\quad (s(1),t(1),r(1))=(0,0,1).$$
This means that we have a $C^1$ deformation
$$
{\cal P}(\tau )=\begin{pmatrix}P &s(\tau )R_-\cr t(\tau )R_+ &r(\tau )1\end{pmatrix}:\ {\cal
H}\times {\bf C}^N\to {\cal H}\times {\bf C}^N
$$
of bijective \op{}s with
$${\cal P}(0)={\cal P},\ {\cal P}(1)=\begin{pmatrix}P &0\cr 0 &1\end{pmatrix}.$$
Applying \no{dg.1} with the indices ``1'' and ``2'' permuted gives
$$
\tr \dot{{\cal P}}{\cal P}^{-1}=\tr \dot{P}P^{-1}-\tr
E_{-+}^{-1}(\tau )\dot{E}_{-+}(\tau )=-\tr
E_{-+}^{-1}(\tau )\dot{E}_{-+}(\tau ),
$$
where now ``over-dot'' means derivative with respect to $\tau$.
If we integrate from $\tau =0$ to $\tau =1$, we get with a suitable
choice of branches for $\ln$:
$$
\ln \det P-\ln \det {\cal P}=\ln \det E_{-+}(0).
$$
For this relation to make sense we also assume that
\ekv{dg.9.5}
{
P-1\hbox{ is of trace class}.
}
\par Then for the original \op{} and its inverse we have
\ekv{dg.10}
{
\ln \det P=\ln \det {\cal P}+\ln \det E_{-+},
}
or equivalently,
\ekv{dg.11}
{
\det P=\det {\cal P}\det E_{-+}.
}
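In finite dimension, \no{dg.11} is equivalent to the Schur-complement factorization $\det {\cal P}=\det P\cdot \det (-R_+P^{-1}R_-)$ together with $E_{-+}=-(R_+P^{-1}R_-)^{-1}$, and can be verified numerically (illustration only, with random matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 6, 2
P = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))    # invertible a.s.
Rm = rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))   # R_-
Rp = rng.normal(size=(N, n)) + 1j * rng.normal(size=(N, n))   # R_+
G = np.block([[P, Rm], [Rp, np.zeros((N, N))]])               # the Grushin operator
E = np.linalg.inv(G)
Emp = E[n:, n:]                                               # the block E_{-+}
# dg.11: det P = det(G) det E_{-+}
assert np.isclose(np.linalg.det(P), np.linalg.det(G) * np.linalg.det(Emp))
```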
\section{General framework and reduction to trace class \op{}s}\label{gf}
\setcounter{equation}{0}
Let $m\ge 1$ be an order \fu{} on ${\bf R}^{2n}$ in the sense that there
exist constants $C_0\ge 1$, $N_0>0$, such that
$$
m(\rho )\le C_0\langle \rho -\mu \rangle ^{N_0}m(\mu ),\ \forall \rho ,\mu
\in{\bf R}^{2n},
$$
where we write $\langle \rho \rangle =\sqrt{1+\vert \rho \vert ^2}$. We
consider a symbol
$$
P(\rho ;h)\sim p(\rho )+hp_1(\rho )+...\hbox{ in }S({\bf R}^{2n},m),
$$
where
$$
S({\bf R}^{2n},m)=\{ a\in C^\infty ({\bf R}^{2n});\, \vert \partial _{x,\xi
}^\alpha a(x,\xi )\vert \le C_\alpha m(x,\xi ),\, \forall (x,\xi )\in{\bf
R}^{2n},\, \alpha \in{\bf N}^{2n}\} .
$$
Put
$$
\Sigma =\overline{p({\bf R}^{2n})},\ \Sigma _\infty= \{ z\in{\bf
C};\,\exists \rho _j\in{\bf R}^{2n},\ j=1,2,3,...,\, \rho _j\to \infty ,\,
p(\rho _j)\to z,\, j\to \infty \}.
$$
Assume $\exists z_0\in{\bf C}\setminus \Sigma ,\, C_0>0$, such that
\ekv{gf.1}
{
\vert p(\rho )-z_0\vert \ge m(\rho )/C_0,\ \forall \rho \in{\bf R}^{2n}.
}
Then as pointed out in \cite{Ha2}, for every $z\in {\bf C}\setminus \Sigma
$, there exists $C>0$ such that
\ekv{gf.2}
{
\vert p(\rho )-z\vert \ge m(\rho )/C,\ \forall \rho \in{\bf R}^{2n},
}
and for every $z\in {\bf C}\setminus \Sigma _\infty
$, there exists $C>0$ such that
\ekv{gf.3}
{
\vert p(\rho )-z\vert \ge m(\rho )/C,\ \forall \rho \in{\bf R}^{2n}\hbox{
with }\vert \rho \vert \ge C.
}
\par Let $\Omega \Subset {\bf C}\setminus \Sigma _\infty $ be open and
simply connected, containing at least one point $z_0\in{\bf C}\setminus
\Sigma $.
\begin{lemma}\label{gf1}
For every compact set $K\subset \Omega $, there exists a smooth map
$\kappa :\Omega \setminus\{ z_0\}\to \Omega \setminus\{ z_0\}$ with
$\kappa (z)=z$ for all $z$ in a \neigh{} of $\partial \Omega $, such that
$\kappa (\Sigma \cap \Omega )\cap K=\emptyset$.
\end{lemma}
\begin{proof}
$\Omega $ is diffeomorphic to the open unit disc $D(0,1)$ in such a way
that $z_0$ corresponds to $0$. Now consider $\widetilde{\kappa
}:D(0,1)\setminus\{ 0\}\to D(0,1)\setminus\{ 0\}$ defined by
$\widetilde{\kappa }(z)=f(\vert z\vert )z/\vert z \vert $, where $f$ is
a smooth \fu{} on $]0,1]$ with $1-\epsilon \le f(r)\le 1$, such that
$f(r)=r$ for $1-r\le \epsilon /2$. Choosing $\epsilon >0$ small enough and
conjugating with the diffeomorphism above we
get the desired map $\kappa $.
\end{proof}
\par Let $\widetilde{\Omega }\Subset \Omega $ be open.
Take $\kappa $ as in the lemma with $K$ containing the closure of
$\widetilde{\Omega }$. Extend $\kappa $ to be the identity in ${\bf
C}\setminus\Omega $ and put $\widetilde{p}=\kappa \circ p$. Then
$\widetilde{p}(\rho )-z$ is elliptic in the sense of \no{gf.2}, \ufly{}
for $z\in \widetilde{\Omega }$ and
\ekv{gf.4}
{
\widetilde{p}-p\in C_0^\infty ({\bf R}^{2n}).
}
Put
$$
\widetilde{P}=\widetilde{p}+hp_1+h^2p_2+ ...\ .
$$
\par Now pass to \op{}s and denote by the same letters symbols and their
$h$-Weyl quantizations. We shall consider $P$ as a closed \op{}:
$L^2\to L^2$ with domain $H(m):=(P-z_0)\inv L^2$ (see \cite{Ha2}).
From the discussion above, we get
\begin{itemize}
\item For every compact set $K\subset {\bf C}\setminus \Sigma $, we have
$\sigma (P)\cap K=\emptyset$, when $h>0$ is small enough.
\item $\sigma (P)\cap \widetilde{\Omega }$ is discrete when $h>0$ is small
enough.
\item $\sigma (\widetilde{P})\cap \widetilde{\Omega }=\emptyset$ when $h>0$
is small enough.
\end{itemize}
\par In view of the last property and \no{gf.4}, we also have (for
$h>0$ small enough),
\begin{prop}\label{gf2}
For $z\in \widetilde{\Omega }$, we have that
$$
P(z):=(\widetilde{P}-z)^{-1}(P-z)=1+K(z),
$$
where $K(z)$ is a trace class \op{}. Moreover,
$$
z\in \sigma (P)\Leftrightarrow 0\in \sigma (P(z)).
$$
\end{prop}
\par Notice that $K(z)=(\widetilde{P}-z)^{-1}(P-\widetilde{P})$ is the
quantization of a symbol belonging to the intersection of $S(\widetilde{m})$
for all order \fu{}s $\widetilde{m}$.
\section{Some functional calculus}\label{fc}
\setcounter{equation}{0}
Let $P=1+K$, $K\in {\rm Op}_h(S(m))$, where $m\in C^\infty ({\bf
R}^{2n};]0,\infty [)$ is an integrable order \fu{}. We also assume that
$K=k_0+hk_1+...$ in $S(m)$ on the symbol level. We shall review some
functional calculus for $Q=P^*P$ and more generally for a \sa{} \op{}
$Q\ge 0$ with $Q\sim q+hq_1+...$ on the symbol level, with $Q-1\in S(m)$,
$q\ge 0$.
\par Let $\psi \in C_0^\infty ({\bf R})$. For $h\ll \alpha \ll 1$ we shall
study the properties of $\psi (\alpha \inv Q)$.
\par To this end we shall consider $\alpha \inv Q$ as a symbol with
$h/\alpha $ as a new semiclassical parameter, after a suitable dilation in
phase space. More precisely, we make the change of variables
$$
x=\alpha ^{1\over 2}\widetilde{x},\ D_x=\alpha ^{-{1\over 2}}D_{\widetilde{x}}
$$
and write
\ekv{fc.25}
{{1\over \alpha }Q(x,hD_x;h)={1\over \alpha }Q(\alpha ^{1\over
2}(\widetilde{x},{h\over \alpha }D_{\widetilde{x}});h),}
with symbol $\alpha \inv Q(\alpha ^{1/2}(\widetilde{x},\widetilde{\xi
});h)$ for the $h/\alpha $-quantization.
The lower order terms are ${\cal O}(h/\alpha )$ \ufly{} with all their
derivatives, so we shall just look at the leading symbol
\ekv{fc.26}
{
{q(\alpha ^{1\over 2}(x,\xi ))\over \alpha },
}
where we dropped the tildes on the new variables. The (new) associated order
\fu{} will be
\ekv{fc.27}
{
m(x,\xi ):=1+{q(\alpha ^{1\over 2}(x,\xi ))\over \alpha }\ge 1.
}
We have
$$
\vert \nabla m\vert ={\vert (\nabla q)(\alpha ^{1\over 2}(x,\xi ))\vert \over \alpha ^{1\over
2}}\le C{q^{1\over 2}(\alpha ^{1\over 2}(x,\xi ))\over \alpha ^{1\over
2}}\le Cm(x,\xi )^{1\over 2},
$$
$$
\nabla ^2m={\cal O}(1),
$$
so by Taylor's formula,
$$m(\rho )
=m(\mu )+{\cal O}(1)m(\mu )^{1\over 2}\vert \rho -\mu \vert +{\cal
O}(1)\vert \rho -\mu \vert ^2,
$$
and since $m(\mu )\ge 1$:
\ekv{fc.28}
{
m(\rho )\le C\langle \rho -\mu \rangle ^2 m(\mu ).
}
Hence $m$
is an order \fu{}, \ufly{} \wrt{} $\alpha $.
\par Similarly, we have the improved symbol estimates
\ekv{fc.29}
{
\nabla {q(\alpha ^{1\over 2}\rho )\over \alpha }={\cal O}(1)m^{1\over 2},
}
\ekv{fc.30}
{
\nabla ^2{q(\alpha ^{1\over 2}\rho )\over \alpha }={\cal O}(1),
}
\ekv{fc.31}
{
\nabla ^k {q(\alpha ^{1\over 2}\rho )\over \alpha }={\cal O}(\alpha
^{{k\over 2}-1}),\ k\ge 2.
}
In particular, we have the standard symbol estimates
\ekv{fc.32}
{
\nabla ^k {q(\alpha ^{1\over 2}\rho )\over \alpha }={\cal O}(1)m(\rho ).
}
It is therefore clear that we can apply the functional calculus in the
version of \cite{HeSj} (see also \cite{DiSj}), to see that if $\psi \in
C_0^\infty ({\bf R})$, and if we interpret $Q/\alpha $
as the \rhs{} of \no{fc.25}, then
\ekv{fc.33}
{
\psi (\alpha \inv Q)= {\rm Op}_{{h\over \alpha
},\widetilde{x}}(\widetilde{f}),
}
where
\ekv{fc.34}
{
\widetilde{f}=\sum_{0}^\infty ({h\over \alpha })^\nu f_\nu
(\widetilde{x},\widetilde{\xi }),\hbox{ in }S(m\inv ),
}
with $f_0(\widetilde{x},\widetilde{\xi })=\psi (\alpha \inv q(\alpha
^{1/2}(\widetilde{x},\widetilde{\xi })))$,
\ekv{fc.35}
{
f_\nu =\sum_{j\le j(\nu )}a_{j,\nu }(\widetilde{x},\widetilde{\xi
},\alpha )\psi ^{(j)}(\alpha \inv q(\alpha
^{1/2}(\widetilde{x},\widetilde{\xi }))),\ a_{j,\nu }\in S(1).
}
\begin{prop}\label{fc5}
Let $\widetilde{m}=\widetilde{m}_\alpha (\widetilde{x},\widetilde{\xi })$
be an order \fu{}, \ufly{} \wrt{} $\alpha $, such that
$\widetilde{m}(\widetilde{x},\widetilde{\xi })=1$ for $\alpha \inv
q(\alpha ^{1/2}(\widetilde{x},\widetilde{\xi }))\le \sup {\rm supp\,}\psi
+1/C$, for some $C>0$ that is \indep{} of $\alpha $. Then \no{fc.34} holds
in $S(\widetilde{m})$, for $h$ and $h/\alpha $ \sufly{} small.
\end{prop}
\begin{proof}
Write $q_\alpha =q(\alpha ^{1/2}(\widetilde{x},\widetilde{\xi
}))/\alpha $, $Q_\alpha =\alpha \inv Q(\alpha ^{1/2}(\widetilde{x},{h\over
\alpha }D_{\widetilde{x}});h)$ and drop the tildes. Let
$\widehat{q}_\alpha \in S(m)$ be such that
$\sup{\rm
supp\,}\psi +1/(5C)\le \widehat{q}_\alpha $, and be equal to
$q_\alpha $ when $q_\alpha \ge \sup{\rm supp\,}\psi +2/(5C)$.
Let $\chi
_\alpha \in S(1)$ be equal to 1 when $q_\alpha \le \sup{\rm supp\,}\psi
+3/(5C)$ and equal to 0
when $q_\alpha \ge \sup{\rm supp\,}\psi
+4/(5C)$. We use the same symbols $q_\alpha $, $\widehat{q}_\alpha $,
$\chi _\alpha $ to denote the $h/\alpha $
quantizations.
\par Let $\widetilde{\psi }$ be an almost \hol{} extension of $\psi $ and
recall the Cauchy-Green-Riemann-Stokes formula, in the \op{} sense
(\cite{HeSj}, \cite{DiSj}, \cite{Dy}):
$$
\psi (q_\alpha )={1\over \pi }\int (z-q_\alpha )\inv {\partial \widetilde{\psi
}(z)\over \partial \overline{z}}
L(dz),
$$
where $L(dz)$ denotes Lebesgue-measure.
For $z$ in a \neigh{} of ${\rm supp\,}\widetilde{\psi }$, we write
$$
(z-q_\alpha )\inv =(z-q_\alpha )\inv \circ \chi _\alpha
+(z-\widehat{q}_\alpha )\inv \circ (1-\chi _\alpha )-(z-q_\alpha )\inv
(\widehat{q}_\alpha -q_\alpha )(z-\widehat{q}_\alpha )\inv (1-\chi
_\alpha ).
$$
Then, since the middle term is \hol{} near the support of
$\widetilde{\psi }$,
$$
\psi (q_\alpha )=\psi (q_\alpha )\circ \chi _\alpha -{1\over \pi }\int
(z-q_\alpha )\inv (\widehat{q}_\alpha -q_\alpha )(z-\widehat{q}_\alpha
)\inv (1-\chi _\alpha ) {\partial \widetilde{\psi }\over \partial
\overline{z}}L(dz).
$$
Here the symbol of $\psi (q_\alpha )\circ \chi _\alpha $ has the \asy{}
expansion \no{fc.34} in $S(\widetilde{m})$, thanks to the extra factor
$\chi _\alpha $ and with the same terms given by \no{fc.35}. For $z\in
{\rm neigh\,}({\rm supp\,}\widetilde{\psi })$, $(z-\widehat{q}_\alpha
)\inv \in{\rm Op\,}(1/m)$ depends \hol{}ally on $z$ and thanks to the
factor $\widehat{q}_\alpha -q_\alpha $, whose support on the symbol level
is separated from that of $1-\chi _\alpha $ by some fixed positive
distance, we know that $(\widehat{q}_\alpha -q_\alpha )(z-\widehat{q}_\alpha
)\inv (1-\chi _\alpha )\in (h/\alpha )^N{\rm Op\,}(S(\widetilde{m}))$ for
any $N\ge 0$ and any $\widetilde{m}$ as in the proposition. Combining this
with the estimates for the symbol of $(z-q_\alpha )\inv$ from the
Beals lemma as in \cite{HeSj} (see also
\cite{DiSj}, Proposition 8.6), we get the proposition.
\end{proof}
\par We next apply the functional result to the study of certain
determinants. Let $\chi \in C_0^\infty ([0,+\infty [;[0,+\infty [)$, $\chi
(0)>0$ and extend $\chi $ to $C_0^\infty ({\bf R};{\bf C})$ in such a
way that $\chi (t)>0$ near 0 and $t+\chi (t)\ne 0$, $\forall t\in{\bf R}$.
We want to study $\ln\det (Q+\alpha \chi (\alpha \inv Q))$, when $h\ll
\alpha \ll 1$. Let us first recall from \cite{MeSj} that if
$\widetilde{Q}={\rm Op}_h(\widetilde{q})$ with $\widetilde{q}\in S(1)$,
$\widetilde{q}>0$, $\widetilde{q}-1\in S(\widetilde{m})$, where
$\widetilde{m}$ is an integrable order \fu{}, then
\ekv{fc.36}
{
\ln\det \widetilde{Q}={1\over (2\pi h)^n}(\iint\ln \widetilde{q}(x,\xi
)dxd\xi +{\cal O}(h)).
}
In fact, let $\widetilde{Q}_t=(1-t)1+t\widetilde{Q}$, so that
$\widetilde{Q}_0=1$, $\widetilde{Q}_1=\widetilde{Q}$. Then by standard
elliptic calculus, with $\widetilde{q}_t=(1-t)+t\widetilde{q}$, we have
\begin{eqnarray*}
{d\over dt}\ln\det \widetilde{Q}_t&=&\tr \widetilde{Q}_t\inv {d\over
dt}\widetilde{Q}_t\\
&=&{1\over (2\pi h)^n}(\iint \widetilde{q}_t\inv {d\over
dt}\widetilde{q}_t\, dxd\xi
+{\cal O}(h))\\
&=&{1\over (2\pi h)^n}({d\over dt}\iint \ln \widetilde{q}_t(x,\xi ) dxd\xi
+{\cal O}(h)),
\end{eqnarray*}
and integrating from $t=0$ to $t=1$, we get \no{fc.36}.
\par For $\alpha =\alpha _1>0$ fixed with $\alpha _1\ll 1$, this applies
to $Q+\alpha _1\chi (\alpha _1\inv Q)$ and we get
\ekv{fc.37}
{
\ln\det (Q+\alpha _1\chi (\alpha _1\inv Q))={1\over (2\pi h)^{n}}(\iint
\ln (q+\alpha _1\chi (\alpha _1\inv q))dxd\xi +{\cal O}(h)).}
\par We have for $t>0$ and $E\ge 0$:
$$
{d\over dt}\ln (E+t\chi ({E\over t}))={1\over t}\psi ({E\over t}),
$$
with
$$
\psi (E)={\chi (E)-E\chi '(E)\over E+\chi (E)}.
$$
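This identity is used repeatedly below. As a sanity check (outside the argument), it can be verified numerically with a concrete smooth stand-in for $\chi $, here $\chi (s)=e^{-s^2}$, which is not compactly supported but makes the computation explicit:

```python
import math

def chi(s):                    # smooth stand-in for the cutoff chi (assumption:
    return math.exp(-s * s)    # the chi in the text is compactly supported)

def chi_prime(s):
    return -2.0 * s * math.exp(-s * s)

def psi(s):                    # psi(E) = (chi(E) - E chi'(E)) / (E + chi(E))
    return (chi(s) - s * chi_prime(s)) / (s + chi(s))

def F(E, t):                   # F(t) = ln(E + t * chi(E/t))
    return math.log(E + t * chi(E / t))

E, t, eps = 1.3, 0.7, 1e-6
numeric = (F(E, t + eps) - F(E, t - eps)) / (2 * eps)   # d/dt ln(E + t chi(E/t))
claimed = psi(E / t) / t                                # (1/t) psi(E/t)
assert abs(numeric - claimed) < 1e-8
```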
Now for $h\ll\alpha \le t\le \alpha _1$, we get from Proposition \ref{fc5} by dilatation:
\begin{eqnarray}\label{fc.37.5}
&&\hskip -2truecm {d\over dt}\ln \det (Q+t\chi (t\inv Q))=\tr t\inv \psi (t\inv Q)\\
&=&\Big({t\over 2\pi h}\Big)^n\Big(\iint {1\over t}\psi ({q(t^{1\over
2}(\widetilde{x},\widetilde{\xi }))\over
t})d\widetilde{x}d\widetilde{\xi }
+{\cal O}({h\over t}){1\over t}\iint
\widehat{\chi }({q(t^{1\over
2}(\widetilde{x},\widetilde{\xi }))\over t})d\widetilde{x}d\widetilde{\xi }
\nonumber\\
&&
+{\cal O}(1){1\over t}({h\over t})^\infty \iint (1+{\rm
dist\,}((\widetilde{x},\widetilde{\xi });{\rm supp\,}\widehat{\chi }
({q(t^{1\over
2}(\cdot ))\over t})))^{-N}d\widetilde{x}d\widetilde{\xi }\Big),
\nonumber
\end{eqnarray}
where $0\le \widehat{\chi }\in C_0^\infty ({\bf R})$ is equal to one on
some interval containing $[0,\sup {\rm supp\,}\psi ]$, and the last term
comes from the ``remainder'' in the asymptotic expansion (\ref{fc.34}).
\par We are interested in the integral of this quantity from $t=\alpha $
to $t=\alpha _1$. Let us first treat the leading term
\begin{eqnarray*}
({t\over 2\pi h})^n\iint {1\over t}\psi ({q(t^{1\over
2}(\widetilde{x},\widetilde{\xi }))\over
t})d\widetilde{x}d\widetilde{\xi }
&=&{1\over (2\pi h)^n}\iint {1\over t}\psi ({q(x,\xi )\over t})dxd\xi \\
&=&{1\over (2\pi h)^n}\iint {d\over dt}\ln (q+t\chi ({q\over t}))dxd\xi ,
\end{eqnarray*}
and integrating this from $t=\alpha $ to $t=\alpha _1$, we get
\ekv{fc.38}
{
\Big[
{1\over (2\pi h)^n}\iint \ln (q+t\chi ({q\over t}))dxd\xi
\Big]_{t=\alpha }^{\alpha _1}.
}
\par The second term in \no{fc.37.5} is
\begin{eqnarray*}
{\cal O}(1)({t\over h})^n{h\over t^2}\iint \widehat{\chi }\left({1\over
t}q\big(t^{1\over 2}(\widetilde{x},\widetilde{\xi
})\big) \right)d\widetilde{x}d\widetilde{\xi }&=& {\cal O}(1){1\over h^n}{h\over
t^2}\iint \widehat{\chi }\big({1\over t}q(x,\xi )\big)dxd\xi \\
&\le& {\cal O}(1)h^{-n}{h\over t^2}\iint_{q(x,\xi )\le C^2t}dxd\xi .
\end{eqnarray*}
Integrating this from $t=\alpha $ to $t=\alpha _1$, we get
\begin{eqnarray}\label{fc.39}
&\displaystyle {\cal O}(1)h^{1-n}\iint \int_{\mbox{\rm max} (\alpha ,q/C)\le t\le \alpha _1}{1\over
t^2}dtdxd\xi &\\
= &\displaystyle {\cal O}(1)h^{1-n}\iint_{q(x,\xi )\le C\alpha _1}\left( {1\over \mbox{\rm max} \big(\alpha
,q(x,\xi )/C\big)}-{1\over \alpha _1}\right)dxd\xi& \nonumber \\
\le &\displaystyle {\cal O}(1)h^{-n} \iint_{q(x,\xi )\le C\alpha _1}{h\over \alpha +q(x,\xi
)}dxd\xi& .\nonumber
\end{eqnarray}
\par When estimating the third term in \no{fc.37.5} we consider separately
the regions $\vert (x,\xi )\vert \le C$ and $\vert (x,\xi )\vert > C$ for
some large $C\ge 1$. Consider first the region $\vert (x,\xi )\vert \le C$.
Put
$$
d_t(\widetilde{x},\widetilde{\xi })={\rm
dist\,}((\widetilde{x},\widetilde{\xi }),\{ (y,\eta );\, {1\over
t}q(t^{1\over 2}(y,\eta ))\le \widehat{C}\}),\ \widehat{C}=\sup {\rm
supp\,}\widehat{\chi }.
$$
For $(y,\eta )$ with $q(t^{1\over 2}(y,\eta ))/t\le \widehat{C}$, we
have $\nabla (q(t^{1\over 2}(y,\eta ))/t)={\cal O}(1)$ and
$\nabla ^2 (q(t^{1\over 2}(y,\eta ))/t)={\cal O}(1)$, so by Taylor
expanding at $(y,\eta )$ we get
\begin{eqnarray*}
{1\over t}q(t^{1\over 2}(\widetilde{x},\widetilde{\xi }))\le {\cal
O}(1)(1+d_t(\widetilde{x},\widetilde{\xi
})+d_t(\widetilde{x},\widetilde{\xi })^2)\le {\cal
O}(1)(1+d_t(\widetilde{x},\widetilde{\xi }))^2.
\end{eqnarray*}
The contribution to the third term in \no{fc.37.5} from $t^{1/2}\vert
(\widetilde{x},\widetilde{\xi })\vert \le C$ is therefore
\begin{eqnarray}\label{fc.40}
&&{\cal O}_N(1)({h\over t})^\infty {1\over t}\iint_{\vert
(\widetilde{x},\widetilde{\xi })\vert \le Ct^{-1/2}} (1+{1\over
t}q(t^{1\over 2}(\widetilde{x},\widetilde{\xi
})))^{-N}d\widetilde{x}d\widetilde{\xi }\\
&&=
{\cal O}_{M,N}(1){1\over h^n}({h\over t})^M {1\over t}\iint_{\vert (x,\xi
)\vert \le C}(1+{1\over t}q(x,\xi ))^{-N} dxd\xi .
\nonumber
\end{eqnarray}
We integrate this from $t=\alpha $ to $\alpha _1$, so we want to estimate
$$
h^M\int_\alpha ^{\alpha _1}{1\over t^{M+1}(1+{1\over t}q(x,\xi ))^N}dt.
$$
If $q\le \alpha $, we get
$$
{\cal O}(1)h^M\int_\alpha ^{\alpha _1}{1\over t^{M+1}}dt={\cal O}(1)({h\over
\alpha })^M.
$$
If $\alpha <q\le \alpha _1$, we get
\begin{eqnarray*}
h^M\int_\alpha ^q {1\over t^{M+1}}(1+{q\over t})^{-N}dt +h^M\int_q^{\alpha
_1} {1\over t^{M+1}}(1+{q\over t})^{-N}dt
\backsim h^M\int_\alpha ^q {t^N\over t^{M+1}q^N}dt+({h\over q})^M,
\end{eqnarray*}
where the symbol $\backsim$ indicates the same order of magnitude.
Choose
$N=M+1$, to get
$$
\backsim ({h\over q})^M.
$$
For $\alpha _1\le q$, we get
$${\cal O}(1)h^M\int_\alpha ^{\alpha _1}{t^N\over
t^{M+1}q^N}dt={\cal O}(1)h^M$$
with the same choice of $N$.
Thus the expression \no{fc.40} is
$$
{{\cal O}(1)\over h^n}\iint_{\vert (x,\xi )\vert \le C}\Big( {h\over \alpha
+q(x,\xi )}\Big) ^M dxd\xi ,\ \forall M\ge 0.
$$
We next look at the contribution to the last term in \no{fc.37.5} from the
region $t^{1\over 2}\vert (\widetilde{x},\widetilde{\xi })\vert >C$, which
is
\begin{eqnarray*}
{\cal O}(1)\Big({h\over t}\Big)^\infty {1\over t}\iint_{t^{1\over 2}\vert
(\widetilde{x},\widetilde{\xi })\vert \ge C}(1+\vert
(\widetilde{x},\widetilde{\xi })\vert
)^{-N}d\widetilde{x}d\widetilde{\xi }={\cal O}(1)({h\over
t})^Mt^{\widetilde{N}},\ \forall M,\widetilde{N},\\
={\cal O}(1)h^M,\ \forall M.
\end{eqnarray*}
\par Summing up, we have proved
\begin{prop}\label{fc6}
Let $\chi \in C_0^\infty ({\bf R})$ with $\chi (0)>0$ and $\chi (t)\ge 0$
for $t\ge 0$. Then for $h\ll \alpha \ll 1$:
\begin{eqnarray}\label{fc.41}
&&\ln \det (Q+\alpha \chi (\alpha \inv Q))=
\\&&{1\over (2\pi h)^n}\Big(\iint \ln
(q+\alpha \chi ({q\over \alpha }))dxd\xi +{\cal O}(1)\iint_{\vert (x,\xi
)\vert \le C}{h\over \alpha +q(x,\xi )}dxd\xi \Big) +{\cal O}(h^\infty
).\nonumber
\end{eqnarray}
\end{prop}
\par Most of the proof was based on \no{fc.37.5}, of which the second part is
valid for any $\psi \in C_0^\infty ({\bf R})$. The estimates leading to
the preceding proposition, also give
\begin{prop}\label{fc7}
Let $\chi \in C_0^\infty ({\bf R})$, and choose $\widehat{\chi }\in
C_0^\infty ({\bf R};[0,1])$ equal to $1$ on an interval containing 0
and ${\rm supp\,}\chi $. Then for $0<h\ll \alpha \ll 1$, we have
\begin{eqnarray}\label{fc.42}
\hskip -1truecm \tr \chi (\alpha \inv Q)&=&{1\over (2\pi h)^n}\Big(\iint \chi ({q(x,\xi )\over
\alpha })dxd\xi +{\cal O}({h\over \alpha })\iint \widehat{\chi }({q(x,\xi
)\over \alpha })dxd\xi\\&&+{\cal O}_{N,M}(1)({h\over \alpha })^M\iint_{\vert
(x,\xi )\vert \le C}(1+{q\over \alpha })^{-N}dxd\xi +{\cal O}(h^\infty
)\Big).\nonumber
\end{eqnarray}
\end{prop}
\par Put
\ekv{fc.43}
{
V(t)=\iint_{q(x,\xi )\le t}dxd\xi ,\ 0\le t\le {1\over 2},
}
so that $0\le V(t)$ is an increasing \fu{}. By introducing an assumption
on $V(t)$, we shall make \no{fc.42} more explicit and replace \no{fc.41} by
a more explicit estimate.
\par We assume that there exists $\kappa\in ]0,1]$ such that
\ekv{fc.44}
{
V(t)={\cal O}(1)t^{\kappa},\ 0\le t\le {1\over 2}.
}
\par The first integral in \no{fc.41} can be written
\begin{eqnarray*}
\iint \ln (q+\alpha \chi ({q\over \alpha }))dxd\xi &=& \iint \ln
(q) dxd\xi +{\cal O}(1)\iint_{q\le C\alpha }\ln {1\over q}dxd\xi \\
&=& \iint \ln (q)dxd\xi +{\cal O}(1)\int_0^{C\alpha}\ln {1\over
q}dV(q),
\end{eqnarray*}
so \no{fc.41} gives
\eekv{fc.45}
{&& \ln \det (Q+\alpha \chi ({1\over \alpha }Q))}
{&=& {1\over (2\pi h)^n}(\iint \ln (q)dxd\xi +{\cal
O}(1)\int_0^{C\alpha }\ln {1\over q}dV(q)+{\cal O}(1)\int_0^{1\over
2}{h\over \alpha +q}dV(q)+{\cal O}(h^\infty )). }
\par Similarly \no{fc.42} can be written
\eekv{fc.46}
{{\rm tr\,}\chi (\alpha \inv Q)&=&{1\over (2\pi h)^n}\Big( \int
\chi ({q\over \alpha })dV(q)+{\cal O}({h\over \alpha
})\int\widehat{\chi }({q\over \alpha })dV(q)}
{&&+{\cal O}_{N,M}(1)({h\over \alpha })^M\int_0^{\alpha
_1}(1+{q\over \alpha })^{-N}dV(q)+{\cal O}(h)\Big) .}
In particular, the number $N(\alpha )$ of \ev{}s of $Q$ in
$[0,\alpha ]$ satisfies
\ekv{fc.47}
{
N(\alpha )\le {\cal O}(1)\big( h^{-n}\int_0^{\alpha _1}(1+{q\over
\alpha })^{-N}dV(q) +h^{1-n}).
}
\begin{prop}\label{fc8}
Under the assumption \no{fc.44}, we have
\ekv{fc.48}
{
\int_0^\alpha \ln (q)dV(q)={\cal O}(\alpha ^{\kappa}\ln \alpha
), }
\ekv{fc.49}
{
\int_0^{\alpha _1}{h\over \alpha +q}dV(q)=
\begin{cases}{\cal O}(\alpha ^{\kappa}
{h\over \alpha }),\hbox{ for }\kappa<1,\cr \cr {\cal O}(h\ln{1\over
\alpha }),\hbox{ when }\kappa=1,\end{cases} }
\\
\ekv{fc.50}
{
N(\alpha )={\cal O}(\alpha ^{\kappa}h^{-n}+h^{1-n}).
}
\end{prop}
\begin{proof}
This follows by straightforward calculations, starting with an
integration by parts.
$$
\int_0^\alpha \ln (q)dV(q)=[\ln (q)V(q)]_0^\alpha -\int_0^\alpha
{1\over q} V(q)dq={\cal O}(\alpha ^{\kappa}\ln (\alpha )),
$$
\begin{eqnarray*}
\int_0^{\alpha _1}{h\over \alpha +q}dV(q)&=&[{h\over \alpha
+q}V(q)]_0^{\alpha _1}+\int_0^{\alpha _1}{h\over (\alpha
+q)^2}V(q)dq\\ \medskip
&=&{\cal O}(h)+{\cal O}(1)\int_0^{\alpha _1}{hq^{\kappa}\over
(\alpha +q)^2}dq\\ \medskip
&=&{\cal O}(h)+{\cal O}(1)h\alpha ^{\kappa-1}\int_0^{\alpha
_1/\alpha }{\widetilde{q}^{\kappa}\over
(1+\widetilde{q})^2}d\widetilde{q}\\ \medskip
&=&\begin{cases}{\cal O}(h\alpha ^{\kappa-1}),\ 0<\kappa<1,\cr \cr {\cal
O}(h\ln{1\over \alpha }),\ \kappa=1.\end{cases}
\end{eqnarray*}
To get \no{fc.50}, we use \no{fc.47} and the estimate
\begin{eqnarray*}
\int_0^{\alpha _1}(1+{q\over \alpha })^{-N}dV(q)&=& [(1+{q\over
\alpha })^{-N}V(q)]_0^{\alpha _1}+N\int_0^{\alpha _1}(1+{q\over
\alpha })^{-(N+1)}V(q){dq\over \alpha }\\
&=&{\cal O}(1)\alpha ^N+{\cal O}(1)\int_0^\infty
(1+\widetilde{q})^{-(N+1)}\widetilde{q}^{\kappa }d\widetilde{q}\alpha ^{\kappa}\\
&=&{\cal O}(1)\alpha ^{\kappa}.
\end{eqnarray*}
\end{proof}
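As an illustration of \no{fc.49} (not part of the proof), the $\kappa <1$ scaling can be checked numerically, taking $V(t)=t^{\kappa }$ exactly and $h=1$:

```python
import math

def I(alpha, kappa, alpha1=0.5, h=1.0, n=100000):
    """Numerically compute  h * int_0^{alpha1} dV(q)/(alpha + q)
    with V(q) = q^kappa, i.e. dV = kappa * q^(kappa-1) dq."""
    pts = [alpha1 * math.exp(-20.0 * (1.0 - i / n)) for i in range(n + 1)]
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        q = 0.5 * (a + b)                 # midpoint on a log-spaced grid
        total += h * kappa * q ** (kappa - 1.0) / (alpha + q) * (b - a)
    return total

kappa = 0.5
ratios = [I(al, kappa) / al ** (kappa - 1.0) for al in (1e-3, 1e-4, 1e-5)]
# the kappa < 1 case of (fc.49): I(alpha) is comparable to h * alpha^(kappa-1)
assert max(ratios) / min(ratios) < 1.2
assert all(1.0 < r < 2.0 for r in ratios)
```

For $\kappa ={1\over 2}$ the exact ratio tends to $\pi /2$, which the computed values approach.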
\par Since we will always assume that $h\ll \alpha \ll 1$, and since
$\kappa\le 1$, we can simplify \no{fc.50} to
\ekv{fc.53}
{N(\alpha )={\cal O}(1)\alpha ^{\kappa}h^{-n}.}
\begin{prop}\label{fc9}
Assume \no{fc.44}. Under the assumptions of Proposition \ref{fc6}, we have
\ekv{fc.51}
{
\ln \det (Q+\alpha \chi (\alpha \inv Q))={1\over (2\pi h)^n}(\iint
\ln (q) dxd\xi +{\cal O}(1) \alpha ^{\kappa}\ln \alpha ). }
\par Under the assumptions of Proposition \ref{fc7}, we have
\ekv{fc.52}
{
{\rm tr\,}\chi (\alpha \inv Q)={1\over (2\pi h)^n}(\iint \chi
({q(x,\xi )\over \alpha })dxd\xi +{\cal O}(1)\alpha ^{\kappa
}{h\over \alpha }). }
\par For $0<h\ll \alpha \ll 1$, the number $N(\alpha )$ of \ev{}s
of $Q$ in $[0,\alpha ]$ satisfies \no{fc.50}.
\end{prop}
\par Notice that when $Q\ge 0$:
$$
\ln \det Q\le \ln \det (Q+\alpha \chi (\alpha \inv Q)),
$$
so \no{fc.51} with $\alpha =Ch$, $C\gg 1$, gives an upper bound which is
more precise than the one in \cite{MeSj}.
\section{Grushin \pb{} for the unperturbed \op{}}\label{gu}
\setcounter{equation}{0}
\par In Section \ref{gf} we introduced the \op{}
$$
P(z)=1+K(z),\quad K(z)\in {\rm Op}_h(S(m))
$$
where $m$ is an integrable order \fu{}, so that $K(z)$ is a trace class
\op{}. Here $P(z)$ depends \hol{}ally on $z\in \widetilde{\Omega
}\Subset {\bf C}$, where $\widetilde{\Omega }$ is open.
Also recall that $P(z)=(\widetilde{P}-z)\inv (P-z)$. We are
interested in the spectrum of small random perturbations of $P$; $P_\delta
=P+\delta Q$. Correspondingly, we get $P_\delta (z)=(\widetilde{P}-z)\inv
(P_\delta -z)=1+K_\delta (z)$, and the main work in later sections will be to
study $\vert \det (1+K_\delta (z))\vert $. The upper bounds will be fairly
simple to get, and the delicate point will be to get lower bounds. As a
preparation for this more delicate step, we here study a Grushin \pb{} for
the unperturbed \op{} $P(z)$. In this section $z\in\widetilde{\Omega }$
will be fixed and we simply write $P$ instead of $P(z)$.
\par Let $e_1,e_2,...$ be an orthonormal (ON) basis of eigenvectors of $P^*P$
and let $0\le \lambda _1\le \lambda _2\le ...$ be the corresponding \ev{}s.
(Strictly speaking, if we want the \ev{}s to form an increasing sequence,
the set of indices $j$ should be of the form $J=J_0\cup J_1\cup J_2$, with
\begin{itemize}
\item $J_0={\bf N}$ or a finite set, $0\le \lambda _j<1$, for $j\in J_0$,
\item $J_1={\bf N}$ or a finite set, $\lambda _j=1$, $j\in J_1$
\item $J_2=-{\bf N}$ or a finite set, $\lambda _j>1$, $j\in J_2$.\end{itemize}
We will only be concerned with finitely many indices from $J_0$.)
\par Since $P=P(z)$ is a Fredholm operator of index zero by Proposition \ref{gf2}, we know that $PP^*$ and $P^*P$ have the same
number $N_0$ of \ev{}s equal to $0$. Let $f_1,...,f_{N_0}$ be an ON basis of
$\ker(PP^*)$. For $j>N_0$, we have $\lambda _j>0$ and $Pe_j$ is an
eigenvector of $PP^*$ with \ev{} $\lambda _j$: $$ PP^*Pe_j=\lambda _jPe_j.
$$
Using standard notations for norms and scalar products, we
put $f_j=\Vert Pe_j\Vert \inv Pe_j$. Then $\{ f_j\}_{j\in J}$ is an ON
system of eigenvectors of $PP^*$, with $PP^*f_j=\lambda _jf_j$. Let $f\in
L^2({\bf R}^n)$ with $(f\vert f_j)=0$ for all $j\in J$. Then $(P^*f\vert
e_j)=(f\vert Pe_j)$. If $j\le N_0$, we get $(P^*f\vert e_j)=(f\vert 0)=0$,
and if $j\ge N_0+1$, we get $(P^*f\vert e_j)=\Vert Pe_j\Vert (f\vert
f_j)=0$. Hence $P^*f=0$, so $f\in \ker(PP^*)$, and hence $f=0$,
since $\ker(PP^*)$ is the span of $f_1,...,f_{N_0}$ and $f\perp f_j$ for
all $j$. We conclude
that $\{ f_j\}_{j\in J}$ is an ON \it basis \rm of eigenvectors of $PP^*$.
By construction, we have $Pe_j=w_{j}f_j$ with $0\le w_{j} =\Vert Pe_j\Vert $. Then
$$
w_{j}^2=(P^*Pe_j\vert e_j)=\lambda _j,
$$
so $w_{j}=\sqrt{\lambda _j}$ and it follows that,
\ekv{gu.1}
{Pe_j=\sqrt{\lambda _j}f_j,}
\ekv{gu.2}{P^*f_j=\sqrt{\lambda _j}e_j}
for all $j\in J$.
\par Let $0<\alpha \ll 1$ and let $N=N(\alpha )$ be given by
\ekv{gu.3}
{
\lambda _j\le \alpha \Leftrightarrow j\le N(\alpha ).
}
Define
$$
R_+:L^2({\bf R}^n)\to {\bf C}^N,\quad R_-:{\bf C}^N\to L^2({\bf R}^n)
$$
by
$$
R_+u=\sqrt{\alpha }((u\vert e_j))_{j=1}^N,\quad R_-u_-=\sqrt{\alpha
}\sum_1^N u_-(j)f_j,
$$
and put
\ekv{gu.4}
{ {\cal P}=\begin{pmatrix}P &R_-\cr R_+ &0\end{pmatrix}:L^2({\bf R}^n)\times {\bf C}^N\to
L^2({\bf R}^n)\times {\bf C}^N .}
If $u=\sum_{j\in J}u_je_j$, $u_-=(u_-(j))_{j=1}^N,$
we get
$$
{\cal P}
\begin{pmatrix}u \cr u_-\end{pmatrix}=
\begin{pmatrix}\sum_J\sqrt{\lambda
_j}u_jf_j+\sum_1^N \sqrt{\alpha }u_-(j)f_j \cr (\sqrt{\alpha }u_j)_{j=1}^N\end{pmatrix},
$$
and we conclude that
\eeekv{gu.5}
{\vert \det {\cal P}\vert &=&(\prod_1^N \vert \det
\begin{pmatrix}
\sqrt{\lambda
_j}&\sqrt{\alpha } \cr \sqrt{\alpha } &0\end{pmatrix}\vert )(\prod_{N<j\in
J}\sqrt{\lambda _j})}
{&=&\alpha ^N\prod_{N<j\in J}\sqrt{\lambda _j}}{&=&\alpha ^{N\over 2}\prod_J \mbox{\rm max}
(\sqrt{\alpha },\sqrt{\lambda _j}).}
Notice that
\ekv{gu.6}
{
\vert \det P\vert =\prod_J \sqrt{\lambda _j}.
}
\par Let $\delta _j(k)=\delta _{j,k},\ 1\le j,k\le N$. Then ${\cal P}$
maps ${\bf C}e_j\times {\bf C}\delta _j$ to ${\bf C}f_j\times {\bf C}\delta
_j$ and has the corresponding matrix
$$
\begin{pmatrix}\sqrt{\lambda _j}&\sqrt{\alpha }\cr \sqrt{\alpha } &0\end{pmatrix}.$$
The
inverse is given by
$$
\begin{pmatrix} 0 &{1\over \sqrt{\alpha }}\cr {1\over \sqrt{\alpha }}
&-{\sqrt{\lambda _j}\over \alpha } \end{pmatrix},
$$
so if $v=\sum_J v_j f_j$, $v_+=\sum_1^N v_+(j)\delta _j$, then (writing as before ${\cal E}$ for ${\cal P}^{-1}$)
\ekv{gu.7}
{
{\cal E}\begin{pmatrix}v\cr v_+\end{pmatrix}=
\begin{pmatrix}\sum_{N+1}^\infty {1\over
\sqrt{\lambda _j}}v_je_j+{1\over \sqrt{\alpha }}\sum_1^N v_+(j)e_j\cr
\sum_1^N{1\over \sqrt{\alpha }}v_j\delta _j-\sum_1^N {\sqrt{\lambda
_j}\over \alpha }v_+(j)\delta _j\end{pmatrix}
=\begin{pmatrix}
E &E_+\cr E_- &E_{-+}\end{pmatrix}
\begin{pmatrix}v\cr v_+\end{pmatrix},}
where
\eeeekv{gu.8}
{&&E_+v_+={1\over \sqrt{\alpha }}\sum_1^N v_+(j)e_j,}
{&&E_-v={1\over \sqrt{\alpha }}\sum_1^N v_j\delta _j,}
{&&E_{-+}=-{1\over \alpha }{\rm diag\,}(\sqrt{\lambda _j}),}
{&&\Vert E\Vert ,\Vert E_+\Vert ,\Vert E_-\Vert ,\Vert E_{-+}\Vert
\le {1\over
\sqrt{\alpha }}.}
From \no{gu.5}--\no{gu.8}, we see that
\ekv{gu.9}
{
\vert \det P\vert=\vert \det{\cal P}\vert \vert \det{E_{-+}}\vert ,
}
as we already know from \no{dg.11}.
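As a concrete check (with hypothetical sample values $\lambda _j\le \alpha $), the $2\times 2$ block, its claimed inverse, and the determinant value $\vert \det \vert =\alpha $ used in \no{gu.5} can be verified directly:

```python
import math

lam, alpha = 0.003, 0.04          # hypothetical sample values, lam <= alpha
sl, sa = math.sqrt(lam), math.sqrt(alpha)

P = [[sl, sa], [sa, 0.0]]         # the 2x2 block of cal P on C e_j x C delta_j
Einv = [[0.0, 1.0 / sa],          # the claimed inverse
        [1.0 / sa, -sl / alpha]]

prod = [[sum(P[i][k] * Einv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
# |det| of the block is alpha, the fact used in (gu.5)
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
assert abs(abs(det) - alpha) < 1e-12
```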
\par We next study $\vert \det {\cal P}\vert $, when $h\ll \alpha \ll 1$.
The formula \no{gu.5} can be written
\ekv{gu.10}
{
\vert \det{\cal P}\vert ^2=\alpha ^N\det 1_\alpha (P^*P),
}
where $1_\alpha (t)=\mbox{\rm max} (\alpha ,t)$. Let $\chi \in C_0^\infty
([0,2[;[0,1])$ be equal to 1 on $[0,1]$. Then for $t\ge 0$,
\ekv{gu.11}
{
t+{\alpha \over 4}\chi ({4t\over \alpha })\le 1_\alpha (t)\le t+\alpha
\chi ({t\over \alpha }).
}
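The inequality \no{gu.11} only uses that $\chi =1$ on $[0,1]$, $0\le \chi \le 1$ and ${\rm supp\,}\chi \subset [0,2[$. A numerical check with one concrete such $\chi $ (a standard bump quotient, our choice, not dictated by the text):

```python
import math

def chi(s):
    """A concrete chi in C_0^infty([0,2[;[0,1]) equal to 1 on [0,1]."""
    def g(x):
        return math.exp(-1.0 / x) if x > 0.0 else 0.0
    if s <= 1.0:
        return 1.0
    if s >= 2.0:
        return 0.0
    return g(2.0 - s) / (g(2.0 - s) + g(s - 1.0))

alpha = 0.01
for i in range(1, 2001):
    t = alpha * i / 500.0                 # t sweeps (0, 4*alpha]
    one_alpha = max(alpha, t)             # 1_alpha(t)
    lower = t + (alpha / 4.0) * chi(4.0 * t / alpha)
    upper = t + alpha * chi(t / alpha)
    assert lower <= one_alpha + 1e-15     # the lower bound in (gu.11)
    assert one_alpha <= upper + 1e-15     # the upper bound in (gu.11)
```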
\par In the following, we assume that $Q=P^*P$ satisfies the assumptions
of Section \ref{fc}, including \no{fc.44}, and choose $h\ll
\alpha \ll 1$. Then we know that
\ekv{gu.12}
{N(\alpha )={\cal O}(\alpha ^{\kappa}h^{-n}),}
and Proposition \ref{fc9} in combination with \no{gu.10}--\no{gu.12} show
that
\ekv{gu.13}
{
\ln \vert \det {\cal P}\vert ^2={1\over (2\pi h)^n}(\iint \ln (q)dxd\xi
+{\cal O}(1)\alpha ^{\kappa}\ln \alpha ).
}
As noticed after Proposition \ref{fc9}, we also have the upper bound
\ekv{gu.14}
{
\ln\det P^*P\le {1\over (2\pi h)^n}(\iint \ln(q)dxd\xi +{\cal
O}(1)\alpha ^{\kappa}\ln {1\over\alpha}).
}
\section{The Hilbert-Schmidt norm of
a Gaussian random matrix.
}\label{hs}
\setcounter{equation}{0}
Let $\alpha (\omega )$ be a complex Gaussian random variable with density
\ekv{hs.1}
{
{1\over \pi \sigma ^2}e^{-\vert \alpha \vert ^2/\sigma ^2}L(d\alpha ),\
L(d\alpha )=d\Re \alpha\, d\Im \alpha ,
}
that is, a ${\cal N}(0,\sigma ^2 )$-law, where $\sigma ^2 $ denotes the variance.
The distribution of $\vert \alpha
(\omega )\vert ^2$ is
\ekv{hs.2}
{\mu \,dr ={1\over s}e^{-r/s}H(r)dr,}
where $s=\sigma ^2$ and $H(r)$ denotes the standard Heaviside \fu{}.
Notice that
$$
\Vert \vert \alpha \vert ^2\Vert _{L^1}=\langle \vert \alpha \vert
^2\rangle =\sigma ^2.
$$
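In other words, $\vert \alpha (\omega )\vert ^2$ is exponentially distributed with mean $\sigma ^2$. A Monte Carlo sketch (sample size, seed, and $\sigma ^2$ are arbitrary illustrative choices):

```python
import random, math

random.seed(0)
sigma2 = 0.7                              # illustrative variance
n = 200000
vals = []
for _ in range(n):
    # complex N(0, sigma^2): Re and Im are independent real N(0, sigma^2/2)
    re = random.gauss(0.0, math.sqrt(sigma2 / 2.0))
    im = random.gauss(0.0, math.sqrt(sigma2 / 2.0))
    vals.append(re * re + im * im)

mean = sum(vals) / n
assert abs(mean - sigma2) < 0.01          # <|alpha|^2> = sigma^2
# tail of the density (1/s) e^{-r/s} H(r):  P(|alpha|^2 > r) = e^{-r/s}
r = 1.0
tail = sum(v > r for v in vals) / n
assert abs(tail - math.exp(-r / sigma2)) < 0.01
```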
\par Let
$\alpha _j(\omega )$, $j=1,2,...$ be independent random variables as above
with variance $\sigma _j^2$ and assume for simplicity that $\sigma _1\ge
\sigma _j$ for all $j$. We also assume that
\ekv{hs.3}
{
\sum_1^\infty \sigma _j^2<\infty ,
}
implying the a.s. convergence of $\sum_1^\infty \vert \alpha _j(\omega
)\vert ^2$.
\par We want to estimate the \proba{} that $\sum \vert \alpha _j(\omega
)\vert ^2\ge a$. The \proba{} distribution of $\sum_1^\infty \vert \alpha
_j(\omega )\vert ^2$ is equal to $(\mu _1*\mu _2*....)dx$, where $\mu _j$
is given in \no{hs.2} with $s=s_j=\sigma _j^2$, so that
\ekv{hs.4}{\sum_1^\infty s_j<\infty .}
\par The \F{} \tf{} of $\mu _j$ is given by
\ekv{hs.5}
{\widehat{\mu }_j(\rho )={1\over 1+is_j\rho },}
which has a simple pole at $\rho =i/s_j$. The probability that we are
after, is
\ekv{hs.6}
{
\int_a^\infty (\mu _1*\mu _2*...)dr={1\over 2\pi }\int \prod_1^\infty
(\widehat{\mu }_j(\rho ))\overline{\widehat{1_{[a,\infty [}}}(\rho )d\rho ,
}
by Parseval's identity. Here
\ekv{hs.7}
{
\widehat{1_{[a,\infty [}}(\rho )={1\over i(\rho -i0)}e^{-ia\rho },
}
so the \proba{} \no{hs.6} becomes
\ekv{hs.8}
{
{i\over 2\pi }\int_{-\infty }^\infty (\prod_1^\infty {1\over
1+is_j\rho }){1\over \rho +i0}e^{ia\rho }d\rho .
}
The assumption \no{hs.4} implies that the infinite product converges away
from the poles $i/s_j$. For
$\rho $ in a half plane $\Im \rho \le b<{1\over 2s_1}$, we have
$$
\vert {1\over 1+is_1\rho }\vert \le {1\over ((1-bs_1)^2+s_1^2(\Re \rho
)^2)^{1\over 2}},
$$
$$
\vert \prod _2^\infty {1\over 1+is_j\rho }\vert \le \prod _2^\infty {1\over
1-bs_j}\le \exp (C_0\sum_2^\infty bs_j),
$$
where $C_0$
is a universal constant appearing in the estimate,
$$
{1\over 1-t}\le e^{C_0t}, \ 0\le t\le {1\over 2}.
$$
\par Shifting the contour in \no{hs.8} from ${\bf R}$ to ${\bf R}+ib$ and
choosing $b=1/(2s_1)$, we can estimate the \proba{} \no{hs.6} from above
by
\ekv{hs.9}
{
C(s_1)\exp [{C_0\over 2s_1}\sum_1^\infty s_j -{1\over 2s_1}a],
}
where $C_0>0$ is the universal constant introduced above and $C(s_1)$ can be chosen
\ufly{} \bdd{} on any compact subset of $]0,+\infty [$.
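The bound \no{hs.9} is a contour-shifted (Chernoff-type) estimate. Numerically, the same mechanism can be checked by bounding the tail with the exact moment generating function at the rate $1/(2s_1)$; here we take the illustrative variances $s_j=2^{-j}$ (our choice), and the product below plays the role of the prefactor $C(s_1)e^{C_0\sum s_j/(2s_1)}$, since $\prod (1-t_j)\inv \le e^{C_0\sum t_j}$ for $t_j\le 1/2$:

```python
import random, math

random.seed(1)
s = [2.0 ** (-j) for j in range(1, 21)]    # illustrative variances s_j
s1 = s[0]

def sample_sum():
    tot = 0.0
    for sj in s:
        re = random.gauss(0.0, math.sqrt(sj / 2.0))
        im = random.gauss(0.0, math.sqrt(sj / 2.0))
        tot += re * re + im * im
    return tot

n = 50000
sums = [sample_sum() for _ in range(n)]

# Chernoff at lambda = 1/(2 s_1):
#   P(S >= a) <= exp(-a/(2 s_1)) * prod_j 1/(1 - s_j/(2 s_1)),
# the same exponential-in-a shape as (hs.9)
const = 1.0
for sj in s:
    const /= (1.0 - sj / (2.0 * s1))
for a in (2.0, 3.0, 4.0):
    bound = const * math.exp(-a / (2.0 * s1))
    empirical = sum(x >= a for x in sums) / n
    assert empirical <= bound
```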
\begin{remark}\label{hs1}\rm
W. Bordeaux Montrieux has used a more elementary argument in the case of
real matrices, by means of the Markov-Chebyschev inequality. With $\mathsf P$
denoting the \proba{} and $\langle \rangle $ the expectation values, it
gives for every $a>0$:
\ekv{hs.10}
{ a\mathsf P(\sum_1^\infty \vert \alpha _j(\omega )\vert ^2\ge a)\le \langle
\sum_1^\infty \vert \alpha _j(\omega )\vert ^2\rangle =
\sum_1^\infty \langle \vert \alpha _j(\omega )\vert ^2\rangle
=\sum_1^\infty \sigma _j^2.}
We will prefer \no{hs.9} however, since it gives an exponential
decay \wrt{} $a$.
\end{remark}
\begin{remark}\label{hs2}\rm
If $Q=(\alpha _{j,k}(\omega ))_{j,k\in{\bf N}}$ is a random matrix where
$\alpha _{j,k}(\omega )$
are \indep{} ${\cal N}(0,\sigma _{j,k}^2)$ laws, and
\ekv{hs.11}
{
\sum_{j,k}\sigma _{j,k}^2 <\infty,
}
then \no{hs.9} gives an estimate on the \proba{} that the Hilbert-Schmidt
norm is $\ge a^{1/2}$:
\ekv{hs.12}
{
\mathsf P(\Vert (\alpha _{j,k}(\omega ))\Vert _{{\rm HS}}^2\ge a)\le C(s_1)\exp [{C_0\over
2s_1}\sum_{(j,k)\in{\bf N}^2}\sigma _{j,k}^2-{1\over 2s_1}a]
}
where $C_0$ and $C(s_1)$
are the same constants as in \no{hs.9} and $s_1=\mbox{\rm max}\, \sigma _{j,k}^2$.
\end{remark}
\section{Estimates on determinants of Gaussian random matrices}\label{er}
\setcounter{equation}{0}
Consider first a random vector
\ekv{er.1}
{\trans{u(\omega )}=(\alpha _1(\omega ),...,\alpha _N(\omega ))\in{\bf C}^N,
}
where $\alpha _1,...,\alpha _N$ are \indep{} complex Gaussian random variables
with a ${\cal N}(0,1)$ law and $\omega $ is the random parameter living
in a probability space with
probability $\mathsf P$. The law of $\alpha _j$, i.e. the direct image of $\mathsf P$ under $\alpha _j$,
is given by
\ekv{er.2}
{
(\alpha _j)_*(\mathsf P)={1\over \pi }e^{-\vert z\vert ^2}L(dz)=:f(z)L(dz)
}
and $L(dz)=L_{\bf C}(dz)$ is the Lebesgue measure on
${\bf C}$.
\par The distribution of $u$ is
\ekv{er.4}
{
u_*(\mathsf P)={1\over \pi ^N}e^{-\vert u\vert ^2}L_{{\bf C}^N}(du).
}
If $U:{\bf C}^N\to {\bf C}^N$ is unitary, then $Uu$ has the same
distribution as $u$.
\par We next compute the distribution of $\vert u(\omega )\vert ^2$. The
distribution of $\vert \alpha _j(\omega )\vert ^2$ is $\mu (r)dr$, where
$$
\mu (r)=-H(r){d\over dr}e^{-r}=e^{-r}H(r),
$$
where $H(r)=1_{[0,\infty [}(r)$. We have $\widehat{\mu }(\rho )={1\over
1+i\rho }$.
\par We have $\vert u(\omega )\vert ^2=\sum_1^N \vert \alpha _j(\omega
)\vert ^2$ and since $\vert \alpha _j(\omega )\vert ^2$ are \indep{} and
identically distributed, the distribution of $\vert u(\omega )\vert ^2$ is
$\mu *...*\mu\, dr=\mu ^{*N}dr$, where $*$ indicates convolution. For
$r>0$, we get by straightforward calculation the $\chi_{2N}^2$ distribution (for the variable $2r$)
\ekv{er.4.5}
{
\mu ^{*N}dr={r^{N-1}e^{-r}\over (N-1)!}H(r) dr.
}
Recall here that
$$
\int_0^\infty r^{N-1}e^{-r}dr=\Gamma (N)=(N-1)!,
$$
so $\mu ^{*N}$ is indeed normalized.
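Equivalently, $\vert u(\omega )\vert ^2$ is a sum of $N$ \indep{} ${\rm Exp}(1)$ variables, i.e.\ $\Gamma (N,1)$-distributed. A Monte Carlo sketch checking \no{er.5} and the distribution \no{er.4.5} through its cumulative form (sample size and seed are arbitrary choices):

```python
import random, math

random.seed(2)
N, n = 5, 200000
# -ln(U), U uniform on (0,1], is Exp(1); |u|^2 is a sum of N such variables
sums = [sum(-math.log(1.0 - random.random()) for _ in range(N))
        for _ in range(n)]

mean = sum(sums) / n
var = sum((x - mean) ** 2 for x in sums) / n
assert abs(mean - N) < 0.05        # <|u|^2> = N, cf. (er.5)
assert abs(var - N) < 0.3          # Gamma(N,1) also has variance N

# cumulative form of mu^{*N} dr = r^{N-1} e^{-r}/(N-1)! dr:
#   P(|u|^2 <= a) = 1 - e^{-a} sum_{k<N} a^k/k!
a = 4.0
exact = 1.0 - math.exp(-a) * sum(a ** k / math.factorial(k) for k in range(N))
emp = sum(x <= a for x in sums) / n
assert abs(emp - exact) < 0.01
```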
\par The expectation value of each $\vert \alpha _j(\omega )\vert ^2$ is $1$ so:
\ekv{er.5}
{\langle \vert u(\omega )\vert ^2\rangle =N.}
\par We next estimate the probability that $\vert u(\omega )\vert ^2$ is
very large in a fashion that is slightly different from that of Section
\ref{hs}. It will be convenient to pass to the variable $\ln (\vert
u(\omega )\vert ^2)$, which has the distribution obtained from \no{er.4.5}
by replacing $r$ by $t=\ln r$, so that $r=e^t$, $dr/r=dt$. Thus $\ln
(\vert u(\omega )\vert ^2) $ has the distribution
\ekv{er.6}
{
{r^Ne^{-r}\over (N-1)!}H(r){dr\over r}={e^{Nt-e^t}\over (N-1)!}dt=:\nu
_N(t)dt.
}
\par Now consider a random matrix
\ekv{er.8}
{
(u_1...u_N)
}
where $u_k(\omega )$ are random vectors in ${\bf C}^N$ (here viewed as
column vectors) of the form
$$
\trans{u_k(\omega )}=(\alpha _{1,k}(\omega ),...,\alpha _{N,k}(\omega )),
$$
and all the $\alpha _{j,k}$ are \indep{} with the same law \no{er.2}.
\par
Then
\ekv{er.10}
{
\det (u_1\,u_2...u_N)=\det (u_1\,\widetilde{u}_2...\widetilde{u}_N),
}
where $\widetilde{u}_j$ are obtained in the following way (assuming the $u_j$
to be linearly \indep{}, as they are almost surely): $\widetilde{u}_2$ is
the \og{} projection of $u_2$ in the \og{} complement $(u_1)^{\perp}$,
$\widetilde{u}_3$ is the \og{} projection of $u_3$ in
$(u_1,u_2)^\perp=(u_1,\widetilde{u_2})^\perp$, etc.
\par If $u_1$ is fixed, then $\widetilde{u}_2$ can be viewed as a random
vector in ${\bf C}^{N-1}$ of the type \no{er.1}, \no{er.2}, and with
$u_1,u_2$ fixed, we can view $\widetilde{u}_3$ as a random vector of the same
type in ${\bf C}^{N-2}$ etc. On the other hand
\ekv{er.9'}
{
\vert \det (u_1\, u_2...u_N)\vert ^2=\vert u_1\vert ^2\vert
\widetilde{u}_2\vert ^2\cdot ..\cdot \vert \widetilde{u}_N\vert ^2.
}
The squared lengths $\vert u_1\vert^2 , \vert
\widetilde{u}_2\vert^2 ,...,\vert \widetilde{u}_N\vert^2 $ are \indep{} random
variables with distributions $\mu ^{*N}dr, \mu ^{*(N-1)}dr,..., \mu dr$.
This reduction plays an important role in \cite{Gi}.
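The factorization \no{er.9'} is the Gram--Schmidt (or QR) identity $\vert \det \vert ^2=\prod \Vert \widetilde{u}_k\Vert ^2$; it can be checked directly on a small random complex matrix:

```python
import random

random.seed(3)
N = 4

def rand_vec():
    return [complex(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0))
            for _ in range(N)]

u = [rand_vec() for _ in range(N)]

def inner(a, b):                     # Hermitian scalar product (a|b)
    return sum(x * y.conjugate() for x, y in zip(a, b))

# Gram-Schmidt: tilde[k] is the projection of u[k] on (u[0],...,u[k-1])^perp
tilde = []
for k in range(N):
    v = list(u[k])
    for w in tilde:
        c = inner(v, w) / inner(w, w)
        v = [x - c * y for x, y in zip(v, w)]
    tilde.append(v)

def det(M):                          # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[u[j][i] for j in range(N)] for i in range(N)]    # columns u_1,...,u_N
lhs = abs(det(A)) ** 2
rhs = 1.0
for v in tilde:
    rhs *= abs(inner(v, v))          # |u_1|^2 |tilde u_2|^2 ... |tilde u_N|^2
assert abs(lhs - rhs) / rhs < 1e-9
```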
The following lemma will not be used directly.
\begin{lemma}\label{er2}
Let $\alpha ,\beta >0$ be \indep{} random variables with distributions
$\mu _\alpha (r){dr\over r}$, $\mu _\beta (r) {dr\over r}$. Then the
product $\alpha \beta $ has the distribution $\mu _{\alpha \beta
}{dr\over r}$, with
\ekv{er.10'}
{
\mu _{\alpha \beta }=\mu _\alpha \sharp \mu _\beta :={\cal M}\inv (({\cal
M}\mu _\alpha ) ({\cal M}\mu _\beta )).
}
Here
$$
{\cal M}\mu (\tau )=\int r^{-i\tau }\mu (r){dr\over r}
$$
is the Mellin \tf{} of $\mu $.
\end{lemma}
\begin{proof} Recall that the Mellin \tf{} of $\mu (r)$ is the \F{} \tf{}
of $\mu (e^t)$; $r=e^t$, $r\inv dr=dt$. The distribution of $\ln \alpha $
is related to that of $\alpha $ by the same change of variables $\mu
_\alpha (r){dr\over r}\to \mu _\alpha (e^t)dt=\nu _\alpha (t)dt$. Since
multiplication on the \F{} \tf{} side corresponds to convolution,
\no{er.10'}
is equivalent to the fact that the distribution of the sum of two \indep{}
random variables is equal to the convolution of the distributions of the
two variables.\end{proof}
\par The proof also shows that the multiplicative convolution in the lemma
is given by
\ekv{er.11}
{
\mu _\alpha \sharp \mu _\beta (r)=\int_0^\infty \mu _\alpha ({r\over
\rho })\mu _\beta (\rho ){d\rho \over \rho }.}
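For instance, with $\mu _\alpha =\mu _\beta =\mu $, $\mu (r)=re^{-r}$ (two \indep{} ${\rm Exp}(1)$ variables), \no{er.11} can be compared against a Monte Carlo histogram of the product; the tolerances below are loose statistical choices:

```python
import random, math

random.seed(4)

def mu(r):                 # Exp(1) written as mu(r) dr/r with mu(r) = r e^{-r}
    return r * math.exp(-r)

r0 = 1.0
# (er.11): mu_ab(r0) = int_0^infty mu(r0/rho) mu(rho) drho/rho,
# computed on a log grid rho = e^s, s in [-8, 8] (drho/rho = ds)
n = 4000
total = 0.0
for i in range(n):
    s = -8.0 + 16.0 * (i + 0.5) / n
    rho = math.exp(s)
    total += mu(r0 / rho) * mu(rho) * (16.0 / n)
density = total / r0       # density of the product alpha*beta at r0

# Monte Carlo comparison on a small window around r0
m = 400000
h = 0.05
count = 0
for _ in range(m):
    p = (-math.log(1.0 - random.random())) * (-math.log(1.0 - random.random()))
    if r0 - h <= p <= r0 + h:
        count += 1
emp = count / m / (2.0 * h)
assert abs(emp - density) < 0.015
```

(The exact value of the density at $r_0=1$ is $2K_0(2)\approx 0.2278$.)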
\par As already mentioned, we shall not use the lemma directly but rather
its proof: taking logarithms, we use that the distribution
of the random variable $\ln \vert \det (u_1\, u_2...u_N)\vert ^2$ is equal
to
\ekv{er.12}
{
(\nu _1*\nu _2*...*\nu _N)dt,
}
with $\nu _j$ defined in \no{er.6}.
\par We have
$$
\nu _N(t)\le \widetilde{\nu }_N(t):={1\over (N-1)!}e^{Nt}.
$$
Choose $x(N)\in{\bf R}$ such that
\ekv{er.13}
{
\int_{-\infty }^{x(N)}\widetilde{\nu }_N(t)dt=1.
}
More explicitly, we have
\ekv{er.14}
{{1\over N!}e^{Nx(N)}=1,\quad x(N)={1\over N}\ln (N!)={1\over N}\ln
\Gamma (N+1).}
Using Stirling's formula,
$$
{(N-1)!\over \sqrt{2\pi }}={\Gamma (N)\over \sqrt{2\pi
}}=e^{-N}N^{N-{1\over 2}}(1+{\cal O}({1\over N})),
$$
we get
\eeekv{er.15}
{x(N)&=&{1\over N}\big({1\over 2}\ln (2\pi )-(N+1)+(N+{1\over 2})\ln
(N+1)+{\cal O}({1\over N})\big)}
{&=& {1\over N}\big( (N+{1\over 2})\ln N-N+C_0+{\cal O}({1\over N})\big)
}
{&=& \ln N+{1\over 2N}\ln N-1+{C_0\over N}+{\cal O}({1\over N^2}),
}
where $C_0=(\ln 2\pi )/2>0$.
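The expansion \no{er.15} is easy to check numerically against $x(N)={1\over N}\ln \Gamma (N+1)$:

```python
import math

C0 = math.log(2.0 * math.pi) / 2.0

def x(N):
    # x(N) = (1/N) ln N! = (1/N) ln Gamma(N+1), cf. (er.14)
    return math.lgamma(N + 1) / N

for N in (100, 1000, 10000):
    approx = math.log(N) + math.log(N) / (2.0 * N) - 1.0 + C0 / N
    assert abs(x(N) - approx) < 5.0 / N ** 2   # error O(1/N^2) as in (er.15)
```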
\par With this choice of $x(N)$, we put
$$
\rho _N(t)=1_{]-\infty ,x(N)]}(t)\widetilde{\nu }_N(t),
$$
so that $\rho _N(t)dt$ is a probability measure ``obtained from $\nu
_N(t)dt$, by transferring mass to the left'' in the sense that
\ekv{er.16}
{
\int f\nu _N dt\le \int f\rho _N dt,
}
whenever $f$ is a \bdd{} decreasing \fu{}. Equivalently,
$$
g*\nu _N\le g*\rho _N,
$$
whenever $g$ is a \bdd{} increasing \fu{}. Now, for such a $g$, both $g*\nu
_N$ and $g*\rho _N$ are \bdd{} increasing functions, so by induction, we
get
$$
g*\nu _1*...*\nu _N\le g*\rho _1*...*\rho _N.
$$
In particular, by taking $g=H$, we get
\ekv{er.17}
{
\int_{-\infty }^t (\nu _1*...*\nu _N )(s)ds\le \int_{-\infty }^t
(\rho _1*...*\rho _N)(s)ds, \ t\in {\bf R}.
}
\par We have by \no{er.14}
\eekv{er.18}
{
\widehat{\rho }_N(\tau )&=&\int_{-\infty }^{x(N)}{1\over (N-1)!}e^{t(N-i\tau
)}dt={1\over (N-1)!(N-i\tau )}e^{Nx(N)-ix(N)\tau}
}
{ &=&{e^{-ix(N)\tau }\over 1-i{\tau \over N}}.}
This \fu{} has a pole at $\tau =-iN$.
\par Similarly,
\ekv{er.19}
{
\widehat{1_{]-\infty ,a]}}(\tau )={i\over \tau +i0}e^{-ia\tau }.
}
By Parseval's formula, we get
\begin{eqnarray}\label{er.19bis}
\int_{-\infty }^a \rho _1*..*\rho _Ndt&=&{1\over 2\pi }\int_{-\infty
}^\infty {\cal F}(\rho _1*..*\rho _N)(\tau )\overline{{\cal
F}1_{]-\infty ,a]}}(\tau )d\tau \\
&=&{1\over 2\pi }\int_{-\infty }^{+\infty }e^{-i\tau
(\sum_1^Nx(j)-a)}{-i\over \tau -i0}\prod_1^N{1\over (1-{i\tau \over
j})}d\tau .
\end{eqnarray}
We deform the contour to $\Im \tau =-1/2$ (half-way between ${\bf R}$ and
the first pole in the lower half-plane).
For $j\ge 2$, we use the estimate
$$
\vert {1\over 1-{i\tau \over j}}\vert \le {1\over 1-{1\over 2j}}=\exp
({1\over 2j}+{\cal O}({1\over j^2})),
$$
when $\Im \tau =-1/2$. Hence,
$$
\prod_2^N \vert {1\over 1-{i\tau \over j}}\vert\le \exp ({1\over
2}\sum_2^N({1\over j}+{{\cal O}(1)\over j^2}))\le CN^{1\over 2}.
$$
It follows that for $a\le \sum_1^N x(j):$
\ekv{er.20}
{
\int_{-\infty }^a \rho _1*..*\rho _Ndt\le CN^{1\over 2}\exp (-{1\over
2}(\sum_1^N x(j)-a)).
}
In view of \no{er.17} and (\ref{er.19bis}), the \rhs{} is an upper bound for the probability
that $\ln \vert \det (u_1...u_N)\vert ^2\le a$.
\par From the formula \no{er.15}, we get for some constants $C_1, C_2\in{\bf R}$:
\ekv{er.21}
{
\sum_1^N x(j)\ge C_1+(N+{1\over 2})\ln N-2N+{1\over 4}(\ln N)^2+C_0\ln N
\ge C_2+(N+{1\over 2})\ln N -2N.
}
Hence, for $a\le C_2+(N+{1\over 2})\ln N-2N$,
\eeekv{er.22}
{&& \hskip -1truecm \mathsf P(\ln \vert \det (u_1...u_N)\vert ^2\le a)}
{&\le& CN^{1\over 2}\exp [-{1\over 2}(C_2+(N+{1\over 2})\ln N-2N-a)]}
{&=&C\exp [-{1\over 2}(C_2+(N-{1\over 2})\ln N-2N-a)].}
\par We shall next extend our bounds on the
\renewcommand\proba{probability} \proba{}
for the determinant to be small, to determinants of the form
$$
\det (D+Q)
$$
where $Q=(u_1...u_N)$ is as before, and $D=(d_1...d_N)$ is a fixed
complex $N\times N$ matrix. As before, we can write
$$
\vert \det ((d_1+u_1)...(d_N+u_N))\vert ^2=\vert d_1+u_1\vert ^2\vert
\widetilde{d}_2+\widetilde{u}_2\vert ^2\cdot ..\cdot \vert
\widetilde{d}_N+\widetilde{u}_N\vert ^2,
$$
where $\widetilde{d}_2=\widetilde{d}_2(u_1)$,
$\widetilde{u}_2=\widetilde{u}_2(u_1,u_2)$ are the \og{} projections of
$d_2$, $u_2$ on $(d_1+u_1)^\perp$,
$\widetilde{d}_3=\widetilde{d}_3(u_1,u_2)$,
$\widetilde{u}_3=\widetilde{u}_3(u_1,u_2,u_3)$ are the \og{} projections
of $d_3$, $u_3$ on $(d_1+u_1,d_2+u_2)^{\perp}$ and so on.
\par Let $\nu _d^{(N)}(t)dt$ be the \proba{} distribution of $\ln \vert
d+u\vert^2 $, when $d\in {\bf C}^N$ is fixed and $u\in{\bf C}^N$ is random
as in \no{er.1}, \no{er.2}. Notice that $\nu _0^{(N)}(t)=\nu ^{(N)}(t)$ is the
density we have already studied.
\begin{lemma}\label{er3} For every $a\in{\bf R}$, we have
$$\int_{-\infty }^a \nu _d^{(N)}(t)dt\le \int_{-\infty }^a \nu
^{(N)}(t)dt.$$
\end{lemma}
\begin{proof}
Equivalently, we have to show that $\mathsf P(\vert d+u\vert ^2\le
\widetilde{a})\le \mathsf P(\vert u\vert ^2\le \widetilde{a})$ for every
$\widetilde{a}>0$. For this, we may assume that $d=(c,0,...,0)$, $c>0$. We
then only have to prove that
$$
\mathsf P(\vert c+\Re u_1\vert ^2\le b^2)\le \mathsf P(\vert \Re u_1\vert ^2\le b^2),\ b>0,
$$
and here we may replace $\mathsf P$ by the corresponding \proba{} density
$$
\mu (t)dt={1\over {\sqrt{\pi }}}e^{-t^2}dt
$$
for $\Re u_1$. Thus, we have to show that
\ekv{er.23}
{
{1\over \sqrt{\pi }}\int_{\vert c+t\vert \le b}e^{-t^2}dt\le
{1\over \sqrt{\pi }}\int_{\vert t\vert \le b}e^{-t^2}dt .
}
Fix $b$ and rewrite the \lhs{} as
$$
I(c)={1\over \sqrt{\pi }}\int_{-b-c}^{b-c}e^{-t^2}dt.
$$
The derivative satisfies (recall that $c>0$)
$$
I'(c)={1\over {\sqrt{\pi }}}(e^{-(b+c)^2}-e^{-(b-c)^2})\le 0.
$$
Hence $c\mapsto I(c)$ is decreasing and \no{er.23} follows, since it is
trivially fulfilled when $c=0$.
\end{proof}
\par Now consider the \proba{} that $\ln \vert \det (D+Q)\vert ^2\le a$.
If $\chi _a(t)=H(a-t)$, this \proba{} becomes
\begin{eqnarray*}
&& \int ..\int \mathsf P(du_1)...\mathsf P(du_N)\times \\ &&\hskip -7truemm\chi _a(\ln \vert
d_1+u_1\vert ^2+\ln \vert
\widetilde{d}_2(u_1)+\widetilde{u}_2(u_1,u_2)\vert ^2+...+\ln \vert
\widetilde{d}_N(u_1,..,u_{N-1})+\widetilde{u}_N(u_1,..,u_N)\vert ^2).
\end{eqnarray*}
Here we first carry out the integration \wrt{} $u_N$, noticing that with
the other $u_1,..,u_{N-1}$ fixed, we may consider
$\widetilde{d}_N(u_1,..,u_{N-1})$
as a fixed vector in ${\bf C}\simeq (d_1+u_1,...,d_{N-1}+u_{N-1})^\perp$
and $\widetilde{u}_N$ as a random vector in ${\bf C}$. Using also the
lemma, we get
\begin{eqnarray*}
&&\mathsf P(\ln \vert \det (D+Q)\vert ^2\le a)\\
&=&\int ..\int \nu
_{\widetilde{d}_N}^{(1)}(t_N)dt_N\mathsf P(du_{N-1})..\mathsf P(du_1)\times \\
&&\chi _a(\ln \vert d_1+u_1\vert ^2+..+\ln \vert
\widetilde{d}_{N-1}(u_1,..,u_{N-2})+\widetilde{u}_{N-1}(u_1,..,u_{N-1})\vert
^2+t_N)\\
&\le& \int ..\int \nu^{(1)}(t_N)dt_N\mathsf P(du_{N-1})..\mathsf P(du_1)\times\\
&&\chi _a(\ln \vert d_1+u_1\vert ^2+..+\ln \vert
\widetilde{d}_{N-1}(u_1,..,u_{N-2})+\widetilde{u}_{N-1}(u_1,..,u_{N-1})\vert
^2+t_N).
\end{eqnarray*}
We next estimate the $u_{N-1}$-integral in the same way and so on.
Eventually, we get
\begin{prop}\label{er4}
Under the assumptions above,
\begin{eqnarray*}
\mathsf P(\ln \vert \det (D+Q)\vert ^2\le a)&\le& \int ..\int \chi
_a(t_1+...+t_N)\nu ^{(1)}(t_N)\nu ^{(2)}(t_{N-1})..\nu ^{(N)}(t_1)dt_1..dt_N\\
&=&\mathsf P(\ln \vert \det Q\vert ^2\le a).\end{eqnarray*}
In particular the estimate \no{er.22}
extends to random perturbations of constant matrices:
\ekv{er.24}
{\mathsf P(\ln \vert \det (D+Q)\vert ^2\le a)\le
C\exp [-{1\over 2}(C_2+(N-{1\over 2})\ln N-2N-a)],}
when $a\le C_2 +(N+{1\over 2})\ln N-2N$.
\end{prop}
\section{Grushin \pb{} for the perturbed \op{}}\label{gp}
\setcounter{equation}{0}
\par Let $P$
be as in Section \ref{gf}. Let $0<\widetilde{m},\widehat{m}\le 1$ be
square integrable order \fu{}s on ${\bf R}^{2n}$
such that $\widetilde{m}$ or $\widehat{m}$ is integrable, and let
$\widetilde{S}\in S(\widetilde{m})$, $\widehat{S}\in S(\widehat{m})$ be
elliptic symbols. We use the same symbols to denote the $h$-Weyl
quantizations. The \op{}s $\widetilde{S}$, $\widehat{S}$ will be \hs{}
with
$$
\Vert \widetilde{S}\Vert _{{\rm HS}}, \Vert \widehat{S}\Vert _{{\rm HS}}\backsim
h^{-{n\over 2}}.
$$
Let $\widetilde{e}_1,\widetilde{e}_2,...$, and
$\widehat{e}_1,\widehat{e}_2,...$ be \on{} bases for $L^2({\bf R}^n)$. Our
random perturbation will be
\ekv{gp.1}
{
Q_\omega =\widehat{S}\circ \sum_{j,k}\alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*\circ \widetilde{S},
}
where $\alpha _{j,k}$
are \indep{} complex ${\cal N}(0,1)$ \rv{}s. See the appendix, Section \ref{ap}
for a general discussion.
\par Consider the polar decompositions
\ekv{gp.2}{\widehat{S}=\widehat{D}\widehat{U},\
\widetilde{S}=\widetilde{U}\widetilde{D},}
where
$\widehat{U}$, $\widetilde{U}$ are unitary \pop{}s with symbol in $S(1)$
and $\widehat{D}$, $\widetilde{D}$ are positive \sa{} elliptic \pop{}s
with symbol in $S(\widehat{m})$ and $S(\widetilde{m})$ respectively. After
replacing $\widehat{e}_j$
by $\widehat{U}\widehat{e}_j$ and $\widetilde{e}_k$ by
$\widetilde{U}^*\widetilde{e}_k$, we get with the new \on{} bases that
\ekv{gp.3}
{
Q_\omega =\widehat{D}\circ \sum_{j,k}\alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*\circ \widetilde{D}.
}
Now as we recall in the appendix (Section \ref{ap}), we may replace the
bases $\widehat{e}_j$ and $\widetilde{e}_j$ by any new \on{} bases we like,
if we replace the $\alpha _{j,k}(\omega )$ by a new set of \rv{}s (that we
also denote by $\alpha _{j,k}$) having identical properties. If we
choose $\widehat{e}_j$ to be an \on{} basis of \ef{}s of $\widehat{D}$ and
similarly for $\widetilde{e}_j$, then we get
\ekv{gp'.4}
{ Q_\omega =\sum_{j,k}\widehat{s}_j\widetilde{s}_k\alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*, } where $\widehat{s}_j>0$ and
$\widetilde{s}_j>0$ are the \ev{}s of $\widehat{D}$ and $\widetilde{D}$
respectively, i.e. the singular values of $\widehat{S}$ and
$\widetilde{S}$.
\par We are then precisely in the situation of Section \ref{hs}, noting
that $\widehat{s}_j\widetilde{s}_k\alpha _{j,k}(\omega )$ are \indep{}
${\cal N}(0,\widehat{s}_j^2\widetilde{s}_k^2)$-laws, so \no{hs.12} can be applied
with $\sigma _{j,k}=\widehat{s}_j\widetilde{s}_k$,
$$
\sum_{j,k}\sigma _{j,k}^2=\Vert \widehat{S}\Vert _{{\rm HS}}^2\Vert
\widetilde{S}\Vert ^2_{{\rm HS}}\backsim h^{-2n}.
$$
We also know that
$$
s_1=\max_{j,k} \sigma _{j,k}=\Vert \widehat{S}\Vert \Vert \widetilde{S}\Vert
\backsim 1.
$$
From \no{hs.12}, we deduce that
\ekv{gp'.5}
{
\mathsf P(\Vert Q_\omega \Vert _{{\rm HS}}^2\ge a)\le C\exp [Ch^{-2n}-{a\over C}]
}
for some constant $C>0$. Let
\ekv{gp'.6}
{
M=C_1h^{-n},
}
for some $C_1\gg 1$. Then \no{gp'.5} gives
\ekv{gp'.7}
{
\mathsf P(\Vert Q_\omega \Vert _{{\rm HS}}^2\ge M^2)\le C\exp (-h^{-2n}/C),
}
for some new constant $C>0$.
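Indeed, taking $a=M^2=C_1^2h^{-2n}$ in \no{gp'.5} gives
$$
\mathsf P(\Vert Q_\omega \Vert _{{\rm HS}}^2\ge M^2)\le C\exp \Big[ \Big(
C-{C_1^2\over C}\Big) h^{-2n}\Big] ,
$$
and the exponent is $\le -h^{-2n}$ as soon as $C_1^2\ge C(C+1)$.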
\par We also want to control the trace class norm of
$Q_\omega $, so we will use the assumption that one of $\widetilde{m}$
and $\widehat{m}$ is integrable. Assume for instance that $\widehat{m}$ is
integrable. Then $\widehat{m}^{1/2}$ is square integrable and we can
factorize $\widehat{S}=\widehat{S}_1\widehat{S}_2$, with
$\widehat{S}_j\in{\rm Op}(S(\widehat{m}^{1/2}))$ being Hilbert-Schmidt operators. Let us write
$$
Q_\omega =\widehat{S}_1\widehat{S}_2\sum_{j,k}\alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*\widetilde{S}.
$$
Now recall that the composition of two \hs{} \op{}s is of trace class and
the corresponding trace class norm does not exceed the product of the \hs{}
norms of the two factors. Knowing that $\Vert \widehat{S}_1\Vert
_{{\rm HS}}={\cal O}(h^{-n/2})$
and applying \no{gp'.7} to $\widehat{S}_2\sum_{j,k}\alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*\widetilde{S}$, we get
\ekv{gp'.8}
{
\mathsf P(\Vert Q_\omega \Vert _{\rm tr}\ge M^{3/2})\le C\exp (-h^{-2n}/C).
}
\par In the following we will restrict the attention to $Q_\omega $'s with
\ekv{gp.4}
{
\Vert Q_\omega \Vert _{{\rm HS}}\le M,\ \Vert Q_\omega \Vert _{\rm tr}\le M^{3/2},
}
and we have just seen that the \proba{} that this is the case is bounded
from below by $1-Ce^{-h^{-2n}/C}$.
\par We wish to study the \ev{} distribution of
\ekv{gp.5}
{
P_\delta =P-\delta Q_\omega ,
}
when $\delta >0$ is \sufly{} small. (The minus sign is for notational
convenience only.)
\par Recall from Section \ref{gf}, that for $z\in\widetilde{\Omega }$,
\ekv{gp.6}
{
P(z)=(\widetilde{P}-z)^{-1}(P-z)
}
is a trace class \pert{} of the identity. We now introduce
\ekv{gp.7}
{
P_\delta (z)=(\widetilde{P}-z)\inv (P-\delta Q_\omega -z)=P(z)-\delta
(\widetilde{P}-z)\inv Q_\omega .
}
The Grushin \pb{} will be used to find lower bounds for $\vert \det P_\delta
(z)\vert $. First we derive an upper bound: We have with $P_\delta
(z)=P_\delta $, $P=P(z)$:
\eeekv{gp.8}
{&&\hskip -10truemm P_\delta ^*P_\delta }
{&=&P^*P-\delta (P ^*(\widetilde{P}-z)\inv
Q_\omega +Q_\omega ^*(\widetilde{P}^*-\overline{z})\inv P -\delta
Q_\omega ^*(\widetilde{P}^*-\overline{z})\inv (\widetilde{P}-z)\inv
Q_\omega )}
{&=&P^*P+\delta R,
}
where
\eekv{gp.9}
{
\Vert R\Vert_{{\rm HS}} &\le& C (\Vert Q_\omega \Vert _{{\rm HS}}+\delta \Vert
Q_\omega \Vert\Vert
Q_\omega \Vert_{{\rm HS}})\le \widetilde{C}M,
}
{
\Vert R\Vert_{{\rm tr}} &\le& C (\Vert Q_\omega \Vert _{{\rm tr}}+\delta \Vert
Q_\omega \Vert\Vert
Q_\omega \Vert_{{\rm tr}})\le \widetilde{C}M^{3/2},
}
provided that $\delta \Vert Q_\omega \Vert\le {\cal O}(1)$, as will
follow from \no{gp.12}.
\par In Section \ref{fc} we studied $P^*P+\alpha \chi (\alpha \inv P^*P)$
for $h\ll \alpha \ll 1$. This \op{} is
$\ge \alpha $ if $1_{[0,1]}\le \chi $, as we may assume. Now assume that
\ekv{gp.12}
{
\delta M\ll h .
}
Then
$$
P^*P+\alpha \chi (\alpha \inv P^*P)+\delta R\ge {\alpha \over 2},
$$
and
\begin{eqnarray*}
\ln\det P_\delta ^*P_\delta &\le& \ln \det (P_\delta ^*P_\delta +\alpha
\chi ({P^*P\over \alpha }))
\\&=& \ln \det (P^*P+\alpha \chi ({P^*P\over \alpha })+\delta R)
\\
&=&\ln \det (P^*P+\alpha \chi ({P^*P\over \alpha }))+\int_0^\delta \tr
((P^*P+\alpha \chi ({P^*P\over \alpha })+tR)\inv R)dt.
\end{eqnarray*}
The integral is ${\cal O}(1){\delta \over \alpha }\Vert R\Vert _\tr ={\cal
O}(1)\delta M^{3\over 2}/\alpha $ and combining this with \no{fc.51}
(assuming now \no{fc.44}), we get
\ekv{gp.13}
{
\ln \det P_\delta ^*P_\delta \le {1\over (2\pi h)^n}(\iint \ln \vert
p\vert ^2 dxd\xi +{\cal O}(1)\alpha ^{\kappa}\ln {1\over \alpha })+{\cal O}(1){\delta M^{3\over 2}\over \alpha }.
}
Here we choose $\alpha =Ch$, $C\gg 1$, and we can drop the last remainder
term if we assume that
\ekv{gp.14}
{
\delta M^{3\over 2}\ll h^{1+\kappa-n}\ln {1\over h},\ \delta \ll
h^{1+\kappa+n/2}\ln {1\over h}.
}
For $n\geq 2$ this follows from \no{gp.12}, but for $n =1$ it might be a stronger assumption depending on the value of $\kappa$.
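Note that, with $M=C_1h^{-n}$ as in \no{gp'.6}, the two conditions in
\no{gp.14} are equivalent:
$$
\delta M^{3\over 2}\ll h^{1+\kappa-n}\ln {1\over h}\iff \delta \ll
h^{{3n\over 2}}\, h^{1+\kappa-n}\ln {1\over h}=h^{1+\kappa+n/2}\ln {1\over h}.
$$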
Then
\ekv{gp.15}
{
\ln \vert \det P_\delta \vert \le {1\over (2\pi h)^n}(\iint \ln \vert
p\vert dxd\xi +{\cal O}(1)h^{\kappa}\ln {1\over h}).
}
\par Still with $h\ll \alpha \ll 1$ we define $R_+$, $R_-$, ${\cal
P}_0={\cal P}$, ${\cal E}_0={\cal E}$ as in Section \ref{gu}. Here $z$ is
fixed, $P=P(z)$. With $P_\delta =P_\delta (z)$, we put
\ekv{gp.16}
{
{\cal P}_\delta =\begin{pmatrix}
P_\delta &R_-\cr R_+ &0\end{pmatrix}:\, L^2({\bf
R}^n)\times {\bf C}^N\to L^2({\bf R}^n)\times {\bf C}^N.
}
Now $\Vert \delta (\widetilde{P}-z)\inv Q_\omega \Vert \le C\delta M$,
and
$$
{\delta M\over \sqrt{\alpha }}\ll {h\over \sqrt{\alpha }}\ll 1
$$
under the assumption \no{gp.12}, so ${\cal P}_\delta $ has the inverse
\ekv{gp.17}
{
{\cal E}_\delta ={\cal E}_0\Big( 1-
\begin{pmatrix}\delta (\widetilde{P}-z)\inv
Q_\omega &0\cr 0 &0\end{pmatrix}{\cal E}_0\Big)\inv
}
of norm $\le {\cal O}(1/\sqrt{\alpha })$. Writing
\ekv{gp.18}
{
\widetilde{Q}_\omega =(\widetilde{P}-z)\inv Q_\omega ,
}
we have the Neumann series expansion
\eekv{gp.19}
{ {\cal E}_\delta &=&\begin{pmatrix}E^\delta &E_+^\delta \cr E_-^\delta
&E_{-+}^\delta \end{pmatrix} \\ \notag}{&=&
\begin{pmatrix}\sum_{j=0}^\infty E^0(\delta \widetilde{Q}_\omega E^0)^j
&\sum_{j=0}^\infty
(E^0\delta
\widetilde{Q}_\omega )^jE_+^0 \cr \sum_{j=0}^\infty E_-^0(\delta
\widetilde{Q}_\omega E^0)^j
&E_{-+}^0+\sum_{j=1}^\infty E_-^0(\delta \widetilde{Q}_\omega E^0)^{j-1}\delta
\widetilde{Q}_\omega E_+^0\end{pmatrix}.
}
\par For $0\le t\le \delta $ we have
\begin{eqnarray*}
{d\over dt}\ln \det {\cal P}_t &=&-\tr {\cal E}_t
\begin{pmatrix}\widetilde{Q}_\omega &0\cr 0 &0\end{pmatrix}\\
&=&{\cal O}(1) {1\over \sqrt{\alpha }}\Vert \widetilde{Q}_\omega \Vert _\tr\\
&=&{\cal O}(1){1\over \sqrt{\alpha }}M^{3\over 2},
\end{eqnarray*}
so $$
\ln \det {\cal P}_\delta =\ln \det ({\cal P}) +{\cal O}(1){\delta \over
\sqrt{\alpha }}M^{3\over 2}.
$$
Applying \no{gu.13}, we get
\ekv{gp.20}
{
\ln \vert \det {\cal P}_\delta \vert ={1\over (2\pi h)^n}(\iint\ln\vert
p\vert dxd\xi +{\cal O}(1)
\alpha ^{\kappa}\ln {1\over \alpha }
)
+{\cal
O}(1){\delta \over \sqrt{\alpha }}M^{3\over 2}.
}
Again, under the strengthened assumption \no{gp.14}, we get with
$\alpha =Ch$, $C\gg 1$,
\ekv{gp.21}
{
\ln \vert \det {\cal P}_\delta \vert ={1\over (2\pi h)^n}(\iint\ln\vert
p\vert dxd\xi +{\cal O}(1)
h^{\kappa}\ln {1\over h}
).
}
\par The idea to get a lower bound for $\ln \vert \det P_\delta \vert $ with
high \proba{} is now to use \no{dg.10}, \no{dg.11} which gives
\ekv{gp.22}
{
\ln \vert \det P_\delta \vert =\ln \vert \det {\cal P}_\delta \vert +\ln
\vert \det E_{-+}^\delta \vert ,
}
and to get a lower bound for $\ln \vert \det E_{-+}^\delta \vert $.
\section{Lower bounds on the determinant}\label{lb}
\setcounter{equation}{0}
We keep the assumptions
formulated in the beginning of Section \ref{gp}, in particular \no{gp.1}. We
restrict the attention to the case when \no{gp.4} holds with $M$ given by
\no{gp'.6}, and recall that this is the case with probability
$\ge 1-Ce^{-h^{-2n}/C}$. The restrictions \no{gp.12}, \no{gp.14} on $\delta $
will be further strengthened below.
\par Using a formula of the type \no{gp.22} we shall show that for every
$z\in \widetilde{\Omega }$, the determinant of $P_\delta (z)$ is very
likely not to be too small. For that we study the \proba{} distribution of the
random matrix $E_{-+}^\delta $, and show that we are close enough to
the Gaussian case to be able to apply the results of Section \ref{er} to the
determinant. Recall that we work under the assumption \no{gp.4}, which is
fulfilled with \proba{} $\ge 1-Ce^{-h^{-2n}/C}$. We want to study the map
\begin{eqnarray*}
Q\mapsto E_{-+}^\delta =E_{-+}^0+\sum_{j=1}^\infty E_-^0(\delta
\widetilde{Q}E^0)^{j-1}\delta \widetilde{Q}E_+^0\\
=E_{-+}^0+\delta E_-^0\widetilde{Q}E_+^0+\sum_{j=2}^\infty
\Big( {\delta Ch^{-n}\over
\sqrt{\alpha }}\Big)^j{1\over \sqrt{\alpha }}R_j,
\end{eqnarray*}
where $\widetilde{Q}=(\widetilde{P}-z)\inv Q$ and $\Vert R_j\Vert _{{\rm HS}}\le 1$.
Here, we used that $\Vert E_\pm ^0\Vert ,\, \Vert E^0\Vert \le
1/\sqrt{\alpha }$. We can rewrite this further as
\begin{eqnarray}\label{lb.1}
E_{-+}^\delta &=&E_{-+}^0+{\delta \over \alpha }\Big( \sqrt{\alpha
}E_-^0\widetilde{Q}\sqrt{\alpha }E_+^0+\frac{\delta C^2h^{-2n}}{\sqrt{\alpha
}}\sum_{j=0}^\infty \Big( {\delta Ch^{-n}\over
\sqrt{\alpha }}\Big)^j R_{j+2}\Big) \\
&=:&E_{-+}^0+{\delta \over \alpha }\widehat{Q}. \nonumber
\end{eqnarray}
\par We strengthen \no{gp.12}, \no{gp.14} to
\ekv{lb.5}
{
\frac{\delta M^2}{\sqrt{\alpha}} \ll 1,
}
and recall that by (\ref{gp'.6}), $M=C_{1}h^{-n}$.
Then
$$
{\delta Ch^{-n}\over \sqrt{\alpha }}\ll {h^n\over C}\ll 1,
$$
and we get
\ekv{lb.6}
{
\widehat{Q}=\sqrt{\alpha }E_-^0\widetilde{Q}\sqrt{\alpha }E_+^0+T,\ \Vert
T\Vert _{{\rm HS}}\le \frac{C^2h^{-2n}\delta}{\sqrt{\alpha}} \ll 1.
}
\par In view of \no{gp.1} we have
\ekv{lb.7}
{
\sqrt{\alpha }E_-^0\widetilde{Q}\sqrt{\alpha }E_+^0=\sqrt{\alpha
}E_-^0(\widetilde{P}-z)\inv
\widehat{S}\sum_{j,k}\widehat{e}_j\alpha _{j,k}
\widetilde{e}_k^*\widetilde{S}\sqrt{\alpha
}E_+^0,
}
where we recall from \no{gu.8} that
\ekv{lb.8}
{
\sqrt{\alpha }E_+^0v_+=\sum_1^N v_+(j)e_j,\quad (\sqrt{\alpha
}E_-^0v)(j)=(v\vert f_j),\quad 1\le j\le N,
}
where $e_1,..,e_N$
and $f_1,...,f_N$ are \on{} bases for $\ran(1_{[0,\alpha
]}(P(z)^*P(z)))$ and $\ran(1_{[0,\alpha
]}(P(z)P(z)^*))$ respectively, writing $\ran(B)$ for the range of $B$.
\par Here, we wish to apply the discussion of Section \ref{ap}. The \op{}s
$\widetilde{S}\sqrt{\alpha }E_+^0$, $\sqrt{\alpha
}E_-^0(\widetilde{P}-z)\inv \widehat{S}$ are clearly \hs{} of rank
$\le $ $N$. Let $\widetilde{t}_j$, $\widehat{t}_j$ denote the
singular values of these \op{}s so that $\widetilde{t}_j=\widehat{t}_j=0$
for $j\ge N+1$.
\begin{lemma}\label{lba}
We have
\ekv{lb.9}
{
{1\over C}\le \widetilde{t}_j,\, \widehat{t}_j\le C,\ 1\le j\le N,
}
where $C>0$
is \indep{} of $h,\alpha $.
\end{lemma}
\begin{proof}
\no{lb.8} shows that $\Vert \sqrt{\alpha }E_+^0\Vert ,\Vert \sqrt{\alpha
}E_-^0\Vert \le 1$, and clearly $\Vert
(\widetilde{P}-z)\inv\widehat{S}\Vert ,\Vert \widetilde{S}\Vert ={\cal
O}(1)$, so the upper bound in \no{lb.9} is clear.
\par On the other hand, $\sqrt{\alpha }E_+^0v_+$ is confined to a \bdd{}
region in phase space, and it is easy to show that
$$
C\Vert \widetilde{S}\sqrt{\alpha }E_+^0v_+\Vert \ge \Vert
\sqrt{\alpha }E_+^0v_+\Vert =\Vert v_+\Vert ,
$$
which implies that the smallest \ev{} of $((\widetilde{S}\sqrt{\alpha
}E_+^0)^*(\widetilde{S}\sqrt{\alpha
}E_+^0))^{1/2}$ is $\ge $ $1/C$. The lower bound on $\widetilde{t}_j$
follows. The argument for $\widehat{t}_j$ is essentially the same.
\end{proof}
\par Let $\widehat{f}_1,...,\widehat{f}_N$ and
$\widetilde{f}_1,...,\widetilde{f}_N$ be \on{} bases in ${\bf C}^N$
of \ef{}s of $((\sqrt{\alpha }E_-^0(\widetilde{P}-z)\inv
\widehat{S})(\sqrt{\alpha }E_-^0(\widetilde{P}-z)\inv \widehat{S})^*
)^{1/2}$ and $((\widetilde{S}\sqrt{\alpha }E_+^0)^*
(\widetilde{S}\sqrt{\alpha }E_+^0))^{1/2}$ respectively, with $\widehat{t}_j$
and $\widetilde{t}_k$ as the corresponding \ev{}s. We can then choose the
\on{} bases $\{ \widehat{e}_j\}$, $\{\widetilde{e}_j\}$ in $L^2$ so that
\ekv{lb.9.5}
{\widehat{e}_j={1\over \widehat{t}_j}(\sqrt{\alpha
}E_-^0(\widetilde{P}-z)\inv \widehat{S})^*\widehat{f}_j,\quad
\widetilde{e}_j={1\over \widetilde{t}_j}(\widetilde{S}\sqrt{\alpha
}E_+^0)\widetilde{f}_j,
}
for $j=1,2,...,N$. Then from \no{lb.7}, we get
\ekv{lb.10}
{
\sqrt{\alpha }E_-^0\widetilde{Q}\sqrt{\alpha }E_+^0=\sum_{1\le j,k\le
N}\widehat{t}_j\widetilde{t}_k\alpha _{j,k}\widehat{f}_j\widetilde{f}_k^*.
}
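\par Indeed, write $A=\sqrt{\alpha }E_-^0(\widetilde{P}-z)\inv \widehat{S}$
and $B=\widetilde{S}\sqrt{\alpha }E_+^0$, so that $\widehat{e}_j={1\over
\widehat{t}_j}A^*\widehat{f}_j$ and $\widetilde{e}_k={1\over
\widetilde{t}_k}B\widetilde{f}_k$ by \no{lb.9.5}. Then
$$
A\widehat{e}_j={1\over \widehat{t}_j}AA^*\widehat{f}_j=\widehat{t}_j\widehat{f}_j,\quad
B^*\widetilde{e}_k={1\over \widetilde{t}_k}B^*B\widetilde{f}_k=\widetilde{t}_k\widetilde{f}_k,
$$
and \no{lb.10} follows from \no{lb.7}.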
\par Now we will be a little more specific about the assumption
\no{gp.4}. We will restrict the attention to the set ${\cal Q}_M$ of
matrices $(\alpha _{j,k}(\omega ))$ such that
\ekv{lb.11}
{
\Vert \widehat{S}_2\sum \alpha _{j,k}(\omega
)\widehat{e}_j\widetilde{e}_k^*\widetilde{S}\Vert _{{\rm HS}}\le M,
}
which implies \no{gp.4} and which is fulfilled with \proba{} $\ge$
$1-C\exp (-h^{-2n}/C)$. Here we recall that we assumed $\widehat{m}$ to be
integrable and wrote $\widehat{S}=\widehat{S}_1\widehat{S}_2$ with
$\widehat{S}_j\in{\rm Op}(S(\widehat{m}^{1/2}))$. (When $\widetilde{m}$ is
integrable instead, we make a corresponding factorization of $\widetilde{S}$.)
\par \no{lb.6} can be reformulated as
\ekv{lb.12}
{
\widehat{Q}(\alpha )={\rm diag\,}(\widehat{t}_j)\circ \Big( (\alpha
_{j,k})_{1\le j,k\le N}+\widetilde{T}(\alpha_{\cdot} )\Big) \circ {\rm
diag\,}(\widetilde{t}_k),
}
\ekv{lb.13}
{
\Vert \widetilde{T}(\alpha_{\cdot} )\Vert _{{\rm HS}}\le {\cal O}(1)\frac{\delta M^2}{\sqrt{\alpha}},
}
for $(\alpha _{j,k})\in{\cal Q}_M$.
\par Let $\3 (\alpha _{j,k})\3 $ denote the norm in \no{lb.11} and let
${\cal H}$ be the corresponding Hilbert space of ${\bf N}\times {\bf
N}$ matrices. We shall view ${\rm HS}({\bf C}^N,{\bf C}^N)=:{\cal H}_N$ as a
subspace of ${\cal H}$ in the natural way. Note that the two norms are
\ufly{} equivalent on this subspace.
\par The Cauchy inequality implies (after decreasing $M$ by a constant
factor) that
the differential of the map $\alpha_{\cdot} \mapsto \widetilde{T}(\alpha_{\cdot} )$ satisfies the
following estimate on ${\cal Q}_M$:
\ekv{lb.14}
{
\Vert d\widetilde{T}\Vert _{{\cal H}\to{\cal H}_N}={\cal O}(1)\frac{\delta M}{\sqrt{\alpha}}.
}
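Indeed, by \no{lb.13}, $\widetilde{T}$ is a \hol{} \fu{} of $\alpha_{\cdot}$ with
$\Vert \widetilde{T}\Vert _{{\rm HS}}={\cal O}(1)\delta M^2/\sqrt{\alpha }$
on ${\cal Q}_M$, so for $\alpha_{\cdot} \in{\cal Q}_{M/2}$ the Cauchy inequality
on the ball of radius $M/2$ centered at $\alpha_{\cdot}$ gives
$$
\Vert d\widetilde{T}(\alpha_{\cdot} )\Vert _{{\cal H}\to {\cal H}_N}\le {{\cal
O}(1)\delta M^2/\sqrt{\alpha }\over M/2}={\cal O}(1)\frac{\delta M}{\sqrt{\alpha}},
$$
and it remains to replace $M$ by $M/2$ in the definition of ${\cal Q}_M$.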
On ${\cal H}$, ${\cal H}_N$
we have the basic \proba{} measures,
\ekv{lb.15}
{
\mu _{\cal H}=\prod_{j,k=1}^\infty (e^{-\vert \alpha _{j,k}\vert
^2}{L(d\alpha _{j,k})\over \pi }),\quad \mu _{{\cal H}_N}=
\prod_{j,k=1}^N (e^{-\vert \alpha _{j,k}\vert
^2}{L(d\alpha _{j,k})\over \pi }).
}
We shall now estimate $\Pi _*(\mu _{\cal H})$ on ${\cal Q}_M$, where
\ekv{lb.15.5}
{
\Pi ((\alpha_{j,k}) )=(\alpha _{j,k})_{1\le j,k\le
N}+\widetilde{T}(\alpha_{\cdot} ),
}
and to do so, we identify $\widetilde{T}(\alpha_{\cdot} )$ with its image in
${\cal H}$ under the natural inclusion ${\cal H}_N\subset {\cal H}$, and
write
\ekv{lb.16}
{
\Pi =\Pi _0\circ \kappa ,\quad \kappa (\alpha_{\cdot} )=\alpha_{\cdot}
+\widetilde{T}(\alpha_{\cdot} ),\quad \Pi _0(\alpha_{\cdot} )=(\alpha _{j,k})_{1\le j,k\le N},
}
for $\alpha_{\cdot} =(\alpha _{j,k})\in{\cal H}$.
\par We first proceed formally, ignoring some technical difficulties due to
the infinite dimension. We have
\eeeekv{lb.17}
{
\vert \, \Vert \kappa (\alpha_{\cdot} )\Vert _{{\rm HS}}^2-\Vert \alpha_{\cdot} \Vert
_{{\rm HS}}^2\, \vert &=& \vert \, \Vert \Pi _0\kappa (\alpha_{\cdot} )\Vert _{{\rm HS}}^2-\Vert \Pi
_0\alpha_{\cdot} \Vert _{{\rm HS}}^2\, \vert
}
{
&=& \vert \, \Vert \Pi _0\kappa (\alpha_{\cdot} )\Vert _{{\rm HS}}-\Vert \Pi
_0\alpha_{\cdot} \Vert _{{\rm HS}}\, \vert\times \vert \, \Vert \Pi _0\kappa (\alpha_{\cdot} )\Vert _{{\rm HS}}+\Vert \Pi
_0\alpha_{\cdot} \Vert _{{\rm HS}}\, \vert
}
{
&\le& \Vert \widetilde{T}(\alpha_{\cdot} )\Vert_{{\rm HS}} \big( 2\3 \alpha_{\cdot} \3+{\cal O}(\frac{\delta
M^2}{\sqrt{\alpha}})\big)
}
{
&\le& {\cal O}(1)\frac{\delta M^3}{\sqrt{\alpha}},
}
where we strengthen the assumption \no{lb.5} to
\ekv{lb.20}
{ \frac{{\delta }M^3}{\sqrt{\alpha}}\ll 1,\hbox{ or equivalently }{\delta }\ll h^{3n+1/2}. }
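Indeed, by \no{gp'.6} and with $\alpha =Ch$, $C\gg 1$, we have
$$
\frac{\delta M^3}{\sqrt{\alpha }}\backsim \delta h^{-(3n+{1\over 2})},
$$
so the two formulations of \no{lb.20} agree.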
As for the Jacobian of $\kappa $, we recall that
if $A:{\cal H}\to {\cal H}$ is linear with $\Vert A\Vert _{\rm tr}\ll 1$
(\ufly{} \wrt{} $M$), then $\det (1+A)=1+{\cal O}(\Vert A\Vert _{\tr})$.
Also, if $A$ is of rank $\le N^2$, we know that $\Vert A\Vert _\tr \le
N^2\Vert A\Vert $, so in our case we get from \no{lb.14} that $$
\det {\partial \kappa \over \partial x}=1+{\cal
O}(1)\frac{\delta N^2M}{\sqrt{\alpha}}.
$$
Here the remainder term is $\ll 1$ in view of the assumption \no{lb.20} and
the fact that $N\ll M$. (Recall that $N={\cal O}(\alpha ^{\kappa}h^{-n})$ by \no{fc.50}.)
\par If $F$ is a locally defined \hol{} map $:{\cal H}\to {\cal H}$,
then
$$
L(dF(x))=\vert \det {\partial F\over \partial x}\vert ^2L(dx),
$$
so in our case,
$$
L(d\kappa (x))=(1+{\cal O}(1)\frac{\delta N^2M}{\sqrt{\alpha}})L(dx).
$$
\par Combining this with \no{lb.17}, we get
$$
\kappa _*(\mu _{\cal H})\le (1+{\cal O}(1)\frac{\delta M^3}{\sqrt{\alpha}})\mu _{\cal H}\hbox{
on }{\cal Q}_M.
$$
Since $(\Pi _0)_*\mu _{\cal H}=\mu _{{\cal H}_N}$, we conclude that
\ekv{lb.21}
{
\Pi _*(\mu _{\cal H})\le (1+{\cal O}(1)\frac{\delta M^3}{\sqrt{\alpha}})\mu _{{\cal H}_N}\hbox{
on }{\cal Q}_M.
}
At the end of this section we shall complete the proof of \no{lb.21}
by means of finite dimensional approximations.
\par For $\alpha_{\cdot} =\alpha_{\cdot} (\omega )\in{\cal Q}_M$ we want to estimate the
\proba{} that $\vert \det E_{-+}^\delta \vert $ is small. According to
Proposition \ref{er4}, the $\mu _{{\cal H}_N}(d\check{Q})$-measure of the
set of matrices $\check{Q}$ with
$$
\vert \det ({\rm diag\,}(\widehat{t}_j)\inv\circ {\alpha \over \delta
}E_{-+}^0\circ {\rm diag\,}(\widetilde{t}_j)\inv +\check{Q})\vert \le
e^{a}
$$
is
$$
\le Ce^{-{1\over 2}(C_2+(N-{1\over 2})\ln N-2N-a)},
$$
if
\ekv{lb.22}
{ a\le C_2+(N+{1\over 2})\ln N-2N. }
In view of \no{lb.1}, \no{lb.12},
\no{lb.21} this is also (after a slight increase of $C$) an upper bound for
the \proba{} to have $(\alpha _{j,k})\in{\cal Q}_M$ and $$
\vert \det ({\rm diag\,}(\widehat{t}_j)\inv\circ ({\alpha \over \delta
}E_{-+}^0+\widehat{Q})\circ {\rm diag\,}(\widetilde{t}_j)\inv )\vert \le
e^{a},
$$
or equivalently that
$$
\vert \det E_{-+}^\delta \vert \le e^{{a}}({\delta \over \alpha
})^N\prod_1^N\widehat{t}_j\prod_1^N\widetilde{t}_j.
$$
\par Write
\ekv{lb.25}
{
a=C_2+(N-{1\over 2})\ln N-2N-b
}
and restrict the attention to $b\ge 0$. Then
$$
e^{a}=e^{C_2+(N-{1\over 2})\ln N-2N-b}
$$
and we get
\ekv{lb.26}
{
\mathsf P(\hbox{\no{lb.11} holds and }\vert \det E_{-+}^\delta\vert \le e^{N\ln{1\over
\alpha }-N\ln{1\over\delta }+(N-{1\over 2})\ln N-CN+
C_2-b})\le e^{-b}.
}
\par Summing up the discussion so far, we have
\begin{prop}\label{lb1}
Consider the Grushin \pb{} \no{gp.16}. Assume \no{fc.44}
and choose $\alpha =Ch$, $C\gg 1$. Then there exist positive constants
$\widetilde{C}_0,\widetilde{C}_1,\widetilde{C}_2,\widetilde{C}$ such that for $b\ge 0$
\eekv{lb.28}
{
&& \mathsf P(\hbox{\no{lb.11} holds and }
\vert \det E_{-+}^\delta \vert
\ge e^{-\widetilde{C}_0h^{\kappa-n}\ln {1\over h}-\widetilde{C}_1-
\widetilde{C}_2h^{\kappa-n}\ln{1\over \delta }-b})
}
{
&&\ge
1-\widetilde{C}e^{-b}-\widetilde{C}e^{-C_0h^{-2n}}.
}
Here $\delta $ is assumed to satisfy \no{lb.20}.
\end{prop}
\par In view of \no{gp.21}, \no{gp.22}, we get
\begin{theo}\label{lb2}
We now return to the original $P_\delta (z)$ in \no{gp.7} and we assume \no{fc.44}
\ufly{} for all $z$ in some open set $\widehat{\Omega }\Subset
\widetilde{\Omega }$. If
$\delta $ satisfies \no{gp.12}, \no{gp.14} there is a constant $C>0$ such that
\ekv{lb.29}
{
\ln \vert \det P_\delta \vert \le {1\over (2\pi h)^n}(\iint \ln \vert
p\vert dxd\xi +Ch^{\kappa}\ln{1\over h}),\ \forall
z\in\widehat{\Omega },
}
with \proba{} $\ge 1-Ce^{-C_0h^{-2n}}$. If $\delta $ satisfies the stronger condition
\no{lb.20},
then there are constants $C,
\widetilde{C},C_0>0$ such that for every $z\in\widehat{\Omega }$ and
$\epsilon \ge 0$:
\ekv{lb.30}
{
\ln \vert \det P_\delta \vert \ge {1\over (2\pi h)^n}(\iint \ln \vert
p\vert dxd\xi -Ch^{\kappa}(\ln{1\over h}+\ln{1\over \delta })-\epsilon )
}
with \proba{} $\ge 1-e^{-\epsilon (2\pi
h)^{-n}}-\widetilde{C}e^{-C_0h^{-2n}}$.\end{theo}
\par Notice that the last term in the lower bound for the \proba{} is much
smaller than the second term, and can therefore be eliminated.
\par We end this section by completing the proof of \no{lb.21} by finite
dimensional approximation. (The reader may prefer to proceed directly to
Section \ref{sa}.)
\begin{lemma}\label{lb3}
We can choose the \on{} bases $\{\widehat{e}_j\}$, $\{\widetilde{e}_j\}$
in $L^2$ so that \no{lb.9.5} is fulfilled for $1\le j\le N$ and such that
the square of the norm in \no{lb.11} is equivalent to
\ekv{lb.31}
{
\sum_{j,k}\vert \alpha _{j,k}\vert ^2\widehat{\mu }_2(j)^2\widetilde{\mu
}(k)^2,
}
where $\widehat{\mu }_2(j)$, $\widetilde{\mu }(k)$ denote the singular
values of $\widehat{S}_2$ and $\widetilde{S}$ respectively.
\end{lemma}
\par In this lemma we did not try to have any \uf{}ity \wrt{} $h$.
\par Assume the lemma for a while. Then for $\widetilde{N}\ge N+1$, we can
replace $Q_\omega $ in \no{gp.1} by
$$
Q_\omega ^{\widetilde{N}}=\widehat{S}\circ \Big( \sum_{1\le j,k\le
\widetilde{N}}\alpha _{j,k}(\omega )\widehat{e}_j\widetilde{e}_k^*
+\sum_{j{\rm \, or\,}k\ge \widetilde{N}+1}\beta
_{j,k}^{\widetilde{N}}(\alpha ^{\widetilde{N}}(\omega
))\widehat{e}_j\widetilde{e}_k^*\Big)\circ
\widetilde{S},
$$
which depends on finitely many \rv{}s. Here $\alpha
^{\widetilde{N}}(\omega )=(\alpha _{j,k}(\omega ))_{1\le j,k\le\widetilde{N}}$
and $\beta ^{\widetilde{N}}_{j,k}$ are the linear \fu{}s of $\alpha
^{\widetilde{N}}$ which minimize
$$\Vert \widehat{S}_2\circ (\sum_{j,k\le
\widetilde{N}}\alpha _{j,k}\widehat{e}_j\widetilde{e}_k^*+
\sum_{j\,{\rm or\,}k>\widetilde{N}}\beta _{j,k}\widehat{e}_j\widetilde{e}_k^*)\circ
\widetilde{S}\Vert _{\rm HS}.$$
Here we can use the
$\widehat{e}_j$, $\widetilde{e}_j$ of Lemma \ref{lb3}. On the set ${\cal
Q}_M$, we have $\widehat{S}_2\circ Q^{\widetilde{N}}\circ \widetilde{S}\to
\widehat{S}_2Q\widetilde{S}$ in \hs{} norm, and $\Vert
\widehat{S}_2Q^{\widetilde{N}}\widetilde{S}\Vert
_{\rm HS}\le M$ when $\alpha (\omega )\in{\cal Q}_M$.
\par We get the corresponding matrix $E_{-+}^{\delta
,\widetilde{N}}=E_{-+}^0+{\delta \over \alpha
}\widehat{Q}_{\widetilde{N}}$ and $\widehat{Q}_{\widetilde{N}}$ can be
written as in \no{lb.12} with $\widetilde{T}(\alpha )$ replaced by
$\widetilde{T}_{\widetilde{N}}(\alpha )$ satisfying \no{lb.13}. Now instead
of $\mu _{\cal H}$ we have the finite dimensional measure
$$
\mu _{{\cal H}_{\widetilde{N}}}=\prod_{j,k=1}^{\widetilde{N}}(e^{-\vert
\alpha _{j,k}\vert ^2}{L(d\alpha _{j,k})\over \pi }),
$$
which we can view as the restriction of $\mu _{\cal H}$ to the $\sigma $-algebra
generated by the $\alpha _{j,k}$ with $1\le j,k\le\widetilde{N}$, and we
define $\Pi ^{\widetilde{N}}$
as in \no{lb.15.5} with $\widetilde{T}$ replaced by
$\widetilde{T}_{\widetilde{N}}$. The subsequent arguments now become
rigorous since we are in finite dimension and we get
$$
\Pi ^{\widetilde{N}}_*(\mu _{{\cal
H}_{\widetilde{N}}})\le (1+{\cal O}(1)\frac{\delta M^3}{\sqrt{\alpha}})\mu _{{\cal H}_N}\hbox{
on }{\cal Q}_M.
$$
Since $\Pi ^{\widetilde{N}}\to \Pi $ on ${\cal Q}_M$, we obtain \no{lb.21}
in the limit.
We next prove Lemma \ref{lb3}.
\begin{proof}
We consider first the following simplified problem. Let $m_S\le 1$ be a
square integrable order \fu{} and let $S\in{\rm Op\,}(S(m_S))$ be elliptic.
We look for an \on{} basis $e_1,e_2,..$ in $L^2$ such that
\ekv{lb.32}
{
\Vert \sum u_kSe_k\Vert ^2\backsim \sum \mu _k(S)^2\vert u_k\vert ^2,
}
where $\mu _1(S)\ge \mu _2(S)\ge ...\to 0$ are the singular values of $S$
and such that
$$
e_1,...,e_{N_0}
$$
is a prescribed \on{} family of \fu{}s in ${\cal S}$.
\par Since $\sum \mu _j^2= {\cal O}(h^{-n})$, we have $N\mu _N^2={\cal
O}(h^{-n})$, and using also that $\mu _N\le {\cal O}(1)$, we get
$$
\mu _N\le {{\cal O}(1)\over (Nh^n)^{1/2}+1}.
$$
On the other hand there exists a constant $\kappa _0>0$ such that
$$m_S(\rho )\ge {1\over C_0}\langle \rho \rangle ^{-\kappa _0},$$
and we can use the mini-max principle to compare the \ev{}s of $(S^*S)^{1/2}$
with those of $(1+((hD)^2+x^2))^{-\kappa _0/2}$ and deduce that
$$
\mu _N\ge {1\over {\cal O}(1)}{1\over (1+hN^{1/n})^{\kappa _0/2}}.
$$
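Indeed, the \ev{}s of $(hD)^2+x^2$ are $h(2\vert k\vert +n)$,
$k\in{\bf N}^n$, so the number of \ev{}s in $[0,\lambda ]$ is $\backsim
(\lambda /h)^n$ for $\lambda \ge Ch$, and the $N$-th \ev{} (in increasing
order) is $\backsim hN^{1/n}$. Hence the $N$-th \ev{} of
$(1+(hD)^2+x^2)^{-\kappa _0/2}$ is
$$
\backsim (1+hN^{1/n})^{-\kappa _0/2},
$$
and the mini-max comparison gives the lower bound on $\mu _N$.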
\par If $0<\mu \ll 1$, we have, with $p$ denoting the symbol of
$(S^*S)^{1/2}$, that
\begin{eqnarray*}
{\rm dist\,}(0,p^{-1}([0,2\mu ]))&\ge &{\rm dist\,}(0,m_S^{-1}([0,{2\mu
\over C}]))\\
&\ge & {\rm dist\,}(0,\{ \rho ;\, {1\over C_0}\langle \rho \rangle
^{-\kappa _0}\le {2\mu \over C}\} )\\
&\ge &{1\over C_1}\mu ^{-1/\kappa _0}.
\end{eqnarray*}
\par If $u$ is a corresponding normalized \ef{}, we have $(\mu ^{-1}(S^*S)^{1/2}-1
)u=0$, and we notice that $\mu ^{-1}(S^*S)^{1/2}\in{\rm Op\,}(S(\mu
^{-1}m_S))$, where $\mu ^{-1}m_S$ satisfies \ufly{} the axioms of an
order \fu{}, when $\mu \to 0$. We conclude that
$$
u={\cal O}(1),\hbox{ in }H(m)
$$
\ufly{} \wrt{} $m$ if $m=m_\mu $ belongs to a family of order \fu{}s that
satisfy \ufly{} the axioms and $m=1$ on $\{ \rho \in T^*{\bf R}^n;\,
p(\rho )\le 2\mu \}$. From this, we deduce that
$$
(\phi \vert u)={\cal O}(\mu ^N),\ \forall N,
$$
if $\phi \in {\cal S}$ is fixed, and $\mu \to 0$.
\par Let $f_1,f_2,...$ be an \on{} basis of \ef{}s of $(S^*S)^{1/2}$ with
$\mu _1\ge \mu _2\ge ... $ the corresponding decreasing enumeration of \ev{}s.
Then $(e_j\vert f_k)={\cal O}(k^{-\infty })$, $1\le j\le N_0$, $k\ge 1$.
Let $N\gg N_0$. For $j\ge N+1$, put
\ekv{lb.32a}
{
g_j=f_j-\Pi _{E_{N_0}}f_j=f_j+r_j,\ E_{N_0}\ni r_j={\cal O}(j^{-\infty }).
}
Here, we let $E_{N_0}=(e_1,...,e_{N_0})$
be the span of $e_1,...,e_{N_0}$ and $\Pi _{E_{N_0}}:L^2\to E_{N_0}$ be the
corresponding \og{} projection. Then $g_j\in E_{N_0}^\perp$ and for
$j,k>N$:
\ekv{lb.32b}
{
(g_j\vert g_k)=\delta _{j,k}+{\cal O}(j^{-\infty }k^{-\infty }).
}
Here the estimates are \uf{} \wrt{} $N$ and if $N$
is \sufly{} large, we see that $G=((g_j\vert g_k))$ is a positive definite
matrix of which any real power has elements satisfying \no{lb.32b}.
Let $(a_{j,k})=G^{-1/2}$, so that $a_{j,k}=\delta _{j,k}+{\cal
O}(j^{-\infty }k^{-\infty })$, $j,k>N$. Put
$$
e_j=\sum_{k>N}a_{j,k}g_k,\ j>N.
$$
Then $e_j$, $j>N$
form an \on{} basis in the span $G_N^\perp$ of $g_{N+1},g_{N+2},...$. We
see that for $j>N$:
\ekv{lb.32c}
{
e_j=f_j+\sum_{k>N}{\cal O}(j^{-\infty }k^{-\infty })f_k+\widetilde{r}_j,\
E_{N_0}\ni\widetilde{r}_j={\cal O}(j^{-\infty }).
}
\par $G_N=(G_N^\perp )^\perp$ is a space of dimension $N$, containing
$E_{N_0}$. For $1\le j\le N$, we consider
$$
\Pi _{G_N}f_j=f_j-\Pi _{G_N^\perp}f_j,
$$
\begin{eqnarray*}
\Pi _{G_N^\perp}f_j&=&\sum_{N+1}^\infty (f_j\vert e_k)e_k\\
&=& \sum_{N+1}^\infty (f_j\vert \widetilde{r}_k)e_k\\
&=&\sum_{N+1}^\infty {\cal O}(k^{-\infty })(f_k+\sum_{\ell =N+1}^\infty
{\cal O}(k^{-\infty }\ell^{-\infty })f_{\ell}+\widetilde{r}_k )\\
&=& \sum_{N+1}^\infty {\cal O}(k^{-\infty })f_k+{\cal O}(N^{-\infty }),
\end{eqnarray*}
where the last term is in $E_{N_0}$. Thus, we get for $1\le j\le N$:
$$
\Pi _{G_N}f_j=f_j+\sum_{N+1}^\infty {\cal O}(k^{-\infty
})f_k+\widehat{r}_j,\ E_{N_0}\ni \widehat{r}_j={\cal O}(N^{-\infty }).
$$
This implies
\ekv{lb.32d}
{
\Pi _{G_N}f_j=f_j+\sum_1^\infty {\cal O}((k+N)^{-\infty })f_k,\ 1\le j\le
N.
}
\par Now, complete $e_1,...,e_{N_0}$ to an \on{} basis $e_1,...,e_N$ in
$G_N$. Then $e_1,e_2,...$ is an \on{} basis in $L^2$. \no{lb.32d} shows
that $\Pi _{G_N}f_1,...,\Pi _{G_N}f_N$ is very close to being an \on{}
basis in $G_N$, and we see that
\ekv{lb.32e}
{
e_j=\sum_{k=1}^N u_{j,k}f_k+\sum_1^\infty {\cal O}((k+N)^{-\infty })f_k,\
1\le j\le N,
}
where $(u_{j,k})_{1\le j,k\le N}$ is a unitary matrix. \no{lb.32c},
\no{lb.32e} imply that for all $j\ge 1$:
\ekv{lb.33}
{
e_j=\sum_{k=1}^\infty a_{j,k}f_k+\sum_{k=1}^\infty {\cal O}((j+N)^{-\infty
}(k+N)^{-\infty })f_k,
}
where $a_{j,k}=u_{j,k}$ for $j,k\le N$ and $a_{j,k}=\delta _{j,k}$ when
$\max (j,k)>N$.
\par We now fix $N$ \sufly{} large so that the above estimates hold. Using
\no{lb.33}, we get
$$
e_j=f_j+\sum_k {\cal O}(j^{-\infty }k^{-\infty })f_k,\
Se_j=\mu _jf_j+\sum_k {\cal O}(j^{-\infty }k^{-\infty })f_k,
$$
$$
(Se_k\vert Se_j)=\mu _k^2\delta _{j,k}+{\cal O}(k^{-\infty }j^{-\infty }),
$$
\begin{eqnarray*}
\Vert \sum_1^\infty u_k Se_k\Vert ^2&=&\sum_{k=1}^\infty \mu
_k^2u_k\overline{u}_k+\sum_{k=1}^\infty \sum_{j=1}^\infty {\cal
O}(k^{-\infty }j^{-\infty })u_k\overline{u}_j\\
&=& ((\mu ^2+K)u\vert u)_{\ell^2}\\
&=& ((1+\mu \inv K\mu \inv )\mu u\vert \mu u)
\end{eqnarray*}
where $\mu $ denotes the \op{} ${\rm diag\,}(\mu _j)$. Here $\mu \inv K\mu
\inv$ is compact: $\ell^2\to \ell^2$, so
$1+\mu \inv K\mu \inv$ is a non-negative \sa{} Fredholm \op{} of index 0.
If $u(j)={\cal O}(j^{M_0})$ and $(1+\mu \inv K\mu \inv )u=0$, then $u={\cal
O}(j^{-\infty })$. Now assume that $(1+\mu \inv K\mu \inv )v=0$ for some
$0\ne v\in\ell^2$. Then $v={\cal O}(j^{-\infty })$, so $u:=\mu \inv
v\in\ell^2$ and
$$ 0=((1+\mu \inv K\mu \inv )v\vert v)=((\mu ^2+K)u\vert
u)=\Vert S(\sum_1^\infty u_ke_k)\Vert^2>0, $$
which is a contradiction. Thus $1+\mu \inv K\mu \inv$ is injective, hence
bijective by the Fredholm property, and we finally conclude that
\no{lb.32} holds.
\par Now we can finish the proof of the lemma. We choose
$\{\widehat{e}_j\}$, $\{\widetilde{e}_j\}$ in \no{lb.9.5} so that
\no{lb.32} holds with $S=\widehat{S}_2$, $e_j=\widehat{e}_j$ and
$S=\widetilde{S}^*$, $e_j=\widetilde{e}_j$ respectively. Then the square
of the norm \no{lb.11} is equal to
\ekv{lb.34}
{
\sum_{i,j,k,\ell }(\widehat{S}_2\widehat{e}_i\vert
\widehat{S}_2\widehat{e}_j)(\widetilde{S}^*\widetilde{e}_k\vert
\widetilde{S}^*\widetilde{e}_\ell )\alpha _{i,k}\overline{\alpha
}_{j,\ell}=(\widehat{{\cal S}}\otimes \widetilde{{\cal S}}\alpha \vert
\alpha )_{\ell^2\otimes \ell^2},
}
where
$$
\widehat{{\cal S}}_{j,i}=(\widehat{S}_2\widehat{e}_i\vert
\widehat{S}_2\widehat{e}_j),\ \widetilde{{\cal S}}_{\ell
,k}=(\widetilde{S}^*\widetilde{e_k}\vert
\widetilde{S}^*\widetilde{e}_\ell ).
$$
From \no{lb.32} we know that
\begin{eqnarray*}
&&\widehat{{\cal S}}=\widehat{\mu }\widehat{P}\widehat{P}\widehat{\mu },\
\widehat{\mu }={\rm diag\,}(\widehat{\mu }_2(j))\\
&&\widetilde{{\cal S}}=\widetilde{\mu }
\widetilde{P}\widetilde{P}\widetilde{\mu },\
\widetilde{\mu }={\rm diag\,}(\widetilde{\mu }(j)),
\end{eqnarray*}
where $\widehat{P}$, $\widetilde{P}$ are positive \sa{} \op{}s satisfying
$$
{1\over C}I\le \widehat{P},\,\widetilde{P}\le CI.
$$
Then \no{lb.34} can be written
$$
\Vert (\widehat{P}\widehat{\mu }\otimes \widetilde{P}\widetilde{\mu
})\alpha \Vert _{\ell^2\otimes \ell^2}^2,
$$
and the lemma follows.
\end{proof}
\section{Spectral \asy{}s when $dp,\, d\overline{p}$ are \indep{}}\label{sa}
\setcounter{equation}{0}
\par Let $\Gamma \Subset \Omega $ be open with $C^2$
\bdy{} and assume that for every $z\in \partial \Gamma $:
\eekv{sa.1}
{&&\Sigma _z:=p\inv (z)\hbox{ is a smooth sub-\mfld{} of }T^*{\bf R}^n\hbox{
on}}
{&&\hbox{which }dp,d\overline{p}\hbox{ are linearly \indep{}
at every point.}}
This assumption, which is satisfied also in a \neigh{} of $\partial
\Gamma $,
implies that ${\rm codim\,}(\Sigma _z)=2$. The assumption
can also be rephrased more briefly by saying that $\partial\Gamma$
does not contain
any critical value of $p:{\mathbb R}^{2n}\to {\mathbb R}^2$. Here $p$ is the leading symbol of the
original ($z$-\indep{}) \op{}. If $p_z(\rho )=(\widetilde{p}(\rho )-z)\inv
(p(\rho )-z)$ is the principal symbol of $(\widetilde{P}-z)\inv (P-z)$, we
introduce
\ekv{sa.2}
{I(z)=\int_{{\bf R}^{2n}}\ln \vert p_z(\rho )\vert d\rho }
which is the same integral as in \no{lb.29}, \no{lb.30} (where $z$ was fixed).
It is easy to see that $I(z)$ is a smooth \fu{} on the \neigh{} of
$\partial \Gamma $ where \no{sa.1} holds and as in
\cite{MeSj} we can compute $\Delta _zI(z)$. Since $z\mapsto p_z(\rho )$ is
\hol{}, we know that $\Delta _z\ln \vert p_z(\rho )\vert =0$ when
$p_z(\rho )\ne 0$, i.e.\ when $\rho \not\in \Sigma _z$. On the other hand
$p_z(\rho )=(\widetilde{p}(\rho )-z)\inv (p(\rho )-z)$ where the first
factor is \hol{} in $z$ and non-vanishing, so
$$
\Delta _z\ln \vert p_z(\rho )\vert =\Delta _z\ln \vert p(\rho )-z\vert
=2\pi \delta (z-p(\rho )).
$$
If $\phi \in C_0^\infty (\Omega )$, we get
\begin{eqnarray*}
&&\int (\Delta _zI(z))\phi (z)L(dz)=\int\int \Delta _z(\ln \vert p_z(\rho
)\vert )\phi (z)L(dz)d\rho \\
&&=2\pi \int\int \delta (z-p(\rho ))\phi (z)L(dz)d\rho =2\pi \int \phi
(p(\rho ))d\rho .
\end{eqnarray*}
Thus we get (as in \cite{Ha1, Ha2} when $n=1$):
\ekv{sa.3}
{
{1\over 2\pi }\Delta (I(z))L(dz)=p_*(d\rho )\hbox{ near }\partial
\Gamma ,
}
where $d\rho $ is the symplectic volume element. Notice that this formula
remains true without the assumption \no{sa.1}, and hence not only in a
\neigh{} of $\partial \Gamma $ but in all of $\Omega$; however, $I(z)$ is
then in general no longer smooth, though still well-defined as a
distribution. This fact will be used in the proof of Theorem \ref{sa1}.
\par In view of \no{sa.1}, we have $V(t)\backsim t$ and \no{fc.44}
holds \ufly{} with $\kappa=1$, when $z$ varies in
a \neigh{} of $\partial \Gamma $. Correspondingly
the conclusions in Theorem \ref{lb2} hold \ufly, when $z$ varies in a
small \neigh{} of $\partial \Gamma $.
\begin{theo}\label{sa1}
Let $\Gamma \Subset \Omega $ be open with $C^2$ \bdy{} and make the
assumption \no{sa.1}. Let $\delta >0$ satisfy \no{lb.20}
and
assume that $h\ln {1\over \delta }\ll \epsilon \ll 1$ (or equivalently
$\delta \ge e^{-\epsilon /(Ch)}$, $C\gg 1$, $\epsilon \ll 1$, implying
also that $\epsilon \ge \widetilde{C}h\ln {1\over h}$ for some
$\widetilde{C}>0$).
Then
with $C>0$ large enough, the number $N(P_\delta ,\Gamma )$ of \ev{}s of
$P_\delta $ in $\Gamma $ satisfies
\ekv{sa.4}
{
\vert N(P_\delta ,\Gamma )-{1\over (2\pi h)^n}{\rm vol\,}(p^{-1}(\Gamma
))\vert \le C{\sqrt{\epsilon }\over h^n}
}
with \proba{}
$$\ge 1 -{C\over \sqrt{\epsilon }}e^{-{\epsilon/2 \over (2\pi
h)^n}}.
$$
\end{theo}
\begin{proof}
The \ev{}s of $P_\delta $ in $\widetilde{\Omega }$ coincide with the zeros
of the \hol{} \fu{}
\ekv{sa.5}
{F_\delta (z)=\det P_\delta (z).}
Theorem \ref{lb2} tells us that there exists a \neigh{} $\widehat{\Omega }$
of $\partial \Omega $ such that
\smallskip
\par\noindent (a) With \proba{} $\ge 1-Ce^{-C_0h^{-2n}}$, we have
$$
\ln \vert F_\delta (z)\vert \le {1\over (2\pi h)^n}(I(z)+Ch\ln{1\over h}),\
z\in\widehat{\Omega }.
$$
\smallskip
\par\noindent (b) For every $z\in\widehat{\Omega }$ and $\epsilon >0$ we have
$$
\ln \vert F_\delta (z)\vert \ge {1\over (2\pi h)^n}(I(z)-Ch(\ln{1\over
h}+\ln{1\over \delta })-\epsilon ),
$$
with \proba{} $\ge 1-e^{-\epsilon (2\pi h)^{-n}}-Ce^{-C_0h^{-2n}}$. Notice
here that $\ln {1\over \delta }\ge \ln{1\over h}$.\smallskip
\par We can then repeat the arguments of \cite{Ha1, Ha2}. Recall Proposition
6.1 from \cite{Ha2} proved in a more general form in \cite{Ha1}.
\begin{prop}\label{sa1.5} Let $\widehat{\Omega }$, $\widetilde{\Omega }$
be open \neigh{}s of $\partial \Gamma $ and $\overline{\Gamma }$
respectively.
Let $\phi \in C^\infty (\widehat{\Omega };{\bf R})$ and let $f$ be a
\hol{} \fu{} in $\widetilde{\Omega }$ such that
\ekv{sa.6}
{
\vert f(z;\widetilde{h})\vert \le e^{\phi (z)/\widetilde{h}},\
z\in\widehat{\Omega },\, 0<\widetilde{h}\ll 1.
}
Assume that for some $\epsilon >0$, $\epsilon \ll 1$, $\exists
z_k\in\widehat{\Omega }$, $k\in J$, such that
$$
\partial \Gamma \subset\bigcup_{k\in J}D(z_k,\sqrt{\epsilon }),\ \# J={\cal
O}({1\over \sqrt{\epsilon }}),
$$
\ekv{sa.7}
{
\vert f(z_k;\widetilde{h})\vert \ge e^{{1\over \widetilde{h}}(\phi
(z_k)-\epsilon )},\ k\in J.
}
Then the number of zeros of $f$ in $\Gamma $ satisfies
$$
\# (f\inv (0)\cap \Gamma )={1\over 2\pi \widetilde{h}}\int_\Gamma \Delta
\phi L(dz)+{\cal O}({\sqrt{\epsilon }\over \widetilde{h}}),
$$
where we let $\phi $
denote some distribution in ${\cal D}'(\Gamma \cup\widehat{\Omega })$
extending the previous function $\phi $.\end{prop}
\par The original statement in \cite{Ha1, Ha2} was with a smooth \fu{} $\phi $
defined in a whole \neigh{} of $\overline{\Gamma }$ satisfying \no{sa.6}
there, but the proof works without any changes under the weaker
assumptions above.
\par In view of (a), (b), we can apply the proposition with
$\widetilde{h}=(2\pi h)^n$ and $\epsilon $ replaced by $2\epsilon $,
$\phi =I(z)+Ch\ln {1\over
h}$, $f=F_\delta $. Then \no{sa.6} holds with a \proba{} as in (a), while
\no{sa.7} holds with a \proba{}
$$
\ge 1-{C\over \sqrt{\epsilon }}e^{-{\epsilon \over 2}(2\pi h)^{-n}}
-Ce^{-C_0h^{-2n}}.
$$
We can define $\phi $ as a distribution in a full \neigh{} of
$\overline{\Gamma }$ by \no{sa.2}. Then
$$
{1\over 2\pi \widetilde{h}}\int_\Gamma \Delta \phi L(dz)={1\over (2\pi
h)^n}\int_\Gamma {1\over 2\pi }\Delta I(z)L(dz)={1\over (2\pi
h)^n}\iint_{p\inv (\Gamma )}dxd\xi .
$$
The theorem follows.
\end{proof}
\par We next give a result about simultaneous Weyl \asy{}s for a
\fy{} of domains.
\begin{theo}\label{sa2}
Let ${\cal G}$ be a \fy{} of domains $\Gamma \Subset \Omega $ that
satisfy the assumptions of Theorem \ref{sa1} \ufly{} in the following sense:
Each $\Gamma $ is of the form $g(z)<0$ (with $g=g_\Gamma $) where $g$
belongs to a \bdd{} set in $C^2(\overline{\Omega })$ and $g>1/C$ on
$\partial \Omega $ and $\vert dg\vert >1/C$ on $\partial \Gamma $, where
$C>0$ is \indep{} of $\Gamma $. We also assume that \no{sa.1} holds for
all $z\in\partial \Gamma $, $\Gamma \in{\cal G}$, \ufly{} \wrt{}
$(z,\Gamma )$.
\par Choose $\delta ,\epsilon $ as in Theorem \ref{sa1}. Then with \proba{}
$$\ge 1 -{{C}\over \epsilon }e^{-{\epsilon /2\over (2\pi
h)^n}}, $$
we have \no{sa.4} with a constant $C$ \indep{} of $\Gamma $.
\end{theo}
\begin{proof}
As in the proof of Theorem \ref{sa1}, we use Proposition \ref{sa1.5} with an
appropriate grid of points $z_k$ (see \cite{Ha2} for further details). We
now need ${\cal O}(1/\epsilon )$ points to achieve that the union of the
$D(z_k,\sqrt{\epsilon })$ covers the union of all the $\partial \Gamma $,
rather than ${\cal O}(1/\sqrt{\epsilon })$ points as in the proof of Theorem
\ref{sa1}.
\end{proof}
\section{Counting zeros of holomorphic \fu{}s} \label{cz}
\setcounter{equation}{0}
\par Let $\Gamma \Subset {\bf C}$ have smooth \bdy{} $\partial\Gamma $. Assume for
simplicity that $\gamma :=\partial \Gamma $ is connected (or
equivalently that $\Gamma $ is simply connected). This is for
notational convenience only. For $0<r\ll 1$, we put
\ekv{cz.1}
{\gamma _r=\gamma +D(0,r)=\partial \Gamma +D(0,r).}
Then $\gamma _r$ has smooth \bdy{} and is a thin domain of width
$\approx 2r$. Let $G_r(z,w)$, $P_r(z,w)$ denote the Green and
Poisson kernels of $\gamma _r$, so that the Dirichlet \pb{}
$$
\Delta u=v,\ {u_\vert }_{\partial \gamma _r}=f,\quad u,v\in
C^\infty (\overline{\gamma _r}),\ f\in C^\infty (\partial
\gamma _r),$$ has the unique solution
$$
u(z)=\int_{\gamma _r}G_r(z,w)v(w)L(dw)+\int_{\partial \gamma
_r}P_r(z,w)f(w)\vert dw\vert.
$$
\par We recall some properties of the Green kernel: If $\Omega
\Subset {\bf C}$ has a smooth \bdy{} and $G_\Omega (x,y)$ is
the corresponding Green kernel, then
\ekv{cz.2}{
G_\Omega \le 0
,}
\ekv{cz.3}{
G_\Omega \hbox{ is }C^\infty \hbox{ for }x\ne y,
}
\ekv{cz.4}
{
G_\Omega (x,y)={1+o(1)\over 2\pi }\ln \vert x-y\vert \hbox{ for
}x\approx y,\ x,y\not\in \partial \Omega ,\ x-y\to 0. }
\ekv{cz.5}
{
G_\Omega ({x\over r},{y\over r})=G_{r\Omega }(x,y),\ x,y\in
r\Omega. }
$\Omega ={1\over r}\gamma _r$ is a very long domain of
approximately constant width and \no{cz.4} is valid \ufly{} for
$x,y\in \Omega $, $\vert x-y\vert \le {\cal O}(1)$, ${\rm
dist\,}(x,\partial \Omega ),{\rm dist\,}(y,\partial \Omega )\ge
1/{\cal O}(1)$. Moreover,
\ekv{cz.6}
{
\vert G_{r\inv \gamma _r}(x,y)\vert \le C_0 e^{-\vert x-y\vert
/C_0},\ x,y\in r\inv \gamma _r,\, \vert x-y\vert \ge {1\over {\cal
O}(1)}. }
\par To recover these well-known facts, notice that $r\inv \gamma _r$ is
given by $-1<\phi (x)<1$, where $\phi (x)$ is the suitably signed distance
from $r\inv \partial \Gamma $ to $x$, so that $\vert \nabla \phi (x) \vert
=1$, $\vert \nabla ^2\phi (x)\vert ={\cal O}(r)$. If $u\in H_0^1(r\inv
\gamma _r)$, we have by integration by parts,
$$
\int_{r\inv\gamma _r}((\nabla \phi )^2+\phi (x)\Delta \phi )\vert u\vert
^2dx=-2\Re \int_{r\inv \gamma _r}\phi (\nabla \phi \cdot \nabla
u)\overline{u}dx,
$$
implying
$$
\int (1-{\cal O}(r))\vert u\vert ^2dx\le (2+{\cal O}(r)) \Vert \nabla
u\Vert \Vert u\Vert ,
$$
$$
\Vert u\Vert \le (2+{\cal O}(r))\Vert \nabla u\Vert ,
$$
$$
-\Delta \ge ({1\over 4}-{\cal O}(r)),
$$
where $\Delta =\Delta _{r\inv \gamma _r}$ is the Dirichlet Laplacian on
$r\inv \gamma _r$. From this estimate we can develop exponential decay
estimates for $-\Delta $, since we still have a positive lower bound for
$\Re (e^\psi (-\Delta )e^{-\psi })=-\Delta -\vert \nabla \psi \vert ^2$,
if $\vert \nabla \psi (x)\vert^2\le 1/5 $. We drop the ensuing routine arguments.
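For the record, the variational step behind the last spectral bound reads
(a restatement of the estimates above):
```latex
% For u \in H_0^1(r^{-1}\gamma_r), the inequality
%   \Vert u\Vert \le (2+{\cal O}(r))\Vert \nabla u\Vert
% gives, by the variational characterization of the Dirichlet Laplacian,
(-\Delta u\mid u)=\Vert \nabla u\Vert ^2
  \ge {\Vert u\Vert ^2\over (2+{\cal O}(r))^2}
  =\Big( {1\over 4}-{\cal O}(r)\Big) \Vert u\Vert ^2 .
```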
\par
In view of \no{cz.5}, \no{cz.6} we get
\ekv{cz.7}
{\vert G_r(x,y)\vert \le C_0e^{-{1\over C_0r}\vert x-y\vert },\
x,y\in \gamma _r,\ \vert x-y\vert \ge r/{\cal O}(1),}
\eekv{cz.8}
{&&
G_r(x,y) ={1+o(1)\over 2\pi }\ln \vert {x\over r}-{y\over r}\vert ,\hbox{
for }\vert x-y\vert \le r/C_0, }
{&&
{\rm dist\,}(x,\partial \gamma _r),{\rm
dist\,} (y,\partial \gamma _r)\ge r/C_0,\ \vert {x\over r}-{y\over
r}\vert \to 0 .}
\par Let $\phi $ be a continuous subharmonic \fu{} defined in some \neigh{} of
$\overline{\gamma _r}$. Let
\ekv{cz.9}
{
\mu =\mu _\phi =\Delta \phi
}
be the corresponding locally finite positive measure.
Let $u$ be a \hol{} \fu{} defined in a \neigh{} of $\overline{\gamma _r}$.
We assume that
\ekv{cz.10}
{
h\ln \vert u(z)\vert \le \phi (z),\ z\in\overline{\gamma _r}.
}
\begin{lemma}\label{cz1}
Let $C_1,C_2>1$ and let $z_0\in\overline{\gamma _{(1-{1\over C_1})r}}$ be
a point where
\ekv{cz.11}
{h\ln \vert u(z_0)\vert \ge \phi (z_0)-\epsilon ,\ 0<\epsilon \ll 1.}
Then the number of zeros of $u$ in $D(z_0,C_2r)\cap \gamma _{(1-{1\over
C_2})r}$ is
\ekv{cz.12}{\le {C_3\over h}(\epsilon +\int_{\gamma _r}-G_r(z_0,w)\mu (dw)),}
where $C_3$ is independent of $\epsilon ,h$.
\end{lemma}
\begin{proof}
Writing $\phi $ as a uniform limit of a decreasing sequence of smooth
\fu{}s, we may assume that $\phi \in C^\infty $.
Let
$$
n_u(dz)=\sum 2\pi \delta (z-z_j),
$$
where $z_j$ are the zeros of $u$ counted with their multiplicity. We may
assume that no $z_j$ are situated on $\partial \gamma _r$. Then, since
$\Delta \ln \vert u\vert =n_u$,
\eeekv{cz.13}
{
h\ln \vert u(z)\vert &=& \int_{\gamma _r} G_r(z,w)h n_u (dw)+\int_{\partial
\gamma _r}P_r(z,w)h\ln \vert u(w)\vert \vert dw\vert}
{&\le& \int_{\gamma _r}G_r(z,w)hn_u(dw)+\int_{\partial \gamma
_r}P_r(z,w)\phi (w)\vert dw\vert}
{&=& \int_{\gamma _r}G_r(z,w)hn_u(dw)+\phi (z)-\int_{\gamma
_r}G_r(z,w)\mu (dw).}
Putting $z=z_0$ in \no{cz.13} and using \no{cz.11}, we get
$$
\int_{\gamma _r}-G_r(z_0,w)hn_u(dw)\le \epsilon +\int_{\gamma
_r}-G_r(z_0,w)\mu (dw).
$$
Now $$
-G_r(z_0,w)\ge {1 \over 2\pi C_3},\ C_3>0,
$$
in $D(z_0,C_2r)\cap \gamma _{(1-{1\over C_2})r}$, and we get \no{cz.12}.
\end{proof}
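Since $n_u$ assigns mass $2\pi $ to each zero, the last two displays of the
proof combine as follows (a restatement, with the zero count on the left):
```latex
{h\over C_3}\,\#\big( u^{-1}(0)\cap D(z_0,C_2r)\cap
   \gamma _{(1-{1\over C_2})r}\big)
 \le \int_{\gamma _r} -G_r(z_0,w)\,h\,n_u(dw)
 \le \epsilon +\int_{\gamma _r}-G_r(z_0,w)\,\mu (dw),
```
which is precisely \no{cz.12}.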
\par Notice that this argument is basically the same as when using Jensen's
formula to estimate the number of zeros of a \hol{} \fu{} in a disc. We could
assume the bound $h\ln \vert u(z)\vert \le \phi (z)$, in
$D(z_0,\widetilde{C}_2r)\cap \gamma _{(1-{1\over
\widetilde{C}_2})r}=:\widetilde{\Omega }_r$ for some
$\widetilde{C}_2>C_2$. Then we can replace the bound \no{cz.12} by
$$
{C_3\over h}(\epsilon +\int_{\widetilde{\Omega }_r}-G_{\widetilde{\Omega
}_r}(z_0,w)\mu (dw)),
$$
which is sharper, since $-G_{\Omega _1}\le -G_{\Omega _2}$, when $\Omega
_1\subset \Omega _2$.
\par Now we sharpen the assumption \no{cz.11} and assume
\ekv{cz.14}
{
h\ln \vert u(z_j)\vert \ge \phi (z_j)-\epsilon ,
}
where $z_1,...,z_N\in \gamma _{(1-{1\over C_1})r}$
are points such that
\ekv{cz.15}
{
\gamma _r\subset \bigcup_1^N D(z_j,C_1r),\quad N\backsim {1\over r}.
}
We may assume that $z_1,z_2,...,z_N$ are arranged in such a way that
\ekv{cz.16}
{
\vert z_j-z_k\vert \backsim r{\rm dist\,}(j,k),\ j\ne k,
}
where $j,k$ are viewed as elements of ${\bf Z}/N{\bf Z}$ and we take the
natural distance on that set. We will also assume for a while that $\phi $
is smooth.
\par According to Lemma \ref{cz1}, we have
\ekv{cz.17}
{
\# (u\inv (0)\cap (D(z_j,C_1r)\cap \gamma _{(1-{1\over C_1})r}))\le
{C_3\over h}(\epsilon +\int_{\gamma _r}-G_r(z_j,w)\mu (dw)).
}
\par We introduce
\ekv{cz.18}
{
\widetilde{r}=(1-{1\over C_1})r
}
and consider the harmonic \fu{}s on $\gamma _{\widetilde{r}}$,
\ekv{cz.19}
{
\Psi (z)=h(\ln \vert u(z)\vert +\int_{\gamma
_{\widetilde{r}}}-G_{\widetilde{r}}(z,w)n_u(dw)),
}
\ekv{cz.20}
{
\Phi (z)=\phi (z)+\int_{\gamma _{\widetilde{r}}}-G_{\widetilde{r}}(z,w)\mu
(dw).
}
Then $\Phi (z)\ge \phi (z)$ with equality on $\partial \gamma
_{\widetilde{r}}$. Similarly, $\Psi (z)\ge h\ln \vert u(z)\vert $ with
equality on $\partial \gamma _{\widetilde{r}}$.
\par Consider the harmonic \fu{}
\ekv{cz.21}
{
H(z)=\Phi (z)-\Psi (z),\ z\in \gamma _{\widetilde{r}}.
}
Then on $\partial \gamma _{\widetilde{r}}$, we have by \no{cz.10} that
$$
H(z)=\phi (z)-h\ln \vert u(z)\vert \ge 0,
$$
so by the maximum principle,
\ekv{cz.22}
{
H(z)\ge 0,\hbox{ on }\gamma _{\widetilde{r}}.
}
By \no{cz.14}, we have
\eeekv{cz.23}
{
H(z_j)&=& \Phi (z_j)-\Psi (z_j)
}
{
&=&\phi (z_j)-h\ln \vert u(z_j)\vert +
\int_{\gamma _{\widetilde{r}}}-G_{\widetilde{r}}(z_j,w)\mu
(dw)-\int_{\gamma _{\widetilde{r}}} -G_{\widetilde{r}}(z_j,w)hn_u(dw)
}
{
&\le & \epsilon +\int_{\gamma _{\widetilde{r}}} -G_{\widetilde{r}}(z_j,w)\mu (dw).
}
\par Harnack's inequality implies that
\ekv{cz.24}
{
H(z)\le {\cal O}(1)(\epsilon +\int_{\gamma _{\widetilde{r}}} -G_{\widetilde{r}}(z_j,w)\mu (dw))\hbox{
on }D(z_j,C_1r)\cap \gamma _{(1-{1\over C_1})\widetilde{r}}.
}
\par Now assume that $u$ extends to a \hol{} \fu{} in a \neigh{} of
$\Gamma \cup \overline{\gamma _r}$. We then would like to evaluate the
number of zeros of $u$ in $\Gamma $. Using \no{cz.17}, we first have
\ekv{cz.25}
{
\# (u^{-1}(0)\cap \gamma _{\widetilde{r}})\le {C\over h}(N\epsilon
+\sum_{j=1}^N \int_{\gamma _r}(-G_r(z_j,w))\mu (dw)).
}
\par Let $\chi \in C_0^\infty (\Gamma \cup \gamma _{(1-{1\over
C_1})\widetilde{r}};[0,1])$ be equal to 1 on $\Gamma $. Of course $\chi $
will have to depend on $r$ but we may assume that for all $k \in{\bf N}$, and as $r\to 0$,
\ekv{cz.26}
{
\nabla ^k\chi ={\cal O}(r^{-k}).
}
We are interested in
\ekv{cz.27}
{
\int \chi (z)hn_u(dz)=\int_{\gamma _{\widehat{r}}}h\ln \vert u(z)\vert
\Delta \chi (z)L(dz),\ \widehat{r}=(1-{1\over C_1})\widetilde{r}.
}
Here we have on $\gamma _{\widetilde{r}}$
\eeeekv{cz.28}
{h\ln \vert u(z)\vert &=&\Psi (z)-\int_{\gamma
_{\widetilde{r}}}-G_{\widetilde{r}}(z,w)hn_u(dw)}
{&=&\Phi (z)-H(z)-\int_{\gamma
_{\widetilde{r}}}-G_{\widetilde{r}}(z,w)hn_u(dw)}
{
&=&\phi (z)+\int_{\gamma _{\widetilde{r}}}-G_{\widetilde{r}}(z,w)\mu
(dw)-H(z)-\int_{\gamma _{\widetilde{r}}}-G_{\widetilde{r}}(z,w)hn_u(dw)
}
{&=& \phi (z)+R(z),}
where the last equality defines $R(z)$.
\par Inserting this in \no{cz.27}, we get
\ekv{cz.29}
{
\int\chi (z)hn_u(dz)=\int \chi (z)\mu (dz)+\int R(z)\Delta \chi (z)L(dz).
}
(Here we also used some extension of $\phi $ to $\Gamma $
with $\mu =\Delta \phi $.) The task is now to estimate $R(z)$ and the
corresponding integral in \no{cz.29}. Put
\ekv{cz.30}
{
M_j=\mu (\Omega _j),\ \Omega _j=D(z_j,C_1r)\cap \gamma _r.
}
Using the exponential decay property \no{cz.7} (equally valid for
$G_{\widetilde{r}}$) we get for $z\in \Omega _j\cap \gamma
_{\widetilde{r}}$, ${\rm dist\,}(z,\partial (D(z_j,C_1r)\cap \gamma
_{\widetilde{r}}))\ge r/{\cal O}(1)$:
\ekv{cz.31}
{
\int_{\gamma _{\widetilde{r}}}-G_{\widetilde{r}}(z,w)\mu (dw)\le
\int_{\Omega _j\cap \gamma _{\widetilde{r}}}-G_{\widetilde{r}}(z,w)\mu
(dw)+{\cal O}(1)\sum_{k\ne j}M_ke^{-{1\over C_0}\vert j-k\vert }.
}
Similarly from \no{cz.24}, we get
\ekv{cz.32}
{
H(z)\le {\cal O}(1)(\epsilon +\int_{\Omega _j\cap \gamma
_{\widetilde{r}}}-G_{\widetilde{r}}(z_j,w)\mu (dw)+\sum_{k\ne j}e^{-{1\over
C_0}\vert j-k\vert }M_k),
}
for $z\in \Omega _j\cap \gamma _{\widetilde{r}}$.
\par This gives the following estimate on the contribution from the first
two terms in $R(z)$ to the last integral in \no{cz.29}:
\eeekv{cz.33}
{&&\int_{\gamma _{\widetilde{r}}}(\int_{\gamma
_{\widetilde{r}}}-G_{\widetilde{r}}(z,w)\mu (dw)-H(z))\Delta \chi (z)L(dz)
} { &=&{\cal O}(1)(N\epsilon +\sum_j (\sup_{z\in \Omega _j\cap \gamma
_{\widehat{r}}}\int_{\Omega _j\cap \gamma _{\widetilde{r}}}
-G_{\widetilde{r}}(z,w)\mu (dw)+\sum_{k\ne j}e^{-{1\over C_0}\vert j-k\vert
}M_k)) }
{ &=& {\cal O}(1)(N\epsilon +\sum_j \sup_{z\in \Omega _j\cap
\gamma _{\widehat{r}}} \int_{\Omega _j\cap \gamma _{\widetilde{r}}}
-G_{\widetilde{r}}(z,w)\mu (dw)+\mu (\gamma
_r)). }
\par The contribution from the last term in $R(z)$ (in \no{cz.28}) to the
last integral in \no{cz.29} is
\ekv{cz.34}
{
\int_{z\in \gamma _{\widehat{r}}}\int_{w\in \gamma
_{\widetilde{r}}}G_{\widetilde{r}}(z,w)hn_u(dw)\Delta \chi (z)L(dz).
}
Here
\begin{eqnarray*}
&& \int_{z\in \gamma _{\widehat{r}}}G_{\widetilde{r}}(z,w)(\Delta \chi )
(z)L(dz)\\
&=& \int_{\widetilde{z}\in \widetilde{r}\inv \gamma _{\widehat{r}}}
G_{\widetilde{r}}(\widetilde{r}\widetilde{z},\widetilde{r}\widetilde{w})\Delta
_z\chi (\widetilde{r}\widetilde{z})\widetilde{r}^2L(d\widetilde{z})\\
&=& \int G_{\widetilde{r}\inv \gamma
_{\widetilde{r}}}(\widetilde{z},\widetilde{w})\Delta _{\widetilde{z}}(\chi
(\widetilde{r}\widetilde{z}))L(d\widetilde{z})={\cal O}(1),
\end{eqnarray*}
so the expression \no{cz.34} is
\eeekv{cz.35}
{
&&{\cal O}(h)\# (u\inv (0)\cap \gamma _{\widetilde{r}})
}
{
&=&{\cal O}(1)({\epsilon \over r}+\sum_{j=1}^N \int_{\gamma
_r}(-G_r(z_j,w))\mu (dw))
}
{&=&
{\cal O}(1)({\epsilon \over r}+\sum_{j=1}^N \int_{\Omega
_j}-G_r(z_j,w)\mu (dw)+\mu (\gamma _r)).
}
Using all this in \no{cz.29}, we get
\eekv{cz.36}
{&&\hskip -5truemm
\int\chi (z)hn_u(dz)=\int \chi (z)\mu (dz)
}
{
&&+{\cal O}(1)({\epsilon \over r}+\sum_j(\sup_{z\in
\Omega _j\cap \gamma _{\widehat{r}}}\int_{\Omega _j\cap \gamma
_{\widetilde{r}}} -G_{\widetilde{r}}(z,w)\mu (dw)+
\int_{\overline{\Omega _j}}-G_r(z_j,w)\mu (dw))+\mu (\gamma _r)).
}
We replace the smoothness assumption on $\phi $ by the assumption that
$\phi $ is continuous near $\Gamma $ and keep \no{cz.14}. Then by
regularization, we still get \no{cz.36}.
\par In order to simplify this further, we introduce a weak regularity
assumption on the measure $\mu $. Assume first that $\mu =\Delta \phi $ is
defined in a fixed $r$-\indep{} neighborhood of $\partial \Gamma $. For
$D(z,t)$ contained in that \neigh{} we assume that as $t\to 0$,
\ekv{cz.37}
{
W_z(t):=\mu (D(z,t))={\cal O}(t^{\rho _0}),
}
for some $0<\rho _0\le 2$.
\begin{remark}\label{cz1.5}\rm
It is easy to see that this assumption on $\Delta \phi $ implies that $\phi $
is continuous near $\Gamma $. In the case $\rho _0>1$, we notice that as $r\to 0$,
\ekv{cz.38}
{
\mu (\gamma _r)={\cal O}(r^{\rho _0-1}).
}
(This is true also for $\rho _0\le 1$ but then of no interest.)
\end{remark}
\begin{lemma}\label{cz2}
Assume \no{cz.37} for some $\rho _0\in ]0,2]$. Then for every domain
$\Omega \subset \gamma _r$ and every $z\in \Omega \cap \gamma _{(1-{1\over
C})r}$, we have for $0<t\le r/2$:
\ekv{cz.39}
{
\int_{\Omega }-G_r(z,w)\mu (dw)\le {\cal O}(1)t^{\rho _0}\ln {r\over
t}+{\cal O}(1)\ln({r\over t})\mu (\Omega ).
}
\end{lemma}
\begin{proof}
Write
$$
\int_\Omega -G_r(z,w)\mu (dw)=\int_{D(z,t)\cap\Omega }-G_r(z,w)\mu
(dw)+\int_{\Omega \setminus D(z,t)}-G_r(z,w)\mu (dw).
$$
For $\vert z-w\vert \ge t$, we have $-G_r(z,w)\le {\cal O}(1)\ln {r\over t}$
(cf \no{cz.5}), so the last integral is ${\cal O}(1)\ln ({r\over t})\mu
(\Omega )$. For $w\in D(z,t)\cap\Omega $, we have
$$
-G_r(z,w)\le {\cal O}(1)\ln {r\over \vert z-w\vert },
$$
hence
\begin{eqnarray*}
\int_{D(z,t)\cap \Omega }-G_r(z,w)\mu (dw)&\le &{\cal O}(1)\int_0^t \ln {r\over
s}dW_z(s)\\
&=&{\cal O}(1)([\ln ({r\over s})W_z(s)]_0^t+\int_0^t {1\over s}W_z(s)ds)\\
&=&{\cal O}(1) t^{\rho _0}\ln {r\over t}.
\end{eqnarray*}
\end{proof}
\begin{coro}\label{cz3}
Under the same assumptions, we have for every $N\in{\bf N}$:
\ekv{cz.40}
{
\int_\Omega -G_r(z,w)\mu (dw)\le {\cal O}_N(1)(r^N+\ln ({1\over r})\mu
(\Omega )).
}
\end{coro}
\begin{proof}
We just choose $t=r^M$ with $M\in{\bf N}$ \sufly{} large, and use that $\ln
(r/t)=(M-1)\ln r\inv$.
\end{proof}
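With $t=r^M$, the two terms of \no{cz.39} become (a routine check of the
substitution in the proof above):
```latex
% First term of (cz.39):
t^{\rho _0}\ln (r/t)=(M-1)\,r^{M\rho _0}\ln {1\over r}\le r^N
  \quad\hbox{for }M\rho _0>N,\ 0<r\ll 1,
% while the second term is
% (M-1)\ln({1\over r})\,\mu (\Omega )={\cal O}_N(1)\ln ({1\over r})\,\mu (\Omega ),
% which together give (cz.40).
```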
If we assume \no{cz.38}, then the corollary allows us to simplify \no{cz.36}
to
\ekv{cz.41}
{
\int \chi (z)hn_u(dz)=\int \chi (z)\mu (dz)+{\cal O}(1){\epsilon \over
r}+{\cal O}_N(r^N+\ln({1\over r})\mu (\gamma _r)).
}
\par Summing up the discussion, we have proved
\begin{prop}\label{cz4}
Let $\Gamma \Subset {\bf C}$ have smooth \bdy{} and let $\phi $ be a
continuous subharmonic \fu{} defined near $\overline{\Gamma }$.
Then we have the following
result, valid \ufly{} for $0<\epsilon \ll 1$, $0<r\ll 1$, $0<h\ll 1$:
Let $u$ be a \hol{} \fu{}, defined in $\Gamma +D(0,r)$ with $h\ln \vert
u(z)\vert \le \phi (z)$, $z\in\partial \Gamma +D(0,r)$
and assume that there exist $z_1,...,z_N\in\partial \Gamma +D(0,{r\over
2})$ such that
\ekv{cz.41.5}
{\partial \Gamma +D(0,r)\subset \bigcup_1^N
D(z_j,2r),\ N\backsim {1\over r},\ h\ln \vert u(z_j)\vert \ge \phi
(z_j)-\epsilon .}
Then with $\Omega _j=D(z_j,2r)\cap (\partial \Gamma +D(0,r))$, we
have
\eekv{cz.42}
{
&&\vert \# (u\inv (0)\cap \Gamma )-{1\over 2\pi h}\int_\Gamma \Delta \phi
L(dz)\vert \le {{\cal O}(1)\over h}\Big( {\epsilon \over r}+\mu (\gamma _r)
}
{&&
+\sum_j (\sup_{z\in\Omega _j\cap
(\partial \Gamma +D(0,{r\over 4}))}\int_{\Omega _j\cap (\partial \Gamma
+D(0,{r\over 2}))} -G_{r\over 2}(z,w)\mu
(dw)+\int_{\overline{\Omega } _j}-G_r(z_j,w)\mu (dw)\Big) .
}
\par If we assume also that \no{cz.37} holds for some $0<\rho _0\le 2$,
then we have for every $N>0$:
\eekv{cz.43}
{
&&\vert \# (u\inv (0)\cap \Gamma )-{1\over 2\pi h}\int_\Gamma \Delta \phi
L(dz)\vert \le
}
{&&
{{\cal O}(1)\over h}({\epsilon \over r}+{\cal O}_N(1)(r^N+\ln({1\over
r})\mu (\partial \Gamma +D(0,r)))).
}\end{prop}
\begin{ex}\label{cz5}\rm
If $\phi $ is of class $C^2$ near the \bdy{}, then \no{cz.37} is satisfied
with $\rho _0=2$ and $\mu (\partial \Gamma +D(0,r))={\cal O}(r)$. We
choose $N=1$
so that the \rhs{} of \no{cz.43} becomes
$$
{{\cal O}(1)\over h}({\epsilon \over r}+r\ln{1\over r}).
$$
If we choose $r=\sqrt{\epsilon }$, we get
$$
\vert \# (u\inv (0)\cap\Gamma )-{1\over 2\pi h}\int_\Gamma \Delta \phi
(z)L(dz)\vert \le {{\cal O}(1)\over h}\sqrt{\epsilon }\ln{1\over \epsilon }.
$$
In this case we lose a factor $\ln \epsilon \inv$ compared to Proposition
6.1 in
\cite{Ha2}.
\end{ex}
\section{Spectral \asy{}s in a more general case} \label{gc}
\setcounter{equation}{0}
\par Let $\Gamma \Subset \widetilde{\Omega }$ be open with
$C^\infty $ \bdy{}. For $z$ in a \neigh{} of $\partial \Gamma $ and
$0<s,t\ll 1$, we put
\ekv{gc.1}
{
V_z(t)={\rm Vol\,}\{\rho \in{\bf R}^{2n};\, \vert p(\rho )-z\vert ^2\le
t\},\ W_z(s)=V_z(s^2).
}
\par Recall that in any \bdd{} domain in phase space, the symbols $\vert
p_z(\rho )\vert ^2=q_z(\rho )$ and $\vert p(\rho )-z\vert ^2$ are \ufly{}
of the same order of magnitude. If we replace $\vert p(\rho )-z\vert ^2$
by $q_z(\rho )$ in \no{gc.1}, we get a new \fu{} $V_z^{\rm new}(t)$ such
that
\ekv{gc.2}
{
V_z({t\over C})\le V_z^{\rm new}(t)\le V_z(Ct).
}
For the purposes of this paper, we can therefore identify the two \fu{}s
and resort to the second definition whenever we find it convenient. Also,
when $z$ is fixed, we will sometimes write $V(t)$ instead of $V_z(t)$.
\par Our weak assumption, replacing \no{sa.1}, is
\ekv{gc.3}
{
\exists \kappa\in ]0,1], \hbox{ such that }V_z(t)={\cal O}(t^{\kappa}),\hbox{ \ufly{} for }z\in{\rm neigh\,}(\partial \Gamma ),\ 0\le t\ll 1.
}
\begin{ex}\label{gc0}\rm
When \no{sa.1} holds for $z\in{\rm neigh\,}(\partial \Gamma )$, it is clear
that \no{gc.3} is fulfilled with $\kappa=1$ and in particular this is
the case when $\{ p,\overline{p}\}\ne 0$ on $p^{-1}(\partial \Gamma )$.
If $z\in\partial \Sigma \setminus\Sigma _\infty $ then \no{sa.1} cannot
hold, so if $\partial \Gamma \cap\partial \Sigma \ne \emptyset $, the best we can
hope for is that
\ekv{gc.3.1}
{
\forall z \in p^{-1}(\partial \Gamma ) , \hbox{ either }
\{p,\overline{p}\}\ne 0 \hbox{ or } \{p,\{ p,\overline{p}\}\} \ne 0 .
}
This is the situation considered in the 1-dimensional case in \cite{Ha3}
where deterministic upper bounds on the density of the \ev{}s were
obtained. Following some arguments there, we shall see that \it if \no{gc.3.1}
holds, then \no{gc.3} holds with $\kappa={3\over 4}$. \rm In fact, if
we assume \no{gc.3.1} and if $p(\rho _0)=z_0\in\partial \Gamma $, we
estimate the contribution to $W_{z_0}(\tau )$ from a \neigh{} of $\rho _0$
in the following way:
\par If $\vert \{p,\overline{p}\} (\rho _0)\vert\ge 1/C $, then $d\Re
p,d\Im p$ are \indep{} near $\rho _0$ and the contribution is ${\cal
O}(\tau ^2)$. If $\vert \{ p,\overline{p}\}(\rho _0)\vert $ is very small,
we know that $\vert \{ p,\{ p,\overline{p}\}\}(\rho _0)\vert \ge 1/C$ and
in order to fix the ideas we assume that $H_{\Re p}^2\Im p(\rho _0)\ge
1/C$. This means that
$$
H:=\{ \rho ;\, \{\Re p,\Im p\} (\rho )=0\}
$$
is a smooth hypersurface in a \neigh{} of $\rho _0$ and that $H_{\Re p}$
is transversal to $H$ there. A general point in a \neigh{} of $\rho _0$
can therefore be written
$$
\rho =\exp (tH_{\Re p})(\rho '),\ t\in{\rm neigh\,}(0,{\bf R}),\, \rho
'\in H.
$$
Then $\Re p(\rho )=\Re p(\rho ')$, $\Im p(\rho )=\Im p(\rho
')+t^2g(t,\rho )$, $g>1/C$. Write $\Re p=s$ so that a general point $\rho '$
in $H$ is parametrized by $(s,\rho '')$ with $\rho ''\in {\rm
neigh\,}(0,{\bf R}^{2n-2})$.
\par Write $z_0=x_0+iy_0$. For every fixed $\rho ''$, if $\vert
p(\rho )-z_0\vert \le \tau $ then $\vert s-x_0\vert \le \tau $ and $\vert \Im
p(\rho ')+t^2g(t,s,\rho '')-y_0\vert \le \tau $. Then we are confined to
an interval of length $2\tau $ in the $s$-variable and, for every such
fixed $s$, to an interval of length ${\cal O}(\tau ^{1/2})$ in the
$t$-variable, or to the union of two such intervals. By Fubini's theorem,
the contribution to the volume is therefore ${\cal O}(\tau ^{3/2})$. Hence
$W_{z_0}(\tau )={\cal O}(\tau ^{3/2})$, so $V_{z_0}(t)={\cal O}(t^{3/4})$,
as claimed.
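The $\tau ^{3/2}$ volume bound can be checked on a toy symbol of our own choosing (not taken from the text): $p(x,\xi )=\xi +ix^2$ on ${\bf R}^2$, for which $\{ p,\overline{p}\}=-4ix$ vanishes at $\rho _0=(0,0)$ while $H_{\Re p}^2\Im p=2$, so \no{gc.3.1} holds. Here the sublevel set is $\{\xi ^2+x^4\le \tau ^2\}$, and a crude grid estimate of its volume recovers the exponent $3/2$:

```python
import math

def vol(tau, N=400):
    # grid estimate of Vol{(x, xi): |p(x, xi)| <= tau} for p(x, xi) = xi + i x^2;
    # the sublevel set xi^2 + x^4 <= tau^2 lies in |x| <= tau^(1/2), |xi| <= tau
    X = math.sqrt(tau)
    count = 0
    for i in range(N):
        x = -X + 2*X*(i + 0.5)/N
        for j in range(N):
            xi = -tau + 2*tau*(j + 0.5)/N
            if xi*xi + x**4 <= tau*tau:
                count += 1
    return count*(2*X/N)*(2*tau/N)

t1, t2 = 1e-2, 1e-4
expo = math.log(vol(t1)/vol(t2))/math.log(t1/t2)  # fitted exponent, expected 3/2
assert abs(expo - 1.5) < 0.05
```

The exact volume here is $c\,\tau ^{3/2}$ with $c=2\int_{-1}^1\sqrt{1-u^4}\,du$, by the substitution $x=\sqrt{\tau }u$, $\xi =\tau v$.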
\end{ex}
\par In Section \ref{sa} we introduced the distribution $I(z)$ in \no{sa.2} and showed
that $I(z)$ is subharmonic, satisfying \no{sa.3}. This implies that
\ekv{gc.4}
{
\int_{D(z,s)}\Delta (I(w))L(dw)=2\pi W_z(s),
}
and \no{gc.3} is equivalent to
\ekv{gc.5}
{
\int_{D(z,t)}\Delta (I(w))L(dw)={\cal O}(t^{\rho _0}),\hbox{ \ufly{} for
}z\in{\rm neigh\,}(\partial \Gamma ),\ 0\le t\ll 1,
}
with $\rho _0=2\kappa\in ]0,2]$. This is precisely the condition
\no{cz.37} for $I=\phi $, $\mu =\Delta I$. In view of \no{gc.2}, the
assumption \no{gc.3} is also equivalent to requiring \no{fc.44}
to hold \ufly{}
for $z\in{\rm neigh\,}(\partial \Gamma )$ with
$q=q_z=|(\widetilde{p}-z)\inv (p-z)|^2$ .
\par Consider the \hol{} \fu{}
\ekv{gc.6}
{
F_\delta (z;h)=\det P_\delta (z),\ z\in\widetilde{\Omega },
}
where we recall that $P_\delta (z)=(\widetilde{P}-z)\inv (P_\delta -z)$.
Theorem \ref{lb2} and its proof give:
\begin{prop}\label{gc1}
Let $\delta $ satisfy \no{lb.20}. Then
there exist constants $C,C_0,\widetilde{C}>0$ such that
\smallskip
\par\noindent (a) With \proba{} $\ge 1-Ce^{-C_0h^{-2n}}$, we have
\ekv{gc.7}
{
\ln \vert F_\delta (z;h)\vert \le {1\over (2\pi h)^n}(I(z)+Ch^{\delta
_0}\ln{1\over h}),
}
for all $z$ in some fixed \neigh{} of $\partial \Gamma $.
\smallskip
\par\noindent (b) For every $z\in{\rm neigh\,}(\partial \Gamma )$, $\epsilon \ge
0$, we have
\ekv{gc.8}
{
\ln \vert F_\delta (z;h)\vert \ge {1\over (2\pi h)^n}(I(z)-Ch^{\delta
_0}(\ln {1\over h}+\ln {1\over \delta })-\epsilon ),
}
with \proba{} $\ge 1-Ce^{-\epsilon (2\pi
h)^{-n}}-\widetilde{C}e^{-C_0h^{-2n}}$.
\end{prop}
\par For the upper bound \no{gc.7}, we recall that the upper bound \no{gp.15}
was obtained when $\Vert Q\Vert _{{\rm HS}}$ satisfies the estimate \no{gp.4}
and this event is \indep{} of $z$.
\par We can now apply Proposition \ref{cz4}, with $\phi $ equal
to $I+Ch^{\kappa}\ln {1\over h}$ and with $h$ there replaced by
$(2\pi h)^n$, with $\epsilon $ in
\no{cz.14} replaced by
$$
{\cal O}(1)(h^{\kappa}\ln{1\over h}+h^{\kappa}\ln{1\over \delta
}+\epsilon ),
$$
and with $\epsilon $ in \no{gc.8} large enough, so that $\epsilon $ is the dominant term in
the last expression. In other words, we take
\ekv{gc.23}
{
\epsilon \gg h^{\kappa}\ln
{1\over \delta },
}
using also that $\ln\delta \inv \ge\ln h\inv$.
\par For $0<r\ll 1$, choose $z_1,...,z_N$ and $N$ as in the first part
of \no{cz.41.5}. Then in view of (b) in the proposition, the last
estimate in \no{cz.41.5} (with $h$ there replaced by $(2\pi h)^n$)
holds for all $j$ with
a probability
$$\ge 1-{C \over r}e^{-{\epsilon \over 2}(2\pi h)^{-n}}-\widetilde{C}
e^{-C_0h^{-2n}}.
$$
The term
$${1 \over 2\pi h}\int_\Gamma \Delta \phi L(dz) $$
in \no{cz.43} becomes after the substitutions $h\mapsto (2\pi h)^n$,
$\phi \mapsto I$:
$${1\over (2\pi h)^n}\int_\Gamma {\Delta I(z)\over 2\pi }L(dz)=
{1\over (2\pi h)^n}{\rm Vol\,}(p\inv (\Gamma )),$$
where we also used \no{sa.3}.
\begin{theo}\label{gc2} Let $\delta $ satisfy \no{lb.20}.
Assume \no{gc.3}, with $\kappa \in ]0,1]$. Let $N(P+\delta Q_\omega
,\Gamma )$ be the number of \ev{}s of $P+\delta Q_\omega $ in $\Gamma $.
Then for every fixed $K>0$ and for $0<r\ll 1$:
\eekv{gc.24}
{
&&\vert N(P+\delta Q_\omega ,\Gamma )-{1\over (2\pi h)^n}\iint_{p\inv
(\Gamma )}dxd\xi \vert \le} {&&{C\over h^n}({\epsilon \over r}+C_K(r^K +\ln
({1\over r})\iint_{p\inv (\partial \Gamma +D(0,r))}dxd\xi )),\ 0<r\ll 1,
}
with \proba{}
\ekv{gc.25}
{
\ge 1-{C\over r}e^{-{\epsilon\over 2}(2\pi
h)^{-n}}} provided that
\ekv{gc.26}{
h^{\kappa}\ln{1\over \delta }
\ll \epsilon \ll 1,
}
or equivalently,
$$
e^{-{\epsilon \over Ch^{\kappa}}}\le \delta ,\ C\gg 1,\ \epsilon \ll 1,
$$
implying that $\epsilon \ge \widetilde{C}h^{\kappa}\ln{1\over h}$, for
some $\widetilde{C}>0$.
\end{theo}
\par In \no{gc.24} we want the \rhs{} to be much smaller than $h^{-n}$ so
it is natural to assume that
\ekv{gc.27}
{
\ln ({1\over r})\iint_{p\inv (\partial \Gamma +D(0,r))}dxd\xi ={\cal
O}(r^{\alpha _0}),\ r\to 0 ,
}
for some $\alpha _0>0$. When $\kappa\in ]{1\over 2},1]$, we
automatically have \no{gc.27} with any $\alpha _0\in ]0,2\kappa-1[$.
In the \rhs{} of \no{gc.24}, we first choose $K\ge \alpha _0$ and then
choose $r=\epsilon ^{1/(1+\alpha _0)}$, so that $\epsilon /r$ and
$r^{\alpha _0}$ are both ${\cal O}(\epsilon ^{{\alpha _0\over 1+\alpha _0}})$.
Then the \rhs{} of \no{gc.24} becomes
$$\le {C\over h^n}\epsilon ^{\alpha_0\over 1+\alpha _0}.$$
\par So, if $1\gg \epsilon \ge \widetilde{C}h^{\kappa}\ln {1\over h}$
with $\widetilde{C}$ \sufly{} large, and $\delta $ is as in the theorem, then
\ekv{gc.30}
{
\vert N(P+\delta Q_\omega ,\Gamma )-{1\over (2\pi h)^n}\iint_{p\inv
(\Gamma )}dxd\xi \vert \le {C\over h^n}\epsilon ^{\alpha _0\over 1+\alpha _0},
}
with \proba{}
\ekv{gc.31}
{
\ge 1-{C\over \epsilon ^{1\over 1+\alpha _0}}e^{-{\epsilon \over 2}(2\pi
h)^{-n}}.
}
This expression is very
close to 1 except possibly in the case $\kappa=1$, $n=1$. In that case,
we replace $\kappa$ by a strictly smaller value and choose $\delta
,\epsilon $ as above.
\begin{theo}\label{gc3}
Let ${\cal G}$
be a \fy{} of domains $\Gamma $ as in Theorem \ref{gc2} satisfying the
assumptions there \ufly{} (cf Theorem \ref{sa2}) and in particular we assume
\no{gc.3} \ufly{} for all $z$ in a \neigh{} of the union of all the
$\partial \Gamma $. Then we
have \no{gc.24}
with \proba{} $$\ge 1-{C\over r^2}e^{-{\epsilon \over (2\pi
h)^n}}$$ provided that
$$
h^{\kappa}\ln{1\over \delta }\ll \epsilon \ll 1.
$$
\end{theo}
\section{Appendix: Gaussian random variables in Hilbert spaces} \label{ap}
\setcounter{equation}{0}
\par In this appendix we review some generalities about Gaussian random variables
in Hilbert spaces that seem to be quite standard to probabilists.
\par Let $\alpha _1,\alpha _2,...$ be a sequence of \indep{} ${\cal
N}(0,1)$-laws, and let ${\cal H}$ be a complex separable Hilbert space.
\begin{prop}\label{ap1} Let
$v_1,v_2,...\in{\cal H}$ be a
sequence of vectors such that $\sum_1^\infty \Vert v_j\Vert ^2<\infty $.
If the sequence $n_1<n_2<...$ tends to $\infty $ \sufly{}
fast, then
$$
\lim_{k\to \infty }\sum_1^{n_k}\alpha _j(\omega )v_j \hbox{ exists almost
surely (a.s.)}.
$$
Let $S(\omega )$
denote the almost sure limit. If $\widetilde{n}_k$ is another increasing
sequence tending to infinity, such that the limit
$$\lim_{k\to \infty }\sum_1^{\widetilde{n}_k}\alpha _j(\omega
)v_j=:\widetilde{S}(\omega )$$
exists almost surely, then $\widetilde{S}(\omega )=S(\omega )$
a.s.\end{prop}
\begin{proof} Let $(\Omega ,\mathsf P)$ be the underlying \proba{} space. Then
$f_j:= \alpha _j(\omega )v_j$ can be viewed as elements of $L^2(\Omega
,{\cal H})$ of norm $\Vert v_j\Vert $. They are mutually \og{} since the
$\alpha _j$ are \indep{}. We thus have an \og{} sum $\sum_1^\infty \alpha
_jv_j$ which converges in $L^2(\Omega ;{\cal H})$ and as usual, using the
Chebyshev inequality, we deduce the existence of a sequence of partial sums
that converges a.s.
\end{proof}
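The orthogonality underlying the proof can be illustrated in a finite-dimensional toy case (our own choice of data): for ${\cal H}={\bf R}^2$ and $v_j=(1/j,(-1)^j/j)$, the tail $S_n-S_m=\sum_{m<j\le n}\alpha _jv_j$ satisfies ${\bf E}\Vert S_n-S_m\Vert ^2=\sum_{m<j\le n}\Vert v_j\Vert ^2$, which a Monte Carlo estimate reproduces:

```python
import random

random.seed(4)

# H = R^2, v_j = (1/j, (-1)^j/j), so sum_j ||v_j||^2 = sum_j 2/j^2 < infinity
m, n, M = 50, 500, 5000
exact = sum(2.0/j**2 for j in range(m + 1, n + 1))   # E||S_n - S_m||^2 by orthogonality

acc = 0.0
for _ in range(M):
    s0 = s1 = 0.0
    for j in range(m + 1, n + 1):
        a = random.gauss(0, 1)                        # the coefficient alpha_j
        s0 += a/j
        s1 += a*(-1)**j/j
    acc += s0*s0 + s1*s1
mc = acc/M                                            # Monte Carlo estimate
assert abs(mc - exact) < 0.2*exact
```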
\par Let
$e_1,e_2,...$ and $f_1,f_2,...$ be two \on{} bases in ${\cal H}$.
Let $\alpha _1(\omega ),\alpha _2(\omega ),...$ be \indep{} complex
${\cal N}(0,1)$-laws, and consider the formal vector $\sum_1^\infty \alpha
_j(\omega )e_j$. Almost surely, $\{ \alpha _j(\omega )\} _1^\infty $ is not
in $\ell ^2$ so our vector is not in ${\cal H}$. However, if $v\in{\cal
H}$, then a.s., we can define the scalar product
\ekv{ap.2}
{
(\sum_1^\infty \alpha _j(\omega )e_j\vert v)=\sum_1^\infty \alpha
_j(\omega )(e_j\vert v)
}
as in Proposition \ref{ap1}, since $\{ (e_j\vert v)\} _1^\infty \in \ell ^2$.
\par We now look for random variables $\beta _1(\omega ),\beta
_2(\omega ),...$ such that
\ekv{ap.3}
{
\sum_1^\infty \alpha _k(\omega )e_k=\sum_1^\infty \beta _j(\omega )f_j,
}
in the sense that the formal scalar products with $f_1,f_2,...$ are equal.
This leads to the definition
\ekv{ap.4}
{
\beta _j(\omega )=\sum_{k=1}^\infty (e_k\vert f_j)\alpha _k(\omega ),
}
which is well-defined as in Proposition \ref{ap1}, since $k\mapsto
(e_k\vert f_j)$ is in $\ell^2$. For every finite $N$, the variable
\ekv{ap.5}
{
\sum_{k=1}^N (e_k\vert f_j)\alpha _k(\omega )
}
has the density
$$
*_{k=1}^N {1\over \pi \vert (e_k\vert f_j)\vert ^2}e^{-\vert \alpha \vert
^2/\vert (e_k\vert f_j)\vert ^2},
$$
where $*$ indicates convolution products. Hence the characteristic \fu{}
(i.e. the \F{} \tf{}) is
$$\exp \Bigl(-{1\over 4}(\sum_{k=1}^N \vert (e_k\vert
f_j)\vert ^2)\vert \xi \vert ^2\Bigr),$$
so \no{ap.5} is an
${\cal N}(0,{\sum_{k=1}^N\vert (e_k\vert f_j)\vert ^2})$-law. The unitarity of the
matrix $((e_k\vert f_j))$ then implies that $\beta _j(\omega )$
is a ${\cal N}(0,1)$-law.
\begin{prop}\label{ap2}
$\beta _j$ are \indep{} ${\cal N}(0,1)$-laws.
\end{prop}
\begin{proof}
We have already seen that $\beta _j$ are ${\cal N}(0,1)$-laws. To see that they are
\indep{}, we compute (using Proposition \ref{ap1}) the joint distribution
of $\beta _1,\beta _2,...,\beta _N$. Write
$$
\beta ^{(N)}=\begin{pmatrix}\beta _1\cr \beta _2\cr \vdots\cr \beta
_N\end{pmatrix}=\sum_1^\infty \alpha _k\nu _k,
$$
where
$$
\nu _k=\begin{pmatrix}\sigma _{1,k}\cr\sigma _{2,k}\cr \vdots\cr \sigma _{N,k} \end{pmatrix},\
\sigma _{j,k}=(e_k\vert f_j).
$$
$\alpha _k(\omega )\nu _k$ is a \rv{} with values
in ${\bf C}^N$ and with the characteristic \fu{}
\begin{eqnarray*}
\chi _{\alpha _k\nu _k}(\xi )&=&\int e^{-i\Re \alpha (\nu _k\vert \xi
)}e^{-\vert \alpha \vert ^2}{L(d\alpha )\over \pi }\\
&=& \exp \left( {-{1\over 4}\vert (\nu _k\vert \xi )\vert ^2} \right)\\
&=& \exp \left( -{1\over 4}\sum_{\ell =1}^N\sum_{m=1}^N \sigma _{\ell
,k}\overline{\sigma _{m,k}}\overline{\xi }_\ell \xi _m \right).
\end{eqnarray*}
It follows that
$$
\chi _{\sum_1^\infty \alpha _k\nu _k}(\xi )=\exp \left(-{1\over 4}
\sum_{\ell =1}^N\sum_{m=1}^N (\sigma _\ell \vert
\sigma _m)\overline{\xi }_\ell \xi _m \right),
$$
where $\sigma _j=(\sigma _{j,k})_{k=1}^\infty \in\ell^2$. But the $\sigma _j$
form an \on{} system, so finally,
$$
\chi _{\sum_1^\infty \alpha _k\nu _k}(\xi )=\exp \left( -{1\over 4}\vert \xi
\vert ^2 \right).
$$
This means that the joint distribution of $\beta _1,...,\beta _N$ is
$$
{1\over \pi ^N}e^{-\vert \beta \vert ^2}L_{{\bf C}^N}(d\beta ),
$$
and that $\beta _1,...,\beta _N$ are \indep{}.
\end{proof}
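In finite dimension the proposition is easy to test numerically (a sanity check of our own, with an arbitrary $2\times 2$ unitary matrix in the role of $((e_k\vert f_j))$): the rotated variables $\beta =U\alpha $ of \indep{} complex ${\cal N}(0,1)$-laws should again have identity covariance ${\bf E}(\beta _j\overline{\beta }_k)=\delta _{jk}$:

```python
import cmath, math, random

random.seed(1)

def cgauss():
    # complex N(0,1)-law: density e^{-|a|^2}/pi, i.e. Re, Im ~ N(0, 1/2) independent
    return complex(random.gauss(0, math.sqrt(0.5)), random.gauss(0, math.sqrt(0.5)))

# an arbitrary 2x2 unitary matrix; the angles are illustrative choices
a = cmath.exp(0.7j)*math.cos(0.4)
b = cmath.exp(-0.3j)*math.sin(0.4)
U = [[a, b], [-b.conjugate(), a.conjugate()]]

M = 50000
cov = [[0j, 0j], [0j, 0j]]
for _ in range(M):
    al = [cgauss(), cgauss()]                 # independent complex N(0,1)-laws
    be = [U[j][0]*al[0] + U[j][1]*al[1] for j in range(2)]
    for j in range(2):
        for k in range(2):
            cov[j][k] += be[j]*be[k].conjugate()/M

# E(beta_j conj(beta_k)) = delta_{jk}: the rotated variables are again i.i.d.
for j in range(2):
    for k in range(2):
        assert abs(cov[j][k] - (1 if j == k else 0)) < 0.03
```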
\par The \rv{} \no{ap.2} is an ${\cal N}(0,\Vert v\Vert^2 )$-law.
\par If $v\in{\cal H}$ is any finite linear combination of the $f_j$, we
know by construction that
$$
(\sum_1^\infty \alpha _j(\omega )e_j\vert v)=(\sum_1^\infty \beta
_j(\omega )f_j\vert v),\ {\rm a.s.}
$$
If $v\in{\cal H}$ is \aby{}, we write $v=v_\epsilon +r_\epsilon $, where
$v_\epsilon $ is a finite linear combination of the $f_j$ and $\Vert
r_\epsilon \Vert <\epsilon $. We conclude that almost surely,
$$
(\sum_1^\infty \alpha _j(\omega )e_j\vert v)=(\sum_1^\infty \beta
_j(\omega )f_j\vert v)+(\sum_1^\infty \alpha _j(\omega )e_j\vert
r_\epsilon )-(\sum_1^\infty \beta _j(\omega )f_j\vert r_\epsilon ).
$$
Here the last two terms are ${\cal N}(0,\Vert r_\epsilon \Vert ^2)$-laws and hence
as small as we like with a \proba{} as close as we like to 1,
when $\epsilon $
is small enough. We conclude that
\ekv{ap.6}
{
(\sum_1^\infty \alpha _j(\omega )e_j\vert v)=(\sum_1^\infty \beta
_j(\omega )f_j\vert v)\ {\rm a.s.}
}
\begin{prop}\label{ap3}
Let ${\cal H}$, $\widetilde{{\cal H}}$ be two separable Hilbert spaces and
let $T:{\cal H}\to \widetilde{{\cal H}}$ be a \hs{} \op{}. Let $\alpha
_j(\omega )e_j$, $\beta _j(\omega )f_j$ be as above. Then
$T(\sum_1^\infty \alpha _j(\omega )e_j)$
is well defined a.s. and equal to $T(\sum_1^\infty \beta _j(\omega )f_j)$
a.s.
\end{prop}
\begin{proof}
We define $T(\sum_1^\infty \alpha _j(\omega )e_j)$ as $\sum_1^\infty
\alpha _j(\omega )Te_j$ in the sense of Proposition \ref{ap1}, using that
$$
\sum \Vert Te_j\Vert _{\widetilde{{\cal H}}}^2=\Vert T\Vert _{{\rm HS}}^2<\infty .
$$
Notice also that for every $v\in\widetilde{{\cal H}}$ we have a.s.
\begin{eqnarray*}
(T(\sum_1^\infty \alpha _j(\omega )e_j)\vert v)&=&\sum_1^\infty \alpha
_j(\omega )(Te_j\vert v)\ {\rm a.s.}\\
&=& \sum_1^\infty \alpha _j(\omega )(e_j\vert T^*v).
\end{eqnarray*}
The same considerations apply to $T(\sum_1^\infty \beta _j(\omega )f_j)$
so in view of \no{ap.6}, for every $v\in\widetilde{{\cal H}}$ we have
$$
(T(\sum_1^\infty \alpha _j(\omega )e_j)\vert v)=(T(\sum_1^\infty \beta
_j(\omega )f_j)\vert v)\ {\rm a.s.}
$$
We get the same conclusion a.s. simultaneously for all $v$ in any countable
set, and letting $v=g_1,g_2,...$, where the $g_j$
form an \on{} basis in $\widetilde{{\cal H}}$, we conclude that
$$
T(\sum_1^\infty \alpha _j(\omega )e_j)=T(\sum_1^\infty \beta _j(\omega
)f_j)\ {\rm a.s.}
$$
\end{proof}
\par Now let ${\cal E},{\cal F}, {\cal G}, {\cal H}$ be separable Hilbert
spaces and let $T:{\cal E}\to {\cal F}$, $S:{\cal G}\to {\cal H}$ be
\hs{} \op{}s. If $f\in{\cal F}$, $g\in{\cal G}$, we also denote by $g,f$
the corresponding multiplication \op{}s ${\bf C}\ni z\mapsto zg,zf\in
{\cal G}, {\cal F}$, so that $f^*u=(u\vert f)$. Then $gf^*u=(u\vert f)g$
defines an \op{} ${\cal F}\to {\cal G}$ which has the \hs{} norm $\Vert g\Vert
\Vert f\Vert $. Let $f_j$, $j=1,2,...$, $g_j$, $j=1,2,...$
be \on{} bases in ${\cal F}$, ${\cal G}$. Then
$\{g_jf_k^*\}_{j,k=1}^\infty $ is an \on{} basis for the space
${\rm HS}({\cal F},{\cal G})$ of \hs{}
\op{}s ${\cal F}\to {\cal G}$. Now,
$$Sg_jf_k^*T=(Sg_j)(T^*f_k)^*,$$
and
$$
\Vert Sg_jf_k^*T\Vert _{{\rm HS}}^2=\Vert Sg_j\Vert ^2\Vert T^*f_k\Vert ^2.
$$
It follows that
$$
\sum_{j,k}\Vert Sg_jf_k^*T\Vert _{{\rm HS}}^2=\Vert S\Vert _{{\rm HS}}^2\Vert T\Vert
_{{\rm HS}}^2,
$$
and we conclude that
$$
{\rm HS}({\cal F},{\cal G})\ni A\mapsto SAT\in {\rm HS}({\cal E},{\cal H})
$$
is a \hs{} \op{}. The earlier discussion can therefore be applied:
\begin{prop}\label{ap4}
Let
$\alpha _{j,k}(\omega )$ be \indep{} ${\cal N}(0,1)$-laws. Then
$$
S\sum_{j,k}\alpha _{j,k}(\omega )g_jf_k^*T=\sum_{j,k}\alpha _{j,k}(\omega
)Sg_jf_k^*T
$$
is almost surely defined as a \hs{} \op{}. Moreover, if
$\widetilde{g}_j$, $\widetilde{f}_k$ are new \on{} bases in ${\cal G}$,
${\cal F}$, then there exists a new set of \indep{} ${\cal N}(0,1)$-laws $\beta
_{j,k}(\omega )$ such that
\ekv{ap.7}
{
S\circ (\sum_{j,k}\alpha _{j,k}(\omega )g_jf_k^*)\circ T=
S\circ (\sum_{j,k}\beta _{j,k}(\omega )\widetilde{g}_j\widetilde{f}_k^*)\circ
T\ {\rm a.s.}
}
\end{prop}
\section{Introduction}
Given a probability space $(\Omega, \Sigma, \PP_0)$, we say that $X:~\Omega\rightarrow\RR$ is a \emph{random variable} if it is $\Sigma$-measurable, that is, $\{\omega:~X(\omega)\leq a\}\in\Sigma$ for any $a\in\RR$. We call $\PP_0$ the \emph{base probability measure}, which is fixed in our analysis. For $1\leq p<\infty$, we use $\LL\,^p(\Omega,\Sigma,\PP_0)$~($\LL\,^p$ for short) to denote the set of all random variables $X$ satisfying $\EE(|X|^p)<+\infty$. A risk measure $\RRR$ is a functional from $\LL^2$ to $(-\infty,+\infty]$; the value $\RRR(X)$ represents ``the risk of loss'' when $X$ represents ``the real amount of loss''. Furthermore, if $\RRR(X)\in(-\infty,+\infty)$ for any $X\in\LL^2$, then we call $\RRR$ a \emph{finite} risk measure. A risk measure $\RRR$ is {\em coherent in the basic sense} (``{\em coherent}'' for short) if it satisfies the following five axioms \cite{R07}.
\begin{description}
\item[(A1)] $\RRR(C)=C$ for all constant $C$,
\item[(A2)] $\RRR((1-\lambda)X+\lambda X')\le (1-\lambda)\RRR (X)+\lambda\RRR(X')$ for $\lambda\in[0,1]$ (``convexity''),
\item[(A3)] $\RRR(X)\le\RRR(X')$ if $X\le X'$ almost surely (``monotonicity''),
\item[(A4)] $\RRR(X)\le0$ when $\|X^k-X\|_2\to 0$ with $\RRR(X^k)\le0$ (``closedness''),
\item[(A5)] $\RRR(\lambda X)=\lambda\RRR(X) $ for $\lambda>0$ (``positive homogeneity'').
\end{description}
In early literature on coherency \cite{ADEH97,ADEH99} it was required to have $\RRR(X+C)=\RRR(X)+C$. It can be shown that this follows automatically from ({\b A1}) and ({\b A2}) \cite{RUZ06}. To simplify the notation, we designate $\EE(X):=\EE_{\PP_0}(X)$, where $\EE$ stands for the expectation.
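For instance, granting also positive homogeneity ({\b A5}), which is part of coherency here (the cited reference shows that ({\b A1}) and ({\b A2}) alone suffice), translation invariance follows in two lines:

```latex
% (A2) with \lambda = 1/2, then (A5) and (A1):
\RRR(X+C)=\RRR\Bigl(\frac{1}{2}(2X)+\frac{1}{2}(2C)\Bigr)
\le\frac{1}{2}\RRR(2X)+\frac{1}{2}\RRR(2C)=\RRR(X)+C,
```

and applying the same inequality to $X+C$ and $-C$ in place of $X$ and $C$ gives $\RRR(X)\le\RRR(X+C)-C$, whence $\RRR(X+C)=\RRR(X)+C$.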
Given another probability measure $\PP$ on $(\Omega,\Sigma)$, we say that $\PP$ is \emph{absolutely continuous} with respect to the base probability measure $\PP_0$~(denoted by $\PP\ll\PP_0$) if $\PP_0(A)=0$ implies $\PP(A)=0$ for any measurable set $A\in\Sigma$. If $\PP\ll\PP_0$, then there is a well-defined Radon-Nikodym derivative $Q=\frac{d\PP}{d\PP_0}$, satisfying $Q\geq0$ and $\EE(Q)=1$, such that $\PP(A)=\EE(Q\textbf{1}_A)$ for any measurable set $A\in\Sigma$, where $\textbf{1}_A$ is the indicator of $A.$ We denote the set of such derivatives by
\begin{equation}\label{RN}\PPP:=\left\{Q\in\LL^2:~Q=\frac{d\PP}{d\PP_0}\geq0,~\EE(Q)=1,\PP\ll\PP_0\right\}.\end{equation} We call $Q$ the ``density'' of $\PP$ because the expectation of a random variable $X$ with respect to $\PP$ is equal to $\E(XQ)$, namely
\begin{equation}\label{expect}
\E_{\PP}(X)=\int_\Om X(\om)d\PP(\om)=\int_\Om X(\om)Q(\om)d\PP_0(\om)=\E(XQ).\end{equation}
By a ``risk envelope" we mean a nonempty closed convex subset $\QQ$ of $\PPP$. According to the theory of conjugacy in convex analysis, there is a dual representation for coherent risk measures (\cite[Theorem 4(a)]{R07}), which says that
\begin{quote} {\em $\RRR$ is a coherent risk measure of risk in the basic sense if and only if there is a risk envelope $\QQ$ (which will be uniquely determined) such that}\end{quote}
\begin{equation}\label{DR}\RRR(X)=\sup\limits_{Q\in\QQ}\E(XQ).\end{equation}
Here and below, we will regard this result as ``{\em the dual representation theorem}" for short.
It follows from \reff{DR} that the risk envelope $\QQ$ can be written explicitly as
\begin{equation}\label{e:1.1}
\QQ=\{Q\in\PPP:~\EE(XQ)\leq\RRR(X)~\h{for all}~X\in\LL^2\}.
\end{equation}
\vskip12pt
Note that the requirement $Q\geq0$ in \reff{RN} is equivalent to Axiom ({\b A3}) and the requirement $\EE(Q)=1$ is equivalent to ({\b A1}), as shown in \cite{SR06}. Furthermore, since $X$ ranges over $\LL^2$, it is natural to require $Q\in\LL^2$. Hence all requirements for $Q$ in \reff{RN} are natural. It should be noted that a primary form of the above representation theorem with a finite set $\Omega$ existed long before the notion of coherent risk measure; see, e.g., \cite{H81}.
The contributions of this paper can be outlined as follows:
\benu
\item We derive formulae of risk measures when the corresponding risk envelopes involve set operations such as union, intersection, and convex combination.
\item We present independent proofs for the correspondence between several popular risk measures and their risk envelopes.
\item We study sufficient and necessary conditions on the risk envelope that guarantee the aversity of the risk measure.
\item We indicate the connection between the so-called uncertainty sets in robust optimization and the dual representation of risk measures.
\eenu
The paper is organized as follows. In Section 2, we consider the set operations of risk envelopes. In Sections 3 and 4, we discuss risk envelopes for several popular risk measures and risk aversity, respectively. Section 5 addresses the relationship between the risk measures defined through uncertainty sets and the ones defined through risk envelopes. Section 6 concludes this paper.
\section{Set Operations of Risk Envelopes }
Suppose $\RRR_1,\RRR_2,\cdots,\RRR_n$ is a collection of coherent risk measures on $\LL^2$ with risk envelopes $\QQ_1,\QQ_2,\cdots,\QQ_n$ respectively. Since $\LL^2$ is a \emph{Banach lattice}~(that is, it is a Banach space and $X,Y\in\LL^2$ with $|X|\leq|Y|$ implies $\|X\|_2\leq\|Y\|_2$), if $\RRR_i$ is finite, then it is continuous, subdifferentiable on $\LL^2$, and bounded above in some neighborhood of the origin by Proposition 3.1 of \cite{SR06}. It then follows that, by Theorem 10 of \cite{R74}, the corresponding $\QQ_i$ is compact in the weak topology of $\LL^2$, that is, $\QQ_i$ is \emph{weakly compact}.
The following result deals with convex combination of the sets $\QQ_1,\QQ_2,\cdots,\QQ_n$.
\begin{prop}\label{421} Let $\la_1,...,\la_n$ be positive numbers with $\la_1+\cs+\la_n=1$. If all but perhaps one of the $\RRR_i$'s are finite, then the convex combination $$\RRR(\cdot):=\la_1\RRR_1(\cdot)+\cs+\la_n\RRR_n(\cdot)$$ is a coherent risk measure with risk envelope $$\QQ:=\la_1\QQ_1+\cs+\la_n\QQ_n.$$
\end{prop}
\pf As discussed above, we know that if $\RRR_i$ is finite, then the corresponding $\QQ_i$ is weakly compact. It is easy to see that $\QQ$ is a nonempty and convex subset of $\PPP$~(as defined in (\ref{RN})). Furthermore, $\QQ$ is weakly closed since all but perhaps one of the $\QQ_i$'s are weakly compact, and the sum of finitely many weakly closed sets, all but perhaps one of which are weakly compact, is again weakly closed. Then $\QQ$ is closed because closedness coincides with weak closedness for convex sets. Therefore, $\QQ$ is a risk envelope, and the result follows from
$$\sup_{Q\in\QQ}\EE(XQ)=\sup_{Q_i\in\QQ_i,i=1,...,n}\EE\l[X(\la_1Q_1+\cs+\la_nQ_n)\r]=\sum_{i=1}^n\la_i\RRR_i(X)=\RRR(X).\eqno\square$$
Next, define
\beaa
&\widetilde{\RRR}_1(X)
:=\max\limits_{1\leq i\leq n}\RRR_i(X),\\
&\widetilde{\RRR}_2(X)
:=\min\limits_{1\leq i\leq n}\RRR_i(X),\\
&\widetilde{\RRR}_3(X):=\cl (\RRR_1\bu\RRR_2\bu\cdots\bu\RRR_n)(X),
\eeaa
where $\cl$ means the closure of the function \cite[Section 1D]{RW97} and
$$(\RRR_1\bu\RRR_2\bu\cdots\bu\RRR_n)(X):=\inf\{\RRR_1(X_1)+\RRR_2(X_2)+\cdots+\RRR_n(X_n):~X_1+X_2+\cdots+X_n=X\}$$ is the so-called {\em inf-convolution} of the functionals $\RRR_i, i=1,...,n.$
Let us call $\widetilde{\RRR}_1 $ and $\widetilde{\RRR}_2$ the ``max'' and the ``min'' of the risk measures $\RRR_1,\RRR_2,\cdots,\RRR_n$, respectively. Clearly, $\widetilde{\RRR}_2$ is not coherent in general, because it may fail to be convex. We next show that $\widetilde\RRR_1$ and the lower-convexification of $\widetilde\RRR_2$, namely $\widetilde\RRR_3$, are coherent risk measures generated by the risk envelopes $\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)$ and $\bigcap\limits_{i=1}^n\QQ_i$, respectively, where $\conv(\cd)$ stands for the convex hull. We begin with the following lemma about $\widetilde{\RRR}_2$ and $\widetilde{\RRR}_3$.
\begin{lemma}\label{p:1.2}
$\widetilde{\RRR}_3$ is the ``lower-convexification'' of $\widetilde{\RRR}_2$ in the sense that
(1) $\widetilde{\RRR}_3(X)\leq\widetilde{\RRR}_2(X)$ for all $X$.
(2) Let $\RRR(X)$ be any convex risk measure satisfying
$\RRR(X)\leq\widetilde{\RRR}_2(X)$ for all $X$. Then $\RRR(X)\leq\widetilde{\RRR}_3(X)$ for all $X$.
\end{lemma}
\pf (1) By the definition of $\widetilde{\RRR}_3$, we have for any $1\leq i\leq n$ and for all $X$,
$$\widetilde{\RRR}_3(X)\leq\RRR_1(0)+\cdots+\RRR_{i-1}(0)+\RRR_i(X)+\RRR_{i+1}(0)+\cdots+\RRR_n(0)=\RRR_i(X).$$
Then by taking the minimum over $i$ on the right hand side and using the definition of $\widetilde{\RRR}_2$, we get $\widetilde{\RRR}_3(X)\leq\widetilde{\RRR}_2(X)$ for all $X$, as desired.
(2) Since $\RRR(X)\leq\widetilde{\RRR}_2(X)$ for all $X$, we have $\RRR(X)\leq\RRR_i(X)$ for any $1\leq i\leq n$ and for all $X$. Furthermore, by the convexity and positive homogeneity of $\RRR$, we have for any $X_1,X_2,\cdots,X_n$ such that $X_1+X_2+\cdots+X_n=X$,
$$\RRR(X)\leq\RRR(X_1)+\RRR(X_2)+\cdots+\RRR(X_n)\leq\RRR_1(X_1)+\RRR_2(X_2)+\cdots+\RRR_n(X_n).$$
Taking closure of infimum on the right hand side and by the definition of $\widetilde{\RRR}_3$, we get $\RRR(X)\leq\widetilde{\RRR}_3(X)$ for all $X$, as desired.\qed
The main results of this section are the following two theorems.
\begin{thm}\label{t:1.1}If $\RRR_1,\cdots,\RRR_n$ are finite, then $\wt\RRR_1(\cdot)$ is a coherent risk measure with risk envelope
$\widetilde{\QQ}_1=\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)$.
\end{thm}
\pf We first claim that $\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)$ is closed and convex. The convexity is trivial. For closedness, since $\QQ_1,\cdots,\QQ_n$ are all weakly compact, we have that $\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)$ is weakly compact because the union of any finite collection of weakly compact sets is again weakly compact, and its convex hull is therefore weakly compact. Furthermore, $\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)$ is closed because weak compactness implies weak closedness, and weak closedness coincides with closedness for convex sets. Next, for any $X\in\LL^2$, we have
$$\widetilde{\RRR}_1(X)=\max\limits_{1\leq i\leq n}\RRR_i(X)=\max\limits_{1\leq i\leq n}\left(\sup\limits_{Q\in\QQ_i}\EE(XQ)\right)=\sup\limits_{Q\in\bigcup\limits_{i=1}^n\QQ_i}\EE(XQ)=\sup\limits_{Q\in\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)}\EE(XQ).$$
Hence by the dual representation theorem, $\widetilde{\RRR}_1$ is a coherent risk measure and its risk envelope is $\widetilde{\QQ}_1=\conv\left(\bigcup\limits_{i=1}^n\QQ_i\right)$, as desired.\qed
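On a finite $\Omega $ the identity behind this proof, $\sup_{Q\in\conv (\bigcup _i\QQ_i)}\EE(XQ)=\max_i\sup_{Q\in\QQ_i}\EE(XQ)$, can be checked directly with polytopal envelopes generated by random vertex densities (an illustrative setup of our own, not from the text):

```python
import random

random.seed(2)
n = 5   # finite Omega = {1,...,5} with uniform base measure

def rand_density():
    # a random admissible density: Q >= 0 with E(Q) = (1/n) sum_i Q_i = 1
    w = [random.random() + 0.01 for _ in range(n)]
    s = sum(w)
    return [n*wi/s for wi in w]

def EE(X, Q):
    return sum(x*q for x, q in zip(X, Q))/n

Q1 = [rand_density() for _ in range(4)]   # vertices of two polytopal envelopes
Q2 = [rand_density() for _ in range(4)]
X = [random.gauss(0, 1) for _ in range(n)]

R1 = max(EE(X, Q) for Q in Q1)            # sup over conv(Q1) = max over its vertices
R2 = max(EE(X, Q) for Q in Q2)
Rmax = max(R1, R2)                        # the "max" risk measure at X

# a linear objective over conv(Q1 u Q2) is maximized at a vertex, so no
# convex combination of the vertices can beat Rmax
for _ in range(1000):
    t = [random.random() for _ in range(len(Q1) + len(Q2))]
    s = sum(t)
    t = [ti/s for ti in t]
    Q = [sum(ti*q[i] for ti, q in zip(t, Q1 + Q2)) for i in range(n)]
    assert EE(X, Q) <= Rmax + 1e-12

assert Rmax == max(EE(X, Q) for Q in Q1 + Q2)
```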
\begin{thm}\label{t:1.2}
$\widetilde{\RRR}_3(\cdot)$ is a coherent risk measure with risk envelope $\bigcap\limits_{i=1}^n\QQ_i$ if and only if $\bigcap\limits_{i=1}^n\QQ_i\neq\emptyset$.
\end{thm}
\pf For the ``if'' part, we first verify that $\widetilde{\RRR}_3(\cdot)$ is a coherent risk measure. By the inf-convolution formula of $\widetilde{\RRR}_3$, the convexity ({\b A2}) and closedness ({\b A4}) hold. For positive homogeneity ({\b A5}), one has
\beaa
\wt\RRR_3(\la X)&=&\cl\inf_{x_2,...,x_n}\l\{\RRR_1(\la X-x_2-...-x_n)+\RRR_2(x_2)+\cs+\RRR_n(x_n)\r\}\\
&=&\cl\inf_{y_2,...,y_n}\l\{\RRR_1(\la X-\la y_2-...-\la y_n)+\RRR_2(\la y_2)+\cs+\RRR_n(\la y_n)\r\}\\
&=&\la\wt\RRR_3(X).
\eeaa
Axiom ({\b A1}) is true because
\beq\label{c1}\wt\RRR_3(C)\le \RRR_1(C)+\RRR_2(0)+\cs+\RRR_n(0)\le C\h{ and similarly, }\wt\RRR_3(-C)\le-C.\eeq
Then by convexity and positive homogeneity
\beq\label{c2}0 =\wt \RRR_3(0)\le\wt\RRR_3(C) +\wt\RRR_3(-C)\le\wt\RRR_3(C)- C\ \LR \ \wt\RRR_3(C)\ge C.\eeq
Thus, it follows from \reff{c1} and \reff{c2} that $\wt\RRR_3(C)=C$. Finally, let $X\le Y$ almost surely. Then
\beaa
\wt\RRR_3(X)&=&\cl\inf_{x_2,...,x_n}\l\{\RRR_1(X-x_2-...-x_n)+\RRR_2(x_2)+\cs+\RRR_n(x_n)\r\}\\
&\le&\cl\inf_{x_2,...,x_n}\l\{\RRR_1(Y-x_2-...-x_n)+\RRR_2(x_2)+\cs+\RRR_n(x_n)\r\}\\
&=&\wt\RRR_3(Y),
\eeaa
so monotonicity ({\b A3}) holds. Therefore, $\wt\RRR_3(X)$ is a coherent risk measure. Let $\wt\QQ_3$ be its risk envelope. Since $\wt\RRR_3(X)\le\RRR_i(X),$ by \reff{e:1.1}, $\wt\QQ_3\su\QQ_i$ for $1\le i\le n.$ Thus, $\wt\QQ_3\su \bigcap\limits_{i=1}^n\QQ_i$. Conversely, suppose $\widetilde{\RRR}$ is the risk measure with envelope $\bigcap\limits_{i=1}^n\QQ_i$. Since $\widetilde{\RRR}$ is convex and $\widetilde{\RRR}(X)\leq\widetilde{\RRR}_2(X)$ for all $X$, by Lemma \ref{p:1.2} we get $\widetilde{\RRR}(X)\leq\widetilde{\RRR}_3(X)$ for all $X$. Using (\ref{e:1.1}) again, we can get $\bigcap\limits_{i=1}^n\QQ_i\subseteq\widetilde{\QQ}_3$. Thus, we have $\widetilde{\QQ}_3=\bigcap\limits_{i=1}^n\QQ_i$.
We next prove the ``only if" part. If $\widetilde{\RRR}_3(\cdot)$ is a coherent risk measure, then it has a nonempty risk envelope $\widetilde{\QQ}_3$, which is an implication of Axiom ({\b A1}) and the dual representation theorem. Using the same argument from the last paragraph, we can get $\widetilde{\QQ}_3\subseteq\bigcap\limits_{i=1}^n\QQ_i$. Therefore, $\bigcap\limits_{i=1}^n\QQ_i\neq\emptyset$. \qed
\bigskip
To summarize the results in this section, we have the following:
\benu
\item The convex combination of risk envelopes generates the convex combination of the corresponding risk measures.
\item The convex hull of the union of risk envelopes generates the max risk measure.
\item The nonempty intersection of risk envelopes generates the closed lower convexification of the min risk measure.
\eenu
\section{Popular risk measures and their risk envelopes}\label{section:examples_coherent}
Besides set operations, one can create various different coherent risk measures by adding additional functional constraints to the risk envelope $\PPP$ in \reff{RN}. Most of the results in this section have been stated in Rockafellar's tutorial \cite{R07} without proofs. In fact their proofs are scattered in the literature via different approaches. Here we provide independent proofs based on the unified view of dual representation of risk measures.
\subsection{Risk envelope for relying on expectations}\label{expectation}
In addition to (\ref{RN}), we require $\QQ$ to consist of a single density $Q\equiv1.$ Then from (\ref{DR}),
$$\RRR(X)=\sup_{Q\in\QQ}\EE(XQ)=\EE(X).$$Therefore, the expectation is a coherent risk measure.
\subsection{Risk envelope for worst case analysis}\label{esssup}
Consider the ``essential supremum'' function of $X$, that is,
\begin{equation}\label{e:esssupdef}
\esssup(X):=\inf\{a:~\PP_0(X>a)=0\}.
\end{equation}
We claim that $\esssup(X)$ is a coherent risk measure. It is straightforward to verify Axioms ({\b A1}), ({\b A2}), ({\b A3}) and ({\b A5}). For Axiom ({\b A4}), suppose $\esssup(X^k)\leq0$ for $k=1,2,\cdots$, and $\|X^k-X\|_2\rightarrow0$ as $k\rightarrow\infty$. Then $X^k\leq0$ almost surely for $k=1,2,\cdots$.
On the other hand, $\|X^k-X\|_2\rightarrow0$ implies $X^k\rightarrow X$ in probability and therefore there exists a subsequence $\{k_i\}$ such that $X^{k_i}\rightarrow X$ almost surely. It follows that $X\leq0$ almost surely, which implies $\esssup(X)\leq0$.
Next, since $\esssup(\cdot)$ is a coherent risk measure, it has a risk envelope $\QQ\subseteq\PPP$. Note that $\sup\limits_{Q\in\PPP}\EE(XQ)\leq\esssup(X)$ for any $X\in\LL^2$, and therefore $\PPP\subseteq\QQ$ by \reff{e:1.1}. So $\QQ=\PPP$.
Note that it is possible that $\esssup(X)=\i$ for some $X$, which could happen if $X$ does not have a finite essential supremum. Thus, $\esssup(\cd)$ is not a finite risk measure.
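On a finite $\Omega $ with uniform base measure this is transparent (a toy check of our own): $\esssup(X)=\max_iX_i$, the supremum over $\QQ=\PPP$ is attained by the density concentrated where $X$ is largest, and no admissible density exceeds it:

```python
import random

random.seed(3)
n = 8
X = [random.uniform(-1, 3) for _ in range(n)]    # losses on a uniform n-point space

# density concentrated where X is largest: Q* = n on the argmax, 0 elsewhere
i_star = max(range(n), key=lambda i: X[i])
Q_star = [float(n) if i == i_star else 0.0 for i in range(n)]
assert abs(sum(Q_star)/n - 1) < 1e-12            # E(Q*) = 1 and Q* >= 0: admissible

ess = max(X)                                     # esssup(X) on a finite space
assert abs(sum(x*q for x, q in zip(X, Q_star))/n - ess) < 1e-12

# no admissible density does better: E(XQ) <= max(X) E(Q) = max(X)
for _ in range(500):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    Q = [n*wi/s for wi in w]
    assert sum(x*q for x, q in zip(X, Q))/n <= ess + 1e-12
```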
\subsection{The risk measures from subdividing the future}\label{dividing}
Let $\Omega$ be partitioned into subsets $\Omega_1,\cdots,\Omega_r$ having positive probability~($r\geq2$), and for $k=1,\cdots,r$, let
$$\RRR_k(X):=\esssup\limits_{\omega\in\Omega_k}X(\omega):=\inf\{a:~\PP_0(\{X>a\}\cap\Omega_k)=0\}$$
for $X\in\LL^2$. Similar to the approach in analyzing the worst case risk measure in the last subsection, we can see that $\RRR_k(\cdot)$ is a coherent risk measure with risk envelope
$$\QQ_k:=\{Q\in\PPP:~\EE(Q\textbf{1}_{\Omega_k})=1\}=\{Q\in\PPP:~Q=0~\h{for any}~\omega\not\in\Omega_k\}.$$ Moreover, by Proposition \ref{421}, the convex combination of $\RRR_1(\cdot),\cdots,\RRR_r(\cdot)$, i.e.,
\begin{equation}\label{dividing_formula}
\RRR(\cdot):=\lambda_1\RRR_1(\cdot)+\cdots+\lambda_r\RRR_r(\cdot)~\h{for coefficients}~\lambda_i>0~\h{adding to}~1,
\end{equation}
is also a coherent risk measure in $\LL^2$, which is called the risk measure from subdividing the future. Its risk envelope is
\begin{equation}\label{dividing_envelope}
\QQ:=\lambda_1\QQ_1+\cdots+\lambda_r\QQ_r.
\end{equation}
\subsection{The optimized certainty equivalent (OCE)}\label{conditionalvar}
In an important paper \cite{BT07}, Ben-Tal and Teboulle proved that the negative of their OCE function
$${\rm OCE}_u(X)=\sup_\et\{\et+\E[u(X-\et)]\},$$
(where $u$ is a piecewise linear utility function) is a coherent risk measure that includes the well-known {\em Conditional Value at Risk} (CVaR) as a special case. Since $X$ represents a loss rather than an income in our context and we are considering risk rather than utility, we define
\beq\label{001}S_r(X):=-{\rm OCE}_u(-X)=\inf_\et\{-\et+\E[-u(-X-\et)]\}=\inf_\be\{\be+\E[r(X-\be)]\},\eeq
where $r(X)=-u(-X)$ and proceed to show that if
$$r(X)=\ga_1[X]_+-\ga_2[-X]_+ \h{ with }0\le\ga_2<1<\ga_1,$$where $[t]_+=\max(t,0)$, then $S_r(X)$ is a coherent risk measure whose risk envelope is determined by $\ga_2\le Q\le \ga_1$; i.e., we will prove that
\beq\label{33}S_r(X)=\sup_{Q\in\QQ_{\ga_1,\ga_2}}\EE(XQ),\hbox{ where }\QQ_{\ga_1,\ga_2}:=\left\{Q\in\PPP:~\ga_2\le Q\le\ga_1\right\}.\eeq
Note that $X=[X]_+-[-X]_+$ and that $\EE(Q)=1$ for any $Q\in\QQ_{\ga_1,\ga_2}$. Then for any $\beta\in\RR$ we have
\begin{eqnarray*}
\EE(XQ)&=&\EE(\be Q)+\EE\left[(X-\beta)Q\right]
=\beta+\EE[Q(X-\beta)_+-Q(\be-X)_+]\\&\le&\beta+\EE[\ga_1(X-\beta)_+-\ga_2(\be-X)_+]=\beta+\EE[r(X-\beta)]=S_r(X).
\end{eqnarray*}
Taking the supremum over $Q\in\QQ_{\ga_1,\ga_2}$ on the left-hand side and the infimum over all $\beta\in\RR$ on the right-hand side, we obtain
\begin{equation}\label{01}\sup_{Q\in\QQ_{\ga_1,\ga_2}}\EE(XQ)\le \min_\beta\l\{\beta+\EE\l[r(X-\beta)\r]\r\}.
\end{equation}
On the other hand, it can be calculated that the minimum in \reff{01} is attained at
\beq\label{002}\be^*=\min\l\{\zeta: F\l(\zeta\r)\ge\frac{\ga_1-1}{ \ga_1-\ga_2}\r\},\eeq where $F$ is the cumulative distribution function of $X$. Since $F$ is non-decreasing and right-continuous in $\zeta$, the minimum in \reff{002} is attained and hence $\be^*$ exists. It follows that
$$\PP_0(X<\be^*)\leq\frac{\ga_1-1}{ \ga_1-\ga_2}\h{ and }\PP_0(X> \be^*)\leq\frac{1-\ga_2}{ \ga_1-\ga_2}.$$ Therefore, there exists $\delta\in[0,1]$ such that
$$\PP_0(X<\be^*)+\delta\PP_0(X=\be^*)=\frac{\ga_1-1}{ \ga_1-\ga_2}\h{ and }\PP_0(X> \be^*)+(1-\delta)\PP_0(X=\be^*)=\frac{1-\ga_2}{ \ga_1-\ga_2}.$$
Set $$Q_0=\ga_2\cdot\left[\textbf{1}_{\{X<\be^*\}}+\delta\cdot\textbf{1}_{\{X=\be^*\}}\right]+\ga_1\cdot\left[\textbf{1}_{\{X>\be^*\}}+(1-\delta)\cdot\textbf{1}_{\{X=\be^*\}}\right].$$
Note that $\ga_2\leq Q_0\leq\ga_1$ and
\begin{eqnarray*}
\EE(Q_0)&=&\ga_2[\PP_0(X<\be^*)+\delta\PP_0(X=\be^*)]+\ga_1[\PP_0(X> \be^*)+(1-\delta)\PP_0(X=\be^*)]\\
&=&\ga_2\frac{\ga_1-1}{ \ga_1-\ga_2}+\ga_1\frac{1-\ga_2}{ \ga_1-\ga_2}=1.
\end{eqnarray*}
Thus $Q_0\in\QQ_{\ga_1,\ga_2}$ and
\begin{eqnarray*}
\sup_{Q\in\QQ_{\ga_1,\ga_2}}\EE(XQ)\ge\EE(XQ_0)
&=&\EE[(X-\be^*)\cdot Q_0]+\be^*\cdot\EE(Q_0)\\
&=&\be^*+\EE[\ga_1(X-\beta^*)_+-\ga_2(\be^*-X)_+]\\
&\ge&\min_{\beta\in\RR}\left\{ \beta+\EE[\ga_1(X-\beta)_+-\ga_2(\be-X)_+]\right\}=S_r(X).
\end{eqnarray*}
Combining (\ref{01}) with the above, we obtain \reff{33}.\qed
Now taking $\ga_2=0$ and $\ga_1=(1-\al)^{-1}$, we obtain the minimization form of CVaR due to Rockafellar and Uryasev \cite{RUZ06}
\beq\label{32}S_r(X)={\rm CVaR}_\alpha(X)=\min_{\beta\in\RR}\l\{ \beta+{1\over 1-\alpha}\EE(X-\beta)_+\r\}.\eeq Hence CVaR is also a coherent risk measure, and
\beq\cvar_\alpha(X)=\sup_{0\le Q\le\frac{1}{1-\al},~\EE(Q)=1}\EE(XQ).\label{37}\eeq
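The coincidence of the minimization form \reff{32} and the dual form \reff{37} can be checked numerically. In the sketch below (the sample values and the level $\alpha$ are illustrative assumptions), the primal value is minimized over a fine grid of $\beta$, while for equally likely outcomes the dual supremum is attained by loading the maximal weight $(1-\al)^{-1}$ on the largest outcomes, i.e.\ it equals the average of the worst $(1-\alpha)$ fraction of $X$:

```python
import numpy as np

# Sketch: numerical check of the CVaR duality; the sample and alpha are
# illustrative assumptions.
x = np.array([1.0, 2.0, 3.0, 4.0])    # equally likely outcomes
alpha = 0.5
cap = 1.0 / (1.0 - alpha)             # the upper bound gamma_1 on Q

# Primal (minimization) form: min_beta { beta + E[(X-beta)_+]/(1-alpha) }.
betas = np.linspace(-10.0, 10.0, 20001)
primal = min(b + cap * np.mean(np.maximum(x - b, 0.0)) for b in betas)

# Dual form: sup E[XQ] over 0 <= Q <= cap with E[Q] = 1; with equal
# probabilities this is the mean of the worst (1-alpha) fraction of X.
worst = np.sort(x)[::-1][: int(len(x) * (1.0 - alpha))]
dual = worst.mean()                    # (4 + 3)/2 = 3.5
```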
\subsection{The mean-deviation risk measure}\label{meandeviation}
Fix $0\leq\lambda\leq1$. Define
\begin{equation}\label{e:meandeviationdef}
\RRR(X)=\EE X+\lambda\cdot\|(X-\EE X)_+\|_2
\end{equation}
for all $X\in\LL^2$, where $\|\cdot\|_2$ denotes the $\LL^2$-norm, that is, $\|X\|_2:=\left[\EE(X^2)\right]^{\frac{1}{2}}.$ It is not difficult to check that $\RRR(\cdot)$ is a coherent risk measure on $\LL^2$. In fact, ({\b A1}), ({\b A2}), ({\b A4}) and ({\b A5}) can be easily verified. For ({\b A3})~(the monotonicity), we only need to check that
\begin{equation}\label{axiom3}
\RRR(X)\leq0\h{ for any }X\in\LL^2\text{ with }X\leq0\text{ almost surely},
\end{equation}
since if (\ref{axiom3}) holds, then by convexity, for any $X,Y\in\LL^2$ satisfying $X\leq Y$ almost surely, we have
$$\RRR(X)\leq\RRR(Y)+\RRR(X-Y)\leq\RRR(Y),$$ and ({\b A3}) holds. We next prove (\ref{axiom3}). For any $X\in\LL^2$ with $X\leq0$ almost surely, since $0\leq\lambda\leq1$, we have $$\RRR(X)\leq\EE X+\|(X-\EE X)_+\|_2\leq\EE X+\esssup(X-\EE X)_+=\esssup X\leq0.$$ So (\ref{axiom3}) holds and therefore ({\b A3}) holds. Then $\RRR(\cdot)$ is a coherent risk measure on $\LL^2$.
We next claim that the risk envelope of $\RRR(\cdot)$ is $$\QQ=\left\{0\leq Q\in\LL^2:~\EE(Q)=1,~\|Q-\essinf Q\|_2\leq\lambda\right\}.$$ In fact, on one hand, for any $X\in\LL^2$ and $Q\in\QQ$, we have
\beaa
\EE(XQ)&=&\EE[(X-\EE X)(Q-\essinf Q)]+\EE X\leq\EE X+\EE[(X-\EE X)_+(Q-\essinf Q)]\\
&\leq&\EE X+\|(X-\EE X)_+\|_2\cdot\|Q-\essinf Q\|_2\leq\EE X+\lambda\cdot\|(X-\EE X)_+\|_2
\eeaa
by Cauchy-Schwarz inequality. Hence we get
\begin{equation}\label{e:3.5}
\sup\limits_{Q\in\QQ}\EE(XQ)\leq\EE X+\lambda\cdot\|(X-\EE X)_+\|_2
\end{equation}
for any $X\in\LL^2$. On the other hand, if $X$ is constant almost surely, then both sides equal $\EE X$~(take $Q=1\in\QQ$); otherwise $\|(X-\EE X)_+\|_2>0$ and we may set
$$Q_0:=1+\frac{\lambda\cdot\left[(X-\EE X)_+-\EE(X-\EE X)_+\right]}{\|(X-\EE X)_+\|_2}.$$
Since $0\leq\lambda\leq1$, we have
$$\essinf Q_0=1-\frac{\lambda\cdot\EE(X-\EE X)_+}{\|(X-\EE X)_+\|_2}\geq1-\frac{\EE(X-\EE X)_+}{\|(X-\EE X)_+\|_2}\geq0.$$
Thus, $0\leq Q_0\in\LL^2$, $\EE Q_0=1$ and
$$\|Q_0-\essinf Q_0\|_2=\frac{\left\|\lambda\cdot(X-\EE X)_+\right\|_2}{\|(X-\EE X)_+\|_2}=\lambda,$$
that is, $Q_0\in\QQ$. Then for any $X\in\LL^2$,
\bea\label{e:3.6}
\sup\limits_{Q\in\QQ}\EE(XQ)&\geq&\EE(XQ_0)=\EE X+\frac{\lambda\cdot\EE\left[(X-\EE X)_+\cdot(X-\EE X)\right]}{\|(X-\EE X)_+\|_2}\nonumber\\
&=&\EE X+\frac{\lambda\cdot\|(X-\EE X)_+\|_2^2}{\|(X-\EE X)_+\|_2}=\EE X+\lambda\cdot\|(X-\EE X)_+\|_2.
\eea
(\ref{e:3.5}) and (\ref{e:3.6}) together imply
$$\sup\limits_{Q\in\QQ}\EE(XQ)=\EE X+\lambda\cdot\|(X-\EE X)_+\|_2.$$
We can check that $\QQ$ is nonempty, convex and closed in $\LL^2$. Therefore, it is the risk envelope for the mean-deviation risk measure.
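As a numerical sanity check (the sample and the value of $\lambda$ are illustrative assumptions), one can verify on a finite sample space that the density $Q_0$ constructed above is feasible and attains the mean-deviation value:

```python
import numpy as np

# Sketch: on four equally likely outcomes, build the maximizer Q_0 from the
# text and check that E[X Q_0] equals E[X] + lambda * ||(X - E X)_+||_2.
x = np.array([0.0, 1.0, 2.0, 5.0])          # equally likely, nonconstant
lam = 0.7
mean = x.mean()
dev = np.maximum(x - mean, 0.0)              # (X - E X)_+
norm = np.sqrt(np.mean(dev ** 2))            # L^2 norm under P_0

q0 = 1.0 + lam * (dev - dev.mean()) / norm   # the maximizer Q_0
value = np.mean(x * q0)                      # E[X Q_0]
target = mean + lam * norm                   # the mean-deviation risk R(X)
```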
\section{Discussion on Aversity}
Suppose $\RRR(\cdot)$ is a functional from $\LL^2$ to $(-\infty,+\infty]$. We call it an \emph{averse} risk measure if it satisfies axioms ({\b A1}), ({\b A2}), ({\b A4}), ({\b A5}) and
\begin{description}
\item[(A6)] $\RRR(X)>\EE(X)$ for all non-constant $X$.
\end{description}
We are interested in the risk measures which are both coherent and averse. Next we develop conditions on the risk envelope under which a coherent risk measure is averse. We use the notation ``$A\subset B$'' to denote that $A$ is a proper subset of $B$, that is, $A\subseteq B$ but $A\neq B$. The following necessary condition is trivial.
\begin{prop}\label{p:averse_ness}
Suppose $\RRR(\cdot)$ is a coherent risk measure on $\LL^2$ with risk envelope $\QQ$. If $\RRR(\cdot)$ is averse, then $\{1\}\subset\QQ$.
\end{prop}
On the other hand, a sufficient condition is stated in the following proposition.
\begin{prop}\label{p:averse_suff}
Suppose $\RRR(\cdot)$ is a coherent risk measure with risk envelope $\QQ$. If $1$ is a relative interior point of $\QQ$~(relative to $\PPP$), then $\RRR(\cdot)$ is averse.
\end{prop}
\pf Since $1$ is a relative interior point of $\QQ$~(relative to $\PPP$), there exists $\delta\in(0,1)$ such that
\begin{equation}\label{e:interior}
\{Q\in\PPP:~\|Q-1\|_2<\delta\}\subseteq\QQ.
\end{equation}
If $X$ is not a constant almost surely, then there exists $b\in\RR$ such that $$\PP_0(X\geq b)=p\in(0,1),~~~~~~\PP_0(X<b)=1-p\in(0,1).$$ Set
$$Q_0:=\left\{\begin{array}{ll}1+(1-p)\delta~~~~~~~\h{if}~X\geq b,\\~~~1-p\delta~~~~~~~~~~~~\h{if}~X<b. \end{array}\right.$$ Then we have
$$Q_0\geq0,~~~\EE(Q_0)=1,~~~\|Q_0-1\|_2<\delta.$$ By (\ref{e:interior}), we can get that $Q_0\in\QQ$. Thus,
\begin{equation}\label{ineq1}
\EE(XQ_0)\leq\sup\limits_{Q\in\QQ}\EE(XQ)=\RRR(X).
\end{equation}
Furthermore, we have
\bea\label{ineq2}
\EE(XQ_0)-\EE(X)&=&(1-p)\delta\cdot\EE(X\textbf{1}_{\{X\geq b\}})-p\delta\cdot\EE(X\textbf{1}_{\{X<b\}})\nonumber\\
&>&(1-p)\delta b\cdot\PP_0(X\geq b)-p\delta b\cdot\PP_0(X<b)=0.
\eea
(\ref{ineq1}) and (\ref{ineq2}) together imply that $\RRR(X)>\EE(X)$ for all non-constant $X$. Therefore, $\RRR(\cdot)$ is averse, as desired.\qed
\bigskip
From Propositions \ref{p:averse_ness} and \ref{p:averse_suff}, we can get the following:
\begin{equation}\label{averse}
1~\h{is a relative interior point of}~\QQ~(\h{relative to}~\PPP)\Longrightarrow\RRR(\cdot)~\h{is averse}\Longrightarrow\{1\}\subset\QQ.
\end{equation}
In general, the converse implications in (\ref{averse}) may fail, as the following two examples show.
\begin{example}\label{exam:averse1}Let
$\RRR(\cdot)=\cvar_{0.5}(\cdot)$.
By \cite{R07}, $\RRR(\cdot)$ is a coherent and averse risk measure with risk envelope $\QQ=\{Q\in\LL^2:~0\leq Q\leq2,~\EE(Q)=1\}$. However, $1$ is not a relative interior point of $\QQ$~(relative to $\PPP$). In fact, for any $\delta\in(0,1)$, the random variable
$$\widetilde{Q}:=\left\{\begin{array}{ll}~~~3~~~~~~~~\h{\rm with probability}~\frac{\delta^2}{16+\delta^2}\\1-\frac{\delta^2}{8}~~~~~\h{\rm with probability}~\frac{16}{16+\delta^2} \end{array}\right.$$ satisfies $\widetilde{Q}\geq0$, $\EE(\widetilde{Q})=1$ and $\|\widetilde{Q}-1\|_2=\frac{\delta}{2}<\delta$. But $\widetilde{Q}\not\in\QQ$. Therefore, $1$ is not a relative interior point of $\QQ$~(relative to $\PPP$). Hence the converse of the first ``$\Longrightarrow$'' in \reff{averse} may not be true.
\end{example}
\begin{example}\label{exam:averse2}
Suppose $\Omega=\{\omega_1,\omega_2,\omega_3\}$ and $\PP_0(\{\omega_1\})=\PP_0(\{\omega_2\})=\PP_0(\{\omega_3\})=1/3$. Set
$$Q_0:~~Q_0(\omega_1)=\frac{3}{4},~Q_0(\omega_2)=\frac{3}{2},~Q_0(\omega_3)=\frac{3}{4}.$$ Then $Q_0\in\PPP$. Take $\QQ:=\conv(\{1,Q_0\})$, then $\{1\}\subset \QQ$. However, if we set $$X:~X(\omega_1)=-1,~X(\omega_2)=0,~X(\omega_3)=1,$$ then $X$ is nonconstant but
$$\RRR(X)=\max\{\EE(X),~\EE(XQ_0)\}=0=\EE(X).$$ Therefore, $\RRR(\cdot)$ is not averse.
\end{example}
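Example \ref{exam:averse2} can be checked directly. The sketch below uses the fact that a linear functional over $\conv(\{1,Q_0\})$ attains its maximum at one of the two extreme points; the numbers are exactly those of the example:

```python
import numpy as np

# Sketch: three equally likely states; Q = conv({1, Q_0}); for the
# nonconstant X of the example, R(X) = max{E[X], E[X Q_0]} = E[X] = 0,
# so the coherent measure with this envelope is not averse.
p = np.array([1.0, 1.0, 1.0]) / 3.0
q0 = np.array([0.75, 1.5, 0.75])
x = np.array([-1.0, 0.0, 1.0])

ex = float(np.dot(p, x))            # E[X]
exq0 = float(np.dot(p, x * q0))     # E[X Q_0]
risk = max(ex, exq0)                # R(X) over conv({1, Q_0})
```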
From Example \ref{exam:averse2} we can see that the converse of the second ``$\Longrightarrow$'' may not hold even when $\Omega$ is finite. However, the converse of the first ``$\Longrightarrow$'' always holds when $\Omega$ is finite, as the following proposition shows.
\begin{prop}\label{p:averse_disc}
Suppose $\Omega$ is finite and $\RRR(\cdot)$ is a coherent risk measure on $\LL^2$ with risk envelope $\QQ$. Then $\RRR(\cdot)$ is averse if and only if $1$ is a relative interior point of $\QQ$~(relative to $\PPP$).
\end{prop}
\pf By Proposition \ref{p:averse_suff}, we only need to prove one direction, that is, aversity implies that $1$ is a relative interior point. Suppose $\Omega=\{\omega_1,\cdots,\omega_n\}$ and $\PP_0(\{\omega_i\})=p_i>0$ for $i=1,2,\cdots,n$. In this case, $$\PPP=\left\{(q_1,\cdots,q_n):~q_1,\cdots,q_n\geq0,~\sum\limits_{i=1}^nq_ip_i=1\right\},$$ and the risk envelope of $\RRR(\cdot)$ is some $\QQ\subseteq\PPP$, that is, $$\RRR(X)=\max\limits_{(q_1,\cdots,q_n)\in\QQ}\{x_1q_1p_1+\cdots+x_nq_np_n\}$$ for any $x_1,\cdots,x_n\in\RR$. Here, $x_i$ represents $X(\omega_i)$ for $i=1,2,\cdots,n$. Moreover, since $\RRR(\cdot)$ is averse, we have
\begin{equation}\label{e:averse}
\max\limits_{(q_1,\cdots,q_n)\in\QQ}\{x_1q_1p_1+\cdots+x_nq_np_n\}>x_1p_1+\cdots+x_np_n
\end{equation}
whenever $x_1,\cdots,x_n$ are not the same. To prove that $(1,1,\cdots,1)$ is a relative interior point of $\QQ$~(relative to $\PPP$), we only need to prove the following:
{\it {\b Claim:} $y_1+\cdots+y_n$ is a relative interior point of $\{y_1q_1+\cdots+y_nq_n:~(q_1,\cdots,q_n)\in\QQ\}$
if $(y_1,\cdots,y_n)$ is not a multiple of $(p_1,\cdots,p_n)$.
}
{\b Proof of the claim.} In fact, since $\frac{y_1}{p_1},\cdots,\frac{y_n}{p_n}$ are not the same, by (\ref{e:averse}) we have
$$\max\limits_{(q_1,\cdots,q_n)\in\QQ}\{y_1q_1+\cdots+y_nq_n\}=\max\limits_{(q_1,\cdots,q_n)\in\QQ}\left\{\frac{y_1}{p_1}\cdot q_1p_1+\cdots+\frac{y_n}{p_n}\cdot q_np_n\right\}>y_1+\cdots+y_n$$ and
$$\min\limits_{(q_1,\cdots,q_n)\in\QQ}\{y_1q_1+\cdots+y_nq_n\}=-\max\limits_{(q_1,\cdots,q_n)\in\QQ}\{(-y_1)q_1+\cdots+(-y_n)q_n\}<y_1+\cdots+y_n.$$
Hence the Claim is true, and therefore $(1,1,\cdots,1)$ is a relative interior point of $\QQ$~(relative to $\PPP$), as desired.\qed
Next, we analyze the examples in Section \ref{section:examples_coherent}. Obviously, the expectation measure $\EE(\cdot)$ in subsection \ref{expectation} is not averse. We call a risk measure $\RRR(\cdot)$ ``law-invariant'' if $\RRR(X)=\RRR(Y)$ whenever $X$ and $Y$ have the same distribution under $\PP_0$. F\"ollmer and Schied \cite{FS02book} proved that if $\RRR(\cdot)$ is a coherent, law-invariant risk measure on $\LL^\infty$~(not $\LL^2$) other than $\EE(\cdot)$, then $\RRR(\cdot)$ is averse. Since the examples in subsections \ref{esssup}, \ref{conditionalvar} and \ref{meandeviation} are law-invariant and differ from the expectation measure $\EE(\cdot)$, one therefore expects them to be averse. However, since we work in the $\LL^2$ setting, we cannot apply the result of F\"ollmer and Schied \cite{FS02book} directly. Instead, we give a direct proof in the next proposition.
\begin{prop}\label{p:averse_examples}
The risk measures defined in (\ref{e:esssupdef}), (\ref{001}) and (\ref{e:meandeviationdef}) are all averse.
\end{prop}
\pf The proof is trivial for $\esssup(\cdot)$, since the expectation of any random variable is no larger than its essential supremum, and they are equal if and only if the random variable is a constant almost surely.
For the mean-deviation measure, we obviously have $\EE X+\lambda\cdot\|(X-\EE X)_+\|_2\geq\EE X$ for any $X\in\LL^2$. For $\lambda>0$~(when $\lambda=0$ the measure reduces to $\EE(\cdot)$), equality holds if and only if $X\leq\EE X$ almost surely, which implies $X=\EE X$~(i.e. $X$ is a constant) almost surely. Therefore, it is averse.
For the OCE measure, if $0\le\ga_2<1<\ga_1$, then for any $X\in\LL^2$, $$\beta+\EE[\gamma_1(X-\beta)_+-\gamma_2(\beta-X)_+]\geq\beta+\EE[(X-\beta)_+-(\beta-X)_+]=\EE(X)$$ for any $\beta\in\RR$. So
$$S_r(X)=\inf\limits_{\beta\in\RR}\left\{\beta+\EE[\gamma_1(X-\beta)_+-\gamma_2(\beta-X)_+]\right\}\geq\EE(X)$$ for any $X\in\LL^2$. Next, if $S_r(X)=\EE(X)$, then there exists a constant $\beta_0\in\RR$ such that $$\beta_0+\EE[\gamma_1(X-\beta_0)_+-\gamma_2(\beta_0-X)_+]=\EE(X)=\beta_0+\EE[(X-\beta_0)_+-(\beta_0-X)_+],$$ that is,
$$(\gamma_1-1)\EE[(X-\beta_0)_+]+(1-\gamma_2)\EE[(\beta_0-X)_+]=0.$$ Since $0\le\ga_2<1<\ga_1$, we can get $\EE[(X-\beta_0)_+]=\EE[(\beta_0-X)_+]=0$, and therefore, $X=\beta_0$ almost surely. So the OCE measure is averse, as desired.\qed
\bigskip
In contrast, we next show that the risk measure from subdividing the future is not averse.
\begin{prop}\label{p:dividing_averse}
The risk measure defined in (\ref{dividing_formula}) is not averse if $r\geq2$.
\end{prop}
\pf If $\PP_0(\Omega_k)\neq\lambda_k$ for some $k=1,2,\cdots,r$, then by (\ref{dividing_envelope}), $1\not\in\QQ$. Thus, by Proposition \ref{p:averse_ness}, $\RRR(\cdot)$ is not averse.
If $\PP_0(\Omega_k)=\lambda_k$ for all $k=1,2,\cdots,r$, then set $X=\sum\limits_{k=1}^rk\textbf{1}_{\Omega_k}$. Obviously $X$ is nonconstant, but
$$\RRR(X)=\sum\limits_{k=1}^r\lambda_k\cdot k=\sum\limits_{k=1}^rk\PP_0(\Omega_k)=\EE(X),$$ which implies that $\RRR(\cdot)$ is not averse.\qed
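The non-aversity witness in the proof above is easy to reproduce numerically (the four-point space and the two-block partition are illustrative assumptions):

```python
# Sketch: take lambda_k = P_0(Omega_k) and X = k on Omega_k; then
# R(X) = E(X) although X is nonconstant, so the measure is not averse.
x = {"a": 1.0, "b": 1.0, "c": 2.0, "d": 2.0}      # X = k on block k
p = {w: 0.25 for w in x}                           # uniform P_0
blocks = [{"a", "b"}, {"c", "d"}]
lam = [sum(p[w] for w in blk) for blk in blocks]   # lambda_k = P_0(Omega_k)

risk = sum(l * max(x[w] for w in blk) for l, blk in zip(lam, blocks))
mean = sum(p[w] * x[w] for w in x)                 # both equal 1.5 here
```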
\section{Coherent risk measures on affine sets: Risk envelopes and uncertainty sets}
Recently, coherent risk measures have been studied in the literature of robust optimization. For instance, several coherent risk measures were constructed by using the so-called uncertainty sets in Natarajan, Pachamanova, and Sim \cite{NPS09}, while Bertsimas and Brown \cite{BB09} examined the question from a different perspective: If risk preferences are specified by a coherent risk measure, how would the uncertainty set be constructed? In general, from the viewpoint of robust optimization, a risk measure is applied to a random variable of a special structure (say, a linear combination of basic random variables) and is defined by uncertainty sets without involving the exact details of the probability structure of the random variables. In particular, the mean-standard deviation measure, the discrete CVaR, and the distortion risk measure are defined through cone-representable uncertainty sets. If the same risk measure can be constructed by both a risk envelope and an uncertainty set, then there must be a certain relation between the two objects. It is therefore of interest to explore the connection between risk envelopes and uncertainty sets. This leads to a deeper understanding of robust optimization.
Let us consider a rather general case in robust optimization, where all uncertain data are affine functions of a finite number of random variables, $X_1,...,X_n,$ where $X_i\in\LL^2(\Omega,\Sigma,\PP_0)$ for $1\leq i\leq n$.
Denote
$$\VV:=\left\{X=a_0+\rv:~a_0,a_1,\cdots,a_n\in\RR\right\}.$$ Then $\VV$ is the affine set generated by $X_1,...,X_n$. We call $\RRR(\cdot)$ a coherent risk measure on $\VV$ if it satisfies ({\b A1})--({\b A5}) for $X$ restricted on $\VV$. Note that any coherent risk measure on $\LL^2$ restricted to $\VV$ is a coherent risk measure on $\VV$. In general, if we define the risk envelope as
\begin{equation}\label{e:envelope}
\QQ^\VV:=\left\{Q\in\PPP:~\EE(XQ)\leq\RRR(X)~\h{for all}~X\in\VV\right\},
\end{equation} then $\QQ^{\VV}$ is convex and closed. Hence the function
\begin{equation}\label{e:dual}
\RRR\left(X\right)=\sup\limits_{Q\in\QQ^\VV}\EE(XQ)
\end{equation}
is a coherent risk measure on $\LL^2$ and thus is also a coherent risk measure on $\VV.$
We next show that the uncertainty set used in robust optimization for constructing a coherent risk measure on $\VV$ is the closure of the ``expected image'' of $\QQ$, in the sense that
\begin{equation}\label{e:UQ}
\UU^\VV=\cl\UU_\QQ, \h{ with }\UU_\QQ:=\left\{\begin{pmatrix}\EE(X_1Q)\cr \vdots\cr \EE(X_nQ) \end{pmatrix}:~Q\in\QQ\right\}.
\end{equation}
We need to introduce more notation. Let
\[
\UU_\PPP:=\left\{\begin{pmatrix}\EE(X_1Q)\cr \vdots\cr \EE(X_nQ) \end{pmatrix}:~Q\in\PPP\right\}.
\] Then $\UU_\QQ$ is a nonempty and convex subset of $\UU_\PPP$. Given a nonempty, convex and closed uncertainty set $\UU\subseteq\UU_\PPP$, let
\begin{equation}\label{e:QU}
\QQ_\UU:=\left\{Q\in\PPP:~\begin{pmatrix}\EE(X_1Q)\cr \vdots\cr \EE(X_nQ) \end{pmatrix}\in\UU\right\}.
\end{equation}
Then $\QQ_\UU$ is a nonempty and convex subset of $\PPP$. The following lemma is basic.
\begin{lemma}\label{property}
(1)~$\QQ_{\UU_\PPP}=\PPP$;
(2)~$\UU_{\QQ_\UU}=\UU$;
(3)~$\QQ\subseteq\QQ_{\UU_\QQ}$;
(4)~If $\QQ_1\subseteq\QQ_2$, then $\UU_{\QQ_1}\subseteq\UU_{\QQ_2}$;
(5)~$\UU_1\subseteq\UU_2$ if and only if $\QQ_{\UU_1}\subseteq\QQ_{\UU_2}$.
\end{lemma}
\pf
(1) Trivial.
(2) On one hand, we have $$\UU_{\QQ_\UU}=\left\{[\EE(X_1Q),..., \EE(X_nQ)]':~Q\in\QQ_\UU\right\}\subseteq\UU.$$ On the other hand, for any $(z_1,..., z_n)'\in\UU\subseteq\UU_\PPP$, there exists $Q\in\PPP$ such that $z_i=\EE(X_iQ)$ for any $1\leq i\leq n$. Since $[\EE(X_1Q),..., \EE(X_nQ)]'\in\UU$, by definition we have $Q\in\QQ_\UU$. Therefore,
$$(z_1,..., z_n)'=[\EE(X_1Q),..., \EE(X_nQ)]'\in\UU_{\QQ_\UU}.$$ Hence $\UU\subseteq\UU_{\QQ_\UU}$, and then $\UU_{\QQ_\UU}=\UU$.
(3) For any $Q\in\QQ$, we have $[\EE(X_1Q),..., \EE(X_nQ)]' \in\UU_\QQ$. Then by definition, $Q\in\QQ_{\UU_\QQ}$. Therefore, $\QQ\subseteq\QQ_{\UU_\QQ}$.
(4) Trivial.
(5) The ``only if'' part is trivial. For the ``if'' part, by (4) and (2), $\QQ_{\UU_1}\subseteq\QQ_{\UU_2}$ implies $\UU_{\QQ_{\UU_1}}\subseteq\UU_{\QQ_{\UU_2}}$, that is, $\UU_1\subseteq\UU_2$.\qed
\n {\b Remark.} The converse of (3) may not be true. For example, if $\QQ$ is the singleton $\{1\}$, then $\UU_\QQ=\{[\EE(X_1),..., \EE(X_n)]'\}$. Here $\QQ_{\UU_\QQ}$ contains all $Q\in\PPP$ such that $[\EE(X_1Q),..., \EE(X_nQ) ]'=[\EE(X_1),..., \EE(X_n) ]'$, which need not only be the constant variable $1$.
\bigskip
The next two propositions describe some relationships between risk envelopes and uncertainty sets.
\begin{prop}\label{rel1}
For any risk envelope $\QQ^\VV\subseteq\PPP$, $\RRR(\cdot)$ is a coherent risk measure on $\VV$ with risk envelope $\QQ^\VV$ if and only if it is a coherent risk measure on $\VV$ with uncertainty set $\UU_{\QQ^\VV}$.
\end{prop}
\pf By direct calculation, we can get
\beaa
\sup\limits_{Q\in\QQ^\VV}\EE\left[\left(a_0+\rv\right)Q\right]&=\sup\limits_{Q\in\QQ^\VV}\left(a_0+\sum\limits_{i=1}^na_i\EE(X_iQ)\right)\\
&=\sup\limits_{(z_1,\cdots,z_n)^T\in\UU_{\QQ^\VV}}\left(a_0+\rob\right)
\eeaa
for any $a_0+\rv\in\VV$, as desired.\qed
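The identity in the proof of Proposition \ref{rel1} can be illustrated on a finite sample space. In the sketch below (the probabilities, the variables $X_1,X_2$, and the finite family of densities are illustrative assumptions), the supremum of $\EE[(a_0+a_1X_1+a_2X_2)Q]$ over the densities coincides with the supremum of $a_0+a_1z_1+a_2z_2$ over their expected images $z=(\EE(X_1Q),\EE(X_2Q))'$:

```python
import numpy as np

# Sketch: on three states, compare sup_Q E[(a0 + a.X) Q] with
# sup_z (a0 + a.z) over the expected images z of the same densities.
p = np.array([0.5, 0.3, 0.2])                  # P_0
X = np.array([[1.0, -1.0, 0.0],                # X_1 on the three states
              [0.0,  2.0, 1.0]])               # X_2
densities = [np.array([1.0, 1.0, 1.0]),        # each Q >= 0 with E[Q] = 1
             np.array([1.4, 0.5, 0.75]),
             np.array([0.6, 1.5, 1.25])]

a0, a = 0.3, np.array([2.0, -1.0])
lhs = max(float(np.dot(p * q, a0 + a @ X)) for q in densities)
zs = [X @ (p * q) for q in densities]          # (E[X_1 Q], E[X_2 Q])'
rhs = max(a0 + float(np.dot(a, z)) for z in zs)
```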
\begin{prop}\label{rel2}
For any uncertainty set $\UU\subseteq\UU_\PPP$, $\RRR(\cdot)$ is a coherent risk measure on $\VV$ with uncertainty set $\UU$ if and only if it is a coherent risk measure on $\VV$ with risk envelope $\cl {\QQ_\UU}$.
\end{prop}
\pf It follows from Proposition \ref{rel1} and Lemma \ref{property}~(2), as desired.\qed
The following is a fundamental theorem in \cite{NPS09}, where the authors discussed how to construct coherent risk measures through uncertainty sets. However, since uncertainty sets are constructed independently of probability distributions, it is not completely clear how the uncertainty sets are related to the random variables appearing in the problem. We now present a new proof of the theorem, which discloses the connection between the uncertainty set and the risk measure on $\VV$.
\begin{thm}\label{uncertainty}
$\RRR(\cdot)$ is a coherent risk measure on $\VV$ if and only if there exists a nonempty and convex subset $\UU\subseteq\UU_\PPP$ such that
$$\RRR\left(a_0+\rv\right)=\sup\limits_{{z}=(z_1,\cdots,z_n)'\in\UU}\left(a_0+\rob\right)$$ for any $a_0,a_1,\cdots,a_n\in\RR$. We call $\UU$ the ``uncertainty set'' of the risk measure $\RRR(\cdot)$ on $\VV.$ It can be written explicitly as $$\UU=\left\{{z}\in\UU_\PPP:~\max\limits_{a_1,\cdots,a_n\in\RR}\left\{\rob:~\RRR\left(\rv\right)\leq1\right\}\leq1\right\}.$$
\end{thm}
\pf The statement on the ``if and only if'' part follows directly from Proposition \ref{rel1} and Proposition \ref{rel2}. Next, note that $\RRR(\cdot)$ is a coherent risk measure on $\VV$ with risk envelope
$$\QQ^\VV=\left\{Q\in\PPP:~\EE\left[\left(a_0+\rv\right)Q\right]\leq\RRR\left(a_0+\rv\right)~\h{for all}~a_0,a_1,\cdots,a_n\in\RR\right\}$$ if and only if it is a coherent risk measure on $\VV$ with uncertainty set
$$\UU_{\QQ^\VV}=\left\{\begin{pmatrix}\EE(X_1Q)\cr \vdots\cr \EE(X_nQ) \end{pmatrix}:~Q\in\PPP,~\EE\left[\left(a_0+\rv\right)Q\right]\leq\RRR\left(a_0+\rv\right)~\h{for all}~a_0,a_1,\cdots,a_n\in\RR\right\}.$$ Therefore, to complete the proof of Theorem \ref{uncertainty}, we only need to prove
\bea\label{equation}
&\left\{\begin{pmatrix}\EE(X_1Q)\cr \vdots\cr \EE(X_nQ) \end{pmatrix}:~Q\in\PPP,~\EE\left[\left(a_0+\rv\right)Q\right]\leq\RRR\left(a_0+\rv\right)~\h{for all}~a_0,a_1,\cdots,a_n\in\RR\right\}\nonumber\\
=&\left\{\begin{pmatrix}z_1\cr \vdots\cr z_n \end{pmatrix}\in\UU_\PPP:~\max\limits_{a_1,\cdots,a_n\in\RR}\left\{\rob:~\RRR\left(\rv\right)\leq1\right\}\leq1\right\}.
\eea
In fact, since $Q\in\PPP\Longleftrightarrow[\EE(X_1Q),...,\EE(X_nQ)]'\in\UU_\PPP$, and for any $Q\in\PPP$,
\beaa
&&\EE\left[\left(a_0+\rv\right)Q\right]\leq\RRR\left(a_0+\rv\right)~\h{for all}~a_0,a_1,\cdots,a_n\in\RR\\
&\Longleftrightarrow&\sum\limits_{i=1}^na_i\EE(X_iQ)\leq\RRR\left(\rv\right)~\h{for all}~a_1,\cdots,a_n\in\RR\\
&\Longleftrightarrow&\max\left\{\sum\limits_{i=1}^na_i\EE(X_iQ):~a_1,\cdots,a_n\in\RR,~\RRR\left(\rv\right)\leq1\right\}\leq1,
\eeaa
then (\ref{equation}) holds. The proof of Theorem \ref{uncertainty} is completed.\qed
\section{Concluding Remarks}
Artzner, Delbaen, Eber, and Heath introduced the fundamental notion of coherent risk measures \cite{ADEH97,ADEH99}. Rockafellar, Uryasev, and Zabarankin considered a dual representation theorem in $\LL^2$
space \cite{RUZ06}. In this paper, we considered risk measures under set operations and discussed the dual representations and aversity for various popular risk measures. We also studied the relationship
between the risk measure defined by risk envelopes and that defined by uncertainty sets in the case for the risk measures on affine sets of $\LL^2.$
\section{Introduction}
The New Interface Cement Equilibrated Mortar (NICEM) method proposed in~\cite{GJMN}
is an equilibrated mortar domain decomposition method
that allows for the use of optimized Schwarz algorithms with Robin
interface conditions on non-conforming grids.
It has been analyzed in~\cite{JMN10} in 2D and 3D for $\mathbf{P}_1$ elements.
The purpose of this paper is to extend this numerical analysis in 2D
for piecewise polynomials of higher order. We thus establish new numerical analysis results in the framework of finite element approximation and
also present the iterative algorithm and prove its convergence
in all these cases.
We first consider the problem at the continuous level:\ \ \ Find $u$
such that
\begin{eqnarray}
\label{eq:pbgen}
{\cal L}(u)&=&f \hbox{ in }\Omega\\
\label{eq:pbgen2}
{\cal C}(u)&=&g \hbox{ on }\partial\Omega
\end{eqnarray}
where ${\cal L}$ and ${\cal C}$ are partial differential operators.
The original Schwarz algorithm is based on a decomposition of the
domain $\Omega$ into overlapping subdomains and the resolution of
Dirichlet boundary value problems in each subdomain. It has been
proposed in~\cite{Lions} to use more general interface/boundary conditions for
the problems on the subdomains in order to use a non-overlapping
decomposition of the domain. The convergence factor is also dramatically
reduced.
More precisely, let $\Omega$ be a ${\cal C}^{1,1}$ (or convex polygon
in 2D or polyhedron in 3D) domain of $I\!\!R^d$, $d=2$ or $3$; we
assume it is decomposed into $K$ non-overlapping subdomains:
$
\overline \Omega = \cup_{k=1}^{K} \overline\Omega^k.
$
We suppose that the subdomains $\Omega^k, \ 1 \le k \le K$ are either
${\cal C}^{1,1}$ or polygons in 2D or polyhedra in 3D. We
assume also that this decomposition is geometrically conforming in the
sense that the intersection of the closure of two different
subdomains, if not empty, is either a common vertex, a common edge, or
a common face of the subdomains in 3D\footnote[1]{This assumption is not restrictive, since in the case of
a geometrically non-conforming partition, the faces can be decomposed into
subfaces to obtain geometrical conformity.}.
Let ${\bf n}_k$ be the outward normal from
$\Omega^k$. Let $({\cal B}_{k,\ell})_{1\le k,\ell \le K, k\not= \ell}$
be the chosen transmission conditions on the interface between
subdomains $\Omega^k$ and $\Omega^\ell$ (e.g. ${\cal B}_{k,\ell}={\partial\ \over \partial
{\bf n}_k}+\alpha_k$). What we shall call here a Schwarz type method for
the problem (\ref{eq:pbgen})-(\ref{eq:pbgen2}) is its reformulation:
\ \ \ Find $(u_k)_{1\le k\le K}$ such that
\begin{eqnarray}
{\cal L}(u_k)&=&f \hbox{ in }\Omega^k \nonumber\\
{\cal C}(u_k)&=&g \hbox{ on } \partial\Omega^k \cap \partial\Omega \nonumber\\
{\cal B}_{k,\ell}(u_k)&=&{\cal B}_{k,\ell}(u_\ell)
\hbox{ on }\partial\Omega^k \cap \partial\Omega^\ell, \nonumber
\end{eqnarray}
leading to the iterative procedure
\begin{eqnarray}
{\cal L}(u_k^{n+1})&=&f \hbox{ in }\Omega^k \nonumber\\
{\cal C}(u_k^{n+1})&=&g \hbox{ on } \partial\Omega^k \cap \partial\Omega \nonumber\\
{\cal B}_{k,\ell}(u_k^{n+1})&=&{\cal B}_{k,\ell}(u_\ell^{n})
\hbox{ on }\partial\Omega^k \cap \partial\Omega^\ell. \nonumber
\end{eqnarray}
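To make the iterative procedure concrete, the following self-contained sketch implements it for a simple 1D model: $(Id - d^2/dx^2)u=f$ on $(0,1)$ with homogeneous Dirichlet data, two non-overlapping subdomains split at $x=1/2$, and Robin transmission operators ${\cal B}_{k,\ell}={\partial\over\partial{\bf n}_k}+\alpha$. The finite-difference discretization, the one-sided approximation of the normal derivative, and the value of $\alpha$ are illustrative assumptions, not the NICEM discretization analyzed below:

```python
import numpy as np

# Illustrative 1D sketch of the non-overlapping Robin-Schwarz iteration for
# (Id - d^2/dx^2) u = f on (0,1), u(0) = u(1) = 0, split at x = 1/2.
n = 50                       # intervals per subdomain
h = 0.5 / n
alpha = 3.0                  # Robin parameter in B = d/dn + alpha (assumed)
f = 1.0                      # exact solution: u(x) = 1 - cosh(x-1/2)/cosh(1/2)

def solve_subdomain(g):
    """Solve u - u'' = f on a subdomain of length 1/2, with u = 0 at the
    outer end (index 0) and (du/dn + alpha*u) = g at the interface (index n),
    the outward normal derivative being approximated one-sidedly."""
    A = np.zeros((n + 1, n + 1))
    b = np.full(n + 1, f)
    A[0, 0] = 1.0; b[0] = 0.0                    # outer Dirichlet condition
    for i in range(1, n):                        # interior: u - u'' = f
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 1.0 + 2.0 / h**2
    A[n, n - 1] = -1.0 / h                       # Robin row at the interface
    A[n, n] = 1.0 / h + alpha
    b[n] = g
    return np.linalg.solve(A, b)

# Both subdomains are stored from their outer (Dirichlet) end towards the
# interface, so the stored one-sided derivative is the outward normal one.
u1 = np.zeros(n + 1)
u2 = np.zeros(n + 1)
for _ in range(60):
    # B_{k,l}(u_k^{n+1}) = B_{k,l}(u_l^n): the Robin datum for one subdomain
    # uses the opposite-signed normal derivative of the other's old iterate.
    g1 = -(u2[n] - u2[n - 1]) / h + alpha * u2[n]
    g2 = -(u1[n] - u1[n - 1]) / h + alpha * u1[n]
    u1, u2 = solve_subdomain(g1), solve_subdomain(g2)
```

At the fixed point, the two Robin exchanges force both the traces and the one-sided fluxes to match, so the iterates converge to the discrete monodomain solution; the speed of convergence depends on $\alpha$, which is precisely the parameter that optimized Schwarz methods tune.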
The convergence factor of associated Schwarz-type domain decomposition
methods depends largely on the choice of the transmission operators ${\cal B}_{k,\ell}$
(see for instance~\cite{Hagstrom,Nataf.4,GHAD2,GHAD1,Despres.3,Despres.4,Douglas,BenDespres,Widlund,Keyes,Bourdonnaye}
and~\cite{Nataf,Quarteroni}).
More precisely, transmission conditions which
reduce dramatically the convergence factor
of the algorithm have been proposed (see~\cite{Japhet2,Japhet1,JNR}) for a convection-diffusion equation,
where coefficients in second-order transmission conditions were optimized.
On the other hand, the mortar element method,
first introduced in~\cite{BMP}, enables the use of non-conforming
grids, and thus parallel generation of meshes, local adaptive
meshes and fast and independent solvers.
It is also well suited to the use of
``Dirichlet-Neumann''~(\cite{Quarteroni}), or ``Neumann-Neumann''
preconditioned conjugate gradient method applied to the Schur
complement matrix~\cite{Lacour,AMW,TosWid}.
In~\cite{AJMN}, a new cement to match Robin interface conditions with non-conforming
grids in the case of a finite volume discretization was introduced and analyzed.
Such an approach has been extended
to a finite element discretization in~\cite{GJMN}. A variant has been
independently implemented in~\cite{Vouvakis} for the Maxwell equations,
without numerical analysis. Another approach, in the finite volume case, has been proposed in~\cite{Saas}.
The numerical analysis of the NICEM method proposed in~\cite{GJMN}
is done in~\cite{JMN10} for $\mathbf{P}_1$ finite elements, in 2D and 3D.
These results are for interface conditions of order 0 (i.e. ${\cal B}_{k,\ell}={\partial\ \over \partial {\bf n}_k}+\alpha_k$)
and are prerequisites for designing this non-overlapping method with
interface conditions such as Ventcel interface
conditions, which greatly enhance the information exchange between subdomains; see~\cite{JMN12} for preliminary results on the extension
of the NICEM method to Ventcel conditions.
The purpose of this paper is first to present a general finite element NICEM method in the case of $\mathbf{P}_p$ finite elements, with $p\ge 1$ in 2D and $p=1$ in 3D. We also provide a Robin iterative algorithm and prove its convergence. Then,
we present in full details the error analysis in the case of piecewise polynomials of high order in 2D.
In Section~\ref{sec.defmethod}, we describe the NICEM method in 2D and 3D.
Then, in Section~\ref{sec.algo}, we present the iterative algorithm at the continuous and discrete levels,
and we prove, in both cases, the well-posedness and convergence of the iterative method,
for polynomials of low and high order in 2D, and for $\mathbf{P}_1$ finite elements in 3D.
The convergence is also proven in 3D for $\mathbf{P}_p$ finite elements, $p\ge 1$,
in a weak sense. In Section~\ref{sec.bestfit2D} we extend
the error estimates analysis given in~\cite{JMN10} to 2D piecewise polynomials of higher order.
We finally present in Section~\ref{sec:numresults} simulations for two and four
subdomains, that fit the theoretical estimates.
\section{Definition of the method}\label{sec.defmethod}
We consider the following problem: Find $u$ such that
\begin{eqnarray}
\label{initial_BVP1}
(Id - \Delta)u &=& f \quad \mbox{in } \Omega \\
\label{initial_BVP2}
u &=& 0 \quad \mbox{on } \partial{\Omega},
\end{eqnarray}
where $f$ is given in $L^2(\Omega)$. \\
The variational statement of the problem
(\ref{initial_BVP1})-(\ref{initial_BVP2})
consists in writing the problem as follows: Find $u \in H^1_0(\Omega)$
such that
\begin{eqnarray}\label{initial_VF}
\int_{\Omega} \left(\nabla u \nabla v +uv \right) dx = \int_{\Omega} fvdx,
\quad \forall v \in H^1_0(\Omega).
\end{eqnarray}
We introduce the space $H^1_*(\Omega^k)$ defined by
$$H^1_*(\Omega^k) = \{\varphi\in H^1(\Omega^k),\quad
\varphi = 0 \hbox{ over } \partial\Omega\cap\partial
\Omega^k\},$$
and we introduce $\Gamma^{k,\ell}$ the interface of two adjacent
subdomains, $\Gamma^{k,\ell} =
\partial\Omega^k\cap\partial\Omega^\ell.$
It is standard
to note that the space $H^1_0(\Omega)$ can then be identified with the
subspace of $K$-tuples ${\underline v}=(v_1,...,v_K)$ that are continuous across
the interfaces:
\begin{eqnarray}
V = \{{\underline v}=(v_1,...,v_K) \in \prod_{k=1}^K H^1_*(\Omega^k), \
\forall k,\ell, k \ne \ell, \ 1 \le k,\ell \le K, \ v_k = v_{\ell}
\mbox{ over }
\Gamma^{k,\ell} \}.
\nonumber
\end{eqnarray}
Following~\cite{JMN10}, in order to glue non-conforming grids with Robin transmission
conditions, we impose the constraint $ v_k = v_{\ell}$ over
$\Gamma^{k,\ell}$
through a Lagrange multiplier in $H^{-1/2}(\partial\Omega^k)$.
The constrained space is then defined as follows
\begin{eqnarray}\label{eq:constrainedspace}
{\cal V} = \displaystyle\lbrace({\underline v},{\underline q})\in
\left(\prod_{k=1}^K H^1_*(\Omega^k)\right)\times \left(\prod_{k=1}^K
H^{-1/2}(\partial\Omega^k)\right), \nonumber\\
\ v_k=v_\ell\hbox{
and }q_k = - q_\ell \hbox{ over }\Gamma^{k,\ell}, \ \forall k,\ell\rbrace.
\end{eqnarray}
Then, problem (\ref{initial_VF}) is equivalent to the following one (see~\cite{JMN10}):
Find $({\underline u},{\underline p}) \in {\cal V}$ such that
\begin{eqnarray*}
\label{eq:constraintpb}
\begin{array}{r}
\displaystyle\sum_{k=1}^{K} \int_{\Omega^k} \left( \nabla u_k\nabla v_k +u_kv_k \right) dx
- \sum_{k=1}^{ K} \
_{H^{-1/2}(\partial\Omega^k)}<p_k,v_k>_{H^{1/2}(\partial\Omega^k)}\hspace{1.2cm}\\
\displaystyle= \sum_{k=1}^{K} \int_{\Omega^k} f_kv_kdx, \quad \forall {\underline v} \in
\prod_{k=1}^KH^1_*(\Omega^k).
\end{array}
\end{eqnarray*}
Being equivalent to the original problem, where $p_k = {\partial
u\over\partial {\bf n}_k}$ over $\partial\Omega^k$, this problem is well
posed. This can also be derived directly from the proof of an inf-sup
condition that follows from the arguments developed hereafter for the
analysis of the iterative procedure.
Note that the Dirichlet and Neumann conditions in \eqref{eq:constrainedspace}
are together equivalent to the following combined equality
\begin{eqnarray}\label{eq:Robincd}
p_k + \alpha u_k = - p_{\ell} + \alpha u_{\ell} \quad \mbox{ over }
\Gamma^{k,\ell}, \quad \forall k,\ell.
\end{eqnarray}
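Indeed, writing \eqref{eq:Robincd} for the ordered pair $(k,\ell)$ and for $(\ell,k)$, and then taking the sum and the difference of the two relations, recovers the conditions of \eqref{eq:constrainedspace} (with $p$ in place of $q$):

```latex
\left.
\begin{aligned}
p_k + \alpha u_k &= -p_\ell + \alpha u_\ell\\
p_\ell + \alpha u_\ell &= -p_k + \alpha u_k
\end{aligned}
\right\}
\quad\Longrightarrow\quad
2\,(p_k+p_\ell)=0
\quad\text{and}\quad
2\alpha\,(u_k-u_\ell)=0,
```

that is, $p_k=-p_\ell$ and $u_k=u_\ell$ over $\Gamma^{k,\ell}$; the converse implication is immediate since $\alpha>0$.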
As noticed in~\cite{JMN10}, for regular enough functions
it is also equivalent to
\begin{eqnarray}\label{eq:Robincdint}
\hspace{-1cm}
\ \qquad\int_{\Gamma^{k,\ell}}((p_{k}+\alpha u_{k})-(-p_{\ell}+\alpha
u_{\ell})
)\psi_{k,\ell}
= 0,\
\forall \psi_{k,\ell} \in L^2(\Gamma^{k,\ell}), \ \ \forall k,\ell,
\end{eqnarray}
which is the form under which the discrete method is described.
Let us describe the method in the non-conforming discrete case.
\subsection{Discrete case}\label{sec.discretecasepb}
We now introduce the discrete spaces of piecewise polynomials of higher order
in 2D. Each $\Omega^k$ is provided with
its own mesh ${\cal T}_h^k, \ 1 \le k \le K$, such that
\begin{eqnarray}
\overline \Omega^k=\cup_{T \in {\cal T}_h^k} T. \nonumber
\end{eqnarray}
For $T \in {\cal T}_h^k$, let $h_T$ be the diameter of $T$
($h_T=\sup_{x,y \in T} d(x,y)$) and $h$ the discretization parameter
$h=\max_{1 \le k \le K} h_k,$ with $h_k=\max_{T \in {\cal T}_h^k} h_T.$
As noticed in~\cite{JMN10}, for the sake of readability we prefer to use
$h$ instead of $h_k$, but all the analysis could be performed with $h_k$ instead of $h$.
Let $\rho_T$ be the diameter of the circle (in 2D) or
sphere (in 3D)
inscribed in $T$, then $\sigma_T=\frac{h_T}{\rho_T}$ is a measure of the
non-degeneracy of $T$. We suppose that ${\cal T}_h^k$ is uniformly regular:
there exists $\sigma$ and $\tau$ independent of $h$ such that
$\forall T \in {\cal T}_h^k, \ \sigma_T \le \sigma,$
$\tau h \le h_T .$
We consider meshes made of simplicial elements (triangles), but
the analysis made hereafter applies as well to quadrilateral meshes.
Let ${\mathbf{P}}_p(T)$ denote the space of all polynomials defined over $T$
of total degree less than or equal to $p$.
The finite elements are of Lagrangian type, of class ${\cal C}^0$.
We define over each subdomain two conforming spaces $Y_h^k$ and
$X_h^k$ by:
\begin{eqnarray}
Y_h^k&=&\{v_{h,k} \in {\cal C}^0(\overline \Omega^k),
\ \ {v_{h,k}}_{|T} \in {\mathbf{P}}_p(T), \ \forall T \in {\cal T}_h^k \},
\nonumber\\
X_h^k&=&\{v_{h,k} \in Y_h^k, \ {v_{h,k}}_{|\partial \Omega^k \cap \partial
\Omega}=0\}.\nonumber
\end{eqnarray}
In what follows we assume that the mesh is designed by taking into
account the geometry of the $\Gamma^{k,\ell}$, in the sense that the
space of traces over each $\Gamma^{k,\ell}$ of elements of $Y_h^k$ is
a finite element space denoted by ${\cal Y}_h^{k,\ell}$.
For a given $k$, the space ${\cal Y}_h^k$ is then the product of the
${\cal Y}_h^{k,\ell}$ over all $\ell$ such that
$\Gamma^{k,\ell}\not=\emptyset$. With each such interface we associate
a subspace $\tilde W_h^{k,\ell}$ of ${\cal Y}_h^{k,\ell}$ in the same
spirit as in the mortar element method~\cite{BMP} in 2D or~\cite{BBM}
and~\cite{BraessDahmen} in 3D. To be more
specific, in 2D if the space $X_h^k$ consists of continuous piecewise
polynomials of degree $\le p$, then it is readily noticed that the
restriction of $X_h^k$ to $\Gamma^{k,\ell}$ consists of finite element
functions, adapted to the (possibly curved) side $\Gamma^{k,\ell}$, that are
piecewise polynomials of degree $\le p$. This side has two end points
that we denote as $x_0^{k,\ell}$ and $x_N^{k,\ell}$ that belong to the
set of vertices of the corresponding triangulation of
$\Gamma^{k,\ell}$ : $x_0^{k,\ell}, x_1^{k,\ell},...,x_{N-1}^{k,\ell},
x_N^{k,\ell}$. The space $\tilde W_h^{k,\ell}$ is then the subspace of
those elements of ${\cal Y}_h^{k,\ell}$ that are polynomials of degree
$\le p-1$ over both $[x_0^{k,\ell}, x_1^{k,\ell}]$ and
$[x_{N-1}^{k,\ell}, x_N^{k,\ell}]$. As before, the space $\tilde
W_h^{k}$ is the product space of the $\tilde W_h^{k,\ell}$ over each
$\ell$ such that $\Gamma^{k,\ell}\not=\emptyset$.
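As a quick sanity check on this construction (a standard mortar dimension count, under the simplifying assumption of a uniform degree $p$ on the $N$ segments of the triangulation of $\Gamma^{k,\ell}$):

```latex
\dim {\cal Y}_h^{k,\ell} = Np+1,
\qquad
\dim \tilde W_h^{k,\ell} = (Np+1)-2 = Np-1,
```

since lowering the degree from $p$ to $p-1$ on each of the two end segments removes one degree of freedom per segment. The dimension of $\tilde W_h^{k,\ell}$ thus equals the number of trace degrees of freedom interior to $\Gamma^{k,\ell}$, which is the usual counting behind the mortar matching.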
Let $\alpha$ be a given positive real number.
Following~\cite{JMN10}, the discrete constrained space is defined as
\begin{eqnarray}
\label{disc.const}
{\cal V}_h = \displaystyle\lbrace({\underline u}_h,{\underline p}_h)\in
\left(\prod_{k=1}^K X_h^k\right)\times
\left(\prod_{k=1}^K \tilde W_h^{k}\right), \nonumber\\
\ \qquad\int_{\Gamma^{k,\ell}}((p_{h,k}+\alpha u_{h,k})-(-p_{h,\ell}+\alpha
u_{h,\ell})
)\psi_{h,k,\ell}
= 0,\
\forall \psi_{h,k,\ell} \in \tilde W_h^{k,\ell}
\rbrace,
\end{eqnarray}
and the discrete problem is the following:
Find $({\underline u}_h,{\underline p}_h) \in {\cal V}_h$ such that\\\\
$\forall {\underline v}_h=(v_{h,1},...v_{h,K}) \in \prod_{k=1}^K X_h^k,$
\begin{eqnarray}\label{pbdiscret}
\sum_{k=1}^{K} \int_{\Omega^k} \left( \nabla u_{h,k}\nabla v_{h,k} +u_{h,k}
v_{h,k} \right) dx
- \sum_{k=1}^{K} \int_{\partial\Omega^k} p_{h,k} v_{h,k} ds
= \sum_{k=1}^{K} \int_{\Omega^k} f_k v_{h,k} dx.\hspace{10mm}
\end{eqnarray}
The Robin condition \eqref{disc.const} is the discrete counterpart of \eqref{eq:Robincdint}.
\section{Iterative algorithm}\label{sec.algo}
Let us describe the algorithm in the continuous case, and then in the
non conforming discrete case. In both cases, we prove the convergence
of the algorithm towards the solution of the problem.
\subsection{Continuous case}
Let us consider the Robin interface conditions \eqref{eq:Robincd}.
We introduce the following notations: $\ll p,v \gg_{\partial\Omega^k}= {}_{H^{-1/2}(\partial\Omega^k)}\!\!<p,v>_{H^{1/2}(\partial\Omega^k)}$
and $<p,v>_{\Gamma^{k,\ell}}={}_{(H_{00}^{1/2}(\Gamma^{k,\ell}))^{\prime}}\!\!<p,v>_{H_{00}^{1/2}(\Gamma^{k,\ell})}$.
The algorithm is then defined as follows: let $(u_k^n,p_k^n) \in
H^1_*(\Omega^k) \times H^{-1/2}(\partial\Omega^k)$
be an approximation of $(u,p)$ in $\Omega^k$ at step $n$.
Then, $(u_k^{n+1},p_k^{n+1})$ is the solution in
$H^1_*(\Omega^k) \times H^{-1/2}(\partial\Omega^k)$ of
\begin{eqnarray}
\label{algo_continu}
\int_{\Omega^k} \left( \nabla u_k^{n+1}\nabla v_k
+u_k^{n+1}v_k \right) dx
- \ll p_k^{n+1},v_k\gg_{\partial\Omega^k}
= \int_{\Omega^k} f_kv_kdx, \quad \forall v_k \in H^1_*(\Omega^k), \hspace{8mm}\\
\label{CI_continu}
<p_k^{n+1}+ \alpha u_k^{n+1},v_k>_{\Gamma^{k,\ell}}=
<- p_{\ell}^{n} + \alpha u_{\ell}^{n},v_k>_{\Gamma^{k,\ell}},
\quad \forall v_k \in H_{00}^{1/2}(\Gamma^{k,\ell}). \hspace{8mm}
\end{eqnarray}
Note that these equations define uncoupled problems on each $\Omega^k$.
Recalling that $f\in L^2(\Omega)$, the strong formulation reads
\begin{eqnarray}
-\Delta u_k^{n+1} + u_k^{n+1} &=& f_k \hspace{1.7cm}\hbox{ over }\Omega^k \nonumber\\
\displaystyle{\partial u_k^{n+1}\over\partial {\bf n}_k} + \alpha u_k^{n+1} &=& -p_\ell^n+\alpha
u_\ell^n \quad\hbox{ over } \Gamma^{k,\ell} \nonumber\\
\label{flux_fort}
\displaystyle p_k^{n+1} &=& {\partial u_k^{n+1}\over\partial {\bf n}_k} \hspace{1cm}\hbox{ over }
\partial\Omega^k.
\end{eqnarray}
From this strong formulation it is
straightforward to derive by induction that if each
$p^0_k, \ k=1,...,K$, is chosen in $\prod_\ell H^{1/2}(\Gamma^{k,\ell})$,
then,
for each $k$, $1\le k\le K$, and $n\ge 0$ the solution
$u_k^{n+1}$ belongs to
$H^1(\Omega^k)$ and $p_k^{n+1}$
belongs to $\prod_\ell
H^{1/2}(\Gamma^{k,\ell})$ by standard trace results ($p_k^{n+1} = -p_\ell^n+\alpha(u^n_\ell-u_k^{n+1})$). This regularity
assumption on $p^0_k$ is made hereafter.
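As an illustration of the uncoupled subdomain solves and of the flux update $p_k^{n+1} = -p_\ell^n+\alpha(u^n_\ell-u_k^{n+1})$, here is a minimal one-dimensional sketch of the two-subdomain Robin iteration for $-u''+u=f$ on $(0,1)$. This is not the discretization analyzed in the paper: it uses uniform first-order finite differences, a first-order one-sided Robin flux at the interface, and illustrative parameter values.

```python
import numpy as np

def solve_subdomain(f, h, alpha, g):
    """Solve -u'' + u = f on a subdomain with u = 0 at the outer end and
    the Robin condition du/dn + alpha*u = g at the interface end
    (first-order one-sided discretization of the normal derivative)."""
    m = len(f)
    A = np.zeros((m, m))
    b = f.astype(float).copy()
    for i in range(m - 1):                  # interior finite-difference rows
        A[i, i] = 2.0 / h**2 + 1.0
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        A[i, i + 1] = -1.0 / h**2
    A[m - 1, m - 1] = 1.0 / h + alpha       # Robin row at the interface node
    A[m - 1, m - 2] = -1.0 / h
    b[m - 1] = g
    return np.linalg.solve(A, b)

# Two subdomains (0, 1/2) and (1/2, 1); exact solution u(x) = sin(pi*x).
n, alpha = 100, 1.0
h = 0.5 / n
x1 = np.linspace(h, 0.5, n)            # Omega^1, interface node last
x2 = np.linspace(1.0 - h, 0.5, n)      # Omega^2, traversed toward the interface
f = lambda x: (np.pi**2 + 1.0) * np.sin(np.pi * x)

p1 = p2 = 0.0                          # interface fluxes p_k^n
u1t = u2t = 0.0                        # interface traces u_k^n
for _ in range(100):
    u1 = solve_subdomain(f(x1), h, alpha, -p2 + alpha * u2t)
    u2 = solve_subdomain(f(x2), h, alpha, -p1 + alpha * u1t)
    # flux update p_k^{n+1} = -p_l^n + alpha*(u_l^n - u_k^{n+1})
    p1, p2 = -p2 + alpha * (u2t - u1[-1]), -p1 + alpha * (u1t - u2[-1])
    u1t, u2t = u1[-1], u2[-1]

err = max(np.abs(u1 - np.sin(np.pi * x1)).max(),
          np.abs(u2 - np.sin(np.pi * x2)).max())
```

At the fixed point the traces match and the fluxes are opposite, exactly as in the transmission conditions defining ${\cal V}$.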
We can now prove that the algorithm (\ref{algo_continu})-(\ref{CI_continu})
converges for all $f \in L^2(\Omega)$:
\begin{theo}
Assume that $f$ is in $L^2(\Omega)$ and $(p^0_k)_{1 \le k \le K}
\in \prod_\ell H^{1/2}(\Gamma^{k,\ell})$. Then, the algorithm
(\ref{algo_continu})-(\ref{CI_continu}) converges
in the sense that
\begin{eqnarray}
\lim_{n \longrightarrow \infty} \left( \|u_k^n - u_k\|_{H^1(\Omega^k)}
+ \|p_k^n-p_k\|_{H^{-1/2}(\partial\Omega^k)} \right)=0,
\mbox{ for } 1\le k\le K, \nonumber
\end{eqnarray}
where $u_k$ is the restriction to $\Omega^k$ of the solution $u$
to (\ref{initial_BVP1})-(\ref{initial_BVP2}), and $p_k = {\partial
u_k \over\partial {\bf n}_k}$ over~$\partial\Omega^k$, $\ 1 \le k \le K$.
\end{theo}
\noindent{\bf Proof}.
As the equations are linear,
we can take $f=0$. We prove the convergence in the sense that the
associated sequence $(u_k^n,p_k^n)_n$ satisfies
\begin{eqnarray}
\lim_{n \longrightarrow \infty} \left( \|u_k^n\|_{H^1(\Omega^k)}
+ \|p_k^n\|_{H^{-1/2}(\partial\Omega^k)} \right)=0,
\mbox{ for } 1\le k\le K. \nonumber
\end{eqnarray}
We proceed as in~\cite{Lions,Despres.2}, using an energy estimate derived
by taking $v_k=u_k^{n+1}$ in \eqref{algo_continu} and using the regularity
property $p_k^{n+1} \in L^2(\partial\Omega^k)$:
\begin{eqnarray}
\int_{\Omega^k} \left( |\nabla u_k^{n+1}|^2
+|u_k^{n+1}|^2 \right) dx
= \int_{\partial\Omega^k} p_k^{n+1}u_k^{n+1} ds \nonumber
\end{eqnarray}
that can also be written
\begin{eqnarray}
\int_{\Omega^k} \left( |\nabla u_k^{n+1}|^2
+|u_k^{n+1}|^2 \right) dx
=\sum_{\ell} \frac{1}{4\alpha} \int_{\Gamma^{k,\ell}}\left(
( p_k^{n+1}+\alpha u_k^{n+1})^2 - ( p_k^{n+1}-\alpha u_k^{n+1})^2\right)ds.
\nonumber
\end{eqnarray}
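The rewriting of the boundary term above combines the fact that $u_k^{n+1}$ vanishes on $\partial\Omega^k\cap\partial\Omega$ with the elementary identity, valid pointwise for any $\alpha>0$:

```latex
p\,u \;=\; \frac{1}{4\alpha}\left[(p+\alpha u)^2-(p-\alpha u)^2\right],
\qquad\text{since}\quad (p+\alpha u)^2-(p-\alpha u)^2 = 4\alpha\,p\,u,
```

applied with $p=p_k^{n+1}$ and $u=u_k^{n+1}$ on each $\Gamma^{k,\ell}$.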
By using the interface conditions (\ref{CI_continu}) we obtain
\begin{eqnarray}\label{estim_en}
\int_{\Omega^k} \left( |\nabla u_k^{n+1}|^2
+|u_k^{n+1}|^2 \right) dx
+\frac{1}{4\alpha}\sum_{\ell}\int_{\Gamma^{k,\ell}}
( p_k^{n+1}-\alpha u_k^{n+1})^2ds
\nonumber\\
= \frac{1}{4\alpha}\sum_{\ell}\int_{\Gamma^{k,\ell}}
( - p_{\ell}^{n}+\alpha u_{\ell}^{n})^2ds.
\end{eqnarray}
Let us now introduce two quantities defined at each step $n$ by:
\begin{eqnarray}
E^n=\sum_{k=1}^K \int_{\Omega^k} \left( |\nabla u_k^{n}|^2
+|u_k^{n}|^2 \right)
\quad
\text{and}
\quad
B^n = \frac{1}{4\alpha}\sum_{k=1}^K\sum_{\ell \ne k} \int_{\Gamma^{k,\ell}}
( p_k^{n}-\alpha u_k^{n})^2ds. \nonumber
\end{eqnarray}
By summing up the estimates (\ref{estim_en}) over $k=1,...,K$, we have
$E^{n+1} + B^{n+1} \le B^n$,
so that, summing these inequalities now over $n$, we obtain:
\begin{eqnarray}
\sum_{n=1}^{\infty} E^{n} \le B^0. \nonumber
\end{eqnarray}
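In more detail, the inequality $E^{n+1}+B^{n+1}\le B^n$ telescopes:

```latex
\sum_{n=0}^{N-1} E^{n+1}
\;\le\; \sum_{n=0}^{N-1}\left(B^{n}-B^{n+1}\right)
\;=\; B^{0}-B^{N}
\;\le\; B^{0},
\qquad \forall N \ge 1,
```

so the series $\sum_n E^n$ converges and its general term tends to zero.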
We thus have $\lim_{n \longrightarrow \infty} E^n =0$, that is,
$u_k^n \rightarrow 0$ in $H^1(\Omega^k)$.
Together with (\ref{algo_continu}) (taken with $f=0$), this implies:
\begin{eqnarray}
\lim_{n \longrightarrow \infty} \|p_k^n\|_{H^{-1/2}(\partial\Omega^k)}=0,
\mbox{ for } k=1,...,K, \nonumber
\end{eqnarray}
which ends the proof of the convergence of the continuous algorithm.$\qquad \Box$
\subsection{Discrete case}\label{sec.discretecase}
We first introduce the discrete algorithm defined by: let $(u_{h,k}^n,p_{h,k}^n) \in
X_h^k \times \tilde W_h^{k}$
be a discrete approximation of $(u,p)$ in $\Omega^k$ at step $n$.
Then, $(u_{h,k}^{n+1},p_{h,k}^{n+1})$ is the solution in $X_h^k
\times\tilde W_h^k$ of
\begin{eqnarray}
\label{algo_discret}
\int_{\Omega^k} \left( \nabla u_{h,k}^{n+1}\nabla v_{h,k}
+u_{h,k}^{n+1}v_{h,k} \right) dx -
\int_{\partial\Omega^k}p_{h,k}^{n+1} v_{h,k} ds
= \int_{\Omega^k} f_kv_{h,k}dx ,\ \forall v_{h,k}\in X_h^k, \hspace{9mm}\\
\label{CI_discret}
\hspace{-1mm}\int_{\Gamma^{k,\ell}} (p_{h,k}^{n+1}+ \alpha u_{h,k}^{n+1})\psi_{h,k,\ell} =
\int_{\Gamma^{k,\ell}} ( -p_{h,\ell}^{n} + \alpha
u_{h,\ell}^{n}) \psi_{h,k,\ell} ,
\quad \forall \psi_{h,k,\ell} \in \tilde W_h^{k,\ell}.\hspace{8mm}
\end{eqnarray}
In order to analyze the convergence of this iterative scheme, we must
specify the norms that can be used for the Lagrange multipliers ${\underline p}_h$.
For any ${\underline p} \in \prod_{k=1}^{K}L^2(\partial \Omega^k)$, in addition to
the natural $L^2$ norm, we can define two better suited norms as follows
\begin{eqnarray}
\|{\underline p}\|_{-{1\over 2}} = \left(\sum_{k=1}^K
\|p_k\|_{H^{-{1\over 2}}(\partial \Omega^k)}^2 \right)^{1 \over 2} \quad \mbox{and} \quad
\|{\underline p}\|_{- {1 \over 2},*} = \left(\sum_{k=1}^K \sum_{\scriptstyle \ell=1
\atop{\atop \scriptstyle \ell \ne k}}^K
\|p_k\|_{H^{-{1\over 2}}_*(\Gamma^{k,\ell})}^2 \right)^{1 \over 2},
\nonumber
\end{eqnarray}
where $\|.\|_{H^{-{1\over 2}}_*(\Gamma^{k,\ell})}$ stands for the dual norm of
${H^{{1\over 2}}_{00}(\Gamma^{k,\ell})}$.
We also need a stability result for the Lagrange multipliers, and refer
to~\cite{BB} in 2D and to~\cite{JMN10} in 3D, where the following is shown:
\begin{lem}\label{lem.faker}
There exists a constant
$c_*$ such that, for
any $p_{h,k,\ell}$ in $\tilde W_h^{k,\ell}$, there exists an element
$w^{h,k,\ell}$ in
$X_h^k$ that
vanishes over $\partial\Omega^k\setminus\Gamma^{k,\ell}$ and satisfies
\begin{eqnarray}
\label{stab1}
\int_{\Gamma^{k,\ell}} p_{h,k,\ell} w^{h,k,\ell} \ge
\|p_{h,k,\ell}\|^2_{H^{-{1\over
2}}_*(\Gamma^{k,\ell})}
\end{eqnarray}
with a bounded norm
\begin{eqnarray}
\label{stab2}
\|w^{h, k,\ell}\|_{H^1(\Omega^k)} \le c_* \|p_{h,k,\ell}\|_{H^{-{1\over
2}}_*(\Gamma^{k,\ell})}.\nonumber
\end{eqnarray}
\end{lem}
Let $\pi_{k,\ell}$ denote the orthogonal projection operator from
$L^2(\Gamma^{k,\ell})$
onto $\tilde W_h^{k,\ell}$. Then, for $v \in L^2(\Gamma^{k,\ell})$,
$\pi_{k,\ell}(v)$ is the unique element of $\tilde W_h^{k,\ell}$
such that
\begin{eqnarray}\label{eq:defpi}
\int_{\Gamma^{k,\ell}} (\pi_{k,\ell}(v)-v)\psi=0, \quad \forall \psi \in
\tilde W_h^{k,\ell}.
\end{eqnarray}
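For intuition, the defining orthogonality (\ref{eq:defpi}) of $\pi_{k,\ell}$ can be checked numerically on a single reference interval, replacing $\tilde W_h^{k,\ell}$ by the full polynomial space $\mathbf{P}_{p-1}([-1,1])$ (a simplifying assumption for this sketch) and computing the projection through Legendre expansions:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def l2_project(f, p, nquad=50):
    """L2-orthogonal projection of f onto P_{p-1} on [-1,1], returned as
    Legendre coefficients (c_0, ..., c_{p-1}); uses Gauss quadrature."""
    x, w = leg.leggauss(nquad)
    coeffs = []
    for m in range(p):
        Lm = leg.legval(x, [0.0] * m + [1.0])       # L_m at the nodes
        # c_m = <f, L_m> / <L_m, L_m>, with <L_m, L_m> = 2/(2m+1)
        coeffs.append(np.sum(w * f(x) * Lm) * (2 * m + 1) / 2.0)
    return np.array(coeffs)

p = 4
f = lambda x: np.sin(2.5 * x)
c = l2_project(f, p)

# the residual f - pi(f) is orthogonal to every test function of P_{p-1}
x, w = leg.leggauss(60)
resid = f(x) - leg.legval(x, c)
orth = [np.sum(w * resid * leg.legval(x, [0.0] * m + [1.0])) for m in range(p)]
```

The mortar space $\tilde W_h^{k,\ell}$ differs from $\mathbf{P}_{p-1}$ only through the reduced degree on the end segments; the orthogonality mechanism is the same.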
We are now in a position to prove the convergence of the iterative scheme.
\begin{theo}\label{theo2}
Let us assume that $\alpha h \le c$, for some small enough constant $c$.
Then, the discrete problem (\ref{pbdiscret}) has a unique solution
$({\underline u}_h,{\underline p}_h) \in {\cal V}_h$.
The algorithm (\ref{algo_discret})-(\ref{CI_discret}) is well posed and
converges
in the sense that
\begin{eqnarray}
\lim_{n \longrightarrow \infty} \left( \|u_{h,k}^n - u_{h,k}\|_{H^1(\Omega^k)}
+ \sum_{\ell\neq k} \|p_{h,k,\ell}^n-p_{h,k,\ell}\|_{H^{-{1\over
2}}_*(\Gamma^{k,\ell})} \right)=0,
\mbox{ for } 1\le k\le K. \nonumber
\end{eqnarray}
\end{theo}
{\bf Proof}. For the sake of convenience, we drop the index $h$ in what
follows.
We first assume that problems (\ref{pbdiscret}) and
(\ref{algo_discret})-(\ref{CI_discret}) are well posed (this is established
at the end of the proof), proceed as in the continuous case, and take $f=0$.
From (\ref{eq:defpi}) we have
$$\forall v_k \in L^2(\Gamma^{k,\ell}), \quad
\int_{\Gamma^{k,\ell}} p_k^{n+1} v_k = \int_{\Gamma^{k,\ell}} p_k^{n+1}
\pi_{k,\ell}(v_k),$$
and (\ref{CI_discret}) also reads
\begin{eqnarray}\label{eq:constraintprojn}
p_k^{n+1}+\alpha \pi_{k,\ell} (u_k^{n+1})=
\pi_{k,\ell} (-p_{\ell}^n+\alpha u_{\ell}^n) \quad \mbox{over }
\Gamma^{k,\ell}.
\end{eqnarray}
By taking $v_k=u_k^{n+1}$ in (\ref{algo_discret}), we thus have
\begin{eqnarray}
\int_{\Omega^k} \left( |\nabla u_k^{n+1}|^2
+|u_k^{n+1}|^2 \right) dx
\hspace{7cm}\nonumber\\
= \sum_{\ell} \frac{1}{4\alpha} \int_{\Gamma^{k,\ell}}\left(
( p_k^{n+1}+\alpha \pi_{k,\ell} (u_k^{n+1}))^2 - ( p_k^{n+1}-\alpha
\pi_{k,\ell} (u_k^{n+1}))^2\right)ds.
\nonumber
\end{eqnarray}
Then, by using the interface conditions (\ref{eq:constraintprojn}) we obtain
\begin{eqnarray}
\int_{\Omega^k} \left( |\nabla u_k^{n+1}|^2
+|u_k^{n+1}|^2 \right) dx
+\frac{1}{4\alpha}\sum_{\ell}\int_{\Gamma^{k,\ell}}
( p_k^{n+1}-\alpha \pi_{k,\ell}(u_k^{n+1}))^2ds\nonumber\\
= \frac{1}{4\alpha}\sum_{\ell}\int_{\Gamma^{k,\ell}}
(\pi_{k,\ell}(p_{\ell}^{n}-\alpha u_{\ell}^{n}))^2 ds. \nonumber
\end{eqnarray}
It is straightforward to note that
\begin{eqnarray}
\int_{\Gamma^{k,\ell}}
(\pi_{k,\ell}(p_{\ell}^{n}-\alpha u_{\ell}^{n}))^2ds
\le \int_{\Gamma^{k,\ell}}
( p_{\ell}^{n}-\alpha u_{\ell}^{n})^2 ds\nonumber\\
= \int_{\Gamma^{k,\ell}}(p_{\ell}^{n} -\alpha \pi_{\ell,k}(u_{\ell}^{n}) +
\alpha \pi_{\ell,k}(u_{\ell}^{n}) -
\alpha u_{\ell}^{n})^2ds\nonumber\\
= \int_{\Gamma^{k,\ell}}(p_{\ell}^{n} -\alpha \pi_{\ell,k}(u_{\ell}^{n}))^2 +
\alpha ^2(\pi_{\ell,k}(u_{\ell}^{n}) - u_{\ell}^{n})^2ds \nonumber
\end{eqnarray}
since $(Id-\pi_{\ell,k})(u_{\ell}^{n})$ is orthogonal to any element in
$\tilde W_h^{\ell,k}$. For the last term above, we recall that (see~\cite{BMP} in 2D
and~\cite{BBM} or~\cite{BraessDahmen} equation (5.1) in 3D)
\begin{eqnarray*}
\label{eq:propr-pilk}
\int_{\Gamma^{k,\ell}}(\pi_{\ell,k}(u_{\ell}^{n}) - u_{\ell}^{n})^2ds
\le c h \|u_\ell^n\|_{H^{1/2}(\Gamma^{k,\ell})}^2
\le c h \| u_\ell^n\|_{H^1(\Omega^{\ell})}^2.
\end{eqnarray*}
With notations similar to those introduced in the
continuous case, we deduce
\begin{eqnarray}
E^{n+1} + B^{n+1} \le c \alpha h E^n + B^n \nonumber
\end{eqnarray}
and we conclude as in the continuous case: if $c \alpha h < 1$ then
$\lim_{n\rightarrow\infty}E^n = 0$. The convergence of $u_k^n$ towards
0 in the $H^1$ norm follows. Taking $f=0$ in (\ref{algo_discret}), then using (\ref{stab1})
and the convergence of $u_k^n$ towards 0 in the $H^1$ norm, we derive the convergence of $p_k^n$ in the
$H^{-{1\over 2}}_*(\Gamma^{k,\ell})$ norm.
Note that taking $f=0$ and $(u^n,p^n)=0$ proves that
$(u^{n+1},p^{n+1})=0$, from which we derive that the square problem
(\ref{algo_discret})-(\ref{CI_discret}) is uniquely solvable, hence well posed.
Similarly, taking $f=0$ and dropping the superscripts $n$ and $n+1$
in the previous proof gives (with obvious notations):
\begin{eqnarray}
E + B \le c \alpha h E + B.\nonumber
\end{eqnarray}
hence $(1-c\alpha h)E \le 0$, so that $E=0$ when $c\alpha h<1$. The existence and uniqueness of a solution of (\ref{pbdiscret}) then follow by similar arguments.
In~\cite{JMN10} the well-posedness of (\ref{pbdiscret}) is addressed through a more direct
proof: let us introduce over
$(\prod_{k=1}^K H^1_*(\Omega^k)\times \prod_{k=1}^K
L^2(\partial\Omega^k))\times \prod_{k=1}^K H^1_*(\Omega^k)$ the
bilinear form
\begin{eqnarray*}
\label{fbs_discret}
\tilde a(({\underline u},{\underline p}), {\underline v}) = \sum_{k=1}^K
\int_{\Omega^k} \left( \nabla u_k\nabla v_k
+u_k v_k \right) dx -\sum_{k=1}^K
\int_{\partial\Omega^k}p_k v_k ds.
\end{eqnarray*}
The space $\prod_{k=1}^K H^1_*(\Omega^k)$ is endowed with the norm
\begin{eqnarray}
\|{\underline v}\|_* = \left(\sum_{k=1}^K \|v_k\|_{H^1(\Omega^k)}^2 \right)^{1 \over
2}.
\nonumber
\end{eqnarray}
\vspace{1mm}
\begin{lem}\label{lem.infsup}
There exists $c^\prime>0$ and a constant $\beta>0$ such that
\vspace{-3mm}
\begin{eqnarray*}
\label{inf-sup_discret}
\mbox{for } \alpha h\le c^\prime, \quad \forall ({\underline u}_h,{\underline p}_h) \in {\cal V}_h ,\ \exists {\underline v}_h\in\prod_{k=1}^K
X_h^k, \hspace{3cm}\nonumber\\
\tilde a(({\underline u}_h,{\underline p}_h), {\underline v}_h) \ge
\beta (\|{\underline u}_h\|_*+ \|{\underline p}_h\|_{-{1\over 2},*}) \|{\underline v}_h\|_*.
\end{eqnarray*}
Moreover, we have the following continuity property: there exists a constant
$c>0$ such that
\begin{eqnarray}\label{ineq:continuity}
\hspace{-8mm}\forall ({\underline u}_h,{\underline p}_h) \in {\cal V}_h ,\ \forall {\underline v}_h\in\prod_{k=1}^K
X_h^k, \quad
\tilde a(({\underline u}_h,{\underline p}_h), {\underline v}_h) \le c (\| {\underline u}_h \|_* +
\|{\underline p}_h\|_{-{1 \over 2}}) (\| {\underline v}_h \|_*).
\end{eqnarray}
\end{lem}
This lemma is proven in~\cite{JMN10}, based on Lemma~\ref{lem.faker}.
From Lemma~\ref{lem.infsup}, we have
for any $(\tilde{\underline u}_h,\tilde {\underline p}_h)\in{\cal V}_h$,
\begin{eqnarray}\label{estimuuh}
\| {\underline u} - {\underline u}_h\|_* + \|{\underline p} - {\underline p}_h\|_{-{1 \over 2},*} \le
c (\| {\underline u} - \tilde{\underline u}_h\|_* + \|{\underline p} - \tilde{\underline p}_h\|_{-{1 \over 2}}).
\end{eqnarray}
and we are thus led to the analysis of the best fit of $({\underline u},{\underline p})$
by elements of ${\cal V}_h$.
As noticed in~\cite{JMN10}, it is well known~\cite{BB,BraessDahmen} but unusual that
the inf-sup and continuity conditions involve different norms: the
$\|\cdot\|_{-{1 \over 2}}$ and $\|\cdot\|_{-{1\over 2},*}$ norms. Thus, these two different norms
appear in \eqref{estimuuh} and the best approximation analysis will be done using
the $\|\cdot\|_{-{1 \over 2}}$ norm, while the error estimates will involve the $\|\cdot\|_{-{1\over 2},*}$ norm.
The analysis of the best fit has been done in~\cite{JMN10} in 2D and 3D for
$\mathbf{P}_1$ approximations. Let us analyze the best approximation of $({\underline u},{\underline p})$ by
elements in ${\cal V}_h$ in the general case of higher order approximations in 2D.
\section{Analysis of the best fit in 2D for higher order approximations}\label{sec.bestfit2D}
In this part we analyze the best approximation of $({\underline u},{\underline p})$ by
elements in ${\cal V}_h$.
Following the same lines as in the analysis of the best fit in the $\mathbf{P}_1$ situation of~\cite{JMN10},
we can prove the following results:
\begin{theo}
\label{best-fit}
Let $u \in H^2(\Omega)\cap H^1_0(\Omega)$
be such that ${\underline u}=(u_k)_{1\le k\le K}\in \prod_{k=1}^K H^{2+m}(\Omega^k)$ with $u_k=u_{|\Omega^k}$, and
$p-1\ge m \ge 0$. Let us set also
$p_{k,\ell}=\frac{\partial u}{\partial {\bf n}_k}$
over each $\Gamma^{k,\ell}$.
Then there exists $\tilde{{\underline u}}_h$ in
$\prod_{k=1}^K X_h^k$
and $\tilde{{\underline p}}_h=(\tilde{p}_{k \ell h})$, with $ \tilde{p}_{k \ell h}
\in \tilde W_h^{k,\ell}$
such that $(\tilde{{\underline u}}_h,\tilde{{\underline p}}_h)$ satisfy the coupling condition
(\ref{disc.const}), and
\begin{eqnarray}
\| \tilde{{\underline u}}_h -{\underline u}\|_*
&\le& c h^{1+m} \sum_{k=1}^K \| u_k \|_{H^{2+m}(\Omega^k)}
+{c h^m \over \alpha} \sum_{\ell=1}^K\sum_{k < \ell} \| p_{k,\ell} \|_{H^{{1 \over
2}+m}(\Gamma^{k,\ell})},
\nonumber\\ \nonumber\\
\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
&\le& c\alpha h^{2+m} (\|u_k\|_{H^{2+m}(\Omega^k)}
+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
\nonumber\\
&& \qquad \qquad \qquad \qquad\qquad\! + \ c h^{1+m} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.
\nonumber
\end{eqnarray}
Here $c$ is a constant independent of $h$ and $\alpha$.
\end{theo}
If we assume more regularity on the normal derivatives on the interfaces,
we have
\begin{theo}
\label{best-fit.2}
Assume, in addition to the hypotheses of Theorem \ref{best-fit}, that
$p_{k,\ell}$ is in $H^{{3 \over 2}+m}(\Gamma^{k,\ell})$.
Then there exists $\tilde{{\underline u}}_h$ in
$\prod_{k=1}^K X_h^k$
and $\tilde{{\underline p}}_h=(\tilde{p}_{k \ell h}), \ \tilde{p}_{k \ell h} \in
\tilde W_h^{k,\ell}$
such that $(\tilde{{\underline u}}_h,\tilde{{\underline p}}_h)$ satisfy
(\ref{disc.const}), and
\begin{multline*}
\| \tilde{{\underline u}}_h -{\underline u}\|_*
\le
c h^{1+m} \sum_{k=1}^K \| u_k \|_{H^{2+m}(\Omega^k)}
+{c h^{m+1} \over \alpha}(\log h)^{\beta(m)} \sum_{\ell=1}^K\sum_{k < \ell} \| p_{k,\ell}
\|_{H^{{3\over 2}+m}(\Gamma^{k,\ell})},
\\
\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le
c\alpha h^{2+m} (\|u_k\|_{H^{2+m}(\Omega^k)}
+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})}) \\
+ \ c h^{2+m} (\log h)^{\beta(m)}\| p_{k,\ell} \|_{H^{{3 \over2}+m}(\Gamma^{k,\ell})}.
\end{multline*}
Here $c$ is a constant independent of $h$ and $\alpha$, and $\beta(m)=0$ if
$m\le p-2$ and $\beta(m)=1$ if $m=p-1$.
\end{theo}
The main part of the proof is independent of the degree of the
approximation and is done in~\cite{JMN10}.
Only Lemma~4 in~\cite{JMN10} depends on the degree of the approximation, and it is proven there only for
a $\mathbf{P}_1$ approximation. We prove it here for higher order approximations:
\begin{lem}\label{lem_1}
Assume that the degree $p$ of the finite element approximation satisfies
$p\le 13$. There exist two constants $c_1>0$ and $c_2>0$ independent of $h$
such that for all
$\eta_{\ell,k}$ in
${\cal Y}_h^{\ell,k}\cap H_0^1(\Gamma^{k,\ell})$, there exists an element
$\psi_{\ell,k}$ in
$\tilde W_h^{\ell,k}$, such that
\begin{eqnarray}
\label{injectif}
\int_{\Gamma^{k,\ell}}(\eta_{\ell,k}+\pi_{k,\ell}(\eta_{\ell,k}))\psi_{\ell,k}
\ge c_1\|\eta_{\ell,k} \|_{L^2(\Gamma^{k,\ell})}^2,\\
\label{stable}
\| \psi_{\ell,k} \|_{L^2(\Gamma^{k,\ell})} \le
c_2 \| \eta_{\ell,k} \|_{L^2(\Gamma^{k,\ell})}.
\end{eqnarray}
\end{lem}
\begin{rem}
The limit $p\le 13$ is related to the arguments used in the proof we propose
for this lemma, and is thus, a priori, only technical. We have not found how
to remove this limit, but, for applications, it lies well above what is
generally considered the optimal range for the polynomial degree in $h$-$p$
finite element methods. Indeed, as regards the question of accuracy with
respect to run time, the publication \cite{Vos} analyzes in full
detail\footnote{Of course the answer to that question depends on the
implementation of the discretization method and the exact properties of the
solution to be approximated, but this indicates a tendency that is confirmed
by implementations on a large set of other applications.}, on a variety of
problems and regularities of solutions, the accuracy achieved by low to high
order finite element approximations as a function of the number of degrees of
freedom and of the run time. It appears that the use of degrees between 5 and
8 is quite competitive, which motivates the present analysis.
\end{rem}
The proof of these results is performed in the following steps.
Note that Lemma~\ref{lem:etapsi_base} below generalizes one of the main
arguments in the proof of Lemma~4 in \cite{JMN10} to higher degree in 2D; a
similar generalization in 3D (see Lemma~7 of \cite{JMN10}) would require an
extension to higher order of the theory developed in \cite{BraessDahmen},
which does not exist yet and goes beyond the scope of the present paper.
\subsection{A first technical result}
\noindent\\
\vspace{-2mm}
\begin{lem}\label{lem:etapsi_base}
Let $1 \le p\le 13$ be an integer. There exist $c>0$ and $C>0$ such that for all
$\eta\in \mathbf{P}_p([-1,1])$ s.t.
$\eta(-1)=0$ there exists $\psi\in\mathbf{P}_{p-1}([-1,1])$ s.t.
$\eta(1)=\psi(1)$, and
\begin{eqnarray*}
J(\psi;\eta):=\int_{-1}^1 \left(\eta\,\psi -\frac{1}{4}(\eta-\psi)^2\right)
\ge c \int_{-1}^1 \eta^2 \quad \text{and} \quad
\int_{-1}^1 \psi^2 \le C \int_{-1}^1 \eta^2.
\end{eqnarray*}
\end{lem}
{\bf Proof}. This lemma has been proven in the case $p=1$ in~\cite{JMN10}.
For $p \ge 2$, we prove it by studying, for a given $\eta\in \mathbf{P}_p([-1,1])$,
$\eta\neq 0$, the following maximization problem: \\
Find $\psi\in\mathbf{P}_{p-1}([-1,1])$ such that
\begin{equation}\label{eq:minconst}
J(\psi;\eta)=\max_{\begin{array}{l}
\varphi \in\mathbf{P}_{p-1}([-1,1])\\
\varphi(1)=\eta(1)
\end{array}}J(\varphi;\eta).
\end{equation}
The functional $J$ is strictly concave in $\varphi$, and the constraint set
is nonempty, so this problem admits a solution. The functional
$J(\varphi;\eta)$ being quadratic in $(\varphi,\eta)$ and the constraint being
affine, the optimality condition shows that the problem reduces to a linear
problem whose right hand side depends linearly on $\eta$. The affine
constraint being of rank one, problem (\ref{eq:minconst}) admits a unique
solution, which depends linearly on $\eta$. Therefore, it makes sense to
introduce the operator:
\[
\begin{array}{rcl}
S: \mathbf{P}_{p,0}([-1,1]) &\longrightarrow& \mathbf{P}_{p-1}([-1,1])\\
\hphantom{S: }\eta &\mapsto& \psi \mathrm{\ solution\ to\
(\ref{eq:minconst})},
\end{array}
\]
where $\mathbf{P}_{p,0}([-1,1])$ is the set of functions of $\mathbf{P}_p([-1,1])$ that
vanish at $-1$. In
Lemma~\ref{lem:etapsi_base}, we take $\psi=S(\eta)$. The operator $S$ is
linear from a finite dimensional space to another so that it is continuous
for any norm on these spaces. Therefore there exists $C>0$ possibly depending on $p$ such that
$\int_{-1}^1
\psi^2 \le C \int_{-1}^1 \eta^2$.
Moreover, the function
\[
\begin{array}{rcl}
H: \mathbf{P}_{p,0}([-1,1])\backslash \{0\} &\longrightarrow& {\mathbb R}\\
\hphantom{S: }\eta &\mapsto& \displaystyle\frac{J(S(\eta),\eta)}{\displaystyle\int_{-1}^1
\eta^2}
\end{array}
\]
is continuous and satisfies $H(\eta)=H(\lambda\eta)$ for any $\lambda\neq
0$. It therefore suffices to minimize $H$ over the unit sphere of
$\mathbf{P}_{p,0}([-1,1])$, which is compact, so that $H$ reaches its minimum;
this minimum is strictly positive, as results from the lemma stated and proven
in the next subsection, and the proof of Lemma~\ref{lem:etapsi_base} is
complete.$\qquad \Box$
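Numerically, the optimal $\psi=S(\eta)$ admits a closed form in the Legendre basis (anticipating the optimality relations derived in the next subsection), which allows a quick sanity check of the positivity of $H$ on random samples. This is only a verification sketch under that closed-form assumption, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def optimal_J(e):
    """Given Legendre coefficients e of eta (degree p, eta(-1)=0), return
    (J(S(eta); eta), ||eta||_{L2}^2), using psi_m = 3 e_m - mu (2m+1) and
    the constraint psi(1) = eta(1) to determine the multiplier mu."""
    p = len(e) - 1
    M = 2.0 / (2 * np.arange(p + 1) + 1)         # diagonal Legendre mass matrix
    mu = (3.0 * np.sum(e[:p]) - np.sum(e)) / p**2
    s = 3.0 * e[:p] - mu * (2 * np.arange(p) + 1)
    J = (np.sum(M[:p] * e[:p] * s)
         - 0.25 * (np.sum(M[:p] * (e[:p] - s) ** 2) + M[p] * e[p] ** 2))
    return J, np.sum(M * e**2)

worst = np.inf
for p in range(1, 14):                           # all degrees allowed by the lemma
    for _ in range(200):
        c = rng.standard_normal(p)               # eta = sum_m c_m (L_m + L_{m-1})
        e = np.zeros(p + 1)
        e[1:] += c                               # contribution of L_m
        e[:p] += c                               # contribution of L_{m-1}
        J, n2 = optimal_J(e)
        worst = min(worst, J / n2)
```

The sampled ratios $J(S(\eta);\eta)/\|\eta\|_{L^2}^2$ remain strictly positive, consistent with the lemma.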
\subsection{Another technical result}
\label{sec:peq2}
\noindent\\
\vspace{-2mm}
\begin{lem}\label{lem:etapsi}
Let $p\le 13$ and let $\eta\in\mathbf{P}_p([-1,1])$ be such that $\eta(-1)=0$ and $\eta$ is not the null function.
Then,
$J(S(\eta);\eta) > 0.$
\end{lem}
{\bf Proof.} We make use of the Legendre polynomials
\[
L_0(x)=1,\ L_1(x)=x,\ (m+1)L_{m+1}(x)=(2m+1)\,x\,L_m(x) - mL_{m-1}(x),\ m\ge 1.
\]
Let us recall that for any $m\ge 0$,
\[
L_m(1)=1,\ L_m(-1)=(-1)^m, \
\int_{-1}^1 L_m(x)\,L_{m'}(x)\,dx = \delta_{m\,m'}\displaystyle\frac{2}{2m+1}.
\]
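The recurrence and the values above are easy to check numerically (a verification sketch, not part of the proof):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_recurrence(m, x):
    """Evaluate L_m at the points x using the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    Lprev, Lcur = np.ones_like(x), x.copy()   # L_0 and L_1
    if m == 0:
        return Lprev
    for k in range(1, m):
        Lprev, Lcur = Lcur, ((2 * k + 1) * x * Lcur - k * Lprev) / (k + 1)
    return Lcur

x, w = leg.leggauss(40)                       # exact for polynomials of degree <= 79
ends = np.array([-1.0, 1.0])
max_end_err = 0.0
max_orth_err = 0.0
for m in range(8):
    Lm = legendre_recurrence(m, x)
    vals = legendre_recurrence(m, ends)
    max_end_err = max(max_end_err,
                      abs(vals[1] - 1.0), abs(vals[0] - (-1.0) ** m))
    for mp in range(8):
        ip = np.sum(w * Lm * legendre_recurrence(mp, x))
        expected = 2.0 / (2 * m + 1) if m == mp else 0.0
        max_orth_err = max(max_orth_err, abs(ip - expected))
```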
The polynomial $\eta$ is decomposed on the Legendre polynomials
\[
\eta=\sum_{m=1}^p \eta_m(L_m+L_{m-1}),
\]
and $\psi=S(\eta)$ is sought in the form
\[
\psi=\sum_{m=0}^{p-1} \psi_m L_m
\]
so that it maximizes the quantity $J(\psi;\eta)$ under the constraint
$\eta(1)=\psi(1)$. This corresponds to the min-max problem
\[
\max_{\psi\in\mathbf{P}_{p-1}([-1,1])}\min_{\mu\in{\mathbb R}} {\cal L}(\psi,\mu)
\]
where
\[
{\cal L}(\psi,\mu) = J(\psi;\eta)-\mu (\psi(1)-\eta(1)).
\]
We have to prove that the optimal value is positive. The optimality
relations with respect to
$\psi$ give
\begin{eqnarray*}
\frac{3}{2}(\eta_m+\eta_{m+1})-\frac{1}{2}\psi_m=\mu \frac{2m+1}{2},\
1\le m\le p-1,\quad
\frac{3}{2}\eta_1-\frac{1}{2}\psi_0=\frac{\mu}{2}.
\end{eqnarray*}
Denoting $\displaystyle R_{p-1}=\sum_{m=0}^{p-1}(2m+1)L_m$, with
$\|R_{p-1}\|_{L^2(]-1,1[)}^2=2p^2$, we get
\begin{equation}\label{eq:psi}
\psi = 3\eta -3\eta_p\,L_p - \mu R_{p-1}.
\end{equation}
Hence, the dual problem writes
\[
\min_{\mu\in{\mathbb R}} G(\mu;\eta),
\]
where
$ G(\mu;\eta) := J(3\eta -3\eta_p\,L_p - \mu
R_{p-1};\eta)-\mu(\psi(1)-\eta(1)) $
and $\psi$ satisfies (\ref{eq:psi}).
After some calculations, $G(\mu;\eta)$ appears to be a second-order polynomial in $\mu$:
\begin{equation}\label{eq:G}
G(\mu;\eta) = \frac{p^2}{2} \mu^2 -\mu (2\eta(1)-3\eta_p)
+(2\|\eta\|_{L^2(]-1,1[)}^2-\frac{9}{2}\frac{\eta_p^2}{2p+1});
\end{equation}
its leading coefficient
is positive and its discriminant is proven to be negative in the next lemma, from which we derive that
$\min_\mu G(\mu;\eta)$ is positive and the proof is complete. $\qquad \Box$
\begin{lem}\label{lem:Delta}
For $p\le 13$, the discriminant of (\ref{eq:G}):
\begin{equation*}\label{eq:discr}
\Delta(\eta) := (2\eta(1)-3\eta_p)^2 +
p^2 (-4\|\eta\|_{L^2(]-1,1[)}^2 + 9\frac{\eta_p^2}{2p+1})
\end{equation*}
is negative if $\eta\in\mathbf{P}_p([-1,1])$, $\eta(-1)=0$ and $\eta$ is not
the null function.
\end{lem}
\noindent\\[3mm]
{\bf Proof of Lemma~\ref{lem:Delta} in the case $\pmb{p=2}$} (the proof for $3\le p\le 13$
is given in Appendix~\ref{appendix:A}).
For $p=2$, a direct computation shows that
\[
\Delta(\eta) =-\frac{80}{3} \eta_1^2-\frac{40}{3} \eta_2\eta_1-\frac{133}{15}\eta_2^2.
\]
The discriminant of this quadratic form in $(\eta_1,\eta_2)$ is $(40/3)^2 - 4\,(80/3)(133/15) = -768$. It is negative, hence $\Delta(\eta)<0$ for $\eta \neq 0$, and the lemma is proven in this case.
$\qquad \Box$
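The $p=2$ computation can be reproduced symbolically (a verification sketch; it assumes the availability of \texttt{sympy}):

```python
import sympy as sp

eta1, eta2, x = sp.symbols('eta1 eta2 x')
Leg = [sp.legendre(m, x) for m in range(3)]
p = 2
eta = eta1 * (Leg[1] + Leg[0]) + eta2 * (Leg[2] + Leg[1])   # eta(-1) = 0
norm2 = sp.integrate(eta**2, (x, -1, 1))
eta_p = eta2                                                # leading coefficient
Delta = sp.expand((2 * eta.subs(x, 1) - 3 * eta_p) ** 2
                  + p**2 * (-4 * norm2 + 9 * eta_p**2 / sp.Integer(2 * p + 1)))
# read off the quadratic form a*eta1^2 + b*eta1*eta2 + c*eta2^2
a = Delta.coeff(eta1, 2)
b = Delta.coeff(eta1, 1).coeff(eta2, 1)
cc = Delta.coeff(eta2, 2)
disc = sp.simplify(b**2 - 4 * a * cc)
```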
\subsection{Proof of Lemma~\ref{lem_1}}
Using the definition of $\pi_{k,\ell}$, (\ref{eq:defpi}), we derive
\begin{eqnarray}
\int_{\Gamma^{k,\ell}}(\eta_{\ell,k}+\pi_{k,\ell}(\eta_{\ell,k}))\psi_{\ell,k}
&=&\int_{\Gamma^{k,\ell}} \eta_{\ell,k}\psi_{\ell,k} + \int_{\Gamma^{k,\ell}}
(\pi_{k,\ell}(\eta_{\ell,k}))^2
+ \int_{\Gamma^{k,\ell}}
\pi_{k,\ell}(\eta_{\ell,k})(\psi_{\ell,k} - \eta_{\ell,k}). \nonumber
\end{eqnarray}
Then, using the relation
$\pi_{k,\ell}(\eta_{\ell,k})(\psi_{\ell,k} - \eta_{\ell,k})
\ge - (\pi_{k,\ell}(\eta_{\ell,k}))^2-{1 \over
4}(\psi_{\ell,k} - \eta_{\ell,k})^2$
leads~to
\begin{eqnarray}
\int_{\Gamma^{k,\ell}}(\eta_{\ell,k}+\pi_{k,\ell}(\eta_{\ell,k}))\psi_{\ell,k}
\ge \int_{\Gamma^{k,\ell}} \eta_{\ell,k}\psi_{\ell,k} -{1 \over 4}
\int_{\Gamma^{k,\ell}} (\psi_{\ell,k} - \eta_{\ell,k})^2. \nonumber
\end{eqnarray}
Recall that we have denoted by
$x_0^{\ell,k},
x_1^{\ell,k},...,x_{N-1}^{\ell,k}, x_N^{\ell,k}$ the vertices of the
triangulation of $\Gamma^{\ell,k}$ that belong to $\Gamma^{\ell,k}$.
By Lemma~\ref{lem:etapsi_base} and an easy
scaling argument,
there exist $c, C >0$, $\psi_1 \in \mathbf{P}_{p-1}([x_0^{\ell,k},x_1^{\ell,k}])$, and
$\psi_N \in \mathbf{P}_{p-1}([x_{N-1}^{\ell,k},x_N^{\ell,k}])$, such that
\[
\| \psi_1\|_{L^2(x_0^{\ell,k},x_1^{\ell,k})} + \|
\psi_N\|_{L^2(x_{N-1}^{\ell,k},x_N^{\ell,k})} \le C (\|
\eta_{\ell,k}\|_{L^2(x_0^{\ell,k},x_1^{\ell,k})} +
\|
\eta_{\ell,k}\|_{L^2(x_{N-1}^{\ell,k},x_N^{\ell,k})}),
\]
$\psi_1(x_1^{\ell,k})=\eta_{\ell,k}(x_1^{\ell,k})$,
$\psi_N(x_{N-1}^{\ell,k})=\eta_{\ell,k}(x_{N-1}^{\ell,k})$ and
\[
\int_{x_0^{\ell,k}}^{x_1^{\ell,k}} (\eta_{\ell,k} \psi_1 -{1 \over 4}
(\psi_1 - \eta_{\ell,k})^2)
+ \int_{x_{N-1}^{\ell,k}}^{x_N^{\ell,k}} (\eta_{\ell,k} \psi_N -{1 \over 4}
(\psi_N - \eta_{\ell,k})^2)
\ge c (\int_{x_0^{\ell,k}}^{x_1^{\ell,k}} \eta_{\ell,k}^2+
\int_{x_{N-1}^{\ell,k}}^{x_N^{\ell,k}}
\eta_{\ell,k}^2 ).
\]
Taking
$\psi_{\ell,k}$ in
$\tilde W_h^{\ell,k}$ as follows
\begin{eqnarray}
\psi_{\ell,k}=\left\{
\begin{array}{l}
\psi_1
\hbox{ over }
]x_0^{\ell,k}, x_{1}^{\ell,k}[\\
\eta_{\ell,k}\hbox{ over }
]x_1^{\ell,k}, x_{N-1}^{\ell,k}[\\
\psi_N
\hbox{ over }
]x_{N-1}^{\ell,k}, x_{N}^{\ell,k}[\\
\end{array}\right.\nonumber
\end{eqnarray}
proves Lemma~\ref{lem_1} with $c_1=\min(1,c)$ and $c_2=\max(1,C)$.$\qquad \Box$
\subsection{Proof of Theorem~\ref{best-fit}}
We follow the same steps as in the proof of Theorem~2 in~\cite{JMN10}.
Let $u_{kh}^1$ be the unique element of $X_h^k$ defined as follows:
\begin{itemize}
\item
$(u_{kh}^1)_{|\partial \Omega^k}$ is the best fit of $u_k$ over
$\partial \Omega^k$ in ${\cal Y}_h^{k,\ell}$,
\item
$u_{kh}^1$ coincides with the interpolant of $u_k$ at the inner nodes
of the triangulation (in $\Omega^k$).
\end{itemize}
Then, it satisfies, for $0\le m\le p-1$,
\begin{eqnarray}\label{bestfit_u.2}
\| u_{kh}^1 - u_k\|_{L^2(\partial \Omega^k)}
\le c h^{{3 \over 2}+m} \|u_k\|_{H^{2+m}(\Omega^k)},
\end{eqnarray}
from which we deduce that
\begin{eqnarray}
\label{bestfit_u}
\| u_{kh}^1 - u_k\|_{L^2(\Omega^k)} + h\| u_{kh}^1 - u_k\|_{H^1(\Omega^k)}
\le c h^{2+m} \|u_k\|_{H^{2+m}(\Omega^k)},
\end{eqnarray}
and, from an Aubin--Nitsche estimate,
\begin{eqnarray}
\label{bestfit_u.3}
\| u_{kh}^1 - u_k\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le c h^{2+m} \|u_k\|_{H^{2+m}(\Omega^k)}.
\end{eqnarray}
We introduce separately the best fit $p_{k \ell h}^1$ of
$p_{k,\ell}=\frac{\partial u}{\partial {\bf n}_k}$
over each
$\Gamma^{k,\ell}$ in $\tilde W_h^{k,\ell}$. Then we have, for $0\le m\le p-1$
\begin{eqnarray}
\label{bestfit_p}
\| p_{k \ell h}^1 - p_{k,\ell} \|_{L^2(\Gamma^{k,\ell})}
&\le& c h^{{1 \over 2}+m} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})},
\\
\label{bestfit_p.2}
\| p_{k \ell h}^1 - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
&\le& c h^{1+m} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.
\end{eqnarray}
However, there is little chance that $({\underline u}_h^1,{\underline p}_h^1) \in
\left(\prod_{k=1}^K X_h^k\right)\times \left(\prod_{k=1}^K \tilde
W_h^{k}\right)$ satisfies the
coupling condition (\ref{disc.const}). For (\ref{disc.const}) to hold, what is missing are elements
$\epsilon_{k,\ell}$ and $\eta_{\ell,k}$ such that
\begin{eqnarray}
\label{inteps_eta1}
\int_{\Gamma^{k,\ell}}(p_{k\ell h}^1+\epsilon_{k,\ell}+\alpha u_{kh}^1)
\psi_{k,\ell}
&=& \int_{\Gamma^{k,\ell}}(-p_{\ell kh}^1+\alpha \eta_{\ell,k} +\alpha u_{\ell
h}^1) \psi_{k,\ell}
,\ \forall \psi_{k,\ell} \in \tilde W_h^{k,\ell}
\hspace{10mm}\\
\label{inteps_eta2}
\int_{\Gamma^{k,\ell}}(p_{\ell kh}^1+\alpha \eta_{\ell,k} +\alpha u_{\ell
h}^1) \psi_{\ell,k} &=&
\int_{\Gamma^{k,\ell}}(-p_{k\ell h}^1-\epsilon_{k,\ell}+\alpha u_{kh}^1)
\psi_{\ell,k}
,\ \forall \psi_{\ell,k} \in \tilde W_h^{\ell,k}.\hspace{10mm}
\end{eqnarray}
In order to correct this, without polluting
(\ref{bestfit_u.2})-(\ref{bestfit_p.2}), for each pair $(k,\ell)$ we
choose one side, e.g. the one with the smaller index (hereafter we shall
assume that each pair $(k,\ell)$ is ordered so that $k < \ell$).
With this choice, we introduce $\epsilon_{k,\ell} \in
\tilde W_h^{k,\ell}$, $\eta_{\ell,k} \in {\cal Y}_h^{\ell,k} \cap
H_0^1(\Gamma^{k,\ell})$ such that the element $(\tilde{{\underline u}}_h,\tilde{{\underline p}}_h)$,
defined by
\begin{eqnarray}\label{defptilde}
\tilde{u}_{\ell h}=u^1_{\ell h}+\sum_{k<\ell} {\cal R}_{\ell,k}(\eta_{\ell,k}),
\quad
\tilde{p}_{k \ell h}=p^1_{k \ell h}+ \epsilon_{k,\ell} \quad (\mbox{for } k
< \ell),
\end{eqnarray}
satisfies (\ref{disc.const}). Here ${\cal R}_{\ell,k}$ is a discrete lifting operator
as in~\cite{JMN10} (see also~\cite{Widlund,BG}):
${\cal R}_{\ell,k}(w)$ vanishes over
$\partial\Omega^{\ell}\setminus\Gamma^{k,\ell}$ and satisfies, with a constant $c$
independent of $h$,
\begin{eqnarray} \label{lifting}
\forall w \in {\cal Y}_h^{\ell,k} \cap H_0^1(\Gamma^{k,\ell}),
\ ({\cal R}_{\ell,k}(w))_{| \Gamma_{k,\ell}}=w, \quad \| {\cal R}_{\ell,k}(w) \|_{H^1(\Omega^{\ell})}
\le c \| w \|_{H^{1 \over 2}_{00}(\Gamma^{k,\ell})}.
\end{eqnarray}
The set of equations
(\ref{inteps_eta1})-(\ref{inteps_eta2}) results in a square system of linear algebraic
equations for $\epsilon_{k,\ell}$ and
$\eta_{\ell,k}$ that can be written as follows
\begin{eqnarray}\label{disc.const_2}
\int_{\Gamma^{k,\ell}}(\epsilon_{k,\ell}-\alpha \eta_{\ell,k})\psi_{k,\ell}
&=& \int_{\Gamma^{k,\ell}} e_1 \psi_{k,\ell}
,\ \forall \psi_{k,\ell} \in \tilde W_h^{k,\ell}\\
\label{disc.const_3}
\int_{\Gamma^{k,\ell}}(\epsilon_{k,\ell}+\alpha \eta_{\ell,k})\psi_{\ell,k}
&=& \int_{\Gamma^{k,\ell}} e_2 \psi_{\ell,k}
,\ \forall \psi_{\ell,k} \in \tilde W_h^{\ell,k},
\end{eqnarray}
with
\begin{eqnarray}
\label{e1-e2}
e_1=-p_{k\ell h}^1-p_{\ell kh}^1+\alpha(u_{\ell h}^1-u_{kh}^1),\quad
e_2=-p_{k\ell h}^1-p_{\ell kh}^1+\alpha(u_{kh}^1-u_{\ell h}^1).\quad
\end{eqnarray}
In~\cite{JMN10}, it is shown that the linear system (\ref{disc.const_2})-(\ref{disc.const_3}) is well posed.
We now estimate $\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}$ and $\| \tilde{u}_{\ell h} -
u_{\ell}\|_{H^1(\Omega^{\ell})}$, by first estimating $\|\eta_{\ell,k}
\|_{L^2(\Gamma^{k,\ell})}$:
from (\ref{disc.const_2}) and (\ref{disc.const_3}), we get
\begin{eqnarray}\label{epskl-etalk}
\epsilon_{k,\ell}=
\pi_{k,\ell}(\alpha \eta_{\ell,k} +e_1), \quad
\alpha \eta_{\ell,k}=
\pi_{\ell,k}(-\epsilon_{k,\ell} +e_2).
\end{eqnarray}
Substituting the first equation of \eqref{epskl-etalk} into \eqref{disc.const_2}-\eqref{disc.const_3}, we obtain
\begin{eqnarray}\label{systeme.eta}
\int_{\Gamma^{k,\ell}}(\eta_{\ell,k}+\pi_{k,\ell}(\eta_{\ell,k}))\psi_{\ell,k}
= {1 \over \alpha} \int_{\Gamma^{k,\ell}} (e_2 -
\pi_{k,\ell}(e_1))\psi_{\ell,k}
,\ \forall \psi_{\ell,k} \in \tilde W_h^{\ell,k}.
\end{eqnarray}
Then, from (\ref{injectif}) and (\ref{systeme.eta}) we get
\begin{eqnarray}\label{estim.eta}
c_1\|\eta_{\ell,k} \|_{L^2(\Gamma^{k,\ell})}^2
\le {1 \over \alpha} \|e_2 - \pi_{k,\ell}(e_1)\|_{L^2(\Gamma^{k,\ell})}
\|\psi_{\ell,k}\|_{L^2(\Gamma^{k,\ell})},
\end{eqnarray}
and using (\ref{stable}) in (\ref{estim.eta}) yields
\begin{eqnarray}\label{estim.eta2}
\|\eta_{\ell,k} \|_{L^2(\Gamma^{k,\ell})}
\le {c_2 \over \alpha c_1} \|e_2 - \pi_{k,\ell}(e_1)\|_{L^2(\Gamma^{k,\ell})}
\le {c_2 \over \alpha c_1} (\| e_2 \|_{L^2(\Gamma^{k,\ell})}
+\| e_1 \|_{L^2(\Gamma^{k,\ell})}).\qquad \
\end{eqnarray}
Now, from \eqref{e1-e2}, for $i=1,2$
\begin{eqnarray}
\|e_i \|_{L^2(\Gamma^{k,\ell})}
\le \|p_{k\ell h}^1+p_{\ell kh}^1\|_{L^2(\Gamma^{k,\ell})}
+\alpha \|u_{\ell h}^1-u_{kh}^1 \|_{L^2(\Gamma^{k,\ell})},
\nonumber
\end{eqnarray}
and recalling that $p_{k,\ell}=\frac{\partial u_k}{\partial {\bf n}_k}=
-\frac{\partial u_\ell}{\partial {\bf n}_{\ell}}=-p_{\ell,k}$ over each
$\Gamma^{k,\ell}$, we obtain
\begin{eqnarray}
\|p_{k\ell h}^1+p_{\ell kh}^1\|_{L^2(\Gamma^{k,\ell})}
&\le& \|p_{k\ell h}^1-p_{k,\ell}\|_{L^2(\Gamma^{k,\ell})}
+\|p_{\ell k h}^1-p_{\ell,k}\|_{L^2(\Gamma^{k,\ell})},
\nonumber\\
\|u_{\ell h}^1-u_{kh}^1 \|_{L^2(\Gamma^{k,\ell})}
&\le& \|u_{k h}^1-u_k \|_{L^2(\Gamma^{k,\ell})}
+\|u_{\ell h}^1-u_{\ell} \|_{L^2(\Gamma^{k,\ell})},
\nonumber
\end{eqnarray}
so that, using (\ref{bestfit_u.2}) and (\ref{bestfit_p}), we derive
for $i=1,2$ and $0\le m\le p-1$
\begin{eqnarray}\label{estim.ei}
\| e_i \|_{L^2(\Gamma^{k,\ell})}
\le c \alpha h^{{3 \over 2}+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
+ch^{{1 \over 2}+m}
\| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.\hspace{10mm}
\end{eqnarray}
Thus, (\ref{estim.eta2}) yields, for $0\le m \le p-1$,
\begin{eqnarray}\label{estim.eta3}
\|\eta_{\ell,k} \|_{L^2(\Gamma^{k,\ell})}
\le c h^{{3 \over 2}+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
+{ch^{{1 \over 2}+m} \over \alpha}
\| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.\hspace{10mm}
\end{eqnarray}
We can now evaluate $\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}$, using the second equation of (\ref{defptilde}):
\begin{eqnarray}\label{eval1}
\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le \| \epsilon_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+ \| p_{k \ell h}^1 - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}.
\end{eqnarray}
The term $\| p_{k \ell h}^1 - p_{k,\ell} \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}$ is estimated in (\ref{bestfit_p.2}), so
let us focus on the term $\| \epsilon_{k,\ell} \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}$. From (\ref{epskl-etalk}) we have,
\begin{eqnarray}\label{majoeps}
\| \epsilon_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le \alpha\|\eta_{\ell,k} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+\|e_1 \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+ \|(Id-\pi_{k,\ell})(\alpha \eta_{\ell,k} +e_1) \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}.\hspace{10mm}
\end{eqnarray}
To evaluate $\|e_1 \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}$ we proceed
as for $\|e_1 \|_{L^2(\Gamma^{k,\ell})}$ and from (\ref{bestfit_u.3})
and (\ref{bestfit_p.2}) we have, for $i=1,2$, for $0\le m \le p-1$,
\begin{eqnarray}\label{estim.ei.2}
\|e_i \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le c \alpha h^{2+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
+ch^{1+m} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.\nonumber
\end{eqnarray}
The third term on the right-hand side of (\ref{majoeps}) satisfies
\begin{eqnarray}
\|(Id-\pi_{k,\ell})(\alpha \eta_{\ell,k} +e_1) \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})} \le c \sqrt{h} \|\alpha \eta_{\ell,k} +e_1
\|_{L^2(\Gamma^{k,\ell})}. \nonumber
\end{eqnarray}
Then, using (\ref{estim.eta3}) and (\ref{estim.ei}) yields, for $0\le m \le p-1$,
\begin{eqnarray}
\|(Id-\pi_{k,\ell})(\alpha \eta_{\ell,k} +e_1) \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}
&\le& c\alpha h^{2+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})\nonumber\\&
&+ \ c h^{1+m} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.
\nonumber
\end{eqnarray}
In order to estimate the term $\|\eta_{\ell,k} \|_{H^{-{1 \over
2}}(\Gamma^{k,\ell})}$ in (\ref{majoeps}), we
use (\ref{systeme.eta}) and then the symmetry of the operator $\pi_{k,\ell}$:
\begin{eqnarray}
2\int_{\Gamma^{k,\ell}} \eta_{\ell,k} \psi_{\ell,k} = \int_{\Gamma^{k,\ell}}
(\psi_{\ell,k}-\pi_{k,\ell}(\psi_{\ell,k})) \eta_{\ell,k}
+{1 \over \alpha} \int_{\Gamma^{k,\ell}} (e_2 -
\pi_{k,\ell}(e_1)) \psi_{\ell,k}.
\nonumber
\end{eqnarray}
Then, we have
\begin{eqnarray}
|\int_{\Gamma^{k,\ell}} \eta_{\ell,k} \psi_{\ell,k}|
\le c\sqrt{h}\|\eta_{\ell,k}\|_{L^2(\Gamma^{k,\ell})}
\|\psi_{\ell,k}\|_{H^{1 \over 2}(\Gamma^{k,\ell})}
+{1\over \alpha}\|e_2 - \pi_{k,\ell}(e_1)\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\|\psi_{\ell,k}\|_{H^{1 \over 2}(\Gamma^{k,\ell})}
\nonumber
\end{eqnarray}
from which we deduce that
\begin{eqnarray}\label{estim.etamoins1demi}
\|\eta_{\ell,k}\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le c \sqrt{h}\|\eta_{\ell,k}\|_{L^2(\Gamma^{k,\ell})}+ {c\over \alpha}
\|e_2 - \pi_{k,\ell}(e_1)\|_{H^{-{1
\over 2}}(\Gamma^{k,\ell})}.
\end{eqnarray}
Then, using (\ref{estim.eta3}) and the fact that
\begin{eqnarray}\label{estim.e2pie1}
\|e_2 - \pi_{k,\ell}(e_1)\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le \|e_2\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+\|e_1\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+\|e_1- \pi_{k,\ell}(e_1)\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})} \nonumber\\
\le \|e_2\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+\|e_1\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
+c\sqrt{h} \|e_1\|_{L^2(\Gamma^{k,\ell})} \hspace{10mm}
\end{eqnarray}
with (\ref{estim.ei}) and (\ref{estim.ei.2}) yields, for $0\le m \le p-1$,
\begin{eqnarray}
\|\eta_{\ell,k}\|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le c h^{2+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
+{ch^{1+m} \over \alpha}
\| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}. \nonumber
\end{eqnarray}
Using the previous inequality in (\ref{majoeps}), (\ref{eval1}) yields, for $0\le m \le p-1$,
\begin{eqnarray}\label{estim.p}
\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le c\alpha h^{2+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
+c h^{1+m} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})}.\hspace{10mm}
\end{eqnarray}
Let us now estimate $\| \tilde{u}_{\ell h} - u_{\ell}\|_{H^1(\Omega^{\ell})}$:
\begin{eqnarray}\label{estim.u2}
\| \tilde{u}_{\ell h} - u_{\ell}\|_{H^1(\Omega^{\ell})}
&\le& \| u_{\ell h}^1 - u_{\ell}\|_{H^1(\Omega^{\ell})}
+ \sum_{k < \ell} \| {\cal R}_{\ell,k}(\eta_{\ell,k}) \|_{H^1(\Omega^{\ell})}
\end{eqnarray}
and from (\ref{lifting}) and an inverse inequality
\begin{eqnarray}
\| {\cal R}_{\ell,k}(\eta_{\ell,k}) \|_{H^1(\Omega^{\ell})}
\le c h^{- {1 \over 2}} \|\eta_{\ell,k} \|_{L^2(\Gamma^{k,\ell})}. \nonumber
\end{eqnarray}
Hence, from (\ref{estim.eta3}) we have, for $0\le m \le p-1$,
\begin{eqnarray}\label{estim.R}
\| {\cal R}_{\ell,k}(\eta_{\ell,k}) \|_{H^1(\Omega^{\ell})}
\le
ch^{1+m}(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})
+{c h^m\over \alpha} \| p_{k,\ell} \|_{H^{{1 \over 2}+m}(\Gamma^{k,\ell})},
\nonumber
\end{eqnarray}
and (\ref{estim.u2}) yields, for $0\le m \le p-1$,
\begin{eqnarray}\label{estim.u3}
\| \tilde{u}_{\ell h} - u_{\ell}\|_{H^1(\Omega^{\ell})}
\le ch^{1+m} \|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})}
+ ch^{1+m}\sum_{k < \ell} \|u_k\|_{H^{2+m}(\Omega^k)}
\nonumber\\
+{c h^m \over \alpha} \sum_{k < \ell} \| p_{k,\ell} \|_{H^{{1 \over
2}+m}(\Gamma^{k,\ell})}.
\end{eqnarray}
This ends the proof of Theorem~\ref{best-fit}.$\qquad \Box$
\subsection{Proof of Theorem~\ref{best-fit.2}} The proof is the same as
for Theorem~\ref{best-fit}, except that the relation (\ref{bestfit_p})
for $0\le m \le p-1$
is changed into
\begin{eqnarray}\label{eq:logh}
\| p_{k \ell h}^1 - p_{k,\ell} \|_{L^2(\Gamma^{k,\ell})}
&\le& c h^{{3 \over 2}+m}\ (\log h)^{\beta(m)} \| p_{k,\ell} \|_{H^{{3 \over
2}+m}(\Gamma^{k,\ell})}.
\end{eqnarray}
The proof of \eqref{eq:logh} is given in Appendix~\ref{appendix:B}.
Therefore, (\ref{bestfit_p.2}) is changed into
\begin{eqnarray}
\| p_{k \ell h}^1 - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
&\le& c h^{2+m}\ (\log h)^{\beta(m)}\| p_{k,\ell} \|_{H^{{3 \over
2}+m}(\Gamma^{k,\ell})},
\nonumber
\end{eqnarray}
and the inequalities (\ref{estim.p}) and (\ref{estim.u3}) are changed respectively into
\begin{eqnarray}
\| \tilde{p}_{k \ell h} - p_{k,\ell} \|_{H^{-{1 \over 2}}(\Gamma^{k,\ell})}
\le c\alpha h^{2+m}
(\|u_k\|_{H^{2+m}(\Omega^k)}+\|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})})\nonumber \\
+c h^{2+m} \ (\log h)^{\beta(m)} \| p_{k,\ell} \|_{H^{{3\over
2}+m}(\Gamma^{k,\ell})}, \nonumber\\[2mm]
\| \tilde{u}_{\ell h} - u_{\ell}\|_{H^1(\Omega^{\ell})}
\le c h^{1+m} \|u_{\ell}\|_{H^{2+m}(\Omega^{\ell})}
+ c h^{1+m} \sum_{k < \ell} \|u_k\|_{H^{2+m}(\Omega^k)}\nonumber \\[-1mm]
+{c h^{1+m} \over \alpha} \ (\log h)^{\beta(m)} \sum_{k < \ell} \| p_{k,\ell}
\|_{H^{{3 \over 2}+m}(\Gamma^{k,\ell})}. \qquad \Box
\nonumber
\end{eqnarray}
\subsection{Error Estimates}
Thanks to (\ref{estimuuh}), we have the following error estimates:
\begin{theo}
\label{error-estimate}
Assume that the solution $u$ of
(\ref{initial_BVP1})-(\ref{initial_BVP2}) is in $H^2(\Omega)\cap H^1_0(\Omega)$, and $u_k=u_{|\Omega^k}\in
H^{2+m}(\Omega^k)$, with
$p-1\ge m \ge 0$,
and let
$p_{k,\ell}=\frac{\partial u}{\partial {\bf n}_k}$
over each $\Gamma^{k,\ell}$.
Then, there exists a constant $c$ independent of $h$ and $\alpha$
such that
\begin{eqnarray}
\| {\underline u}_h -{\underline u}\|_* + \| {\underline p}_h - {\underline p} \|_{-{1 \over 2},*}
\le c(\alpha h^{2+m}+h^{1+m} )
\sum_{k=1}^K \| {\underline u}\|_{H^{2+m}(\Omega^k)} \nonumber\\[-1mm]
+ \ c ({h^m\over \alpha} + h^{1+m})\sum_{k=1}^K \sum_{\ell}\|
p_{k,\ell} \|_{H^{{1
\over 2}+m}(\Gamma^{k,\ell})}.
\nonumber
\end{eqnarray}
\end{theo}
\begin{theo}
\label{error-estimate2}
Assume that the solution $u$ of
(\ref{initial_BVP1})-(\ref{initial_BVP2}) is in $H^2(\Omega)\cap H^1_0(\Omega)$, $u_k=u_{|\Omega^k}\in
H^{2+m}(\Omega^k)$,
and
$p_{k,\ell}=\frac{\partial u}{\partial {\bf n}_k}$
is in $H^{{3 \over 2}+m}(\Gamma_{k,\ell})$ with $p-1\ge m \ge 0$.
Then there exists a constant $c$ independent of $h$ and $\alpha$
such that
\begin{eqnarray}
\| {\underline u}_h -{\underline u}\|_* + \| {\underline p}_h - {\underline p}\|_{-{1 \over 2},*}
\le c(\alpha h^{2+m}+h^{1+m} )
\sum_{k=1}^K \| {\underline u}\|_{H^{2+m}(\Omega^k)} \nonumber\\
+ \ c ({h^{1+m}\over \alpha} + h^{2+m}) (\log h)^{\beta(m)}
\sum_{k=1}^K \sum_{\ell}\|
p_{k,\ell}
\|_{H^{{3
\over 2}+m}(\Gamma^{k,\ell})}
\nonumber
\end{eqnarray}
with $\beta(m)=0$ if
$m\le p-2$ and $\beta(m)=1$ if $m=p-1$.
\end{theo}
\section{Numerical results}\label{sec:numresults}
We consider the initial problem, with exact solution $u(x,y)=x^4y^4+xy\cos(10xy)$.
The domain is the unit square $\Omega=(0,1) \times (0,1)$,
decomposed into non-overlapping subdomains with meshes generated independently.
In Sections~\ref{subsec:errP2} and \ref{subsec:errP123}, in order to observe numerically the error estimates for the
discrete problem (\ref{pbdiscret}), one needs to compute the converged solution of the discrete algorithm
(\ref{algo_discret})-(\ref{CI_discret}), regardless of the algorithm used to compute it. We therefore take
the solution at convergence of the algorithm (\ref{algo_discret})-(\ref{CI_discret}), with an extremely small
stopping criterion on the residual (i.e. on the jumps of the interface conditions), e.g. smaller than $10^{-14}$.
For all the other simulations, where we are interested in $u_h^n$ rather than $u_h$, the iterations are
stopped when the residual reaches $10^{-2}$ times the target $H^1$ error.
\subsection{Choice of the Robin parameter}\label{Sec:alpha}
In our simulations the Robin parameter $\alpha$ is either an arbitrary constant
or is obtained by minimizing the convergence factor (depending on the
mesh size in that case, see~\cite{JMN10}). In the conforming two subdomains case, with
constant mesh size $h$ and an interface of length $L$, the optimal theoretical value of $\alpha$
which minimizes the convergence factor at the continuous level is (see~\cite{Gander06}):
\vspace{-5mm}
\begin{equation}\label{eq.alphaopt}
\alpha_{\text{opt}}(L,h)=[((\frac{\pi}{L})^2+1)((\frac{\pi}{h})^2+1)]^{\frac{1}{4}}.
\end{equation}
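As an illustration (a numerical sketch with arbitrarily chosen values $L=1$ and $h=0.05$, and assuming the Fourier symbol $\sqrt{k^2+1}$, which is consistent with \eqref{eq.alphaopt}), the value $\alpha_{\text{opt}}(L,h)$ can be recovered by directly minimizing the worst-case convergence factor $\left|\frac{\sqrt{k^2+1}-\alpha}{\sqrt{k^2+1}+\alpha}\right|$ over the frequency range $[\pi/L,\pi/h]$, in the spirit of~\cite{Gander06}:

```python
import numpy as np

L, h = 1.0, 0.05                       # assumed interface length and mesh size
k = np.array([np.pi / L, np.pi / h])   # extreme resolvable frequencies
s = np.sqrt(k ** 2 + 1.0)              # assumed Fourier symbol sqrt(k^2 + 1)

def alpha_opt(L, h):
    # Formula (eq.alphaopt) from the text
    return (((np.pi / L) ** 2 + 1.0) * ((np.pi / h) ** 2 + 1.0)) ** 0.25

# The convergence factor |(s - alpha)/(s + alpha)| is maximal at the
# endpoints of the frequency range, so a grid search over alpha suffices
alphas = np.linspace(1.0, 100.0, 20001)
rho = np.max(np.abs((s[None, :] - alphas[:, None])
                    / (s[None, :] + alphas[:, None])), axis=1)
alpha_num = alphas[np.argmin(rho)]
print(alpha_num, alpha_opt(L, h))      # the two values agree closely
```

The grid minimizer matches $\alpha_{\text{opt}}(L,h)\approx 14.39$ up to the grid resolution, reflecting the equioscillation of the convergence factor at the two extreme frequencies.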
Note that this optimal choice for $\alpha$ does not seem to provide an
optimal error estimate in Theorem \ref{error-estimate}.
Nevertheless, as illustrated in \cite{JMN10},
the regularity of the normal derivative of $u$ along the interfaces
most often falls within the scope of Theorem~\ref{error-estimate2},
which allows a wider range of choices for $\alpha$, compatible with the
above-mentioned optimal choice (as regards the algorithm).
In the non-conforming case, we consider the following values:
$\alpha_{\text{min}}=\alpha_{\text{opt}}(L,{h_{\text{min}} \over p})$,
$\alpha_{\text{mean}}=\alpha_{\text{opt}}(L,{h_{\text{mean}}\over p})$,
$\alpha_{\text{max}}=\alpha_{\text{opt}}(L,{h_{\text{max}}\over p})$,
where $h_{\text{min}}$, $h_{\text{mean}}$ and $h_{\text{max}}$ stand for the
smallest, mean and largest mesh size on the interface, respectively, and $p$ is the degree of the approximation.
\subsection{$H^1$ error between the continuous and discrete solutions for $\mathbf{P}_2$ finite elements}
\label{subsec:errP2}
In this part, we compare the relative $H^1$ error in the
non-conforming case to the error obtained on a uniform conforming
grid.
We define the relative $H^1$ error as follows:
let $u_k=u_{|\Omega^k}, \ 1 \le k \le K$
(where $u$ is the continuous solution), and let
$({\underline u}_h)_k=({\underline u}_h)_{|\Omega^k}$, where ${\underline u}_h$ is the solution of the
discrete problem (\ref{pbdiscret}). Now, let $N_x=\|u\|_*$ and let
$E_k=\|({\underline u}_h)_k-u_k\|_{H^1(\Omega^k)}$, $1 \le k \le K$. Let
$E=(\sum_{i=1}^K E_i^2)^{1/2}.$ The relative $H^1$ error is then
$E/N_x$.
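This computation amounts to the following elementary sketch (illustrative Python; the function name is hypothetical):

```python
import math

def relative_h1_error(subdomain_errors, norm_u):
    """Combine per-subdomain H^1 errors E_k into E / N_x,
    with E = (sum_k E_k^2)^(1/2) and N_x = ||u||_*."""
    E = math.sqrt(sum(e * e for e in subdomain_errors))
    return E / norm_u

# e.g. four subdomains with errors 3e-3, 4e-3, 0, 0 and ||u||_* = 1:
print(relative_h1_error([3e-3, 4e-3, 0.0, 0.0], 1.0))  # -> 0.005
```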
We consider four
initial meshes: the two uniform conforming meshes (meshes 1 and 4) of
Figure~\ref{fig:meshconf}, and the two non-conforming meshes (meshes 2 and 3)
of Figure~\ref{fig:meshnc}. In the non-conforming case, the unit square is
decomposed into four non-overlapping subdomains numbered as in
Figure~\ref{fig:meshnc} on the left.
Figure~\ref{fig:errorestim} shows the relative $H^1$ error versus the number of
refinements for these four meshes, and ${h^2 \over 2}$ (where $h$ is the mesh size)
versus the number of refinements, in logarithmic scale. At each refinement, the
mesh size is divided by two. The results of Figure~\ref{fig:errorestim} show
that the relative $H^1$ error tends to zero at the same rate as the
mesh size squared~(${h^2}$), and this fits with the theoretical error estimates of
Theorem~\ref{error-estimate2}.
On the other hand, we observe that the two
curves corresponding to the non-conforming meshes (mesh 2 and mesh 3)
are between the curves of the conforming meshes (mesh 1 and mesh
4). We observe that the relative $H^1$ error for mesh 3 is close to that for mesh 4
(i.e. the finer uniform conforming mesh), while the one corresponding to mesh 2 is nearly the same as
the error for mesh 1 (i.e. the coarser uniform conforming mesh). This is to be expected,
since mesh 3 is more refined than mesh 2 in subdomain $\Omega^4$, where the solution varies steeply.
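The observed rate can be quantified by fitting the slope of $\log E$ versus $\log h$ over the successive refinements; the sketch below (illustrative Python on synthetic data $E=h^2/2$, mimicking the dashed reference curve) recovers the order $2$:

```python
import numpy as np

# Mesh size divided by two at each refinement, error behaving like h^2 / 2
h = 0.1 / 2 ** np.arange(5)
E = 0.5 * h ** 2                      # synthetic data for illustration

# Least-squares slope of log E versus log h = estimated convergence order
order = np.polyfit(np.log(h), np.log(E), 1)[0]
print(order)                          # -> 2.0 (up to rounding)
```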
\begin{figure}[H]
\centering
\includegraphics[height=3.85cm]{mesh1_mail5.eps}
\includegraphics[height=3.85cm]{mesh4_mail5.eps}
\caption{Uniform conforming meshes: mesh 1 (on the left), and mesh 4 (on the right)}
\label{fig:meshconf}
\end{figure}
\begin{figure}[H]
\centering
\hspace{-0.85cm}
\includegraphics[height=3.85cm]{decomp.eps}\hspace{-10.mm}
\includegraphics[height=3.85cm]{mesh2_mail5.eps}\hspace{-10.0mm}
\includegraphics[height=3.85cm]{mesh3_mail5.eps}
\caption{Domain decomposition (on the left), and non-conforming meshes: mesh 2 (in the middle), and mesh 3
(on the right)}
\label{fig:meshnc}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[height=6.5cm]{errorP2_mail5.eps}
\caption{Relative $H^1$ error versus the number of refinements for
the initial meshes: mesh 1 (square line), mesh 2 (plus line),
mesh 3 (star line), and mesh 4 (diamond line). The dashed line is
${h^2 \over 2}$ (where $h$ is the mesh size)
versus the number of refinements, in logarithmic scale}
\label{fig:errorestim}
\end{figure}
\subsection{$H^1$ relative error for different degrees of the finite element approximation}
\label{subsec:errP123}
In this part we study the relative $H^1$ error between the continuous and discrete solutions
versus the mesh size, for $\mathbf{P}_1$, $\mathbf{P}_2$ and $\mathbf{P}_3$ finite elements.
\subsubsection{Decomposition into four subdomains}
\label{subsubsec:err4dom}
We consider a decomposition of the unit square into four non-overlapping subdomains numbered as in
Figure~\ref{fig:meshnc4dom} on the left.
For $\mathbf{P}_1$ and $\mathbf{P}_2$ discretizations, we consider the initial non-conforming meshes shown in
the middle of Figure~\ref{fig:meshnc4dom}, and for a $\mathbf{P}_3$ discretization, we consider the
initial non-conforming meshes shown on the right of Figure~\ref{fig:meshnc4dom}.
\begin{figure}[H]
\centering
\hspace{-0.85cm}
\includegraphics[height=3.85cm]{decomp_4dom.eps}\hspace{-10.mm}
\includegraphics[height=3.85cm]{mesh4dom.eps}\hspace{-10.mm}
\includegraphics[height=3.85cm]{mesh3_3.eps}
\caption{Domain decomposition (left), and non-conforming meshes (middle and right)}
\label{fig:meshnc4dom}
\end{figure}
Figure~\ref{fig:errorestim4dom} shows the relative $H^1$ error between the continuous and discrete solutions
versus the mesh size, on the left for $\mathbf{P}_1$ and $\mathbf{P}_2$ finite elements, and on the right for
$\mathbf{P}_3$ finite elements, in logarithmic scales. For $\mathbf{P}_1$ and $\mathbf{P}_2$ discretizations,
we start with the meshes in the middle of Figure~\ref{fig:meshnc4dom}
and divide the mesh size by 2 four times. In order to compute the error, the
non-conforming solutions are interpolated on a very fine
grid obtained by refining 5 times the initial mesh.
For $\mathbf{P}_3$ discretizations,
we start with the meshes on the right of Figure~\ref{fig:meshnc4dom}
and divide the mesh size by 3 three times. In order to compute the error,
the non-conforming solutions are interpolated on a very fine grid obtained by refining 4 times the initial meshes.
\vspace{-4mm}
\begin{figure}[H]
\hspace{5mm}
\includegraphics[height=7.3cm]{errP123_2.eps}\hspace{-1.5cm}
\vspace*{-20pt}
\caption{Relative $H^1$ error versus the mesh size for
the non-conforming case. Left: for $\mathbf{P}_1$ and $\mathbf{P}_2$ discretizations. Right: for $\mathbf{P}_3$ discretizations}
\label{fig:errorestim4dom}
\end{figure}
The results of Figure~\ref{fig:errorestim4dom} show
that if $p$ is the degree of the approximation, the relative $H^1$ error tends to zero at the same rate as
$h^p$, for $1\le p \le 3$, and this fits with the theoretical error estimates of
Theorem~\ref{error-estimate2}.
\subsubsection{Decomposition into twelve subdomains}
\label{subsubsec:err12dom}
We consider the initial problem with exact solution $u(x,y)=x^3y^2+\sin(xy)$.
The domain is $\Omega=(-3,3) \times (-2,2)$ and is decomposed into
twelve irregularly shaped subdomains as in Figure~\ref{fig:dec12dom}.
The subdomain meshes are generated in an
independent manner as in Figure~\ref{fig:meshnc12dom}. The finite element assemblies are done as in \cite{Cuvelier}.
\begin{figure}[H]
\centering
\psfrag{O1}{$ \Omega^1$}
\psfrag{O2}{$ \Omega^2$}
\psfrag{O3}{$ \Omega^3$}
\psfrag{O4}{$ \Omega^4$}
\psfrag{O5}{$ \Omega^5$}
\psfrag{O6}{$ \Omega^6$}
\psfrag{O7}{$ \Omega^7$}
\psfrag{O8}{$ \Omega^8$}
\psfrag{O9}{$ \Omega^9$}
\psfrag{O10}{$ \Omega^{10}$}
\psfrag{O11}{$ \Omega^{11}$}
\psfrag{O12}{$ \Omega^{12}$}
\includegraphics[height=4.cm]{decomprect.eps}
\caption{Domain decomposition into twelve non-overlapping subdomains}
\label{fig:dec12dom}
\end{figure}
\vspace*{-5mm}
\begin{figure}[H]
\centering
\includegraphics[height=8cm]{mesh12dom.eps}
\vspace*{-30pt}
\caption{Non-conforming meshes}
\label{fig:meshnc12dom}
\end{figure}
Figure~\ref{fig:errorestim12dom} shows the relative $H^1$ error between the continuous and discrete solutions
versus the mesh size, on the left for $\mathbf{P}_1$ and $\mathbf{P}_2$ finite elements, and on the right for
$\mathbf{P}_3$ finite elements, in logarithmic scales.
For $\mathbf{P}_1$ and $\mathbf{P}_2$ discretizations,
we start with the mesh of Figure~\ref{fig:meshnc12dom} and divide the mesh size by 2 four times.
In order to compute the error, the non-conforming solutions are interpolated on a very fine
grid obtained by refining 5 times the initial mesh.
For $\mathbf{P}_3$ discretizations, we start with the mesh of Figure~\ref{fig:meshnc12dom}
and divide the mesh size by 3 three times.
In order to compute the error, the non-conforming solutions are interpolated on a very fine grid obtained
by refining 4 times the initial meshes.
The results of Figure~\ref{fig:errorestim12dom} show
that the relative $H^1$ error tends to zero at the same rate as
$h^p$, for $1\le p \le 3$ where $p$ is the degree of the approximation. This corresponds to the theoretical
error estimates of Theorem~\ref{error-estimate2}.
\vspace*{-5mm}
\begin{figure}[H]
\hspace{-25mm}
\includegraphics[height=8.5cm]{errorP123_12domains.eps}\hspace{-1.5cm}
\vspace*{-20pt}
\caption{Relative $H^1$ error versus the mesh size for
the non-conforming case. Left: for $\mathbf{P}_1$ and $\mathbf{P}_2$ discretizations. Right: for $\mathbf{P}_3$ discretizations}
\label{fig:errorestim12dom}
\end{figure}
\subsection{Convergence : Choice of the Robin parameter}\label{sec.Robinparam}
Let us now study the convergence speed to reach the discrete solution, for
different values of the Robin parameter $\alpha$, in the case of $\mathbf{P}_2$ finite elements.
We first consider a domain decomposition in two subdomains, and then
in four subdomains as shown in Figure~\ref{fig:meshnc2-4dom}. We simulate the error equations (i.e. $f = 0$),
and use a random initial guess so that all the frequency components are present.
\begin{figure}[H]
\centering
\includegraphics[height=4.cm]{mesh2domP2.eps}
\includegraphics[height=4.cm]{mesh4dom2.eps}
\caption{Domain decomposition in two subdomains (left) and in four subdomains (right), with non-conforming meshes}
\label{fig:meshnc2-4dom}
\end{figure}
\subsubsection{2 subdomain case}\label{sec.2sub}
In this part, the unit square is decomposed into two subdomains
with non-conforming meshes (with $703$ and $2145$ nodes respectively)
as shown in Figure~\ref{fig:meshnc2-4dom} (on the left).
In Figure~\ref{fig:errconv2dom} (top left) we represent
the $H^1$ norm of the iterate error, for different values of the Robin
parameter $\alpha$. We observe that
the optimal numerical value of the Robin parameter is close to $\alpha_{\text{min}}$.
Since the relative $H^1$ error does not show where the error is largest,
we also represent in Figure~\ref{fig:errconv2dom} (top right)
the $L^{\infty}$ norm of the iterate error, for different values of the Robin parameter $\alpha$.
We obtain results similar to those for the relative $H^1$ error.
The Schwarz algorithm can be interpreted as a Jacobi algorithm applied
to an interface problem (see~\cite{Nataf.4}). In order to accelerate the
convergence, we can replace the Jacobi algorithm by a GMRES~(\cite{Saad}) algorithm.
Figure~\ref{fig:errconv2dom} shows the
$H^1$ norm (on the bottom left) and the $L^{\infty}$ norm (on the bottom right) of the GMRES iterate error,
for different values of the Robin parameter $\alpha$.
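This acceleration can be reproduced on a toy problem (an illustrative Python sketch: a random symmetric contraction $T$ stands in for the interface operator, and the naive GMRES variant below minimizes the residual over the Krylov space by least squares instead of using the Arnoldi process):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Symmetric contraction T (eigenvalues in [0.5, 0.9]) standing in for the
# interface operator: the Schwarz/Jacobi iteration reads x <- T x + b
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = Q @ np.diag(rng.uniform(0.5, 0.9, n)) @ Q.T
A = np.eye(n) - T                     # interface problem (I - T) x = b
b = rng.standard_normal(n)
tol = 1e-6 * np.linalg.norm(b)

# Jacobi (Schwarz) iteration count to reach the tolerance
x, jacobi_iters = np.zeros(n), 0
while np.linalg.norm(A @ x - b) > tol:
    x = T @ x + b
    jacobi_iters += 1

# Naive GMRES: minimize ||A y - b|| over the m-th Krylov space
def gmres_iters(A, b, tol):
    K = [b / np.linalg.norm(b)]
    for m in range(1, len(b) + 1):
        Km = np.column_stack(K)
        c, *_ = np.linalg.lstsq(A @ Km, b, rcond=None)
        if np.linalg.norm(A @ (Km @ c) - b) <= tol:
            return m
        v = A @ K[-1]
        K.append(v / np.linalg.norm(v))
    return len(b)

print(jacobi_iters, gmres_iters(A, b, tol))  # GMRES needs far fewer iterations
```

In this toy setting the Krylov solver requires far fewer iterations than the fixed-point iteration, in line with the acceleration observed above.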
\vspace{-3mm}
\begin{figure}[H]
\hspace*{-1.8cm}
\includegraphics[height=6.4cm]{conv2dom_H1_it.eps}\hspace{-0.7cm}
\includegraphics[height=6.4cm]{conv2dom_Linf_it.eps}
\hspace*{-1.8cm}
\includegraphics[height=6.4cm]{conv2dom_H1_gmres.eps}\hspace{-0.7cm}
\includegraphics[height=6.4cm]{conv2dom_Linf_gmres.eps}
\caption{Error versus Schwarz (top) or GMRES (bottom) iterations for different values of the Robin
parameter $\alpha$. Left: the $H^1$ error, right: the $L^\infty$ error.}
\label{fig:errconv2dom}
\end{figure}
For $\alpha=\alpha_{\text{min}}$, the convergence is
accelerated by a factor of 2 with GMRES, compared to the Schwarz algorithm.
Moreover, the gap between the error values for different $\alpha$
is decreasing when using the GMRES algorithm, compared to the Schwarz method.
Thus, the GMRES algorithm is less sensitive to the choice of the Robin
parameter. The sensitivity of the performance of the Krylov solver to the
optimized value of the parameter is thus not so critical but it is
real and especially visible for ranges of accuracy used for most
practical applications (relative errors of size $10^{-2}$ or
$10^{-3}$). Adding that this effect generally increases with the number
of subdomains and the refinement of the mesh \cite{Gander06} together with the complexity of the equations,
we advise when possible to look for the optimized value.
Moreover, this conclusion on the interest of GMRES compared to Schwarz is established for stationnary problems but is
not yet verified for time dependent problems with a Schwarz waveform relaxation algorithm,
as illustrated for example in~\cite{Hoang}.
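The role of the optimized Robin parameter can be illustrated on a classical one-dimensional model problem. The sketch below is the standard Fourier analysis of optimized Schwarz methods for the Laplace equation, not the paper's formula \eqref{eq.alphaopt} for $\mathbf{P}_p$ finite elements; the function names and the mesh size are ours. A scan over $\alpha$ recovers the equioscillation optimum $\alpha^*=\sqrt{k_{\min}k_{\max}}$:

```python
import math

def rho(k, alpha):
    """Per-frequency convergence factor of the nonoverlapping Robin-Schwarz
    iteration for the Laplace equation, where lambda(k) = |k|."""
    return ((k - alpha) / (k + alpha)) ** 2

def worst_rho(alpha, kmin, kmax, n=2000):
    """Worst-case factor over the frequencies resolved by the mesh."""
    ks = [kmin + (kmax - kmin) * i / (n - 1) for i in range(n)]
    return max(rho(k, alpha) for k in ks)

def optimize_alpha(kmin, kmax):
    """Geometric scan of alpha; returns (best alpha, its worst-case factor)."""
    best_a, best_r = kmin, 1.0
    a = kmin
    while a <= kmax:
        r = worst_rho(a, kmin, kmax)
        if r < best_r:
            best_a, best_r = a, r
        a *= 1.01  # 1% multiplicative steps
    return best_a, best_r

h = 1.0 / 50                       # illustrative mesh size
kmin, kmax = math.pi, math.pi / h  # lowest / highest resolved frequency
a_star, r_star = optimize_alpha(kmin, kmax)
```

Here $k_{\min}\sim\pi$ and $k_{\max}\sim\pi/h$ are the lowest and highest frequencies resolved by the mesh; the optimum grows like $h^{-1/2}$, consistent with a mesh-dependent optimized parameter.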
In Table~\ref{table:errconv} we show the number of iterations $N$ to reduce the $H^1$ error by a factor
$10^6$ versus the Robin parameter $\alpha$, for different degrees $p$ of the approximation. We observe that
$\alpha_{\min}$ is very close to the optimal numerical value, for all $p=1,2,3$.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l||l|l||l|l|}
\hline
$\alpha$ & N ($p=1)$ & $\alpha$ & N ($p=2$) & $\alpha$ & N ($p=3$)\\ \hline \hline
10 & 57 & 17 & 63 & 23 & 88 \\ \hline
15 & 40 & 22 & 51 & 28 & 74\\ \hline
$17.818 \,(\alpha_{\min})$ & 36 & $25.198 \,(\alpha_{\min})$ & 49 & $30.861 \,(\alpha_{\min})$ & 68 \\ \hline
20 & 39 & 27 & 52 & 33 & 66\\ \hline
25 & 40 & 32 & 61 & 35& 68\\ \hline
30 & 58 & 37 & 71 & 40& 77\\ \hline
\end{tabular}
\end{center}
\caption{Number of iterations to reduce the $H^1$ error by a factor
$10^6$ versus $\alpha$, for different degrees $p$}
\label{table:errconv}
\end{table}
\subsubsection{4 subdomain case}
In this part, the unit square is decomposed into four subdomains
with non-conforming meshes as shown in Figure~\ref{fig:meshnc2-4dom} (on the right).
From the results of Section~\ref{sec.2sub}, we will consider for the optimized parameter $\alpha$ the values
given by the smallest mesh size on the interface.
As we have four interfaces, using formula \eqref{eq.alphaopt}
with $h=h_{\text{min}}^{k,\ell}$, $ 1\le k,\ell \le 4$, with $\Gamma^{k,\ell}$
not empty, we obtain four values given by:
$\alpha_{\text{min}}^{1,2}=18.479,\ \alpha_{\text{min}}^{1,3}=19.250, \ \alpha_{\text{min}}^{2,4}=34.725,
\ \alpha_{\text{min}}^{3,4}=40.709$.
We define $\alpha^*$ as the parameter taking these four values on the interfaces (i.e. $\alpha^*$ is constant
on each interface, with a different constant from one interface to another).
We also consider a constant optimized value $\alpha_{\text{min}}$ over the four interfaces, obtained
by taking $h=\min(h_{\text{min}}^{1,2},h_{\text{min}}^{1,3},h_{\text{min}}^{2,4},h_{\text{min}}^{3,4})$ in
formula \eqref{eq.alphaopt}. We obtain $\alpha_{\text{min}}=\alpha_{\text{min}}^{3,4}=40.709$.
In Figure~\ref{fig:err4d} we plot
the $H^1$ norm of the error, for $\alpha^*$, for $\alpha_{\text{min}}$ and for different constant values of the Robin
parameter $\alpha$, with the Schwarz method on the left and the GMRES algorithm on the right.
We observe that the optimal numerical value of the Robin parameter is close to $\alpha_{\text{min}}$.
We also observe that using the interface-dependent optimized values (i.e. $\alpha^*$)
does not substantially improve the convergence speed compared to using the single value $\alpha_{\text{min}}$ on all
the interfaces.
\section{Conclusions}
We have analyzed the convergence of the iterative algorithm
for $\mathbf{P}_p$ finite elements, with $p\ge 1$ in 2D and $p=1$ in 3D,
for the NICEM method. It relies on Schwarz-type algorithms with Robin
interface conditions on non-conforming grids. We have extended the error estimates in 2D for piecewise polynomials
of higher order. Numerical results show that the method preserves the order of the finite elements
for $\mathbf{P}_p$ discretizations, with $p=1,2$ or $3$.
\begin{figure}[H]
\hspace{-1.5cm}
\includegraphics[height=6.cm]{conv4dom_H1_it.eps}\hspace{-0.7cm}
\includegraphics[height=6.cm]{conv4dom_H1_gmres.eps}
\caption{Error versus iterations, for different values of the Robin
parameter $\alpha$. Left: the Schwarz algorithm, right: the GMRES algorithm.}
\label{fig:err4d}
\end{figure}
\section{Introduction}
\label{sec:introduction}
Stars form from the gravitational collapse of dense cores within turbulent molecular clouds. Due to turbulence and/or the initial rotation of these cores, newly formed stars are attended by protostellar discs of gas and dust \citep[e.g.][]{Terebey:1984a,Attwood:2009a}. The idea that our Solar System formed from a protostellar disc has been discussed since the 18th century \citep{Laplace:1796a}. We now know that discs play a fundamental role in the formation of stars and planets. They provide young stars with the majority of their mass through accretion and a suitable dynamical and chemical environment for the formation of planets \citep[e.g.][]{Lissauer:1993a}.
Mass accretion onto young stars increases their luminosity, as gravitational energy is converted into heat at the accretion shock near the stellar surface. The accretion is typically considered to be continuous \citep{Krumholz:2006a,Bate:2009a,Offner:2009a,Krumholz:2010b}. However, consider a star which has just evolved out of the Class 0 phase and will ultimately have a mass of $1 \msun$. It has an age of $\sim 10^{5}$ yr and has accumulated half of its final mass. This yields a mean mass accretion rate of $\sim 5 \times 10^{-6} \msun \textup{yr}^{-1}$ and a mean luminosity of $\sim 25 \lsun$. Observational studies show that solar-like protostars have much lower luminosities \citep[e.g.][]{Kenyon:1990a,Evans:2009b, Enoch:2009a}. This is the so-called {\it luminosity problem}. It may be circumvented if accretion onto protostars is not continuous, but rather episodic, happening in short bursts \citep[e.g.][]{Dunham:2010a,Dunham:2012a,Audard:2014a}.
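The figures quoted above can be reproduced with a back-of-the-envelope script (a sketch; the $\sim 3\,{\rm R}_{\odot}$ protostellar radius adopted below is our assumption, not stated in the text):

```python
G, MSUN, RSUN, LSUN, YR = 6.674e-8, 1.989e33, 6.957e10, 3.828e33, 3.156e7  # cgs

def mean_accretion_rate(m_acc_msun, age_yr):
    """Mean rate (Msun/yr) needed to accrete m_acc_msun within age_yr."""
    return m_acc_msun / age_yr

def accretion_luminosity_lsun(m_msun, mdot_msun_yr, r_rsun=3.0):
    """L = G M Mdot / R, in Lsun; r_rsun is an assumed protostellar radius."""
    L = G * (m_msun * MSUN) * (mdot_msun_yr * MSUN / YR) / (r_rsun * RSUN)
    return L / LSUN

mdot = mean_accretion_rate(0.5, 1e5)          # 5e-6 Msun/yr
L_est = accretion_luminosity_lsun(0.5, mdot)  # ~26 Lsun, cf. the ~25 quoted
```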
FU Ori objects provide evidence of episodic accretion. These objects exhibit sudden luminosity increases on the order of $\sim5$~mag and estimated accretion rates of $> 10^{-4} \msun \textup{yr}^{-1}$ which last from a few tens of years to a few centuries \citep{Herbig:1977a, Hartmann:1996a,Greene:2008a,Peneva:2010a,Green:2011a}.
Episodic accretion may be due to gravitational instabilities \citep{Vorobyov:2005a,Machida:2011a,Vorobyov:2015a,Liu:2016b}, thermal instabilities in the inner disc region \citep{Hartmann:1985a, Lin:1985a, Bell:1994a}, or gravitational interactions in a binary system \citep{Bonnell:1992a,Forgan:2010a}. It has also been suggested that they may be due to the combined effect of gravitational instabilities operating in the outer disc region transferring matter inwards and the magneto-rotational instability (MRI) operating episodically in the inner disc region delivering matter onto the young protostar \citep{Armitage:2001a,Zhu:2007a,Zhu:2009a,Zhu:2009b,Zhu:2010a}.
It is expected that radiative feedback from young protostars will affect the dynamical and thermal evolution of their parent cloud and their discs \citep{Stamatellos:2011a,Stamatellos:2012a,Lomax:2014a,Guszejnov:2016a}.
A significant fraction of low-mass stars and brown dwarfs may form by fragmentation in gravitationally unstable discs \citep{Whitworth:2006a, Stamatellos:2007c, Stamatellos:2009a,Kratter:2010a,Zhu:2012a,Lomax:2014a,Kratter:2016a,Vorobyov:2013a}. Protostellar discs fragment if two conditions are met: (i) they are gravitationally unstable, i.e.
\begin{equation}
Q \equiv \frac{\kappa c_{s}}{\pi G \Sigma} < \beta,
\label{eqn:introduction:toomre}
\end{equation}
where $Q$ is the Toomre parameter \citep{Toomre:1964a}, $\kappa$ is the epicyclic frequency, $c_{s}$ is the local sound speed and $\Sigma$ is the disc surface density. The value of $\beta$ is on the order of unity and it is dependent on the assumed geometry of the disc and the equation of state used: for a razor-thin disc, $\beta = 1$; in a 3D disc $\beta = 1.4$ \citep[see][and references therein]{Durisen:2007a}. (ii)~They cool sufficiently fast, i.e. $t_{\textsc{cool}}<(0.5-2)t_{\textsc{orb}}$, where $t_{\textsc{orb}}$ is the local orbital period \citep{Gammie:2001a,Johnson:2003a, Rice:2003a, Rice:2005a}. In the last few years, the validity of the second criterion has been scrutinized, and it has been suggested that fragmentation may happen for even slower cooling rates \citep{Meru:2011a,Paardekooper:2011a,Lodato:2011a, Rice:2012a, Tsukamoto:2015a}. However, irrespective of the detailed criteria, there is significant observational evidence that disc fragmentation does occur \citep{Tobin:2013b, Tobin:2016a, Dupuy:2016a}.
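Criterion (i) is straightforward to evaluate. The sketch below computes $Q$ for illustrative values (our assumptions, not simulation values), taking a Keplerian disc so that $\kappa=\Omega$:

```python
import math

G = 6.674e-8  # cgs gravitational constant

def toomre_q(kappa, c_s, sigma):
    """Toomre parameter Q = kappa * c_s / (pi * G * Sigma)."""
    return kappa * c_s / (math.pi * G * sigma)

# Illustrative numbers: kappa = Omega at R = 50 AU around a 1 Msun star,
# T = 20 K molecular gas (mu = 2.3), Sigma = 10 g cm^-2.
MSUN, AU, K_B, M_H = 1.989e33, 1.496e13, 1.381e-16, 1.673e-24
R = 50 * AU
omega = math.sqrt(G * MSUN / R**3)           # Keplerian epicyclic frequency
c_s = math.sqrt(K_B * 20.0 / (2.3 * M_H))    # isothermal sound speed
Q = toomre_q(omega, c_s, 10.0)               # ~7: stable at this Sigma
```

A heavier disc (larger $\Sigma$) lowers $Q$ linearly, pushing the disc towards instability.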
Theoretical work and numerical simulations suggest that the conditions for disc fragmentation are met in the outer disc regions ($>70-100$~AU) \citep[e.g.][]{Whitworth:2006a,Stamatellos:2011d,Stamatellos:2009d,Stamatellos:2008a,Boley:2009a}. Most of the objects formed by disc fragmentation are brown dwarfs, though low-mass stars and planets may also form \citep{Stamatellos:2009a, Zhu:2012a,Vorobyov:2013a}. Fragments that form in gravitationally unstable discs start off with a mass that is determined by the opacity limit for fragmentation, i.e. a few ${\rm M}_{\rm J}$, where M$_{\rm J}$ is the mass of Jupiter \citep{Low:1976a,Rees:1976a,Boss:1988a,Whitworth:2006a,Boley:2010b,Forgan:2011b,Rogers:2012a}. However, they quickly accrete mass to become brown dwarfs or even low-mass stars \citep{Stamatellos:2009a,Kratter:2010b,Zhu:2012a, Stamatellos:2015a}. A few of the fragments remain in the planetary-mass regime ($M<13~{\rm M}_{\rm J}$) but these are typically ejected from the disc \citep{Li:2015b,Li:2016a}, becoming free-floating planets \citep[e.g.][]{Osorio:2000a, Kellogg:2016a}.
These low-mass objects that form by disc fragmentation have properties that are similar to the properties of objects forming from the collapse of isolated low-mass pre-(sub)stellar cloud cores. They are expected to be attended by discs \citep{Stamatellos:2009a,Liu:2015a,Sallum:2015a}, and they may also launch jets perpendicular to the disc axis \citep{Machida:2006a, Gressel:2015a}. \cite{Stamatellos:2015b} suggest that discs around low-mass objects (brown dwarfs and planets) that form by disc fragmentation are more massive than would be expected if they had formed in collapsing low-mass pre-(sub)stellar cloud cores, which is consistent with recent observations of brown dwarf discs in Upper Sco OB1 and Ophiuchus \citep{van-der-Plas:2016a}.
It is therefore reasonable to assume that low-mass objects that form by disc fragmentation may also exhibit radiative feedback due to accretion of material from their individual discs onto their surfaces. The effect of radiative feedback due to accretion onto low-mass objects such as planets and brown dwarfs has been ignored by previous studies of disc fragmentation. Recent simulations of the evolution of giant proto-planets in self-gravitating discs \citep{Stamatellos:2015a} have shown that radiative feedback from giant planets may reduce gas accretion and hence suppress their mass growth. These simulations show that, when radiative feedback is included, the fragment's final mass remains within the planetary-mass regime \citep[see also][]{Nayakshin:2013a}.
The goal of this paper is to examine how radiative feedback from objects that form by disc fragmentation influences the properties of these objects and whether subsequent fragmentation in the disc is affected. More specifically, we investigate whether radiative feedback from objects forming in the disc (hereafter referred to as {\it secondary objects}) suppresses their mass growth, increasing the possibility that these objects will end up as planets rather than brown dwarfs or more massive objects, in contrast with what previous studies suggest \citep[e.g.][]{Stamatellos:2009a,Kratter:2010b}.
We construct numerical experiments to examine three cases of radiative feedback from secondary objects: {\bf (i)}~No radiative feedback: gas is accreted onto the objects but no energy is fed back into the disc due to this process. {\bf (ii)}~Continuous radiative feedback: gas is accreted continuously onto the object and the accretion energy is continuously fed back into the disc. {\bf (iii)}~Episodic radiative feedback: we assume that low-mass secondary objects exhibit episodic outbursts just like their higher-mass counterparts do. Gas accumulates in the region close to the object (within $\sim 1$~AU) and, when the conditions are right, it accretes onto the object (see Section~\ref{sec:numerical_method} for details). Gas accretion onto secondary objects is episodic, resulting in episodic radiative feedback.
In Section~\ref{sec:numerical_method}, we provide the computational framework of this work, including the episodic accretion/feedback model we adopt. In Section~\ref{sec:initial_conditions} we discuss the initial conditions of the simulations. We present the results of the effect of radiative feedback on the evolution of discs and on the properties of the objects formed by disc fragmentation in Section~\ref{sec:disc_fragmentation_and_the_effect_of_radiative_feedback}. Our results are summarised in Section~\ref{sec:conclusions}.
\section{Numerical Method}
\label{sec:numerical_method}
We use the smoothed-particle hydrodynamics (SPH) code \textsc{seren} \citep{hubber:2011a,hubber:2011b} to simulate gravitationally unstable protostellar discs. Discs are represented by a large number of SPH particles. To avoid prohibitively small timesteps, a particle whose density exceeds $\rho_{\rm sink}=10^{-9} \textup{ g cm}^{-3}$ is replaced by a sink \citep{Bate:1995a} that represents a bound object (star, brown dwarf or planet, depending on its mass). Sinks interact with the rest of the disc both gravitationally and radiatively (in the cases where radiative feedback is included). Gas particles which pass within $R_{\rm sink}=1$~AU of a sink and are gravitationally bound to it are accreted onto it.
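The accretion test can be sketched as follows (a simplification: the full criteria of \cite{Bate:1995a} include additional checks, e.g. on angular momentum, that we omit here; all names are ours):

```python
import math

G, AU = 6.674e-8, 1.496e13  # cgs

def is_accreted(r_gas, v_gas, r_sink, v_sink, m_sink, r_acc=1.0 * AU):
    """Swallow a gas particle if it lies within R_sink of the sink and is
    gravitationally bound to it (negative specific orbital energy)."""
    dr = [a - b for a, b in zip(r_gas, r_sink)]
    dv = [a - b for a, b in zip(v_gas, v_sink)]
    dist = math.sqrt(sum(x * x for x in dr))
    if dist > r_acc:
        return False                                  # outside the sink radius
    e_spec = 0.5 * sum(x * x for x in dv) - G * m_sink / dist
    return e_spec < 0.0                               # bound to the sink

m_sink = 0.1 * 1.989e33  # a 0.1 Msun sink
bound = is_accreted((0.5 * AU, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), m_sink)
```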
The heating and cooling of the gas is treated using the radiative transfer method of \cite{Stamatellos:2007b}: the density and the gravitational potential of a gas particle are used to estimate a column density, which, along with the local opacity, yields an optical depth for each particle. This determines the heating and cooling of the particle and incorporates the effects of the rotational and vibrational degrees of freedom of $\textup{H}_{2}$, the dissociation of $\textup{H}_{2}$, ice melting, dust sublimation, and bound-free, free-free and electron scattering interactions. The equation of state used and the effect of each assumed constituent are described in detail in \textsection 3 of \cite{Stamatellos:2007b}.
The radiative heating/cooling rate of a particle $i$ is
\begin{equation}
\frac{\dif u_{i}}{\dif t} = \frac{4 \sigma_{\textsc{sb}} \left( T_{\textsc{bgr}}^{4} - T_{i}^{4} \right)}{\bar{\Sigma}_{i}^{2} \bar{\kappa}_{\textsc{r}}\left(\rho_{i}, T_{i} \right) + \kappa_{\textsc{p}}^{-1}\left(\rho_{i}, T_{i} \right)},
\label{eqn:heatingRate}
\end{equation}
where $\sigma_{\textsc{sb}}$ is the Stefan-Boltzmann constant, $T_{\textsc{bgr}}$ is the pseudo-background temperature below which particles cannot cool radiatively, $\bar{\Sigma}_{i}$ is the mass-weighted mean column density of the particle, and $\bar{\kappa}_{\textsc{r}}$ and $\kappa_{\textsc{p}}$ are the Rosseland- and Planck-mean opacities, respectively.
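Equation~(\ref{eqn:heatingRate}) translates directly into code. In the sketch below the opacities are treated as given numbers, whereas the actual method interpolates $\bar{\kappa}_{\textsc{r}}(\rho,T)$ and $\kappa_{\textsc{p}}(\rho,T)$ from tables; the example values are ours:

```python
SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def du_dt(T_i, T_bgr, col_density, kappa_R, kappa_P):
    """Net radiative heating (> 0) or cooling (< 0) rate per unit mass.
    The denominator interpolates between the optically thick limit
    (Sigma_bar^2 * kappa_R dominates) and the optically thin limit
    (1/kappa_P dominates)."""
    return (4.0 * SIGMA_SB * (T_bgr**4 - T_i**4)
            / (col_density**2 * kappa_R + 1.0 / kappa_P))

rate_cooling = du_dt(30.0, 10.0, 1.0, 0.1, 0.1)  # hotter than background: cools
rate_heating = du_dt(5.0, 10.0, 1.0, 0.1, 0.1)   # colder than background: heats
```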
Once most of the gas in the disc has dissipated (accreted onto the central star and onto the secondary objects formed in the disc; $t=10$~kyr), we utilise an N-body integrator with a 4th-order Hermite integration scheme \citep{Makino:1992a} to follow the evolution of the objects present at the end of each hydrodynamic simulation up to $200 \textup{ kyr}$. We use a strict timestep criterion so that energy is conserved to better than one part in $10^8$ \citep{Hubber:2005a}. This allows us to determine the ultimate fate of these objects: will they remain bound to the central star or be ejected from the system? We note that at this phase we ignore gravitational and dissipative interactions due to gas within the disc.
\subsection{Radiative feedback from sinks}
\label{sub:radiative_feedback_from_sinks_stars}
Sinks that represent stars, brown dwarfs and planets in the simulations interact both gravitationally and radiatively with the disc. In the optically thin limit, the temperature that the dust/gas will attain at a distance $\left| \vec{r} - \vec{r}_{n}\right|$ from a radiative object $n$ is
\begin{equation}
T_n\left(\vec{r} \right) = \left( \frac{L_{n}}{16 \pi \sigma_{\textsc{sb}}} \right)^{1/4}
\left( \left| \vec{r} - \vec{r}_{n}\right| \right)^{-1/2}\,.
\label{eqn:Tbgr2}
\end{equation}
In the optically thick limit, considering a geometrically thin, passive disc \cite[e.g.][]{Kenyon:1987a} the temperature is
\begin{equation}
T_n\left(\vec{r} \right) = \left( \frac{L_{n} R_{n}}{4 \pi \sigma_{\textsc{sb}} } \right)^{1/4}
\left( \left| \vec{r} - \vec{r}_{n}\right| \right)^{-3/4}\,.
\label{eqn:Tbgr3}
\end{equation}
Therefore, the temperature drops faster with the distance from the radiative object in the optically thick case ($q=3/4$) than in the optically thin case ($q=1/2$). However, in the case of a flared disc the temperature drop is less steep, approaching the $q=1/2$ value. This is because a flared disc intercepts a higher fraction of the star's radiation \citep[e.g.][]{Kenyon:1987a, Chiang:1997a}. This lower value for $q$ is also consistent with disc observations \cite[e.g.][]{Andrews:2009b}.
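The two limiting temperature laws, Equations~(\ref{eqn:Tbgr2}) and (\ref{eqn:Tbgr3}), are simple power laws. The sketch below (constants and names ours) recovers the familiar value $T\approx 280$~K at 1~AU from a $1\,\lsun$ source in the optically thin limit:

```python
import math

SIGMA_SB = 5.6704e-5                            # erg cm^-2 s^-1 K^-4
LSUN, RSUN, AU = 3.828e33, 6.957e10, 1.496e13   # cgs

def T_thin(L, d):
    """Optically thin limit: T proportional to d^(-1/2)."""
    return (L / (16.0 * math.pi * SIGMA_SB)) ** 0.25 * d ** -0.5

def T_thick(L, R_star, d):
    """Geometrically thin, passive disc: T proportional to d^(-3/4)."""
    return (L * R_star / (4.0 * math.pi * SIGMA_SB)) ** 0.25 * d ** -0.75

T1 = T_thin(LSUN, AU)  # ~278 K at 1 AU for a 1 Lsun source
```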
Customarily, the optically thin case is used in analytic and computational studies of protostellar disc evolution \citep[e.g.][]{Matzner:2005a,Kratter:2006a,Stamatellos:2007b,Offner:2009a,Stamatellos:2009d,Stamatellos:2011a,Zhu:2012a,Lomax:2014a,Vorobyov:2015a,Dong:2016a,Kratter:2016a}, albeit with a scaled down stellar luminosity (by a factor of $\sim 0.1$) so as to match detailed radiative transfer calculations \citep[see][]{Matzner:2005a}. In either case, the temperature at a given distance from a radiative source depends on the luminosity of the source. The luminosity of young stellar and substellar objects is mostly due to accretion of material onto their surfaces.
In the simulations presented here we assume a time-independent contribution from the central star in the optically thick regime, and a time-dependent contribution in the optically thin regime from the secondary objects that form self-consistently in the simulations. We describe each one in detail in the following sections. We note that these contributions only account for disc heating due to radiation released on the surfaces of bound objects; energy release in the disc midplane due to accretion is taken into account self-consistently within the hydrodynamic simulation. This approach ignores the case in which the density of the gas within the Hill radius of a secondary object is high, shielding the rest of the disc from the effect of heating. However, such a phase would be short-lived as gas is accreted onto the secondary object.
\subsubsection{Radiative feedback from the central star}
\label{subsub:radiative_feedback_from_the_central_star}
We assume that the radiative feedback from the central star is constant with time, and independent of the accretion rate onto it. This is done because the central star is part of the initial conditions and does not form self-consistently in the simulations. Therefore the accretion rate onto it may not be properly determined. Additionally, by choosing a relatively steep temperature profile we minimise the role of the central star in stabilising the disc, and focus on the radiative effect from the secondary objects forming in the disc.
The pseudo-background temperature due to the central star is set to
\begin{equation}
T_{\textsc{bgr}}^{\star}(R) = \left[ T_{0}^{2} \left( \frac{R^{2} + R_{0}^{2}}{\textup{AU}^{2}} \right)^{-3/4} + T_{\infty}^{2} \right]^{1/2}.
\label{eqn:temperatureProfile}
\end{equation}
$R$ is the distance from the star on the disc midplane, $R_{0} = 0.25$ AU is a smoothing radius which prevents non-physical values when $R \rightarrow 0$, $T_{0} = 250$ K is the temperature at a distance of 1 AU from the central star, and $T_{\infty} = 10$ K is the temperature at large distances from the star. The above equation is chosen purely on numerical grounds to reproduce the required properties of the temperature profile.
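Equation~(\ref{eqn:temperatureProfile}) can be checked numerically (a sketch; the function name is ours):

```python
def T_bgr_star(R_au, T0=250.0, R0_au=0.25, T_inf=10.0):
    """Pseudo-background temperature (K) from the central star at midplane
    distance R_au (in AU), with the smoothing radius R0_au."""
    return (T0**2 * (R_au**2 + R0_au**2) ** -0.75 + T_inf**2) ** 0.5

T_1au = T_bgr_star(1.0)      # ~245 K (the smoothing slightly lowers T0)
T_100au = T_bgr_star(100.0)  # ~13 K, approaching T_inf
```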
\subsubsection{Radiative feedback from secondary objects}
\label{subsub:radiative_feedback_from_secondary_objects}
The radiative feedback from secondary objects depends on the accretion rate of gas onto them, and it is therefore time-dependent.
The pseudo-background temperature due to radiative secondary objects in the disc is set to
\begin{equation}
T_{\textsc{bgr}}^{4}\left(\vec{r} \right) = \left(10 \textup{ K} \right)^{4} + \sum_{n} \left( \frac{L_{n}}{16 \pi \sigma_{\textsc{sb}} \left| \vec{r} - \vec{r}_{n}\right|^{2}} \right),
\label{eqn:Tbgr}
\end{equation}
where $L_{n}$ and $\vec{r}_{n}$ are the luminosity and position of a radiative object $n$ \citep{Stamatellos:2011a,Stamatellos:2012a,Stamatellos:2015a}. The luminosity of a radiative secondary object $n$ is set to
\begin{equation}
L_{n} = L_{\rm NB} + \frac{fGM_{n}\dot{M_{n}}}{R_{\textup{acc}}}.
\label{eqn:massAccretionLuminosity}
\end{equation}
The first term on the right hand side of the equation describes the luminosity of the object from nuclear burning which is set equal to $\left({M_{n}}/{\msun}\right)^{3}\lsun$ for stellar objects ($M>0.08\msun$) and 0 for substellar objects. The second term represents the accretion luminosity. We let $f = 0.75$ be the fraction of accretion energy that is radiated away at the photosphere of the object \citep{Offner:2010d}. $R_{\textup{acc}}$ is the accretion radius, set to $R_{\textup{acc}} = 3 \textup{ R}_{\odot}$ \citep{Palla:1993a}. The choice of the accretion radius does not qualitatively affect the results presented in this paper.
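Equation~(\ref{eqn:massAccretionLuminosity}) translates directly into code (a sketch using the parameter values above; the example mass and accretion rate are ours):

```python
G = 6.674e-8                                                  # cgs
MSUN, RSUN, LSUN, YR = 1.989e33, 6.957e10, 3.828e33, 3.156e7  # cgs

def luminosity_lsun(m_msun, mdot_msun_yr, f=0.75, r_acc_rsun=3.0):
    """Object luminosity in Lsun: nuclear-burning term (stellar masses
    only) plus accretion term f*G*M*Mdot/R_acc."""
    L_nb = m_msun**3 * LSUN if m_msun > 0.08 else 0.0
    L_acc = (f * G * (m_msun * MSUN) * (mdot_msun_yr * MSUN / YR)
             / (r_acc_rsun * RSUN))
    return (L_nb + L_acc) / LSUN

L_bd = luminosity_lsun(0.05, 1e-6)  # accreting proto-brown dwarf, ~0.4 Lsun
```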
We consider three cases of radiative feedback from secondary objects forming in the disc by fragmentation: (i) no radiative feedback, (ii) continuous radiative feedback, and (iii) episodic radiative feedback. In the case of no radiative feedback, objects accrete gas but the accretion energy deposited on their surfaces is not fed back into the disc. In the continuous radiative feedback case, gas accretes onto the object releasing energy that is fed back into the disc through the pseudo-background temperature set by Equations~(\ref{eqn:Tbgr})-(\ref{eqn:massAccretionLuminosity}). In the episodic radiative feedback case, mass accretes in periodic bursts resulting in episodic energy release.
The episodic accretion model that we use is described in detail in \cite{Stamatellos:2011a} and \cite{Stamatellos:2012a}. Gravitational instabilities cannot develop within the inner regions of a disc ($\sim$ a few AU) around a secondary object due to high temperatures. Therefore, there is no mechanism to transport angular momentum outwards for the gas to accrete onto the object, and mass accumulates in the inner disc region. The accumulation of gas increases the density and temperature. When the temperature is sufficiently high to ionise the gas, the magnetorotational instability (MRI) is activated, and gas starts flowing onto the secondary object. As with gravitational instability, angular momentum is transported outwards and matter flows inward. When the mass in the inner accretion disc is depleted, the MRI ceases, and mass once again begins to accumulate within the inner disc region.
As the hydrodynamic simulations do not have the resolution to capture the details of the inner accretion disc around each secondary object, \cite{Stamatellos:2011a} developed a sub-grid model to capture the effect of the MRI, utilizing the time-dependent episodic accretion model of \cite{Zhu:2010a}. Each secondary sink is notionally split into two components, the {\it object} and the {\it inner accretion disc} (IAD), such that
\begin{equation}
M_{\textup{sink}} = M_{\star} + M_{\textsc{iad}},
\label{eqn:EAMsinkMass}
\end{equation}
where $M_{\star}$ is the mass of the object and $M_{\textsc{iad}}$ is the mass of its inner accretion disc. The accretion rate onto the object, $\dot{M_{\star}}$, is assumed to have two components: a small continuous accretion $\dot{M}_{\textsc{con}}$, and the accretion due to the MRI, $\dot{M}_{\textsc{mri}}$. The total accretion rate is therefore
\begin{equation}
\dot{M}_{\star} = \dot{M}_{\textsc{con}} + \dot{M}_{\textsc{mri}}.
\label{eqn:EAMsinkAccretion}
\end{equation}
The material only couples to the magnetic field when it becomes ionised. The temperature at which this occurs is set to $T_{\textsc{mri}} \sim 1400$ K. \cite{Zhu:2010a} estimate that the accretion rate during an episode and the duration of an episode are
\begin{equation}
\dot{M}_{\textsc{mri}} \sim 5 \times 10^{-4} \msun \textup{yr}^{-1} \left(\frac{\alpha_{\textsc{mri}}}{0.1}\right),
\label{eqn:EAMepisodeAccretion}
\end{equation}
and
\begin{equation}
\Delta t_{\textsc{mri}} \sim 0.25 \textup{ kyr} \left( \frac{\alpha_{\textsc{mri}}}{0.1} \right)^{-1} \left( \frac{M_{\star}}{0.2 \msun} \right)^{2/3} \left( \frac{\dot{M}_{\textsc{iad}}}{10^{-5} \msun \textup{yr}^{-1}} \right)^{1/9}\,,
\label{eqn:EAMepisodeDuration}
\end{equation}
respectively. $\dot{M}_{\textsc{iad}}$ is the mass accretion rate which flows onto the inner accretion disc, i.e. the accretion rate onto the sink. $\alpha_{\textsc{mri}}$ is the MRI viscosity $\alpha$-prescription parameter \citep{Shakura:1973a}. The MRI is assumed to occur when sufficient mass has been accumulated within the inner accretion disc such that
\begin{equation}
M_{\textsc{iad}} > M_{\textsc{mri}} \sim \dot{M}_{\textsc{mri}} \Delta t_{\textsc{mri}}.
\label{eqn:EAMmriCondition}
\end{equation}
Substituting in Equations (\ref{eqn:EAMepisodeAccretion}) and (\ref{eqn:EAMepisodeDuration}) yields
\begin{equation}
M_{\textsc{iad}} > 0.13 \msun \left( \frac{M_{\star}}{0.2 \msun} \right)^{2/3} \left( \frac{\dot{M}_{\textsc{iad}} }{10^{-5} \msun \textup{yr}^{-1}} \right)^{1/9}.
\label{eqn:EAMmriConditionDetail}
\end{equation}
Observations of FU Orionis stars \citep[see e.g.][]{Hartmann:1996a} show that the accretion rate during an outburst episode drops exponentially. We therefore set the accretion rate onto the central object to
\begin{equation}
\dot{M}_{\textsc{mri}} = 1.58 \frac{M_{\textsc{mri}}}{\Delta t_{\textsc{mri}}} \exp \left\{ -\frac{t - t _{0}}{\Delta t_{\textsc{mri}}} \right\}, \hspace{1cm} t_{0} < t < t_{0} + \Delta t_{\textsc{mri}}.
\label{eqn:EAMmriAccretion}
\end{equation}
$t_{0}$ and $t_{0} + \Delta t_{\textsc{mri}}$ are the temporal bounds of the accretion episode. The factor of $1.58 = e / (e - 1)$ is included to allow all of the mass in the IAD to be accreted onto the object within $\Delta t_{\textsc{mri}}$. The accumulation of mass into the inner accretion disc occurs on a timescale
\begin{equation}
\Delta t_{\textsc{acc}} \sim \frac{M_{\textsc{mri}}}{\dot{M}_{\textsc{iad}}}.
\label{eqn:EAMmriAccumulationTime}
\end{equation}
Using Equation (\ref{eqn:EAMmriConditionDetail}) gives
\begin{equation}
\Delta t_{\textsc{acc}} \simeq 13 \textup{ kyr} \left( \frac{M_{\star}}{0.2 \msun} \right)^{2/3} \left( \frac{\dot{M}_{\textsc{iad}}}{10^{-5} \msun \textup{yr}^{-1}} \right)^{-8/9}.
\label{eqn:EAMmriAccumulationTimeDetail}
\end{equation}
Comparing this with Equation (\ref{eqn:EAMepisodeDuration}) shows that the period when mass is being accumulated into the inner accretion disc is much longer than the accretion episodes.
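Equations (\ref{eqn:EAMmriConditionDetail}) and (\ref{eqn:EAMmriAccumulationTimeDetail}) can be checked for the fiducial values (a sketch; the function names are ours):

```python
def m_mri_msun(m_star_msun, mdot_iad_msun_yr):
    """IAD mass (Msun) needed to trigger the MRI, in the scaled form above."""
    return (0.13 * (m_star_msun / 0.2) ** (2 / 3)
                 * (mdot_iad_msun_yr / 1e-5) ** (1 / 9))

def dt_acc_yr(m_star_msun, mdot_iad_msun_yr):
    """Accumulation time between outbursts, Delta t_acc = M_mri / Mdot_iad."""
    return m_mri_msun(m_star_msun, mdot_iad_msun_yr) / mdot_iad_msun_yr

m_thresh = m_mri_msun(0.2, 1e-5)  # 0.13 Msun for the fiducial values
t_gap = dt_acc_yr(0.2, 1e-5)      # 13 kyr, vs a ~0.25 kyr outburst
```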
The free variables in this model are $\dot{M}_{\textsc{con}}$ and $\alpha_{\textsc{mri}}$. Increasing $\alpha_{\textsc{mri}}$ yields shorter but more intense accretion episodes. Note that Equations (\ref{eqn:EAMmriConditionDetail}) and (\ref{eqn:EAMmriAccumulationTimeDetail}) are independent of $\alpha_{\textsc{mri}}$. The uncertainty in $\alpha_{\textsc{mri}}$, which lies in the range $0.01-0.4$ \citep{King:2007a}, therefore affects neither the mass accreted in an episode nor the time interval between successive episodes.
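The sub-grid model can be summarised by a toy time-stepper (our simplifications: a constant feeding rate $\dot{M}_{\textsc{iad}}$, no continuous component $\dot{M}_{\textsc{con}}$, and the IAD drained instantaneously at the start of each outburst rather than with the exponential profile above; all names are ours):

```python
def episodic_accretion(mdot_iad, t_end_yr, m_star0=0.01, alpha_mri=0.1, dt=1.0):
    """Toy integration of the episodic sub-grid model. mdot_iad is the
    constant feeding rate in Msun/yr; returns a list of outbursts as
    (start time in yr, duration in yr)."""
    m_star, m_iad = m_star0, 0.0
    bursts, burst_end = [], -1.0
    t = 0.0
    while t < t_end_yr:
        m_iad += mdot_iad * dt                # IAD accumulates mass
        if t > burst_end:                     # no new burst while one is on
            thresh = (0.13 * (m_star / 0.2) ** (2 / 3)
                           * (mdot_iad / 1e-5) ** (1 / 9))
            if m_iad > thresh:                # MRI triggers
                dt_mri = (250.0 * (0.1 / alpha_mri)
                          * (m_star / 0.2) ** (2 / 3)
                          * (mdot_iad / 1e-5) ** (1 / 9))
                bursts.append((t, dt_mri))
                burst_end = t + dt_mri
                m_star += m_iad               # drain the IAD onto the object
                m_iad = 0.0
        t += dt
    return bursts

bursts = episodic_accretion(1e-5, 2e4)
```

As expected, successive outbursts become longer and more widely spaced as the object grows in mass.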
\section{Initial Conditions}
\label{sec:initial_conditions}
We study the evolution of a $0.3$-M$_{\sun}$ gravitationally unstable protostellar disc around a $0.7$-M$_{\sun}$ star. The surface density and temperature profiles of the disc are set to $\Sigma \propto R^{-p}$ and $T \propto R^{-q}$, respectively. The surface density power index $p$ is thought to lie between 1 and $3/2$ from semi-analytical studies of cloud collapse and disc creation \citep{Lin:1990a,Tsukamoto:2015a}. The temperature power index $q$ has been observed to lie in the range from $0.35$ to $0.8$ from studies of pre-main sequence stars \citep{Andrews:2009b}. Here, we adopt $p = 1$ and a relatively high value of $q = 0.75$, in order to minimize the role of the central star in stabilizing the disc and focus on the radiative effect from the secondary objects forming in the disc.
The disc extends from an inner radius $R_{\textup{in}} = 1$ AU to an outer radius $R_{\textup{out}} = 100$~AU. The surface density profile we use is
\begin{equation}
\Sigma(R) = \Sigma_{0} \left( \frac{R_{0}^{2}}{R_{0}^{2} + R^{2}} \right)^{1/2}\,,
\label{eqn:columnDensityProfile}
\end{equation}
where $\Sigma_{0} = 1.7 \times 10^{4} \textup{ g cm}^{-2}$ is the surface density at $R = 0$.
The initial disc temperature profile is set using Equation~(\ref{eqn:temperatureProfile}), i.e. initially $T(R)=T_{\textsc{bgr}}^{\star}(R)$. We use $N = 10^{6}$ SPH particles to represent the disc. These are distributed randomly between $R_{\textup{in}}$ and $R_{\textup{out}}$ so as to reproduce the disc density profile. The values we use for all the aforementioned parameters are listed in Table \ref{tab:simParams}.
The disc is initially massive enough that it is gravitationally unstable ($Q<1$) beyond $\sim 30 \textup{ AU}$ (see Figure \ref{fig:initialToomre}). We have chosen such a profile to ensure that the disc will fragment, so as to study the effect of radiative feedback from secondary objects on subsequent fragmentation. The initial Toomre parameter reaches very low values at the outer edge of the disc, which is unrealistic. When a disc forms around a young protostar its mass increases by infalling material from the protostellar envelope. That progressively reduces $Q$ to just below $\sim1$ and the disc may then fragment \cite[e.g.][]{Stamatellos:2011a}. In the simulations that we present here, the initial low $Q$ value results in high effective viscosity, so that the disc attains a nearly uniform $Q\sim1$ (see Figure \ref{fig:initialToomre}, red \& green lines) within a few outer orbital periods. This is similar to what would be expected for a disc forming in a collapsing cloud.
We compute the gravitational acceleration for every disc particle using a spatial octal-tree \citep{Barnes:1986a}. The velocity of a particle $i$ in the $x$-$y$ plane is set using $v_{xy, i} = \sqrt{R_{i} \left| g_{xy, i} \right|}$, where $R_{i}$ and $g_{xy, i}$ are the radius and the gravitational acceleration of the particle on the disc midplane, respectively. We assume that there are no initial motions perpendicular to the disc midplane.
The number of SPH particles used to represent the disc $\left( 10^{6} \right)$ ensures that gravitational fragmentation can be properly resolved. The minimum resolvable mass for a $0.3 \msun$ disc comprising $10^{6}$ particles is $3.14 \times 10^{-4} \mjup$. \cite{Bate:1997a} argue that the Jeans mass must be resolved by $2 \times N_{\textup{neigh}}$ and \cite{Nelson:2006a} conclude that the Toomre mass must be resolved by $6 \times N_{\textup{neigh}}$. The simulations performed by \cite{Stamatellos:2009d} with a $0.7 \msun$ disc and $1.5 \times 10^{5}$ particles find a minimum Jeans mass of $\sim 2 \mjup$ and a minimum Toomre mass of $\sim 2.5 \mjup$. If we take $2 \mjup$ as a lower resolution limit, then this corresponds to $\sim 128 \times N_{\textup{neigh}}$ i.e. the disc is sufficiently resolved. The vertical structure of our disc is also adequately resolved, since we use $\sim 7$ times more particles than the simulations of \cite{Stamatellos:2009d} where the disc scale-height is resolved by a factor of more than $3-5$ smoothing lengths.
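The resolution argument above is simple arithmetic (a sketch; the neighbour number $N_{\textup{neigh}}=50$ below is our assumption of a typical SPH value, not stated in the text):

```python
MSUN_IN_MJUP = 1047.6  # approximate Msun-to-Mjup conversion

def particle_mass_mjup(m_disc_msun, n_particles):
    """Mass of a single SPH particle, in Jupiter masses."""
    return m_disc_msun * MSUN_IN_MJUP / n_particles

m_p = particle_mass_mjup(0.3, 10**6)  # ~3.14e-4 Mjup per particle
n_per_2mjup = 2.0 / m_p               # particles resolving a 2 Mjup fragment
n_neigh = 50                          # assumed typical SPH neighbour number
ratio = n_per_2mjup / n_neigh         # ~128, as quoted above
```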
\begin{center}
\begin{table}
\centering
\caption{The initial disc parameters. The disc is gravitationally unstable, as determined by the Toomre criterion.}
\begin{tabular}{l r}
\hline
\hline
Disc Parameter & Value \\
\hline
\hline
$N$ & $10^{6}$ \\
$M_{\textup{disc}}$ & $0.3 \msun$ \\
$M_{\star}$ & $0.7 \msun$ \\
$R_{\textup{in}}$ & $1 \textup{ AU}$ \\
$R_{\textup{out}}$ & $100 \textup{ AU}$ \\
$R_{0}$ & $0.25 \textup{ AU}$ \\
$T_{0}$ & $250 \textup{ K}$ \\
$T_{\infty}$ & $10 \textup{ K}$ \\
$p$ & 1 \\
$q$ & 0.75 \\
\hline
\end{tabular}
\label{tab:simParams}
\end{table}
\end{center}
\begin{figure}
\begin{center}
\includegraphics[width = 0.5\textwidth, trim = 0cm 0cm 0cm 0cm, clip=true]{Toomre.pdf}
\caption{Azimuthally averaged Toomre parameter $Q$ for a disc with the initial conditions listed in Table \ref{tab:simParams}, and at later times (as marked on the graph), before the disc fragments.}
\label{fig:initialToomre}
\end{center}
\end{figure}
\section{Disc fragmentation and the Effect of Radiative Feedback}
\label{sec:disc_fragmentation_and_the_effect_of_radiative_feedback}
We perform a set of 5 simulations of a $0.3$-M$_{\sun}$ gravitationally unstable protostellar disc around a $0.7$-M$_{\sun}$ star. The disc initial conditions are identical (as described in Section~\ref{sec:initial_conditions}). The disc is gravitationally unstable, thus spiral arms form and the disc fragments to form secondary objects in all 5 cases (see Table~\ref{tab:formation}). The only difference between the 5 simulations is the way the radiative feedback from these secondary objects is treated: (i) in simulation ``NRF'' there is no radiative feedback from secondary objects; (ii) in simulation ``CRF'' the radiative feedback from secondary objects is continuous; (iii) in simulations ``ERF001'', ``ERF01'' and ``ERF03'' the radiative feedback is episodic. The difference between the last three simulations is the value of the viscosity parameter due to the MRI, which determines the intensity and the duration of the outbursts (ERF001: $\alpha_{\textsc{mri}} = 0.01$; ERF01: $\alpha_{\textsc{mri}} = 0.1$; ERF03: $\alpha_{\textsc{mri}} = 0.3$). The disc surface density and the disc midplane temperature of the 5 runs are shown in Figures~\ref{fig:na1}-\ref{fig:ea1_03}. In all five simulations the discs evolve identically until 2.7 kyr, when an object forms due to gravitational fragmentation. From this point on, the simulations differ, as this object provides different radiative feedback in each run. In the NRF run, 7 secondary objects form by disc fragmentation, whereas in the CRF run only 1 secondary object forms. In the ERF runs, 3-4 secondary objects form, i.e. a number in between the two previous cases, similar to what previous studies have found \citep{Stamatellos:2011a,Stamatellos:2012a,Lomax:2014a,Lomax:2015a}. The properties of the objects formed in each run are listed in Table~\ref{tab:formation}. In the following subsections we discuss each of the simulations in detail.
\begin{table*}
\centering
\caption{The properties of objects formed by gravitational fragmentation in the simulations with no radiative feedback from secondary objects (NRF), with continuous radiative feedback (CRF), and with episodic radiative feedback (ERF001, ERF01, ERF03). $N_{\textup{o}}$ is the total number of secondary objects formed, $t_i$ is the formation time of an object, $M_i$ its initial mass, and $M_f$ its final mass (i.e. at the end of the hydrodynamical simulation; $t=10$~kyr). $M_{\textsc{max}}$ is the maximum possible mass it can attain by accreting mass from the disc (see discussion in the text), $\langle \dot{M} \rangle$ is the mean accretion rate, $\dot{M}_{f}$ is the accretion rate onto the object at 10~kyr, $R_i$ is the distance from the star when it forms, $R_f$ is its final distance from the star, and $\Delta R=R_f-R_i$ is its radial migration within 10 kyr. S denotes the central star, LMS secondary low-mass stars, BD brown dwarfs, and P planets. In the final column we mark the boundedness at the end of the NBODY simulation (200~kyr): B and E denote bound and ejected, respectively.}
\resizebox{\textwidth}{!}{\begin{tabular}{l c c c c c c c c c c c c c c}
\hline
\hline
Run ID & $\alpha_{\textsc{mri}}$ & $\textup{N}_{\textup{o}}$ & $t_{i}$ & $M_{i}$ & $M_{f}$ & $M_{\textsc{max}}$ & $\langle \dot{M} \rangle$ & $\dot{M}_{f}$ & $R_{i}$ & $R_{f}$ & $\Delta R$ & Type & Bound \\
& & & (kyr) & $\left(\textup{M}_{\textsc{j}}\right)$ & $\left(\textup{M}_{\textsc{j}}\right)$ & $\left(\textup{M}_{\textsc{j}}\right)$ & $\left(10^{-7} \msun \textup{yr}^{-1}\right)$ & $\left(10^{-7} \msun \textup{yr}^{-1}\right)$ & $\left(\textup{AU} \right)$ & $\left(\textup{AU} \right)$ & $\left(\textup{AU} \right)$ & \\
\hline
\hline
NRF & - & 7 & \scell{0.0 \\ 2.7 \\ 4.3 \\ 5.5 \\ 5.9 \\ 6.0 \\ 7.1 \\ 7.5} &
\scell{733 \\ 2 \\ 4 \\ 2 \\ 2 \\ 2 \\ 1 \\ 1} &
\scell{773 \\ 97 \\ 48 \\ 13 \\ 4 \\ 66 \\ 4 \\ 3} &
\scell{774 \\ 99 \\ 53 \\ 13 \\ 4 \\ 67 \\ 4 \\ 3} &
\scell{37.7 \\ 124 \\ 75.0 \\ 22.0 \\ 4.63 \\ 154 \\ 7.78 \\ 6.33} &
\scell{5.29 \\ 14.7 \\ 27.3 \\ 2.83 \\ 0.02 \\ 4.57 \\ 3.19 \\ 0.34} &
\scell{0 \\ 77 \\ 65 \\ 160 \\ 270 \\ 103 \\ 191 \\ 103} &
\scell{0 \\ 105 \\ 25 \\ 144 \\ 570 \\ 15 \\ 169 \\ 235} &
\scell{0 \\ 28 \\ -40 \\ -16 \\ 300 \\ -88 \\ -22 \\ 132} &
\scell{S \\ LMS \\ BD \\ P/BD \\ P \\ BD \\ P \\ P} &
\scell{- \\ B \\ E \\ E \\ E \\ B \\ E \\ E} \\ \\
CRF & - & 1 & \scell{0.0 \\ 2.7} &
\scell{733 \\ 2} &
\scell{772 \\ 79} &
\scell{826 \\ 191} &
\scell{36.8 \\ 100} &
\scell{31.9 \\ 66.6} &
\scell{0 \\ 77} &
\scell{0 \\ 68} &
\scell{0 \\ -9} &
\scell{S \\ BD/LMS} &
\scell{- \\ B} \\ \\
ERF001 & 0.01 & 4 & \scell{0.0 \\ 2.7 \\ 5.4 \\ 8.2 \\ 9.8} &
\scell{733 \\ 2 \\ 3 \\ 2 \\ 2} &
\scell{772 \\ 87 \\ 32 \\ 8 \\ 3} &
\scell{811 \\ 127 \\ 42 \\ 32 \\ 5} &
\scell{37.2 \\ 111 \\ 60.5 \\ 31.0 \\ 51.0} &
\scell{33.0 \\ 35.0 \\ 8.54 \\ 20.7 \\ 2.01} &
\scell{0 \\ 77 \\ 93 \\ 166 \\ 119} &
\scell{0 \\76 \\ 178 \\ 97 \\ 104} &
\scell{0 \\ -1 \\ 85 \\ -69 \\ -15} &
\scell{S \\ LMS \\ BD \\ P \\ P} &
\scell{- \\ B \\ B \\ E \\ E} \\ \\
ERF01 & 0.1 & 3 & \scell{0.0 \\ 2.7 \\ 6.3 \\ 7.9} &
\scell{733 \\ 2 \\ 3 \\ 2} &
\scell{771 \\ 91 \\ 13 \\ 9} &
\scell{805 \\ 124 \\ 63 \\ 25} &
\scell{37.3 \\ 117 \\ 26.9 \\ 31.4} &
\scell{25.2 \\ 24.3 \\ 36.4 \\ 11.9} &
\scell{0 \\ 77 \\ 85 \\ 137} &
\scell{0 \\ 64 \\ 123 \\ 129} &
\scell{0 \\ -13 \\ 38 \\ -8} &
\scell{S \\ LMS \\ P/BD \\ P} &
\scell{- \\ B \\ E \\ E} \\ \\
ERF03 & 0.3 & 4 & \scell{0.0 \\ 2.7 \\ 5.5 \\ 6.0 \\ 6.0} &
\scell{733 \\ 2 \\ 2 \\ 2 \\ 3} &
\scell{771 \\ 105 \\ 66 \\ 17 \\ 16} &
\scell{782 \\ 116 \\ 73 \\ 24 \\ 16} &
\scell{44.3 \\ 135 \\ 138 \\ 34.9 \\ 72.0} &
\scell{33.6 \\ 31.2 \\ 18.5 \\ 29.5 \\ 0.0} &
\scell{0 \\ 77 \\ 106 \\ 87 \\ 96} &
\scell{0 \\ 40 \\ 14 \\ 142 \\ 300} &
\scell{0 \\ -37 \\ -92 \\ 55 \\ 204} &
\scell{S \\ LMS \\ BD \\ BD \\ BD} &
\scell{- \\ B \\ B \\ E \\ E} \\ \\
\hline
\end{tabular}}
\label{tab:formation}
\end{table*}
\subsection{No radiative feedback (NRF)}
\label{sub:no_radiative_feedback}
Figure \ref{fig:na1} shows the evolution of the surface density and midplane temperature of the disc in the case where no radiative feedback is provided by the secondary objects that form in the disc. Spiral arms develop and the disc fragments to form 7 secondary objects (see Table~\ref{tab:formation}). Fragmentation occurs outside $65$~AU, where the disc is gravitationally unstable and cools fast enough \citep[e.g.][]{Rice:2003e,Rice:2005a,Stamatellos:2007c,Stamatellos:2011d}. After 10 kyr, the first of these objects has accreted a sufficient amount of gas to become a low-mass hydrogen-burning star ($M=97$~M$_{\rm J}$). Two brown dwarfs form (with masses 48 and 66~M$_{\rm J}$) and orbit close to the central star (at 25 and 15 AU, respectively). Three of the objects formed remain in the planetary-mass regime; these form at a late stage and at large orbital radii, and thus have less time to accrete gas from the disc. One of these planets undergoes a net outward radial migration of 300~AU between its formation at 5.9 kyr and the end of the hydrodynamical simulation (10~kyr). These objects are bound to the central star at the end of the hydrodynamical simulation. However, a few of them are loosely bound at large radii ($R>150$~AU for 3 of them), and therefore destined to be ejected from the system. Indeed, at the end of the NBODY calculation (at 200~kyr), all but two of these objects have been ejected from the system, becoming free-floating planets and brown dwarfs \citep[see also][]{Stamatellos:2009a,Li:2015b, Li:2016a, Vorobyov:2016a}.
\begin{figure*}
\begin{centering}
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{NRF_CD_LOW.pdf}} \\
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{NRF_T_LOW.pdf}}
\caption{Disc evolution without any radiative feedback from secondary objects (NRF run). The top snapshots show the disc surface density and the bottom snapshots show the disc midplane temperature (at times as marked on each graph). 7 objects form by gravitational fragmentation due to the disc cooling fast enough in its outer regions. Most of the objects formed are brown dwarfs and planets. Planets are ultimately ejected from the system.}
\label{fig:na1}
\end{centering}
\end{figure*}
\begin{figure*}
\begin{centering}
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{CRF_CD_LOW.pdf}} \\
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{CRF_T_LOW.pdf}}
\caption{Disc evolution with continuous radiative feedback from secondary objects (CRF run). The disc fragments but only one object forms that ends up as a low-mass star. Radiative feedback from this object suppresses further fragmentation. The object forms on a wide orbit (68~AU) and migrates inwards only by 9 AU within 7.3 kyr.}
\label{fig:ca1}
\end{centering}
\end{figure*}
\begin{figure*}
\begin{centering}
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{ERF001_CD_LOW.pdf}} \\
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{ERF001_T_LOW.pdf}}
\caption{Disc evolution with episodic radiative feedback from secondary objects and a viscosity parameter $\alpha_{\textsc{mri}} = 0.01$ (ERF001 run). The disc fragments and 4 objects form as the disc is cool enough to be gravitationally unstable between accretion episodes.}
\label{fig:ea1_001}
\end{centering}
\end{figure*}
\begin{figure*}
\begin{centering}
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{ERF01_CD_LOW.pdf}} \\
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{ERF01_T_LOW.pdf}}
\caption{Disc evolution with episodic radiative feedback from secondary objects and a viscosity parameter $\alpha_{\textsc{mri}} = 0.1$ (ERF01 run). The disc fragments and 3 objects form. Two of these objects are planets, as in the ERF001 run.}
\label{fig:ea1_01}
\end{centering}
\end{figure*}
\begin{figure*}
\begin{centering}
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{ERF03_CD_LOW.pdf}} \\
\subfloat{\includegraphics[width = 0.75\textwidth, trim = 0cm 6cm 0cm 0cm, clip=true]{ERF03_T_LOW.pdf}}
\caption{Disc evolution with episodic radiative feedback from secondary objects and a viscosity parameter $\alpha_{\textsc{mri}} = 0.3$ (ERF03 run). The disc fragments and 4 objects form. One object migrates inwards significantly such that it accretes a large amount of gas while in a close orbit to the central star. The two lowest mass objects are ultimately ejected from the system.}
\label{fig:ea1_03}
\end{centering}
\end{figure*}
\subsection{Continuous radiative feedback (CRF)}
\label{sub:continuous_radiative_feedback}
Figure \ref{fig:ca1} shows the evolution of the disc surface density and disc midplane temperature for the simulation with continuous radiative feedback from secondary objects that form in the disc. The disc fragments but only one secondary object forms. Continuous radiative feedback from this object heats and stabilises the disc; therefore, no further fragmentation occurs. The object carves out a gap within the disc and migrates inwards by 9 AU by the end of the hydrodynamical simulation (i.e. within 7.3 kyr of its formation). At this point it has accreted enough gas to become a high-mass brown dwarf and is close to exceeding the hydrogen-burning mass limit. As there is still plenty of gas within the disc, the ultimate fate of this system is a binary comprising a solar-type primary and a low-mass secondary star.
\subsection{Episodic radiative feedback (ERF)}
\label{sub:episodic_radiative_feedback}
Figures \ref{fig:ea1_001}, \ref{fig:ea1_01} and \ref{fig:ea1_03} show the surface density and disc midplane temperature evolution for the simulations with episodic radiative feedback from secondary objects forming in the disc, for three different cases: $\alpha_{\textsc{mri}} = 0.01, 0.1, 0.3$, respectively. The disc fragments as in the previous cases; the radiative feedback from secondary objects is now episodic due to episodic accretion. During the accretion/outburst episodes, the mass that has accumulated in the inner accretion disc of a secondary object flows onto the object, resulting in an increase of its accretion luminosity that affects a large portion of the disc around the central star. This is evident from the sudden increase in the temperature (e.g. in Figures \ref{fig:ea1_001} and \ref{fig:ea1_01}). The increase of the disc temperature is three- to four-fold (see Figure~\ref{fig:QTCD_Comparison}b), which is enough to stabilise the disc during the outburst. However, in all three cases, once the outburst stops the disc becomes unstable and fragments.
The number of secondary objects formed is similar in all three cases ($3-4$ objects): fewer objects form than in the case without radiative feedback, and more than in the case with continuous radiative feedback \citep{Stamatellos:2011a,Stamatellos:2012a,Lomax:2014a,Lomax:2015a}.
The frequency and duration (see Table~\ref{tab:episodicDuration}) of the accretion/feedback outbursts are important for the gravitational stability of the disc. The total duration of episodic outbursts drops from $\sim18\%$ to $\sim0.8\%$ of the simulated disc lifetime (10~kyr) as the viscosity parameter $\alpha_{\textsc{mri}}$ is increased from $0.01$ to $0.3$. A larger $\alpha_{\textsc{mri}}$ results in stronger but shorter outbursts. The number of secondary objects forming in the disc does not strongly depend on $\alpha_{\textsc{mri}}$, which indicates that, in order to suppress disc fragmentation, the total duration of the episodic outbursts must be longer.
We find that the average mass of secondary objects at the end of the hydrodynamical simulation (10~kyr) increases with $\alpha_{\textsc{mri}}$; the average masses are 33, 38 \& 51~$\textup{M}_{\textsc{j}}$ for $\alpha_{\textsc{mri}} =$ 0.01, 0.1, and 0.3, respectively. In all cases, the two lowest-mass objects are ultimately ejected from the system. For $\alpha_{\textsc{mri}} = 0.01$ these are two planets; for $\alpha_{\textsc{mri}} = 0.1$, a planet and a brown dwarf; and for $\alpha_{\textsc{mri}} = 0.3$, two brown dwarfs. The estimated maximum mass that each of these objects will eventually attain (see next section) is above the deuterium-burning limit, except for one object in the $\alpha_{\textsc{mri}} = 0.01$ run.
We further find that the subsequent formation of secondary objects happens on a shorter timescale for greater values of $\alpha_{\textsc{mri}}$. Radiative feedback episodes are shorter for a higher $\alpha_{\textsc{mri}}$: the disc cools quickly after an episode ends, becoming gravitationally unstable and fragmenting again within a shorter time.
\begin{center}
\begin{table}
\centering
\caption{The duration of episodic accretion events from each secondary object in the simulations which consider episodic radiative feedback.}
\begin{tabular}{l c c c c}
\hline
\hline
Run ID & $\alpha_{\textsc{mri}}$ & Sink \# & Episodes & Duration (yr) \\
\hline
\hline
ERF001 & 0.01 & 2 & 3 & 1170 \\
& & 3 & 2 & 487 \\
& & 4 & 1 & 90 \\
& & 5 & 0 & 0 \\
& & All & 6 & 1747 \\ \\
ERF01 & 0.1 & 2 & 3 & 108 \\
& & 3 & 1 & 10 \\
& & 4 & 1 & 9 \\
& & All & 5 & 128 \\ \\
ERF03 & 0.3 & 2 & 3 & 46 \\
& & 3 & 2 & 22 \\
& & 4 & 1 & 4 \\
& & 5 & 2 & 9 \\
& & All & 6 & 81 \\
\hline
\end{tabular}
\label{tab:episodicDuration}
\end{table}
\end{center}
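The outburst duty cycles quoted in the text follow directly from the ``All'' totals in Table~\ref{tab:episodicDuration}:

```python
# Fraction of the 10 kyr hydrodynamical simulation spent in outburst,
# using the total outburst durations from the episodic-duration table.
T_SIM = 10_000.0   # simulated disc lifetime in yr

total_outburst = {"ERF001": 1747.0, "ERF01": 128.0, "ERF03": 81.0}
duty_cycle = {run: t / T_SIM for run, t in total_outburst.items()}
# ERF001 -> ~0.17 (quoted as ~18%); ERF03 -> ~0.008 (quoted as ~0.8%)
```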
\subsection{Comparison of simulations with different radiative feedback from secondary objects}
\label{sub:comparison_of_simulations}
Table \ref{tab:formation} lists information pertaining to the secondary objects that form in the disc simulations. We list the number of objects formed in each simulation, their initial and final masses (i.e. at the end of the hydrodynamical simulation, $t=10$~kyr), an estimate of the maximum mass they can ultimately attain (considering that they will still be evolving in a gaseous disc), the gas accretion rate onto them, their formation and final radii, their type, and their estimated boundedness at the end of the NBODY simulation (200~kyr).
The maximum mass, $M^i_{\textsc{max}}$, that an object $i$ can attain is calculated as follows. We assume that each object will continue to accrete at its accretion rate $\left(\dot{M}^i_{f}\right)$ at the end of the hydrodynamic simulation (which is likely an overestimate as generally the accretion rate decreases, unless there is some kind of violent interaction within the disc). Therefore, the maximum mass that an object $i$ can attain is:
\begin{equation}
\label{eqn:theoreticalMaximumMass}
M^i_\textsc{max}=M^i_f+\dot{M}^i_{f}\ t_{acc}\,,
\end{equation}
where $M^i_{f}$ is the mass of an object $i$ at $t=10$~kyr, and $t_{acc}$ is the time for which it will continue to accrete gas. We also assume that only a fraction $\xi=0.9$ of the gas in the disc is eventually accreted, either onto the central star or onto the secondary objects; therefore
\begin{equation}
\xi M_{\textup{disc}}= \sum_{sec}{M^i_{f}} + \sum_{all}{\dot{M}_f^{i}t_{acc}}
\label{eqn:finalMassEquate}
\end{equation}
where the first sum on the right-hand side is over the secondary objects and the second sum is over all objects, so as to include the gas accreting onto the central star. We assume that the accretion time $t_{acc}$ is the same for all objects within each simulation; it is therefore given by
\begin{equation}
t_{acc} = \frac{\xi M_{\textup{disc}} - \sum_{sec}M_f^{i}}{\sum_{all}\dot{M}_f^{i}}.
\label{eqn:accretionTime}
\end{equation}
The estimated maximum mass of an object $i$ can then be calculated using Equation~\ref{eqn:theoreticalMaximumMass}.
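This estimate can be sketched as below; the masses and accretion rates used are illustrative placeholders, not values from the simulations:

```python
# Sketch of the maximum-mass estimate: a single accretion time t_acc is set
# by the available disc gas, then each object's mass is extrapolated.
def accretion_time(xi, M_disc, M_f_sec, Mdot_all):
    """t_acc = (xi * M_disc - sum of secondary final masses) /
               (sum of accretion rates onto ALL objects, star included)."""
    return (xi * M_disc - sum(M_f_sec)) / sum(Mdot_all)

def max_masses(M_f_all, Mdot_all, t_acc):
    """M_max^i = M_f^i + Mdot_f^i * t_acc for every object."""
    return [M + Mdot * t_acc for M, Mdot in zip(M_f_all, Mdot_all)]

# Illustrative system: central star plus two secondaries (masses in M_Jup,
# accretion rates in M_Jup per kyr; units only need to be consistent).
xi, M_disc = 0.9, 300.0
M_f_all    = [700.0, 80.0, 10.0]      # star first, then the secondaries
Mdot_all   = [3.0, 6.0, 1.0]
t_acc      = accretion_time(xi, M_disc, M_f_all[1:], Mdot_all)
M_max      = max_masses(M_f_all, Mdot_all, t_acc)
```

By construction, the extrapolated accretion plus the secondaries' current masses exhaust exactly the fraction $\xi$ of the disc gas.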
\begin{figure}
\begin{center}
\includegraphics[width = 0.5\textwidth, trim = 0cm 0cm 0cm 0cm, clip=true]{QTCD_Comparison.pdf}
\caption{(a) Azimuthally averaged Toomre parameter $Q$, (b) disc midplane temperature, and (c) disc surface density for all simulations at $t=4.4$~kyr. The coloured dashed lines correspond to times when outburst episodes are happening: $t=5.6$~kyr and $t=5.2$~kyr for the simulations ERF001 and ERF01, respectively. The disc inner region is gravitationally stable due to the high temperature, whereas the disc is unstable outside $\sim 70$~AU. The temperature peaks between 50 and 100~AU correspond to regions close to secondary objects.}
\label{fig:QTCD_Comparison}
\end{center}
\end{figure}
The number of secondary objects that form in the disc is strongly affected by the type of their radiative feedback. We find that without radiative feedback, 7 objects form within the disc (Figure \ref{fig:na1}), compared to only one object when continuous radiative feedback is considered (Figure \ref{fig:ca1}). Continuous radiative feedback from the secondary object raises the disc temperature enough (see Figure~\ref{fig:QTCD_Comparison}b) to stabilise the disc (Figure~\ref{fig:QTCD_Comparison}a) and suppress further fragmentation. The number of objects when episodic radiative feedback is considered is in between the two previous cases ($3-4$ objects). This behaviour has been seen in previous simulations \citep{Stamatellos:2011a,Stamatellos:2012a,Lomax:2014a,Lomax:2015a}.
Radiative feedback from secondary objects affects the entire disc, as these secondary objects are high accretors (at the initial stages of their formation). For a short time they may even outshine the central star (see Figure~\ref{fig:luminosity_accretion}). The assumed pseudo-background temperature profile provided by each secondary object (see Equation~\ref{eqn:Tbgr}) influences the temperature at a given location in the disc and may affect disc fragmentation \citep{Stamatellos:2011d}, but probably not significantly. If we adopt a pseudo-background temperature profile with $q=3/4$ instead of $q=1/2$, then the disc temperature at a distance of 50 AU from a radiative object will be a factor of $\sim5$ smaller, and the Toomre parameter $Q$ (see Figure~\ref{fig:QTCD_Comparison}a) a factor of $\sim2$ smaller, bringing it (for the CRF and ERF runs) close to the critical value for fragmentation \citep[$Q\approx1$; see e.g.][]{Durisen:2007a}. However, this is the maximum expected effect. Even in the case of $q=3/4$ (which is an upper limit for $q$) the disc temperature is expected to be higher than the minimum ``background'' value due to energy dissipation within the disc, as it is gravitationally unstable.
\begin{figure}
\begin{center}
\includegraphics[width = \columnwidth, trim = 0cm 0cm 0cm 0cm, clip=true]{Luminosity_Accretion.pdf}
\caption{(a) Mass accretion rates onto, and (b) accretion luminosities of, the first secondary object that forms in each of the simulations where radiative feedback is considered. Time is given with respect to the formation time of each object. At the initial stages of their formation, secondary objects are high accretors and may even outshine the central star. In the case where radiative feedback is episodic this happens only for a short time and would therefore be difficult to observe. In the case of continuous radiative feedback (which is probably not realistic) secondary objects may outshine the central star for longer.}
\label{fig:luminosity_accretion}
\end{center}
\end{figure}
With regard to the episodic radiative feedback runs, the number of secondary objects does not vary much with the MRI viscosity parameter $\alpha_{\textsc{mri}}$: 4 objects form when $\alpha_{\textsc{mri}} = 0.01$, 3 objects form when $\alpha_{\textsc{mri}} = 0.1$, and 4 objects form when $\alpha_{\textsc{mri}} = 0.3$. More secondary objects result in more radiative feedback episodes and a hotter disc for longer periods of time, thus providing sustained stability against gravitational fragmentation. The duration of the episodic outbursts affects the stability of the disc: for a smaller $\alpha_{\textsc{mri}}$, episodes are longer and provide longer periods of stability, whereas the opposite is true for a larger $\alpha_{\textsc{mri}}$ (see Table \ref{tab:episodicDuration}). However, it is evident that episodic feedback from only 1 or 2 secondary objects cannot suppress further disc fragmentation, in contrast with the continuous feedback case, where the presence of just one secondary object suppresses fragmentation.
Observations of episodic outbursts from secondary objects do not require high sensitivity; during these outbursts their luminosity increases from a few $\textup{L}_{\sun}$ to tens of $\textup{L}_{\sun}$ (see Figure~\ref{fig:luminosity_accretion}). In the case of $\alpha_{\textsc{mri}} = 0.01$, where the outburst events are mild and long, 18\% of the initial 10~kyr of the disc's lifetime corresponds to the outburst phase. On the other hand, when $\alpha_{\textsc{mri}} = 0.3$, where the events are short and intense, this percentage drops to just 0.8\% (see Table~\ref{tab:episodicDuration}). However, episodic accretion events are expected to be relatively frequent only during the initial stages of disc evolution, i.e. within a few kyr after the disc's formation, while the newly formed secondary objects are vigorously accreting gas from the disc. Therefore, such outbursts from secondary objects at the initial stages of disc evolution should not significantly influence the observed number of outbursting sources. \cite{Scholz:2013a} observed a sample of $\sim4000$ YSOs over a period of 5 years and found $1-4$ possible outbursting sources, indicating that outbursts happen at intervals of $(5-50)$~kyr; this is roughly consistent with our models after the initial $\sim 4$~kyr of the disc's evolution (see Figure~\ref{fig:luminosity_accretion}).
Figure \ref{fig:QTCD_Comparison} shows a comparison of the azimuthally averaged Toomre parameter, temperature and surface density for a representative snapshot from each simulation exhibiting strong spiral features ($t=4.4$~kyr). Within the inner $\sim 25$~AU, the disc is stable due to heating from the central star. The peaks in surface density and temperature around $\sim 50$~AU correspond to regions around secondary objects. The discs are unstable or close to being unstable outside $\sim 80$~AU in all cases apart from the CRF run and the ERF runs (during episodic outbursts).
\begin{figure}
\begin{centering}
\includegraphics[width = 0.5\textwidth, trim = 0cm 0cm 0cm 0cm, clip=true]{Mass_Radius.pdf}
\caption{Mass-radius plots of the secondary objects formed by disc fragmentation in all 5 simulations. (a) Mass and radius at formation. A zoomed inset panel is shown for clarity. (b) Mass and radius at the end of the hydrodynamical simulation (10 kyr). (c) Mass and semi-major axis at the end of the NBODY simulation (200 kyr). The upper mass limits correspond to the maximum mass that the object may attain (see text for details), whereas the lower mass limits correspond to the mass of the object at the end of the hydrodynamical simulation. The horizontal bars in this panel represent the periastron and apoastron of the secondary object's orbit around the central star. The dashed line represents the hydrogen burning limit, and the grey band the deuterium burning limit \citep[$\sim 11-16~{\rm M}_{\rm J}$;][]{Spiegel:2011a}.}
\label{fig:massRadius}
\end{centering}
\end{figure}
In all simulations disc fragmentation occurs beyond a radius of $\sim 65$ AU (see Figure~\ref{fig:massRadius}a), where the disc is gravitationally unstable and can cool fast enough \citep[e.g.][]{Stamatellos:2009a}. The initial mass of a fragment is a few $\textup{M}_{\textsc{j}}$, as set by the opacity limit for fragmentation \citep{Low:1976a,Rees:1976a}. The masses of the secondary objects at the end of the hydrodynamical simulations are shown in Figure~\ref{fig:final_mass}. The first object that forms in each simulation generally migrates inwards and accretes enough mass to become a low-mass star; this object ultimately remains bound to the central star. All secondary objects increase in mass as they accrete gas from the disc. However, roughly half of the objects formed in each simulation (excluding the continuous radiative feedback run) remain planets at the end of the hydrodynamical simulation, as shown in Figure \ref{fig:final_mass}.
\begin{figure}
\begin{center}
\includegraphics[width = 0.5\textwidth, trim = 0cm 0cm 0cm 0cm, clip=false]{Final_Mass.pdf}
\caption{The masses of the secondary objects at the end of the hydrodynamic simulations (10~kyr). Colour denotes the order in which each secondary object formed; from earliest to latest: red, orange, yellow, green, cyan, blue, violet. Circles and squares correspond to objects that are ultimately bound or ejected (at 200~kyr), respectively. The lower points in the NRF and ERF03 simulations are offset for clarity. The dashed line represents the hydrogen burning limit ($\sim 80~{\rm M}_{\rm J}$). The grey band represents the deuterium burning limit.}
\label{fig:final_mass}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 0.48\textwidth, trim = 0cm 0cm 0cm 0cm, clip=true]{Sink_Mass.pdf}
\caption{The mass evolution of the first 3 secondary objects that form in each of the 5 simulations (for the simulations with episodic radiative feedback the mass refers to the sink mass, i.e. both the object and the inner accretion disc). Time is given with respect to the formation time of each object. The second object in the ERF03 run (b) undergoes a rapid increase in mass as it migrates into a dense region around the central star.}
\label{fig:sink_mass}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 0.48\textwidth, trim = 0cm 0cm 0cm 0cm, clip=true]{Sink_Accretion.pdf}
\caption{Mass accretion rates onto the first 3 secondary objects that form in each of the 5 simulations (for the simulations with episodic radiative feedback the accretion rates onto the sinks are plotted). Time is given with respect to the formation time of each object. Upward-pointing triangles represent the beginning of accretion episodes in the ERF runs. These are followed by corresponding downward-pointing triangles denoting the end of the episodes.}
\label{fig:sink_accretion}
\end{center}
\end{figure}
In the continuous radiative feedback simulation (CRF) the mass growth of the secondary object is mildly suppressed (Figures~\ref{fig:sink_mass}a,\ref{fig:sink_accretion}a) due to an increased outward thermal pressure, so that the final mass of the object is within the brown dwarf regime. Secondary objects that form at later times tend to have lower masses (Figure~\ref{fig:final_mass}).
Episodic feedback also mildly suppresses the mass growth of the first secondary object that forms (Figure~\ref{fig:sink_mass}a). Its effect is more pronounced for the second secondary object (Figure~\ref{fig:sink_mass}b). However, the mass growth of each object also depends on where the object forms in the disc and how it interacts both with other objects and with the spiral structure of the disc. Therefore, the mass growth of an object can be rather erratic, e.g. for the second object at around 2~kyr (Figure~\ref{fig:sink_mass}b). Specifically, this object migrates into the high-density region surrounding the central star, where it rapidly accretes a large amount of gas (see Figure~\ref{fig:ea1_03}, 7.2-8 kyr). The effect of episodic accretion is to suppress mass accretion during/after the outburst (e.g. compare the NRF and ERF runs in Figure~\ref{fig:sink_accretion}a after the first outburst; also seen in Figures~\ref{fig:sink_accretion}b, c). However, the accretion rate is restored to its previous value within $200-400$~yr. Ultimately, there is no strong anti-correlation between the mass that an object attains and the number and duration of the episodic outbursts it undergoes.
We find a population of planetary-mass objects on wide orbits ($100-800$~AU) around the central star. However, these objects are loosely bound to the central star and could be liberated into the field, becoming free-floating planets. We follow the evolution of these systems using NBODY simulations. Indeed, we find that all planetary-mass objects are ejected from their discs (Figure~\ref{fig:final_mass}); what is left behind is a central star with low-mass star or brown dwarf companions. Consequently, it is unlikely that the observed wide-orbit giant planets \citep{Kraus:2008a, Kraus:2014a, Marois:2008a, Faherty:2009a, Ireland:2011a,Kuzuhara:2011a,Kuzuhara:2013a, Aller:2013a,Bailey:2014a,Rameau:2013b,Naud:2014a,Galicher:2014a,Macintosh:2015a} formed by disc fragmentation, unless the mass growth of secondary objects forming in the disc is somehow suppressed. On the other hand, disc fragmentation may readily produce free-floating planets and brown dwarfs \citep{Stamatellos:2009a,Hao:2013a, Li:2015b,Vorobyov:2016a}.
Note, however, that in order to follow the long-term evolution of the system we have ignored the effect of the gas once the hydrodynamical simulation has evolved for 10~kyr. The effect of the gas is to stabilise the system; it is therefore possible that some of these planets may remain bound to the central star. However, they would co-exist with a higher-mass object (e.g. a low-mass star or a brown dwarf) and may accrete enough mass to become brown dwarfs.
\subsection{Caveats of sink particles}
\label{sub:caveats_of_sink_particles}
Sink particles are used in the simulations to avoid prohibitively long run times. In dense regions, timesteps become very short and without sinks the simulation effectively stalls.
In our simulations a sink particle is created when the density exceeds $10^{-9} \textup{ g cm}^{-3}$. It is therefore assumed that if a proto-fragment reaches this density it will continue to contract and heat up to $\sim 2000$~K, at which point molecular hydrogen dissociates and the second collapse is initiated. The proto-fragment will ultimately reach stellar densities ($\sim 1\textup{ g cm}^{-3}$) to become a bound object. The density threshold used for sink creation is higher than the density required for the formation of the first hydrostatic core ($\sim 10^{-13}\textup{ g cm}^{-3}$). Therefore, the proto-fragment at this stage contracts on a Kelvin-Helmholtz timescale. The time that it takes a proto-fragment to evolve from the first to the second hydrostatic core is $\sim 1-10$~kyr \citep{Stamatellos:2009d}. Thus, it is possible that some of the proto-fragments may get disrupted, e.g. by interactions with spiral arms and/or tidal forces, and dissolve \citep{Stamatellos:2009d, Zhu:2012a,Tsukamoto:2013a}.
Another limitation in the use of sink particles relates to their size. We assume that the sink radius of secondary objects that form in the disc is 1~AU, which roughly corresponds to the size of the first hydrostatic core during the collapse of a proto-fragment \citep{Masunaga:2000a, Tomida:2013a, Vaytet:2013a}. The size of the Hill radius of proto-fragments that form in the disc is on the order of a few AU. Therefore, a significant fraction of the accretion disc around a proto-fragment is not resolved. The flow of material from the sink radius to the secondary object is considered to be instantaneous, whereas, in reality, there is a time delay. This results in increased accretion onto secondary objects, which, in the case of continuous feedback, results in an increased luminosity. As such, we may overestimate the effect of luminosity on disc fragmentation. However, for the episodic accretion runs we employ a sub-grid model (within a sink radius) based on an $\alpha$-viscosity prescription that allows gas to flow (episodically) onto the secondary object (see Section \ref{subsub:radiative_feedback_from_secondary_objects}). Even in this case, the accretion rate is possibly overestimated, as the inner accretion disc within the sink ($<1$~AU) does not exchange angular momentum with the rest of the accretion disc \citep[for an additional discussion of this issue see][]{Hubber:2013b}. Nevertheless, considering the uncertainties in $\alpha_{\textsc{mri}}$ (which in effect modulates the accretion of material onto the secondary objects and for which we examine a wide range of values, all of which lead to similar outcomes) we have confidence that the choice of sink size does not qualitatively affect the results of this paper regarding the effect of radiative feedback on disc fragmentation.
\section{Conclusions}
\label{sec:conclusions}
We have performed SPH simulations of gravitationally unstable protostellar discs in order to investigate the effect that radiative feedback from secondary objects formed by fragmentation has on disc evolution. We have considered three cases of radiative feedback from secondary objects: {\bf (i)} No radiative feedback: where no energy from secondary objects is fed back into the disc. {\bf(ii)} Continuous radiative feedback: where energy, produced by accretion of material onto the surface of the object is continuously fed back into the disc. {\bf (iii)} Episodic radiative feedback: where accretion of gas onto secondary objects is episodic, resulting in episodic radiative feedback. Our findings are summarised as follows:
\begin{itemize}
\item Radiative feedback from secondary objects that form through gravitational fragmentation stabilises the disc, reducing the likelihood of subsequent fragmentation. When there is no radiative feedback from secondary objects, 7 objects form, compared to a single object forming when radiative feedback is continuous. When radiative feedback happens in episodic outbursts, $3-4$ objects form. This is because the disc cools sufficiently to become gravitationally unstable between the outbursts. All objects in the three different radiative feedback cases that we examine here form at radii $> 65$~AU, with initial masses of a few $\textup{M}_{\textup{J}}$. \newline
\item The mass growth of secondary objects is mildly suppressed due to their radiative feedback. The mass of the first object that forms within the disc is generally larger when there is no radiative feedback; in the case when radiative feedback is continuous the mass of the first secondary object is the lowest. Episodic radiative feedback tends to reduce the mass accretion rate onto a secondary object during and after an episodic outburst. However, the accretion rate is restored to its previous value relatively quickly (within $\sim200 - 400$~yr). \newline
\item The intensity and the duration of an outburst (which in our models are determined by the effective viscosity due to the magnetorotational instability, $\alpha_{\textsc{mri}}$) do not affect the number of objects that form within the disc when episodic radiative feedback is considered. The total duration of the radiative feedback outbursts is not long enough to fully suppress disc fragmentation. However, we find that $\alpha_{\textsc{mri}}$ affects the average mass of the objects formed: lower $\alpha_{\textsc{mri}}$ results in lower-mass secondary objects. Moreover, subsequent fragmentation happens sooner for higher $\alpha_{\textsc{mri}}$, as the first outburst ends sooner. The first object that forms in each case undergoes a larger inward migration for increased values of $\alpha_{\textsc{mri}}$. \newline
\item Regardless of the type of radiative feedback, we find that the first object that forms within the disc ultimately remains bound to the central star. It accretes mass while it generally migrates inwards. Brown dwarfs also form in the simulations and a fraction of them remain bound to the central star. Gravitational fragmentation may therefore provide a route to the formation of intermediate-separation, low-mass-ratio binary systems. \newline
\item A significant fraction ($\sim40$\%, dropping to $\sim20$\% if the estimated final mass is considered) of the secondary objects formed by disc fragmentation are planets, regardless of the type of radiative feedback. However, every planet that forms within the disc is ultimately ejected from the system. We do not find any giant planets that remain on wide-orbits around the central star. Secondary objects that form and remain within the disc accrete enough mass to become brown dwarfs, even in the case where radiative feedback suppresses gas accretion. Thus, gravitational fragmentation may produce free-floating planets and brown dwarfs, but not wide-orbit gas giant planets, unless the mass growth of fragments forming in a young protostellar disc is further suppressed.
\end{itemize}
\section*{Acknowledgements}
\label{sec:acknowledgements}
We would like to thank the referee, A. Boley, for his thorough review of the paper. The simulations presented in this paper were performed on the UCLan High Performance Computing Facility {\sc wildcat}. Surface density plots were produced using the \textsc{splash} software package \citep{Price:2007b}. N-body simulations were performed with a code originally developed by D.~Hubber, whom we thank for the support provided. AM is supported by STFC grant ST/N504014/1. DS is partly supported by STFC grant ST/M000877/1.
\section{Introduction}
In this paper we provide asymptotics for the number of critical points of Gaussian random fields with isotropic increments (a.k.a.~locally isotropic Gaussian random fields) in the high dimensional limit. The definition of locally isotropic fields was first formulated by Kolmogorov about 80 years ago \cite{Ko41} for applications in the statistical theory of turbulence; see \cite{Ya57} for an account of the background and early history.
The model is defined as follows. Let $B_N \subset \mathbb R^N$ be a sequence of subsets and let $H_N : B_N \subset \mathbb R^N \to \mathbb R$ be given by
\begin{align}\label{hamil}
H_N(x) = X_N(x) +\frac\mu2 \|x\|^2,
\end{align}
where $\mu \in \mathbb{R}$, $\|x\|$ is the Euclidean norm of $x$, and $X_N$ is a Gaussian random field that satisfies
\begin{align*}
\mathbb{E}[(X_N(x)-X_N(y))^2]=N D\Big(\frac1N \|x-y\|^2\Big), \ \ x,y\in \mathbb{R}^N.
\end{align*}
Here the function $D:\mathbb R_+ \to \mathbb R_+$ is called the correlator (or structure) function and $\mathbb{R}_+=[0,\infty)$. It determines the law of $X_N$ up to an additive shift by a Gaussian random variable. Complete characterization of all correlators was given in the work of Yaglom \cite{Ya57} (see also the general form of a positive definite kernel due to Schoenberg \cite{Sch38}). In short, if $D$ is the correlator function for all $N\in \mathbb{N}$, then $X_N$ must belong to one of the following two classes (see also \cite{Kli12}*{Theorem A.1}):
\begin{enumerate}
\item \textbf{Isotropic fields.} There exists a function $B:\mathbb{R}_+\to \mathbb{R}$ such that
\begin{align*}
\mathbb{E}[X_N(x)X_N(y)]=N B\Big(\frac1N\|x-y\|^2\Big)
\end{align*}
where $B$ has the representation
\begin{align*}
B(r) = c_0+\int_{(0,\infty)} e^{-r t^2} \nu(\mathrm{d} t),
\end{align*}
and $c_0\in \mathbb{R}_+$ is a constant and $\nu$ is a finite measure on $(0,\infty)$. In this case,
\begin{align*}
D(r)=2(B(0)-B(r)).
\end{align*}
\item \textbf{Non-isotropic field with isotropic increments.} The correlator $D$ can be written as
\begin{align}\label{eq:drep}
D(r) = \int_{(0,\infty)} (1-e^{-rt^2})\nu(\mathrm{d} t) + Ar, \ \ r\in \mathbb{R}_+,
\end{align}
where $A\in \mathbb{R}_+$ is a constant and $\nu$ is a $\sigma$-finite measure with
\begin{align*}
\int_{(0,\infty)} \frac{t^2}{1+t^2}\nu(\mathrm{d} t) <\infty.
\end{align*}
\end{enumerate}
See \cite{Ya87}*{Section 25.3} for more details on locally isotropic fields. In the physics literature, fields in case 1 are known as short-range correlated (SRC) and fields in case 2 as long-range correlated (LRC).
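Under the pinning condition $X_N(0)=0$, a field with correlator $D$ has covariance $\mathbb{E}[X_N(x)X_N(y)]=\frac N2\big[D(\frac1N\|x\|^2)+D(\frac1N\|y\|^2)-D(\frac1N\|x-y\|^2)\big]$, which follows directly from expanding the square in the definition of $D$. The following sketch (the structure function $D(r)=1-e^{-r}$, the dimension $N=2$, and the sample points are illustrative choices, not taken from the paper) builds this covariance on a few points, samples the field via a Cholesky factorisation, and checks empirically that $\mathbb{E}[(X_N(x)-X_N(y))^2]=N D(\frac1N\|x-y\|^2)$.

```python
import math
import random

random.seed(0)

# Hypothetical structure function, used only for illustration.
def D(r):
    return 1.0 - math.exp(-r)

N = 2                                        # ambient dimension
pts = [(0.5, 0.0), (1.0, 0.5), (2.0, 1.0)]   # arbitrary sample locations

def sq(x):
    return sum(c * c for c in x)

def cov(x, y):
    # C(x,y) = (N/2) [ D(|x|^2/N) + D(|y|^2/N) - D(|x-y|^2/N) ] for a pinned field
    d = tuple(a - b for a, b in zip(x, y))
    return 0.5 * N * (D(sq(x) / N) + D(sq(y) / N) - D(sq(d) / N))

C = [[cov(x, y) for y in pts] for x in pts]

# Plain Cholesky factorisation C = L L^T (C is positive definite here).
n = len(pts)
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]

# Monte Carlo check of E[(X(x)-X(y))^2] = N D(|x-y|^2/N) for the first two points.
M = 200000
acc = 0.0
for _ in range(M):
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    X = [sum(L[i][k] * z[k] for k in range(n)) for i in range(n)]
    acc += (X[0] - X[1]) ** 2
d01 = sq(tuple(a - b for a, b in zip(pts[0], pts[1])))
print(acc / M, N * D(d01 / N))   # the two numbers agree to within a percent or so
```

The same identity also holds exactly at the level of the covariance matrix, $C(x,x)+C(y,y)-2C(x,y)=N D(\frac1N\|x-y\|^2)$, since $D(0)=0$.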
Here is a special example of $B(r)$ and $D(r)$, which we learned from Yan Fyodorov.
\begin{example}\rm
We assume $c_0=0$ and $A=0$. For fixed ${\varepsilon}>0$ and $\gamma>0$, let
\begin{align*}
\nu(\mathrm{d} x) = 2 e^{-{\varepsilon} x^2}x^{2\gamma-3} \mathrm{d} x.
\end{align*}
The case $\gamma>1$ corresponds to SRC while the case $0<\gamma\le1$ is LRC field. Indeed, if $\gamma>1$,
\begin{align*}
B(r)&=\int_0^\infty 2e^{-r t^2}e^{-{\varepsilon} t^2} t^{2\gamma-3} \mathrm{d} t = \frac{\Gamma(\gamma-1)}{(r+{\varepsilon})^{\gamma-1}},
\end{align*}
while if $0< \gamma<1$, using integration by parts,
\begin{align*}
D(r)=\int_0^\infty(e^{-{\varepsilon} y}- e^{-(r+{\varepsilon})y})y^{\gamma-2} \mathrm{d} y = \frac{\Gamma(\gamma)}{1-\gamma}[(r+{\varepsilon})^{1-\gamma}-{\varepsilon}^{1-\gamma}].
\end{align*}
The case $\gamma=1$ can be obtained by sending $\gamma\uparrow 1$ and using the dominated convergence theorem with the control function $f(y)=(e^{-{\varepsilon} y}- e^{-(r+{\varepsilon})y})y^{-1}$ for $y\le1$ and $=(e^{-{\varepsilon} y}-e^{-(r+{\varepsilon})y})y^{-1/2}$ for $y>1$. Then if $\gamma=1$, we have
\begin{align*}
D(r)=\log(1+r/{\varepsilon}).
\end{align*}
In the LRC case, we see that the field behaves like a high-dimensional analogue of fractional Brownian motion.
\end{example}
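The closed forms in the example can be verified numerically. The sketch below (the parameter values ${\varepsilon}=0.5$, $r=1$, and $\gamma=2$ are arbitrary test choices) compares the integral representations of $B(r)$ for $\gamma>1$ and of $D(r)$ for $\gamma=1$ with the stated Gamma-function and logarithmic formulas, using a plain composite Simpson rule.

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

eps, r = 0.5, 1.0                       # arbitrary test values

# SRC case, gamma = 2:  B(r) = int_0^inf 2 e^{-(r+eps) t^2} t^{2 gamma - 3} dt
g = 2.0
B_num = simpson(lambda t: 2 * math.exp(-(r + eps) * t * t) * t ** (2 * g - 3),
                1e-8, 30.0)
B_exact = math.gamma(g - 1) / (r + eps) ** (g - 1)

# LRC case, gamma = 1:  D(r) = int_0^inf (1 - e^{-r t^2}) 2 e^{-eps t^2} t^{-1} dt
D_num = simpson(lambda t: (1 - math.exp(-r * t * t)) * 2 * math.exp(-eps * t * t) / t,
                1e-8, 30.0)
D_exact = math.log(1 + r / eps)

print(B_num, B_exact)   # both ~ 2/3
print(D_num, D_exact)   # both ~ log 3 ~ 1.0986
```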
\begin{remark}
Observe that any Bernstein function vanishing at 0 is a structure function. This is a consequence of the L\'evy--Khintchine representation of Bernstein functions; see e.g.~the monograph \cite{SSV}, which also contains a comprehensive list of complete Bernstein functions. Conversely, any structure function is a Bernstein function. It follows that any correlator function $D$ must be concave, infinitely differentiable, and non-decreasing on $(0,+\infty)$. Moreover, we have $D'(r)\ge 0, D''(r)\le 0, D'''(r)\ge 0$ for $r>0$.
\end{remark}
\begin{remark}
One should not confuse SRC/LRC with short-range/long-range dependence. SRC here refers to the fact that $\mathbb{E}(X_N(x)X_N(y))$ decays as $\|x-y\|\to \infty$ while for LRC it may not. Short-range dependence requires the autocovariance function to have exponential decay.
\end{remark}
\subsection{Previous results}
The Hamiltonian \eqref{hamil} has been considered in many papers, from physics to mathematics, since the 1950s. In particular, the model was introduced by M\'ezard--Parisi \cite{MP91} and Engel \cite{En93} among others as a model for a classical particle confined to an impenetrable spherical box or a toy model describing elastic manifolds propagating in a random potential \cite{Fy04}. A nice historical account can be found in \cite{FS07}, which also contains the phase diagram ($T-\mu$ relation) for the model at positive temperature. At zero temperature, in the seminal paper \cite{Fy04}, Fyodorov considered the case of isotropic fields (SRC) and computed the mean total number of critical points, finding a phase transition for different values of $\mu$ and $D''(0)$. In a subsequent and impressive paper, \cite{FW07} computed the mean number of saddles and minima for SRC fields. This paper considered a more general model where $\mu\|u\|_2^2/2$ is replaced by $NU(\|u\|_2^2/N)$ for a suitable confining potential $U$.
Still in the case of isotropic fields, \cite{FN12} computed the mean number of minima and studied the phenomena of topology trivialization and the relation of this quantity with the Tracy--Widom distribution. More recently, \cite{CS18} considered the mean number of critical points of a fixed index and for finite $N$. The reader is also invited to take a look at \cites{BD07, YV18, Kli12}.
For a similar Hamiltonian defined on the $N$ dimensional sphere, known as the spherical $p$-spin model, the rigorous study of the complexity of saddles and minima started in \cite{ABC13} and now has a solid body of work including \cites{ABA13, subag2017complexity, Mihai}. For the physics predictions for this model, the reader should consult \cites{CL04, MPV} and the references therein.
All of the rigorous work above only considered isotropic Gaussian fields (SRC case) or spherical spin glasses. In the physics literature, the complexity of LRC Gaussian fields was studied in a sequence of two remarkable papers \cites{FS07, FB08}. However, the lack of symmetry in this model imposes a difficult obstacle and no rigorous results on the complexity are currently known.
The main purpose of this article and its companion paper is to close this gap by providing a comprehensive rigorous study of the complexity of LRC Gaussian fields. We extend and recover the predictions made by Fyodorov, Bouchaud and Sommers \cites{FS07, FB08}. In this first paper, we focus on the high dimensional limit of the expected number of critical points when the domain and the values of the field are constrained to given sets. In the companion paper \cite{AZ22}, we will provide information on local minima and saddles with given indices.
A word of comment is in order here. One of the main differences between LRC and SRC fields is that the variance of an LRC field may change from location to location and the gradient $\nabla H_{N}$ is no longer independent of $H_{N}$. The main novelty of our two papers is the development of techniques to overcome this difficulty. Another set of important techniques to deal with ``non-invariant'' fields was also recently developed in \cites{BBMsd, BBMsd2}; these do not seem to apply to the model we consider.
\subsection{Main results}
To state our results, let $B_N\subset \mathbb{R}^N$ and $E\subset \mathbb{R}$ be (a sequence of) Borel sets. We define
\begin{align*}
\mathrm{Crt}_N(E,B_N) &= \#\{x\in B_N: \nabla H_N(x)=0, \frac1N H_N(x)\in E \}.
\end{align*}
Throughout the paper we will consider the following extra assumptions on $X_N$.
\textbf{Assumption I} (Smoothness). The function $D$ is four times differentiable at $0$ and it satisfies
\begin{align}\label{eq:asmp}
0<|D^{(4)}(0)|<\infty.
\end{align}
\begin{remark}
By Kolmogorov's criterion, Assumption I ensures that almost surely the field $H_{N}$ is twice differentiable. Moreover, Assumption I guarantees $D'(0), D''(0)$ and $D'''(0)$ exist and are non-zero. This implies that for $ r>0$
\begin{align*}
D(r)>0, \ \ D'(r)>0,\ \ D''(r)<0,\ \ D'''(r)> 0,
\end{align*}
and in particular all these functions are strictly monotone. From here we also know that $D(r)\le D'(0) r$ and when $\nu$ in the representation \prettyref{eq:drep} is not a finite measure (or equivalently in case 2), $\lim_{r\to \infty} D(r)=\infty$.
\end{remark}
\textbf{Assumption II} (Pinning). We have $$X_N (0) = 0.$$
\begin{remark}
Random fields with isotropic increments are high dimensional generalizations of stochastic processes with stationary increments in dimension one. It is common practice to assume that such processes (like Brownian motion or Poisson processes) start from 0. Therefore, Assumption II is a natural choice for studying random fields with isotropic increments. Note that only the trivial isotropic field ($X_{N}=0$) satisfies Assumption II.
\end{remark}
We first consider the average of the {\it total} number of critical points of $H_N$. Then we count the average number of critical points of $H_N$ with a given fixed critical value. Although the first result can essentially be obtained from the second, the formula and proof for the first are clearer, so we state them separately. We hope this organization gives the reader a gentle introduction before the second result, where most of the novelty (and difficulty) of the paper resides.
The following condition is only needed when the critical value is not restricted.
\textbf{Assumption III} (Domain growth). Let $z_N$ be a standard $N$ dimensional Gaussian random vector. There exists $\Xi$ (if $\mu\neq0$) or $\Theta$ (if $\mu=0$) such that the sequence of sets $B_N$ satisfies
\begin{align}
\lim_{N\to \infty} \frac1N \log \mathbb{P}( z_N \in |\mu| B_N/\sqrt{D'(0)} ) &= -\Xi\le 0, &\mu\neq 0,\label{eq:bnasp1}\\
\lim_{N\to \infty} \frac1N \log |B_N|& = \Theta, &\mu=0.\label{eq:bnasp2}
\end{align}
\begin{remark}
Assumption III serves to select domains at the right scale and is not very restrictive. As seen in the proofs of our main theorems, the reader could consider other sequences of sets $B_{N}$ provided some knowledge of their volumes.
\end{remark}
\begin{theorem}
\label{th:ttcpx}
Under Assumptions I, II, and III, we have
\begin{align*}
\lim_{N\to\infty}&\frac1N\log\mathbb{E} \mathrm{Crt}_N(\mathbb{R},B_N)\\
&=
\begin{cases}
-\Xi, & |\mu|>\sqrt{-2D''(0)},\\
-\log\frac{|\mu|}{\sqrt{-2D''(0)}}+\frac{\mu^2}{-4D''(0)} -\frac12-\Xi, & 0<|\mu|\le \sqrt{-2D''(0)},\\
\log\sqrt{-2D''(0)}-\frac12-\frac12\log(2\pi)- \frac12\log[D'(0)] +\Theta, & \mu=0.
\end{cases}
\end{align*}
\end{theorem}
\begin{remark}
If we let $J=\sqrt{-2D''(0)}$ and $\Xi=0$ as in \cite{Fy04}, the second case can be rewritten as
\begin{align}
\Sigma_{\mu,D} = \frac12\Big(\frac{\mu^2}{J^2}-1\Big)-\log\frac{\mu}J \ge 0,
\end{align}
which matches Fyodorov's result for isotropic Gaussian random fields.
\end{remark}
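The piecewise limit of \prettyref{th:ttcpx} is straightforward to evaluate numerically. The sketch below (the normalisation $D''(0)=-1$ and $\Xi=0$ are hypothetical test values) checks that the middle branch coincides with Fyodorov's form $\Sigma_{\mu,D}$ and that the two branches for $\mu\neq0$ match continuously at $|\mu|=\sqrt{-2D''(0)}$.

```python
import math

def complexity(mu, D2, Xi=0.0):
    # (1/N) log E Crt_N(R, B_N) in the limit; mu != 0 branches of the theorem.
    # D2 stands for D''(0) < 0 and Xi is the constant from Assumption III.
    J = math.sqrt(-2 * D2)
    if abs(mu) > J:
        return -Xi
    return -math.log(abs(mu) / J) + mu * mu / (-4 * D2) - 0.5 - Xi

D2 = -1.0                       # hypothetical value of D''(0)
J = math.sqrt(-2 * D2)
for mu in (0.3, 0.8, 1.2):
    # Fyodorov's form: Sigma = (mu^2/J^2 - 1)/2 - log(mu/J), for 0 < mu <= J
    sigma = 0.5 * (mu * mu / (J * J) - 1) - math.log(mu / J)
    print(mu, complexity(mu, D2), sigma)   # the last two columns coincide

# continuity at |mu| = J: both branches give -Xi = 0
print(complexity(J, D2), complexity(J + 1e-9, D2))
```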
Next, we state our main result on the number of critical points with values in an open set $E\subset \mathbb R$ and confined to a shell $B_{N}(R_{1},R_{2})=\{ x\in \mathbb R^{N}: R_{1}< \frac{\| x\|}{\sqrt N} < R_{2} \}$. This is a natural choice, as the isotropy assumption implies rotational invariance. To emphasize the dependence on $R_1$ and $R_2$, we also write
\begin{align*}
\mathrm{Crt}_{N}(E, (R_1,R_2)) & =\mathrm{Crt}_{N}(E, B_{N}(R_1,R_2)).
\end{align*}
We will assume the following technical assumption:
\textbf{Assumption IV}. \prettyref{eq:asmp1} and \prettyref{eq:asmp2} hold for $x\in\mathbb{R}^N\setminus\{0\}$.
This assumption is rather mild and is satisfied, e.g., by the so-called Thorin--Bernstein functions; see \prettyref{se:3} for more details.
\begin{theorem}\label{th:cpsublevel}
Let $0\le R_1<R_2\le \infty$ and let $E$ be an open subset of $\mathbb{R}$. Suppose that Assumptions I, II, and IV hold and that $|\mu|+\frac1{R_2} > 0$. Then
\[
\lim_{N\to\infty} \frac1N \log\mathbb{E} \mathrm{Crt}_{N}(E, B_{N}(R_1,R_2)) = \frac12 \log[-4D''(0)] -\frac12\log D'(0) +\frac12+\sup_{(\rho,u,y)\in F}\psi_*(\rho,u,y)
\]
where $F=\{(\rho,u,y):y\in\mathbb{R}, \rho \in (R_1, R_2), u\in E\}$, and the function $\psi_*$ is given explicitly in \eqref{eq:psifunction}.
\end{theorem}
The condition $|\mu|+\frac1{R_2} > 0$ merely says $R_2<\infty$ if $\mu=0$, which is necessary to get non-trivial asymptotics as we saw in \prettyref{th:ttcpx}. In \prettyref{ex:2} at the end of Section \ref{se:4}, we provide details on how to recover Theorem \ref{th:ttcpx} from Theorem \ref{th:cpsublevel} when $B_N$ is a shell, which also provides some insight on the location of the majority of critical points.
Let us end this section with a brief description of the proofs, highlighting the main difference from previous results that also computed the mean number of critical points. As in many results in this area, we use the Kac--Rice formula as a starting point. Since our fields do not have constant variance and in particular $H_{N}$ is correlated with $\nabla H_{N}$, we are unable to trace a direct parallel to random matrix theory as done in \cites{ABC13, ABA13, subag2017complexity}, where the Hessian is distributed as a matrix from the Gaussian Orthogonal Ensemble (GOE) plus a scalar matrix. This seemingly small difference presents major obstacles. To get around it, we first derive the conditional distribution of the Hessian after some matrix manipulations. The GOE matrix appears as a summand of a principal submatrix, which is itself correlated with the remaining diagonal element. We then estimate from above and below the conditional expectation of the Hessian given $H_{N}$. Matching upper and lower bounds come only after long and non-trivial calculations and careful asymptotic analysis.
The rest of the paper is organized as follows. In Section \ref{se:whole}, we fix our notation and provide some preliminary facts before giving the proof of \prettyref{th:ttcpx}. We find the (conditional) distribution of the Hessian with some tools from random matrix theory in \prettyref{se:3} and establish various results on exponential tightness in \prettyref{se:exptt}, both of which serve as the starting point for computing complexity functions in this paper and the companion paper \cite{AZ22}. We prove Theorem \ref{th:cpsublevel} in \prettyref{se:4}.
\subsection{Acknowledgments}
We would like to thank Yan Fyodorov for suggesting the study of fields with isotropic increments and providing several references.
\section{Preliminary facts and proof of \prettyref{th:ttcpx}}\label{se:whole}
Throughout, we regard vectors as column vectors. We write e.g.~$C_{\mu,D}$ for a constant depending on $\mu$ and $D$ which may vary from line to line. For $N\in \mathbb{N}$, let us denote $[N]=\{1,2,...,N\}$. For a vector $(y_1,...,y_N)\in \mathbb{R}^N$, we write $L(y_1^N)=\frac1N \sum_{i=1}^N \delta_{y_i}$ for its empirical measure. Recall that an $N \times N$ matrix $M$ in the Gaussian Orthogonal Ensemble (GOE) is a symmetric matrix with centered Gaussian entries that satisfy
\begin{align}\label{eq:goemat}
\mathbb{E}(M_{ij})=0,\ \ \mathbb{E}(M_{ij}^2) =\frac{1+\delta_{ij}}{2N}.
\end{align}
We will simply write $\mathrm{GOE}_N$ or $\mathrm{GOE}(N)$ for the matrix $M$. Denoting by $\lambda_1 \leq \dots \leq \lambda_N$ the eigenvalues of $M$, we write $L_N =L(\lambda_1^N)= \frac{1}{N} \sum_{k=1}^N \delta_{\lambda_k}$ for its empirical spectral measure. From time to time, we may also use $\lambda_k$ to denote the $k$th smallest eigenvalue of $\mathrm{GOE}_{N+1}$ or $\mathrm{GOE}_{N-1}$. This should be clear from context and should not affect any results as we only care about the large $N$ behavior eventually. For a closed set $F\subset \mathbb{R}$, we denote by $\mathcal{P}(F)$ the set of probability measures with support contained in $F$. We equip the space $\mathcal{P}(\mathbb{R})$ with the weak topology, which is compatible with the distance
\begin{align}\label{eq:measd}
d(\mu,\nu):=\sup\Big\{\Big|\int f \mathrm{d} \mu-\int f \mathrm{d}\nu \Big|: \|f\|_\infty\le1, \|f\|_L\le 1\Big\},\quad \mu,\nu \in\mathcal{P}(\mathbb{R}),
\end{align}
where $\|f\|_\infty$ and $\|f\|_L$ denote the $L^\infty$ norm and Lipschitz constant of $f$, respectively. Let $B(\nu,\delta)$ denote the open ball in the space $\mathcal{P}(\mathbb{R})$ with center $\nu$ and radius $\delta$ w.r.t.~the distance $d$ given in \prettyref{eq:measd}. Similarly, we write $B_K(\nu,\delta)=B(\nu,\delta) \cap \mathcal{P}([-K,K])$ for some constant $K>0$. We denote by $\sigma_{{\rm sc}}$ the semicircle law scaled to have support $[-\sqrt2,\sqrt2]$.
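To illustrate the normalisation \prettyref{eq:goemat}, the sketch below samples GOE matrices with these entry variances and checks that $\frac1N\mathbb{E}\operatorname{Tr}M^2 = \frac{N+1}{2N}\to\frac12$, which is the second moment of the semicircle law on $[-\sqrt2,\sqrt2]$, consistent with the chosen scaling (the matrix size $N=40$ and the sample count are arbitrary).

```python
import math
import random

random.seed(1)

def goe(N):
    # symmetric matrix, E M_ij = 0, E M_ij^2 = (1 + delta_ij) / (2N)
    M = [[0.0] * N for _ in range(N)]
    for i in range(N):
        M[i][i] = random.gauss(0.0, math.sqrt(1.0 / N))
        for j in range(i):
            M[i][j] = M[j][i] = random.gauss(0.0, math.sqrt(0.5 / N))
    return M

N, S = 40, 400                  # matrix size and number of samples (arbitrary)
m2 = 0.0
for _ in range(S):
    M = goe(N)
    m2 += sum(M[i][j] ** 2 for i in range(N) for j in range(N)) / N
m2 /= S
# E[(1/N) Tr M^2] = (N+1)/(2N), approaching 1/2, the second moment of the
# semicircle law on [-sqrt(2), sqrt(2)].
print(m2, (N + 1) / (2 * N))
```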
We will frequently use the following facts which are consequences of large deviations. Using the large deviation principle (LDP) of empirical measures of GOE matrices \cite{BG97}, for any $\delta>0$, there exists $c=c(\delta)>0$ and $N_\delta>0$ such that for all $N>N_\delta$,
\begin{align}\label{eq:conineq}
\mathbb{P}(L(\lambda_1^N)\notin B(\sigma_{\rm sc}, \delta) )\le e^{-cN^2}.
\end{align}
On the other hand, the LDP of the smallest eigenvalue of GOE matrices \cite{BDG01} states that $\lambda_1$ satisfies an LDP with speed $N$ and a good rate function
\begin{align}
J_{1}(x)&=
\begin{cases}
\int_{x}^{-\sqrt{2}}\sqrt{z^2-2}\,\mathrm{d} z, & x\le -\sqrt 2,\\
\infty, & x>-\sqrt2,
\end{cases}\notag \\
&=\begin{cases}
\frac12\log2 -\frac12 x\sqrt{x^2-2}-\log(-x+\sqrt{x^2-2}),& x\le -\sqrt2,\\
\infty,& x>-\sqrt2.
\end{cases}\label{eq:rf1ev}
\end{align}
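The two expressions for $J_1$ in \prettyref{eq:rf1ev} can be compared numerically: the sketch below evaluates the integral form by a composite Simpson rule and checks it against the closed form at a few test points $x \le -\sqrt2$ (the points are arbitrary choices).

```python
import math

SQRT2 = math.sqrt(2)

def J1_closed(x):
    # closed form of the rate function, valid for x <= -sqrt(2)
    s = math.sqrt(x * x - 2)
    return 0.5 * math.log(2) - 0.5 * x * s - math.log(-x + s)

def simpson(f, a, b, n=20000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

results = []
for x in (-1.5, -2.0, -3.0):
    # max(..., 0) guards against tiny negative round-off at the endpoint
    integral = simpson(lambda z: math.sqrt(max(z * z - 2, 0.0)), x, -SQRT2)
    results.append((x, integral, J1_closed(x)))
    print(x, integral, J1_closed(x))   # integral and closed form agree
```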
In particular, writing $\lambda_N^*=\max_{i\in [N]} |\lambda_i|$ for the operator norm of an $N\times N$ GOE matrix, by \cite{BDG01}*{Lemma 6.3}, there exist $N_0>0$ and $K_0>0$ such that for $K>K_0$ and $N>N_0$,
\begin{align}\label{eq:ladein}
\mathbb{P}(\lambda_N^*>K)\le e^{-NK^2/9}.
\end{align}
This can also be seen directly from the LDP of $\lambda_1$, even though it was originally proved as a technical input for the LDP of $\lambda_1$. It follows that there exists an absolute constant $C>0$ such that
\begin{align}\label{eq:goenorm}
\mathbb{E}[(\lambda_N^*)^{k}] \le C^{k}
\end{align}
for any $k\ge0$ and $N>N_0$. For a probability measure $\nu$ on $\mathbb{R}$, let us define
\begin{align}\label{eq:psidef0}
\Psi(\nu,x)=\int_\mathbb{R} \log |x-t| \nu(\mathrm{d} t), \qquad \Psi_*(x)=\Psi(\sigma_{\rm sc}, x).
\end{align}
By calculation,
\begin{align}\label{eq:phi*}
\Psi_*(x) &= \frac12 x^2-\frac12 -\frac12\log2-\int_{\sqrt2}^{|x|} \sqrt{y^2-2}\mathrm{d} y \ensuremath{\boldsymbol 1}\{|x|\ge\sqrt2\} \notag \\
&=
\begin{cases}
\frac12 x^2-\frac12 -\frac12\log2,& |x|\le \sqrt2,\\
\frac12x^2 -\frac12-\log2 - \frac12 |x| \sqrt{x^2-2}+\log(|x|+\sqrt{x^2-2}) ,& |x|>\sqrt2.
\end{cases}
\end{align}
Note that $\Psi_*(x)-\frac{x^2}{2}\le -\frac12-\frac12\log2$.
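The closed form \prettyref{eq:phi*} can be checked directly against the defining integral \prettyref{eq:psidef0}: the sketch below integrates $\log|x-t|$ against the semicircle density on $[-\sqrt2,\sqrt2]$ for a few test points $x$ outside the support (where the integrand is smooth) and compares with $\Psi_*$.

```python
import math

SQRT2 = math.sqrt(2)

def sc_density(t):
    # density of the semicircle law scaled to support [-sqrt(2), sqrt(2)]
    return math.sqrt(max(2 - t * t, 0.0)) / math.pi

def Psi_star(x):
    # closed form of Psi_* from eq:phi*
    if abs(x) <= SQRT2:
        return 0.5 * x * x - 0.5 - 0.5 * math.log(2)
    s = math.sqrt(x * x - 2)
    return (0.5 * x * x - 0.5 - math.log(2)
            - 0.5 * abs(x) * s + math.log(abs(x) + s))

def simpson(f, a, b, n=40000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

results = []
for x in (1.6, 2.0, -3.0):   # points outside the support: smooth integrand
    num = simpson(lambda t: math.log(abs(x - t)) * sc_density(t), -SQRT2, SQRT2)
    results.append((x, num, Psi_star(x)))
    print(x, num, Psi_star(x))   # the two columns agree
```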
Let $z$ be a standard Gaussian r.v.~and $\Phi$ the c.d.f.~of $z$. For $a\in\mathbb{R}, b>0$, we have
\begin{align}\label{eq:absgau}
\sqrt{\frac2\pi}b\le \mathbb{E}|a+bz|=\frac{\sqrt2 b }{\sqrt{\pi}}e^{-\frac{a^2}{2b^2}}
+a(2\Phi(\frac{a}{b})-1)\le \sqrt{\frac2\pi}b+|a|.
\end{align}
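The identity and bounds \prettyref{eq:absgau} are easy to confirm by Monte Carlo; in the sketch below the values $a=0.7$, $b=1.3$ are arbitrary test choices.

```python
import math
import random

random.seed(2)

def Phi(x):
    # standard normal c.d.f.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_abs(a, b):
    # closed form of E|a + b z| from eq:absgau, z standard Gaussian, b > 0
    return (math.sqrt(2) * b / math.sqrt(math.pi) * math.exp(-a * a / (2 * b * b))
            + a * (2 * Phi(a / b) - 1))

a, b = 0.7, 1.3                  # arbitrary test values
M = 400000
mc = sum(abs(a + b * random.gauss(0.0, 1.0)) for _ in range(M)) / M
lo = math.sqrt(2 / math.pi) * b
hi = lo + abs(a)
print(mc, mean_abs(a, b))        # Monte Carlo vs closed form
print(lo <= mean_abs(a, b) <= hi)
```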
Unless specified otherwise, we always assume Assumptions I and II throughout.
Let us prove the result for the total number of critical points. The strategy we employ is well known and similar to the one developed in \cite{ABC13}: we start by applying the Kac--Rice formula and then derive the asymptotics in high dimensions using random matrix theory and large deviation principles. The proof is relatively straightforward since we do not face the main obstacle of the next sections, namely the dependence between $H_{N}$ and $\nabla H_{N}$.
\begin{proof}[Proof of Theorem \ref{th:ttcpx}]
Let $E$ be a Borel subset of $\mathbb{R}$. By the Kac--Rice formula \cite{AT07}*{Theorem 11.2.1},
\begin{align} \label{eq:startingpoint}
&\mathbb{E} \mathrm{Crt}_{N}(E,B_N)=\int_{B_N} \mathbb{E}[|\det \nabla^2 H_N(x)| \ensuremath{\boldsymbol 1}\{\frac1N H_N(x)\in E\}| \nabla H_N(x)=0] p_{\nabla H_N(x)}(0)\mathrm{d} x,
\end{align}
where $p_{\nabla H_N(x)}(t)$ is the p.d.f.~of ${\nabla H_N(x)}$ at $t$.
When $E=\mathbb{R}$, the restriction on the range of $H_N(x)$ disappears. By independence of $\nabla H_N$ and $\nabla^2 H_N$ (see Lemma \ref{le:cov}) and dropping the restriction on index, the above formula simplifies to
\begin{align}\label{eq:KRSimple}
\mathbb{E} \mathrm{Crt}_N(\mathbb{R},B_N)=\int_{B_N} \mathbb{E}[|\det \nabla^2 H_N(x)| ] p_{\nabla H_N(x)}(0)\mathrm{d} x.
\end{align}
The following lemma is a random matrix computation.
\begin{lemma} Let $M$ be an $N \times N$ GOE matrix and set
$$P=aM -\left(b+\frac{\sigma}{\sqrt N}Z\right)I,$$
where $Z$ is a standard Gaussian random variable independent of $M$, $I$ is the identity matrix, $a, b \in \mathbb R$, and $\sigma > 0$. Then
\begin{equation*}
\mathbb E |\det P| = \frac{\Gamma(\frac{N+1}{2})(N+1)a^{N+1}}{\sqrt{\pi}\sigma N^{\frac{N}{2}}e^{\frac{Nb^2}{2\sigma^2}}} \mathbb E \int \exp \left[ \frac{(N+1)x^2}{2} \left( 1- \frac{a^2}{\sigma^2} \right) + \frac{\sqrt{N(N+1)}axb}{\sigma^2} \right] L_{N+1}(\mathrm{d} x).
\end{equation*}
\end{lemma}
\begin{proof}
Use \cite{ABC13}*{Lemma 3.3} with $m=\frac{b}{a}$, $t=\frac{\sigma}{\sqrt{N}a}$ and sum over the eigenvalues.
\end{proof}
From Lemma \ref{le:cov}, $\nabla^2 H_N(x)$ and $\sqrt{-4D''(0)}M- (\sqrt{\frac{-2D''(0)}N}Z-\mu)I$ have the same distribution. Then with $$m=-\mu/\sqrt{-4D''(0)},$$ from the Lemma above with $a=\sqrt{-4D''(0)}, b = -\mu, \sigma = \sqrt{-2D''(0)}$ we obtain
\begin{align*}
\mathbb{E}|\det\nabla^2 H_N(x)|&= \frac{\sqrt{2}[-4D''(0)]^{N/2} \Gamma(\frac{N+1}2) (N+1) }{\sqrt{\pi}N^{N/2} e^{Nm^2 }} \mathbb E \int e^{-\frac12(N+1)w^2+2\sqrt{N(N+1)} m w} L_{N+1}(\mathrm{d} w).
\end{align*}
From \prettyref{le:repl}, we see that for the asymptotic analysis we can replace the above $\sqrt{N(N+1)}$ in the exponent by $N+1$, leaving us to compute asymptotics of
\begin{align*}
I_N= \mathbb E \int e^{(N+1) \phi(x)} L_{N+1}(\mathrm{d} x),
\end{align*}
where
$$\phi(x)=-\frac12x^2-\frac{\mu x}{\sqrt{-D''(0)} }.$$
This is obtained in the following Lemma.
\begin{lemma}\label{dasodaskpow}
If $|\mu|>\sqrt{-2D''(0)}$ then
\[
\lim_{N\to\infty} \frac1N\log I_N =\frac{\mu^2}{-4D''(0)} +\log\frac{|\mu|}{\sqrt{-2D''(0)}}+\frac12,
\]
while if $|\mu| \le \sqrt{-2 D''(0)}$ we have
\[
\lim_{N\to\infty}\frac1N \log I_N = \frac{\mu^2}{-2D''(0)}.
\]
\end{lemma}
Assuming the above Lemma, we note that
\begin{align*}
\int_{B_N}p_{\nabla H_N(x)}(0) \mathrm{d} x =
\begin{cases}
\frac1{|\mu|^N} \mathbb{P}( z_N \in |\mu| B_N/\sqrt{D'(0)} ),& \mu\neq 0,\\
\frac1{(2\pi)^{N/2}D'(0)^{N/2}} |B_N| , & \mu=0,
\end{cases}
\end{align*}
where $|B_N|$ is the Lebesgue measure of $B_N$ and $z_N$ is a standard $N$ dimensional Gaussian vector. It follows from \eqref{eq:KRSimple} that
\begin{align*}
\lim_{N\to \infty} \frac{1}{N} \log \mathbb{E} \mathrm{Crt}_N(\mathbb{R},B_N)&= \lim_{N\to\infty} \frac{1}{N} \bigg(\log C_N + \log I_N\bigg),
\end{align*}
where
\begin{align}\label{eq:stir1}
C_N = \begin{cases}
\frac{\sqrt{2}[-4D''(0)]^{N/2} \Gamma(\frac{N+1}2) (N+1) }{\sqrt{\pi}N^{N/2} e^{Nm^2 }|\mu|^N } \mathbb{P}( z_N \in |\mu| B_N/\sqrt{D'(0)} ), & \mu\neq 0,\\
\frac{\sqrt{2}[-4D''(0)]^{N/2} \Gamma(\frac{N+1}2) (N+1)|B_N| }{\sqrt{\pi}N^{N/2} (2\pi)^{N/2}D'(0)^{N/2}} , & \mu=0.
\end{cases}
\end{align}
From Assumption III and Stirling's formula,
\begin{align*}
\lim_{N\to\infty} \frac1N \log C_N = \begin{cases}
\log\frac{\sqrt{-2D''(0)}}{|\mu|} +\frac{\mu^2}{4D''(0)}- \frac12 -\Xi, & \mu\neq 0,\\
\log\sqrt{-2D''(0)}-\frac12-\frac12\log(2\pi)- \frac12\log[D'(0)]+\Theta, & \mu=0.
\end{cases}
\end{align*}
The above computation combined with Lemma \ref{dasodaskpow} finishes the proof of the Theorem.
\end{proof}
We finish this section with the proof of Lemma \ref{dasodaskpow}.
\begin{proof}[Proof of Lemma \ref{dasodaskpow}]
The proof follows from the large deviation principle for the smallest eigenvalue of the GOE. In short, in the second case the maximum of $\phi$ is attained in the bulk, while in the first case the smallest eigenvalue contributes to the asymptotics of $I_N$.
We argue the first case $|\mu|>\sqrt{-2D''(0)}$. By symmetry, we only consider $\mu>\sqrt{-2D''(0)}$. Since $\phi(x)$ is bounded from above, by the LDP for $\lambda_1$ as in \prettyref{eq:rf1ev} and Varadhan's Lemma,
\begin{align}\label{eq:vrd1}
\sup_{x\in \mathbb{R}} [\phi(x)-J_{1}(x)]&\le \liminf_{N\to\infty} \frac1{N+1}\log \mathbb{E}_{\mathrm{GOE}(N+1)} e^{(N+1)\phi(\lambda_{1})}\notag \\
&\le \limsup_{N\to\infty} \frac1{N+1}\log \mathbb{E}_{\mathrm{GOE}(N+1)} e^{(N+1)\phi(\lambda_{1})}\le \sup_{x\in \mathbb{R}} [\phi(x)-J_{1}(x)].
\end{align}
Note that $\arg\max_x [\phi(x)-J_{1}(x) ]= -\frac{\mu}{\sqrt{-4D''(0)}} -\frac{\sqrt{-D''(0)}}{\mu}<-\sqrt2$. It follows that
\begin{align*}
\liminf_{N\to \infty} \frac1N\log I_N\ge\liminf_{N\to \infty} \frac1N\log \frac1{N+1}\mathbb{E}_{\mathrm{GOE}(N+1)} e^{(N+1)\phi(\lambda_1)}\\
\ge \frac{\mu^2}{-4D''(0)}+\log\frac{\mu}{\sqrt{-2D''(0)}}+\frac12.
\end{align*}
On the other hand,
\begin{align*}
I_N\le \mathbb{E}_{\mathrm{GOE}(N+1)} e^{(N+1)\phi(\lambda_1)}\ensuremath{\boldsymbol 1}\{\lambda_1\ge -\frac{\mu}{\sqrt{-D''(0)}}\}+ e^{(N+1)\phi(-\frac{\mu}{\sqrt{-D''(0)}})} \mathbb{P}\Big(\lambda_1< -\frac{\mu}{\sqrt{-D''(0)}}\Big).
\end{align*}
For an upper bound for the first term on the right-hand side, we have by \prettyref{eq:vrd1},
\begin{align*}
\lim_{N \to\infty}& \frac1N \log \mathbb{E}_{\mathrm{GOE}(N+1)} e^{(N+1)\phi(\lambda_1)} \\
&= \phi\Big(-\frac{\mu}{\sqrt{-4D''(0)}} -\frac{\sqrt{-D''(0)}}{\mu}\Big)-J_1\Big(-\frac{\mu}{\sqrt{-4D''(0)}} -\frac{\sqrt{-D''(0)}}{\mu}\Big).
\end{align*}
And for the second term, we find by \prettyref{eq:rf1ev}
\begin{align*}
&\limsup_{N \to\infty} \frac1N \log \Big[e^{(N+1)\phi(-\frac{\mu}{\sqrt{-D''(0)}})} \mathbb{P}\Big(\lambda_1< -\frac{\mu}{\sqrt{-D''(0)}}\Big)\Big]\\
&\le \phi\Big(-\frac{\mu}{\sqrt{-D''(0)}}\Big) -J_1\Big(-\frac{\mu}{\sqrt{-D''(0)}}\Big)\\
&\le \phi\Big(-\frac{\mu}{\sqrt{-4D''(0)}} -\frac{\sqrt{-D''(0)}}{\mu}\Big)-J_1\Big(-\frac{\mu}{\sqrt{-4D''(0)}} -\frac{\sqrt{-D''(0)}}{\mu}\Big).
\end{align*}
It follows that
\[
\limsup_{N \to\infty} \frac1N \log I_N\le \frac{\mu^2}{-4D''(0)}+\log\frac{\mu}{\sqrt{-2D''(0)}}+\frac12.
\]
We have proved the claim.
For the second case $|\mu| \le \sqrt{-2 D''(0)}$, the maximum of $\phi(x)$ on $[-\sqrt2,\sqrt2]$ is achieved at $x=-\frac{\mu} {\sqrt{-D''(0)}}$. Then for ${\varepsilon}>0$ and $N$ large enough,
\[
\mathbb E \int_{-\frac{\mu}{\sqrt{-D''(0)}}}^{-\frac{\mu} {\sqrt{-D''(0)}}+{\varepsilon}} e^{(N+1) \phi(-\frac{\mu} {\sqrt{-D''(0)}}+{\varepsilon})} L_{N+1}(\mathrm{d} x) \le I_N\le e^{ (N+1) \phi\Big(-\frac{\mu} {\sqrt{-D''(0)}}\Big)}.
\]
Since $\lim_{N\to\infty} \mathbb E L_{N+1}\Big(-\frac{\mu}{\sqrt{-D''(0)}}, -\frac{\mu}{\sqrt{-D''(0)}}+{\varepsilon}\Big) >0$, it follows that
\begin{align*}
\frac{\mu^2}{-2D''(0)} -\frac{{\varepsilon}^2}{2}\le \liminf_{N\to\infty} \frac1N \log I_N \le \limsup_{N\to\infty}\frac1N\log I_N \le \frac{\mu^2}{-2D''(0)}.
\end{align*}
The claim follows by sending ${\varepsilon}\to 0+$.
\end{proof}
\section{Conditional law of $\nabla^2 H_N$ with constrained critical values}\label{se:3}
In this section, we provide the initial steps for computing complexity functions. Our main result is a relation, given in \prettyref{eq:ayg}, between the conditional Hessian $\nabla^{2} H_{N}$ and the GOE; it implies \eqref{eq:martin} in the Kac--Rice representation for structure functions $D$ that satisfy Assumptions I, II and IV.
Recall the Kac--Rice formula \prettyref{eq:startingpoint}.
Note that $(H_N(x),\partial_i H_N(x), \partial_{kl} H_N(x))_{1\le i\le N, 1\le k\le l\le N}$ is a Gaussian field. From \prettyref{le:cov}, we have $ \mathrm{Var}(H_N(x)) = ND(\frac1N \|x\|^2)$ and the means
\begin{align*}
\mathbb{E}(H_N(x))&=\frac{\mu}2\|x\|^2,\ \
\mathbb{E}(\nabla H_N(x))=\mu x, \ \ \mathbb{E}(\nabla^2 H_N(x)) = \mu I_N.
\end{align*}
Let $\Sigma_{01}= \mathrm{Cov}(H_N(x), \nabla H_N(x))= D'(\frac{\|x\|^2}N)x^\mathsf T$ and $\Sigma_{11}=\mathrm{Cov}(\nabla H_N(x))= D'(0) I_N$. By the conditional distribution of Gaussian vectors, we know $$Y:=\frac1N[H_N(x)-\Sigma_{01}\Sigma_{11}^{-1}\nabla H_N(x)]
= \frac{H_N(x)}N- \frac{D'(\frac{\|x\|^2}N)\sum_{i=1}^{N} x_i \partial_i H_N(x)}{N D'(0)}$$
is independent of $\nabla H_N(x)$. Since $\nabla H_N(x)$ is independent of $\nabla^2H_N(x)$, by conditioning, we may rewrite \prettyref{eq:startingpoint} as
\begin{align}\label{eq:kr1}
&\ \ \mathbb{E} \mathrm{Crt}_{N}(E,B_N)\notag\\
&=\int_{B_N} \mathbb{E}[|\det \nabla^2 H_N(x)| \ensuremath{\boldsymbol 1}{\{ Y+ \frac1N\Sigma_{01}\Sigma_{11}^{-1}\nabla H_N(x) \in E \}}| \nabla H_N(x)=0] p_{\nabla H_N(x)}(0)\mathrm{d} x \notag\\
&=\int_{B_N} \mathbb{E}[|\det \nabla^2 H_N(x)| \ensuremath{\boldsymbol 1}\{Y\in E\} ] p_{\nabla H_N(x)}(0)\mathrm{d} x \notag\\
&=\int_{B_N}\int_{E} \mathbb{E}(|\det \nabla^2 H_N(x)| |Y=u) \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} p_{\nabla H_N(x)}(0) \mathrm{d} u \mathrm{d} x,
\end{align}
where
\begin{align*}
m_Y &= \mathbb{E}(Y)=\frac{\mu\|x\|^2}{2N}-\frac{\mu D'(\frac{\|x\|^2}{N}) \|x\|^2}{D'(0) N}, \\
\sigma_Y^2 & =\mathrm{Var}(Y)=\frac1N\Big(D(\frac{\|x\|^2}{N})-\frac{D'(\frac{\|x\|^2}N)^2}{D'(0)} \frac{\|x\|^2}{N}\Big).
\end{align*}
To proceed, we need the conditional distribution of $\nabla^2 H_N(x)$ given $Y=u$. A crucial difficulty arises here, however: one can check that distinct off-diagonal entries of $\nabla^2 H_N(x)$ given $Y=u$ may be negatively correlated; for example,
\[
\mathrm{Cov}(\partial_{ij} H_N(x), \partial_{kl} H_N(x)|Y=u)= - \frac1N \frac{\alpha x_i x_j}{N} \frac{\alpha x_k x_l}{N},
\ i\neq j,\ k\neq l,\ \{i,j \}\neq\{k,l\},
\]
for some $\alpha$ defined below, which prevents us from using the GOE directly.
To overcome this difficulty, let us define
\begin{align}\label{eq:albt0}
\alpha=\alpha(\|x\|^2/N) &= \frac{2D''(\|x\|^2/N)}{ \sqrt{ D(\frac{\|x\|^2}N)-\frac{D'({\|x\|^2}/N)^2}{D'(0)}\frac{\|x\|^2}N}}, \notag\\
\beta=\beta(\|x\|^2/N) & =\frac{D'(\|x\|^2/N)-D'(0)}{\sqrt{ D(\frac{\|x\|^2}N)-\frac{D'({\|x\|^2}/N)^2}{D'(0)}\frac{\|x\|^2}N}}.
\end{align}
Note that $\alpha\le0$ and $\beta\le 0$. One should think of $\alpha$ and $\beta$ as $O(1)$ quantities.
Let us define $A=A_N=U(x)\nabla^2H_N(x) U(x)^\mathsf T$ where $U(x)$ is an $N\times N$ orthogonal matrix such that
\begin{align}\label{eq:umat}
U( \frac{\alpha x x^\mathsf T}N+\beta I_N )U^\mathsf T =
\begin{pmatrix}
\frac{\alpha\|x\|^2}N +\beta &0 &\cdots &0 \\
0& \beta & \cdots&0 \\
\vdots& \vdots& \ddots& \vdots \\
0& 0& \cdots& \beta
\end{pmatrix}.
\end{align}
In other words, we have for $U=(u_{ij})$,
\begin{align}
\sum_{k,l} u_{ik}( \frac{\alpha x_k x_l}N +\beta \delta_{kl}) u_{jl}& =\alpha\delta_{i1}\delta_{j1} \frac{\|x\|^2}N +\beta\delta_{ij}.
\end{align}
Indeed, such a $U(x)$ can be found by imposing the first row to be $\frac{x^\mathsf T}{\|x\|}$ for $x\neq 0$; if $x=0$, $U(x)$ can be an arbitrary orthogonal matrix. It follows that $\mathbb{E}(A)=\mu I_N$, and by \prettyref{le:cov},
\begin{align*}
\mathrm{Cov}(A_{ij}, A_{i'j'})&=\sum_{k,l,k',l'} u_{ik}u_{jl}u_{i'k'} u_{j'l'} \mathrm{Cov}(\partial_{kl} H_N(x),\partial_{k'l'} H_N(x))\\
&=\frac{-2D''(0)}{N} (\delta_{ij}\delta_{i'j'}+\delta_{ii'}\delta_{jj'} +\delta_{ij'}\delta_{i'j}),\\
\mathrm{Cov}(A_{ij}, \partial_l H_N(x)) &= \sum_{a,b} u_{ia} u_{jb}\mathrm{Cov}(\partial_{ab} H_N(x), \partial_l H_N(x))=0,\\
\mathrm{Cov}(A_{ij}, H_N(x))& =\sum_{a, b} u_{ia}u_{jb} (\frac{2D''(\|x\|^2/N)x_a x_b}N +[D'(\|x\|^2/N)-D'(0)]\delta_{ab})\\
&=\frac{2D''(\|x\|^2/N) \delta_{i1}\delta_{j1} \|x\|^2}N +[D'(\|x\|^2/N) -D'(0)] \delta_{ij}.
\end{align*}
Since $A$ and $\nabla^2 H_N(x)$ have the same eigenvalues, by \prettyref{eq:kr1},
\begin{align}\label{eq:kr2}
\mathbb{E} \mathrm{Crt}_{N}(E,B_N)=\int_{B_N}\int_{E} \mathbb{E}(|\det A| |Y=u) \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} p_{\nabla H_N(x)}(0) \mathrm{d} u \mathrm{d} x.
\end{align}
We need the conditional distribution of $A$ given $Y=u$. Note that
\[
\mathrm{Cov}(A_{ij}, Y)= \mathrm{Cov}(A_{ij},\frac{H_N}N) =\frac{2D''(\|x\|^2/N) \delta_{i1}\delta_{j1} \|x\|^2}{N^2 } +\frac{[D'(\|x\|^2/N) -D'(0)] \delta_{ij}}{N}.
\]
Then conditioning on $Y=u$ we have
\begin{align}
&\mathbb{E}(A_{ij}|Y=u) = \mathbb{E}(A_{ij})+\mathrm{Cov}(A_{ij},Y)\sigma_Y^{-2}(u-\mathbb{E}(Y)) \nonumber \\
& = \mu\delta_{ij} + \frac{(\frac{2D''(\frac{\|x\|^2}N) \delta_{i1}\delta_{j1} \|x\|^2}N +[D'(\frac{\|x\|^2}N) -D'(0)] \delta_{ij}) (u -\frac{\mu\|x\|^2}{2N} +\frac{\mu D'(\frac{\|x\|^2}N)\|x\|^2}{D'(0)N})}{ D(\frac{\|x\|^2}{N}) -\frac{D'(\frac{\|x\|^2}{N})^2 \|x\|^2}{D'(0) N}} , \nonumber \\
& m_{A|u}:=\mathbb{E}(A|Y=u) =\mu I_N + \frac{u-\frac{\mu\|x\|^2}{2N} +\frac{\mu D'(\frac{\|x\|^2}N)\|x\|^2}{D'(0)N}}{D(\frac{\|x\|^2}{N}) -\frac{D'(\frac{\|x\|^2}{N})^2 \|x\|^2}{D'(0) N}}\nonumber \\
& \times \begin{pmatrix}
\frac{2D''(\frac{\|x\|^2}N)\|x\|^2}{N} +D'(\frac{\|x\|^2}N) -D'(0) & 0 \label{mean:A} \\
0 & [D'(\frac{\|x\|^2}N) -D'(0) ]I_{N-1}
\end{pmatrix}, \\
& \mathrm{Cov}[(A_{ij}, A_{i'j'})^\mathsf T |Y=u]=\mathrm{Cov}[(A_{ij}, A_{i'j'})^\mathsf T]-\mathrm{Cov}[(A_{ij}, A_{i'j'})^\mathsf T ,Y]\sigma_Y^{-2}\mathrm{Cov}[Y,(A_{ij}, A_{i'j'})^\mathsf T] \nonumber \\
&= \mathrm{Cov}[(A_{ij}, A_{i'j'})^\mathsf T] - \frac1N\nonumber \\
& \times\begin{pmatrix}
(\frac{\alpha \delta_{i1}\delta_{j1} \|x\|^2}{N}+\beta\delta_{ij})^2 & (\frac{\alpha \delta_{i1}\delta_{j1} \|x\|^2}{N}+\beta\delta_{ij}) (\frac{\alpha \delta_{i'1}\delta_{j'1} \|x\|^2}{N}+\beta\delta_{i'j'} )\\
(\frac{\alpha \delta_{i1}\delta_{j1} \|x\|^2}{N}+\beta\delta_{ij}) (\frac{\alpha \delta_{i'1}\delta_{j'1} \|x\|^2}{N}+\beta\delta_{i'j'} ) & (\frac{\alpha \delta_{i'1}\delta_{j'1} \|x\|^2}{N}+\beta\delta_{i'j'})^2
\end{pmatrix},\nonumber
\end{align}
where $\mathrm{Cov}[(A_{ij}, A_{i'j'})^\mathsf T] $ denotes the $2\times 2$ covariance matrix of $A_{ij}$ and $A_{i'j'}$, while $\mathrm{Cov}[(A_{ij}, A_{i'j'})^\mathsf T ,Y] $ denotes the $2\times 1$ covariance matrix of $(A_{ij}, A_{i'j'})^\mathsf T$ and $Y$. From here we see that, conditioning on $Y=u$,
\begin{align*}
&\mathrm{Cov}[(A_{ij}, A_{i'j'})|Y=u] \\
&=\frac{-2D''(0)(\delta_{ij}\delta_{i'j'}+\delta_{ii'}\delta_{jj'} +\delta_{ij'}\delta_{i'j})}{N}-\frac1N(\frac{\alpha \delta_{i1}\delta_{j1} \|x\|^2}{N}+\beta\delta_{ij}) (\frac{\alpha \delta_{i'1}\delta_{j'1} \|x\|^2}{N}+\beta\delta_{i'j'} )\\
&=\begin{cases}
\frac{-6D''(0)}{N}-\frac1N (\frac{\alpha \|x\|^2}N+\beta)^2, & i=j=i'=j'=1, \\
\frac{-2D''(0)}{N} - \frac1N(\frac{\alpha \|x\|^2}N+\beta)\beta, & i=j=1\neq i'=j', \mbox{ or } i'=j'=1\neq i=j,\\
\frac{-6D''(0)}{N} -\frac{ \beta^2}N,& i=j=i'=j'\neq 1,\\
\frac{-2D''(0)}{N} -\frac{\beta^2}N , & 1\neq i=j\neq i'=j'\neq 1,\\
\frac{-2D''(0)}{N} ,& i=i'\neq j=j', \mbox{ or } i=j'\neq j=i', \\
0, &\text{otherwise}.
\end{cases}
\end{align*}
Alternatively, one can find the above conditional covariances using spherical coordinates, which would avoid the matrix-valued function $U(x)$. In order to draw a connection with the GOE, we first have to check that all the quantities above are positive. Note that $\alpha$ and $\beta$ depend on $\|x\|^2$ and $N$ through $\|x\|^2/N$. Let us write $\rho=\rho_{N}(x)=\frac{\|x\|}{\sqrt N}$ so that $\alpha=\alpha(\rho^2)$ and $\beta=\beta(\rho^2)$.
\begin{lemma}\label{le:albtd}
We have $\lim_{\rho \to0+} \Big[\frac{D(\rho^2)}{\rho^4}-\frac{D'(\rho^2)^2}{D'(0)\rho^2}\Big] =-\frac32D''(0)$ and
\begin{align*}
\lim_{\rho\to0+} \beta(\rho^2)^2&=-\frac23 D''(0),\quad
\lim_{\rho\to 0+} \alpha(\rho^2) \beta(\rho^2)\rho^2 = -\frac43 D''(0), \quad
\lim_{\rho\to 0+} [\alpha(\rho^2) \rho^2]^2 = -\frac83 D''(0).
\end{align*}
\end{lemma}
\begin{proof}
Using l'Hospital's rule together with $D(0)=0$,
\begin{align*}
\lim_{\rho\to 0+} \Big[\frac{D(\rho^2)}{\rho^4}-\frac{D'(\rho^2)^2}{D'(0)\rho^2}\Big] & =\lim_{\rho \to 0+} \Big[\frac{D'(\rho^2)\rho^2-D(\rho^2)}{\rho^4}-\frac{2D'(\rho^2)D''(\rho^2)}{D'(0)}\Big] = -\frac32D''(0).
\end{align*}
It follows that
\begin{align*}
\lim_{\rho \to 0+}& \beta(\rho^2)^2= \lim_{\rho\to 0+} \frac{ \big[\frac{D'(\rho^2)-D'(0)}{\rho^2}\big]^2}{\frac{D(\rho^2)}{\rho^4} -\frac{D'(\rho^2)^2}{D'(0)\rho^2}}= -\frac23 D''(0),\\
\lim_{\rho\to 0+}& \alpha(\rho^2)\beta(\rho^2) \rho^2 =\lim_{\rho\to0+}\frac{[2D''(\rho^2)] \frac{D'(\rho^2)-D'(0)}{\rho^2}} {\frac{D(\rho^2)}{\rho^4}-\frac{D'(\rho^2)^2}{D'(0)\rho^2}} =-\frac43 D''(0),\\
\lim_{\rho\to 0+}&[\alpha(\rho^2)\rho^2]^2 = \lim_{\rho\to0+}\frac{[2D''(\rho^2)]^2} {\frac{D(\rho^2)}{\rho^4}-\frac{D'(\rho^2)^2}{D'(0)\rho^2}}=-\frac83 D''(0).\qedhere
\end{align*}
\end{proof}
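For illustration, take the structure function $D(r)=\log(1+r)$ (the case ${\varepsilon}=1$ of the logarithmic example), for which $D'(0)=1$ and $D''(0)=-1$. Writing $y=\rho^2$ and expanding $D(y)=y-\frac{y^2}2+O(y^3)$ and $D'(y)=1-y+y^2+O(y^3)$, we find
\begin{align*}
\frac{D(y)}{y^2}-\frac{D'(y)^2}{D'(0)y}=\Big(\frac1y-\frac12+O(y)\Big) -\Big(\frac1y-2+O(y)\Big)=\frac32+O(y),
\end{align*}
in agreement with the first limit $-\frac32D''(0)=\frac32$.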
In light of \prettyref{le:albtd}, we make the following observation. Following \cite{SSV}*{Theorem 8.2}, a function $f: (0,\infty)\to(0,\infty)$ is a Thorin--Bernstein function if and only if $\lim_{x\to0+}f(x)$ exists and its derivative has a representation
\begin{align}\label{eq:tbfcn}
f'(x)=\frac{a}{x}+b +\int_{(0,\infty)} \frac1{x+t}\sigma (\mathrm{d} t),
\end{align}
where $a,b\ge0$ and $\sigma$ is a measure on $(0,\infty)$ satisfying $\int_{(0,\infty)} \frac1{1+t}\sigma(\mathrm{d} t)<\infty$. In particular, the functions $D(r)=\log(1+r/{\varepsilon})$ and $D(r)=(r+{\varepsilon})^\gamma-{\varepsilon}^{\gamma}$ are Thorin--Bernstein functions. Recall the definitions of $\alpha$ and $\boldsymbol{t}$ as in \prettyref{eq:albt0}. The proof of the following analytical result is deferred to Appendix \prettyref{se:aux}.
\begin{lemma}\label{le:dgeab}
For any $x\in \mathbb{R}^N\setminus\{0\}$, we have
\begin{align}
-2D''(0)&> \left(\frac{\alpha\|x\|^2}{N}+\beta\right)\beta,\label{eq:asmp1} \\
-4D''(0)&> \left(\frac{\alpha\|x\|^2}{N}+\beta\right)\frac{\alpha \|x\|^2}N,\label{eq:asmp2}
\end{align}
provided any one of the following conditions holds:
\begin{enumerate}
\item For all $x\neq 0$,
\begin{align}\label{eq:btbd}
\beta^2\le -\frac23 D''(0).
\end{align}
\item For all $y\ge0$,
\begin{align}\label{eq:btinc}
2D'(0)D''(y)[D(y)-D'(y)y]+D'(y)[D'(y)-D'(0)]^2 \ge 0.
\end{align}
\item For all $y\ge0$
\begin{align}\label{eq:btbd2}
\frac{D'(y) y}{D'(0)}-\frac{D'(y)-D'(0)}{D''(0)}\ge 0.
\end{align}
\item For all $y\ge 0$,
\begin{align}\label{eq:btbd3}
-\frac{D'(y)}{D''(y)} +\frac{D'(0)}{D''(0)}\ge y.
\end{align}
\item For all $y\ge 0$,
\begin{align}\label{eq:btbd4}
\frac{-D''(y)^2+D'''(y)D'(y)}{D''(y)^2}\ge 1.
\end{align}
\item $D$ is a Thorin--Bernstein function with $a=0$ in \prettyref{eq:tbfcn}.
\end{enumerate}
\end{lemma}
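We remark that the constant in condition (1) is optimal: by \prettyref{le:albtd},
\[
\lim_{\rho\to0+}\beta(\rho^2)^2=-\frac23D''(0),
\]
so the bound \prettyref{eq:btbd} is saturated in the small-$\rho$ limit.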
From now on, we always assume \prettyref{eq:asmp1} and \prettyref{eq:asmp2}, thus $\mathrm{Cov}[(A_{ij}, A_{i'j'})|Y=u]\ge0$ for all $i,i',j,j'$. Recalling \eqref{mean:A}, let us write
\begin{align}
m_1 & =m_1(\rho,u)= \mu + \frac{(u-\frac{\mu\rho^2}{2} +\frac{\mu D'(\rho^2)\rho^2}{D'(0)}) ( 2D''(\rho^2)\rho^2+D'(\rho^2) -D'(0) )}{D(\rho^2) -\frac{D'(\rho^2)^2 \rho^2}{D'(0) }}, \notag \\
m_2&=m_2(\rho,u)= \mu + \frac{(u-\frac{\mu \rho^2}{2} +\frac{\mu D'(\rho^2) \rho^2}{D'(0)}) (D'(\rho^2) -D'(0) )}{D( \rho^2) -\frac{D'(\rho^2)^2 \rho^2}{D'(0) }}, \notag\\
\sigma_1 & =\sigma_1(\rho)= \sqrt{\frac{-4D''(0)-(\alpha \rho^2 +\beta)\alpha\rho^2}{N}}, \ \ \
\sigma_2 =\sigma_2(\rho)= \sqrt{\frac{-2D''(0)-(\alpha\rho^2 +\beta)\beta}{N}}, \notag \\
m_Y&=m_Y(\rho)= \frac{\mu\rho^2}{2}-\frac{\mu D'(\rho^2) \rho^2}{D'(0) }, \ \ \ \sigma_Y =\sigma_Y(\rho) =\sqrt{\frac1N\Big(D(\rho^2)-\frac{D'(\rho^2)^2\rho^2}{D'(0)} \Big)},\notag \\
\alpha &=\alpha(\rho^2)= \frac{2D''(\rho^2)}{ \sqrt{ D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)}}}, \ \ \
\beta =\beta(\rho^2)=\frac{D'(\rho^2 )-D'(0)}{\sqrt{ D(\rho^2 )-\frac{D'(\rho^2)^2 \rho^2}{D'(0)}}}, \label{eq:msialbt}
\end{align}
where $\rho=\frac{\|x\|}{\sqrt N}$. From time to time, we also use the following change of variable
\begin{align}\label{eq:uvcov}
v=\frac{u-\frac{\mu\rho^2}{2}+\frac{\mu D'(\rho^2)\rho^2}{D'(0)}}{\sqrt{D(\rho^2)-\frac{D'(\rho^2)^2\rho^2}{D'(0)}}} = \frac{u-m_Y}{\sqrt{N}\sigma_Y}
\end{align}
so that
\begin{align}
m_1&=\mu +v(\alpha \rho^2+\beta), \ \
m_2=\mu+ v\beta.\label{eq:m12cov}
\end{align}
Let
\begin{align*}
G= G(u) =
\begin{pmatrix}
z_1'& \xi^\mathsf T \\
\xi & \sqrt{-4D''(0)} (\sqrt{\frac{N-1}{N}}\mathrm{GOE}_{N-1}-z_3'I_{N-1})
\end{pmatrix} ,
\end{align*}
where, with $z_1,z_2,z_3$ independent standard Gaussian random variables,
\begin{align*}
z_1'&=\sigma_1 z_1 - \sigma_2 z_2 + m_1, \quad
z_3'=\frac1{\sqrt{-4D''(0)}}\Big(\sigma_2 z_2+ \frac{ \sqrt{\alpha\beta}\rho }{\sqrt N} z_3 - m_2\Big),
\end{align*}
and $\xi$ is a centered Gaussian column vector with covariance matrix $\frac{-2D''(0)}{N}I_{N-1}$, independent of $z_1,z_2,z_3$ and the GOE matrix $\mathrm{GOE}_{N-1}$.
The above discussion yields our main result of this section.
\begin{proposition}
Assume Assumptions I, II and IV. Then we have in distribution
\begin{align}\label{eq:ayg}
(U\nabla^2 H_N U^\mathsf T | Y=u) & \stackrel{d}{=}
G.
\end{align}
\end{proposition}
In the following we will frequently write
$$G_{**}=\sqrt{-4D''(0)} \Big(\sqrt{\frac{N-1}{N}}\mathrm{GOE}_{N-1}-z_3'I_{N-1}\Big).$$
To connect with \prettyref{eq:kr2}, we have
\begin{equation}\label{eq:martin}
\mathbb{E}(|\det A||Y=u) = \int |\det a| p_{A|Y}(a|u) \mathrm{d} a = \mathbb{E}(|\det G|).
\end{equation}
\section{Exponential tightness}\label{se:exptt}
The purpose of this section is to prove several exponential tightness results so that our future analysis will be reduced to the compact setting. Let $E \subset \mathbb{R}$ be a Borel set. Hereafter, for simplicity, let us assume $B_N$ is a shell $B_{N}(R_{1},R_{2})=\{ x\in \mathbb R^{N}: R_{1}< \frac{\| x\|}{\sqrt N} < R_{2} \}$, $0\le R_1<R_2\le \infty$. Recall that in this case we write
$\mathrm{Crt}_{N}(E, (R_1, R_2))= \mathrm{Crt}_{N}(E,B_N(R_1,R_2)).$
Using spherical coordinates and writing $\rho=\frac{\|x\|}{\sqrt N}$, by the Kac--Rice formula we have
\begin{align}
&\mathbb{E}\mathrm{Crt}_{N}(E,(R_1,R_2))=\int_{B_N} \int_E \mathbb{E}[|\det A| |Y=u] \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} p_{\nabla H_N(x)}(0) \mathrm{d} u \mathrm{d} x \notag\\
&=S_{N-1} N^{(N-1)/2} \int_{R_1}^{R_2}\int_E \mathbb{E}[|\det G| ] \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u -m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho.\label{eq:krerr}
\end{align}
Here $S_{N-1} = \frac{2\pi^{N/2}}{\Gamma(N/2)}$ is the area of the $(N-1)$-dimensional unit sphere, and $G$ depends on $u$ implicitly.
Using Stirling's formula, we have
\begin{align}\label{eq:snlim}
\lim_{N\to\infty} \frac1N \log (S_{N-1}N^{\frac{N-1}{2}})= \frac12\log(2\pi)+\frac12.
\end{align}
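Indeed, \prettyref{eq:snlim} follows from $\log \Gamma(N/2)=\frac N2\log\frac N2 -\frac N2+O(\log N)$:
\begin{align*}
\frac1N \log (S_{N-1}N^{\frac{N-1}{2}}) = \frac12\log \pi -\frac1N\log\Gamma(N/2) +\frac{N-1}{2N}\log N +O(N^{-1}) \to \frac12\log(2\pi)+\frac12.
\end{align*}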
Recall the representation \prettyref{eq:ayg}. Let $\lambda_1\le \cdots \le \lambda_{N-1}$ be the eigenvalues of $\mathrm{GOE}_{N-1}$. The eigenvalues of $G_{**}$ can be represented as $\{\sqrt{-4D''(0)}((\frac{N-1}{N})^{1/2}\lambda_i-z_3')\}_{i=1}^{N-1}$. By the representation, we may find a random orthogonal matrix $V$ which is independent of the unordered eigenvalues $\tilde \lambda_j, j=1,...,N-1$, and $z_3'$, such that
\begin{align}\label{eq:goedc}
G_{**} = \sqrt{-4D''(0)} V^\mathsf{T} \begin{pmatrix}
(\frac{N-1}{N})^{1/2}\tilde \lambda_1-z_3' &\cdots &0 \\
\vdots& \ddots& \vdots \\
0& \cdots& (\frac{N-1}{N})^{1/2}\tilde\lambda_{N-1}-z_3'
\end{pmatrix} V.
\end{align}
By the rotational invariance of Gaussian measures, $V \xi$ is a centered Gaussian vector with covariance matrix $\frac{-2D''(0)}N I_{N-1}$ that is independent of $z_3'$ and $\tilde\langle_j$'s. We can rewrite $V \xi \stackrel{d}{=}\sqrt{\frac{-2D''(0)}{N}} Z$, where $Z=(Z_1,..., Z_{N-1})$ is an $N-1$ dimensional standard Gaussian random vector.
Using the determinant formula for block matrices or the Schur complement formula,
\begin{align}
\det G= \det (G_{**})(z_1' - \xi^\mathsf T G_{**}^{-1} \xi )& = [-4D''(0)]^{(N-1)/2} z_1'\prod_{j=1}^{N-1} ((\frac{N-1}{N})^{1/2}\lambda_j -z_3') \notag\\
&\ \ \ - \frac{[-4D''(0)]^{N/2}}{2N} \sum_{k=1}^{N-1} Z_k^2 \prod_{j\neq k}^{N-1} ((\frac{N-1}{N})^{1/2}\lambda_j -z_3').\label{eq:schur}
\end{align}
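To see where the second term in \prettyref{eq:schur} comes from, note that on the event that $G_{**}$ is invertible, by \prettyref{eq:goedc} and $V\xi \stackrel{d}{=}\sqrt{\frac{-2D''(0)}{N}} Z$,
\begin{align*}
\xi^\mathsf T G_{**}^{-1}\xi \stackrel{d}{=} \frac{-2D''(0)}{N\sqrt{-4D''(0)}} \sum_{k=1}^{N-1} \frac{Z_k^2}{(\frac{N-1}{N})^{1/2}\tilde\lambda_k-z_3'},
\end{align*}
and multiplying by $\det(G_{**})=[-4D''(0)]^{(N-1)/2}\prod_{j=1}^{N-1}((\frac{N-1}{N})^{1/2}\tilde\lambda_j-z_3')$ yields the stated prefactor $\frac{[-4D''(0)]^{N/2}}{2N}$.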
It follows from \prettyref{eq:krerr} that
\begin{align}
&\mathbb{E}\mathrm{Crt}_{N} (E,(R_1,R_2))
= S_{N-1}N^{(N-1)/2}\int_{R_1}^{R_2} \int_{E} \mathbb{E}\Big(\Big| [-4D''(0)]^{(N-1)/2} z_1' \prod_{j=1}^{N-1} ((\frac{N-1}{N})^{1/2}\lambda_j-z_3') \notag\\
&\ \ - \frac{[-4D''(0)]^{N/2}}{2N} \sum_{k=1}^{N-1} Z_k^2 \prod_{j\neq k}^{N-1} ((\frac{N-1}{N})^{1/2}\lambda_j -z_3')\Big|\Big) \frac{e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} }{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} }{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho \notag \\
&\le S_{N-1}N^{(N-1)/2} [I_1(E, (R_1,R_2)) +I_2(E, (R_1,R_2))], \label{eq:ecnr12}
\end{align}
where
\begin{align}
I_1(E, (R_1,R_2)) &= [-4D''(0)]^{\frac{N-1}{2}} \int_{R_1}^{R_2} \int_{E} \mathbb{E}\Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \Big]\notag\\
&\frac{ e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}}}{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho,\notag \\
I_2(E,(R_1,R_2)) &= \frac{[-4D''(0)]^{\frac{N}{2}} }{2N} \sum_{i=1}^{N-1}\int_{R_1}^{R_2} \int_{E} \mathbb{E}\Big[Z_i^2 \prod_{j\neq i} |(\frac{N-1}{N})^{1/2}\lambda_j-z_3'| \Big ] \notag \\
&\frac{ e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}}}{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho.\label{eq:ie12}
\end{align}
In the following we will employ hard analysis to derive various estimates that will reduce the problem to the compact setting.
\begin{lemma}\label{le:apest}
For any $\rho>0$, $u\in \mathbb{R}$, we have
\begin{align}
\frac{1}{D'(0)-D'(\rho^2)}&\le \frac{C_D (1+\rho^2)}{\rho^2}, \ \ \ \frac1{\sqrt{D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} }} \le \frac{C_D (1+\rho^2)}{\rho^2}, \notag\\
|m_i| &\le |\mu| + C_D \Big|\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)} \Big| (1+\rho^2), \ \ i=1,2. \label{eq:mtest}
\end{align}
\end{lemma}
\begin{proof}
Since $\lim_{\rho\to0+} \frac{D'(\rho^2)-D'(0)}{\rho^2}=D''(0)$ and $D'(\rho^2)$ is strictly decreasing to 0 as $\rho^2$ tends to $\infty$, we have the first assertion. By \prettyref{eq:asmp1}, we have
\[
\frac1{\sqrt{D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} }}\le \frac{\sqrt{-2D''(0)}}{D'(0)-D'(\rho^2)}\le \frac{C_D(1+\rho^2)}{\rho^2} .
\]
Using \prettyref{eq:asmp1} and \prettyref{eq:asmp2},
\begin{align*}
|m_1| & \le |\mu|+ \Big|\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)}\Big|\frac{ C_D \rho^2}{D'(0)-D'(\rho^2)} \le |\mu|+ C_D \Big|\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)} \Big| (1+\rho^2),\notag \\
|m_2|&\le |\mu|+ \Big|\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)}\Big| \frac{C_D \rho^2}{D'(0)-D'(\rho^2)}\le |\mu| + C_D \Big|\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)} \Big| (1+\rho^2).\qedhere
\end{align*}
\end{proof}
Recall $z_1'=\sigma_1 z_1 -\sigma_2 z_2 + m_{1}$, $z_3'=(\sigma_2 z_2 + \frac{\rho\sqrt{\alpha \beta}z_3}{\sqrt N}-m_{2}) /\sqrt{-4D''(0)}$. Note that the conditional distribution of $z_1'$ given $z_3'=y$ is given by
\begin{align}\label{eq:z13con0}
z_1'| z_3'=y \sim N \Big (\bar{\mathsf{a}}, \frac{\mathsf{b}^2}{N}\Big),
\end{align}
where
\begin{align*}
\bar{\mathsf{a}}&= m_1-\frac{\sigma_2^2(\sqrt{-4D''(0)}y + m_2)}{\sigma_2^2+\frac{\alpha \beta \rho^2}{N}}\\
&=\frac{-2D''(0)\alpha\rho^2( u-\frac{\mu\rho^2}{2}+\frac{\mu D'(\rho^2) \rho^2}{D'(0) } )}{(-2D''(0)-\beta^2) \sqrt{D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} }}
+\frac{\alpha\beta\rho^2 \mu }{-2D''(0)-\beta^2}\notag \\
&\quad -\frac{(-2D''(0)-\beta^2-\alpha\beta\rho^2)\sqrt{-4D''(0)} y}{-2D''(0)-\beta^2},\\
\frac{\mathsf{b}^2}{N}& = \sigma_1^2+\sigma_2^2-\frac{\sigma_2^4}{\sigma_2^2+\frac{\alpha\beta \rho^2}{N}} =\frac{-4D''(0)}{N} +\frac{2D''(0)\alpha^2\rho^4}{N(-2D''(0)-\beta^2)}.
\end{align*}
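The last simplification uses the identity (recall \prettyref{eq:msialbt})
\begin{align*}
N\sigma_2^2+\alpha\beta\rho^2=-2D''(0)-(\alpha\rho^2+\beta)\beta +\alpha\beta\rho^2=-2D''(0)-\beta^2,
\end{align*}
which gives $\frac{\sigma_2^4}{\sigma_2^2+\frac{\alpha\beta\rho^2}N}=\frac{(N\sigma_2^2)^2}{N(-2D''(0)-\beta^2)}$.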
\begin{lemma}\label{le:exptt}
Suppose $\mu\neq 0$.
Then
\begin{align*}
\limsup_{ T\to \infty} \limsup_{N\to \infty} \frac1N \log \mathbb{E} \mathrm{Crt}_{N} ([-T, T]^c, (0,\infty)) &= -\infty,\\
\limsup_{ R \to \infty} \limsup_{N\to \infty} \frac1N \log \mathbb{E} \mathrm{Crt}_{N} (\mathbb{R}, (R,\infty)) &= -\infty,\\
\limsup_{ {\varepsilon} \to 0+} \limsup_{N\to \infty} \frac1N \log \mathbb{E} \mathrm{Crt}_{N} (\mathbb{R}, (0,{\varepsilon})) &= -\infty.
\end{align*}
\end{lemma}
\begin{proof}
(1) Note that $\mathsf{b}^2\le -4D''(0)$ and that
\begin{align}\label{eq:z3mom}
\mathbb{E} [|z_3'|^{N-1}] \le C^{N-1}\Big[ |m_2|^{N-1}+\Big(\frac{-2D''(0)-\beta^2}{-4N D''(0)}\Big)^{\frac{N-1}{2}}\Big]\le C^{N-1}(1+|m_2|^{N-1}).
\end{align}
We write $m_u=|\mu|+ C_D |\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)} | (1+\rho^2)$. Using the conditional distribution \prettyref{eq:z13con0}, \prettyref{eq:absgau}, \prettyref{eq:goenorm}, \prettyref{le:apest} and the elementary fact $m_u\le\max\{ 1, m_u^N\}$,
\begin{align*}
& \mathbb{E} \Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \Big] \\
&=\int_\mathbb{R} \mathbb{E} \Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-y| \Big|z_3'=y\Big] \frac{\sqrt {-4N D''(0)} \exp\{-\frac{N(\sqrt{ -4D''(0)}y+m_2)^2}{2(-2D''(0)-\beta^2)}\}} {\sqrt{2\pi(-2D''(0)-\beta^2)} } \mathrm{d} y\\
&\le \int_\mathbb{R} \Big( \frac{\sqrt 2 \mathsf{b}}{\sqrt{\pi N}}+|\bar{\mathsf{a}}| \Big) \mathbb{E}(\lambda_{N-1}^*+|y|)^{N-1} \frac{\sqrt {-4N D''(0)} \exp\{-\frac{N(\sqrt{ -4D''(0)}y+m_2)^2}{2(-2D''(0)-\beta^2)}\}} {\sqrt{2\pi(-2D''(0)-\beta^2)} } \mathrm{d} y\\
&\le C^{N-1}\mathbb{E} [ (\mathsf{b}+ |m_1|+|m_2| +\sqrt{-4D''(0)}|z_3'|) ({\lambda_{N-1}^*}^{N-1}+|z_3'|^{N-1})]\\
&\le C_D^N(1+m_u^{N}),
\end{align*}
where $\lambda_{N-1}^*$ is the operator norm of $\mathrm{GOE}_{N-1}$. Similarly,
\begin{align*}
\mathbb{E}\Big[Z_i^2 \prod_{j\neq i, 1\le j\le N-1} |(\frac{N-1}{N})^{1/2}\lambda_j-z_3'| \Big ] \le \mathbb{E} (\lambda_{N-1}^*+|z_3'|)^{N-2}\le C^N(1+|m_2|^{N-2}).
\end{align*}
Since $D'$ is decreasing and $D(0)=0$, we have $D(r)\le D'(0) r$, and hence
$
D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)}\le D'(0)\rho^2.
$
Together with \prettyref{le:apest}, we obtain after a change of variable $u= \rho^2 s$,
\begin{align*}
& \mathbb{E} \mathrm{Crt}_{N} ([-T, T]^c, (0,\infty)) \\
&\le C_D^N S_{N-1}\int_{\mathbb{R}_+} \int_{[-T,T]^c} (1 +m_u^N) \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho \\
&\le C_{\mu,D}^N S_{N-1} \int_{\mathbb{R}_+} \int_{[-{T/\rho^2},{T/\rho^2}]^c} \Big[1 + (1+\rho^{2N})|s-\frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)} |^{N}\Big] \\
&\ \ \frac{\sqrt N}{\sqrt{2\pi} \sqrt{D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} }} \exp\Big(- \frac{N \rho^4 (s - \frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)})^2}{2(D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)})}\Big) \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N+1} \mathrm{d} s \mathrm{d}\rho \\
&\le \frac{C_{\mu,D}^N S_{N-1} \sqrt N}{ (2\pi)^{\frac{N+1}{2}}D'(0)^{N/2}} \Big(\int_{0}^\infty \int_{\sqrt{T/s}}^\infty + \int_{-\infty}^0 \int_{\sqrt{-T/s}}^\infty\Big ) [1+(1+\rho^{2N})(|s|+|\mu|)^{N} ] \\
& \ \ \ \ \ \frac{ (1+\rho^2)}{ \rho^2} \exp\Big(-\frac{N[(s - \frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)})^2+\mu^2] \rho^2}{2D'(0)}\Big) \rho^{N+1} \mathrm{d}\rho \mathrm{d} s.
\end{align*}
We need to find a good lower bound for $(s - \frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)})^2$. To save space, let
\[
f(s,\rho^2)=[1+(1+\rho^{2N})(|s|+|\mu|)^{N} ] (\rho^{N-1}+\rho^{N+1})\exp\Big(-\frac{N[(s - \frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)})^2+\mu^2] \rho^2}{2D'(0)}\Big).
\]
We will use the estimate $\int_{x}^\infty e^{-\frac{y^2}{2\sigma^2}} \mathrm{d} y\le \frac{\sigma^2}x e^{-\frac{x^2}{2\sigma^2}}$ repeatedly in the following.
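This tail bound follows by a one-line comparison: since $\frac yx\ge 1$ for $y\ge x>0$,
\begin{align*}
\int_{x}^\infty e^{-\frac{y^2}{2\sigma^2}} \mathrm{d} y\le \int_{x}^\infty \frac yx e^{-\frac{y^2}{2\sigma^2}} \mathrm{d} y = \frac{\sigma^2}{x} e^{-\frac{x^2}{2\sigma^2}}.
\end{align*}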
\emph{Case 1}: $s>0$. If $s>|\mu|$, since $|\frac12-\frac{D'(\rho^2)}{D'(0)}|\le\frac12 $, we have
\[
\Big(s - \frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)}\Big)^2\ge \Big(s- \Big|\frac12-\frac{D'(\rho^2)}{D'(0)}\Big| |\mu|\Big )^2\ge \frac{s^2}4.
\]
Then
\begin{align*}
&\int_{|\mu|}^{\infty} \int_{ \sqrt{T/s} }^{\infty} f(s,\rho^2) \mathrm{d} \rho \mathrm{d} s\\
&\le \int_{|\mu|}^\infty \int_{\sqrt{T/s}}^\infty [1+(s+|\mu|)^{N} +(s+|\mu|)^{N} \rho^{2N}] (\rho^{N-1}+\rho^{N+1}) e^{-\frac{N[\frac{s^2}4+\mu^2]\rho^2}{2D'(0)}} \mathrm{d} \rho \mathrm{d} s\\
&\le C_{\mu,D}\Big( \frac{8D'(0)}{5\mu^2}\Big)^{N+1} \int_{|\mu| }^\infty \int_{\sqrt{\frac{T}{2D'(0)}(\frac{s}4+\frac{\mu^2}{s})} }^\infty \Big(\frac{2D'(0)}{\frac{s^2}4+\mu^2} \Big)^{(N-1)/2} (1+(s+|\mu|)^{N})r^{3N+1} e^{-Nr^2} \mathrm{d} r \mathrm{d} s\\
&\le \frac{ C_{\mu,D}^N}{N \sqrt{T}} \int_{|\mu|}^\infty \frac{1+(s+|\mu|)^{N}} {(s^2+4\mu^2)^{(N-1)/2}} e^{-\frac{N T}{4D'(0)}(\frac{s}{4}+\frac{\mu^2}{s})} \mathrm{d} s\\
&\le \frac{ C_{\mu,D}^N}{N \sqrt{T}} \int_{|\mu|}^\infty \frac{1} {s^2 } e^{-\frac{N Ts }{32 D'(0)}} \mathrm{d} s \le \frac{C_{\mu,D}^N}{N \sqrt{T}} e^{-\frac{|\mu| N T }{32 D'(0)}}.
\end{align*}
Here we have used the fact that $\sqrt{\frac{T}{2D'(0)}(\frac{s}4+\frac{\mu^2}{s})}\ge |\mu|\sqrt{\frac{T}{2D'(0)}}$ so that we can always choose $T$ large to guarantee $r>1$ and $r^4\le e^{r^2/2}$.
If $s\le |\mu|$, using the trivial bound $(s - \frac{\mu}2+\frac{\mu D'(\rho^2)}{D'(0)})^2\ge0$, we have
\begin{align*}
&\int_{0}^{|\mu|} \int_{\sqrt{T/s}}^{\infty } f(s,\rho^2) \mathrm{d} \rho \mathrm{d} s\\
&\le \int_{0}^{|\mu|} \int_{\sqrt{T/|\mu| }}^\infty [1+(s+|\mu|)^{N} +(s+|\mu|)^{N} \rho^{2N}] (\rho^{N-1}+\rho^{N+1}) e^{-\frac{N \mu^2\rho^2}{2D'(0)}} \mathrm{d} \rho \mathrm{d} s\\
&\le C_D \int_{0}^{|\mu|} \int_{ \sqrt{\frac{|\mu|T}{2 D'(0)}}}^\infty \Big[ \Big(\frac{2D'(0)}{\mu^2}\Big)^{3N/2}+1\Big] (1+(s+|\mu|)^{N})r^{3N+1} e^{-Nr^2} \mathrm{d} r \mathrm{d} s\\
&\le \frac{C_{\mu,D}^N }{N \sqrt{ T}} e^{-\frac{|\mu| N T}{4 D'(0)}}.
\end{align*}
\emph{Case 2}: $s<0$. After the change of variable $s\to -s$, we can proceed in the same way as in the case $s>0$ and find
\begin{align*}
&\int_{-\infty}^0\int_{\sqrt{-T/s}}^{\infty}f(s,\rho^2)\mathrm{d} \rho \mathrm{d} s =\int_{0}^\infty \int_{\sqrt{T/s}}^{\infty}f(-s,\rho^2) \mathrm{d} \rho \mathrm{d} s\\
&=\Big(\int_{0}^{|\mu|}\int_{\sqrt{T/s}}^{\infty} + \int_{|\mu|}^{\infty}\int_{\sqrt{T/s}}^{\infty} \Big) f(-s,\rho^2) \mathrm{d} \rho \mathrm{d} s\\
&\le \frac{C_{\mu,D}^N}{N \sqrt T}\Big( e^{-| \mu| NT/[32D'(0)]}+ e^{-|\mu|NT/[4 D'(0)]} \Big).
\end{align*}
Putting things together, we see that
\[
\mathbb{E} \mathrm{Crt}_{N} ([-T, T]^c, (0,\infty)) \le \frac{C_{\mu,D}^N}{N \sqrt T}\Big( e^{-| \mu| NT/[32D'(0)]}+ e^{-|\mu|NT/[4 D'(0)]} \Big).
\]
From here the first assertion follows.
(2) The last two claims follow a somewhat different strategy. By conditioning and Young's inequality,
\begin{align*}
&\mathbb{E} \Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \Big]\\
&\le C^{N-1}\mathbb{E} [ (\mathsf{b}+ |m_1|+|m_2| +\sqrt{-4D''(0)}|z_3'|) ({\lambda_{N-1}^*}^{N-1}+|z_3'|^{N-1})]\\
&\le C_D^N(1+|m_1|^N+|m_2|^N).
\end{align*}
Using \prettyref{le:apest}, \prettyref{eq:asmp1} and \prettyref{eq:asmp2} together with the change of variable formulas \prettyref{eq:uvcov} and \prettyref{eq:m12cov},
\begin{align*}
& \mathbb{E} \mathrm{Crt}_{N} (\mathbb{R}, (R,\infty)) \\
&\le C_D^N S_{N-1}\int_{R}^\infty \int_{\mathbb{R}} (1 +|m_1|^N+|m_2|^N) \frac{e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} }{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho \\
&\le C_{\mu,D}^N S_{N-1} \int_{R}^\infty \int_{\mathbb{R}} [1 + |v|^{N}(\alpha\rho^2+\boldsymbol{t})^N] \frac{e^{-\frac{N v^2}{2}}}{\sqrt{2\pi}} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} }{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} v \mathrm{d}\rho \\
&\le C_{\mu,D}^N S_{N-1} \int_{R}^{\infty} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d}\rho\\
&\le \frac{C_{\mu,D}^N S_{N-1}}{NR} e^{-\frac{N \mu^2 R^2}{4D'(0)}}
\end{align*}
for $R$ large enough. Similarly,
\begin{align*}
& \mathbb{E} \mathrm{Crt}_{N} (\mathbb{R}, (0,{\varepsilon})) \le C_{\mu,D}^N S_{N-1} \int_{0}^{{\varepsilon}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d}\rho\le \frac{C_{\mu,D}^N S_{N-1}{\varepsilon}^N}{N} .
\end{align*}
This completes the proof.
\end{proof}
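For the reader's convenience, we record the elementary tail estimate behind the last steps of the proof above: for constants $a>0$, $R>0$ and $N\ge 2$,
\[
\int_R^\infty \rho^{N-1} e^{-aN\rho^2}\,\mathrm{d} \rho \le \sup_{\rho>0}\big(\rho^{N-1}e^{-aN\rho^2/2}\big)\int_R^\infty \frac{\rho}{R}\, e^{-aN\rho^2/2}\,\mathrm{d} \rho \le \Big(\frac{N-1}{aeN}\Big)^{\frac{N-1}{2}}\frac{e^{-aNR^2/2}}{aNR},
\]
which is of the form $C^N (NR)^{-1} e^{-aNR^2/2}$; taking $a=\mu^2/(2D'(0))$ produces the prefactors $\frac1{N\sqrt T}$, $\frac1{NR}$ and the exponents with the halved rate $\mu^2/(4D'(0))$ appearing in the bounds above.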
We remark that, with the heavier notation from \prettyref{eq:ie12}, we have actually proved the following stronger results:
\begin{align*}
\limsup_{ T\to \infty} \limsup_{N\to \infty} \frac1N \log [I_1([-T, T]^c, (0,\infty)) +I_2([-T, T]^c, (0,\infty)) ]&= -\infty,\\
\limsup_{ R \to \infty} \limsup_{N\to \infty} \frac1N \log [I_1 (\mathbb{R}, (R,\infty))+I_2 (\mathbb{R}, (R,\infty))] &= -\infty,\\
\limsup_{ {\varepsilon} \to 0+} \limsup_{N\to \infty} \frac1N \log [I_1 (\mathbb{R}, (0,{\varepsilon}))+I_2 (\mathbb{R}, (0,{\varepsilon}))] &= -\infty.
\end{align*}
The third claim also holds for $\mu=0$ with the same argument.
If $\mu=0$, in view of the complexity function in \prettyref{se:whole}, it is reasonable to require $R_2<\infty$.
\begin{lemma}\label{le:exptt2}
Let $\mu=0$ and $R<\infty$. Then
\begin{align*}
\limsup_{ T\to \infty} \limsup_{N\to \infty} \frac1N \log \mathbb{E} \mathrm{Crt}_{N} ([-T, T]^c, [0,R)) &= -\infty.
\end{align*}
\end{lemma}
\begin{proof}
The argument follows that of \prettyref{le:exptt} and is actually much easier. Indeed, we find
\begin{align*}
& \mathbb{E} \mathrm{Crt}_{N} ([-T, T]^c, [0,R) ) \\
&\le C_D^N S_{N-1}\int_{0}^{R} \int_{[-T,T]^c} (1 +m_u^N) \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho \\
&\le \frac{C_{D}^N S_{N-1} \sqrt N}{ (2\pi)^{\frac{N+1}{2}}D'(0)^{N/2}} \int_{0}^R \Big( \int_{T}^\infty + \int_{-\infty}^{-T} \Big ) [1+\rho^N+ (1+\rho^{2N})|u|^{N} ] \frac{ (1+\rho^2)}{ \rho^2} e^{-\frac{N u^2 }{2D'(0) \rho^2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho \\
&\le \frac{C_{R, D}^N S_{N-1} \sqrt N}{ T } e^{-\frac{N T^2}{ 4D'(0) R^2}}.
\end{align*}
The proof is complete.
\end{proof}
We need the following fact.
\begin{lemma}\label{le:intbd}
Suppose $|\mu| +\frac1R>0$. Then for any $a>0,c>0,b,d\in\mathbb{R}$ satisfying $aN+b<cN+d$, there exist constants $C_{\mu,D, a,b,c,d}>0, N_0>0$ such that for all $N>N_0$,
\begin{align*}
\int_0^{R}\int_{-\infty}^\infty (1+|s|^{aN+b}) \exp\Big(-\frac{N(s^{2}+\mu^{2})
\rho^{2}}{2D'(0)}\Big)\rho^{cN+d}\mathrm{d} s\mathrm{d}\rho \le C^N_{\mu,R, D, a,b,c,d}.
\end{align*}
\end{lemma}
\begin{proof}
If $\mu\neq 0$, enlarging the domain of integration in $\rho$ to $(0,\infty)$ and changing the order of integration yields
\begin{align*}
&\int_0^{\infty}\int_{\mathbb{R}} (1+|s|^{aN+b}) \exp\Big(-\frac{N(s^{2}+\mu^{2}) \rho^{2}}{2D'(0)}\Big)\rho^{cN+d} \mathrm{d} s\mathrm{d}\rho\\
&= \int_{-\infty}^\infty \int_0^\infty \Big( \frac{D'(0)}{s^2+\mu^2}\Big)^{\frac{cN+d+1}{2}}( 1+|s|^{aN+b} ) r^{cN+d} e^{-Nr^2/2} \mathrm{d} r\mathrm{d} s\\
&\le C_{D,c,d}^N \int_{-\infty}^\infty \frac{1+|s|^{aN+b}}{(s^2+\mu^2)^{\frac{cN+d+1}{2}}} \mathrm{d} s\le C_{\mu,D,c,d}^N,
\end{align*}
where in the last step we used the assumption $aN+b<cN+d$. If $\mu=0$, then $R<\infty$ and we have
\begin{align*}
\int_0^{R}\int_{-\infty}^\infty (1+|s|^{aN+b}) \exp\Big(-\frac{Ns^2
\rho^{2}}{2D'(0)}\Big)\rho^{cN+d}\mathrm{d} s \mathrm{d}\rho \le C^N_{a, b,D} \int_0^{R} (1+\rho^{-aN-b}) \rho^{cN+d} \mathrm{d}\rho,
\end{align*}
which is bounded by $C^N_{R,D,a,b,c,d}$ since the exponent $cN+d-aN-b$ is positive by assumption. This completes the proof.
\end{proof}
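The Gamma-function computation implicit in the bound $\int_0^\infty r^{cN+d} e^{-Nr^2/2}\,\mathrm{d} r \le C^N_{c,d}$ used in the proof above can be made explicit: substituting $w=Nr^2/2$,
\[
\int_0^\infty r^{cN+d} e^{-Nr^2/2}\,\mathrm{d} r = \frac12 \Big(\frac{2}{N}\Big)^{\frac{cN+d+1}{2}} \Gamma\Big(\frac{cN+d+1}{2}\Big),
\]
and by Stirling's formula the right-hand side is at most $(c/e)^{cN/2}e^{o(N)}\le C^N_{c,d}$.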
To save space, for an event $\Delta$ that may depend on the eigenvalues of GOE and other Gaussian random variables in question, let us write
\begin{align*}
I_2(E,(R_1,R_2),\Delta) &= \frac{[-4D''(0)]^{\frac{N}{2}} }{2N} \sum_{i=1}^{N-1}\int_{R_1}^{R_2} \int_{E} \mathbb{E}\Big[Z_i^2 \prod_{j\neq i} |(\frac{N-1}{N})^{1/2}\lambda_j-z_3'| \ensuremath{\boldsymbol 1}_\Delta \Big] \\
&\frac{ e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}}}{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho.
\end{align*}
\begin{lemma}\label{le:goecpt}
Suppose $|\mu| +\frac1{R_2}>0$. Then
\begin{align*}
&\limsup_{K\to\infty} \limsup_{N \to\infty} \frac1N\log I_2(E, (R_1,R_2), \{\lambda_{N-1}^*>K\} )=-\infty,\\
&\limsup_{K\to\infty} \limsup_{N \to\infty} \frac1N\log I_2(E, (R_1,R_2), \{|z_3'-\mathbb{E}(z_3')|>K\} )=-\infty.
\end{align*}
\end{lemma}
\begin{proof}
Using \prettyref{eq:ladein} and choosing $K$ large so that $2t< e^{t^2/18}$ for $t\ge K$,
\begin{align}
&\mathbb{E}[(\lambda_{N-1}^*)^{N-2}\ensuremath{\boldsymbol 1}\{\lambda_{N-1}^*>K\}]=\int_0^K (N-2)t^{N-3}\mathbb{P}(\lambda_{N-1}^*> K)\mathrm{d} t+\int_K^\infty (N-2)t^{N-3} \mathbb{P}(\lambda_{N-1}^*>t) \mathrm{d} t \notag\\
&\le K^{N-2} e^{-(N-1)K^2/9}+ \int_K^\infty e^{-(N-1)t^2/18} \mathrm{d} t \le 2 e^{-(N-1)K^2/18}.\label{eq:lantail}
\end{align}
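The moment--tail identity underlying the first step of \prettyref{eq:lantail} is the layer-cake formula: for a nonnegative random variable $X$ and integer $p\ge 1$,
\[
\mathbb{E}[X^{p}\ensuremath{\boldsymbol 1}\{X>K\}]=\int_0^\infty p t^{p-1}\,\mathbb{P}(X>\max(t,K))\,\mathrm{d} t = K^{p}\,\mathbb{P}(X>K)+\int_K^\infty p t^{p-1}\,\mathbb{P}(X>t)\,\mathrm{d} t,
\]
applied here with $p=N-2$.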
If $\mu\neq 0$, using \prettyref{eq:z3mom} and \prettyref{le:apest}, we obtain
\begin{align*}
&I_2(E,(R_1,R_2), \{\lambda_{N-1}^*>K\})\le C_D^N \int_0^\infty\int_\mathbb{R} \mathbb{E} [((\lambda_{N-1}^*)^{N-2}+z_3'^{N-2})\ensuremath{\boldsymbol 1}\{\lambda_{N-1}^*>K\}]\\
& \ \ \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho\\
&\le C_{\mu,D}^N e^{-(N-1)K^2/18} \int_0^\infty\int_\mathbb{R} \Big[ 1+ |\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)} |^{N-2} (1+\rho^{2(N-2)}) \Big] \\
& \ \ \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho\\
&\le C_{\mu,D}^N e^{-(N-1)K^2/18} \int_0^\infty\int_\mathbb{R} \Big[1 +s^{N-2}(1+\rho^{2(N-1)})\Big] \exp\Big(-\frac{N(s^{2}+\mu^{2})
\rho^{2}}{2D'(0)}\Big)\rho^{N-1}\mathrm{d} s \mathrm{d}\rho.
\end{align*}
Here in the last step we used the observation $(1+\rho^{2(N-2)})(1+\rho^2)\le 4(1+\rho^{2(N-1)}).$ The assertion then follows from \prettyref{le:intbd}. Similarly, note that
\[
\mathbb{P}(|z_3'-\mathbb{E}(z_3')|>K)\le 2e^{-\frac{N(-4D''(0))K^2}{2(-2D''(0)-\boldsymbol{t}^2)}}\le 2 e^{-NK^2}.
\]
It follows that for $K$ large enough,
\[
\mathbb{E}(|z_3'-\mathbb{E}(z_3')|^{N-2}\ensuremath{\boldsymbol 1}\{|z_3'-\mathbb{E}(z_3')|>K\}) \le 4e^{-NK^2/2}.
\]
From here we deduce that
\begin{align*}
&I_2(E,(R_1,R_2), \{|z_3'-\mathbb{E}(z_3')|>K\})\le C_D^N \int_0^\infty\int_\mathbb{R} \mathbb{E} [((\lambda_{N-1}^*)^{N-2}+ |\mathbb{E}(z_3')|^{N-2}\\
&\ \ +|z_3'-\mathbb{E}(z_3')|^{N-2})\ensuremath{\boldsymbol 1}\{|z_3'-\mathbb{E}(z_3')|>K\}] \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho\\
&\le C_{\mu,D}^N e^{-NK^2/2} \int_0^\infty\int_\mathbb{R} \Big[ 1+ |\frac{u}{\rho^2}-\frac\mu2+\frac{\mu D'(\rho^2)}{D'(0)} |^{N-2} (1+\rho^{2(N-2)}) \Big]\\
&\quad \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho .
\end{align*}
The rest of the argument is the same as above. The case $\mu=0$ and $R_2<\infty$ follows the same steps and is omitted.
\end{proof}
\begin{lemma}\label{le:goecpt2}
Suppose $|\mu| +\frac1{R_2}>0$. Then for any $\delta>0$,
\[
\limsup_{N \to\infty} \frac1N\log I_2(E, (R_1,R_2), \{L(\lambda_{1}^{N-1})\notin B(\sigma_{{\rm sc}},\delta) \} )=-\infty.
\]
\end{lemma}
\begin{proof}
We only argue for the harder case $\mu\neq 0$. Using \prettyref{eq:z3mom}, the Cauchy--Schwarz inequality and \prettyref{eq:conineq}, we have
\begin{align*}
&\mathbb{E} \Big[\prod_{i=1,i\neq j}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i - z_3' | \ensuremath{\boldsymbol 1}\{L(\lambda_{1}^{N-1})\notin B(\sigma_{{\rm sc}},\delta)\}\Big] \\
&\le C^N \mathbb{E} [((\lambda_{N-1}^*)^{N-2}+z_3'^{N-2})\ensuremath{\boldsymbol 1}\{L(\lambda_{1}^{N-1})\notin B(\sigma_{{\rm sc}},\delta)\}]\\
&\le C^N [\mathbb{E} ((\lambda_{N-1}^*)^{2(N-2)}+z_3'^{2(N-2)} )]^{1/2}
\mathbb{P}(L(\lambda_{1}^{N-1})\notin B(\sigma_{{\rm sc}},\delta))^{1/2}\\
&\le C_{\mu,D}^{N}\Big[1 +\Big|\frac{u}{\rho^{2}}-\frac{\mu}{2}+\frac{\mu D'(\rho^{2})}{D'(0)}\Big|^{N-2}(1+\rho^{2(N-2)})\Big] e^{-\frac12c(N-1)^{2}}.
\end{align*}
Together with \prettyref{le:intbd}, we deduce that
\begin{align*}
& I_2(E, (R_1,R_2), \{L(\lambda_{1}^{N-1})\notin B(\sigma_{{\rm sc}},\delta) \} ) \notag\\
&\le C_{\mu,D}^{N} e^{-cN^{2}}
\int_{R_{1}}^{R_{2}}\int_{E/\rho^{2}} \Big[1+\Big|s-\frac{\mu}{2}+\frac{\mu
D'(\rho^{2})}{D'(0)}\Big|^{N-2}(1+\rho^{2(N-1)})\Big]\notag\\
& \ \ \ \exp\Big(-\frac{N[(s-\frac{\mu}{2}
+\frac{\mu D'(\rho^{2})}{D'(0)})^{2}+\mu^{2}]
\rho^{2}}{2D'(0)}\Big)\rho^{N-1}\mathrm{d} s\mathrm{d}\rho\notag\\
&\le C_{\mu,D}^{N} e^{-cN^{2}}
\int_{0}^{\infty}\int_{\mathbb{R}} \Big[1+|v|^{N-2}(1+\rho^{2(N-1)})\Big] \exp\Big(-\frac{N[v^{2}+\mu^{2}]
\rho^{2}}{2D'(0)}\Big)\rho^{N-1}\mathrm{d} v\mathrm{d}\rho\notag\\
& \le C_{\mu,D}^{N}e^{-cN^{2}}.
\end{align*}
From here the assertion follows.
\end{proof}
For an event $\Delta$, let us write
\begin{align*}
I_1(E,(R_1,R_2),\Delta) &= [-4D''(0)]^{\frac{N-1}{2}} \int_{R_1}^{R_2} \int_{E} \mathbb{E}\Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \ensuremath{\boldsymbol 1}_\Delta\Big]\notag\\
&\frac{ e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}}}{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho.
\end{align*}
The argument in this part shares the same spirit as that for $I_2$.
\begin{lemma}\label{le:exptti11}
Suppose $|\mu|+\frac1{R_2}>0$. Then we have
\begin{align*}
&\limsup_{K\to\infty} \limsup_{N \to\infty} \frac1N\log I_1(E, (R_1,R_2), \{\lambda_{N-1}^*>K\} )=-\infty,\\
&\limsup_{K\to\infty} \limsup_{N \to\infty} \frac1N\log I_1(E, (R_1,R_2), \{|z_3'-\mathbb{E}(z_3')|>K\} )=-\infty.
\end{align*}
\end{lemma}
\begin{proof}
The argument is similar to that of \prettyref{le:goecpt}. As there, we only provide details for the case $\mu\neq 0$. Note that $\mathsf b^2\le -4D''(0)$. By \prettyref{eq:lantail}, \prettyref{eq:z13con0}, \prettyref{eq:absgau}, Young's inequality and conditioning, we find
\begin{align*}
&\mathbb{E}\Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \ensuremath{\boldsymbol 1}\{\lambda_{N-1}^*>K \} \Big]\\
&\le C^N\mathbb{E}\Big[\Big(\frac{\sqrt2 \mathsf b}{\sqrt{\pi N}}+|\bar{\mathsf a}|\Big) ((\lambda_{N-1}^*)^{N-1} +|z_3'|^{N-1}) \ensuremath{\boldsymbol 1}\{\lambda_{N-1}^*>K \}\Big]\\
&\le C^{N}\mathbb{E} [ (\mathsf b+ |m_1|+|m_2| +\sqrt{-4D''(0)}|z_3'|) ((\lambda_{N-1}^*)^{N-1}+|z_3'|^{N-1})\ensuremath{\boldsymbol 1}\{\lambda_{N-1}^*>K\}]\\
&\le C_{D}^N e^{-(N-1) K^2/18}(1+|m_1|^N+|m_2|^N).
\end{align*}
Using \prettyref{eq:asmp1}, \prettyref{eq:asmp2} and the change of variable formulas \prettyref{eq:uvcov} and \prettyref{eq:m12cov},
\begin{align*}
&I_1(E, (R_1,R_2), \{\lambda_{N-1}^*>K\} )\\
&\le C_{D}^N e^{-(N-1) K^2/18}\int_{0}^\infty \int_{\mathbb{R}} (1 +|m_1|^N+|m_2|^N) \frac{e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} }{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho \\
&\le C_{\mu,D}^N e^{-(N-1) K^2/18} \int_{0}^\infty \int_{\mathbb{R}} [1 + |v|^{N}(\alpha\rho^2+\boldsymbol{t})^N] \frac{e^{-\frac{N v^2}{2}}}{\sqrt{2\pi}} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} }{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} v \mathrm{d}\rho \\
&\le C_{\mu,D}^N e^{-(N-1) K^2/18} \int_{0}^{\infty} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d}\rho\\
&\le C_{\mu,D}^N e^{-(N-1) K^2/18}.
\end{align*}
From here the first assertion follows. The argument for the second assertion proceeds in the same fashion, after observing $|z_3'|\le |z_3'-\mathbb{E}(z_3')|+|\mathbb{E}(z_3')|$ and
\begin{align*}
&\mathbb{E}\Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \ensuremath{\boldsymbol 1}\{ |z_3'-\mathbb{E}(z_3')|>K \} \Big]\\
&\le C^{N}\mathbb{E} [ (\mathsf b+ |m_1|+|m_2| +\sqrt{-4D''(0)}|z_3'|) ((\lambda_{N-1}^*)^{N-1}+|z_3'|^{N-1})\ensuremath{\boldsymbol 1}\{|z_3'-\mathbb{E}(z_3')|>K\}]\\
&\le C_{D}^N e^{-N K^2/2}(1+|m_1|^N+|m_2|^N).\qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{le:exptti12}
Let $\delta>0$. Suppose $|\mu|+\frac1{R_2}>0$. Then we have
\begin{align*}
\limsup_{N \to\infty} \frac1N\log I_1(E, (R_1,R_2), \{L(\lambda_1^{N-1})\notin B(\sigma_{\rm sc},\delta)\} )=-\infty.
\end{align*}
\end{lemma}
\begin{proof}
The proof is similar to that of \prettyref{le:goecpt2} and we only indicate the differences for the case $\mu\neq0$. Conditioning as in the proof of \prettyref{le:exptti11}, using Young's inequality, the Cauchy--Schwarz inequality and \prettyref{eq:conineq}, we find
\begin{align*}
&\mathbb{E}\Big[|z_1'| \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_i-z_3'| \ensuremath{\boldsymbol 1}\{L(\lambda_1^{N-1})\notin B(\sigma_{\rm sc},\delta) \} \Big]\\
&\le C_{D}^N (1+|m_1|^{2N}+|m_2|^{2N})^{1/2} e^{-c N^2}.
\end{align*}
The rest of the argument follows that of \prettyref{le:exptti11} verbatim.
\end{proof}
\section{Proof of \prettyref{th:cpsublevel}}\label{se:4}
For a probability measure $\nu$ defined on $\mathbb{R}$, recall the functions $\Psi(\nu,x)$ and $\Psi_*(x)$ as in \prettyref{eq:psidef0}. Let us define
\begin{align}
\psi(\nu,\rho,u,y) &= \Psi(\nu,y)-\frac{(u-m_Y)^2}{2\Big(D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} \Big)}-\frac{-2D''(0)}{-2D''(0)-\boldsymbol{t}^2}\Big(y+\frac{m_2}{\sqrt{-4D''(0)}}\Big)^2\notag \\
&\ \ -\frac{\mu^2\rho^2}{2D'(0)}+\log \rho, \label{eq:psidef} \\
\psi_*(\rho,u,y)&=\psi(\sigma_{\rm sc},\rho,u,y). \notag
\end{align}
Recalling the notation from \prettyref{eq:msialbt}, $\psi_*(\rho,u,y)$ can be written explicitly as
\begin{align}
&\psi_*(\rho,u,y)= \Psi_*( y)-\frac{(u-\frac{\mu\rho^2}{2}+\frac{\mu D'(\rho^2) \rho^2}{D'(0) } )^2}{ 2 (D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} )} -\frac{\mu^2\rho^2}{2D'(0)}+\log \rho-\frac{-2D''(0)}{-2D''(0)-\frac{[D'(\rho^2 )-D'(0)]^2}{{ D(\rho^2 )-\frac{D'(\rho^2)^2 \rho^2}{D'(0)}}}}
\notag \\
& \times\Big(y+\frac{1} {\sqrt{-4D''(0)}}\Big[ \mu + \frac{(u-\frac{\mu \rho^2}{2} +\frac{\mu D'(\rho^2) \rho^2}{D'(0)}) (D'(\rho^2) -D'(0) )}{D( \rho^2) -\frac{D'(\rho^2)^2 \rho^2}{D'(0) }}\Big]\Big)^2.
\label{eq:psifunction}
\end{align}
\begin{lemma}\label{le:psirho0}
For any $u$ and $y$ fixed, we have $\lim_{\rho\to0+}\psi_*(\rho,u,y) = -\infty$. For any $\rho$ and $u$ fixed, we have $\lim_{|y|\to \infty}\psi_*(\rho,u,y) = -\infty$.
\end{lemma}
\begin{proof}
From \prettyref{le:albtd}, we know $ D(\rho^2)-\frac{D'(\rho^2)^2\rho^2}{D'(0)} \sim -\frac32D''(0)\rho^4$ as $\rho\to0+$.
For any ${\varepsilon}>0$ and $\rho\in(0,{\varepsilon})$, we may find $c_{\varepsilon}$ such that
\begin{align*}
\psi_*(\rho,u,y)-\Psi_*(y) & \le -\frac{(u-\frac{\mu\rho^2}{2}+\frac{\mu D'(\rho^2) \rho^2}{D'(0) } )^2}{ 2 (D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} )} -\frac{\mu^2\rho^2}{2D'(0)}+\log \rho\\
&\le -\frac{(\frac{u}{\rho^2}-\frac{\mu}{2}+\frac{\mu D'(\rho^2)}{D'(0)})^2}{-3c_{\varepsilon} D''(0)}-\frac{\mu^2 \rho^2}{2D'(0)} +\log \rho .
\end{align*}
The right-hand side clearly tends to $-\infty$ as $\rho\to0+$.
Since $\frac{-2D''(0)}{-2D''(0)-\boldsymbol{t}^2}\ge 1$, it is clear from the definition that $\lim_{|y|\to \infty}\psi_*(\rho,u,y) = -\infty$ for fixed $\rho$ and $u$.
\end{proof}
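A quick Taylor check explains the asymptotic from \prettyref{le:albtd} quoted above: writing $D(x)=D'(0)x+\frac12 D''(0)x^2+o(x^2)$ and $D'(x)=D'(0)+D''(0)x+o(x)$ (here we use $D(0)=0$ and the smoothness of $D$ near $0$),
\begin{align*}
D(\rho^2)-\frac{D'(\rho^2)^2\rho^2}{D'(0)} &= D'(0)\rho^2+\frac{D''(0)}{2}\rho^4 - \frac{(D'(0)+D''(0)\rho^2)^2\rho^2}{D'(0)} +o(\rho^4)\\
&= \frac{D''(0)}{2}\rho^4 - 2D''(0)\rho^4 + o(\rho^4) = -\frac32 D''(0)\rho^4+o(\rho^4).
\end{align*}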
Let $\llbracket \ell \rrbracket =\{i_1,...,i_\ell\}\subset [N-1]$. For any 1-Lipschitz function $f$, we have
\begin{align}
&\Big|\frac1{N-1} \sum_{j=1}^{N-1}f(\lambda_j) -\frac1{N-1-\ell}\sum_{j\in[N-1]\setminus \llbracket \ell \rrbracket }f(\lambda_j)\Big| \notag\\
&\le \frac1{(N-1)(N-1-\ell)}\sum_{j\in[N-1]\setminus \llbracket \ell \rrbracket} \Big|(N-1-\ell)f(\lambda_j)+\sum_{i\in \llbracket \ell \rrbracket}f(\lambda_i) -(N-1)f(\lambda_j) \Big|\notag\\
&\le \frac{\ell}{N-1}\max_{i,j}|\lambda_i-\lambda_j|. \label{eq:flanl}
\end{align}
\subsection{Upper bound}\label{se:ub}
\begin{proposition}\label{pr:ubi2}
Suppose $\bar E$ is compact and $0\le R_1<R_2<\infty$. Under Assumptions I, II and IV, we have
\begin{align*}
\limsup_{N\to\infty} \frac1N\log I_2(E,(R_1,R_2)) \le \frac12 \log[-4D''(0)] -\frac12\log D'(0) -\frac12\log(2\pi)+\sup_{(\rho,u,y)\in F}\psi_*(\rho,u,y),
\end{align*}
where $F=\{(\rho,u,y):y\in\mathbb{R}, \rho \in (R_1, R_2),u\in \bar E\}$ and $\psi_*(\rho,u,y)$ is given as in \prettyref{eq:psifunction}.
\end{proposition}
\begin{proof}
Since
\begin{align*}
I_2(E,(R_1,R_2))&=I_2(E,(R_1,R_2),\{L(\lambda_1^{N-1})\in B_K(\sigma_{{\rm sc}},\delta), |z_3'-\mathbb{E}(z_3')|\le K\}) \\
&\ \ +I_2(E,(R_1,R_2),\{L(\lambda_1^{N-1})\notin B_K(\sigma_{{\rm sc}},\delta)\}\cup \{|z_3'-\mathbb{E}(z_3')|> K\}),
\end{align*}
by Lemmas \ref{le:goecpt} and \ref{le:goecpt2}, we can always choose $K$ large enough so that the second term is exponentially negligible as $N\to\infty$, provided the first term yields a finite quantity in the limit. Hence it suffices to consider the first term.
Using \prettyref{eq:flanl}, if $L(\lambda_{1}^{N-1})\in B_K(\sigma_{\rm sc}, \delta) $, we may choose $N$ large enough so that $L((\frac{N-1}{N})^{1/2}\lambda_{j=1,j\neq i}^{N-1})\in B_K(\sigma_{\rm sc}, 2\delta)$. It follows that for any $i\in[N-1]$,
\begin{align}\label{eq:jile}
\prod_{j=1, j\neq i}^{N-1} |(\frac{N-1}{N})^{1/2}\lambda_j-z_3' | \ensuremath{\boldsymbol 1}\{ L(\lambda_{1}^{N-1})\in B_K(\sigma_{\rm sc}, \delta) \}\le e^{(N-2)\sup_{\nu\in B_K(\sigma_{\rm sc},2\delta)} \Psi(\nu,z_3')}.
\end{align}
By \prettyref{le:albtd} and \prettyref{eq:asmp1}, we have $c_{D,R_2}:=\inf_{R_1<\rho<R_2}[-2D''(0)-\boldsymbol{t}^2]>0$. It follows that
\begin{align*}
&\frac{\sqrt{-4ND''(0)}}{\sqrt{2\pi(-2D''(0)-\boldsymbol{t}^2)}} \exp\Big(-\frac{-2ND''(0)(y+\frac{m_2}{\sqrt{-4D''(0)}})^2}{-2D''(0)-\boldsymbol{t}^2} \Big)\\
&\le \frac{\sqrt{-4ND''(0)}}{\sqrt{2\pi c_{D,R_2}}} \exp\Big(-\frac{-2ND''(0)(y+\frac{m_2}{\sqrt{-4D''(0)}})^2}{-2D''(0)-\boldsymbol{t}^2} \Big).
\end{align*}
Let
\begin{align}\label{eq:fdeset}
F(\delta)=\Big\{ (\nu,\rho,u,y): \nu\in B_{K}(\sigma_{\rm sc},\delta), y\in\Big [-\frac{m_2}{\sqrt{-4D''(0)}}-K, -\frac{m_2}{\sqrt{-4D''(0)}}+K \Big],\notag \\
\rho \in(R_1,R_2), u\in \bar E \Big \}.
\end{align}
Using $\rho^2\le R_2^2$ and the fact that all summands of $\psi(\nu,\rho,u,y)$ in \prettyref{eq:psidef} are bounded from above on $F(\delta)$, we deduce from \prettyref{le:apest}
\begin{align*}
&I_2(E,(R_1,R_2),\{L(\lambda_1^{N-1})\in B_K(\sigma_{{\rm sc}},\delta), |z_3'-\mathbb{E}(z_3')|\le K\})\\
&\le [-4D''(0)]^{N/2}\int_{R_1}^{R_2}\int_E \mathbb{E}\Big[e^{(N-2)\sup_{\nu\in B_K(\sigma_{\rm sc},2\delta)} \Psi(\nu,z_3')}\ensuremath{\boldsymbol 1}\{|z_3'-\mathbb{E}(z_3')|\le K\} \Big]\\
&\ \ \frac{ \sqrt{N} e^{-\frac{N(u-m_Y)^2}{2\big[D(\rho^2)-\frac{D'(\rho^2)^2\rho^2}{D'(0)} \big]}}}{\sqrt{2\pi(D(\rho^2)-\frac{D'(\rho^2)^2\rho^2}{D'(0)} )}} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}}}{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho\\
&\le \frac{C_{D,R_2, |E|} N [-4D''(0)]^{\frac{N+1}2}}{(2\pi)^{\frac{N+2}{2}} D'(0)^{\frac{N}2} } \exp\Big[ (N-3) \sup_{(\nu,\rho,u,y)\in F(2\delta)} \psi(\nu,\rho,u,y)\Big],
\end{align*}
where $|E|$ is the Lebesgue measure of $E$.
Since $\psi(\nu,\rho,u,y)$ is an upper semi-continuous function on $F(2\delta)$ and attains its maximum on the closure $\overline{F(2\delta)}$, we have
\begin{align*}
&\limsup_{\delta\to0+} \sup_{(\nu,\rho,u,y)\in F(2\delta)} \psi(\nu,\rho,u,y) \le \sup_{(\rho,u,y)\in F(0)}\psi_*(\rho,u,y).
\end{align*}
By Lemmas \ref{le:goecpt} and \ref{le:psirho0}, the continuous function $\psi_*(\rho,u,y)$ attains its maximum in $\bar F$ at some point $(\rho_*,u_*,y_*)$ with $\rho_*>0$. Therefore we may choose $K$ large enough in the beginning so that
\[
\sup_{(\rho,u,y)\in F(0)}\psi_*(\rho,u,y) = \psi_*(\rho_*,u_*,y_*).
\]
This justifies that $\sup_{(\rho,u,y)\in F}\psi_*(\rho,u,y)>-\infty$ and the proof is complete.
\end{proof}
\begin{proposition}\label{pr:ubi1}
Suppose $\bar E$ is compact and $0\le R_1<R_2<\infty$. Under Assumptions I, II and IV, we have
\begin{align*}
\limsup_{N\to\infty} \frac1N\log I_1(E,(R_1,R_2)) \le \frac12 \log[-4D''(0)] -\frac12\log D'(0) -\frac12\log(2\pi)+\sup_{(\rho,u,y)\in F}\psi_*(\rho,u,y),
\end{align*}
where $F=\{(\rho,u,y):y\in\mathbb{R}, \rho \in (R_1, R_2),u\in \bar E\}$ and $\psi_*(\rho,u,y)$ is given as in \prettyref{eq:psifunction}.
\end{proposition}
\begin{proof}
By the remark after \prettyref{le:exptt}, we know
\begin{align*}
\limsup_{N\to\infty}\frac1N\log I_1(E,(0,R_2))& = \limsup_{N\to\infty}\frac1N\log I_1(E,({\varepsilon},R_2))
\end{align*}
by choosing ${\varepsilon}>0$ small enough. Hence, we may assume $R_1>0$. Similar to the proof of \prettyref{pr:ubi2}, since
\begin{align*}
I_1(E,(R_1,R_2))&=I_1(E,(R_1,R_2),\{L(\lambda_1^{N-1})\in B_K(\sigma_{{\rm sc}},\delta), |z_3'-\mathbb{E}(z_3')|\le K\}) \\
&\ \ +I_1(E,(R_1,R_2),\{L(\lambda_1^{N-1})\notin B_K(\sigma_{{\rm sc}},\delta)\}\cup \{|z_3'-\mathbb{E}(z_3')|> K\}),
\end{align*}
thanks to Lemmas \ref{le:exptti11} and \ref{le:exptti12}, by choosing $K$ large enough, it suffices to consider the first term. Since $0<R_1<R_2<\infty$, using the continuity of the functions in question, conditioning with \prettyref{eq:z13con0}, and applying \prettyref{le:apest} to $\sigma_Y$, we obtain
\begin{align*}
&I_1(E,(R_1,R_2),\{L(\lambda_1^{N-1})\in B_K(\sigma_{{\rm sc}},\delta), |z_3'-\mathbb{E}(z_3')|\le K\})\\
&\le [-4D''(0)]^{\frac{N-1}{2}} \sup_{R_1\le \rho\le R_2, u\in \bar E, |y+\frac{m_2}{\sqrt{-4D''(0)}}|\le K} (\mathsf b+|m_1|+|m_2|+\sqrt{-4D''(0)}|y|) \int_{R_1}^{R_2} \int_{E} \notag\\
&\ \ \mathbb{E}\Big[e^{(N-1)\sup_{\nu\in B_K(\sigma_{\rm sc},\delta)} \Psi(\nu,z_3')}\ensuremath{\boldsymbol 1}\{|z_3'-\mathbb{E}(z_3')|\le K\} \Big] \frac{ e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}}}{\sqrt{2\pi}\sigma_Y} \frac{e^{-\frac{N \mu^2 \rho^2}{2D'(0)}}}{(2\pi)^{N/2} D'(0)^{N/2}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho\\
&\le \frac{C_{R_1,R_2,D,K,\bar E} [-4D''(0)]^{\frac{N}{2}}} {(2\pi)^{\frac{N+2}{2}} D'(0)^{\frac{N}2}} \exp\Big[(N-3)\sup_{(\nu,\rho,u,y)\in F(\delta)}\psi(\nu,\rho,u,y) \Big],
\end{align*}
where $F(\delta)$ is given as in \prettyref{eq:fdeset} and the supremum of $|m_1|+|m_2|$ may depend on $R_1$. The assertion follows from the upper semi-continuity of $\psi(\nu,\rho,u,y)$ on $F(\delta)$ by sending $N\to\infty$ and $\delta\to0+$.
\end{proof}
\subsection{Lower bound}
\begin{proposition}\label{pr:lbd*}
Suppose $E$ is an open set and $0\le R_1<R_2<\infty$. Under Assumptions I, II and IV, we have
\begin{align*}
\liminf_{N\to\infty} \frac1N\log \mathbb{E}\mathrm{Crt}_N(E,(R_1,R_2)) \ge \frac12+ \frac12 \log[-4D''(0)] -\frac12\log D'(0) +\sup_{(\rho,u,y)\in F}\psi_*(\rho,u,y),
\end{align*}
where $F=\{(\rho,u,y):y\in\mathbb{R}, \rho \in (R_1, R_2),u\in \bar E\}$ and $\psi_*(\rho,u,y)$ is given as in \prettyref{eq:psifunction}.
\end{proposition}
\begin{proof}
Using \prettyref{eq:absgau} and \prettyref{eq:z13con0}, we know
\begin{align}\label{eq:z13con}
&\mathbb{E}\big[|z_1' - h(z_3') | \mid z_3'=y \big] \ge \sqrt{\frac2\pi}\Big[ \frac{-4D''(0)}{N} +\frac{2D''(0)\alpha^2\rho^4}{N(-2D''(0)-\boldsymbol{t}^2)}\Big]^{1/2},
\end{align}
where $h(z_3')$ only depends on $z_3'$. By conditioning, using \prettyref{eq:schur} and \prettyref{eq:ayg},
\begin{align*}
&\mathbb{E}(|\det G| )= \mathbb{E}(|\det G_{**}| \cdot | z_1'-\xi^\mathsf T G_{**}^{-1} \xi | )\\
&=[-4D''(0)]^{\frac{N-1}{2}} \mathbb{E}[ |\det ((\frac{N-1}{N})^{1/2}\mathrm{GOE}_{N-1}-z_3' I_{N-1}) | \, \mathbb{E}( | z_1'-\xi^\mathsf T G_{**} ^{-1} \xi | \mid \mathrm{GOE}_{N-1}, \xi, z_3')]
\\
&\ge [-4D''(0)]^{\frac{N-1}{2}} \sqrt{\frac2\pi}\Big[ \frac{-4D''(0)}{N} +\frac{2D''(0)\alpha^2\rho^4}{N(-2D''(0)-\boldsymbol{t}^2)} \Big]^{1/2} \\
&\quad \frac{\sqrt{N(-4D''(0))}}{\sqrt{2\pi (-2D''(0)-\boldsymbol{t}^2)} } \int_{\mathbb{R}^{N-1}} \prod_{i=1}^{N-1} \int_\mathbb{R} |(\frac{N-1}{N})^{1/2}x_i-y| \exp\Big[-\frac{-4ND''(0)(y +\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big] \mathrm{d} y\\
&\quad p_{\mathrm{GOE}}(x_1,...,x_{N-1}) \prod_{i=1}^{N-1} \mathrm{d} x_i
\end{align*}
where $p_{\mathrm{GOE}}(x_1,...,x_{N-1})$ is the joint density of the unordered eigenvalues of GOE.
Without loss of generality we assume $E$ is non-empty. Choose $(\rho_*,u_*,y_*)$ as in the proof of \prettyref{pr:ubi2}; i.e., a maximum point of $\psi_*(\rho,u,y)$ on $[R_1,R_2]\times \bar E\times \mathbb{R}$. If $\psi_*$ attains its maximum at multiple points, we simply fix one of them as $(\rho_*,u_*,y_*)$. Recall that $\rho_*>0$. Then $(\rho_*-\delta', \rho_*+\delta')\cap [R_1,R_2]$ and $(u_*-\delta',u_*+\delta')\cap \bar E$ must be non-empty for any $\delta'>0$. If $\rho_*$ and $u_*$ are both interior points, we choose $\delta'>0$ small enough so that $(\rho_*-\delta', \rho_*+\delta')\subset (R_1,R_2)$ and $(u_*-\delta',u_*+\delta')\subset E$. If either $\rho_*$ or $u_*$ is a boundary point, by abuse of notation we still write $(\rho_*-\delta', \rho_*+\delta')$ and $(u_*-\delta',u_*+\delta')$, with the understanding that one endpoint should be replaced by $\rho_*$ or $u_*$ so that we always have $(\rho_*-\delta', \rho_*+\delta')\subset (R_1,R_2)$ and $(u_*-\delta',u_*+\delta')\subset E$. Using \prettyref{eq:asmp1}, the right-hand side of \prettyref{eq:z13con} attains a strictly positive minimum for $\rho\in[\rho_*-\delta', \rho_*+\delta']$.
By restricting to small intervals, we find
\begin{align*}
&\int_{R_1}^{R_2}\int_{E} \mathbb{E}(|\det G |) \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} u \mathrm{d}\rho\\
&\ge [-4D''(0)]^{\frac{N-1}{2}} \sqrt{\frac2\pi} \int_{\rho_*-\delta'}^{\rho_*+\delta'}\int_{u_*-\delta'}^{u_*+\delta'} \int_{y_*-\delta_1}^{y_*+\delta_1} \\
&\quad \Big[ \frac{-4D''(0)}{N} +\frac{2D''(0)\alpha^2\rho^4}{N(-2D''(0)-\boldsymbol{t}^2)}\Big]^{1/2}
\frac{\sqrt{N(-4D''(0))}}{\sqrt{2\pi (-2D''(0)-\boldsymbol{t}^2)} } \\
&\quad \int_{\mathbb{R}^{N-1}} \prod_{i=1}^{N-1} |(\frac{N-1}{N})^{1/2} x_i-y| \exp\Big[-\frac{-4ND''(0)(y+\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big] \\
&\quad p_{\mathrm{GOE}}(x_1,...,x_{N-1}) \prod_{i=1}^{N-1} \mathrm{d} x_i \frac1{\sqrt{2\pi}\sigma_Y} e^{-\frac{(u-m_Y)^2}{2\sigma_Y^2}} \frac1{(2\pi)^{N/2} D'(0)^{N/2}} e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} y \mathrm{d} u \mathrm{d}\rho\\
&=: \mathcal{E}(\delta',\delta_1),
\end{align*}
where $\delta_1>0$ will be specified in the following. We consider two cases.
\emph{Case 1}: $y_* \notin[-\sqrt2,\sqrt2]$. In this case, there exists ${\varepsilon}_1>0$ small enough so that $y_*\notin [-\sqrt2-3{\varepsilon}_1, \sqrt2+3{\varepsilon}_1]$. We can choose $\delta_1$ small enough so that $y_*+\delta_1< -\sqrt2-2{\varepsilon}_1$ if $y_* < -\sqrt2$, or $y_*-\delta_1 > \sqrt2+2{\varepsilon}_1$ if $y_* > \sqrt2$. According to our choice, if $x\in(y_*-\delta_1, y_*+\delta_1)$, then $x\notin [-\sqrt2-2{\varepsilon}_1,\sqrt2+2{\varepsilon}_1]$. With these considerations in mind, by restricting the empirical measure of the GOE eigenvalues to $B_{\sqrt2+{\varepsilon}_1}(\sigma_{{\rm sc}},\delta)$ first, we find
\begin{align*}
&\mathcal{E}(\delta',\delta_1)\ge [-4D''(0)]^{\frac{N-1}{2}} \sqrt{\frac2\pi} \mathbb{P}( L((\frac{N-1}{N})^{1/2}\lambda_1^{N-1})\in B_{\sqrt2+{\varepsilon}_1}(\sigma_{{\rm sc}},\delta)) \\
&\quad \int_{\rho_*-\delta'}^{\rho_*+\delta'}\int_{u_*-\delta'}^{u_*+\delta'} \int_{y_*-\delta_1}^{y_*+\delta_1} e^{(N-1) \inf_{\nu\in B_{\sqrt2+{\varepsilon}_1} (\sigma_{\rm sc}, \delta)} \Psi(\nu,y)} \exp\Big[-\frac{-4ND''(0)(y+\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big] \\
&\quad \Big[\frac{-4D''(0)}{N} +\frac{2D''(0)\alpha^2\rho^4}{N(-2D''(0)-\boldsymbol{t}^2)} \Big]^{1/2}
\frac{\sqrt{N(-4D''(0))}}{\sqrt{2\pi (-2D''(0)-\boldsymbol{t}^2)} } \\
&\quad \frac{\sqrt{N} (2\pi)^{-(N+1)/2} D'(0)^{-N/2} }{\sqrt{D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} }} \exp\Big(- \frac{N (u - \frac{\mu\rho^2}2+\frac{\mu D'(\rho^2) \rho^2}{D'(0)})^2}{2(D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)})}\Big) e^{-\frac{N \mu^2 \rho^2}{2D'(0)}} \rho^{N-1} \mathrm{d} y \mathrm{d} u \mathrm{d}\rho.
\end{align*}
Since $\Psi(\nu, y)$ is continuous in $\mathcal{P}[-\sqrt2-{\varepsilon}_1,\sqrt2+{\varepsilon}_1] \times (-\sqrt2-2{\varepsilon}_1, \sqrt2+2{\varepsilon}_1)^c$, we have
\begin{align*}
\lim_{\delta\to0+}\inf_{\nu\in B_{\sqrt2+{\varepsilon}_1} (\sigma_{\rm sc},\delta)} \Psi(\nu,y) & = \Psi_*( y)
\end{align*}
for all $y\in[y_*-\delta_1, y_*+\delta_1]$. By Wigner's semicircle law with the distance \prettyref{eq:measd} and the LDP of the largest eigenvalue of GOE, we have
\begin{align*}
&\liminf_{N\to\infty }\mathbb{P}( L((\frac{N-1}{N})^{1/2}\lambda_1^{N-1})\in B_{\sqrt2+{\varepsilon}_1}(\sigma_{{\rm sc}},\delta)) \\
&\ge \liminf_{N\to\infty } [\mathbb{P} ( L((\frac{N-1}{N})^{1/2}\lambda_1^{N-1})\in B(\sigma_{{\rm sc}},\delta))- \mathbb{P}(\max_{i=1,...,N-1}|(\frac{N-1}{N})^{1/2}\lambda_i|>\sqrt2+{\varepsilon}_1)]=1.
\end{align*}
Recall the function $\psi$ as in \eqref{eq:psidef}. Since the functions in question are all continuous and thus attain a strictly positive minimum for $\rho\in[\rho_*-\delta',\rho_*+\delta'], u\in[u_*-\delta',u_*+\delta'],y\in[y_*-\delta_1, y_*+\delta_1]$, using \prettyref{eq:snlim} and \prettyref{eq:krerr} we deduce that
\begin{align}
\liminf_{N\to\infty} &\frac1N\log \mathbb{E}\mathrm{Crt}_{N}(E,(R_1, R_2)) \ge \liminf_{\delta'\to0+, \atop\delta_1\to0+} \liminf_{N\to\infty} \frac1N\log \mathcal{E}(\delta',\delta_1) + \frac12+\frac12\log(2\pi) \notag\\
&\ge \frac12 +\frac12 \log [-4D''(0)] -\frac12 \log D'(0) \notag\\
& \ \ + \liminf_{\delta\to0+,\delta'\to0+, \atop\delta_1\to0+} \inf_{\rho\in[\rho_*-\delta',\rho_*+\delta'],\atop u\in[u_*-\delta',u_*+\delta'], y\in[y_*-\delta_1, y_*+\delta_1] }[ \psi_*(\rho,u,y)-\Psi_*(y)+\inf_{\nu\in B_{\sqrt2+{\varepsilon}_1} (\sigma_{\rm sc},\delta)} \Psi(\nu,y)] \notag\\
&= \frac12 +\frac12 \log [-4D''(0)] -\frac12 \log D'(0) +\psi_*(\rho_*,u_*,y_*).\label{eq:lbdt1}
\end{align}
\emph{Case 2}: $y_*
\in[-\sqrt2,\sqrt2]$. In this case, we can choose $\delta_1>0$ small such that
$G(\delta_1):=(y_*-\delta_1, y_*+\delta_1)\cap (-\sqrt2,\sqrt2) \neq \emptyset$.
Choosing $K$ large we find
\begin{align*}
& \int_{G(\delta_1)} \mathbb{E}[e^{(N-1)\Psi(L((\frac{N-1}{N})^{1/2}\lambda_1^{N-1}), y)}] \exp\Big[-\frac{-4ND''(0)(y+\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big] \mathrm{d} y\\
&\ge \frac1{Z'_{N-1}}\int_{G(\delta_1)} \int_{[-(\frac{N}{N-1})^{1/2}K,(\frac{N}{N-1})^{1/2}K]^{N-1}} \exp\Big[-\frac{-4ND''(0)(y +\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big] \\
&\qquad \prod_{i=1}^{N-1}|(\frac{N-1}{N})^{1/2} x_i-y| \prod_{1\le i<j\le N-1} |x_i-x_j| e^{-\frac{N-1}{2}\sum_{i=1}^{N-1}x_i^2} \prod_{i=1}^{N-1} \mathrm{d} x_i \mathrm{d} y\\
&\stackrel{(\frac{N-1}{N})^{1/2} x_i\mapsto x_i}{\scalebox{7}[1]{=}} \frac1{Z'_{N-1}} \Bigl(\frac{N}{N-1}\Bigr)^{\frac{N(N-1)}{4}}\int_{x_N\in G(\delta_1)} \exp\Big[-\frac{-4ND''(0)(x_N+ \frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big] \\
&\qquad \int_{[-K,K]^{N-1}} \prod_{1\le i<j\le N} |x_i-x_j| e^{-\frac{N}2\sum_{i=1}^N x_i^2} e^{\frac{N}{2} x_N^2} \prod_{i=1}^N \mathrm{d} x_i\\
&\ge \frac{Z'_N}{Z'_{N-1}}\frac{1}{Z'_N} \Big(\frac{N}{N-1}\Big)^{\frac{N(N-1)}{4}} \exp\Big[ N \min_{x\in G(\delta_1)}\Big(\frac{ x ^2}{2} -\frac{-4 D''(0)(x +\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big)\Big] \\
&\qquad \int_{x_N\in G(\delta_1)} \int_{[-K,K]^{N-1}} \prod_{1\le i< j\le N} |x_i-x_j| e^{-\frac{N}2\sum_{i=1}^N x_i^2 }\prod_{i=1}^N \mathrm{d} x_i \\
&= \frac{Z'_N}{Z'_{N-1}} \Big(\frac{N}{N-1}\Big)^{\frac{N(N-1)}{4}} \exp\Big[ N \min_{x\in G(\delta_1)}\Big(\frac{ x ^2}{2} -\frac{-4ND''(0)(x+\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} \Big)\Big] \\
&\qquad \mathbb{E}\Big[\frac1N \#\{ i\in[N]: \tilde \lambda_i^N \in G(\delta_1)\} \ensuremath{\boldsymbol 1}\{\max_{i=1,...,N} |\tilde\lambda_i^N| \le K\}\Big].
\end{align*}
Here $Z_N'=N!Z_N$ is the normalizing constant for the p.d.f.~of the unordered eigenvalues of a $\mathrm{GOE}_N$ matrix. By Stirling's formula,
\[
\lim_{N\to\infty} \frac1N\log \Big[\frac{Z'_N}{Z'_{N-1}} \Big(\frac{N}{N-1}\Big)^{\frac{N(N-1)}{4}}\Big]=-\frac12-\frac12\log2.
\]
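This Stirling-formula limit can be sanity-checked numerically. The snippet below is not part of the proof; it assumes the evaluation of $Z'_N$ via Mehta's integral ($\beta=1$) together with the rescaling $x\mapsto x/\sqrt N$, which is our reading of the normalization.

```python
import math

def log_Zp(N):
    # log Z'_N for the density prod_{i<j}|x_i-x_j| exp(-(N/2) sum x_i^2) on R^N,
    # evaluated via Mehta's integral (beta = 1) after the rescaling x -> x/sqrt(N)
    return (-N/2 - N*(N - 1)/4)*math.log(N) + (N/2)*math.log(2*math.pi) \
        + sum(math.lgamma(1 + j/2) for j in range(1, N + 1)) - N*math.lgamma(1.5)

N = 4000
val = (log_Zp(N) - log_Zp(N - 1) + (N*(N - 1)/4)*(math.log(N) - math.log(N - 1)))/N
target = -0.5 - 0.5*math.log(2)
assert abs(val - target) < 5e-3
```

The discrepancy decays like $\log N/N$, so moderate $N$ already confirms the constant $-\tfrac12-\tfrac12\log 2$.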
From Wigner's semicircle law we deduce
\begin{align*}
&\liminf_{N\to\infty} \frac1N\log\mathbb{E}\Big[\frac1N \#\{ i\in[N]: \tilde \lambda_i^N \in G(\delta_1)\} \ensuremath{\boldsymbol 1}\{\max_{i=1,...,N} |\tilde\lambda_i^N| \le K\}\Big]\\
& = \lim_{N\to\infty}\frac1N\log \sigma_{\rm sc}[G(\delta_1)]=0.
\end{align*}
Since the functions in question are all continuous and thus attain a strictly positive minimum over $\rho\in[\rho_*-\delta',\rho_*+\delta']$, $u\in[u_*-\delta',u_*+\delta']$, $y\in[y_*-\delta_1, y_*+\delta_1]$, using \prettyref{eq:snlim} and \prettyref{eq:krerr} we deduce that
\begin{align}
\liminf_{N\to\infty} &\frac1N\log \mathbb{E}\mathrm{Crt}_{N}(E,(R_1, R_2)) \ge \liminf_{\delta'\to0+, \atop\delta_1\to0+} \liminf_{N\to\infty} \frac1N\log \mathcal{E}(\delta',\delta_1) + \frac12+\frac12\log(2\pi) \notag \\
&\ge \frac12 +\frac12 \log [-4D''(0)] -\frac12 \log D'(0)-\frac12-\frac12\log2 + \liminf_{\delta'\to0+, \atop\delta_1\to0+} \inf_{\rho\in[\rho_*-\delta',\rho_*+\delta'],\atop u\in[u_*-\delta',u_*+\delta'], x\in G(\delta_1)} \notag \\
& \quad \Big[ \frac{ x ^2}{2} -\frac{-4D''(0)(x +\frac{m_2}{\sqrt{-4D''(0)}})^2}{2(-2D''(0)-\boldsymbol{t}^2)} -\frac{(u-\frac{\mu\rho^2}{2}+\frac{\mu D'(\rho^2) \rho^2}{D'(0) } )^2}{ 2\Big(D(\rho^2)-\frac{D'(\rho^2)^2 \rho^2}{D'(0)} \Big)} -\frac{\mu^2\rho^2}{2D'(0)}+\log \rho \Big]\notag \\
&= \frac12 +\frac12 \log [-4D''(0)] -\frac12 \log D'(0) +\psi_*(\rho_*,u_*,y_*).\label{eq:lbdt2}
\end{align}
Here, in the last step, we used the fact from \prettyref{eq:phi*} that $\Psi_*(y_*) = \frac12 y_*^2-\frac12-\frac12\log2$, since $y_*\in[-\sqrt2,\sqrt2]$.
\end{proof}
\begin{proof}[Proof of \prettyref{th:cpsublevel}]
If $\bar E$ is compact and $0\le R_1<R_2<\infty$, the assertion follows from \prettyref{eq:snlim}, Propositions \ref{pr:ubi2}, \ref{pr:ubi1} and \ref{pr:lbd*}.
Suppose $\bar E$ is not compact or $R_2=\infty$.
Thanks to Lemmas \ref{le:psirho0} and \ref{le:exptt}, we may choose $R<\infty$ and $T<\infty$ large enough such that
\begin{align*}
&\lim_{N\to\infty} \frac1N \log\mathbb{E} \mathrm{Crt}_{N}(E, (R_1,R_2)) = \lim_{N\to\infty} \frac1N \log\mathbb{E} \mathrm{Crt}_{N}(E \cap(-T,T), (R_1,R_2)\cap [0,R])\\
&=\frac12 \log[-4D''(0)] -\frac12\log D'(0) +\frac12+\sup_{y\in \mathbb{R},\, R_1< \rho<R\wedge R_2,\, u\in \bar E\cap [-T, T]}\psi_*(\rho,u,y)\\
&=\frac12 \log[-4D''(0)] -\frac12\log D'(0) +\frac12+\sup_{(\rho,u,y)\in F}\psi_*(\rho,u,y),
\end{align*}
which completes the proof.
\end{proof}
We finish this section by showing how to recover Theorem \ref{th:ttcpx} from Theorem \ref{th:cpsublevel} when the domain of the field is confined to a shell.
\begin{example}\label{ex:2}
\rm
Let $0\le R_1<R_2\le\infty$ and $E=\mathbb{R}$. This removes the restriction on the range of the random field. Let $J=\sqrt{-2D''(0)}$. Using \prettyref{eq:msialbt} and \prettyref{eq:uvcov},
we rewrite
\begin{align*}
\psi_*(\rho,u,y )= \Psi_*(y) -\frac{J^2}{J^2-\boldsymbol{t}^2} \Big(y+\frac{\mu}{\sqrt2J}+\frac{\boldsymbol{t} v}{\sqrt2 J}\Big)^2 -\frac{v^2}{2} -\frac{\mu^2\rho^2}{2D'(0)}+\log \rho.
\end{align*}
From \prettyref{eq:phi*}, we calculate
\begin{align*}
\partial_y \psi_*&=\frac{-(\boldsymbol{t}^2+J^2)y-\sqrt2J (\mu+\boldsymbol{t} v)}{J^2-\boldsymbol{t}^2}-\mathrm{sgn}(y)\sqrt{y^2-2}\ensuremath{\boldsymbol 1}\{|y|>\sqrt2\},\\
\partial_{yy}\psi_*&=-\frac{J^2+\boldsymbol{t}^2}{J^2-\boldsymbol{t}^2} -\frac{|y|}{\sqrt{y^2-2}}\ensuremath{\boldsymbol 1}\{|y|>\sqrt2\}, \\
\partial_v\psi_*&= \frac{-J^2v -\boldsymbol{t} (\sqrt2Jy +\mu )}{J^2-\boldsymbol{t}^2},\ \
\partial_{yv}\psi_* =-\frac{\sqrt2 J\boldsymbol{t}}{J^2-\boldsymbol{t}^2},\ \
\partial_{vv}\psi_* = -\frac{J^2}{J^2-\boldsymbol{t}^2}.
\end{align*}
Using the relation $\partial_v\psi_*=0$ we find
\begin{align}\label{eq:musbt}
v=-\frac{\boldsymbol{t}(\sqrt2Jy+\mu)}{J^2}, \ \ \sqrt2Jy+\mu+ \boldsymbol{t} v = \frac{(\sqrt2Jy +\mu)(J^2-\boldsymbol{t}^2)}{J^2}.
\end{align}
Together with \prettyref{eq:phi*}, we can eliminate $v$ and simplify
\begin{align}\label{eq:psids0}
\psi_*(\rho,u,y )= -\frac12y^2-\frac12-\frac12\log2 -J_1(-|y|)\ensuremath{\boldsymbol 1}\{|y|>\sqrt2\}-\frac{\sqrt2 \mu y}{J}-\frac{\mu^2}{2J^2}-\frac{\mu^2\rho^2}{2D'(0)}+\log \rho.
\end{align}
\emph{Case 1}: $\mu\neq0$. Solving $\partial_y \psi_*=0, \partial_v\psi_*=0$ gives (after removing an extraneous solution)
\begin{align*}
\begin{cases}
y=-\frac{\sqrt2 \mu}{J},\ \ v=\frac{\mu \boldsymbol{t}}{J^2}, & |\mu|\le J,\\
y=-\frac1{\sqrt2}(\frac\mu{J}+\frac{J}\mu), \ \ v=\frac{\boldsymbol{t}}{\mu}, &|\mu|>J.
\end{cases}
\end{align*}
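As a sanity check on the critical points above, one can plug them into the displayed formulas for $\partial_y\psi_*$ and $\partial_v\psi_*$; the parameter values below ($J=\sqrt2$, $\boldsymbol{t}=1/2$, $\mu\in\{1,2\}$) are illustrative choices only.

```python
import math

J, t = math.sqrt(2.0), 0.5   # J = sqrt(-2 D''(0)); illustrative values with J^2 - t^2 > 0

def dpsi(y, v, mu):
    # partial_y psi_* and partial_v psi_* from the displayed derivative formulas
    extra = math.copysign(1.0, y)*math.sqrt(y*y - 2.0) if abs(y) > math.sqrt(2.0) else 0.0
    dy = (-(t*t + J*J)*y - math.sqrt(2.0)*J*(mu + t*v))/(J*J - t*t) - extra
    dv = (-J*J*v - t*(math.sqrt(2.0)*J*y + mu))/(J*J - t*t)
    return dy, dv

mu = 1.0                                               # case |mu| <= J
dy, dv = dpsi(-math.sqrt(2.0)*mu/J, mu*t/(J*J), mu)
assert abs(dy) < 1e-9 and abs(dv) < 1e-9

mu = 2.0                                               # case |mu| > J
dy, dv = dpsi(-(mu/J + J/mu)/math.sqrt(2.0), t/mu, mu)
assert abs(dy) < 1e-9 and abs(dv) < 1e-9
```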
From \prettyref{eq:asmp1} we know $J^2-\boldsymbol{t}^2>0$ for $\rho>0$. By the second derivative test, this critical point is the unique global maximum. Moreover, plugging in the critical point reveals that
$$\Psi_*(y) -\frac{J^2}{J^2-\boldsymbol{t}^2} \Big(y+\frac{\mu}{\sqrt2J}+\frac{\boldsymbol{t} v}{\sqrt2 J}\Big)^2 -\frac{v^2}{2}$$ does not depend on $\rho$. As a result, we choose $\rho$ by optimizing $-\frac{\mu^2\rho^2}{2D'(0)}+\log \rho$. Let us consider $R_1<\sqrt{D'(0)}/|\mu|$ only; the other case is similar. Choose
\begin{align}\label{eq:rho*}
\rho_*=\begin{cases}
\frac{\sqrt{D'(0)}}{|\mu|},& \text{ if } R_2>\frac{\sqrt{D'(0)}}{|\mu|},\\
R_2,&\text{ otherwise}.
\end{cases}
\end{align}
If $|\mu|\le \sqrt{-2D''(0)}$, we take $y_*=-\mu/\sqrt{-D''(0)}$, and
\begin{align}\label{eq:u*1}
u_*= \frac{\mu[D'(\rho_*^2)-D'(0)]}{-2D''(0)}+\frac{\mu\rho_*^2}{2}-\frac{\mu D'(\rho_*^2) \rho_*^2}{D'(0) } .
\end{align}
Then we find
\begin{align}\label{eq:psi*1}
\psi_*(\rho_*,u_*,y_* )=\begin{cases}
\frac{\mu^2}{-4D''(0)}-1-\frac12\log2 +\frac12\log D'(0)-\log |\mu|, &\mbox{if } R_2>\frac{\sqrt{D'(0)}}{|\mu|},\\
\frac{\mu^2}{-4D''(0)}-\frac12-\frac12\log2 +\log R_2- \frac{\mu^2R_2^2}{2D'(0)}, & \text{otherwise}.
\end{cases}
\end{align}
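The first branch of \prettyref{eq:psi*1} can likewise be checked against \prettyref{eq:psids0} at sample parameter values (here $D'(0)=1$, $D''(0)=-1$, $\mu=1$, chosen for illustration); the $J_1$ term drops out since $|y_*|\le\sqrt2$.

```python
import math

Dp0, Dpp0, mu = 1.0, -1.0, 1.0      # sample values with |mu| <= sqrt(-2 D''(0))
J = math.sqrt(-2.0*Dpp0)
y = -mu/math.sqrt(-Dpp0)            # y_*; here |y| <= sqrt(2), so the J_1 term vanishes
rho = math.sqrt(Dp0)/abs(mu)        # rho_* in the case R_2 > sqrt(D'(0))/|mu|

# psi_* evaluated through the display above (without the J_1 indicator term)
psi = -0.5*y*y - 0.5 - 0.5*math.log(2.0) - math.sqrt(2.0)*mu*y/J \
      - mu*mu/(2.0*J*J) - mu*mu*rho*rho/(2.0*Dp0) + math.log(rho)
# closed form: first branch of the display
closed = mu*mu/(-4.0*Dpp0) - 1.0 - 0.5*math.log(2.0) + 0.5*math.log(Dp0) - math.log(abs(mu))
assert abs(psi - closed) < 1e-12
```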
If $|\mu|>\sqrt{-2D''(0)}$, we take $y_*=-\frac{\mu}{\sqrt{-4D''(0)}} -\frac{\sqrt{-D''(0)}}{\mu}$,
\begin{align}\label{eq:u*2}
u_*=\frac{D'(\rho_*^2)-D'(0)}{\mu }+\frac{\mu\rho_*^2}{2}-\frac{\mu D'(\rho_*^2) \rho_*^2}{D'(0) }.
\end{align}
Then we find
\begin{align}\label{eq:psi*2}
&\psi_*(\rho_*,u_*,y_*) \notag\\
&=
\begin{cases}
-\frac12\log2-\log\sqrt{-2D''(0)}-\frac12+\frac12\log D'(0), & \mbox{if } R_2>\frac{\sqrt{D'(0)}}{|\mu|}, \\
-\frac12\log2-\log\sqrt{-2D''(0)}+\log|\mu| +\log R_2- \frac{\mu^2R_2^2}{2D'(0)}, & \mbox{otherwise}.
\end{cases}
\end{align}
Since $B_N=\{x\in \mathbb{R}^N: \sqrt N R_1< \|x\|<\sqrt N R_2\}$, Cram\'er's theorem for the chi-square distribution gives
\[
-\Xi=\begin{cases}
-\frac{\mu^2 R_2^2}{2D'(0)}+\frac12+\log R_2+\log |\mu|-\frac12\log D'(0) , & \mbox{if } R_2<\frac{\sqrt{D'(0)}}{|\mu|},\\
0, & \mbox{otherwise}.
\end{cases}
\]
where $\Xi$ is defined as in \prettyref{eq:bnasp1}.
\emph{Case 2}: $\mu=0$. In this case we must assume $R_2<\infty$. Then the above computations show that $\psi_*$ is optimized at $y_*=u_*=0$ and $\rho_*=R_2$, which gives
\[
\lim_{N\to\infty} \frac1N \log\mathbb{E} \mathrm{Crt}_{N}(\mathbb{R}, (R_1,R_2)) = \frac12 \log[-2D''(0)] -\frac12\log D'(0) +\log R_2.
\]
In addition, $\Theta=\lim_{N\to\infty} \frac1N \log|B_N|= \log R_2+\frac12\log(2\pi)+\frac12$.
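The constant $\Theta$ can be checked numerically from the volume of the $N$-ball; the outer radius dominates the shell volume, and the sample radii below are illustrative.

```python
import math

def log_vol_ball(N, r):
    # log of the volume of the N-dimensional ball of radius r
    return (N/2)*math.log(math.pi) + N*math.log(r) - math.lgamma(N/2 + 1)

N, R1, R2 = 2000, 0.5, 1.5
lv = log_vol_ball(N, math.sqrt(N)*R2)
# the inner ball is exponentially negligible in the shell volume
assert log_vol_ball(N, math.sqrt(N)*R1) < lv - N
theta = math.log(R2) + 0.5*math.log(2*math.pi) + 0.5
assert abs(lv/N - theta) < 0.01
```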
Our results here match all three cases in \prettyref{th:ttcpx}. This example therefore explains the seemingly very different forms of the three phases, whose common origin is hard to see without the general \prettyref{th:cpsublevel}. Moreover, it suggests that the critical points with values near $u_*$ and radial variable near $\rho_*$ dominate all others.
\end{example}
\begin{appendix}
\section{Covariance function and its derivatives}
Let $D_N(r)=D(r/N)$. For $x,y\in \mathbb{R}^N$, let $\varphi(x,y) =\frac12(D_N(\|x\|^2)+D_N(\|y\|^2)-D_N(\|x-y\|^2))$.
Since $X_N(0)=0$, the assumption of isotropic increments implies that $\mathbb{E} X_N(x)=0$; see \cite{Ya87}*{p.439}. We have
\[
\mathrm{Cov}[H_N(x), H_N(y)]=\mathrm{Cov}[X_N(x),X_N(y)]=\mathbb{E}[X_N(x)X_N(y)]=\varphi(x,y).
\]
\begin{lemma}\label{le:cov}
Assume Assumptions I and II. Then for $x\in \mathbb{R}^N$,
\begin{align*}
\mathrm{Cov}[H_N(x), \partial_i H_N(x)]&= D'\left(\frac{\|x\|^2}N\right)x_i,\\
\mathrm{Cov}[\partial_i H_N(x),\partial_j H_N(x)] &= D'(0)\delta_{ij},\\
\mathrm{Cov}[H_N(x),\partial_{ij} H_N(x)]&= 2D''\left(\frac{\|x\|^2}N\right)\frac{x_ix_j}N +\left[D'\left(\frac{\|x\|^2}N\right)-D'(0)\right]\delta_{ij},\\
\mathrm{Cov}[\partial_k H_N(x), \partial_{ij} H_N(x)]&= 0,\\
\mathrm{Cov}[\partial_{lk} H_N(x), \partial_{ij} H_N(x)]&= -2D''(0)[\delta_{jl}\delta_{ik}+\delta_{il}\delta_{kj} +\delta_{kl}\delta_{ij}]/N,
\end{align*}
where $\delta_{ij}$ denotes the Kronecker delta.
\end{lemma}
\begin{proof}
By \cite{AT07}*{Theorem 1.4.2}, $X_N(x)$ is smooth. We can differentiate inside the expectation as in \cite{AT07}*{(5.5.4)} and find
\begin{align*}
\mathbb{E}[X_N(x)\partial_i X_N(y)]/N& = \partial_{y_i} \mathbb{E}(X_N(x) X_N(y))/N=D_N'(\|y\|^2)y_i +D_N'(\|x-y\|^2)(x_i-y_i),\\
\mathbb{E}[\partial_i X_N(x)\partial_j X_N(y)]/N&= \partial_{x_i} [ D_N'(\|x-y\|^2)(x_j-y_j)]\\
&=2D_N''(\|x-y\|^2)(x_i-y_i)(x_j-y_j)+ D_N'(\|x-y\|^2)\delta_{ij},\\
\mathbb{E}[ X_N(x)\partial_{ij} X_N(y)]/N&= \partial_{y_i} [ D_N'(\|y\|^2) y_j +D_N'(\|x-y\|^2) (x_j-y_j)]\\
&=2D_N''(\|y\|^2)y_i y_j+ D_N'(\|y\|^2)\delta_{ij} -2D_N''(\|x-y\|^2)(x_i-y_i)(x_j-y_j) \\ &\hspace{5ex} -D_N'(\|x-y\|^2)\delta_{ij},\\
\mathbb{E}[\partial_k X_N(x)\partial_{ij} X_N(y)]/N& =-4D_N'''(\|x-y\|^2)(x_k-y_k)(x_i-y_i)(x_j-y_j) \\
&\hspace{-19ex} -2D_N''(\|x-y\|^2)(x_j-y_j)\delta_{ki} -2D_N''(\|x-y\|^2)(x_i-y_i)\delta_{kj} -2D_N''(\|x-y\|^2)(x_k-y_k)\delta_{ij},\\
\mathbb{E}[\partial_{lk} X_N(x)\partial_{ij} X_N(y)]/N & =-8D_N^{(4)}(\|x-y\|^2)(x_l-y_l)(x_k-y_k)(x_i-y_i)(x_j-y_j) \\
&\hspace{-19ex} -4D_N'''(\|x-y\|^2)[(x_i-y_i)(x_j-y_j)\delta_{kl} +(x_k-y_k)(x_j-y_j)\delta_{il}+(x_k-y_k)(x_i-y_i)\delta_{jl}\\
& +(x_l-y_l)(x_j-y_j)\delta_{ki} +(x_l-y_l)(x_i-y_i)\delta_{kj} +(x_l-y_l)(x_k-y_k)\delta_{ij}]\\
& -2D_N''(\|x-y\|^2)[\delta_{jl}\delta_{ik}+\delta_{il}\delta_{kj} +\delta_{kl}\delta_{ij}].
\end{align*}
Substituting $x=y$,
\begin{align*}
\mathbb{E}[X_N(x)\partial_i X_N(x)]/N&=D'_N(\|x\|^2)x_i,\\
\mathbb{E}[\partial_i X_N(x)\partial_j X_N(x)]/N &= D'_N(0)\delta_{ij},\\
\mathbb{E}[X_N(x)\partial_{ij} X_N(x)]/N&= 2D_N''(\|x\|^2)x_ix_j +D_N'(\|x\|^2)\delta_{ij}-D'_N(0)\delta_{ij},\\
\mathbb{E}[\partial_k X_N(x)\partial_{ij} X_N(x)]/N&= 0,\\
\mathbb{E}[\partial_{lk} X_N(x)\partial_{ij} X_N(x)]/N&= -2D_N''(0)[\delta_{jl}\delta_{ik}+\delta_{il}\delta_{kj} +\delta_{kl}\delta_{ij}].
\end{align*}
It remains to substitute $D'_N(r)=D'(r/N)/N$ and $D_N''(r)=D''(r/N)/N^2$.
\end{proof}
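The differentiation identities in the proof above are pure calculus facts about $\varphi$. Below is a finite-difference check of the second identity with the illustrative choice $D(r)=\log(1+r)$ and $N=1$ (so that $D_N=D$), in dimension $3$.

```python
import math

D   = lambda r: math.log1p(r)        # illustrative structure function
Dp  = lambda r: 1.0/(1.0 + r)
Dpp = lambda r: -1.0/(1.0 + r)**2

def phi(x, y):
    n2 = lambda v: sum(c*c for c in v)
    return 0.5*(D(n2(x)) + D(n2(y)) - D(n2([a - b for a, b in zip(x, y)])))

def shift(v, i, d):
    w = list(v); w[i] += d; return w

x, y, h = [0.3, -0.7, 1.1], [0.5, 0.2, -0.4], 1e-4
d = [a - b for a, b in zip(x, y)]
r = sum(c*c for c in d)
for i in range(3):
    for j in range(3):
        # mixed second derivative d/dx_i d/dy_j of phi by central differences
        num = (phi(shift(x, i, h), shift(y, j, h)) - phi(shift(x, i, h), shift(y, j, -h))
               - phi(shift(x, i, -h), shift(y, j, h)) + phi(shift(x, i, -h), shift(y, j, -h)))/(4*h*h)
        exact = 2*Dpp(r)*d[i]*d[j] + Dp(r)*(1.0 if i == j else 0.0)
        assert abs(num - exact) < 1e-5
```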
\section{Auxiliary Lemmas}\label{se:aux}
For the integral $\mathbb{E} \int_\mathbb{R} \exp\big(-\frac12(N+1)x^2 -\frac{\sqrt{N(N+1)}\mu x}{\sqrt{-D''(0)}}\big) L_{N+1}(\mathrm{d} x)$, we have the following elementary fact, which is used in \prettyref{se:whole}.
\begin{lemma}\label{le:repl}
Let $(\nu_N)_{N\ge1}$ be a sequence of probability measures on $\mathbb{R}$, and let $\mu\neq0$. Suppose
$$\lim_{N\to\infty} \frac1N\log \int_\mathbb{R} e^{-\frac12(N+1)x^2 -\frac{(N+1)\mu x}{\sqrt{-D''(0)}} }\nu_{N+1}(\mathrm{d} x) >-\infty.$$
Then we have
\begin{align*}
\lim_{N\to \infty} \frac1N \Big(\log \int_\mathbb{R} e^{-\frac12(N+1)x^2 -\frac{\sqrt{N(N+1)}\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x)
- \log \int_\mathbb{R} e^{-\frac12(N+1)x^2 -\frac{(N+1)\mu x}{\sqrt{-D''(0)}} }\nu_{N+1}(\mathrm{d} x)\Big)=0.
\end{align*}
\end{lemma}
\begin{proof}
We may assume $\mu>0$, as the case $\mu<0$ is analogous. Let
\begin{align*}
a_N &= \int_\mathbb{R} e^{-\frac12(N+1)x^2 -\frac{(N+1)\mu x}{\sqrt{-D''(0)}} }\nu_{N+1}(\mathrm{d} x),\\
b_N &= \int_\mathbb{R} e^{-\frac12(N+1)x^2 -\frac{\sqrt{N(N+1)}\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x) ,
\\
c_N&=\int_\mathbb{R} e^{-\frac12Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} }\nu_{N+1}(\mathrm{d} x).
\end{align*}
We claim $\lim_{N\to \infty} \frac1N \log \frac{ a_N}{c_{N}}=0$. Indeed, by Jensen's inequality,
\begin{align*}
\log\frac{ c_{N}}{ a_{N}}\le \log \frac{ a_N^{N/(N+1)}}{ a_{N}}=-\frac1{N+1}\log a_N.
\end{align*}
But
\[
a_N =\int_\mathbb{R} e^{-\frac12(N+1)(x+\frac{\mu}{\sqrt{-D''(0)}})^2 +\frac{(N+1)\mu^2}{-2D''(0)}} \nu_{N+1}(\mathrm{d} x)\le e^{\frac{(N+1)\mu^2}{-2D''(0)}}.
\]
On the other hand, writing $f(x)=-\frac{x^2}2-\frac{\mu x}{\sqrt{-D''(0)}}$, we have $a_N=\int_\mathbb{R} e^{f(x)} e^{Nf(x)}\nu_{N+1}(\mathrm{d} x) \le e^{\frac{\mu^2}{-2D''(0)}} c_N$, since $\max_{x\in\mathbb{R}} f(x)=\frac{\mu^2}{-2D''(0)}$. Together with the assumption that $\frac1N\log a_N$ is bounded from below, the claim follows. From the elementary inequality $a\wedge b \le (a+b)/2 \le a\vee b$, we have $\lim_{N\to\infty} \frac1N (\log ( a_{N} + c_{N})-\log a_N )=0$. It remains to prove that
\[
\lim_{N\to\infty} \frac1N (\log ( a_{N} + c_{N}) - \log b_N)=0.
\]
Note that
\begin{align*}
b_N & \le \int_{-\infty}^{0} e^{-\frac12(N+1)x^2 -\frac{(N+1)\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x) + \int_{0}^{\infty} e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x) \le a_N + c_N.
\end{align*}
Let $t $ be a large constant (independent of $N$) such that
\[
\lim_{N\to\infty} \frac1N\log \int_\mathbb{R} e^{-\frac12(N+1)x^2 -\frac{(N+1)\mu x}{\sqrt{-D''(0)}} }\nu_{N+1}(\mathrm{d} x) > - \frac{t^2}{8}
\]
and that
\[
\int_{|x|>t} e^{-\frac12(N+1)x^2 -\frac{(N+1)\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x) \le e^{-(N+1)t^2/4}.
\]
It follows that
\[
\int_{|x|>t} e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x) \le e^{-Nt^2/4},
\]
and since $\frac1N \log \frac{a_N}{c_N}\to 0$ as $N\to\infty$,
\begin{align*}
&\lim_{N\to\infty} \frac1N \log\int_{-t}^t e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x)\\
&= \lim_{N\to\infty} \frac1N \log \int_{-\infty}^\infty (1-\ensuremath{\boldsymbol 1}\{|x|>t\})e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x)\\
&=\lim_{N\to\infty} \frac1N \log \int_{-\infty}^\infty e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x).
\end{align*}
Note that
\begin{align*}
b_N &\ge e^{-\frac{t^2}2}\int_{-t}^{0} e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x) + e^{-\frac{t^2}2 -\frac{\mu t}{\sqrt{-D''(0)}}}\int_{0}^{t} e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x)\\
&\ge e^{-\frac{t^2}2 -\frac{\mu t}{\sqrt{-D''(0)}}} \int_{-t}^t e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x).
\end{align*}
Since
\[
\lim_{N\to\infty} \frac1N\Big( \log (a_N+c_N)- \log\int_{-t}^t e^{-\frac12 Nx^2 -\frac{N\mu x}{\sqrt{-D''(0)}} } \nu_{N+1}(\mathrm{d} x)\Big)=0,
\]
we have $\lim_{N\to\infty} \frac1N (\log ( a_{N} + c_{N}) - \log b_N)=0$.
\end{proof}
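Lemma~\ref{le:repl} can be illustrated numerically; the choices below ($\nu_{N+1}$ uniform on $[-1,1]$, $\mu/\sqrt{-D''(0)}=1$, $N=400$) are for illustration only.

```python
import math

def log_int(c1, c2, n=200001):
    # log of int_{-1}^{1} exp(c1*x^2 + c2*x) dx/2  (nu uniform on [-1,1]), trapezoid rule
    dx = 2.0/(n - 1)
    es = [c1*(-1 + k*dx)**2 + c2*(-1 + k*dx) for k in range(n)]
    m = max(es)
    s = sum(math.exp(e - m) for e in es) - 0.5*(math.exp(es[0] - m) + math.exp(es[-1] - m))
    return m + math.log(s * dx * 0.5)

N, mu = 400, 1.0                                   # take sqrt(-D''(0)) = 1
la = log_int(-(N + 1)/2, -(N + 1)*mu)              # log a_N
lb = log_int(-(N + 1)/2, -math.sqrt(N*(N + 1))*mu) # log b_N
assert abs(lb - la)/N < 0.05
```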
The following discussion concerns Assumption IV.
\begin{proof}[Proof of \prettyref{le:dgeab}]
1. Since $y\mapsto D'(y)$ is a strictly decreasing convex function and $D'''(y)>0$ for any $y>0$, $|D''(y)|< \frac{D'(0)-D'(y)}y $. By assumption,
\[
(\alpha \rho^2)^2 = \frac{4D''(\rho^2)^2 \rho^4}{D(\rho^2)-\frac{\rho^2 D'(\rho^2)^2}{D'(0)}} \le -\frac{8 D''(\rho^2)^2 D''(0)}{ 3[D'(\rho^2)-D'(0)]^2/\rho^4}< -\frac83 D''(0).
\]
It follows that
\begin{align*}
(\alpha \rho^2+\boldsymbol{t})\boldsymbol{t} &< \sqrt{-\frac23 D''(0)} \sqrt{-\frac83 D''(0)} -\frac23 D''(0) =-2D''(0),\\
(\alpha \rho^2+\boldsymbol{t})\alpha \rho^2 &< -\frac83 D''(0)+ \sqrt{-\frac23 D''(0)} \sqrt{-\frac83 D''(0)}=-4D''(0).
\end{align*}
2. We verify \prettyref{eq:btbd}. If \prettyref{eq:btinc} holds, then $y\mapsto \boldsymbol{t}(y)^2$ is a decreasing function and \prettyref{eq:btbd} follows from \prettyref{le:albtd}.
3. By item 1, it suffices to check \prettyref{eq:btbd}. Consider the function
\[
f(y)=-D''(0)[D'(0)D(y)-D'(y)^2y]-\frac32D'(0)[D'(y)-D'(0)]^2.
\]
Condition \prettyref{eq:btbd} is equivalent to $f(y)\ge0$. Note that $f(0)=0$ and that
\[
f'(y)= [D'(0)-D'(y)][D'(0)D''(y)-D''(0)D'(y)]+2D''(y)(D''(0)D'(y)y -D'(0)[D'(y)-D'(0)]).
\]
By convexity, $ \frac{D'(y)-D'(0)}y \le D''(y)\le 0$. If \prettyref{eq:btbd2} holds, $D''(0)D'(y)y -D'(0)[D'(y)-D'(0)]\le 0$ and
\[
\frac{D'(y)}{D'(0)}-\frac{D''(y)}{D''(0)}\ge 0.
\]
Since $D'(0)\ge D'(y)$ and $D''(y)\le0$, both terms in $f'(y)$ are nonnegative, so $f'(y)\ge0$ and \prettyref{eq:btbd} follows.
4. By Cauchy's mean value theorem, condition \prettyref{eq:btbd2} is equivalent to \prettyref{eq:btbd3}.
5. Direct calculation yields
\[
\frac{\mathrm{d}}{\mathrm{d} y} \frac{D'(y)}{- D''(y)} =\frac{-D''(y)^2+D'''(y)D'(y)}{D''(y)^2}.
\]
Then \prettyref{eq:btbd4} implies \prettyref{eq:btbd3}.
6. By the representation \prettyref{eq:tbfcn} of Thorin--Bernstein functions, we have
\begin{align*}
D''(x)= -\int_{(0,\infty)} \frac1{(x+t)^2} \sigma(\mathrm{d} t), \qquad D'''(x)= \int_{(0,\infty)} \frac{2}{(x+t)^3} \sigma(\mathrm{d} t).
\end{align*}
By the Cauchy--Schwarz inequality, we have
\begin{align*}
2D''(x)^2\le D'(x)D'''(x).
\end{align*}
It follows that $\frac{\mathrm{d}}{\mathrm{d} y} \frac{D'(y)}{- D''(y)} \ge1$ and \prettyref{eq:btbd4} holds.
\end{proof}
If $A=0$ in the representation \prettyref{eq:drep}, using the Cauchy--Schwarz inequality, we can see
\[
\frac{\mathrm{d}}{\mathrm{d} y} \frac{D'(y)}{- D''(y)} =\frac{-D''(y)^2+D'''(y)D'(y)}{D''(y)^2}\ge 0,
\]
compared with \prettyref{eq:btbd4}. It is easy to check that for any ${\varepsilon}>0, 0<\gamma<1$, our major examples $D(r)=\log(1+r/{\varepsilon})$ and $D(r)=(r+{\varepsilon})^\gamma-{\varepsilon}^{\gamma}$ satisfy \prettyref{eq:btbd3}. With more work, one can check that these functions satisfy \prettyref{eq:btinc}.
On the other hand, according to \cite{SSV}*{p.~332},
$$D(x)=\frac{\sqrt{x}\sinh^2(\sqrt x)}{\sinh(2\sqrt x)}$$
is a complete Bernstein function which is not Thorin--Bernstein. One can check (at least numerically) that it violates \prettyref{eq:btbd3} but still satisfies \prettyref{eq:btbd}.
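These claims about the examples can be tested numerically through the explicit function $f$ from the proof above; this checks \prettyref{eq:btbd} only, since \prettyref{eq:btbd3} is not reproduced here. For the last example we use $\sinh^2 s/\sinh(2s)=\tfrac12\tanh s$, and $D'$ is computed by a central difference.

```python
import math

def f(D, Dp, Dp0, Dpp0, y):
    # f(y) = -D''(0)[D'(0)D(y) - D'(y)^2 y] - (3/2) D'(0) [D'(y) - D'(0)]^2
    return -Dpp0*(Dp0*D(y) - Dp(y)**2 * y) - 1.5*Dp0*(Dp(y) - Dp0)**2

ys = [0.01*k for k in range(1, 1001)]            # grid on (0, 10]

# D(r) = log(1 + r/eps) with eps = 1
m_log = min(f(math.log1p, lambda y: 1.0/(1.0 + y), 1.0, -1.0, y) for y in ys)

# D(r) = (r + eps)^gamma - eps^gamma with eps = 1, gamma = 1/2
g = 0.5
m_pow = min(f(lambda y: (1.0 + y)**g - 1.0,
              lambda y: g*(1.0 + y)**(g - 1.0), g, g*(g - 1.0), y) for y in ys)

# D(x) = sqrt(x) sinh^2(sqrt x)/sinh(2 sqrt x) = (1/2) sqrt(x) tanh(sqrt x),
# with D'(0) = 1/2 and D''(0) = -1/3
Ds = lambda x: 0.5*math.sqrt(x)*math.tanh(math.sqrt(x))
h = 1e-6
Dsp = lambda x: (Ds(x + h) - Ds(x - h))/(2.0*h)
m_cbf = min(f(Ds, Dsp, 0.5, -1.0/3.0, y) for y in ys)

assert m_log > -1e-12 and m_pow > -1e-12 and m_cbf > -1e-6
```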
We suspect that \prettyref{eq:asmp1} and \prettyref{eq:asmp2} always hold for any structure function $D$. The following shows that this is the case at least in a neighborhood of $0$.
\begin{lemma}
Assume $A=0$ in \prettyref{eq:drep}. We have
\begin{align*}
\lim_{y\to 0+} \frac{\mathrm{d} }{\mathrm{d} y} [\alpha(y) y+\boldsymbol{t}(y)]\boldsymbol{t}(y) & <0, \\
\lim_{y\to 0+} \frac{\mathrm{d} }{\mathrm{d} y} [\alpha(y) y+\boldsymbol{t}(y)]\alpha(y)y & <0.
\end{align*}
Consequently, there exists $\delta>0$ such that $-2D''(0)>[\alpha(y) y+\boldsymbol{t}(y)]\boldsymbol{t}(y)$ and $-4D''(0)>[\alpha(y) y+\boldsymbol{t}(y)]\alpha(y)y$ for $y\in(0,\delta)$.
\end{lemma}
\begin{proof}
We only prove the first inequality as the second is similar. Write
\[
(\alpha y+\boldsymbol{t})\boldsymbol{t} = \frac{[2D''(y)+\frac{D'(y)-D'(0)}{y}] \frac{D'(y)-D'(0)}{y}}{\frac{D(y)}{y^2}-\frac{D'(y)^2}{D'(0)y}} =:\frac{T}{B}.
\]
Since $[(\alpha y+\boldsymbol{t})\boldsymbol{t}]'=\frac{T'B-B'T}{B^2}$ and $\lim_{y\to 0+} B=-\frac32 D''(0) \neq 0$, it suffices to show that $\lim_{y\to 0+}T'B-B'T<0$. By calculation, we have $\lim_{y\to0+} T= 3D''(0)^2$ and
\begin{align*}
T'&= [2D'''(y) +\frac{D''(y)y-D'(y) +D'(0)}{y^2}] \frac{D'(y)-D'(0)}{y}\\
&\ \ +[2D''(y)+\frac{D'(y)-D'(0)}{y}] \frac{D''(y)y-D'(y) +D'(0)}{y^2},\\
B'&=\frac{D'(0)D'(y)y-2D'(0)D(y)-2D'(y)D''(y)y^2+y D'(y)^2}{D'(0) y^3}.
\end{align*}
After some tedious computation, we find $\lim_{y\to0+} T'=4D'''(0) D''(0)$ and $\lim_{y\to0+} B'=-\frac56 D'''(0)-\frac{D''(0)^2}{D'(0)}$. Then
\[
\lim_{y\to 0+} T'B-B'T = D''(0)^2 \Big[\frac{3D''(0)^2}{D'(0)}-\frac72 D'''(0)\Big].
\]
By the Cauchy--Schwarz inequality,
\[
D''(0)^2 = \Big(\int_0^\infty t^4 \nu(\mathrm{d} t) \Big)^2\le \int_0^\infty t^2 \nu(\mathrm{d} t) \int_0^\infty t^6 \nu(\mathrm{d} t)=D'(0)D'''(0).
\]
From here the conclusion follows.
\end{proof}
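For the illustrative choice $D(r)=\log(1+r)$ (so $D'(0)=1$, $D''(0)=-1$, $D'''(0)=2$), the limit computed above gives $[(\alpha y+\boldsymbol{t})\boldsymbol{t}]'(0+)=-4/B(0)^2=-16/9$, which can be checked numerically:

```python
import math

Dp = lambda y: 1.0/(1.0 + y)         # D(r) = log(1+r): D'(0) = 1, D''(0) = -1
Dpp = lambda y: -1.0/(1.0 + y)**2

def g(y):
    # (alpha(y)y + t(y)) t(y) = T/B as in the displayed identity
    T = (2*Dpp(y) + (Dp(y) - 1.0)/y)*(Dp(y) - 1.0)/y
    B = math.log1p(y)/y**2 - Dp(y)**2/y
    return T/B

y = 1e-3
# g(0+) = -2 D''(0) = 2, and the one-sided derivative should be close to -16/9
slope = (g(y) - 2.0)/y
assert abs(slope + 16.0/9.0) < 0.02
```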
\end{appendix}
\bibliographystyle{plain}
\section{Introduction.}
\label{sec:intro}
In our previous paper \cite{KNS}, we established (a combinatorial version of)
standard monomial theory for semi-infinite Lakshmibai-Seshadri (LS for short) paths.
To be more precise, let $\lambda \in P^{+}$ be a (level-zero) dominant integral weight
for an untwisted affine Lie algebra $\Fg_{\af}$, and
let $\SLS(\lambda)$ denote the set of semi-infinite LS paths of shape $\lambda$;
note that the set $\SLS(\lambda)$ provides a realization of the crystal basis
of the extremal weight module $V(\lambda)$
over the quantum affine algebra $U_{q}(\Fg_{\af})$ (see \cite{INS}).
In \cite{KNS}, we proved that for
(level-zero) dominant integral weights $\lambda, \mu \in P^{+}$,
there exists an embedding $\Xi : \SLS(\lambda + \mu) \rightarrow \SLS(\lambda) \otimes \SLS(\mu)$
of crystals that sends the straight-line path $\pi_{\lambda + \mu}$ to
the tensor product $\pi_{\lambda} \otimes \pi_{\mu}$ of
the straight-line paths $\pi_{\lambda}$ and $\pi_{\mu}$.
In particular, the restriction of $\Xi$ to
the connected component $\SLS_{0}(\lambda + \mu)$ of $\SLS(\lambda + \mu)$
containing $\pi_{\lambda + \mu}$ gives an isomorphism of crystals
from $\SLS_{0}(\lambda + \mu)$ to the connected component $(\SLS(\lambda) \otimes \SLS(\mu))_{0}$
of $\SLS(\lambda) \otimes \SLS(\mu)$ containing $\pi_{\lambda} \otimes \pi_{\mu}$.
Moreover, in \cite{KNS}, we gave an explicit description of the image
$\Xi(\SLS(\lambda + \mu)) \subset \SLS(\lambda) \otimes \SLS(\mu)$
in terms of the semi-infinite Bruhat order on the affine Weyl group $W_{\af}$
in a way similar to the one for the ordinary standard monomial theory due to
Littelmann (\cite{Lit96}).
Also, in \cite{NS05}, we proved the tensor product decomposition theorem
for quantum Lakshmibai-Seshadri (QLS for short) paths.
To be more precise, for a (level-zero) dominant integral weight
$\lambda = \sum_{i \in I} m_{i} \vpi_{i} \in P^{+}$,
where the $\vpi_{i}$, $i \in I$, are the level-zero fundamental weights for $\Fg_{\af}$,
let $\QLS(\lambda)$ denote the set of QLS paths of shape $\lambda$;
note that the set $\QLS(\lambda)$ provides a realization of the crystal basis of
the tensor product $\bigotimes_{i \in I} W(\vpi_{i})^{\otimes m_{i}}$ of
the level-zero fundamental representations $W(\vpi_{i})$, $i \in I$,
over the quantum affine algebra $U_{q}^{\prime}(\Fg_{\af})$
without the degree operator (see \cite{NS03}, \cite{NS06}).
In \cite{NS05}, we proved that for (level-zero)
dominant integral weights $\lambda, \mu \in P^{+}$,
there exists an isomorphism $\Theta : \QLS(\lambda + \mu) \rightarrow \QLS(\lambda) \otimes \QLS(\mu)$
of crystals that sends the straight-line path $\eta_{\lambda + \mu}$ to
the tensor product $\eta_{\lambda} \otimes \eta_{\mu}$ of
the straight-line paths $\eta_{\lambda}$ and $\eta_{\mu}$.
Based on the fact that the affine Weyl group $W_{\af}$ is
the semi-direct product of the finite Weyl group $W$ and
the coroot lattice $Q^{\vee} = \sum_{i\in I} \BZ \alpha_{i}^{\vee}$ of
the underlying simple Lie algebra $\Fg \subset \Fg_{\af}$,
we can define a surjective morphism $\cl : \SLS(\lambda) \rightarrow \QLS(\lambda)$
of crystals for $\lambda \in P^{+}$;
note that for $\lambda, \mu \in P^{+}$, we have the following commutative diagram:
\begin{equation*}
\begin{CD}
\SLS_{0}(\lambda + \mu) @>{\Xi}>> \left(\SLS(\lambda) \otimes \SLS(\mu)\right)_{0} \\
@V{\cl}VV @VV{\cl \otimes \cl}V \\
\QLS(\lambda + \mu) @>>{\Theta}> \QLS(\lambda) \otimes \QLS(\mu).
\end{CD}
\end{equation*}
Here we note that $\cl(\SLS_{0}(\lambda)) = \QLS(\lambda)$;
in fact, for each $\eta \in \QLS(\lambda)$,
there exists a unique element $\pi_{\eta} \in \SLS_{0}(\lambda)$ such that $\cl(\pi_{\eta}) = \eta$
and such that the final direction of $\pi_{\eta}$ is identical to that of $\eta \in W$.
Using this element $\pi_{\eta}$, we define the (tail) degree function
$\deg_{\lambda} : \QLS(\lambda) \rightarrow \BZ_{\le 0}$ by:
$\wt(\pi_{\eta}) = \lambda - \gamma + \deg_{\lambda}(\eta) \delta$,
where $\gamma \in Q$ and $\delta$ is the null root of the affine Lie algebra $\Fg_{\af}$.
Also, for an arbitrary $w \in W$, we can define the degree function
$\deg_{w \lambda} : \QLS(\lambda) \rightarrow \BZ_{\le 0}$ at $w \lambda$
by ``twisting'' $\pi_{\eta}$ by a certain element in $Q^{\vee} \subset W_{\af}$
corresponding to $w$; note that $\deg_{\lambda} = \deg_{e \lambda}$,
where $e$ is the identity element of $W$.
In \cite{LNSSS2} and \cite{LNSSS3},
we gave an explicit description of the specialization at $t = 0$
of the nonsymmetric Macdonald polynomial $E_{w \lambda}(q, t)$
in terms of (a specific subset of) the set $\QLS(\lambda)$
equipped with the degree function $\deg_{\lambda}$.
In addition, in \cite{NNS1} (see also \cite{NS18}),
we gave an explicit description of the specialization at $t = \infty$ of
the nonsymmetric Macdonald polynomial $E_{w \lambda}(q, t)$
in terms of (a specific subset of) the set $\QLS(\lambda)$
equipped with the degree function $\deg_{w \lambda}$.
In this paper, we study the behavior of the degree function
under the isomorphism $\Theta : \QLS(\lambda + \mu) \rightarrow \QLS(\lambda) \otimes \QLS(\mu)$
of crystals for $\lambda, \mu \in P^{+}$.
To be more precise, let $\eta \in \QLS(\lambda + \mu)$,
and write $\Theta(\eta) \in \QLS(\lambda) \otimes \QLS(\mu)$ as:
$\Theta(\eta) = \eta_{1} \otimes \eta_{2}$, with $\eta_{1} \in \QLS(\lambda)$ and $\eta_{2} \in \QLS(\mu)$.
Then our main result (Theorem~\ref{thm:main}) states that for an arbitrary $w \in W$,
\begin{equation*}
\deg_{w(\lambda + \mu)}(\eta) =
\deg_{\io{\eta_{2}}{w} \lambda}(\eta_{1}) + \deg_{w \mu}(\eta_{2}) - \pair{\lambda}{\ze{\eta_{2}}{w}}.
\end{equation*}
Here, $\io{\eta_{2}}{w}$ is an element of $W$,
called the initial direction of $\eta_{2}$ with respect to $w$,
defined in terms of the quantum version of Deodhar lifts introduced in \cite{LNSSS1};
note that if $\mu \in P^{+}$ is regular, then $\io{\eta_{2}}{w}$ is
just the initial direction $\iota(\eta_{2})$ of $\eta_{2}$.
Also, $\ze{\eta_{2}}{w}$ is an element of
$Q^{\vee, +} := \sum_{i \in I} \BZ_{\ge 0} \alpha_{i}^{\vee}$
defined in terms of the quantum Bruhat graph (see Section~\ref{subsec:main} for details);
note that if $\mu \in P^{+}$ is regular and $\eta_{2} \in \QLS(\mu)$ is
of the form $\eta_{2} = (v_{1}, \ldots, v_{s}; \sigma_{0} = 0, \ldots, \sigma_{s} = 1)$,
then $\ze{\eta_{2}}{w}$ is just the element
$\sum_{k = 1}^{s} \wt(v_{k+1} \Rightarrow v_{k})$, where $v_{s+1} := w$.
As an application of this result,
we obtain the following equation (Corollary~\ref{cor:main})
between the generating functions
$\gch_{w \lambda} \QLS(\lambda) :=
\sum_{\eta \in \QLS(\lambda)} q^{\deg_{w \lambda}(\eta)} e^{\wt(\eta)}$
for $\lambda \in P^{+}$ and $w \in W$ (called graded characters):
\begin{equation*}
\gch_{w(\lambda + \mu)} \QLS(\lambda + \mu) =
\sum_{\eta \in \QLS(\mu)}
e^{\wt(\eta)} q^{\deg_{w \mu}(\eta) - \pair{\lambda}{\ze{\eta}{w}}}
\gch_{\io{\eta}{w} \lambda} \QLS(\lambda).
\end{equation*}
We know from \cite[Sect.~5.1]{No} that
the graded character $\lng(\gch_{w \lambda} \QLS(\lambda))$
twisted by the longest element $\lng$ of $W$ is identical
to the graded character of the generalized Weyl module $W_{\lng w \lambda}$ introduced in \cite{FM}.
We have thus given a crystal-theoretic proof of \cite[Theorem~1.17]{FM}.
This paper is organized as follows. In Section~\ref{sec:review},
we first fix our notation for affine Lie algebras.
Next, we recall some basic facts about the (parabolic) semi-infinite Bruhat graph,
and then briefly review fundamental results on semi-infinite LS paths.
Also, we recall some basic facts about the (parabolic) quantum Bruhat graph,
and then briefly review fundamental results on QLS paths,
which includes the definition and some of the properties of the degree function.
In Section~\ref{sec:main}, we first state our main result (Theorem~\ref{thm:main}).
Next, we show a technical lemma about the quantum version of Deodhar lifts,
which plays an important role in our proof of the main result.
Finally, by using similarity maps for semi-infinite LS paths and QLS paths,
we prove Theorem~\ref{thm:main}.
\subsection*{Acknowledgments.}
S.N. was partially supported by
JSPS Grant-in-Aid for Scientific Research (B) 16H03920.
D.S. was partially supported by
JSPS Grant-in-Aid for Scientific Research (C) 15K04803.
\section{Semi-infinite Lakshmibai-Seshadri paths and quantum Lakshmibai-Seshadri paths.}
\label{sec:review}
\subsection{Affine Lie algebras.}
\label{subsec:liealg}
Let $\Fg$ be a finite-dimensional simple Lie algebra over $\BC$
with Cartan subalgebra $\Fh$.
Denote by $\{ \alpha_{i}^{\vee} \}_{i \in I}$ and
$\{ \alpha_{i} \}_{i \in I}$ the set of simple coroots and
simple roots of $\Fg$, respectively, and set
$Q^{\vee} := \bigoplus_{i \in I} \BZ \alpha_i^{\vee}$ and
$Q^{\vee,+} := \sum_{i \in I} \BZ_{\ge 0} \alpha_i^{\vee}$;
for $\xi,\,\zeta \in Q^{\vee}$, we write $\xi \ge \zeta$ if $\xi-\zeta \in Q^{\vee,+}$.
Let $\Delta$, $\Delta^{+}$, and $\Delta^{-}$ be
the set of roots, positive roots, and negative roots of $\Fg$, respectively,
with $\theta \in \Delta^{+}$ the highest root of $\Fg$.
For a root $\alpha \in \Delta$, we denote by $\alpha^{\vee}$ its dual root.
We set $\rho:=(1/2) \sum_{\alpha \in \Delta^{+}} \alpha$.
Also, let $\vpi_{i}$, $i \in I$, denote the fundamental weights for $\Fg$, and set
\begin{equation} \label{eq:P-fin}
P:=\bigoplus_{i \in I} \BZ \vpi_{i}, \qquad
P^{+} := \sum_{i \in I} \BZ_{\ge 0} \vpi_{i}.
\end{equation}
Let $\Fg_{\af} = \bigl(\BC[z,z^{-1}] \otimes \Fg\bigr) \oplus \BC c \oplus \BC d$ be
the untwisted affine Lie algebra over $\BC$ associated to $\Fg$,
where $c$ is the canonical central element, and $d$ is
the scaling element (or the degree operator),
with Cartan subalgebra $\Fh_{\af} = \Fh \oplus \BC c \oplus \BC d$.
We regard an element $\mu \in \Fh^{\ast}:=\Hom_{\BC}(\Fh,\,\BC)$ as an element of
$\Fh_{\af}^{\ast}$ by setting $\pair{\mu}{c}=\pair{\mu}{d}:=0$, where
$\pair{\cdot\,}{\cdot}:\Fh_{\af}^{\ast} \times \Fh_{\af} \rightarrow \BC$ denotes
the canonical pairing of $\Fh_{\af}^{\ast}:=\Hom_{\BC}(\Fh_{\af},\,\BC)$ and $\Fh_{\af}$.
Let $\{ \alpha_{i}^{\vee} \}_{i \in I_{\af}} \subset \Fh_{\af}$ and
$\{ \alpha_{i} \}_{i \in I_{\af}} \subset \Fh_{\af}^{\ast}$ be the set of
simple coroots and simple roots of $\Fg_{\af}$, respectively,
where $I_{\af}:=I \sqcup \{0\}$; note that
$\pair{\alpha_{i}}{c}=0$ and $\pair{\alpha_{i}}{d}=\delta_{i0}$
for $i \in I_{\af}$.
Denote by $\delta \in \Fh_{\af}^{\ast}$ the null root of $\Fg_{\af}$;
recall that $\alpha_{0}=\delta-\theta$.
Also, let $\Lambda_{i} \in \Fh_{\af}^{\ast}$, $i \in I_{\af}$,
denote the fundamental weights for $\Fg_{\af}$ such that $\pair{\Lambda_{i}}{d}=0$,
and set
\begin{equation} \label{eq:P}
P_{\af} :=
\left(\bigoplus_{i \in I_{\af}} \BZ \Lambda_{i}\right) \oplus
\BZ \delta \subset \Fh_{\af}^{\ast}, \qquad
P_{\af}^{0}:=\bigl\{\mu \in P_{\af} \mid \pair{\mu}{c}=0\bigr\};
\end{equation}
notice that $P_{\af}^{0}=P \oplus \BZ \delta$, and that
$\pair{\mu}{\alpha_{0}^{\vee}} = - \pair{\mu}{\theta^{\vee}}$
for $\mu \in P_{\af}^{0}$. We remark that for each $i \in I$,
$\vpi_{i}$ is equal to $\Lambda_{i}-\pair{\Lambda_{i}}{c}\Lambda_{0}$,
which is called the level-zero fundamental weight in \cite{Kas02}.
Let $W := \langle s_{i} \mid i \in I \rangle$ and
$W_{\af} := \langle s_{i} \mid i \in I_{\af} \rangle$ be the (finite) Weyl group of $\Fg$ and
the (affine) Weyl group of $\Fg_{\af}$, respectively,
where $s_{i}$ is the simple reflection with respect to $\alpha_{i}$
for each $i \in I_{\af}$. We denote by $\ell:W_{\af} \rightarrow \BZ_{\ge 0}$
the length function on $W_{\af}$, whose restriction to $W$ agrees with
the one on $W$, by $e \in W \subset W_{\af}$ the identity element,
and by $\lng \in W$ the longest element.
We set
\begin{equation} \label{eq:tis}
\ti{s}_{i}:=
\begin{cases}
s_{i} & \text{if $i \in I$}, \\[1mm]
s_{\theta} & \text{if $i=0$},
\end{cases}
\qquad
\ti{\alpha}_{i}:=
\begin{cases}
\alpha_{i} & \text{if $i \in I$}, \\[1mm]
-\theta & \text{if $i=0$}.
\end{cases}
\end{equation}
For each $\xi \in Q^{\vee}$, let $t_{\xi} \in W_{\af}$ denote
the translation in $\Fh_{\af}^{\ast}$ by $\xi$ (see \cite[Sect.~6.5]{Kac});
for $\xi \in Q^{\vee}$, we have
\begin{equation}\label{eq:wtmu}
t_{\xi} \mu = \mu - \pair{\mu}{\xi}\delta \quad
\text{if $\mu \in \Fh_{\af}^{\ast}$ satisfies $\pair{\mu}{c}=0$}.
\end{equation}
Then, $\bigl\{ t_{\xi} \mid \xi \in Q^{\vee} \bigr\}$ forms
an abelian normal subgroup of $W_{\af}$, in which $t_{\xi} t_{\zeta} = t_{\xi + \zeta}$
holds for $\xi,\,\zeta \in Q^{\vee}$. Moreover, we know from \cite[Proposition 6.5]{Kac} that
\begin{equation*}
W_{\af} \cong
W \ltimes \bigl\{ t_{\xi} \mid \xi \in Q^{\vee} \bigr\} \cong W \ltimes Q^{\vee}.
\end{equation*}
Denote by $\rr$ the set of real roots of $\Fg_{\af}$, and
by $\prr \subset \rr$ the set of positive real roots;
we know from \cite[Proposition 6.3]{Kac} that
$\rr =
\bigl\{ \alpha + n \delta \mid \alpha \in \Delta,\, n \in \BZ \bigr\}$,
and
$\prr =
\Delta^{+} \sqcup
\bigl\{ \alpha + n \delta \mid \alpha \in \Delta,\, n \in \BZ_{> 0}\bigr\}$.
For $\beta \in \rr$, we denote by $\beta^{\vee} \in \Fh_{\af}$
its dual root, and by $s_{\beta} \in W_{\af}$ the corresponding reflection;
if $\beta \in \rr$ is of the form $\beta = \alpha + n \delta$
with $\alpha \in \Delta$ and $n \in \BZ$, then
$s_{\beta} =s_{\alpha} t_{n\alpha^{\vee}} \in W \ltimes Q^{\vee}$.
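\begin{rem}
To illustrate the last formula, take $\beta=\alpha_{0}=-\theta+\delta$,
that is, $\alpha=-\theta$ and $n=1$; then $s_{0}=s_{\theta}t_{-\theta^{\vee}}$.
More generally, for $\beta=\alpha+n\delta$ and $\mu \in \Fh_{\af}^{\ast}$
with $\pair{\mu}{c}=0$, we have $\pair{\mu}{\beta^{\vee}}=\pair{\mu}{\alpha^{\vee}}$,
and both sides of the equality $s_{\beta}=s_{\alpha}t_{n\alpha^{\vee}}$
send $\mu$ to $\mu-\pair{\mu}{\alpha^{\vee}}(\alpha+n\delta)$ by \eqref{eq:wtmu}.
\end{rem}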
\subsection{Parabolic semi-infinite Bruhat graph.}
\label{subsec:SiBG}
In this subsection, we take and fix an arbitrary subset $\J \subset I$. We set
$\QJ := \bigoplus_{i \in \J} \BZ \alpha_i$,
$\QJv := \bigoplus_{i \in \J} \BZ \alpha_i^{\vee}$,
$\QJvp := \sum_{i \in \J} \BZ_{\ge 0} \alpha_i^{\vee}$,
$\DeJ := \Delta \cap \QJ$,
$\DeJ^{+} := \Delta^{+} \cap \QJ$,
$\WJ := \langle s_{i} \mid i \in \J \rangle$, and
$\rho_{\J}:=(1/2) \sum_{\alpha \in \DeJ^{+}} \alpha$;
we denote by
$[\,\cdot\,]^{\J} : Q^{\vee} \twoheadrightarrow Q_{\Jc}^{\vee}$
the projection from $Q^{\vee}=Q_{\Jc}^{\vee} \oplus \QJv$
onto $Q_{\Jc}^{\vee}$ with kernel $\QJv$.
Let $\WJu$ denote the set of minimal(-length) coset representatives
for the cosets in $W/\WJ$; we know from \cite[Sect.~2.4]{BB} that
\begin{equation} \label{eq:mcr}
\WJu = \bigl\{ w \in W \mid
\text{$w \alpha \in \Delta^{+}$ for all $\alpha \in \DeJ^{+}$}\bigr\}.
\end{equation}
For $w \in W$, we denote by $\mcr{w}=\mcr{w}^{\J} \in \WJu$
the minimal coset representative for the coset $w \WJ$ in $W/\WJ$.
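\begin{rem}
For example, if $\Fg$ is of type $A_{2}$ and $\J=\{2\}$, then
$\WJ=\{e,\,s_{2}\}$ and $\WJu=\{e,\,s_{1},\,s_{2}s_{1}\}$;
indeed, $e\alpha_{2}=\alpha_{2}$,
$s_{1}\alpha_{2}=\alpha_{1}+\alpha_{2}$, and
$s_{2}s_{1}\alpha_{2}=\alpha_{1}$ all lie in $\Delta^{+}$,
in accordance with \eqref{eq:mcr}.
\end{rem}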
Also, following \cite{Pet97} (see also \cite[Sect.~10]{LS10}), we set
\begin{align}
(\DeJ)_{\af}
& := \bigl\{ \alpha + n \delta \mid
\alpha \in \DeJ,\,n \in \BZ \bigr\} \subset \Delta_{\af}, \\
(\DeJ)_{\af}^{+}
&:= (\DeJ)_{\af} \cap \prr =
\DeJ^{+} \sqcup \bigl\{ \alpha + n \delta \mid
\alpha \in \DeJ,\, n \in \BZ_{> 0} \bigr\}, \\
\label{eq:stabilizer}
(\WJ)_{\af}
& := \WJ \ltimes \bigl\{ t_{\xi} \mid \xi \in \QJv \bigr\}
= \bigl\langle s_{\beta} \mid \beta \in (\DeJ)_{\af}^{+} \bigr\rangle, \\
\label{eq:Pet}
(\WJu)_{\af}
&:= \bigl\{ x \in W_{\af} \mid
\text{$x\beta \in \prr$ for all $\beta \in (\DeJ)_{\af}^{+}$} \bigr\};
\end{align}
if $\J = \emptyset$, then
$(W^{\emptyset})_{\af}=W_{\af}$ and $(W_{\emptyset})_{\af}=\bigl\{e\bigr\}$.
We know from \cite{Pet97} (see also \cite[Lemma~10.6]{LS10}) that
for each $x \in W_{\af}$, there exist a unique
$x_1 \in (\WJu)_{\af}$ and a unique $x_2 \in (\WJ)_{\af}$
such that $x = x_1 x_2$; let
\begin{equation} \label{eq:PiJ}
\PJ : W_{\af} \twoheadrightarrow (\WJu)_{\af}, \quad x \mapsto x_{1},
\end{equation}
denote the projection,
where $x= x_1 x_2$ with $x_1 \in (\WJu)_{\af}$ and $x_2 \in (\WJ)_{\af}$.
\begin{lem} \label{lem:PiJ}
\mbox{}
\begin{enu}
\item It holds that
\begin{equation} \label{eq:PiJ2}
\begin{cases}
\PJ(w)=\mcr{w}
& \text{\rm for all $w \in W$}; \\[1mm]
\PJ(xt_{\xi})=\PJ(x)\PJ(t_{\xi})
& \text{\rm for all $x \in W_{\af}$ and $\xi \in Q^{\vee}$};
\end{cases}
\end{equation}
in particular, $(\WJu)_{\af}
= \bigl\{ w \PJ(t_{\xi}) \mid w \in \WJu,\,\xi \in Q^{\vee} \bigr\}$.
\item For each $\xi \in Q^{\vee}$,
the element $\PJ(t_{\xi}) \in (\WJu)_{\af}$ is
of the form{\rm:} $\PJ(t_{\xi})=ut_{\xi+\xi_{1}}$
for some $u \in \WJ$ and $\xi_{1} \in \QJv$.
\item For $\xi,\,\zeta \in Q^{\vee}$,
$\PJ(t_{\xi}) = \PJ(t_{\zeta})$ if and only if $\xi-\zeta \in \QJv$.
\end{enu}
\end{lem}
\begin{proof}
Part (1) follows from \cite[Proposition~10.10]{LS10}, and
part (2) follows from \cite[(3.7)]{LNSSS1}.
The ``if'' part of part (3) follows from part (1) and
the fact that $t_{\xi-\zeta} \in (\WJ)_{\af}$;
the ``only if'' part of part (3) follows from part (2).
\end{proof}
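\begin{rem}
For example, let $\Fg$ be of type $A_{2}$ and $\J=\{2\}$.
Since $\alpha_{2}^{\vee} \in \QJv$, part (3) of Lemma~\ref{lem:PiJ} gives
$\PJ(t_{\alpha_{2}^{\vee}})=\PJ(t_{0})=e$.
Also, a direct check of the condition \eqref{eq:Pet} shows that
$t_{\alpha_{1}^{\vee}} = (s_{2}t_{\alpha_{1}^{\vee}})(s_{2}t_{-\alpha_{2}^{\vee}})$,
with $s_{2}t_{\alpha_{1}^{\vee}} \in (\WJu)_{\af}$ and
$s_{2}t_{-\alpha_{2}^{\vee}} \in (\WJ)_{\af}$, so that
$\PJ(t_{\alpha_{1}^{\vee}})=s_{2}t_{\alpha_{1}^{\vee}}$;
this is of the form in part (2), with $u=s_{2}$ and $\xi_{1}=0$.
\end{rem}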
\begin{dfn} \label{dfn:sell}
Let $x \in W_{\af}$, and
write it as $x = w t_{\xi}$ with $w \in W$ and $\xi \in Q^{\vee}$.
We define the semi-infinite length $\sell(x)$ of $x$ by:
$\sell (x) = \ell (w) + 2 \pair{\rho}{\xi}$.
\end{dfn}
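\begin{rem}
In contrast to the ordinary length function, the semi-infinite length
may be negative. For example, if $\Fg$ is of type $A_{1}$, then
$\sell(t_{\alpha_{1}^{\vee}})=2\pair{\rho}{\alpha_{1}^{\vee}}=2$ and
$\sell(t_{-\alpha_{1}^{\vee}})=-2$, whereas
$\ell(t_{\alpha_{1}^{\vee}})=\ell(t_{-\alpha_{1}^{\vee}})=2$.
\end{rem}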
\begin{dfn}[\cite{Lu80}, \cite{Lu97}; see also \cite{Pet97}] \label{dfn:SiB}
\mbox{}
\begin{enu}
\item The (parabolic) semi-infinite Bruhat graph $\SBJ$
is the $\prr$-labeled directed graph whose
vertices are the elements of $(\WJu)_{\af}$, and
whose directed edges are of the form:
$x \edge{\beta} y$ for $x,y \in (\WJu)_{\af}$ and $\beta \in \prr$
such that $y=s_{\beta}x$ and $\sell (y) = \sell (x) + 1$.
When $\J=\emptyset$, we write $\SB$ for
$\mathrm{BG}^{\si}\bigl((W^{\emptyset})_{\af}\bigr)$.
\item
The (parabolic) semi-infinite Bruhat order is a partial order
$\sile$ on $(\WJu)_{\af}$ defined as follows:
for $x,\,y \in (\WJu)_{\af}$, we write $x \sile y$
if there exists a directed path in $\SBJ$ from $x$ to $y$;
we write $x \sil y$ if $x \sile y$ and $x \ne y$.
\end{enu}
\end{dfn}
\begin{rem}
In the case $\J = \emptyset$, the semi-infinite Bruhat order on $W_{\af}$ is
essentially the same as the generic Bruhat order introduced in \cite{Lu80};
see \cite[Appendix~A.3]{INS} for details. Also, for a general $\J$,
the parabolic semi-infinite Bruhat order on $(\WJu)_{\af}$
is essentially the same as the partial order on $\J$-alcoves introduced in
\cite{Lu97} when we take a special point to be the origin.
\end{rem}
\begin{prop}[{\cite[Corollary~4.2.2]{INS}}] \label{prop:beta}
Let $x,y \in (\WJu)_{\af}$, and $\beta \in \prr$.
If $x \edge{\beta} y$ is an edge of $\SBJ$, then
$\beta$ is either of the following forms{\rm:}
$\beta = \alpha$ with $\alpha \in \Delta^{+}$, or
$\beta = \alpha + \delta$ with $\alpha \in \Delta^{-}$.
Moreover, if $x = w \PJ(t_{\xi})$ with $w \in \WJu$ and $\xi \in Q^{\vee}$,
then $w^{-1}\alpha \in \Delta^{+} \setminus \DeJ$.
\end{prop}
\begin{lem}[{\cite[Remark~4.1.3]{INS}}] \label{lem:si}
Let $\lambda \in P^{+}$ be such that
$\J=\bigl\{ j \in I \mid \pair{\lambda}{\alpha_j^{\vee}}=0 \bigr\}$,
and let $x \in (\WJu)_{\af}$ and $i \in I_{\af}$. Then,
\begin{equation} \label{eq:si1}
s_{i}x \in (\WJu)_{\af} \iff
\pair{x\lambda}{\alpha_{i}^{\vee}} \ne 0 \iff
x^{-1}\alpha_{i} \in (\Delta \setminus \DeJ)+\BZ\delta.
\end{equation}
Moreover, in this case,
\begin{equation} \label{eq:simple}
\begin{cases}
x \edge{\alpha_{i}} s_{i}x \iff
\pair{x\lambda}{\alpha_{i}^{\vee}} > 0 \iff
x^{-1}\alpha_{i} \in (\Delta^{+} \setminus \DeJ^{+})+\BZ\delta, & \\[1.5mm]
s_{i}x \edge{\alpha_{i}} x \iff
\pair{x\lambda}{\alpha_{i}^{\vee}} < 0 \iff
x^{-1}\alpha_{i} \in (\Delta^{-} \setminus \DeJ^{-})+\BZ\delta. &
\end{cases}
\end{equation}
\end{lem}
\subsection{Semi-infinite Lakshmibai-Seshadri paths.}
\label{subsec:SLS}
In this subsection, we fix $\lambda \in P^{+} \subset P_{\af}^{0} \subset P_{\af}$
(see \eqref{eq:P-fin} and \eqref{eq:P}), and set
\begin{equation} \label{eq:J}
\J=\J_{\lambda}:=
\bigl\{ i \in I \mid \pair{\lambda}{\alpha_i^{\vee}}=0 \bigr\} \subset I.
\end{equation}
\begin{dfn} \label{dfn:SBa}
For a rational number $0 < \sigma < 1$,
we define $\SBa$ to be the subgraph of $\SBJ$
with the same vertex set but having only
those directed edges of the form
$x \edge{\beta} y$ for which
$\sigma\pair{x\lambda}{\beta^{\vee}} \in \BZ$ holds.
\end{dfn}
\begin{dfn}\label{dfn:SLS}
A semi-infinite Lakshmibai-Seshadri (LS for short) path of
shape $\lambda $ is a pair
\begin{equation} \label{eq:SLS}
\pi = (x_{1},\,\dots,\,x_{s} \,;\,
\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s}), \quad s \ge 1,
\end{equation}
of a strictly decreasing sequence $x_1 \sig \cdots \sig x_s$
of elements in $(\WJu)_{\af}$ and an increasing sequence
$0 = \sigma_0 < \sigma_1 < \cdots < \sigma_s =1$ of rational numbers
satisfying the condition that there exists a directed path
in $\SBb{\sigma_{u}}$ from $x_{u+1}$ to $x_{u}$
for each $u = 1,\,2,\,\dots,\,s-1$.
\end{dfn}
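\begin{rem}
For example, let $\Fg$ be of type $A_{1}$, and $\lambda=2\vpi_{1}$;
then $\J=\emptyset$. Since
$\tfrac{1}{2}\pair{\lambda}{\alpha_{1}^{\vee}}=1 \in \BZ$,
the edge $e \edge{\alpha_{1}} s_{1}$ of $\SB$ survives in $\SBb{1/2}$,
and hence $(s_{1},\,e\,;\,0,\,\tfrac{1}{2},\,1)$ is
a semi-infinite LS path of shape $2\vpi_{1}$.
\end{rem}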
\begin{rem} \label{rem:SLS}
We set
$\turn{\lambda}:=\bigl\{\sigma \in [0,1] \mid
\sigma \pair{\lambda}{\alpha^{\vee}} \in \BZ
\text{ for some $\alpha \in \Delta^{+} \setminus \DeJ^{+}$} \bigr\}$;
note that $\turn{\lambda}$ is a finite set contained in $\BQ$.
Let $0 < \sigma < 1$ be a rational number, and
assume that there exists an edge $x \edge{\beta} y$ in $\SBa$;
write $x = w\PJ(t_{\xi})$ with $w \in \WJu$ and $\xi \in Q^{\vee}$, and
$\beta = \alpha$ or $\beta = \alpha+\delta$ with $\alpha \in \Delta$,
as in Proposition~\ref{prop:beta}.
Then we see that $\sigma\pair{\lambda}{w^{-1}\alpha^{\vee}} =
\sigma\pair{w\lambda}{\alpha^{\vee}} =
\sigma\pair{x\lambda}{\beta^{\vee}} \in \BZ$,
which implies that $\sigma \in \turn{\lambda}$.
Therefore, if $\pi \in \SLS(\lambda)$ is of the form \eqref{eq:SLS},
then $\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s} \in \turn{\lambda}$.
\end{rem}
We denote by $\SLS(\lambda)$
the set of all semi-infinite LS paths of shape $\lambda$.
If $\pi \in \SLS(\lambda)$ is of the form \eqref{eq:SLS},
then we set $\iota(\pi):=x_{1} \in \WJa$ and $\kappa(\pi):=x_{s} \in \WJa$,
and call them the initial and final directions of $\pi$,
respectively.
Following \cite[Sect.~3.1]{INS}, we endow the set $\SLS(\lambda)$
with a crystal structure with weights in $P_{\af}$ as follows.
Let $\pi \in \SLS(\lambda)$ be of the form \eqref{eq:SLS}.
We define $\ol{\pi}:[0,1] \rightarrow \BR \otimes_{\BZ} P_{\af}$
to be the piecewise-linear, continuous map
whose ``direction vector'' for the interval
$[\sigma_{u-1},\,\sigma_{u}]$ is $x_{u}\lambda \in P_{\af}$
for each $1 \le u \le s$, that is,
\begin{equation} \label{eq:olpi}
\ol{\pi} (t) :=
\sum_{k = 1}^{u-1}(\sigma_{k} - \sigma_{k-1}) x_{k}\lambda + (t - \sigma_{u-1}) x_{u}\lambda
\quad
\text{for $t \in [\sigma_{u-1},\,\sigma_u]$, $1 \le u \le s$}.
\end{equation}
We know from \cite[Proposition~3.1.3]{INS} that $\ol{\pi}$
is an (affine) LS path of shape $\lambda \in P_{\af}$,
introduced in \cite[Sect.~4]{Lit95}. We set
\begin{equation} \label{eq:wt}
\wt (\pi):= \ol{\pi}(1) = \sum_{u = 1}^{s} (\sigma_{u}-\sigma_{u-1})x_{u}\lambda \in P_{\af}.
\end{equation}
We define root operators $e_{i}$, $f_{i}$, $i \in I_{\af}$,
in the same manner as in \cite[Sect.~2]{Lit95}. Set
\begin{equation} \label{eq:H}
\begin{cases}
H^{\pi}_{i}(t) := \pair{\ol{\pi}(t)}{\alpha_{i}^{\vee}} \quad
\text{for $t \in [0,1]$}, \\[1.5mm]
m^{\pi}_{i} :=
\min \bigl\{ H^{\pi}_{i} (t) \mid t \in [0,1] \bigr\}.
\end{cases}
\end{equation}
As explained in \cite[Remark~2.4.3]{NS16},
all local minima of the function $H^{\pi}_{i}(t)$, $t \in [0,1]$,
are integers; in particular,
the minimum value $m^{\pi}_{i}$ is a nonpositive integer
(recall that $\ol{\pi}(0)=0$, and hence $H^{\pi}_{i}(0)=0$).
We define $e_{i}\pi$ as follows.
If $m^{\pi}_{i}=0$, then we set $e_{i} \pi := \bzero$,
where $\bzero$ is an additional element not
contained in any crystal.
If $m^{\pi}_{i} \le -1$, then we set
\begin{equation} \label{eq:t-e}
\begin{cases}
t_{1} :=
\min \bigl\{ t \in [0,\,1] \mid
H^{\pi}_{i}(t) = m^{\pi}_{i} \bigr\}, \\[1.5mm]
t_{0} :=
\max \bigl\{ t \in [0,\,t_{1}] \mid
H^{\pi}_{i}(t) = m^{\pi}_{i} + 1 \bigr\};
\end{cases}
\end{equation}
note that $H^{\pi}_{i}(t)$ is
strictly decreasing on the interval $[t_{0},\,t_{1}]$.
Let $1 \le p \le q \le s$ be such that
$\sigma_{p-1} \le t_{0} < \sigma_p$ and $t_{1} = \sigma_{q}$.
Then we define $e_{i}\pi$ by
\begin{equation} \label{eq:epi}
\begin{split}
& e_{i} \pi := (
x_{1},\,\ldots,\,x_{p},\,s_{i}x_{p},\,s_{i}x_{p+1},\,\ldots,\,
s_{i}x_{q},\,x_{q+1},\,\ldots,\,x_{s} ; \\
& \hspace*{40mm}
\sigma_{0},\,\ldots,\,\sigma_{p-1},\,t_{0},\,\sigma_{p},\,\ldots,\,\sigma_{q}=t_{1},\,
\ldots,\,\sigma_{s});
\end{split}
\end{equation}
if $t_{0} = \sigma_{p-1}$, then we drop $x_{p}$ and $\sigma_{p-1}$, and
if $s_{i} x_{q} = x_{q+1}$, then we drop $x_{q+1}$ and $\sigma_{q}=t_{1}$.
Similarly, we define $f_{i}\pi$ as follows.
Observe that $H^{\pi}_{i}(1) - m^{\pi}_{i}$ is a nonnegative integer.
If $H^{\pi}_{i}(1) - m^{\pi}_{i} = 0$, then we set $f_{i} \pi := \bzero$.
If $H^{\pi}_{i}(1) - m^{\pi}_{i} \ge 1$, then we set
\begin{equation} \label{eq:t-f}
\begin{cases}
t_{0} :=
\max \bigl\{ t \in [0,1] \mid H^{\pi}_{i}(t) = m^{\pi}_{i} \bigr\}, \\[1.5mm]
t_{1} :=
\min \bigl\{ t \in [t_{0},\,1] \mid H^{\pi}_{i}(t) = m^{\pi}_{i} + 1 \bigr\};
\end{cases}
\end{equation}
note that $H^{\pi}_{i}(t)$ is
strictly increasing on the interval $[t_{0},\,t_{1}]$.
Let $0 \le p \le q \le s-1$ be such that $t_{0} = \sigma_{p}$ and
$\sigma_{q} < t_{1} \le \sigma_{q+1}$. Then we define $f_{i}\pi$ by
\begin{equation} \label{eq:fpi}
\begin{split}
& f_{i} \pi := ( x_{1},\,\ldots,\,x_{p},\,s_{i}x_{p+1},\,\dots,\,
s_{i} x_{q},\,s_{i} x_{q+1},\,x_{q+1},\,\ldots,\,x_{s} ; \\
& \hspace{40mm}
\sigma_{0},\,\ldots,\,\sigma_{p}=t_{0},\,\ldots,\,\sigma_{q},\,t_{1},\,
\sigma_{q+1},\,\ldots,\,\sigma_{s});
\end{split}
\end{equation}
if $t_{1} = \sigma_{q+1}$, then we drop $x_{q+1}$ and $\sigma_{q+1}$, and
if $x_{p} = s_{i} x_{p+1}$, then we drop $x_{p}$ and $\sigma_{p}=t_{0}$.
In addition, we set $e_{i} \bzero = f_{i} \bzero := \bzero$
for all $i \in I_{\af}$.
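\begin{rem}
As an illustration, let $\Fg$ be of type $A_{1}$, and $\lambda=2\vpi_{1}$,
so that $\J=\emptyset$. For $\pi=\pi_{\lambda}=(e\,;\,0,\,1)$, we have
$H^{\pi}_{1}(t)=2t$, $m^{\pi}_{1}=0$, and
$H^{\pi}_{1}(1)-m^{\pi}_{1}=2$; in the definition of $f_{1}\pi$,
we get $t_{0}=0$ and $t_{1}=\tfrac{1}{2}$, with $p=q=0$, and hence
$f_{1}\pi_{\lambda}=(s_{1},\,e\,;\,0,\,\tfrac{1}{2},\,1)$.
Applying $f_{1}$ once more yields
$f_{1}^{2}\pi_{\lambda}=(s_{1}\,;\,0,\,1)$, and $f_{1}^{3}\pi_{\lambda}=\bzero$.
\end{rem}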
\begin{thm}[{see \cite[Theorem~3.1.5]{INS}}] \label{thm:SLS}
\mbox{}
\begin{enu}
\item The set $\SLS(\lambda) \sqcup \{ \bzero \}$ is
stable under the action of the root operators
$e_{i}$ and $f_{i}$, $i \in I_{\af}$.
\item For each $\pi \in \SLS(\lambda)$
and $i \in I_{\af}$, we set
\begin{equation*}
\begin{cases}
\ve_{i} (\pi) :=
\max \bigl\{ n \ge 0 \mid e_{i}^{n} \pi \neq \bzero \bigr\}, \\[1.5mm]
\vp_{i} (\pi) :=
\max \bigl\{ n \ge 0 \mid f_{i}^{n} \pi \neq \bzero \bigr\}.
\end{cases}
\end{equation*}
Then, the set $\SLS(\lambda)$,
equipped with the maps $\wt$, $e_{i}$, $f_{i}$, $i \in I_{\af}$,
and $\ve_{i}$, $\vp_{i}$, $i \in I_{\af}$,
defined above, is a crystal with weights in $P_{\af}$.
\end{enu}
\end{thm}
We denote by $\SLS_{0}(\lambda)$ the connected component of
$\SLS(\lambda)$ containing $\pi_{\lambda}:=(e;0,1)$.
Also, for $x \in (\WJu)_{\af}$,
we set $\pi_{\lambda}^{x}:=(x;0,1) \in \SLS(\lambda)$.
Recall from \cite[Theorem~3.2.1]{INS} that
$\SLS(\lambda)$ is isomorphic as a crystal
(with weights in $P_{\af}$) to the crystal basis of
the extremal weight module of extremal weight $\lambda$.
Hence we deduce from \cite[Sect.~7]{Kas94} that
the affine Weyl group $W_{\af}$ acts on $\SLS(\lambda)$ by
\begin{equation} \label{eq:W1}
s_{i} \cdot \pi:=
\begin{cases}
f_{i}^{n}\pi & \text{if $n:=\pair{\wt(\pi)}{\alpha_{i}^{\vee}} \ge 0$}, \\[1.5mm]
e_{i}^{-n}\pi & \text{if $n:=\pair{\wt(\pi)}{\alpha_{i}^{\vee}} \le 0$},
\end{cases}
\end{equation}
for $\pi \in \SLS(\lambda)$ and $i \in I_{\af}$;
by convention, we set $w \cdot \bzero:=\bzero$
for all $w \in W_{\af}$.
\begin{lem}[{see \cite[(5.1.6)]{INS}}] \label{lem:pix}
For every $x \in W_{\af}$, it holds that
$x \cdot \pi_{\lambda} = (\PJ(x);0,1) = \pi_{\lambda}^{\PJ(x)}$.
In particular, $\pi_{\lambda}^{x} \in \SLS_{0}(\lambda)$
for all $x \in (\WJu)_{\af}$.
\end{lem}
Let $\pi=(x_{1},\,\dots,\,x_{s} \,;\,
\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s}) \in \SLS(\lambda)$.
For $\xi \in Q^{\vee}$, we set
\begin{equation} \label{eq:Txi}
\pi \cdot T_{\xi} := (x_{1}\PJ(t_{\xi}),\,\dots,\,x_{s}\PJ(t_{\xi}) \,;\,
\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s});
\end{equation}
by convention, we set $\bzero \cdot T_{\xi}:=\bzero$.
Then we see from the definition of semi-infinite LS paths that
$\pi \cdot T_{\xi} \in \SLS(\lambda)$; note that $x_{u}\PJ(t_{\xi})=
\PJ(x_{u})\PJ(t_{\xi}) = \PJ(x_{u}t_{\xi}) \in \WJa$ by \eqref{eq:PiJ2}.
The following lemma follows easily from the definitions.
\begin{lem} \label{lem:Txi}
Let $\pi \in \SLS(\lambda)$, $\xi \in Q^{\vee}$, and $i \in I_{\af}$.
Then,
\begin{equation} \label{eq:Txi1}
e_{i}(\pi \cdot T_{\xi}) = (e_{i}\pi) \cdot T_{\xi}, \qquad
f_{i}(\pi \cdot T_{\xi}) = (f_{i}\pi) \cdot T_{\xi},
\end{equation}
\begin{equation} \label{eq:Txi1a}
\begin{cases}
\wt (\pi \cdot T_{\xi}) = \wt(\pi)-\pair{\lambda}{\xi}\delta,
\ \text{\rm and hence} \
\pair{\wt (\pi \cdot T_{\xi})}{\alpha_{i}^{\vee}} =
\pair{\wt(\pi)}{\alpha_{i}^{\vee}}, \\
\ve_{i}(\pi \cdot T_{\xi}) = \ve_{i}(\pi), \quad
\vp_{i}(\pi \cdot T_{\xi}) = \vp_{i}(\pi).
\end{cases}
\end{equation}
\end{lem}
\begin{rem} \label{rem:Txi}
Let $\pi \in \SLS_{0}(\lambda)$, and write it as
$\pi=X\pi_{\lambda}$ for some monomial $X$ in root operators. Then,
$\pi \cdot T_{\xi} = (X\pi_{\lambda}) \cdot T_{\xi}
\stackrel{\eqref{eq:Txi1}}{=} X (\pi_{\lambda} \cdot T_{\xi}) =
X \pi_{\lambda}^{\PJ(t_{\xi})} \in \SLS_{0}(\lambda)$.
\end{rem}
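\begin{rem}
For example, if $\Fg$ is of type $A_{1}$ and $\lambda=2\vpi_{1}$
(so that $\J=\emptyset$ and $\PJ=\mathrm{id}$), then
$\pi_{\lambda} \cdot T_{\alpha_{1}^{\vee}} = (t_{\alpha_{1}^{\vee}}\,;\,0,\,1)$,
whose weight is
$t_{\alpha_{1}^{\vee}}\lambda = \lambda-2\delta$ by \eqref{eq:wtmu},
in agreement with \eqref{eq:Txi1a}.
\end{rem}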
Now, for $\lambda_{1},\dots,\,\lambda_{n} \in P^{+}$,
we denote by $(\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n}))_{0}$
the connected component of $\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n})$
containing $\pi_{\lambda_{1}} \otimes \cdots \otimes \pi_{\lambda_{n}}$.
The next theorem follows from \cite[Theorem~3.1]{KNS}.
\begin{thm} \label{thm:SMT}
Let $\lambda_{1},\dots,\,\lambda_{n} \in P^{+}$,
and set $\lambda:=\lambda_{1}+\cdots+\lambda_{n} \in P^{+}$.
There exists an embedding
$\Xi=\Xi_{\lambda_{1},\dots,\lambda_{n}}:
\SLS(\lambda) \hookrightarrow
\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n})$
of crystals that sends $\pi_{\lambda}$ to
$\pi_{\lambda_{1}} \otimes \cdots \otimes \pi_{\lambda_{n}}$.
In particular, the restriction of $\Xi=\Xi_{\lambda_{1},\dots,\lambda_{n}}$
to the connected component $\SLS_{0}(\lambda)$ is
a {\rm(}unique{\rm)} isomorphism
\begin{equation} \label{eq:Xi0}
\Xi=\Xi_{\lambda_{1},\dots,\lambda_{n}}:
\SLS_{0}(\lambda) \stackrel{\sim}{\rightarrow}
(\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n}))_{0}
\end{equation}
of crystals that sends $\pi_{\lambda}$ to
$\pi_{\lambda_{1}} \otimes \cdots \otimes \pi_{\lambda_{n}}$.
\end{thm}
\begin{rem} \label{rem:ass1}
Let $\lambda,\mu,\nu \in P^{+}$. We see that
the following diagram is commutative:
\begin{equation} \label{eq:ass1a}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda+\mu+\nu)
\ar[rr]^-{\Xi_{\lambda+\mu,\nu}}
\ar[d]_{\Xi_{\lambda,\mu+\nu}} & &
(\SLS(\lambda+\mu) \otimes \SLS(\nu))_{0} \ar[d]^{\Xi_{\lambda\mu} \otimes \id} \\
(\SLS(\lambda) \otimes \SLS(\mu+\nu))_{0}
\ar[rr]^-{\id \otimes \Xi_{\mu\nu}} & &
(\SLS(\lambda) \otimes \SLS(\mu) \otimes \SLS(\nu))_{0}.
}
\end{split}
\end{equation}
Indeed, both of the maps $(\id \otimes \Xi_{\mu\nu}) \circ
\Xi_{\lambda,\mu+\nu}$ and
$(\Xi_{\lambda\mu} \otimes \id) \circ
\Xi_{\lambda+\mu,\nu}$ are isomorphisms of crystals that
send the element $\pi_{\lambda+\mu+\nu} \in \SLS_{0}(\lambda+\mu+\nu)$
to $\pi_{\lambda} \otimes \pi_{\mu} \otimes \pi_{\nu} \in
(\SLS(\lambda) \otimes \SLS(\mu) \otimes \SLS(\nu))_{0}$.
Because $\SLS_{0}(\lambda+\mu+\nu) \subset \SLS(\lambda+\mu+\nu)$
is connected and contains $\pi_{\lambda+\mu+\nu}$,
we conclude that the diagram above is commutative.
\end{rem}
Keep the setting of Theorem~\ref{thm:SMT}.
Take $\J_{k}=\J_{\lambda_{k}}$, $1 \le k \le n$, and
$\J=\J_{\lambda}$ as in \eqref{eq:J}.
\begin{lem}[{see \cite[Remark~3.5.2]{NS16}}] \label{lem:ext}
It holds that $\Xi(x \cdot \pi_{\lambda})=
(x \cdot \pi_{\lambda_{1}}) \otimes \cdots \otimes (x \cdot \pi_{\lambda_{n}})$
for $x \in W_{\af}$.
\end{lem}
For $\pi_{1} \otimes \cdots \otimes \pi_{n}
\in \SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n})$ and $\xi \in Q^{\vee}$,
we set
\begin{equation} \label{eq:Txiten}
(\pi_{1} \otimes \cdots \otimes \pi_{n}) \cdot T_{\xi}:=
(\pi_{1} \cdot T_{\xi}) \otimes \cdots \otimes
(\pi_{n} \cdot T_{\xi}).
\end{equation}
\begin{lem} \label{lem:Txi2}
It holds that $\Xi(\pi \cdot T_{\xi}) = \Xi(\pi) \cdot T_{\xi}$
for all $\pi \in \SLS_{0}(\lambda)$ and $\xi \in Q^{\vee}$.
\end{lem}
\begin{proof}
Let $\pi \in \SLS_{0}(\lambda)$, and write it as $\pi=X\pi_{\lambda}$
for some monomial $X$ in root operators. Then we have
$\Xi(\pi)=\Xi(X\pi_{\lambda}) = X\Xi(\pi_{\lambda}) =
X (\pi_{\lambda_{1}} \otimes \cdots \otimes \pi_{\lambda_{n}})$, and
by the tensor product rule for crystals we can write
$X (\pi_{\lambda_{1}} \otimes \cdots \otimes \pi_{\lambda_{n}}) =
X_{1}\pi_{\lambda_{1}} \otimes \cdots \otimes X_{n}\pi_{\lambda_{n}}$
for some monomials $X_{k}$, $1 \le k \le n$, in root operators,
which are ``submonomials'' of $X$.
Then we have
\begin{align}
\Xi(\pi) \cdot T_{\xi} & =
(X_{1}\pi_{\lambda_{1}} \cdot T_{\xi}) \otimes \cdots \otimes
(X_{n}\pi_{\lambda_{n}} \cdot T_{\xi}) \nonumber \\
& =
(X_{1}\pi_{\lambda_{1}}^{\PS{1}(t_{\xi})}) \otimes \cdots \otimes
(X_{n}\pi_{\lambda_{n}}^{\PS{n}(t_{\xi})}) \quad
\text{by Remark~\ref{rem:Txi}}. \label{eq:txi2a}
\end{align}
Here, by Remark~\ref{rem:Txi}, we see that
$\pi \cdot T_{\xi} = X \pi_{\lambda}^{\PJ(t_{\xi})}$.
Therefore, it follows from Lemma~\ref{lem:ext} that
$\Xi(\pi \cdot T_{\xi})=\Xi(X \pi_{\lambda}^{\PJ(t_{\xi})}) =
X \Xi(\pi_{\lambda}^{\PJ(t_{\xi})}) =
X(\pi_{\lambda_{1}}^{\PS{1}(t_{\xi})}
\otimes \cdots \otimes
\pi_{\lambda_{n}}^{\PS{n}(t_{\xi})})$.
Hence, the tensor product rule for crystals,
along with \eqref{eq:Txi1a}, shows that
\begin{equation*}
\Xi(\pi \cdot T_{\xi})=
X(\pi_{\lambda_{1}}^{\PS{1}(t_{\xi})}
\otimes \cdots \otimes
\pi_{\lambda_{n}}^{\PS{n}(t_{\xi})}) =
(X_{1}\pi_{\lambda_{1}}^{\PS{1}(t_{\xi})}) \otimes \cdots \otimes
(X_{n}\pi_{\lambda_{n}}^{\PS{n}(t_{\xi})}).
\end{equation*}
By this equality and \eqref{eq:txi2a},
we obtain $\Xi(\pi \cdot T_{\xi}) = \Xi(\pi) \cdot T_{\xi}$, as desired.
\end{proof}
\subsection{Parabolic quantum Bruhat graph and the tilted Bruhat order.}
\label{subsec:QBG}
In this subsection, we take and fix a subset $\J$ of $I$.
\begin{dfn}
The (parabolic) quantum Bruhat graph $\QBJ$ is
the ($\Delta^{+} \setminus \DeJ^{+})$-labeled
directed graph whose vertices are the elements of $\WJu$, and
whose directed edges are of the form: $w \edge{\beta} v$
for $w,v \in \WJu$ and $\beta \in \Delta^{+} \setminus \DeJ^{+}$
such that $v= \mcr{ws_{\beta}}$, and such that either of
the following holds:
(i) $\ell(v) = \ell (w) + 1$;
(ii) $\ell(v) = \ell (w) + 1 - 2 \pair{\rho-\rho_{\J}}{\beta^{\vee}}$.
An edge satisfying (i) (resp., (ii)) is called a Bruhat (resp., quantum) edge.
When $\J=\emptyset$, we write $\QB$ for $\mathrm{QBG}(W^{\emptyset})$.
\end{dfn}
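\begin{rem}
For example, if $\Fg$ is of type $A_{1}$ and $\J=\emptyset$, then
$\QB$ has the two vertices $e$ and $s_{1}$, a Bruhat edge
$e \edge{\alpha_{1}} s_{1}$, and a quantum edge
$s_{1} \edge{\alpha_{1}} e$; for the latter,
$\ell(e)=0=\ell(s_{1})+1-2\pair{\rho}{\alpha_{1}^{\vee}}$.
In particular, $\QB$ contains a directed cycle,
in contrast to the ordinary Bruhat graph.
\end{rem}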
\begin{rem} \label{rem:PQBG}
We know from \cite[Remark~6.13]{LNSSS1} that for each $w,\,v \in \WJu$,
there exists a directed path in $\QBJ$ from $w$ to $v$.
\end{rem}
Let $w,\,v \in \WJu$, and let
$\bp:w=
v_{0} \edge{\beta_{1}}
v_{1} \edge{\beta_{2}} \cdots
\edge{\beta_{s}}
v_{s}=v$
be a directed path in $\QBJ$ from $w$ to $v$.
Then we define the weight of $\bp$ by
\begin{equation} \label{eq:wtdp}
\wt^{\J}(\bp) := \sum_{
\begin{subarray}{c}
1 \le r \le s\,; \\[1mm]
\text{$v_{r-1} \edge{\beta_{r}} v_{r}$ is} \\[1mm]
\text{a quantum edge}
\end{subarray}}
\beta_{r}^{\vee} \in Q^{\vee,+};
\end{equation}
when $\J=\emptyset$, we write $\wt(\bp)$ for $\wt^{\emptyset}(\bp)$.
We know the following proposition from
\cite[Proposition~8.1 and its proof]{LNSSS1}.
\begin{prop} \label{prop:81}
Let $w,\,v \in \WJu$.
Let $\bp$ be a shortest directed path in $\QBJ$ from $w$ to $v$,
and $\bq$ an arbitrary directed path in $\QBJ$ from $w$ to $v$.
Then, $[\wt^{\J}(\bq)-\wt^{\J}(\bp)]^{\J} \in Q_{\Jc}^{\vee,+}$,
where $[\,\cdot\,]^{\J} : Q^{\vee} \twoheadrightarrow Q_{\Jc}^{\vee}$
is as defined in Section~\ref{subsec:SiBG}.
Moreover, $\bq$ is shortest if and only if
$[\wt^{\J}(\bq)]^{\J}=[\wt^{\J}(\bp)]^{\J}$.
\end{prop}
For $w,\,v \in \WJu$, we take a shortest directed path $\bp$ in
$\QBJ$ from $w$ to $v$, and set
$\wt^{\J}(w \Rightarrow v):=[\wt^{\J}(\bp)]^{\J} \in Q_{\Jc}^{\vee,+}$.
When $\J=\emptyset$, we write $\wt(w \Rightarrow v)$
for $\wt^{\emptyset}(w \Rightarrow v)$.
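\begin{rem}
For example, if $\Fg$ is of type $A_{1}$ and $\J=\emptyset$, then
$\wt(e \Rightarrow s_{1})=0$, since the shortest directed path in $\QB$
from $e$ to $s_{1}$ is the single Bruhat edge $e \edge{\alpha_{1}} s_{1}$,
while $\wt(s_{1} \Rightarrow e)=\alpha_{1}^{\vee}$, since the shortest
directed path from $s_{1}$ to $e$ is the single quantum edge
$s_{1} \edge{\alpha_{1}} e$.
\end{rem}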
\begin{lem}[{\cite[Lemma~7.2]{LNSSS2}}] \label{lem:wtS} \mbox{}
Let $w,\,v \in \WJu$, and let $w_{1} \in w\WJ$, $v_{1} \in v\WJ$.
Then we have $\wt^{\J}(w \Rightarrow v) = [\wt(w_{1} \Rightarrow v_{1})]^{\J}$.
\end{lem}
For $w,\,v \in W$, we denote by $\ell^{\J}(w \Rightarrow v)$
the length of a shortest directed path from $w$ to $v$ in $\QBJ$;
when $\J=\emptyset$, we write $\ell(w \Rightarrow v)$
for $\ell^{\emptyset}(w \Rightarrow v)$.
\begin{dfn}[tilted Bruhat order; see \cite{BFP}] \label{dfn:tilted}
For each $w \in W$, we define the $w$-tilted Bruhat order $\tb{w}$ on $W$ as follows:
for $v_{1},v_{2} \in W$,
\begin{equation} \label{eq:tilted}
v_{1} \le_{w} v_{2} \iff \ell(w \Rightarrow v_{2}) =
\ell(w \Rightarrow v_{1}) + \ell(v_{1} \Rightarrow v_{2}).
\end{equation}
Namely, $v_{1} \le_{w} v_{2}$ if and only if
there exists a shortest directed path in $\QB$
from $w$ to $v_{2}$ passing through $v_{1}$;
or equivalently, the concatenation of a shortest directed path
from $w$ to $v_{1}$ and one from $v_{1}$ to $v_{2}$
is one from $w$ to $v_{2}$.
\end{dfn}
\begin{prop}[{\cite[Theorem~7.1]{LNSSS1}}] \label{prop:tbmin}
Let $w \in W$, and let $\J$ be a subset of $I$.
Then each coset $v\WJ$, $v \in W$, has a unique minimal element
with respect to $\tb{w}$;
we denote it by $\tbmin{v}{\J}{w}$.
\end{prop}
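\begin{rem}
For example, let $\Fg$ be of type $A_{1}$.
Since $\ell(w \Rightarrow w)=0$ for every $w \in W$,
the definition \eqref{eq:tilted} shows that $w$ is the minimum of $W$
with respect to $\tb{w}$; thus, for $\J=I=\{1\}$,
the coset $v\WJ=W$ satisfies $\tbmin{v}{\J}{w}=w$
for each $w \in W=\{e,\,s_{1}\}$.
\end{rem}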
\subsection{Quantum Lakshmibai-Seshadri paths and the degree function.}
\label{subsec:QLS}
We fix $\lambda \in P^{+}$, and take $\J=\J_{\lambda}$ as in \eqref{eq:J}.
\begin{dfn} \label{dfn:QBa}
For a rational number $0 < \sigma < 1$,
we define $\QBa$ to be the subgraph of $\QBJ$
with the same vertex set but having only those directed edges of the form
$w \edge{\beta} v$ for which
$\sigma\pair{\lambda}{\beta^{\vee}} \in \BZ$ holds.
\end{dfn}
\begin{dfn}\label{dfn:QLS}
A quantum LS path of shape $\lambda$ is a pair
\begin{equation} \label{eq:QLS}
\eta = (v_{1},\,\dots,\,v_{s} \,;\, \sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s}), \quad s \ge 1,
\end{equation}
of a sequence $v_{1},\,\dots,\,v_{s}$
of elements in $\WJu$ with $v_{u} \ne v_{u+1}$
for any $1 \le u \le s-1$ and an increasing sequence
$0 = \sigma_0 < \sigma_1 < \cdots < \sigma_s =1$ of rational numbers
satisfying the condition that there exists a directed path
in $\QBb{\sigma_{u}}$ from $v_{u+1}$ to $v_{u}$
for each $u = 1,\,2,\,\dots,\,s-1$.
\end{dfn}
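\begin{rem}
For example, if $\Fg$ is of type $A_{1}$ and $\lambda=2\vpi_{1}$
(so that $\J=\emptyset$ and $\turn{\lambda}=\{0,\,\tfrac{1}{2},\,1\}$), then
\begin{equation*}
\QLS(2\vpi_{1})=
\bigl\{ (e\,;\,0,\,1),\ (s_{1}\,;\,0,\,1),\
(s_{1},\,e\,;\,0,\,\tfrac{1}{2},\,1),\
(e,\,s_{1}\,;\,0,\,\tfrac{1}{2},\,1) \bigr\},
\end{equation*}
of weights $2\vpi_{1}$, $-2\vpi_{1}$, $0$, $0$, respectively;
both two-step pairs are quantum LS paths because the Bruhat edge
$e \edge{\alpha_{1}} s_{1}$ and the quantum edge $s_{1} \edge{\alpha_{1}} e$
both survive in $\QBb{1/2}$.
\end{rem}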
\begin{rem} \label{rem:QLS}
It is obvious by the definition that if $\eta \in \QLS(\lambda)$
is of the form \eqref{eq:QLS}, then
$\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s} \in \turn{\lambda}$,
where $\turn{\lambda}$ is as defined in Remark~\ref{rem:SLS}.
\end{rem}
We denote by $\QLS(\lambda)$
the set of all quantum LS paths of shape $\lambda$.
We set $\eta_{\lambda}:=(e;0,1) \in \QLS(\lambda)$, and
set $\eta_{\lambda}^{v}:=(v;0,1) \in \QLS(\lambda)$
for $v \in \WJu$.
Also, if $\eta \in \QLS(\lambda)$ is of the form \eqref{eq:QLS},
then we set $\iota(\eta):=v_{1} \in \WJu$ and
$\kappa(\eta):=v_{s} \in \WJu$,
and call them the initial and final directions of $\eta$,
respectively.
We identify $\eta \in \QLS(\lambda)$ of the form \eqref{eq:QLS}
with the piecewise-linear, continuous map
$\eta:[0,1] \rightarrow \BR \otimes_{\BZ} P$
whose ``direction vector'' for the interval
$[\sigma_{u-1},\,\sigma_{u}]$ is $v_{u}\lambda \in P$
for each $1 \le u \le s$, that is,
\begin{equation} \label{eq:eta}
\eta (t) :=
\sum_{k = 1}^{u-1}(\sigma_{k} - \sigma_{k-1}) v_{k}\lambda + (t - \sigma_{u-1}) v_{u}\lambda
\quad
\text{for $t \in [\sigma_{u-1},\,\sigma_u]$, $1 \le u \le s$};
\end{equation}
note that $\BR \otimes_{\BZ} P \cong
(\BR \otimes_{\BZ} P_{\af}^{0})/\BR\delta \subset
(\BR \otimes_{\BZ} P_{\af})/\BR\delta$.
\begin{rem} \label{rem:LScl}
We know from \cite[Theorem~3.3]{LNSSS2} that
the set $\QLS(\lambda)$ of quantum LS paths of shape $\lambda$ is identical
(as a set of piecewise-linear, continuous maps
from $[0,1]$ to $(\BR \otimes_{\BZ} P_{\af})/\BR\delta$)
to the set $\BB(\lambda)_{\cl}$ of
(affine) LS paths of shape $\lambda$ projected by
$\cl:\BR \otimes_{\BZ} P_{\af} \twoheadrightarrow
(\BR \otimes_{\BZ} P_{\af})/\BR\delta$,
which we studied in \cite{NS03}, \cite{NS05}, \cite{NS06}.
\end{rem}
We endow the set $\QLS(\lambda)$ with a crystal structure
with weights in $P \cong P_{\af}^{0}/\BZ\delta \subset P_{\af}/\BZ\delta$ as follows
(see \cite[Sect.~4.2]{LNSSS16}). Let $\eta \in \QLS(\lambda)$
be of the form \eqref{eq:QLS}. We set
\begin{equation} \label{eq:wteta}
\wt (\eta):= \eta(1) = \sum_{u = 1}^{s} (\sigma_{u}-\sigma_{u-1})v_{u}\lambda \in P.
\end{equation}
We define root operators $e_{i}$, $f_{i}$, $i \in I_{\af}$,
in the same manner as in \cite[Sect.~2]{Lit95}. Set
\begin{equation} \label{eq:H2}
\begin{cases}
H^{\eta}_{i}(t) := \pair{\eta(t)}{\alpha_{i}^{\vee}} \quad
\text{for $t \in [0,1]$}, \\[1.5mm]
m^{\eta}_{i} :=
\min \bigl\{ H^{\eta}_{i} (t) \mid t \in [0,1] \bigr\}.
\end{cases}
\end{equation}
Since $\QLS(\lambda) = \BB(\lambda)_{\cl}$,
it follows from \cite[Lemma~4.5\,d)]{Lit95}
(see also \cite[Proposition~4.1.12]{LNSSS16}) that
all local minima of the function $H^{\eta}_{i}(t)$, $t \in [0,1]$,
are integers; in particular,
the minimum value $m^{\eta}_{i}$ is a nonpositive integer
(recall that $\eta(0)=0$, and hence $H^{\eta}_{i}(0)=0$).
We define $e_{i}\eta$ as follows.
If $m^{\eta}_{i}=0$, then we set $e_{i} \eta := \bzero$.
If $m^{\eta}_{i} \le -1$, then we set
\begin{equation} \label{eq:t-e2}
\begin{cases}
t_{1} :=
\min \bigl\{ t \in [0,\,1] \mid
H^{\eta}_{i}(t) = m^{\eta}_{i} \bigr\}, \\[1.5mm]
t_{0} :=
\max \bigl\{ t \in [0,\,t_{1}] \mid
H^{\eta}_{i}(t) = m^{\eta}_{i} + 1 \bigr\};
\end{cases}
\end{equation}
note that $H^{\eta}_{i}(t)$ is
strictly decreasing on the interval $[t_{0},\,t_{1}]$.
Let $1 \le p \le q \le s$ be such that
$\sigma_{p-1} \le t_{0} < \sigma_p$ and $t_{1} = \sigma_{q}$.
Then we define $e_{i}\eta$ by
\begin{equation} \label{eq:eeta}
\begin{split}
& e_{i} \eta := (
v_{1},\,\ldots,\,v_{p},\,\mcr{\ti{s}_{i}v_{p}},\,\mcr{\ti{s}_{i}v_{p+1}},\,\ldots,\,
\mcr{\ti{s}_{i}v_{q}},\,v_{q+1},\,\ldots,\,v_{s} ; \\
& \hspace*{40mm}
\sigma_{0},\,\ldots,\,\sigma_{p-1},\,t_{0},\,\sigma_{p},\,\ldots,\,\sigma_{q}=t_{1},\,
\ldots,\,\sigma_{s}),
\end{split}
\end{equation}
where $\ti{s}_{i}$ is as in \eqref{eq:tis};
if $t_{0} = \sigma_{p-1}$, then we drop $v_{p}$ and $\sigma_{p-1}$, and
if $\ti{s}_{i} v_{q} = v_{q+1}$, then we drop $v_{q+1}$ and $\sigma_{q}=t_{1}$.
Similarly, we define $f_{i}\eta$ as follows.
Observe that $H^{\eta}_{i}(1) - m^{\eta}_{i}$ is a nonnegative integer.
If $H^{\eta}_{i}(1) - m^{\eta}_{i} = 0$, then we set $f_{i} \eta := \bzero$.
If $H^{\eta}_{i}(1) - m^{\eta}_{i} \ge 1$, then we set
\begin{equation} \label{eq:t-f2}
\begin{cases}
t_{0} :=
\max \bigl\{ t \in [0,1] \mid H^{\eta}_{i}(t) = m^{\eta}_{i} \bigr\}, \\[1.5mm]
t_{1} :=
\min \bigl\{ t \in [t_{0},\,1] \mid H^{\eta}_{i}(t) = m^{\eta}_{i} + 1 \bigr\};
\end{cases}
\end{equation}
note that $H^{\eta}_{i}(t)$ is
strictly increasing on the interval $[t_{0},\,t_{1}]$.
Let $0 \le p \le q \le s-1$ be such that $t_{0} = \sigma_{p}$ and
$\sigma_{q} < t_{1} \le \sigma_{q+1}$. Then we define $f_{i}\eta$ by
\begin{equation} \label{eq:feta}
\begin{split}
& f_{i} \eta := ( v_{1},\,\ldots,\,v_{p},\,\mcr{\ti{s}_{i}v_{p+1}},\,\dots,\,
\mcr{\ti{s}_{i} v_{q}},\,\mcr{\ti{s}_{i} v_{q+1}},\,v_{q+1},\,\ldots,\,v_{s} ; \\
& \hspace{40mm}
\sigma_{0},\,\ldots,\,\sigma_{p}=t_{0},\,\ldots,\,\sigma_{q},\,t_{1},\,
\sigma_{q+1},\,\ldots,\,\sigma_{s});
\end{split}
\end{equation}
if $t_{1} = \sigma_{q+1}$, then we drop $v_{q+1}$ and $\sigma_{q+1}$, and
if $v_{p} = \ti{s}_{i} v_{p+1}$, then we drop $v_{p}$ and $\sigma_{p}=t_{0}$.
In addition, we set $e_{i} \bzero = f_{i} \bzero := \bzero$ for all $i \in I_{\af}$.
\begin{thm} \label{thm:QLS}
The set $\QLS(\lambda) \sqcup \{ \bzero \}$ is
stable under the action of the root operators
$e_{i}$ and $f_{i}$, $i \in I_{\af}$.
Moreover, if we set
\begin{equation*}
\begin{cases}
\ve_{i} (\eta) :=
\max \bigl\{ n \ge 0 \mid e_{i}^{n} \eta \neq \bzero \bigr\}, \\[1.5mm]
\vp_{i} (\eta) :=
\max \bigl\{ n \ge 0 \mid f_{i}^{n} \eta \neq \bzero \bigr\}
\end{cases}
\end{equation*}
for $\eta \in \QLS(\lambda)$ and $i \in I_{\af}$, then
the set $\QLS(\lambda)$,
equipped with the maps $\wt$, $e_{i}$, $f_{i}$, $i \in I_{\af}$,
and $\ve_{i}$, $\vp_{i}$, $i \in I_{\af}$,
defined above, is a crystal with weights in
$P \cong P_{\af}^{0}/\BZ\delta \subset P_{\af}/\BZ\delta$.
\end{thm}
The next theorem follows from
\cite[Theorem~3.2 and Proposition~3.23]{NS05}.
\begin{thm} \label{thm:NS05} \mbox{}
\begin{enu}
\item For every $\lambda \in P^{+}$,
the crystal $\QLS(\lambda)$ is connected.
\item Let $\lambda_{1},\dots,\,\lambda_{n} \in P^{+}$,
and set $\lambda:=\lambda_{1}+\cdots+\lambda_{n}$.
There exists a {\rm(}unique{\rm)} isomorphism
\begin{equation} \label{eq:Theta}
\Theta=\Theta_{\lambda_{1},\dots,\lambda_{n}}:
\QLS(\lambda) \stackrel{\sim}{\rightarrow}
\QLS(\lambda_{1}) \otimes \cdots \otimes \QLS(\lambda_{n})
\end{equation}
of crystals that sends $\eta_{\lambda}$ to
$\eta_{\lambda_{1}} \otimes \cdots \otimes \eta_{\lambda_{n}}$.
\end{enu}
\end{thm}
\begin{rem} \label{rem:ass2}
Let $\lambda,\mu,\nu \in P^{+}$.
By the same reasoning as in Remark~\ref{rem:ass1},
we see that the following diagram is commutative:
\begin{equation} \label{eq:ass2a}
\begin{split}
\xymatrix{
\QLS(\lambda+\mu+\nu)
\ar[rr]^-{\Theta_{\lambda+\mu,\nu}}
\ar[d]_{\Theta_{\lambda,\mu+\nu}} & &
\QLS(\lambda+\mu) \otimes \QLS(\nu) \ar[d]^{\Theta_{\lambda\mu} \otimes \id} \\
\QLS(\lambda) \otimes \QLS(\mu+\nu)
\ar[rr]^-{\id \otimes \Theta_{\mu\nu}} & &
\QLS(\lambda) \otimes \QLS(\mu) \otimes \QLS(\nu).
}
\end{split}
\end{equation}
\end{rem}
Now, we define a projection $\cl : (\WJu)_{\af} \twoheadrightarrow \WJu$ by
$\cl (x) := w$ for $x \in (\WJu)_{\af}$ of the form
$x = w\PJ(t_{\xi})$ with $w \in \WJu$ and $\xi \in Q^{\vee}$.
For $\pi = (x_{1},\,\dots,\,x_{s}\,;\,\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s})
\in \SLS(\lambda)$, we define
\begin{equation} \label{eq:clpi}
\cl(\pi):=(\cl(x_{1}),\,\dots,\,\cl(x_{s})\,;\,\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s});
\end{equation}
here, for each $1 \le p < q \le s$ such that $\cl(x_{p})= \cdots = \cl(x_{q})$,
we drop $\cl(x_{p}),\,\dots,\,\cl(x_{q-1})$ and $\sigma_{p},\,\dots,\,\sigma_{q-1}$.
We set $\cl(\bzero):=\bzero$ by convention.
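To illustrate the definition: if, say, $\pi = (w_{1}\PJ(t_{\xi_{1}}),\,w_{2}\,;\,0,\,1/2,\,1) \in \SLS(\lambda)$
with $w_{1},\,w_{2} \in \WJu$ and $\xi_{1} \in Q^{\vee}$, then
\begin{equation*}
\cl(\pi) =
\begin{cases}
(w_{1},\,w_{2}\,;\,0,\,1/2,\,1) & \text{if $w_{1} \ne w_{2}$}, \\[1mm]
(w_{2}\,;\,0,\,1) & \text{if $w_{1} = w_{2}$};
\end{cases}
\end{equation*}
in the latter case, we drop $\cl(x_{1})=w_{1}$ and $\sigma_{1}=1/2$.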
We know from \cite[Sect.~6.2]{NS16} that $\cl(\pi) \in \QLS(\lambda)$
for all $\pi \in \SLS(\lambda)$. Also, we see from the definitions that
\begin{equation} \label{eq:cl}
\begin{cases}
\wt(\cl(\pi)) = \cl(\wt(\pi)), \\
\cl(e_{i}\pi) = e_{i} \cl(\pi), \ \cl(f_{i}\pi) = f_{i} \cl(\pi), \\
\ve_{i}(\cl(\pi))=\ve_{i}(\pi), \ \vp_{i}(\cl(\pi))=\vp_{i}(\pi)
\end{cases}
\end{equation}
for all $\pi \in \SLS(\lambda)$ and $i \in I_{\af}$.
We know the following lemma from \cite[Lemma~6.2.3]{NS16};
recall that $\SLS_{0}(\lambda)$ denotes the connected component of $\SLS(\lambda)$
containing $\pi_{\lambda}=(e\,;\,0,1)$.
\begin{lem} \label{lem:deg}
For each $\eta \in \QLS(\lambda)$, there exists a unique
$\pi_{\eta} \in \SLS_{0}(\lambda)$ such that
$\cl(\pi_{\eta})=\eta$ and $\kappa(\pi_{\eta}) = \kappa(\eta) \in \WJu$.
\end{lem}
We define the (tail) degree function $\deg_{\lambda}:
\QLS(\lambda) \rightarrow \BZ_{\le 0}$ as follows.
Let $\eta \in \QLS(\lambda)$, and
take $\pi_{\eta} \in \SLS_{0}(\lambda)$ as in Lemma~\ref{lem:deg};
we see from the argument in \cite[Sect.~6.2]{NS16} that
$\wt(\pi_{\eta}) = \lambda - \gamma + k\delta$
for some $\gamma \in Q^{+}$ and $k \in \BZ_{\le 0}$.
Then we set $\deg_{\lambda}(\eta):=k$.
Now, for $\eta = (v_{1},\,\dots,\,v_{s} \,;\,
\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s}) \in \QLS(\lambda)$,
we set
\begin{equation} \label{eq:bxi}
\begin{cases}
\bxi{\eta}:=(\xi_1,\,\dots,\,\xi_{s-1},\,\xi_{s}), \quad \text{where} \\[2mm]
\xi_{s}:=0, \quad
\xi_{u}:=\xi_{u+1} + \wt^{\J}(v_{u+1} \Rightarrow v_{u})
\quad \text{for $1 \le u \le s-1$};
\end{cases}
\end{equation}
for the definition of $\wt^{\J}(w \Rightarrow v)$, see Section~\ref{subsec:QBG}.
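For example, if $s=3$, then $\xi_{3}=0$, $\xi_{2}=\wt^{\J}(v_{3} \Rightarrow v_{2})$, and
$\xi_{1}=\wt^{\J}(v_{3} \Rightarrow v_{2})+\wt^{\J}(v_{2} \Rightarrow v_{1})$;
in general, the recursion above telescopes to
\begin{equation*}
\xi_{u} = \sum_{u'=u}^{s-1} \wt^{\J}(v_{u'+1} \Rightarrow v_{u'})
\quad \text{for $1 \le u \le s-1$}.
\end{equation*}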
\begin{prop} \label{prop:deg}
Keep the notation and setting above. It holds that
\begin{equation} \label{eq:pieta}
\pi_{\eta} =
(v_{1}\PJ(t_{\xi_1}),\,\dots,\,v_{s-1}\PJ(t_{\xi_{s-1}}),\,v_{s} \,;\,
\sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s}),
\end{equation}
\begin{equation} \label{eq:deg0}
\deg_{\lambda}(\eta) = - \sum_{u=1}^{s-1}
\sigma_{u} \pair{\lambda}{\wt^{\J}(v_{u+1} \Rightarrow v_{u})}.
\end{equation}
\end{prop}
\begin{proof}
Equality \eqref{eq:pieta} can be shown in exactly the same way as
\cite[Theorem~4.1.1]{LNSSS15}.
Equality \eqref{eq:deg0} follows from
\cite[Theorem~4.1.1]{LNSSS15}; alternatively, it can be shown by
direct computation, using \eqref{eq:pieta} and \eqref{eq:wt}.
\end{proof}
Also, for an arbitrary $w \in W$ and $\eta =
(v_{1},\,\dots,\,v_{s} \,;\, \sigma_{0},\,\sigma_{1},\,\dots,\,\sigma_{s}) \in \QLS(\lambda)$,
we define the degree of $\eta$ at $w\lambda$
(see \cite[Sect.~3.2]{NNS1} and \cite[Sect.~2.3]{NNS2}) by
\begin{equation} \label{eq:degw}
\deg_{w\lambda}(\eta):=
- \sum_{u=1}^{s}
\sigma_{u} \pair{\lambda}{\wt^{\J}(v_{u+1} \Rightarrow v_{u})}, \quad
\text{with $v_{s+1}:=\mcr{w}$}.
\end{equation}
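Since $\sigma_{s}=1$ and $v_{s}=\kappa(\eta)$, the sum in \eqref{eq:degw}
differs from that in \eqref{eq:deg0} only in the term for $u=s$; that is,
\begin{equation*}
\deg_{w\lambda}(\eta) = \deg_{\lambda}(\eta) -
\pair{\lambda}{\wt^{\J}(\mcr{w} \Rightarrow \kappa(\eta))}.
\end{equation*}
In particular, $\deg_{w\lambda}(\eta)=\deg_{\lambda}(\eta)$ if $\mcr{w}=\kappa(\eta)$.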
We set
\begin{equation}
\gch_{w\lambda} \QLS(\lambda) :=
\sum_{\eta \in \QLS(\lambda)} q^{\deg_{w\lambda}(\eta)}e^{\wt(\eta)}.
\end{equation}
\begin{rem} \label{rem:degw}
Let $\eta \in \QLS(\lambda)$, and $w \in W$.
We see by \eqref{eq:Txi1a} that
$\wt (\pi_{\eta} \cdot T_{\wt^{\J}(\mcr{w} \Rightarrow \kappa(\eta))}) =
\wt (\eta) + \deg_{w\lambda}(\eta)\delta$.
\end{rem}
Let $\lambda_{1},\,\dots,\,\lambda_{n} \in P^{+}$,
and set $\lambda:=\lambda_{1}+\cdots+\lambda_{n} \in P^{+}$.
Then the following diagram is commutative:
\begin{equation} \label{eq:CD}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda)
\ar[rr]^-{\Xi_{\lambda_{1},\dots,\lambda_{n}}}
\ar[d]_{\cl}
& & (\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n}))_{0}
\ar[d]^{\cl \otimes \cdots \otimes \cl} \\
\QLS(\lambda)
\ar[rr]^-{\Theta_{\lambda_{1},\dots,\lambda_{n}}}
& & \QLS(\lambda_{1}) \otimes \cdots \otimes \QLS(\lambda_{n}),}
\end{split}
\end{equation}
where $\Xi_{\lambda_{1},\dots,\lambda_{n}}$ and
$\Theta_{\lambda_{1},\dots,\lambda_{n}}$ are the isomorphisms of crystals
in Theorems~\ref{thm:SMT} and \ref{thm:NS05}\,(2), respectively.
Indeed, observe that both of the maps
$(\cl \otimes \cdots \otimes \cl) \circ
\Xi_{\lambda_{1},\dots,\lambda_{n}}$ and
$\Theta_{\lambda_{1},\dots,\lambda_{n}} \circ \cl$ send
$\pi_{\lambda}$
to $\eta_{\lambda_{1}} \otimes \cdots \otimes \eta_{\lambda_{n}}$,
and that these two maps commute with the root operators
(see \eqref{eq:cl}). Because $\SLS_{0}(\lambda) \subset \SLS(\lambda)$
is connected and contains $\pi_{\lambda}$,
we conclude that the diagram above is commutative.
\section{Main result and its proof.}
\label{sec:main}
\subsection{Main result.}
\label{subsec:main}
In this subsection, we fix $\lambda,\mu \in P^{+}$,
and take $\J_{\lambda}$, $\J_{\mu}$, $\J_{\lambda+\mu}$ as in \eqref{eq:J};
note that $\J_{\lambda+\mu} = \J_{\lambda} \cap \J_{\mu}$.
Recall from \eqref{eq:CD} that the following diagram commutes:
\begin{equation} \label{eq:CD2}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda+\mu)
\ar[rr]^-{\Xi_{\lambda\mu}}
\ar[d]_{\cl}
& & (\SLS(\lambda) \otimes \SLS(\mu))_{0}
\ar[d]^{\cl \otimes \cl} \\
\QLS(\lambda+\mu)
\ar[rr]^-{\Theta_{\lambda\mu}}
& & \QLS(\lambda) \otimes \QLS(\mu).}
\end{split}
\end{equation}
For $\eta=(v_1,\,\dots,\,v_{s};
\sigma_{0},\,\sigma_1,\,\dots,\,\sigma_s) \in \QLS(\mu)$ and $w \in W$,
we define
\begin{equation} \label{eq:ti1}
\begin{cases}
\tiv{\eta}{w}:=
(\ti{v}_1,\,\dots,\,\ti{v}_{s},\,\ti{v}_{s+1}), \quad \text{where} \\[2mm]
\ti{v}_{s+1}:=w, \quad \ti{v}_{u}:=\tbmin{v_{u}}{\J_{\mu}}{\ti{v}_{u+1}}
\quad \text{for $1 \le u \le s$};
\end{cases}
\end{equation}
note that $\ti{v}_{u} \in v_{u}\WS{\mu}$ for $1 \le u \le s$.
In this notation, we set $\io{\eta}{w}:=\ti{v}_{1}$, and call it
the initial direction of $\eta$ with respect to $w$. Also, we define
\begin{equation} \label{eq:ti2}
\begin{cases}
\tixi{\eta}{w}:=
(\ti{\xi}_1,\,\dots,\,\ti{\xi}_{s}), \quad \text{where} \\[2mm]
\ti{\xi}_{s}:=\wt(\ti{v}_{s+1} \Rightarrow \ti{v}_{s}), \quad
\ti{\xi}_{u}:=\ti{\xi}_{u+1} + \wt(\ti{v}_{u+1} \Rightarrow \ti{v}_{u})
\quad \text{for $1 \le u \le s-1$}.
\end{cases}
\end{equation}
In this notation, we set $\ze{\eta}{w}:=\ti{\xi}_{1}$.
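In the simplest case $s=1$, i.e., $\eta=(v_{1}\,;\,0,\,1)$, these definitions
reduce to $\ti{v}_{2}=w$,
$\io{\eta}{w}=\ti{v}_{1}=\tbmin{v_{1}}{\J_{\mu}}{w}$, and
$\ze{\eta}{w}=\ti{\xi}_{1}=\wt(w \Rightarrow \ti{v}_{1})$.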
\begin{rem} \label{rem:equiv}
Keep the notation and setting above.
We see from Lemma~\ref{lem:wtS} that
\begin{equation*}
\begin{cases}
\wt(\ti{v}_{u+1} \Rightarrow \ti{v}_{u}) \equiv
\wt^{\J_{\mu}}(v_{u+1} \Rightarrow v_{u}) \mod \QSv{\mu} \quad
\text{for every $1 \le u \le s-1$}, \\
\wt(\ti{v}_{s+1} \Rightarrow \ti{v}_{s}) \equiv
\wt^{\J_{\mu}}(\mcr{w}^{\J_{\mu}} \Rightarrow v_{s}) =
\wt^{\J_{\mu}}(\mcr{w}^{\J_{\mu}} \Rightarrow \kappa(\eta)) \mod \QSv{\mu}.
\end{cases}
\end{equation*}
If $\bxi{\eta}=(\xi_{1},\,\dots,\,\xi_{s-1},\,\xi_{s})$ (see \eqref{eq:bxi}),
then $\ti{\xi}_{u} \equiv
\xi_{u} + \wt^{\J_{\mu}}(\mcr{w}^{\J_{\mu}} \Rightarrow \kappa(\eta))$
mod $\QSv{\mu}$ for all $1 \le u \le s$.
\end{rem}
The following theorem is the main result of this paper.
\begin{thm} \label{thm:main}
Keep the notation and setting above.
Let $\eta \in \QLS(\lambda+\mu)$, and write
$\Theta_{\lambda\mu}(\eta) \in \QLS(\lambda) \otimes \QLS(\mu)$ as
$\Theta_{\lambda\mu}(\eta) = \eta_{1} \otimes \eta_{2}$
with $\eta_{1} \in \QLS(\lambda)$ and
$\eta_{2} \in \QLS(\mu)$. Let $w \in W$.
Then the following equality holds{\rm:}
\begin{equation} \label{eq:main}
\Xi_{\lambda\mu}(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))}) =
\pi_{\eta_{1}} \cdot T_{\wt(\io{\eta_2}{w} \Rightarrow \kappa(\eta_1))+ \ze{\eta_2}{w}} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \kappa(\eta_2))}.
\end{equation}
\end{thm}
We will give a proof of Theorem~\ref{thm:main}
in Section~\ref{subsec:prf-main1}.
As an application of Theorem~\ref{thm:main},
using Remarks~\ref{rem:degw} and \ref{rem:equiv},
we obtain the following.
\begin{cor} \label{cor:main}
Keep the notation and setting in Theorem~\ref{thm:main}.
It holds that
\begin{equation}
\gch_{w(\lambda+\mu)} \QLS(\lambda+\mu) =
\sum_{\eta \in \QLS(\mu)} e^{\wt(\eta)}
q^{\deg_{w\mu}(\eta)-\pair{\lambda}{\ze{\eta}{w}} }
\gch_{\io{\eta}{w}\lambda} \QLS(\lambda).
\end{equation}
\end{cor}
\subsection{Some technical lemmas concerning the quantum Bruhat graph.}
\label{subsec:lem1}
\begin{lem}[{see \cite[Lemma~7.7]{LNSSS1}}] \label{lem:dia}
Let $w,v \in W$, and $i \in I_{\af}$.
\begin{enu}
\item If $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ and
$v^{-1}\ti{\alpha}_{i} \in \Delta^{-}$, then
\begin{equation*}
\begin{cases}
\ell(w \Rightarrow v) = \ell(\ti{s}_{i}w \Rightarrow v) + 1 = \ell(w \Rightarrow \ti{s}_{i}v) + 1, \\
\wt(w \Rightarrow v)
= \wt(\ti{s}_{i}w \Rightarrow v) + \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
= \wt(w \Rightarrow \ti{s}_{i}v) - \delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee}.
\end{cases}
\end{equation*}
\item If $w^{-1}\ti{\alpha}_{i},\,v^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
or if $w^{-1}\ti{\alpha}_{i},\,v^{-1}\ti{\alpha}_{i} \in \Delta^{-}$, then
\begin{equation*}
\begin{cases}
\ell(w \Rightarrow v) = \ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}v), \\
\wt(w \Rightarrow v)
= \wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}v) + \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
- \delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee}.
\end{cases}
\end{equation*}
\end{enu}
\end{lem}
\begin{lem}[{see \cite[Propositions~5.10 and 5.11]{LNSSS1}}] \label{lem:edge}
Let $w \in W$, and $i \in I_{\af}$.
If $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$, then
$w \edge{w^{-1}\ti{\alpha}_{i}} \ti{s}_{i}w$ is a directed edge of $\QB${\rm;}
this edge is a quantum edge if and only if $i = 0$.
\end{lem}
\begin{lem} \label{lem:tb}
Let $\J$ be a subset of $I$.
Let $w \in W$ and $i \in I_{\af}$
be such that $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$.
Let $v \in W$, and set $\ti{v}:=\tbmin{v}{\J}{w}$.
\begin{enu}
\item
If $v^{-1}\ti{\alpha}_{i} \in \Delta^{+} \setminus \DeJ^{+}$, then
$\tbmin{\ti{s}_{i}v}{\J}{\ti{s}_{i}w} = \ti{s}_{i}\ti{v}$.
\item
If $v^{-1}\ti{\alpha}_{i} \in \Delta^{-} \setminus \DeJ^{-}$,
then $\tbmin{v}{\J}{\ti{s}_{i}w} = \ti{v}$.
\item
If $v^{-1}\ti{\alpha}_{i} \in \DeJ$, then
$\ti{v}^{-1} \ti{\alpha}_{i} \in \Delta^{+}$.
Moreover, if we set
$\ti{v}':=\tbmin{v}{\J}{\ti{s}_{i}w}$, then
\begin{equation} \label{eq:tb0}
\ti{v}'=
\begin{cases}
\ti{v} & \text{\rm if $(\ti{v}')^{-1} \ti{\alpha}_{i} \in \Delta^{+}$}, \\[1.5mm]
\ti{s}_{i}\ti{v} & \text{\rm if $(\ti{v}')^{-1} \ti{\alpha}_{i} \in \Delta^{-}$}.
\end{cases}
\end{equation}
\end{enu}
\end{lem}
\begin{proof}
(1) First we show that
$u^{-1}\ti{\alpha}_{i} \in \Delta^{-}$ for all $u \in \ti{s}_{i}v\WJ$.
Let us write $u \in \ti{s}_{i}v\WJ$ as
$u = \ti{s}_{i}vz$ for some $z \in \WJ$.
We see by the assumption of part (1) that
$u^{-1}\ti{\alpha}_{i} = - z^{-1}v^{-1}\ti{\alpha}_{i} \in z^{-1}(\Delta^{-} \setminus \DeJ^{-})$.
Since $\Delta^{-} \setminus \DeJ^{-}$ is stable under the action of $\WJ$,
we deduce that
$u^{-1}\ti{\alpha}_{i} \in \Delta^{-} \setminus \DeJ^{-} \subset \Delta^{-}$, as desired.
We set $\ti{v}':=\tbmin{\ti{s}_{i}v}{\J}{\ti{s}_{i}w}$.
Since $\ti{s}_{i} \ti{v} \in \ti{s}_{i}v\WJ$,
we have $\ti{v}' \tb{\ti{s}_{i}w} \ti{s}_{i} \ti{v}$, and hence
\begin{equation} \label{eq:tb1a}
\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i} \ti{v}) =
\ell(\ti{s}_{i}w \Rightarrow \ti{v}') + \ell(\ti{v}' \Rightarrow \ti{s}_{i} \ti{v})
\end{equation}
by the definition of $\tb{\ti{s}_{i}w}$.
Since $(\ti{s}_{i} \ti{v})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$, as seen above,
and $(\ti{s}_{i}w)^{-1}\ti{\alpha}_{i} \in \Delta^{-}$ by the assumption,
we deduce from Lemma~\ref{lem:dia}\,(2) that
$\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i} \ti{v}) =
\ell(w \Rightarrow \ti{v})$.
Similarly, since $(\ti{s}_{i}w)^{-1}\ti{\alpha}_{i},\,
(\ti{v}')^{-1}\ti{\alpha}_{i},\,
(\ti{s}_{i} \ti{v})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$,
we deduce that
$\ell(\ti{s}_{i}w \Rightarrow \ti{v}') =
\ell(w \Rightarrow \ti{s}_{i}\ti{v}')$ and
$\ell(\ti{v}' \Rightarrow \ti{s}_{i} \ti{v}) =
\ell(\ti{s}_{i}\ti{v}' \Rightarrow \ti{v})$
from Lemma~\ref{lem:dia}\,(2).
Substituting these equalities into \eqref{eq:tb1a},
we obtain
$\ell(w \Rightarrow \ti{v}) =
\ell(w \Rightarrow \ti{s}_{i}\ti{v}') +
\ell(\ti{s}_{i}\ti{v}' \Rightarrow \ti{v})$,
which implies that $\ti{s}_{i}\ti{v}' \tb{w} \ti{v}$.
Since $\ti{v}=\tbmin{v}{\J}{w}$ and $\ti{s}_{i}\ti{v}' \in v\WJ$,
it follows that $\ti{s}_{i}\ti{v}' = \ti{v}$, and hence
$\ti{v}'=\ti{s}_{i}\ti{v}$.
(2) We set $\ti{v}':=\tbmin{v}{\J}{\ti{s}_{i}w}$.
Since $\ti{v} \in v\WJ$, we have $\ti{v}' \tb{\ti{s}_{i}w} \ti{v}$, and hence
\begin{equation} \label{eq:tb2a}
\ell(\ti{s}_{i}w \Rightarrow \ti{v}) =
\ell(\ti{s}_{i}w \Rightarrow \ti{v}') + \ell(\ti{v}' \Rightarrow \ti{v})
\end{equation}
by the definition of $\tb{\ti{s}_{i}w}$.
Note that $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ by the assumption.
If we write $\ti{v} \in v\WJ$ as $\ti{v}=vz$ for some $z \in \WJ$,
then we see by the assumption of part (2)
that $\ti{v}^{-1}\ti{\alpha}_{i} = z^{-1}v^{-1}\ti{\alpha}_{i}
\in z^{-1}(\Delta^{-} \setminus \DeJ^{-}) \subset
\Delta^{-} \setminus \DeJ^{-} \subset \Delta^{-}$.
Similarly, we see that $(\ti{v}')^{-1}\ti{\alpha}_{i} \in \Delta^{-}$
since $\ti{v}' \in v\WJ$.
Therefore, we see from Lemma~\ref{lem:dia}\,(1) that
$\ell(w \Rightarrow \ti{v}) = \ell(\ti{s}_{i}w \Rightarrow \ti{v}) + 1$ and
$\ell(w \Rightarrow \ti{v}') = \ell(\ti{s}_{i}w \Rightarrow \ti{v}') +1$.
Substituting these equalities into \eqref{eq:tb2a}, we obtain
$\ell(w \Rightarrow \ti{v}) =
\ell(w \Rightarrow \ti{v}') +
\ell(\ti{v}' \Rightarrow \ti{v})$,
which implies that $\ti{v}' \tb{w} \ti{v}$.
Since $\ti{v}=\tbmin{v}{\J}{w}$ and $\ti{v}' \in v\WJ$,
we get $\ti{v}' = \ti{v}$.
(3) Note that $\ti{s}_{i}v\WJ = v\WJ$ since $v^{-1}\ti{\alpha}_{i} \in \DeJ$.
Suppose, for a contradiction, that
$\ti{v}^{-1}\ti{\alpha}_{i} \in \Delta^{-}$.
Since $\ti{s}_{i}\ti{v} \in \ti{s}_{i}v\WJ = v\WJ$ and
$\ti{v}=\tbmin{v}{\J}{w}$,
we have $\ti{v} \tb{w} \ti{s}_{i}\ti{v}$, and hence
\begin{equation} \label{eq:tb3a}
\ell(w \Rightarrow \ti{s}_{i}\ti{v}) =
\ell(w \Rightarrow \ti{v}) + \ell(\ti{v} \Rightarrow \ti{s}_{i}\ti{v}) \ge
\ell(w \Rightarrow \ti{v}).
\end{equation}
Also, since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ and
$\ti{v}^{-1}\ti{\alpha}_{i} \in \Delta^{-}$ by our assumptions,
we see from Lemma~\ref{lem:dia}\,(1) that
$\ell(w \Rightarrow \ti{s}_{i}\ti{v}) = \ell(w \Rightarrow \ti{v})-1 <
\ell(w \Rightarrow \ti{v})$, which contradicts \eqref{eq:tb3a}.
Thus we obtain $\ti{v}^{-1}\ti{\alpha}_{i} \in \Delta^{+}$.
\paragraph{Case 1.}
Assume that $(\ti{v}')^{-1}\ti{\alpha}_{i} \in \Delta^{+}$.
Since $\ti{v}' \in v\WJ$, we have $\ti{v} \tb{w} \ti{v}'$, and hence
\begin{equation} \label{eq:tb3b}
\ell(w \Rightarrow \ti{v}') =
\ell(w \Rightarrow \ti{v}) + \ell(\ti{v} \Rightarrow \ti{v}')
\end{equation}
by the definition of $\tb{w}$.
Recall that $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ by the assumption.
Also, recall that $\ti{v}^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ as seen above,
and that $(\ti{v}')^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ by the assumption of Case 1.
Therefore, by Lemma~\ref{lem:dia}\,(2), we deduce that
$\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}') = \ell(w \Rightarrow \ti{v}')$,
$\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}) = \ell(w \Rightarrow \ti{v})$, and
$\ell(\ti{s}_{i}\ti{v} \Rightarrow \ti{s}_{i}\ti{v}') = \ell(\ti{v} \Rightarrow \ti{v}')$.
Substituting these equalities into \eqref{eq:tb3b}, we obtain
\begin{equation} \label{eq:tb3c}
\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}') =
\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}) +
\ell(\ti{s}_{i}\ti{v} \Rightarrow \ti{s}_{i}\ti{v}').
\end{equation}
Since $\ti{s}_{i}\ti{v} \in \ti{s}_{i}v\WJ=v\WJ$,
and since $\ti{v}'=\tbmin{v}{\J}{\ti{s}_{i}w}$,
we have $\ti{v}' \tb{\ti{s}_{i}w} \ti{s}_{i}\ti{v}$, and hence
$\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}) =
\ell(\ti{s}_{i}w \Rightarrow \ti{v}') + \ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v})$.
Substituting this equality into \eqref{eq:tb3c}, we obtain
\begin{equation} \label{eq:tb3d}
\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}') =
\ell(\ti{s}_{i}w \Rightarrow \ti{v}') + \ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v}) +
\ell(\ti{s}_{i}\ti{v} \Rightarrow \ti{s}_{i}\ti{v}').
\end{equation}
Since $\ti{s}_{i}\ti{v}' \in \ti{s}_{i}v\WJ=v\WJ$, we have
$\ti{v}' \tb{\ti{s}_{i}w} \ti{s}_{i}\ti{v}'$,
and hence
$\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}') =
\ell(\ti{s}_{i}w \Rightarrow \ti{v}') +
\ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v}')$.
Combining this equality and \eqref{eq:tb3d},
we obtain
$\ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v}) +
\ell(\ti{s}_{i}\ti{v} \Rightarrow \ti{s}_{i}\ti{v}') =
\ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v}')$.
Since $(\ti{v}')^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ by the assumption of Case 1,
it follows from Lemma~\ref{lem:edge} that
$\ti{v}' \edge{(\ti{v}')^{-1}\ti{\alpha}_{i}} \ti{s}_{i}\ti{v}'$
is a directed edge of $\QB$,
which implies that $\ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v}') = 1$.
Hence either $\ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v})=0$ or
$\ell(\ti{s}_{i}\ti{v} \Rightarrow \ti{s}_{i}\ti{v}')=0$.
If $\ell(\ti{v}' \Rightarrow \ti{s}_{i}\ti{v}) = 0$, then
$\ti{v}' = \ti{s}_{i}\ti{v}$. However, this contradicts our assumption that
$(\ti{v}')^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ since
$(\ti{s}_{i}\ti{v})^{-1}\ti{\alpha}_{i} = -\ti{v}^{-1}\ti{\alpha}_{i} \in \Delta^{-}$.
Therefore, we obtain $\ell(\ti{s}_{i}\ti{v} \Rightarrow \ti{s}_{i}\ti{v}')=0$,
from which we conclude that $\ti{s}_{i}\ti{v} = \ti{s}_{i}\ti{v}'$,
and hence $\ti{v} = \ti{v}'$.
\paragraph{Case 2.}
Assume that $(\ti{v}')^{-1}\ti{\alpha}_{i} \in \Delta^{-}$.
Since $\ti{s}_{i} \ti{v} \in \ti{s}_{i}v\WJ = v\WJ$,
we have $\ti{v}' \tb{\ti{s}_{i}w} \ti{s}_{i} \ti{v}$, and hence
\begin{equation} \label{eq:tb3e}
\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i} \ti{v}) =
\ell(\ti{s}_{i}w \Rightarrow \ti{v}') + \ell(\ti{v}' \Rightarrow \ti{s}_{i} \ti{v}).
\end{equation}
Since $\ti{v}^{-1}\ti{\alpha}_{i},\,
w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
we deduce from Lemma~\ref{lem:dia}\,(2) that
$\ell(\ti{s}_{i}w \Rightarrow \ti{s}_{i} \ti{v}) =
\ell(w \Rightarrow \ti{v})$.
Similarly, since $(\ti{s}_{i}w)^{-1}\ti{\alpha}_{i},\,
(\ti{v}')^{-1}\ti{\alpha}_{i},\,
(\ti{s}_{i} \ti{v})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$,
we deduce that
$\ell(\ti{s}_{i}w \Rightarrow \ti{v}') =
\ell(w \Rightarrow \ti{s}_{i}\ti{v}')$ and
$\ell(\ti{v}' \Rightarrow \ti{s}_{i} \ti{v}) =
\ell(\ti{s}_{i}\ti{v}' \Rightarrow \ti{v})$
from Lemma~\ref{lem:dia}\,(2).
Substituting these equalities into \eqref{eq:tb3e},
we obtain
$\ell(w \Rightarrow \ti{v}) =
\ell(w \Rightarrow \ti{s}_{i}\ti{v}') +
\ell(\ti{s}_{i}\ti{v}' \Rightarrow \ti{v})$,
which implies that $\ti{s}_{i}\ti{v}' \tb{w} \ti{v}$.
Since $\ti{v}=\tbmin{v}{\J}{w}$ and $\ti{s}_{i}\ti{v}' \in \ti{s}_{i}v\WJ = v\WJ$,
it follows that $\ti{s}_{i}\ti{v}' = \ti{v}$, and hence
$\ti{v}'=\ti{s}_{i}\ti{v}$.
This completes the proof of Lemma~\ref{lem:tb}.
\end{proof}
\subsection{Similarity maps for $\SLS(\lambda)$ and $\QLS(\lambda)$.}
\label{subsec:sim}
Let $\lambda \in P^{+}$, and take $\J=\J_{\lambda}$ as in \eqref{eq:J};
recall the definition of $\turn{\lambda}$ from Remark~\ref{rem:SLS}.
Take $N=N_{\lambda} \in \BZ_{\ge 1}$ such that
\begin{equation} \label{eq:N}
N\sigma=N_{\lambda}\sigma \in \BZ \quad \text{for all $\sigma \in \turn{\lambda}$}.
\end{equation}
We define $\Sigma_{N}:\SLS(\lambda) \rightarrow
\SLS(\lambda)^{\otimes N}$ as follows.
Let $\pi = (x_{1},\,\dots,\,x_{s};\sigma_{0},\sigma_{1},\dots,\sigma_{s})
\in \SLS(\lambda)$; note that $N\sigma_{u} \in \BZ$ for all $0 \le u \le s$
(see Remark~\ref{rem:SLS}). We set
\begin{equation}
\Sigma_{N}(\pi) : =
(\pi_{\lambda}^{x_{1}})^{\otimes N(\sigma_{1}-\sigma_{0})} \otimes
(\pi_{\lambda}^{x_{2}})^{\otimes N(\sigma_{2}-\sigma_{1})} \otimes \cdots \otimes
(\pi_{\lambda}^{x_{s}})^{\otimes N(\sigma_{s}-\sigma_{s-1})} \in \SLS(\lambda)^{\otimes N};
\end{equation}
recall that $\pi_{\lambda}^{x}=(x;0,1) \in \SLS(\lambda)$ for $x \in (\WJu)_{\af}$.
We set $\Sigma_N (\bzero) := \bzero$ by convention.
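For example, if $N=2$ and $\pi=(x_{1},\,x_{2}\,;\,0,\,1/2,\,1) \in \SLS(\lambda)$,
then $N(\sigma_{1}-\sigma_{0})=N(\sigma_{2}-\sigma_{1})=1$, and hence
$\Sigma_{2}(\pi)=\pi_{\lambda}^{x_{1}} \otimes \pi_{\lambda}^{x_{2}} \in \SLS(\lambda)^{\otimes 2}$.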
We know the following proposition from \cite[Proposition~5.24]{INS}.
\begin{prop} \label{prop:sim1}
The map $\Sigma_{N}:\SLS(\lambda) \rightarrow
\SLS(\lambda)^{\otimes N}$ is an injective map such that
$\Sigma_{N}(\pi_{\lambda})=\pi_{\lambda}^{\otimes N}$, and
for all $\pi \in \SLS(\lambda)$ and $i \in I_{\af}$,
\begin{align}
& \wt(\Sigma_N (\pi)) = N \wt(\pi), &
& \ve_i (\Sigma_N (\pi)) = N \ve_i (\pi), &
& \vp_i (\Sigma_N (\pi)) = N \vp_i (\pi), \label{eq:SN1} \\
& \Sigma_N (e_i \pi) = e_i^N \Sigma_N (\pi), &
& \Sigma_N (f_i \pi) = f_i^N \Sigma_N (\pi). \label{eq:SN2}
\end{align}
\end{prop}
\begin{rem} \label{rem:sim1}
By \eqref{eq:SN2}, we see that
$\Sigma_{N}(\SLS_{0}(\lambda)) \subset
(\SLS(\lambda)^{\otimes N})_{0}$.
\end{rem}
For each $\pi = (x_{1},\,\dots,\,x_{s};\sigma_{0},\sigma_{1},\dots,\sigma_{s})
\in \SLS(\lambda)$, we see from the definition of semi-infinite LS paths that
$\Sigma_{N}'(\pi):=
(x_{1},\,\dots,\,x_{s};\sigma_{0},\sigma_{1},\dots,\sigma_{s})$ is contained in $\SLS(N\lambda)$.
Hence the map $\Sigma_{N}':\SLS(\lambda) \rightarrow \SLS(N\lambda)$,
$\pi \mapsto \Sigma_{N}'(\pi)$, is an analogue of the ``$N$-multiple'' map
in \cite[page~504]{Lit95}; it is verified
in the same way as \cite[Lemma~2.4]{Lit95} that $\Sigma_{N}'$ has
the same properties as \eqref{eq:SN1} and \eqref{eq:SN2}.
In particular, we see that $\Sigma_{N}'(\SLS_{0}(\lambda)) \subset
\SLS_{0}(N\lambda)$. Also, the following diagram is commutative:
\begin{equation} \label{eq:CDs1}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda)
\ar[r]^{\Sigma'_{N}}
\ar[d]_{\Sigma_{N}} &
\SLS_{0}(N\lambda) \ar[ld]^{\Xi_{\lambda}^{(N)}} \\
(\SLS(\lambda)^{\otimes N})_{0},
& }
\end{split}
\end{equation}
where $\Xi_{\lambda}^{(N)}$ is the isomorphism of crystals
given by Theorem~\ref{thm:SMT}, applied to
the decomposition $N\lambda=\lambda+\cdots+\lambda$ ($N$ times).
Indeed, both of the maps $\Sigma_{N}$ and $\Xi_{\lambda}^{(N)} \circ \Sigma_{N}'$
send the element $\pi_{\lambda}$ to
$\pi_{\lambda}^{\otimes N}$,
and have property \eqref{eq:SN2}.
Since $\SLS_{0}(\lambda) \subset \SLS(\lambda)$
is connected and contains $\pi_{\lambda}$, we conclude that
the diagram above is commutative.
We define $\Sigma_{N}:\QLS(\lambda) \rightarrow
\QLS(\lambda)^{\otimes N}$ and
$\Sigma_{N}':\QLS(\lambda) \rightarrow
\QLS(N\lambda)$ in exactly the same way as above;
we deduce that these maps $\Sigma_{N}$ and $\Sigma_{N}'$ have
the same properties as \eqref{eq:SN1} and \eqref{eq:SN2}, and that
the following diagram is commutative:
\begin{equation} \label{eq:CDs2}
\begin{split}
\xymatrix{
\QLS(\lambda)
\ar[r]^{\Sigma'_{N}}
\ar[d]_{\Sigma_{N}} &
\QLS(N\lambda) \ar[ld]^{\Theta_{\lambda}^{(N)}} \\
\QLS(\lambda)^{\otimes N},
& }
\end{split}
\end{equation}
where $\Theta_{\lambda}^{(N)}$ is the isomorphism of crystals
given by Theorem~\ref{thm:NS05}\,(2), applied to
the decomposition $N\lambda=\lambda+\cdots+\lambda$ ($N$ times).
Moreover, the same argument as above shows that
the following diagram is commutative:
\begin{equation} \label{eq:CD3}
\begin{split}
\xymatrix{
\SLS(\lambda) \ar[r]^-{\Sigma_{N}} \ar[d]_-{\cl} &
\SLS(\lambda)^{\otimes N} \ar[d]^-{\cl^{\otimes N}} \\
\QLS(\lambda) \ar[r]^-{\Sigma_{N}} &
\QLS(\lambda)^{\otimes N}.
}
\end{split}
\end{equation}
Now, let $\lambda_{1},\dots,\lambda_{n} \in P^{+}$, and set
$\lambda := \lambda_{1}+\cdots +\lambda_{n}$.
Take $N_{\lambda_{k}} \in \BZ_{\ge 1}$, $1 \le k \le n$, and
$N_{\lambda} \in \BZ_{\ge 1}$ as in \eqref{eq:N},
and let $N \in \BZ_{\ge 1}$ be a common multiple of $N_{\lambda}$,
$N_{\lambda_1},\,\dots,\,N_{\lambda_n}$.
In exactly the same way as above, we see that
the following diagram is commutative:
\begin{equation} \label{eq:CD4}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda)
\ar[rr]^-{\Xi_{\lambda_1,\dots,\lambda_{n}}}
\ar[d]_-{\Sigma_{N}}
\ar[ddr]^-{\Sigma_{N}'} & &
(\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n}))_{0}
\ar[d]^-{\Sigma_{N} \otimes \cdots \otimes \Sigma_{N}} \\
(\SLS(\lambda)^{\otimes N})_{0} & &
(\SLS(\lambda_{1})^{\otimes N} \otimes \cdots \otimes \SLS(\lambda_{n})^{\otimes N})_{0} \\
& \SLS_{0}(N\lambda), \ar[ru]_{\Xi_{\lambda_1,\cdots,\lambda_n}^{(N)}}
\ar[lu]^-{\Xi_{\lambda}^{(N)}} &
}
\end{split}
\end{equation}
where the map $\Xi_{\lambda_1,\cdots,\lambda_n}^{(N)}$ is
the isomorphism of crystals given by Theorem~\ref{thm:SMT},
applied to the decomposition
\begin{equation} \label{eq:decN}
N\lambda =
(\underbrace{\lambda_1+\cdots+\lambda_1}_{\text{$N$ times}}) + \cdots +
(\underbrace{\lambda_n+\cdots+\lambda_n}_{\text{$N$ times}});
\end{equation}
note that $\Sigma_{N} \otimes \cdots \otimes \Sigma_{N}$ has
the same properties as \eqref{eq:SN1} and \eqref{eq:SN2}.
Similarly, we obtain the following commutative diagram:
\begin{equation} \label{eq:CD5}
\begin{split}
\xymatrix{
\QLS(\lambda)
\ar[rr]^-{\Theta_{\lambda_1,\dots,\lambda_{n}}}
\ar[d]_-{\Sigma_{N}}
\ar[ddr]^-{\Sigma_{N}'} & &
\QLS(\lambda_{1}) \otimes \cdots \otimes \QLS(\lambda_{n})
\ar[d]^-{\Sigma_{N} \otimes \cdots \otimes \Sigma_{N}} \\
\QLS(\lambda)^{\otimes N} & &
\QLS(\lambda_{1})^{\otimes N} \otimes \cdots \otimes \QLS(\lambda_{n})^{\otimes N} \\
& \QLS(N\lambda), \ar[ru]_{\Theta_{\lambda_1,\cdots,\lambda_n}^{(N)}}
\ar[lu]^-{\Theta_{\lambda}^{(N)}} &
}
\end{split}
\end{equation}
where the map $\Theta_{\lambda_1,\cdots,\lambda_n}^{(N)}$ is
the isomorphism of crystals given by Theorem~\ref{thm:NS05}\,(2),
applied to the decomposition \eqref{eq:decN}.
\subsection{Some lemmas concerning final directions.}
\label{subsec:lem2}
Let $\lambda \in P^{+}$, and take $\J=\J_{\lambda}$ as in \eqref{eq:J}.
For $\pi \in \SLS(\lambda)$ (resp., $\eta \in \QLS(\lambda)$) and $i \in I_{\af}$,
we set $e_{i}^{\max}\pi:=e_{i}^{\ve_{i}(\pi)}\pi$
(resp., $e_{i}^{\max}\eta:=e_{i}^{\ve_{i}(\eta)}\eta$) and
$f_{i}^{\max}\pi:=f_{i}^{\vp_{i}(\pi)}\pi$
(resp., $f_{i}^{\max}\eta:=f_{i}^{\vp_{i}(\eta)}\eta$).
The next lemma follows from the definition of the root operator $f_{i}$
(see also \cite[Remark~41]{NNS2}).
\begin{lem} \label{lem:kap} \mbox{}
\begin{enu}
\item Let $\pi \in \SLS(\lambda)$, and $i \in I_{\af}$.
If $0 \le m < \vp_{i}(\pi)$, then $\kappa(f_{i}^{m}\pi) = \kappa(\pi)$.
Also,
\begin{equation*}
\kappa(f_{i}^{\max}\pi) =
\begin{cases}
s_{i}\kappa(\pi)
& \text{\rm if $\pair{\kappa(\pi)\lambda}{\alpha_{i}^{\vee}} > 0$}, \\[1mm]
\kappa(\pi)
& \text{\rm if $\pair{\kappa(\pi)\lambda}{\alpha_{i}^{\vee}} \le 0$}.
\end{cases}
\end{equation*}
\item Let $\eta \in \QLS(\lambda)$, and $i \in I_{\af}$.
If $0 \le m < \vp_{i}(\eta)$, then $\kappa(f_{i}^{m}\eta) = \kappa(\eta)$.
Also,
\begin{equation*}
\kappa(f_{i}^{\max}\eta) =
\begin{cases}
\mcr{\ti{s}_{i}\kappa(\eta)} (\ne \kappa(\eta))
& \text{\rm if $\pair{\kappa(\eta)\lambda}{\alpha_{i}^{\vee}} > 0$}, \\[1mm]
\kappa(\eta)
& \text{\rm if $\pair{\kappa(\eta)\lambda}{\alpha_{i}^{\vee}} \le 0$}.
\end{cases}
\end{equation*}
\end{enu}
\end{lem}
We can show the following lemma
by Lemma~\ref{lem:kap}, \eqref{eq:cl}, and
the definition of $\pi_{\eta}$ (cf. \cite[Remark~4.4]{LNSSS1}).
\begin{lem} \label{lem:rec}
Let $\eta \in \QLS(\lambda)$, and $i \in I_{\af}$.
Then
\begin{align*}
& \pi_{f_{i}^{m}\eta}=f_{i}^{m}\pi_{\eta} \quad
\text{\rm for all $0 \le m < \vp_{i}(\eta)$}, \\[2mm]
& \pi_{f_{i}^{\max}\eta} =
\begin{cases}
f_{i}^{\max}\pi_{\eta} & \text{\rm if $i \ne 0$, or if
$i = 0$ and $\pair{\kappa(\eta)\lambda}{\alpha_{0}^{\vee}} \le 0$}, \\
f_{0}^{\max}\pi_{\eta} \cdot T_{-\kappa(\eta)^{-1}\ti{\alpha}_{0}^{\vee}}
& \text{\rm if $i=0$ and $\pair{\kappa(\eta)\lambda}{\alpha_{0}^{\vee}} > 0$}.
\end{cases}
\end{align*}
\end{lem}
\begin{lem} \label{lem:conn}
Let $w \in W$. For each $\eta \in \QLS(\lambda)$,
there exist $i_{1},\,i_{2},\,\dots,\,i_{n} \in I_{\af}$
such that
$f_{i_n}^{\max} \cdots f_{i_2}^{\max}f_{i_1}^{\max}\eta = \eta_{\lambda}^{\mcr{\lng}}$, and
$(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_2}\ti{s}_{i_1}w)^{-1}\ti{\alpha}_{i_{k}} \in \Delta^{+}$
for all $1 \le k \le n$.
\end{lem}
\begin{proof}
It follows from \cite[Lemma~1.4]{AK} that
there exist $i_{1},i_{2},\dots,i_{a} \in I_{\af}$ such that
$(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_2}\ti{s}_{i_1}w)^{-1}\ti{\alpha}_{i_{k}}
\in \Delta^{+}$ for all $1 \le k \le a$, and
$\ti{s}_{i_{a}} \cdots \ti{s}_{i_2}\ti{s}_{i_1}w = e$;
we set $\eta':=
f_{i_a}^{\max} \cdots f_{i_2}^{\max}f_{i_1}^{\max}\eta$.
Take $i_{a+1},\,i_{a+2},\,\dots,\,i_{b} \in I$
in such a way that
$\lng = s_{i_{b}} \cdots s_{i_{a+2}}s_{i_{a+1}}$
is a reduced expression for $\lng$. Then, for $a+1 \le k \le b$,
\begin{equation*}
(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_{a+1}}
\underbrace{\ti{s}_{i_{a}} \cdots \ti{s}_{i_1}w}_{=e})^{-1}\ti{\alpha}_{i_{k}} =
(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_{a+1}})^{-1}\ti{\alpha}_{i_{k}} =
s_{i_{a+1}} \cdots s_{i_{k-1}}\alpha_{i_{k}} \in \Delta^{+},
\end{equation*}
and $\ti{s}_{i_{b}} \cdots \ti{s}_{i_{a+1}}
\ti{s}_{i_{a}} \cdots \ti{s}_{i_1}w = \lng$.
Here we recall that the crystal $\QLS(\lambda) = \BB(\lambda)_{\cl}$ is regular
in the sense that for every proper subset $J \subsetneqq I_{\af}$,
it is isomorphic, as a crystal for $U_{q}(\Fg_{J})$, to
the crystal basis of a finite-dimensional $U_{q}(\Fg_{J})$-module,
where $\Fg_{J}$ is the (finite-dimensional) Levi subalgebra of $\Fg_{\af}$ corresponding to $J$
(see \cite[Proposition~3.1.3]{NS05}).
Therefore, we deduce from \cite[Corollaire~9.1.4\,(2)]{KasF} that
$\eta'':=
f_{i_b}^{\max} \cdots f_{i_{a+1}}^{\max}
f_{i_{a}}^{\max} \cdots f_{i_1}^{\max}\eta =
f_{i_b}^{\max} \cdots f_{i_{a+1}}^{\max}\eta'$
satisfies the condition that $f_{i}\eta'' = \bzero$ for all $i \in I$.
We know from \cite[Sect.~4.5]{LNSSS2} that
there exists a (unique) involution $\Lus:\QLS(\lambda) \rightarrow \QLS(\lambda)$
(called the Lusztig involution) such that
$\wt (\Lus(\psi)) = \lng \wt (\psi)$ for $\psi \in \QLS(\lambda)$,
and $\Lus(e_{i}\psi)=f_{i}\Lus (\psi)$, $\Lus(f_{i}\psi)=e_{i}\Lus (\psi)$
for $\psi \in \QLS(\lambda)$ and $i \in I_{\af}$;
by convention, we set $\Lus(\bzero):=\bzero$.
Observe that $\Lus(\eta_{\lambda}) = \eta_{\lambda}^{\mcr{\lng}}$.
We see that $e_{i}\Lus(\eta'')=\bzero$ for all $i \in I$.
It follows from \cite[Proposition~4.3.1]{NS08} that
there exist $i_{b+1},\,i_{b+2},\,\dots,\,i_{n} \in I_{\af}$ such that
$e_{i_n}^{\max} \cdots e_{i_{b+2}}^{\max}e_{i_{b+1}}^{\max}\Lus(\eta'') = \eta_{\lambda}$, and
$(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_{b+2}}\ti{s}_{i_{b+1}})^{-1}\ti{\alpha}_{i_{k}} \in \Delta^{-}$
for all $b+1 \le k \le n$.
Since $\Lus(\eta_{\lambda})=\eta_{\lambda}^{\mcr{\lng}}$ and
$e_{i}\Lus(\eta'')=\bzero$ for all $i \in I$,
we have $f_{i_n}^{\max} \cdots f_{i_{b+2}}^{\max}f_{i_{b+1}}^{\max}\eta'' = \eta_{\lambda}^{\mcr{\lng}}$,
and $(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_{b+2}}\ti{s}_{i_{b+1}}\lng)^{-1}\ti{\alpha}_{i_{k}} \in \Delta^{+}$
for all $b+1 \le k \le n$. Thus, the sequence
$i_{1},\,\dots,\,i_{a},\,i_{a+1},\,\dots,\,i_{b},\,i_{b+1},\,\dots,\,i_{n} \in I_{\af}$
satisfies the condition of the assertion. This proves the lemma.
\end{proof}
\begin{lem} \label{lem:kappa}
Let $\lambda_{1},\,\dots,\,\lambda_{n} \in P^{+}$, and set
$\lambda:=\lambda_{1} + \cdots + \lambda_{n}$.
Take $\J_{n}:=\J_{\lambda_{n}}$ and $\J=\J_{\lambda}$ as in \eqref{eq:J}{\rm;}
note that $\J \subset \J_{n}$.
Let $\eta \in \QLS(\lambda)$, and write $\Theta_{\lambda_{1},\dots,\lambda_{n}}(\eta)$ as
$\Theta_{\lambda_{1},\dots,\lambda_{n}}(\eta) = \eta_{1} \otimes \cdots \otimes \eta_{n}$
for some $\eta_{k} \in \QLS(\lambda_{k})$, $1 \le k \le n$. Then,
$\mcr{\kappa(\eta)}^{\J_{n}}=\kappa(\eta_{n})$.
\end{lem}
\begin{proof}
We deduce from \cite[Proposition~4.3.1]{NS08} (along with Remark~\ref{rem:LScl})
that there exists a monomial
$X=f_{i_m}f_{i_{m-1}} \cdots f_{i_1}$ in root operators $f_{i}$, $i \in I_{\af}$,
such that $\eta = X \eta_{\lambda}$.
We prove the assertion of the lemma
by induction on $m$. If $m=0$, then the assertion is obvious.
Assume that $m > 0$, and set $X':=f_{i_{m-1}} \cdots f_{i_1}$ and $i:=i_{m}$;
we have $X=f_{i}X'$. We set $\eta':=X' \eta_{\lambda}$, and write
$X' (\eta_{\lambda_1} \otimes \cdots \otimes \eta_{\lambda_{n}}) =
\eta_{1}' \otimes \cdots \otimes \eta_{n}'$
for some $\eta_{k}' \in \QLS(\lambda_{k})$, $1 \le k \le n$.
Note that $\eta=f_{i}\eta'$ and $\eta_{1} \otimes \cdots \otimes \eta_{n} =
f_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n}')$. Hence
\begin{enu}
\item[(a)] $\eta_{n}$ is identical to either $\eta_{n}'$ or $f_{i}\eta_{n}'$
by the tensor product rule for crystals;
\item[(b)] if $\eta_{n} = \eta_{n}'$ (resp., $\eta_{n}=f_{i}\eta_{n}'$), then
$\kappa(\eta_{n})$ is identical to $\kappa(\eta_{n}')$ (resp.,
either $\kappa(\eta_{n}')$ or $\mcr{\ti{s}_{i}\kappa(\eta_{n}')}^{\J_{n}}$).
\end{enu}
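Here and below, we use the tensor product rule for crystals in the following convention,
which is the one consistent with the computations in this proof (cf. \cite{KasF}):
\begin{equation*}
f_{i}(b_{1} \otimes b_{2}) =
\begin{cases}
f_{i}b_{1} \otimes b_{2} & \text{if $\vp_{i}(b_{1}) > \ve_{i}(b_{2})$}, \\[1mm]
b_{1} \otimes f_{i}b_{2} & \text{if $\vp_{i}(b_{1}) \le \ve_{i}(b_{2})$},
\end{cases}
\qquad
\vp_{i}(b_{1} \otimes b_{2}) =
\max\bigl\{ \vp_{i}(b_{2}),\,
\vp_{i}(b_{1}) + \pair{\wt b_{2}}{\alpha_{i}^{\vee}} \bigr\};
\end{equation*}
items (a) and (b) above are immediate consequences of these formulas.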
Also, we claim that
\begin{enu}
\item[(c)] $\kappa(\eta)=
\mcr{\ti{s}_{i}\kappa(\eta')}^{\J} \ne \kappa(\eta')$ if and only if
$\vp_{i}(\eta')=1$ and $\pair{\kappa(\eta')\lambda}{\alpha_{i}^{\vee}} > 0$;
\item[(d)]
$\kappa(\eta_{n})=
\mcr{\ti{s}_{i}\kappa(\eta_{n}')}^{\J_{n}} \ne \kappa(\eta_{n}')$ if and only if
$\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n}') = 1$ and
$\pair{\kappa(\eta_{n}')\lambda_{n}}{\alpha_{i}^{\vee}} > 0$.
\end{enu}
Indeed, (c) is obvious by Lemma~\ref{lem:kap}\,(2).
Let us show the ``if'' part of (d).
Since $\pair{\kappa(\eta_{n}')\lambda_{n}}{\alpha_{i}^{\vee}} > 0$,
we see from the definition of $f_{i}$ that $\vp_{i}(\eta_n') \ge 1$.
By the tensor product rule for crystals, we have
$\vp_{i}(\eta_n') \le \vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n}') = 1$.
Hence we obtain $\vp_{i}(\eta_n') = 1$.
Suppose that $\eta_{n} = \eta_{n}'$ (see (a)).
Then we have $1 = \vp_{i}(\eta_n') = \vp_{i}(\eta_n) \le
\vp_{i}(\eta_{1} \otimes \cdots \otimes \eta_{n})$,
which contradicts that
$\vp_{i}(\eta_{1} \otimes \cdots \otimes \eta_{n}) =
\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n}') - 1 = 0$.
Thus we have $\eta_{n} = f_{i} \eta_{n}'$.
Therefore, by Lemma~\ref{lem:kap}\,(2), we conclude that
$\kappa(\eta_{n})=
\mcr{\ti{s}_{i}\kappa(\eta_{n}')}^{\J_{n}} \ne \kappa(\eta_{n}')$.
Let us show the ``only if'' part of (d).
If $\kappa(\eta_{n})=
\mcr{\ti{s}_{i}\kappa(\eta_{n}')}^{\J_{n}} \ne \kappa(\eta_{n}')$,
then we have $\eta_{n}=f_{i}\eta_{n}'$ (see (b)),
$\vp_{i}(\eta_{n}') = 1$, and
$\pair{\kappa(\eta_{n}')\lambda_{n}}{\alpha_{i}^{\vee}} > 0$
by Lemma~\ref{lem:kap}\,(2).
Since $f_{i}((\eta_{1}' \otimes \cdots \otimes \eta_{n-1}') \otimes \eta_{n}') =
(\eta_{1}' \otimes \cdots \otimes \eta_{n-1}') \otimes f_{i}\eta_{n}'$,
we see by the tensor product rule for crystals that
$\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n-1}')
\le \ve_{i}(\eta_{n}) = \vp_{i}(\eta_{n}') -
\pair{\wt \eta_{n}'}{\alpha_{i}^{\vee}}$,
and hence $\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n-1}' \otimes \eta_{n}') =
\max \bigl\{ \vp_{i}(\eta_{n}'),\,
\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n-1}') +
\pair{\wt \eta_{n}'}{\alpha_{i}^{\vee}} \bigr\}
= \vp_{i}(\eta_{n}') =1$. This shows (d).
Finally, by the induction hypothesis, we have
\begin{equation} \label{eq:kappa1}
\mcr{\kappa(\eta')}^{\J_n}=\kappa(\eta_{n}').
\end{equation}
\paragraph{Case 1.}
Assume that $\kappa(\eta)=\kappa(\eta')$;
it suffices to show that $\kappa(\eta_{n}') = \kappa(\eta_{n})$.
We see by (c) that
$\pair{\kappa(\eta')\lambda}{\alpha_{i}^{\vee}} \le 0$ or
$\vp_{i}(\eta') \ge 2$. If $\pair{\kappa(\eta')\lambda}{\alpha_{i}^{\vee}} \le 0$,
then it follows from \eqref{eq:kappa1} that
$\pair{\kappa(\eta_{n}')\lambda_{n}}{\alpha_{i}^{\vee}} \le 0$, which implies that
$\kappa(\eta_{n}) = \kappa(\eta_{n}')$ by (d).
If $\vp_{i}(\eta') \ge 2$, then we have
$\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n}') \ge 2$,
and hence $\kappa(\eta_{n}') = \kappa(\eta_{n})$ by (d).
\paragraph{Case 2.}
Assume that $\kappa(\eta)=\mcr{\ti{s}_{i}\kappa(\eta')}^{\J} \ne \kappa(\eta')$;
we see that
\begin{equation} \label{eq:kc2}
\mcr{\kappa(\eta)}^{\J_n} =
\mcr{\mcr{\ti{s}_{i}\kappa(\eta')}^{\J}}^{\J_n} =
\mcr{\ti{s}_{i}\kappa(\eta')}^{\J_n} =
\mcr{ \ti{s}_{i}\mcr{\kappa(\eta')}^{\J_n} }^{\J_n}
\stackrel{\eqref{eq:kappa1}}{=} \mcr{ \ti{s}_{i}\kappa(\eta_{n}') }^{\J_n}.
\end{equation}
Also, by (c), we have $\vp_{i}(\eta')=1$ and
$\pair{\kappa(\eta')\lambda}{\alpha_{i}^{\vee}} > 0$.
Hence, $\vp_{i}(\eta_{1}' \otimes \cdots \otimes \eta_{n}')=\vp_{i}(\eta')=1$.
We see that
$\pair{\kappa(\eta_{n}')\lambda_{n}}{\alpha_{i}^{\vee}}
\stackrel{\eqref{eq:kappa1}}{=}
\pair{\kappa(\eta')\lambda_{n}}{\alpha_{i}^{\vee}} \ge 0$
since $\pair{\kappa(\eta')\lambda}{\alpha_{i}^{\vee}} > 0$ implies that
$\kappa(\eta')^{-1}\ti{\alpha}_{i} \in \Delta^{+}$.
If $\pair{\kappa(\eta_{n}')\lambda_n}{\alpha_{i}^{\vee}} > 0$, then
we have $\kappa(\eta_{n})=\mcr{\ti{s}_{i}\kappa(\eta_{n}')}^{\J_n}$
by (d), and hence $\mcr{\kappa(\eta)}^{\J_n} =
\kappa(\eta_{n})$ by \eqref{eq:kc2}.
If $\pair{\kappa(\eta_{n}')\lambda_n}{\alpha_{i}^{\vee}} = 0$, then
we have $\mcr{\ti{s}_{i}\kappa(\eta_{n}')}^{\J_n} = \kappa(\eta_{n}')$,
and also $\kappa(\eta_{n}) = \kappa(\eta_{n}')$ by (d).
Therefore, by \eqref{eq:kc2},
$\mcr{\kappa(\eta)}^{\J_n} = \kappa(\eta_{n}') = \kappa(\eta_{n})$.
This proves the lemma.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:main}.}
\label{subsec:prf-main1}
\begin{prop} \label{prop:s}
Let $\lambda,\,\mu \in P^{+}$, and take $\J_{\lambda}$, $\J_{\mu}$,
$\J_{\lambda+\mu} \subset I$ as in \eqref{eq:J}{\rm;} note that $\J_{\lambda+\mu} =
\J_{\lambda} \cap \J_{\mu}$.
Let $\eta_{1} \in \QLS(\lambda)$, and let $\eta_{2}=\eta_{\mu}^{v}=(v;0,1) \in \QLS(\mu)$
with $v \in \WSu{\mu}$. We set $\eta:=\Theta_{\lambda\mu}^{-1}(\eta_{1} \otimes \eta_{2})
\in \QLS(\lambda+\mu)$. Then, for every $w \in W$,
\begin{equation} \label{eq:ps}
\Xi_{\lambda\mu}(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))}) =
\pi_{\eta_{1}} \cdot T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) +
\wt(w \Rightarrow \ti{v}_{w})} \otimes \pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \ti{v}_{w})},
\end{equation}
where $\ti{v}_{w}:=\tbmin{v}{\J_{\mu}}{w}${\rm;}
note that $\pi_{\eta_{2}} = \pi_{\mu}^{v} = (v;0,1)$.
\end{prop}
\begin{proof}
By Lemma~\ref{lem:conn},
there exist $i_{1},\,i_{2},\,\dots,\,i_{n} \in I_{\af}$
such that
$f_{i_n}^{\max} \cdots f_{i_2}^{\max}f_{i_1}^{\max}\eta =
\eta_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}}$, and
$(\ti{s}_{i_{k-1}} \cdots \ti{s}_{i_2}\ti{s}_{i_1}w)^{-1}
\ti{\alpha}_{i_{k}} \in \Delta^{+}$ for all $1 \le k \le n$.
We prove the assertion of the proposition by induction on $n$.
Assume that $n=0$; we have $\eta=\eta_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}}$.
By \cite[Lemma~3.19\,(3)]{NS05}, we deduce that
$\Theta_{\lambda\mu}(\eta_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}}) =
\eta_{\lambda}^{\mcr{\lng}^{\J_{\lambda}}} \otimes
\eta_{\mu}^{\mcr{\lng}^{\J_{\mu}}}$, and hence
$\eta_{1}=\eta_{\lambda}^{\mcr{\lng}^{\J_{\lambda}}}$ and
$\eta_{2}=\eta_{\mu}^{\mcr{\lng}^{\J_{\mu}}}$;
in particular, $\kappa(\eta)=\mcr{\lng}^{\J_{\lambda+\mu}}$,
$\kappa(\eta_1) = \mcr{\lng}^{\J_{\lambda}}$, and $v=\mcr{\lng}^{\J_{\mu}}$.
Also, we have $\pi_{\eta}=\pi_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}}$,
$\pi_{\eta_{1}}=\pi_{\lambda}^{\mcr{\lng}^{\J_{\lambda}}}$,
and $\pi_{\eta_{2}}=\pi_{\mu}^{\mcr{\lng}^{\J_{\mu}}}$.
By Lemma~\ref{lem:wtS}, we have
$\wt(w \Rightarrow \kappa(\eta)) =
\wt(w \Rightarrow \mcr{\lng}^{\J_{\lambda+\mu}}) \equiv
\wt(w \Rightarrow \lng)$ mod $\QSv{\lambda+\mu}$.
Since $\lng$ is greater than or equal to $w$ in the (ordinary) Bruhat order on $W$,
there exists a (shortest) directed path in $\QB$ from
$w$ to $\lng$ whose edges are all Bruhat edges.
Hence we obtain $\wt(w \Rightarrow \lng) = 0$, so that
$\wt(w \Rightarrow \kappa(\eta)) \in \QSv{\lambda+\mu}$.
Therefore, by Lemma~\ref{lem:PiJ}\,(3),
the left-hand side of \eqref{eq:ps} is equal to
$\Xi_{\lambda\mu}(\pi_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}})$.
Since $\ti{v}_{w} \in v\WS{\mu}=\lng\WS{\mu}$,
it follows from Lemma~\ref{lem:wtS} that
$\wt(w \Rightarrow \ti{v}_{w}) \equiv \wt(w \Rightarrow \lng)$
mod $\QSv{\mu}$. Since $\wt(w \Rightarrow \lng) = 0$ as seen above,
we have $\wt(w \Rightarrow \ti{v}_{w}) \in \QSv{\mu}$.
Therefore, by Lemma~\ref{lem:PiJ}\,(3),
the second factor of the right-hand side of \eqref{eq:ps} is equal to
$\pi_{\mu}^{\mcr{\lng}^{\J_{\mu}}}$.
Similarly, since $\kappa(\eta_{1})=\mcr{\lng}^{\J_{\lambda}} \in \lng \WS{\lambda}$,
we see by Lemmas~\ref{lem:wtS} and \ref{lem:PiJ}\,(3)
that the first factor of the right-hand side of \eqref{eq:ps} is equal to
\begin{equation*}
\pi_{\eta_{1}} \cdot T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) +
\wt(w \Rightarrow \ti{v}_{w})} =
\pi_{\eta_{1}} \cdot T_{\wt(\ti{v}_{w} \Rightarrow \lng) +
\wt(w \Rightarrow \ti{v}_{w})}.
\end{equation*}
Since $\ti{v}_{w} \tb{w} \lng$ by the definition of $\ti{v}_{w}$, we have
$\wt(w \Rightarrow \lng) =
\wt(w \Rightarrow \ti{v}_{w})+ \wt(\ti{v}_{w} \Rightarrow \lng)$.
Since $\wt(w \Rightarrow \lng) = 0$ as seen above,
we conclude that the first factor of the right-hand side of
\eqref{eq:ps} is equal to
$\pi_{\lambda}^{\mcr{\lng}^{\J_{\lambda}}}$.
Thus, equation \eqref{eq:ps} reduces to:
\begin{equation} \label{eq:311a}
\Xi_{\lambda\mu}(\pi_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}}) =
\pi_{\lambda}^{\mcr{\lng}^{\J_{\lambda}}} \otimes
\pi_{\mu}^{\mcr{\lng}^{\J_{\mu}}}.
\end{equation}
This equality is verified by using
Lemmas~\ref{lem:PiJ}\,(1), \ref{lem:pix}, and \ref{lem:ext} as follows:
\begin{equation*}
\Xi_{\lambda\mu}(\pi_{\lambda+\mu}^{\mcr{\lng}^{\J_{\lambda+\mu}}}) =
\Xi_{\lambda\mu}(\lng \cdot \pi_{\lambda+\mu})=
(\lng \cdot \pi_{\lambda}) \otimes (\lng \cdot \pi_{\mu})=
\pi_{\lambda}^{\mcr{\lng}^{\J_{\lambda}}} \otimes
\pi_{\mu}^{\mcr{\lng}^{\J_{\mu}}}.
\end{equation*}
This proves the assertion in the case $n=0$.
Assume that $n > 0$. For simplicity of notation,
we set $i:=i_{1}$, $\eta':=f_{i_{1}}^{\max}\eta = f_{i}^{\max}\eta$,
and $w':=\ti{s}_{i_1}w = \ti{s}_{i}w$.
Note that $\vp_{i}(\eta_{2})=\max\{ \pair{v\mu}{\alpha_{i}^{\vee}},0 \}$,
and
\begin{equation*}
f_{i}^{\max}\eta_{2} =
\begin{cases}
(\mcr{\ti{s}_{i}v}^{\J_{\mu}};0,1) &
\text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\
(v;0,1) &
\text{if $\pair{v\mu}{\alpha_{i}^{\vee}} \le 0$};
\end{cases}
\end{equation*}
see, e.g., \cite[Lemma~8.2.7]{KasF}.
We see by the tensor product rule for crystals,
along with these equalities, that
\begin{equation} \label{eq:vp}
\vp_{i}(\eta) = \vp_{i}(\eta_{1} \otimes \eta_{2}) =
\begin{cases}
\vp_{i}(\eta_{1}) + \pair{v\mu}{\alpha_{i}^{\vee}}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\[1mm]
\vp_{i}(\eta_{1}) & \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} = 0$}, \\[1mm]
\max\{ \vp_{i}(\eta_{1}) + \pair{v\mu}{\alpha_{i}^{\vee}},0\}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} < 0$},
\end{cases}
\end{equation}
and
\begin{align}
\Theta_{\lambda\mu}(f_{i}^{\max}\eta) =
f_{i}^{\max}(\eta_{1} \otimes \eta_{2}) & =
\begin{cases}
f_{i}^{\max}\eta_{1} \otimes f_{i}^{\max}\eta_{2}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\[1.5mm]
f_{i}^{\max}\eta_{1} \otimes \eta_{2}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} = 0$}, \\[1.5mm]
f_{i}^{\vp_{i}(\eta)}\eta_{1} \otimes \eta_{2}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} < 0$},
\end{cases} \nonumber \\[2mm]
& =
\begin{cases}
f_{i}^{\max}\eta_{1} \otimes (\mcr{\ti{s}_{i}v}^{\J_{\mu}};0,1)
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\[1.5mm]
f_{i}^{\max}\eta_{1} \otimes (v;0,1)
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} = 0$}, \\[1.5mm]
f_{i}^{\vp_{i}(\eta)}\eta_{1} \otimes (v;0,1)
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} < 0$}.
\end{cases} \label{eq:s1}
\end{align}
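The first of these equalities, \eqref{eq:vp}, can also be checked directly:
since $\wt \eta_{2} = v\mu$ and
$\vp_{i}(\eta_{2}) = \max\{ \pair{v\mu}{\alpha_{i}^{\vee}},0 \}$,
the tensor product rule gives
\begin{equation*}
\vp_{i}(\eta_{1} \otimes \eta_{2}) =
\max\bigl\{ \vp_{i}(\eta_{2}),\,
\vp_{i}(\eta_{1}) + \pair{v\mu}{\alpha_{i}^{\vee}} \bigr\},
\end{equation*}
and comparing the two terms in this maximum according to the sign of
$\pair{v\mu}{\alpha_{i}^{\vee}}$ (using $\vp_{i}(\eta_{1}) \ge 0$)
yields the three cases of \eqref{eq:vp}.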
We define $\eta_{1}' \in \QLS(\lambda)$ and
$\eta_{2}' \in \QLS(\mu)$ by:
$\eta_{1}' \otimes \eta_{2}'= \Theta_{\lambda\mu}(\eta') =
\Theta_{\lambda\mu}(f_{i}^{\max}\eta) =
f_{i}^{\max}(\eta_{1} \otimes \eta_{2})$.
By \eqref{eq:s1}, we can apply our induction hypothesis to
$\Theta_{\lambda\mu}(\eta')=\eta_{1}' \otimes \eta_{2}'$ and $w'=\ti{s}_{i}w$
to obtain
\begin{equation} \label{eq:s2}
\Xi_{\lambda\mu}(\pi_{\eta'} \cdot T_{\wt(w' \Rightarrow \kappa(\eta'))}) =
\pi_{\eta_{1}'} \cdot T_{\wt(\ti{v}'_{w'} \Rightarrow \kappa(\eta_1')) +
\wt(w' \Rightarrow \ti{v}'_{w'})} \otimes \pi_{\eta_{2}'} \cdot
T_{\wt(w' \Rightarrow \ti{v}'_{w'})},
\end{equation}
where
\begin{equation*}
v':=
\begin{cases}
\mcr{\ti{s}_{i}v}^{\J_{\mu}}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\[1.5mm]
v & \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} \le 0$},
\end{cases}
\qquad \text{and} \quad
\ti{v}'_{w'}:=\tbmin{v'}{\J_{\mu}}{w'}.
\end{equation*}
\begin{claim} \label{c:s1}
It holds that
\begin{equation} \label{eq:cs1a}
\pi_{\eta'} \cdot T_{\wt(w' \Rightarrow \kappa(\eta'))} =
f_{i}^{\max}\pi_{\eta} \cdot
T_{\wt(w \Rightarrow \kappa(\eta))-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}.
\end{equation}
\end{claim}
\noindent
{\it Proof of Claim~\ref{c:s1}.}
Assume first that $\pair{\kappa(\eta)(\lambda+\mu)}{\alpha_{i}^{\vee}} > 0$;
we see from Lemma~\ref{lem:kap}\,(2) that
$\kappa(\eta') = \kappa(f_{i}^{\max}\eta) =
\mcr{\ti{s}_{i}\kappa(\eta)}^{\J_{\lambda+\mu}}$.
It follows from Lemma~\ref{lem:wtS} that
$\wt(w' \Rightarrow \kappa(\eta')) =
\wt(\ti{s}_{i}w \Rightarrow \mcr{\ti{s}_{i}\kappa(\eta)}^{\J_{\lambda+\mu}})
\equiv
\wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\kappa(\eta))$ mod
$\QSv{\lambda+\mu}$. Hence,
$\pi_{\eta'} \cdot T_{\wt(w' \Rightarrow \kappa(\eta'))} =
\pi_{\eta'} \cdot T_{\wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\kappa(\eta))}$
by Lemma~\ref{lem:PiJ}\,(3).
Since $\kappa(\eta)^{-1}\ti{\alpha}_{i} \in \Delta^{+}$
and $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
we see from Lemma~\ref{lem:dia}\,(2) that
$\wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\kappa(\eta)) =
\wt(w \Rightarrow \kappa(\eta)) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\kappa(\eta)^{-1}\ti{\alpha}_{0}^{\vee}$.
Also, we have
$\pi_{\eta'} = \pi_{f_{i}^{\max}\eta} =
f_{i}^{\max}\pi_{\eta} \cdot T_{-\delta_{i0}\kappa(\eta)^{-1}\ti{\alpha}_{0}^{\vee}}$
by Lemma~\ref{lem:rec} and the assumption that
$\pair{\kappa(\eta)(\lambda+\mu)}{\alpha_{i}^{\vee}} > 0$.
Combining these equalities, we obtain \eqref{eq:cs1a}.
Assume next that $\pair{\kappa(\eta)(\lambda+\mu)}{\alpha_{i}^{\vee}} \le 0$;
we see from Lemma~\ref{lem:kap}\,(2) that
$\kappa(\eta') = \kappa(f_{i}^{\max}\eta) = \kappa(\eta)$.
We claim that there exists $z \in \WS{\lambda+\mu}$ such that
$(\kappa(\eta')z)^{-1}\ti{\alpha}_{i} =
(\kappa(\eta)z)^{-1}\ti{\alpha}_{i} \in \Delta^{-}$.
Indeed, since $\pair{\kappa(\eta)(\lambda+\mu)}{\alpha_{i}^{\vee}} \le 0$,
we see that $\alpha:=\kappa(\eta')^{-1}\ti{\alpha}_{i} =
\kappa(\eta)^{-1}\ti{\alpha}_{i} \in \Delta^{-} \cup \DeS{\lambda+\mu}$.
If $\alpha \in \Delta^{-}$ (resp., $\alpha \in \DeS{\lambda+\mu}^{+}$),
then $z=e$ (resp., $z=s_{\alpha}$) satisfies the condition that
$z^{-1}\alpha \in \Delta^{-}$. By Lemma~\ref{lem:wtS}, we have
\begin{equation*}
\wt(w' \Rightarrow \kappa(\eta')) =
\wt(\ti{s}_{i}w \Rightarrow \kappa(\eta)) \equiv
\wt(\ti{s}_{i}w \Rightarrow \kappa(\eta)z) \mod \QSv{\lambda+\mu}.
\end{equation*}
Since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ and
$(\kappa(\eta)z)^{-1}\ti{\alpha}_{i} \in \Delta^{-}$,
it follows from Lemma~\ref{lem:dia}\,(1) that
$\wt(\ti{s}_{i}w \Rightarrow \kappa(\eta)z) =
\wt(w \Rightarrow \kappa(\eta)z)-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}$.
Also, it follows from Lemma~\ref{lem:wtS} that
$\wt(w \Rightarrow \kappa(\eta)z) \equiv
\wt(w \Rightarrow \kappa(\eta))$ mod $\QSv{\lambda+\mu}$.
Therefore, we have
\begin{equation*}
\wt(w' \Rightarrow \kappa(\eta')) \equiv
\wt(w \Rightarrow \kappa(\eta))-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
\mod \QSv{\lambda+\mu},
\end{equation*}
and hence $\pi_{\eta'} \cdot T_{\wt(w' \Rightarrow \kappa(\eta'))} =
\pi_{\eta'} \cdot
T_{\wt(w \Rightarrow \kappa(\eta))-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}$
by Lemma~\ref{lem:PiJ}\,(3).
Because $\pi_{\eta'} = \pi_{f_{i}^{\max}\eta} = f_{i}^{\max}\pi_{\eta}$
by Lemma~\ref{lem:rec} and the assumption that
$\pair{\kappa(\eta)(\lambda+\mu)}{\alpha_{i}^{\vee}} \le 0$,
we obtain \eqref{eq:cs1a}. This proves Claim~\ref{c:s1}. \bqed
\begin{claim} \label{c:s2}
It holds that
\begin{equation} \label{eq:cs2a}
\pi_{\eta_{2}'} \cdot T_{\wt(w' \Rightarrow \ti{v}'_{w'})} =
\begin{cases}
f_{i}^{\max}\pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\[1.5mm]
\pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} \le 0$}.
\end{cases}
\end{equation}
\end{claim}
\noindent
{\it Proof of Claim~\ref{c:s2}.}
Assume first that $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$;
note that $v^{-1}\ti{\alpha}_{i} \in \Delta^{+} \setminus \DeS{\mu}^{+}$ and
$v'=\mcr{\ti{s}_{i}v}^{\J_{\mu}}$, so that
$\ti{v}'_{w'}=\tbmin{\ti{s}_{i}v}{\J_{\mu}}{\ti{s}_{i}w}$.
Since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
we deduce from Lemma~\ref{lem:tb}\,(1) that
$\ti{v}'_{w'} = \ti{s}_{i}\ti{v}_{w}$.
Since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ and
$\ti{v}_{w}^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
it follows from Lemma~\ref{lem:dia}\,(2) that
\begin{equation} \label{eq:cs2d}
\wt(w' \Rightarrow \ti{v}'_{w'})
= \wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}_{w})
= \wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}.
\end{equation}
Since $\ti{v}_{w} \in v\WS{\mu}$,
we have $\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee} \equiv
v^{-1}\ti{\alpha}_{0}^{\vee}$ mod $\QSv{\mu}$. Hence
\begin{equation*}
\wt(w' \Rightarrow \ti{v}'_{w'})
\equiv \wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee} \mod \QSv{\mu}.
\end{equation*}
Because $\eta_{2}=(v;0,1) \in \QLS(\mu)$ with $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$, and
$\eta_{2}'=f_{i}^{\max}\eta_{2}$ (see \eqref{eq:s1}),
we have $\pi_{\eta_{2}'} = \pi_{f_{i}^{\max}\eta_{2}}=f_{i}^{\max}\pi_{\eta_{2}}
\cdot T_{-\delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee}}$ by Lemma~\ref{lem:rec}.
Therefore, we see that
\begin{align*}
\pi_{\eta_{2}'} \cdot T_{\wt(w' \Rightarrow \ti{v}'_{w'})}
& = f_{i}^{\max}\pi_{\eta_{2}}
\cdot T_{-\delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee} +
\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}} \\
& = f_{i}^{\max}\pi_{\eta_{2}}
\cdot T_{-\delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee} +
\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}v^{-1}\ti{\alpha}_{0}^{\vee}} \quad \text{by Lemma~\ref{lem:PiJ}\,(3)} \\
& = f_{i}^{\max}\pi_{\eta_{2}}
\cdot T_{\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}.
\end{align*}
Assume next that $\pair{v\mu}{\alpha_{i}^{\vee}} \le 0$;
note that $v^{-1}\ti{\alpha}_{i} \in \Delta^{-} \cup \DeS{\mu}$ and $v'=v$,
so that $\ti{v}'_{w'}=\tbmin{v}{\J_{\mu}}{\ti{s}_{i}w}$. We claim that
\begin{equation} \label{eq:cs2c}
\begin{split}
& \wt(w' \Rightarrow \ti{v}'_{w'}) = \\
&
\begin{cases}
\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \Delta^{-} \setminus \DeS{\mu}^{-}$}, \\[1mm]
\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$ and
$(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{+}$}, \\[1mm]
\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$ and
$(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$}.
\end{cases}
\end{split}
\end{equation}
Indeed, since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
we deduce from Lemma~\ref{lem:tb}\,(2) and (3) that
\begin{equation} \label{eq:cs2b}
\ti{v}'_{w'} =
\begin{cases}
\ti{v}_{w} &
\text{if $v^{-1}\ti{\alpha}_{i} \in \Delta^{-} \setminus \DeS{\mu}^{-}$}, \\[1mm]
\ti{v}_{w} &
\text{if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$
and $(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{+}$}, \\[1mm]
\ti{s}_{i}\ti{v}_{w} &
\text{if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$
and $(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$}.
\end{cases}
\end{equation}
In the first case, we have $\ti{v}_{w}^{-1}\ti{\alpha}_{i} \in \Delta^{-}$
since $\ti{v}_{w} \in v\WS{\mu}$.
Also, recall that $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$.
It follows from Lemma~\ref{lem:dia}\,(1) that
$\wt(w' \Rightarrow \ti{v}'_{w'}) =
\wt(\ti{s}_{i}w \Rightarrow \ti{v}_{w}) =
\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}$.
In the third case, since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ and
$\ti{v}_{w}^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
by the same argument as for \eqref{eq:cs2d}, we deduce that
$\wt(w' \Rightarrow \ti{v}'_{w'}) = \wt(w \Rightarrow \ti{v}_{w})
-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}$.
Let us consider the second case.
Since $\ti{v}_{w}^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
we see by Lemma~\ref{lem:edge} that
$\ti{v}_{w} \edge{\ti{v}_{w}^{-1}\ti{\alpha}_{i}} \ti{s}_{i}\ti{v}_{w}$
is a directed edge of $\QB$;
in particular, $\wt (\ti{v}_{w} \Rightarrow \ti{s}_{i}\ti{v}_{w}) =
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}$.
Because $\ti{v}_{w} = \ti{v}'_{w'} = \tbmin{v'}{\J_{\mu}}{w'} =
\tbmin{v}{\J_{\mu}}{\ti{s}_{i}w}$, and
because $\ti{s}_{i}\ti{v}_{w} \in v\WS{\mu}$ in this case,
we have $\ti{v}_{w} \tb{\ti{s}_{i}w} \ti{s}_{i}\ti{v}_{w}$, and hence
\begin{equation*}
\wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}_{w}) =
\underbrace{\wt(\ti{s}_{i}w \Rightarrow \ti{v}_{w})}_{=\wt(w' \Rightarrow \ti{v}'_{w'})} +
\underbrace{\wt(\ti{v}_{w} \Rightarrow \ti{s}_{i}\ti{v}_{w})}_{=\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}}.
\end{equation*}
By the same argument as for \eqref{eq:cs2d}, we deduce that
$\wt(\ti{s}_{i}w \Rightarrow \ti{s}_{i}\ti{v}_{w}) = \wt(w \Rightarrow \ti{v}_{w})
-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}$.
Hence we obtain $\wt(w' \Rightarrow \ti{v}'_{w'}) =
\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}$.
This shows \eqref{eq:cs2c}.
Now, we remark that if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$, then
$\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee} \in \QSv{\mu}$.
Hence we see by \eqref{eq:cs2c} that
$\wt(w' \Rightarrow \ti{v}'_{w'})
\equiv
\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}$ mod $\QSv{\mu}$.
Also, recall that $\eta_{2}' = \eta_{2}$
in the case that $\pair{v\mu}{\alpha_{i}^{\vee}} \le 0$.
Therefore, we obtain
$\pi_{\eta_{2}'} \cdot T_{\wt(w' \Rightarrow \ti{v}'_{w'})} =
\pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}$
by Lemma~\ref{lem:PiJ}\,(3). Thus we have shown Claim~\ref{c:s2}. \bqed
\begin{claim} \label{c:s3}
It holds that
\begin{equation} \label{eq:cs3a}
\begin{split}
& \pi_{\eta_{1}'} \cdot
T_{\wt(\ti{v}'_{w'} \Rightarrow \kappa(\eta_1')) + \wt(w' \Rightarrow \ti{v}'_{w'})} \\
& =
\begin{cases}
f_{i}^{\max}\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w}) -
\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} \ge 0$}, \\[1mm]
f_{i}^{\vp_{i}(\eta)}\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w}) -
\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} < 0$}.
\end{cases}
\end{split}
\end{equation}
\end{claim}
\noindent
{\it Proof of Claim~\ref{c:s3}.}
We know from \eqref{eq:cs2d} and \eqref{eq:cs2c} that
\begin{equation} \label{eq:cs3b}
\begin{split}
& \wt(w' \Rightarrow \ti{v}'_{w'}) = \\
&
\begin{cases}
\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \Delta^{+} \setminus \DeS{\mu}^{+}$}, \\[1mm]
\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \Delta^{-} \setminus \DeS{\mu}^{-}$}, \\[1mm]
\wt(w \Rightarrow \ti{v}_{w}) - \delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$ and
$(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{+}$}, \\[1mm]
\wt(w \Rightarrow \ti{v}_{w})-\delta_{i0}w^{-1}\ti{\alpha}_{0}^{\vee} +
\delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}
& \text{if $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$ and
$(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$}.
\end{cases}
\end{split}
\end{equation}
\paragraph{Case 1.}
Assume that $\pair{v\mu}{\alpha_{i}^{\vee}} \ge 0$;
note that $v^{-1}\ti{\alpha}_{i} \in \Delta^{+} \cup \DeS{\mu}$.
Since $\eta_{1}'=f_{i}^{\max}\eta_{1}$ in this case (see \eqref{eq:s1}),
we see that
if $\pair{\kappa(\eta_{1})\lambda}{\alpha_{i}^{\vee}} > 0$ (resp., $\le 0$),
then $\kappa(\eta_{1}')=\mcr{\ti{s}_{i}\kappa(\eta_{1})}^{\J_{\lambda}}$
(resp., $\kappa(\eta_{1}')=\kappa(\eta_{1})$) by Lemma~\ref{lem:kap}\,(2).
Also, we deduce from Lemma~\ref{lem:rec} that
\begin{equation} \label{eq:cs3-1b}
\pi_{\eta_{1}'} = \pi_{f_{i}^{\max}\eta_{1}} =
\begin{cases}
f_{i}^{\max}\pi_{\eta_{1}} \cdot
T_{-\delta_{i0}\kappa(\eta_{1})^{-1}\ti{\alpha}_{0}^{\vee}}
& \text{if $\pair{\kappa(\eta_{1})\lambda}{\alpha_{i}^{\vee}} > 0$}, \\[1mm]
f_{i}^{\max}\pi_{\eta_{1}}
& \text{if $\pair{\kappa(\eta_{1})\lambda}{\alpha_{i}^{\vee}} \le 0$}.
\end{cases}
\end{equation}
Since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
it follows from Lemma~\ref{lem:tb}\,(1) and (3) that
\begin{equation*}
\ti{v}'_{w'} =
\begin{cases}
\ti{s}_{i}\ti{v}_{w}
& \text{in Cases 1a and 1c}, \\[1mm]
\ti{v}_{w}
& \text{in Case 1b},
\end{cases}
\end{equation*}
where
Case 1a: $v^{-1}\ti{\alpha}_{i} \in \Delta^{+} \setminus \DeS{\mu}^{+}$;
Case 1b: $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$ and $(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{+}$;
Case 1c: $v^{-1}\ti{\alpha}_{i} \in \DeS{\mu}$ and $(\ti{v}'_{w'})^{-1}\ti{\alpha}_{i} \in \Delta^{-}$;
\noindent
notice that $\ti{v}_{w}^{-1}\ti{\alpha}_{i} \in \Delta^{+}$ in all of these cases.
Now, assume first that $\pair{\kappa(\eta_{1})\lambda}{\alpha_{i}^{\vee}} > 0$;
note that $\kappa(\eta_{1})^{-1}\ti{\alpha}_{i} \in \Delta^{+}$.
Then we see by Lemmas~\ref{lem:dia} and \ref{lem:wtS} that
\begin{align}
\wt(\ti{v}'_{w'} \Rightarrow \kappa(\eta_1')) & \equiv
\begin{cases}
\wt(\ti{s}_{i}\ti{v}_{w} \Rightarrow \ti{s}_{i}\kappa(\eta_1))
\mod \QSv{\lambda}
& \text{in Cases 1a and 1c}, \\[1mm]
\wt(\ti{v}_{w} \Rightarrow \ti{s}_{i}\kappa(\eta_1))
\mod \QSv{\lambda}
& \text{in Case 1b},
\end{cases} \nonumber \\[2mm]
& =
\begin{cases}
\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1))
- \delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee}
+ \delta_{i0}\kappa(\eta_1)^{-1}\ti{\alpha}_{0}^{\vee}
& \text{in Cases 1a and 1c}, \\[1mm]
\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1))
+ \delta_{i0}\kappa(\eta_1)^{-1}\ti{\alpha}_{0}^{\vee}
& \text{in Case 1b}.
\end{cases} \label{eq:cs3-1a}
\end{align}
Combining \eqref{eq:cs3b}, \eqref{eq:cs3-1b}, and \eqref{eq:cs3-1a},
along with Lemma~\ref{lem:PiJ}\,(3),
we obtain \eqref{eq:cs3a} in this case. Assume next that
$\pair{\kappa(\eta_{1})\lambda}{\alpha_{i}^{\vee}} \le 0$;
note that $\kappa(\eta_{1})^{-1}\ti{\alpha}_{i} \in \Delta^{-} \cup \DeS{\lambda}$.
In Case 1b, we have
\begin{equation} \label{eq:cs3-1d}
\wt(\ti{v}'_{w'} \Rightarrow \kappa(\eta_1')) =
\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)).
\end{equation}
By \eqref{eq:cs3b}, \eqref{eq:cs3-1b}, \eqref{eq:cs3-1d},
we obtain \eqref{eq:cs3a} in this case.
Let us consider Cases 1a and 1c.
By the same argument as in the proof of Claim~\ref{c:s1}, we see that
there exists $z \in \WS{\lambda}$ such that
$(\kappa(\eta_{1})z)^{-1}\ti{\alpha}_{i} \in \Delta^{-}$.
Then we see by Lemmas~\ref{lem:wtS} and \ref{lem:dia} that
\begin{align}
\wt(\ti{v}'_{w'} \Rightarrow \kappa(\eta_1')) & =
\wt(\ti{s}_{i}\ti{v}_{w} \Rightarrow \kappa(\eta_1)) \equiv
\wt(\ti{s}_{i}\ti{v}_{w} \Rightarrow \kappa(\eta_1)z) \nonumber \\
& = \wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)z)
- \delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee} \nonumber \\
& \equiv
\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1))
- \delta_{i0}\ti{v}_{w}^{-1}\ti{\alpha}_{0}^{\vee} \mod \QSv{\lambda}. \label{eq:cs3-1c}
\end{align}
By \eqref{eq:cs3b}, \eqref{eq:cs3-1b}, \eqref{eq:cs3-1c},
along with Lemma~\ref{lem:PiJ}\,(3), we obtain \eqref{eq:cs3a} also in this case.
\paragraph{Case 2.}
Assume that $v^{-1}\ti{\alpha}_{i} \in \Delta^{-} \setminus \DeS{\mu}^{-}$;
note that $\pair{v\mu}{\alpha_{i}^{\vee}} < 0$.
Recall that $\eta_{1}'=f_{i}^{\vp_{i}(\eta)}\eta_{1}$ (see \eqref{eq:s1});
by \eqref{eq:vp}, we see that $\vp_{i}(\eta) = 0$ or
$\vp_{i}(\eta) < \vp_{i}(\eta_{1})$.
In both cases, we deduce from Lemma~\ref{lem:kap}\,(2) and
Lemma~\ref{lem:rec} that $\kappa(\eta_{1}')=\kappa(\eta_{1})$ and
$\pi_{\eta_{1}'}=f_{i}^{\vp_{i}(\eta)}\pi_{\eta_{1}}$.
Recall that $v'=v$, and hence $\ti{v}'_{w'} =
\tbmin{v}{\J_{\mu}}{\ti{s}_{i}w}$ in this case.
Since $w^{-1}\ti{\alpha}_{i} \in \Delta^{+}$,
we see from Lemma~\ref{lem:tb}\,(2) that
$\ti{v}'_{w'} = \ti{v}_{w}$. Hence we obtain
$\wt(\ti{v}'_{w'} \Rightarrow \kappa(\eta_1')) =
\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1))$.
By this equality, \eqref{eq:cs3b}, and
$\pi_{\eta_{1}'}=f_{i}^{\vp_{i}(\eta)}\pi_{\eta_{1}}$,
we obtain \eqref{eq:cs3a}.
Thus we have shown Claim~\ref{c:s3}. \bqed
\vsp
Substituting \eqref{eq:cs1a}, \eqref{eq:cs2a}, \eqref{eq:cs3a}
into \eqref{eq:s2}, and then using Lemma~\ref{lem:Txi2}, we deduce that
\begin{equation} \label{eq:s4}
\begin{split}
& \Xi_{\lambda\mu}(f_{i}^{\max}\pi_{\eta} \cdot
T_{\wt(w \Rightarrow \kappa(\eta))}) = \\
&
\begin{cases}
f_{i}^{\max}\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
f_{i}^{\max}\pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \ti{v}_{w})}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} > 0$}, \\[2mm]
f_{i}^{\max}\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \ti{v}_{w})}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} = 0$}, \\[2mm]
f_{i}^{\vp_{i}(\eta)}\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \ti{v}_{w})}
& \text{if $\pair{v\mu}{\alpha_{i}^{\vee}} < 0$}.
\end{cases}
\end{split}
\end{equation}
Here we see from \eqref{eq:Txi1a} and \eqref{eq:cl} that
$\vp_{i}(\pi_{\eta} \cdot
T_{\wt(w \Rightarrow \kappa(\eta))}) = \vp_{i}(\eta)$.
Hence the left-hand side of \eqref{eq:s4} is equal to
$f_{i}^{\vp_{i}(\eta)}\Xi_{\lambda\mu}
(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))})$.
Similarly, it is easily verified, using \eqref{eq:Txi1a}, \eqref{eq:cl},
and the tensor product rule for crystals, that
\begin{equation*}
\vp_{i}(\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \ti{v}_{w})}) = \vp_{i}(\eta_{1} \otimes \eta_{2})=\vp_{i}(\eta).
\end{equation*}
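The $\vp_{i}$-computations above rest only on the abstract tensor product rule for crystals. As a sanity check, the rule can be exercised numerically on pairs $(\varepsilon_i, \varphi_i)$. The following sketch uses Kashiwara's convention and names of our own choosing; the convention used for path tensor products in this paper may differ by reversing the factors.

```python
# Toy model: the i-th direction of a crystal element is the pair
# (eps_i, phi_i). Kashiwara's tensor product rule is assumed here;
# these names are ours, not the paper's.

def wt_i(b):
    # i-weight <wt(b), alpha_i^vee> = phi_i(b) - eps_i(b)
    eps, phi = b
    return phi - eps

def tensor_i(b1, b2):
    """(eps_i, phi_i) of the tensor product b1 (x) b2."""
    eps = max(b1[0], b2[0] - wt_i(b1))
    phi = max(b2[1], b1[1] + wt_i(b2))
    return (eps, phi)
```

For instance, `tensor_i((1, 2), (0, 3))` gives `(1, 5)`, and the i-weight of the product equals the sum of the i-weights of the factors, as the rule guarantees.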
Furthermore, we deduce by the tensor product rule for crystals that
the right-hand side of \eqref{eq:s4} is equal to
\begin{equation*}
\begin{split}
& f_{i}^{\max}(\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \ti{v}_{w})}) \\
& \qquad =
f_{i}^{\vp_{i}(\eta)}(\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \ti{v}_{w})}).
\end{split}
\end{equation*}
Therefore, we obtain
\begin{equation*}
f_{i}^{\vp_{i}(\eta)}\Xi_{\lambda\mu}
(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))}) =
f_{i}^{\vp_{i}(\eta)}(\pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \ti{v}_{w})}),
\end{equation*}
and hence $\Xi_{\lambda\mu}
(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))}) = \pi_{\eta_{1}} \cdot
T_{\wt(\ti{v}_{w} \Rightarrow \kappa(\eta_1)) + \wt(w \Rightarrow \ti{v}_{w})} \otimes
\pi_{\eta_{2}} \cdot T_{\wt(w \Rightarrow \ti{v}_{w})}$,
as desired. This completes the proof of Proposition~\ref{prop:s}.
\end{proof}
\begin{prop} \label{prop:s2}
Let $\lambda_{1},\,\dots,\,\lambda_{n} \in P^{+}$, and set
$\lambda:=\lambda_{1} + \cdots + \lambda_{n}$.
Take $\J_{k}:=\J_{\lambda_{k}}$, $1 \le k \le n$, and $\J=\J_{\lambda}$ as in \eqref{eq:J}.
Let $v_{k} \in \WSu{\lambda_k}$ for $1 \le k \le n$, and set
\begin{equation*}
\eta:=\Theta_{\lambda_1,\dots,\lambda_n}^{-1}(
\eta_{\lambda_1}^{v_1} \otimes \cdots \otimes \eta_{\lambda_n}^{v_n})
\in \QLS(\lambda).
\end{equation*}
Let $w \in W$. We define
\begin{equation*}
\begin{cases}
\ti{v}_{n+1}:=w, \quad
\ti{v}_{k}:=\tbmin{v_{k}}{\J_{k}}{\ti{v}_{k+1}} \quad
\text{\rm for $1 \le k \le n$}, \\[1mm]
\xi_{n}:=\wt (\ti{v}_{n+1} \Rightarrow \ti{v}_{n}), \quad
\xi_{k}:=\xi_{k+1} + \wt (\ti{v}_{k+1} \Rightarrow \ti{v}_{k})
\quad \text{\rm for $1 \le k \le n-1$}.
\end{cases}
\end{equation*}
Then the following equality holds{\rm:}
\begin{equation} \label{eq:s2a}
\Xi_{\lambda_1,\dots,\lambda_n}(\pi_{\eta} \cdot T_{\wt (w \Rightarrow \kappa(\eta))}) =
(\pi_{\lambda_1}^{v_1} \cdot T_{\xi_1}) \otimes \cdots \otimes
(\pi_{\lambda_n}^{v_n} \cdot T_{\xi_n}).
\end{equation}
\end{prop}
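Before the induction, note that the $\xi_{k}$ are simply suffix sums of the step weights $\wt(\ti{v}_{k+1} \Rightarrow \ti{v}_{k})$. The accumulation pattern can be sketched as follows (weights abstracted to integers; a toy illustration of ours, not the paper's Weyl-group data):

```python
# xi_k = w_k + w_{k+1} + ... + w_n, computed right to left exactly as in
# the definition xi_n = w_n, xi_k = xi_{k+1} + w_k. Here w_k is a
# stand-in for wt(v~_{k+1} => v~_k); the real weights live in a lattice.

def xi_list(w):
    xs = [0] * len(w)
    acc = 0
    for k in reversed(range(len(w))):
        acc += w[k]
        xs[k] = acc
    return xs
```

For instance, `xi_list([1, 2, 3])` returns `[6, 5, 3]`, matching $\xi_{1} = \xi_{2} + w_{1}$ and $\xi_{2} = \xi_{3} + w_{2}$.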
\begin{proof}
We prove the assertion of the proposition by induction on $n$.
Assume that $n=1$. In this case,
both $\Xi_{\lambda_1,\dots,\lambda_n}$ and
$\Theta_{\lambda_1,\dots,\lambda_n}$ are the identity map.
Hence, $\eta=\eta_{\lambda_1}^{v_1}$, and
$\pi_{\eta}=\pi_{\lambda_1}^{v_1}$. By Lemma~\ref{lem:wtS},
we have $\xi_{1} = \wt(w \Rightarrow \ti{v}_{1}) \equiv
\wt(w \Rightarrow v_{1}) =
\wt(w \Rightarrow \kappa(\eta))$ mod $\QSv{1}$.
Therefore, we obtain
$\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))} =
\pi_{\eta} \cdot T_{\xi_1} = \pi_{\lambda_1}^{v_1} \cdot T_{\xi_1}$
by Lemma~\ref{lem:PiJ}\,(3).
This proves the assertion for the case $n=1$.
Assume that $n > 1$; for simplicity of notation,
we set $\lambda':=\lambda_{1} + \cdots + \lambda_{n-1}$.
We see from Remarks~\ref{rem:ass1} and \ref{rem:ass2} that
the following diagrams \eqref{eq:CDa} and \eqref{eq:CDb} are commutative:
\begin{equation} \label{eq:CDa}
\begin{split}
\xymatrix{
\QLS(\lambda) \ar[r]^-{\Theta_{\lambda_1,\dots,\lambda_n}}
\ar[d]_{\Theta_{\lambda',\lambda_n}} &
\QLS(\lambda_{1}) \otimes \cdots \otimes \QLS(\lambda_{n}) \\
\QLS(\lambda') \otimes \QLS(\lambda_{n}),
\ar[ur]_{\qquad \Theta_{\lambda_1,\dots,\lambda_{n-1}} \otimes \id} & }
\end{split}
\end{equation}
\begin{equation} \label{eq:CDb}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda) \ar[r]^-{\Xi_{\lambda_1,\dots,\lambda_n}}
\ar[d]_{\Xi_{\lambda',\lambda_n}} &
(\SLS(\lambda_{1}) \otimes \cdots \otimes \SLS(\lambda_{n}))_{0} \\
(\SLS(\lambda') \otimes \SLS(\lambda_{n}))_{0}.
\ar[ur]_{\qquad \Xi_{\lambda_1,\dots,\lambda_{n-1}} \otimes \id} & }
\end{split}
\end{equation}
Now, we set
$\eta':=\Theta_{\lambda_1,\dots,\lambda_{n-1}}^{-1}(
\eta_{\lambda_1}^{v_1} \otimes \cdots \otimes \eta_{\lambda_{n-1}}^{v_{n-1}})
\in \QLS(\lambda')$; by the commutative diagram \eqref{eq:CDa},
we see that $\Theta_{\lambda',\lambda_{n}}^{-1}(\eta' \otimes
\eta_{\lambda_{n}}^{v_{n}}) = \eta$. Therefore, we deduce
from Proposition~\ref{prop:s} that
\begin{equation}
\Xi_{\lambda',\lambda_{n}}(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))}) =
\pi_{\eta'} \cdot T_{ \wt (\ti{v}_{n} \Rightarrow \kappa(\eta')) + \wt(w \Rightarrow \ti{v}_{n})}
\otimes \pi_{\lambda_{n}}^{v_{n}} \cdot T_{\wt(w \Rightarrow \ti{v}_{n})};
\end{equation}
note that $\wt(w \Rightarrow \ti{v}_{n}) = \xi_{n}$.
Also, by the induction hypothesis
(applied to $\eta' \in \QLS(\lambda')$ and $\ti{v}_{n} \in W$),
we have
\begin{align*}
& \Xi_{\lambda_1,\dots,\lambda_{n-1}}(
\pi_{\eta'} \cdot T_{ \wt (\ti{v}_{n} \Rightarrow \kappa(\eta')) + \wt(w \Rightarrow \ti{v}_{n})}) \\
& \qquad =
\Bigl(\Xi_{\lambda_1,\dots,\lambda_{n-1}}(
\pi_{\eta'} \cdot T_{ \wt (\ti{v}_{n} \Rightarrow \kappa(\eta')) } ) \Bigr)
\cdot T_{\wt(w \Rightarrow \ti{v}_{n})}
\quad \text{by Lemma~\ref{lem:Txi2}} \\
& \qquad =
\Bigl(
(\pi_{\lambda_1}^{v_1} \cdot T_{\xi_1-\wt(w \Rightarrow \ti{v}_{n})}) \otimes \cdots \otimes
(\pi_{\lambda_{n-1}}^{v_{n-1}} \cdot T_{\xi_{n-1}-\wt(w \Rightarrow \ti{v}_{n})})
\Bigr) \cdot T_{\wt(w \Rightarrow \ti{v}_{n})} \\
& \qquad =
(\pi_{\lambda_1}^{v_1} \cdot T_{\xi_1}) \otimes \cdots \otimes
(\pi_{\lambda_{n-1}}^{v_{n-1}} \cdot T_{\xi_{n-1}}).
\end{align*}
Therefore, by the commutative diagram \eqref{eq:CDb}, we obtain
\begin{align*}
\Xi_{\lambda_1,\dots,\lambda_{n}}(\pi_{\eta} \cdot T_{\wt (w \Rightarrow \kappa(\eta))})
& = ((\Xi_{\lambda_1,\dots,\lambda_{n-1}} \otimes \id) \circ
\Xi_{\lambda',\lambda_{n}})(\pi_{\eta} \cdot T_{\wt (w \Rightarrow \kappa(\eta))}) \\
& = (\pi_{\lambda_1}^{v_1} \cdot T_{\xi_1}) \otimes \cdots \otimes
(\pi_{\lambda_n}^{v_n} \cdot T_{\xi_n}),
\end{align*}
as desired. This proves the proposition.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main}.]
Take $N_{\lambda}$, $N_{\mu}$, $N_{\lambda+\mu}$ as in \eqref{eq:N},
and let $N$ be a common multiple of $N_{\lambda}$, $N_{\mu}$, $N_{\lambda+\mu}$.
We deduce from \eqref{eq:CD4} and \eqref{eq:CD5} that
the following diagrams are commutative:
\begin{equation} \label{eq:CD4a}
\begin{split}
\xymatrix{
\SLS_{0}(\lambda+\mu)
\ar[rr]^-{\Xi_{\lambda\mu}}
\ar[d]_-{\Sigma_{N}}
\ar[ddr]^-{\Sigma_{N}'} & &
(\SLS(\lambda) \otimes \SLS(\mu))_{0}
\ar[d]^-{\Sigma_{N} \otimes \Sigma_{N}} \\
(\SLS(\lambda+\mu)^{\otimes N})_{0} & &
(\SLS(\lambda)^{\otimes N} \otimes \SLS(\mu)^{\otimes N})_{0} \\
& \SLS_{0}(N\lambda+N\mu), \ar[ru]_{\Xi_{\lambda\mu}^{(N)}}
\ar[lu]^-{\Xi_{\lambda+\mu}^{(N)}} &
}
\end{split}
\end{equation}
\begin{equation} \label{eq:CD5a}
\begin{split}
\xymatrix{
\QLS(\lambda+\mu)
\ar[rr]^-{\Theta_{\lambda\mu}}
\ar[d]_-{\Sigma_{N}}
\ar[ddr]^-{\Sigma_{N}'} & &
\QLS(\lambda) \otimes \QLS(\mu)
\ar[d]^-{\Sigma_{N} \otimes \Sigma_{N}} \\
\QLS(\lambda+\mu)^{\otimes N} & &
\QLS(\lambda)^{\otimes N} \otimes \QLS(\mu)^{\otimes N} \\
& \QLS(N\lambda+N\mu), \ar[ru]_{\Theta_{\lambda\mu}^{(N)}}
\ar[lu]^-{\Theta_{\lambda+\mu}^{(N)}} &
}
\end{split}
\end{equation}
By the commutative diagram \eqref{eq:CD4a}, it suffices to show that
\begin{equation} \label{eq:main1a}
\begin{split}
& (\Xi_{\lambda\mu}^{(N)} \circ \Sigma_{N}')
(\overbrace{ \pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))} }^{
\text{cf. LHS of \eqref{eq:main}}}) \\
& \qquad =
(\Sigma_{N} \otimes \Sigma_{N})
(\underbrace{
\pi_{\eta_{1}} \cdot
T_{\wt(\io{\eta_2}{w} \Rightarrow \kappa(\eta_1))+ \ze{\eta_2}{w}} \otimes
\pi_{\eta_{2}} \cdot
T_{\wt(w \Rightarrow \kappa(\eta_2))}}_{
\text{RHS of \eqref{eq:main}}}).
\end{split}
\end{equation}
First we compute the left-hand side of \eqref{eq:main1a}.
We set $\eta':= \Sigma_{N}'(\eta) \in \QLS(N\lambda+N\mu)$;
note that $\kappa(\eta')=\kappa(\eta)$.
By the commutative diagrams \eqref{eq:CD} and \eqref{eq:CD3},
and the definition of $\Sigma_{N}'$, we see that
\begin{equation*}
\Sigma_{N}'(\pi_{\eta} \cdot T_{\wt(w \Rightarrow \kappa(\eta))})
= \Sigma_{N}'(\pi_{\eta}) \cdot T_{\wt(w \Rightarrow \kappa(\eta))}
= \pi_{\eta'} \cdot T_{\wt(w \Rightarrow \kappa(\eta'))}.
\end{equation*}
Hence the left-hand side of \eqref{eq:main1a} is identical to
$\Xi_{\lambda\mu}^{(N)}
(\pi_{\eta'} \cdot T_{\wt(w \Rightarrow \kappa(\eta'))})$.
Next we compute the right-hand side of \eqref{eq:main1a}.
Assume that $\eta_{1} \in \QLS(\lambda)$ and $\eta_{2} \in \QLS(\mu)$ are of the forms:
\begin{equation*}
\eta_{1} = (u_{1},\,\dots,u_{p};\tau_{0},\tau_{1},\dots,\tau_{p}), \qquad
\eta_{2} = (v_{1},\,\dots,v_{s};\sigma_{0},\sigma_{1},\dots,\sigma_{s}),
\end{equation*}
respectively. We define
\begin{equation}
\begin{cases}
\tiv{\eta_{2}}{w} = (\ti{v}_{1},\,\dots,\,\ti{v}_{s},\,\ti{v}_{s+1}=w), \\[1.5mm]
\tiv{\eta_{1}}{\ti{v}_{1}} = (\ti{u}_{1},\,\dots,\,\ti{u}_{p},\,\ti{u}_{p+1}=\ti{v}_{1}),
\end{cases}
\quad \text{and} \quad
\begin{cases}
\tixi{\eta_{2}}{w} = (\ti{\xi}_{1},\,\dots,\,\ti{\xi}_{s}), \\[1.5mm]
\tixi{\eta_{1}}{\ti{v}_{1}} = (\ti{\gamma}_{1},\,\dots,\,\ti{\gamma}_{p}),
\end{cases}
\end{equation}
as in \eqref{eq:ti1} and \eqref{eq:ti2}, respectively;
recall that $\io{\eta_2}{w}=\ti{v}_{1}$ and $\ze{\eta_2}{w}=\ti{\xi}_{1}$.
We claim that
\begin{align}
& \pi_{\eta_1} \cdot T_{\wt(\io{\eta_2}{w} \Rightarrow \kappa(\eta_1))+ \ze{\eta_2}{w}}
= (u_{1}\PS{\lambda}(t_{\ti{\gamma}_1+\ti{\xi}_1}),\,\dots,
u_{p}\PS{\lambda}(t_{\ti{\gamma}_p+\ti{\xi}_1});
\tau_{0},\tau_{1},\dots,\tau_{p}), \label{eq:23a} \\
& \pi_{\eta_2} \cdot T_{\wt(w \Rightarrow \kappa(\eta_{2}))}
= (v_{1}\PS{\mu}(t_{\ti{\xi}_1}),\,\dots,
v_{s}\PS{\mu}(t_{\ti{\xi}_s});
\sigma_{0},\sigma_{1},\dots,\sigma_{s}). \label{eq:23b}
\end{align}
Let us show \eqref{eq:23b}; the proof of \eqref{eq:23a} is similar.
We define $\bxi{\eta_{2}} = (\xi_{1},\,\dots,\,\xi_{s-1},\xi_{s}=0)$
as in \eqref{eq:bxi}. Then, by Remark~\ref{rem:equiv},
we have $\ti{\xi}_{u} \equiv \xi_{u}+\wt^{\J_{\mu}}
(\mcr{w}^{\J_{\mu}} \Rightarrow \kappa(\eta_{2}))$ mod $\QSv{\mu}$
for all $1 \le u \le s$. By Lemma~\ref{lem:wtS}, we have
$\wt^{\J_{\mu}}
(\mcr{w}^{\J_{\mu}} \Rightarrow \kappa(\eta_{2}))
\equiv \wt(w \Rightarrow \kappa(\eta_{2}))$ mod $\QSv{\mu}$,
and hence $\ti{\xi}_{u} \equiv \xi_{u} +
\wt(w \Rightarrow \kappa(\eta_{2}))$ mod $\QSv{\mu}$
for all $1 \le u \le s$. From these, we see that
\begin{align*}
\pi_{\eta_2} \cdot T_{\wt(w \Rightarrow \kappa(\eta_{2}))}
& \stackrel{\eqref{eq:pieta}}{=} (v_{1}\PS{\mu}(t_{\xi_1}),\,\dots,
v_{s}\PS{\mu}(t_{\xi_s});
\sigma_{0},\sigma_{1},\dots,\sigma_{s}) \cdot
T_{\wt(w \Rightarrow \kappa(\eta_{2}))} \\
& \stackrel{\eqref{eq:PiJ2}}{=}
(v_{1}\PS{\mu}(t_{\xi_1+\wt(w \Rightarrow \kappa(\eta_{2}))}),\,\dots,
v_{s}\PS{\mu}(t_{\xi_s+\wt(w \Rightarrow \kappa(\eta_{2}))});
\sigma_{0},\sigma_{1},\dots,\sigma_{s}) \\
& = (v_{1}\PS{\mu}(t_{\ti{\xi}_1}),\,\dots,
v_{s}\PS{\mu}(t_{\ti{\xi}_s});
\sigma_{0},\sigma_{1},\dots,\sigma_{s}) \quad
\text{by Lemma~\ref{lem:PiJ}\,(3)},
\end{align*}
as desired.
By the definition of $\Sigma_{N}$,
the right-hand side of \eqref{eq:main1a} is:
\begin{equation} \label{eq:main1c}
\begin{split}
& (\pi_{\lambda}^{u_1} \cdot T_{\ti{\gamma}_1+\ti{\xi}_1})^{\otimes N(\tau_1-\tau_0)}
\otimes \cdots \otimes
(\pi_{\lambda}^{u_p} \cdot T_{\ti{\gamma}_{p}+\ti{\xi}_1})^{\otimes N(\tau_p-\tau_{p-1})} \\
& \qquad \otimes
(\pi_{\mu}^{v_1} \cdot T_{\ti{\xi}_1})^{\otimes N(\sigma_1-\sigma_0)}
\otimes \cdots \otimes
(\pi_{\mu}^{v_s} \cdot T_{\ti{\xi}_s})^{\otimes N(\sigma_s-\sigma_{s-1})}.
\end{split}
\end{equation}
Now, we see from the commutative diagram \eqref{eq:CD5a} and
the definition of $\Sigma_{N}$ that
\begin{align*}
& \Theta_{\lambda\mu}^{(N)}(\eta') =
(\Theta_{\lambda\mu}^{(N)} \circ \Sigma_{N}')(\eta) =
((\Sigma_{N} \otimes \Sigma_{N}) \circ \Theta_{\lambda\mu})(\eta) =
(\Sigma_{N} \otimes \Sigma_{N})(\eta_{1} \otimes \eta_{2}) \\
& =
(\eta_{\lambda}^{u_1})^{\otimes N(\tau_1-\tau_0)}
\otimes \cdots \otimes
(\eta_{\lambda}^{u_p})^{\otimes N(\tau_p-\tau_{p-1})} \otimes
(\eta_{\mu}^{v_1})^{\otimes N(\sigma_1-\sigma_0)}
\otimes \cdots \otimes
(\eta_{\mu}^{v_s})^{\otimes N(\sigma_s-\sigma_{s-1})}.
\end{align*}
Therefore, by applying Proposition~\ref{prop:s2},
we deduce that $\Xi_{\lambda\mu}^{(N)}
(\pi_{\eta'} \cdot T_{\wt(w \Rightarrow \kappa(\eta'))})$
(which is identical to the left-hand side of \eqref{eq:main1a}, as seen above)
is identical to the element in \eqref{eq:main1c}
(which is identical to the right-hand side of \eqref{eq:main1a}, as seen above).
Thus we have shown \eqref{eq:main1a},
thereby completing the proof of Theorem~\ref{thm:main}.
\end{proof}
\usepackage{amsmath,amsfonts,amsthm,amssymb,verbatim,enumerate,quotes,graphicx,mathtools}
\usepackage{color}
\newcommand{\comments}[1]{}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{corollary}{Corollary}
\newtheorem{example}{Example}
\newtheorem*{remark}{Remark}
\newtheorem*{remarks}{Remarks}
\newtheorem{remarkk}{Remark}
\newtheorem{conjecture}{Conjecture}
\newtheorem{question}{Question}
\newtheorem{proposition}{Proposition}
\newcommand\qfor{\quad\text{for }}
\begin{document}
\title[The bungee set in quasiregular dynamics]{The bungee set in quasiregular dynamics}
\author{Daniel A. Nicks, \, David J. Sixsmith}
\address{School of Mathematical Sciences \\ University of Nottingham \\ Nottingham
NG7 2RD \\ UK \\ ORCiD:0000-0002-9493-2970}
\email{Dan.Nicks@nottingham.ac.uk}
\address{Dept. of Mathematical Sciences \\
University of Liverpool \\
Liverpool L69 7ZL\\
UK \\ ORCiD: 0000-0002-3543-6969}
\email{djs@liverpool.ac.uk}
\begin{abstract}
In complex dynamics, the bungee set is defined as the set of points whose orbit neither stays bounded nor tends to infinity. In this paper we study, for the first time, the bungee set of a quasiregular map of transcendental type. We show that this set is infinite, and shares many properties with the bungee set of a {transcendental entire function}. By way of contrast, we give examples of novel properties of this set in the quasiregular setting. In particular, we give an example of a quasiconformal map of the plane with a non-empty bungee set; this behaviour is impossible for an analytic homeomorphism.
\end{abstract}
\maketitle
{\let\thefootnote\relax\footnote{2010 \itshape Mathematics Subject Classification. \normalfont Primary 37F10; Secondary 30C65, 30D05.}}
\section{Introduction}
Suppose that $f$ is an entire function. In the study of complex dynamics it is common to partition the complex plane into two sets. Firstly, the \emph{Julia set} $J(f)$, which consists of points in a neighbourhood of which the iterates of $f$ are, in some sense, chaotic. Secondly, its complement, the \emph{Fatou set} $F(f) := \mathbb{C} \setminus J(f)$. For more information on complex dynamics, including precise definitions of these sets, we refer to \cite{MR1216719}.
An alternative partition divides the plane into three sets based on the nature of the orbits of points; the \emph{orbit} of a point $z$ is the sequence $(f^n(z))_{n\geq 0}$ of its images under the iterates of $f$. This partition is defined as follows:
\begin{itemize}
\item The \emph{escaping set} $I(f)$ consists of those points whose orbit tends to infinity.
\item The \emph{bounded orbit set} $BO(f)$ consists of those points whose orbit is bounded.
\item The \emph{bungee set} $BU(f) := \mathbb{C} \setminus (I(f) \cup BO(f))$ contains all other points.
\end{itemize}
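These three orbit types can be explored numerically. The following heuristic classifier is a sketch of ours: finitely many iterates can only suggest membership, and the thresholds are arbitrary choices.

```python
# Heuristic orbit classification over finitely many iterates. "I-like"
# means the orbit left a large disc and stayed out, "BO-like" that it
# never left, and "BU-like" that it left and later returned -- the
# oscillation characteristic of the bungee set.

def classify_orbit(f, z, steps=500, big=1e6, small=1e3):
    seen_big = came_back = False
    for _ in range(steps):
        z = f(z)
        if abs(z) > big:
            seen_big = True
            if abs(z) > 1e100:  # stop well before float overflow
                break
        elif seen_big and abs(z) < small:
            came_back = True
    if seen_big and came_back:
        return "BU-like"
    return "I-like" if seen_big else "BO-like"
```

For the polynomial $z \mapsto z^2$ this reports "BO-like" inside the unit disc and "I-like" outside, and, consistently with the emptiness of the bungee set for polynomials, never "BU-like".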
Suppose that $P$ is a polynomial of degree greater than one. Then the escaping set $ I(P) $ is the basin of attraction of infinity, and so $I(P) \subset F(P)$. The set $BO(P)$ (usually in this context denoted by $K(P)$) is known as the \emph{filled Julia set} and has been extensively investigated, since $J(P) = \partial BO(P)$. It is well-known that $BU(P)$ is empty in this case.
The escaping set for a general transcendental entire function\ $ f $ was first studied by Eremenko \cite{MR1102727}, and has been the focus of much subsequent research in complex dynamics.
The set $BO(f)$ for a {transcendental entire function} $f$ was studied in \cite{MR2869069} and \cite{MR3118409}.
If $f$ is transcendental, then $BU(f)$ is non-empty; indeed the Hausdorff dimension of $BU(f) \cap J(f)$ is greater than zero \cite[Theorem 5.1]{OsSix}. The properties of $BU(f)$ were studied in \cite{OsSix} and subsequently in \cite{Lazebnik2017611, DaveDynamical}. Examples of {transcendental entire function}s with Fatou components in $BU(f)$ were given in \cite{Bish3, MR918638, Lazebnik2017611, FJL}. These sets are connected by the equation \cite{MR1102727, MR3118409, OsSix}
\begin{equation}
\label{boundaries}
J(f) = \partial BU(f) = \partial I(f) = \partial BO(f).
\end{equation}
To move the study of the bungee set into a more general setting, we consider the iteration of quasiregular and quasiconformal maps;
we refer to \cite{MR1238941, MR950174} for definitions.
Suppose that $d \geq 2$, and that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map with an essential singularity at infinity, in which case we say that $f$ is of \emph{transcendental type}. Following \cite{MR3009101, MR3265283}, we define the \emph{Julia set $J(f)$} as the set of all $x \in \mathbb{R}^d$ such that
\begin{equation*}
\operatorname{cap} \left(\mathbb{R}^d\backslash \bigcup_{k=1}^\infty f^k(U)\right) = 0,
\end{equation*}
for every neighbourhood $U$ of $x$. Here if $S \subset \mathbb{R}^d$, then we write cap $S = 0$ if $S$ has \emph{zero (conformal) capacity}, and otherwise we write cap $S > 0$. Again, we refer to \cite{MR1238941, MR950174} for a definition and properties of conformal capacity.
It is known that if $f$ is a quasiregular map of transcendental type, then the Julia set is infinite \cite[Theorem 1.1]{MR3265283}. It is easy to see that $J(f)$ is closed, and also that $J(f)$ is \emph{completely invariant}, in the sense that $x \in J(f)$ if and only if $f(x) \in J(f)$.
The definitions of $I(f), BO(f)$ and $BU(f)$ can be modified in an obvious way to apply to quasiregular maps of $\mathbb{R}^d$. In the quasiregular setting, the escaping set has been studied in \cite{MR2448586,MR3265357,MR3215194,danslow}, and the bounded orbit set in \cite{MR3265283}. Our goal in this paper is to study $BU(f)$ in the case that $f$ is quasiregular and of transcendental type.
Our first result shows that the bungee set of a quasiregular map of transcendental type is never empty, and in fact always meets the Julia set.
\begin{theorem}
\label{theo:itsinfinite}
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type. Then $BU(f) \cap J(f)$ is an infinite set.
\end{theorem}
We now specialise to the case that the Julia set has positive capacity. In fact there are no known examples where the Julia set of a quasiregular map of transcendental type does not have positive capacity, and the following conjecture arises from \cite{MR3009101, MR3265283}.
\begin{conjecture}
\label{con1}
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type. Then cap $J(f) > 0$.
\end{conjecture}
The next three theorems are the main results of this paper. The first two show that, for quasiregular maps of transcendental type, the first equality of \eqref{boundaries} need not hold in general, but we are guaranteed inclusion provided that the Julia set has positive capacity.
\begin{theorem}
\label{theo:capnonzero}
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type. If cap $J(f) >0$, then $BU(f) \cap J(f)$ is an infinite set and
\begin{equation}
\label{e:inclusions}
J(f) \subset \partial BU(f) \cap \partial I(f) \cap \partial BO(f).
\end{equation}
\end{theorem}
\begin{theorem}
\label{theo:JnotboundaryBU}
There is a quasiregular map of transcendental type $f : \mathbb{R}^2 \to \mathbb{R}^2$ such that cap $J(f) > 0$ and $J(f) \ne \partial BU(f)$.
\end{theorem}
Our proof of Theorem~\ref{theo:JnotboundaryBU} relies on the following, perhaps somewhat surprising, result.
\begin{theorem}
\label{theo:quasiconformalexample}
There is a quasiconformal map $f : \mathbb{R}^2 \to \mathbb{R}^2$ such that $BU(f)\ne\emptyset$.
\end{theorem}
If $f : \mathbb{R}^2 \to \mathbb{R}^2$ is an analytic homeomorphism, in other words an affine map, the dynamics of $f$ are not particularly interesting; certainly we have that $BU(f) = \emptyset$. Theorem~\ref{theo:quasiconformalexample} shows that this is not the case for quasiconformal maps of the plane.
\begin{remark}\normalfont
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map \emph{not} of transcendental type. Suppose also that the degree of $f$ is sufficiently large compared to the distortion of $f$; in technical terms we require that deg $f > K_I(f)$. It is shown in \cite[p.28]{MR2755919} (see also \cite{ETS:9408364}) that $I(f)$ contains a neighbourhood of infinity, and so $BU(f)$ is empty.
\end{remark}
Finally, returning to Conjecture~\ref{con1}, we note that there are many conditions known to be sufficient for Conjecture~\ref{con1} to hold. For example, the Julia set of a quasiregular map of transcendental type $f : \mathbb{R}^2 \to \mathbb{R}^2$ is always of positive capacity \cite[Theorem 1.11]{MR3265283}, so this part of Theorem~\ref{theo:JnotboundaryBU} is immediate. The paper \cite{MR3265283} gives many other sufficient conditions; for example, if $f$ is locally Lipschitz or has bounded local index. In the following we add to this list a simple condition on the growth of the function; roughly speaking, all functions that do not grow too slowly have a Julia set of positive capacity. Here, for $r > 0$, we define the \emph{maximum modulus function} by
\[
M(r, f) := \max_{|x| = r} |f(x)|.
\]
\begin{theorem}
\label{theo:capJ}
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type. Suppose also that
\begin{equation}
\label{eq:grows}
\liminf_{r\rightarrow\infty} \frac{\log \log M(r, f)}{\log \log r} = \infty.
\end{equation}
Then cap $J(f)>0$.
\end{theorem}
\begin{remark}\normalfont
A quasiregular map $f : \mathbb{R}^d \to \mathbb{R}^d$ has \emph{positive lower order} if there exist $r_0 > 0$ and $\epsilon > 0$ such that
\[
M(r, f) > \exp r^\epsilon, \qfor r \geq r_0.
\]
It is easy to see that a quasiregular map with positive lower order satisfies \eqref{eq:grows}.
\end{remark}
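To make the remark concrete: if $M(r, f) \geq \exp r^{\epsilon}$, then $\log \log M(r, f) \geq \epsilon \log r$, so the ratio in \eqref{eq:grows} dominates $\epsilon \log r / \log \log r \to \infty$. A numeric sketch of ours, with the arbitrary choice $\epsilon = 1/2$:

```python
import math

# Lower bound for the ratio in \eqref{eq:grows} when M(r, f) >= exp(r^eps):
# log log M(r, f) / log log r >= eps * log(r) / log(log(r)), which tends
# to infinity as r -> infinity.

def ratio_lower_bound(r, eps=0.5):
    return eps * math.log(r) / math.log(math.log(r))
```

Evaluating along $r = 10^{2^k}$ shows the bound increasing without limit.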
\subsection*{Notation}
For $0 < r_1 < r_2$, we denote the spherical shell centred at the origin by
\[
A(r_1,r_2) := \{x \in \mathbb{R}^d : r_1 < |x| < r_2\},
\]
and the ball with centre at the origin and radius $r_1$ by
\[
B(r_1) := \{x \in \mathbb{R}^d : |x| < r_1\}.
\]
Finally, if $S \subset \mathbb{R}^d$, then we denote the boundary of $S$ in $\mathbb{R}^d$ by $\partial S$, and the closure of $S$ in $\mathbb{R}^d$ by $\overline{S}$.
\section{Proof of Theorem~\ref{theo:itsinfinite} and Theorem~\ref{theo:capnonzero}}
\label{S:itsinfinite}
We use the following result. This is a version of \cite[Lemma 3.1]{Sixsmithmax} stated for quasiregular maps. The proof is omitted, as it is almost identical to the proof of the original.
\begin{lemma}
\label{lemm:l1}
Suppose that $(E_n)_{n\in\mathbb{N}}$ is a sequence of compact sets in $\mathbb{R}^d$ and $(m_n)_{n\in\mathbb{N}}$ is a sequence of integers. Suppose also that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map such that $E_{n+1} \subset f^{m_n}(E_n )$, for $n\in\mathbb{N}$. Set $p_n := \sum_{k=1}^n m_k$, for $n\in\mathbb{N}$. Then there exists $\zeta\in E_1$ such that
\begin{equation}
\label{feq}
f^{p_n}(\zeta) \in E_{n+1}, \qfor n\in\mathbb{N}.
\end{equation}
If, in addition, $E_n \cap J(f) \ne \emptyset$, for $n\in\mathbb{N}$, then there exists $\zeta \in E_1 \cap J(f)$ such that (\ref{feq}) holds.
\end{lemma}
We need the following, which is taken from \cite[Lemma 3.3]{danslow} and \cite[Lemma~3.4]{danslow}. Here a quasiregular map $f : \mathbb{R}^d \to \mathbb{R}^d$ of transcendental type has the \emph{pits effect} if there exists $n \in \mathbb{N}$ such that, for all $c > 1$ and $\epsilon > 0$, there exists $r_0$ such that if $r > r_0$, then the set
\[
\{x \in \mathbb{R}^d : r \leq |x| \leq cr, |f(x)| \leq 1\}
\]
can be covered by $n$ balls of radius $\epsilon r$.
\begin{lemma}
\label{lem:dan}
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type that has the pits effect. Then there exist increasing sequences of positive real numbers $(s_n)_{n\in\mathbb{N}}$ and $(t_n)_{n\in\mathbb{N}}$, both tending to infinity, such that, for $t \geq t_n$,
\begin{equation}
\label{eq:dan}
f(A(s_n , t)) \supset B(2t), \qfor n \in \mathbb{N}.
\end{equation}
\end{lemma}
Note that \cite[Lemma 3.4]{danslow} states $f(A(s_n , t)) \supset A(s_n ,2t)$ in place of \eqref{eq:dan}. Our stronger statement is easily derived from the proof of \cite[Lemma 3.4]{danslow}.
\begin{proof}[Proof of Theorem~\ref{theo:itsinfinite} and Theorem~\ref{theo:capnonzero}]
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type. The proof splits into two cases: cap $J(f) > 0$ and cap $J(f) = 0$.
We consider first the case that cap $J(f) > 0$. Pick $R>0$ sufficiently large that cap~$J' > 0$, where $J' := J(f) \cap B(R)$. For each $n \in \mathbb{N}$ set
\[
J_n := J(f) \cap \{ x \in \mathbb{R}^d : |x| > n \}.
\]
It follows from \cite[Theorem~1.2]{MR583633}, which is the quasiregular analogue of Picard's great theorem, together with complete invariance, that $J(f) \setminus f(J_n)$ is a finite set, for $n \in \mathbb{N}$, and so has capacity zero. If cap $J_n = 0$, then cap $f(J_n) = 0$ (see, for example, \cite[Theorem 10.15]{MR950174}) and so cap $J(f) \setminus f(J_n) > 0$. This is a contradiction. Hence cap $J_n > 0$, for $n \in \mathbb{N}$.
Choose a point $x_1 \in J(f)$, and let $U_1$ be a neighbourhood of $x_1$ of diameter at most one. It follows from the definition of the Julia set that cap $(\mathbb{R}^d \setminus \bigcup_{k\in\mathbb{N}} f^k(U_1)) = 0$, and so there exist $m_1 \in \mathbb{N}$ and $x_1' \in U_1$ such that
\[
x_2 := f^{m_1}(x_1') \in J_2.
\]
Let $U_1' \subset U_1$ be a neighbourhood of $x_1'$ sufficiently small that $U_2 := f^{m_1}(U_1')$ is of diameter at most one.
Now, since cap $J' > 0$, and $U_2$ is open and meets $J(f)$, there exist $m_2 \in \mathbb{N}$ and $x_2' \in U_2$ such that
\[
x_3 := f^{m_2}(x_2') \in J'.
\]
Let $U_2' \subset U_2$ be a neighbourhood of $x_2'$ sufficiently small that $U_3 := f^{m_2}(U_2')$ is of diameter at most one.
Continuing inductively, we obtain a sequence of domains $(U_n)_{n\in\mathbb{N}}$, each of diameter at most one, and a sequence of integers $(m_n)_{n\in\mathbb{N}}$ such that $f^{m_n}(U_n) \supset U_{n+1}$, and $U_n$ meets $J_n$ when $n$ is even, and $J'$ when $n\geq 3$ is odd.
An application of Lemma~\ref{lemm:l1} gives that there is a point
\[
\xi \in \overline{U_1} \cap BU(f) \cap J(f).
\]
Since $f^n(\xi) \in BU(f) \cap J(f)$, for $n \geq 0$, we obtain that $BU(f) \cap J(f)$ is infinite.
Since $x_1$ and $U_1$ were arbitrary, it follows that $J(f) \subset \overline{BU(f)}$. It is known that $J(f) \subset \partial I(f) \cap \partial BO(f)$ \cite[Theorem 1.3]{MR3265283}. Thus $J(f) \subset \partial BU(f)$, and so \eqref{e:inclusions} holds. This completes the proof of Theorem~\ref{theo:capnonzero}, and also of Theorem~\ref{theo:itsinfinite} when cap $J(f) > 0$. \\
It remains to prove Theorem~\ref{theo:itsinfinite} in the case that cap $J(f) = 0$, so we now assume that the Julia set has capacity zero. It follows by \cite[Corollary 1.1]{MR3265283} that $f$ has the pits effect.
Let $(s_n)_{n\in\mathbb{N}}$ and $(t_n)_{n\in\mathbb{N}}$ be as given in Lemma~\ref{lem:dan}. Set $V_n := A(s_n, t_n)$, for $n \in \mathbb{N}$. We may assume that $B(2t_n)$ meets $J(f)$ for all $n \in \mathbb{N}$, so \eqref{eq:dan} and complete invariance imply that
\[
V_n \cap J(f) \ne \emptyset, \qfor n\in \mathbb{N}.
\]
By \eqref{eq:dan} again,
\[
f(V_n) \supset B(2t_n) \supset V_1, \qfor n \in \mathbb{N},
\]
and, moreover, if $m_n \in \mathbb{N}$ is sufficiently large that $2^{m_n} \geq t_n/t_1$, then
\[
f^{m_n}(V_1) \supset B(2^{m_n}t_1) \supset V_n, \qfor n \in \mathbb{N}.
\]
An application of Lemma~\ref{lemm:l1} (with $E_n = \overline{V_1}$ for odd $n$, and $E_n = \overline{V_n}$ for even $n$) gives that there is a point
\[
\xi \in \overline{V_1} \cap BU(f) \cap J(f),
\]
because we have forced oscillation of the orbit. As earlier, it follows that $BU(f) \cap J(f)$ is infinite.
\end{proof}
\section{Examples}
\label{S:quasiconformalexamples}
In this section we first prove Theorem~\ref{theo:quasiconformalexample}, and then use the function constructed to prove Theorem~\ref{theo:JnotboundaryBU}.
\begin{proof}[Proof of Theorem~\ref{theo:quasiconformalexample}]
From now on we identify $\mathbb{R}^2$ with $\mathbb{C}$ in the obvious way. We construct a quasiconformal map $f : \mathbb{C} \to \mathbb{C}$ such that $BU(f)~\ne~\emptyset$. First we fix $y_0 > 100$, and let $T_0$ be the domain
$$
T_0 := \{ x + iy : y > y_0, \ |x| < 1/y \}.
$$
We define a continuous map $\psi : \overline{T_0} \to \overline{T_0}$ as follows. If $x + iy \in \overline{T_0}$, then we set
\begin{equation}
\label{psidef}
\psi(x + iy) := \frac{xy}{y + 1/y - |x|} + i(y + 1/y - |x|).
\end{equation}
Note that $\psi$ is the identity map on the two vertical sides of $T_0$. Note in addition that
\begin{equation}
\label{toinf}
\psi^n(z) \rightarrow\infty \text{ as } n\rightarrow\infty, \qfor z = 0 + iy \text{ where } y > y_0.
\end{equation}
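To see \eqref{toinf} quantitatively: on the central line $x = 0$, the map \eqref{psidef} reduces to $y \mapsto y + 1/y$, so $y_{n+1}^2 = y_n^2 + 2 + y_n^{-2}$ and the iterates grow like $\sqrt{2n}$. A quick numeric check (a sketch of ours):

```python
# Iterate psi along the imaginary axis: psi(iy) = i(y + 1/y), so the
# imaginary part satisfies y_{n+1} = y_n + 1/y_n and y_n^2 ~ y_0^2 + 2n.

def psi_on_axis(y0, n):
    y = y0
    for _ in range(n):
        y += 1.0 / y
    return y
```

Starting from $y_0 = 101$ (recall $y_0 > 100$), after $10^5$ steps the iterate is near $\sqrt{101^2 + 2 \cdot 10^5} \approx 458.5$.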
We show that $\psi$ is quasiconformal on $T_0$ by estimating the derivative. By differentiating \eqref{psidef} we obtain that, as $y \rightarrow\infty$,
\begin{equation*}
D\psi(x+iy) =
\left(
\begin{array}{cc}%
1 + O(y^{-2}) & O(y^{-2}) \\
\pm 1 & 1 + O(y^{-2})
\end{array}
\right), \qfor (x + iy) \in T_0.
\end{equation*}
It follows that $\psi$ is indeed quasiconformal on $T_0$.
Roughly speaking $\overline{T_0}$ is an infinite ``straight snake''. We now seek to define a quasiconformal map $\phi$ on $T_0$, homeomorphic up to the boundary, such that $\overline{\phi(T_0)}$ is a ``coiled snake''. Moreover half the bends in this snake will have imaginary parts tending to infinity, whereas the remaining bends will be within a fixed distance of the origin.
To construct this map, we first need to fix two particular quasiconformal maps. Let $A$ be the rectangle
\[
A := \{ z : \operatorname{Re}(z) \in [0, 1], \operatorname{Im}(z) \in [0, 2]\},
\]
and let $B$ be the half-annulus
\[
B := \{ z : \operatorname {Im}(z) \geq 0, 1/2 \leq |z - 3/2| \leq 3/2 \}.
\]
We define a map $\nu_r : A \to B$ by
\begin{equation}
\label{nurdef}
\nu_r(x+iy) := 3/2 + (x-3/2)e^{-i \pi y/2}.
\end{equation}
It can be checked that $\nu_r$ is a quasiconformal map on the interior of $A$. It is also easy to check that $\nu_r$ is the identity on the lower boundary of $A$, maps each vertical line segment ending at a point on the lower boundary of $A$ to a semi-circle in $B$, and maps the upper boundary of $A$ to the right-hand lower boundary of $B$ by an affine transformation.
The second quasiconformal map is
\begin{equation}
\label{nuldef}
\nu_l(x+iy) := -1/2 + (x+1/2)e^{i \pi y/2}.
\end{equation}
This maps $A$ to the half annulus \[ \{ z : \operatorname{Im}(z) \geq 0, 1/2 \leq |z + 1/2| \leq 3/2 \},\] once again fixing the lower boundary of $A$.
Let the sequences $(s_n)_{n\in\mathbb{N}}$ and $(t_n)_{n\in\mathbb{N}}$ of positive real numbers be defined by $t_n := 2^n$, $s_1 := y_0$ and then
\[
s_{n+1} := s_n + 2t_n + 4/(s_n + t_n) + 4/(s_n + 2t_n + 4/(s_n + t_n)).
\]
Note that
\begin{equation}
\label{tsum}
\sum_{n=1}^\infty 1/t_n < \infty.
\end{equation}
Roughly speaking $t_n$ will be the height of the $n$th bend of the snake, and $s_n$ will measure the total distance along the snake to the start of the $n$th bend. Note that $s_{n+1}$ is only approximately equal to $s_n + 2t_n$; the additional terms correspond to the ``corners'' of the bends. See Figure~\ref{f1}.
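These sequences are easy to examine numerically. In the sketch below (illustrative only; $y_0 = 4$ is an arbitrary admissible choice, since the construction only requires $y_0$ sufficiently large) we check that $t_n < s_n$, that the ``corner'' correction $s_{n+1} - (s_n + 2t_n)$ tends to $0$, and that the sum of the $1/t_n$ converges.

```python
def snake_lengths(y0=4.0, N=25):
    """Compute t_n = 2^n and the recursively defined lengths s_n (1-indexed)."""
    t = [0.0] + [2.0 ** n for n in range(1, N + 2)]
    s = [0.0, y0]  # s_1 = y_0
    for n in range(1, N + 1):
        a = s[n] + 2 * t[n] + 4.0 / (s[n] + t[n])
        s.append(a + 4.0 / a)
    return t, s

t, s = snake_lengths()
# the "corner" corrections s_{n+1} - (s_n + 2 t_n)
corrections = [s[n + 1] - (s[n] + 2 * t[n]) for n in range(1, 25)]
```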
We now divide the set $\overline{T_0}$ into infinitely many collections of four closed approximate rectangles. In particular, for each $n \in \mathbb{N}$ we define:
\begin{itemize}
\item A strip of height $t_n$ given by $$S^1_n := \overline{T_0} \cap \{ x + iy : s_n \leq y \leq s_n + t_n \}.$$
\item A small (approximate) rectangle, of height twice its width, given by $$S^2_n :=\overline{T_0} \cap \{ x + iy : s_n + t_n \leq y \leq s_n + t_n + 4/(s_n + t_n) \}.$$
\item A second strip of height $t_n$ given by $$S_n^3 := \overline{T_0} \cap \{ x + iy : s_n + t_n + 4/(s_n + t_n) \leq y \leq s_n + 2t_n + 4/(s_n + t_n)\}.$$
\item A second (approximate) rectangle, also of height twice its width, given by \[S_n^4 := \overline{T_0} \cap \{ x + iy : s_n + 2t_n + 4/(s_n + t_n) \leq y \leq s_{n+1} \}.\]
\end{itemize}
We define $\phi$ by specifying it first on $S^1_1$, then on $S^2_1$, then on $S^3_1$, and so on ``up'' $T_0$. Note that the rectangles above meet where upper and lower boundaries coincide, but we will ensure that the definitions of $\phi$ respect this. In addition, the upper and lower boundaries will be mapped only by affine transformations.
First we define $\phi$ on the lowest collection of four rectangles in $T_0$.
\begin{itemize}
\item On $S^1_1$ we let $\phi$ be the identity.
\item The action of $\phi$ on $S^2_1$ is defined as follows. First translate $S^2_1$ so that its bottom left corner lies at the origin. Then enlarge it by a scale factor of $(s_1+t_1)/2$, so that it maps into $A$, and then map it by the function $\nu_r$ defined in \eqref{nurdef}. Then scale it by a scale factor of $2/(s_1+t_1)$, and translate it so that the left-hand lower boundary of the image coincides with the upper boundary of $\phi(S^1_1)$. (Observe here that the enlarged translate of $S^2_1$ is only a subset of the rectangle $A$; this does not affect the argument.)
\item The action of $\phi$ on $S^3_1$ is defined by first rotating by one half-turn, and then translating so that the upper boundary of the image of $S_1^3$ coincides with the right-hand lower boundary of $\phi(S_1^2)$.
\item The action of $\phi$ on $S^4_1$ is defined as follows, and is very similar to the action on $S^2_1$. First translate $S^4_1$ so that its bottom left corner lies at the origin. Then enlarge it by a scale factor of $(s_1 + 2t_1 + 4/(s_1 + t_1))/2$ to obtain a subset of $A$. Then apply the map $\nu_l$ defined in \eqref{nuldef}, followed by a second scaling with scale factor equal to $2/(s_1 + 2t_1 + 4/(s_1 + t_1))$. Finally rotate by one half-turn, and then translate so that the upper left-hand boundary of the image of $S_1^4$ coincides with the lower boundary of $\phi(S_1^3)$.
\end{itemize}
\begin{figure}
\includegraphics[width=16cm,height=10cm]{snake}
\caption{A rough schematic of the construction of the map $\phi$
}\label{f1}
\end{figure}
It is now clear how to continue this process; we iterate the four steps above, although with different translations at each stage to ensure continuity at the boundary. In particular, for each $n \geq 2$, $\phi$ maps $S^1_n$ by a translation, rather than the identity. See Figure~\ref{f1}. Note that it follows from \eqref{tsum} that the snake remains within a strip of bounded real part.
In order to see that $\phi$ is quasiconformal on $T_0$, we now check that subsequent sections of the snake do not overlap; that is, for each $n \in \mathbb{N}$, the sets $\phi(S_n^1)$, $\phi(S_n^3)$ and $\phi(S_{n+1}^1)$ are pairwise disjoint. To see this, fix $n \in \mathbb{N}$. Note that the base of the strip $\phi(S_n^1)$ is of width $2/s_n$, and the top of this strip is of width $2/(s_n + t_n)$. Also, by construction, the left-hand side of the strip $\phi(S_n^3)$ is at least $4/(s_n+t_n)$ from the left-hand side of the strip $\phi(S_n^1)$. Now, it follows from the definitions that $t_n < s_n$, and hence $2/s_n < 4/(s_n + t_n)$. Thus the strips $\phi(S_n^1)$ and $\phi(S_n^3)$ are disjoint. The proof that the strips $\phi(S_n^3)$ and $\phi(S_{n+1}^1)$ are also disjoint is similar and is omitted.
We are now able to define our quasiconformal map $f : \mathbb{C} \to \mathbb{C}$. First, set $\widetilde{T} := \phi(T_0)$. For $z \in \widetilde{T}$ we define $f(z) := (\phi \circ \psi \circ \phi^{-1})(z)$. It is easy to check that $f$ is quasiconformal on $\widetilde{T}$ and extends to the identity on all parts of the boundary of $\widetilde{T}$ apart from the line segment $\{ x + iy : y = y_0, \ |x| < 1/y_0 \}$.
We then extend $f$ to a map of the whole plane. First we let $R$ be the rectangle
\[ R := \{ x + iy : y \in (0, y_0), \ |x| < 1/y_0 \}. \]
On $\mathbb{C} \setminus (\widetilde{T} \cup R)$ we let $f$ be the identity map. It is then straightforward, using, for example, \cite[Theorem 6]{SixsmithNicks2}, to see that $f$ can be extended to a quasiconformal map of the whole plane. Note that we are actually only interested in the behaviour of $f$ in $\widetilde{T}$; the rectangle $R$ is only used to allow us to extend the definition of $f$ to the whole plane.
It is now straightforward to see, by \eqref{toinf} and the geometry of $\widetilde{T}$, that
\[ \phi(\{ x + iy : x = 0, y > y_0 \}) \subset BU(f), \]
and this completes the construction.
\end{proof}
Finally we prove Theorem~\ref{theo:JnotboundaryBU} by constructing a quasiregular map $h : \mathbb{C} \to \mathbb{C}$, of transcendental type, such that $\partial BU(h) \setminus J(h) \ne \emptyset$.
\begin{proof}[Proof of Theorem~\ref{theo:JnotboundaryBU}]
We first use a technique from \cite[Section 6]{MR2448586} (see also \cite[Section 4]{MR3008885}) to define a quasiregular map $g : \mathbb{C} \to \mathbb{C}$ of transcendental type which is equal to the identity in the upper half-plane $\mathbb{H}$.
In particular we choose $\delta > 0$ small, and then set
\[
g(z) :=
\begin{cases}
z, &\text{for } \operatorname{Im} z \geq 0, \\
z - \delta (\operatorname{Im} z) \exp(-z^2), &\text{for } \operatorname{Im} z \in [-1, 0), \\
z + \delta \exp(-z^2), &\text{otherwise}.
\end{cases}
\]
It can be shown by a calculation that if $\delta$ is sufficiently small, then $g$ is quasiregular. It is clearly of transcendental type.
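One can get a feel for why small $\delta$ suffices by estimating the complex dilatation $\mu_g = \partial_{\bar z} g / \partial_z g$ on the interpolation strip, the only region where $g$ fails to be holomorphic. The sketch below is illustrative only (the grid and $\delta = 0.01$ are our choices, not taken from the paper); it approximates the Wirtinger derivatives by finite differences and checks that $|\mu_g|$ stays well below $1$.

```python
import cmath

def g_strip(z, delta):
    """The interpolating branch of g, used for Im z in [-1, 0)."""
    return z - delta * z.imag * cmath.exp(-z * z)

def dilatation(f, z, h=1e-6):
    """Approximate |f_zbar / f_z| via centred finite differences."""
    fx = (f(z + h) - f(z - h)) / (2 * h)
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return abs((fx + 1j * fy) / (fx - 1j * fy))

delta = 0.01
# sample the strip Re z in [-3, 3], Im z in (-1, 0)
mu_max = max(dilatation(lambda w: g_strip(w, delta), complex(x / 10, -t / 10))
             for x in range(-30, 31) for t in range(1, 10))
```

For this $\delta$ the maximal dilatation on the grid is of order $10^{-2}$, far from the quasiregularity threshold $|\mu| = 1$.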
Now, let $f$ be the quasiconformal map constructed in the proof of Theorem~\ref{theo:quasiconformalexample}. We note that the ``snake'' $\widetilde{T}$ constructed in the proof of that result lies in $\mathbb{H}$. We set $h := g \circ f$.
Since $f(\mathbb{H}) \subset \mathbb{H}$, we have that $h(\mathbb{H}) \subset \mathbb{H}$, and so $\mathbb{H} \cap J(h) = \emptyset$. Since $g$ is the identity on $\widetilde{T}$, the maps $f$ and $h$ have the same dynamics on $\widetilde{T}$. It follows that $$\mathbb{H} \cap BO(h) \ne \emptyset \quad\text{and}\quad \mathbb{H} \cap BU(h) \ne \emptyset.$$ Hence, in particular, $\mathbb{H}$ meets $\partial BU(h) \setminus J(h)$.
\end{proof}
\section{Proof of Theorem~\ref{theo:capJ}}
\label{S:capJ}
Suppose that $f : \mathbb{R}^d \to \mathbb{R}^d$ is a quasiregular map of transcendental type. It is known that if \eqref{eq:grows} holds, then $J(f) = \partial A(f)$ \cite[Theorem 1.2]{MR3265357}. Here $A(f)$ is the \emph{fast escaping set}, which is a subset of the escaping set consisting of points that iterate to infinity at a rate comparable to iteration of the maximum modulus; the exact definition is not needed here.
Now, the set $A(f)$ contains continua \cite[Theorem 1.2]{MR3215194}, and so has positive capacity. Moreover, the complement of $A(f)$ contains $BO(f)$, and so also has positive capacity \cite[Theorem 1.4]{MR3265283}.
Suppose that $\operatorname{cap} \partial A(f) = 0$. It follows by \cite[Corollary 2.2.5]{MR1238941} that $\partial A(f)$ is totally disconnected, and so $\mathbb{R}^d \setminus \partial A(f)$ is connected. Hence either $A(f) \subset \partial A(f)$ or $\mathbb{R}^d \setminus A(f) \subset \partial A(f)$. This is impossible, as a set of positive capacity cannot be contained in a set of zero capacity. Hence $\operatorname{cap} J(f) = \operatorname{cap} \partial A(f) > 0$, as required. \\
\emph{Acknowledgment:} The authors are grateful to the referee for many helpful comments.
\section{Introduction}
A long time ago it was found \cite{P1,P2} that the Belinskii, Khalatnikov and Lifshitz (BKL) conjecture \cite{BKL33}, concerning the generic spacelike singularity, extends to the generic timelike singularity. Both conjectures are based on the generalization of possible solutions
to the dynamics of the Bianchi IX model near spacelike and timelike singularities. These are general analytical considerations
based on reasonable assumptions, which are not mathematically rigorous, so the BKL result is called a conjecture or scenario rather than a theorem.
However, due to numerical simulations done in the meantime (see \cite{Ber,Gar} and references therein), the BKL scenario is commonly
believed to underlie the generic solution to general relativity.
Recently, two papers appeared with contradictory conclusions concerning the solution to the Bianchi IX
dynamics near the timelike singularity, analysed in terms of the Iwasawa decomposition of the metric \cite{Kli,Sha}. This was our motivation
to analyse the dynamics of the Bianchi IX spacetime again, this time both analytically and numerically. This is necessary because the dynamics
is defined by a highly nonlinear coupled system of equations, and exact analytical solutions are not available.
We consider the dynamics of the vacuum Bianchi IX model, which is known to be similar to the case with perfect fluids having soft equations
of state, such as dust or radiation.

The dynamics of the Bianchi IX spacetime near spacelike and timelike singularities turns out to be similar: in both cases it is of oscillatory type. The main
difference is that in the latter case there exist singular solutions with diverging volume densities but finite curvature invariants. However,
in both cases there occur solutions which are asymptotically singular, with curvature invariants blowing up.
Our paper is organized as follows. The next two sections specify the dynamics of the Bianchi IX model near spacelike and timelike singularities.
The following two sections present the solutions to the dynamical equations, including some approximate analytical and numerical
treatments. We conclude in the last section.
\section{The Bianchi IX model with spacelike singularity} \label{t}
The metric in a synchronous frame has the form
\begin{equation}
\label{e1}
\mathrm{d} s^{2} = \mathrm{d} t^{2} - \gamma_{\alpha \beta}(t)\,\mathrm{d} x^{\alpha} \mathrm{d} x^{\beta} \quad(\alpha,\beta=1,2,3).
\end{equation}
In what follows we use the system of units in which $c = 1$ and the definitions and notations of the book
\citep{LL}. In particular, the signature ($+---$) is used and the Roman indices run over $0,1,2,3$.
In this Section (but not in the Section \ref{x}) the Greek indices $\alpha,\beta,\ldots$ take values
$1,2,3$ and label the spatial coordinates. The coordinate $t=x^0$ is timelike, so we exclude the solutions
describing strong gravitational waves \citep{B}, naked singularities \citep{HP} and other spacetimes which are
not interesting for cosmology \citep{P79}. The singularity
corresponds to $t=0$.
The Einstein equations for (\ref{e1}) acquire the form
\begin{equation}
\label{e2}
-R_0^0=\frac{1}{2}\frac{\partial\kappa_{\alpha}^{\alpha}}{\partial t}+\frac{1}{4}
\kappa_{\beta}^{\alpha}\kappa_{\alpha}^{\beta}=0,
\end{equation}
\begin{equation}
\label{e3}
-R_{\alpha}^0=\frac{1}{2}(\kappa_{\beta;\alpha}^{\beta}-\kappa_{\alpha;\beta}^{\beta})=0,
\end{equation}
\begin{equation}
\label{e4}
-R_{\alpha}^{\beta}=P_{\alpha}^{\beta}+\frac{1}{2\sqrt{\gamma}}\frac{\partial(\sqrt{\gamma}
\kappa_{\alpha}^{\beta})}{\partial t}=0.
\end{equation}
Here $\kappa_{\alpha\beta}=\partial\gamma_{\alpha\beta}/\partial t$,
$\gamma=\det(\gamma_{\alpha\beta})$ and $P_{\alpha}^{\beta}$
is the three-dimensional Ricci tensor calculated from $\gamma_{\alpha\beta}$. We use the
tensor $\gamma_{\alpha\beta}$ and inverse tensor $\gamma^{\alpha \beta}$ to transform covariant
tensors into contravariant ones and vice versa.
For the homogeneous cosmological models we have
\begin{equation}
\label{eq20}
\gamma_{\alpha \beta}=\eta_{(a)(b)}(t)e_{\alpha}^{(a)}e_{\beta}^{(b)}.
\end{equation}
Here $(a)=1,2,3$ and $\mathbf{e}^{(a)}$ is the set of three frame vectors for the corresponding Bianchi type model.
We determine the spatial components of four-dimensional vectors
and tensors with respect to a triplet of frame vectors of the given space:
\begin{equation}
\label{eq21}
R_{(a)(b)}=R_{\alpha\beta}e^{\alpha}_{(a)}e^{\beta}_{(b)},\quad R_{0(a)}=R_{0\alpha}e^{\alpha}_{(a)},\quad u^{(a)}=u^{\alpha}e_{\alpha}^{(a)},
\end{equation}
etc. Indices in brackets are lowered and raised by means of the tensor $\eta_{(a)(b)}$ and the reciprocal tensor $\eta^{(a)(b)}$.
We use the notation: $\kappa_{(a)(b)}:=\mathrm{d}\eta_{(a)(b)}/\mathrm{d}t$ and $\eta:=\mathrm{det}(\eta_{(a)(b)})$.
As a result, we get the components of the Ricci tensor in the form:
\begin{equation}
\label{eq22}
- R_{0}^{0} = \frac{1}{2}\frac{d \kappa^{(a)}_{(a)}}{dt}+\frac{1}{4}\kappa^{(a)}_{(b)}\kappa^{(b)}_{(a)},
\end{equation}
\begin{equation}
\label{eq23}
- R_{(a)}^{(b)} = \frac{1}{2(\eta)^{1/2}}\frac{d\left(\eta^{1/2}\kappa^{(b)}_{(a)}\right)}{dt}+P_{(a)}^{(b)}.
\end{equation}
We use the diagonal tensor
\begin{equation}
\label{eq24}
\eta_{(a)(b)}=\operatorname{diag}\big(a^2(t),b^2(t),c^2(t)\big)\, ,
\end{equation}
because the non-diagonal one is incompatible with the vacuum solution
we consider. Thus, the metric reads
\begin{equation}
\label{e5}
\gamma_{\alpha\beta}=a^2(t) \mathbf{l}_{\alpha}\mathbf{l}_{\beta}+b^2(t) \mathbf{m}_{\alpha}
\mathbf{m}_{\beta}+c^2(t) \mathbf{n}_{\alpha}\mathbf{n}_{\beta},
\end{equation}
where $\mathbf{l},\mathbf{m}$ and $\mathbf{n}$ are the three frame vectors $\mathbf{e}^{(a)}$ of the Bianchi type IX
homogeneous spacetime. They are presented in \citep{LL,Sch} in the form
\begin{equation}
\begin{array}{l}
\label{e5a}
\mathbf{e}^{(1)}=\mathbf{l}=(\sin z,-\cos z\sin x,0),\\
\mathbf{e}^{(2)}=\mathbf{m}=(\cos z,\sin z\sin x,0),\\
\mathbf{e}^{(3)}=\mathbf{n}=(0,\cos x,1).
\end{array}
\end{equation}
The frame vectors satisfy the relation
\begin{equation}
\label{e5b}
(\mathbf{e}^{(a)}\Rot \mathbf{e}^{(b)})=-\delta^{(a)(b)}(\mathbf{e}^{(1)}[\mathbf{e}^{(2)}\mathbf{e}^{(3)}]).
\end{equation}
The spatial curvature tensor
\begin{equation}\label{eq33}
\begin{array}{c}
P_{(a)}^{(b)}=(2\eta)^{-1}\Bigg\{ 2C^{(b)(d)}C_{(a)(d)}+C^{(d)(b)}C_{(a)(d)} \\
+C^{(b)(d)}C_{(d)(a)}-C^{(d)}_{\phantom{(d)}(d)}\left(C^{(b)}_{\phantom{(b)}(a)}+C_{(a)}^{\phantom{(a)}(b)}\right) \\
+\delta^{(b)}_{(a)}\left[ \left(C^{(d)}_{\phantom{(d)}(d)}\right)^2-2C^{(d)(f)}C_{(d)(f)}\right] \Bigg\}
\end{array}
\end{equation}
is diagonal for (\ref{eq24}), so all the nondiagonal ($ab$)- and ($0a$)-components of the Einstein equations are
satisfied identically. The structural constant $C^{(1)(1)}$ is equal to 0 (type I) or 1 (types II, VI$_0$, VII, VIII, IX),
$C^{(2)(2)}$ is equal to 0 (types I, II), 1 (types VII, VIII, IX) or $-1$ (type VI$_0$), and
$C^{(3)(3)}$ is equal to 0 (types I, II, VI$_0$, VII), 1 (type IX) or $-1$ (type VIII). The condition (\ref{e5b})
means that the structural constants correspond to Bianchi type IX for the frame vectors (\ref{e5a}).
The Einstein equations become simpler if we redefine the cosmological time variable $t$ as follows:
$ dt = \sqrt{\gamma}\; d\tau$, where $\gamma$ denotes the determinant of $\gamma_{\alpha\beta}$.
All the non-diagonal components of the vacuum Einstein equations are identically
satisfied, whereas the diagonal components are the solutions to the three equations:
\begin{equation}
\label{e6}
2(\ln a)^{\cdot\cdot}=(b^2-c^2)^2-a^4,
\end{equation}
\begin{equation}
\label{e7}
2(\ln b)^{\cdot\cdot}=(a^2-c^2)^2-b^4,
\end{equation}
\begin{equation}
\label{e8}
2(\ln c)^{\cdot\cdot}=(a^2-b^2)^2-c^4,
\end{equation}
where ``dot'' denotes $d/d\tau $.
The equation (\ref{eq22}) gives an additional condition, which combined with (\ref{eq23}) yields the dynamical constraint:
\begin{equation}\label{e9}
4\big((\ln a)^{\cdot} (\ln b)^{\cdot} +(\ln a)^{\cdot} (\ln c)^{\cdot} +(\ln b)^{\cdot} (\ln c)^{\cdot} \big)
= a^4+b^4+c^4 - 2(a^2 b^2 + a^2 c^2 + b^2c^2) \; .
\end{equation}
One may verify that the system of the equations \eqref{e6} - \eqref{e9} coincides with the corresponding system derived for
the Bianchi IX model in Ref. \cite{LL} (see Ch. 14, Sec. 118).
\section{The Bianchi IX model with timelike singularity} \label{x}
Let us consider the Einstein equations near the
timelike singularity proposed in \cite{P1}. In the pseudosynchronous
frame of reference the metric has the form
\begin{equation}
\label{eq1}
ds^2 = -dx^2+\gamma_{\alpha \beta}dx^{\alpha} dx^{\beta}.
\end{equation}
The coordinate $x=x^1$ is spacelike and the singularity
corresponds to $x=0$. The remaining coordinates, $t=x^0$, $y=x^2$ and $z=x^3$, are labelled by Greek
indices taking the values $0,2,3$.
Let us consider the vacuum case of the Bianchi IX model.
The Einstein equations for (\ref{eq1}) acquire the form
\begin{equation}
\label{eq2}
R_1^1=\frac{1}{2}\frac{\partial\kappa_{\alpha}^{\alpha}}{\partial x}+\frac{1}{4}
\kappa_{\beta}^{\alpha}\kappa_{\alpha}^{\beta}=0,
\end{equation}
\begin{equation}
\label{eq3}
R_{\alpha}^1=\frac{1}{2}(\kappa_{\beta;\alpha}^{\beta}-\kappa_{\alpha;\beta}^{\beta})=0,
\end{equation}
\begin{equation}
\label{eq4}
R_{\alpha}^{\beta}=P_{\alpha}^{\beta}+\frac{1}{2\sqrt{\gamma}}\frac{\partial(\sqrt{\gamma}
\kappa_{\alpha}^{\beta})}{\partial x}=0.
\end{equation}
Here, $\kappa_{\alpha\beta}:=\partial\gamma_{\alpha\beta}/\partial x$,
$\gamma=\det(\gamma_{\alpha\beta})$ and $P_{\alpha}^{\beta}$
is the three-dimensional Ricci tensor calculated from $\gamma_{\alpha\beta}$. We use the
tensor $\gamma_{\alpha\beta}$ and inverse tensor $\gamma^{\alpha \beta}$ to transform covariant
tensors into contravariant ones and vice versa.
Let us consider the case
\begin{equation}
\label{eq5}
\gamma_{\alpha\beta}=a^2(x) \mathbf{l}_{\alpha}\mathbf{l}_{\beta}-b^2(x) \mathbf{m}_{\alpha}
\mathbf{m}_{\beta}-c^2(x) \mathbf{n}_{\alpha}\mathbf{n}_{\beta},
\end{equation}
where $\mathbf{l},\mathbf{m}$ and $\mathbf{n}$ are the three frame vectors of the Bianchi type IX
homogeneous spacetime. They differ from (\ref{e5a}) due to the replacement of one spatial coordinate by $t$.
More specifically, we replace in (\ref{e5a}) $y$ by $t$, and $x$ by $y$. Thus, we get
\begin{equation}
\begin{array}{l}
\label{eq5a}
\mathbf{e}^{(1)}=\mathbf{l}=(\sin z,-\cos z\sin y,0),\\
\mathbf{e}^{(2)}=\mathbf{m}=(\cos z,\sin z\sin y,0),\\
\mathbf{e}^{(3)}=\mathbf{n}=(0,\cos y,1).
\end{array}
\end{equation}
With this choice of coordinates, these vectors do not depend on $t$.
All the non-diagonal components of the Einstein equations are identically
satisfied and the diagonal components of (\ref{eq4}) lead to the three equations:
\begin{equation}
\label{eq6}
2(\ln a)^{\prime\prime}=(b^2-c^2)^2-a^4,
\end{equation}
\begin{equation}
\label{eq7}
2(\ln b)^{\prime\prime}=(a^2+c^2)^2-b^4,
\end{equation}
\begin{equation}
\label{eq8}
2(\ln c)^{\prime\prime}=(a^2+b^2)^2-c^4,
\end{equation}
where ``prime'' denotes $d/d\xi $, and where the new coordinate $\xi$ is defined by $d\xi=\gamma^{-1/2}dx=(abc)^{-1}dx$.
The equation (\ref{eq2}) gives an additional condition, which combined with (\ref{eq6}) - (\ref{eq8}) gives the dynamical constraint:
\begin{equation}\label{eq9}
4\big((\ln a)^{\prime} (\ln b)^{\prime} +(\ln a)^{\prime} (\ln c)^{\prime} +(\ln b)^{\prime} (\ln c)^{\prime} \big)
= a^4+b^4+c^4 + 2(a^2 b^2 + a^2 c^2 - b^2c^2) \; .
\end{equation}
For more details concerning the derivation of (\ref{eq6}) - (\ref{eq9}) we recommend Ref. \cite{P1}.
\section{Oscillating solution in the Bianchi IX cosmological model} \label{rd}
Some analytical studies of the behaviour of the system (\ref{e6}) - (\ref{e9}) have been done in numerous papers (see, e.g.
\cite{Belinski:2014kba} and references therein). Here we make introductory remarks following the paper \citep{BKL}.
The directional scale factors $a(t),b(t)$ and $c(t)$ undergo complex oscillations which cannot be described analytically in
full detail. The evolution towards the singularity takes a finite interval of the cosmological time $t$, but an infinite interval
of the evolution parameter $\tau$, during which an infinite number of oscillations occur. The evolution consists of
so-called Kasner epochs, each of which can be described by the left-hand sides
of (\ref{e6}) - (\ref{e9}) with the right-hand sides neglected. During these epochs the spacetime
is similar to the well-known Kasner metric \citep{Kas}
\begin{equation}
\label{eqq14}
\mathrm{d} s^{2} = \mathrm{d} t^{2} - t^{2p_{1}} \mathrm{d} x^{2} - t^{2p_{2}} \mathrm{d} y^{2} - t^{2p_{3}}\mathrm{d} z^{2},
\end{equation}
where the Kasner indices $p_{i}$ satisfy the conditions
\begin{equation}
\label{eqq16}
p_{1} + p_{2} + p_{3} = 1,\quad {p_{1}}^{2}
+{p_{2}}^{2} +{p_{3}}^{2} = 1.
\end{equation}
Thus, apart from the exceptional values $(0,0,1)$ (in some order), one of the indices is negative and the other two are positive.
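The conditions (\ref{eqq16}) admit the standard one-parameter BKL parametrization $p_1 = -u/(1+u+u^2)$, $p_2 = (1+u)/(1+u+u^2)$, $p_3 = u(1+u)/(1+u+u^2)$ with $u \geq 1$, which makes this sign pattern explicit. A short numerical check (illustrative only):

```python
def kasner_indices(u):
    """Standard one-parameter BKL parametrization of the Kasner indices (u >= 1)."""
    d = 1.0 + u + u * u
    return (-u / d, (1.0 + u) / d, u * (1.0 + u) / d)

# a few sample values of the BKL parameter u
triples = [kasner_indices(u) for u in (1.0, 1.5, 2.0, 5.0, 10.0)]
```

Each triple satisfies both Kasner conditions, with $p_1 < 0 < p_2 \leq p_3$.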
More specifically, the functions during each Kasner's epoch have the form
\begin{equation}
\label{eqq11}
\ln (a)=A\tau+\mathrm{const}, \,\ln (b)=B\tau+\mathrm{const}, \,\ln (c)=C\tau+\mathrm{const},
\end{equation}
where $A,B,C=\mathrm{const}$.
For the function $Q :=\mathrm{d}\ln(abc)/\mathrm{d}\tau$ we get
\begin{equation}
\label{eqq12}
Q=A+B+C =: \Lambda,\quad\ \Lambda=\mathrm{const} ,
\end{equation}
where
\begin{equation}
\label{eqq13}
A=p_1\Lambda,\;B=p_2\Lambda,\;C=p_3\Lambda
\end{equation}
with some set of Kasner indices $p_{i}$ satisfying conditions (\ref{eqq16}).
The growth of the function with the negative Kasner index leads to the growth of the
corresponding terms on the right-hand sides of (\ref{e6}) - (\ref{e8}) and to a transition from one
epoch to another. For instance, if $A<0$, then this transition has the form
\begin{equation}\label{eqq3}
\ln(a)=\tfrac{1}{2}\ln(S)-\tfrac{1}{2}U(\tau),\quad \ln(b)=\tfrac{1}{2}U(\tau)+L\tau+\mathrm{const},\quad \ln(c)=\tfrac{1}{2}U(\tau)+M\tau+\mathrm{const},
\end{equation}
\begin{equation}\label{eqq33}
Q=\mathrm{const}+\tfrac{1}{2}S\tanh(S(\tau-\tau_{0})),~~~~ U(\tau)=\ln\cosh\big(S(\tau-\tau_{0})\big),
\end{equation}
where $S,L,M=\mathrm{const}$, and where $\tau_0$ is the value of $\tau$ corresponding to a maximum of $a$.
After that we have a new epoch with a negative Kasner index associated with $b$ or $c$.
The entire process of evolution of the metric is generally made up of successive periods named eras (each consisting
of a number of epochs), during which distances along two of the axes oscillate, while the distance along the third axis
changes, to some extent, independently. In Fig. \ref{f1}
one can see a typical behaviour of the functions $a,b,c$, in which the epochs and eras are easy to identify.
It was obtained by solving numerically the equations (\ref{e6}) - (\ref{e8})
with initial values and derivatives satisfying the condition (\ref{e9}).
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{f1a-colour}
\caption{An example of solution to the system of equations (\ref{e6}) - (\ref{e8})
with initial values and derivatives satisfying the condition (\ref{e9})}
\label{f1}
\end{figure}
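A minimal sketch of such a numerical experiment is given below. It is illustrative only: the initial data, integration range and tolerances are our choices (not taken from the paper), with $(\ln c)^{\cdot}(0)$ fixed so that (\ref{e9}) holds exactly at $\tau=0$. Since the constraint is conserved by (\ref{e6}) - (\ref{e8}), its residual along the numerical solution serves as an accuracy check.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, Y):
    """Right-hand side of (e6)-(e8) as a first-order system; alpha = ln a, etc."""
    al, be, ga, dal, dbe, dga = Y
    a2, b2, c2 = np.exp(2 * al), np.exp(2 * be), np.exp(2 * ga)
    return [dal, dbe, dga,
            ((b2 - c2)**2 - a2**2) / 2,
            ((a2 - c2)**2 - b2**2) / 2,
            ((a2 - b2)**2 - c2**2) / 2]

def constraint(Y):
    """Residual of the constraint (e9); zero on exact solutions."""
    al, be, ga, dal, dbe, dga = Y
    a2, b2, c2 = np.exp(2 * al), np.exp(2 * be), np.exp(2 * ga)
    return (4 * (dal * dbe + dal * dga + dbe * dga)
            - (a2**2 + b2**2 + c2**2) + 2 * (a2 * b2 + a2 * c2 + b2 * c2))

# a = b = c = 1 with (ln a)' = 1, (ln b)' = 0 at tau = 0;
# (ln c)' = -3/4 then satisfies (e9) exactly.
Y0 = [0.0, 0.0, 0.0, 1.0, 0.0, -0.75]
sol = solve_ivp(rhs, (0.0, 3.0), Y0, rtol=1e-10, atol=1e-12)
drift = max(abs(constraint(sol.y[:, k])) for k in range(sol.y.shape[1]))
```

The residual `drift` staying near machine precision indicates that the integrator respects the constraint surface.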
One can choose the direction of evolution towards the singularity in two ways, namely along or opposite to the $\tau$
increase. The system evolves to the singularity if the local volume density $V:= a b c$ decreases, so we get $V\to 0$ as $t\to 0$.
If $\Lambda_0>0$, we have $V\to 0$ as $\tau \to -\infty$; if $\Lambda_0<0$, the vanishing of the volume occurs as $\tau \to \infty$
(where $\Lambda_0 := \Lambda$ at $\tau =0$). In the Bianchi IX vacuum cosmological model we can consider the spacetime (\ref{e5})
at any $\tau$ from $-\infty$ to $\infty$.
As we now argue, there are no singularities at finite values of $\tau$.
Suppose, for instance, that the function $a$ increases when approaching the singularity at finite $\tau$, while the other two decrease,
i.e. $a\gg b\approx c$. Then we can keep only the main terms on the right-hand sides of (\ref{e6}) - (\ref{e8}), so we
get the solution (\ref{eqq3}), in which, however, $a$ decreases near the singularity. Thus, this scenario is impossible.
Now, let us assume that two of the functions, for example $a$
and $b$, increase when approaching the singularity, while the other one changes slowly, i.e. we have the case $a\approx b \gg c$.
Thus, we have
\begin{equation}\label{ez}
2(\ln (ab))^{\cdot\cdot}= (b^2 - c^2)^2 - a^4 + (a^2 - c^2)^2 - b^4 = 2 c^4 - 2 c^2 (a^2 + b^2) < 0.
\end{equation}
Thus, by (\ref{ez}), $\ln(ab)$ is a concave function, which contradicts our assumption.
Finally, let us assume that all three functions increase when approaching the singularity. The
system of equations (\ref{e6}) - (\ref{e9}) is invariant with respect to the transformation
\begin{equation}\label{ez1}
\tilde{a}=Ka,~~~\tilde{b}=Kb,~~~\tilde{c}=Kc,~~~\tilde{\tau}=K^{-2}\tau,~~~K=\mathrm{const}.
\end{equation}
So the characteristic period of $\tau$, e.g. the duration of an epoch, has to decrease near the
singularity. This has to be accompanied by an increase of the value of $\Lambda$ from (\ref{eqq12}).
But a change of Kasner epoch (\ref{eqq3}) leads to its decrease \citep{BKL}
\begin{equation}\label{ez2}
\tilde{\Lambda}=(1-2|p_1|)\Lambda ,
\end{equation}
which again leads to the contradiction.
We conclude that the occurrence of the singularity at finite value of $\tau$ is impossible.
Can all three functions decrease? In this case we can neglect the right-hand sides of
(\ref{e6}) - (\ref{e8}) and get the asymptotic form near the singularity
\begin{equation}\label{ez3}
(\ln a)^{\cdot}\to A,~~~(\ln b)^{\cdot}\to B,~~~(\ln c)^{\cdot}\to C.
\end{equation}
If the singularity corresponds to $\tau=\infty$, we have $A,B,C<0$, and if
the singularity occurs at $\tau=-\infty$, we take $A,B,C>0$. In both
cases the left-hand side of (\ref{e8}) tends to some positive constant and its
right-hand side vanishes. Thus, this behaviour is also impossible.
The only possible dynamics of the Bianchi IX vacuum cosmological model is the oscillatory one, i.e. it occurs
for any $\tau \in \dR$.
\section{Dynamics of the Bianchi IX model with timelike singularity} \label{ts}
One can repeat all the arguments from Section \ref{rd}, with a few changes, to show that there is an oscillation
regime near the timelike singularity, as was done in \citep{P1}.
The difference in signs of the right-hand sides of (\ref{e6}) - (\ref{e8})
and (\ref{eq6}) - (\ref{eq8}) is not important when we consider an oscillating mode, because among the
functions $a,b,c$ only one is increasing and acts as a perturbation.
Could the singularity in (\ref{eq6}) - (\ref{eq9}) occur at some finite $\xi$? In what follows we try to answer this question.
Our considerations concerning the dynamics of the system (\ref{e6}) - (\ref{e8}) can be extended (with little modification)
to the dynamics (\ref{eq6}) - (\ref{eq9}), except the case $b\approx c\gg a$.
Here, instead of (\ref{ez}), we get
\begin{equation}
\label{ez4}
2(\ln (bc))^{\prime\prime}=(a^2+b^2)^2-c^4+(a^2+c^2)^2-b^4 = 2 a^4 + 2a^2 (b^2 + c^2) > 0,
\end{equation}
which means that we are dealing with a convex function. Thus, this case requires a separate study.
Let us denote $u=\ln(a^2b^2)$. From (\ref{eq6}) and (\ref{eq7}) we obtain
\begin{equation}
\label{ez6}
u^{\prime\prime}=2c^2(c^2-b^2+a^2).
\end{equation}
If
\begin{equation}
\label{ez6a}
|b^2-c^2|\ll a^2\ll b^2
\end{equation}
this equation simplifies to
\begin{equation}
\label{ez7}
u^{\prime\prime}=2e^u.
\end{equation}
There are three types of solutions of (\ref{ez7}), namely
\begin{equation}
\label{ez8}
e^u=a^2b^2=(\Delta\xi)^{-2},
\end{equation}
\begin{equation}
\label{ez9}
e^u=a^2b^2=p^2\sin^{-2}(p\Delta\xi),
\end{equation}
\begin{equation}
\label{ez10}
e^u=a^2b^2=p^2\sinh^{-2}(p\Delta\xi).
\end{equation}
Here $\Delta\xi=\xi-\xi_0$, where $\xi_0$ and $p$ are constants. All these solutions have singularities at some finite value $\xi=\xi_0$,
with $a^2b^2\xrightarrow [\xi \to \xi_0]{}(\Delta\xi)^{-2}$.
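All three families can be verified symbolically; the following SymPy check (illustrative only) confirms that each of them satisfies (\ref{ez7}).

```python
import sympy as sp

xi, xi0, p = sp.symbols('xi xi_0 p', positive=True)
dxi = xi - xi0

# the three solution families of u'' = 2 exp(u), written as u = u(xi)
solutions = [
    -2 * sp.log(dxi),                       # e^u = (Delta xi)^{-2}
    sp.log(p**2 / sp.sin(p * dxi)**2),      # e^u = p^2 sin^{-2}(p Delta xi)
    sp.log(p**2 / sp.sinh(p * dxi)**2),     # e^u = p^2 sinh^{-2}(p Delta xi)
]
# residuals of u'' - 2 e^u; each should simplify to zero
residuals = [sp.simplify(sp.diff(u, xi, 2) - 2 * sp.exp(u)) for u in solutions]
```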
The equation (\ref{eq6}), in the case (\ref{ez6a}), leads to
\begin{equation}
\label{ez11}
2(\ln a)^{\prime\prime}=-a^4 ,
\end{equation}
which has the solution
\begin{equation}
\label{ez11a}
\ln(a)=\tfrac{1}{2}\ln(S)-\tfrac{1}{2}U(\xi),\quad U(\xi)=\ln\cosh\big(S(\xi-\xi_1)\big),~~~ S,\xi_1=\mathrm{const}.
\end{equation}
So the function $a$ is regular and $b^2\approx c^2\to G\;(\Delta\xi)^{-2}$
at $\xi \to \xi_0$, where $G=1/a^2(\xi_0)=\mathrm{const}$.
From (\ref{eq7}) and (\ref{eq8}) we get
\begin{equation}
\label{ez5}
2(\ln (b/c))^{\prime\prime}=2a^2(c^2-b^2)+2c^4-2b^4=2(c^2-b^2)(a^2+b^2+c^2).
\end{equation}
Let us denote $w=\ln(b^2/c^2)$ and rewrite (\ref{ez5}) for the case (\ref{ez6a})
and $|w|\ll 1$ in the form
\begin{equation}
\label{ez12}
w^{\prime\prime}=-4wc^4=-4G^2w(\xi-\xi_0)^{-4}.
\end{equation}
Its solution is
\begin{equation}
\label{ez13}
w=\Delta\xi\left(C_1\sin\frac{2G}{\Delta\xi}+C_2\cos\frac{2G}{\Delta\xi}\right),\quad C_1,C_2=\mathrm{const}.
\end{equation}
So the condition (\ref{ez6a}) can be fulfilled near the singularity.
At small $\Delta\xi$ both sides of the constraint (\ref{eq9}) tend to $4(\Delta \xi)^{-2}$,
so the constraint is satisfied in the leading terms.
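The solution (\ref{ez13}) of (\ref{ez12}) can be verified symbolically in the same way (an illustrative SymPy check):

```python
import sympy as sp

xi, xi0, G, C1, C2 = sp.symbols('xi xi_0 G C_1 C_2')
dxi = xi - xi0
w = dxi * (C1 * sp.sin(2 * G / dxi) + C2 * sp.cos(2 * G / dxi))

# residual of w'' + 4 G^2 w (xi - xi_0)^{-4}; should vanish identically
residual = sp.simplify(sp.diff(w, xi, 2) + 4 * G**2 * w / dxi**4)
```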
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{f-sing-colour}
\caption{An example of solution to the system of equations (\ref{eq6}) - (\ref{eq8})
with initial values and derivatives satisfying the condition (\ref{eq9}). There is a
singularity with $b,c \to \infty$ at small negative $\xi_0$}
\label{f2}
\end{figure}
One can see an example of such singularity in Fig. \ref{f2}. It presents the numerical solution
of equations (\ref{eq6}) - (\ref{eq8}) with initial values and derivatives with respect to $\xi$ satisfying
(\ref{eq9}).
To analyse the spacetime defined by (\ref{eq1}) and (\ref{eq5}), we introduce $F :=\mathrm{d}\ln(abc)/\mathrm{d}\xi=\mathrm{d}\ln(V)/\mathrm{d}\xi$,
where $V$ is the local volume.
The similar quantity $Q$ was used in Section \ref{rd}. Summing up (\ref{eq6}) - (\ref{eq8}) we get
\begin{equation}
\label{ez14}
2F^{\prime}=(b^2-c^2)^2+a^4+2a^2(b^2+c^2)>0.
\end{equation}
In Fig. \ref{f2} we have $F<0$ at $\xi=0$. As $\xi$ increases, we move towards an oscillating timelike
singularity, whereas as $\xi$ decreases, we move towards a singularity with $b,c \to \infty$. In the case
$F>0$ at $\xi=0$, we have an oscillating solution as $\xi \to -\infty$ and a discontinuity
at some positive $\xi_0$. In both cases $|F|$ decreases as we move towards the oscillating timelike singularity.
Assume that $F$ becomes zero at some $\xi$. In this case we have
$(\ln(a))^{\prime}=-(\ln(b))^{\prime}-(\ln(c))^{\prime}$ and the left-hand side of (\ref{eq9}) is
$4\left(-\left((\ln b)^{\prime}\right)^2 -\left((\ln c)^{\prime}\right)^2-(\ln b)^{\prime} (\ln c)^{\prime}\right)\leq 0$,
while its right-hand side is $a^4+(b^2-c^2)^2 + 2a^2( b^2 + c^2)>0$. This contradiction proves that
the function $F$ keeps its sign, and it eliminates the possibility of $F(\xi)$ having discontinuities at two
different values of $\xi$. Note that the function $Q(\tau)$ for the cosmological model also keeps its sign, although
the reasoning given above is not applicable in that case; the key point of the proof there is the relation
(\ref{ez2}).
The behaviour of the derivatives $(\ln a)^{\prime}, (\ln b)^{\prime},(\ln c)^{\prime}$ and of the function $F$
is illustrated in Figs \ref{f3} and \ref{f4}, which present a numerical solution to Eqs. (\ref{eq6}) -
(\ref{eq9}) with both types of singularities.
In Fig. \ref{f3} these functions show their
typical behaviour near the oscillating singularity, described by (\ref{eqq3}) and (\ref{eqq33}) with $\tau$ replaced
by $\xi$ and $Q$ by $F$. Characteristic $\tanh$-like steps are clearly recognizable. In Fig. \ref{f4}, by contrast, we see
a completely different behaviour of these functions near the singularity at finite $\xi$, described by
(\ref{ez8}) and (\ref{ez13}). In particular, all functions have the same sign in the vicinity of the singularity.
The increase of $|(\ln a)^{\prime}|$, in contrast to (\ref{ez11}), can be explained by errors due to the limited
computer accuracy in computations near the singularity, when the condition (\ref{ez6a}) is fulfilled.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{f-der+colour}
\caption{Derivatives $(\ln a)^{\prime},
(\ln b)^{\prime},(\ln c)^{\prime}$ and $F$ vs $\xi$ at $\xi>0$.}
\label{f3}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{f-der-colour}
\caption{Derivatives $(\ln a)^{\prime},
(\ln b)^{\prime},(\ln c)^{\prime}$ and $F$ vs $\xi$ at $\xi<0$.}
\label{f4}
\end{figure}
Thus, an oscillating mode arises near the singularity at $\xi \to \pm \infty$. Moving away from it
along the $x$ coordinate, however, we approach another singularity at finite $\xi$. Note that the distance along a line
with constant $y,z,t$ between two points in the spacetime (\ref{eq1}) is equal to the difference of their $x$ coordinates,
i.e. $\Delta x=\int_{\xi_1}^{\xi_2}abc\, d\xi$. The distance between the initial point $\xi=0$ and the oscillating
singularity at $\xi=\pm \infty$ is finite, while the distance between the initial point and the singularity at $\xi=\xi_0$
is infinite. One can see this in Fig. \ref{f5}.
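For illustration, the distance $x(\xi)$ can be accumulated by quadrature from a sampled numerical solution. A minimal Python sketch (assuming the scale factors $a,b,c$ are available as arrays over a grid in $\xi$; the constant arrays below are toy placeholders, not a solution of the field equations):

```python
import numpy as np

def proper_distance(xi, a, b, c):
    """Cumulative proper distance x(xi) = integral of a*b*c d(xi),
    evaluated with the trapezoidal rule on a sampled solution."""
    abc = a * b * c
    steps = np.diff(xi) * 0.5 * (abc[:-1] + abc[1:])  # trapezoid areas
    return np.concatenate(([0.0], np.cumsum(steps)))

# Toy check: with a = b = c = 1 the proper distance equals xi itself.
xi = np.linspace(0.0, 2.0, 201)
ones = np.ones_like(xi)
x = proper_distance(xi, ones, ones, ones)
```

On a genuine solution one would pass the numerically integrated $a(\xi)$, $b(\xi)$, $c(\xi)$; the finiteness or divergence of $x$ towards either end of the $\xi$ range can then be checked directly.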
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{fig5_red}
\caption{Distance from the singularity, $x$, vs the coordinate $\xi$, for the solution of Fig. \ref{f2}. For clarity we set $x(\xi=0)=1.7$.
The function $x(\xi)$ also decreases monotonically at $\xi>0.6$.}
\label{f5}
\end{figure}
So the singularity at finite $\xi$ is located infinitely far from the oscillating timelike singularity.
It is not a real singularity, but just a coordinate one. The asymptotic form of the metric (\ref{e5}) near this
false singularity, at $x\to \infty$, is
\begin{eqnarray}\label{ez15}
\begin{array}{l}
\displaystyle \rm{d} s^{2} \xrightarrow [x \to \infty]{} -\rm{d} x^{2} +C_1(\cos y
\, \rm{d}t+\rm{d}z)^2
-C_2 x^2(\cos z\, \rm{d}y +\sin y \sin z\, \rm{d}t)^2\\
\displaystyle -C_2\left(x^2+C_3\frac{\sin(C_4x)}{x}\right)(\sin z \,\rm{d}y -\sin y \cos z\, \rm{d}t)^2
\end{array}
\end{eqnarray}
with $C_i=$ const.
Calculating both the Ricci and the Kretschmann scalars for the metric (\ref{ez15}) shows that both
vanish as $x \to \infty$. Thus, the approximate metric (\ref{ez15}) is asymptotically flat, and it may describe the metric near
the spatial edge of the considered spacetime, infinitely far away from the real timelike oscillatory-type singularity at $x \to 0$.
\section{Conclusions}
We have examined the properties of the spacetime defined by the metric (\ref{eq1}) and (\ref{eq5}) which is homogeneous
on the hypersurface $x=$ const.
This homogeneous cross-section corresponds to the Bianchi type IX case. There is a naked timelike singularity at
$x=0$. This model is the basis for constructing the general solution near the timelike singularity \cite{P1}
of oscillating type, which is similar to the well-known spacelike singularity \cite{BKL}.
The dynamics of the model is similar to the dynamics of the vacuum homogeneous cosmological model of Bianchi-type IX with
the change of coordinates $x\leftrightarrow t$ and includes an infinite number of epochs and eras.
The dynamics defined by Eqs. (\ref{eq6}) - (\ref{eq9}) seems to be similar to the dynamics with the spacelike
singularity discussed above. To verify this hypothesis we have solved numerically Eqs. (\ref{e6}) - (\ref{e9}) and
(\ref{eq6}) - (\ref{eq9}), as analytical solutions are not available.
Nevertheless, there are some differences between the model with the metric defined by (\ref{eq1}) and (\ref{eq5}),
and the cosmological model with the metric defined by (\ref{e1}) and (\ref{e5}). Let us mention the main differences: The coordinate
$\tau=\int \gamma^{-1/2} dt$ with spacelike singularity at $t=0$ varies from $-\infty$ to $\infty$.
The coordinate $\xi=\int \gamma^{-1/2} dx$ with timelike singularity at $x=0$ varies from $\pm \infty$
to some finite value $\xi_0$. The case $\xi \to \pm \infty$ corresponds to the timelike oscillating singularity.
The singularity at $\xi=\xi_0$ is a false one, as the curvature invariants vanish there.
Our numerical results confirm the analytical considerations made long ago in the papers \cite{P1, BKL}.
They are also consistent with the recently obtained results for the diagonal Bianchi IX model with timelike
singularity, which are based on the Iwasawa decomposition of the metric \cite{Sha}. The difference between the results of
Refs. \cite{Sha} and \cite{Kli}, which uses Iwasawa's decomposition as well, might be due to the quite different choices of gauges.
Our numerical results visualise that the {\it diagonal} vacuum Bianchi IX models with spacelike and with timelike singularity have
quite similar dynamics. In both cases we have oscillations of the directional scale factors in the form of a sequence of eras,
with each era consisting of a number of epochs.
Finally, it is worth mentioning that the oscillations in the asymptotic dynamics of the {\it nondiagonal} Bianchi IX model
with spacelike singularity do not occur \cite{Belinski:2014kba,Czuchry:2014hxa}. It would be interesting to examine the
corresponding situation in the case of the timelike singularity.
\acknowledgments We are grateful to Orest Hrycyna for his numerical calculations for the purpose of Figs \ref{f2} -
\ref{f4}, and to Grzegorz Plewa for the calculations concerning the curvature invariants. We also thank John Barrow, Vladimir Belinski,
Piotr Chru\'{s}ciel, and Edgar Shaghoulian for helpful discussions. SP is grateful to the Ukrainian State Fund For Fundamental Research
for financial support (project F64/45-2016).
\section{Introduction}
New and affordable depth sensors, like the Kinect, have been of great interest to the robotics community. These new sensors are able to simultaneously capture high-resolution color and depth images at high frame rates (RGB-D images).
Most recent work on RGB-D perception has been targeted to semantic segmentation or \emph{labelling} \cite{gupta2013perceptual,ren2012rgb}, namely the task of assigning a category label to each pixel of the image.
While this is an important problem, many practical applications require a richer understanding of the scene. In particular, the notion of object instances is missing from such algorithms.
Object detection in RGB-D images \cite{lai2011large, kim2013accurate}, in contrast, focuses on object instances, but the typical output of these approaches is a bounding box.
Neither of these approaches produces a useful output representation for robot grasping \cite{hariharan2014simultaneous}.
In some cases it may not be enough to know that an area of the image contains object pixels; at the same time, a bounding box of an individual object instance may not be precise enough for the robot to grasp it. The best approach for this kind of task is \emph{instance segmentation}, which consists in delineating all the object pixels corresponding to each detection.
For instance, in \cite{6163000} the authors propose a real-time algorithm for the segmentation of RGB-D sequences. Segments represent the equilibrium states of a Potts model. The interaction strength between pixels is based on color difference and is penalized for large depth discrepancies. The Potts equilibrium is then found using a Metropolis algorithm implemented on a GPU. The method, however, may not be robust to strong textures on objects or background, and may thus produce over-segmentation.
In \cite{mishra2009active} a biologically-inspired algorithm was proposed for segmenting regions around a set of \emph{fixation points}. Monocular cues and depth or motion information (from stereo vision or optical flow) cues are combined in an independent way.
In \cite{mishra2012segmenting} the approach was extended with a strategy for automatically selecting fixation points inside ``simple'' objects, by estimating whether borders belong to the inner or the outer side of objects. The algorithm does not explicitly deal with missing depth information when computing border ownership.
In \cite{rao2010grasping} the authors extend a graph-based segmentation algorithm \cite{felzenszwalb2004efficient} by integrating depth as a fourth component among RGB components in computing difference between adjacent pixels.
More recently, in \cite{gupta2014learning}, semantically rich image and depth features have been used for object detection in RGB-D images, based on a geocentric embedding for depth images that encodes, for each pixel, the height above ground and the angle with gravity in addition to the horizontal disparity. Segmentation was also performed by labelling pixels belonging to object instances found by a neural network detector; decision forests were used for region classification. The approach runs on a GPU.
The contribution of this work is the extension of a fast graph-based segmentation algorithm \cite{felzenszwalb2004efficient} with the inclusion of depth information (similarly to \cite{rao2010grasping}) and saliency.
We focus on the problem of object detection for robotic grasping, and in particular we are most interested in the Amazon Picking Challenge \cite{apc} scenario.
One aspect of the competition is to attempt simplified versions of the general task of picking items from shelves. The robots are presented with a stationary and lightly cluttered inventory shelf and are asked to pick a subset of the products and put them on a table.
We first propose a modified algorithm for depth image smoothing which takes into account depth shadows. We then describe a modified Canny edge detector that integrates depth information for finding robust edges. Then we propose two cost functions for creating a weighted graph that will be partitioned into regions. We finally use some rejection steps to discard most of the regions that do not belong to objects.
In Section \ref{sec:background} we introduce the problem of graph-based image segmentation; in Section \ref{sec:segmentation} we
present our approach; in Section \ref{sec:results} we present and discuss experimental results and finally in Section \ref{sec:conclusion}
we draw conclusions and illustrate future work.
\section{Background}
\label{sec:background}
In this work we started from the segmentation algorithm described in \cite{felzenszwalb2004efficient}, which is based on the graph formalism. Let ${\cal G} = ({\cal V},{\cal E})$ be an undirected graph with vertices ${\cal V} = (v_1, \dots, v_{N_p})$ corresponding to image pixels, and edges $e_{ij} = (i, j) \in {\cal E}$ that connect pairs of neighboring vertices $v_i$ and $v_j$. Each edge $e_{ij}$ has a corresponding weight $w_{ij}$, which is a measure of the dissimilarity between $v_i$ and $v_j$. Hence image segmentation reduces to the partitioning of ${\cal G}$ into subgraphs sharing similar characteristics.
Edge weights can be computed by evaluating color or intensity differences. If the function $\Lambda : \mathbb{R}^2 \rightarrow \mathbb{R}^5$ associates each node (pixel) $v_i$ with the corresponding feature vector containing both the node coordinates $v_{i_x},v_{i_y}$ and the
RGB values $v_{i_r},v_{i_g},v_{i_b}$, then the edge weights are computed as:
\begin{equation}
w_{ij}=\vert\vert{\Lambda(v_i)-\Lambda(v_j)} \vert\vert, \forall (i,j) \in {\cal E}.
\end{equation}
Let us define the \emph{internal difference} within a region $R_a$ as:
\begin{equation}
{\cal I}(R_a)= \max_{(i,j)\in {\cal E},\; i,j \in R_a} w_{ij},
\end{equation}
and the \emph{difference} ${\cal M}(R_a, R_b)$ between two regions as the minimum weight among the edges connecting them.
The segmentation procedure starts by considering each pixel as a separate region. Regions are pairwise compared, and two regions are merged into a bigger cluster if the following condition holds:
\begin{equation}
{\cal M}(R_a, R_b) \le \min ( {\cal I}(R_a)+ {\gamma \over \vert R_a \vert}, {\cal I}(R_b) + {\gamma \over {\vert R_b \vert}}),
\label{eq:merging}
\end{equation}
otherwise a boundary exists between them. In (\ref{eq:merging}), $\gamma$ is a constant parameter and the operator $\vert \cdot \vert$ returns the region size in pixels.
The described approach considers the internal characteristics of each region in the pairwise comparison, hence it is effective in segmenting image scenes with texture or non-uniform colors; moreover it is efficient, with a complexity of ${\cal O}(N_p \log N_p)$, where $N_p$ is the number of pixels in the image.
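The merging rule can be sketched with a disjoint-set forest. The following Python sketch is illustrative only (the tiny four-pixel graph and its weights are invented for the example); scanning edges in increasing weight order guarantees that the first edge seen between two regions realises ${\cal M}(R_a,R_b)$:

```python
# Sketch of the merging rule with a disjoint-set forest. The four-pixel
# graph and its weights are invented for the example.

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n    # I(R): max edge weight inside region

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j, w):
        i, j = self.find(i), self.find(j)
        self.parent[j] = i
        self.size[i] += self.size[j]
        self.internal[i] = max(self.internal[i], self.internal[j], w)

def segment(num_pixels, edges, gamma):
    """edges: iterable of (w_ij, i, j) tuples."""
    ds = DisjointSet(num_pixels)
    for w, i, j in sorted(edges):
        ri, rj = ds.find(i), ds.find(j)
        if ri == rj:
            continue
        # Merge iff w <= min(I(R_a) + gamma/|R_a|, I(R_b) + gamma/|R_b|)
        if w <= min(ds.internal[ri] + gamma / ds.size[ri],
                    ds.internal[rj] + gamma / ds.size[rj]):
            ds.union(ri, rj, w)
    return [ds.find(i) for i in range(num_pixels)]

# Four pixels in a row: two similar pairs separated by one strong edge.
labels = segment(4, [(0.1, 0, 1), (0.9, 1, 2), (0.1, 2, 3)], gamma=0.2)
```

With $\gamma=0.2$ the two weak edges merge their endpoints, while the strong middle edge leaves a boundary between the two resulting regions.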
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=.45\columnwidth]{figures/desk-image.jpg} & \includegraphics[width=.45\columnwidth]{figures/desk-depth.jpg} \\
(a) & (b) \\[6pt]
\includegraphics[width=.45\columnwidth]{figures/desk-inpaint.jpg} & \includegraphics[width=.45\columnwidth]{figures/desk-w.jpg} \\
(c) & (d) \\[6pt]
\includegraphics[width=.45\columnwidth]{figures/desk-segm.jpg} &
\includegraphics[width=.45\columnwidth]{figures/desk-pca.jpg} \\
(e) & (f) \\[6pt]
\multicolumn{2}{c}{\includegraphics[width=40mm]{figures/desk-final.jpg} } \\
\multicolumn{2}{c}{(g)} \\[6pt]
\end{tabular}
\caption{The algorithm in action. (a) RGB image; (b) depth image; (c) smoothed and inpainted depth image; (d) obtained graph weights (color coded); (e) segmentation result; (f) result after post-processing; (g) segmented objects. }
\label{fig:algorithm}
\end{figure}
\section{Enhanced Segmentation}
\label{sec:segmentation}
In this section we describe the approach in detail.
An example of the algorithm in action is shown in Figure \ref{fig:algorithm}.
\subsection{Color}
The color difference is computed in the HSV color space and is derived from the metric described in \cite{6163000}. Separating the
color information (hue, saturation) from the intensity (value) is more robust for objects with shadows or changes in lightness.
The color difference between two vertices $v_i=(v_{i_h},v_{i_s},v_{i_v})$ and $v_j=(v_{j_h},v_{j_s},v_{j_v})$ is defined as:
\begin{equation}\label{eq:hsv}
\delta_{ij_{hsv}}= { \sqrt{\delta_v^2+\delta_s^2} \over \sqrt{k_{dv}^2+k_{ds}^2}} ,
\end{equation}
where:
$$ \delta_v = k_{dv} \vert v_{i_v}-v_{j_v} \vert, \delta_h=\vert v_{i_h}-v_{j_h} \vert, $$
$$ \theta = \begin{cases} \delta_h, & \mbox{if } \delta_h <180^{\circ} \\ 360^{\circ}-\delta_h, & \mbox{if } \delta_h \ge 180^{\circ} \end{cases} $$
$$\delta_s=k_{ds}\sqrt{v_{i_s}^2+v_{j_s}^2- 2 v_{i_s} v_{j_s} \cos{\theta}}. $$
The denominator in (\ref{eq:hsv}) is introduced to normalize the color error in the range $\left[0,1\right]$. The parameters $k_{dv}$,$~k_{ds}$ are used to weight the \textit{value} and \textit{saturation} differences respectively. In our experiments these parameters are always kept fixed to $k_{dv}=4.5$,$~k_{ds}=0.1$.
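A direct Python transcription of (\ref{eq:hsv}) may help fix the conventions (hue in degrees, saturation and value in $[0,1]$, $k_{dv}$ and $k_{ds}$ fixed as in the text; the test colours are illustrative):

```python
import math

# Transcription of the HSV colour difference: hue in degrees,
# saturation and value in [0, 1]; k_dv and k_ds as in the text.

K_DV, K_DS = 4.5, 0.1

def delta_hsv(hsv_i, hsv_j):
    hi, si, vi = hsv_i
    hj, sj, vj = hsv_j
    delta_v = K_DV * abs(vi - vj)
    delta_h = abs(hi - hj)
    theta = delta_h if delta_h < 180.0 else 360.0 - delta_h
    delta_s = K_DS * math.sqrt(si ** 2 + sj ** 2
                               - 2.0 * si * sj * math.cos(math.radians(theta)))
    return math.sqrt(delta_v ** 2 + delta_s ** 2) / math.sqrt(K_DV ** 2 + K_DS ** 2)

d_same = delta_hsv((30.0, 0.5, 0.5), (30.0, 0.5, 0.5))  # identical colours
d_far = delta_hsv((0.0, 1.0, 1.0), (180.0, 1.0, 0.0))   # opposite hue and value
```

Identical colours give a zero difference, while strongly differing ones approach the upper end of the normalised range.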
\subsection{Depth}
Since depth maps obtained from low-cost Kinect-like sensors are usually noisy and prone to quantization errors,
we first apply a smoothing to the depth map. In \cite{rusu}, a depth-dependent smoothing algorithm is proposed which generates a smoothing kernel of different size for each pixel in the depth image. The area of such a kernel is based on two indicators, i.e. depth information of the pixel itself and the distance of the pixel from the object borders. The former is used to generate a wider smoothing area for pixels far from the camera, since the noise of the depth data is proportional to the distance from the sensor.
The latter guarantees that the object edges are not smoothed by the filter.
The final smoothing kernel sizes are saved in a \emph{Smoothing Area Map} $\mathcal{S}(y,x)$. The average value within a region is thus computed by means of the depth integral image $\mathcal{I}_D(y,x)$ and saved in the smoothed depth map $\mathcal{D}_s(y,x)$ as follows:
\begin{small}
\begin{equation}\label{eq:rusuSmoothing}
\begin{split}
\mathcal{D}_s(y,x) = &\frac{1}{\left(2r+1\right)^2} \left[~\mathcal{I}_D(y+r,x+r) - \mathcal{I}_D(y+r,x-r) \right.\\ &\left.- \mathcal{I}_D(y-r,x+r) + \mathcal{I}_D(y-r,x-r)~\right];
\end{split}
\end{equation}
\end{small}
where: $r = \mathcal{S}(y,x)~$.
We noticed that the smoothing from \cite{rusu} introduces noise when the depth image contains shadows (depth image pixels with zero depth value); this is because (\ref{eq:rusuSmoothing}) is not able to discriminate between pixels having real depth information and pixels having no depth component.
We therefore generate a binary image $\mathcal{B}_D(y,x) $ from the depth map $\mathcal{D}(y,x)$ as follows:
\begin{equation}\label{eq:binaryDepth}
\mathcal{B}_D(y,x) =
\begin{cases}
0 & \quad \text{if } \mathcal{D}(y,x)\neq 0 \\
1 & \quad \text{if } \mathcal{D}(y,x) = 0 \\
\end{cases}
\end{equation}
The binary depth map integral image $\mathcal{I}_B(y,x)$ is then used to count the number of pixels with no depth information ($\gamma_0$) inside the smoothing area of a given depth image pixel:
\begin{equation}\label{eq:numNaNSmoothing}
\begin{split}
\gamma_0 = & \mathcal{I}_B(y+r,x+r) - \mathcal{I}_B(y+r,x-r)~+ \\
& - \mathcal{I}_B(y-r,x+r) + \mathcal{I}_B(y-r,x-r);
\end{split}
\end{equation}
where: $r = \mathcal{S}(y,x)~$.\\
Equation (\ref{eq:rusuSmoothing}) is thus updated as shown in (\ref{eq:MySmoothing}).
\begin{small}
\begin{equation}\label{eq:MySmoothing}
\begin{split}
&\mathcal{D}_s(y,x) = \frac{1}{\left(2r+1\right)^2-\gamma_0} \left[~\mathcal{I}_D(y+r,x+r)~ + \right.\\ &\left. - \mathcal{I}_D(y+r,x-r) - \mathcal{I}_D(y-r,x+r) + \mathcal{I}_D(y-r,x-r)~\right];
\end{split}
\end{equation}
\end{small}
The denominator in (\ref{eq:MySmoothing}) equals zero if and only if all depth image pixels inside the smoothing kernel are equal to zero (no depth data is available).
In this case the smoothing has no meaning and $\mathcal{D}_s(y,x) = \mathcal{D}(y,x)$.\\
Figure \ref{fig:mySmoothing} shows a comparison between the original depth smoothing algorithm and the modified one.
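The shadow-aware smoothing of (\ref{eq:MySmoothing}) can be sketched with two summed-area tables, one for the depth values and one for the zero-depth mask. In the Python sketch below a constant kernel radius \texttt{r} stands in for the Smoothing Area Map $\mathcal{S}(y,x)$, and border pixels are left untouched:

```python
import numpy as np

# Sketch of the shadow-aware box filter: summed-area tables of the depth
# map and of its zero-depth (shadow) mask let the filter average only
# the valid pixels. A constant radius r stands in for S(y, x).

def integral(img):
    # Padded summed-area table: I[y, x] = sum of img[:y, :x].
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(I, y, x, r):
    return (I[y + r + 1, x + r + 1] - I[y + r + 1, x - r]
            - I[y - r, x + r + 1] + I[y - r, x - r])

def smooth_depth(D, r):
    I_D = integral(D.astype(float))
    I_B = integral((D == 0).astype(float))      # shadow-pixel counter
    out = D.astype(float).copy()
    H, W = D.shape
    for y in range(r, H - r):
        for x in range(r, W - r):
            gamma0 = box_sum(I_B, y, x, r)      # invalid pixels in window
            n = (2 * r + 1) ** 2 - gamma0
            if n > 0:                           # else keep the raw value
                out[y, x] = box_sum(I_D, y, x, r) / n
    return out

D = np.full((7, 7), 2.0)
D[3, 3] = 0.0                                   # one shadow pixel
Ds = smooth_depth(D, r=1)
```

On the toy map, windows containing the shadow pixel still average to the true depth, since the zero value is excluded from both the sum and the pixel count.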
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=.45\columnwidth]{figures/SmoothedDepthFNAN.jpg} & \includegraphics[width=.45\columnwidth]{figures/SmoothedDepthWNANDepth.jpg} \\
(a) & (b) \\[2pt]
\includegraphics[width=.45\columnwidth]{figures/depthSmoothingNAN.jpg} & \includegraphics[width=.45\columnwidth]{figures/depthSmoothingMY2.jpg} \\
(c) & (d) \\[2pt]
\includegraphics[width=.45\columnwidth]{figures/SmoothedDepthWNAN.jpg} & \includegraphics[width=.45\columnwidth]{figures/depthSmoothingMY.jpg} \\
(e) & (f) \\[2pt]
\end{tabular}
\caption{Comparison of depth smoothing algorithms in the presence of missing depth values (dark blue pixels in both (a) and (b)). The left column shows the results of the original smoothing algorithm as described in \cite{rusu}; the right column depicts the depth smoothing results obtained with our modifications to the original algorithm. (a) and (b) show $\mathcal{D}_s(y,x)$. (c) and (d) show the point cloud extracted from $\mathcal{D}_s(y,x)$. (e) and (f) show object and edge details.}
\label{fig:mySmoothing}
\end{figure}
The depth error between two vertices $v_{i_d}=\left(y_{i_d},x_{i_d}\right)$ and $v_{j_d}=\left(y_{j_d},x_{j_d}\right)$ is then defined as:
\begin{equation}
\delta_{ij_{depth}}=\mathcal{D}_s(y_{i_d},x_{i_d})-\mathcal{D}_s(y_{j_d},x_{j_d})
\end{equation}
$\delta_{ij_{depth}}$ is then normalized to be in the range $\left[0,1\right]~$.
When either $v_{i_d}$ or $v_{j_d}$ is undefined (one of the pixels belongs to a shadow in the depth image) $\delta_{ij_{depth}}$ is set to 0, since a shadow border is not necessarily an object border. Shadows in Kinect-like depth images can be caused by several effects: occlusions, highly reflective or absorbing materials, highly skewed surfaces, thin objects, or objects placed too near to the sensor.
\subsection{Saliency}
We compute a visual saliency map $\mathcal{V}_S(y,x)$ on the color image using the algorithm proposed in \cite{montabone2010human}, which is a fast implementation of visual saliency that uses an integral image on the original scale of the image in order to obtain high quality features in real time.
The algorithm is based on a single parameter for computing all the filter windows on a single integral image $\varsigma=\sigma 2^s$,
where $\sigma$ represents the surround and $s$ the scale.
The saliency map $\mathcal{V}_S(y,x)$ is first normalized in the range $\left[0,1\right]~$ and then filtered through a power-law transformation as:
\begin{equation}
\hat{\mathcal{V}}_S(y,x) = \mathcal{V}_S(y,x)^{4}
\end{equation}
This transformation lowers the values of middle gray-level pixels while keeping the high gray-level ones (pixels close to white) almost unchanged. The latter are pixels that are likely to belong to object borders. Figure \ref{fig:SaliencyPowerlaw} shows the transformed saliency image.
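The normalisation and power-law step is a one-liner in practice; a small sketch (the sample map is illustrative):

```python
import numpy as np

# Power-law filtering of the saliency map: normalise to [0, 1] and raise
# to the fourth power, suppressing mid-grey responses while keeping
# near-white pixels almost unchanged. The sample map is a toy.

def filter_saliency(V):
    V = V.astype(float)
    V = (V - V.min()) / (V.max() - V.min() + 1e-12)
    return V ** 4

V = np.array([[0.0, 128.0, 255.0]])
V_hat = filter_saliency(V)
```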
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=.45\columnwidth]{figures/Saliency_Map.jpg} & \includegraphics[width=.45\columnwidth]{figures/Saliency_Map_Power_Law.jpg} \\
(a) & (b)
\end{tabular}
\caption{(a) Original saliency image. (b) Power-law filtered saliency image}
\label{fig:SaliencyPowerlaw}
\end{figure}
The saliency for each vertex $v_{i_s} = \left(y_{i_s},x_{i_s}\right)$ is thus defined as:
\begin{equation}
\delta_{i_{sal}}= \hat{\mathcal{V}}_S\left(y_{i_s},x_{i_s}\right)
\end{equation}
\subsection{Removing texture edges}
Similarly to \cite{schafer2013depth} and \cite{mishra2012segmenting}, we also want to remove edges caused by strong textures by computing the difference in depth of two pixels on opposite sides of an edge pixel. While the first approach averages the depth gradient of an edge pixel within a small neighborhood along a connected edge, and the second uses a logistic function trained on examples to detect depth boundary edges, we run a Canny edge detector on the image, based on the Scharr kernels, to obtain a binary edge map $\mathcal{E}_E(y,x)$. We also extract the gradient directions $\Theta$ of the edge pixels and discretize them to one of the possible angles (namely $0^{\circ},45^{\circ},90^{\circ},135^{\circ}$).\\
For each edge pixel $\vec{e}=(y_e,x_e) \in \mathcal{E}_E$, we sample two points, one along the positive edge gradient direction and the other along the negative one and we compute the depth gradients as:
\begin{equation}
\begin{split}
\rho_{+} = \mathcal{D}_s(\vec{e}) - \mathcal{D}_s(\vec{e}+\varepsilon_{\rho}\vec{n})\\
\rho_{-} = \mathcal{D}_s(\vec{e}) - \mathcal{D}_s(\vec{e}-\varepsilon_{\rho}\vec{n})
\end{split}
\end{equation}
where $\vec{n}$ is the edge normal vector and $\varepsilon_{\rho}$ indicates the offset, in pixels, along the edge gradient direction.\\
A \emph{depth boundary} map $\mathcal{E}_B(y,x)$ is then computed as follows:
\begin{equation}
\mathcal{E}_B(\vec{e}) =
\begin{cases}
0 & \quad \text{if } \rho_{+} < t_{\rho} \wedge \rho_{-} < t_{\rho}\\
1 & \quad \text{otherwise} \\
\end{cases}
\end{equation}
If both $\rho_{+}$ and $\rho_{-}$ are below a given threshold $t_{\rho}$, then the edge point $\vec{e}$ does not represent a real edge but is caused by texture instead; otherwise the point is a \emph{depth boundary}.
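The test can be sketched as follows, using gradient magnitudes; the step $\varepsilon_{\rho}$, the threshold $t_{\rho}$, and the synthetic depth map are illustrative values, and the four discretised directions are encoded as pixel offsets:

```python
import numpy as np

# Sketch of the texture-edge test: sample the smoothed depth map on both
# sides of an edge pixel along its discretised gradient direction and
# keep the pixel only if at least one depth gradient magnitude exceeds
# t_rho. eps and t_rho are illustrative values.

DIRS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}  # (dy, dx)

def is_depth_boundary(Ds, y, x, angle, eps=2, t_rho=0.02):
    dy, dx = DIRS[angle]
    rho_plus = Ds[y, x] - Ds[y + eps * dy, x + eps * dx]
    rho_minus = Ds[y, x] - Ds[y - eps * dy, x - eps * dx]
    return not (abs(rho_plus) < t_rho and abs(rho_minus) < t_rho)

# Flat depth with a step: the step edge survives, a texture edge does not.
Ds = np.ones((10, 10))
Ds[:, 5:] = 1.5                      # depth step between columns 4 and 5
edge_on_step = is_depth_boundary(Ds, 5, 5, angle=0)
edge_on_texture = is_depth_boundary(Ds, 5, 2, angle=0)
```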
While most boundary pixels of an object correspond to depth discontinuities, the part of the object that touches the surface it is resting on does not present a depth discontinuity across it; therefore the \emph{contact edge} pixels are filtered out from the \emph{depth boundary} map $\mathcal{E}_B(y,x)$.\\
For each edge pixel $\vec{e}=(y_e,x_e) \in \mathcal{E}_E$, three points along the edge gradient direction are sampled, with the central one being the pixel $\vec{e}$. These three pixels are then projected onto the camera frame by means of the camera intrinsic matrix $\textbf{K}$ in order to obtain the corresponding three points: $\vec{p}_e,\vec{p}_+,\vec{p}_- \in \mathbb{R}^3$.
\begin{equation}
\begin{split}
&\vec{p}_e = \textbf{K}^{-1}\left[ ~\mathcal{D}_s(\vec{e})~\tilde{\textbf{e}} ~\right]
\\
&\vec{p}_+ = \textbf{K}^{-1}\left[ ~\mathcal{D}_s(\vec{e}_+)~\tilde{\textbf{e}}_+ ~\right]
\\
&\vec{p}_- = \textbf{K}^{-1}\left[ ~\mathcal{D}_s(\vec{e}_-)~\tilde{\textbf{e}}_- ~\right]
\end{split}
\end{equation}
where:
\begin{equation*}
\vec{e}_+ = \vec{e} + \varepsilon_e\vec{n}; \quad \vec{e}_- = \vec{e} - \varepsilon_e\vec{n}; \quad \tilde{\textbf{e}} = \left[x_e,y_e,1\right]^{T} .
\end{equation*}
The two unit vectors defined by the above three points are computed together with their angle as follows:
\begin{align}\label{eq:3pointAngle}
&\vec{v}_{n_+} = \dfrac{\vec{p}_+-\vec{p}_e}{\| \vec{p}_+-\vec{p}_e \|}\\
&\vec{v}_{n_-} = \dfrac{\vec{p}_--\vec{p}_e}{\| \vec{p}_--\vec{p}_e \|}\\
&\theta_v = {\rm atan2}\left(\Vert\vec{v}_{\times}\Vert,\, \vec{v}_{dot}\right)\label{eq:contactAngle}
\end{align}
where: $\vec{v}_{\times} = \vec{v}_{n_+} \times \vec{v}_{n_-}$ and $\vec{v}_{dot} = \vec{v}_{n_+} \cdot \vec{v}_{n_-}$.
For everyday objects lying on ordinary surfaces (such as tables, shelves, floors, etc.), contact edge pixels can be estimated straightforwardly by thresholding the angle $\theta_v$ in (\ref{eq:contactAngle}). Common interactions between objects and supporting surfaces lead to contact angles close to $90^{\circ}$.\\
A \emph{contact boundary} map $\mathcal{E}_C(y,x)$ is then computed as follows:
\begin{equation}\label{eq:contactBoundary}
\mathcal{E}_C(\vec{e}) =
\begin{cases}
0 & \quad \text{if } \theta_v > t_{\theta_H} \vee~ \theta_v < t_{\theta_L}\\
1 & \quad \text{otherwise} \\
\end{cases}
\end{equation}
where $t_{\theta_H}$ and $t_{\theta_L}$ are high and low thresholds used to cope with perturbations of the contact angle around the ideal value.\\
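A sketch of the angle computation, back-projecting the edge pixel and its two neighbours sampled along the gradient direction; the pinhole intrinsic matrix $\textbf{K}$ below (focal length and principal point) is invented for the example:

```python
import numpy as np

# Sketch of the contact-angle computation: back-project the edge pixel
# and its two neighbours, then measure the angle between the two
# resulting unit vectors. The intrinsic matrix K is an invented example.

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

def backproject(K_inv, x, y, depth):
    # p = K^{-1} (depth * [x, y, 1]^T)
    return K_inv @ (depth * np.array([x, y, 1.0]))

def contact_angle(K, pix_e, pix_plus, pix_minus, d_e, d_plus, d_minus):
    K_inv = np.linalg.inv(K)
    p_e = backproject(K_inv, *pix_e, d_e)
    p_p = backproject(K_inv, *pix_plus, d_plus)
    p_m = backproject(K_inv, *pix_minus, d_minus)
    v_p = (p_p - p_e) / np.linalg.norm(p_p - p_e)
    v_m = (p_m - p_e) / np.linalg.norm(p_m - p_e)
    cross = np.cross(v_p, v_m)
    theta = np.degrees(np.arctan2(np.linalg.norm(cross), np.dot(v_p, v_m)))
    return theta, cross

# One neighbour displaced in the image plane at equal depth, the other on
# the same ray at larger depth: the two unit vectors are orthogonal.
theta, cross = contact_angle(K, (319.5, 239.5), (372.0, 239.5),
                             (319.5, 239.5), 1.0, 1.0, 1.1)
```

The sign of the $x$ component of the cross product can later be used to distinguish internal boundaries from external ones.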
The contact boundary map, as defined by (\ref{eq:contactBoundary}) and (\ref{eq:contactAngle}), also includes false contact edges called \emph{internal boundaries}, as shown in figure \ref{fig:internalBoundary}a. Since we are only interested in the external contours of objects (e.g., figure \ref{fig:internalBoundary}b), we cancel internal edge pixels out by looking at the direction of the cross product $\vec{v}_{\times}$, as depicted in figure \ref{fig:internalBoundary}g.
Note that the vectors defined so far are all expressed w.r.t.\ the camera reference frame, with the $Z$-axis pointing outwards along the optical axis and the $X$-axis pointing to the right.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=.45\columnwidth]{figures/internaledgergb.jpg} & \includegraphics[width=.45\columnwidth]{figures/internaledgergbno.jpg} \\
(a) & (b)\\[6pt]
\includegraphics[width=.45\columnwidth]{figures/fullcanny.jpg} & \includegraphics[width=.45\columnwidth]{figures/filtcanny.jpg} \\
(c) & (d) \\[6pt]
\includegraphics[width=.45\columnwidth]{figures/fullcannymilk.jpg} & \includegraphics[width=.45\columnwidth]{figures/filtcannymilk.jpg} \\
(e) & (f)\\[6pt]
\multicolumn{2}{c}{\includegraphics[width=80mm]{figures/cross2.jpg} }\\
\multicolumn{2}{c}{(g)}
\end{tabular}
\caption{(a) Final edge map with internal boundaries. (b) Final edge map. (c) Canny output with no texture edge filter (coffee cups). (d) Coffee cups final edge map. (e) Canny output with no texture edge filter (milk jugs). (f) Milk jugs final edge map. (g) Internal boundary pixels definition.}
\label{fig:internalBoundary}
\end{figure}
A contact edge pixel $\vec{i} = (y_i,x_i) \in \mathcal{E}_C(y,x)$ is estimated to be an internal boundary pixel if and only if the $x$ component of the cross product $\vec{v}_{\times}$ is less than zero. Equation (\ref{eq:contactAngle}) is then updated as follows:
\begin{equation}\label{eq:contactAngleMod}
\theta_v =
\begin{cases}
- {\rm atan2}\left(\Vert\vec{v}_{\times}\Vert,\, \vec{v}_{dot}\right) & \quad \text{if } v_{\times_x} < 0\\
{\rm atan2}\left(\Vert\vec{v}_{\times}\Vert,\, \vec{v}_{dot}\right) & \quad \text{otherwise} \\
\end{cases}
\end{equation}
The definition of the \emph{contact boundary} map $\mathcal{E}_C(y,x)$ (see (\ref{eq:contactBoundary})) is therefore left unchanged.\\
The \emph{final boundary} map $\mathcal{E}_F(y,x)$ is then computed as:
\begin{equation}
\mathcal{E}_F(y,x) = \mathcal{E}_B(y,x) \cup \mathcal{E}_C(y,x)
\end{equation}
Although the simple texture edge filtering might seem limited to a small class of simple objects (i.e., prismatic ones), Figures \ref{fig:internalBoundary}d and \ref{fig:internalBoundary}f show that the algorithm is very effective for other classes of objects as well (e.g., cylindrical ones) and, in general, is able to handle complex object shapes (e.g., milk jugs).
The boundary for each vertex $v_{i_b} = \left(y_{i_b},x_{i_b}\right)$ is thus defined as:
\begin{equation}
\delta_{i_{bound}}= \mathcal{E}_F(y_{i_b},x_{i_b})
\end{equation}
\subsection{Weight function}
Two cost functions are proposed. The first one includes color, depth and saliency information, and is found to work best when a large number of shadows is present in the depth image. The second one includes depth, color and boundary edges, and works better when full depth information is available.
In the first cost function, the difference between two vertices $v_i$ and $v_j$ is defined as:
\begin{equation}\label{eq:w1}
\small
\begin{aligned}
w_{ij}=
{
{k_y \log_{2}(1+\delta_{ij_{hsv}}) + k_x \log_{2}(1+\delta_{ij_{depth}})}
\over
{2+k_x+k_y+k_s}
} \\
+
{
{
\delta_{ij_{depth}} \delta_{i_{sal}}^{1+\delta_{ij_{depth}}} + \delta_{ij_{depth}} \delta_{ij_{hsv}}^{1+\delta_{ij_{depth}}}
+ k_s \log_{2}(1+\delta_{i_{sal}})
}
\over
{2+k_x+k_y+k_s}
},
\end{aligned}
\end{equation}
where $k_s$, $k_y$ , $k_x$ are parameters for weighting in the saliency map, the color and depth difference respectively.
In the second cost function, the difference between two vertices is defined as:
\begin{equation}\label{eq:w2}
w_{ij}= {{k_x \delta_{ij_{depth}} \log_{2}(1+ \delta_{ij_{hsv}}) + k_b\delta_{i_{bound}}} \over {k_x+k_b}},
\end{equation}
where $k_b$ is a parameter for weighting in boundary edges, while the denominator in both (\ref{eq:w1}) and (\ref{eq:w2}) is needed to normalize the weights to the range $\left[0,1\right]$.\\
We use base-2 logarithms since all the cues used as input to the weight functions ($\delta$) are in the range $\left[0,1\right]$; the dynamic range of the cues is thus left unchanged. Moreover, logarithmic functions map a narrow range of low cue values into a wider range of output levels while compressing higher values. This property tends to create edge weights that are spread within their own dynamic range rather than generating quasi-binary weights maps.\\
(\ref{eq:w1}) is composed by a logarithmic term which handles a single independent variable (i.e., $k\log_{2}(1+\delta)$) and a coupled term which relates two variables (i.e., $\delta_{depth} \delta_{hsv}^{1+\delta_{depth}}$). The former controls the effect of each input information independently through the parameters $k$. It plays a fundamental role when no depth data are available, since the coupled terms equal zero.
The latter, instead, biases the weights assignment by introducing depth information. The lower the depth variation is, the lower is the contribute of color and saliency cues; this happens in highly textured objects where two pixels, belonging to the same object surface, generate small depth gradient but large color (or saliency) difference. For large depth gradients the weight shows an exponential trend and reaches maximum together with the color (or saliency) difference.
The exponential shape of $w$ for medium-high depth gradients is needed to mitigate the linear contribute of $\delta_{depth}$. This situation may arise in presence of concave objects (i.e., ceramic bowls or horseshoe-like objects) where medium-high variations of the depth gradient do not necessarily mean that the corresponding two graph vertices belong to different objects.
In this case, in fact, the function generates lower edge weights in the presence of low and medium visual cue differences.
The cost function in (\ref{eq:w2}) is used when little or no shadows are present in the depth map. We fill the small gaps in the depth image by in-painting. Depth maps with large shadows are not in-painted since the reconstruction error would generate noise and false object boundaries. In this case, the depth map is not modified and the cost function in (\ref{eq:w1}) is used instead.
The second weight function has a coupled term that trades off the information of $\delta_{hsv}$ and $\delta_{depth}$ like the first weight function, but without any saliency data. The idea behind this term is the same as in (\ref{eq:w1}), but we noticed that the logarithmic term works better here than the exponential one. The second term adds a bias $k_b$ when the vertex $v_i$ is a boundary pixel $p_i=(y_i,x_i) \in \mathcal{E}_F$.\\
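The computation of (\ref{eq:w2}) is a one-liner; the following Python sketch (not our actual C++/OpenCV implementation) illustrates it, using as defaults the parameter values adopted later for the RGB-D Scenes Dataset:

```python
import math

def edge_weight_w2(delta_depth, delta_hsv, is_boundary, k_x=7.5, k_b=0.66):
    """Edge weight of (eq. w2): a depth/color coupled term plus a boundary
    bias k_b, normalized by (k_x + k_b) so the weight stays in [0, 1].
    All cues (delta_depth, delta_hsv) are assumed to lie in [0, 1]."""
    delta_bound = 1.0 if is_boundary else 0.0
    coupled = k_x * delta_depth * math.log2(1.0 + delta_hsv)
    return (coupled + k_b * delta_bound) / (k_x + k_b)
```

Note how a strongly textured but geometrically flat pair of pixels ($\delta_{depth}\approx 0$) receives a near-zero weight, while a depth discontinuity on a boundary pixel saturates the weight towards $1$.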
The graph is partitioned using Disjoint-set Forests. At the first iteration each node represents a distinct region $R_i$. Regions are iteratively merged based on (\ref{eq:merging}).
The final result is a set of regions $\cal R$.
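The partitioning step can be sketched with a standard disjoint-set forest (union-find with path compression and union by rank); the merging predicate (\ref{eq:merging}) is not reproduced here, so the sketch shows only the data structure, not our actual implementation:

```python
class DisjointSetForest:
    """Union-find with path compression and union by rank; each node
    starts as its own region R_i, and regions are merged iteratively."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.size = [1] * n  # region sizes, useful for later rejection steps

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return ri
        if self.rank[ri] < self.rank[rj]:
            ri, rj = rj, ri
        self.parent[rj] = ri
        self.size[ri] += self.size[rj]
        if self.rank[ri] == self.rank[rj]:
            self.rank[ri] += 1
        return ri
```

In practice, edges are sorted by weight and a union is performed whenever the merging criterion accepts the pair of regions touched by the edge.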
\subsection{Post-processing}
In order to discard false positives, such as regions that belong to the background, some rejection steps are
required on the set $\cal R$.
Principal component analysis is performed on each region to estimate the principal components $\vec{x_1}$, $\vec{x_2}$, the corresponding eigenvalues $\lambda_{1}$, $\lambda_{2}$, and the eccentricity $\varepsilon$.
If any of $\lambda_{1}$, $\lambda_{2}$, or $\varepsilon$ exceeds a given threshold, the region is discarded. The thresholds can be roughly estimated if the classes of objects to be found are known in advance.
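A minimal sketch of the PCA-based rejection follows; the eccentricity formula $\sqrt{1-\lambda_2/\lambda_1}$ (ellipse fit) is an assumption of the sketch, since several equivalent definitions exist:

```python
import math
import numpy as np

def region_pca_stats(pixels):
    """pixels: (N, 2) array of (y, x) coordinates of one region.
    Returns the sorted eigenvalues (lambda1 >= lambda2) of the 2x2
    covariance matrix and an eccentricity estimate; the formula
    sqrt(1 - lambda2/lambda1) assumes an ellipse fit to the region."""
    pts = np.asarray(pixels, dtype=float)
    cov = np.cov(pts.T)
    l2, l1 = np.sort(np.linalg.eigvalsh(cov))
    ecc = math.sqrt(max(0.0, 1.0 - l2 / l1)) if l1 > 0 else 0.0
    return l1, l2, ecc

def keep_region(pixels, max_l1, max_l2, max_ecc):
    """Discard a region if any of lambda1, lambda2 or the eccentricity
    exceeds its threshold."""
    l1, l2, ecc = region_pca_stats(pixels)
    return l1 <= max_l1 and l2 <= max_l2 and ecc <= max_ecc
```

A compact square region yields near-zero eccentricity and small eigenvalues, whereas an elongated strip (typical of shelf edges or background slivers) is rejected by either the $\lambda_1$ or the $\varepsilon$ threshold.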
We add two more rejection steps when dealing with difficult lighting conditions and poor depth maps (see Section \ref{sec:rutgers}).
When in-painting of the depth image is not used, regions in which more than 30$\%$ of the pixels lack valid depth data are also discarded, as such regions may lead to the failure of the robot grasping policies defined thereafter. Finally, dark regions can be discarded too. A 32-bin histogram of the brightness component of the region is computed. If $30\%$ of the region pixels fall within the first three bins of the histogram (i.e., pixel values in the range $\left[0,24\right]$), the region is discarded.
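Both checks can be sketched as follows; encoding invalid depth as 0 and an 8-bit brightness channel are assumptions of the sketch:

```python
import numpy as np

def passes_depth_check(depth_region, max_invalid_frac=0.30):
    """Reject a region when more than 30% of its pixels carry no valid
    depth.  Invalid depth encoded as 0 is an assumption of this sketch."""
    invalid = np.count_nonzero(depth_region == 0)
    return invalid <= max_invalid_frac * depth_region.size

def passes_darkness_check(brightness_region, dark_frac=0.30):
    """32-bin histogram of the brightness channel (assumed 8-bit); reject
    when 30% of the pixels fall in the first three bins, i.e. values in
    roughly [0, 24] (bin width 256/32 = 8)."""
    hist, _ = np.histogram(brightness_region, bins=32, range=(0, 256))
    return hist[:3].sum() < dark_frac * brightness_region.size
```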
Since we are interested in grasping, it is also possible to discard regions which are out of reach for a robotic arm because they are too distant from the camera frame.
\section{Experimental results}
\label{sec:results}
We tested our approach on three public datasets of RGB-D scenes \cite{rutgers}, \cite{rgbd} and \cite{tejani2014latent}.
We also compare the results with those of \cite{mishra2012segmenting} and show a qualitative comparison with the original algorithm \cite{felzenszwalb2004efficient} and with \cite{6163000}.
For each image of every dataset, objects have been manually labelled by delineating the pixels inside the object boundary.
If a segmented object overlaps more than 70\% with the corresponding ground-truth object pixels, we consider the object successfully segmented, as in \cite{rao2010grasping}.
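A sketch of this criterion follows; interpreting the overlap as the fraction of ground-truth pixels covered is an assumption of the sketch (an IoU-based variant would be equally plausible):

```python
import numpy as np

def successfully_segmented(pred_mask, gt_mask, min_overlap=0.70):
    """Detection criterion: the predicted region must cover more than 70%
    of the ground-truth object pixels."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return inter > min_overlap * gt.sum()
```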
The software has been developed using the OpenCV library in C++ under Linux and runs on CPU. The source code will be available online.
All frames are $640 \times 480$, and the average processing time per image was 0.6 s on a standard PC with a 2.3 GHz CPU (single thread). Figure \ref{fig:challenge} shows two results on different datasets, while Figure \ref{fig:comparison} shows a comparison between different approaches.
\subsection{Rutgers APC RGB-D Dataset}
\label{sec:rutgers}
This dataset has been created specifically for the Amazon Picking Challenge and is composed of different runs,
each one containing a series of RGB images and corresponding depth images, acquired using an Asus XTion sensor in
different positions and with a variable number of objects on shelves.
For each position, four consecutive images are provided.
The dataset is particularly challenging due to low lighting, heavy shadows, and missing areas in the depth image. We assembled three runs from the dataset with increasing average number of objects in each image.
We tested our approach on a subset of runs, using the first weight function due to the large number of shadows.
Parameters were set to the following values: $\gamma=5$, $k_x=1.05$, $k_y=1.5$, $k_s=0.5$.
Results are reported in Table \ref{table:rutgers}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
& No. of objects & $\%$ of objects detected \\
\hline\hline
Run\_1 & 80 & 87.9\% \\
Run\_2 & 120 & 92.3\% \\
Run\_3 & 121 & 75.6\% \\
\hline
\end{tabular}
\end{center}
\caption{Results for the Rutgers APC Dataset.
}
\label{table:rutgers}
\end{table}
\subsection{RGB-D Scenes Dataset}
The RGB-D Scenes Dataset consists of 8 scenes annotated with objects that belong to the RGB-D Object Dataset (bowls, caps, cereal boxes, coffee mugs, and soda cans). Each scene is a single video sequence consisting of multiple RGB-D frames. The objects are visible from different viewpoints and distances and may be partially or completely occluded.
We compare the results of the proposed algorithm with those of \cite{mishra2012segmenting}. We tested the approach on a subset of six objects and on three different scenes.
We used the second weight function and set the parameters to the following values: $\gamma=0.0016$, $k_x=7.5$, $k_b=0.66$.
Results are shown in Table \ref{table:rgbd}. It should be noted that our approach does not rely on knowledge of the camera pose and is thus more general, at the cost of lower accuracy for some objects, while attaining 100\% accuracy for other objects.
Results are comparable to \cite{mishra2012segmenting}, though the metric we use is more strict (in \cite{mishra2012segmenting} an overlap of 50\% is considered as a good detection).
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \multicolumn{6}{|c|}{$\%$ of objects detected} \\
& Soda can & Coffee mug & Cap & Bowl & Flashlight & Cereal box \\
\hline\hline
Table\_1 & 90.6\% (100\%) & 100\% (83.6\%) & 80.1\% (93.6\%) & 85.5\% (90.3\%) & 98.1\% (98.1\%) & 72\% (97.8\%) \\
Desk\_1 & 100\% (93.7\%) & 100\% (92.5\%) & 74.2\% (100\%) & - & - & - \\
Kitchen\_small\_1 & 98.6\% (74.8\%) & 100\% (70.1\%) & 86.5\% (97.3\%) & 100\% (90\%) & 100\% (88.5\%) & 77.6\% (84.4\%) \\
\hline
\end{tabular}
\end{center}
\caption{Results for the RGB-D Dataset. Inside parentheses, the results from \cite{mishra2012segmenting} are reported for comparison.}
\label{table:rgbd}
\end{table*}
\subsection{Multiple-instance dataset}
In \cite{tejani2014latent}, 6 objects are captured under varying viewpoints with heavy background clutter, scale and pose changes, and, in particular, foreground occlusions and multiple instances of the same object
(three instances of each object are present in each frame, together with other objects and clutter). We tested the approach on a subset of scenes.
Parameters for the second weight function are as follows: $\gamma=0.001$, $k_x=1.2$, $k_b=0.05$.
Results are shown in Table \ref{table:challenge}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
& No. of objects& $\%$ of objects detected \\
\hline\hline
Milk & 2589 & 66.6\% \\
Coffee\_Cup & 2127 & 87.2\% \\
Shampoo & 2118 & 99.6\% \\
Camera & 894 & 96.3\% \\
\hline
\end{tabular}
\end{center}
\caption{Results for the multiple instance dataset.}
\label{table:challenge}
\end{table}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=.45\columnwidth]{figures/result_table} & \includegraphics[width=.45\columnwidth]{figures/results/result_milk} \\
(a) & (b) \\[6pt]
\end{tabular}
\caption{Examples of the results on the \cite{rutgers} dataset (a) and \cite{tejani2014latent} dataset (b). Segmented objects are highlighted.}
\label{fig:challenge}
\end{figure}
\begin{figure}[h!]
\center
\begin{tabular}{cc}
\includegraphics[width=.37\columnwidth]{figures/img_1164.jpg} &
\includegraphics[width=.37\columnwidth]{figures/crayola_64_ct-image-E-2-2-2.jpg}\\
\includegraphics[width=.37\columnwidth]{figures/cup-depth.jpg} &
\includegraphics[width=.37\columnwidth]{figures/crayola222-depth.jpg}\\
\includegraphics[width=.37\columnwidth]{figures/cup.jpg} &
\includegraphics[width=.37\columnwidth]{figures/crayola.jpg}\\
\includegraphics[width=.37\columnwidth]{figures/img1164_potts.jpg} &
\includegraphics[width=.37\columnwidth]{figures/img222crayola.jpg}\\
\includegraphics[width=.37\columnwidth]{figures/results/novel1.jpg} &
\includegraphics[width=.37\columnwidth]{figures/results/novel2.jpg}\\
\includegraphics[width=.37\columnwidth]{figures/cup-pca.jpg} &
\includegraphics[width=.37\columnwidth]{figures/crayola-pca.jpg}\\
\includegraphics[width=.37\columnwidth]{figures/nostro.jpg} &
\includegraphics[width=.37\columnwidth]{figures/resultcrayola.jpg}\\
\end{tabular}
\caption{Comparison between different approaches. First and second row: original images; third row: \cite{felzenszwalb2004efficient}; fourth row: \cite{6163000}; fifth row: \cite{rao2010grasping}; sixth and seventh row: our approach.}
\label{fig:comparison}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We presented a fast approach for segmenting simple objects from RGB-D images in the 2D domain, without need for 3D models of the objects. The approach builds on graph-based image segmentation by integrating depth information and by distinguishing between object boundaries and strong texture, and is able to work with texture-less objects as well as heavily textured ones.
While the approach is targeted at robot grasping, it is general in its formulation. We specifically addressed the case of poor depth images and difficult lighting conditions.
We proposed a modified Canny edge detector that integrates depth information in order to find robust edges in RGB-D images. We then proposed two cost functions for computing the graph weights.
The algorithm
has been tested on three public object recognition datasets, relevant to robot grasping and containing different scenarios and challenges.
Future work will be devoted to parallelization of the graph creation and partitioning phases and to investigate different partitioning strategies.
\bibliographystyle{plainnat}
\section{Introduction}
Gradient-based optimization algorithms have been the de facto choice in deep learning for solving the optimization problems of the form:
\begin{align}
\mathbf{x}^\star = \argmin\nolimits_{\mathbf{x} \in \mathbb{R}^d} \Bigl\{ f(\mathbf{x}) \triangleq (1/n) \sum\nolimits_{i=1}^n f^{(i)}(\mathbf{x}) \Bigr\}\,, \label{eqn:optim}
\end{align}
where $f:\mathbb{R}^d\to\mathbb{R}$ denotes the non-convex loss function, $f^{(i)}$ denotes the loss contributed by an individual data point $i \in \{1, \dots, n\}$, and $\mathbf{x}\in\mathbb{R}^d$ denotes the collection of all the parameters of the neural network.
Among others, stochastic gradient descent with momentum (SGDm) is one of the most popular algorithms for solving such optimization tasks (see e.g., \citet{sutskever2013importance,iclr2018}), and is based on the following iterative scheme:
\begin{align}
\tilde{\mathbf{v}}^{k+1}= \tilde{\gamma} \tilde{\mathbf{v}}^k - \tilde{\eta} \nabla \tilde{f}_{k+1}(\mathbf{x}^{k}), \quad \mathbf{x}^{k+1} = \mathbf{x}^{k} + \tilde{\mathbf{v}}^{k+1}, \label{eqn:sgdm_common}
\end{align}
where $k$ denotes the iteration number, $\tilde{\eta}$ is the step-size, $\tilde{\gamma}$ is the friction, and $\tilde{\mathbf{v}}$ denotes the \emph{velocity} (also referred to as momentum). Here, $\nabla\tilde{f}_k$ denotes the stochastic gradients defined as follows:
\begin{align}
\nabla\tilde{f}_k(\mathbf{x}) \triangleq (1/b) \sum\nolimits_{i\in \Omega_k} \nabla f^{(i)}(\mathbf{x}), \label{eqn:stochgrad}
\end{align}
where $\Omega_k \subset \{1,\dots,n\}$ denotes a random subset drawn from the set of data points with $|\Omega_k| = b \ll n$ for all $k$.
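The update \eqref{eqn:sgdm_common} together with the minibatch gradient \eqref{eqn:stochgrad} can be sketched on a toy least-squares problem (a minimal NumPy illustration; all constants are arbitrary choices, not tied to any deep learning framework):

```python
import numpy as np

def sgdm_step(x, v, grad, eta_t, gamma_t):
    """One SGD-with-momentum step:
    v_{k+1} = gamma~ * v_k - eta~ * grad,   x_{k+1} = x_k + v_{k+1}."""
    v_new = gamma_t * v - eta_t * grad
    return x + v_new, v_new

def minibatch_grad(X, y, w, idx):
    """Stochastic gradient (1/b) * sum_{i in Omega_k} grad f^{(i)}(w)
    for the least-squares losses f^{(i)}(w) = 0.5 * (x_i^T w - y_i)^2."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 5))
w_true = rng.standard_normal(5)
y = X @ w_true                      # noise-free labels for illustration
w, v = np.zeros(5), np.zeros(5)
for k in range(2000):
    idx = rng.choice(256, size=32, replace=False)   # Omega_k with b = 32
    w, v = sgdm_step(w, v, minibatch_grad(X, y, w, idx), 0.01, 0.9)
```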
When the gradients are computed on all the data points (i.e., $\nabla \tilde{f}_k = \nabla f$), SGDm becomes \emph{deterministic} and can be viewed as a discretization of the following \emph{continuous-time} system \cite{GGZ,maddison2018hamiltonian}:
\begin{align}
\mathrm{d} \mathbf{v}_t=- (\gamma \mathbf{v}_t+\nabla f(\mathbf{x}_t)) \mathrm{d} t, \qquad \mathrm{d} \mathbf{x}_t= \mathbf{v}_t \mathrm{d} t, \label{eqn:ode}
\end{align}
where $\mathbf{v}_t$ is still called the velocity. The connection between this system and \eqref{eqn:sgdm_common} becomes clearer, if we discretize this system by using the Euler scheme with step-size $\eta$:
\begin{align}
\nonumber &\mathbf{v}^{k+1}= \mathbf{v}^{k} - \eta (\gamma \mathbf{v}^{k} +\nabla f(\mathbf{x}^{k}))\,,
\\
&\mathbf{x}^{k+1} = \mathbf{x}^{k} + \eta \mathbf{v}^{k+1}\,, \label{eqn:sgdm}
\end{align}
and make the change of variables $\tilde{\mathbf{v}}^k \triangleq \eta \mathbf{v}^k$, $\tilde{\gamma} \triangleq (1-\eta \gamma) $, and $\tilde{\eta} \triangleq \eta^2$. %
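This equivalence is easy to verify numerically; the sketch below iterates both recursions on the quadratic $f(x)=x^2/2$, using the explicit friction term $\gamma \mathbf{v}^{k}$ so that the stated change of variables holds exactly:

```python
eta, gamma = 0.05, 2.0
eta_t, gamma_t = eta**2, 1.0 - eta * gamma   # eta~ and gamma~
grad = lambda x: x                            # f(x) = x^2 / 2

x1, v1 = 1.0, 0.0   # Euler discretization of the continuous-time system
x2, vt = 1.0, 0.0   # common SGDm form, with v~^k = eta * v^k
for _ in range(100):
    v1 = v1 - eta * (gamma * v1 + grad(x1))
    x1 = x1 + eta * v1
    vt = gamma_t * vt - eta_t * grad(x2)
    x2 = x2 + vt
```

Up to floating-point rounding, the two iterates coincide at every step.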
However, due to the presence of the stochastic gradient noise $U_k(\mathbf{x}) \triangleq \nabla\tilde{f}_k(\mathbf{x}) - \nabla f(\mathbf{x})$, the sequence $\{\mathbf{x}_k, \mathbf{v}_k\}_{k\in \mathbb{N}_+}$ will be a \emph{stochastic process} and the deterministic system \eqref{eqn:ode} would not be an appropriate proxy.
Understanding the statistical properties of $\{\mathbf{x}_k, \mathbf{v}_k\}_{k\in \mathbb{N}_+}$ would be of crucial importance as it might reveal the peculiar properties that lie behind the performance of SGDm for learning with neural networks.
A popular approach for understanding the dynamics of stochastic optimization algorithms in deep learning is to impose some structure on the noise $U_k$ and relate the process \eqref{eqn:sgdm_common} to a stochastic differential equation (SDE) \cite{mandt2016variational,jastrzkebski2017three,hu2017diffusion,chaudhari2018stochastic,zhu2019anisotropic,pmlr-v97-simsekli19a}. For instance, by assuming that the second-order moments of the stochastic gradient noise are bounded (i.e., $\mathbb{E}\|U_k(\mathbf{x})\|^2 < \infty$ for all admissible $k$, $\mathbf{x}$), one might argue that $U_k$ can be approximated by a Gaussian random vector due to the central limit theorem (CLT) \cite{fischer2011history}. Under this assumption, we might view \eqref{eqn:sgdm_common} as a discretization of the following SDE, which is also known as the \emph{underdamped} or \emph{kinetic} Langevin dynamics:
\begin{align}
\nonumber &\mathrm{d} \mathbf{v}_t=- (\gamma \mathbf{v}_t+\nabla f(\mathbf{x}_t))\mathrm{d} t + \sqrt{2\gamma/\beta} \mathrm{d} \mathrm{B}_t
\\
&\mathrm{d} \mathbf{x}_t= \mathbf{v}_t \mathrm{d} t,\label{eqn:sde_brownian}
\end{align}
where $\mathrm{B}_t$ denotes the $d$-dimensional Brownian motion and $\beta > 0$ is called the inverse temperature variable, measuring the noise intensity along with $\gamma$. It is easy to check that, under very mild assumptions, the solution process $\{\mathbf{x}_t,\mathbf{v}_t\}_{t\geq 0}$ admits an invariant distribution whose density is proportional to $\exp(-\beta(f(\mathbf{x}) + \|\mathbf{v}\|^2/2))$, where the function $\|\mathbf{v}\|^2/2$ is often called the \emph{Gaussian kinetic energy} (see e.g. \cite{betancourt2017geometric}) and the distribution itself is called the Boltzmann-Gibbs measure \cite{pavliotis2014stochastic,GGZ,herau2004isotropic,dalalyan2018kinetic}. We then observe that the marginal distribution of $\mathbf{x}$ in stationarity has a density proportional to $\exp(-\beta f(\mathbf{x}))$, which indicates that any local minimum of $f$ appears as a local maximum of this density. This is a desirable property since it implies that, when the gradient noise $U_k$ has light tails, the process will spend more time near the local minima of $f$. Furthermore, it has been shown that as $\beta$ goes to infinity, the marginal distribution of $\mathbf{x}$ concentrates around the global optimum $\mathbf{x}^\star$. This observation has yielded interesting results for understanding the dynamics of SGDm in the contexts of both sampling and optimization with convex and non-convex potentials $f$ \cite{GGZ,GGZ2,pmlr-v80-zou18a,lu2016relativistic,csimcsekli2018asynchronous}.
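This stationarity can be checked empirically: for $f(x)=x^2/2$ and $\beta=1$, the stationary marginal of $\mathbf{x}$ is standard Gaussian, so a long Euler-Maruyama run of \eqref{eqn:sde_brownian} should produce approximately unit sample variance (a one-dimensional sketch; the step size and run length are arbitrary choices):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
dt, gamma, beta = 0.01, 1.0, 1.0
n_steps, burn_in = 200_000, 20_000
# pre-generated Brownian increments: sqrt(2*gamma/beta) dB_t
noise = math.sqrt(2.0 * gamma / beta * dt) * rng.standard_normal(n_steps)

x, v = 0.0, 0.0
xs = np.empty(n_steps)
for k in range(n_steps):
    v += -(gamma * v + x) * dt + noise[k]   # dv = -(gamma v + grad f) dt + ...
    x += v * dt                             # dx = v dt
    xs[k] = x
samples = xs[burn_in:]
```

Both the empirical mean (close to 0) and variance (close to $1/\beta$) match the Boltzmann-Gibbs prediction up to discretization and Monte Carlo error.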
%
%
While the Gaussianity assumption can be accurate in certain settings such as small networks \cite{martin2019heavy,panigrahi2019non}, recently it has been empirically demonstrated that in several deep learning setups, the stochastic gradient noise can exhibit a \emph{heavy-tailed} behavior \cite{csimcsekli2019heavy,zhang2019adam}\footnote{In two recent studies, \citet{gurbuzbalaban2020heavy} and \citet{hodgkinson2020multiplicative} have shown that
the stationary distribution of the stochastic gradient descent (SGD) algorithm can be indeed a heavy-tailed distribution depending on the choice of the step-size and the batch-size. On the other hand, in another recent study, \citet{csimcsekli2020hausdorff} have provided generalization bounds for a general class of SDEs, including heavy- and light-tailed ones.}. While the Gaussianity assumption would not be appropriate in this case since the conventional CLT would not hold anymore, nevertheless we can invoke the generalized CLT, which states that the asymptotic distribution of $U_k$ will be a symmetric $\alpha$-stable distribution
($\mathcal{S}\alpha\mathcal{S}$), a class of distributions that are commonly used in the statistical physics literature as an approximation to heavy-tailed random variables \citep{sliusarenko2013stationary,dubkov2008levy}. As we will define in more detail in the next section, at the core of ${\cal S}\alpha{\cal S}$ lies the parameter $\alpha \in (0,2]$, which determines the heaviness of the tail of the distribution. The tails get heavier as $\alpha$ gets smaller; the case $\alpha=2$ reduces to the Gaussian distribution. This is illustrated in Figure \ref{fig:sas}.
\begin{figure}[t]
\centering
%
\includegraphics[width=0.49\columnwidth]{figures/stablepdf3.pdf} \hfill
\includegraphics[width=0.49\columnwidth]{figures/stablemotion2.pdf}
\caption{${\cal S}\alpha{\cal S}$ densities and $\mathrm{L}^\alpha_t$.}
\label{fig:sas}
\end{figure}
{\citet{pmlr-v97-simsekli19a,csimcsekli2019heavy} empirically illustrated that, in deep neural networks, the statistical structure of $U_k$ can be better captured by using an $\alpha$-stable distribution.} With the assumption of $U_k$ being ${\cal S}\alpha{\cal S}$ distributed, the choice of Brownian motion will be no longer appropriate and should be replaced with an $\alpha$-stable L\'{e}vy motion, which motivates the following L\'{e}vy-driven SDE:
\begin{align}
\nonumber &\mathrm{d} \mathbf{v}_t=- (\gamma \mathbf{v}_{t-}+\nabla f(\mathbf{x}_t))\mathrm{d} t + \sqrt{2\gamma/\beta} \mathrm{d} \mathrm{L}^\alpha_t,
\\
&\mathrm{d} \mathbf{x}_t= \mathbf{v}_t \mathrm{d} t,\label{eqn:sde_sas}
\end{align}
where $\mathbf{v}_{t-}$ denotes the left limit of $\mathbf{v}_t$ and $\mathrm{L}^\alpha_t$ denotes the $\alpha$-stable L\'{e}vy process with independent components, which coincides with $\sqrt{2}\mathrm{B}_t$ when $\alpha=2$. Unfortunately, when $\alpha<2$, as opposed to its Brownian counterpart, the invariant measures of such SDEs do not admit an analytical form in general; yet, one can still show that the invariant measure cannot be in the form of the Boltzmann-Gibbs measure \cite{eliazar2003levy}.
A more striking property of \eqref{eqn:sde_sas} was very recently revealed in a statistical physics study \cite{capala2019stationary}, where the authors numerically illustrated that, even when $f$ has a single minimum, the invariant measure of \eqref{eqn:sde_sas} can exhibit multiple maxima, none of which coincides with the minimum of $f$. A similar property has been formally proven in the overdamped dynamics with Cauchy noise (i.e., $\alpha=1$ and $\gamma \to \infty$) by \citet{sliusarenko2013stationary}.
Since the process \eqref{eqn:sde_sas} would spend more time around the modes of its invariant measure (i.e., the high probability region), in an optimization context (i.e., for larger $\beta$) the sample paths would concentrate around these modes, which might be arbitrarily distant from the optima of $f$. In other words, the heavy-tails of the gradient noise could result in an undesirable bias, which would be still present even when the step-size is taken to be arbitrarily small.
As we will detail in Section~\ref{sec:mainres}, informally, this phenomenon stems from the fact that the heavy-tailed noise leads to aggressive updates on $\mathbf{v}$, which are then directly transmitted to $\mathbf{x}$ due to the dynamics. Unless `tamed', these updates create a hurling effect on $\mathbf{x}$ and drift it away from the modes of the ``potential'' $f$ that is sought to be minimized.
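This effect is easy to reproduce numerically: an Euler scheme for \eqref{eqn:sde_sas} with ${\cal S}\alpha{\cal S}$ increments, drawn via the Chambers-Mallows-Stuck transform and scaled by $\mathrm{d}t^{1/\alpha}$, produces occasional very large jumps in $\mathbf{v}$ that are transmitted to $\mathbf{x}$ (a qualitative sketch with $f(x)=x^2/2$; all constants are arbitrary):

```python
import numpy as np

def sas_increments(alpha, size, rng):
    """Standard symmetric alpha-stable draws (Chambers-Mallows-Stuck),
    valid for alpha != 1; alpha = 2 gives N(0, 2)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
alpha, gamma, beta, dt, n = 1.7, 1.0, 1.0, 0.01, 50_000
jumps = (2 * gamma / beta) ** 0.5 * dt ** (1.0 / alpha) \
        * sas_increments(alpha, n, rng)

x, v = 0.0, 0.0
xs = np.empty(n)
for k in range(n):
    v += -(gamma * v + x) * dt + jumps[k]   # heavy-tailed perturbation on v
    x += v * dt
    xs[k] = x
```

The trajectory stays finite (the linear friction is contracting) but exhibits excursions far larger than in the Brownian case, illustrating the hurling effect described above.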
\textbf{Contributions:} In this study, we develop a \emph{fractional} underdamped Langevin dynamics whose invariant distribution is guaranteed to be in the form of the Boltzmann-Gibbs measure, hence its modes exactly match the optima of $f$.
We first prove a general theorem which holds for any kinetic energy function, not necessarily the Gaussian one. However, it turns out that some components of the dynamics might not admit an analytical form for an arbitrary choice of the kinetic energy. We then identify two choices of kinetic energy for which all the terms in the dynamics can either be written in analytical form or be computed accurately. We also analyze the Euler discretization of \eqref{eqn:sde_levy} and identify sufficient conditions for ensuring weak convergence of the ergodic averages computed over the iterates.
%
We observe that the discretization of the proposed dynamics has interesting algorithmic similarities with natural gradient descent \cite{amari1998natural} and gradient clipping \citep{pascanu2013difficulty}, which we believe bring further theoretical understanding for their role in deep learning. Finally, we support our theory with experiments conducted on both synthetic settings and neural networks.
\section{Technical Background \& Related Work}
The stable distributions are heavy-tailed
distributions
that appear as the limiting distribution of the generalized CLT for
a sum of i.i.d. random variables
with infinite variance \cite{paul1937theorie}.
In this paper, we are interested in
centered \textit{symmetric $\alpha$-stable
distribution}.
A scalar random variable $X$
follows a symmetric $\alpha$-stable
distribution denoted as $X\sim\mathcal{S}\alpha\mathcal{S}(\sigma)$
if its characteristic function
takes the form:
$\mathbb{E}\left[e^{i\omega X}\right]=\exp\left(-\sigma^{\alpha}|\omega|^{\alpha}\right)$, $\omega\in\mathbb{R}$,
where $\alpha\in(0,2]$ and $\sigma>0$.
Here, $\alpha\in(0,2]$ is known as the tail-index, which determines
the tail thickness of the distribution.
$\mathcal{S}\alpha\mathcal{S}$ becomes
heavier-tailed as $\alpha$ gets smaller.
$\sigma>0$ is known as the scale
parameter that measures the spread
of $X$ around $0$.
The probability density function of a symmetric $\alpha$-stable distribution, $\alpha\in(0,2]$,
does not yield closed-form expression in general except for a few special cases.
When $\alpha=1$ and $\alpha=2$, ${\cal S}\alpha{\cal S}$ reduces to the Cauchy and the Gaussian distributions, respectively.
When $0<\alpha<2$, $\alpha$-stable distributions
have heavy-tails so that their moments
are finite only up to the order $\alpha$
in the sense that
$\mathbb{E}[|X|^{p}]<\infty$
if and only if $p<\alpha$,
which implies infinite variance.
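These properties can be verified numerically. The Chambers-Mallows-Stuck transform gives exact ${\cal S}\alpha{\cal S}$ draws for $\alpha \neq 1$; the sketch below validates them against the characteristic function $\exp(-|\omega|^{\alpha})$ for $\sigma=1$:

```python
import math
import numpy as np

def sample_sas(alpha, size, rng):
    """Chambers-Mallows-Stuck transform for standard SaS draws (sigma = 1),
    valid for alpha != 1."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
x = sample_sas(1.5, 200_000, rng)

# empirical characteristic function at omega = 1: should be close to e^{-1}
ecf = np.cos(1.0 * x).mean()
```

For $\alpha = 2$ the same transform returns $\mathcal{N}(0,2)$ draws, matching the convention that $\mathrm{L}^\alpha_t$ coincides with $\sqrt{2}\mathrm{B}_t$; for $\alpha < 2$ the sample maximum is dominated by a few extreme draws, reflecting the infinite variance.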
L\'{e}vy motions are stochastic processes with independent and stationary increments.
Their successive displacements are random
and independent, and statistically identical over different time intervals of the same length,
and can be viewed as the continuous-time
analogue of random walks.
The best known and most important examples
are the Poisson process, Brownian motion,
the Cauchy process and more generally stable
processes. L\'{e}vy motions
are prototypes of Markov processes and
of semimartingales, and concern many
aspects of probability theory.
We refer to \cite{bertoin1996}
for a survey on the theory of L\'{e}vy motions.
In general, L\'{e}vy motions
are heavy-tailed, which makes them appropriate
for modeling natural phenomena with possibly
large variations, as often occur
in statistical physics \cite{eliazar2003levy},
signal processing \cite{kuruoglu1999},
and finance \cite{mandelbrot2013}.
We define $\mathrm{L}^\alpha_{t}$, a $d$-dimensional
symmetric $\alpha$-stable L\'{e}vy motion
with independent components as follows.
Each component of $\mathrm{L}^\alpha_{t}$ is an independent scalar $\alpha$-stable L\'{e}vy process,
which is defined as follows: (cf.\ Figure~\ref{fig:sas})
\begin{enumerate}[label=(\roman*),noitemsep,topsep=0pt,leftmargin=*,align=left]
\item
$\mathrm{L}^\alpha_{0}=0$ almost surely.
\item
For any $t_{0}<t_{1}<\cdots<t_{N}$, the increments $\mathrm{L}^\alpha_{t_{n}}-\mathrm{L}^\alpha_{t_{n-1}}$
are independent, $n=1,2,\ldots,N$.
\item
The difference $\mathrm{L}^\alpha_{t}-\mathrm{L}^\alpha_{s}$ and $\mathrm{L}^\alpha_{t-s}$
have the same distribution: $\mathcal{S}\alpha\mathcal{S}((t-s)^{1/\alpha})$ for $s<t$.
\item
$\mathrm{L}^\alpha_{t}$ has stochastically continuous sample paths, i.e.
for any $\delta>0$ and $s\geq 0$, $\mathbb{P}(|\mathrm{L}^\alpha_{t}-\mathrm{L}^\alpha_{s}|>\delta)\rightarrow 0$
as $t\rightarrow s$.
\end{enumerate}
When $\alpha=2$,
we obtain a scaled Brownian motion $\sqrt{2}\mathrm{B}_{t}$
as a special case so that
the difference $\mathrm{L}^\alpha_{t}-\mathrm{L}^\alpha_{s}$
follows a Gaussian distribution $\mathcal{N}(0,2(t-s))$ and $\mathrm{L}^\alpha_{t}$
is almost surely continuous.
When $0<\alpha<2$, due to the stochastic
continuity property, symmetric $\alpha$-stable
L\'{e}vy motions can have
a countable number of discontinuities,
which are often known as \textit{jumps}.
The sample paths are continuous from
the right and they have left limits,
a property known as c\`{a}dl\`{a}g \cite{duan2015}.
Recently, \citet{FLMC} extended the \emph{overdamped} Langevin dynamics to an SDE driven by $\mathrm{L}^\alpha_t$, given as:\footnote{In \citet{FLMC}, \eqref{eqn:flmc} does not contain an inverse temperature $\beta$, which was later on introduced in \citet{nguyen19}. }
\begin{equation}
\mathrm{d} \mathbf{x}_{t}=b(\mathbf{x}_{t-},\alpha)\mathrm{d} t+ \beta^{-1/\alpha} \mathrm{d}\mathrm{L}^\alpha_{t}, \label{eqn:flmc}
\end{equation}
where the drift $b(\mathbf{x},\alpha)=((b(\mathbf{x},\alpha))_{i},1\leq i\leq d)$ is defined as follows:
\begin{equation}
(b(\mathbf{x},\alpha))_{i}={-\mathcal{D}_{x_{i}}^{\alpha-2}(\phi(\mathbf{x})\partial_{x_{i}}f(\mathbf{x}))}/{\phi(\mathbf{x})}.
\end{equation}
Here, $\phi(\mathbf{x})=\exp(-f(\mathbf{x}))$ and $\mathcal{D}$ denotes the fractional Riesz derivative \cite{Riesz}:
\begin{equation}
\mathcal{D}^{\gamma}u(x):=\mathcal{F}^{-1}\left\{|\omega|^{\gamma}(\mathcal{F}(u))(\omega)\right\}(x),
\end{equation}
where $\mathcal{F}$ denotes the Fourier transform. Briefly, $\mathcal{D}^\gamma$ extends usual differentiation to fractional orders, and when $\gamma =2$ it coincides with the negative second-order derivative, $\mathcal{D}^{2}u(x) = - \mathrm{d}^2 u(x)/ \mathrm{d} x^2$.
The important property of the process \eqref{eqn:flmc} is that it admits an invariant distribution whose density is proportional to $\exp(-\beta f(\mathbf{x}))$ \cite{nguyen19}. It is easy to show that, when $\alpha=2$, the drift reduces to $b(\mathbf{x},2) = -\nabla f(\mathbf{x})$, hence we recover the classical overdamped dynamics:
\begin{align}
\mathrm{d} \mathbf{x}_{t}=-\nabla f(\mathbf{x}_t)\mathrm{d} t+ \sqrt{2/\beta} \mathrm{d}\mathrm{B}_{t}. \label{eqn:langevin}
\end{align}
%
Since the fractional Riesz derivative is costly to compute,
\citet{FLMC} proposed an approximation of $b(\mathbf{x},\alpha)$ based on the alternative definition of $\mathcal{D}$ given in \cite{ortigueira2006riesz}, such that:
\begin{equation}
b(\mathbf{x},\alpha)\approx-c_{\alpha}\nabla f(\mathbf{x}), \label{eqn:riesz_approx}
\end{equation}
where $c_{\alpha}:=\Gamma(\alpha-1)/\Gamma(\alpha/2)^{2}$. This approximation essentially results in replacing $\mathrm{B}_t$ with $\mathrm{L}^\alpha_t$ in \eqref{eqn:langevin} in a rather straightforward manner.
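The constant is straightforward to evaluate (a minimal sketch; $\Gamma(\alpha-1)$ requires $\alpha>1$):

```python
import math

def c_alpha(alpha):
    """Constant of the drift approximation b(x, alpha) ~= -c_alpha * grad f(x),
    with c_alpha = Gamma(alpha - 1) / Gamma(alpha / 2)^2 (needs alpha > 1)."""
    return math.gamma(alpha - 1.0) / math.gamma(alpha / 2.0) ** 2
```

At $\alpha=2$ we get $c_2 = \Gamma(1)/\Gamma(1)^2 = 1$, so the approximation recovers the classical drift $-\nabla f$; $c_\alpha$ grows as $\alpha$ decreases towards $1$.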
While avoiding the computational issues originated from the Riesz derivatives, as shown in \cite{nguyen19}, this approximation can induce an arbitrary bias in a non-convex optimization context. Besides, the stationary distribution of this approximated dynamics was analytically derived in \cite{sliusarenko2013stationary} under the choice of $\alpha=1$ and $f(x) = x^4/4 - a x^2/2$ for $x\in\mathbb{R}^1$ and $a>0$. These results show that, in the presence of heavy-tailed perturbations, the drift should be modified, otherwise an inaccurate approximation of the Riesz derivatives can result in an explicit bias, which moves the modes of the distribution away from the modes of $f$.
From a pure Monte Carlo perspective, \citet{SFHMC} extended the fractional overdamped dynamics \eqref{eqn:flmc} to higher-order dynamics and proposed the so-called fractional Hamiltonian dynamics (FHD), given as follows:
\begin{align}
\nonumber \mathrm{d} \mathbf{x}_{t}=&{\mathcal{D}^{\alpha-2}\{\phi(\mathbf{z}_{t})\mathbf{v}_{t}\}}/{\phi(\mathbf{z}_{t})}\mathrm{d} t,
\\
\nonumber \mathrm{d} \mathbf{v}_{t}=&-{\mathcal{D}^{\alpha-2}\{\phi(\mathbf{z}_{t})\nabla f(\mathbf{x}_{t})\}}/{\phi(\mathbf{z}_{t})}\mathrm{d} t \\
&-\gamma{\mathcal{D}^{\alpha-2}\{\phi(\mathbf{z}_{t})\mathbf{v}_{t}\}}/{\phi(\mathbf{z}_{t})}\mathrm{d} t
+\gamma^{1/\alpha}\mathrm{d} \mathrm{L}^\alpha_{t}, \label{eqn:fhd}
\end{align}
where $\mathbf{z}_{t}=(\mathbf{x}_{t},\mathbf{v}_{t})$, and $\phi(\mathbf{z})=e^{-f(\mathbf{x})-\frac{1}{2}\Vert\mathbf{v}\Vert^{2}}$. They showed that the invariant measure of the process has a density proportional to
$\phi(\mathbf{z})$, i.e., the Boltzmann-Gibbs measure. Similar to the overdamped case \eqref{eqn:flmc}, the Riesz derivatives do not admit an analytical form in general. Hence they approximated them by using the same approximation given in \eqref{eqn:riesz_approx}, which yields the SDE given in \eqref{eqn:sde_sas} (up to a scaling factor). This observation also confirms that the heavy-tailed noise requires an adjustment in the dynamics, otherwise the induced bias might drive the dynamics away from the minima of $f$ \cite{capala2019stationary}.
\section{Fractional Underdamped Langevin Dynamics}
\label{sec:mainres}
In this section, we develop the fractional underdamped Langevin dynamics (FULD), which is expressed by the following SDE:
\begin{align}
\nonumber &\mathrm{d} \mathbf{v}_t=- (\gamma c(\mathbf{v}_{t-},\alpha)+\nabla f(\mathbf{x}_t)) \mathrm{d} t+ ({\gamma}/{\beta})^{1/\alpha} \mathrm{d} \mathrm{L}^\alpha_t,
\\
&\mathrm{d} \mathbf{x}_t =\nabla g(\mathbf{v}_t) \mathrm{d} t, \label{eqn:sde_levy}
\end{align}
where $c : \mathbb{R}^d \times (0,2] \mapsto \mathbb{R}^d$ is the \emph{drift function} for the velocity and $g : \mathbb{R}^d \mapsto \mathbb{R}$ denotes a general notion of \emph{kinetic energy}. In the next theorem, which is the main theoretical result of this paper, we will identify the relation between these two functions such that the solution process will keep the \emph{generalized} Boltzmann-Gibbs measure, $\exp(-\beta (f(\mathbf{x}) + g(\mathbf{v})))\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{v}$ invariant. All the proofs are given in the supplementary document.
\begin{theorem}\label{thm:invariant}
Let $c(\mathbf{v},\alpha)=((c(\mathbf{v},\alpha))_{i}, 1\leq i\leq d)$ have the following form:
\begin{equation}
(c(\mathbf{v},\alpha))_{i}:=\frac{\mathcal{D}_{v_{i}}^{\alpha-2}(\psi(\mathbf{v})\partial_{v_{i}}g(\mathbf{v}))}{\psi(\mathbf{v})},
\qquad
\psi(\mathbf{v}):=e^{-g(\mathbf{v})}.
\label{eqn:driftc}
\end{equation}
{The measure} $\pi(\mathrm{d}\mathbf{x},\mathrm{d}\mathbf{v})\propto e^{- \beta(f(\mathbf{x})+g(\mathbf{v}))}\mathrm{d}\mathbf{x} \mathrm{d}\mathbf{v}$ on $\mathbb{R}^{d}\times\mathbb{R}^{d}$ is an invariant probability measure for the Markov process $(\mathbf{x}_{t},\mathbf{v}_{t})$.
\end{theorem}
One of the main features of FULD is that the fractional Riesz derivatives appear only in the drift $c$, which depends \emph{only} on $\mathbf{v}$. This is in sharp contrast with FHD \eqref{eqn:fhd}, where the Riesz derivatives are taken over both $\mathbf{x}$ and $\mathbf{v}$, which is the source of intractability.
Moreover, FULD enjoys the freedom to choose different kinetic energy functions $g(\mathbf{v})$. In the sequel, we will investigate two options for $g$, such that the drift $c$ can be analytically obtained.
\subsection{Gaussian kinetic energy}
In classical overdamped Langevin dynamics and Hamiltonian dynamics, the default choice of kinetic energy is the Gaussian kinetic energy, which corresponds to taking $g(\mathbf{v})=\frac{1}{2}\Vert \mathbf{v}\Vert^{2}$ \cite{neal2010mcmc,livingstone2019kinetic,dalalyan2018kinetic}. With this choice, the fractional dynamics becomes:
\begin{align}
&\mathrm{d}\mathbf{v}_{t}=-\gamma c(\mathbf{v}_{t-},\alpha)\mathrm{d} t-\nabla f(\mathbf{x}_{t})\mathrm{d} t+(\gamma/\beta)^{1/\alpha}\mathrm{d} \mathrm{L}^\alpha_{t},
\nonumber
\\
&\mathrm{d} \mathbf{x}_{t}=\mathbf{v}_{t}\mathrm{d} t. \label{eqn:fuld_v1}
\end{align}
In the next result, we will show that in this case, the drift $c$ admits an analytical solution.
\begin{theorem}\label{prop:formula}
Let $g(\mathbf{v})=\frac{1}{2}\Vert \mathbf{v}\Vert^{2}$. Then, for any $1\leq i\leq d$,
\begin{equation}
(c(\mathbf{v},\alpha))_{i}
=\frac{2^{\frac{\alpha}{2}}v_{i}}{\sqrt{\pi}}\Gamma\left(\frac{\alpha+1}{2}\right)
{_1F}_{1}\left(\frac{2-\alpha}{2};\frac{3}{2};\frac{v_{i}^{2}}{2}\right),
\end{equation}
where $\Gamma$ is the gamma function and $_{1}F_{1}$ is
the Kummer confluent hypergeometric function.
In particular, when $\alpha=2$, we have $(c(\mathbf{v},\alpha))_{i}=v_{i}$.
\end{theorem}
We observe that the fractional dynamics \eqref{eqn:fuld_v1} strictly extends the underdamped Langevin dynamics \eqref{eqn:sde_brownian} as $c(\mathbf{v},2)= \mathbf{v}$.
Let us now investigate the form of the new drift $c$ and its implications. In Figure~\ref{fig:driftv1}, we illustrate $c$ for the $d=1$ dimensional case (note that for $d>1$, each component of $c$ still behaves like Figure~\ref{fig:driftv1}). We observe that due to the hypergeometric function $_{1}F_{1}$, the drift grows exponentially fast with $|v|$ whenever $\alpha<2$. Semantically, this means that, in order to compensate the large jumps incurred by $\mathrm{L}^\alpha_t$, the drift has to react very strongly and hence prevents $v$ from taking large values. To illustrate this behavior, we provide more visual illustrations in the supplementary document.
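As a quick numerical sanity check (ours, not part of the original paper), the closed form in Theorem~\ref{prop:formula} can be evaluated directly with SciPy's confluent hypergeometric function; for $\alpha=2$ it reduces to the identity map, while for $\alpha<2$ the values blow up rapidly in $|v|$:

```python
import math

from scipy.special import hyp1f1

def drift_c(v, alpha):
    """Componentwise drift c(v, alpha) for the Gaussian kinetic energy."""
    return (2.0 ** (alpha / 2.0) * v / math.sqrt(math.pi)
            * math.gamma((alpha + 1.0) / 2.0)
            * hyp1f1((2.0 - alpha) / 2.0, 1.5, v ** 2 / 2.0))
```

For instance, `drift_c(v, 2.0)` returns `v` (up to floating-point error), while `drift_c(5.0, 1.5)` is several orders of magnitude larger than $5$, reflecting the exponential growth that rules out a stable Euler-Maruyama scheme.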
%
\begin{figure}[t]
\centering
%
\subfigure[]{\includegraphics[width=0.495\columnwidth]{figures/drift_v1_2.pdf} \label{fig:driftv1}}
\subfigure[]{\includegraphics[width=0.46\columnwidth]{figures/drift_v2_2.pdf} \label{fig:driftv2}}
\vspace{-10pt}
\caption{Illustration, in one dimension, of a) the drift function $c$ for the Gaussian kinetic energy, and b) $\nabla G_\alpha$ for the ${\cal S}\alpha{\cal S}$ kinetic energy.}
\end{figure}
Even though this aggressive behavior of $c$ can be beneficial for the continuous-time system, it is unfortunately clear that its Euler-Maruyama discretization will not yield a practical algorithm, for the very same reason. Indeed, we would need the function $c$ to be Lipschitz continuous in order to guarantee the algorithmic stability of its discretization \cite{kloeden2013numerical}; however, if we consider the integral form of $_{1}F_{1}$ (cf.\ \cite{AS1972}), we observe that the function
%
\begin{equation*}
(c(\mathbf{v},\alpha))_{i}
=\frac{2^{\frac{\alpha}{2}}v_{i}}{\sqrt{\pi}}
\cdot\frac{\Gamma(\frac{3}{2})}{\Gamma(\frac{2-\alpha}{2})}
\int_{0}^{1}e^{\frac{v_{i}^{2}}{2}t}t^{-\frac{\alpha}{2}}(1-t)^{\frac{\alpha-1}{2}}\mathrm{d} t
\end{equation*}
is clearly not Lipschitz continuous in $v_{i}$. Therefore, we conclude that FULD with the Gaussian kinetic energy is mostly of theoretical interest.
\subsection{Alpha-stable kinetic energy}
The dynamics with the Gaussian kinetic energy requires a very strong drift $c$,
mainly because we force the invariant distribution of $\mathbf{v}$ to be a Gaussian. Since the Gaussian distribution has light tails, it cannot tolerate samples with large magnitudes, and hence requires a large dissipation to make sure $\mathbf{v}$ does not take large values.
In order to avoid such an explosive drift that potentially degrades practicality, next we explore \emph{heavy-tailed} kinetic energies, which would allow the components of $\mathbf{v}$ to take large values, while still making sure that the drift $c$ in \eqref{eqn:driftc} admits an analytical form.
In our next result, we show that when we choose an ${\cal S}\alpha{\cal S}$ kinetic energy whose tail-index $\alpha$ matches that of the driving process $\mathrm{L}^\alpha_t$, the drift $c$ simplifies and becomes the identity function.
\begin{theorem}\label{prop:v}
Let $e^{-g_{\alpha}(v)}$ be the probability density
function of $\mathcal{S}\alpha\mathcal{S}(\frac{1}{\alpha^{1/\alpha}})$.
Choose $\psi(\mathbf{v})=e^{-G_{\alpha}(\mathbf{v})}$ in \eqref{eqn:driftc},
where $G_{\alpha}(\mathbf{v})=\sum_{i=1}^{d}g_{\alpha}(v_{i})$ for any $\mathbf{v}=(v_{1},\ldots,v_{d})$.
Then,
\begin{equation}
(c(\mathbf{v},\alpha))_{i}=v_{i},\qquad 1\leq i\leq d.
\end{equation}
\end{theorem}
This result hints that $G_\alpha(\mathbf{v})$ is perhaps the natural choice of kinetic energy for systems driven by $\mathrm{L}^\alpha_t$.
It now follows from Theorem \ref{prop:v} that
the FULD with $\alpha$-stable kinetic energy reduces to the following SDE:
\begin{align}
&\mathrm{d} \mathbf{v}_{t}=-\gamma \mathbf{v}_{t-}\mathrm{d} t-\nabla f(\mathbf{x}_{t})\mathrm{d} t+(\gamma/\beta)^{1/\alpha}\mathrm{d}\mathrm{L}^\alpha_{t},
\nonumber
\\
&\mathrm{d} \mathbf{x}_{t}=\nabla G_{\alpha}(\mathbf{v}_{t})\mathrm{d} t.
\label{eqn-sde}
\end{align}
It can be easily verified that $\nabla G_{\alpha}(\mathbf{v}_{t})=\mathbf{v}_{t}$ for $\alpha=2$, as $g_{2}(v)=\frac1{2}\log2\pi+\frac1{2}v^2$, hence, the SDE \eqref{eqn-sde} also reduces to the classical underdamped Langevin dynamics \eqref{eqn:sde_brownian}.
While this choice of $g$ results in an analytically available $c$, unfortunately the function $\nabla G_{\alpha}$ itself admits a closed-form analytical formula only when $\alpha=1$ or $\alpha=2$, due to the properties of the ${\cal S}\alpha{\cal S}$ densities. Nevertheless, as $\nabla G_{\alpha}$ is based on one-dimensional ${\cal S}\alpha{\cal S}$ densities, it can be very accurately computed by using the recent methods developed in \cite{Ament2017}. {On the other hand, in the next section, we will show that $\nabla G_{\alpha}$ is Lipschitz continuous for all $\alpha \in (0,2]$, which implies that under standard regularity conditions on $f$, the Boltzmann-Gibbs measure is the unique invariant measure of \eqref{eqn-sde}. }
We visually inspect the behavior of $\nabla G_\alpha$ in Figure~\ref{fig:driftv2} for dimension one. We observe that, as soon as $\alpha <2$, $\nabla G_\alpha$ takes a very smooth form. Moreover, for small $|v|$ the function behaves linearly, and as $|v|$ goes to infinity, it vanishes. This behavior can be interpreted as follows: since $\mathbf{v}$ can take larger values due to the heavy tails of the kinetic energy, in order to target the correct distribution, the dynamics compensates the potential bursts in $\mathbf{v}$ by passing them through the asymptotically vanishing $\nabla G_\alpha$.
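For $\alpha=1$, the kinetic energy in Theorem~\ref{prop:v} is the negative log-density of the standard Cauchy distribution (since ${\cal S}\alpha{\cal S}(1/\alpha^{1/\alpha})$ reduces to ${\cal S}1{\cal S}(1)$), so $g'_1(v)=2v/(1+v^2)$ is available in closed form. A small numerical check (ours) compares it against a finite-difference derivative of the Cauchy log-density:

```python
from scipy.stats import cauchy

def grad_g1(v):
    """Closed-form g_1'(v) for the Cauchy (alpha = 1) kinetic energy."""
    return 2.0 * v / (1.0 + v ** 2)

def grad_g1_fd(v, h=1e-5):
    """Central finite difference of g_1(v) = -log p_Cauchy(v)."""
    return (-cauchy.logpdf(v + h) + cauchy.logpdf(v - h)) / (2.0 * h)
```

Note that $|g'_1|$ attains its maximum value $1$ at $v=\pm 1$ and vanishes as $|v|\to\infty$, exactly the shape shown in Figure~\ref{fig:driftv2}.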
\subsection{Euler discretization and weak convergence analysis}
As visually hinted in Figure~\ref{fig:driftv2}, the function $\nabla G_\alpha$ has strong regularity, which makes \eqref{eqn-sde} potentially attractive for practical implementations. Indeed, it is easy to verify that $\nabla G_\alpha$ is Lipschitz continuous for $\alpha =1$ and $2$, and in our next result, we show that this observation holds for any admissible $\alpha$, which is a desired property when discretizing continuous-time dynamics.
\begin{proposition}
\label{prop:lipschitz}
For $0<\alpha\leq 2$, the map $v\mapsto g'_\alpha(v)$ is Lipschitz continuous, hence $\mathbf{v}\mapsto \nabla G_\alpha(\mathbf{v})$ is also Lipschitz continuous.
\end{proposition}
Accordingly we consider the following Euler-Maruyama discretization for \eqref{eqn-sde}:
\begin{align}
\textstyle
\mathbf{v}^{k+1} &= \tilde{\gamma}_{k} \mathbf{v}^k - \eta_{k} \nabla f(\mathbf{x}^k) + (\eta_{k} \gamma/\beta)^{1/\alpha} \mathbf{s}^{k+1}, \nonumber \\
\mathbf{x}^{k+1} &= \mathbf{x}^{k} + \eta_{k}\nabla G_\alpha(\mathbf{v}^{k+1}), \label{eqn:eulerv2}
\end{align}
where $\tilde{\gamma}_{k} = 1- \gamma \eta_{k}$, $\mathbf{s}^{k}$ is a random vector whose components are independently ${\cal S}\alpha{\cal S}(1)$ distributed, and $(\eta_k)_{k \in \mathbb{N}_+ }$ is a sequence of step-sizes.
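For $\alpha=1$ the scheme \eqref{eqn:eulerv2} can be written out directly, since $\nabla G_1$ is available in closed form and ${\cal S}1{\cal S}(1)$ noise is standard Cauchy. The following sketch (ours; the quartic potential is the one used in the synthetic experiments of the numerical study, and the hyperparameter values are illustrative) runs the recursion with a constant step-size:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x):
    # gradient of the quartic potential f(x) = x**4/4 - x**2/2
    return x ** 3 - x

def grad_G1(v):
    # gradient of the Cauchy (alpha = 1) kinetic energy
    return 2.0 * v / (1.0 + v ** 2)

alpha, gam, beta, eta, K = 1.0, 10.0, 1.0, 0.01, 50_000
x, v = 0.0, 0.0
xs = np.empty(K)
for k in range(K):
    s = rng.standard_cauchy()           # S-alpha-S(1) noise for alpha = 1
    v = (1.0 - gam * eta) * v - eta * grad_f(x) \
        + (eta * gam / beta) ** (1.0 / alpha) * s
    x = x + eta * grad_G1(v)
    xs[k] = x
```

Since $|\nabla G_1|\leq 1$, each iterate moves by at most $\eta$ per step even when the Cauchy noise produces a huge jump in $\mathbf{v}$, which is the clipping-like stabilization discussed in Section~\ref{sec:conn}.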
In this section, we analyze the weak convergence of the ergodic averages computed by using \eqref{eqn:eulerv2}. Given a test function $h$, consider its expectation with respect to the target measure $\pi$, i.e. $\pi(h) := \mathbb{E}_{X \sim \pi} [h(X)]=\int h(\mathbf{x}) \pi(\mathrm{d}\mathbf{x})$ with $\pi(\mathrm{d} \mathbf{x}) \propto \exp(-\beta f(\mathbf{x}))\mathrm{d} \mathbf{x}$. We will discuss next how this expectation can be approximated through the sample averages
\begin{equation}
\bar{\pi}_K(h):= (1/S_K)\sum\nolimits_{k=1}^K \eta_k h(\mathbf{x}^{k})\,, \label{eqn:mc_avr}
\end{equation}
where $S_K := \sum_{k=1}^K \eta_k $ is the cumulative sum of the step-size sequence.
{ We note that Langevin-based algorithms have been used in the literature
to obtain global convergence guarantees for non-convex optimization, see e.g. \cite{Raginsky,xu2018global,GGZ2,ZXG2019,nguyen19}.
In particular, \citet{nguyen19} used an overdamped
fractional Langevin dynamics for non-convex optimization.
The proposed model in our paper can also be used to
study non-convex optimization,
and we expect that our underdamped dynamics may
have improved theoretical guarantees compared to \cite{nguyen19}.}
We now present the assumptions that imply our results.
\begin{assumption}\label{assump-stepsize} The step-size sequence $\{\eta_k\}$ is non-increasing and satisfies $\lim_{k\to\infty}\eta_k = 0$ and $\lim_{K\to\infty} S_K =\infty$.
\end{assumption}
\begin{assumption}\label{assump-growth} Let $V:\mathbb{R}^{2d}\to\mathbb{R}_+$ be a twice continuously differentiable function, satisfying $\lim_{\|\mathbf{z}\|\to\infty} V(\mathbf{z})=\infty$, $\|\nabla V\|\leq C \sqrt{V}$ for some $C>0$ and has a bounded Hessian $\nabla^2 V$. Given $p\in (0,\frac{1}{2}]$, there exists $a\in (1-\frac{p}{2},1]$, $\beta_1 \in\mathbb{R}$, $\beta_2 > 0$ such that $\|b\|^2 \leq CV^a$ and $\langle \nabla V, b \rangle \leq \beta_1 - \beta_2 V^a$ where $b(\mathbf{v},x)= (-\gamma \mathbf{v} - \nabla f(\mathbf{x}), \nabla G_\alpha(\mathbf{v}))$ is the drift of the $(\mathbf{v}_{t},\mathbf{x}_{t})$ process defined in \eqref{eqn-sde}.
\end{assumption}
These are common assumptions that ensure the SDE can be simulated over an infinite time horizon and that the process is not explosive \cite{panloup2008recursive,FLMC}. We can now establish the weak convergence of \eqref{eqn:mc_avr} and present it as a corollary to Theorem~\ref{thm:invariant}, Proposition~\ref{prop:lipschitz}, and \cite{panloup2008recursive} (Theorem 2).
\begin{corollary}
\label{cor:weakconv}
Assume that the gradient $\nabla f$ is Lipschitz continuous and has linear growth i.e., there exists $C>0$ such that $\|\nabla f(\mathbf{x})\| \leq C(1+ \|\mathbf{x}\|)$ for all $\mathbf{x}$. Furthermore, assume that Assumptions \ref{assump-stepsize} and \ref{assump-growth} hold for some $p\in(0,1/2]$. If the test function $h = o(V^{\frac{p}{2}+a-1})$ then
$$\bar{\pi}_K(h) \to \pi(h) \quad \mbox{almost surely as }K\to\infty. $$
\end{corollary}
\subsection{Connections to existing approaches}
\label{sec:conn}
We now point out interesting algorithmic connections between \eqref{eqn:eulerv2} and two methods that are commonly used in practice. We first invoke our initial hypothesis that the gradient noise is ${\cal S}\alpha{\cal S}$ distributed, i.e., $\nabla \tilde{f}_{k}(\mathbf{x}) = \nabla f(\mathbf{x}) + (\eta_{k} \gamma/\beta)^{1/\alpha} \mathbf{s}^{k}$, and modify \eqref{eqn:eulerv2} as follows:
\begin{align}
\textstyle
\mathbf{v}^{k+1} &= \tilde{\gamma}_{k} \mathbf{v}^k - \eta_{k} \nabla \tilde{f}_{k+1}(\mathbf{x}^k), \nonumber \\
\mathbf{x}^{k+1} &= \mathbf{x}^{k} + \eta_{k}\nabla G_\alpha(\mathbf{v}^{k+1}). \label{eqn:eulerv2_optim}
\end{align}
{As a special case when $\tilde{\gamma}_k=0$, we obtain a stochastic gradient descent-type recursion:
\begin{align}
\mathbf{x}^{k+1} = \mathbf{x}^{k} + \eta_{k} \nabla G_\alpha(-\eta_{k} \nabla \tilde{f}_{k+1}(\mathbf{x}^k)). \label{eqn:newsgd}
\end{align}}
Let us now consider \emph{gradient-clipping}, a heuristic approach for eliminating the problem of `exploding gradients', which often appear in training neural networks \cite{pascanu2013difficulty,zhang2019analysis}. Very recently, \citet{zhang2019adam} empirically illustrated that such explosions stem from heavy-tailed gradients and formally proved that gradient clipping indeed improves convergence rates under heavy-tailed perturbations. We notice that the behavior of \eqref{eqn:eulerv2_optim} is reminiscent of gradient clipping: due to the vanishing behavior of $\nabla G_\alpha$ for $\alpha<2$, as the components of $\mathbf{v}^k$ get larger in magnitude, the update applied to $\mathbf{x}^k$ becomes smaller. This behavior is even more prominent in \eqref{eqn:newsgd}. On the other hand, \eqref{eqn:eulerv2_optim} is more aggressive in the sense that the updates can get arbitrarily small as the value of $\alpha$ decreases, as opposed to being `clipped' at a threshold.
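The clipping-like effect in \eqref{eqn:newsgd} can be made concrete for $\alpha=1$: because $|\nabla G_1|\leq 1$, the per-coordinate update is bounded by $\eta$ no matter how large the stochastic gradient is, and it actually shrinks as the gradient explodes. A small illustration (ours, with illustrative gradient magnitudes):

```python
import numpy as np

def grad_G1(v):
    # componentwise Cauchy (alpha = 1) drift, bounded by 1 in absolute value
    return 2.0 * v / (1.0 + v ** 2)

eta = 0.1
grads = np.array([1.0, 10.0, 1e4, 1e8])   # increasingly "exploding" gradients
steps = eta * grad_G1(-eta * grads)       # per-coordinate updates x^{k+1} - x^k
```

Unlike hard clipping at a fixed threshold, the update here decays smoothly: for the largest gradient in `grads`, the resulting step is many orders of magnitude smaller than $\eta$.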
The second connection is with the natural gradient descent algorithm, where the stochastic gradients are pre-conditioned with the inverse Fisher information matrix (FIM) \cite{amari1998natural}. Here FIM is defined as $\mathbb{E}[\nabla f(\mathbf{x})\nabla f(\mathbf{x})^\top]$, where the expectation is taken over the data. Notice that when $\alpha=1$ (i.e., Cauchy distribution), we have the following form: $\nabla G_{1}(\mathbf{v})=\left(\frac{2v_{1}}{v_{1}^{2}+1},\ldots,\frac{2v_{d}}{v_{d}^{2}+1}\right)$. Therefore, we observe that, in \eqref{eqn:newsgd}, $\nabla G_{1}(\nabla \tilde{f}_k(\mathbf{x}))$ can be equivalently written as $\mathbf{M}_k(\mathbf{x})^{-1}\nabla \tilde{f}_k(\mathbf{x})$, where $\mathbf{M}_k(\mathbf{x})$ is a diagonal matrix with entries $m_{ii} = ((\nabla \tilde{f}_k(\mathbf{x}))_i^2 +1)/2$.
Therefore, we can see $\mathbf{M}_k$ as an estimator of the diagonal part of FIM, as they will be in the same order when $|(\nabla \tilde{f}_k(\mathbf{x}))_i|$ is large.
Besides, \eqref{eqn:eulerv2_optim} then appears as its momentum extension.
However, $\mathbf{M}_k$ will be biased mainly due to the fact that FIM is the average of the squared gradients, whereas $\mathbf{M}_k$ is based on the square of the average gradients. This connection is rather surprising, since a seemingly unrelated, differential geometric approach turns out to have strong algorithmic similarities with a method that naturally arises when the gradient noise is Cauchy distributed.
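The claimed identity is easy to verify numerically: for $\alpha=1$, applying $\nabla G_{1}$ componentwise to a gradient $g$ coincides with preconditioning $g$ by the diagonal matrix $\mathbf{M}$ with entries $m_{ii}=(g_i^2+1)/2$. A quick check (ours, with a synthetic stand-in gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=5)                 # stand-in for a stochastic gradient
M = np.diag((g ** 2 + 1.0) / 2.0)      # diagonal "FIM-like" preconditioner
precond = np.linalg.solve(M, g)        # M^{-1} g
via_G1 = 2.0 * g / (g ** 2 + 1.0)      # componentwise Cauchy drift grad G_1(g)
```

The two vectors agree exactly, since $2g_i/(g_i^2+1) = g_i / \big((g_i^2+1)/2\big)$ componentwise.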
\begin{figure}[t]
\centering
%
%
\includegraphics[width=0.99\columnwidth]{figures/synth_a10.pdf}\\
\includegraphics[width=0.99\columnwidth]{figures/synth_a19.pdf}
%
\vspace{-15pt}
\caption{Estimated invariant measures for the quartic potential: top $\alpha=1$, bottom $\alpha = 1.9$.}
\label{fig:synth}
\end{figure}
\section{Numerical Study}
In this section, we will illustrate our theory on several experiments which are conducted in both synthetic and real-data settings\footnote{We provide our implementation in \url{https://github.com/umutsimsekli/fuld}.}. We note that, as expected, FULD with Gaussian kinetic energy did not yield a numerically stable discretization due to the explosive behavior of $c$. Hence, in this section, we only focus on FULD with ${\cal S}\alpha{\cal S}$ kinetic energy and from now on we will simply refer to FULD with ${\cal S}\alpha{\cal S}$ kinetic energy as FULD. %
\subsection{Synthetic setting}
\begin{figure}[t]
\centering
%
%
\includegraphics[width=0.99\columnwidth]{figures/synth_a10_path.pdf}\\
\includegraphics[width=0.99\columnwidth]{figures/synth_a19_path.pdf}
%
\vspace{-13pt}
\caption{Illustration of the iterates for the quartic potential: top $\alpha=1$, bottom $\alpha = 1.9$.}
\label{fig:synth_path}
\end{figure}
We first consider a one-dimensional synthetic setting, similar to the one considered in \cite{capala2019stationary}. We consider a quartic potential function with a quadratic component, $f(x) = x^4/4 - x^2/2$. We then simulate the `uncorrected dynamics' (UD) given in \eqref{eqn:sde_sas} and FULD \eqref{eqn-sde} by using the Euler-Maruyama discretization to compare their behavior for different $\alpha$. For $\alpha \notin \{1,2\}$, we used the software given in \cite{Ament2017} for computing $\nabla G_\alpha$.
Figure~\ref{fig:synth} illustrates the distribution of the samples generated by simulating the two dynamics. In this setup, we set $\beta =1$, $\eta = 0.01$, $\gamma=10$ with number of iterations $K=50000$. We observe that, for $\alpha=1.9$, FULD very accurately captures the form of the distribution, whereas UD exhibits a visible bias and the shape of its resulting distribution is slightly distorted. Nevertheless, since the perturbations are close to a Gaussian in this case (i.e., $\alpha$ is close to $2$), the difference is not substantial and can be tolerable in an optimization context. However, this behavior becomes much more pronounced when we use a heavier-tailed driving process: when $\alpha=1$, we observe that the target distribution of UD becomes distant from the Gibbs measure $\exp(-f(x))$, and more importantly its modes no longer match the minima of $f$, agreeing with the observations presented in \cite{capala2019stationary}\footnote{{We note that the overdamped dynamics with the uncorrected drift exhibits a similar behavior to the one of the uncorrected underdamped dynamics with sufficiently large $\gamma$.} }. On the other hand, thanks to the correction brought by $\nabla G_\alpha$, FULD still captures the target distribution very accurately, even when the driving force is Cauchy.
In our experiments, we also observed that, for small values of $\alpha$, UD can quickly become numerically unstable and even diverge for slightly larger step-sizes, whereas this problem never occurred for FULD. This outcome also stems from the fact that UD does not have any mechanism to compensate the potentially large updates originating from the heavy-tailed perturbations. To illustrate this observation more clearly, in Figure~\ref{fig:synth_path} we plot the iterates $(\mathbf{x}^k)_{k=1}^K$ that were used for producing Figure~\ref{fig:synth}. We observe that, while the iterates of UD are well-behaved for $\alpha=1.9$, the magnitude range of the iterates gets quite large when $\alpha$ is set to $1$. On the other hand, for both values of $\alpha$, the FULD iterates always remain in a reasonable range, thanks to the clipping-like effect of $\nabla G_\alpha$.
\subsection{Neural networks}
In our next set of experiments, we evaluate our theory on neural networks. In particular, we apply the iterative scheme given in \eqref{eqn:eulerv2_optim} as an optimization algorithm for training neural networks, and compare its behavior with classical SGDm defined in \eqref{eqn:sgdm_common}. In this setting, we do not add any explicit noise, all the stochasticity comes from the potentially heavy-tailed stochastic gradient noise \eqref{eqn:stochgrad} {under the assumption that the noise can be well-modeled by using an ${\cal S}\alpha{\cal S}$ vector (see Section~\ref{sec:conn} for the explicit assumption)}.
\begin{figure}[t]
\centering
%
%
%
%
\includegraphics[width=0.99\columnwidth]{figures/mnist_NLL_fc_wid128_training.pdf}\\
\includegraphics[width=0.99\columnwidth]{figures/mnist_NLL_fc_wid256_training.pdf}
%
\vspace{-10pt}
\caption{Neural network results on MNIST (training).}
\label{fig:fcn_mnist}
\end{figure}
We consider a fully-connected network for a classification task on the MNIST and CIFAR10 datasets, with different depths (i.e.\ number of layers) and widths (i.e.\ number of neurons per layer).
For each depth-width pair, we train two neural networks by using SGDm \eqref{eqn:sgdm_common} and our modified version \eqref{eqn:eulerv2_optim}, and compare their final train/test accuracies and loss values. We use the conventional train-test split of the datasets: for MNIST we have $60$K training and $10$K test samples, and for CIFAR10 these numbers are $50$K and $10$K, respectively. We use the cross entropy loss (also referred to as the `negative-log-likelihood').
We note that the modified scheme \eqref{eqn:eulerv2_optim} reduces to \eqref{eqn:sgdm_common} when $\alpha =2$, since $\nabla G_2(\mathbf{v}) = \mathbf{v}$. Hence in this section, we will refer to SGDm as the special case of \eqref{eqn:eulerv2_optim} with $\alpha =2$.
On the other hand, in these experiments, directly computing $\nabla G_\alpha$ becomes impractical for $\alpha \notin \{1,2\}$, since the algorithms given in \cite{Ament2017} become prohibitively slow with the increased dimension $d$. However, since $\nabla G_\alpha$ is based on the derivatives of the \emph{one-dimensional} ${\cal S}\alpha{\cal S}$ densities $g_\alpha(v)$ (see Theorem~\ref{prop:v}), for $\alpha \in (1,2)$, we first precomputed the values of $g_\alpha(v)$ over a fine grid of $v \in [-100,100]$; then, during the SGDm recursion, we approximated $\nabla G_\alpha$ by linearly interpolating the values of $g_\alpha$ that are precomputed over this grid.
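The paper precomputes $g_\alpha$ with the methods of \cite{Ament2017}; as a self-contained alternative (ours, a simplification), one can tabulate the density by numerically inverting the ${\cal S}\alpha{\cal S}$ characteristic function $\exp(-|\sigma t|^\alpha)$ with simple quadrature and then interpolate the derivative, mirroring the grid-plus-interpolation scheme described above:

```python
import numpy as np

alpha = 1.75
sigma = alpha ** (-1.0 / alpha)      # scale of S-alpha-S(1/alpha**(1/alpha))
t = np.linspace(0.0, 30.0, 3001)
w = np.full_like(t, t[1] - t[0])     # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
grid = np.linspace(-10.0, 10.0, 2001)
# p(v) = (1/pi) * int_0^inf cos(t v) exp(-(sigma t)**alpha) dt
pdf = (np.cos(np.outer(grid, t))
       * (np.exp(-(sigma * t) ** alpha) * w)).sum(axis=1) / np.pi
g_vals = -np.log(pdf)                # g_alpha tabulated on the grid
g_prime = np.gradient(g_vals, grid)  # finite-difference derivative

def grad_g(v):
    """Linear interpolation of the precomputed derivative, as in the text."""
    return np.interp(v, grid, g_prime)
```

The resulting `grad_g` is odd, vanishes at the origin, and decays in the tails, matching the qualitative shape of $\nabla G_\alpha$ in Figure~\ref{fig:driftv2}; the grid range and resolution here are illustrative choices, not the ones used in the experiments.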
We expect that, if the stochastic gradient noise can be well-approximated by using an ${\cal S}\alpha{\cal S}$ distribution, then the modified dynamics should exhibit an improved performance since it would eliminate the potential bias brought by the heavy-tailed noise.
\begin{figure}[t]
\centering
%
%
\includegraphics[width=0.99\columnwidth]{figures/mnist_NLL_fc_wid128_test.pdf}\\
\includegraphics[width=0.99\columnwidth]{figures/mnist_NLL_fc_wid256_test.pdf}
%
\vspace{-10pt}
\caption{Neural network results on MNIST (test).}
\label{fig:fcn_mnist2}
\end{figure}
In these experiments, we set $\eta = 0.1$, $\gamma=0.1$ for MNIST, and $\gamma=0.9$ for CIFAR10. We run the algorithms for $K=10000$ iterations\footnote{Since the scale of the gradient noise is proportional to $ (\gamma/\beta)^{\frac1{\alpha}} $ (see \eqref{eqn:eulerv2}), in this setup, a fixed $\gamma$ implicitly determines $\beta$. }. We measure the accuracy and the loss at every 100th iteration and we report the average of the last two measurements. Figures~\ref{fig:fcn_mnist} and \ref{fig:fcn_mnist2} show the results obtained on the MNIST dataset. We observe that, in most of the cases, setting $\alpha=1.75$ yields a better performance in terms of both training and test accuracies/losses. This difference becomes more visible when the width is set to $256$: the accuracy difference between the algorithms reaches $\approx 2\%$. We obtain a similar result on the CIFAR10 dataset, as illustrated in Figures~\ref{fig:fcn_cifar} and \ref{fig:fcn_cifar2}. In most of the cases $\alpha=1.75$ performs better, with the maximum accuracy difference being $\approx 4.5\%$, suggesting that the gradient noise can be well-approximated by an ${\cal S}\alpha{\cal S}$ random variable.
We observed a similar behavior when the width was set to $64$. However, when we set the width to $32$ we did not perceive a significant difference in terms of the performance of the algorithms.
On the other hand, when the width was set to $512$, $\alpha=2$ resulted in a slightly better performance, which would be an indication that the Gaussian approximation is closer. The corresponding figures are provided in the supplementary document.
\begin{figure}[t]
\centering
%
%
%
%
\includegraphics[width=0.99\columnwidth]{figures/cifar10_NLL_fc_wid128_training.pdf}\\
\includegraphics[width=0.99\columnwidth]{figures/cifar10_NLL_fc_wid256_training.pdf}
%
\vspace{-10pt}
\caption{Neural network results on CIFAR10 (training).}
\label{fig:fcn_cifar}
\end{figure}
%
%
\section{Conclusion and Future Directions}
We considered the continuous-time variant of SGDm, known as the underdamped Langevin dynamics (ULD), and developed theory for the case where the gradient noise can be well-approximated by a heavy-tailed $\alpha$-stable random vector. As opposed to na\"{i}vely replacing the driving stochastic force in ULD, which corresponds to running SGDm with heavy-tailed gradient noise, the dynamics that we developed exactly targets the Boltzmann-Gibbs distribution, and hence does not introduce an implicit bias. We further established the weak convergence of the Euler-Maruyama discretization and illustrated interesting connections between the discretized algorithm and existing approaches commonly used in practice. We supported our theory with experiments on a synthetic setting and fully connected neural networks.
Our framework opens up interesting future directions. Our current modeling strategy requires a state-independent, isotropic noise assumption, which may not accurately reflect reality. While anisotropic noise can be incorporated into our framework by using the approach of \citet{SFHMC}, state-dependent noise introduces challenging technical difficulties. Similarly, it has been illustrated that the tail-index $\alpha$ can depend on the state and different components of the noise can have a different $\alpha$ \cite{csimcsekli2019heavy}. Incorporating such state dependencies would be an important direction of future research. {Finally, it has been shown that heavy-tailed perturbations yield shorter escape times \cite{nguyen2019first} in the overdamped dynamics, and extending such results to the underdamped case is still an open problem.}
\begin{figure}[t]
\centering
%
%
%
%
\includegraphics[width=0.99\columnwidth]{figures/cifar10_NLL_fc_wid128_test.pdf}\\
\includegraphics[width=0.99\columnwidth]{figures/cifar10_NLL_fc_wid256_test.pdf}
%
\vspace{-10pt}
\caption{Neural network results on CIFAR10 (test).}
\label{fig:fcn_cifar2}
\end{figure}
\section*{Acknowledgments}
We thank Jingzhao Zhang for fruitful discussions. The contribution of Umut \c{S}im\c{s}ekli to this work is partly supported by the French National Research Agency (ANR) as a part of the FBIMATRIX (ANR-16-CE23-0014) project, and by the industrial chair Data science \& Artificial Intelligence from T\'{e}l\'{e}com Paris. Lingjiong Zhu is grateful to the support from Simons Foundation Collaboration Grant. Mert G\"{u}rb\"{u}zbalaban acknowledges support from the grants NSF DMS-1723085 and NSF CCF-1814888.
\section{Introduction}
\par In order to understand the geometry of the ideal $I_S$ of $s$ points in $\PP^n$, one source of information is the
Hilbert function $h_d(I_S)$, which gives the number of degree $d$ generators of $I_S.$ The relation among these generators
is captured in the free resolution of $I_S.$ Conjectures have been made as to the prescribed form of the minimal free resolution
for the ideal of points in $\PP^n.$ One such conjecture is the minimal resolution conjecture by Lorenzini \cite{a} for the
ideal of points in general position. Hirschowitz and Simpson \cite{h} in their study of minimal free resolutions
showed that in order to prove that an ideal of points in general position has a minimal free resolution of the expected
form, it suffices to show that some evaluation map is of maximal rank.
\newline In this paper, we make use of the equivalence between the category of locally free sheaves and the category of algebraic vector bundles to describe an elementary transformation in $\PP^n$ and explain how this elementary
transformation can be used to prove a case of the maximal rank hypothesis.
\par The paper is organized as follows. In the next section, we give an overview of category theory with a view toward
building notation and developing the language used in later sections. In section \ref{sheavesandvect} we introduce
locally free sheaves and relate them to vector bundles. We end the section by defining an elementary transformation
of algebraic vector bundles. The definitions and properties we come across in these sections follow largely from
\cite{hart} and \cite{igr}. Our main results are found in section \ref{main}, where we present
an elementary transformation in $\PP^n,$ and prove that the diagram is indeed a diagram of an elementary transformation.
As a conclusion to this section, we show how our results can be used in studying minimal free resolutions.
\section{Some category theory}
\begin{definition}
A category $\mathcal{C}$ consists of a class of objects, denoted by $Obj(\mathcal{C}),$ together with a
set of morphisms between any pair $X,Y\in Obj(\mathcal{C})$. The set of morphisms from $X$ to $Y$ is denoted
by $Hom(X,Y)$, and these sets satisfy the following properties:
\begin{enumerate}[a.]
\item given $f_1\in Hom(X,Y)$ and $f_2\in Hom(Y,Z)$, then the composition $f_2\circ f_1\in Hom(X,Z)$.
\item for every $X\in Obj(\mathcal{C})$ there exists a morphism $id_X\in Hom(X,X)$ which
is both a right and a left identity for composition.
\item given $f_1\in Hom(X,Y)$, $f_2\in Hom(Y,Z)$ and $f_3\in Hom(Z,W)$, we have $f_3\circ (f_2 \circ f_1)=(f_3\circ f_2) \circ f_1$.
\end{enumerate}
\end{definition}
Given a category $\mathcal{C},$ we can define the \textbf{opposite category} $\mathcal{C}^{op}$ by reversing the sense
of maps in the category $\mathcal{C}.$ That is, the objects in the category $\mathcal{C}^{op}$ are the same as the objects
in the category $\mathcal{C}$, and for any pair $X,Y$, $Hom_{\mathcal{C}}(X,Y)=Hom_{\mathcal{C}^{op}}(Y,X).$
\par Given two categories $\mathcal{C}_1$ and $\mathcal{C}_2$ we can define a function
$F:\mathcal{C}_1\mapsto \mathcal{C}_2$ between the two categories. Such a function is called a \textbf{functor}.
Functors can be covariant or contravariant.
\newline A \textbf{covariant functor} from a category $\mathcal{C}_1$ to a category $\mathcal{C}_2$ consists of
a map $F$ from $Obj(\mathcal{C}_1)$ to $Obj(\mathcal{C}_2)$ together with, for every pair $(X,Y)\in Obj(\mathcal{C}_1),$
a function $F_{X,Y}:Hom(X,Y)\mapsto Hom(F(X),F(Y)),$ such that $F$ commutes with composition and carries $id_X$ to $id_{F(X)}.$
\newline A \textbf{contravariant functor} from a category $\mathcal{C}_1$ to a category $\mathcal{C}_2$ is a functor $F$ defined as
$F:\mathcal{C}_1^{op}\mapsto \mathcal{C}_2$. In other words, it is a functor that reverses the sense of the morphisms.
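For concreteness, a standard example (added here, not part of the original text) is the functor represented by a fixed object $Z\in Obj(\mathcal{C})$:

```latex
\[
h^{Z}:=Hom(-,Z):\mathcal{C}^{op}\mapsto \mathbf{Set},\qquad
X\mapsto Hom(X,Z),\qquad
\big(f:X\to Y\big)\mapsto \big(f^{*}:Hom(Y,Z)\to Hom(X,Z)\big),
\]
```

where $f^{*}(g)=g\circ f$; precomposition with $f$ reverses the direction of every morphism, so $h^{Z}$ is contravariant.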
\par Given two functors $F_1$ and $F_2$ from $\mathcal{C}_1$ to $\mathcal{C}_2,$ a \textbf{natural transformation} from $F_1$ to $F_2$ consists of, for each $X\in Obj(\mathcal{C}_1),$ a morphism $\phi_X:F_1(X)\mapsto F_2(X)$ such that for every morphism
$f\in Hom(X,Y),$ the diagram
\[
\begin{tikzcd}
F_1(X)\arrow{d}{\phi_X}\arrow{r}{F_1(f)}&F_1(Y)\arrow{d}{\phi_Y}\\
F_2(X)\arrow{r}{F_2(f)}&F_2(Y)
\end{tikzcd}
\]
is commutative. A \textbf{natural isomorphism} of functors is a natural transformation for which $\phi_X$ is an
isomorphism. The data of functors $F:\C_1\mapsto\C_2$ and $F':\C_2\mapsto\C_1$ for which $F\circ F'$ is naturally
isomorphic to the identity functor $Id_{\C_2}$ on $\C_2$ and $F'\circ F$ is naturally isomorphic to $Id_{\C_1}$ is
called an \textbf{equivalence of categories}. We say that two categories are equivalent if there exists an equivalence
between them.
\par A category $\mathcal{C}$ is said to be \textbf{enriched} over a category $\mathcal{D}$ if for every
$X,Y\in Obj(\mathcal{C}),\ Hom(X,Y)\in Obj(\mathcal{D})$ and the composition
$Hom(Y,Z)\times Hom(X,Y)\mapsto Hom(X,Z)$ is a morphism in $\mathcal{D}.$
\newline An abelian category is a category enriched over the category of abelian groups, with some extra conditions.
\begin{definition}
An abelian category is a category $\mathcal{C}$, such that: for each $X,Y\in Obj(\mathcal{C}),\ Hom(X,Y)$
has a structure of an abelian group, and the composition law is linear; finite direct sums exist; every
morphism has a kernel and a cokernel; every monomorphism is the kernel of its cokernel, every epimorphism
is the cokernel of its kernel; and finally, every morphism can be factored into an epimorphism followed by
a monomorphism.
\end{definition}
The category $\textbf{Ab}$ of abelian groups and the category $\textbf{R-Mod}$ of (left) $R$-modules over a
ring $R$ are examples of abelian categories.
Given an abelian category $\mathcal{C},$ one can consider a \textbf{complex}, that is, a sequence
\[
\begin{tikzcd}
\cdots\arrow{r}&X_{i-1}\arrow{r}{\delta_i}&X_i\arrow{r}{\delta_{i+1}}&X_{i+1}\arrow{r} &\cdots \\
\end{tikzcd}
\]
for which $im(\delta_i)\subseteq ker(\delta_{i+1}).$ If $im(\delta_i)=ker(\delta_{i+1})$ for all $i$, then the complex
is called an \textbf{exact sequence}. The following properties hold for exact sequences in an abelian category.
\begin{theo}[Five lemma]
Consider the commutative diagram below with exact rows.
\[
\begin{tikzcd}
X_0\arrow{d}{f_0}\arrow{r}&X_1\arrow{d}{f_1} \arrow{r}&X_2\arrow{d}{f_2}\arrow{r}&X_3\arrow{d}{f_3}\arrow{r}&X_4\arrow{d}{f_4} \\
Y_0\arrow{r}&Y_1 \arrow{r}&Y_2\arrow{r}&Y_3\arrow{r}&Y_4\\
\end{tikzcd}
\]
\begin{enumerate}[a.]
\item If $f_1$ and $f_3$ are monomorphisms and $f_0$ is an epimorphism, then $f_2$ is a monomorphism.
\item If $f_1$ and $f_3$ are epimorphisms and $f_4$ is a monomorphism, then $f_2$ is an epimorphism.
\end{enumerate}
\end{theo}
\begin{theo}[Snake lemma]
Consider the diagram below whose rows are short exact sequences.
\[
\begin{tikzcd}
0\arrow{r}&X_1\arrow{d}{f_1} \arrow{r}&X_2\arrow{d}{f_2}\arrow{r}&X_3\arrow{d}{f_3}\arrow{r}&0\\
0\arrow{r}&Y_1 \arrow{r}&Y_2\arrow{r}&Y_3\arrow{r}&0\\
\end{tikzcd}
\]
Then there exists a canonical homomorphism
$\delta:ker(f_3)\mapsto coker(f_1)$ called the connecting homomorphism, such that
\[
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.8] (a) at (0,0){
\begin{tikzcd}
0\arrow{r}&ker(f_1)\arrow{r}&ker(f_2)\arrow{r}&ker(f_3)\arrow{r}{\delta}&coker(f_1)\arrow{r}&coker(f_2)\arrow{r}&coker(f_3)\arrow{r}&0\\
\end{tikzcd}};
\end{tikzpicture}
\]
is exact, where all the maps other than $\delta$ are the obvious ones induced by the diagram.
\end{theo}
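A standard illustration (ours, not from the source) may make the Snake lemma concrete: take both rows to be $0\to\mathbb{Z}\xrightarrow{\times 2}\mathbb{Z}\to\mathbb{Z}/2\to 0$ and let every vertical map be multiplication by $2$, so that the induced map $f_3$ on $\mathbb{Z}/2$ is zero.

```latex
% Both rows: 0 -> Z --(x2)--> Z -> Z/2 -> 0; f_1 = f_2 = multiplication by 2,
% so f_3 = 0 on Z/2. Then
%   ker(f_1) = ker(f_2) = 0,  ker(f_3) = Z/2,
%   coker(f_1) = coker(f_2) = coker(f_3) = Z/2,
% and the kernel--cokernel sequence reads
\[
0 \longrightarrow 0 \longrightarrow 0 \longrightarrow \mathbb{Z}/2
\xrightarrow{\ \delta\ } \mathbb{Z}/2 \xrightarrow{\ 0\ } \mathbb{Z}/2
\xrightarrow{\ \cong\ } \mathbb{Z}/2 \longrightarrow 0.
\]
% The map coker(f_1) -> coker(f_2) is induced by multiplication by 2, hence
% zero, and exactness forces the connecting map delta to be an isomorphism.
```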
Let $F:\mathcal{C}_1 \longrightarrow \mathcal{C}_2$ be a covariant functor between two abelian categories
$\mathcal{C}_1$ and $\mathcal{C}_2.$ The functor $F$ is called \textbf{additive} if it commutes with addition of morphisms.
$Hom$ and the tensor product
are examples of additive functors.
An additive functor sends complexes to complexes, but does not generally send exact sequences to exact sequences.
\begin{definition}
Let $F$ be a functor;
\begin{enumerate}[a.]
\item $F$ is \textbf{left exact} if for a given exact sequence $0 \longrightarrow X_1\longrightarrow X_2\longrightarrow X_3$
the sequence \\
$0 \longrightarrow F(X_1)\longrightarrow F(X_2)\longrightarrow F(X_3)$ is exact.
\item $F$ is \textbf{right exact} if for a given exact sequence $X_1\longrightarrow X_2\longrightarrow X_3 \longrightarrow 0$ the sequence $F(X_1)\longrightarrow F(X_2)\longrightarrow F(X_3)\longrightarrow 0$
is exact.
\item $F$ is \textbf{exact} if it is both left exact and right exact.
\end{enumerate}
\end{definition}
The $Hom$ functor is left exact while the tensor product functor
is right exact.
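A standard example (ours, not from the source) shows why the tensor product is only right exact: apply $-\otimes_{\mathbb{Z}}\mathbb{Z}/2$ to the exact sequence $0\to\mathbb{Z}\xrightarrow{\times 2}\mathbb{Z}\to\mathbb{Z}/2\to 0$.

```latex
% Tensoring with Z/2 yields
\[
\mathbb{Z}/2 \xrightarrow{\ \times 2\ =\ 0\ } \mathbb{Z}/2
\longrightarrow \mathbb{Z}/2 \longrightarrow 0,
\]
% which is still exact on the right, but the leftmost map (multiplication
% by 2 on Z/2) is the zero map, hence no longer injective: left exactness fails.
```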
We now give the definition of cohomological functors.
\begin{definition}[Cohomological functor]
A cohomological functor (or $\delta$-functor ) between abelian categories $\mathcal{C}_1$ and $\mathcal{C}_2$ is a sequence of functors
$$T^i : \mathcal{C}_1 \to \mathcal{C}_2,\qquad i= 0,1,\ldots$$
plus for each short exact sequence
$$0 \longrightarrow X\longrightarrow Y\longrightarrow Z\longrightarrow 0$$
in $\mathcal{C}_1$ a morphism $\delta^i:T^i (Z)\to T^{i+1}(X),$ functorial in the sequence, such that
the sequence
\[
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.8] (a) at (0,0){
\begin{tikzcd}
0\arrow{r}&T^0(X) \arrow{r}& T^0(Y) \arrow{r}& T^0(Z) \arrow{r}{\delta^0}&T^1(X) \arrow{r}& T^1(Y) \arrow{r}& T^1(Z) \arrow{r}{\delta^1}&T^2(X) \arrow{r}&\cdots\\
\end{tikzcd}};
\end{tikzpicture}
\]
is exact.
\end{definition}
A cohomological functor $T$ is said to be \textbf{universal} if given any other cohomological functor $U$ and a
natural transformation $f^0:T^0\to U^0,$ there is a unique sequence of natural transformations
$f^i:T^i\to U^i$ starting with $f^0$ which commute with the $\delta^i$. Given $T^0,$ any two
extensions of it to a universal cohomological functor are naturally isomorphic.
\par Let $I$ be an object in an abelian category $\mathcal{C}.$ Then $I$ is \textbf{injective} if the functor\\
$Hom(-,I):\mathcal{C}^{op}\to \textbf{Ab}$ is exact. Since the $Hom$ functor is left exact, it
suffices to show that if $0\longrightarrow X \longrightarrow Y$ is a monomorphism, then for any
morphism $X\longrightarrow I$ we can find some morphism $Y\longrightarrow I$ so that the diagram below commutes.
\[
\begin{tikzcd}
0\arrow{r}&X\arrow{rd}\arrow{r}&Y\arrow[d, dashrightarrow]\\
&&I
\end{tikzcd}
\]
If for every object $X$ in the category $\mathcal{C}$ there exists a monomorphism $X\longrightarrow I,$ where
$I$ is an injective object, then we say that the category $\mathcal{C}$ has enough injectives.
For an abelian category with enough injectives, any universal cohomology functor
can be computed using injective resolutions. Thus it is possible to define the right derived functor
of $F$ as follows; for any object $X,$ if $I^{\ast}$ is an injective resolution of $X,$
set $R^iF(X)=H^i(F(I^{\ast})).$
\section{Sheaves}
\label{sheavesandvect}
\begin{definition}[Presheaf of abelian groups]
Let $X$ be a topological space. A presheaf $\mathcal{F}$ of abelian groups on $X$ is an assignment to
each open set $U\subset X$ of an abelian group $\mathcal{F}(U)$, and to every inclusion $V\subset U$ of open
subsets of $X$ of a morphism of abelian groups $r^U_V: \mathcal{F}(U)\longrightarrow \mathcal{F}(V)$ called
the restriction map from $U$ to $V$, subject to the following conditions.
\begin{enumerate}[a.]
\item $\mathcal{F}(\varnothing)=0,$ where $\varnothing$ is the empty set,
\item $r^U_U$ is the identity map $\mathcal{F}(U)\longrightarrow \mathcal{F}(U)$ and
\item for any three open sets $U,\ V,$ and $W$ such that $W\subset V\subset U$, $r^U_W =r^V_W\circ r^U_V $
\end{enumerate}
\end{definition}
In other words, a presheaf $\mathcal{F}$ of abelian groups is a contravariant functor
$\mathcal{F}:\textbf{Top}(X)\to \textbf{Ab},$ where $\textbf{Top}(X)$ is the category whose objects are
the open sets of $X$ and whose morphisms are inclusion maps, so that $Hom(U,V)=\varnothing$ if $V$ is not a subset of $U$ and
$Hom(U,V)$ has only one element if $V$ is a subset of $U$. The category $\textbf{Ab}$ in the definition of
presheaves of abelian groups can be replaced by any category $\mathcal{C}$ to obtain a presheaf with values
in the fixed category $\mathcal{C}.$ We refer to $\mathcal{F}(U)$ as the sections of the presheaf
$\mathcal{F}$ over the open set $U$.
\newline If in addition the presheaf in the definition above satisfies the following conditions:
\begin{enumerate}[i.]
\item for any open set $U$, if $\{V_i\}$ is an open cover of $U$ and if $s\in \mathcal{F}(U)$
is an element such that $r^U_{V_i}(s)=0$ for all $i$, then $s = 0$;
\item for any open set $U$, if $\{V_i\}$ is an open cover of $U$, and if we have elements
$s_i \in \mathcal{F}(V_i)$ for each $i$, with the property that for each pair
$i,j,\ r^{V_i}_{V_i\cap V_j}(s_i)= r^{V_j}_{V_i\cap V_j}(s_j),$ then there is an element $s\in \mathcal{F}(U)$ such that $r^{U}_{V_i}(s)=s_i$ for each $i$.
\end{enumerate}
Then $\mathcal{F}$ is called a \textbf{sheaf}.
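As a standard non-example (ours, not from the source), the constant presheaf fails the gluing condition:

```latex
% Let A be a fixed abelian group with at least two elements and set
%   F(U) = A for every nonempty open U, with identity restriction maps.
% For a disjoint union U = V_1 \sqcup V_2 of nonempty open sets, the sections
% s_1 = a on V_1 and s_2 = b on V_2 (with a \neq b) agree vacuously on
% V_1 \cap V_2 = \varnothing, yet no single s \in F(U) restricts to both.
% Hence condition ii. fails and the constant presheaf is not a sheaf.
```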
\begin{definition}
Let $\mathcal{F}$ and $\mathcal{G}$ be presheaves on $X.$ A morphism
$\phi : \mathcal{F}\longrightarrow \mathcal{G}$ consists of a morphism of abelian groups
$\phi (U):\mathcal{F}(U)\to \mathcal{G}(U)$ for each open set $U$ such that
whenever $V\subset U$ is an inclusion, the diagram
\[
\begin{tikzcd}
\mathcal{F}(U)\arrow{r}{\phi(U)}\arrow{d}{r^U_V}&\mathcal{G}(U)\arrow{d}{r'^U_V}\\
\mathcal{F}(V)\arrow{r}{\phi(V)}&\mathcal{G}(V)
\end{tikzcd}
\]
commutes. The presheaf kernel of $\phi$, presheaf cokernel of $\phi$, and presheaf image of $\phi$
are defined to be the presheaves given by $U\mapsto ker(\phi(U)),\ U\mapsto coker(\phi(U))$
and $U\mapsto im(\phi(U))$ respectively.
\end{definition}
\begin{remark}
If $\phi$ is a morphism of sheaves, then the presheaf kernel of $ \phi$ is a sheaf. However, the
presheaf cokernel and presheaf image of $\phi$ are not necessarily sheaves.
\end{remark}
Let $X = Spec\,R$ be the prime spectrum of
$R=k[x_0,\ldots,x_n]$ endowed with the Zariski topology. In this topology a basis of open sets is given by the distinguished
open sets $D(f)$ for $f\in R$, where $D(f)$ is the set of points of $X$ at which
$f$ does not vanish. We can then define a ring $\mathcal{O}_X(U)$ for every open set $U\subset X.$
We call $\mathcal{O}_X(U)$ the ring of regular functions on $U$, and the assignment to
every open set $U$ of the ring $\mathcal{O}_X(U)$ the structure sheaf of $X$, denoted by $\mathcal{O}_X$.
Cohomology of sheaves can be defined by taking the derived functors of the global section functor.
This is possible because for any ring $R$ every $R$-module is isomorphic to a submodule of some
injective $R$-module.
\par Let $(X,\mathcal{O}_X)$ be a ringed space and consider the category $\textbf{Mod}(X)$ of
sheaves of $\mathcal{O}_X $-modules. This category has enough injectives. Consequently, the
category $\textbf{Ab}(X)$ of sheaves of abelian groups on $X$ also has enough injectives.
\begin{definition}
Suppose $X$ is a topological space. Denote by $\Gamma (X,\_)$ the global section functor
from $\textbf{Ab}(X)$ to $\textbf{Ab}$. We can then define the cohomology functor
$\HH^i(X,\_)$ as the right derived functors of $\Gamma (X,\_).$ For any sheaf $\mathcal{F}$,
the groups $\HH^i(X,\mathcal{F})$ are referred to as the cohomology groups of $\mathcal{F}$.
\end{definition}
For a given short exact sequence
$$0\longrightarrow \F_1\longrightarrow \F_2\longrightarrow \F_3\longrightarrow 0,$$
there is a long exact sequence induced by the cohomology functor:
\[
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.7] (a) at (0,0){
\begin{tikzcd}
0\arrow{r}&\HH^0(X,\F_1)\arrow{r}&\HH^0(X,\F_2)\arrow{r}&\HH^0(X,\F_3)\arrow{r}&\HH^1(X,\F_1)\arrow{r}&\HH^1(X,\F_2)\arrow{r}&\HH^1(X,\F_3)\arrow{r}&\cdots
\end{tikzcd}};
\end{tikzpicture}
\]
If $X$ is a noetherian topological space of dimension $n,$ then $\HH^i(X,\mathcal{F})=0$ for all $i >n$
and all sheaves of abelian groups $\mathcal{F}$ on $X.$
\par The dimension of the cohomology group $\HH^i(X,\mathcal{F})$ is denoted by $\hh^i(X,\mathcal{F}).$
If $X=\PP^n$ and $\mathcal{F}=\Omega_{\PP^n}^{p}(d)$ is the sheaf of $p$-forms of degree $d$,
then the dimension of the cohomology group $\HH^i(\PP^n,\Omega_{\PP^n}^{p}(d))$ is given by the Bott formula.
\begin{theo}[Bott formula \cite{cmh}]
\label{bott}
\abovedisplayskip=0pt\relax
\[
\hh^i(\PP^n,\Omega_{\PP^n}^{p}(d)) =
\begin{cases}
\binom{d+n-p}{d}\binom{d-1}{p} &\text{for}\ i=0,\ 0\leq p\leq n,\ d>p\\
1& \text{for}\ d=0, 0\leq p=i\leq n\\
\binom{-d+p}{-d}\binom{-d-1}{n-p}& \text{for}\ i=n,\ 0\leq p\leq n,\ d<p-n\\
0&\text{otherwise}
\end{cases}
\]
In particular, if $p=0,$ we have
\[
\hh^i(\PP^n,\mathcal{O}_{\PP^n}(d)) =
\begin{cases}
\binom{d+n}{d} &\text{for}\ i=0,\ d\geq 0\\
\binom{-d-1}{-d-1-n}& \text{for}\ i=n,\ d\leq -n-1\\
0&\text{otherwise}
\end{cases}
\]
\end{theo}
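Since the Bott numbers are purely combinatorial, they can be sanity-checked numerically. The sketch below (ours; the function name is an assumption, not notation from the source) implements the four cases of Theorem \ref{bott} and verifies the Serre-duality symmetry $\hh^i(\PP^n,\Omega^p(d))=\hh^{n-i}(\PP^n,\Omega^{n-p}(-d))$ on a range of inputs.

```python
from math import comb

def bott_number(i, n, p, d):
    """h^i(P^n, Omega^p(d)) as given by the Bott formula (sketch)."""
    if i == 0 and 0 <= p <= n and d > p:
        return comb(d + n - p, d) * comb(d - 1, p)
    if d == 0 and 0 <= p <= n and i == p:
        return 1
    if i == n and 0 <= p <= n and d < p - n:
        return comb(-d + p, -d) * comb(-d - 1, n - p)
    return 0

# Two quick checks: h^0(P^3, O(2)) = binom(5,2) = 10, and Serre duality on P^3.
assert bott_number(0, 3, 0, 2) == 10
assert all(bott_number(i, 3, p, d) == bott_number(3 - i, 3, 3 - p, -d)
           for i in range(4) for p in range(4) for d in range(-8, 9))
```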
Let $X=Spec\,R$ and $\mathcal{O}_X$ be the structure sheaf of $X.$ A sheaf of $\mathcal{O}_X$-modules
is a sheaf $\mathcal{F}$ on $X$ such that for each open set $U\subset X$, the group $\mathcal{F}(U)$
is an $\mathcal{O}_X(U)$-module. An $\mathcal{O}_X$-module which is isomorphic to a direct sum of copies
of $\mathcal{O}_X$ is called a free $\mathcal{O}_X$-module. If the open sets $U$ such that
$\mathcal{F}|_U$ is a free $\mathcal{O}_X|_U$-module form an open cover of the topological space $X$,
then $\mathcal{F}$ is a locally free sheaf. The rank of $\mathcal{F}$ is the number of copies of the
structure sheaf needed, whether finite or infinite. For a connected topological space $X$,
the rank of a locally free sheaf is the same everywhere. A locally free sheaf of rank 1 is also called an
invertible sheaf.
\begin{definition}
Let $X$ be a variety. A vector bundle over $X$ is a variety $E$ with a map $\pi:E\to X$ such that the following
conditions hold:
\begin{enumerate}[a.]
\item For each $p\in X,$ the fibre $\pi^{-1}(p)$ is isomorphic to $\mathbb{A}^n$ for some $n$.
\item There exists an open cover $\{U_i\}$ of $X$ such that $\pi^{-1}(U_i)$ is isomorphic to $U_i\times \mathbb{A}^n.$
\end{enumerate}
\end{definition}
Given any locally free sheaf of finite rank, one can define a vector bundle, and conversely. By viewing locally free sheaves as
vector bundles, we can define elementary transformations.
\subsection*{Elementary transformation of Vector bundles \cite{mm}}
Let $\F$ be a vector bundle on $X.$ Given a surjective map $\psi:\F\longrightarrow \F',$ where
$\F'$ is a vector bundle on a divisor $X'$ of $X,$ the kernel $\E=ker(\psi)$ is a vector bundle on $X.$ The
procedure of obtaining $\E$ from $\F$ is called the elementary transformation of $\F$ along $\F'$ and
is denoted by $\E=elm_{\F'}(\F).$ We call $\E$ the elementary transform of $\F$ along $\F'.$ For
the given $\psi:\F\longrightarrow \F'$, we have the following exact, commutative diagram, which is
called the display of the elementary transformation:
\[
\begin{tikzcd}
&0\arrow{d}&0\arrow{d}&&\\
&\F(-X)\arrow{d} \arrow[equals]{r}&\F(-X)\arrow{d}&&\\
0\arrow{r}
&\E\arrow{r}\arrow{d}{\psi '}&\F\arrow{r}{\psi }\arrow{d}&\F'\arrow{r}\arrow[equals]{d}&0\\
0\arrow{r}
&\F''\arrow{r}\arrow{d}&\F|_{X'}\arrow{r}\arrow{d}&\F'\arrow{r}
&0\\
&0&0&&
\end{tikzcd}
\]
where $\F''$ is the kernel of $\psi|_{X'}$ and $\F(-X) = \F\otimes_{\mathcal{O}_X}\mathcal{O}_X(-X)$. The
leftmost vertical exact sequence gives the inverse of the given transformation, that is,
$\F(-X) = elm_{\F''}(\E).$
\begin{remark}
There is an equivalence of categories between the category of algebraic vector bundles and the category of
locally free sheaves, given by associating to an algebraic vector bundle $F\to X$ the sheaf $\F$ of sections
of $F.$
\end{remark}
\section{Main}
\label{main}
In this section, we prove our main result; that is, we use the equivalence of categories between the category
of locally free sheaves and the category of algebraic vector bundles to give an elementary transformation
in $\PP^n$. Although $\OO_{X}$-modules form an abelian category, locally free sheaves along with reasonably natural maps
between them (those arising as maps of $\OO_X$-modules) do not form an abelian category, so one enlarges
the class of nice $\OO_X$-modules to quasi-coherent sheaves. In fact our locally free sheaves have finite rank,
and so we will talk about coherent sheaves. We will conclude this section by describing how the elementary
transformation in $\PP^n$ can be used to prove the minimal resolution conjecture, especially when the points
under consideration are in general position.
\begin{theo}
\label{mainprop}
There exists an elementary transformation of vector bundles on $\PP^n$ consisting of the following exact sequences.
\begin{equation}
\label{etgp}
\begin{tikzcd}
&0\arrow{d}&0\arrow{d}&&\\
&\Omega^{p+1}_{\PP^n}(p+1)\arrow{d} \arrow[equals]{r}
&\Omega^{p+1}_{\PP^n}(p+1)\arrow{d}&&\\
0\arrow{r}
&\mathcal{O}_{\PP^n}^{\oplus \binom{n}{p+1}}\arrow{r}\arrow{d}
&\Omega^{p+1}_{\PP^n}(p+2)\arrow{r}\arrow{d}
&\Omega^{p+1}_{\PP^{n-1}}(p+2)\arrow{r}\arrow[equals]{d}
&0\\
0\arrow{r}
&\Omega^{p}_{\PP^{n-1}}(p+1)\arrow{r}\arrow{d}
&\Omega^{p+1}_{\PP^n|\PP^{n-1}}(p+2)\arrow{r}\arrow{d}
&\Omega^{p+1}_{\PP^{n-1}}(p+2)\arrow{r}
&0\\
&0&0&&
\end{tikzcd}
\end{equation}
\end{theo}
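As a hedged numerical cross-check (ours; the function name is an assumption), the dimensions in the middle row of the display can be tested against the $i=0$ case of Bott's formula: since $\HH^1(\PP^n,\mathcal{O}_{\PP^n})=0$, taking global sections of the middle row stays exact, so $\hh^0$ must be additive on it.

```python
from math import comb

def h0_forms(m, p, d):
    """h^0(P^m, Omega^p(d)), the i = 0 case of the Bott formula (sketch)."""
    if 0 <= p <= m and d > p:
        return comb(d + m - p, d) * comb(d - 1, p)
    return 0

# Middle row: 0 -> O^binom(n,p+1) -> Omega^{p+1}_{P^n}(p+2)
#               -> Omega^{p+1}_{P^{n-1}}(p+2) -> 0.
# Since H^1(P^n, O) = 0, h^0 is additive on this row.
for n in range(2, 10):
    for p in range(n - 1):
        assert comb(n, p + 1) + h0_forms(n - 1, p + 1, p + 2) \
            == h0_forms(n, p + 1, p + 2)
```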
The proof of this theorem follows from the following claims.
\begin{claim}
\begin{enumerate}[i)]
\item The kernel of the map $\Omega^{p+1}_{\PP^m}\longrightarrow \Omega^{p+1}_{\PP^{m-1}}$ is isomorphic to
$\mathcal{O}_{\PP^m}(-p-2)^{\oplus \binom{m}{p+1}}$.
\item The sequence
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.8] (a) at (0,0){
\begin{tikzcd} 0\arrow{r}
&\Omega^{p}_{\PP^{n-1}}(p+1)\arrow{r}
&\Omega^{p+1}_{\PP^n|\PP^{n-1}}(p+2)\arrow{r}
&\Omega^{p+1}_{\PP^{n-1}}(p+2)\arrow{r}&0\\
\end{tikzcd}};
\end{tikzpicture}
is exact.
\item The kernel of the map $\mathcal{O}_{\PP^n}^{\oplus \binom{n}{p+1}}\longrightarrow \Omega_{\PP^{n-1}}^{p}(p+1)$
is isomorphic to $\Omega^{p+1}_{\PP^n}(p+1)$.
\end{enumerate}
\end{claim}
\begin{proof}
\begin{enumerate}[i)]
\item We prove the first claim by induction on $p.$ For the base case, we set $p=0$ and prove that the kernel
of the map $\Omega_{\PP^n}\longrightarrow \Omega_{\PP^{n-1}}$ is isomorphic to
$\mathcal{O}_{\PP^n}(-2)^{\oplus n}$.
\newline Consider the Euler sequence.
\[
\begin{tikzcd}
0\arrow{r}&\Omega_{\PP^{n}}\arrow{r}&\mathcal{O}_{\PP^n}(-1)^{n+1}\arrow{r}{s_n}&\mathcal{O}_{\PP^n}\arrow{r} &0
\end{tikzcd}
\]
We can construct a commutative diagram of exact sequences on $\PP^n$:
\[
\begin{tikzcd}
0\arrow{r}&\Omega_{\PP^n}\arrow{r}\arrow{d}{e}&\mathcal{O}_{\PP^n}(-1)^{\oplus n+1}\arrow{r}\arrow{d}{b}&\mathcal{O}_{\PP^n}\arrow{r}\arrow{d}{c}&0\\
0\arrow{r}&\Omega_{\PP^{n-1}}\arrow{r}&\mathcal{O}_{\PP^{n-1}}(-1)^{\oplus n}\arrow{r}&\mathcal{O}_{\PP^{n-1}}\arrow{r}&0
\end{tikzcd}
\]
where $\PP^{n-1}$ is identified with the locus of $\PP^n$ where the last coordinate vanishes. The rows are
the Euler sequences, $e$ is the restriction of forms, $c$ is the restriction of functions and $b$ is the
restriction on the first $n$ summands of $\mathcal{O}_{\PP^n}(-1)^{\oplus n+1}$ and is $0$ on the last summand.
Considering the kernels of $e,\ b$ and $c$ we get a commutative diagram with exact rows and columns,
\begin{equation}
\label{diag1}
\begin{tikzcd}
&0\arrow{d}&0\arrow{d}&0\arrow{d}&\\
0\arrow{r}&K\arrow{r}\arrow{d}&\mathcal{O}_{\PP^n}(-2)^{\oplus n}\oplus \mathcal{O}_{\PP^n}(-1)\arrow{r}{\alpha}\arrow{d}&\mathcal{O}_{\PP^n}(-1)\arrow{r}\arrow{d}&0\\
0\arrow{r}&\Omega_{\PP^n}\arrow{r}\arrow{d}&\mathcal{O}_{\PP^n}(-1)^{\oplus n+1}\arrow{r}{S_n}\arrow{d}&\mathcal{O}_{\PP^n}\arrow{r}\arrow{d}&0\\
0\arrow{r}&\Omega_{\PP^{n-1}}\arrow{r}\arrow{d}&\mathcal{O}_{\PP^{n-1}}(-1)^{\oplus n}\arrow{r}\arrow{d}&\mathcal{O}_{\PP^{n-1}}\arrow{r}\arrow{d}&0\\
&0&0&0&
\end{tikzcd}
\end{equation}
and we need to prove that $K\cong \mathcal{O}_{\PP^n}(-2)^{\oplus n}.$ It suffices to show that $\alpha$ sends
the last summand $\mathcal{O}_{\PP^n}(-1)$ of $\mathcal{O}_{\PP^n}(-2)^{\oplus n}\oplus \mathcal{O}_{\PP^n}(-1)$
isomorphically to $\mathcal{O}_{\PP^n}(-1)$. As this summand is isomorphic to the codomain, it
suffices to prove that the restriction of $\alpha$ to the summand $\mathcal{O}_{\PP^n}(-1)$ is generically
injective. This is true because the restriction of $S_n$ to the last summand is an isomorphism outside of
$\PP^{n-1}$ (a homomorphism of line bundles on the complement of $\PP^{n-1}$). As a consequence, the first row
of \ref{diag1} splits and $K\cong \mathcal{O}_{\PP^n}(-2)^{\oplus n}$.
\newline Suppose now that the kernel of the map $\Omega^{p}_{\PP^n}\longrightarrow \Omega^{p}_{\PP^{n-1}}$ induced by
the restriction of forms is isomorphic to $\mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n}{p}}$. Consider the
exact sequence below obtained from the Euler sequence of $\PP^n$.
\[
\begin{tikzcd}
0\arrow{r}&\Omega_{\PP^{n}}^{p+1}\arrow{r}&\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n+1})\arrow{r}\arrow[equals]{d}&\Omega_{\PP^n}^{p}\arrow{r} &0\\
&&\mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n+1}{p+1}}&&
\end{tikzcd}
\]
We can then construct the commutative diagram below with exact rows;
\[
\begin{tikzcd}
0\arrow{r}&\Omega_{\PP^n}^{p+1}\arrow{r}\arrow{d}{e}&\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n+1})\arrow{r}\arrow{d}{b}&\Omega_{\PP^n}^{p}\arrow{r}\arrow{d}{c}&0\\
0\arrow{r}&\Omega_{\PP^{n-1}}^{p+1}\arrow{r}&\wedge^{p+1}(\mathcal{O}_{\PP^{n-1}}(-1)^{\oplus n})\arrow{r}&\Omega_{\PP^{n-1}}^{p}\arrow{r}&0
\end{tikzcd}
\]
where $e$ and $c$ are restrictions of forms and $b$ is described as follows. Recall that $\PP^{n-1}$ is the
locus where the last coordinate vanishes and decompose $\mathcal{O}_{\PP^n}(-1)^{\oplus n+1}$ as the
direct sum of the first $n$ summands and the last summand, that is,\\
$\mathcal{O}_{\PP^n}(-1)^{\oplus n+1}=\mathcal{O}_{\PP^n}(-1)^{\oplus n}\oplus \mathcal{O}_{\PP^n}(-1).$
The domain of $b$ is $\wedge^{p+1}\mathcal{O}_{\PP^n}(-1)^{\oplus n+1}=\\
\mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n}{p+1}}\oplus \mathcal{O}_{\PP^n}(-p-1)^{\binom{n}{p}}$.
The codomain of $b$ is $\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n})=\\
\mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n}{p+1}}$.
Thus $b$ sends $\wedge^p(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes \mathcal{O}_{\PP^n}(-1)$ to zero and its restriction to
$\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n})=\mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n}{p+1}}$ is natural.
In particular, the kernel of $b$ is $\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes \mathcal{O}_{\PP^n}(-1)\oplus \wedge^p(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes \mathcal{O}_{\PP^n}(-1)=\\
\mathcal{O}_{\PP^n}(-p-2)^{\oplus \binom{n}{p+1}}\oplus \mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n}{p}}$.
Considering the kernels of $e,\ b$ and $c$ we get a commutative diagram with exact rows and columns;
\begin{equation}
\label{diag2}
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.7] (a) at (0,0){
\begin{tikzcd}
&0\arrow{d}&0\arrow{d}&0\arrow{d}&\\
0\arrow{r}&K\arrow{r}\arrow{d}&\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes\mathcal{O}_{\PP^n}(-1)\oplus \wedge^p(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes\mathcal{O}_{\PP^n}(-1)\arrow{r}{\alpha}\arrow{d}&\mathcal{O}_{\PP^n}(-p-1)^{\oplus\binom{n}{p}}\arrow{r}\arrow{d}&0\\
0\arrow{r}&\Omega_{\PP^n}^{p+1}\arrow{r}\arrow{d}&\wedge^{p+1}(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\oplus\wedge^p(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes\mathcal{O}_{\PP^n}(-1)\arrow{r}{\gamma}\arrow{d}&\Omega^p_{\PP^n}\arrow{r}\arrow{d}&0\\
0\arrow{r}&\Omega_{\PP^{n-1}}^{p+1}\arrow{r}\arrow{d}&\wedge^{p+1}(\mathcal{O}_{\PP^{n-1}}(-1)^{\oplus n})\arrow{r}\arrow{d}&\Omega^p_{\PP^{n-1}}\arrow{r}\arrow{d}&0\\
&0&0&0&
\end{tikzcd}};
\end{tikzpicture}
\end{equation}
It suffices to prove that the first row splits. To do this, we prove that the restriction of
$\alpha$ to $\wedge^p(\mathcal{O}_{\PP^{n}}(-1)^{\oplus n})\otimes\mathcal{O}_{\PP^{n}}(-1)$ maps it isomorphically
onto $\mathcal{O}_{\PP^{n}}(-p-1)^{\oplus \binom{n}{p}}.$ Since we know that
$\wedge^p(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes\mathcal{O}_{\PP^n}(-1)$ is isomorphic to
$\mathcal{O}_{\PP^n}(-p-1)^{\oplus \binom{n}{p}}$, it suffices to show that $\alpha$ is generically of maximal rank.
This holds because on the complement of $\PP^{n-1}$ the morphism $\gamma$ injects
$\wedge^p(\mathcal{O}_{\PP^n}(-1)^{\oplus n})\otimes\mathcal{O}_{\PP^n}(-1)$ into $\Omega^p_{\PP^n}.$
\item To prove the exactness of the bottom row,
consider the exact sequence
\[
\begin{tikzcd}
0\arrow{r}&\mathcal{O}_{\PP^{n-1}}(-1)\arrow{r}&\Omega^{1}_{{\mathbb P}^{n}}|_{{\mathbb P}^{n-1}}\arrow{r}&\Omega^{1}_{{\mathbb P}^{n-1}}\arrow{r}&0.\\
\end{tikzcd}
\]
Taking the $(p+1)$-th exterior product and tensoring by $\mathcal{O}(p+2)$ we get the desired exact sequence.
\item Finally, to show that the kernel of the map $\mathcal{O}_{\PP^n}^{\oplus \binom{n}{p+1}}\longrightarrow \Omega_{\PP^{n-1}}^{p}(p+1)$
is isomorphic to $\Omega^{p+1}_{\PP^n}(p+1)$, we first recall that given a commutative diagram whose rows are two exact sequences
\[
\begin{tikzcd}
0\arrow{r}&A\arrow{r}\arrow{d}&B\arrow{r}\arrow{d}&C\arrow{r}\arrow{d}&0\\
0\arrow{r}&D\arrow{r}&E\arrow{r}&F\arrow{r}&0\\
\end{tikzcd}
\]
there always exists a map $A\longrightarrow D$ making the extended diagram commutative. Let $a,\ b$ and $c$ be the three
vertical maps appearing in this extended diagram. By the Snake lemma, we have an exact sequence
\[
\begin{tikzpicture}[baseline= (a).base]
\node[scale=.6] (a) at (0,0){
\begin{tikzcd}
0\arrow{r}&ker(a)\arrow{r}&ker(b)\arrow{r}&ker(c)\arrow{r}&coker(a)\arrow{r}&coker(b)\arrow{r}&coker(c)\arrow{r}&0.\\
\end{tikzcd}};
\end{tikzpicture}
\]
Since $c$ is an isomorphism, $ker(a)=ker(b)$. As the second column of the diagram is obtained by tensoring
$\Omega^{p+1}_{\mathbb{P}^n}(p+2)$ with the exact sequence
\[
\begin{tikzcd}
0\arrow{r}&I\arrow{r}&\mathcal{O}_{{\mathbb P}^{n}}\arrow{r}&\mathcal{O}_{{\mathbb P}^{n-1}}\arrow{r}&0\\
\end{tikzcd}
\]
defining the ideal of $\mathbb{P}^{n-1}$ in $\mathbb{P}^{n},$ we have $ker(a)=ker(b)=\Omega^{p+1}_{{\mathbb P}^{n}}(p+1).$
\end{enumerate}
\end{proof}
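As a small aside (ours, not from the source), the rank bookkeeping used in part i) of the proof, namely the splitting $\wedge^{p+1}(\mathcal{O}(-1)^{\oplus n}\oplus\mathcal{O}(-1))\cong\wedge^{p+1}(\mathcal{O}(-1)^{\oplus n})\oplus\wedge^{p}(\mathcal{O}(-1)^{\oplus n})\otimes\mathcal{O}(-1)$, reduces to Pascal's rule, which can be checked in a throwaway script:

```python
from math import comb

# wedge^{p+1} of O(-1)^{n+1} = O(-1)^n (+) O(-1) decomposes into
# wedge^{p+1} of the first n summands plus wedge^p of them tensored with
# the last one, so the ranks must satisfy Pascal's rule:
# binom(n+1, p+1) = binom(n, p+1) + binom(n, p).
for n in range(1, 15):
    for p in range(n):
        assert comb(n + 1, p + 1) == comb(n, p + 1) + comb(n, p)
```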
\subsubsection*{Conclusion}
In conclusion, let us see how the results above can be applied in the study of minimal free resolutions.
\par Suppose $X$ is a smooth projective variety and $X'$ is a non-singular divisor of $X.$ Let $\F$ be a locally free sheaf
on $X$ and
$$0\longrightarrow \F''\longrightarrow \F_{| X'}\longrightarrow \F'\longrightarrow 0$$
be an exact sequence of locally free sheaves on $X'.$ The kernel $\E$ of $\F\longrightarrow \F'$ is a locally free sheaf
on $X$ and we have another exact sequence of locally free sheaves on $X'$
$$0\longrightarrow \F' (-X')\longrightarrow \E_{| X'}\longrightarrow \F''\longrightarrow 0$$
as well as exact sequences of coherent sheaves on $X$
\begin{equation*}
\label{1}
0\longrightarrow \E \longrightarrow \F \longrightarrow \F'\longrightarrow 0
\end{equation*}
and
\begin{equation*}
\label{2}
0\longrightarrow \F(-X) \longrightarrow \E \longrightarrow \F'' \longrightarrow 0.
\end{equation*}
\begin{theo} [Differential method of Horace]
\label{dmh}
Suppose we are given a surjective morphism of vector spaces
$\lambda :\H^0\left(X',\F'\right)\longrightarrow L,$
suppose that there exists a point $Z'\in X'$ such that
$\H^0\left(X',\F'\right)\hookrightarrow L\oplus \F'|_{Z'},$
and suppose that $\HH^1\left(X,\E\right)=0.$ Then there exists a quotient
$\E(Z')\longrightarrow D(\lambda)$ with kernel contained in $\F'(Z')$
of dimension $dim(D(\lambda))=\rk (\F)-dim(ker\,\lambda)$ having the following property.
Let $\mu :\H^0\left(X,\F\right)\longrightarrow M$ be a morphism of vector spaces. Then
there exists $Z\in X'$ such that if
$\H^0\left(X,\E\right)\rightarrow M\oplus D(\lambda)$ is of maximal rank, then
$\H^0\left(X,\F\right)\longrightarrow M\oplus L\oplus \F(Z)$
is also of maximal rank.
\end{theo}
\begin{remark}
\label{remdmh}
The idea of the theorem is illustrated in the diagram below.
\[
\begin{CD}
0@>>>\H^0(X,\E)@>>>\H^0(X,\F)@>>>\H^0 (X',\F')@>>>0\\
@.@V\alpha_1VV@V\alpha_2VV@V\alpha_3VV@.\\
0@>>>M\oplus D(\lambda)@>>>M\oplus L \oplus \F(Z)@>>>L\oplus D'(\lambda)|_{Z}@>>>0
\end{CD}
\]
The key point is that if the map $\alpha_3$ is bijective, then $\alpha_2$ will be bijective provided
that $\alpha_1$ is bijective.
\end{remark}
The elementary transformation in Theorem \ref{mainprop} above can be used to prove inductively that the map
\begin{equation}
\label{maxranpn}
\H^0 \left(\PP^n,\Omega^{p+1}_{\PP^n}(d+p+1)\right)\longrightarrow \bigoplus _{i=1}^{s}\Omega^{p+1}_{\PP^n}(d+p+1)|_{P_i}
\end{equation}
is of maximal rank for a fixed $p$, for all non-negative integers $d\geq m.$ To see this,
set $X=\PP^n,\ X'=\PP^{n-1},\ \F =\Omega^{p+1}_{\PP^n}(p+2),\ \F'=\Omega^{p+1}_{\PP^{n-1}}(p+2)$ and
$\E =\mathcal{O}_{\PP^n}^{\oplus \binom{n}{p+1}}$ in the diagram under remark \ref{remdmh}. We can also construct a
diagram similar to this using the sequence
\begin{equation*}
0\longrightarrow \H^0(\PP^n,\Omega^{p+1}_{\PP^{n-1}}(d)) \longrightarrow \H^0(\PP^n,\mathcal{O}_{\PP^n}(2)^{\oplus \binom{n}{p+1}}(d-1)) \longrightarrow \H^0(\PP^{n-1},\Omega^{p}_{\PP^{n-1}}) \longrightarrow 0.
\end{equation*}
Call the three vertical maps in this second diagram $\alpha_1',\ \alpha_2'$ and $\alpha_3'.$ By construction, the
map $\alpha_1$ coincides with the map $\alpha_2'.$ In reference to remark \ref{remdmh}, we have that $\alpha_2$ is
bijective provided $\alpha_1$ is, and $\alpha_1$ is bijective provided that $\alpha_1'$ is. Bijectivity of $\alpha_2$
is a statement on forms of degree $d+1$ and bijectivity of $\alpha_1'$ is a statement on forms of degree $d$. It
therefore follows that if we can prove such bijectivity for $d=m,$ and also prove that the bijectivity of $\alpha_1$
implies bijectivity of $\alpha_2$ and bijectivity of $\alpha_1'$ implies bijectivity of $\alpha_1,$ then it will
follow by induction that the map (\ref{maxranpn}) is of maximal rank for all $d\geq m.$
\section{Introduction}\label{sec:introduction}
Peer selection, where agents must choose a subset of themselves for an award or a prize, is one of the pillars for quality assessment in scientific contexts and beyond. While current methods rely on expert panels, there is increasing attention to how to design trustworthy mechanisms that improve the accuracy and reliability of the outcome, keeping the procedure simple and cheap. The latter is particularly relevant in open online courses \cite{piech2013tuned}, where hiring professional graders is prohibitively expensive. Indeed, even IJCAI 2020 is implementing a portion of this system, requiring authors who submit papers to agree to be reviewers themselves.
The importance of having an ``objective" assessment in conference reviewing has been brought to light by the famous NIPS experiment \cite{langford_2015,shah2018design}: of all papers submitted to NIPS 2014, 10\% were reviewed twice by two independent committees which, astonishingly, agreed on less than half of the accepted papers in their pool. Whether the outcome was due to bias, incompetence or simply well-thought disagreement is still unclear. What is clear though is that the current solutions show undesirable properties. %
Methods for {\em impartial} peer selection, where self-interested individuals assess one another in such a way that none of them has an incentive to misrepresent their evaluation, have a long standing tradition in economics, e.g., \cite{Douceur2009,Holzman2013,Dollar}, which has in turn encouraged several groups in artificial intelligence and computer science more broadly to investigate these problems, e.g., \cite{KLMP15,AFPT11a,XZSS19,Aziz2019}.
The interest in such methods has culminated in a pilot scheme by the US National Science Foundation (NSF) \cite{naghizadeh2013incentives}, called for by \citet{MerrifieldSaari}, in which each principal investigator (PI) was asked to rank 7 proposals from other PIs. The rankings were then combined using the Borda score with the additional truth-telling incentive of receiving a bonus the closer one gets to the average of the other reviewers' marks. Though this method is not impartial, and leads to a Keynesian beauty contest \cite{Key36a}, the results were encouraging.
Research in artificial intelligence and economics has led to a number of proposals for algorithms choosing a set of $k$ agents from amongst themselves, commonly known as the peer selection problem. We review some of the most prominent ones here to which we will compare our proposal. %
\begin{description}[itemsep=0em]
\item [Credible Subset \cite{KLMP15}.] In Credible Subset (CS), reviewers assign scores to their allocated proposals and the potential manipulators, i.e., the reviewers that could be within the $k$ funded ones, are also selected to be funded, with a given probability. While the system is strategy-proof, it will yield an empty set of funded proposals in a number of cases \cite{Aziz2016}.
\item[Dollar Raffle \cite{Dollar}.] The Dollar Raffle method (DR) consists of reviewers distributing a score in the interval $[0,1]$ across their reviews rather than allocating scores independently as in CS. %
\item[Exact Dollar Partition \cite{Aziz2019}.] Reviewers are clustered at random and rank peers in different clusters. Using a randomized rounding scheme based on the shares computed with the method of \citet{Dollar}, the top proposals of each cluster are selected, depending on their clusters' importance. Exact Dollar Partition is strategyproof and has been shown to be the most accurate available method \cite{Aziz2019}.
\end{description}
We compare our algorithm against two more basic procedures: Vanilla, which selects the $k$ agents with the highest total Borda score based on the reviews received; and Partition, which, instead, divides the agents into a set of clusters and selects a predetermined number of them from each (typically $k$ divided by the number of clusters) as rated by the agents from the other clusters. Notice that, unlike Partition, Vanilla is not impartial but is commonly used as a baseline for comparison.
Relevant recent developments with a different focus use voting rules to aggregate ranks, e.g., the $k$-Partite and Committee algorithms \cite{KKKKP18a} and Divide-and-Rank \cite{XZSS19}. Other methods are approval-based but focus only on selecting a single agent: Permutation~\cite{FeKl14a} and Slicing~\cite{BNV14a}. Additional work in this area focuses on assignment and calibration issues \cite{wang2019your,LianMNW18}.
\smallbreak
\noindent
\textbf{Our Contribution.}
We present \text{\textsc{PeerNomination}}\xspace, an impartial peer selection method for scenarios where $n$ agents review and are reviewed by $m$ others, with the goal of selecting $k$ of them. Each proposal is considered independently and is selected only if it falls in the top $\frac{k}{n}m$ of the majority of its reviewers' (partial) rankings, using a probabilistic completion if that number is not an integer.
This way we relax the exactness requirement, in the sense that our algorithm is not guaranteed to select exactly $k$ proposals every time.
However, under some mild rationality assumptions, the algorithm does so in expectation. Unlike other well-known peer reviewing methods, e.g., Exact Dollar Partition (EDP), \text{\textsc{PeerNomination}}\xspace does not rely on clustering nor on reviewers submitting complete rankings, allowing more flexibility in where and when it may be deployed. %
We compare the performance of \text{\textsc{PeerNomination}}\xspace against an underlying ground truth ranking when agent rankings are drawn according to a Mallows model \cite{Mal57,Xia19}, deriving its expected accuracy analytically. Moreover, we empirically compare our method against other peer selection mechanisms, for which analytic performance bounds are unknown, using a number of well-known classification measures. Our results show that \text{\textsc{PeerNomination}}\xspace improves on the best performance currently known from the literature and relies on milder assumptions on the underlying reviewer graph. This suggests that relaxing the exactness requirement in peer selection outcomes can improve the accuracy of the accepted set.
\smallbreak
\noindent
\textbf{Paper Structure.}
In Section \ref{sec:preliminaries} we set up the basic terminology and notation. Section \ref{sec:model} presents our algorithm and its theoretical properties. Section \ref{sec:experiments} compares its accuracy against the main existing alternatives, under various metrics. %
\section{Preliminaries}\label{sec:preliminaries}
We work with a set of agents $\mathcal{N} = \{ 1, 2, ..., n \}$ and an order over them, induced by their index, which represents the final ranking the agents would have, if they were to be assessed objectively. We refer to this order as the {\em ground truth}. Each agent is assigned $m$ other agents to review and is in turn reviewed by $m$ others.
We represent such an $m$-regular assignment as a function $A: \mathcal{N} \rightarrow 2^{\mathcal{N}}$ and denote $i$'s review pool as $A(i)$, while $A^{-1}(i)$ denotes $i$'s reviewers. It is worth noting that while generating a random $m$-regular assignment is easy for small $m$ (by generating an $m$-regular bipartite graph), sampling one uniformly is non-trivial and an active area of study (e.g., see \cite{berger2010uniform}). In this paper, we assume uniform sampling to make the theoretical analysis in Section \ref{sec:model} tractable, but drop this assumption for the experiments in Section \ref{sec:experiments}. In practice, we observed a negligible effect on the performance of the algorithms when using different assignment-generating procedures.
In real-world settings, agents can only review a limited number of proposals or papers so $m$ is typically small and constant, given $n$.
Each reviewer $i$ submits a ranking of their review pool $A(i)$, which we represent as a strategy $\sigma_i: A(i) \rightarrow \{1, ..., m \}$, where $\sigma_i(j)$ gives the rank of $j$ given by $i$ in $i$'s review pool.
A collection of all declared strategies
is called a \textit{profile} and is denoted by $\sigma$. The unique profile which is consistent with the ground truth is called \textit{truthful}.
After the individual preferences are declared, they are aggregated to select $k$ individuals.
We call a peer selection mechanism \textit{impartial} or \textit{strategyproof} if no agent can affect their chances of selection in any assignment using any strategy.
\section{\text{\textsc{PeerNomination}}\xspace}\label{sec:model}
In this section we present \text{\textsc{PeerNomination}}\xspace and describe its performance analytically.
\subsection{The Algorithm}
A usual requirement for peer selection mechanisms is that they return an accepting set of size exactly $k$ \cite{Aziz2019,AFPT11a,KKKKP18a}. Some approaches have investigated relaxing this requirement \cite{Aziz2016,KLMP15}; most notably, the results of \citet{bjelde_fischer_klimm_2017} show that this relaxation can lead to better approximations of optimality. We use this intuition in designing the following algorithm, which returns an accepting set of size $k$ in expectation.
\text{\textsc{PeerNomination}}\xspace works as follows: suppose every agent reviews and is reviewed by $m$ other agents. If an agent is in the true top $k$, we expect them to be ranked in the top $k$ proportion (i.e., top $\frac{k}{n} m$) of their review pool by the majority of agents that review them, if these were to report their accurate rankings. We say that an agent is \textit{nominated} by a reviewer if they are in the top $k$ proportion of the reviewer's declared ranking, i.e., their review pool. Likewise, we refer to $\frac{k}{n} m$ as the \textit{nomination quota}. Hence, for every agent $j$, we look at all reviewers $i_1, ..., i_m$ reviewing $j$ and select $j$ only if they are nominated by the majority of these reviewers.
As $\frac{k}{n} m$ is unlikely to be an integer, we consider an agent {\em nominated for certain} if they are among the first $\lfloor \frac{k}{n} m \rfloor$ agents in the review pool, where $\lfloor x \rfloor$ denotes the integer part of a positive real number $x$. If they are in the next position (i.e., $\lfloor \frac{k}{n} m \rfloor + 1$), we consider them nominated with probability $\frac{k}{n} m - \lfloor \frac{k}{n} m \rfloor$, that is, the fractional part of the nomination quota. Lastly, if the number of review pools an agent appears in is even, we require them to be nominated in just half of these pools, rather than a strict majority.
A crucial observation is that, since each agent is considered independently for selection, the algorithm is not guaranteed to return exactly $k$ agents. However, we will show that the algorithm is close enough to such number if the reviewers submit reviews that are close enough to the ground truth and, moreover, that truth-telling is an equilibrium outcome, i.e., \text{\textsc{PeerNomination}}\xspace is impartial.
The \text{\textsc{PeerNomination}}\xspace algorithm is presented in Algorithm \ref{algorithm}. Note that in the algorithm we introduce the \textit{slack parameter} $\varepsilon$, which extends the nomination quota accordingly. As we show next, this is necessary in some settings to achieve the right expected size of the accepting set.
\begin{algorithm}[h]
\begin{algorithmic}{}
\REQUIRE Assignment $A$, review profile $\sigma$, target quota $k$, slack parameter $\varepsilon$
\ENSURE Accepting set $S$
\STATE Set $\textit{nomQuota} := \frac{k}{n} m +\varepsilon$
\FORALL{$j$ in $\mathcal{N}$}
\STATE Initialise \textit{nomCount} := 0
\FORALL{$i \in A^{-1}(j)$}
\IF{$\sigma_i(j) \leq \lfloor \textit{nomQuota} \rfloor$}
\STATE increment \textit{nomCount} by 1
\ELSIF{$\sigma_i(j) = \lfloor \textit{nomQuota} \rfloor + 1$}
\STATE increment \textit{nomCount} by 1 with probability $\textit{nomQuota} - \lfloor \textit{nomQuota} \rfloor$
\ENDIF
\ENDFOR
\IF{$\textit{nomCount} \geq \lceil \frac{m}{2} \rceil$}
\STATE $S \leftarrow j$
\ENDIF
\ENDFOR
\RETURN $S$
\end{algorithmic}
\caption{\text{\textsc{PeerNomination}}\xspace}
\label{algorithm}
\end{algorithm}{}
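As an illustrative sketch, Algorithm \ref{algorithm} can be rendered in Python as follows; the dictionary-based encodings of the assignment $A$ and the profile $\sigma$ are our own choices, not part of the formal model:

```python
import math
import random

def peer_nomination(assignment, profile, n, m, k, eps=0.0, rng=random):
    """Sketch of PeerNomination (Algorithm 1).

    assignment[i] is reviewer i's pool A(i); profile[(i, j)] is the rank
    sigma_i(j) that reviewer i assigns to agent j (1 = best). These
    encodings are illustrative choices, not part of the formal model.
    """
    quota = (k / n) * m + eps
    whole, frac = math.floor(quota), quota - math.floor(quota)
    accepted = set()
    for j in range(n):
        nom_count = 0
        for i in range(n):                 # reviewers of j, i.e., A^{-1}(j)
            if j not in assignment[i]:
                continue
            rank = profile[(i, j)]
            if rank <= whole:              # nominated for certain
                nom_count += 1
            elif rank == whole + 1 and rng.random() < frac:
                nom_count += 1             # fractional nominee
        if nom_count >= math.ceil(m / 2):  # (weak) majority of the m pools
            accepted.add(j)
    return accepted
```

For instance, with $n=4$, $m=3$, $k=2$ and truthful rankings, setting $\varepsilon=0.5$ makes the quota an integer (2), so the run is deterministic and three agents are accepted, illustrating that the output size need not be exactly $k$.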
\subsection{Expected Size and Slack Parameter}
We now derive the expected size of the accepting set returned by \text{\textsc{PeerNomination}}\xspace as a function of $n, m$ and $k$. %
Since each agent is considered independently, we just need to derive the probability of selection for an agent given their ground truth position. Assume the algorithm is run on an $m$-regular assignment and the reviews are truthful. Note that we assume such assignment is sampled uniformly and so each review pool is equally likely to be assigned to any reviewer. Firstly, consider the probability of obtaining position $y$ in the sample of size $m$, given position $r$ in the underlying ranking. When drawing the sample, we need to choose $y-1$ individuals out of $r-1$ that are above agent $r$ in the ground truth, and then choose $m-y$ out of $n-r$ that are worse. In total, as expected, we are choosing $m-1$ other agents out of $n-1$. Hence:
\begin{equation*}
\mathbb{P}[Y = y | R = r] = {r-1 \choose y-1} {n-r \choose m-y} \Big/ {n-1 \choose m-1}
\end{equation*}
where $Y$ is a random variable representing the position in the review pool and $R$ is a random variable representing the ground truth position.
Denote now the nomination quota by $k_q := \frac{k}{n} m$ and recall that, in any given review pool, the top $\lfloor k_q \rfloor$ agents are nominated for certain and the agent in the next position is nominated with probability $k_q - \lfloor k_q \rfloor$. Hence, the probability of being nominated in any given pool from position $r$ in the ranking is, independently:
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{equation}
\begin{split}
q_r &:= \sum_{y=1}^{\lfloor k_q \rfloor} \mathbb{P}[Y = y | R = r] \\
&+ \left(k_q - \lfloor k_q \rfloor \right) \mathbb{P}[Y = \lfloor k_q \rfloor + 1 | R = r]
\end{split}
\label{eq:q_r}
\end{equation}
\end{minipage}
}
\vspace{0.5em}
Since each review pool can be regarded as a Bernoulli trial with probability $q_r$ and to be accepted an agent has to be nominated $\lceil m/2 \rceil$ times, the probability of being accepted from position $r$ is given by the cumulative Binomial distribution:
\begin{equation}\label{eq:prob}
\mathbb{P}[\textrm{accept} | R = r] = \sum_{i= \lceil m/2 \rceil}^m {m \choose i} q_r^i (1-q_r)^{m-i}
\end{equation}
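These probabilities are easy to evaluate numerically; the following Python sketch (function names are ours) computes $q_r$ from Equation (\ref{eq:q_r}) and the acceptance probability from Equation (\ref{eq:prob}):

```python
from math import comb, floor, ceil

def pos_prob(y, r, n, m):
    """P[Y = y | R = r]: probability of landing at rank y in a uniformly
    drawn review pool of size m, given ground truth position r among n."""
    return comb(r - 1, y - 1) * comb(n - r, m - y) / comb(n - 1, m - 1)

def accept_prob(r, n, m, k):
    """P[accept | R = r]: the per-pool nomination probability q_r,
    followed by the cumulative Binomial for a weak majority of m pools."""
    kq = (k / n) * m
    q = sum(pos_prob(y, r, n, m) for y in range(1, floor(kq) + 1))
    q += (kq - floor(kq)) * pos_prob(floor(kq) + 1, r, n, m)
    return sum(comb(m, i) * q ** i * (1 - q) ** (m - i)
               for i in range(ceil(m / 2), m + 1))
```

In the $n=130$, $m=9$, $k=30$ setting of Figure \ref{fig:review_dist}, the top-ranked agent is accepted with probability 1 and the bottom-ranked one with probability 0, with the acceptance probability decreasing monotonically in between.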
\begin{figure}[h]
\centering
\includegraphics[width = .80\linewidth]{accept_prob_comparison2.pdf}
\caption{Probability of being accepted by the algorithm given the position in the ranking when $n=130$ and $k=30$.}
\label{fig:review_dist}
\end{figure}
An illustration of acceptance probabilities as a function of the ground truth position is shown in Figure \ref{fig:review_dist}. We can see that agents well inside the top $k$ are almost certain to be accepted, while those well outside it are almost certain to be rejected. The width of the interval around position $k$ in which the probability is away from the extremes is dictated by $m$: a higher $m$ reduces uncertainty by providing more ``trials" for each agent, and so narrows the interval.
We can now use the derived probability of acceptance to calculate the expected size of the accepting set. %
Since every individual is accepted independently with probability $\mathbb{P}[\textrm{accept} | R = r]$ and contributes 1 to the size if they are accepted, the expectation is simply $\sum_{r=1}^n \mathbb{P}[\textrm{accept} | R = r]$. The complexity of this expression makes it difficult to analyse explicitly. However, Figure \ref{fig:expected_size} shows the typical behaviour of the expected size as a function of $m$. We observe that it approaches $k$ as $m$ increases. However, for small values of $m$ the expected size can deviate significantly from $k$, especially when $m$ is odd (recall that agents need a clear majority in this case, making selection more difficult).
To tackle these issues, we introduce an additional parameter $\varepsilon$ that allows us to control the size of the accepting set more finely. If $\varepsilon$ is set to a non-zero value (usually a positive one), we extend the nomination quota in each review pool by this amount. Usually this increment simply contributes to the probability that the ``fractional nominee" is nominated. For example, in the setting $n=130, m=9$ and $k=30$, Figure \ref{fig:expected_size} shows an expected size slightly above 27, while our aim is 30. Setting $\varepsilon = 0.13$ yields an expected size very close to $30$. For most practical applications $\varepsilon \in [-0.05, 0.15]$ suffices, meaning the original algorithm is rather well-behaved. Note that this is in contrast to other inexact mechanisms in the literature: Credible Subset returns an empty set with positive probability \cite{KLMP15}, while the Dollar Partition method may return as many additional agents as the number of clusters \cite{Aziz2016}.
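The effect of $\varepsilon$ on the expected size can be checked numerically; a self-contained Python sketch (helper names are ours) sums the acceptance probabilities over all ground truth positions, with the nomination quota extended by $\varepsilon$:

```python
from math import comb, floor, ceil

def expected_size(n, m, k, eps=0.0):
    """Expected size of the accepting set under truthful reviews:
    the sum over ground truth positions r of P[accept | R = r],
    with the nomination quota extended by the slack eps."""
    def pos_prob(y, r):  # P[Y = y | R = r]
        return comb(r - 1, y - 1) * comb(n - r, m - y) / comb(n - 1, m - 1)

    kq = (k / n) * m + eps
    total = 0.0
    for r in range(1, n + 1):
        q = sum(pos_prob(y, r) for y in range(1, floor(kq) + 1))
        q += (kq - floor(kq)) * pos_prob(floor(kq) + 1, r)
        total += sum(comb(m, i) * q ** i * (1 - q) ** (m - i)
                     for i in range(ceil(m / 2), m + 1))
    return total
```

In the $n=130$, $m=9$, $k=30$ setting this reproduces the behaviour described above: an expected size slightly above 27 with $\varepsilon = 0$, and close to the target 30 with $\varepsilon = 0.13$.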
\begin{figure}
\centering
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{expected_size2.pdf}
\caption{Expected Size}
\label{fig:expected_size}
\end{subfigure}%
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width = 0.98\linewidth]{accept_exp_k.png}
\caption{Expected Accuracy}
\label{fig:accept_exp_k}
\end{subfigure}
\caption{(a) Expected size of the accepting set returned by the algorithm when $n=130, k=30$ and varying $m$. (b) Expected accuracy and accepting size for different values of $k$. $n=130$, $m=9$ and $\varepsilon = 0.15$ were used for this figure. The red line shows the expected accepting size and the blue line shows the accuracy.}
\label{fig:size_accuracy}
\end{figure}
Recall that the above analysis assumes reviewers to be accurate. If this assumption fails, we cannot provide any guarantees even for the expected size of the accepting set. It is also easy to construct edge cases in which, in the worst case, everyone or no one is selected.
\begin{example}
Consider the setting with 3 agents with everyone reviewing each other and suppose we want to select one individual (i.e., $n=3, m=2$ and $k=1$). Suppose agent 1 reviews 2 above 3, agent 2 reviews 3 above 1 and agent 3 reviews 1 above 2. The nomination quota with $\varepsilon = 0$ is $\frac{2}{3}$ and every agent is ranked in the first place once. Hence, each agent is selected with probability $\frac{2}{3}$ independently and so there exists a realisation where no one is selected as well as one where everyone is selected.
\end{example}{}
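The probabilities in this example can be tallied exactly; a minimal check in Python with exact rationals (variable names are ours):

```python
from fractions import Fraction

# n=3, m=2, k=1: each agent is ranked first in exactly one review pool and
# is nominated there with probability 2/3; since ceil(m/2) = 1, a single
# nomination suffices, so each agent is selected independently w.p. 2/3.
p = Fraction(2, 3)
prob_no_one = (1 - p) ** 3   # all three independent nominations fail
prob_everyone = p ** 3       # all three succeed

assert prob_no_one == Fraction(1, 27)
assert prob_everyone == Fraction(8, 27)
```

Both extreme realisations thus occur with positive probability, as claimed.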
In Section \ref{sec:experiments} we consider a realistic setting that includes a noise model for the reviews and discuss the accepting size and the performance of \text{\textsc{PeerNomination}}\xspace.
\subsection{Expected Size and Accuracy}
Above we derived the probability of acceptance given a position in the ground truth, assuming no noise, before introducing the parameter $\varepsilon$. It is easy to adapt this expression to include $\varepsilon$: simply update the nomination quota when computing $q_r$ in Equation \ref{eq:q_r}. Hence, let $k^\varepsilon_q = k_q + \varepsilon$ and
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{equation*}
\begin{split}
q_r^\varepsilon &:= \sum_{y=1}^{\lfloor k^\varepsilon_q \rfloor} \mathbb{P}[Y = y | R = r] \\
&+ \left(k^\varepsilon_q - \lfloor k^\varepsilon_q \rfloor \right) \mathbb{P}[Y = \lfloor k^\varepsilon_q \rfloor + 1 | R = r]
\end{split}
\end{equation*}
\end{minipage}
}
\vspace{0.5em}
This gives us $\mathbb{P}[\varepsilon\textrm{-accept} | R = r]$ for each ground truth position by simply replacing $q_r$ in Equation \ref{eq:prob} by $q^\varepsilon_r$. The expected size is again given by a similar expression:
\begin{equation*}
\mathbb{E}[\textrm{accepting size}] = \sum_{r = 1}^{n} \mathbb{P}[\varepsilon\textrm{-accept} | R = r]
\end{equation*}{}
It is now in principle easy to derive the expected accuracy of the algorithm. However, since the algorithm's output is inexact, there are multiple accuracy measures to consider, as is often the case for classification algorithms \cite{bishop2006pattern}. For example, we might care about how many agents of the true top $k$ we have selected (recall) or that we do not select too many agents from outside of it (false positive rate). We focus on the former, which we note is elsewhere referred to as {\em accuracy} \cite{Aziz2019}. The connection with classification metrics will be further explored in Section \ref{sec:experiments}.
Now, the {\em expected recall} is simply the sum of the probability of selection over all true top $k$ positions, divided by $k$:
\begin{equation*}
\mathbb{E}[\textrm{recall}] = \frac{1}{k} \sum_{r = 1}^{k} \mathbb{P}[\varepsilon\textrm{-accept} | R = r]
\end{equation*}{}
Again, the complexity of these expressions hinders theoretical analysis but Figure \ref{fig:accept_exp_k} shows a typical output for different values of $k$.
While its performance appears good in isolation, it is important to compare \text{\textsc{PeerNomination}}\xspace with other peer selection mechanisms which we do in Section \ref{sec:experiments}.
\subsection{Strategyproofness and Monotonicity}
Our main desired property is that of impartiality or strategyproofness. Luckily, this comes almost for free since the agents are chosen independently.
\begin{proposition}
The mechanism is strategyproof, i.e., no agent can affect their chances of selection using any strategy.
\end{proposition}{}
We also want the algorithm to be \textit{monotonic}: receiving better reviews does not hurt an agent's chances of selection.
\begin{proposition}
The mechanism is monotonic, i.e., if a reviewer increases their ranking of an agent, that agent's probability of selection is not decreased.
\end{proposition}{}
\begin{proof}
Suppose $j$ is reviewed by $i$ and consider the probability of selecting $j$ given the original review of $i$, and a modified one where $j$ is ranked higher. There are three cases:
\begin{enumerate}[itemsep=0em]
\item $j$ was already inside the integer part of the nomination quota in the original review, or $j$ is still completely outside of the nomination quota in the modified review. In both cases $j$ was already certain to be nominated or not nominated, respectively, by $i$, hence their probability does not change.
\item $j$ moves from being a fractional nominee to being a full nominee increasing the chances of nomination (by $1 - (k_q - \lfloor k_q \rfloor)$), hence increasing their chances of selection.
\item $j$ moves from being not nominated to be fractionally nominated increasing the chance of nomination (by $k_q - \lfloor k_q \rfloor$), hence increasing the chances of selection.
\end{enumerate}{}
In all cases $j$'s chances of selection do not decrease, completing the proof. %
\end{proof}{}
Notice that in the definition of the algorithm we stipulate that $\varepsilon$ is part of the input. One might be tempted to calculate $\varepsilon$ after collecting the reviews in order to adjust the output size to be exactly $k$; however, this is undesirable for several reasons. Firstly, the run of the algorithm is non-deterministic, hence it might be impossible to find a value of $\varepsilon$ that guarantees such an output size on every run. Secondly, and most importantly, this would eliminate strategyproofness, since an agent could now estimate that reporting an untruthful review would decrease the size of the accepting set, forcing the mechanism to increase $\varepsilon$ and thus increase their chances of selection.
\section{Simulation Experiments}\label{sec:experiments}
We draw a novel connection between inexact peer selection and the literature on classification in machine learning \cite{bishop2006pattern}. Within this empirical framework, we run experiments demonstrating that \text{\textsc{PeerNomination}}\xspace outperforms the other proposed mechanisms.
\subsection{Classification Measures}
The usual and intuitive way to measure the ``accuracy'' of an exact peer-selection mechanism is to count how many agents from the top $k$ positions in the ground truth have been selected, as a proportion of all $k$ agents selected. This allows us to compare exact peer-selection mechanisms, as was done in \cite{Aziz2019}. However, comparison with inexact mechanisms is less obvious. Since the accepting set is not guaranteed to be of size exactly $k$, any output with more than $k$ agents may artificially increase the accuracy of the inexact mechanism, and the opposite holds for any smaller output. One option is to measure the accuracy as a proportion of the output size; however, this approach overrates outputs that are accurate but much smaller than $k$.
Inexactness allows us to view peer selection as a classification problem in which selection means positive classification. We can then view the selected agents from the true top $k$ as true positives and the non-selected agents from outside the true top $k$ as true negatives. We apply the standard classification accuracy measures \cite{bishop2006pattern} such as recall and precision to \text{\textsc{PeerNomination}}\xspace to analyse its performance.
More formally, let $S$ be the set of agents selected by the algorithm and $S^+ = \{r \in S \mid \textrm{rank}(r) \leq k \}$ the set of selected agents that are in the true top $k$, i.e., true positives (TP). Similarly, we can use $S^- = \{r \in S \mid \textrm{rank}(r) > k \}$ for false positives (FP).
Hence we can define:
$\textrm{TP} = |S^+|$,
$\textrm{FP} = |S^-| = |S| - \textrm{TP}$,
true negatives $\textrm{TN} = | \{ r \notin S \mid \textrm{rank}(r) > k \} | = n - k - \textrm{FP}$, and
false negatives $\textrm{FN} = | \{ r \notin S \mid \textrm{rank}(r) \leq k \} | = n - |S| - \textrm{TN}$.
We can now look at some of the standard performance metrics: Positive Predictive Value (PPV, aka Precision), True Positive Rate (TPR, aka Recall) and False Positive Rate (FPR), defined as follows: %
\noindent
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{align*}
& \textrm{PPV} := \frac{\textrm{TP}}{\textrm{TP} + \textrm{FP}} & &
\textrm{TPR} := \frac{\textrm{TP}}{\textrm{TP} + \textrm{FN}} & &
\textrm{FPR} := \frac{\textrm{FP}}{\textrm{TN} + \textrm{FP}}
\end{align*}
\end{minipage}
}
\vspace{0.5em}
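Given an accepting set, these metrics are immediate to compute; a short Python sketch (identifying agents with their ground truth positions, an encoding choice of ours):

```python
def classification_metrics(selected, n, k):
    """PPV, TPR and FPR for an accepting set `selected`, where agent i's
    ground truth rank is i + 1, so agents 0..k-1 form the true top k."""
    tp = sum(1 for r in selected if r < k)   # selected and truly top-k
    fp = len(selected) - tp                  # selected but outside top-k
    fn = k - tp                              # top-k agents left out
    tn = n - k - fp                          # correctly rejected
    ppv = tp / (tp + fp)
    tpr = tp / (tp + fn)
    fpr = fp / (tn + fp)
    return ppv, tpr, fpr
```

For example, with $n=10$, $k=4$ and accepting set $\{0,1,2,5\}$ this gives $\textrm{PPV} = \textrm{TPR} = 0.75$ and $\textrm{FPR} = 1/6$.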
Furthermore, we can view the slack parameter $\varepsilon$ as the sensitivity threshold akin to the probability threshold in the machine learning literature (see e.g., \cite{flach}). %
This suggests a method to construct the Precision-Recall (PR) and Receiver Operating Characteristic (ROC) curves: vary $\varepsilon$ so that the nomination quota ranges between 0 and $m$, and measure the Precision, Recall and False Positive Rate at each value. An example is presented in Figure \ref{fig:roc}.
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width=.98\linewidth]{roc.pdf}
\caption{ROC Curve}
\end{subfigure}%
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{rc.pdf}
\caption{PR Curve \\\hspace{\textwidth}}
\end{subfigure}
\vspace{-1.5em}
\caption{ROC and PR curves for \text{\textsc{PeerNomination}}\xspace. They were computed analytically with $n=120, m=8, k=25$.}
\label{fig:roc}
\end{figure}
The curves show the trade-off between sensitivity (TPR) and inclusivity (FPR). %
As we follow the ROC curve, which corresponds to gradually increasing the nomination quota, the TPR increases quickly, i.e., we do not need to accept \textit{too many} extra agents in order to select all the deserving ones. On the other hand, we can still achieve a TPR of around 0.8 with an FPR very close to 0. This shows that we can select around 80\% of the agents in the true top $k$ if we concentrate on not selecting the ``undeserving" individuals. While the curves are interesting on their own, we want to be able to compare them to other peer-selection mechanisms, so an important direction is finding a generalizable way of constructing such curves for them.
\begin{figure*}
\centering
\includegraphics[width=.85\linewidth]{test_final.pdf}
\caption{Comparison of prominent algorithms against a Vanilla baseline (left) and against the ground truth ranking of a Mallows model (right). $n=120, l=4, \varphi=0.5$. Top row: $m=9$. Bottom row: $k=30$. \text{\textsc{PeerNomination}}\xspace outperforms the others across settings, except at $m=5$.}
\label{fig:results}
\end{figure*}{}
\subsection{Experimental Setup}
We extend the testing framework developed by \citet{Aziz2019}, using methods from \textsc{PrefLib} \cite{MaWa17,MaWa13a}. Our code and data are available online.\footnote{\url{https://github.com/nmattei/peerselection}} As in \citet{Aziz2019}, we set $n=120$ and tested the algorithm on various values of $k$ and $m$. The test values for $k$ were $15, 20, 25, 30, 35$ and the test values for $m$ were $5, 7, 9, 11$. For the algorithms that rely on partitioning, we chose the number of partitions, $l$, to be 4.
For each setting of the parameters we generated a random $m$-regular assignment matching reviewers to reviewees. As in other works, we model the reviews of each agent using a Mallows model \cite{Mal57}. In a Mallows model we provide a (random) ground truth ranking $\pi$ and a noise parameter $\phi$. If we set $\phi=0$ then agents always report $\pi$ as their ranking, i.e., they are all exactly correct. As we increase $\phi$, agents report increasingly inaccurate rankings as a function of the Kendall tau distance between $\pi$ and all possible rankings. Note that each agent draws from this distribution independently. Hence, by varying $\phi$ we can test the robustness of our algorithms to errors in the rankings submitted by the agents. Mallows models have a long history in machine learning and group decision-making, as they can simulate noisy observations of a ground truth ranking and be sampled efficiently \cite{Mal57,Xia19,lu2011learning}.
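As a sketch of how such samples can be drawn, the repeated insertion method of \citet{lu2011learning} builds a Mallows sample one item at a time (the function name is ours):

```python
import random

def sample_mallows(reference, phi, rng=random):
    """Repeated-insertion sampling from a Mallows model: the i-th item of
    the reference ranking is inserted at position j in {1, ..., i} with
    probability proportional to phi ** (i - j)."""
    ranking = []
    for i in range(1, len(reference) + 1):
        # index pos (0-based) corresponds to position j = pos + 1
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        pos = rng.choices(range(i), weights=weights)[0]
        ranking.insert(pos, reference[i - 1])
    return ranking
```

Setting $\phi = 0$ reproduces the reference ranking exactly, while $\phi = 1$ yields a uniformly random permutation.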
The experiment was repeated 1000 times for each setting, after which the average recall was calculated giving us high confidence in our results. For \text{\textsc{PeerNomination}}\xspace, we used theoretical estimates of $\varepsilon$ to achieve the right expected size of the accepting set.
The error bars in Figures \ref{fig:results} and \ref{fig:fair_results} represent 1 standard deviation of the data. In line with the observation in \cite{Aziz2019}, varying the dispersion parameter $\phi$ of the Mallows noise did not have a significant effect on the accuracy of any of the algorithms until we approached $\phi = 1.0$, where all reviewers report a completely random ordering. The results for $\phi = 0.5$ are presented in Figure \ref{fig:results}.
\text{\textsc{PeerNomination}}\xspace does not require an explicit partitioning, making it more flexible. Another issue with partitioning, as pointed out in \cite{Aziz2019}, is that the performance of both EDP and Partition degrades as we increase the number of clusters. In another test we varied the number of clusters $l$ between 2 and 10. We saw a decrease in performance of about $3$--$4\%$ for the partition-based methods, while the performance of \text{\textsc{PeerNomination}}\xspace remained constant.
In another testing setup we adopted a slightly different procedure in order to ensure a level comparison. In each simulation, we generate a random $m$-regular assignment, run \text{\textsc{PeerNomination}}\xspace using the target $k$ as an input, measure the size of the output and run EDP using this size as the input $k$. A similar experiment was also performed for the inexact version of Dollar Partition in \citet{Aziz2016}. This ensures that during each simulation both algorithms return the same number of agents for selection. The results of this comparison are presented in Figure \ref{fig:fair_results}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fair.pdf}
\caption{Results of the forced size experiment: \text{\textsc{PeerNomination}}\xspace and EDP are always guaranteed to return the same number of agents.}
\label{fig:fair_results}
\end{figure}
\subsection{Results}
Again following \citet{Aziz2019}, and as depicted in Figure \ref{fig:results}, we compared a selection of impartial peer selection algorithms and Vanilla (Borda count). Borda is a classic social choice rule that is known not to be strategyproof but is optimal in the ordinal peer-ranking setting under the assumption of no noise \cite{caragiannis_krimpas_voudouris_2016}, and thus represents an optimistic baseline in the presence of no manipulators. \text{\textsc{PeerNomination}}\xspace outperforms EDP significantly in the majority of the settings we have considered. The only setting where EDP outperforms \text{\textsc{PeerNomination}}\xspace is $m=5$, a low information setting in which reviewers are given a nomination quota only fractionally above 1. However, our algorithm improves quickly with $m$: even at $m=9$, shown in Figure \ref{fig:results}, \text{\textsc{PeerNomination}}\xspace approaches the performance of Borda across the values of $k$ we considered.
It is worth noting that \text{\textsc{PeerNomination}}\xspace tends to return a set slightly larger than $k$ on average (usually $<1$ additional agent). Nevertheless, even when the testing forces \text{\textsc{PeerNomination}}\xspace and EDP to return the same number of agents every time, we see that \text{\textsc{PeerNomination}}\xspace has an overall advantage, as shown in Figure \ref{fig:fair_results}. Again, EDP only does better in the low information setting ($m=5$).
\section{Conclusion}\label{sec:conclusion}
There are many avenues for future work: \text{\textsc{PeerNomination}}\xspace, which already does not rely on a predefined clustering, can be extended to not require an $m$-regular assignment. Moreover, each reviewer need not even declare a full ranking over their review pool, but could simply declare the nominees for selection and one nominee to be fractionally selected. This also suggests a possible extension of the algorithm that makes use of the declared rankings in full, as this data is currently discarded.
Crucially, the usefulness of our algorithm depends on returning an accepting set of size close to $k$. We showed that this can be achieved using the parameter $\varepsilon$; however, very high levels of noise can still affect the size of the accepting set. This suggests an important research direction: testing different models of agent behaviour in detail.
{\small
\section{Introduction}\label{sec:introduction}
Peer selection, where agents must choose a subset of themselves for an award or a prize, is one of the pillars for quality assessment in scientific contexts and beyond. While current methods rely on expert panels, there is increasing attention to how to design trustworthy mechanisms that improve the accuracy and reliability of the outcome, keeping the procedure simple and cheap. The latter is particularly relevant in open online courses \cite{piech2013tuned}, where hiring professional graders is prohibitively expensive. Indeed, even IJCAI 2020 is implementing a portion of this system, requiring authors who submit papers to agree to be reviewers themselves.
The importance of having an ``objective'' assessment in conference reviewing has been brought to light by the famous NIPS experiment \cite{langford_2015,shah2018design}: of all papers submitted to NIPS 2014, 10\% were reviewed twice by two independent committees which, astonishingly, agreed on less than half of the accepted papers in their pool. Whether the outcome was due to bias, incompetence or simply well-reasoned disagreement is still unclear. What is clear, though, is that the current solutions show undesirable properties. %
Methods for {\em impartial} peer selection, where self-interested individuals assess one another in such a way that none of them has an incentive to misrepresent their evaluation, have a long-standing tradition in economics, e.g., \cite{Douceur2009,Holzman2013,Dollar}, which has in turn encouraged several groups in artificial intelligence and computer science more broadly to investigate these problems, e.g., \cite{KLMP15,AFPT11a,XZSS19,Aziz2019}.
The interest in such methods has culminated in a pilot scheme by the US National Science Foundation (NSF) \cite{naghizadeh2013incentives}, called for by \citet{MerrifieldSaari}, in which each principal investigator (PI) was asked to rank 7 proposals from other PIs. The rankings were then combined using the Borda score with the additional truth-telling incentive of receiving a bonus the closer one gets to the average of the other reviewers' marks. Though this method is not impartial, and leads to a Keynesian beauty contest \cite{Key36a}, the results were encouraging.
Research in artificial intelligence and economics has led to a number of proposals for algorithms choosing a set of $k$ agents from amongst themselves, commonly known as the peer selection problem. We review some of the most prominent ones here to which we will compare our proposal. %
\begin{description}[itemsep=0em]
\item [Credible Subset \cite{KLMP15}.] In Credible Subset (CS), reviewers assign scores to their allocated proposals and the potential manipulators, i.e., the reviewers that could be within the $k$ funded ones, are also selected to be funded, with a given probability. While the system is strategy-proof, it will yield an empty set of funded proposals in a number of cases \cite{Aziz2016}.
\item[Dollar Raffle \cite{Dollar}.] In the Dollar Raffle method (DR), each reviewer distributes a score in the interval $[0,1]$ among their reviewees, rather than allocating scores independently as in CS. %
\item[Exact Dollar Partition \cite{Aziz2019}.] Reviewers are clustered at random and rank peers in different clusters. Using a randomized rounding scheme based on the shares computed with the method of \citet{Dollar}, the top proposals of each cluster are selected, depending on their clusters' importance. Exact Dollar Partition is strategy-proof and has been shown to be the most accurate available method \cite{Aziz2019}.
\end{description}
We compare our algorithm against two more basic procedures: Vanilla, which selects the $k$ agents with the highest total Borda score based on the reviews received; and Partition, which, instead, divides the agents into a set of clusters and selects a predetermined number of them from each (typically $k$ divided by the number of clusters) as rated by the agents from the other clusters. Notice that, unlike Partition, Vanilla is not impartial but is commonly used as a baseline for comparison.
Relevant recent developments with a different focus use voting rules to aggregate ranks, e.g., the $k$-Partite and Committee algorithms \cite{KKKKP18a} and Divide-and-Rank \cite{XZSS19}. Other methods are approval-based but only focus on single-agent selection: Permutation~\cite{FeKl14a} and Slicing~\cite{BNV14a}. Additional work in this area also focuses on assignment and calibration issues \cite{wang2019your,LianMNW18}.
\smallbreak
\noindent
\textbf{Our Contribution.}
We present \text{\textsc{PeerNomination}}\xspace, an impartial peer selection method for scenarios where $n$ agents review and are reviewed by $m$ others, with the goal of selecting $k$ of them. Each proposal is considered independently and is selected only if it falls in the top $\frac{k}{n}m$ positions of the majority of its reviewers' (partial) rankings, using a probabilistic completion if this number is not an integer.
This way we relax the exactness requirement, in the sense that our algorithm is not guaranteed to select exactly $k$ proposals every time.
However, under some mild rationality assumptions, the algorithm does so in expectation. Unlike other well-known peer reviewing methods, e.g., Exact Dollar Partition (EDP), \text{\textsc{PeerNomination}}\xspace does not rely on clustering nor on reviewers submitting complete rankings, allowing more flexibility in where and when it may be deployed. %
We compare the performance of \text{\textsc{PeerNomination}}\xspace against an underlying ground truth ranking when agent rankings are drawn according to a Mallows model \cite{Mal57,Xia19}, exactly deriving its expected accuracy analytically. Moreover, we empirically compare our method against other peer selection mechanisms, for which analytic performance bounds are unknown, using a number of well-known classification measures. Our results show that \text{\textsc{PeerNomination}}\xspace improves on the current best performance in terms of accuracy known from the literature and relies on milder assumptions on the underlying reviewer graph. This suggests that relaxing the exactness requirement in peer selection outcomes can give us an improved performance with respect to the accuracy of the accepted set.
\smallbreak
\noindent
\textbf{Paper Structure.}
In Section \ref{sec:preliminaries} we set up the basic terminology and notation. Section \ref{sec:model} presents our algorithm and its theoretical properties. Section \ref{sec:experiments} compares its accuracy against the main existing alternatives, under various metrics. %
\section{Preliminaries}\label{sec:preliminaries}
We work with a set of agents $\mathcal{N} = \{ 1, 2, \dots, n \}$ and an order over them, induced by their index, which represents the final ranking the agents would have if they were to be assessed objectively. We refer to this order as the {\em ground truth}. Each agent is assigned $m$ other agents to review and is in turn reviewed by $m$ others.
We represent such an $m$-regular assignment as a function $A: \mathcal{N} \rightarrow 2^{\mathcal{N}}$ and denote $i$'s review pool as $A(i)$, while $A^{-1}(i)$ denotes $i$'s reviewers. It is worth noting that while generating a random $m$-regular assignment is easy for small $m$ (by generating an $m$-regular bipartite graph), sampling one uniformly is non-trivial and is an active area of study (e.g., see \cite{berger2010uniform}). In this paper, we assume uniform sampling in Section \ref{sec:model} to keep the theoretical analysis tractable, but not in the experiments of Section \ref{sec:experiments}. In practice, we observed a negligible effect on the performance of the algorithms when using different assignment-generating procedures.
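For concreteness, a minimal Python sketch of one simple (non-uniform) way to generate an $m$-regular assignment, using a shuffled circular construction; the function name and construction are ours, for illustration only:

```python
import random

def regular_assignment(n, m, seed=None):
    """Generate an m-regular review assignment: each agent reviews m
    others and is reviewed by m others.  This shuffled circular
    construction is simple but does NOT sample uniformly from all
    m-regular assignments."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    # the reviewer at circle position p reviews the next m agents
    return {order[p]: [order[(p + j) % n] for j in range(1, m + 1)]
            for p in range(n)}

A = regular_assignment(10, 3, seed=0)
```

Any procedure producing an $m$-regular review graph without self-reviews can be substituted here.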
In real-world settings, agents can only review a limited number of proposals or papers so $m$ is typically small and constant, given $n$.
Each reviewer $i$ submits a ranking of their review pool $A(i)$, which we represent as a strategy $\sigma_i: A(i) \rightarrow \{1, \dots, m \}$, where $\sigma_i(j)$ is the rank that $i$ assigns to $j$ within $i$'s review pool.
A collection of all declared strategies
is called a \textit{profile} and is denoted by $\sigma$. The unique profile which is consistent with the ground truth is called \textit{truthful}.
After the individual preferences are declared, they are aggregated to select $k$ individuals.
We call a peer selection mechanism \textit{impartial} or \textit{strategyproof} if no agent can affect their chances of selection in any assignment using any strategy.
\section{\text{\textsc{PeerNomination}}\xspace}\label{sec:model}
In this section we present \text{\textsc{PeerNomination}}\xspace and describe its performance analytically.
\subsection{The Algorithm}
A usual requirement for peer selection mechanisms is that they return an accepting set of size exactly $k$ \cite{Aziz2019,AFPT11a,KKKKP18a}. Some approaches have investigated relaxing this assumption \cite{Aziz2016,KLMP15}; most notably, the results by \citet{bjelde_fischer_klimm_2017} show that this relaxation can lead to better optimality approximations. We use this intuition in designing the following algorithm, which returns an accepting set of size $k$ in expectation.
\text{\textsc{PeerNomination}}\xspace works as follows: suppose every agent reviews and is reviewed by $m$ other agents. If an agent is in the true top $k$, we expect them to be ranked in the top $k$ proportion (i.e., top $\frac{k}{n} m$) of their review pool by the majority of the agents that review them, if these were to report their accurate rankings. We say that an agent is \textit{nominated} by a reviewer if they are in the top $k$ proportion of the reviewer's declared ranking over their review pool. Likewise, we refer to $\frac{k}{n} m$ as the \textit{nomination quota}. Hence, for every agent $j$, we look at all reviewers $i_1, \dots, i_m$ reviewing $j$ and select $j$ only if they are nominated by the majority of these reviewers.
As $\frac{k}{n} m$ is unlikely to be an integer, we consider an agent {\em nominated for certain} if they are among the first $\lfloor \frac{k}{n} m \rfloor$ agents in the review pool, where $\lfloor x \rfloor$ denotes the integer part of a positive real number $x$. If they are in the next position (i.e., $\lfloor \frac{k}{n} m \rfloor + 1$), we randomly consider them nominated with probability $\frac{k}{n} m - \lfloor \frac{k}{n} m \rfloor$, that is, the fractional part of the nomination quota. Lastly, if the number of review pools an agent is part of is even, we require them to be nominated by just half of the review pools, not a strict majority.
A crucial observation is that, since each agent is considered independently for selection, the algorithm is not guaranteed to return exactly $k$ agents. However, we will show that the algorithm is close enough to such number if the reviewers submit reviews that are close enough to the ground truth and, moreover, that truth-telling is an equilibrium outcome, i.e., \text{\textsc{PeerNomination}}\xspace is impartial.
The \text{\textsc{PeerNomination}}\xspace algorithm is presented in Algorithm \ref{algorithm}. Note that in the algorithm we introduce the \textit{slack parameter} $\varepsilon$, which extends the nomination quota accordingly. As we show next, this is necessary in some settings to achieve the right expected size of the accepting set.
\begin{algorithm}[h]
\begin{algorithmic}{}
\REQUIRE Assignment $A$, review profile $\sigma$, target quota $k$, slack parameter $\varepsilon$
\ENSURE Accepting set $S$
\STATE Set $\textit{nomQuota} := \frac{k}{n} m +\varepsilon$
\FORALL{$j$ in $\mathcal{N}$}
\STATE Initialise \textit{nomCount} := 0
\FORALL{$i \in A^{-1}(j)$}
\IF{$\sigma_i(j) \leq \lfloor \textit{nomQuota} \rfloor$}
\STATE increment \textit{nomCount} by 1
\ELSIF{$\sigma_i(j) = \lfloor \textit{nomQuota} \rfloor + 1$}
\STATE increment \textit{nomCount} by 1 with probability $\textit{nomQuota} - \lfloor \textit{nomQuota} \rfloor$
\ENDIF
\ENDFOR
\IF{$\textit{nomCount} \geq \lceil \frac{m}{2} \rceil$}
\STATE $S \leftarrow j$
\ENDIF
\ENDFOR
\RETURN $S$
\end{algorithmic}
\caption{\text{\textsc{PeerNomination}}\xspace}
\label{algorithm}
\end{algorithm}{}
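A minimal Python sketch of Algorithm \ref{algorithm} follows; the function names, the circular toy assignment and the truthful profile are ours, for illustration only:

```python
import math, random

def peer_nomination(A, rank, n, k, m, eps=0.0, rng=random):
    """Sketch of PeerNomination: A maps each reviewer to their review
    pool; rank(i, j) is the (1-based) position of j in i's ranking."""
    quota = k / n * m + eps
    fq = math.floor(quota)
    frac = quota - fq
    accepted = set()
    for j in range(n):
        noms = 0
        for i in range(n):
            if j not in A[i]:
                continue
            r = rank(i, j)
            if r <= fq:
                noms += 1                            # certain nominee
            elif r == fq + 1 and rng.random() < frac:
                noms += 1                            # fractional nominee
        if noms >= math.ceil(m / 2):                 # (weak) majority
            accepted.add(j)
    return accepted

# Toy run: 10 agents, pools of size 4, target k = 5, truthful reviews
# (lower agent index = better ground-truth position).
n, k, m = 10, 5, 4
rng = random.Random(1)
order = list(range(n)); rng.shuffle(order)
A = {order[p]: [order[(p + j) % n] for j in range(1, m + 1)] for p in range(n)}
truthful = lambda i, j: sorted(A[i]).index(j) + 1
S = peer_nomination(A, truthful, n, k, m)
```

With truthful reviews the true best agent is nominated by all of its reviewers, while the true worst agent is never nominated.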
\subsection{Expected Size and Slack Parameter}
We now derive the expected size of the accepting set returned by \text{\textsc{PeerNomination}}\xspace as a function of $n, m$ and $k$. %
Since each agent is considered independently, we just need to derive the probability of selection for an agent given their ground truth position. Assume the algorithm is run on an $m$-regular assignment and the reviews are truthful. Note that we assume such an assignment is sampled uniformly and so each review pool is equally likely to be assigned to any reviewer. Firstly, consider the probability of obtaining position $y$ in the sample of size $m$, given position $r$ in the underlying ranking. When drawing the sample, we need to choose $y-1$ individuals out of $r-1$ that are above agent $r$ in the ground truth, and then choose $m-y$ out of $n-r$ that are worse. In total, as expected, we are choosing $m-1$ other agents out of $n-1$. Hence:
\begin{equation*}
\mathbb{P}[Y = y | R = r] = {r-1 \choose y-1} {n-r \choose m-y} \Big/ {n-1 \choose m-1}
\end{equation*}
where $Y$ is a random variable representing the position in the review pool and $R$ is a random variable representing the ground truth position.
Denote now the nomination quota by $k_q := \frac{k}{n} m$ and recall that in any given review pool, the top $\lfloor k_q \rfloor$ agents are nominated for certain and the next position is nominated with probability $k_q - \lfloor k_q \rfloor$. Hence, the probability of being nominated in any pool from position $r$ in the ranking is, independently:
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{equation}
\begin{split}
q_r &:= \sum_{y=1}^{\lfloor k_q \rfloor} \mathbb{P}[Y = y | R = r] \\
&+ \left(k_q - \lfloor k_q \rfloor \right) \mathbb{P}[Y = \lfloor k_q \rfloor + 1 | R = r]
\end{split}
\label{eq:q_r}
\end{equation}
\end{minipage}
}
\vspace{0.5em}
Since each review pool can be regarded as a Bernoulli trial with probability $q_r$ and to be accepted an agent has to be nominated $\lceil m/2 \rceil$ times, the probability of being accepted from position $r$ is given by the cumulative Binomial distribution:
\begin{equation}\label{eq:prob}
\mathbb{P}[\textrm{accept} | R = r] = \sum_{i= \lceil m/2 \rceil}^m {m \choose i} q_r^i (1-q_r)^{m-i}
\end{equation}
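Equations \ref{eq:q_r} and \ref{eq:prob} can be evaluated directly. A minimal Python sketch (function names are ours):

```python
from math import comb, ceil, floor

def pos_prob(y, r, n, m):
    """P[Y = y | R = r]: probability of landing at position y in a
    uniformly drawn review pool of size m, given ground-truth rank r."""
    return comb(r - 1, y - 1) * comb(n - r, m - y) / comb(n - 1, m - 1)

def accept_prob(r, n, m, k, eps=0.0):
    """Nomination probability q_r (Equation 1) followed by the binomial
    tail for a (weak) majority of nominations (Equation 2)."""
    kq = k / n * m + eps
    q = sum(pos_prob(y, r, n, m) for y in range(1, floor(kq) + 1))
    q += (kq - floor(kq)) * pos_prob(floor(kq) + 1, r, n, m)
    return sum(comb(m, i) * q ** i * (1 - q) ** (m - i)
               for i in range(ceil(m / 2), m + 1))
```

With $n=130$, $m=9$, $k=30$ this reproduces the shape of Figure \ref{fig:review_dist}: agents well inside the top $k$ are almost surely accepted, those well outside almost surely rejected.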
\begin{figure}[h]
\centering
\includegraphics[width = .80\linewidth]{accept_prob_comparison2.pdf}
\caption{Probability of being accepted by the algorithm given the position in the ranking when $n=130$ and $k=30$.}
\label{fig:review_dist}
\end{figure}
An illustration of acceptance probabilities as a function of the ground truth position is shown in Figure \ref{fig:review_dist}. We can see that agents that are well inside top $k$ are almost certain to be accepted while those well outside of top $k$ are almost certain to be rejected. The width of the interval around top $k$ for which the probability is away from the extremes is dictated by $m$. Higher $m$ reduces uncertainty by providing more ``trials" for each agent and so narrows the interval.
We can now use the derived probability of acceptance to calculate the expected size of the accepting set. %
Since every individual is accepted independently with probability $\mathbb{P}[\textrm{accept} | R = r]$ and contributes 1 to the size if they are accepted, the expectation is simply $\sum_{r=1}^n \mathbb{P}[\textrm{accept} | R = r]$. The complexity of this expression makes it difficult to analyse explicitly. However, Figure \ref{fig:expected_size} shows the typical behaviour of the expected size as a function of $m$. We observe that it approaches $k$ as $m$ increases. However, for small values of $m$ the expected size can deviate significantly from $k$, especially when $m$ is odd (recall that agents need a clear majority in this case, making selection more difficult).
To tackle these issues, we introduce an additional parameter $\varepsilon$ that allows us to control the size of the accepting set more finely. If $\varepsilon$ is set to a non-zero value (usually a positive one), we extend the nomination quota in each review pool by this amount. Usually this increment simply contributes to the probability that the ``fractional nominee'' is nominated. For example, in the setting $n=130, m=9$ and $k=30$, Figure \ref{fig:expected_size} shows an expected size slightly above 27, while our aim is 30. Setting $\varepsilon = 0.13$ yields an expected size very close to $30$. For most practical applications $\varepsilon \in [-0.05, 0.15]$ suffices, meaning the original algorithm is rather well-behaved. Note that this is in contrast to other inexact mechanisms in the literature: Credible Subset returns an empty set with positive probability \cite{KLMP15}, while the Dollar Partition method may return as many additional agents as the number of clusters \cite{Aziz2016}.
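The effect of $\varepsilon$ on the expected size can be checked numerically. A minimal self-contained Python sketch, assuming truthful reviews as in the analysis above (function names are ours):

```python
from math import comb, ceil, floor

def accept_prob(r, n, m, k, eps=0.0):
    """Probability that the agent at ground-truth rank r is accepted."""
    def pos(y):  # P[Y = y | R = r] for a uniformly drawn pool of size m
        return comb(r - 1, y - 1) * comb(n - r, m - y) / comb(n - 1, m - 1)
    kq = k / n * m + eps
    q = sum(pos(y) for y in range(1, floor(kq) + 1))
    q += (kq - floor(kq)) * pos(floor(kq) + 1)
    return sum(comb(m, i) * q ** i * (1 - q) ** (m - i)
               for i in range(ceil(m / 2), m + 1))

def expected_size(n, m, k, eps=0.0):
    """Expected size of the accepting set: sum of acceptance probabilities."""
    return sum(accept_prob(r, n, m, k, eps) for r in range(1, n + 1))

base = expected_size(130, 9, 30)             # below the target k = 30
tuned = expected_size(130, 9, 30, eps=0.13)  # much closer to 30
```

Raising $\varepsilon$ raises every $q_r$, and hence the expected size, monotonically.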
\begin{figure}
\centering
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{expected_size2.pdf}
\caption{Expected Size}
\label{fig:expected_size}
\end{subfigure}%
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width = 0.98\linewidth]{accept_exp_k.png}
\caption{Expected Accuracy}
\label{fig:accept_exp_k}
\end{subfigure}
\caption{(a) Expected size of the accepting set returned by the algorithm when $n=130, k=30$ and varying $m$. (b) Expected accuracy and accepting size for different values of $k$. $n=130$, $m=9$ and $\varepsilon = 0.15$ were used for this figure. The red line shows the expected accepting size and the blue line shows the accuracy.}
\label{fig:size_accuracy}
\end{figure}
Recall that the above analysis assumes reviewers to be accurate. If this assumption fails, we cannot provide any guarantees even for the expected size of the accepting set. It is also easy to construct marginal cases in which everyone or no one is selected in the worst case scenario.
\begin{example}
Consider the setting with 3 agents with everyone reviewing each other and suppose we want to select one individual (i.e., $n=3, m=2$ and $k=1$). Suppose agent 1 reviews 2 above 3, agent 2 reviews 3 above 1 and agent 3 reviews 1 above 2. The nomination quota with $\varepsilon = 0$ is $\frac{2}{3}$ and every agent is ranked in the first place once. Hence, each agent is selected with probability $\frac{2}{3}$ independently and so there exists a realisation where no one is selected as well as one where everyone is selected.
\end{example}{}
In Section \ref{sec:experiments} we consider a realistic setting that includes a noise model for the reviews and discuss the accepting size and the performance of \text{\textsc{PeerNomination}}\xspace.
\subsection{Expected Size and Accuracy}
Above we derived the probability of acceptance given a position in the ground truth, assuming no noise, before introducing the parameter $\varepsilon$. It is easy to adapt this expression to include $\varepsilon$: simply update the nomination quota when computing $q_r$ in Equation \ref{eq:q_r}. Hence, let $k^\varepsilon_q = k_q + \varepsilon$ and
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{equation*}
\begin{split}
q_r^\varepsilon &:= \sum_{y=1}^{\lfloor k^\varepsilon_q \rfloor} \mathbb{P}[Y = y | R = r] \\
&+ \left(k^\varepsilon_q - \lfloor k^\varepsilon_q \rfloor \right) \mathbb{P}[Y = \lfloor k^\varepsilon_q \rfloor + 1 | R = r]
\end{split}
\end{equation*}
\end{minipage}
}
\vspace{0.5em}
This gives us $\mathbb{P}[\varepsilon\textrm{-accept} | R = r]$ for each ground truth position by simply replacing $q_r$ in Equation \ref{eq:prob} by $q^\varepsilon_r$. The expected size is again given by a similar expression:
\begin{equation*}
\mathbb{E}[\textrm{accepting size}] = \sum_{r = 1}^{n} \mathbb{P}[\varepsilon\textrm{-accept} | R = r]
\end{equation*}{}
It is now in principle easy to derive the expected accuracy of the algorithm. However, since the algorithm's output is inexact, there are multiple accuracy measures to consider, as is often the case for classification algorithms \cite{bishop2006pattern}. For example, we might care about how many agents of the true top $k$ we have selected (recall) or that we do not select too many agents from outside of it (false positive rate). We focus on the former, which we note is elsewhere referred to as {\em accuracy} \cite{Aziz2019}. The connection with classification metrics will be further explored in Section \ref{sec:experiments}.
Now, the {\em expected recall} is simply the sum of the probability of selection over all true top $k$ positions, divided by $k$:
\begin{equation*}
\mathbb{E}[\textrm{recall}] = \frac{1}{k} \sum_{r = 1}^{k} \mathbb{P}[\varepsilon\textrm{-accept} | R = r]
\end{equation*}{}
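The expected recall is equally direct to compute. A compact self-contained sketch, again assuming truthful reviews (function names are ours):

```python
from math import comb, ceil, floor

def accept_prob(r, n, m, k, eps=0.0):
    """Probability of epsilon-acceptance from ground-truth rank r."""
    def pos(y):  # P[Y = y | R = r]
        return comb(r - 1, y - 1) * comb(n - r, m - y) / comb(n - 1, m - 1)
    kq = k / n * m + eps
    q = sum(pos(y) for y in range(1, floor(kq) + 1)) \
        + (kq - floor(kq)) * pos(floor(kq) + 1)
    return sum(comb(m, i) * q ** i * (1 - q) ** (m - i)
               for i in range(ceil(m / 2), m + 1))

def expected_recall(n, m, k, eps=0.0):
    """Average acceptance probability over the true top-k positions."""
    return sum(accept_prob(r, n, m, k, eps) for r in range(1, k + 1)) / k
```

Increasing $\varepsilon$ trades a larger accepting set for higher recall, which is the behaviour plotted in Figure \ref{fig:accept_exp_k}.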
Again, the complexity of these expressions hinders theoretical analysis but Figure \ref{fig:accept_exp_k} shows a typical output for different values of $k$.
While its performance appears good in isolation, it is important to compare \text{\textsc{PeerNomination}}\xspace with other peer selection mechanisms which we do in Section \ref{sec:experiments}.
\subsection{Strategyproofness and Monotonicity}
Our main desired property is that of impartiality or strategyproofness. Luckily, this comes almost for free since the agents are chosen independently.
\begin{proposition}
The mechanism is strategyproof, i.e., no agent can affect their chances of selection using any strategy.
\end{proposition}{}
We also want the algorithm to be \textit{monotonic}: receiving better reviews does not hurt the chances of selection.
\begin{proposition}
The mechanism is monotonic, i.e., if a reviewer increases their ranking of an agent, that agent's probability of selection is not decreased.
\end{proposition}{}
\begin{proof}
Suppose $j$ is reviewed by $i$ and consider the probability of selecting $j$ given the original review of $i$, and a modified one where $j$ is ranked higher. There are three cases:
\begin{enumerate}[itemsep=0em]
\item $j$ was already inside the integer part of the nomination quota in the original review, or $j$ is still completely outside of the nomination quota in the modified review. In both cases $j$ was already certain to be nominated or not nominated, respectively, by $i$, hence their probability does not change.
\item $j$ moves from being a fractional nominee to being a full nominee, increasing their chances of nomination (by $1 - (k_q - \lfloor k_q \rfloor)$) and hence their chances of selection.
\item $j$ moves from not being nominated to being fractionally nominated, increasing their chance of nomination (by $k_q - \lfloor k_q \rfloor$) and hence their chances of selection.
\end{enumerate}{}
In all cases $j$'s chances of selection do not decrease, completing the proof. %
\end{proof}{}
Notice that in the definition of the algorithm we stipulate that $\varepsilon$ is part of the input. One could be tempted to calculate $\varepsilon$ after collecting the reviews in order to adjust the output size to be exactly $k$; however, this is undesirable for several reasons. Firstly, the run of the algorithm is non-deterministic, hence it might be impossible to find a value of $\varepsilon$ that guarantees such an output size on every run. Secondly, and most importantly, this would eliminate strategyproofness, since an agent could now estimate that reporting an untruthful review would decrease the size of the accepting set, forcing the mechanism to increase $\varepsilon$ and so increase their own chances of selection.
\section{Simulation Experiments}\label{sec:experiments}
We draw a novel connection between inexact peer selection and the literature on classification in machine learning \cite{bishop2006pattern}. Within this empirical framework, we run experiments demonstrating that \text{\textsc{PeerNomination}}\xspace outperforms the other mechanisms proposed in the literature.
\subsection{Classification Measures}
The usual and intuitive way to measure the ``accuracy'' of an exact peer-selection mechanism is counting how many agents from the top $k$ positions in the ground truth have been selected, as a proportion of all $k$ agents selected. This allows us to compare exact peer-selection mechanisms, as was done in \cite{Aziz2019}. However, comparison with inexact mechanisms is less obvious. Since the accepting set is not guaranteed to be of size exactly $k$, any output with more than $k$ agents may artificially increase the accuracy of the inexact mechanism, and the opposite holds for any smaller output. One option is to measure the accuracy as a proportion of the output size; however, this approach will overrate outputs that are accurate but much smaller than $k$.
Inexactness allows us to view peer selection as a classification problem in which selection means positive classification. We can then view the selected agents from the true top $k$ as true positives and the non-selected agents from outside the true top $k$ as true negatives. We apply the standard classification accuracy measures \cite{bishop2006pattern} such as recall and precision to \text{\textsc{PeerNomination}}\xspace to analyse its performance.
More formally, let $S$ be the set of agents selected by the algorithm and $S^+ = \{r \in S \mid \textrm{rank}(r) \leq k \}$ the set of selected agents that are in the true top $k$, i.e., true positives (TP). Similarly, we can use $S^- = \{r \in S \mid \textrm{rank}(r) > k \}$ for false positives (FP).
Hence we can define:
$\textrm{TP} = |S^+|$,
$\textrm{FP} = |S^-| = |S| - \textrm{TP}$,
true negatives $\textrm{TN} = | \{ r \notin S \mid \textrm{rank}(r) > k \} | = n - k - \textrm{FP}$, and
false negatives $\textrm{FN} = | \{ r \notin S \mid \textrm{rank}(r) \leq k \} | = n - |S| - \textrm{TN}$.
We can now look at some of the standard performance metrics: {Positive Predictive Value (PPV) (aka Precision)}, {True Positive Rate (TPR) (aka Recall)} and {False Positive Rate (FPR)}, defined as follows: %
\noindent
\resizebox{.9\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{align*}
& \textrm{PPV} := \frac{\textrm{TP}}{\textrm{TP} + \textrm{FP}} & &
\textrm{TPR} := \frac{\textrm{TP}}{\textrm{TP} + \textrm{FN}} & &
\textrm{FPR} := \frac{\textrm{FP}}{\textrm{TN} + \textrm{FP}}
\end{align*}
\end{minipage}
}
\vspace{0.5em}
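In code, the counts and metrics above are straightforward. A minimal sketch, assuming our index-as-ground-truth convention (the function name is ours):

```python
def classification_metrics(selected, n, k):
    """PPV, TPR and FPR of a selected set against the true top k.
    Agents are indexed 0..n-1; index < k means the agent is in the
    true top k (index-as-ground-truth convention)."""
    tp = sum(1 for r in selected if r < k)  # true positives
    fp = len(selected) - tp                 # false positives
    fn = k - tp                             # false negatives
    tn = n - k - fp                         # true negatives
    return tp / (tp + fp), tp / (tp + fn), fp / (tn + fp)

# e.g. 4 selected agents, 3 of which are in the true top 4
ppv, tpr, fpr = classification_metrics({0, 1, 2, 5}, n=10, k=4)
```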
Furthermore, we can view the slack parameter $\varepsilon$ as the sensitivity threshold akin to the probability threshold in the machine learning literature (see e.g., \cite{flach}). %
This suggests a method to construct the Precision-Recall (PR) and Receiver Operating Characteristic (ROC) curves: vary $\varepsilon$ such that the nomination quota varies between 0 and $m$, and measure the Precision, Recall and False Positive Rate at each value. An example is presented in Figure \ref{fig:roc}.
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width=.98\linewidth]{roc.pdf}
\caption{ROC Curve}
\end{subfigure}%
\begin{subfigure}[t]{0.48\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{rc.pdf}
\caption{PR Curve \\\hspace{\textwidth}}
\end{subfigure}
\vspace{-1.5em}
\caption{ROC and PR curves for \text{\textsc{PeerNomination}}\xspace. They were computed analytically with $n=120, m=8, k=25$.}
\label{fig:roc}
\end{figure}
The curves show the trade-off between sensitivity (TPR) and inclusivity (FPR). %
As we follow the ROC curve, which corresponds to gradually increasing the nomination quota, the TPR increases quickly, i.e., we do not need to accept \textit{too many} extra agents to select all the deserving ones. On the other hand, we can still achieve a TPR of around 0.8 with the FPR very close to 0. This shows that we can select around 80\% of the agents in the true top $k$ if we concentrate on not selecting the ``undeserving'' individuals. While the curves are interesting on their own, we want to be able to compare them to other peer-selection mechanisms, so an important direction is finding a generalizable way of constructing such curves for other mechanisms.
\begin{figure*}
\centering
\includegraphics[width=.85\linewidth]{test_final.pdf}
\caption{Comparison of prominent algorithms against a Vanilla baseline (left) and against the ground truth ranking of a Mallows model (right). $n=120, l=4, \phi=0.5$. Top: $m=9$; bottom: $k=30$. \text{\textsc{PeerNomination}}\xspace outperforms across settings except $m=5$.}
\label{fig:results}
\end{figure*}{}
\subsection{Experimental Setup}
We extend the testing framework developed by \citet{Aziz2019}, using methods from \textsc{PrefLib} \cite{MaWa17,MaWa13a}. Our code and data are available online\footnote{\url{https://github.com/nmattei/peerselection}}. As in \citet{Aziz2019}, we set $n=120$ and tested the algorithm on various values of $k$ and $m$. The test values for $k$ were $15, 20, 25, 30, 35$ and the test values for $m$ were $5, 7, 9, 11$. For the algorithms that rely on a partition, we chose the number of partitions, $l$, to be 4.
For each setting of the parameters we generated a random $m$-regular assignment matching reviewers to reviewees. As in other works, we model the reviews of each agent using a Mallows model \cite{Mal57}. In a Mallows model we provide a (random) ground truth ranking $\pi$ and a noise parameter $\phi$. If we set $\phi=0$ then agents will always report $\pi$ as their ranking, i.e., they are all exactly correct. As we increase $\phi$, agents will report increasingly inaccurate rankings, as a function of the Kendall tau distance between $\pi$ and all possible rankings. Note that each agent draws from this distribution independently. Hence, by varying $\phi$ we can test the robustness of our algorithms to errors in the rankings submitted by the agents. Mallows models have a long history in machine learning and group decision-making, as they can simulate noisy observations of a ground truth ranking and be sampled efficiently \cite{Mal57,Xia19,lu2011learning}.
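Our experiments use the Mallows sampling tools from \textsc{PrefLib}; for illustration, a minimal repeated-insertion sampler (see \citet{lu2011learning}) can be sketched as follows (the function name is ours):

```python
import random

def sample_mallows(ref, phi, rng=random):
    """Repeated-insertion sampling from a Mallows model with reference
    ranking `ref` and dispersion `phi` (phi=0: always ref; phi=1: a
    uniformly random permutation)."""
    ranking = []
    for i, item in enumerate(ref):
        # inserting at 0-based position j creates (i - j) inversions,
        # so the insertion weight is phi ** (i - j)
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = rng.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, item)
    return ranking
```

Note that `phi ** 0 == 1` even when `phi == 0`, so the zero-noise case deterministically reproduces the reference ranking.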
The experiment was repeated 1000 times for each setting, after which the average recall was calculated giving us high confidence in our results. For \text{\textsc{PeerNomination}}\xspace, we used theoretical estimates of $\varepsilon$ to achieve the right expected size of the accepting set.
The error bars in Figures \ref{fig:results} and \ref{fig:fair_results} represent 1 standard deviation of the data. In line with the observation in \cite{Aziz2019}, varying the dispersion parameter $\phi$ of the Mallows noise did not have a significant effect on the accuracy of any algorithm until we approach $\phi = 1.0$, at which point all reviewers report a completely random ordering. The results for $\phi = 0.5$ are presented in Figure \ref{fig:results}.
\text{\textsc{PeerNomination}}\xspace does not require an explicit partitioning, making it more flexible. Another issue with partitioning, as pointed out in \cite{Aziz2019}, is that the performance of both EDP and Partition degrades as we increase the number of clusters. In another test we varied the number of clusters $l$ between 2 and 10. We saw a decrease in performance of about $3$--$4\%$ for the partition-based methods, while the performance of \text{\textsc{PeerNomination}}\xspace remained constant.
In another testing setup we adopted a slightly different procedure in order to ensure a fair comparison. In each simulation, we generate a random $m$-regular assignment, run \text{\textsc{PeerNomination}}\xspace using the target $k$ as an input, measure the size of the output, and run EDP using this size as the input $k$. A similar experiment was also performed for the inexact version of Dollar Partition in \citet{Aziz2016}. This ensures that during each simulation both algorithms return the same number of agents for selection. The results of this comparison are presented in Figure \ref{fig:fair_results}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fair.pdf}
\caption{Results of the forced size experiment: \text{\textsc{PeerNomination}}\xspace and EDP are always guaranteed to return the same number of agents.}
\label{fig:fair_results}
\end{figure}
\subsection{Results}
Following \citet{Aziz2019}, we compared a selection of impartial peer selection algorithms and Vanilla (Borda count), as depicted in Figure \ref{fig:results}. Borda is a classic social choice rule that is known not to be strategyproof but is optimal in the ordinal peer-ranking setting under the assumption of no noise \cite{caragiannis_krimpas_voudouris_2016}, and thus represents an optimistic baseline in the absence of manipulators. \text{\textsc{PeerNomination}}\xspace outperforms EDP significantly in the majority of the settings we have considered. The only setting where EDP outperforms \text{\textsc{PeerNomination}}\xspace is $m=5$, a low-information setting where reviewers are given a nomination quota fractionally above 1. However, our algorithm improves quickly with $m$. Even at $m=9$, shown in Figure \ref{fig:results}, \text{\textsc{PeerNomination}}\xspace approaches the performance of Borda across the values of $k$ we considered.
It is worth noting that \text{\textsc{PeerNomination}}\xspace tends to return a set slightly larger than $k$ on average (usually by less than one additional agent). Nevertheless, even when the testing forces \text{\textsc{PeerNomination}}\xspace and EDP to return the same number of agents every time, \text{\textsc{PeerNomination}}\xspace retains an overall advantage, as shown in Figure \ref{fig:fair_results}. Again, EDP only does better in the low-information setting ($m=5$).
\section{Conclusion}\label{sec:conclusion}
There are many avenues for future work: \text{\textsc{PeerNomination}}\xspace, which already does not rely on predefined clustering, can be extended to not require an $m$-regular assignment. Moreover, each reviewer does not even need to declare a full ranking over their review pool, but simply declare the nominees for the selection and one nominee to be fractionally selected. This also suggests that there might be a possible extension of the algorithm which makes use of the declared rankings in full, as this data is currently discarded.
Crucially, the usefulness of our algorithm depends on returning an accepting set of size close to $k$. We saw that this can be achieved using the parameter $\varepsilon$; however, very high levels of noise can still affect the size of the accepting set. This suggests that an important research direction will be testing different models of agent behaviour in detail.
|
plasm-ph/9503002
|
\section*{List of captions}
Fig.1 (a) Distribution $\rho_j^{(1)}(v_z)$ of $j$-level population in
ion velocity $v_z$ within the first order of the perturbation theory
in the field intensity. Different curves correspond to distinct
values of detuning $\Omega$ (from left to right $\Omega/{kv_T} =
0; 0.5; 1; 1.5; 2; 2.5$). Parameters are assumed to be $\Gamma =
10^{-2} kv_T$, $\Gamma_j = 10^{-3} kv_T$, $\nu = 10^{-5} kv_T$.
(b) Half-width at half maximum of the contour $\rho_j^{(1)}(v_z)$ as a
function of detuning $\Omega/{kv_T}$, normalized to unit magnitude.
\end{document}
|
2012.02774
|
\section{Dynamical mean field equations}
The dynamics of the spinor polariton condensate order parameter is modeled through a set of stochastic driven-dissipative Gross-Pitaevskii equations (Langevin equations) coupled to spin-polarized rate equations describing excitonic reservoirs $X_\pm$ feeding the two spin components $\psi_\pm$ of the condensate. The stochastic part of our model ($\theta_\pm$) was formulated in Refs.~\cite{Read_PRB2009, Wouters_PRB2009} under the so-called truncated Wigner approximation, which becomes valid above the condensation threshold, where large particle numbers in the condensate $\langle n \rangle \gg 1$ ensure that stimulated effects dominate over spontaneous scattering events. We point out that a more accurate treatment of dissipative many-body quantum systems involves writing a density matrix for the polariton field governed by appropriate master equations~\cite{del2010microcavity}. This approach is beyond the scope of the current study, whose modelling concerns the limit of large particle numbers, where we show that complex nonlinear mean-field forces have quite dramatic effects on the polariton statistics. The model reads:
\begin{subequations} \label{eq.orig}
\begin{align}
i & \frac{ d\psi_\sigma}{dt} = \frac{1}{2} \Big[ \nu V_\sigma+ i \left( R_1 X_\sigma + R_2 X_{-\sigma} - \Gamma \right) \Big] \psi_\sigma - \nu \frac{\Omega_x}{2} \psi_{-\sigma} + \theta_\sigma(t), \\
& \frac{ dX_\sigma}{dt} = - \left[ \Gamma_R + R_1 (|\psi_\sigma|^2 + 1) + R_2(|\psi_{-\sigma}|^2 + 1) \right]X_\sigma + \Gamma_s(X_{-\sigma} - X_\sigma) + P_\sigma,\\
& V_\sigma = \alpha_1 |\psi_\sigma|^2 + \alpha_2 |\psi_{-\sigma}|^2 + g_1\left(X_\sigma + \frac{P_\sigma}{W}\right) + g_2\left(X_{-\sigma} + \frac{P_{-\sigma}}{W}\right).
\end{align}
\end{subequations}
Here, $\sigma \in \{+, -\}$ are the two spin indices, $\alpha_{1,2}$ denote the same-spin (triplet) and opposite-spin (singlet) polariton-polariton interaction strengths, $g_{1,2}$ are the corresponding interactions with the reservoir, $R_{1,2}$ are the rates of stimulated same-spin and opposite-spin scattering of polaritons into the condensate, $\Gamma$ is the polariton decay rate, and $\Gamma_R$ and $\Gamma_s$ describe the decay rate and spin relaxation of reservoir excitons. In principle, scattering from the reservoirs to the condensate should be dominantly spin-preserving ($R_1$), but in the presence of an (effective) magnetic field ($\Omega_x$) one needs to account for the possibility that particles from the opposite-spin reservoir scatter ($R_2$) into the condensate~\cite{Solnyshkov_Semic2007, Redondo_NewJouPhys2018}. Some studies work under the approximation that in optical traps the condensate is so well separated from the background reservoir that the blueshift coming from polariton-exciton interactions can be discarded, but recent work~\cite{boozarjmehr2020spatial} has shown that the reservoir is actually not so distant from the condensate, and therefore the additional condensate blueshift coming from this background reservoir ($g_{1,2}$) should be taken into account. For all results presented we have chosen $\alpha_2 = -0.2 \alpha_1$ and $g_2 = -0.2 g_1$~\cite{Ciuti_PRB1998}. We also include an energy-dampening parameter $\nu = 1 -i \nu'$ following the Landau-Khalatnikov approach~\cite{Read_PRB2009}. Finally, spin mixing (spin relaxation) between the reservoirs ($\Gamma_s$) should be taken into account, as it is evidenced by depolarization in the cavity photoluminescence below the condensation threshold~\cite{Maialle_PRB1993, Ohadi_PRL2012, Redondo_PRB2019, klaas_nonresonant_2019,pickup2020polariton}.
It is naturally quite challenging to understand the full picture of which parameters contribute to the different effects observed in experiment, and we therefore attempt to be as inclusive as possible of the different physical mechanisms.
Although the experiment deals with a birefringent field $\boldsymbol{\Omega}_\parallel = (\Omega_x,\Omega_y)^T$ at a specific angle, we will, without loss of generality, take the splitting to be between horizontal $\psi_H = (\psi_+ + \psi_-)/\sqrt{2}$ and vertical $\psi_V = (\psi_+ - \psi_-)/\sqrt{2}$ polarized modes, represented by a real-valued $\Omega_x$ and $\Omega_y=0$. The strength of the white complex noise $\theta_\sigma(t)$ is determined by the scattering rate of polaritons into the condensate,
\begin{equation}
\langle \theta_\sigma(t) \theta_{\sigma'}(t') \rangle = 0, \qquad \langle \theta_\sigma(t) \theta^*_{\sigma'}(t') \rangle = \frac{R_1 X_\sigma + R_2 X_{-\sigma}}{2} \delta_{\sigma \sigma'} \delta(t-t').
\end{equation}
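For concreteness, Eqs.~\eqref{eq.orig} with the noise correlator above can be integrated with a simple Euler--Maruyama scheme. The sketch below is ours (not the authors' code); it uses the fitted parameter values quoted later in the text, in units of $\Gamma$, while the pump level, time step, and initial state are illustrative choices:

```python
import numpy as np

# Parameters in units of the polariton decay rate Gamma (values from the
# fit quoted in the text); pump P, dt and initial state are illustrative.
G, GR, Gs, W = 1.0, 1.6, 0.19, 0.156
R1, R2 = 0.0032, 0.0027
a1, g1 = 0.00015, 0.00097
a2, g2 = -0.2 * a1, -0.2 * g1
nu = 1 - 1j * 0.077          # Landau-Khalatnikov energy dampening
Ox = -0.057                  # birefringent splitting

def step(psi, X, P, dt, rng):
    """One Euler-Maruyama step of Eqs. (1a)-(1c); psi, X, P are
    length-2 arrays for the two spin components (+, -)."""
    psif, Xf, Pf = psi[::-1], X[::-1], P[::-1]   # opposite-spin fields
    V = (a1 * np.abs(psi)**2 + a2 * np.abs(psif)**2
         + g1 * (X + P / W) + g2 * (Xf + Pf / W))
    gain = R1 * X + R2 * Xf
    dpsi = -0.5j * (nu * V + 1j * (gain - G)) * psi + 1j * nu * Ox / 2 * psif
    dX = (-(GR + R1 * (np.abs(psi)**2 + 1) + R2 * (np.abs(psif)**2 + 1)) * X
          + Gs * (Xf - X) + P)
    # complex white noise with <theta theta*> = (R1 X + R2 Xf)/2 per unit time
    noise = (np.sqrt(np.maximum(gain, 0) * dt / 4)
             * (rng.standard_normal(2) + 1j * rng.standard_normal(2)))
    return psi + dt * dpsi + noise, X + dt * dX

rng = np.random.default_rng(1)
psi = np.array([0.1 + 0j, 0.1 + 0j])
X = np.zeros(2)
P = np.array([350.0, 350.0])   # linearly polarized cw background, above threshold
for _ in range(8000):          # t = 80 in units of 1/Gamma
    psi, X = step(psi, X, P, 0.01, rng)
```

Above threshold the stimulated gain $R_1 X_\sigma + R_2 X_{-\sigma}$ exceeds $\Gamma$ and the condensate population grows to its saturated value set by reservoir depletion.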
The active reservoir $X_\sigma$, which feeds the condensate with particles, is driven by a background of high momentum {\it inactive} excitons $P_\sigma$ which do not satisfy energy-momentum conservation rules to scatter into the condensate. Assuming the simplest type of rate equation describing the conversion of optical excitation power into an inactive reservoir in the continuous wave regime we write:
\begin{equation} \label{eq.P}
\frac{dP_\sigma}{dt} = - (W + \Gamma_I) P_\sigma + \Gamma_s(P_{-\sigma} - P_\sigma) + L_\sigma.
\end{equation}
Here, $W\gg\Gamma_I$, where $W$ is a phenomenological spin-conserving redistribution rate of inactive excitons into active excitons and $\Gamma_I$ is the nonradiative exciton decay rate. Since these inactive excitons also experience spin relaxation $\Gamma_s$, the polarization of $P_\sigma$ will not coincide with that of the incident optical excitation. As the experiment is performed in the continuous-wave regime, we can immediately solve for the steady state of Eq.~\eqref{eq.P} and insert it into Eqs.~\eqref{eq.orig}. For optical excitation parameterized as $\mathbf{L} = L (\cos{(\theta)}, \sin{(\theta)})^T$, where $\theta$ can be understood as a quarter-waveplate angle in the experimental setup, we can write the background reservoir as
\begin{equation}
\begin{pmatrix} P_+ \\ P_- \end{pmatrix} = \frac{L}{W + 2 \Gamma_s} \begin{pmatrix} W \cos^2{(\theta)} + \Gamma_s \\ W \sin^2{(\theta)} + \Gamma_s \end{pmatrix}.
\end{equation}
Here, $L$ is the power of the optical excitation and $\theta$ determines the polarization of the incident light.
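Two consistency properties of this expression are easy to verify numerically (a sanity check of ours, not from the paper): the two components sum to $L$ for every angle $\theta$, and the circular-polarization degree of the background is reduced relative to the pump by the factor $W/(W+2\Gamma_s)$:

```python
import numpy as np

W, Gs, L = 0.156, 0.19, 1.0              # W, Gamma_s from the quoted fit; L illustrative
theta = np.linspace(0.0, np.pi / 2, 181) # quarter-waveplate angle

# closed form for the background reservoir quoted in the text
Pp = L * (W * np.cos(theta)**2 + Gs) / (W + 2 * Gs)
Pm = L * (W * np.sin(theta)**2 + Gs) / (W + 2 * Gs)

total = Pp + Pm                          # = L, independent of theta
dop_pump = np.cos(2 * theta)             # circular polarization degree of the pump
dop_bg = (Pp - Pm) / total               # = W cos(2 theta) / (W + 2 Gs)
```

The depolarization factor $W/(W+2\Gamma_s)$ makes explicit the statement above that the polarization of $P_\sigma$ never reaches that of the incident excitation when $\Gamma_s > 0$.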
Determining the parameters of Eq.~\eqref{eq.orig} poses a challenge, since they depend in a complicated way on both sample and excitation properties. To overcome this, we implement a random walk algorithm which, in each step, calculates the root-mean-square error between the experimental and simulation data. The algorithm starts from a random set of parameters (appropriately bounded to remain physical) and repeatedly takes a random step in parameter space, which is kept if the error is lowered. If the error rises, the step is discarded (returning to the previous point) and a new random step is tested. Performing 500 random initializations in the parameter space, each taking 300 random steps, we determine a set of parameters best fitting the experimental results. The parameters used throughout the manuscript are given in units of $\Gamma$, except for $\nu'$ which is dimensionless: $\Gamma_R = 1.6$; $\Gamma_s = 0.19$; $W = 0.156$; $R_1 = 0.0032$; $R_2 = 0.0027$; $\alpha_1 = 0.00015$; $g_1 = 0.00097$; $\nu' = 0.077$. For Fig.~3(a) and~5(c,d) in the main text we have set $\Omega_x = -0.057$ and $-0.0072$ respectively, as they correspond to different sample locations. The theoretical pump threshold for linearly polarized excitation corresponds to a threshold laser power $L$ defined through $P_{th} = P_\pm(L_{th})$,
\begin{equation}
L_{th} = \frac{2 \Gamma}{ \dfrac{R_1 + R_2}{\Gamma_R} + \nu'(g_1 + g_2[\Gamma_R^{-1} + W^{-1}])}.
\end{equation}
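The parameter search described above is a simple greedy random walk. A minimal sketch (ours), with a toy error function standing in for the experiment-vs-simulation RMSE since the real objective requires running the full stochastic model:

```python
import random

def random_walk_fit(error, x0, bounds, n_steps=300, step_frac=0.05, seed=0):
    """Greedy random walk described in the text: from a starting point,
    keep a random step only if it lowers the error."""
    rng = random.Random(seed)
    x = list(x0)
    best = error(x)
    for _ in range(n_steps):
        # Gaussian step, clipped to the physical bounds
        trial = [min(hi, max(lo, xi + step_frac * (hi - lo) * rng.gauss(0, 1)))
                 for xi, (lo, hi) in zip(x, bounds)]
        e = error(trial)
        if e < best:           # keep the step; otherwise stay put
            x, best = trial, e
    return x, best

# toy stand-in objective: recover two rate-like parameters
target = [0.0032, 0.0027]
rmse = lambda p: sum((a - b) ** 2 for a, b in zip(p, target)) ** 0.5
bounds = [(0.0, 0.01), (0.0, 0.01)]
x, err = random_walk_fit(rmse, [0.008, 0.001], bounds)
```

The full procedure in the text repeats this from 500 random initializations and keeps the best endpoint; by construction the error is non-increasing along each walk.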
\section{Polarization switching and pinning}
\begin{figure}[h]
\includegraphics [width=1\columnwidth]{SI_2}
\centering
\caption{(a) Experimentally measured (circles) $g_{i,j}^{(2)}(0)$ power dependence for H (black) and V (red) polarized light and corresponding integrated $S_1$ component. (b) Integrated measurement of condensate degree of polarization (DOP). Yellow rectangle highlights the pump power range without polarization pinning (low DOP). (c) Measured time series (1$\mu$s excitation pulse) for condensate H (black) and V (red) polarized emission. While pinned to vertical polarization we observe switching of intensity between the polarizations.}
\label{fig.1}
\end{figure}
|
1601.04940
|
\section{Introduction}
\label{s:Introduction}
In many physical systems the motion is not smooth, but proceeds by avalanches. This
jerky motion is correlated over a broad range of space and time scales.
Examples are magnetic interfaces, fluid contact lines, crack fronts in fracture,
strike-slip faults in geophysics
and many more \cite{ZapperiCizeauDurinStanley1998,LeDoussalWieseMoulinetRolley2009,DSFisher1998}.
These systems have been described using the model of an elastic interface slowly driven in a random medium.
This model is important for avalanches, both conceptually and in applications
\cite{BonamySantucciPonson2008,PapanikolaouBohnSommerDurinZapperiSethna2011,DahmenSethna1996}. The full model of an interface of internal dimension $d$, in presence of realistic short-ranged disorder is difficult to treat analytically, and requires methods such as the Functional Renormalization Group (FRG)
\cite{NattermannStepanowTangLeschhorn1992,NarayanDSFisher1993a,LeDoussalWiese2008c,LeDoussalWiese2011b,%
LeDoussalWiese2011a,DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a,DobrinevskiLeDoussalWiese2014a}.
A simpler version of the model, the so-called Brownian force model (BFM) introduced in
\cite{LeDoussalWiese2011b,LeDoussalWiese2011a,DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a}
is very interesting in several respects. First it is exactly solvable, and several
avalanche observables have been calculated, as discussed below.
Second, it was shown
\cite{LeDoussalWiese2011a,LeDoussalWiese2012a}
to be the appropriate mean-field theory for the space-time statistics of the velocity field in a single avalanche for
$d$-dimensional
interfaces close to the depinning transition for $d \geq d_{{\rm uc}}
$ with $d_{{\rm uc}} = 4$ for short ranged elasticity and $d_{{\rm uc}} = 2$ for long-ranged elasticity.
Remarkably, when considering the dynamics of the center of mass of the interface, it reproduces the results of
the simpler ABBM model, a toy model for a single degree of freedom (particle), introduced long ago on a phenomenological basis to describe Barkhausen experiments (magnetic noise)\cite{AlessandroBeatriceBertottiMontorsi1990,AlessandroBeatriceBertottiMontorsi1990b} and much studied since \cite{ZapperiCizeauDurinStanley1998,Colaiori2008,DobrinevskiLeDoussalWiese2011b}. Last but not least, the BFM is the starting point for a calculation of avalanche observables beyond mean-field, using the FRG
in a systematic expansion in $d_{\rm uc}-d$ \cite{LeDoussalWiese2011a,LeDoussalWiese2012a}.
The key property which makes the BFM (and the ABBM) model solvable is that
the disorder is taken to be a Brownian random force landscape. Since it can be shown that
under monotonic forward driving the interface always moves forward (Middleton's theorem \cite{Middleton1992}),
the resulting equation of motion for the velocity field is Markovian, and amenable to exact methods.
Despite being exactly solvable, the explicit calculation of avalanche observables in
the BFM requires solving a non-linear {\em instanton equation} and performing
Laplace inversions, which is not always an easy task. Global avalanche properties, such as the probability distribution function (PDF) of
global size $S$, of duration, and of velocity have been obtained for arbitrary driving. Detailed
space time properties however are more difficult. In Ref.\ \cite{LeDoussalWiese2012a}
a finite wave-vector observable was calculated, demonstrating an asymmetry in the
temporal shape. Although the distribution of local avalanche sizes $S_r$ has been
obtained in some instances, this is not the case for the distribution of {\it the spatial extension $\ell$ of an avalanche},
i.e.\ the range of points which move during an avalanche,
an important observable accessible in experiments. Note that even the fact that an avalanche has a finite extent, instead of an exponentially decaying tail in its spatial extension, is a non-trivial result, which up to now was only proven for very large avalanches in the BFM \cite{ThieryLeDoussalWiese2015}.
\begin{figure}[b]
\Fig{Figure1}
\caption{An avalanche in $d=1$.}
\label{Avalanche1d}
\end{figure}
The aim of this paper is to calculate further observables for the BFM which contain
information about local properties, such as the joint density of global and local avalanches, and
the distribution of extensions. We consider various protocols, where the interface is either driven uniformly in space
or at a single point; in the latter case we identify new critical exponents.
We study avalanches following a kick, i.e.\ a step in the driving force.\\
The article is structured as follows:
In section \ref{sec:BFM} we recall the definition of the BFM and of
the main avalanche observables, together with the general method to
obtain them from the instanton equation. Section \ref{sec:size}
starts by recalling the calculation of
the distributions of the global size (total swept area) $S$ and of the local jump size $S_r$
of an avalanche, for an arbitrary kick amplitude. In Section \ref{subsec:joint}
we extend this calculation to the joint density
$\rho(S_r,S)$
of local and global size for single avalanches, i.e.\ in the limit of an infinitesimal kick.
In Section \ref{sec:point} we study the case of an interface driven at
a single point. When the {\it force} at this point is imposed, we
find a new exponent $\tau_0=5/3$ for the PDF of the local jump $S_0$ at that
point. When the local {\it displacement} is imposed, we find a
new exponent $\tau=7/4$ for the PDF of the global size $S$.
In Section \ref{sec:extension} we show that the extension $\ell$ of a {\em single avalanche}
along one internal direction (i.e.\ the total length
in $d=1$) is finite; we calculate its distribution, following either a
local or a global kick. In all cases it exhibits a divergence $P(\ell) \sim \ell^{-3}$ at small $\ell$, with the same prefactor. All these exponents can be found in Table~\ref{TableExponent}.
Finally, in Section \ref{sec:non-stat} we study the {\em position} of the interface, which is a non-stationary process. We explain how the Larkin and BFM roughness exponents
emerge from the dynamics.
Most of our results are tested in a numerical simulation of the equation of motion
in $d=1$.
\begin{table}\label{TableExponent}
\begin{tabular}{|c|c|c|}
\hline
Driving protocol & \hspace{0.4cm} Observable \hspace{0.4cm} & \hspace{0.2cm} Exponent \hspace{0.2cm} \\
\hline \hline
any force kick & global size $S$ & $\tau=3/2$ \\
\hline
uniform force kick & local size $S_0$ & $\tau_0=4/3$ \\
\hline
uniform force kick & $S_0$ at fixed $S$ & $\tau_0=2/3$ \\
\hline
localized force kick & local size $S_0$ & $\tau_0=5/3$ \\
\hline
local displacement imposed & global size $S$ & $\tau=7/4$ \\
\hline
any force kick & extension $\ell$ & $\kappa=3$ \\
\hline
\end{tabular}
\caption{Summary of small-scale exponents for different distributions in the Brownian Force Model, depending on the observable and the driving protocol.}\label{TableExponent}
\end{table}
The technical parts of the calculations are presented in Appendices \ref{app:airy} to \ref{app:nonstat}, together with general material about Airy, Weierstrass and Elliptic functions. A short presentation of the numerical methods is also included.
Finally note a complementary recent study of the BFM, where the joint PDF of the local avalanche size
at all points was obtained. From that, the spatial shape
of an avalanche in the limit of large aspect ratio $S/\ell^4$ was derived \cite{ThieryLeDoussalWiese2015}.
\section{Avalanche observables of the BFM}
\label{sec:BFM}
\subsection{The Brownian Force Model}
In this paper, we study the Brownian Force Model (BFM) in space dimension $d$, defined as the stochastic differential equation (in the
Ito sense)\,:
\begin{equation}
\eta \partial_t \dot{u}_{xt}= \nabla_x^2 \dot{u}_{xt}
+ \sqrt{2 \sigma \dot{u}_{xt}} \,\xi_{xt} + m^2(\dot{w}_{xt}- \dot{u}_{xt})\ .
\label{BFMdef}
\end{equation}
This equation models the overdamped time evolution, with friction $\eta$, of the velocity field $\dot{u}_{xt} \geq 0$ of an interface with internal coordinate $x \in \mathbb{R}^d$; the space-time dependence is denoted by indices $\dot{u}(x,t) \equiv \dot{u}_{xt}$.
It is the sum of three contributions:
\begin{itemize}
\item short-ranged elastic interactions,
\item stochastic contributions from a disordered medium, where $\xi$ is a unit Gaussian white noise (both in $x$ and $t$)\,:
\begin{equation}
\overline{\xi_{xt}\xi_{x't'}} = \delta^d\!(x-x')\,\delta(t-t'),
\end{equation}
\item a confining quadratic potential of curvature $m$, centered at $w_{xt}$, acting as a driving.
\end{itemize}
The driving velocity is chosen positive, $\dot w_{xt} \geq 0$, a necessary condition for the model
to be well defined, as it implies that $\dot{u}_{xt} \geq 0$ at all $t>0$ if $\dot{u}_{xt=0} \geq 0$.
Equation \eqref{BFMdef}, taken here as a definition, can also be derived from the
equation of motion of an elastic interface, parameterized by a position field (displacement field) $u_{xt}$
in a quenched random force field $F(u,x)$,
\begin{equation}
\eta \partial_t u_{xt}= \nabla_x^2 u_{xt} + F\!\left(u_{xt},x\right) + m^2(w_{xt}- u_{xt})\ .
\label{BFMpos}
\end{equation}
The random force field is a collection of independent one-sided Brownian motions in the $u$ direction
with correlator
\begin{equation}
\overline{F(u,x)F(u',x')}=2 \sigma \delta^d(x-x')\min(u,u')\ .
\end{equation}
Taking the temporal derivative $\partial_t$ of Eq.~\eqref{BFMpos}, and assuming forward motion of the interface,
one obtains Eq.~\eqref{BFMdef} for the velocity variable $\partial_t u_{xt} \equiv \dot{u}_{xt}$
(we use indifferently $\partial_t$ or a dot to denote time derivatives).
The fact that the equation for the velocity is Markovian even for a quenched disorder is remarkable and results from
the properties of the Brownian motion.
Details of the correspondence are given in \cite{DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a}
where subtle aspects of the position theory, and its links to the mean-field theory of
realistic models of interfaces in short-ranged disorder via the Functional Renormalisation Group (FRG)
are discussed. In the last section of this paper we will mention some properties of the position theory of
the Brownian force model.
\subsection{Avalanches observables and scaling}
The BFM \eqref{BFMdef} allows us to study the statistics of avalanches as the
dynamical response of the interface to a change in the driving. We consider solutions of \eqref{BFMdef} in response to a driving of the form
\begin{equation} \label{kick}
\dot{w}_{xt} = \delta w_x ~ \delta(t) \,\, , \,\, \delta w_x \geq 0 \,\,, \,\,\delta w = L^{-d} \int_x \delta w_x >0 \ .
\end{equation}
The initial condition is
\begin{equation} \label{init}
\dot{u}_{xt=0} =0\ .
\end{equation}
This solution describes an
avalanche which starts at time $t=0$ and ends when $\dot u_{xt}=0$ for all $x$. The time at which the avalanche ends, also called avalanche duration, was studied in \cite{DobrinevskiPhD} and its distribution given in various situations.
Within the description \eqref{BFMpos}, i.e.\ in the position theory, this corresponds to an interface pinned, i.e.\ at rest in a metastable state, at $t<0$, which is submitted at $t=0$ to a jump in the total applied force $m^2 \delta w$. More
precisely, the center of the confining potential jumps at $t=0$ from $w_{x}$ (where it was for $t<0$) to $w_{xt=0^+} = w_{x} + \delta w_x$ (where it stays for all $t>0$). As a consequence, the interface moves forward (since $\delta w_x \geq 0$) up to a new metastable state. This is represented in figure \ref{Avalanche1d}, where $u_{xt=0}$ is the initial metastable state and $u_{xt=\infty}$ is the new metastable state at the end of the avalanche. In fact, as we will see from the distribution of avalanche durations, the new metastable state is reached almost surely in a finite time.
For details on these metastable
states and the system's preparation see \cite{DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a}. \\
We now discuss the avalanche observables at the center of this paper. They can be computed from the solution of \eqref{BFMdef} given \eqref{kick} and \eqref{init}; they are represented in figure \ref{Avalanche1d} for a more visual definition in the case $d=1$.
\begin{itemize}
\item Global size of the avalanche:
\begin{equation}
S=\int_{x\in \mathbb{R}^d} \int_{0}^{\infty} \dot{u}_{tx} \, dt\ .
\label{GlobalSize}
\end{equation}
This is the total area swept by the interface during the avalanche.
\item Local size of the avalanche:
\begin{equation}
S_r = m^{-1}\int_{x \in \{r\}\times \mathbb{R}^{d-1}} \int_{0}^{\infty} \dot{u}_{tx} \, dt\ .
\label{LocalSize}
\end{equation}
This is the size of the avalanche localized on a hyperplane, where one of the internal coordinates is $r$; the factor $m^{-1}$ allows us to express $S$ and $S_r$ in the same units (see below).
In $d=1$ this yields $S_r= m^{-1} \int_{0}^{\infty} \dot{u}_{tr}\, dt$, i.e.\ the transversal jump at the point $r$ of the interface. For $d>1$ the variable $r$ is still one-dimensional, and $S_r$ the total displacement in a hyperplane of the interface.
\item Avalanche extension:
For $d=1$, the extension (denoted $\ell$) of an avalanche is the length of the part of the interface which (strictly) moves during the avalanche. The generalisation to avalanches of a $d$-dimensional interface is given by the definition
\begin{equation}
\ell= \int_{-\infty}^{\infty} dr~ \theta(S_r > 0)\ ,
\label{extension}
\end{equation}
where $\theta$ is the Heaviside function. Note that even for a $d$-dimensional interface, the extension $\ell$ is a unidimensional observable (\textit{cf.} figure \ref{Avalanche2d}).
\end{itemize}
Note that
\begin{equation}
S_r >0\ \Leftrightarrow \ {\rm Supp} \bigcap \{r\}\times \mathbb{R}^{d-1} \neq \emptyset
\end{equation}
where ${\rm Supp}$ denotes all the points of the interface moving during an avalanche (i.e.\ its support).
\begin{figure}[t]
\Fig{Figure2}
\caption{An avalanche in $d=2$; the transverse direction is orthogonal to the plane of the figure and the colored zone corresponds to the support of the avalanche.}\label{Avalanche2d}
\end{figure}
We use natural scales (or units) to switch to dimensionless expressions, both for the (local and global) avalanche sizes, with scale $S_m$, and for the time, with scale $\tau_m$, given by
\begin{equation}
S_m = \frac{\sigma}{ m^4} \ , \qquad \tau_m = \frac{\eta}{m^2}\ .
\label{units}
\end{equation}
The extension, a length in the internal direction of the interface, is expressed in units of $m^{-1}$. This is equivalent to setting $m=\sigma=\eta = 1$. All expressions below, unless explicitly stated otherwise, are given in these units.
While $S_m$ is the large-size cutoff for avalanches, there is generically also a small-scale cutoff. As in the BFM the disorder is scale-invariant (by contrast with more realistic models with short-ranged smooth disorder), it is the
increment in the driving $\delta w$ which sets the small-scale cutoff for the local and global size of avalanches.
They scale as $\min(S) \sim \delta w^2$ (global size) and $\min(S_r) \sim \delta w^3$ (local size).
\vspace{0.2cm}
{\it Massless limit}
\vspace{0.2cm}
There are cases of interest where the mass $m \to 0$. This can be
defined from the equations of motion (\ref{BFMdef}) and (\ref{BFMpos}) with the changes
\begin{equation}
\begin{split}
m^2(w_{xt}- u_{xt}) &\to f_{xt} \\
m^2(\dot{w}_{xt}- \dot{u}_{xt}) &\to \dot f_{xt}\ .
\end{split}
\end{equation}
In that case it is natural to consider driving with a given force $f_{xt}$, rather
than by a parabola. The definition of the observables is the same except that
the factor of $m^{-1}$ is not added in the definition (\ref{LocalSize}).
To bring $\sigma$ and $\eta$ to unity, we then
define time in units of $\eta$ and displacements
$u$ in units of $\sigma$. The results will still have an unfixed
dimension of length. In some of them, the system size $L$
leads to dimensionless quantities (it also acts as
a cutoff for large sizes, although we will not use this explicitly).
\subsection{Generating functions and instanton equation}
To compute the distribution of the observables presented above, we use a result from \cite{DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a} which allows us to express the average over the disorder of generating functions (Laplace transforms) of $\dot{u}_{xt}$, solution of \eqref{BFMdef}. In dimensionless units, this result reads
\begin{equation}
G[\lambda_{xt}]=\left\langle \exp \left(\int_{xt} \lambda_{xt}\dot{u}_{xt} \right) \right\rangle = e^{ \int_{xt} \dot{w}_{xt} \tilde{u}_{xt} }\ .
\label{generating}
\end{equation}
Here $\langle\cdots \rangle$ denotes the average over disorder. $\tilde{u}$ is a solution of the differential equation (called instanton equation)
\begin{equation}
\partial_{x}^2\tilde{u} + \partial_{t}\tilde{u}-\tilde{u}+ \tilde{u}^2 =- \lambda_{xt}\ .
\label{instanton2}
\end{equation}
Since the avalanche observables we consider are integrals of the velocity field over all times (cf.\ the definitions above), the sources $\lambda_{xt}$ we need in \eqref{generating} are {\em time independent}. Thus we only need to solve the space-dependent, but time-independent, instanton equation
\begin{equation}
\tilde{u}_x''-\tilde{u}_x+ \tilde{u}^2_x =- \lambda_{x}\ .
\label{instanton}
\end{equation}
The prime denotes the derivative w.r.t.\ $x$. In the massless case discussed above, the term $- \tilde{u}_x$ is absent; all other terms are identical.
The global avalanche size implies a uniform source in the instanton equation: $\lambda_x = \lambda$, while the local size implies a localized source $\lambda_x = \lambda \delta^1(x)$. To obtain information on the extension of avalanches, we need to consider a source localized at two different points in space, $\lambda_x = \lambda_1 \delta(x-r_1) + \lambda_2 \delta(x-r_2)$.
This instanton approach, which derives from the Martin-Siggia-Rose formulation of \eqref{BFMdef}, allows us to compute exactly disorder averaged observables for any form of driving, by solving a ``simple" ordinary differential equation, which depends on the observable we want to compute, i.e.\ on $\lambda_{xt}$, but not on the form of the driving $\dot{w}_{xt}$. For a derivation of \eqref{generating} and \eqref{instanton2} we refer to \cite{DobrinevskiLeDoussalWiese2011b}.
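For a uniform source $\lambda_x = \lambda$, Eq.~\eqref{instanton} admits the constant solution $\tilde u = (1-\sqrt{1-4\lambda})/2$ (taking the branch that vanishes at $\lambda=0$), so that \eqref{generating} gives $\langle e^{\lambda S}\rangle = e^{\delta \hat w\, \tilde u(\lambda)}$ for a uniform kick. A quick numerical cross-check of this relation against the global size distribution derived in the next section (our own sanity check, plain NumPy):

```python
import numpy as np

# uniform-grid trapezoidal rule (avoids version-dependent np.trapz naming)
trapz = lambda y, x: float(np.dot((y[1:] + y[:-1]) / 2, np.diff(x)))

dw = 0.5                                   # kick amplitude (dimensionless units)
S = np.linspace(1e-6, 300, 1_500_001)      # integration grid for the size
P = dw / (2 * np.sqrt(np.pi) * S**1.5) * np.exp(-(S - dw)**2 / (4 * S))

results = {}
for lam in (-0.5, -0.1, 0.1, 0.2):         # the instanton exists for lambda <= 1/4
    u_t = (1 - np.sqrt(1 - 4 * lam)) / 2   # constant solution of the instanton eq.
    results[lam] = (trapz(np.exp(lam * S) * P, S),  # <exp(lambda S)> by quadrature
                    np.exp(dw * u_t))               # instanton prediction
```

The two columns of `results` agree to the quadrature accuracy, confirming that the constant instanton reproduces the Laplace transform of the global-size distribution.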
\section{Distribution of avalanche size}
\label{sec:size}
\subsection{Global size}
As defined in \eqref{GlobalSize} the global size of an avalanche is the total area swept by the interface. Its PDF was
calculated in \cite{DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2011a,LeDoussalWiese2012a} and reads, in
dimensionless units,
\begin{equation}
P_{\delta w} (S) = \frac{\delta \hat{w}}{2 \sqrt{\pi}S^{\frac{3}{2}}}e^{-\frac{(S-\delta \hat{w})^2}{4S}}\ .
\label{GlobalDistrib}
\end{equation}
Here $\delta \hat{w} = L^{d} \delta w$. This result does not depend on the spatial form of the driving (it can be localized, uniform, or anything in between), as long as it is applied as a force on the interface. Driving by imposing a specific displacement at one point of the interface is another interesting case that leads to a different behavior, see Section \ref{sec:imposed}.
We can test this against a direct numerical simulation of the equation of motion \eqref{BFMdef}. There is excellent agreement over 5 decades, with no fitting parameter, see Fig.~\ref{GlobalSizeFig}.
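In these units, Eq.~\eqref{GlobalDistrib} is an inverse Gaussian distribution with mean $\delta\hat w$. A quick numerical check of its normalization and first moment (our own sanity check, not part of the paper):

```python
import numpy as np

dw = 1.0                                  # kick amplitude delta-w-hat
S = np.linspace(1e-6, 300, 1_500_001)     # the integrand vanishes at both ends
P = dw / (2 * np.sqrt(np.pi) * S**1.5) * np.exp(-(S - dw)**2 / (4 * S))

h = np.diff(S)
norm = float(np.dot((P[1:] + P[:-1]) / 2, h))            # should be 1
SP = S * P
mean = float(np.dot((SP[1:] + SP[:-1]) / 2, h))          # should be dw
```

The unit mean (in units of $\delta\hat w$) reflects the fact that the interface follows the confining parabola on average.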
\begin{figure}[t]
\Fig{Figure3}
\caption{Green histogram: global avalanche-size distribution from a direct numerical simulation of a discretized version of Eq.~\eqref{BFMdef} with parameters $N=1024$, $m=0.01$, $df = m^2 \delta w = 1$ and $dt=0.05$. Red line: theoretical result given in Eq.~\eqref{GlobalDistrib}. For details about the simulation see appendix \ref{a:Numerics}.}\label{GlobalSizeFig}
\end{figure}
Avalanches have the property of infinite divisibility, i.e.\ they form a L\'evy process. This can be written as an equality in distribution, i.e.\ for probabilities,
\begin{equation}
P_{\delta w_1} * P_{\delta w_2} \overset{d}{=}P_{\delta w_1+ \delta w_2}\ .
\end{equation}
It implies that we can extract from the probability distribution \eqref{GlobalDistrib} the {\em single avalanche} density per unit $\delta w$ that we denote $\rho(S)$ and which is defined as
\begin{equation}
P_{\delta w} (S) \underset{\delta \hat w \ll 1}{\simeq} \delta w \,\rho(S)\ .
\end{equation}
This avalanche density contains the same information as the full distribution \eqref{GlobalDistrib}; its expression is
\begin{equation}
\rho(S) = \frac{L^d}{2 \sqrt{\pi} S^{\frac{3}{2}}}e^{-\frac{S}{4}} \sim S^{-\tau}\ .
\label{GlobalDensity}
\end{equation}
It is proportional to the system volume since avalanches occur anywhere along
the interface. It defines the avalanche exponent $\tau=\frac{3}{2}$ for the
BFM. Due to the divergence when $S \rightarrow 0$ it is not
normalizable (it is not a PDF), but as the interface follows on average the confining parabola, it has the following property
\begin{equation}
\int_{0}^{\infty} \!\!\!dS\,S \rho(S) = L^d \ .
\end{equation}
In this picture, typical avalanches, i.e.\ almost all avalanches, are of vanishing size, $S\approx 0$, or more precisely
$S\le \delta \hat{w} ^2$, but moments of
avalanches are dominated by non-typical large avalanches
(of order $S_m$).
\subsection{Local size}
We now investigate the distribution of the local size $S_r$ as defined in Eq.~\eqref{LocalSize}. We have to specify the form of the kick; we start with a uniform one (in $x$): $\delta w_x = \delta w$ for all $x \in \mathbb{R}$. In this case the system is translationally invariant, and we can choose $r=0$, as the local size has the same distribution everywhere.
The distribution of $S_0$ is obtained by solving Eq.~\eqref{instanton} with the source $\lambda_x =\lambda \delta(x)$, and then computing the inverse Laplace transform with respect to $\lambda$ of $G(\lambda) = \exp (\delta w \int_x \tilde{u}^{\lambda} )$, where $\tilde{u}^{\lambda}$ is the instanton solution (depending on $\lambda$). This has been done in \cite{LeDoussalWiese2012a}; the final result is
\begin{equation} \label{LocalDistrib}
\begin{split}
\!\!P_{\delta w}(S_0)&= \frac{2\times 3^{\frac{1}{3}}}{S_0^{\frac{4}{3}}}e^{6 \delta \hat{w}} \delta \hat{w}\, \text{Ai}\!\left( \left( \frac{3}{S_0}\right)^{\!\frac{1}{3}}(S_0+2 \delta \hat{w})\right)\\
&\simeq_{\delta \hat w \ll 1} \delta w \frac{2 L^{d-1}}{\pi S_0} \text{K}_{\frac{1}{3}}\!\left(\frac{2 S_0}{\sqrt{3}} \right)\ .
\end{split}
\end{equation}
Here $\delta \hat{w}= L^{d-1} \delta w$, $\text{Ai}$ is the Airy function, and $\text{K}$ the modified Bessel function of the second kind.
We used that $\text{Ai}(x)=\frac{1}{\pi} \sqrt{\frac{x}{3}} K_{1/3}(\frac{2}{3} x^{3/2})$ for $x>0$. This distribution again has the property of infinite divisibility, which is far from obvious from the final result, but can be checked numerically.
The small-$\delta w$ limit defines the density per unit $\delta w$ of the local sizes of a ``single avalanche", which is given by
\begin{equation}\label{LocalDensity}
\begin{split}
\rho(S_0) & = \frac{2 L^{d-1}}{\pi S_0} \,\text{K}_{\frac{1}{3}}\! \!\left(\frac{2 S_0}{\sqrt{3}} \right) \\
& \simeq_{S_0 \ll 1} L^{d-1} \frac{\sqrt[6]{3}\,\Gamma
(1/3)}{\pi {S_0}^{4/3}} \sim S_0^{- \tau_\phi} \ .
\end{split}
\end{equation}
Its small-size behavior defines the local size exponent $\tau_\phi=\frac{4}{3}$ for the BFM.
The distribution \eqref{LocalDistrib}, or the density \eqref{LocalDensity}, can be compared to the results of direct numerical simulations of the BFM, and the agreement is very good over 7 decades, {\em without any fitting parameter}, c.f. Fig.~\ref{LocalSizeFig}.
\begin{figure}[t]\Fig{Figure4}
\caption{Green histogram: local avalanche-size distribution from a direct numerical simulation of a discretized version of Eq.~\eqref{BFMdef} with parameters $N=1024$, $m=0.01$, $df = m^2 \delta w = 1$, and $dt=0.05$. Red line: the theoretical result given in Eq.~\eqref{LocalDistrib}. For details about the simulation see appendix \ref{a:Numerics}.}\label{LocalSizeFig}
\end{figure}
Another interesting property is that the tail of large local sizes behaves as $\rho(S_0) \simeq_{S_0 \gg 1} S_0^{-3/2} e^{-2S_0/\sqrt{3}}$, i.e.\ with the same power-law exponent in the pre-exponential factor as the global size.
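Both regimes can be checked against the Bessel representation \eqref{LocalDensity}. The sketch below is ours; it sets $L^{d-1}=1$, and the tail prefactor $3^{1/4}/\sqrt{\pi}$ is our evaluation of the standard $K_\nu$ asymptotics, which the text does not quote:

```python
import numpy as np
from scipy.special import kv, gamma

# Local-size density, Eq. (LocalDensity), with L^{d-1} = 1 (our convention here).
rho = lambda s: 2 / (np.pi * s) * kv(1 / 3, 2 * s / np.sqrt(3))

# Small-S0 regime: rho ~ 3^{1/6} Gamma(1/3) / (pi S0^{4/3}), i.e. tau_phi = 4/3.
s_small = 1e-8
ratio_small = rho(s_small) / (3**(1 / 6) * gamma(1 / 3) / (np.pi * s_small**(4 / 3)))

# Large-S0 tail: rho ~ (3^{1/4}/sqrt(pi)) S0^{-3/2} exp(-2 S0/sqrt(3)).
s_large = 50.0
ratio_large = rho(s_large) / (s_large**-1.5 * np.exp(-2 * s_large / np.sqrt(3)))
print(ratio_small, ratio_large)
```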
\subsection{Joint global and local size}
\label{subsec:joint}
We now extend these results with a new calculation of the joint density of local and global sizes.
Consider $P_{\delta w}(S_0,S)$, the joint PDF of local size $S_0$ and global size $S$, following a uniform kick
$\delta w$. For arbitrary $\delta w$ it does not admit a simple explicit form (see
Appendix \ref{sec:joint}). We thus again
consider the ``single avalanche" limit $\delta w \rightarrow 0$. It defines the joint density $\rho(S_0,S)$, via
$P_{\delta w}(S_0,S) \simeq \delta w\, \rho(S_0,S)$, which we now calculate. Equivalently one can
consider the conditional probability $P_{\delta w}(S_0|S)$ of the local size, given that the global size is $S$. In the
limit $\delta w \to 0$ these two objects are related by
\begin{equation}
P_{0^+}\!(S_0|S)= \frac{\rho(S_0,S)}{\rho(S)} \ ,
\end{equation}
where $\rho(S)$ is given in Eq.\ (\ref{GlobalDensity}); the two factors of $\delta w$ cancel.
For simplicity we discuss the result for
$P_{0^+}\!(S_0|S)$. While both $\rho(S)$ and $\rho(S_0,S)$ are not probabilities, i.e.\ they cannot be normalized to one, we will show that the conditional probability $P_{0^+}\!(S_0|S)$ is well-defined, and normalized to unity.
A natural decomposition of this conditional PDF is
\begin{equation} \label{dec}
P_{0^+}\!(S_0|S) = \hat P_{0^+}\!(S_0|S) + \delta(S_0) \left(1 - \int_{u>0}\!\!\!\hat P_{0^+}\!(u|S) \right)\ .
\end{equation}
The first term is the smooth part defined for $S_0>0$ which comes from
the avalanches containing the point $ r=0$. The second term arises from all avalanches
which do not contain the point $ r=0$. This term contains a subtraction so that the
total probability is normalized to unity, $\int_{S_0} P_{0^+}(S_0|S) =1$, as it should be.
The smooth part is calculated using the instanton-equation approach.
The details are given in Appendix \ref{sec:joint}.
The final result takes the scaling form
\begin{equation}\label{jointdensity}
\hat P_{0^+}\!(S_0|S) = \frac{1}{L} {4 \times 3^{2 \over 3} \over S_0^{2 \over 3}} e^{-\frac{2}{3}\alpha^3 } \Big[\alpha\, \text{Ai} \big( \alpha^2 \big) - \text{Ai}'\big(\alpha^2\big) \Big]
\end{equation}
with
\begin{equation}\label{alphaDef}
\alpha := {3^{2 \over 3} S_0^{4 \over 3} \over S}\ .
\end{equation}
The factor $1/L$ is natural since only a fraction of order $ 1/L$ of
avalanches contains the point ${r=0}$. As written, this smooth part is not normalized.
Its integral is equal to the probability $p$ that the point $r=0$ has moved
(i.e.\ $S_0>0$) during an avalanche, for which we find
\begin{equation}
p := \int_0^{\infty}\!\!\!dS_0\, \hat P_{ 0^+}(S_0|S) = \frac{S^{\frac{1}{4}}}{L} \frac{3 \Gamma\!\left( { 1 \over 4} \right) }{\sqrt{\pi}}\ .
\end{equation}
The scaling of this probability with size shows that in a single avalanche only a finite
portion of the interface is moving. If we assume statistical translational invariance
we deduce that
\begin{equation}
p = \langle \ell \rangle_S /L\ ,
\end{equation}
where $\ell$ is the extension defined in (\ref{extension}), and $\langle \ell \rangle_S$ its mean
value conditioned to the global size $S$. Hence we deduce that
\begin{equation}
\langle \ell \rangle_S = \frac{3 \Gamma\!\left( { 1 \over 4} \right) }{\sqrt{\pi}} S^{\frac{1}{4}}\ .
\end{equation}
In the following sections we will in fact calculate the
PDF of the extension $\ell$.
\begin{figure}[t]
\Fig{Figure5}
\caption{Distribution of $\alpha$, defined in Eq.~\eqref{alphaDef}, from numerical simulations ($N=1024,m=0.02,\delta w=10, dt=0.01$). This is compared to the theoretical prediction \eqref{alphaDistrib}. Keeping only large-size avalanches, this converges (without any adjustable parameter) to the $\delta w =0^+$ result. }\label{AlphaDistribFig}
\end{figure}
By dividing by $p$, we can now define a genuine normalized PDF for $S_0$, $\tilde P_{0^+}\!(S_0|S)$, conditioned to both $S$ and $S_0>0$, so that the decomposition (\ref{dec}) becomes
\begin{equation}
P_{0^+}\!(S_0|S) = p\,\tilde P_{0^+}\!(S_0|S) + \delta(S_0) (1-p) \ .
\end{equation}
Explicitly
\begin{equation}\label{S0DistribSfixed}
\tilde P_{0^+}\!(S_0|S) = \frac{4 \sqrt{\pi} e^{-\frac{2}{3}\alpha^3 } }{3^{1 \over 3} \Gamma\!\left(\frac{1}{4}\right) S_0^{2 \over 3} S^{\frac{1}{4} } } \Big[\alpha\, \text{Ai} \big( \alpha^2 \big) - \text{Ai}'\big(\alpha^2\big) \Big]\ ,
\end{equation}
with $\alpha$ defined in Eq.~\eqref{alphaDef}.
It is
now normalized to unity, $\int_{S_0>0} \tilde P_{0^+}(S_0|S) =1$.
One sees that the typical local size scales as $S_0 \sim S^{3/4}$.
Computing the first moment we find its conditional average
to be $\langle S_0 \rangle_{S,S_0>0} = \frac{\sqrt{\pi }}{3 \Gamma \left(1/4\right)} S^{3/4}$.
Its PDF has two limiting behaviors,
\begin{equation}
\tilde P_{0^+}\!(S_0|S) \simeq \left\{
\begin{array}{ll}
\dfrac{e^{-\frac{12 S_0^4}{S^3}}}{ \Gamma(\frac{5}{4}) S^{\frac{3}{4}}} &\text{ for } S_0 \gg S^{\frac{3}{4}}\\
\dfrac{ \sqrt{\pi}} { 3^{\frac{2}{3}} \Gamma(\frac{1}{3}) \Gamma(\frac{5}{4})
S_0^{\frac{2}{3}} S^{\frac{1}{4}} }&\text{ for }S_0 \ll S^{\frac{3}{4}}\ .
\end{array}\right.
\end{equation}
The first one shows that the probability of avalanches which are ``peaked" at $r=0$
decays very fast. The second shows an integrable divergence at small $S_0$
with an exponent $2/3$. Comparing, for instance, with the
behavior of the local size density (\ref{LocalDensity}), we see that conditioning
on $S$ yields a rather different behavior and exponent.
It is interesting to note that changing variables in Eq.~\eqref{S0DistribSfixed} from $S_0$ to $\alpha$, defined in \eqref{alphaDef}, gives
\begin{equation} \label{alphaDistrib}
\tilde P_{0^+}\!(\alpha|S) = \frac{\sqrt{3 \pi} e^{-\frac{2}{3}\alpha^3 } }{ \Gamma\!\left(\frac{1}{4}\right) \alpha^{\frac{3}{4}} } \Big[\alpha\, \text{Ai} \big( \alpha^2 \big) - \text{Ai}'\big(\alpha^2\big) \Big]
\ ,
\end{equation}
which is now independent of $S$, and thus easier to test numerically as it does not require any conditioning. Figure \ref{AlphaDistribFig} shows the agreement of these predictions with numerical simulations, in the limit of large $S$ which is equivalent to $\delta w=0^+$ as used in the theoretical derivation.
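A further consistency check, not in the original text, is that \eqref{alphaDistrib} must integrate to one over $\alpha\in(0,\infty)$, since it was obtained from a normalized conditional PDF. A short numerical sketch, substituting $\alpha=t^4$ so that the factor $\alpha^{-3/4}\,d\alpha = 4\,dt$ cancels the integrable divergence at the origin:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy, gamma

def integrand(t):
    # alpha = t**4; then alpha^{-3/4} d(alpha) = 4 dt exactly.
    a = t**4
    Ai, Aip, _, _ = airy(a * a)  # Ai(alpha^2) and Ai'(alpha^2)
    return (np.sqrt(3 * np.pi) / gamma(0.25)) * 4 \
        * np.exp(-2 * a**3 / 3) * (a * Ai - Aip)

# The integrand is double-exponentially small beyond t ~ 1.3.
total, _ = quad(integrand, 0, 1.6)
print(total)  # should be close to 1
```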
\subsection{Scaling exponents}
Let us now discuss the various exponents obtained so far. They are consistent with the
usual scaling arguments for interfaces. If an avalanche has an extension of order $\ell$
(in the codirection of the hyperplane over which the local size is calculated), the
transverse displacement scales as $u \sim \ell^\zeta$. Here the roughness exponent $\zeta$
for the BFM with SR elasticity is
\begin{equation}
\zeta_{{\rm BFM}} = 4 -d \ .
\end{equation}
The avalanche exponent for the global size follows the Narayan-Fisher (NF) prediction \cite{NarayanDSFisher1993a}
\begin{equation}\label{NF}
\tau = 2 - \frac2{d + \zeta} ~\stackrel{\rm BFM}{-\!\!\!\longrightarrow}~ \frac{3}{2} \ .
\end{equation}
The global size then scales as $S \sim \ell^{d + \zeta}$, since
all $d$ internal directions are equivalent, and the transverse response scales with the roughness exponent $\zeta$.
In turn this gives $\ell \sim S^{1 \over d+ \zeta}$. In the BFM with SR elasticity
this leads to $\ell \sim S^{1/4}$ as found above.
Similarly, the local size, defined here as the avalanche size inside a $d_{\phi}$-dimensional subspace, is $S_0 \sim \ell^{d_\phi + \zeta}$, leading to
a generalized NF value $\tau_\phi = 2 - \frac{2}{d_\phi + \zeta}$.
In the BFM we have focused on the case $d_\phi=d-1$ (i.e.\ the subspace is a hyperplane), hence $d_\phi+\zeta=3$ and
the local size exponent becomes $\tau_\phi = 4/3$.
It also implies $S_0 \sim \ell^3$, hence $S_0 \sim S^{3/4}$ as found above.
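Since $d+\zeta=4$ for the BFM in any $d$, these values are independent of the dimension; a trivial sketch tabulating the relations quoted above:

```python
# Scaling relations for the BFM with SR elasticity, as quoted in the text:
# zeta = 4 - d, tau = 2 - 2/(d + zeta), tau_phi = 2 - 2/(d_phi + zeta),
# with d_phi = d - 1 (a hyperplane).
results = {}
for d in (1, 2, 3):
    zeta = 4 - d
    tau = 2 - 2 / (d + zeta)            # Narayan-Fisher, global size
    tau_phi = 2 - 2 / ((d - 1) + zeta)  # local size on a hyperplane
    results[d] = (tau, tau_phi)
print(results)  # tau = 3/2 and tau_phi = 4/3 in every dimension
```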
\section{Driving at a point: avalanche sizes}
\label{sec:point}
Here we briefly study avalanche sizes for an interface driven only in a small
region of space, e.g.\ at a point. There are two main cases:
\begin{itemize}
\item the local force on the point is imposed, which in our framework means
considering a local kick $\delta w_x= \delta w \,\delta(x)$. In the massless setting
it amounts to using $f_x = \delta f\, \delta(x)$,
\item the displacement $u_{x=0,t}$ of one point of the interface is imposed.
\end{itemize}
As we now show, this leads to different universality classes and exponents.
\subsection{Imposed local force}
Consider an avalanche following a local kick at $x=0$, i.e.\ $\delta w_x=\delta w_0 \delta(x)$.
In the BFM the distribution of the {\it global size} of an avalanche
does not depend on whether the kick is local in space or not. One still obtains \cite{LeDoussalWiese2012a} the
global-size distribution as given in Eq.~(\ref{GlobalDistrib}) with $\delta \hat w = \int_x \delta w_x=\delta w_0$.
The distribution of the {\it local size at the point of the kick
} is more interesting. The
calculation is performed in Appendix \ref{app:local}. For simplicity
we restrict to $d=1$; the general case can be obtained as above by
inserting factors of $L^{d-1}$. The full result for the PDF, $P_{\delta w_0}\!(S_0)$,
is given in (\ref{resloc})
and is bulky. In the limit $\delta w_0 \to 0$ it simplifies. Noting $P_{\delta w_0}\!(S_0)
\simeq \delta w_0 \rho(S_0)$, the
corresponding local-size density becomes
\begin{equation}
\rho(S_0) = - \frac{1}{3^{1/3} S_0^{5/3}} \text{Ai}'\!\left(3^{1/3} S_0^{2/3}\right)\ .
\end{equation}
At small $S_0$, or equivalently in the massless limit at fixed $\delta f_0 = m^2 \delta w_0$, it diverges as
\begin{equation}
\rho(S_0) \underset{S_0\ll1}{\simeq} \frac{S_0^{- 5/3}}{3^{2/3} \Gamma(1/3)} \sim S_0^{-\tau_{0,{\rm loc. driv.}} }\ .
\end{equation}
This leads to a {\it new avalanche exponent}
\begin{equation}
\tau_{0,{\rm loc. driv.}}=\frac{5}{3}\ .
\end{equation}
The cutoff at small size is given by the driving, $S_0 \sim \delta w_0^{3/2}$. At large $S_0$ the PDF is cut by the scale $S_m\equiv 1$ and decays as
\begin{equation}
\rho(S_0) \underset{S_0\gg1}{\simeq}\frac{S_0^{-3/2}}{2 \sqrt{\pi} 3^{1/4}} e^{-2 S_0/\sqrt{3}} \ .
\end{equation}
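Both limits follow from the Airy representation; the following sketch (ours) compares the exact density with the two asymptotes quoted above:

```python
import numpy as np
from scipy.special import airy, gamma

# Local-size density at the kicked point (d = 1, single-avalanche limit).
def rho(s):
    _, Aip, _, _ = airy(3**(1 / 3) * s**(2 / 3))  # Ai'(3^{1/3} S0^{2/3})
    return -Aip / (3**(1 / 3) * s**(5 / 3))

# Small-S0: rho ~ S0^{-5/3} / (3^{2/3} Gamma(1/3)).
s_small = 1e-9
ratio_small = rho(s_small) * 3**(2 / 3) * gamma(1 / 3) * s_small**(5 / 3)

# Large-S0: rho ~ S0^{-3/2} e^{-2 S0/sqrt(3)} / (2 sqrt(pi) 3^{1/4}).
s_large = 40.0
tail = s_large**-1.5 * np.exp(-2 * s_large / np.sqrt(3)) / (2 * np.sqrt(np.pi) * 3**0.25)
ratio_large = rho(s_large) / tail
print(ratio_small, ratio_large)
```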
\subsection{Imposed displacement at a point}
\label{sec:imposed}
We analyze the problem in the massless case. To impose the
displacement at the point $x=0$, we replace $m^2 \to m^2 \delta(x)$ in the
equations of motion (\ref{BFMdef}) and (\ref{BFMpos}).
Hence there is no global mass, but a local one to drive the
interface at a point. To impose the displacement, we consider the limit $m^2 \to \infty$.
In that limit $u_{x=0,t}=w_{0,t}$, and the local size of the
avalanche $S_0$ is equal to $\delta w_0$.
While the local size $S_0$ is fixed by the driving, we can
calculate the distribution of global sizes. It is obtained in
Appendix \ref{imposeddispl} using an instanton
equation with a Dirac mass term. It can be mapped
onto the same instanton equation as studied for the
joint PDF of local and global sizes. The Laplace-transform of the result for the
PDF is given in Eq.~(\ref{LT1}). Its small-driving limit,
i.e.\ the density, is
\begin{equation}
\rho(S) = \frac{\sqrt{3}}{\Gamma(1/4) S^{7/4}} \sim S^{-\tau_{\rm loc. driv.}}
\end{equation}
with a distinct exponent
\begin{equation}
\tau_{\rm loc. driv.} = \frac{7}{4} \ .
\end{equation}
\section{Distribution of avalanche extensions}
\label{sec:extension}
In this section we study the distribution of avalanche extensions.
In the BFM they can be calculated analytically. We start by recalling standard scaling arguments.
\subsection{Scaling arguments for the distribution of extensions}
As mentioned in the last section, we expect that
the global size $S$ and the extension $\ell$ of avalanches are related by the scaling relation
\begin{equation}
S \sim \ell^{d + \zeta}
\end{equation}
in the region of small avalanches $S \ll S_m$ (in dimensionful units).
From the definition of the avalanche-size exponent
\begin{equation}
P(S) \sim S^{-\tau}
\end{equation}
and using the change of variables $P(S) dS = P(\ell) d\ell$
we find
\begin{equation}
P(\ell) \sim \ell^{- \kappa} \, \text{ with } \, \kappa = 1 + (\tau-1) (d+\zeta) \ .
\end{equation}
Using for $\tau$ the NF prediction (\ref{NF}), i.e.
\begin{equation}
\tau = 2 - \frac{2}{d+\zeta}\ ,
\end{equation}
we obtain for SR elasticity
\begin{equation}
\kappa = d + \zeta -1\ .
\end{equation}
The prediction for the BFM is that $\zeta_{\rm BFM}=4-d$ and $\tau_{\rm BFM}=3/2$,
which leads to
\begin{equation}
\kappa_{\rm BFM}=3
\end{equation}
in all dimensions. We now check this prediction from the scaling relations
against exact calculations in the BFM in $d=1$.
\subsection{Instanton equation for two local sizes}
If we want to investigate the joint distribution of two local sizes at points $r_1$ and $r_2$,
we need to solve the instanton equation with two local sources,
\begin{equation}
\tilde{u}''_x -\tilde{u}_x+ \tilde{u}^2_x =- \lambda_1 \delta(x-r_1) - \lambda_2 \delta(x-r_2)\ .
\label{InstantonTwoSources}
\end{equation}
This solution is difficult to obtain for general values of $\lambda_1$ and $\lambda_2$.
Nevertheless $\lambda_{1,2} \rightarrow - \infty$ is an interesting solvable limit, and sufficient to compute the extension distribution. Let us denote by $\tilde{u}_{r_1,r_2}(x)$ a solution of Eq.~\eqref{InstantonTwoSources}
with $r_1<r_2$ in the limit $\lambda_{1,2} \rightarrow - \infty$. It allows us to express the probability that two local sizes in an avalanche following an arbitrary kick $\delta w_x$ equal $0$,
\begin{equation}
\begin{split}
\mathbb{P}_{\delta w_x} (S_{r_1} =0&,S_{r_2}=0)\\
&= \exp \left( \int_{x \in \mathbb{R}^d} \delta w_x \,\tilde{u}_{r_1,r_2}( x ) \right)\ .
\label{towpointsdistrib}
\end{split}
\end{equation}
We further restrict for simplicity to the massless case, i.e.\ without the linear
term $\tilde u_x$ in Eq.~(\ref{InstantonTwoSources}).
One easily sees from the latter equation
that $\tilde{u}_{r_1,r_2}$ takes the scaling form
\begin{equation}
\tilde{u}_{r_1,r_2}(x) = \frac{1}{(r_1-r_2)^2}\, f\!\left(\frac{2x -r_1 - r_2}{2(r_2-r_1)} \right)\ .
\end{equation}
The function $f(x)$ is a solution of
\begin{equation}
f''(x) + f(x)^2 = 0 \ .
\end{equation}
It diverges at $x=\pm \frac{1}{2}$, vanishes at $x \to \pm \infty$,
and is negative everywhere, $f(x) \leq 0$. Since $\delta w_x \geq 0$, the latter is a necessary condition for the probability (\ref{towpointsdistrib}) to be bounded by one.
In the interval $x \in ]-\frac{1}{2},\frac{1}{2}[$, the scaling function $f(x)$ can be expressed
in terms of the Weierstrass $\mathcal{P}$-function, see (\ref{soluP}),
\begin{equation}
f( x ) = - 6 \,\mathcal{P} \!\left( x + \frac{1}{2};\, g_2=0;\, g_3 = \frac{\Gamma(1/3)^{18}}{(2 \pi)^6} \right)\ .
\label{towSourceSol1}
\end{equation}
The value
of $g_3>0$ is consistent with the required period $2 \Omega=1$, see \eqref{halfper}.
Note from Appendix \ref{app:W} that there is another solution of the form
(\ref{towSourceSol1}) with $g_3 = - \Big( 2 \sqrt{\pi} \frac{\Gamma(1/3)}{4^{\frac{1}{3}} \Gamma(5/6)} \Big)^6<0$
which violates the condition $f(x) \leq 0$, hence is discarded.
For $|x| \geq 1/2$, the function $f(x)$ reads
\begin{equation}
f( x ) = - \frac{6}{(|x|-1/2)^2}\ .
\label{towSourceSol2}
\end{equation}
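As an elementary check (ours), the outer branch \eqref{towSourceSol2} indeed solves the massless instanton equation $f''+f^2=0$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# Outer branch of the scaling function, Eq. (towSourceSol2), written for x > 1/2.
f = -6 / (x - sp.Rational(1, 2))**2
residual = sp.simplify(sp.diff(f, x, 2) + f**2)
print(residual)  # -> 0
```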
One property of the solution $\tilde{u}_{r_1,r_2}(x)$ is that it diverges as $\sim (x-r_{1,2})^{-2}$ when $x \approx r_{1,2}$.
There are thus two cases:
(i) the driving $\delta w_x$ is non-zero at one of these points, or vanishes too slowly
near this point (e.g.\ only linearly or slower). Then the integral in \eqref{towpointsdistrib}
diverges to $- \infty$, which implies
$$\mathbb{P}_{\delta w_x} (S_{r_1} =0,S_{r_2}=0) = 0\ .$$
This means that the avalanche surely contains at least one of the points $r_1$ or $r_2$.
(ii) If $\delta w_x$ vanishes fast enough, for example if $\delta w_x$ is localised away from $r_{1,2}$ (e.g.\ $\delta w_x = \delta w\, \delta (x-y)$ for some $y\in \mathbb{R} \backslash \{r_1,r_2 \} $), the probability \eqref{towpointsdistrib} becomes non-trivial.
\subsection{Avalanche extension with a local kick}
We now consider a local kick
centered at $x=0$, i.e.\ $\delta w_x = \delta w_0 \,\delta(x)$.
If further $0 < r_1 < r_2$, then
\begin{equation}
\mathbb{P}_{\delta w_0}\!\left(S_{r_1} =0,S_{r_2}=0 \right) = \mathbb{P}_{\delta w_0}\!\left(S_{r_1} =0\right)\ .
\end{equation}
This comes from the fact that in the interval $x \in\, ]{-}\infty,r_1]$, the solution $\tilde{u}_{r_1,r_2}(x)$
is identical to the instanton solution with only one infinite source at $r_1$ (in other words, it does not ``feel" the source at $r_2$). This shows, for instance, that the support of the avalanche contains the set of points
where the driving is non-zero.
This property also shows that avalanches are connected, i.e.\ it is impossible to find a plane, between two moving parts of the interface, where the interface did not move.
As a function of $r$ (which is one-dimensional), the support (i.e.\ the set of
points where $S_r >0$) of an avalanche following a local kick at $x=0$ must
be an interval. Since this interval contains $x=0$ we will write it as $[-\ell_1, \ell_2]$
with $\ell_1>0$ and $\ell_2>0$. This allows us to define the extension of an avalanche
as $\ell = \ell_1 + \ell_2$.
To calculate the joint PDF of $\ell_1$ and $\ell_2$ for a kick at $x=0$
we consider \eqref{towpointsdistrib} with $r_1=-x_1 < 0 < r_2=x_2$. Using the previous results about the instanton
equation with two sources, and the fact that the interface model is translationally invariant, we obtain the
joint cumulative distribution for $\ell_1>0$ and $\ell_2>0$:
\begin{equation}
F_{\delta w_0} (x_1,x_2) : = \mathbb{P}_{\delta w_0} \left( \ell_1 <x_1, \ell_2 <x_2 \right)\ .
\end{equation}
It can, for any $x_1,x_2>0$, be expressed in terms of the function $f$ obtained
in the preceding section,
\begin{equation}
\begin{split}\label{cumul_distrib_extension}
F_{\delta w_0} (x_1,x_2) & =
\mathbb{P}_{\delta w_0}\!\left(S_{r_1} =0,S_{r_2}=0 \right) \\
& = \exp\!\left(\int_x \delta w_0 \delta(x)\,\tilde{u}_{-x_1,x_2}(x)\right)\\
& = e^{\delta w_0 \frac{1}{(x_1+x_2)^2} f \left(- \frac{x_2-x_1}{2(x_1+x_2)}\right)}\ .
\end{split}
\end{equation}
Since the argument of $f$ is within the interval $]- \frac{1}{2},\frac{1}{2}[$
we must use the expression \eqref{towSourceSol1}.
From this one can obtain several results. First, taking $x_2 \to \infty$
one obtains the cumulative distribution of $\ell_1$ alone; differentiating it yields the PDF
\begin{equation}
\mathbb{P}_{\delta w} \left( \ell_1 \right) = \frac{12 \delta w}{\ell_1^3} e^{- \delta w \frac{6}{\ell_1^2}} \ .
\end{equation}
A similar result holds for $\ell_2$.
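This PDF is normalized to unity, as follows from the substitution $u = 6\,\delta w/\ell_1^2$; a quick numerical sketch (ours, with an arbitrary illustrative value of $\delta w$):

```python
import numpy as np
from scipy.integrate import quad

dw = 0.3  # arbitrary kick amplitude, for illustration only
pdf = lambda l: 12 * dw / l**3 * np.exp(-6 * dw / l**2)
total, _ = quad(pdf, 0, np.inf)
print(total)  # integrates to 1 for any dw > 0
```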
In principle, one can now obtain the distribution of avalanche extensions,
\begin{equation}
\mathbb{P}_{\delta w_0}\!\left(\ell\right) = \int_0^{\infty}\!\!\!\! d \ell_1 \int_0^{\infty}\!\!\!\! d\ell_2\, \delta(\ell - \ell_1 - \ell_2) \partial_{\ell_1} \partial_{\ell_2}
F_{\delta w_0} (\ell_1,\ell_2)\ .
\end{equation}
It has a rather complicated expression. Let us define, in addition to the total extension, the aspect ratio
\begin{equation}
k = \frac{\ell_1-\ell_2}{2(\ell_1+\ell_2)} \quad , \quad - \frac{1}{2} < k < \frac{1}{2}\ .
\end{equation}
Using a change of variables, we obtain the joint density of total extension and aspect ratio
in the limit $\delta w_0 \to 0$,
\begin{eqnarray}\label{defR1}
\rho\left(\ell,k\right) &:=& \lim\limits_{\delta w_0 \to 0} \frac{1}{\delta w_0} \mathbb{P}_{\delta w_0} \left(\ell,k\right) = \frac{R(k)}{\ell^3} \ ,\; \\
\label{defR2}
R(k) &:=& 6 f(k) + 6 k f'(k) + \left(k^2 - \frac{1}{4}\right) f''(k) \ .~~
\end{eqnarray}
The function $f(x)$ was defined in Eq.~\eqref{towSourceSol1}. While the probability as a function of $\ell$ decays as $\ell^{-3}$, the dependence on the aspect ratio is more complicated and plotted in figure \ref{R(k)}. Note that in this expression
$f(k)$ can be replaced by $f_{\rm reg}(k) :=f(k) + \frac{6}{(k+\frac{1}{2})^2} + \frac{6}{(k-\frac{1}{2})^2}$,
which is a regular function of $k$, vanishing at $k=\pm \frac{1}{2}$.
\begin{figure}[t]
\Fig{Figure6}
\caption{Decay amplitude $R(k)$ as a function of the aspect ratio $k$ involved in the joint density of $\ell$ and $k$,
and defined in Eqs.~(\ref{defR1}) and (\ref{defR2}).}
\label{R(k)}
\end{figure}%
Integration over $k$ gives
\begin{eqnarray} \label{density1}
\rho\left(\ell\right) &= &\frac{B}{\ell^3} \qquad \text{ with } \\
B&=&24 + 2 \int_{-1/2}^{1/2}\! f_{\rm reg}(k)\, dk =8 \sqrt{3} \pi \ .
\end{eqnarray}
\subsection{Avalanche extension with a uniform kick}
If a kick extends over the whole system, e.g.\ a uniform kick $\delta w_x = \delta w$, the avalanche will almost surely have an infinite extension, since the local size is non-zero everywhere,
\begin{equation}
\mathbb{P}_{\delta w} \left(S_{ r} =0 \right) = 0 \ \text{ for any } \,{ r} \in \mathbb{R}\ .
\end{equation}
However, in the limit of a small $\delta w$, which is also the ``single avalanche" limit, we can recover the result for the distribution of extensions. This is consistent with the idea that ``single avalanches" do not depend on the way they are triggered. These calculations allow us to obtain the extension distribution without solving the instanton equation explicitly. (The use of elliptic integrals is in fact equivalent to the use of Weierstrass functions as solutions of the instanton equation, c.f.\ Appendix \ref{app:W}.)
We now focus on the following ratio of generating functions
\begin{equation}
{\langle e^{\lambda_1 s_0 +\lambda_2 s_r}\rangle \over \langle e^{\lambda_1 s_0}\rangle\langle e^{\lambda_2 s_r}\rangle}
\label{generating_ratio}
\end{equation}
in the limit $\lambda_1,\lambda_2 \rightarrow - \infty$. It compares the probability that both local sizes $s_0:= S_0$ and $s_r := S_r$ are simultaneously $0$ to the product of the two probabilities that each one is $0$.
We can express this ratio, using the instanton-equation approach, as
\begin{equation}
\begin{split}\label{ratio}
\lim_{\lambda_1,\lambda_2 \rightarrow - \infty} &{\langle e^{\lambda_1 s_0 +\lambda_2 s_r}\rangle \over \langle e^{\lambda_1 s_0}\rangle\langle e^{\lambda_2 s_r}\rangle}\\
= \exp \!\bigg(&\int_x \delta w_x \Big[\tilde{u}_r(x)-\tilde{u}_{\infty}(x)-\tilde{u}_{\infty}(x-r) \Big]\bigg)
\end{split}
\end{equation}
where $\tilde{u}_r:=\tilde{u}_{r_1=0,r_2=r}$. We denote by $\tilde{u}_{\infty}:=\tilde{u}_{r_1=0,r_2= \infty}$ the solution of the instanton equation with one source at $r=0$ and the other one at infinity; it is the same as the solution for a single source at $r=0$. The
above expression is valid for any form of the driving $\delta w_x$.
\begin{figure}[t]
\Fig{Figure7}
\caption{The distribution of extensions $\rho(\ell)$, as obtained from the elliptic integrals (\ref{F7}) and (\ref{F8}) (black line). The (straight) green dotted line is the small-$\ell$ asymptotics (\ref{68}), whereas the (curved) red dotted line is the large-$\ell$ asymptotics (\ref{70}). The numerical simulation (green histogram) is cut at small scale due to discretization effects.}
\end{figure}
We now specialize to the case of a small and uniform driving $\delta w_x = \delta w$; the quantity of interest is then
\begin{equation} \label{GeneratingExtension}
Z(r)=\int_x \tilde{u}_r(x)-\tilde{u}_{\infty}(x)-\tilde{u}_{\infty}(x-r) \ .
\end{equation}
While $\tilde{u}_r(x)$ is not integrable, $Z(r)$ is well defined as the two $\tilde{u}_{\infty}$ terms cancel precisely the two non-integrable poles located at $x=0$ and $x=r$.
Using that $\tilde{u}_r$ is a solution of Eq.~\eqref{InstantonTwoSources}, we can express $Z(r)$ as an elliptic integral; see Appendix \ref{app:elliptic} for the details of the calculation. The formulas written there are for the massive case, and
only yield an implicit expression for $Z(r)$.
They however allow us to extract the small-scale behavior of the avalanche-extension distribution
(equivalently the massless limit). For small $r$, the behavior of $Z(r)$ is
\begin{equation}
Z(r) \simeq \frac{4 \sqrt 3 \pi } {r} \ .
\end{equation}
To understand the connection with the avalanche extension, we return to the interpretation of (\ref{generating_ratio}). Since the kick is uniform, the two averages in the denominator are independent of $r$, and act only as normalization constants. The numerator, in the limit $\lambda_{1,2} \rightarrow - \infty$, is the probability that both $s_0$ and $s_r$ are simultaneously equal to $0$. Differentiating twice \textit{w.r.t.}\ $r$ (which leaves the denominator invariant) gives the probability that the avalanche starts at $x=0$ and ends at $x=r$. Dividing by $\delta w$ and taking the limit\footnote{Note that the denominators can then be set to unity. There is no ambiguity since the calculation could be performed first at finite but large $\lambda_i$, setting $\delta w$ to zero after taking the derivative and dividing by $\delta w$, and only at the end taking the limit of infinite $\lambda_i$.} $\delta w \to 0$, we
obtain the extension density in the limit of a single avalanche as
\begin{eqnarray}\label{68}
\rho(\ell) &= &\frac{1}{\delta w} \partial_r^2 e^{\delta w Z(r)}|_{\delta w=0^+,r=\ell} \\
&=& \partial_r^2 \left. Z(r) \right\rvert_{r=\ell} \simeq \tilde B \ell^{-3}\,\text{ when }\,\ell \rightarrow 0
\nonumber
\end{eqnarray}
with
\begin{equation}
\tilde B
= 8 \sqrt{3} \pi \ .
\end{equation}
We recover here the $\ell ^{-3}$ divergence at small $\ell$ of the density of avalanche extensions.
Note that this calculation gives exactly the same prefactor as in Eq.~(\ref{density1}), which confirms
that we are studying the same object, namely a ``single avalanche".
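The step from the small-$r$ behavior of $Z(r)$ to $\tilde B = 8\sqrt{3}\pi$ can be verified symbolically (a SymPy sketch, ours):

```python
import sympy as sp

r, dw = sp.symbols('r delta_w', positive=True)
Z = 4 * sp.sqrt(3) * sp.pi / r  # small-r behavior of Z(r)
# Density of extensions: (1/dw) d^2/dr^2 exp(dw Z), then dw -> 0^+.
rho_ell = sp.limit(sp.diff(sp.exp(dw * Z), r, 2) / dw, dw, 0)
print(rho_ell)  # tilde B / r**3 with tilde B = 8*sqrt(3)*pi
```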
Finally, in the massive case, one can also compute the tail of
the extension distribution, resulting in (see Appendix \ref{app:elliptic})
\begin{equation} \label{70}
\rho(\ell) \simeq 72 \, \ell e^{-\ell} \text{ when } \ell \rightarrow \infty \ .
\end{equation}
\section{Non-stationary dynamics in the BFM}
\label{sec:non-stat}
The easiest way to construct a position theory equivalent to the BFM model defined in Eq.~(\ref{BFMdef}) is to consider the non-stationary evolution of an elastic line in a specific quenched disorder,
\begin{equation}
\eta \partial_t u_{xt}= \nabla_x^2 u_{xt} + F\left(u_{xt},x\right) + m^2(w_{xt}- u_{xt})\ .
\end{equation}
Here the disorder has the correlations of independent one-sided Brownian motions,
\begin{equation} \label{nonstatcorr}
\overline{F(u,x)F(u',x')}=2 \sigma \delta^d(x-x')\min(u,u')\ .
\end{equation}
Consider the initial condition $u_{xt=0}=0$. We can then compute the correlation function of the position $$u_{xt}=\int_0^t \dot{u}_{xs}\,ds$$ for a
uniform driving $w_t = v t\, \theta (t)$, starting at $t=0$.
The calculation is sketched in Appendix \ref{app:nonstat}. In dimensionless
units and in Fourier space, the result reads
\begin{eqnarray} \label{nonstatBFM}
\langle u_{qt}u_{-qt} \rangle^c &=& v \Bigg[ \frac{2 q^2 (t-1)+2 t -5}{\left(q^2+1\right)^3}-\frac{4 e^{-\left(q^2+1\right) t}}{q^2 \left(q^2+1\right)^3} \nonumber\\
&&+\frac{4 e^{- t}}{q^2 \left(2 q^2+1\right)}+\frac{e^{-2 \left(q^2+1\right) t}}{\left(q^2+1\right)^3 \left(2 q^2+1\right)} \Bigg]\ .~~~~~~~~
\end{eqnarray}
At large times, the displacement correlations behave as (restoring units)
\begin{equation} \label{correl}
\langle u_{qt} u_{-qt} \rangle^c \underset{ t \rightarrow \infty}{\simeq} \frac{2 \sigma v t }{(q^2+m^2)^2}\ .
\end{equation}
The $q$ dependence is similar to the so-called Larkin random-force model \cite{Larkin1970},
but with a time-dependent amplitude, i.e.\ the effective disorder is
growing with time, which is natural given the correlations \eqref{nonstatcorr}. The correlation of the position
thus remains non-stationary at all times\footnote{Note that there are
stationary versions of the BFM, which we will not discuss here, see
discussions in e.g.\ \cite{LeDoussalWiese2011a,DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a}.}.
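In the dimensionless units of Eq.~(\ref{nonstatBFM}) the large-time claim is $\langle u_{qt}u_{-qt}\rangle^c \to 2vt/(q^2+1)^2$; a direct numerical sketch (ours):

```python
import numpy as np

def corr(q, t, v=1.0):
    # Dimensionless connected correlator of Eq. (nonstatBFM).
    a = q**2 + 1
    return v * ((2 * q**2 * (t - 1) + 2 * t - 5) / a**3
                - 4 * np.exp(-a * t) / (q**2 * a**3)
                + 4 * np.exp(-t) / (q**2 * (2 * q**2 + 1))
                + np.exp(-2 * a * t) / (a**3 * (2 * q**2 + 1)))

q, t = 0.7, 1e4
ratio = corr(q, t) / (2 * t / (q**2 + 1)**2)  # approaches 1 as t grows
print(ratio)
```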
From Eq.~(\ref{correl}) one obtains the correlations of the displacement in real space, still in the large-$t$ limit
\begin{eqnarray}
\overline{ (u_{xt}-u_{0t})^2 } &\simeq& 2 v t \int \frac{d^dq}{(2 \pi)^d} \frac{1}{(q^2+m^2)^2} (1-\cos q x) \nonumber \\
&\sim& v t \times x^{2 \zeta_L}
\end{eqnarray}
with $\zeta_L= (4-d)/2$ the Larkin roughness exponent. Note that the
average displacement is
$\overline{u_{xt}} = v t - v\,\frac{1-e^{-m^2 t}}{m^2}$ (see Appendix \ref{app:nonstat}).
Hence we see that the BFM roughness scaling $u \sim x^{4-d}$
is dimensionally consistent with the
correlation at large times,
\begin{equation}
\overline{ (u_{xt}-u_{0t})^2 } \simeq 2 ~ \overline{u_{xt}} ~ x^{4-d} \ .
\end{equation}
This result, $\zeta = 4 -d = \varepsilon$, is in agreement with the FRG approach:
the position theory of the BFM model is an exact fixed point for the flow
equation of the FRG with a roughness exponent $\zeta = \varepsilon$,
as discussed in \cite{LeDoussalWiese2011b,DobrinevskiLeDoussalWiese2011b}.
\section{Conclusion}
We presented a general investigation of the Brownian Force Model, using its exact solvability via the instanton equation in various settings. After reviewing the results and the calculations of \cite{LeDoussalWiese2008c,LeDoussalWiese2011a,DobrinevskiLeDoussalWiese2011b,LeDoussalWiese2012a},
we extended the study in several directions.
First, we computed observables containing information about the spatial structure of avalanches in the BFM: the joint density of $S$ and $S_0$ (or equivalently, the distribution of the local size
$S_0$ at fixed total global size $S$), and the distribution of the extension $\ell$ of an avalanche.
These distributions display power laws in their small-scale regime,
which we recovered using scaling arguments, together with universal amplitudes.
We also extended the method to study new driving protocols relevant to distinct experimental setups.
The derived results show new exponents for the small-scale behavior of the global avalanche-size distribution following a locally imposed displacement, and for the small-scale behavior
of the local-size distribution following a localized kick.
Finally, we presented results for the non-stationary dynamics of the BFM, focusing
on observables which exist only in the position theory, such as the roughness exponent.
This explains why both the Larkin roughness and the BFM roughness (emerging from the FRG approach)
play a role in this model, depending on whether the driving is stationary or not.
\acknowledgements
We thank A.\ Rosso, A.\ Kolton and A.\ Dobrinevski for stimulating discussions, PSL for support by Grant No.\ ANR-10-IDEX-0001-02-PSL, as well as KITP for hospitality and support in part by NSF Grant No.\ NSF PHY11-25915.
\section{Introduction}
An \emph{animal} on a $d$-dimensional lattice is a connected set of lattice cells, where
connectivity is through ($d{-}1$)-dimensional faces of the cells. Specifically, on the planar
square lattice, connectivity of cells is through edges. Two animals are considered identical if
one can be obtained from the other by \emph{translation} only, without rotations or flipping.
(Such animals are called ``fixed'' animals, as opposed to ``free'' animals.)
Lattice animals attracted interest in the literature as combinatorial objects~\cite{eden1961two} and
as a computational model in statistical physics and chemistry~\cite{temperley1956combinatorial}.
(In these areas, one usually considers \emph{site} animals, that is, clusters of lattice
vertices, hence, the graphs considered there are the \emph{dual} of our graphs.)
In this paper, we consider lattices in two dimensions, specifically, the hexagonal, triangular,
and square lattices, where animals are called polyhexes, polyiamonds, and
polyominoes, respectively.
We show the application of our results to the square and hexagonal lattices,
and explain how to extend the latter to the triangular lattice.
Examples of such animals are shown in Figure~\ref{fig:examples}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\drawpoly[scale = 0.75]{exmpSqr.txt}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\centering
\drawpolyhex[scale = 0.75]{exmpHex.txt}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\centering
\drawpolyiamond[scale = 0.70]{exmpTri.txt}
\end{subfigure}
\caption{An example of a polyomino, a polyhex, and a polyiamond.}
\label{fig:examples}
\end{figure}
Let $A^\mathcal{L}(n)$ denote the number of lattice animals of size~$n$, that is, animals
composed of $n$ cells, on a lattice~$\mathcal{L}$.
A major research problem in the study of lattices is understanding the nature
of~$A^\mathcal{L}(n)$, either by finding a formula for it as a function of~$n$, or by evaluating
it for specific values of~$n$.
Both problems remain open to date for any nontrivial lattice.
Redelmeier~\cite{redelmeier1981counting} introduced the first algorithm for
counting all polyominoes of a given size, with no polyomino being generated more than once.
Later, Mertens~\cite{Mertens1990} showed that Redelmeier's algorithm can be
utilized for any lattice.
The first algorithm for counting lattice animals without generating all of them was
introduced by Jensen~\cite{jensen2000statistics}. Using his method, the numbers of animals on
the two-dimensional square, hexagonal, and triangular lattices were computed up to sizes~56,
46, and~75, respectively.
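Although far less efficient than Redelmeier's or Jensen's algorithms, the first few values of $A^\mathcal{L}(n)$ on the square lattice can be reproduced by a naive breadth-first enumeration (an illustrative sketch only; the coordinate representation and function names are ours, not taken from the cited works):

```python
def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def canonical(cells):
    # Fixed animals are counted up to translation only, so shift the
    # minimum coordinates to the origin to obtain a canonical form.
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def count_fixed_polyominoes(n):
    """Count fixed polyominoes of n cells by growing them cell by cell."""
    current = {canonical({(0, 0)})}
    for _ in range(n - 1):
        grown = set()
        for poly in current:
            for cell in poly:
                for nb in neighbors(cell):
                    if nb not in poly:
                        grown.add(canonical(poly | {nb}))
        current = grown
    return len(current)

print([count_fixed_polyominoes(n) for n in range(1, 6)])  # [1, 2, 6, 19, 63]
```

Every polyomino of size $k+1$ contains a cell whose removal keeps it connected, so this growth procedure reaches all fixed polyominoes; deduplication happens through the translation-canonical form.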
An important measure of lattice animals is the size of their \emph{perimeter} (sometimes called
``site perimeter''). The perimeter of a lattice animal is defined as the set of empty cells
adjacent to the animal cells. This definition is motivated by percolation models in
statistical physics. In such discrete models, the plane or space is made of small cells
(squares or cubes, respectively), and quanta of material or energy ``jump'' from a cell to
a neighboring cell with some probability. Thus, the perimeter of a cluster determines
where units of material or energy can move to, and guide the statistical model of the flow.
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\drawpoly[scale = 0.45]{exmpQ.txt}
\caption{Q}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\drawpoly[scale = 0.45]{exmpIQ.txt}
\caption{I(Q)}
\end{subfigure}
\caption{A polyomino~$Q$ and its inflated
polyomino~$I(Q)$. Polyomino cells are colored gray, while
perimeter cells are colored white.}
\label{fig:exmp}
\end{figure}
Asinowski et al.~\cite{asinowski2017enumerating,asinowski2018polycubes} provided
formulae for polyominoes and polycubes with perimeter size close to the maximum possible.
On the other extreme reside animals with the \emph{minimum} possible perimeter size for their area.
The study of polyominoes of a minimal perimeter dates back to Wang and Wang~\cite{wang1977discrete},
who identified an infinite sequence of cells on the square lattice, the first~$n$ of
which (for any~$n$)
form a minimal-perimeter polyomino. Later, Altshuler et
al.~\cite{altshuler2006}, and independently Sieben~\cite{sieben2008polyominoes}, studied
the closely-related problem of the \emph{maximum} area of a polyomino with~$p$ perimeter cells,
and provided a closed formula for the minimum possible perimeter of $n$-cell polyominoes.
Minimal-perimeter animals were also studied on other lattices.
For animals on the triangular lattice (polyiamonds), the main result is due to
F\"{u}lep and Sieben~\cite{fulep2010polyiamonds}, who characterized all the polyiamonds
with maximum area for their perimeter, and provided a formula for the minimum perimeter of a
polyiamond of size~$n$.
Similar results were given by Vainsencher and Bruckstein~\cite{VainsencherB08}
for the hexagonal lattice.
In this paper, we study an interesting property of minimal-perimeter animals, which relates to the
notion of the \emph{inflation} operation. Simply put, inflating an animal is adding to it
all its perimeter cells (see Figure~\ref{fig:exmp}).
We provide a set of conditions (for a given lattice) under which inflating all
minimal-perimeter animals of some size yields all minimal-perimeter animals of some larger size in a
bijective manner.
While this paper discusses some combinatorial properties of minimal-perimeter polyominoes,
another algorithmic question emerges from these properties, namely,
``how many minimal-perimeter polyominoes are there of a given size?''
This question is addressed in detail in a companion paper~\cite{barequet2020algorithms}.
The paper is organized as follows.
In Section~\ref{sec:main}, we provide some definitions and prove our main theorem.
In sections~\ref{sec:polyominoes} and~\ref{sec:polyhexes}, we show the application of
Section~\ref{sec:main} to polyominoes and polyhexes, respectively.
Then, in Section~\ref{sec:polyiamonds} we explain how the same result also applies to the
regular triangular lattice.
We end in Section~\ref{sec:conclusion} with some concluding remarks.
\subsection{Polyhexes as Molecules}
In addition to research of minimal-perimeter animals in the literature on combinatorics,
there has been much more intensive research of minimal-perimeter polyhexes in the literature on organic
chemistry, in the context of the structure of families of molecules.
For example, a significant amount of work dealt with molecules called \emph{benzenoid hydrocarbons}.
It is well known that molecules made of carbon atoms are structured as shapes on the
hexagonal lattice. Benzenoid hydrocarbons are made of
carbon and hydrogen atoms only. In such a molecule, the carbon atoms are arranged as a
polyhex, and the hydrogen atoms are arranged around the carbon atoms.
\begin{figure}
\centering
\begin{tabular}{ccc}
\raisebox{0.39\height}{\includegraphics[scale=0.13]{Naphtaline.png}} & ~~~ &
\includegraphics[scale=0.13]{Circumnaphtaline.png} \\
(a) Naphthalene ($C_{10}H_8$) & & (b) Circumnaphthalene ($C_{32}H_{14}$)
\end{tabular}
\caption{Naphthalene and its circumscribed version.}
\label{fig:naphthalene}
\end{figure}
Figure~\ref{fig:naphthalene}(a) shows a schematic drawing of the molecule of Naphthalene
(with formula~$C_{10}H_8$), the simplest benzenoid hydrocarbon, which is made of ten carbon
atoms and eight hydrogen atoms, while Figure~\ref{fig:naphthalene}(b) shows Circumnaphthalene
(molecular formula~$C_{32}H_{14}$).
There exist different configurations of atoms for the same molecular formula, which are
called \emph{isomers} of the same formula.
In the field of organic chemistry, a major goal is to enumerate all the different
isomers of a given formula.
Note that the carbon and hydrogen atoms are modeled by lattice \emph{vertices} and not by
cells of the lattice, but as we explain below, the number of hydrogen atoms coincides with the
number of perimeter cells of the polyhexes under discussion.
Indeed, the hydrogen atoms lie on lattice vertices that do not belong to the polyhex formed by the
carbon atoms (which also lie on lattice vertices), but are connected to them by lattice edges.
In minimal-perimeter polyhexes, each perimeter cell contains exactly two such hydrogen vertices,
and every hydrogen vertex is shared by exactly two perimeter cells.
(This has nothing to do with the fact that a single cell of the polyhex might be neighboring
several---five, in the case of Naphthalene---``empty'' cells.)
Therefore, the number of hydrogen atoms in a molecule of a benzenoid hydrocarbon is identical to the
size of the perimeter of the imaginary polyhex.\footnote{
In order to model atoms as lattice cells, one might switch to the dual of the hexagonal lattice,
that is, to the regular triangular lattice, but this will not serve our purpose.
}
In a series of papers (culminating in Reference~\cite{dias1987handbook}), Dias provided the
basic theory for the enumeration of benzenoid hydrocarbons.
A comprehensive review of the subject was given by Brunvoll and Cyvin~\cite{brunvoll1990we}.
Several other works~\cite{harary1976extremal,cyvin1991series,dias2010} also dealt with the
properties and enumeration of such isomers.
The analogue of what we call the ``inflation'' operation is called \emph{circumscribing} in
the literature on chemistry.
A circumscribed version of a benzenoid hydrocarbon molecule~$M$ is created by adding to~$M$
an outer layer of hexagonal ``carbon cells,'' that is, not only the hydrogen atoms (of~$M$)
adjacent to the carbon atoms now turn into carbon atoms, but also new carbon atoms are added
at all other ``free'' vertices of these cells so as to ``close'' them.
In addition, hydrogen atoms are put at all free lattice vertices that are connected by edges
to the new carbon atoms.
This process is visualized well in Figure~\ref{fig:naphthalene}.
In the literature on chemistry, it is well known that circumscribing all isomers of a given
molecular formula yields,
in a bijective manner,
all isomers that correspond to another molecular formula.
(The sequences of molecular formulae that have the same number of isomers created by
circumscribing are known as \emph{constant-isomer series}.)
Although this fact is well known, to the best of our knowledge, no rigorous proof of it
was ever given.
As mentioned above, we show that inflation induces a bijection between sets of
minimal-perimeter animals on the square, hexagonal, and in a sense, also on the triangular lattice.
By this, we prove the long-observed (but never proven)
phenomenon of ``constant-isomer series,'' that is, that circumscribing isomers of benzenoid
hydrocarbon molecules (in our terminology, inflating minimum-perimeter polyhexes)
yields all the isomers of a larger molecule.
\section{Minimal-Perimeter Animals}
\label{sec:main}
Throughout this section, we consider animals on some specific lattice~$\mathcal{L}$.
Our main result consists of a set of conditions on minimal-perimeter animals on~$\mathcal{L}$,
which is sufficient for establishing a bijection between sets of minimal-perimeter animals on~$\mathcal{L}$.
\subsection{Preliminaries}
\label{subsec:preliminaries}
\begin{figure}
\centering
\begin{tabular}{ccccc}
\drawpolyhex[scale=0.5]{exmp_hex.txt} & &
\drawpolyhex[scale=0.5]{exmp_hex_I.txt} & &
\drawpolyhex[scale=0.5]{exmp_hex_D.txt}\\
$Q$ & & $I(Q)$ & & $D(Q)$
\end{tabular}
\caption{A polyhex~$Q$, its inflated polyhex~$I(Q)$, and its
deflated polyhex~$D(Q)$. The gray cells belong to~$Q$,
the white cells are its perimeter, and
its border cells are marked with a pattern of dots.}
\label{fig:exmp_poly}
\end{figure}
Let~$Q$ be an animal on~$\mathcal{L}$.
Recall that the \emph{perimeter} of~$Q$, denoted by~$\perim{Q}$, is the set of all
empty lattice cells that are neighbors of at least one cell of~$Q$.
Similarly, the \emph{border} of~$Q$, denoted by~$\border{Q}$, is the set of cells
of~$Q$ that are neighbors of at least one empty cell.
The \emph{inflated} version of~$Q$ is defined as~$I(Q) := Q \cup \perim{Q}$.
Similarly, the \emph{deflated} version of~$Q$ is defined
as~$D(Q) := Q \backslash \border{Q}$.
These operations are demonstrated in Figure~\ref{fig:exmp_poly}.
Denote by~$\epsilon(n)$ the minimum possible size of the perimeter of an $n$-cell
animal on~$\mathcal{L}$,
and by~$M_n$ the set of all minimal-perimeter $n$-cell animals on~$\mathcal{L}$.
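Specializing to the square lattice for concreteness, these definitions translate directly into code (a small illustrative sketch; the set representation and function names are ours):

```python
def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(animal):
    """P(Q): empty cells that neighbor at least one cell of Q."""
    return {nb for c in animal for nb in neighbors(c) if nb not in animal}

def border(animal):
    """B(Q): cells of Q that neighbor at least one empty cell."""
    return {c for c in animal if any(nb not in animal for nb in neighbors(c))}

def inflate(animal):
    """I(Q) = Q united with its perimeter."""
    return set(animal) | perimeter(animal)

def deflate(animal):
    """D(Q) = Q minus its border."""
    return set(animal) - border(animal)

square = {(0, 0), (1, 0), (0, 1), (1, 1)}            # the 2x2 polyomino
print(len(perimeter(square)), len(inflate(square)))  # 8 12
print(deflate(inflate(square)) == square)            # True
```

For the $2 \times 2$ square, deflation exactly undoes inflation; as the paper shows, this is a general feature of minimal-perimeter animals, not of arbitrary ones.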
\subsection{A Bijection}
\begin{theorem}
\label{thm:main}
Consider the following set of conditions.
\begin{enumerate}[label=(\arabic*)]
\item The function~$\epsilon(n)$ is weakly-monotone increasing.
\item There exists some constant~$c^* = c^*(\mathcal{L})$, for which, for any minimal-perimeter
animal~$Q$, we have that~$\abs{\perim{Q}} = \abs{\border{Q}} + c^*$
and~$\abs{\perim{I(Q)}} \leq \abs{\perim{Q}}+c^*$.
\item If~$Q$ is a minimal-perimeter animal of size $n+\epsilon(n)$, then~$D(Q)$ is a
valid (connected) animal.
\end{enumerate}
If all the above conditions hold for~$\mathcal{L}$,
then~$\abs{M_n} = \abs{M_{n+\epsilon(n)}}$.
If these conditions are violated for only a finite number of animal sizes, then
the claim holds for all sizes greater than some lattice-dependent threshold size~$n_0$.
\myqed
\end{theorem}
\begin{proof}
We begin with proving that inflation preserves perimeter minimality.
\begin{lemma}
\label{lemma:minimal-inflating}
If~$Q$ is a minimal-perimeter animal, then~$I(Q)$ is a
minimal-perimeter animal as well.
\end{lemma}
\begin{proof}
Let~$Q$ be a minimal-perimeter animal of size~$n$. Assume to the contrary
that~$I(Q)$ is not a minimal-perimeter animal, thus, there exists
an animal~$Q'$ such that $\abs{Q'} = \abs{I(Q)}$,
and~$\abs{\perim{Q'}} < \abs{\perim{I(Q)}}$.
By the second premise of Theorem~\ref{thm:main}, we know that
$\abs{\perim{I(Q)}} \leq \abs{\perim{Q}} + c^*$, thus,
$\abs{\perim{Q'}} < \abs{\perim{Q}}+c^*$, and
since~$Q'$ is a minimal-perimeter animal, we also know by the same premise
that~$\abs{\perim{Q'}} = \abs{\border{Q'}}+c^*$, and, hence,
that~$\abs{\border{Q'}} < \abs{\perim{Q}}$.
Consider now the animal~$D(Q')$.
Recall that $\abs{Q'} = \abs{I(Q)}=\abs{Q}+\abs{\perim{Q}}$, thus,
the size of~$D(Q')$ is at least~$\abs{Q}+1$,
and~$\abs{\perim{D(Q')}} < \abs{\perim{Q}} = \epsilon(n)$
(since the perimeter of~$D(Q')$ is a subset of the border of~$Q'$).
This is a contradiction to the first premise, which states that the
sequence~$\epsilon(n)$ is monotone increasing.
Hence, the animal~$Q'$ cannot exist, and~$I(Q)$ is a minimal-perimeter animal.
\myqed
\end{proof}
We now proceed to demonstrating the effect of repeated inflation on the size of
minimal-perimeter animals.
\begin{lemma}
\label{lemma:pnc_size}
The minimum perimeter size of animals of size~$n+k\epsilon(n)+c^*k(k-1)/2$
(for~$n > 1$ and any $k \in \mathbb{N}$) is~$\epsilon(n)+c^*k$.
\end{lemma}
\begin{proof}
We repeatedly inflate a minimal-perimeter animal~$Q$, whose initial size is~$n$.
The size of the perimeter of~$Q$ is~$\epsilon(n)$, thus, inflating
it creates a new animal of size~$n+\epsilon(n)$, and the size of the border
of~$I(Q)$ is~$\epsilon(n)$, thus, the size of the perimeter of~$I(Q)$
is~$\epsilon(n) + c^*$.
Continuing the inflation of the animal, the $k$th inflation will increase the size of the
animal by $\epsilon(n) + (k-1)c^*$ and will increase the size of the perimeter by~$c^*$.
Summing up these quantities yields the claim.
\myqed
\end{proof}
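The size and perimeter trajectory of Lemma~\ref{lemma:pnc_size} can be checked numerically on the square lattice, using $c^* = 4$ (the square-lattice value of the constant of Premise 2, established in Section~\ref{sec:polyominoes}). The sketch below starts from the $2 \times 2$ square, a minimal-perimeter polyomino of size $n = 4$ with $\epsilon(4) = 8$:

```python
def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(animal):
    return {nb for c in animal for nb in neighbors(c) if nb not in animal}

def inflate(animal):
    return set(animal) | perimeter(animal)

# Start from the 2x2 square: n = 4, epsilon(4) = 8; on the square lattice c* = 4.
n, eps_n, c_star = 4, 8, 4
Q = {(0, 0), (1, 0), (0, 1), (1, 1)}
for k in range(1, 5):
    Q = inflate(Q)
    # Lemma: size n + k*eps(n) + c*k(k-1)/2, perimeter eps(n) + c*k.
    assert len(Q) == n + k * eps_n + c_star * k * (k - 1) // 2
    assert len(perimeter(Q)) == eps_n + c_star * k
print(len(Q), len(perimeter(Q)))  # 60 24
```

After four inflations the animal has $4 + 4 \cdot 8 + 4 \cdot 6 = 60$ cells and perimeter $8 + 4 \cdot 4 = 24$, exactly as the lemma predicts.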
Next, we prove that inflation preserves difference, that is, inflating two different
minimal-perimeter animals (of equal or different sizes) always produces two different
new animals. (Note that this is not true for non-minimal-perimeter animals.)
\begin{lemma}
\label{lemma:different_inflating}
Let~$Q_1,Q_2$ be two different minimal-perimeter animals.
Then, regardless of whether or not~$Q_1,Q_2$ have the same size,
the animals~$I(Q_1)$ and~$I(Q_2)$ are different as well.
\end{lemma}
\begin{proof}
Assume to the contrary that $Q = I(Q_1) = I(Q_2)$, that is,
$Q = Q_1 \cup \perim{Q_1} = Q_2 \cup \perim{Q_2}$.
In addition, since $Q_1 \neq Q_2$, and since a cell cannot belong
simultaneously to both an animal and to its perimeter, this means
that~$\perim{Q_1} \neq \perim{Q_2}$. The border of~$Q$ is a
subset of both~$\perim{Q_1}$ and~$\perim{Q_2}$, that is,
$\border{Q} \subseteq \perim{Q_1} \cap \perim{Q_2}$.
Since~$\perim{Q_1} \neq \perim{Q_2}$, we obtain that
either~$\abs{\border{Q}} < \abs{\perim{Q_1}}$
or~$\abs{\border{Q}} < \abs{\perim{Q_2}}$;
assume without loss of generality the former case.
Now consider the animal~$D(Q)$.
Its size is~$\abs{Q}-\abs{\border{Q}}$.
The size of~$Q$ is~$\abs{Q_1}+\abs{\perim{Q_1}}$, thus,
$\abs{D(Q)} > \abs{Q_1}$, and since the perimeter of~$D(Q)$
is a subset of the border of~$Q$, we conclude
that~$\abs{\perim{D(Q)}} < \abs{\perim{Q_1}}$.
However, $Q_1$ is a minimal-perimeter animal, which is a
contradiction to the first premise of the theorem, which states
that~$\epsilon(n)$ is monotone increasing.
\myqed
\end{proof}
To complete the cycle, we also prove that for any minimal-perimeter
animal~$Q \in M_{n+\epsilon(n)}$, there is a minimal-perimeter source
in~$M_n$, that is, an animal~$Q'$ whose inflation yields~$Q$.
Specifically, this animal is $D(Q)$.
\begin{lemma}
\label{lemma:deflating}
For any~$Q \in M_{n+\epsilon(n)}$, we also have
that~$I(D(Q)) = Q$.
\end{lemma}
\begin{proof}
Since~$Q \in M_{n+\epsilon(n)}$, we have by
Lemma~\ref{lemma:pnc_size} that~$\abs{\perim{Q}} = \epsilon(n)+c^*$.
Combining this with the equality~$\abs{\perim{Q}} = \abs{\border{Q}}+c^*$, we
obtain that~$\abs{\border{Q}} = \epsilon(n)$, thus, $\abs{D(Q)} = n$
and $\abs{\perim{D(Q)}} \geq \epsilon(n)$.
Since the perimeter of~$D(Q)$ is a subset of the border of~$Q$,
and~$\abs{\border{Q}} = \epsilon(n)$, we conclude that the perimeter
of~$D(Q)$ and the border of~$Q$ are the same set of cells,
and, hence, $I(D(Q)) = Q$.
\myqed
\end{proof}
Let us now wrap up the proof of the main theorem.
In Lemma~\ref{lemma:minimal-inflating} we have shown that for any
minimal-perimeter animal~$Q \in M_n$, we have that~$I(Q) \in M_{n+\epsilon(n)}$.
In addition, Lemma~\ref{lemma:different_inflating} states that the inflation of two different
minimal-perimeter animals results in two other different minimal-perimeter animals.
Combining the two lemmata, we obtain that~$\abs{M_n} \leq \abs{M_{n+\epsilon(n)}}$.
On the other hand, in Lemma~\ref{lemma:deflating} we have shown that
if~$Q \in M_{n+\epsilon(n)}$, then~$I(D(Q)) = Q$, and, thus, for any animal
in~$M_{n+\epsilon(n)}$, there is a unique source in~$M_n$ (specifically, $D(Q)$),
whose inflation yields~$Q$. Hence, $\abs{M_n} \geq \abs{M_{n+\epsilon(n)}}$.
Combining the two relations, we conclude that~$\abs{M_n} = \abs{M_{n+\epsilon(n)}}$.
\myqed
\end{proof}
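The bijection can be verified computationally for small sizes on the square lattice. The brute-force sketch below (illustrative only, not an efficient algorithm) checks $\abs{M_3} = \abs{M_{3+\epsilon(3)}} = \abs{M_{10}}$, where $\epsilon(3) = 7$ and $M_3$ consists of the four fixed L-trominoes:

```python
def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(animal):
    return {nb for c in animal for nb in neighbors(c) if nb not in animal}

def canonical(cells):
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def fixed_polyominoes(n):
    """All fixed polyominoes of size n (naive growth, fine for small n)."""
    current = {canonical({(0, 0)})}
    for _ in range(n - 1):
        current = {canonical(p | {nb})
                   for p in current for c in p
                   for nb in neighbors(c) if nb not in p}
    return current

def minimal_perimeter_animals(n):
    polys = fixed_polyominoes(n)
    eps = min(len(perimeter(p)) for p in polys)
    return [p for p in polys if len(perimeter(p)) == eps]

M3 = minimal_perimeter_animals(3)    # the four L-trominoes, perimeter 7
M10 = minimal_perimeter_animals(10)  # their inflations, perimeter 11
print(len(M3), len(M10))             # equal cardinalities, as the theorem asserts
```

Enumerating all fixed 10-ominoes takes a few seconds with this naive method; the equality of the two counts is exactly the statement $\abs{M_n} = \abs{M_{n+\epsilon(n)}}$ for $n = 3$.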
\subsection{Inflation Chains}
Theorem~\ref{thm:main} implies that there exist infinite chains of sets of minimal-perimeter
animals, each set obtained by inflating all members of the previous set, while the
cardinalities of all sets in a chain are equal. Obviously, there are sets of
minimal-perimeter animals that are not created by the inflation of any other sets.
We call the size of animals in such sets an \emph{inflation-chain root}.
Using the definitions and proofs in the previous section, we are able to characterize which
sizes can be inflation-chain roots.
Then, using one more condition, which holds in the lattices
we consider, we determine which values are the actual inflation-chain roots.
To this aim, we define the pseudo-inverse function
\[
\epsilon^{-1}(p) = \min\set{n \in \mathbb{N} \mid \epsilon(n) = p}.
\]
Since~$\epsilon(n)$ is a monotone-increasing discrete function, it is a step function,
and the value of~$\epsilon^{-1}(p)$ is the first point in each step.
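For concreteness, on the square lattice (where Section~\ref{sec:polyominoes} gives $\epsilon^\mathcal{S}(n) = \lceil\sqrt{8n-4}\,\rceil + 2$), the pseudo-inverse can be computed by a forward scan; note that not every integer $p$ is attained as a minimal perimeter (a sketch; the function names are ours):

```python
import math

def eps(n):
    """epsilon(n) = ceil(sqrt(8n-4)) + 2 (square lattice), via integer sqrt."""
    m = 8 * n - 4
    return math.isqrt(m - 1) + 1 + 2   # isqrt(m-1)+1 == ceil(sqrt(m)) for m > 0

def eps_inv(p):
    """min{n : eps(n) = p}, or None if no size attains minimal perimeter p."""
    n = 1
    while eps(n) < p:
        n += 1
    return n if eps(n) == p else None

print([eps(n) for n in range(1, 9)])                      # [4, 6, 7, 8, 8, 9, 10, 10]
print(eps_inv(8), eps_inv(11), eps_inv(12), eps_inv(5))   # 4 9 11 None
```

The repeated values in the first list are the ``steps'' of $\epsilon(n)$, and $\epsilon^{-1}(p)$ picks out the first size of each step; $p = 5$ is never attained, which is why the scan must check for equality.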
\begin{theorem}
\label{thm:root-candidates}
Let~$\mathcal{L}$ be a lattice satisfying the premises of Theorem~\ref{thm:main}.
Then, all inflation-chain roots are either~$\epsilon^{-1}(p)$
or~$\epsilon^{-1}(p)-1$, for some $p \in \mathbb{N}$.
\end{theorem}
\begin{proof}
Recall that~$\epsilon(n)$ is a step function, where each step represents all
animal sizes for which the minimal perimeter is~$p$.
Let us denote the start and end of the step representing the perimeter~$p$ by~$n_b^p$
and~$n_e^p$, respectively. Formally, $n_b^p = \epsilon^{-1}(p)$
and~$n_e^p = \epsilon^{-1}(p+1)-1$.
For each size~$n$ of animals in the step~$\bra{n_b^p,n_e^p}$, inflating a minimal-perimeter
animal of size~$n$ results in an animal of size~$n{+}p$, and
by~Lemma~\ref{lemma:pnc_size}, the perimeter of the inflated animal is~$p{+}c^*$.
Thus, the inflation of animals of all sizes in the step of perimeter~$p$ yields animals
that appear in the step of perimeter~$p{+}c^*$.
In addition, they appear in a
\emph{consecutive} portion of the step, specifically, the range~$\bra{n_b^p+p,n_e^p+p}$.
Similarly, the step~$\bra{n_b^{p+1},n_e^{p+1}}$ is mapped by inflation to the
range~$\bra{n_b^{p+1}+p+1,n_e^{p+1}+p+1}$, which is a portion of the step of~$p{+}1{+}c^*$.
Note that the former range ends at~$n_e^p+p = n_b^{p+1}+p-1$, while the latter range
starts at~$n_b^{p+1}+p+1$, thus, there is exactly one size of animals,
specifically,~$n_b^{p+1}+p$, which is covered by neither of the two image
ranges~$\bra{n_b^p+p,n_e^p+p}$ and~$\bra{n_b^{p+1}+p+1,n_e^{p+1}+p+1}$.
These two ranges represent two different perimeter sizes. Hence, the size~$n_b^{p+1}+p$
must be either the end of the first step, $n_e^{p+c^*}$,
or the beginning of the second step,
$n_b^{p+c^*+1}$. This concludes the proof.
\myqed
\end{proof}
The arguments of the proof of Theorem~\ref{thm:root-candidates} are visualized in
Figure~\ref{fig:minpH_roots} for the case of polyhexes.
In fact, as we show below (see Theorem~\ref{thm:root-conditioned}), only the second
option exists, but in order to prove this, we also need a maximality-conservation
property of the inflation operation.
Here is another perspective on the above result.
Note that minimal-perimeter animals of size~$n_e^{p}$ (for
some~$p \in \mathbb{N}$) are the largest animals with perimeter~$p$.
Intuitively, animals with the largest size, for a certain perimeter size, tend to be
``spherical'' (``round'' in two dimensions), and inflating them makes them even more spherical.
Therefore, one might expect that for a general lattice, the inflation operation will preserve
the property of animals being the largest for a given perimeter. In fact, this has been proven
rigorously for the square lattice~\cite{altshuler2006,sieben2008polyominoes}, for the
hexagonal lattice~\cite{VainsencherB08}, and for the triangular lattice~\cite{fulep2010polyiamonds}.
However, this also means that inflating a minimal-perimeter animal of size~$n_e^p$
yields a minimal-perimeter animal of size~$n_e^{p+c^*}$, and, thus, $n_e^p$ cannot be an
inflation-chain root. We summarize this discussion in the following theorem.
\begin{theorem}
\label{thm:root-conditioned}
Let~$\mathcal{L}$ be a lattice for which the three premises of Theorem~\ref{thm:main} are
satisfied, and, in addition, the following condition holds.
\begin{enumerate}[label=(\arabic*)]
\setcounter{enumi}{3}
\item The inflation operation preserves the property of having a maximum size for a
given perimeter.
\end{enumerate}
Then, the inflation-chain roots are precisely~$\epsilon^{-1}(p)$, for all $p \in \mathbb{N}$.
\myqed
\end{theorem}
\subsection{Convergence of Inflation Chains}
We now discuss the structure of inflated animals, and show that
under a certain condition, inflating repeatedly \emph{any} animal (or
actually, any set, possibly disconnected, of lattice cells) ends up in a
minimal-perimeter animal after a finite number of inflation steps.
Let~$I^k(Q)$ ($k>0$) denote the result of applying repeatedly~$k$ times the inflating
operator~$I(\cdot)$, starting from the animal~$Q$. Equivalently,
\[
I^k(Q) = Q \cup \set{c \mid \mbox{Dist}(c,Q) \leq k},
\]
where~$\mbox{Dist}(c,Q)$ is the lattice distance from a cell~$c$ to the animal~$Q$.
For brevity, we will use the notation $Q^k = I^k(Q)$.
Let us define the function
\(
\phi(Q) = \epsilon^{-1}(\abs{\perim{Q}}) - \abs{Q}
\)
and explain its meaning.
When~$\phi(Q) \geq 0$, it counts the cells that should be added to~$Q$, with no change
to its perimeter, in order to make it a minimal-perimeter animal.
In particular, if~$\phi(Q) = 0$, then~$Q$ is a minimal-perimeter animal.
Otherwise, if~$\phi(Q) < 0$, then~$Q$ is also a minimal-perimeter animal, and $\abs{\phi(Q)}$ cells can be
removed from~$Q$ while still keeping the result a minimal-perimeter animal and without changing its perimeter.
\begin{lemma}
\label{lemma:jumps-p-1}
For any value of~$p$, we have that~$\epsilon^{-1}(p+c^*)-\epsilon^{-1}(p) = p-1$.
\end{lemma}
\begin{proof}
Let~$Q$ be a minimal-perimeter animal with area~$n_b^p = \epsilon^{-1}(p)$.
The area of~$I(Q)$ is~$n_b^p+p$, thus, by Theorem~\ref{thm:main},
$\perim{I(Q)} = p+c^*$. The area~$n_b^{p+c^*}$ is an
inflation-chain root, hence, the area of~$I(Q)$ cannot be~$n_b^{p+c^*}$.
Except~$n_b^{p+c^*}$, animals of all other areas in the
range~$[n_b^{p+c^*},\dots,n_e^{p+c^*}]$ are created by inflating
minimal-perimeter animals with perimeter~$p$.
The animal~$Q$ has the minimal area~$n_b^p$ in its step, hence the area of~$I(Q)$ must
be the minimal area in $\bra{n_b^{p+c^*},n_e^{p+c^*}}$ which is not
an inflation-chain root. Hence, the area of~$I(Q)$ is~$n_b^{p+c^*}+1$.
We now equate the two expressions for the area of $I(Q)$:
$n_b^p+p = n_b^{p+c^*}+1$. That is, $n_b^{p+c^*}-n_b^{p} = p-1$.
The claim follows.
\end{proof}
Using Lemma~\ref{lemma:jumps-p-1}, we can deduce the following result.
\begin{lemma}
\label{lem:conv-step}
If~$\abs{\perim{I(Q)}} = \abs{\perim{Q}} +c^*$,
then~$\phi(I(Q)) = \phi(Q)-1$.
\end{lemma}
\begin{proof}
\begin{align*}
\phi(I(Q)) &= \epsilon^{-1}(\abs{\perim{I(Q)}}) - \abs{I(Q)} \\
&= \epsilon^{-1}(\abs{\perim{Q}}+c^*) - (\abs{Q} + \abs{\perim{Q}}) \\
&= \epsilon^{-1}(\abs{\perim{Q}}) + \abs{\perim{Q}} -1 - \abs{Q}
- \abs{\perim{Q}} \\
&= \epsilon^{-1}(\abs{\perim{Q}}) -\abs{Q} - 1 \\
&= \phi(Q) - 1.
\end{align*}
\end{proof}
Lemma~\ref{lem:conv-step} tells us that inflating an animal, $Q$, which satisfies
$\abs{\perim{I(Q)}} = \abs{\perim{Q}} +c^*$, reduces $\phi(Q)$ by $1$.
In other words, $I(Q)$ is ``closer'' than~$Q$ to being a minimal-perimeter animal.
This result is stated more formally in the following theorem.
\begin{theorem}
\label{thm:convergence}
Let~$\mathcal{L}$ be a lattice for which the four premises of Theorems~\ref{thm:main}
and~\ref{thm:root-conditioned} are satisfied, and, in addition, the following condition holds.
\begin{enumerate}[label=(\arabic*)]
\setcounter{enumi}{4}
\item For every animal~$Q$, there exists some finite number~$k_0 = k_0(Q)$, such that for
every $k>k_0$, we have that~$\abs{\perim{Q^{k+1}}} = \abs{\perim{Q^{k}}} + c^*$.
\end{enumerate}
Then, after a finite number of inflation steps,
any animal becomes a minimal-perimeter animal.
\end{theorem}
\begin{proof}
The claim follows from Lemma~\ref{lem:conv-step}.
After~$k_0$ inflation operations, the premise of this lemma holds.
Then, any additional inflation step will reduce~$\phi(Q)$ by~$1$ until~$\phi(Q)$ is nullified, which is
precisely when the animal becomes a minimal-perimeter animal.
(Any additional inflation steps would add superfluous cells, in the sense that they can be removed while keeping
the animal a minimal-perimeter animal.)
\end{proof}
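On the square lattice, this convergence is easy to observe numerically. Starting, for instance, from a $1 \times 5$ strip (perimeter 12, well above $\epsilon(5) = 8$), repeated inflation reaches a minimal-perimeter polyomino after finitely many steps (a sketch; minimality is tested via the formula $\epsilon^\mathcal{S}(n) = \lceil\sqrt{8n-4}\,\rceil + 2$ of Section~\ref{sec:polyominoes}):

```python
import math

def eps(n):
    m = 8 * n - 4
    return math.isqrt(m - 1) + 1 + 2  # ceil(sqrt(8n-4)) + 2

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(animal):
    return {nb for c in animal for nb in neighbors(c) if nb not in animal}

Q = {(x, 0) for x in range(5)}        # 1x5 strip: |P(Q)| = 12 > eps(5) = 8
steps = 0
while len(perimeter(Q)) != eps(len(Q)):
    Q = Q | perimeter(Q)              # one inflation step
    steps += 1
    assert steps <= 20, "did not converge"
print(steps, len(Q), len(perimeter(Q)))  # converges after a few steps
```

Each inflation makes the strip ``rounder,'' and once the perimeter grows by exactly $c^* = 4$ per step, $\phi$ decreases by one per step until it vanishes.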
\section{Polyominoes}
\label{sec:polyominoes}
Throughout this section, we consider the two-dimensional square lattice~$\mathcal{S}$, and
show that the premises of Theorem~\ref{thm:main} hold for this lattice.
The lattice-specific notation ($M_n$, $\epsilon(n)$, and~$c^*$) in this section refer to~$\mathcal{S}$.
\subsection{Premise 1: Monotonicity}
The function~$\epsilon^\mathcal{S}(n)$, which gives the minimum possible size of the perimeter of a
polyomino of size~$n$, is known to be weakly-monotone increasing.
This fact was proved independently by Altshuler et al.~\cite{altshuler2006} and by
Sieben~\cite{sieben2008polyominoes}.
The latter reference also provides the following explicit formula.
\begin{theorem}
\label{thm:minp_sqr}
\textup{\cite[Thm.~$5.3$]{sieben2008polyominoes}}
$\epsilon^\mathcal{S}(n) = \ceil{\sqrt{8n-4} \,}+2$.
\myqed
\end{theorem}
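Theorem~\ref{thm:minp_sqr} is easy to cross-check by exhaustive search over small polyominoes (an illustrative brute-force sketch, not part of the proof; function names are ours):

```python
import math

def eps_formula(n):
    m = 8 * n - 4
    return math.isqrt(m - 1) + 1 + 2   # ceil(sqrt(8n-4)) + 2

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(animal):
    return {nb for c in animal for nb in neighbors(c) if nb not in animal}

def canonical(cells):
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def min_perimeter_bruteforce(max_n):
    """Minimum perimeter over all fixed polyominoes of each size 1..max_n."""
    best, current = [], {canonical({(0, 0)})}
    for n in range(1, max_n + 1):
        best.append(min(len(perimeter(p)) for p in current))
        if n < max_n:
            current = {canonical(p | {nb}) for p in current
                       for c in p for nb in neighbors(c) if nb not in p}
    return best

print(min_perimeter_bruteforce(8))            # [4, 6, 7, 8, 8, 9, 10, 10]
print([eps_formula(n) for n in range(1, 9)])  # same values, from the formula
```

For instance, $\epsilon^\mathcal{S}(5) = 8$ is attained by the X-pentomino, which is exactly the inflated monomino.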
\subsection{Premise 2: Constant Inflation}
The second premise is apparently the hardest to show.
We will prove that it holds for~$\mathcal{S}$ by analyzing the patterns which may appear on the
border of minimal-perimeter polyominoes.
Asinowski et al.~\cite{asinowski2017enumerating} defined the \emph{excess} of a perimeter cell
as the number of adjacent occupied cells minus one, and the total \emph{perimeter excess} of an
animal~$Q$, $e_P(Q)$, as the sum of excesses over all perimeter cells of~$Q$.
We extend this definition to border cells, and, in a similar manner, define the \emph{excess} of
a border cell as the number of adjacent empty cells minus one, and the \emph{border excess}
of~$Q$, $e_B(Q)$, as the sum of excesses over all border cells of~$Q$.
First, we establish a connection between the size of the perimeter of a polyomino and the size
of its border.
The following formula is universal for all lattice animals.
\begin{lemma}
\label{lemma:pebe}
For every animal~$Q$, we have that
\begin{equation}
\label{eq:pebe}
\abs{\perim{Q}} + e_P(Q) = \abs{\border{Q}} + e_B(Q).
\end{equation}
\end{lemma}
\begin{proof}
Consider the (one or more) rectilinear polygons bounding the animal~$Q$.
The two sides of the equation are equal to the total length of the polygon(s) in terms of
lattice edges.
Indeed, this length can be computed by iterating over either the border or the perimeter
cells of~$Q$. In both cases, each cell contributes one edge plus its excess to the
total length. The claim follows.
\myqed
\end{proof}
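Lemma~\ref{lemma:pebe} can be checked mechanically on the square lattice. The sketch below computes both sides of \myeqref{eq:pebe}, with the excess definitions taken directly from the text; the test shapes include a polyomino with a hole, for which the hole cell contributes perimeter excess:

```python
def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def perimeter(animal):
    return {nb for c in animal for nb in neighbors(c) if nb not in animal}

def border(animal):
    return {c for c in animal if any(nb not in animal for nb in neighbors(c))}

def perimeter_excess(animal):
    """e_P(Q): sum over perimeter cells of (#occupied neighbors - 1)."""
    return sum(sum(nb in animal for nb in neighbors(c)) - 1
               for c in perimeter(animal))

def border_excess(animal):
    """e_B(Q): sum over border cells of (#empty neighbors - 1)."""
    return sum(sum(nb not in animal for nb in neighbors(c)) - 1
               for c in border(animal))

shapes = [
    {(0, 0), (1, 0), (0, 1)},                                  # L-tromino
    {(x, y) for x in range(3) for y in range(2)},              # 2x3 rectangle
    {(x, y) for x in range(3) for y in range(3)} - {(1, 1)},   # ring with a hole
]
for Q in shapes:
    assert len(perimeter(Q)) + perimeter_excess(Q) == len(border(Q)) + border_excess(Q)
print("identity holds on all test shapes")
```

For the ring, both sides equal 16: the perimeter contributes $13 + 3$ (the hole cell has excess 3), and the border contributes $8 + 8$.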
\begin{figure}
\centering
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{bs1.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{bs2.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{bs3.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{bs4.txt}
\caption{}
\end{subfigure}
\qquad
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{ps1.txt}
\addtocounter{subfigure}{18}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{ps2.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{ps3.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.1\textwidth}
\centering
\drawpoly[scale=0.5]{ps4.txt}
\caption{}
\end{subfigure}
\caption{All possible patterns of cells, up to symmetries, with positive excess.
The gray cells are polyomino cells, while the white cells are perimeter cells.
The centers of the ``crosses'' are the subject cells, and the patterns show
the immediate neighbors of these cells.
Patterns~(a--d) exhibit excess border cells, while Patterns~(w--z) exhibit
excess perimeter cells.
}
\label{fig:patterns_sqr}
\end{figure}
\begin{figure}
\centering
\drawpoly{patterns.txt}
\caption{A~sample polyomino with marked patterns.}
\label{fig:patterns-exmp}
\end{figure}
Let~$\#\square$ be the number of excess cells of a certain type in a polyomino,
where~`$\square$' is one of the symbols~$a$--$d$ and~$w$--$z$,
as classified in Figure~\ref{fig:patterns_sqr}.
Figure~\ref{fig:patterns-exmp} depicts a polyomino which includes cells of all these types.
Counting~$e_P(Q)$ and~$e_B(Q)$ as functions of the different patterns of
excess cells, we see that
\(
e_B(Q) = \#a + 2\#b + 3\#c + \#d
\)
and
\(
e_P(Q) = \#w + 2\#x + 3\#y + \#z.
\)
Substituting~$e_B$ and~$e_P$ in \myeqref{eq:pebe}, we obtain that
\[
\abs{\perim{Q}} = \abs{\border{Q}} + \#a + 2\#b+3\#c + \#d - \#w - 2\#x-3\#y - \#z.
\]
Since Pattern~(c) occurs only in the singleton cell, we can ignore it in the general
formula. Thus, we have that
\[
\abs{\perim{Q}} = \abs{\border{Q}} + \#a + 2\#b + \#d - \#w - 2\#x-3\#y - \#z.
\]
We now simplify the equation above, first by eliminating the hole pattern, namely, Pattern~(y).
\begin{lemma}
\label{lemma:no-holes-sqr}
Any minimal-perimeter polyomino is simply connected (that is, it
does not contain holes).
\end{lemma}
\begin{proof}
The sequence~$\epsilon(n)$ is weakly-monotone increasing.\footnote{
In the sequel, we simply say ``monotone increasing.''
}
Assume that there exists a minimal-perimeter polyomino~$Q$ with a
hole. Consider the polyomino~$Q'$ that is obtained by filling this
hole. The area of~$Q'$ is clearly larger than that of~$Q$; however, the
perimeter of~$Q'$ is smaller than that of~$Q$, since we eliminated
the perimeter cells inside the hole and did not introduce new perimeter cells.
This is a contradiction to~$\epsilon(n)$ being monotone increasing.
\myqed
\end{proof}
Next, we continue to eliminate terms from the equation by showing some invariant related to the
turns of the boundary of a minimal-perimeter polyomino.
\begin{lemma}
\label{lemma:sum_of_turns}
For a simply connected polyomino, we have that
\(
\#a +2\#b -\#w -2\#x = 4.
\)
\end{lemma}
\begin{proof}
The boundary of a polyomino without holes is a simple polygon, thus,
the sum of its internal angles is $(v-2)\pi$,
where~$v$ is the complexity (number of vertices) of the polygon.
Note that Pattern~(a) (resp.,~(b)) adds one (resp., two)
$\pi/2$-vertex to the polygon.
Similarly, Pattern~(w) (resp.~(x)) adds one (resp., two) $3\pi/2$-vertex.
All other patterns do not involve vertices.
Let~$L = \#a+2\#b$ and~$R = \#w+2\#x$.
Then, the sum of angles of the boundary polygon implies that
$L \cdot \pi/2 + R \cdot 3\pi/2 = (L+R-2) \cdot \pi$,
that is, $L-R = 4$. The claim follows.
\myqed
\end{proof}
Finally, we show that Patterns~(d) and~(z) cannot exist in a minimal-perimeter polyomino.
We define a \emph{bridge} as a cell whose removal renders the polyomino disconnected.
Similarly, a \emph{perimeter bridge} is a perimeter cell whose addition to the
polyomino creates a hole in it.
Observe that minimal-perimeter polyominoes contain neither bridges nor perimeter
bridges, \emph{i.e.}, neither cells of Pattern~(d) nor of Pattern~(z). This is stated in the following lemma.
\begin{lemma}
\label{lemma:no-bridges-sqr}
A minimal-perimeter polyomino contains neither bridge cells nor perimeter-bridge cells.
\end{lemma}
\begin{proof}
Let~$Q$ be a minimal-perimeter polyomino. For the sake of
contradiction, assume first that there is a cell~$f \in \perim{Q}$
as part of Pattern~(z). Assume without loss of generality
that the two adjacent polyomino cells are to the left and to
the right of~$f$. These two cells must be connected, thus, the area
below (or above)~$f$ must form a cavity in the polyomino shape.
Let~$Q'$ be the polyomino obtained by adding~$f$ to~$Q$ and filling the cavity.
\figrefs{fig:no_z+d}(a,b) illustrate this situation.
The cell directly above~$f$ becomes a perimeter cell, the cell~$f$
ceases to be a perimeter cell, and at least one perimeter cell in the
area filled below~$f$ is eliminated,
thus,~$\abs{\perim{Q'}} < \abs{\perim{Q}}$
and~$\abs{Q'} > \abs{Q}$,
which is a contradiction to the sequence~$\epsilon(n)$ being
monotone increasing.
Therefore, polyomino~$Q$ does not contain perimeter cells that
fit Pattern~(z).
\begin{figure}
\centering
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_z1.txt}
\caption{$Q$}
\end{subfigure}
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_z2.txt}
\caption{$Q'$}
\end{subfigure}
\qquad
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpoly[scale=0.5, every node/.style={scale=0.6}]{no_d1.txt}
\caption{$Q$}
\end{subfigure}
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpoly[scale=0.5]{no_d2.txt}
\caption{$Q'$}
\end{subfigure}
\caption{Forbidden patterns for the proof of Theorem~\ref{theorem:pb4}.}
\label{fig:no_z+d}
\end{figure}
Now assume for contradiction that~$Q$ contains a cell~$f$ that forms
Pattern~(d). Let~$Q'$ be the polyomino obtained from~$Q$ by
removing~$f$ (this breaks~$Q$ into two separate pieces) and then
shifting to the left the piece on the right (this unites the two pieces
into a new polyomino).
\figrefs{fig:no_z+d}(c,d) demonstrate this situation.
This operation is always valid since~$Q$ is of minimal perimeter,
hence, by Lemma~\ref{lemma:no-holes-sqr}, it is simply connected, and thus,
removing~$f$ breaks~$Q$ into two separate polyominoes with a gap of one
cell in between. Shifting to the left the piece on the right will not create a
collision since this would mean that the two pieces were touching, which is not the case.
On the other hand, the shift will eliminate the gap that was created by the
removal of~$f$, hence, the two pieces will now form a new connected polyomino.
The area of~$Q'$
is one less than the area of~$Q$, and the perimeter of~$Q'$ is
smaller by at least two than the perimeter of~$Q$, since the
perimeter cells below and above~$f$ cease to be part of the perimeter,
and connecting the two parts does not create new perimeter cells.
From the formula of~$\epsilon(n)$, we know that
$\epsilon(n)-\epsilon(n-1) \leq 1$ for~$n \geq 3$.
However, $\abs{Q} - \abs{Q'} = 1$
and~$\abs{\perim{Q}} - \abs{\perim{Q'}} \geq 2$, hence,
$Q$ is not a minimal-perimeter polyomino, which contradicts our
assumption.
Therefore, there are no cells in~$Q$ that fit Pattern~(d).
This completes the proof. \myqed
\end{proof}
We are now ready to wrap up the proof of the constant-inflation theorem.
\begin{theorem}
\label{theorem:pb4}
(Stepping Theorem)
For any minimal-perimeter polyomino~$Q$ (except the singleton cell), we have
that $\abs{\perim{Q}}=\abs{\border{Q}}+4.$
\end{theorem}
\begin{proof}
By Lemma~\ref{lemma:no-holes-sqr}, we have that~$\#y=0$, and, hence,
Lemma~\ref{lemma:sum_of_turns} tells us
that~$\abs{\perim{Q}}=\abs{\border{Q}}+4+\#d-\#z$.
By Lemma~\ref{lemma:no-bridges-sqr}, we know that $\#d = \#z = 0$.
The claim follows at once.
\myqed
\end{proof}
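Note that the argument above actually establishes the relation for \emph{every} polyomino
that is free of holes and of Patterns~(d) and~(z), not only for minimal-perimeter ones.
The following illustrative sketch (ours; the cell-set representation and helper names are
our own, as in the previous sketch) confirms the relation on two such shapes.

```python
# Illustrative check: for hole-free, bridge-free polyominoes,
# |perimeter(Q)| = |border(Q)| + 4.

NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))  # 4-neighborhood

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in NBRS]

def perimeter(q):
    # Empty cells adjacent to at least one cell of q.
    return {n for c in q for n in neighbors(c) if n not in q}

def border(q):
    # Cells of q adjacent to at least one empty cell.
    return {c for c in q if any(n not in q for n in neighbors(c))}

rectangle = {(x, y) for x in range(4) for y in range(3)}  # a 4x3 rectangle
plus = {(1, 0), (0, 1), (1, 1), (2, 1), (1, 2)}           # the plus-pentomino

for q in (rectangle, plus):
    assert len(perimeter(q)) == len(border(q)) + 4
```

(The L-shaped pentomino, by contrast, contains two cells of Pattern~(d) and fails the relation.)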
\subsection{Premise 3: Deflation Resistance}
\begin{lemma}
\label{lemma:def_valid}
Let~$Q$ be a minimal-perimeter polyomino of area~$n+\epsilon(n)$
(for $n \geq 3$). Then, $D(Q)$ is a valid (connected) polyomino.
\end{lemma}
\begin{proof}
Assume to the contrary that~$D(Q)$ is not connected, so that it is
composed of at least two connected parts.
Assume first that~$D(Q)$ is composed of exactly two parts,
$Q_1$ and~$Q_2$.
Define the \emph{joint perimeter} of the two parts,
$\perim{Q_1,Q_2}$, to be~$\perim{Q_1} \cup \perim{Q_2}$.
Since~$Q$ is a minimal-perimeter polyomino of area $n+\epsilon(n)$,
we know by Theorem~\ref{theorem:pb4}
that its perimeter size is~$\epsilon(n)+4$ and its
border size is~$\epsilon(n)$.
Thus, the size of~$D(Q)$ is exactly~$n$ regardless of whether or
not~$D(Q)$ is connected.
Since deflating~$Q$ results in~$Q_1 \cup Q_2$,
the polyomino~$Q$ must have an (either horizontal, vertical, or
diagonal) ``bridge'' of border cells which disappears by the deflation.
The width of the bridge is at most~2, thus,
$\abs{\perim{Q_1} \cap \perim{Q_2}} \leq 2$. Hence,
$\abs{\perim{Q_1}} + \abs{\perim{Q_2}} - 2 \leq
\abs{\perim{Q_1,Q_2}}$.
Since~$\perim{Q_1,Q_2}$ is a subset of~$\border{Q}$,
we have that $\abs{\perim{Q_1,Q_2}} \leq \epsilon(n)$. Therefore,
\begin{equation}
\label{eq:def_valid_1}
\epsilon(\abs{Q_1}) + \epsilon(\abs{Q_2}) - 2 \leq \epsilon(n).
\end{equation}
Recall that~$\abs{Q_1} + \abs{Q_2} = n$.
It is easy to observe that~$\epsilon(\abs{Q_1})+\epsilon(\abs{Q_2})$
is minimized when~$\abs{Q_1}=1$ and $\abs{Q_2} = n-1$ (or vice
versa). Had the function~$\epsilon(n)$ (shown in \figref{fig:minp_plot})
\begin{figure}
\centering
\includegraphics[scale=0.4]{minp_s.pdf}
\caption{The function~$\epsilon(n)$.}
\label{fig:minp_plot}
\end{figure}
been $2+\sqrt{8n-4}$ (without rounding up), this would be obvious.
But since $\epsilon(n) = \left\lceil 2+\sqrt{8n-4} \, \right\rceil$, it is a
step function (with an infinite number of intervals), where the gap
between any two successive steps is exactly~1, except the gap between the
two leftmost steps, which is~2. This guarantees that despite the
rounding, the minimum of~$\epsilon(\abs{Q_1})+\epsilon(\abs{Q_2})$
occurs as claimed.
Substituting this into \myeqref{eq:def_valid_1}, and using the
fact that~$\epsilon(1)=4$, we see that $\epsilon(n-1) + 2 \leq \epsilon(n)$.
However, we know~\cite{sieben2008polyominoes} that
$\epsilon(n) - \epsilon(n-1) \leq 1$ for $n\geq 3$, which is a contradiction.
Thus, the deflated version of~$Q$ cannot split into two parts unless it splits into two
singleton cells, which is indeed the case for a minimal-perimeter
polyomino of size~8, specifically,
\(
D( \!\! \raisebox{-2.25mm}{\drawpoly[scale=0.3]{diag8.txt}} \!\! ) = \!\!
\raisebox{-1.5mm}{\drawpoly[scale=0.3]{diag2.txt}}
\).
The same method can be used to show that~$D(Q)$ cannot be composed
of more than two parts. Note that this proof does not hold for
polyominoes whose area is not of the form~$n+\epsilon(n)$, but it
suffices for the use in Theorem~\ref{thm:main}.
\myqed
\end{proof}
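The step structure of~$\epsilon(n)$ used in this proof is easy to confirm computationally.
The sketch below (ours, purely illustrative; the exact integer rounding via \texttt{isqrt}
is an implementation choice) evaluates $\epsilon(n) = \left\lceil 2+\sqrt{8n-4} \,
\right\rceil$ and checks the claimed gaps.

```python
# Illustrative check of the step structure of eps(n) = ceil(2 + sqrt(8n - 4)).
from math import isqrt

def eps(n):
    # ceil(2 + sqrt(8n - 4)), computed exactly with integer arithmetic
    r = 8 * n - 4
    s = isqrt(r)
    return 2 + s + (0 if s * s == r else 1)

assert eps(1) == 4 and eps(2) == 6                            # the single gap of 2
assert all(eps(n) - eps(n - 1) <= 1 for n in range(3, 100_000))  # unit gaps
```

This matches the facts used above: $\epsilon(1)=4$, and $\epsilon(n)-\epsilon(n-1) \leq 1$
for all $n \geq 3$ (at least in the tested range).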
As mentioned earlier, it was already proven elsewhere~\cite{altshuler2006,sieben2008polyominoes}
that Premise~4 (roots of inflation chains) is fulfilled for the square lattice.
Therefore, we proceed to showing that Premise~5 holds.
\subsection{Premise 5: Convergence to a Minimum-Perimeter Polyomino}
In this section, we show that starting from any polyomino~$P$, and repeatedly applying a finite number
of inflation steps, we obtain a polyomino $Q = Q(P)$ for which $\abs{\perim{I(Q)}} = \abs{\perim{Q}} + 4$.
Let~$R(Q)$ denote the \emph{diameter} of~$Q$, \emph{i.e.}, the maximal horizontal or
vertical distance ($L^\infty$) between two cells of~$Q$.
The following lemma shows that some geometric features of a polyomino
disappear after inflating it enough times.
\begin{lemma}
\label{lem:no-hdz}
For any~$k > R(Q)$, the polyomino~$Q^k$ does not contain any
(i)~holes; (ii)~cells of Type~(d); or (iii)~patterns of Type~(z).
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(i)]
Let~$Q$ be a polyomino, and assume that~$Q^k$ contains a hole.
Consider a cell~$c$ inside the hole, and let~$c_u$ be the cell
of~$Q^k$ that lies immediately above it. (Note that since~$c_u$
belongs to the border of~$Q^k$, it is not a cell of~$Q$.) Any cell
that resides (not necessarily directly) below~$c$ is closer to~$c$ than
to~$c_u$. Since $c_u \in Q^k$, it ($c_u$) is closer than~$c$ to~$Q$, thus,
there must be a cell of~$Q$ (not necessarily directly) above~$c$,
otherwise~$c_u$ would not belong to~$Q^k$.
The same holds for cells below, to the right, and to the left
of~$c$, thus,~$c$ resides within the axis-aligned bounding box of
the extreme cells of~$Q$, and after~$R(Q)$ steps,~$c$ will be
occupied, and any hole will be eliminated.
\item[(ii)]
Assume that there exists a polyomino~$Q$ for which the polyomino~$Q^k$ contains a
cell~$c$ of Type~(d).
Without loss of generality, assume that the neighbors of~$c$ reside
to its left and to its right, and denote them by $c_\ell,c_r$, respectively.
Denote by~$c_o$ one of the cells whose inflation created~$c_\ell$, \emph{i.e.},
a cell which belongs to~$Q$ and is in distance of at most~$k$ from~$c_\ell$.
In addition, denote by $c_u,c_d$ the adjacent perimeter cells which
lie immediately above and below~$c$, respectively. The cell~$c_d$ is not
occupied, thus, its distance from~$c_o$ is at least~$k+1$, which means
that~$c_o$ lies in the same row as~$c_\ell$. Assume for contradiction
that~$c_o$ lies in a row below~$c_\ell$. Then, the distance between~$c_o$ and~$c_d$
is at most~$k$, hence~$c_d$ belongs to~$Q^k$.
The same holds for~$c_u$; thus, cell~$c_o$ must lie in the same row as~$c_\ell$.
Similar considerations show that~$c_o$ must lie to the left of~$c_\ell$,
otherwise~$c_d$ and~$c_u$ would be occupied.
In the same manner, one of the cells that originated~$c_r$
must lie in the same row as~$c_r$ on its right.
Hence, any cell of Type~(d) has cells of~$Q$ to its right and to its left,
and thus, it is found inside the axis-aligned bounding box of~$Q$,
which will necessarily be filled with polyomino cells after~$R(Q)$ inflation
steps.
\item[(iii)]
Let~$c$ be a Type-(z) perimeter cell of~$Q^k$. Assume, without loss
of generality, that the polyomino cells adjacent to it are to its
left and to its right, and denote them by~$c_\ell$ and~$c_r$, respectively.
Let~$c_o$ denote a cell whose repeated inflation has added~$c_\ell$ to $Q^k$.
(Note that~$c_o$ might not be unique.)
This cell must lie to the left of~$c$, otherwise, it would be closer to~$c$ than to~$c_\ell$,
and~$c$ would not be a perimeter cell.
In addition, $c_o$ must lie in the same row as~$c_\ell$, for otherwise, by the same
considerations as above, one of the cells above or below~$c$ will be occupied.
The same holds for~$c_r$ (but to its right), thus, cells of Type~(z) must
reside between two original cells of~$Q$, \emph{i.e.}, inside the bounding box
of~$Q$, and after~$R(Q)$ inflation steps, all cells inside this box
will become polyomino cells.
\end{itemize}
\myqed
\end{proof}
We can now conclude that inflating a polyomino~$Q$ more than~$R(Q)$ times eliminates all
holes and bridges, and, thus, the polyomino~$Q^k$ obeys the relation
$\abs{\perim{Q^k}} = \abs{\border{Q^k}} + 4$.
\begin{lemma}
\label{lem:conv-pb4}
Let~$Q$ be a polyomino, and let~$k > R(Q)$. We have that
$\abs{\perim{Q^k}} = \abs{\border{Q^k}} + 4$.
\end{lemma}
\begin{proof}
This follows at once from Lemma~\ref{lem:no-hdz} and
Theorem~\ref{theorem:pb4}.
\myqed
\end{proof}
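To illustrate this convergence, the following sketch (ours; it implements one inflation step
as adding all perimeter cells) starts from a $3 \times 3$ ring, whose hole initially violates
the relation, and checks that the relation holds after more than~$R(Q) = 2$ inflation steps.

```python
# Illustrative: repeated inflation I(Q) = Q ∪ perimeter(Q) eventually yields
# a polyomino satisfying |perimeter| = |border| + 4.

NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1))  # 4-neighborhood

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in NBRS]

def perimeter(q):
    return {n for c in q for n in neighbors(c) if n not in q}

def border(q):
    return {c for c in q if any(n not in q for n in neighbors(c))}

def inflate(q):
    # One inflation step: add all perimeter cells.
    return q | perimeter(q)

ring = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}  # 3x3 ring, one hole
assert len(perimeter(ring)) != len(border(ring)) + 4  # the hole spoils the relation

q = ring
for _ in range(3):  # R(ring) = 2, so three inflation steps certainly suffice
    q = inflate(q)
assert len(perimeter(q)) == len(border(q)) + 4
```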
\section{Polyhexes}
\label{sec:polyhexes}
In this section, we show that the premises of Theorem~\ref{thm:main} hold for the
two-dimensional hexagonal lattice~$\mathcal{H}$. The roadmap followed in this section is
similar to the one used in Section~\ref{sec:polyominoes}. In this section, all the
lattice-specific notations refer to~$\mathcal{H}$.
\subsection{Premise 1: Monotonicity}
The first premise has been proven for~$\mathcal{H}$ independently by Vainsencher and
Bruckstein~\cite{VainsencherB08} and by F\"{u}lep and Sieben~\cite{fulep2010polyiamonds}.
We will use the latter, stronger version which also includes a formula for $\epsilon(n)$.
\begin{theorem}
\label{thm:minp_hex}
\textup{\cite[Thm.~$5.12$]{fulep2010polyiamonds}}
$\epsilon(n) = \ceil{\sqrt{12n-3}\,}+3$.
\myqed
\end{theorem}
Clearly, the function~$\epsilon(n)$ is weakly-monotone increasing.
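As a quick numerical sanity check (ours, not part of the cited proof), the sketch below
evaluates this formula and confirms weak monotonicity as well as the unit-gap property
$\epsilon(n)-\epsilon(n-1) \leq 1$ for $n \geq 3$, which is used repeatedly below.

```python
# Illustrative check of eps(n) = ceil(sqrt(12n - 3)) + 3 for polyhexes.
from math import isqrt

def eps_hex(n):
    # ceil(sqrt(12n - 3)) + 3, computed exactly with integer arithmetic
    r = 12 * n - 3
    s = isqrt(r)
    return (s if s * s == r else s + 1) + 3

assert [eps_hex(n) for n in range(1, 8)] == [6, 8, 9, 10, 11, 12, 12]
assert all(eps_hex(n) >= eps_hex(n - 1) for n in range(2, 100_000))      # monotone
assert all(eps_hex(n) - eps_hex(n - 1) <= 1 for n in range(3, 100_000))  # unit gaps
```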
\subsection{Premise 2: Constant Inflation}
To show that the second premise holds, we analyze the different patterns that may
appear in the border and perimeter of minimal-perimeter polyhexes.
We can classify every border or perimeter cell by one of exactly~24 patterns,
distinguished by the number and positions of their adjacent occupied cells.
The~24 possible patterns are shown in Figure~\ref{fig:patterns_hex}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b0.txt}
\caption{}
\label{fig:b0}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b1.txt}
\caption{}
\label{fig:b1}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b2.txt}
\caption{}
\label{fig:b2}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b3.txt}
\caption{}
\label{fig:b3}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b4.txt}
\caption{}
\label{fig:b4}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b5.txt}
\caption{}
\label{fig:b5}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b6.txt}
\caption{}
\label{fig:b6}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b7.txt}
\caption{}
\label{fig:b7}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b8.txt}
\caption{}
\label{fig:b8}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b9.txt}
\caption{}
\label{fig:b9}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b10.txt}
\caption{}
\label{fig:b10}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{b11.txt}
\caption{}
\label{fig:b11}
\end{subfigure}
\medskip \\
\begin{subfigure}[t]{0.075\textwidth}
\centering
\addtocounter{subfigure}{2}
\drawpolyhex[scale=0.65]{p0.txt}
\caption{}
\label{fig:p0}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p1.txt}
\caption{}
\label{fig:p1}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p2.txt}
\caption{}
\label{fig:p2}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p3.txt}
\caption{}
\label{fig:p3}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p4.txt}
\caption{}
\label{fig:p4}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p5.txt}
\caption{}
\label{fig:p5}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p6.txt}
\caption{}
\label{fig:p6}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p7.txt}
\caption{}
\label{fig:p7}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p8.txt}
\caption{}
\label{fig:p8}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p9.txt}
\caption{}
\label{fig:p9}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p10.txt}
\caption{}
\label{fig:p10}
\end{subfigure}
\begin{subfigure}[t]{0.075\textwidth}
\centering
\drawpolyhex[scale=0.65]{p11.txt}
\caption{}
\label{fig:p11}
\end{subfigure}
\caption{All possible patterns (up to symmetries) of border (first row) and perimeter
(second row) cells.
The gray cells are polyhex cells, while the white cells are perimeter cells.
Each subfigure shows a cell in the middle, and the possible pattern of cells
surrounding it.}
\label{fig:patterns_hex}
\end{figure}
Let us recall the equation that is the subject of Lemma~\ref{lemma:pebe}.
\[
\abs{\perim{Q}} + e_P(Q) = \abs{\border{Q}} + e_B(Q).
\]
Our first goal is to express the excess of a polyhex~$Q$ as a function of the numbers of
cells of~$Q$ of each pattern. We denote the number of cells of a specific
pattern in~$Q$ by $\#\drawpolyhex[scale=0.6]{single_hex.txt}$, where `$\drawpolyhex[scale=0.6]{single_hex.txt}$' is one of the~24 patterns listed
in Figure~\ref{fig:patterns_hex}. The excess (either border or perimeter excess) of
Pattern~$\drawpolyhex[scale=0.6]{single_hex.txt}$ is denoted by $e(\drawpolyhex[scale=0.6]{single_hex.txt})$.
(For simplicity, we omit the dependency on~$Q$ in the notations of~$\#\drawpolyhex[scale=0.6]{single_hex.txt}$
and~$e(\drawpolyhex[scale=0.6]{single_hex.txt})$. This should be understood from the context.)
The border excess can be expressed
as~$e_B(Q) = \sum_{\drawpolyhex[scale=0.6]{single_hex.txt} \in \{a,\dots,l\}} e(\drawpolyhex[scale=0.6]{single_hex.txt})\#\drawpolyhex[scale=0.6]{single_hex.txt}$, and, similarly, the
perimeter excess can be expressed
as~$e_P(Q) = \sum_{\drawpolyhex[scale=0.6]{single_hex.txt} \in \{o,\dots,z\}} e(\drawpolyhex[scale=0.6]{single_hex.txt})\#\drawpolyhex[scale=0.6]{single_hex.txt}$.
By plugging these equations into \myeqref{eq:pebe}, we obtain that
\begin{equation}
\label{eq:all-patterns}
\abs{\perim{Q}} + \sum_{\drawpolyhex[scale=0.6]{single_hex.txt} \in \{o,\dots,z\}} e(\drawpolyhex[scale=0.6]{single_hex.txt})\#\drawpolyhex[scale=0.6]{single_hex.txt} =
\abs{\border{Q}} + \sum_{\drawpolyhex[scale=0.6]{single_hex.txt} \in \{a,\dots,l\}} e(\drawpolyhex[scale=0.6]{single_hex.txt})\#\drawpolyhex[scale=0.6]{single_hex.txt}~.
\end{equation}
The next step of proving the second premise is showing that minimal-perimeter polyhexes
cannot contain some of the~24 patterns. This will simplify \myeqref{eq:all-patterns}.
\begin{lemma}
\label{lemma:no-holes_hex}
(Analogous to Lemma~\ref{lemma:no-holes-sqr}.)
A minimal-perimeter polyhex does not contain holes.
\end{lemma}
\begin{proof}
Assume to the contrary that there exists a minimal-perimeter polyhex~$Q$ that contains
one or more holes, and let~$Q'$ be the polyhex obtained by filling one of the holes
in~$Q$.
Clearly, $|Q'| > |Q|$, and by filling the hole we eliminated some perimeter cells
and did not create new perimeter cells.
Hence, $\abs{\perim{Q'}} < \abs{\perim{Q}}$.
This contradicts the fact that~$\epsilon(n)$ is monotone increasing, as implied
by Theorem~\ref{thm:minp_hex}.
\myqed
\end{proof}
Another important observation is that minimal-perimeter polyhexes tend to be ``compact.''
We formalize this observation in the following lemma.
Recall the definition of a bridge from Section~\ref{sec:polyominoes}:
A \emph{bridge} is a cell whose removal unites two holes or renders the polyhex
disconnected (specifically, Patterns~(b), (d), (e), (g), (h), (j), and~(k)).
Similarly, a \emph{perimeter bridge} is an empty cell whose addition to the polyhex creates
a hole in it (specifically, Patterns~(p), (r), (s), (u), (v), (x),
and~(y)).
\begin{lemma}
\label{lemma:bridges}
(Analogous to Lemma~\ref{lemma:no-bridges-sqr}.)
Minimal-perimeter polyhexes contain neither bridges nor perimeter bridges.
\end{lemma}
\begin{proof}
Let~$Q$ be a minimal-perimeter polyhex, and assume first that it contains a bridge cell~$f$.
By Lemma~\ref{lemma:no-holes_hex}, since~$Q$ does not contain holes, the removal of~$f$
from~$Q$ will break it into two or three disconnected polyhexes.
We can connect these parts by translating one of them towards the other(s) by one cell.
(In the case of Pattern~(h), the polyhex is broken into three parts, but then translating
any of them towards the removed cell would make the polyhex connected again.)
Locally, this will eliminate at least two perimeter cells created by the bridge.
(This can be verified by exhaustively checking all the relevant patterns.)
The size of the new polyhex, $Q'$, is one less than that of~$Q$, while the
perimeter of~$Q'$ is smaller by at least two than that of~$Q$.
However, Theorem~\ref{thm:minp_hex} implies that~$\epsilon(n)-\epsilon(n-1) \leq 1$
for all $n \geq 3$, which is a contradiction to~$Q$ being a minimal-perimeter polyhex.
Assume now that~$Q$ contains a perimeter bridge. Filling the bridge will not
increase the perimeter. (It might create one additional perimeter cell, which will be
canceled out with the eliminated (perimeter) bridge cell.)
In addition, it will create a hole in the polyhex.
Then, filling the hole will create a polyhex with a larger size and a
smaller perimeter, which is a contradiction to~$\epsilon(n)$ being monotone
increasing.
\myqed
\end{proof}
As a consequence of Lemma~\ref{lemma:no-holes_hex}, Pattern~(o) cannot appear in any
minimal-perimeter polyhex.
In addition, Lemma~\ref{lemma:bridges} tells us that the Border Patterns~(b),
(d), (e), (g), (h), (j), and~(k), as well as the Perimeter Patterns~(p),
(r), (s), (u), (v), (x), and~(y) cannot appear in any minimal-perimeter polyhex.
(Note that Patterns~(b) and~(p) are not bridges by themselves, but each has an adjacent cell which is a bridge;
that is, the cells above the central cells of~\drawpolyhex[scale=0.3]{b1.txt}
and of~\drawpolyhex[scale=0.3]{p1.txt} are bridges.)
Finally, Pattern~(a) appears only in the singleton cell (the unique polyhex of size~1),
which can be disregarded.
Ignoring all these patterns, we obtain that
\begin{equation}
\label{eq:pb321}
\abs{\perim{Q}} + 3\#q + 2\#t + \#w = \abs{\border{Q}} + 3\#c + 2\#f + \#i.
\end{equation}
Note that Patterns~(l) and~(z) have excess~0, and, hence, although they may appear in
minimal-perimeter polyhexes, they do not contribute to the equation.
Consider a polyhex which contains only the six feasible patterns that contribute to the excess
(those that appear in \myeqref{eq:pb321}).
Let~$\xi$ denote the single polygon bounding the polyhex.
We now count the number of vertices and the sum of internal angles of~$\xi$ as functions of the
numbers of appearances of the different patterns.
In order to calculate the number of vertices of~$\xi$, we first determine the
number of vertices contributed by each pattern. In order to avoid multiple counting of a
vertex, we associate each vertex to a single pattern. Note that each vertex of~$\xi$
is surrounded by three (either occupied or empty) cells,
out of which one is empty and two are occupied, or vice versa. We call the cell whose type
(empty or occupied) appears once (among the surrounding three cells) the ``representative''
cell, and count only these representatives. Thus, each vertex is counted exactly once.
For example, out of the six vertices surrounding Pattern~(c), five vertices belong to the
bounding polygon, but the representative cell of only three of them is the cell at the center
of this pattern, thus, by our scheme, Pattern~(c) contributes three vertices, each having
a~$2\pi/3$ angle.
Similarly, only two of the four vertices in the configuration of Pattern~(t) are
represented by the cell at the center of this pattern. In this case, each vertex is the
head of a $4\pi/3$ angle.
To conclude, the total number of vertices of~$\xi$ is
\[
3\#c+2\#f+\#i+3\#q+2\#t+\#w,
\]
and the sum of internal angles is
\begin{equation}
\label{eq:sum-1}
(3\#c+2\#f+\#i)2\pi/3 + (3\#q+2\#t+\#w)4\pi/3.
\end{equation}
On the other hand, it is known that the sum of internal angles is equal to
\begin{equation}
\label{eq:sum-2}
(3\#c+2\#f+\#i+3\#q+2\#t+\#w-2)\pi.
\end{equation}
Equating the terms in Formulae~\eqref{eq:sum-1} and~\eqref{eq:sum-2}, we obtain that
\begin{equation}
\label{eq:sum-3}
3\#c+2\#f+\#i = 3\#q+2\#t+\#w + 6.
\end{equation}
Plugging this into \myeqref{eq:pb321}, we conclude
that~$\abs{\perim{Q}} = \abs{\border{Q}} + 6$, as required.
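The relation $\abs{\perim{Q}} = \abs{\border{Q}} + 6$ can also be verified numerically.
The sketch below (ours, purely illustrative; polyhexes are represented in axial coordinates,
in which every cell has six neighbors) checks it on two small hole-free, bridge-free
polyhexes.

```python
# Illustrative: on the hexagonal lattice (axial coordinates), hole-free and
# bridge-free polyhexes satisfy |perimeter(Q)| = |border(Q)| + 6.

HEX_NBRS = ((1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1))  # 6-neighborhood

def neighbors(cell):
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_NBRS]

def perimeter(p):
    # Empty cells adjacent to at least one cell of p.
    return {n for c in p for n in neighbors(c) if n not in p}

def border(p):
    # Cells of p adjacent to at least one empty cell.
    return {c for c in p if any(n not in p for n in neighbors(c))}

domino = {(0, 0), (1, 0)}            # two adjacent hexagons
flower = {(0, 0), *HEX_NBRS}         # a cell together with its six neighbors

for p in (domino, flower):
    assert len(perimeter(p)) == len(border(p)) + 6
```

(The singleton polyhex, Pattern~(a), is the one exception, exactly as in the square case.)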
We also need to show that the second part of the second premise holds, that is, that if~$Q$
is a minimal-perimeter polyhex, then $\abs{\perim{I(Q)}} \leq \abs{\perim{Q}} + 6$.
To this end, note that $\border{I(Q)} \subset \perim{Q}$, thus, it is sufficient
to show that~$\abs{\perim{I(Q)}} \leq \abs{\border{I(Q)}} + 6$. Obviously,
\myeqref{eq:all-patterns} holds for the polyhex~$I(Q)$, hence, in order to prove the
relation, we only need to prove the following lemma.
\begin{lemma}
\label{lemma:inf-no-bridges}
If~$Q$ is a minimal-perimeter polyhex, then~$I(Q)$ does not contain any
bridge.
\end{lemma}
\begin{proof}
Assume to the contrary that~$I(Q)$ contains a bridge.
Then, the cell that makes the bridge must have been created in the inflation
process. However, any cell~$c \in I(Q) \backslash Q$ must have a
neighboring cell~$c' \in Q$. All the cells adjacent to~$c'$ must also be part
of~$I(Q)$, thus, cell~$c$ must have three consecutive neighbors around it,
namely, $c'$ and the two cells neighboring both~$c$ and~$c'$.
The only bridge pattern that fits this requirement is Pattern~(j).
However, this means that there must have been a gap of two cells in~$Q$ that
caused the creation of~$c$ during the inflation of~$Q$. Consequently, by
filling the gap and the hole it created, we will obtain (see Figure~\ref{fig:no-j})
a larger polyhex with a smaller perimeter, which contradicts the fact that~$Q$
is a minimal-perimeter polyhex.
\myqed
\end{proof}
\begin{figure}
\centering
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpolyhex[scale=0.5]{noj1.txt}
\caption{$Q$}
\end{subfigure}
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpolyhex[scale=0.5]{noj2.txt}
\caption{$I(Q)$}
\end{subfigure}
\begin{subfigure}[t]{0.2\textwidth}
\centering
\drawpolyhex[scale=0.5]{noj3.txt}
\caption{$Q'$}
\end{subfigure}
\caption{The construction in Lemma~\ref{lemma:inf-no-bridges} which shows that~$I(Q)$
cannot contain a cell of Pattern~$(j)$. Assuming that it does, by filling the
hole in it, we obtain~$Q'$ which contradicts the perimeter-minimality
of~$Q$. (The marked cells in~$Q'$ are those added to~$Q$.)}
\label{fig:no-j}
\end{figure}
\subsection{Premise 3: Deflation Resistance}
We now show that deflating a minimal-perimeter polyhex
results in another (smaller) valid polyhex.
The intuition behind this condition is that a minimal-perimeter polyhex is ``compact,''
having a shape which does not become disconnected by deflation.
\begin{lemma}
\label{lemma:def-valid-polyhex}
For any minimal-perimeter polyhex~$Q$, the shape~$D(Q)$ is also a valid
(connected) polyhex.
\end{lemma}
\begin{proof}
The proof of this lemma is very similar to the first part of the proof of
Lemma~\ref{lemma:bridges}.
Consider a minimal-perimeter polyhex~$Q$.
In order for~$D(Q)$ to be disconnected, $Q$ must contain a bridge of either a
single cell or two adjacent cells.
A 1-cell bridge cannot be part of~$Q$ by Lemma~\ref{lemma:bridges}.
The polyhex~$Q$ cannot contain a 2-cell bridge either.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\drawpolyhex[scale=0.45]{hex_bridge_1.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\drawpolyhex[scale=0.45]{hex_bridge_2.txt}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\drawpolyhex[scale=0.45]{hex_bridge_3.txt}
\caption{}
\end{subfigure}
\caption{An example for the construction in the proof of Lemma~\ref{lemma:def-valid-polyhex}.
The two-cell bridge is colored in red in (a). Then, in (b), the bridge is removed,
and, in (c), the two parts are ``glued'' together.}
\label{fig:two-cell-bridge}
\end{figure}
Assume to the contrary that it does, as is shown in Figure~\ref{fig:two-cell-bridge}(a).
Then, removing the bridge (see Figure~\ref{fig:two-cell-bridge}(b)), and then connecting
the two pieces (by translating one of them towards the other by one cell along a direction
which makes a $60^{\circ}$ angle with the bridge), creates (Figure~\ref{fig:two-cell-bridge}(c))
a polyhex whose size is smaller by two than that of the original polyhex, and whose perimeter is
smaller by at least two (since the perimeter cells adjacent to the bridge disappear).
The new polyhex is valid, that is, the translation by one cell of one part towards the
other does not make any cells overlap, otherwise there is a hole in the original polyhex, which
is impossible for a minimal-perimeter polyhex by Lemma~\ref{lemma:no-holes_hex}.
However, we reached a contradiction since for a minimal-perimeter polyhex of size~$n \geq 7$,
we have that~$\epsilon(n) - \epsilon(n-2) \leq 1$.
Finally, a simple (if tedious) inspection shows that the deflation of any polyhex of
size less than~7 results in the empty polyhex.
\myqed
\end{proof}
In conclusion, we have shown that all the premises of Theorem~\ref{thm:main} are satisfied
for the hexagonal lattice, and, therefore, inflating a set of all the minimal-perimeter
polyhexes of a certain size yields another set of minimal-perimeter polyhexes of another,
larger, size. This result is demonstrated in Figure~\ref{fig:hex_corrolary}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm9_1.txt}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm9_2.txt}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm9_3.txt}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm9_4.txt}
\end{subfigure} \medskip \\
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm23_1.txt}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm23_2.txt}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm23_3.txt}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\drawpolyhex[scale=0.45]{hm23_4.txt}
\end{subfigure}
\caption{A demonstration of Theorem~\ref{thm:main} for polyhexes.
The top row contains all polyhexes in~$M_9$ (minimal-perimeter polyhexes of
size~9), while the bottom row contains their
inflated versions, all the members of~$M_{23}$.}
\label{fig:hex_corrolary}
\end{figure}
We also characterized the inflation-chain roots of polyhexes.
As mentioned above, the premises of Theorems~\ref{thm:main} and~\ref{thm:root-conditioned}
are satisfied for polyhexes~\cite{VainsencherB08,sieben2008polyominoes}, and, thus, the
inflation-chain roots are those polyhexes that have the minimum size for a given minimal perimeter.
An easy consequence of Theorem~\ref{thm:minp_hex} is that the
formula~$\floor{\frac{(p-4)^2}{12}+\frac{5}{4}}$ generates all these inflation-chain roots.
This result is demonstrated in Figure~\ref{fig:minpH_roots}.
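This characterization can be spot-checked computationally on small sizes. The following script (our own illustration, not part of the paper) enumerates all fixed polyhexes of size at most~7 in axial coordinates, computes the minimal perimeter $\epsilon(n)$ under the cell-perimeter definition used above, and compares the resulting inflation-chain roots with the values generated by the formula; the coordinate convention and the size cutoff are our own choices.

```python
from math import floor

# Axial-coordinate neighbours of a cell on the hexagonal lattice.
NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def perimeter(cells):
    """Number of empty cells edge-adjacent to the polyhex (cell perimeter)."""
    return len({(x+dx, y+dy) for (x, y) in cells
                for (dx, dy) in NEIGH} - cells)

def canon(cells):
    """Normalize by translation, so that each fixed polyhex is counted once."""
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x-mx, y-my) for x, y in cells)

def min_perimeters(max_n):
    """eps[n] = minimal perimeter over all polyhexes of size n, n = 1..max_n."""
    layer = {canon({(0, 0)})}
    eps = {1: 6}   # a single hexagon has six empty neighbours
    for n in range(2, max_n + 1):
        layer = {canon(p | {(x+dx, y+dy)})
                 for p in layer
                 for (x, y) in p for (dx, dy) in NEIGH
                 if (x+dx, y+dy) not in p}
        eps[n] = min(perimeter(p) for p in layer)
    return eps

eps = min_perimeters(7)
# Inflation-chain roots: the smallest size attaining each minimal perimeter.
roots = sorted({min(n for n in eps if eps[n] == p) for p in set(eps.values())})
# Sizes generated by the formula floor((p-4)^2/12 + 5/4) for small perimeters.
generated = sorted({floor((p - 4)**2 / 12 + 5 / 4)
                    for p in range(6, max(eps.values()) + 1)})
```

The enumeration grows quickly with the size, so this check is only feasible for small polyhexes.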
\begin{figure}
\centering
\includestandalone[scale=0.40]{minpH_roots}
\vspace{-0.25cm}
\caption{The relation between the minimum perimeter of polyhexes, $\epsilon(n)$, and
the inflation-chain roots. The points represent the minimum perimeter of a
polyhex of size~$n$, and sizes which are inflation-chain roots are colored in red.
The arrows show the mapping between sizes of minimal-perimeter polyhexes (induced
by the inflation operation) and demonstrate the proof of
Theorem~\ref{thm:root-candidates}.}
\label{fig:minpH_roots}
\end{figure}
As in the case of polyominoes, and as was mentioned earlier,
it was already proven elsewhere~\cite{VainsencherB08,fulep2010polyiamonds}
that Premise~4 (roots of inflation chains) is fulfilled for the hexagonal lattice.
Therefore, we proceed to showing that Premise~5 holds.
\subsection{Premise 5: Convergence to a Minimum-Perimeter Polyhex}
Similarly to polyominoes, we now show that starting from a polyhex~$Q$ and repeatedly applying
a finite number, $k$, of inflation steps, we obtain a polyhex $Q^k=I^k(Q)$ for which
$\perim{I(Q^k)} = \perim{Q^k} + 6$.
Let~$R(Q)$ denote the \emph{diameter} of~$Q$, \emph{i.e.}, the maximal distance between two
cells of~$Q$ when projected onto one of the three main axes.
As in the case of polyominoes,
some geometric features of~$Q$ will disappear after $R(Q)$ inflation steps.
\begin{lemma}
\label{lem:no-hdz-hex}
(Analogous to Lemma~\ref{lem:no-hdz}.)
For any $k > R(Q)$, the polyhex~$Q^k$ does not contain any
(i)~holes; (ii)~polyhex bridge cells; or (iii)~perimeter bridge cells.
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(i)]
The proof is identical to the proof for polyominoes.
\item[(ii)]
After~$R(Q)$ inflation steps, the obtained polyhex is clearly connected.
If at this point there exists a bridge cell, then it must have been created in the
last inflation step, since any further step would make this cell cease to be a bridge cell.
If the inflation step that eliminates the mentioned bridge creates another bridge,
then removing the new bridge would not render the polyhex disconnected (since the polyhex
was already connected before this inflation step); thus, the step must have created a hole
in the polyhex, in contradiction to the previous clause.
\item[(iii)]
We will present here a version of the analogue proof for polyominoes, adapted for polyhexes.
Let~$c$ be a perimeter bridge cell of~$Q^k$. Assume, without loss of generality, that two of the polyhex
cells adjacent to it are above and below it, and denote them by~$c_1$ and~$c_2$, respectively.
The cell whose inflation resulted in adding~$c_1$ to the polyhex,
denoted by~$c_o$, must reside above~$c$, otherwise, it would be closer to~$c$ than to~$c_1$,
and~$c$ would not be a perimeter cell.
The same holds for~$c_2$ (below $c$), thus, any perimeter bridge cell must
reside between two original cells of~$Q$. Hence, after~$R(Q)$ inflation steps, all such cells
will have become polyhex cells.
\end{itemize}
\end{proof}
\begin{lemma}
\label{lem:conv-pb4-hex}
(Analogous to Lemma~\ref{lem:conv-pb4}.)
After~$k = R(Q)$ inflation steps, the polyhex~$Q^k$ will obey
$\abs{\perim{Q^k}} = \abs{\border{Q^k}} +6$.
\end{lemma}
\begin{proof}
This follows at once from Lemma~\ref{lem:no-hdz-hex} and
Equation~\ref{eq:sum-3}.
\end{proof}
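The role of the constant~6 can be illustrated computationally. In the sketch below (our own illustration in axial coordinates, not part of the proof), inflation is applied repeatedly to a hexagonal ring enclosing a hole; the first step absorbs the hole, and every subsequent step increases the cell perimeter by exactly~6, as the lemma asserts.

```python
# Axial-coordinate neighbours of a cell on the hexagonal lattice.
NEIGH = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def perim(cells):
    """The set of empty cells adjacent to the polyhex."""
    return {(x+dx, y+dy) for (x, y) in cells for (dx, dy) in NEIGH} - cells

def inflate(cells):
    """Inflation: adjoin every perimeter cell to the polyhex."""
    return cells | perim(cells)

# A ring of six cells around an empty center: a polyhex with a hole.
Q = {(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)}
perims = []
for _ in range(5):
    perims.append(len(perim(Q)))
    Q = inflate(Q)
# The first perimeter counts the hole; after one step the hole is filled,
# and every further inflation increases the perimeter by exactly 6.
```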
\section{Polyiamonds}
\label{sec:polyiamonds}
Polyiamonds are sets of edge-connected triangles on the regular triangular lattice.
Unlike the square and the hexagonal lattices, in which all cells are identical in shape and
in their role, the triangular lattice has two types of cells, which can be seen as left- and
right-pointing arrows (\drawpolyiamond[scale=0.4]{t2_diam.txt},\drawpolyiamond[scale=0.4]{t1_diam.txt}).
Due to this complication, inflating a minimal-perimeter polyiamond does not necessarily
result in a minimal-perimeter polyiamond. Indeed, the second premise of
Theorem~\ref{thm:main} does not hold for polyiamonds.
This fact is not surprising, since inflating minimal-perimeter polyiamonds creates ``jagged''
polyiamonds whose perimeter is not minimal.
Figures~\ref{fig:exmp_diamond}(a,b) illustrate this phenomenon.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\drawpolyiamond[scale=0.4]{exmp_diamond.txt}
\caption{$Q$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\drawpolyiamond[scale=0.4]{exmp_diamond_I.txt}
\caption{$I(Q)$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\drawpolyiamond[scale=0.4]{exmp_diamond_II.txt}
\caption{$Q'$}
\end{subfigure}
\caption{An example of inflating polyiamonds.
The polyiamond~$Q$ is of a minimum perimeter, however, its inflated
version, $I(Q)$ is not of a minimum perimeter.
The polyiamond~$Q'$, obtained by adding to~$Q$ all the cells
sharing a \emph{vertex} with $Q$, is a minimal-perimeter polyiamond.}
\label{fig:exmp_diamond}
\end{figure}
However, we can fix this situation in the triangular lattice by modifying the definition
of the perimeter of a polyiamond so that it includes all cells that share
a \emph{vertex} (instead of an edge) with the boundary of the polyiamond.
Under the new definition, Theorem~\ref{thm:main} holds.
The reason for this is surprisingly simple:
The modified definition merely mimics the inflation of animals on the graph dual to that
of the triangular lattice. (Recall that graph duality maps vertices to faces (cells), and
vice versa, and edges to edges.)
However, the dual of the triangular lattice is the hexagonal lattice, for which we have
already shown in Section~\ref{sec:polyhexes} that all the premises of
Theorem~\ref{thm:main} hold. Thus, applying the modified inflation operator in the
triangular lattice induces a bijection between sets of minimal-perimeter polyiamonds.
This relation is demonstrated in Figure~\ref{fig:exmp_diamond}.
\comment{
\section{Polycubes}
\label{sec:polycubes}
In this section we consider animals in the high dimension square
lattice, namely polycubes. Empirically, it seems that inflating all
the minimal-perimeter polycubes of a given size the result is all the
minimal-perimeter polycubes of some larger size. We can not say it
definitively since we are not aware of any algorithm which generates
all the minimal-perimeter polycubes other then generating all the
polycubes and checking which ones have minimal-perimeter. Since the
number of polycubes grows rapidly with the size we can not produce all
the minimal with size greater than some relatively small value (in the
3D case, we only know the number of polycubes with size up to $19$).
For all the values we did check it seems that the inflation operation
does induce a bijection between sets of minimal-perimeter polycubes.
However, we can not prove this using Theorem~\ref{thm:main} since the second condition does not hold. Even more than that, we can show that Theorem~\ref{thm:main} probably apply only to two dimensional lattices. A conclusion from Lemma~\ref{lemma:pnc_size} is that for a lattice $\mathcal{L}$, satisfying the conditions of Theorem~\ref{thm:main} it holds that $\epsilon_\mathcal{L}(n) = \Theta(\sqrt{n})$. It is reasonable to assume that in a $d$-dimensional lattice $\mathcal{L}_d$, the relation between the size of a minimal-perimeter animal and its perimeter is roughly as the relation between a $d$-dimensional sphere and its surface area, thus, we can assume that $\epsilon^{\mathcal{L}_d}(n) = \Theta(n^{\frac{d-1}{d}})$, and thus Theorem~\ref{thm:main} does not hold for high dimensional lattices.
Proving this relation in high dimensions remains an open problem, and probably another technique should be utilized in order to prove (or disprove) this property in high dimensions.
}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we show that the inflation operation induces a bijection between sets of
minimal-perimeter animals on any lattice which satisfies three conditions.
We demonstrate this result on three planar lattices: the square, hexagonal, and also the
triangular (with a modified definition of the perimeter).
The most important contribution of this paper is the application of our result to polyhexes.
Specifically, we proved that the number of isomers of a benzenoid hydrocarbon
remains unchanged under circumscribing, a phenomenon which was observed in the chemistry
literature more than~30 years ago but had never been proven until now.
However, we do not believe that this set of conditions is necessary.
Empirically, it seems that by inflating all the minimal-perimeter polycubes (animals on the
3-dimensional cubical lattice) of a given size, we obtain all the minimal-perimeter polycubes
of some larger size. However, the second premise of Theorem~\ref{thm:main} does not hold for
this lattice.
Moreover, we believe that as stated, Theorem~\ref{thm:main} applies only to 2-dimensional
lattices!
A simple conclusion from Lemma~\ref{lemma:pnc_size} is that if the premises of
Theorem~\ref{thm:main} hold for animals on a lattice~$\mathcal{L}$,
then~$\epsilon_\mathcal{L}(n) = \Theta(\sqrt{n})$.
We find it reasonable to assume that for a $d$-dimensional lattice $\mathcal{L}_d$, the
relation between the size of a minimal-perimeter animal and its perimeter is roughly equal
to the relation between a $d$-dimensional sphere and its surface area.
Hence, we conjecture that $\epsilon^{\mathcal{L}_d}(n) = \Theta(n^{1-1/d})$, and, thus,
Theorem~\ref{thm:main} is not suitable for higher dimensions.
\bibliographystyle{abbrv}
\section{Introduction}
Spherically symmetric and static black holes play an important r\^ole in Einstein's General Relativity owing to their simplicity, not only for their own construction, but also for the analysis of the surrounding geodesic motions. The analysis \cite{Synge:1966okc,Luminet:1979nyg} of null geodesics of such a black hole that is asymptotic to the Minkowski spacetime indicates that photons can have a closed orbit, forming a photon sphere. For most known exact black hole solutions, there is only one such photon sphere and it is unstable. There are two classes of photons whose orbits do not cross the photon sphere: those inside will spiral into the horizon and those outside will escape to infinity, surrounding a shadow disk, whose radius, also called the optical radius, is the impact parameter of the photons.
For spherically symmetric and static black holes, multiple close orbits can exist even under the stringent dominant energy condition and an explicit example was constructed in Einstein-Maxwell gravity extended with a quasi-topological electromagnetic structure \cite{Liu:2019rib}. In this black hole, there exists a stable photon sphere sandwiched between two unstable ones. Thus the trapped photons inside the outer photon sphere can form a photon shield outside the horizon, without falling into the horizon or escaping to infinity.
Recently a sequence of inequalities was proposed relating the radii of the black hole event horizon $R_+$, the (outer and unstable) photon sphere $R_{\rm ph}$, and the black hole shadow $R_{\rm sh}$ \cite{Lu:2019zxb}
\be
\fft32 R_+\,\le\, R_{\rm ph}\,\le\, \fft{R_{\rm sh}}{\sqrt3}\,\le\, 3M.\label{d4conjecture}
\ee
This set includes the well-known Riemannian Penrose inequality $R_+\le 2M$ \cite{Penrose:1973um}, and the inequalities proposed by Hod ($R_{\rm ph}\le 3M$) \cite{Hod:2017xkz} and by Cveti\v c, Gibbons and Pope ($R_{\rm ph}\le R_{\rm sh}/\sqrt3$) \cite{Cvetic:2016bxi}. The Riemannian Penrose inequality is considered proven under the dominant energy condition; see e.g.~\cite{Mars:2009cj}. The other two inequalities can also be proven under the dominant energy condition, together with a negative trace of the energy-momentum tensor. However, a large number of black holes satisfying at least the null energy condition were examined in \cite{Lu:2019zxb}, and no counterexample to (\ref{d4conjecture}) was found. A different lower bound for the photon sphere was also conjectured in \cite{Hod:2012nk}, but it was shown \cite{Cvetic:2016bxi} to be violated by the Kaluza-Klein dyonic black hole \cite{Rasheed:1995zv,Lu:2013ura}.
The four-dimensional inequalities (\ref{d4conjecture}) were generalized to higher dimensions, where they become
\cite{Lu:2019zxb}
\be
\big(\ft12(D-1)\big)^{\fft{1}{D-3}}R_+\,\le R_{\rm ph}\,\le
\sqrt{\ft{D-3}{D-1}}\, R_{\rm sh}\,\le\,
\Big(\ft{8\pi M(D-1)}{(D-2)\Omega_{D-2}}\Big)^{\fft{1}{D-3}}\,.\label{dconjecture}
\ee
In particular, it was stated that the Reissner-Nordstr\"om black hole in general dimensions was verified to satisfy these inequalities \cite{Lu:2019zxb}. A sufficient energy condition for the $R_{\rm ph}$-$M$ inequality was established in \cite{Gallo:2015bda}. The purpose of this paper is not to verify the conjecture with more examples in Einstein gravity in general dimensions. Instead, we shall consider charged black holes in Einstein-Gauss-Bonnet-Maxwell (EGBM) gravities in general $D\ge 5$. This is worth checking for two reasons. On one hand, the theory is beyond Einstein gravity since it involves quadratic curvature invariants. On the other hand, if we consider the Gauss-Bonnet term as matter, then the black holes satisfy the weak energy condition, which makes the verification necessary and nontrivial.
The paper is organized as follows. In section 2, we start with a review of the charged asymptotically flat black holes in EGBM gravities in general $D\ge 5$ dimensions. We first show that the inequalities (\ref{dconjecture}) are satisfied by the simpler RN black holes and also by the $D=5$ neutral black hole. We then prove that the inequalities hold for all the general static black holes. We conclude the paper in section 3.
\section{Einstein-Gauss-Bonnet-Maxwell Gravity}
We start with the Lagrangian of EGBM gravity in general $D$ dimensions
\be
{\cal L}=\sqrt{-g} \big(R -\ft14 F^2 + \alpha_{\rm GB} E^{(4)}\big)\,,\qquad
E^{(4)}=R^2 -4 R^{\mu\nu}R_{\mu\nu} + R^{\mu\nu\rho\sigma} R_{\mu\nu\rho\sigma}.
\ee
The quadratic Euler integrand is inspired by the ${\cal N}=1$ superstring, arising as an $\alpha'$ correction of the string world-sheet action \cite{Bergshoeff:1989de}. The theory admits Minkowski spacetime as its vacuum, and the ghost-free condition requires that the coupling constant $\alpha_{\rm GB}\ge 0$ \cite{Boulware:1985wk}. The AdS vacuum of this theory, on the other hand, has ghostlike graviton modes.
\subsection{Charged black holes}
The EGBM gravity admits spherically-symmetric and static charged black holes \cite{Wiltshire:1985us,Cvetic:2001bk}, given by
\bea
ds^2_D &=& - f dt^2 + \fft{dr^2}{f} + r^2 d\Omega_{D-2}^2\,,\qquad A =\Phi(r) dt\,,\qquad \Phi=\sqrt{\fft{2(D-2)}{D-3}}\fft{q}{r^{D-3}},\nn\\
f &=& 1 + \fft{r^2}{2\alpha}\left( 1 - \sqrt{1 + \fft{8\alpha\mu}{r^{D-1}} -
\fft{4\alpha q^2}{r^{2(D-2)}}}\right),\qquad \alpha=(D-3)(D-4)\alpha_{\rm GB}\,.\label{sol}
\eea
The mass and electric charge are given by
\be
M=\fft{(D-2)\Omega_{D-2}}{8\pi}\mu\,,\qquad Q_e = \fft{\sqrt{(D-3)(D-2)}\Omega_{D-2}}{8\sqrt2 \pi} q\,.
\ee
Here $\Omega_{D-2}$ denotes the volume of the unit round $S^{D-2}$. Since $Q_e/q$ is just a numerical factor, we shall not always distinguish between $Q_e$ and $q$ as the electric charge. The neutral solutions were obtained in
\cite{Boulware:1985wk,Cai:2001dz}.
For sufficiently large mass, the solution describes a black hole with both inner and outer horizons, $0\le r_-\le r_+$. We use the notation $r_0$ to denote a generic horizon, and the corresponding temperature and entropy are
\bea
T&=& \fft{(D-3) r_0^2 \left(r_0^{2 D}-q^2 r_0^6\right)+\alpha (D-5) r_0^{2 D}}{4 \pi\, r_0^{2D+1} \left(2 \alpha +r_0^2\right)}\,,\nn\\
S&=&\ft14 \Omega_{D-2} r_0^{D-2} \Big(1+\frac{2 \alpha (D-2)}{(D-4) r_0^2}\Big)\,.
\eea
It is easy to verify that the first law of black hole thermodynamics $dM=TdS + \Phi(r_0) dQ_e$ is satisfied for both inner and outer horizons. For given charge $q$, there is a smallest horizon radius $r_{\rm ex}>0$, corresponding to the extremal black hole; it is determined by
\be
\mu_{\rm ex}= r_{\rm ex}^{D-3}+ \frac{\alpha (D-4) r_{\rm ex}^{D-5}}{D-3}\,,\qquad q^2 =\left(r_{\rm ex}^2 + \fft{D-5}{D-3} \alpha\right)r_{\rm ex}^{2(D-4)}\,.\label{extremal}
\ee
For $\mu> \mu_{\rm ex}$, the black holes have two horizons. If we regard the Euler integrand $E^{(4)}$ as matter, then from the Einstein gravity point of view, the charged black holes satisfy the weak energy condition. It turns out that $\rho - p_{\rm sphere}$ can be negative, which prevents the solution from satisfying the dominant energy condition. Furthermore, the trace of the energy-momentum tensor can be positive.
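The first law quoted above can be verified numerically. The sketch below (our own check; the parameter values are arbitrary samples) computes the temperature as $T=f'(r_0)/(4\pi)$ by a finite difference and compares $dM$ against $T\,dS+\Phi\,dQ_e$ under independent variations of $r_0$ and $q$.

```python
import math

D, alpha = 6, 0.3                 # sample dimension and Gauss-Bonnet coupling
r0, q0 = 1.2, 0.4                 # sample horizon radius and charge parameter
Omega = 2*math.pi**((D-1)/2)/math.gamma((D-1)/2)   # volume of the unit S^{D-2}

def mu_of(r, q):
    # the horizon condition f(r) = 0 solved for mu
    return 0.5*(r**(D-3) + alpha*r**(D-5) + q**2*r**(3-D))

def M(r, q):
    return (D-2)*Omega/(8*math.pi)*mu_of(r, q)

def S(r):
    return 0.25*Omega*r**(D-2)*(1 + 2*alpha*(D-2)/((D-4)*r**2))

def Qe(q):
    return math.sqrt((D-3)*(D-2))*Omega*q/(8*math.sqrt(2)*math.pi)

def Phi(r, q):
    return math.sqrt(2*(D-2)/(D-3))*q/r**(D-3)

def f(r, mu, q):
    X = 1 + 8*alpha*mu/r**(D-1) - 4*alpha*q**2/r**(2*(D-2))
    return 1 + r**2/(2*alpha)*(1 - math.sqrt(X))

def T(r, q):
    # Hawking temperature T = f'(r0)/(4 pi), by a central finite difference
    mu, h = mu_of(r, q), 1e-6
    return (f(r+h, mu, q) - f(r-h, mu, q))/(2*h)/(4*math.pi)

h = 1e-5
dM_dr = (M(r0+h, q0) - M(r0-h, q0))/(2*h)
dM_dq = (M(r0, q0+h) - M(r0, q0-h))/(2*h)
dS_dr = (S(r0+h) - S(r0-h))/(2*h)
dQ_dq = (Qe(q0+h) - Qe(q0-h))/(2*h)
```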
\subsection{Photon spheres and shadows}
Owing to the spherical symmetry, the null geodesic motions can be easily analysed. For the metric given in (\ref{sol}), the radius of the photon sphere is determined by
\be
\fft{d}{dr}\left(\fft{f}{r^2}\right)\Big|_{\rm r_{\rm ph}}=0\,.\label{pheq}
\ee
The impact parameter, also called the optical radius or the shadow radius, is given by
\be
R_{\rm sh} = \fft{r_{\rm ph}}{\sqrt{f(r_{\rm ph})}}\,.
\ee
Note that the forms of both radii are independent of the spacetime dimension. In order to establish (\ref{dconjecture}), it is convenient to define
\bea
{\cal X} &=& \sqrt{\fft{D-1}{D-3}} \fft{R_M}{R_{\rm sh}}\,,\qquad
R_M=\left(\fft{8\pi M (D-1)}{(D-2)\Omega_{D-2}}\right)^{\fft{1}{D-3}}\,,\nn\\
{\cal Y}&=& \sqrt{\fft{D-3}{D-1}} \fft{R_{\rm sh}}{r_{\rm ph}}\,,\qquad {\cal Z} = \left(\fft{2}{D-1}\right)^{\fft{1}{D-3}}\,\fft{r_{\rm ph}}{r_+}.\label{XYZdef}
\eea
Our goal in this paper is to prove
\be
{\cal X}\ge 1\,,\qquad {\cal Y}\ge 1\,,\qquad {\cal Z}\ge 1\,,\label{XYZineq}
\ee
for charged black holes in EGBM gravities in general dimensions.
\subsection{RN black holes}
We begin with analysing the RN black hole in general dimensions by setting $\alpha=0$. It was stated in \cite{Lu:2019zxb} that (\ref{dconjecture}) is satisfied by these black holes, but no details were given. We thus present the proof for this simpler example before we progress to the general solutions. The metric function
is
\be
f=1 - \fft{2\mu}{r^{D-3}} + \fft{q^2}{r^{2(D-3)}}\,.
\ee
The solution describes a black hole when $q\le \mu$, with the outer horizon radius
\be
R_+=\left( \mu + \sqrt{\mu^2 -q^2}\right)^{\fft{1}{D-3}}\,.\nn
\ee
The photon sphere and black hole shadow radii are
\bea
R_{\rm ph} &=& \left(\ft12 (D-1)\mu + \ft12\sqrt{(D-1)^2\mu^2 - 4(D-2) q^2}\right)^{\fft1{D-3}}\,,\nn\\
R_{\rm sh} &=& \frac{2^{-\frac{D-1}{2 (D-3)}} \left((D-1) \mu +\sqrt{(D-1)^2 \mu ^2-4 (D-2) q^2}\right)^{\frac{D-2}{D-3}}}{\sqrt{D-3} \sqrt{(D-1) \mu ^2-2 q^2+\mu \sqrt{(D-1)^2 \mu ^2-4 (D-2) q^2}}}\,.
\eea
To verify the inequalities (\ref{XYZineq}), it is instructive to introduce a dimensionless parameter $\lambda$ to replace the charge parameter $q$:
\be
q=\fft{\mu \sqrt{\lambda(\lambda + D-1)}}{\lambda + D-2}\,.
\ee
The range $0\le q \le \mu$ is now mapped to $\lambda \in [0,+\infty)$, with $\lambda=0$ giving the Schwarzschild
black hole, and $\lambda\rightarrow\infty$ yielding the extremal RN black hole. We find that
\bea
{\cal X}&=& \fft{(D-1)^{\frac{D-1}{2 (D-3)}} (\lambda+D -2)^{\frac{1}{D-3}} \sqrt{(D-3) \lambda +(D-2)^2}}{(D-2)^{\frac{D-2}{D-3}} (\lambda+D -1)^{\frac{D-1}{2 (D-3)}}}\,,\nn\\
{\cal Y}&=& \Big(1+\frac{\lambda }{(D-1) \left((D-3) \lambda +(D-2)^2\right)}\Big)^{\fft12}\,,\nn\\
{\cal Z}&=& \Big(1 + \frac{(D-3)^2 \lambda/(D-1)}{2 (D-2) \sqrt{(D-3) \lambda +(D-2)^2}+(D-3) \lambda +2 (D-2)^2}\Big)^{\fft{1}{D-3}}.
\eea
The inequalities ${\cal Y}\ge 1$ and ${\cal Z}\ge 1$ are manifest. The inequality ${\cal X}\ge 1$ can be established by a numerical plot for given $D$. In general, $({\cal X},{\cal Y},{\cal Z})$ are all monotonically increasing functions of $\lambda$. Near the Schwarzschild limit $\lambda\rightarrow 0$, we have
\bea
\{\cal X,Y,Z\} &=& 1 + \fft{\lambda}{2(D-2)^2}\left\{\frac{1}{D-3},\frac{1}{D-1},\frac{D-3 }{2(D-1)}\right\}
+ {\cal O}(\lambda^2)\,.
\eea
Near the extremal limit $\lambda\rightarrow \infty$, we have
\bea
{\cal X} &=& \frac{(D-2)^{-\frac{D-2}{D-3}} (D-1)^{\frac{D-1}{2 (D-3)}}}{2 \sqrt{D-3}}\Big(2 (D-3)-\frac{1}{\lambda } + {\cal O}(\lambda^{-2})\Big)\,,\nn\\
{\cal Y}&=& \frac{D-2}{2 (D-3)^{3/2} \sqrt{D-1}} \Big(2 (D-3)-\frac{1}{\lambda }+{\cal O}(\lambda^{-2})\Big).\nn\\
{\cal Z}&=& \left(\frac{2(D-2)}{D-1}\right)^{\frac{1}{D-3}}\Big(1 - \fft{1}{\sqrt{(D-3)\lambda}}+
{\cal O}(\lambda^{-1})\Big)\,.
\eea
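The closed-form expressions above can be cross-checked numerically. The sketch below (our own verification; the sample values of $D$ and $\lambda$ are arbitrary) computes $({\cal X},{\cal Y},{\cal Z})$ directly from the metric function and compares them with the $\lambda$-parametrized expressions.

```python
import math

D, mu = 6, 1.0   # sample dimension and mass parameter

def ratios(lam):
    """Direct computation of (X, Y, Z) for the D-dimensional RN black hole."""
    q = mu*math.sqrt(lam*(lam + D - 1))/(lam + D - 2)
    r_plus = (mu + math.sqrt(mu**2 - q**2))**(1/(D-3))
    # photon sphere: u = r_ph^{D-3} solves u^2 - (D-1) mu u + (D-2) q^2 = 0
    u = 0.5*(D-1)*mu + 0.5*math.sqrt((D-1)**2*mu**2 - 4*(D-2)*q**2)
    r_ph = u**(1/(D-3))
    f = 1 - 2*mu/u + q**2/u**2          # metric function at r_ph
    R_sh = r_ph/math.sqrt(f)            # shadow radius R_sh = r_ph/sqrt(f)
    R_M = ((D-1)*mu)**(1/(D-3))
    X = math.sqrt((D-1)/(D-3))*R_M/R_sh
    Y = math.sqrt((D-3)/(D-1))*R_sh/r_ph
    Z = (2/(D-1))**(1/(D-3))*r_ph/r_plus
    return X, Y, Z

def ratios_lambda(lam):
    """The closed-form lambda-expressions quoted in the text."""
    s = math.sqrt((D-3)*lam + (D-2)**2)
    X = ((D-1)**((D-1)/(2*(D-3)))*(lam + D - 2)**(1/(D-3))*s
         / ((D-2)**((D-2)/(D-3))*(lam + D - 1)**((D-1)/(2*(D-3)))))
    Y = math.sqrt(1 + lam/((D-1)*s**2))
    Z = (1 + (D-3)**2*lam/(D-1)
         / (2*(D-2)*s + (D-3)*lam + 2*(D-2)**2))**(1/(D-3))
    return X, Y, Z
```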
\subsection{$D=5$ neutral black hole}
We now consider the effect of the Gauss-Bonnet term on the inequalities. It is instructive first to examine a simpler example, namely the neutral black hole in five dimensions:
\be
f=1 + \fft{r^2}{2\alpha}\left( 1 - \sqrt{1 + \fft{8\alpha\mu}{r^{4}}}\right).
\ee
In this case, there is only one horizon $r_+$, determined by
\be
\mu = \ft12 (r_+^2 + \alpha)\,.
\ee
This implies that we must have $\mu> \ft12\alpha$ for the solution to describe a black hole.
The radii of photon sphere and shadow are
\be
r_{\rm ph}=\Big(8\mu(2\mu-\alpha)\Big)^{\fft14}\,,\qquad
R_{\rm sh}=\sqrt{\frac{\alpha}{\sqrt{2\mu(2 \mu -\alpha) }-2 \mu +\alpha }}\,\sqrt[4]{8\mu(2 \mu -\alpha) }\,.
\ee
Defining a dimensionless parameter $\beta = \alpha/r_+^2$, we then find
\be
{\cal X}= \sqrt{1 + \fft{\sqrt{\beta+1}-1}{\sqrt{\beta+1}+1}}\,,\qquad {\cal Y}= \sqrt{\ft12 (1+\sqrt{\beta+1})}\,,\qquad
{\cal Z}= \sqrt[4]{\beta +1}\,.
\ee
Since the parameter $\beta$ runs from 0 to $+\infty$, it is straightforward to see that the inequalities (\ref{XYZineq}) are all satisfied, with saturation occurring at $\beta=0$, corresponding to the Schwarzschild black hole.
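These expressions can be confirmed directly from the metric, without invoking the closed forms. In the sketch below (our own check; the value of $\beta$ is an arbitrary sample), the shadow radius is computed as $R_{\rm sh}=r_{\rm ph}/\sqrt{f(r_{\rm ph})}$ and the three ratios are compared with the $\beta$-expressions above.

```python
import math

beta, r_plus = 0.7, 1.0            # sample beta = alpha/r_+^2
alpha = beta*r_plus**2
mu = 0.5*(r_plus**2 + alpha)       # horizon condition in D = 5

def f(r):
    return 1 + r**2/(2*alpha)*(1 - math.sqrt(1 + 8*alpha*mu/r**4))

r_ph = (8*mu*(2*mu - alpha))**0.25     # closed-form photon-sphere radius
R_sh = r_ph/math.sqrt(f(r_ph))         # shadow radius R_sh = r_ph/sqrt(f)
R_M = (4*mu)**0.5                      # R_M = ((D-1) mu)^{1/(D-3)} for D = 5

X = math.sqrt(2.0)*R_M/R_sh
Y = math.sqrt(0.5)*R_sh/r_ph
Z = math.sqrt(0.5)*r_ph/r_plus
```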
Since black hole entropies in higher-order gravities are no longer simply a quarter of the horizon area, the relation between $R_+$ and $M$ in (\ref{dconjecture}) is no longer equivalent to the Penrose entropy conjecture. The black hole entropy can be obtained from the Wald entropy formula, yielding
\be
S=\ft{1}{4}\Omega_3 \left(r_+^3 + 6 \alpha r_+\right)\,.
\ee
The mass and entropy relation now becomes
\be
\fft{256 \pi^3 M^3}{27\Omega_3\, S^2} = \fft{(1 + \beta)^3}{(1 + 6\beta)^2}\,.
\ee
Thus for small but non-vanishing $\beta$, the Penrose conjecture is violated, but it is restored for sufficiently large $\beta$. We may define the effective radius associated with the entropy by $S=\fft14 \Omega_3(\bar R_+^S)^3$, and we have
\be
\bar R^S_+ = \left(r_+^3 + 6\alpha r_+\right)^{1/3}\,.
\ee
We then have
\be
\fft{R_{\rm ph}}{\sqrt2\, \bar R_{+}^S} = \frac{\sqrt[4]{\beta +1}}{\sqrt[3]{6 \beta +1}}\,.
\ee
Intriguingly, this ratio is a monotonically decreasing function of $\beta$. In other words, $R_{\rm ph}$ can be smaller than $\bar R_{+}^S$, a clear indication that $R_+$ is a better size parameter than $\bar R_+^S$.
\subsection{General black holes}
The reason we could easily prove the inequalities in the previous subsections is that we could analytically solve the photon sphere equation (\ref{pheq}) for the photon sphere radius in the RN black holes or the $D=5$ neutral black hole. This turns out not to be possible for the general black holes. We shall adopt the technique developed in \cite{Lu:2019zxb} to prove the inequalities. We first prove ${\cal Z}\ge 1$ and then use this inequality to prove ${\cal X}\ge 1$ and ${\cal Y}\ge 1$. We define a function $W(r)$ as
\be
W(r)=\sqrt{1 + \fft{8\alpha\mu}{r^{D-1}} - \fft{4\alpha q^2}{r^{2(D-2)}}}\, \Big(\fft{f}{r^2}\Big)'\,.
\ee
The photon sphere is located at $r_{\rm ph}$, which is the largest root of $W$. It can be easily seen that as $r\rightarrow \infty$, $W$ is negative with
\be
W= - \fft{2}{r^3} + \fft{2(D-1)\mu}{r^D} + \cdots
\ee
Since $r_{\rm ph}$ is the largest root, it follows that for any $r$ with $W(r)>0$, we must have $r<r_{\rm ph}$. We thus define
\be
\rho=(\ft12 (D-1))^{\fft{1}{D-3}}\, r_+\,,
\ee
and we find
\bea
W(\rho) &=& U - \sqrt{V}\,,\nn\\
U&=&\frac{2}{\rho ^3} + \frac{2^{\frac{D-5}{D-3}} (D-1)^{\frac{2}{D-3}}\alpha}{\rho^5} +\frac{(D-3)^2 q^2}{2\rho^{2D-3}}\,,\nn\\
V&=&\frac{4}{\rho ^6}+\frac{32 \alpha }{(D-1) \rho ^8}+ \frac{ 2^{\frac{5 D-17}{D-3}}\alpha ^2}{(D-1)^{\frac{D-5}{D-3}}\rho ^{10}} + \fft{8(D-3) \alpha q^2}{ \rho ^{2 (D+1)}}\,.
\eea
Note that in the above, we have expressed $\mu$ in terms of $r_+$ and hence of $\rho$. It is clear that both $(U,V)$ are positive and, furthermore, it is quite straightforward to prove that $U^2 -V\ge 0$ for $\rho>0$. This demonstrates that $W(\rho)>0$. It follows that $r_{\rm ph} \ge \rho$, proving that ${\cal Z}\ge 1$.
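The chain of inequalities can also be confirmed numerically for sample parameters, independently of the bounding argument. The sketch below (our own check; the parameter choices are arbitrary) locates the photon sphere by maximizing $f/r^2$ and evaluates $({\cal X},{\cal Y},{\cal Z})$ from their definitions.

```python
import math

def checks(D, alpha, q, r_plus):
    """Return (X, Y, Z) for an EGBM black hole with outer horizon r_plus."""
    mu = 0.5*(r_plus**(D-3) + alpha*r_plus**(D-5) + q**2*r_plus**(3-D))

    def f(r):
        X = 1 + 8*alpha*mu/r**(D-1) - 4*alpha*q**2/r**(2*(D-2))
        return 1 + r**2/(2*alpha)*(1 - math.sqrt(X))

    g = lambda r: f(r)/r**2
    # coarse scan for the maximum of f/r^2, then ternary-search refinement
    rs = [r_plus*(1 + 19*i/2000) for i in range(1, 2001)]
    lo = max(rs, key=g)
    a, b = lo - 0.02*r_plus, lo + 0.02*r_plus
    for _ in range(200):
        m1, m2 = a + (b - a)/3, b - (b - a)/3
        if g(m1) < g(m2):
            a = m1
        else:
            b = m2
    r_ph = 0.5*(a + b)
    R_sh = r_ph/math.sqrt(f(r_ph))
    R_M = ((D-1)*mu)**(1/(D-3))
    return (math.sqrt((D-1)/(D-3))*R_M/R_sh,
            math.sqrt((D-3)/(D-1))*R_sh/r_ph,
            (2/(D-1))**(1/(D-3))*r_ph/r_plus)
```

The scan assumes a single unstable photon sphere in the chosen parameter range, which holds for the mild couplings and charges sampled below.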
In order to demonstrate that ${\cal X}\ge1$ and ${\cal Y}\ge 1$, we find it useful to express $\mu$ in terms of the photon sphere radius $r_{\rm ph}$ by solving (\ref{pheq}). We have
\bea
\mu &=& \fft{4 \alpha r_{\rm ph}^{D-5}}{(D-1)^2}+ \fft{(D-2) q^2}{(D-1) r_{\rm ph}^{D-3}}\nn\\
&& + \sqrt{\fft{r_{\rm ph}^{2(D-3)}}{(D-1)^2} + \fft{16 \alpha^2 r_{\rm ph}^{2(D-5)}}{(D-1)^4} + \fft{4(D-3)\alpha q^2}{(D-1)^3 r_{\rm ph}^2} }\,.
\eea
The shadow radius is now given by
\be
R_{\rm sh}=\sqrt{\fft{2\alpha r_{\rm ph}^{2(D+1)}}{(D-2) q^2 r_{\rm ph}^8-(D-1) \mu r_{\rm ph}^{D+5}+r_{\rm ph}^{2 D} \left(2 \alpha +r_{\rm ph}^2\right)}}\,,
\ee
implying that
\bea
{\cal X}^2 &=& \frac{(D-1)^{\frac{D-1}{D-3}} \mu ^{\frac{2}{D-3}} \left((D-2) q^2 r_{\rm ph}^8-(D-1) \mu r_{\rm ph}^{D+5}+r_{\rm ph}^{2 D} \left(2 \alpha +r_{\rm ph}^2\right)\right)}{2 \alpha (D-3) r_{\rm ph}^{2 (D+1)}}\,,\nn\\
{\cal Y}^2 &=& \frac{2 \alpha (D-3) r_{\rm ph}^{2 D}}{(D-1) \left((D-2) q^2 r_{\rm ph}^8-(D-1) \mu r_{\rm ph}^{D+5}+r_{\rm ph}^{2 D} \left(2 \alpha +r_{\rm ph}^2\right)\right)}\,.\label{XYsq}
\eea
The trick now is to make use of ${\cal Z}\ge 1$, which implies
\be
r_{\rm ph} \ge (\ft12 (D-1))^{\fft{1}{D-3}}\, r_+ \ge (\ft12 (D-1))^{\fft{1}{D-3}}\, r_{\rm ex}\,.
\ee
The second inequality holds because $r_{\rm ex}$ is the (smallest) horizon radius of the extremal black hole for given charge $q$, determined by (\ref{extremal}).
We now introduce two dimensionless parameters $\beta\ge 0$ and $\gamma> 1$, defined by
\be
\alpha = \beta r_{\rm ex}^2\,,\qquad
r_{\rm ph} = (\ft12 (D-1))^{\fft1{D-3}} \gamma r_{\rm ex}\,.
\ee
Note that the actual lower bound for $\gamma$ is bigger than 1, but for our purpose it is sufficient to assume $\gamma>1$. Substituting $\alpha$ and $r_{\rm ph}$ into (\ref{XYsq}), we find that both $\cal X$ and $\cal Y$ are functions of the dimensionless quantities $(\beta,\gamma)$ only, with the dimensionful parameter $r_{\rm ex}$ dropping out. In $D=5$, the expressions are quite simple and they are manifestly no smaller than 1:
\bea
{\cal X}^2 &=& 1 + \frac{\left(4 \gamma ^4-3\right) \left(\sqrt{\beta ^2 \gamma ^2+\beta +4 \gamma ^6}-2 \gamma ^3\right)+\beta \gamma }{4 \beta \gamma ^5}\ge 1\,,\nn\\
{\cal Y}^2 &=& 1+ \frac{1+\gamma \left(\sqrt{\beta ^2 \gamma ^2+\beta +4 \gamma ^6}+\beta \gamma -2 \gamma ^3\right)}{4 \gamma ^4-1}\ge 1\,.
\eea
For general dimensions, the expressions are much more complicated; we find
\bea
{\cal X} &=& 2^{\frac{2 (D-2)}{(D-3)^2}} (D-1)^{\frac{1-D}{(D-3)^2}} \Big(\fft{\beta}{\gamma^2}\Big) ^{\frac{1}{D-3}}\, \Big(C_1 -\fft{2\sqrt{C_2}}{D-3}\Big)^{\fft12} \Big(C_3 + \sqrt{C_2}\Big)^{\fft1{D-3}}, \nn\\
{\cal Y} &=& \Big(C_1 -\fft{2\sqrt{C_2}}{D-3}\Big)^{-\fft12},
\eea
where
\bea
C_1 &=&1+ \frac{\big(\fft12(D-1)\big)^{\frac{D-1}{D-3}} \gamma ^2 }{(D-3)\beta},\nn\\
C_2&=&1 +\frac{(D-1)^{\frac{5-D}{D-3}} (D-3+(D-5)\beta)}{2^{\frac{2}{D-3}}\beta \gamma ^{2 (D-4)}}
+ \frac{(D-1)^{\frac{2 (D-1)}{D-3}}\gamma ^4}{2^{\frac{4 (D-2)}{D-3}}\beta ^2}\,,\nn\\
C_3 &=& 1+\frac{(D-2) (D-1)^{\frac{5-D}{D-3}} (D-3+(D-5)\beta)}{ (D-3) 2^{\frac{2}{D-3}}\beta \gamma ^{2 (D-4)}}\,.
\eea
A contour plot over the variables $(\beta,\gamma)$ establishes the inequalities (\ref{XYZineq}).
In particular, when $\beta =0$, corresponding to the RN black hole in general dimensions, we have
\bea
{\cal X} &=& \left(1+\frac{4 (D-2)}{(D-1)^2 \gamma ^{2(D-3)}}\right)^{\frac{1}{D-3}} \sqrt{1-\frac{4}{(D-1)^2 \gamma ^{2 (D-3)}}}\,,\nn\\
{\cal Y} &=& \sqrt{1 + \fft{1}{\fft14 (D-1)^2 \gamma^{2(D-3)} -1}}\,.
\eea
In the limit $\beta\rightarrow \infty$, $\cal X$ is positive and diverges as $\beta^{\fft1{D-3}}$ for $D\ge 6$, and
\be
{\cal Y}= \left(1-\frac{2}{D-3} \sqrt{1+4^{\frac{1}{3-D}} (D-5) (D-1)^{-\frac{D-5}{D-3}} \gamma ^{2 (4-D)}}
\right)^{-\fft12}\,.
\ee
The Schwarzschild black hole limit is achieved by taking $\gamma\rightarrow \infty$. For large $\gamma$, we have
\be
{\cal X}\sim {\cal Y} = 1+ \frac{2^{\frac{D-1}{D-3}} (D-1)^{\frac{1-D}{D-3}}\beta}{(D-3)\gamma ^2}+\cdots\,.
\ee
\section{Conclusions}
In this paper, we considered charged static black holes in EGBM gravities in general dimensions. These black holes are spherically symmetric and asymptotic to Minkowski spacetimes. From the viewpoint of Einstein gravity, these black holes satisfy the weak energy condition, provided that the Gauss-Bonnet coupling is nonnegative, which also ensures that the perturbations are free of ghost excitations. There exists an unstable photon sphere outside the horizon, giving rise to the edge of a shadow disk for an observer at infinity. We found that the radii of the horizon, photon sphere, and shadow disk satisfy the sequence of inequalities (\ref{dconjecture}) conjectured for black holes in Einstein gravity. The robustness of this sequence calls for a better understanding of the underlying conditions.
\section*{Acknowledgement}
This work is supported in part by NSFC (National Natural Science Foundation of China) Grant No.~11875200 and No.~11935009.
\section{Motivation}
The microscopic nature of dark matter remains one of the most pressing questions in particle physics. Indirect detection---the search for visible signatures of dark matter decay or annihilation at terrestrial or space-based experiments---is one of the leading programs to unravel this mystery. In this paper, we study the possibly dominant role of loop diagrams in dark matter annihilation/decay processes and the subsequent indirect detection signatures in frameworks where dark matter is part of a secluded or hidden sector that couples weakly to the Standard Model (SM) sector via a neutrino portal \cite{Lindner:2010rr,Cherry:2014xra,Roland:2014vba,Shakya:2015xnx,Gonzalez-Macias:2016vxy,Escudero:2016tzx,Escudero:2016ksa,Schmaltz:2017oov,Batell:2017cmf,Batell:2017rol,Shakya:2018qzg,Blennow:2019fhy}. In the context of dark matter annihilation or decay, loop diagrams generally become important when tree level processes are forbidden for some reason, \textit{e.g.} for line signals in gamma rays, but are otherwise only expected to produce subleading corrections. However, the dominance of loop processes when tree level channels in the same final states are open is more subtle and interesting, and important for phenomenology.
From theoretical considerations, a neutrino portal to a dark sector is known to be one of only a few ways to connect visible and hidden sectors via renormalizable interactions, and the existence of dark matter in such setups has been extensively studied in several earlier works \cite{Lindner:2010rr,Cherry:2014xra,Roland:2014vba,Shakya:2015xnx,Gonzalez-Macias:2016vxy,Escudero:2016tzx,Escudero:2016ksa,Schmaltz:2017oov,Batell:2017cmf,Batell:2017rol,Shakya:2018qzg,Blennow:2019fhy}. Such frameworks are particularly motivated in light of models of neutrino mass generation, which requires physics beyond the Standard Model and often introduces new SM-singlet states (sterile neutrinos) that can act as portals to hidden sectors. Neutrino-rich indirect detection signatures in such models have been extensively studied in the literature \cite{Garcia-Cely:2017oco,ElAisati:2017ppn,Campo:2017nwh,Chianese:2018ijk,Blennow:2019fhy,Dekker:2019gpe,Heeck:2019guh}. From the point of view of phenomenology, indirect detection of dark matter in neutrino-rich final states has recently garnered tremendous interest in the community, driven by sensitive instruments such as IceCube \cite{Aartsen:2019swn}, Super-Kamiokande \cite{Frankiewicz:2015zma}, and ANTARES \cite{Tonnis:2019hyr}, and has also been fuelled by anomalous high energy neutrino events at IceCube \cite{Aartsen:2013jdh,Aartsen:2014gkd}, which can be interpreted as hints of decaying dark matter (see e.g. \cite{Cohen:2016uyg,Roland:2015yoa} and references therein).
Motivated by such considerations, the main purpose of this paper is to demonstrate that loop processes, often ignored, can dominate dark matter decay or annihilation in realistic scenarios of neutrino portal dark matter. For concreteness, we describe this effect in a specific model of decaying hidden sector $Z'$ dark matter (Section \ref{sec:framework}), where the loop process is manifestly finite and can be calculated explicitly (Section \ref{sec:calculation}). However, we point out that the dominance of loop processes over tree level processes is more general and can occur in several other frameworks (Section \ref{sec:others}). We also study the implications of this effect on the spectra of SM particles (neutrinos, positrons, and gamma rays) from dark matter, relevant for indirect detection (Section \ref{sec:pheno}).
\section{Framework}
\label{sec:framework}
We base our discussions on a model of hidden sector $Z'$ dark matter, which illustrates the main ideas of this paper in the most straightforward manner. The model consists of sterile neutrinos in an extended, secluded sector, similar in spirit to the frameworks studied in \cite{Shakya:2018qzg,Roland:2014vba,Roland:2015yoa,Roland:2016gli,Chacko:2016hvu}. We consider a dark gauged $\text{U}(1)'$ sector, with gauge coupling $g'$ and a corresponding gauge boson $Z'$, and three new categories of fields: a fermion $\nu'$ and a singlet scalar $S$ with $\text{U}(1)'$ charges $+1,-1$ respectively, as well as completely singlet sterile neutrinos $N_i$, which carry no $\text{U}(1)'$ or SM charges. Note that $\nu'$ and $S$ can be thought of as hidden-sector analogs of the SM neutrinos and Higgs, which can be combined into a gauge singlet and therefore couple to $N_i$ via a renormalizable Dirac mass term. The Lagrangian for this model is
\begin{multline}\label{eq:modellag}
\mathcal{L}=|D_\mu S|^2 - V(H,S) - \frac{1}{4}F'_{\mu\nu}F'^{\mu\nu} + \nu'^\dag i \bar{\sigma}^\mu D_\mu \nu' + N_i^\dag i \bar{\sigma}^\mu \partial_\mu N_i\\ - \frac{1}{2} (M_A N_A^\dag N_A^\dag + \theta_N M_{AB} N_A^\dag N_B^\dag+M_B N_B^\dag N_B^\dag +\text{c.c.}) \\
- (y' S\nu'^\dag N_A^\dag + \text{c.c.}) - (y_\nu \tilde{H}L^\dag N_B^\dag + \text{c.c.})\,,
\end{multline}
where $D_\mu = \partial_\mu + i g' Z'_\mu$. We assume that $H$ and $S$ acquire vacuum expectation values (vevs) $v$ and $x$ respectively, spontaneously breaking the electroweak and $\text{U(1)}'$ symmetries.
We consider two sets of heavy singlet sterile neutrinos\,\footnote{The exact number of heavy sterile neutrinos is irrelevant for our discussions, hence we leave it unspecified.}, $N_A$ and $N_B$, that couple dominantly to the hidden and visible sector respectively. We will consider the singlet neutrino mass scale $M_N\approx M_A \approx M_B \approx M_{AB}$ to be heavier than all other scales in the theory. The sterile neutrinos $N_B$ give rise to neutrino masses $m_\nu\approx y_\nu^2 v^2/M_N$ for the SM neutrinos via the well known type-I seesaw mechanism \cite{Minkowski:1977sc}. Furthermore, this heavy sterile neutrino sector acts as the portal between the visible and secluded sectors via the mass cross-term $\theta_N M_{AB}$, where $\theta_N$ has been introduced to control the size of the mixing between the two sectors.
Spontaneous $\text{U(1)}'$ breaking gives the $Z'$ a mass $m_{Z'} = g'x/2$, and the SM-singlet neutrinos $\nu', N_A$, $N_B$ mix to form mass eigenstates $N_1,N_2, N_3$ with masses $M_1,M_2, M_3$ respectively. For $M_N\gg y'x$, we also have a seesaw effect in the hidden sector, resulting in $M_1\approx y'^2 x^2/M_N$ and $M_2, M_3\approx M_N$. Upon electroweak symmetry breaking (we will treat it as a perturbative effect), all three of these mass eigenstates inherit small mixings with the SM neutrinos via the Dirac mass terms.
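The hidden-sector seesaw pattern quoted above can be checked numerically by diagonalizing a minimal $2\times2$ Majorana mass matrix for $(\nu', N_A)$. The benchmark values of $y'$, $x$, and $M_N$ below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Illustrative (hypothetical) benchmark: hidden Yukawa y', U(1)' vev x,
# and heavy singlet mass M_N, with M_N >> y'*x as assumed in the text.
yp, x, MN = 0.1, 1.0e4, 1.0e6   # GeV

# Minimal 2x2 Majorana mass matrix for (nu', N_A), with Dirac entry
# m_D = y'*x and heavy diagonal entry M_N.
mD = yp * x
M = np.array([[0.0, mD],
              [mD,  MN]])

# Physical masses are the absolute values of the eigenvalues.
M1, M2 = np.sort(np.abs(np.linalg.eigvalsh(M)))

# Seesaw expectations: M1 ~ y'^2 x^2 / M_N (light), M2 ~ M_N (heavy).
print(M1, yp**2 * x**2 / MN)
print(M2, MN)
```

With these inputs the light eigenvalue reproduces the seesaw estimate $y'^2 x^2/M_N$ to better than a percent, matching the limit $M_N\gg y'x$ used in the text.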
We are interested in the parameter space where $Z'$ is the lightest hidden sector particle and therefore the dark matter. We thus focus on the hierarchy $m_{Z'} < M_1$, so that the $Z'$ cannot decay into the $N_1$ states at tree level\,\footnote{Scenarios where sterile neutrinos are light enough to be produced directly in dark matter annihilation or decay can also produce interesting dark matter signatures (see e.g. \cite{Escudero:2016tzx,Roland:2015yoa,Capozzi:2017auw,Folgado:2018qlv,Gori:2018lem}), but we do not consider such scenarios in this paper.}. It can decay into SM neutrinos via neutrino mixing between the two sectors; however, the lifetime for this process can be sufficiently long that $Z'$ remains a viable dark matter candidate. The free parameters in this setup are $\theta_N, g', x, y_\nu, y',$ and $M_N$. One can trade the latter four parameters for the three neutrino mass scales, $m_\nu, M_1$, and $M_2$, and the dark matter mass $m_{Z'}$. The remaining free parameters $g'$ and $\theta_N$ can then be used appropriately to set the dark matter relic abundance and lifetime.
We neglect the kinetic mixing term $\frac{\epsilon}{2}F^{\mu\nu}Z'_{\mu\nu}$ between the hypercharge and dark $Z'$ gauge boson. This mixing, even if absent at tree level, is generally generated by loop effects in the presence of heavy particles that couple to both gauge fields; however, in the model above, such mixing is only generated at three loops (involving the secluded fermion $\nu'$, the heavy mediators $N$, and the SM neutrinos $\nu$) and is therefore expected to be negligible. Likewise, we also assume that the renormalizable Higgs portal coupling $S^2h^2$ is negligible. Finally, we assume that additional heavy matter content is present to ensure the $\text{U(1)}'$ dark current remains anomaly free as needed without affecting the decay processes we consider; we will comment further on this later.
\section{Evaluation of Loop Processes}
\label{sec:calculation}
The leading tree and loop diagrams for dark matter decay in this model are shown in Figure\,\ref{fig:diagrams}. The leading tree level decay, represented by the first diagram, is into two neutrinos:
\begin{equation}
\Gamma_{2t}=\frac{m_{Z'}g'^2 \theta_N^4 y_\nu^4}{48\pi}\frac{v^4}{M_1^2 M_2^2}=\frac{m_{Z'}g'^2 \theta_N^4}{48\pi}\frac{m_{\nu}^2}{M_1^2},
\end{equation}
where, in the second step, we have used the seesaw relation $m_\nu=y_\nu^2 v^2/M_N$. Note that this decay width is suppressed by the active-sterile mixing angle, represented by Higgs vev insertions in the diagram: this is particularly clear from the second expression above, where $\theta_N \sqrt{m_\nu/M_1}$ is the effective mixing angle between $\nu$ and $N_1\approx \nu'$, which is the fermion state that couples directly to $Z'$ with gauge coupling strength $g'$.
\begin{figure}[t]
\includegraphics[width=0.25\columnwidth]{decay-tree-labeled}\enspace
~~\includegraphics[width=0.25\columnwidth]{decay-fourbody-labeled}\enspace
~~~\includegraphics[width=0.25\columnwidth]{decay-loop-labeled}
\caption{\label{fig:diagrams} Dark matter decay modes: tree-level two body, tree-level four body, and one loop. This list is not exhaustive; we only show a representative set of decay modes (see text).}
\end{figure}
If the dark matter is sufficiently heavy, additional three and four body decay channels that involve SM Higgs and gauge bosons become kinematically accessible. These diagrams can be understood as replacing the Higgs vevs that give rise to the SM-singlet neutrino mixing with the emission of physical states, as shown schematically in Figure\,\ref{fig:diagrams}; due to the SU(2) nature of the SM neutrinos, there are additional diagrams that involve charged leptons and gauge bosons. The decay widths into these multi-body final states can be found, e.g., in \cite{Cohen:2016uyg}. Replacing a Higgs vev on one of the neutrino legs with the emission of a physical particle gives rise to three body decay channels $\nu\bar{\nu} h,\,\nu\bar{\nu} Z,\,\nu \bar{l} W$ with decay widths \cite{Cohen:2016uyg}
\begin{equation}
\Gamma_{3t}\approx\frac{m_{Z'}^2}{768\pi^2 v^2}\Gamma_{2t}.
\label{eq:threebody}
\end{equation}
Here $l$ refers to a charged lepton, whose flavor depends on the flavor of the SM neutrino that couples to the portal states. Likewise, the four body decay channels, obtained by replacing vevs on both neutrino legs with physical particle emissions, include the final states $\nu\bar{\nu} h h,\,\nu\bar{\nu} Z Z, \nu\bar{\nu}Z h,\,\nu \bar{l} h W,\,\nu \bar{l} Z W, l\bar{l}WW$; the decay widths for these processes scale (up to some $\mathcal{O}(1)$ factors) as \cite{Cohen:2016uyg}
\begin{equation}
\Gamma_{4t}\approx\frac{m_{Z'}^2}{24\pi^2 v^2}\Gamma_{3t}.
\label{eq:fourbody}
\end{equation}
Note that the expression of the four and three body decay widths in terms of the three and two body widths is very intuitive: one incurs additional phase space suppression due to the emission of an additional particle, but gains a factor of $m_{Z'}^2/v^2$ because the Higgs vev insertion gets replaced by the energy scale of the process, which is the dark matter mass. Thus, for sufficiently heavy dark matter $m_{Z'}\gg v$, the four body process can dominate: since electroweak symmetry breaking is effectively a small perturbation in this limit, the emission of a physical Higgs boson, which can proceed in the limit of unbroken electroweak symmetry, is preferred.
We now turn to the evaluation of the loop processes shown in Fig.\,\ref{fig:diagrams}, where the Higgs vev insertions are replaced by a Higgs propagator, giving rise to a one loop contribution to $Z'\to\nu\bar{\nu}$. Loop diagrams of this form are generally divergent; however, this diagram is manifestly finite in our framework by construction due to the absence of tree level couplings of SM neutrinos or gauge bosons to the $Z'$. This contribution can therefore be unambiguously evaluated. We calculate the full two-body decay width for $Z'\to \nu\bar{\nu}$, including the loop correction, under the approximation $m_h,\, m_{Z'} \ll M_1 \ll M_N$, to be
\begin{equation}
\Gamma_{2} = \frac{m_{Z'}g'^2 \theta_N^4 y_\nu^4}{48\pi}\Big|\frac{v^2}{M_1 M_2} + \frac{1}{32\pi^2}\frac{M_1}{M_2}\big(\ln\frac{M_2^2}{M_1^2} + 1\big)\Big|^2\,.
\label{fig:loopfull}
\end{equation}
We find that the naive log-divergence of the loop process is rendered finite upon summing over the various sterile neutrino propagator combinations in the loop, leaving behind the finite logarithm $\ln(M_2^2/M_1^2)$. The factor of $M_1/M_2$ in front of the logarithm represents the mixing angle between the $\nu'$ and $N_A$ states. In the limit where the tree level contribution can be neglected, the width $Z'\to\nu\bar{\nu}$ due to the loop process is
\begin{equation}
\Gamma_{2} = \frac{m_{Z'}g'^2 \theta_N^4 }{48\pi (32\pi^2)^2} \frac{m_\nu^2 M_1^2}{v^4}\big(\ln\frac{M_2^2}{M_1^2} + 1\big)^2.
\label{eq:loop}
\end{equation}
Again, due to the SU(2) nature of the SM neutrinos, there are analogous loop processes for decays into other SM final states that scale in the same manner. Replacing the neutral SU(2) states with charged SU(2) states results in a W-loop induced decay into charged leptons $Z'\to l^+l^-$ with the same amplitude as $Z'\to \nu\bar{\nu}$ above (note that the analogous tree level decay process into charged leptons does not exist, since the charged components of the Higgs field do not obtain vevs). Likewise, ``flipping'' the external legs and closing the loop with fermions instead of bosons gives rise to $Z'\to Zh, W^+W^-$ at one loop. In the limit $m_h, m_Z \ll m_{Z'} \ll M_1 \ll M_N$, we evaluate these widths to be
\begin{gather}
\begin{aligned}
\label{eq:loop2}
\Gamma_{Z'\to Zh} &= \frac{m_{Z'}g'^2 \theta_N^4 }{64\pi (32\pi^2)^2} \frac{m_\nu^2 M_1^2}{v^4}\big(1-\ln\frac{M_2^2}{M_1^2} \big)^2\,,\\
\Gamma_{Z'\to WW}& = 2\,\Gamma_{Z'\to Zh}\,.
\end{aligned}
\end{gather}
These are parametrically the same as the loop-induced widths to fermions above, up to $\mathcal{O}(1)$ factors. Due to Bose symmetry, $\Gamma_{Z'\to hh}$ vanishes, while $\Gamma_{Z'\to ZZ}$ is suppressed \cite{Keung:2008ve} relative to the above widths by a factor $\sim 12\, m_Z^2/m_{Z'}^2$ according to our calculations, which renders it negligible for dark matter at the TeV scale or higher.
It is now illustrative to compare the loop dominated decay width in Eq.\,(\ref{eq:loop}) to the leading two body and four body tree level decay widths:
\begin{equation}
{\Gamma_{2t}}:{\Gamma_{4t}}:{\Gamma_{2}}\approx v^4:\frac{m_{Z'}^4}{18 (32\pi^2)^2}:\frac{M_1^4}{(32\pi^2)^2}\, \big(\ln\frac{M_2^2}{M_1^2} + 1\big)^2\,.
\end{equation}
From this comparison, we see that the loop processes can dominate if $M_1 \gtrsim m_{Z'}$ and $M_1 \gtrsim 10\, v$ (recall that $M_1>m_{Z'}$ is an underlying assumption of our model for $Z'$ to be the lightest particle in the hidden sector). The origin of this dominance is also clear from the above discussions. The two body decays require mixing between active and sterile neutrinos on both neutrino legs, and are therefore suppressed by $v^4$ from the associated Higgs vev insertions. The four-body decays do not require electroweak symmetry to be broken and therefore avoid this suppression, depending instead on the relevant energy scale of the process, $m_{Z'}$. The loop diagram (which can also proceed without electroweak symmetry breaking) avoids even this (milder) suppression, as the relevant energy scale is instead the sterile neutrino mass $M_1$.
Finally, we also note the existence of two-loop diagrams (obtained by closing the singlet Higgs $S$ loop on the ``hidden sector" side, together with the SM Higgs loop) that can contribute to dark matter decay in the above framework. Relative to the one loop diagram, this process incurs additional loop suppression but could evade the $M_1^2/M_2^2$ suppression in Eq.\,(\ref{fig:loopfull}) (recall that this represents a mixing angle between $\nu'$ and $N_A$), which can be significant in the regime $M_1\ll M_2$. For simplicity, in this paper we will restrict ourselves to the regime where the ratio $M_1/M_2$ is sufficiently large that the two-loop contribution is subdominant and can be ignored.
\section{Other Scenarios}
\label{sec:others}
In this section, we present a broader discussion of the importance of the loop process in other scenarios, shedding further light on the conditions necessary for the loop process to dominate dark matter phenomenology.
A particularly well motivated dark matter candidate that couples preferentially to neutrinos is the Majoron $J$ \cite{Gelmini:1980re,Boulebnane:2017fxw,Pilaftsis:1993af,Rothstein:1992rh,Berezinsky:1993fm,Frigerio:2011in,Lattanzi:2007ux,Bazzocchi:2008fh,Lattanzi:2013uza,Queiroz:2014yna}, the Goldstone boson associated with spontaneously broken lepton number. While one loop contributions to the decay $J\to\nu\nu$ exist, they are always subdominant to the tree level processes, in contrast to the above discussions. This discrepancy can be understood by following the flow of lepton number and hypercharge: the Majoron carries lepton number $+2$ but no hypercharge; on the other hand, the final state $\nu\nu$, enforced by lepton number conservation, carries two units of hypercharge. This must therefore be balanced by two Higgs vev insertions in the decay process. This is already present a priori in the two body decay process in the form of active-sterile mixing, but the loop diagram requires explicit vev insertions, which suppresses it and keeps it subdominant to the two-body tree level diagram. Therefore, loop processes of the kind discussed above only provide subleading corrections for Majoron dark matter.
The decay of scalar dark matter into neutrinos through the heavy neutrino portal, meanwhile, is helicity suppressed and must necessarily pick up factors of neutrino masses $m_\nu$ (or equivalently, Higgs vevs), hence the loop process cannot overcome the $\sim v^4$ suppression factor present in the tree-level diagram. One can consider other dark matter scenarios that avoid this helicity suppression, e.g. annihilation of a dark matter fermion $\chi$ into neutrinos mediated by dimension-6 current$\times$current operators $\chi^\dag\bar\sigma^\mu\chi N^\dag\bar\sigma_\mu N$. The amplitude for the loop mediated annihilation into neutrinos in this case, however, is UV divergent, and an unambiguous prediction of its size independent of the details of the UV physics is not possible. Several other models, including a more naive implementation of a vector dark matter model, also suffer from this UV sensitivity.
Nevertheless, there exist other neutrino portal dark matter scenarios where the loop contribution is finite as well as dominant. If dark matter is a hidden sector fermion that annihilates via a heavy $Z'$ mediator into neutrinos, the above discussions are directly applicable, and dark matter annihilations can be dominated by loop processes. Likewise, loop dominance can also feature in supersymmetric theories: if a hidden sector gaugino is the lightest supersymmetric particle (LSP) and dark matter that primarily annihilates into SM neutrinos via exchange of a heavy sterile sneutrino in the $t$-channel, the loop-induced annihilation process obtained by extending this tree level diagram with a Higgs loop, analogous to our discussions in the previous sections, would also be finite as well as dominant.
In such extended frameworks, it should be kept in mind that there might be additional loop processes contributing to dark matter annihilation or decay, involving additional particles required, for instance, for anomaly cancellation. For instance, if the underlying theory of the $Z'$ dark matter model discussed in Section \ref{sec:framework} is supersymmetric, one gets Higgsino-sneutrino loops that are supersymmetric counterparts to the loop diagrams that were considered. Such loop contributions are parametrically of the same form as those calculated above and can cause $\mathcal{O}(1)$ modifications of the dark matter annihilation or decay rate.
Finally, it is worth pointing out that this loop dominated behavior is not confined to neutrino portal models but can be realized more broadly in any framework where leading tree level channels incur some form of suppression (analogous to the active-sterile mixing angle suppression in neutrino portal models) that can be lifted by considering loop processes.
\section{Dark Matter Phenomenology}
\label{sec:pheno}
We now turn to a discussion of the implications of loop dominance for dark matter phenomenology. Effects on the dark matter production mechanism and lifetime, while likely significant, are model-specific and therefore of limited general applicability, hence we only discuss these briefly within our $Z'$ dark matter model. The effects on the annihilation or decay signatures that would be observed at indirect detection instruments, on the other hand, are model-independent and robust (in the sense that they hold more broadly for a greater class of neutrino portal models where the loop process dominates); thus we study these aspects in greater detail.
\subsection{Dark Matter Parameter Space}
In the $Z'$ dark matter model from Section \ref{sec:framework}, if the mixing between the two sectors $\theta_N$ is small, the secluded sector does not thermalize with the SM thermal bath and is instead populated by freeze-in processes. The annihilation process $\nu h\to \nu' S$, mediated by $N_{A,B}$ in the $s$-channel, produces small amounts of the secluded sector particles $\nu', S$. While the $\nu'$ tends to decay primarily into the visible sector via $\nu'\to \nu Z$, the scalar $S$ decays primarily as $S\to Z'Z'$ if $g'$ is larger than $y'x/M_2$ (which controls the other available decay channel $S\to\nu'\nu'$), producing a small dark matter abundance. Since $\nu h\to \nu' S$ proceeds through a dimension-5 operator, the resulting dark matter abundance depends on the reheat temperature $T_{RH}$, the highest temperature attained by the early radiation-dominated Universe; this abundance can be estimated as \cite{Hall:2009bx,Elahi:2014fsa}
\begin{equation}
Y_{Z'}\sim10^{-6}y_\nu^2 y'^2\theta_N^2\frac{M_{Pl} T_{RH}}{M_2^2}.
\end{equation}
Substituting the neutrino masses $m_\nu, M_1$ for the Yukawa couplings and plugging in the known values of $m_\nu, v,$ and $M_{Pl}$ (the Planck mass), we obtain the following relation in order to achieve the correct dark matter relic density:
\begin{equation}
\left(\frac{m_{Z'}}{\text{TeV}}\right)\left(\frac{\text{TeV}}{M_{1}}\right)\left(\frac{10\,\text{TeV}}{T_{RH}}\right)\sim\left(\frac{g'\,\theta_N}{10^{-5}}\right)^2\,.
\end{equation}
The dark matter lifetime, on the other hand, is controlled by the leading (loop level) decay widths, which scale as $\Gamma\sim g'^2\theta_N^4$. With appropriate choices of $\theta_N, g',$ and $T_{RH}$, we can therefore achieve both the correct dark matter relic density as well as lifetimes that are interesting for indirect detection signals. As illustrative numbers, $g'\sim10^{-3},\theta_N\sim10^{-7}, m_{Z'}\sim$ TeV, $M_1\sim 100$ TeV, and $T_{RH}\sim10^7$ TeV lead to a consistent cosmology with the correct dark matter relic abundance, loop-dominated decays, and dark matter lifetime $\sim10^{28}$s.
\subsection{Indirect Detection Signatures}
We now discuss indirect detection signatures for a benchmark decaying dark matter mass of $8$ TeV, for which the two-, three-, and four-body tree level decay widths are all comparable, thereby producing the most general spectrum. As discussed earlier, due to the SU(2) nature of neutrinos, the final states also contain charged leptons as well as the SM gauge and Higgs bosons. In Figure \ref{fig:spectra}, we compare the spectra of neutrinos, positrons, and gamma rays from dark matter, assuming tree (red) or loop (black) processes are dominant, as evaluated with Pythia \cite{Sjostrand:2006za,Sjostrand:2007gs}, for dominant coupling to individual lepton flavors. We note that these are prompt spectra and do not include propagation effects (for positrons), or secondary contributions such as inverse Compton scattering or internal bremsstrahlung from within the loops (for gamma rays). All spectra correspond to the same number of events, enabling comparisons within and across the panels, but the overall normalization is arbitrary. Neutrino and positron line signals at $E=m_{DM}/2$ have been shrunk by a factor of $10^4$ to fit within the panels.
\begin{figure}[h!]
\includegraphics[width=0.85\columnwidth]{neutrinoplot}\\
\vskip0.1cm
\includegraphics[width=0.85\columnwidth]{positronplot}\\
\vskip0.1cm
\includegraphics[width=0.85\columnwidth]{gammaplot}
\caption{\label{fig:spectra} Top to bottom: Spectra of neutrinos, positrons, and gamma rays produced from $m_{DM}=8$ TeV dark matter decay, in scenarios where the loop (black) or tree level (red) processes dominate, for dominant coupling to individual lepton flavors. All curves correspond to the same number of dark matter decay events, with an overall arbitrary normalization. These are prompt spectra at production as computed by Pythia \cite{Sjostrand:2006za,Sjostrand:2007gs}, and do not include propagation effects or secondary contributions such as inverse Compton scattering. Neutrino and positron line signals at $E=m_{DM}/2$ have been shrunk by a factor of $10^4$ to fit within the panels.}
\end{figure}
A distinguishing feature of the loop dominated scenario is the presence of a neutrino line at $m_\text{DM}$ ($m_\text{DM}/2$) for annihilating (decaying) dark matter, which persists for arbitrarily high dark matter masses. In the plot, we also see a neutrino line signal (red) from tree level decays, as the two body decay branching fraction is still significant for this particular benchmark; however, this line would disappear for higher dark matter masses as four body decays grow to dominate. On the other hand, for tree level decays, we see hard neutrinos at approximately half to two-thirds of the energy of the neutrino line from neutrinos in three and four body decays, which are absent in the loop dominated scenario. If the coupling is dominantly to the electron-type neutrino, one also gets an analogous line in the positron spectrum in the loop dominated case; however, note that propagation effects will smear this line, making it challenging to distinguish it from the hard positron peak present in the tree level decay spectrum.
In general, the loop processes tend to produce harder spectra of positrons and gamma rays, as seen in the plots, since all decays are into two particle final states. For both tree and loop dominated scenarios, the positron spectrum is the hardest when the sterile neutrinos dominantly couple to the electron-type neutrino, and grows progressively softer for muon-type and tau-type couplings, as can be understood from the decay channels of muons and tau leptons.
Another salient feature of the loop dominated scenario is that the widths into neutrinos, charged leptons, and SM bosons are approximately the same, as they arise from interchanges of internal and external legs of the same loop diagram, as discussed and calculated in Section \ref{sec:calculation} (see Eqs.\,(\ref{eq:loop}) and (\ref{eq:loop2})). Comparing the size of the neutrino line with the peak flux of positrons or gamma rays might therefore provide ways to distinguish between loop dominated and tree level decays: recall that in the latter case, the ratio between two, three, and four body decay widths can be deduced from the dark matter mass (see Eqs.\,(\ref{eq:threebody}) and (\ref{eq:fourbody})). This feature can also distinguish the loop dominated scenario from other frameworks not related to the heavy neutrino portal, such as, for instance, $Z'$ dark matter that couples as $Z'_\mu \bar{L}^\dag \bar\sigma^\mu L$; this model would mimic the neutrino and positron line signals, but the decays into SM bosons with comparable rates, a robust prediction of the loop dominated scenario, would be absent.
Since the main purpose of this section is to point out the main qualitative differences in the spectra of SM particles between signals dominated by tree and loop effects, we do not delve into detailed studies of experimental sensitivity or bounds on dark matter parameters. These require additional considerations, such as inclusion of secondary emission and propagation effects, which are beyond the scope of this paper, and have been performed elsewhere, see \textit{e.g.}\ \cite{Garcia-Cely:2017oco,ElAisati:2017ppn,Campo:2017nwh,Chianese:2018ijk,Blennow:2019fhy,Dekker:2019gpe,Heeck:2019guh}.
\section{Summary}
In this paper, we considered scenarios where neutrino-related loop diagrams can dominate dark matter phenomenology despite the presence of tree-level channels. We found this feature to be fairly generic in models where the dark matter is part of a hidden sector that is connected to the SM via a heavy neutrino portal; such a portal is generic and arises in well-motivated new physics scenarios from the point of view of hidden sectors as well as neutrino mass generation mechanisms. In such frameworks, the tree level processes, although open, are significantly suppressed by the existence of heavy sterile neutrino propagators, incurring suppression factors of powers of $v/M_N$ or $m_\text{DM}/M_N$. We demonstrated that the loop processes can overcome this suppression (provided they do not require explicit Higgs vev insertions for e.g. hypercharge conservation) and therefore dominate dark matter phenomenology in large regions of parameter space.
While this unexpected dominance of loop processes can affect the calculation of dark matter production and lifetime, affecting compatible regions of parameter space in various models, such concerns are model-specific; more generic and interesting is the effect on dark matter indirect detection signatures, where more robust, model-independent predictions are possible. We demonstrated that the energy spectra of positrons, gamma rays, and neutrinos in loop-dominated scenarios are qualitatively different from those from tree-level decay processes. A naive calculation of dark matter signatures in these setups using only tree level processes would therefore yield extremely inaccurate predictions for the signals expected at experiments.
We highlight two salient features of such loop dominated dark matter signatures. The first is the existence of a monochromatic neutrino line. While tree level decays also feature such lines, we found that for $m_\text{DM}\gtrsim 10\, v$, the line signal gets overwhelmed by four-body decays, which grow to dominate. Second, due to SU(2) invariance, the occurrence of analogous decays into charged leptons as well as SM gauge and Higgs bosons at comparable rates is a robust prediction of the loop-dominant scenario, which is difficult to replicate in tree-level neutrino portal dark matter models or other dark matter models that couple to lepton doublets. The observation of monochromatic neutrino lines along with accompanying spectra in positrons or gamma rays that match the predictions of the above relations would therefore suggest that dark matter interactions are mediated by heavy sterile neutrinos, and that such loop dominated effects are dominantly at play, providing crucial insight into the underlying model.
Given the tremendous interest in new physics related to the neutrino sector, in particular in connection with dark matter, along with the emergence of high sensitivity neutrino detectors and gamma ray experiments, we believe that it is important for the community to be aware of such unexpected but dominant effects that can occur in well motivated theoretical frameworks and offer qualitatively different yet robustly predictable dark matter indirect detection signatures that might be discovered in the coming years.
\vskip 0.2cm
\textit{Acknowledgements:} We acknowledge illuminating conversations with Wolfgang Altmannshofer. BS thanks the GGI Institute for Theoretical Physics, and the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149), where parts of this research were completed, for hospitality and support. HHP and SP are partly supported by the U.S. Department of Energy grant number DE-SC0010107.
|
2201.02532
|
\section{Introduction} \label{sec:intro}
In recent decades, researchers have paid great attention to approximate and dynamic factor models, as they allow one to model and forecast high-dimensional data in a parsimonious manner.
Their attractiveness lies in the decomposition of a multivariate time series into two components: a low-dimensional common component that provides an essential signal about the time series dynamics and a high-dimensional idiosyncratic error component.
First introduced by \cite{chamberlain1983} and further developed by \cite{forni2000}, \cite{stock2002, stock2002b}, and \cite{bai2003}, the factor model framework is widely applied in a large number of fields ranging from economic forecasting (\citealt{eickmeier2008}) and monetary policy analysis (\citealt{bernanke2005}) to psychology, epidemiology, environmental studies, and social sciences.
For reviews and more references, see \cite{bai2008}, \cite{breitung2013}, and \cite{stock2016}.
While multivariate factor models for high-dimensional data have been extensively studied, the research on factor models for functional (infinite-dimensional) data is not yet well advanced.
Functional data analysis (FDA) has emerged as a new field in statistics that allows addressing problems where the underlying data structure can be represented as continuous curves or functions. Its application is especially useful when the complexity of the data does not allow the use of conventional multivariate methods or renders them too restrictive.
Comprehensive reviews on FDA can be found in \cite{ramsay2005}, \cite{horvath2012}, \cite{hsing2015}, and \cite{wang2016}.
Economic examples of time-dependent functional data, commonly referred to as functional time series (FTS), include energy spot prices, income profiles, and the term structures of bond yields, credit default swaps, and inflation expectations.
When it comes to modeling and predicting FTS, the literature focuses mainly on the functional autoregressive (FAR) model (see \citealt{bosq2000}, \citealt{besse2000}, and \citealt{kargin2008}).
The infinite-dimensional FAR operator allows for general dynamic structures but does not deliver interpretable dynamic components.
From an economic modeling perspective, it is often desirable to explain common dynamics by a few economically interpretable common indicators that are at the same time sufficiently comprehensive.
Therefore, the main objective of this paper is to propose and identify a functional factor model (FFM) that allows the extraction of a low-dimensional predictive component from an infinite-dimensional FTS.
Although modeling an FTS by its low-dimensional dynamic component is appealing, only a few papers have addressed this topic so far.
\cite{hays2012} and \cite{liebl2013} proposed an FFM with a discrete idiosyncratic component, and \cite{kowal2017} considered a Bayesian functional dynamic model.
Other related works identified an FFM asymptotically through a panel structure of a large number of FTS (\citealt{tavakoli2019}), addressed the problem of separate identification of a functional smooth and a rough component (\citealt{descary2019}), and discussed that discretely observed functional data naturally follow some approximate factor model structure (\citealt{hormann2022}).
Our paper differs from the available literature in several important ways and makes three contributions.
First, we propose an approximate FFM, where both the common and idiosyncratic components are random variables taking values in a functional space.
The covariance kernel of the idiosyncratic component is left unrestricted and may have asymptotically non-negligible off-diagonal elements. As opposed to \cite{hays2012}, \cite{liebl2013}, and \cite{kowal2017}, we assume that the number of factors is unknown and must be estimated.
Second, we address in detail the identification of all model parameters without relying on a functional panel structure as in \cite{tavakoli2019}.
Under suitable conditions, we show that the latent components of the model are identified through the principal components of the global covariance operator of the process of interest.
In addition to the orthogonality of the loading functions, our fundamental identification condition is that the factors exhibit some nonzero autocorrelation while the idiosyncratic component does not.
This allows the functional common component to be identified separately from the functional idiosyncratic component.
The common and idiosyncratic components may be weakly cross-correlated, which allows for certain forms of nonstationarities and heteroskedasticity in our model.
Third, we develop a simple-to-use two-step estimation and prediction procedure.
In the first step, the FPCs of the global covariance function are used to estimate latent components.
In the second step, the number and dynamics of the factors are estimated jointly and can be used to provide an optimal forecast in the mean square error (MSE) sense. While results on the estimation of FPCs are available in the literature, a theory for the consistent specification of the number of factors has so far been absent.
The consistent selection of the number of factors is crucial to the theoretical and empirical validity of factor models.
An additional difficulty arises from the fact that the factors themselves and their dynamics must also be estimated.
We propose an information criterion based on the prediction error and assume that the common factors follow a stationary vector autoregressive (VAR) process with an unknown number of lags.
The criterion includes a suitable penalty term to avoid overselection and provides jointly consistent estimates for the number of factors and lags under mild restrictions.
The proposed model and estimation procedure extend the conventional multivariate factor model to the case of functional data.
Following the terminology introduced in \cite{chamberlain1983}, our model is an approximate factor model in that the points lying on the trajectory of the idiosyncratic function are allowed to be correlated.
The correlations are asymptotically non-negligible, which allows for more general structures than those considered in \cite{stock2002}, \cite{bai2002}, and \cite{bai2003}, where only weak (i.e., asymptotically negligible) correlations are permitted.
The idiosyncratic error function is infinite-dimensional and has an unrestricted and nontrivial covariance kernel implying that the eigenvalues of both the common and the idiosyncratic components are allowed to be of the same order of magnitude.
In the conventional multivariate factor model literature, identification conditions are formulated on the eigenstructure in terms of asymptotic properties in both the cross-section dimension $N$ and time dimension $T$.
In particular, the classical factor model assumptions ensure that the first $K$ eigenvalues of the covariance matrix diverge whereas the $(K+1)$th eigenvalue is bounded as $N$ tends to infinity (see \citealt{chamberlain1983}).
\cite{forni2000}, \cite{stock2002}, and \cite{bai2002} provide suitable conditions in different settings for joint asymptotics with $N$ and $T \to \infty$.
These papers also allow for cross-sectional and serial dependence as well as some forms of weak dependencies and heteroskedasticity.
Identification strategies for multivariate factor models cannot be directly transferred to an FFM.
Asymptotic arguments with $(N,T) \to \infty$ are infeasible because the cross-sectional domain of an FTS is infinite-dimensional by definition, so other identification strategies are required.
We propose identification conditions ensuring that the common component contains the predictive part of the FTS while the idiosyncratic component is non-predictive.
We restrict the idiosyncratic component to be functional white noise (see \citealt{bosq2000}) while the common component follows a non-trivial time series process.
As a result, the idiosyncratic error function does not exhibit autocorrelation, and the common component fully explains the dynamics of the FTS.
As in the approximate factor models of \cite{stock2002}, \cite{bai2002}, and \cite{bai2003}, the factors in our model are dynamic in that they follow a time-dependent process.
However, the dynamic factors are not loaded through a lag structure, implying a static relationship between the factors and the FTS, which differentiates our approach from the dynamic factor models of \cite{forni2000} and \cite{forni2001}.
Nevertheless, the multivariate time series process for the factors makes the model capable of representing general forms of temporal dynamics in the FTS.
Therefore, our model is particularly suitable for functional prediction, which is not possible with the dynamic factor model methodology proposed by \cite{forni2000} and \cite{forni2001} whose estimators are based on two-sided filters.
Moreover, our model provides the tools to understand and work with the finite-dimensional dynamic structure of an infinite-dimensional FTS.
The practical usefulness of our model is demonstrated with an application to yield curve modeling and prediction.
We compare our results to the most established modeling framework in the literature, the dynamic Nelson-Siegel model (DNS) (see \citealt{diebold2013} for a review).
The DNS can be interpreted as a special case of the proposed framework but is much more restrictive than the general FFM.
Our main finding is that the loading functions should not be predetermined (as also reported in \citealt{lengwiler2010} and \citealt{hays2012}), nor should the number of factors be fixed.
We find that a four-factor model characterizes the dynamics of yield curves better than the three-factor model in the DNS framework.
In particular, we show that the FFM with a data-driven number of factors improves the forecasting performance of the conventional DNS model with three fixed factors.
The paper is structured as follows. Section \ref{sec:model} presents the FFM and the model assumptions. Section \ref{sec:Identification} discusses in detail under which assumptions the model parameters are identified. The functional principal components estimator, the information criterion to jointly estimate the number of factors and lags, and the optimal curve predictor are presented in Section \ref{sec:estimation}.
Section \ref{sec:simulations} provides a Monte Carlo simulation to understand the model's performance in finite samples.
In Section \ref{sec:application}, we apply the method to yield curves of seven different countries, and Section \ref{sec:conc} concludes.
\section{The approximate functional factor model} \label{sec:model}
We consider a time series of curves $Y_1(r), \ldots, Y_T(r)$ on the domain $r \in [a,b]$, a closed interval of the real line.
The general factor model for functional time series with $K$ common factors is given as
\begin{align}
Y_{t}(r) &= \mu(r) + \sum_{l=1}^K F_{l,t} \psi_{l}(r) + \epsilon_{t}(r), \nonumber \\
&= \mu(r) + \Psi'(r) F_t + \epsilon_t(r) , \qquad t=1, \ldots, T, \quad r \in [a,b], \label{eq:factormodel}
\end{align}
where $\mu(r)$ is an intercept function, $F_{l,t}$ denotes the $l$-th factor at time $t$, $\psi_l(r)$ is the corresponding $l$-th loading function, and $\epsilon_t(r)$ is an idiosyncratic error term.
The number of factors $K$ is fixed and unknown.
While $\mu(r)$ and the vector of loading functions $\Psi(r) = (\psi_1(r), \ldots, \psi_K(r))'$ are unobserved deterministic terms, the vector of factors $F_t = (F_{1,t}, \ldots, F_{K,t})'$ is assumed to follow the VAR($p$) process
\begin{equation}
F_t = \sum_{i=1}^p A_i F_{t-i} + \eta_t = A(L) F_{t-1} + \eta_t, \label{eq:VAR}
\end{equation}
which introduces a dynamic time-dependent structure to the model.
The lag polynomial $A(L)$ is defined as $A_1 + A_2 L + \ldots + A_p L^{p-1}$ with the $K \times K$ coefficient matrices $A_1, \ldots, A_p$, where $L$ denotes the lag operator, and $\eta_t = (\eta_{1,t}, \ldots, \eta_{K,t})'$ is the vector of factor innovations.
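For concreteness, the data-generating process \eqref{eq:factormodel}--\eqref{eq:VAR} can be simulated on a grid. The following Python sketch uses an equidistant grid on $[0,1]$, a Fourier sine basis as orthonormal loading functions, and particular VAR(1) coefficients; all of these are illustrative choices, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equidistant grid on [a, b] = [0, 1]; dr is the rectangle-rule quadrature weight.
R, T, K = 101, 500, 2
r = np.linspace(0.0, 1.0, R)
dr = r[1] - r[0]

# Orthonormal loading functions psi_1, psi_2 (a Fourier sine basis, unit L^2 norm).
Psi = np.vstack([np.sqrt(2.0) * np.sin(np.pi * r),
                 np.sqrt(2.0) * np.sin(2.0 * np.pi * r)])          # K x R

# Stationary VAR(1) factor dynamics F_t = A_1 F_{t-1} + eta_t.
A1 = np.array([[0.7, 0.1],
               [0.0, 0.5]])        # eigenvalues 0.7 and 0.5: inside the unit circle
F = np.zeros((T, K))
for t in range(1, T):
    F[t] = A1 @ F[t - 1] + rng.normal(size=K)

# Idiosyncratic errors: serially uncorrelated, but unrestricted across r.
eps = 0.3 * rng.normal(size=(T, R))

mu = np.zeros(R)                   # intercept function, set to zero for simplicity
Y = mu + F @ Psi + eps             # T x R matrix of discretized curves Y_t(r)
```

Row $t$ of `Y` is the curve $Y_t$ evaluated on the grid; the weight `dr` turns grid sums into approximate $L^2$ inner products.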
To motivate this model, consider the dynamic term-structure model by \cite{nelson1987} and \cite{diebold2006}, which is one of the most commonly applied models for yield curves.
The curve $Y_t(r)$ is associated with the yield of some bond with time to maturity $r \in [a,b]$ at time $t = 1, \ldots, T$. The underlying premise is that the series $Y_t(r)$ is driven by three factors $F_{1,t}$, $F_{2,t}$, and $F_{3,t}$, with known loading functions
\begin{equation}
\psi_{1}(r) = 1, \quad \psi_{2}(r) = \frac{1 - e^{-\lambda r}}{\lambda r}, \quad \psi_{3}(r) = \frac{1 - e^{-\lambda r}}{\lambda r} - e^{- \lambda r}, \label{eq:dns}
\end{equation}
which are referred to as the Nelson-Siegel loadings.
The fixed parameter $\lambda$ determines the decay of the loadings.
Extensions of this parsimonious model are proposed in \cite{svensson1995} and \cite{bliss1997}.
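A minimal Python implementation of the Nelson-Siegel loadings \eqref{eq:dns} is given below. The decay value $\lambda = 0.0609$ is the choice popularized by Diebold and Li for maturities measured in months and is used here purely for illustration; the maturities are likewise arbitrary.

```python
import numpy as np

def nelson_siegel_loadings(r, lam=0.0609):
    """Nelson-Siegel loading functions at maturities r > 0.

    lam is the fixed decay parameter; 0.0609 is the value popularized by
    Diebold and Li for maturities in months, used here only for illustration.
    As r -> 0, psi2 -> 1 and psi3 -> 0."""
    r = np.asarray(r, dtype=float)
    x = lam * r
    psi1 = np.ones_like(r)             # level: loads all maturities equally
    psi2 = (1.0 - np.exp(-x)) / x      # slope: decays from 1 toward 0
    psi3 = psi2 - np.exp(-x)           # curvature: hump-shaped in r
    return psi1, psi2, psi3

maturities = np.array([3.0, 12.0, 36.0, 60.0, 120.0])   # months
p1, p2, p3 = nelson_siegel_loadings(maturities)
```

Note that, unlike the loadings in Assumption 1(a) below, these functions are not orthonormalized.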
Although economic theory motivates such a representation, there is evidence against assuming a fixed number of factors with a predefined loading structure.
\cite{lengwiler2010} and \cite{hays2012} argued that the Nelson-Siegel loadings are not optimal in certain respects, which motivates the development of a general factor model for functional time series where the number and the shape of loading functions are assumed to be unknown.
The theory developed in this paper is not restricted to the yield curve example. We assume a general setting that naturally arises from the prediction problem of functional data. The analysis of FTS under such settings is fundamentally different from conventional multivariate analysis of factor and time series models since functional data is generally infinite-dimensional.
Some notation is required to formalize the assumptions imposed in this paper. Let $H=L^2([a,b])$ be the space of functions $x:[a,b] \to \mathbb R$ with $\int_a^b x^2(r) \,\mathrm{d} r < \infty$.
Together with the inner product $\langle x,y \rangle = \int_a^b x(r) y(r) \,\mathrm{d} r$ and the norm $\Vert x \Vert = \langle x, x \rangle^{1/2}$, the space $H$ is a Hilbert space.
Moreover, let $L_H^p$ denote the space of $H$-valued random functions $X$ with $E[\|X\|^p] < \infty$.
Any $X \in L_H^4$ possesses a covariance function $c_X(r,s) = Cov[X(r), X(s)]$, $r,s \in [a,b]$.
The integral operator with kernel $c_X(r,s)$ is denoted as the covariance operator of $X$, which has the eigenequation $\int_a^b c_X(r,s) v(s) \,\mathrm{d} s = \xi v(r)$, $r \in [a,b]$,
where $\xi$ is an eigenvalue and $v$ a corresponding eigenfunction of the covariance operator.
To differentiate between the norms used in this paper, the notation $\Vert \cdot \Vert_2$ denotes both the Euclidean vector norm and the corresponding compatible Euclidean matrix norm and $\Vert\cdot\Vert_{\mathcal{S}}$ denotes the operator norm of the Hilbert-Schmidt space of operators from $H$ to $H$.
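On an equidistant grid, the Hilbert-space quantities above reduce to weighted linear algebra: the inner product becomes a quadrature sum, and the eigenequation of a covariance operator becomes a symmetric matrix eigenproblem. The following Python sketch illustrates this with the Brownian-motion kernel $c_X(r,s) = \min(r,s)$, whose eigenpairs are known in closed form ($\xi_k = 4/((2k-1)^2\pi^2)$, $v_k(r) = \sqrt{2}\sin((k-1/2)\pi r)$); the grid size and rectangle-rule quadrature are illustrative choices.

```python
import numpy as np

# Equidistant grid on [a, b] = [0, 1]; rectangle-rule weight dr approximates
# the L^2([a, b]) inner product <x, y> = int_a^b x(r) y(r) dr by a grid sum.
a, b, R = 0.0, 1.0, 201
r = np.linspace(a, b, R)
dr = (b - a) / (R - 1)

def inner(x, y):
    return float(np.sum(x * y) * dr)

def norm(x):
    return inner(x, x) ** 0.5

# The covariance operator with kernel c_X(r, s) acts as
# (C x)(r) = int_a^b c_X(r, s) x(s) ds  ~=  (c_grid * dr) @ x,
# so its eigenpairs are those of the symmetric matrix c_grid * dr.
c_grid = np.minimum.outer(r, r)      # Brownian-motion kernel min(r, s), as an example
xi, V = np.linalg.eigh(c_grid * dr)
xi, V = xi[::-1], V[:, ::-1]         # order eigenvalues descendingly
V = V / np.sqrt(dr)                  # rescale eigenvectors to unit L^2 norm
```

For this kernel the numerical output can be checked against the closed-form eigenpairs, e.g. $\xi_1 = 4/\pi^2 \approx 0.405$ and $v_1(r) = \sqrt{2}\sin(\pi r/2)$.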
\begin{assumption}[\textbf{Common Component}] \label{as:factors} \phantom{x}
\begin{itemize}
\item[(a)] The loadings $\{\psi_k\}_{k=1}^K$ are deterministic and continuous functions and form an orthonormal system, that is, $\langle \psi_k, \psi_l \rangle = 0$ for all $k \neq l$ and $\| \psi_l \| = 1$ for all $l = 1, \ldots, K$;
\item[(b)] The factors satisfy $E[F_t] = 0$, $E\|F_t\|_2^4 < \infty$, and, for some $\lambda_1 > \ldots > \lambda_K > 0$,
\begin{align*}
\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T E\big[F_t F_t'\big] = \text{diag}(\lambda_1, \ldots, \lambda_K);
\end{align*}
\item[(c)] The $K$-th factor exhibits autocorrelation or cross-correlation such that, for some $i \in \mathbb N$,
\begin{align*}
\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T \sum_{l=1}^K E\big[F_{K,t} F_{l,t-i} \big]^2 > 0;
\end{align*}
\item[(d)] All roots of $\det(I-zA(z))$ lie outside the unit circle;
\item[(e)] $\{\eta_{t}\}$ is a multivariate martingale difference sequence with the natural filtration $\mathcal{F}_t=\sigma(\{\eta_{s},s\leq t\})$. Further, $\lim_{T \to \infty} T^{-1} \sum_{t=1}^T E[\eta_t\eta_t' \mid \mathcal{F}_{t-1}] = \Sigma_\eta$, $E\|\eta_t\|_2^{\kappa} < C < \infty$ for some $\kappa > 4$, and
\begin{equation*}
\lim_{T \to \infty} \sup_{i_1, i_2, i_3,i_4 \in \mathbb{N}} \frac{1}{T} \bigg| \sum_{t,s=1}^T Cov \big[ \eta_{k_1,t-i_1} \eta_{k_2,t-i_2}, \eta_{k_3, s-i_3} \eta_{k_4, s-i_4} \big] \bigg| < \infty,
\end{equation*}
for all $k_1, k_2, k_3, k_4 \in\{ 1, \ldots, K\}$,
where $\eta_{k,t}$ denotes the $k$-th element of the vector $\eta_{t}$.
\end{itemize}
\end{assumption}
Assumptions \ref{as:factors}(a) and (b) are the functional counterparts of the restrictions considered in the factor models of \cite{stock2002} and \cite{bai2003}.
They ensure the separate identifiability of factors and loadings, which would otherwise be identified only up to a rotation matrix.
For other possible identifying restrictions on the rotation matrix, see \cite{bai2013}.
Assumption \ref{as:factors}(c) ensures that $F_t$ is time-dependent, which differentiates the common component from the idiosyncratic component in its dynamic structure.
This condition plays a crucial role in separating the common and idiosyncratic components and hence in identifying the number of factors $K$.
Assumptions \ref{as:factors}(d) and (e) imply that $F_t$ is a stationary and causal VAR process that can be consistently estimated.
Note that Assumptions \ref{as:factors}(a)--(c) postulate general conditions under which the model can be identified, whereas Assumptions \ref{as:factors}(d) and (e) are used to construct an estimation framework for model \eqref{eq:factormodel}.
In principle, Assumptions \ref{as:factors}(d) and (e) could be replaced by any other stationary time series model for the factors, which we do not pursue in this paper.
\begin{assumption}[\textbf{Idiosyncratic Component}] \label{as:errors} \phantom{x}
\begin{itemize}
\item[(a)] $\epsilon_t$ is an $H$-martingale difference sequence, that is, $\epsilon_t$ is adapted to the natural filtration $\mathcal{A}_t=\sigma\left(\epsilon_s,\ s\leq t\right)$ with $E[\epsilon_t(r) \mid \mathcal{A}_{t-1}] = 0$, and $\sup_{r \in [a,b]}| E[\epsilon_t^\kappa(r)] | < C < \infty$ for some $\kappa > 4$.
Furthermore, $\epsilon_t$ has the global covariance kernel
\begin{align*}
\delta(r,s) := \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T E[\epsilon_t(r) \epsilon_t(s)\mid \mathcal{A}_{t-1}], \qquad r,s \in [a,b].
\end{align*}
The eigenvalues $\{\zeta_l\}$ of the integral operator with kernel $\delta(r,s)$ satisfy $\zeta_l > \zeta_{l+1}$ for all $l$;
\item[(b)] The asymptotic variance of the idiosyncratic component is bounded in each direction of $H$ by $\lambda_K$, that is,
\begin{align*}
\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T E \big[\langle \epsilon_t, x \rangle^2\big] < \lambda_K \| x \|^2, \qquad \forall x \in H;
\end{align*}
\item[(c)] The common and the idiosyncratic component are weakly orthogonal, that is,
\begin{align*}
\lim_{T \to \infty} \frac{1}{\sqrt T} \sum_{l=1}^K \sum_{t=1}^T E \big[ \langle \epsilon_t, \psi_l \rangle^2 \big] < \infty;
\end{align*}
\item[(d)]
The common and the idiosyncratic component are weakly dependent, that is,
\begin{equation*}
\lim_{T \to \infty} \sup_{r \in [a,b]} \sup_{s \in \mathbb N}E\bigg\Vert
\frac{1}{\sqrt{T}}\sum_{t=1}^{T} F_{t} \epsilon_{t-s}(r)
\bigg\Vert_2^2
< \infty.
\end{equation*}
\end{itemize}
\end{assumption}
While it is common in functional time series analysis to assume $H$ (strong) white noise for the error term (see \citealt{bosq2000}), we consider an $H$-martingale difference sequence, which is more in line with the factor literature. Assumption \ref{as:errors}(a) rules out the presence of serial correlation in the idiosyncratic component. However, it allows for a weak form of time dependence as it implies that $\int_a^b \int_a^b E[ T^{-1} \sum_{t=1}^{T}( \epsilon_t(r)\epsilon_t(s)-\delta(r,s))]^2\,\mathrm{d} r \,\mathrm{d} s =O(1)$. Further, Assumption \ref{as:errors}(a) allows for a non-degenerate covariance kernel $\delta(r,s)$ of the idiosyncratic error as opposed to the approximate factor model literature (see \citealt{bai2002} and \citealt{bai2003}) and exact functional factor models (see \citealt{hays2012} and \citealt{liebl2013}), where off-diagonal elements of $\delta(r,s)$ are asymptotically negligible.
Assumption \ref{as:errors}(b) is an eigenstructure condition required for the separate identification of the two components.
It ensures that the eigenvalues in the idiosyncratic component do not become larger than those in the common component.
However, we do not postulate different rates for the eigenvalues, as is common in the literature on multivariate factor models.
Assumption \ref{as:errors}(c) is also required for the separate identification.
Note that combining \eqref{eq:factormodel} and \eqref{eq:VAR} implies that the model can be written as
\begin{align*}
Y_t(r) = \mu(r) + \Psi'(r) \sum_{i=1}^p A_i F_{t-i} + \Psi'(r) \eta_t + \epsilon_t(r),
\end{align*}
with innovation term $Y_t(r) - E[Y_t(r) \mid Y_{t-1}, Y_{t-2}, \ldots] = \Psi'(r) \eta_t + \epsilon_t(r)$.
In essence, our weak orthogonality condition ensures that $\Psi'(r) \eta_t$ is the asymptotically relevant part of the innovations that drives the FTS in the subspace $\text{span}\{\psi_1, \ldots, \psi_K\}$ of $H$.
Note that the average idiosyncratic term $T^{-1} \sum_{t=1}^T \epsilon_t(r)$ can be decomposed into the terms $T^{-1} \sum_{t=1}^T (\epsilon_t(r) - \sum_{l=1}^K \langle \epsilon_t, \psi_l \rangle \psi_l(r))$ and $T^{-1} \sum_{t=1}^T \sum_{l=1}^K \langle \epsilon_t, \psi_l \rangle \psi_l(r)$, where the latter is asymptotically negligible by Assumption \ref{as:errors}(c).
Finally, we assume a weak form of dependence given by Assumption \ref{as:errors}(d), implying certain forms of local nonstationarities and weak correlations between the common and idiosyncratic components.
Given these points, our model is more general than those considered so far in the FDA literature.
\begin{remark} \label{rem:transformation}
Throughout this paper, we assume that the curves $Y_1, \ldots, Y_T$ are already given as fully observed elements of $H$.
In practice, however, the data is typically only available in the form of high-dimensional vectors, and additional preprocessing steps are needed to transform the discrete observations into functions. This problem has been extensively studied in the literature on functional data analysis and is well understood.
The most commonly applied techniques are based on basis expansions (see \citealt{ramsay2005}) or a conditional expectation approach (see \citealt{Yao2005}).
In the empirical part of our paper we employ techniques based on natural cubic splines.
\cite{hall2006}, \cite{li2010}, \cite{zhang2016}, and \cite{kneip2020} showed that, if the discrete data is observed densely enough, mean functions, eigenvalues, and FPCs can be estimated at the same $\sqrt{T}$-rate as if the curves were fully observed.
\end{remark}
\section{Identification}\label{sec:Identification}
We start our analysis with the identification of the functional factor model \eqref{eq:factormodel}--\eqref{eq:VAR}.
First, it follows directly from Assumptions \ref{as:factors}(b) and \ref{as:errors}(a) that $Y_t \in L_H^4$ with time-invariant mean function $E[Y_t(r)] = \mu(r)$ and time-dependent covariance function
\begin{equation}\label{eq:CovYt2}
Cov\big[Y_t(r), Y_t(s)\big] = E\bigg[\bigg(\sum_{k=1}^K F_{k,t} \psi_k(r) + \epsilon_t(r)\bigg) \bigg( \sum_{l=1}^K F_{l,t} \psi_l(s) + \epsilon_t(s)\bigg) \bigg],
\end{equation}
which, by Assumptions \ref{as:errors}(a) and (d), implies
\begin{equation}\label{eq:CovYt}
c(r,s) := \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T Cov\big[Y_t(r), Y_t(s)\big] = \sum_{l=1}^K \lambda_l \psi_l(r) \psi_l(s) + \delta(r,s).
\end{equation}
Note that \eqref{eq:CovYt2} and \eqref{eq:CovYt} coincide if the factors and errors are mutually uncorrelated and strictly stationary.
The kernel $c(r,s)$ is called the global covariance function of $Y_t$, and its integral operator is given by $C_Y(x)(r) = \int_a^b c(r,s) x(s) \,\mathrm{d} s$, where $x \in H$.
The pairs $(\lambda_l, \psi_l)$ for $l=1,\ldots,K$ satisfy the eigenequation for $C_Y$, i.e.,
\begin{align*}
\int_a^b c(r,s) \psi_l(s) \,\mathrm{d} s = \sum_{k=1}^K \lambda_k \psi_k(r) \langle \psi_k, \psi_l \rangle
= \lambda_l \psi_l(r),
\end{align*}
which follows from Assumption \ref{as:factors}(a) and the fact that $\int_a^b \delta(r,s) \psi_l(s) \,\mathrm{d} s = 0$ implied by Assumption \ref{as:errors}(c).
Hence, $\lambda_1, \ldots, \lambda_K$ and $\psi_1, \ldots, \psi_K$ are identified as eigenvalues and corresponding eigenfunctions of $C_Y$.
Any eigenfunction of $C_Y$ is either an element of $\text{span}\{ \psi_1, \ldots, \psi_K \}$ or an eigenfunction of the integral operator with kernel $\delta(r,s)$, which we denote as $C_\epsilon$.
Accordingly, an eigenvalue of $C_Y$ is either an element of $\{\lambda_1, \ldots, \lambda_K\}$ or an eigenvalue of $C_\epsilon$. Further, by Assumption \ref{as:errors}(b), all eigenvalues of $C_\epsilon$ are smaller than $\lambda_K$, which implies that $\{\lambda_1, \ldots, \lambda_K\}$ are the $K$ largest eigenvalues of $C_Y$.
Consequently, the loading functions $\psi_1, \ldots, \psi_K$ are identified as the first $K$ functional principal components of $C_Y$, which are uniquely determined up to a sign change.
Finally, the factors can be represented as projection coefficients onto the loading functions, i.e.,
\begin{align*}
F_{l,t} = \sum_{k=1}^{K}F_{k,t}\langle \psi_k ,\psi_l \rangle = \langle Y_t - \mu - \epsilon_t, \psi_l \rangle,
\end{align*}
where the first equality follows from the orthonormality of $\{\psi_k\}_{k=1}^K$. Due to the weak orthogonality in Assumption \ref{as:errors}(c), the factors are asymptotically identified as the functional principal component scores of $C_Y$ in that $T^{-1} \sum_{t=1}^T (F_{l,t} - \langle Y_t - \mu, \psi_l \rangle) = O_P(T^{-1/2})$, as $T \to \infty$.
The results obtained so far are based only on equation \eqref{eq:factormodel} together with Assumptions \ref{as:factors}(a)--(b) and \ref{as:errors}.
However, it is impossible to identify the number of factors $K$ and separate the common component from the idiosyncratic one without an additional condition.
The identification strategies in the classical factor literature (see \citealt{stock2002} and \citealt{bai2003}) are based on weak cross-correlations in the error component that are asymptotically negligible.
In the context of functional data, weak cross-correlation results in an idiosyncratic component with a covariance kernel that has negligible off-diagonal elements, which would be too restrictive due to the infinite-dimensional nature of functional data.
Therefore, we allow for non-degenerate covariance kernels and resort to the time-dependence of $Y_t$ to identify $K$, which is one of the main departure points from the classical factor literature.
The key to identifying the number of factors $K$ lies in the fact that the errors are serially uncorrelated while the ``last'' $K$-th factor is correlated with at least one lagged factor.
This property is established by Assumption \ref{as:factors}(c).
Let $\{\varphi_j\}$ be a sequence of orthonormal eigenfunctions of $C_Y$ with corresponding eigenvalues in descending order.
Since functional principal components are identified up to a sign change, we have $\varphi_l = \text{sign}(\langle \varphi_l, \psi_l \rangle) \psi_l$ and, by Assumption \ref{as:errors}(c),
\begin{align*}
\frac{1}{T} \sum_{t=1}^T \Big( \langle Y_t - \mu, \varphi_l \rangle - \text{sign}(\langle \varphi_l, \psi_l \rangle) F_{l,t} \Big) = O_P(T^{-1/2})
\end{align*}
for all $l=1, \ldots, K$.
In contrast, for $l > K$, we have $\langle Y_t - \mu, \varphi_l \rangle = \langle \epsilon_t, \varphi_l \rangle$.
Note that the $K$-th factor is serially correlated by Assumption \ref{as:factors}(c).
Moreover, $\langle \epsilon_t, \varphi_j \rangle$ is uncorrelated with $\langle \epsilon_{t-h}, \varphi_l \rangle$ for all $h \neq 0$ by Assumption \ref{as:errors}(a).
Therefore, the number of factors is identified as
\begin{equation}\label{eq:Kfactors}
K = \min \bigg\{ l \geq 0 \ \bigg\vert \ \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^T E\big[\langle Y_t - \mu, \varphi_{l+i} \rangle \langle Y_{t-h} - \mu, \varphi_{l+j} \rangle \big] = 0, \ \forall i, j, h \geq 1 \bigg\}.
\end{equation}
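A sample analogue of \eqref{eq:Kfactors} can be sketched as follows: estimate FPC scores, and take the smallest $l$ beyond which no score exhibits sizable autocovariance at any of the first few lags. We stress that this is only an illustrative heuristic with an ad hoc threshold; the selection rule we actually propose is the information criterion of Section \ref{sec:estimation}.

```python
import numpy as np

def select_K_by_autocovariance(Y, r, max_K=10, max_lag=5, tol=0.1):
    """Illustrative sample analogue of the identification of K:
    return the smallest l such that the empirical FPC scores beyond l show
    no autocovariance above tol at lags 1, ..., max_lag. The threshold tol
    is ad hoc; the paper's actual selection rule is an information criterion.

    Y : (T, R) array of curves on the equidistant grid r."""
    T, R = Y.shape
    dr = r[1] - r[0]
    Yc = Y - Y.mean(axis=0)
    c_hat = Yc.T @ Yc / T                       # sample covariance kernel on the grid
    _, V = np.linalg.eigh(c_hat * dr)
    V = V[:, ::-1] / np.sqrt(dr)                # descending order, unit L^2 norm
    scores = Yc @ V[:, :max_K] * dr             # empirical FPC scores
    scores = scores / scores.std(axis=0)        # normalize for comparability
    for l in range(max_K + 1):
        acov = [abs(np.mean(scores[h:, j] * scores[:-h, j]))
                for j in range(l, max_K) for h in range(1, max_lag + 1)]
        if all(v < tol for v in acov):
            return l
    return max_K
```

On data simulated from the model with $K$ serially correlated factors and white-noise idiosyncratic errors, this heuristic recovers $K$ because the factor scores inherit the factor autocorrelation while the remaining scores do not.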
The following theorem summarizes our identification results.
\begin{theorem}[\textbf{Identification}] \label{thm:identification}
Under Assumptions \ref{as:factors}(a)--(c) and \ref{as:errors}, the unobserved components of model \eqref{eq:factormodel} are identified. In particular,
\begin{itemize}
\item[(i)] Under Assumptions \ref{as:factors}(b), \ref{as:errors}(a), and \ref{as:errors}(d), $Y_t \in L_H^4$ with $E[Y_t(r)] = \mu(r)$ and \\ $\lim_{T \to \infty} T^{-1} \sum_{t=1}^T Cov[Y_t(r), Y_t(s)] = c(r,s)$, $r,s \in [a,b]$, where $c(r,s)$ is given in \eqref{eq:CovYt};
\item[(ii)] Under Assumptions \ref{as:factors}(a)--(b) and \ref{as:errors},
$\lambda_1, \ldots, \lambda_K$ are the largest eigenvalues of the integral operator with kernel $c(r,s)$, and $\psi_1, \ldots, \psi_K$ are corresponding eigenfunctions.
Moreover, the factors satisfy $F_{l,t} = \langle Y_t - \mu - \epsilon_t, \psi_l \rangle$, where $T^{-1} \sum_{t=1}^T ( F_{l,t} - \langle Y_t - \mu, \psi_l \rangle ) = O_P(T^{-1/2})$;
\item[(iii)] Under Assumptions \ref{as:factors}(a)--(c) and \ref{as:errors}, the number of factors $K$ is identified as in \eqref{eq:Kfactors}.
\end{itemize}
\end{theorem}
\begin{remark}
Note that $\Psi(r)$ and $F_t$ are identified separately only up to a sign change. Changing the sign of both the loadings and the factors will leave the common component $\Psi'(r) F_t$ unchanged.
The identification results in Theorem \ref{thm:identification} do not depend on the specific dynamic structure given in equation \eqref{eq:VAR} and Assumptions \ref{as:factors}(d)--(e). The factor model is identified under any time-dependent specification for the process $F_t$ that satisfies Assumption \ref{as:factors}(b)--(c).
\end{remark}
\begin{remark}\label{rem:dimred}
Factor analysis offers a tool for dimension reduction of high-dimensional (in our case infinite-dimensional) data sets.
In the context of FDA, the FPC analysis has become a leading dimension reduction method. However, as shown by \cite{brillinger1981} and \cite{forni2000} for multivariate time series and by \cite{hormann2015} and \cite{panaretos2013} for FTS, an FPC analysis, in general, might be inappropriate for serially dependent data.
The solution proposed in these papers for dependent data is based on a dynamic FPC analysis in the frequency domain.
As the estimation of dynamic FPCs involves two-sided filters, making it inapplicable to prediction exercises for FTS, it is of great interest for practitioners to know when a standard FPC analysis is applicable. The identification results in Theorem \ref{thm:identification} are helpful in this matter, as they can be seen as sufficient conditions under which a standard FPC analysis can be used for dimension reduction and subsequent prediction of an FTS.
We would like to highlight that the development of dynamic FPC methods for FTS with one-sided filters deserves a detailed, separate investigation and is not pursued in this paper.
\end{remark}
\section{Estimation and prediction}\label{sec:estimation}
The identification result of the previous section indicates that all parameters of model \eqref{eq:factormodel} can be represented in terms of the first two moments of the functional time series $Y_t$, i.e., the population mean function $\mu(r)$ and the global covariance function $c(r,s)$.
Accordingly, we employ a moment estimator approach in which the population moments are replaced by their sample counterparts.
In Section \ref{sec:estimPrim}, we show the consistency of the moment estimator.
In Section \ref{sec:estimK}, we discuss how to estimate the number of factors and the factor dynamics consistently.
In Section \ref{sec:practical}, we give some guidance for the practical implementation of our information criterion, and in Section \ref{sec:prediction} we derive optimal predictors and present an algorithm for our estimation and prediction procedure.
\subsection{Estimation of the primitives} \label{sec:estimPrim}
Consider the sample mean function
\begin{equation*}
\widehat \mu(r) = \frac{1}{T} \sum_{t=1}^T Y_t(r), \quad r \in [a,b],
\end{equation*}
and sample covariance function
\begin{equation*}
\widehat{c}(r,s) = \frac{1}{T} \sum_{t=1}^T \big(Y_t(r) - \widehat \mu(r)\big)\big(Y_t(s) - \widehat \mu(s)\big), \quad r,s \in [a,b].
\end{equation*}
The sample covariance operator $\widehat C_Y$ is defined as the integral operator with kernel $\widehat{c}(r,s)$, which has the eigenvalues $\widehat \lambda_1 \geq \widehat \lambda_2 \geq \ldots \geq \widehat \lambda_T \geq 0$ and corresponding orthonormal eigenfunctions $\widehat \psi_1, \ldots, \widehat \psi_T$.
The eigenfunction $\widehat \psi_l$ is called the $l$-th empirical FPC, and the projection coefficient $\widehat F_{l,t} = \langle Y_t - \widehat \mu, \widehat \psi_l \rangle$ is the $l$-th empirical FPC score.
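For concreteness, the sample mean, the covariance kernel, and the empirical FPCs and scores can be computed from discretized curves in a few lines. The following numpy sketch is our own illustration (not part of the accompanying package): it assumes curves observed on an equispaced grid of $[0,1]$ and approximates all inner products by Riemann sums with spacing $dr$.

```python
import numpy as np

def empirical_fpca(Y, n_components):
    """Sample mean, covariance kernel, and empirical FPCs/scores.

    Y : (T, G) array of curves discretized on an equispaced grid of [0, 1].
    Inner products are approximated by Riemann sums with spacing dr.
    """
    T, G = Y.shape
    dr = 1.0 / G                      # grid spacing (illustrative: unit interval)
    mu = Y.mean(axis=0)               # sample mean function
    Yc = Y - mu                       # centered curves
    c = (Yc.T @ Yc) / T               # sample covariance kernel c(r, s)
    # Discretized eigenproblem of the integral operator C_Y: eigenpairs of c * dr.
    evals, evecs = np.linalg.eigh(c * dr)
    order = np.argsort(evals)[::-1]   # eigh returns ascending order
    lam = evals[order][:n_components]
    # Rescale eigenvectors so that int psi_l(r)^2 dr = 1.
    psi = evecs[:, order][:, :n_components] / np.sqrt(dr)
    # Empirical FPC scores F_{l,t} = <Y_t - mu, psi_l>.
    scores = Yc @ psi * dr
    return mu, lam, psi, scores
```

Note that the eigenfunctions are only identified up to sign, mirroring the sign adjustment $s_l$ used in the theory below.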
Another way of motivating this estimator comes from the least squares principle.
Since $\sum_{t=1}^T \| \sum_{l=1}^K F_{l,t} \psi_l \|^2 = \sum_{t=1}^T \sum_{l=1}^K F_{l,t}^2$, which follows from Assumptions 1(a)--(b), the least squares minimization problem for model \eqref{eq:factormodel} with respect to the factors is solved as
\begin{equation*}
\argmin_{F_{k,s}} \sum_{t=1}^T \Big\| Y_t - \widehat \mu - \sum_{l=1}^K F_{l,t} \psi_l \Big\|^2 = \argmin_{F_{k,s}} \big\{ F_{k,s}^2 - 2 F_{k,s} \langle Y_s - \widehat \mu, \psi_k \rangle \big\} = \langle Y_s - \widehat \mu, \psi_k \rangle,
\end{equation*}
where $k=1, \ldots, K$, and $s=1, \ldots, T$.
The corresponding minimization problem with respect to the loading functions is solved by the empirical FPCs, as shown in \cite{hoermann2012}:
\begin{equation*}
\argmin_{\psi_k} \sum_{t=1}^T \Big\| Y_t - \widehat \mu - \sum_{l=1}^K \langle Y_t - \widehat \mu, \psi_l \rangle \psi_l \Big\|^2 = \widehat \psi_k(r).
\end{equation*}
When it comes to FPC analysis, many results on the convergence of the sample FPCs to their population counterparts are available in the literature.
Results of this type have been developed for independent observations (see \citealt{Dauxois1982}), linear processes (see \citealt{bosq2000}), weakly dependent data (see \citealt{hoermann2010}), and data with long-range dependence (see \citealt{salish2019}).
However, as we have seen in the previous section, if the factors in model \eqref{eq:factormodel} are weakly correlated with the idiosyncratic error, the covariance function of $Y_t$ is time-dependent (see Assumption \ref{as:errors}(d) and equation \eqref{eq:CovYt2}), which makes our analysis different from those of the above references.
\begin{theorem}[\textbf{Primitives}] \label{thm:consistency}
If Assumptions \ref{as:factors} and \ref{as:errors} hold true, then, as $T \to \infty$,
\begin{itemize}
\item[(a)] $\big\Vert \widehat \mu - \mu \big\Vert = O_P(T^{-{1/2}})$;
\item[(b)] $\big\Vert \widehat C_Y - C_Y \big\Vert_\mathcal{S} = O_P(T^{-1/2})$;
\item[(c)] $\big| \widehat \lambda_l - \lambda_l \big| = O_P(T^{-{1/2}})$ for $1 \leq l \leq K$;
\item[(d)] $\big\Vert s_l \widehat \psi_l - \psi_l \big\Vert = O_P(T^{-{1/2}})$ for $1 \leq l \leq K$, where $s_l = \text{sign}(\langle \widehat \psi_l, \psi_l \rangle)$.
\end{itemize}
\end{theorem}
\noindent
Theorem \ref{thm:consistency} complements the available results with the case of weak dependencies between factors and errors. A direct consequence is that $\sum_{t=1}^T \| \sum_{l=1}^K \widehat F_{l,t} \widehat \psi_l - F_{l,t} \psi_l \| = O_P(T^{1/2})$, which follows from the decomposition $|s_l \widehat F_{l,t} - F_{l,t}| \leq \|\widehat \mu - \mu\| + \|Y_t - \mu\| \cdot \|s_l \widehat \psi_l - \psi_l\| + |\langle \epsilon_t, \psi_l \rangle|$ together with Assumption \ref{as:errors}(c).
\subsection{Estimation of the number of factors and the dynamics}\label{sec:estimK}
In this section, we propose an estimation procedure that asymptotically selects the correct number of factors $K$ and number of lags $p$ and simultaneously allows us to estimate the VAR model \eqref{eq:VAR} consistently.
The dynamic component of the factor model is represented by the $K\times pK$ matrix of true autoregressive coefficients $\bm{A} = [A_1,A_2,..., A_p]$ in equation \eqref{eq:VAR}.
We resort to the standard conditional least squares (LS) estimator to estimate $\bm A$.
That is, for a selected number of factors $J$ and lags $m$,
the true unobserved $K\times 1$ vectors of factors $F_t$ are replaced by $J \times 1$ vectors of FPC scores $\widehat F_t^{(J)} = (\widehat F_{1,t}, \ldots, \widehat F_{J,t})'$, where $\widehat F_{l,t} = \langle Y_t - \widehat \mu, \widehat \psi_l \rangle$, and the LS estimator $\bm{\widehat A}_{(J,m)} = [\widehat A_1^{(J)}, \ldots, \widehat A_m^{(J)}]$ is given by
\begin{equation}\label{eq:LS}
\bm{\widehat A}_{(J,m)} = \sum_{t=m+1}^T \widehat F_t^{(J)} \bm{\widehat x}_{t-1}^{(J,m)} \bigg( \sum_{t=m+1}^T \bm{\widehat x}_{t-1}^{(J,m)} \big(\bm{\widehat x}_{t-1}^{(J,m)}\big)' \bigg)^{-1},
\end{equation}
where $\bm{\widehat x}_{t-1}^{(J,m)}= ((\widehat F_{t-1}^{(J)})', \ldots, (\widehat F_{t-m}^{(J)})')'$.
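The conditional LS estimator \eqref{eq:LS} amounts to an ordinary least squares fit of $\widehat F_t^{(J)}$ on its $m$ stacked lags. A minimal numpy sketch (the function name is ours; scores are assumed to be arranged row-wise):

```python
import numpy as np

def var_ls(F, m):
    """Conditional least squares fit of a VAR(m) to FPC scores.

    F : (T, J) array whose rows are the score vectors F_t.
    Returns the stacked coefficient matrix [A_1, ..., A_m] of shape
    (J, m*J) and the residuals eta_t for t = m+1, ..., T.
    """
    T, J = F.shape
    # x_{t-1} stacks the m most recent score vectors: (F_{t-1}', ..., F_{t-m}')'.
    X = np.hstack([F[m - i - 1: T - i - 1] for i in range(m)])  # (T-m, m*J)
    Ftgt = F[m:]                                                # (T-m, J)
    # A_hat = (sum F_t x_{t-1}') (sum x_{t-1} x_{t-1}')^{-1}, via normal equations.
    A_hat = np.linalg.solve(X.T @ X, X.T @ Ftgt).T              # (J, m*J)
    resid = Ftgt - X @ A_hat.T
    return A_hat, resid
```

The residuals returned here are exactly the $\widehat\eta_t$ entering $\widehat\Sigma_\eta^{(J,m)}$ in the simplified MSE expression below.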
Given that $\bm{A}$ is estimated with the LS procedure conditional on the selected number of lags and factors, we obtain estimators of $K$ and $p$ from the minimization of the corresponding mean squared error
\begin{equation}\label{eq:MSE}
MSE_T(J,m)=\frac{1}{T-m}\sum_{t=m+1}^{T}\big\Vert Y_{t}-\widehat{Y}_{t|t-1}^{(J,m)}\big\Vert^2
\end{equation}
with respect to $J$ and $m$, where the fitted values are given by
\begin{equation*}
\widehat{Y}_{t|t-1}^{(J,m)}(r) = \widehat \mu(r) + \big(\widehat \Psi^{(J)}(r)\big)' \bm{\widehat A}_{(J,m)} \bm{\widehat x}_{t-1}^{(J,m)}, \qquad \widehat \Psi^{(J)}(r) = \big(\widehat \psi_1(r), \ldots, \widehat \psi_J(r)\big).
\end{equation*}
\noindent
In order to evaluate $MSE_T(J,m)$ and obtain estimators of $K$ and $p$, we need to take into account two forms of uncertainty when constructing $\widehat{Y}_{t|t-1}^{(J,m)}$.
One comes from estimating the primitives and factors, and the other from estimating the dynamic equation \eqref{eq:VAR}.
Since the estimation of the autoregressive coefficient matrix $\bm{A}$ depends on $J$ and $m$, it is essential to understand how misspecified values for both parameters affect the estimation of $\bm{A}$.
To proceed with the discussion, we introduce some notation.
First, since the functional principal components are only identified up to a sign change, the off-diagonal elements of $A_i$ and $\widehat A_i^{(K)}$ might have different signs asymptotically.
Second, to compare the true stacked $K \times Kp$ lag coefficient matrix $\bm{A}$ with its $J \times Jm$ estimator matrix $\bm{\widehat A}_{(J,m)}$, their dimensions have to be aligned.
Consider the $K \times \max\{J,K\}$ completion matrix
\begin{equation*}
\bm{S}_J = \begin{cases} \big[\text{diag}(s_1, \ldots, s_K), \bm 0_{K,J-K} \big], & \text{if} \ J > K, \\
\text{diag}(s_1, \ldots, s_K), & \text{if} \ J \leq K,
\end{cases}
\end{equation*}
where $s_l = \text{sign}(\langle \widehat \psi_l, \psi_l \rangle)$, and $\bm 0_{K,J}$ is the $K \times J$ matrix of zeros.
We define the aligned and sign-adjusted true stacked lag coefficient matrix
\begin{align*}
\bm A^* = \begin{cases}
\big[ \bm S_J'A_1 \bm S_J, \ldots, \bm S_J' A_p \bm S_J, \bm 0_{J, (m-p)J} \big], & \text{if} \ m > p, \\
\big[ \bm S_J' A_1 \bm S_J, \ldots, \bm S_J' A_p \bm S_J \big], & \text{if} \ m \leq p,
\end{cases}
\end{align*}
which is of order $\max\{J,K\} \times (\max\{J,K\}\max\{m,p\})$.
To compare the estimated matrix $\bm{\widehat A}_{(J,m)}$ with the aligned coefficient matrix $\bm A^*$, we insert zeros in $\bm{\widehat A}_{(J,m)}$ where their dimensions do not match. For this purpose, we consider the completion matrix
\begin{align*}
\bm R_J = \begin{cases}
\big[ I_J, \bm 0_{J,K-J} \big], & \text{if} \ J < K, \\
I_J, & \text{if} \ J \geq K,
\end{cases}
\end{align*}
together with the aligned estimated matrix
\begin{align*}
\bm{\widehat A}^* =
\begin{cases}
\big[ \bm R_J' \widehat A_1^{(J)} \bm R_J, \ldots, \bm R_J' \widehat A_m^{(J)} \bm R_J, \bm 0_{J,(p-m)J} \big], & \text{if} \ m<p, \\
\big[ \bm R_J' \widehat A_1^{(J)} \bm R_J, \ldots, \bm R_J' \widehat A_m^{(J)} \bm R_J \big], & \text{if} \ m \geq p,
\end{cases}
\end{align*}
and formulate the following consistency result.
\begin{theorem}[\textbf{Dynamics}]\label{thm:Bias}
Let Assumptions \ref{as:factors} and \ref{as:errors} hold true, and let $p_{max}$ and $K_{max}$ be bounded integers with $p_{max} \geq p$ and $K_{max} \geq K$.
Then, for any selected numbers of lags $m\leq p_{max}$ and factors $J\leq K_{max}$, as $T \to \infty$,
\begin{equation*}
\big\Vert \bm{\widehat{A}}^* - \bm{A}^{*} \big\Vert_{2}=O_p\left(T^{-1/2}\right) \text{ if } J \geq K \text{ and } m \geq p.
\end{equation*}
Further, if $J<K$ or $m<p$ or both, we have
\begin{equation*}
\pliminf_{T\to\infty} \big\Vert \bm{\widehat{A}}^* - \bm{A}^{*} \big\Vert_{2} > 0.
\end{equation*}
\end{theorem}
It follows from Theorem \ref{thm:Bias} that the consistency of the LS estimator will be achieved as long as $J\geq K$ and $m\geq p$.
If at least one of the selected parameters $J$ or $m$ is smaller than the actual values, then model \eqref{eq:VAR} cannot be consistently estimated with the conditional LS estimator.
These findings indicate why the selection of $K$ and $p$ should be made simultaneously when using the LS estimator.
For instance, if the number of selected factors is at least the actual one, i.e., $J\geq K$, the LS estimator is biased when the number of selected lags is $m<p$, whereas it is consistent when $m\geq p$.
The main implication of Theorem \ref{thm:Bias} for our analysis is that the behavior of $MSE_T(J,m)$ is driven by $\Vert \bm{\widehat{A}}^* - \bm{A}^{*} \Vert_{2}$, where the MSE is asymptotically minimized as long as $J \geq K$ and $m \geq p$.
More specifically, an estimated model with $K+j$ factors and $p+i$ lags for $i,j>0$ can never asymptotically fit worse than a model with $K$ factors and $p$ lags.
Of course, this can lead to parameter proliferation and efficiency losses as more factors and lags are estimated.
Hence, we consider an MSE-based information criterion for estimating $K$ and $p$ of the form
\begin{equation}\label{eq:infCrit}
\mathrm{CR}_T(J,m)=f\bigg(\frac{1}{T-m}\sum_{t=m+1}^{T}\big\Vert Y_{t}-\widehat{Y}_{t|t-1}^{(J,m)}\big\Vert^2\bigg)+g_T(J,m),
\end{equation}
where $g_T(J,m)$ is a penalty term for overfitting a model.
If the penalty term $g_T(J,m)$ is strictly monotonically increasing in both arguments $J$ and $m$ and $f(\cdot)$ is some strictly increasing function, the following consistency result holds.
\begin{theorem}[\textbf{Numbers of factors and lags}]\label{thm:InformCriteria}
Let the conditions of Theorem \ref{thm:Bias} hold true, and let the number of factors, $K$, and the number of lags, $p$, be estimated as
\begin{equation}
\big(\widehat{K},\widehat{p} \big)= \argmin_{1 \leq J \leq K_{max}, \ 1 \leq m \leq p_{max} } \mathrm{CR}_T(J,m), \label{eq:Kp-estimator}
\end{equation}
where $g_T(J,m)\to 0$ and $Tg_T(J,m)\to \infty$ for all $0\leq J\leq K_{max}$ and $0\leq m\leq p_{max}$, as $T\to\infty$.
Then, $\lim_{T\to\infty} \mathrm{P}(\widehat{K}=K,\widehat{p}=p)=1$.
\end{theorem}
The results of Theorem \ref{thm:InformCriteria} indicate that penalized MSE-based information criteria select both the correct number of factors and the correct number of lags with probability approaching one.
The crucial element for the consistent estimation of $K$ and $p$ is a penalty term that vanishes at an appropriate rate, ensuring that an overparameterized model is not chosen.
The practical implementation of the proposed information criterion requires the specification of $f(\cdot)$ and $g_T(J,m)$.
Moreover, the evaluation of the functional norms $\Vert Y_{t}-\widehat{Y}_{t|t-1}^{(J,m)}\Vert^2$ for $t=1,...,T$ is required, which may impose unnecessary limitations for practitioners.
For this reason, we discuss simple implementation procedures for the estimator in the following subsection.
\subsection{Practical implementation of the information criterion} \label{sec:practical}
In this section, we discuss two approaches to implementing the information criterion $\mathrm{CR}_T(J,m)$ in practice.
The main aim is to provide a procedure that is easy to implement by means of existing software.
Using the theoretical results of Section \ref{sec:estimK}, we propose two solutions: one based on the analytical representation of the expression in \eqref{eq:infCrit}, the other on a graphical representation.
The numerical implementation of both methods requires the computation of empirical eigenfunctions and eigenvalues using numerical integration.
The ``fda'' package from \cite{ramsay2009} for R and MATLAB or our accompanying R-package can be used to compute the eigenelements in practice.
\paragraph{Analytical representation.}
To obtain a simplified analytical expression for $\mathrm{CR}_T(J,m)$, we start with the expression for the MSE given in \eqref{eq:MSE}.
The fitted values can be written as $\widehat{Y}_{t|t-1}^{(J,m)}(r) = \widehat{\mu}(r) + \sum_{l=1}^{J} \widehat{F}_{l,t|t-1}\widehat{\psi}_l(r)$, where $\widehat{F}_{t|t-1}^{(J)} = (\widehat{F}_{1,t|t-1}, \ldots, \widehat{F}_{J,t|t-1})' = \bm{\widehat A}_{(J,m)} \bm{\widehat x}_{t-1}^{(J,m)}$.
Furthermore, it should be noted that the sample covariance operator $\widehat C_Y$ has at most $T$ nonzero eigenvalues, which implies that the observed curves have the empirical basis representation $Y_t(r)=\widehat{\mu}(r)+\sum_{l=1}^{T} \widehat{F}_{l,t}\widehat{\psi}_l(r)$.
Using the residuals $\widehat{\eta}_{l,t}=\widehat{F}_{l,t}-\widehat{F}_{l,t|t-1}$ the functional forecast error can be written as
\begin{equation}
Y_t(r) - \widehat{Y}_{t|t-1}^{(J,m)}(r) = \sum_{l=1}^{J} \widehat{\eta}_{l,t}\widehat{\psi}_l(r) + \sum_{l=J+1}^{T}\langle Y_t-\widehat{\mu},\widehat{\psi}_l \rangle \widehat{\psi}_l(r). \label{eq:functionalforecasterror}
\end{equation}
From the orthonormality of $\{\widehat \psi_1, \ldots, \widehat \psi_T\}$, the MSE can be rewritten as
\begin{equation}
MSE_T(J,m) = \frac{1}{T-m} \sum_{t=m+1}^{T} \bigg( \sum_{l=1}^{J}\widehat{\eta}_{l,t}^2 + \sum_{l=J+1}^{T}\langle Y_t-\widehat{\mu},\widehat{\psi}_l \rangle^2 \bigg)
\approx \tr \big(\widehat{\Sigma}_{\eta}^{(J,m)}\big) + \sum_{l=J+1}^T\widehat{\lambda}_l, \label{eq:MSEsimplified}
\end{equation}
where $\widehat{\Sigma}_{\eta}^{(J,m)} = (T-m)^{-1} \sum_{t=m+1}^T \widehat \eta_t \widehat \eta_t'$, $\widehat \eta_t = (\widehat{\eta}_{1,t}, \ldots, \widehat{\eta}_{J,t})'$, and the last step follows from the fact that $(T-m)^{-1}\sum_{t=m+1}^{T}\langle Y_t-\widehat{\mu},\widehat{\psi}_l \rangle^2 \approx T^{-1} \sum_{t=1}^{T}\langle Y_t-\widehat{\mu},\widehat{\psi}_l \rangle^2=\widehat{\lambda}_l$.
The advantage of the expression \eqref{eq:MSEsimplified} over the MSE in \eqref{eq:MSE} is that all components can be easily computed.
In particular, $\widehat{\Sigma}_{\eta}^{(J,m)}$ is the least squares estimator of ${\Sigma}_{\eta}$ obtained by fitting a VAR($m$) model to the time series of FPC scores $\{\widehat{F}_t^{(J)}\}$.
The $L^2$ norms of the functional forecast error \eqref{eq:functionalforecasterror} in the expression of the MSE \eqref{eq:MSE} reduce to the trace of $\widehat\Sigma_{\eta}^{(J,m)}$ and the higher order eigenvalues $\{\widehat{\lambda}_l\}_{l > J}$.
Finally, to put all terms of our generic information criterion \eqref{eq:infCrit} on the same scale, we recommend the $\ln(\cdot)$ transformation for both $f(\cdot)$ and $g_T(J,m)$.
More precisely, we construct $g_T(J,m)$ similar to the penalty term in well-established information criteria from multivariate time series analysis such as the Bayesian information criterion (BIC) and the Hannan-Quinn criterion (HQC).
Our BIC-type estimator for $K$ and $p$ is given by
\begin{equation}
(\widehat K_{\text{bic}}, \widehat p_{\text{bic}})
= \argmin_{1 \leq J \leq K_{max}, \ 1 \leq m \leq p_{max} } \ln \bigg( \tr \big(\widehat \Sigma_{\eta}^{(J,m)}\big) + \sum_{l=J+1}^T \widehat \lambda_l \bigg) + Jm\frac{\ln(T)}{T}, \label{eq:CR-BIC}
\end{equation}
where $J m$ is the number of estimated parameters in the model, and $T^{-1} \ln(T)$ is the penalization rate.
As an alternative with a lower penalization rate, the HQC-type estimator
\begin{equation}
(\widehat K_{\text{hqc}}, \widehat p_{\text{hqc}})
= \argmin_{1 \leq J \leq K_{max}, \ 1 \leq m \leq p_{max} } \ln \bigg( \tr \big(\widehat \Sigma_{\eta}^{(J,m)} \big) + \sum_{l=J+1}^T \widehat \lambda_l \bigg) + 2Jm\frac{\ln(\ln(T))}{T} \label{eq:CR-HQC}
\end{equation}
can be used.
Both \eqref{eq:CR-BIC} and \eqref{eq:CR-HQC} satisfy the conditions from Theorem \ref{thm:InformCriteria} and are therefore consistent estimators for $K$ and $p$.
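The grid search behind \eqref{eq:CR-BIC} and \eqref{eq:CR-HQC} is straightforward to code from the simplified MSE expression \eqref{eq:MSEsimplified}. The sketch below is our own illustration: it fits the VAR by least squares for each candidate pair $(J,m)$ and, as a simplification, truncates the eigenvalue tail at $K_{max}$ rather than $T$.

```python
import numpy as np

def select_K_p(scores, lam, p_max, criterion="bic"):
    """Grid search for (K, p) over the simplified criterion.

    scores : (T, K_max) empirical FPC scores; lam : the corresponding
    sample eigenvalues (descending).  Uses
    MSE(J, m) ~ tr(Sigma_eta^(J,m)) + sum_{l > J} lambda_l
    with a BIC-type penalty J*m*ln(T)/T or an HQC-type penalty
    2*J*m*ln(ln(T))/T.
    """
    T, K_max = scores.shape
    best, best_val = None, np.inf
    for J in range(1, K_max + 1):
        FJ = scores[:, :J]
        for m in range(1, p_max + 1):
            # VAR(m) least squares fit of the first J score series.
            X = np.hstack([FJ[m - i - 1: T - i - 1] for i in range(m)])
            B, *_ = np.linalg.lstsq(X, FJ[m:], rcond=None)
            resid = FJ[m:] - X @ B
            mse = np.trace(resid.T @ resid) / (T - m) + lam[J:].sum()
            pen = (J * m * np.log(T) / T if criterion == "bic"
                   else 2 * J * m * np.log(np.log(T)) / T)
            val = np.log(mse) + pen
            if val < best_val:
                best, best_val = (J, m), val
    return best
```

Replacing the penalty by the multiplicative factor of \eqref{eq:CR-fFPE} would reproduce the fFPE selection rule within the same loop.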
\begin{remark}
Our final versions of the information criterion are related to the fFPE criterion proposed in \cite{aue2015}, which is given by
\begin{equation}
(\widehat K_{\text{fFPE}}, \widehat p_{\text{fFPE}})
= \argmin_{1 \leq J \leq K_{max}, \ 1 \leq m \leq p_{max} } \bigg\{ \frac{T+Jm}{T} \tr \big(\widehat \Sigma_{\eta}^{(J,m)}\big) + \sum_{l=J+1}^T \widehat \lambda_l \bigg\}. \label{eq:CR-fFPE}
\end{equation}
Although the fFPE criterion was derived in the context of dimension reduction of functional time series for prediction exercises, it can be used to select the number of factors and lags, interpreting the number of factors as a dimension.
However, the arguments from the proof of Theorem \ref{thm:InformCriteria} indicate that the fFPE information criterion of \cite{aue2015} may lead to overparameterizations of the functional factor model \eqref{eq:factormodel}--\eqref{eq:VAR}, since it does not contain a penalty term satisfying the conditions of Theorem \ref{thm:InformCriteria}.
A further comparison of the estimators \eqref{eq:CR-BIC} and \eqref{eq:CR-HQC} with \eqref{eq:CR-fFPE} is made in Section \ref{sec:simulations} to corroborate this remark.
\end{remark}
\paragraph{Graphical representation.}
A careful inspection of the proof of Theorem \ref{thm:InformCriteria} shows that the MSE reaches its asymptotic minimum when $J \geq K$ and $m \geq p$.
This result can be used to select $(K,p)$ graphically, similar to the concept of the scree plot.
More precisely, one can plot $MSE_T(J,m)$ for various combinations of $J$ and $m$ and choose the pair $(J,m)$ at the corner of the rectangular region over which the MSE surface remains ``flat''.
For this purpose, expression \eqref{eq:MSEsimplified} can be used.
Figure \ref{fig:MSE} shows an example illustrating an MSE surface.
This figure suggests that $\widehat{K}=4$ and $\widehat{p}=4$ should be selected.
\begin{figure}
\begin{center}
\vspace*{-10ex}
\includegraphics[width=0.48\textwidth]{img/figure1.pdf}
\vspace*{-11ex}
\end{center}
\caption{Example of MSE minimization using simulated data with $K=4$ and $p=4$}
\label{fig:MSE}
\end{figure}
The graphical approach has an advantage over the analytical criteria in \eqref{eq:CR-BIC} and \eqref{eq:CR-HQC} in that it does not require the specification of a penalty term. However, it cannot be automated for repeated model selection (for instance, in Monte Carlo simulations). Moreover, locating the smallest point of the ``flat'' MSE region often involves a subjective judgment, since the estimated MSE also fluctuates over this area in finite samples.
\subsection{Mean square error optimal prediction} \label{sec:prediction}
Since the factors $F_t$ follow a causal VAR($p$) model, their best $h$-step ahead predictor in the mean square error sense is given by the conditional expectation,
\begin{equation*}
F_{T+h|T} = E\big[F_{T+h} \mid Y_{T}, Y_{T-1}, \ldots\big] = \sum_{i=1}^p A_i F_{T+h-i|T},
\end{equation*}
where $F_{T+j|T} := (\langle Y_{T+j} - \mu, \psi_1 \rangle, \ldots , \langle Y_{T+j} - \mu, \psi_K \rangle)'$ for $j \leq 0$.
Similarly, for the functional process $Y_t$, let the infinite history up to time $T$ be given by $\mathcal I_T = \sigma(\{ Y_t, \ t \leq T \})$, and let $g(\mathcal I_{T}) \in L_H^4$ be any predictor function for $Y_{T+h}$ that is measurable with respect to $\mathcal I_T$.
Then, by the law of iterated expectations, $\argmin_{g(\mathcal I_T)} \{ E \| Y_{T+h} - g(\mathcal I_{T}) \|^2 \} = E[Y_{T+h} \mid \mathcal I_{T}]$.
The resulting best $h$-step ahead curve predictor is then
\begin{equation} \label{eq:MSEforecast}
Y_{T+h|T}(r) = E\big[Y_{T+h}(r) \mid Y_{T},Y_{T-1},...\big]= \mu(r) + \Psi(r)' F_{T+h|T}.
\end{equation}
The theoretical predictor $Y_{T+1|T}$ attains the smallest possible mean squared error, which is given by $E\Vert Y_{T+1}-Y_{T+1|T} \Vert^2 = E\Vert\eta_{T+1}\Vert^2+ E\Vert\epsilon_{T+1} \Vert^2$.
The estimators introduced in Sections \ref{sec:estimPrim} and \ref{sec:estimK} allow us to replace the unobserved parameters in \eqref{eq:MSEforecast}, $\mu$, $\Psi$, $F_T$, $A(L)$, $K$, and $p$, by consistent estimators, which leads to the feasible predictor given as
\begin{equation} \label{eq:feasibleforecast}
\widehat Y_{T+h|T}^{(\widehat{K},\widehat{p})}(r) = \widehat \mu(r) + \big(\widehat \Psi^{(\widehat K)}(r)\big)' \widehat F_{T+h|T}^{(\widehat{K})},
\end{equation}
where $\widehat F_{T+1|T}^{(\widehat{K})} = \sum_{i=1}^{\widehat{p}} \widehat A_i^{(\widehat{K})} \widehat{F}_{T+1-i|T}^{(\widehat{K})}
$ with $\widehat F_{T+j|T}^{(\widehat K)} = \widehat F_{T+j}^{(\widehat K)}$ for $j \leq 0$.
However, the estimation step introduces an additional small sample estimation error that comes from estimating the primitives, $K$, $p$, and the dynamics.
Theorems \ref{thm:consistency}--\ref{thm:InformCriteria} indicate that the estimation error becomes negligible as $T\to\infty$, i.e.,
\begin{equation*}
\big\Vert Y_{T+1}-\widehat Y_{T+1|T}^{(\widehat{K},\widehat{p})} \big\Vert = \big\Vert Y_{T+1}-Y_{T+1|T} \big\Vert+ O_P(T^{-1/2}),
\end{equation*}
which provides a theoretical justification for the asymptotic optimality of the predictor \eqref{eq:feasibleforecast} in terms of the MSE.
We conclude this section with an estimation and prediction algorithm that complements the functional prediction algorithm of \cite{aue2015} with our methods.
\paragraph{Estimation and prediction algorithm}\text{ }
\noindent\textbf{Step 1: Estimation of the primitives.} Compute the sample mean function $\widehat \mu(r)$ and the sample covariance function $\widehat c(r,s)$ from the observed curves $Y_1(r), \ldots, Y_T(r)$.
Fix some $K_{max}$ large enough and compute the eigencomponents $\{(\widehat \lambda_l, \widehat \psi_l)\}_{l=1}^{K_{max}}$ and the functional principal component scores $\widehat{F}_{l,t} = \langle Y_t - \widehat{\mu}, \widehat{\psi}_l \rangle$, $l=1,...,K_{max}$,
as estimates for the factors.
\noindent\textbf{Step 2: Estimation of $K$, $p$, and the factor dynamics.} Fix some $p_{max}$ large enough, compute $MSE_T(J,m)$ from \eqref{eq:MSEsimplified} for any $J=0, \ldots, K_{max}$ and $m=0, \ldots, p_{max}$, and select $K$ and $p$ according to \eqref{eq:CR-BIC} or \eqref{eq:CR-HQC}.
Finally, estimate the VAR($\widehat p$) model \eqref{eq:VAR} by the LS estimator given in \eqref{eq:LS} yielding $[\widehat A_1^{(\widehat K)}, \ldots, \widehat A_{\widehat p}^{(\widehat K)}] = \widehat{\bm{A}}_{(\widehat K, \widehat p)}$.
\noindent\textbf{Step 3: Fitted curves and forecasting.}
The fitted curves for the sample $t=1, \ldots, T$ are $\widehat Y_t(r) = \widehat \mu(r) + \sum_{l=1}^{\widehat K} \widehat F_{l,t} \widehat \psi_l(r)$,
and the $h$-step predictor $\widehat Y_{T+h|T}^{(\widehat{K},\widehat{p})}(r)$ is given by \eqref{eq:feasibleforecast}.
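The forecasting step above can be sketched as follows, assuming the curves are discretized on an equispaced grid of $[0,1]$ and the estimates from Steps 1 and 2 are given; the recursion implements $\widehat F_{T+h|T}^{(\widehat K)}$ and the feasible predictor \eqref{eq:feasibleforecast} (the function name is ours).

```python
import numpy as np

def forecast_curves(Y, psi, mu, A_blocks, h):
    """h-step-ahead curve forecasts via the VAR recursion of Step 3.

    Y : (T, G) observed curves on a grid with spacing dr = 1/G;
    psi : (G, K) estimated FPCs; mu : (G,) estimated mean function;
    A_blocks : list [A_1, ..., A_p] of (K, K) lag matrices.
    Returns the (h, G) array of predicted curves.
    """
    T, G = Y.shape
    K, p = psi.shape[1], len(A_blocks)
    dr = 1.0 / G
    scores = (Y - mu) @ psi * dr                 # F_hat_{l,t} = <Y_t - mu, psi_l>
    hist = [scores[T - 1 - i] for i in range(p)] # (F_T, F_{T-1}, ..., F_{T-p+1})
    preds = []
    for _ in range(h):
        # F_{T+h|T} = sum_i A_i F_{T+h-i|T}, with F_{T+j|T} = F_{T+j} for j <= 0.
        F_next = sum(A_blocks[i] @ hist[i] for i in range(p))
        hist = [F_next] + hist[:-1]
        preds.append(mu + psi @ F_next)          # Y_hat = mu + Psi' F
    return np.array(preds)
```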
\section{Simulations} \label{sec:simulations}
We analyze the finite sample properties of the estimators of $K$ and $p$ from Theorem \ref{thm:InformCriteria} using a Monte Carlo simulation.
The functional time series are simulated as
\begin{equation}
Y_t(r) = \sum_{l=1}^K F_{l,t} v_l(r) + \sum_{l=K+1}^{10} e_{l,t} v_l(r), \quad r \in [0,1], \quad t=1, \ldots, T, \label{eq:simulationprocess}
\end{equation}
where $v_1(r) = 1$, $v_{2j}(r) = \sqrt 2 \sin(2 j \pi r)$, and $v_{2j+1}(r) = \sqrt 2 \cos(2 j \pi r)$ are the Fourier basis functions.
The errors are simulated as $e_t = (e_{1,t}, \ldots, e_{10,t})' \sim N(0,\text{diag}(1, 2^{-2}, \ldots, 10^{-2}))$ independently, and the factors are defined as $F_t = (F_{1,t}, \ldots, F_{K,t})' = A(L)^{-1} \eta_t$, where $\eta_t = (e_{1,t}, \ldots, e_{K,t})'$.
We consider 4 different model specifications, which are presented in Table \ref{tab:modelspecs}. The models reflect different dependence structures, with the numbers of factors ranging from 1 to 3 and lags ranging from 1 to 4. The model specification M1 coincides with the setting that was used by \cite{aue2015} in their simulations.
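The data-generating process \eqref{eq:simulationprocess} can be simulated in a few lines; the sketch below is our own illustration (the burn-in length is an arbitrary choice, and `A_blocks` passes the lag matrices of $A(L)$).

```python
import numpy as np

def simulate_fts(T, K, A_blocks, n_basis=10, grid=100, seed=0):
    """Simulate curves Y_t(r) = sum_l F_{l,t} v_l(r) + sum_{l>K} e_{l,t} v_l(r).

    Fourier basis: v_1 = 1, v_{2j} = sqrt(2) sin(2 pi j r),
    v_{2j+1} = sqrt(2) cos(2 pi j r); errors e_{l,t} ~ N(0, l^{-2});
    factors follow A(L) F_t = eta_t with eta_t = (e_{1,t}, ..., e_{K,t})'.
    """
    rng = np.random.default_rng(seed)
    p = len(A_blocks)
    r = np.linspace(0.0, 1.0, grid)
    V = np.ones((n_basis, grid))                 # row 0 is v_1 = 1
    for j in range(1, n_basis // 2 + 1):
        if 2 * j - 1 < n_basis:
            V[2 * j - 1] = np.sqrt(2) * np.sin(2 * j * np.pi * r)
        if 2 * j < n_basis:
            V[2 * j] = np.sqrt(2) * np.cos(2 * j * np.pi * r)
    sd = 1.0 / np.arange(1, n_basis + 1)         # e_{l,t} ~ N(0, l^{-2})
    burn = 100                                   # burn-in (arbitrary choice)
    e = rng.standard_normal((T + burn, n_basis)) * sd
    F = np.zeros((T + burn, K))
    for t in range(p, T + burn):
        F[t] = sum(A_blocks[i] @ F[t - 1 - i] for i in range(p)) + e[t, :K]
    Y = F[burn:] @ V[:K] + e[burn:, K:] @ V[K:]
    return r, Y
```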
\begin{table}[t]
\caption{Model specifications for the Monte Carlo simulations}
\begin{center}
\small
\begin{tabular}{r|c|c|l}
model & $K$ & $p$ & lag polynomial \\
\hline
M1 & 3 & 1 & $A(L) = \mathrm{I}_3 -
\begin{psmallmatrix} -0.05 & -0.23 & \phantom{-}0.76 \\
\phantom{-}0.80 & -0.05 & \phantom{-}0.04 \\
\phantom{-}0.04 & \phantom{-}0.76 & \phantom{-}0.23\end{psmallmatrix} L$ \\
M2 & 2 & 2 & $A(L) = \mathrm{I}_2 -
\begin{psmallmatrix} 0.8 & -0.8 \\ 0.1 & -0.5 \end{psmallmatrix} L
- \begin{psmallmatrix} -0.3 & -0.3 \\ -0.2 & \phantom{-}0.3 \end{psmallmatrix} L^2 $ \\
M3 & 2 & 4 & $A(L) = \mathrm{I}_2 -
\begin{psmallmatrix} 0.4 & -0.2 \\ 0.0 & \phantom{-}0.3 \end{psmallmatrix} L -
\begin{psmallmatrix} -0.1 & -0.1 \\ \phantom{-}0.0 & -0.1 \end{psmallmatrix} L^2 -
\begin{psmallmatrix} 0.15 & 0.15 \\ 0.00 & 0.15 \end{psmallmatrix} L^3 -
\begin{psmallmatrix} 0.3 & -0.4 \\ 0.0 & \phantom{-}0.6 \end{psmallmatrix} L^4$\\
M4 & 1 & 4 & $A(L) = 1 - 0.2L - 0.7 L^4$ \\ \hline
\end{tabular}
\hspace*{0.1ex} \parbox{15.2cm}{ \vspace{0.5ex} \scriptsize
Note: The table presents the implemented specifications for model \eqref{eq:simulationprocess} for the simulation results from Table \ref{tab:biasRMSE}.
}
\end{center}
\label{tab:modelspecs}
\end{table}
\begin{table}[t]
\caption{Finite sample performances of the joint estimators for $K$ and $p$}
\begin{center}
\scriptsize
\begin{tabular}{rr|rrrrrr|rrrrrr}
& & \multicolumn{6}{|c}{bias} & \multicolumn{6}{|c}{RMSE} \\
& $T$ & \multicolumn{1}{|c}{$\widehat K_{bic}$} & \multicolumn{1}{c}{$\widehat K_{\text{hqc}}$} & \multicolumn{1}{c}{$\widehat K_{\text{fFPE}}$} & \multicolumn{1}{c}{$\widehat p_{bic}$} & \multicolumn{1}{c}{$\widehat p_{\text{hqc}}$} & \multicolumn{1}{c}{$\widehat p_{\text{fFPE}}$} & \multicolumn{1}{|c}{$\widehat K_{bic}$} & \multicolumn{1}{c}{$\widehat K_{\text{hqc}}$} & \multicolumn{1}{c}{$\widehat K_{\text{fFPE}}$} & \multicolumn{1}{c}{$\widehat p_{bic}$} & \multicolumn{1}{c}{$\widehat p_{\text{hqc}}$} & \multicolumn{1}{c}{$\widehat p_{\text{fFPE}}$} \\ \hline
M1 & 100 & 0.006 & 0.027 & 0.528 & 0.000 & 0.004 & 1.441 & 0.084 & 0.188 & 1.220 & 0.014 & 0.060 & 2.703 \\
M1 & 200 & 0.003 & 0.019 & 0.352 & 0.000 & 0.002 & 0.707 & 0.059 & 0.157 & 0.948 & 0.004 & 0.041 & 1.645 \\
M1 & 500 & 0.001 & 0.014 & 0.350 & 0.000 & 0.001 & 0.524 & 0.038 & 0.130 & 0.942 & 0.000 & 0.029 & 1.310 \\
M2 & 100 & -0.016 & 0.007 & 0.477 & -0.236 & -0.055 & 2.003 & 0.148 & 0.124 & 1.209 & 0.491 & 0.308 & 3.019 \\
M2 & 200 & 0.000 & 0.004 & 0.231 & -0.007 & 0.009 & 1.384 & 0.020 & 0.068 & 0.705 & 0.091 & 0.105 & 2.368 \\
M2 & 500 & 0.000 & 0.003 & 0.211 & 0.000 & 0.005 & 1.131 & 0.010 & 0.059 & 0.645 & 0.010 & 0.075 & 2.071 \\
M3 & 100 & -0.455 & -0.181 & 0.414 & -1.041 & -0.249 & 1.513 & 0.677 & 0.434 & 1.156 & 1.720 & 0.907 & 2.186 \\
M3 & 200 & -0.021 & 0.000 & 0.118 & -0.011 & 0.012 & 1.056 & 0.147 & 0.031 & 0.452 & 0.184 & 0.117 & 1.749 \\
M3 & 500 & 0.000 & 0.000 & 0.080 & 0.000 & 0.006 & 0.891 & 0.000 & 0.008 & 0.341 & 0.008 & 0.080 & 1.569 \\
M4 & 100 & 0.000 & 0.001 & 0.412 & 0.003 & 0.111 & 1.968 & 0.011 & 0.028 & 1.152 & 0.313 & 0.458 & 2.533 \\
M4 & 200 & 0.000 & 0.000 & 0.152 & 0.013 & 0.076 & 1.751 & 0.000 & 0.012 & 0.502 & 0.122 & 0.347 & 2.358 \\
M4 & 500 & 0.000 & 0.000 & 0.112 & 0.006 & 0.051 & 1.652 & 0.000 & 0.007 & 0.403 & 0.078 & 0.270 & 2.275 \\
\hline
\end{tabular}
\hspace*{0.1ex} \parbox{15.7cm}{ \vspace{0.5ex} \scriptsize
Note: The biases and root mean square errors (RMSE) for the estimators presented in \eqref{eq:CR-BIC}, \eqref{eq:CR-HQC}, and \eqref{eq:CR-fFPE} are simulated for a functional time series of sample size $T$ under models M1--M4 from Table \ref{tab:modelspecs} using 100,000 Monte Carlo replications.
The information criteria are evaluated using $K_{max} = 8$ and $p_{max} = 8$ as the maximum numbers of factors and lags.
}
\end{center}
\label{tab:biasRMSE}
\end{table}
We compare the estimators from the BIC-type and HQC-type information criteria in equations \eqref{eq:CR-BIC} and \eqref{eq:CR-HQC} with the fFPE criterion proposed by \cite{aue2015}, which is given in \eqref{eq:CR-fFPE}.
The results are presented in Table \ref{tab:biasRMSE} and support our theoretical findings.
Furthermore, both $\widehat K_{\text{bic}}$ and $\widehat p_{\text{bic}}$, as well as $\widehat K_{\text{hqc}}$ and $\widehat p_{\text{hqc}}$, provide a good approximation of the true parameters for reasonable sample sizes.
\section{Application: yield curve modeling and forecasting} \label{sec:application}
We study three yield curve datasets to model and estimate the dynamics of the term structure of government bond yields.
The first dataset (hereafter JKV) is taken from \cite{jungbacker2014}\footnote{Data source: \url{http://qed.econ.queensu.ca/jae/2014-v29.1/}.} and consists of monthly unsmoothed Fama-Bliss zero-coupon yield curves of U.S.\ Treasuries, which are observed at 17 different fixed maturities of 3, 6, 9, 12, 15, 18, 21, 24, 30, 36, 48, 60, 72, 84, 96, 108, and 120 months, from January 1987 until December 2007, with a sample size of $T=252$.
The sample spans the period from the end of the Volcker disinflation to the onset of the 2008 financial crisis, which can be treated as a single, consistent monetary policy regime (see, e.g., \citealt{monch2012}).
The second dataset (hereafter FED) is obtained from the Federal Reserve Statistical Release H.15\footnote{Data source: \url{https://www.federalreserve.gov/datadownload/Choose.aspx?rel=H15}.} and consists of monthly zero-coupon yield curves of U.S.\ Treasuries, which are observed at 11 different constant maturities of 1, 3, 6, 12, 24, 36, 60, 84, 120, 240, and 360 months, from July 2001 until December 2021, with a sample size of $T=242$.
Plots of the JKV and FED data are presented in Figure \ref{fig:3Dplot}.
The third dataset (hereafter G7) contains zero-coupon discount rates for government bond yields of the Group of Seven countries: Canada, France, Germany, Italy, Japan, the United States, and the United Kingdom.
The monthly data, covering the period from January 1995 until June 2022, are taken from the Thomson Reuters Eikon database and are available for 19 different times to maturity.
However, the G7 data contains some missing values for certain dates and times to maturity.
\begin{figure}[t]
\caption{Yield curves of U.S. Treasuries}
\centering
\vspace*{-4ex}
\includegraphics[scale=0.17]{./img/figure2.pdf}
\hspace*{0.1ex}
\parbox{15cm}{ \vspace{-11.5ex} \scriptsize
Note: The figure depicts the monthly yield curves of U.S.\ Treasuries from 1985 until 2007 from the JKV dataset (left) and the monthly yield curves from 2001 until 2021 from the FED dataset (right).
}
\vspace*{-5ex}
\label{fig:3Dplot}
\end{figure}
Following \cite{zhang2016}, the numbers of observed maturities are large enough relative to the sample sizes to classify the data as dense functional data, for which parametric convergence rates are preserved under conventional preprocessing methods (see also Remark \ref{rem:transformation}).
To obtain a functional representation of the yield curve $Y_t(r)$ at time $t$ with time to maturity $r \in [a,b]$, where $a$ is the lowest time to maturity and $b$ is the longest one, we follow \cite{ramsay2005} and represent the curves using appropriate basis functions.
We consider natural cubic splines where the knots are placed at all observed maturities so that the observed yields are exactly interpolated.
Specifically, for the G7 dataset and the dates with missing values, we only set knots at the available times to maturity so that all unobserved points on the curve are imputed by the natural splines.
For a discussion of the optimality properties of natural interpolating splines, see \cite{hsing2015}, Section 6.6.
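In practice, this interpolation step is a one-liner with standard software. The sketch below (function name ours) uses SciPy's natural cubic spline and, as for the G7 data, simply drops missing maturities from the knot set so that unobserved points are imputed by the spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def yield_curve_function(maturities, yields, grid):
    """Interpolate observed yields by a natural cubic spline.

    Knots are placed at all observed maturities, so the observed yields
    are reproduced exactly; maturities with missing values (NaN) are
    left out of the knot set and imputed by the spline.
    """
    obs = ~np.isnan(yields)                       # drop missing maturities
    cs = CubicSpline(maturities[obs], yields[obs], bc_type="natural")
    return cs(grid)
```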
\begin{table}[t]
\caption{Estimated numbers of factors and lags for the JKV and FED datasets}
\begin{center}
\footnotesize
\begin{tabular}{c|cccccc}
& $\widehat K_{\text{bic}}$ & $\widehat K_{\text{hqc}}$ & $\widehat K_{\text{fFPE}}$ & $\widehat p_{\text{bic}}$ & $\widehat p_{\text{hqc}}$ & $\widehat p_{\text{fFPE}}$ \\
\hline
JKV data (full period) & 4 & 6 & 6 & 1 & 1 & 2 \\
FED data (full period) & 4 & 4 & 4 & 1 & 1 & 5 \\
JKV data (first 120 months) & 2 & 4 & 7 & 1 & 1 & 8 \\
FED data (first 120 months) & 4 & 4 & 4 & 1 & 1 & 5 \\
\hline
\end{tabular}
\hspace*{0.1ex} \parbox{12.9cm}{ \vspace{0.5ex} \scriptsize
Note: The estimated numbers of factors and lags from the BIC estimator \eqref{eq:CR-BIC}, the HQC estimator \eqref{eq:CR-HQC}, and the fFPE criterion \eqref{eq:CR-fFPE} are presented using the full samples and the training samples of the first 120 months. The maximum numbers of factors and lags are set as $K_{max}=8$ and $p_{max}=8$.
}
\end{center}
\label{tab:factorslags}
\end{table}
\begin{figure}[t]
\caption{Loading functions of the DNS model and the JKV and FED datasets}
\centering
\includegraphics[scale=0.21]{./img/figure3.pdf}
\label{fig:loadings}
\hspace*{0.1ex} \parbox{15.5cm}{ \vspace{0.5ex} \scriptsize
Note: The left figure presents the dynamic Nelson-Siegel loading functions defined in equation \eqref{eq:dns}.
The decay parameter is set to $\lambda = 0.0609$, maximizing the curvature factor at $r = 30$ month maturity (see \citealt{diebold2006}).
The middle and right plots present the first four empirical functional principal components of the functional time series in Figure \ref{fig:3Dplot}.
}
\end{figure}
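As a cross-check of the left panel of Figure \ref{fig:loadings}, the DNS loadings can be evaluated directly. The Python sketch below (maturities in months; the loading formulas are the standard Nelson-Siegel parametrization we assume underlies equation \eqref{eq:dns}) verifies that the curvature loading is hump-shaped with its peak near the 30-month maturity for $\lambda = 0.0609$.

```python
import numpy as np

lam = 0.0609  # decay parameter from Diebold and Li

def dns_loadings(r, lam=lam):
    """Level, slope, and curvature loadings at maturity r (in months)."""
    x = lam * np.asarray(r, dtype=float)
    level = np.ones_like(x)
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return level, slope, curvature

r = np.arange(1.0, 361.0)  # maturities from 1 to 360 months
level, slope, curvature = dns_loadings(r)

# The curvature loading is hump-shaped with its maximum close to 30 months.
r_star = r[np.argmax(curvature)]
```

The level loading is constant, the slope loading decays monotonically, and the curvature loading attains an interior maximum, which is what gives the three factors their long-, short-, and medium-term interpretations.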
\begin{table}[t]
\caption{Estimated numbers of factors for the G7 dataset}
\begin{center}
\footnotesize
\begin{tabular}{c|ccccccc}
& CA & FR & DE & IT & JP & GB & US \\
\hline
$\widehat K_{\text{bic}}$ & 4 & 4 & 4 & 3 & 3 & 5 & 6 \\
$\widehat K_{\text{hqc}}$ & 4 & 4 & 5 & 3 & 3 & 5 & 6 \\
$\widehat K_{\text{fFPE}}$ & 6 & 6 & 6 & 6 & 8 & 6 & 8 \\
number of available times to maturity & 13 & 13 & 13 & 12 & 15 & 17 & 10 \\
lowest available time to maturity (months) & 1 & 24 & 24 & 36 & 3 & 3 & 3 \\
largest available time to maturity (months) & 360 & 360 & 360 & 360 & 240 & 360 & 360 \\
\hline
\end{tabular}
\hspace*{0.1ex} \parbox{13.7cm}{ \vspace{0.5ex} \scriptsize
Note: The estimated numbers of factors using the BIC estimator \eqref{eq:CR-BIC} and the HQC estimator \eqref{eq:CR-HQC} are presented for the full G7 dataset for each country. The maximum numbers of factors and lags are set as $K_{max}=8$ and $p_{max}=8$. The column names reflect the ISO-3166-1 country codes.
}
\end{center}
\label{tab:factorsG7}
\end{table}
\begin{figure}[t]
\caption{Sample loading functions of the G7 dataset}
\centering
\includegraphics[scale=0.21]{./img/figure4.pdf}
\label{fig:loadingsG7}
\hspace*{0.1ex} \parbox{15.5cm}{ \vspace{0.5ex} \scriptsize
Note: The estimated loading functions for Canada, France, Germany, Italy, Japan, and the United Kingdom are presented.
}
\end{figure}
In the first step of our analysis, we estimate the number of factors and the number of lags needed to adequately describe the yield curve series.
For this purpose, we implement the information criterion developed in Section \ref{sec:practical} and the one proposed in \cite{aue2015}.
An interesting finding is that in most cases, we need at least four factors to describe yield curve dynamics as opposed to the three-factor DNS modeling framework (see Table \ref{tab:factorslags}).
A similar picture emerges for the G7 countries, for which the number of estimated factors ranges from 3 to 6 (see Table \ref{tab:factorsG7}).
This result becomes even more prominent if we concentrate on forecasting exercises rather than consistently estimating $K$ and $p$.
The fFPE criterion reports even higher values for the number of required factors. This criterion is designed for predictions and selects $K$ and $p$ such that the forecast MSE is minimized.
Therefore, one of our main findings relevant to practitioners is that the number of factors should not be pre-determined, but instead selected in a data-driven manner.
Turning our attention to the estimated loading functions, we have plotted the first four functions from the DFFM and the DNS loadings in Figure \ref{fig:loadings} and the estimated loading functions for the G7 dataset in Figure \ref{fig:loadingsG7}.
We observe a similar outcome as in \cite{hays2012}.
The first three estimated loading functions inherit shapes similar to the DNS loadings and share similar economic interpretations.
However, their magnitudes and curvatures differ slightly. Furthermore, our analysis adds a fourth factor to the model to improve forecasting performance. To see how much the additional factors improve the performance of the DFFM, we proceed by comparing forecasts.
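The empirical loading functions are ordinary functional principal components, and on a discretized grid they reduce to an eigendecomposition of the sample covariance matrix. The following Python sketch illustrates this with simulated toy curves (a stand-in for the spline-represented yield curves, not the paper's estimator).

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 50  # hypothetical: 200 months, 50 grid points on [a, b]
grid = np.linspace(1, 360, N)

# Toy curves: two smooth components plus noise (stand-in for yield curves).
scores = rng.standard_normal((T, 2)) * np.array([2.0, 0.5])
basis = np.vstack([np.ones(N), np.exp(-grid / 100)])
Y = scores @ basis + 0.05 * rng.standard_normal((T, N))

# Empirical FPCA on the discretized curves: eigendecomposition of the
# sample covariance matrix of the demeaned data.
Yc = Y - Y.mean(axis=0)
C = Yc.T @ Yc / T
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]          # sort eigenvalues descendingly
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Share of variance explained by the first two components.
share = float(eigvals[:2].sum() / eigvals.sum())
```

In this two-factor toy example, the leading two eigenvectors (the discretized loading functions) capture nearly all of the variation, which is the low-dimensional structure the DFFM exploits.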
\begin{figure}[t]
\caption{Graphical representation of the mean squared errors}
\centering
\vspace*{-4ex}
\includegraphics[scale=0.17]{./img/figure5.pdf}
\label{fig:MSEplots}
\vspace*{-5ex}
\hspace*{0.1ex} \parbox{15cm}{ \vspace{0.5ex} \scriptsize
Note: The mean squared errors for different numbers of factors $J$ and lags $m$ according to equation \eqref{eq:MSEsimplified} are plotted. The left figure shows the plot for the JKV dataset, and the right figure shows the plot for the FED dataset.
}
\end{figure}
To evaluate and compare out-of-sample performances, we follow the setting in \cite{diebold2006} and forecast the yield curves sequentially for each month until the end of the sample.
The first prediction is made using the first 120 observations, the second prediction using the first 121 observations, and so on, so that the $h$-step prediction for time $t$ is made using the curves from the beginning of the sample to time $t-h$.
We consider both unrestricted and restricted VAR model specifications for the dynamics of the factors.
The unrestricted VAR model specification \eqref{eq:VAR} with $K$ factors and $p$ lags has $K^2 p$ coefficient parameters, which might be prone to in-sample overfitting.
Since empirical cross-correlation functions indicate little cross-factor interaction, restricted VAR models may be more appropriate.
Therefore, following \cite{hyndman2007}, we also consider univariate autoregressive (AR) models for each factor separately, where the coefficient matrices $A_1, \ldots, A_p$ are restricted to be diagonal.
We use both fixed and data-driven settings for determining the numbers of factors and lags and apply the information criteria \eqref{eq:CR-BIC} and \eqref{eq:CR-HQC} in the data-driven settings for each prediction separately.
The curve predictions \eqref{eq:feasibleforecast} are computed sequentially, the root mean square forecast errors (RMSFE) are evaluated at the observed times to maturity $a=r_1 < \ldots < r_N =b$, and the average root mean square forecast error is given by
\begin{align}
RMSFE(h,K,p) = \sqrt{\frac{1}{N(T-h-119)} \sum_{i=1}^N \sum_{t=120}^{T-h} \Big( \widehat Y_{t+h|t}^{(K,p)}(r_i) - Y_{t+h}(r_i) \Big)^2 }. \label{eq:averageRMSFE}
\end{align}
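In code, the average in equation \eqref{eq:averageRMSFE} is simply a root mean square of the forecast errors over all observed maturities and forecast origins; a minimal Python sketch with hypothetical forecast and realized yields:

```python
import numpy as np

def average_rmsfe(forecasts, realized):
    """Average RMSFE over all forecast dates and observed maturities.

    forecasts, realized: arrays of shape (n_dates, n_maturities) holding
    the predicted and observed yields at the N observed maturities.
    """
    err = np.asarray(forecasts) - np.asarray(realized)
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: two forecast dates, three maturities, constant error of 0.1.
pred = np.array([[1.1, 2.1, 3.1], [1.1, 2.1, 3.1]])
obs = np.array([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
rmsfe = average_rmsfe(pred, obs)  # = 0.1 up to floating-point error
```

In the expanding-window exercise, the rows of these arrays would be the $h$-step predictions made at origins $t = 120, \ldots, T-h$ and the corresponding realized curves evaluated at $r_1, \ldots, r_N$.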
The results are presented in Tables \ref{tab:RMSFE} and \ref{tab:RMSFE2}.
For all datasets, the functional factor model tends to produce more accurate forecasts than the DNS model, which we include as a benchmark.
The factors in the DNS model are estimated by regressing the available yields onto the Nelson-Siegel loadings given by equation \eqref{eq:dns} for a fixed value of $\lambda = 0.0609$.
In a second step, a linear autoregressive model without a constant is fitted to the estimated factors from the first step, which gives rise to a forecast of the entire yield curve.
Following \cite{diebold2006}, we include both unrestricted VAR(1) and univariate AR(1) factor dynamics.
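The distinction between unrestricted and diagonal (AR) factor dynamics can be illustrated with a small least-squares sketch on simulated factors; this is an illustration only, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 300, 3
# Simulate K factors with diagonal AR(1) dynamics as a toy example.
A_true = np.diag([0.8, 0.5, 0.2])
F = np.zeros((T, K))
for t in range(1, T):
    F[t] = A_true @ F[t - 1] + rng.standard_normal(K)

Y, X = F[1:], F[:-1]

# Unrestricted VAR(1): full K x K coefficient matrix via least squares.
A_var = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Restricted version: one univariate AR(1) per factor, i.e. a diagonal
# coefficient matrix.
A_ar = np.diag([
    float(X[:, k] @ Y[:, k] / (X[:, k] @ X[:, k])) for k in range(K)
])
```

With $K$ factors and $p$ lags, the unrestricted VAR estimates $K^2 p$ slope coefficients, whereas the diagonal restriction reduces this to $K p$, which is why the restricted specification can be less prone to in-sample overfitting.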
\begin{table}[t]
\caption{Average root mean square forecast errors for the JKV and FED datasets}
\begin{center}
\scriptsize
\begin{tabular}{lrrrrrrrrrrrr}
$K$ & BIC & HQC & BIC & HQC & 3 & 4 & 6 & 3 & 4 & 6 & DNS & DNS \\
$p$ & BIC & HQC & BIC & HQC & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
factor dynamics & VAR & VAR & AR & AR & VAR & VAR & VAR & AR & AR & AR & VAR & AR \\ \hline
\multicolumn{3}{l}{JKV data} \\
in-sample 1-step & 0.277 & 0.271 & 0.284 & 0.284 & 0.286 & 0.277 & 0.271 & 0.286 & 0.284 & 0.284 & 0.285 & 0.290 \\
1-step ahead & 0.265 & 0.264 & 0.267 & 0.264 & 0.267 & 0.264 & 0.260 & 0.266 & 0.264 & 0.263 & 0.265 & 0.271 \\
3-step ahead & 0.504 & 0.512 & 0.501 & 0.500 & 0.506 & 0.510 & 0.514 & 0.501 & 0.500 & 0.500 & 0.502 & 0.517 \\
6-step ahead & 0.769 & 0.773 & 0.772 & 0.772 & 0.800 & 0.767 & 0.787 & 0.772 & 0.772 & 0.772 & 0.791 & 0.798 \\
\hline
\multicolumn{3}{l}{FED data} \\
in-sample 1-step & 0.226 & 0.226 & 0.229 & 0.229 & 0.230 & 0.226 & 0.220 & 0.230 & 0.229 & 0.226 & 0.247 & 0.250 \\
1-step ahead & 0.198 & 0.200 & 0.174 & 0.174 & 0.182 & 0.198 & 0.198 & 0.176 & 0.174 & 0.172 & 0.205 & 0.208 \\
3-step ahead & 0.397 & 0.398 & 0.327 & 0.328 & 0.349 & 0.397 & 0.403 & 0.327 & 0.327 & 0.330 & 0.352 & 0.366 \\
6-step ahead & 0.605 & 0.604 & 0.498 & 0.499 & 0.542 & 0.606 & 0.609 & 0.498 & 0.498 & 0.501 & 0.524 & 0.549 \\
\hline
\end{tabular}
\hspace*{0.1ex} \parbox{15.5cm}{ \vspace{0.5ex} \scriptsize
Note: The average root mean square forecast errors from equation \eqref{eq:averageRMSFE} are presented. The first two rows indicate the selected number of factors and lags, and the third row indicates whether unrestricted VAR dynamics or AR dynamics are used. The results from the DNS model are given in the last two columns.
}
\end{center}
\label{tab:RMSFE}
\end{table}
\begin{table}[tp]
\caption{Average root mean square forecast errors for the G7 dataset}
\begin{center}
\scriptsize
\begin{tabular}{lrrrrrrrrrrrr}
$K$ & BIC & HQC & BIC & HQC & 3 & 4 & 6 & 3 & 4 & 6 & DNS & DNS \\
$p$ & BIC & HQC & BIC & HQC & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
factor dynamics & VAR & VAR & AR & AR & VAR & VAR & VAR & AR & AR & AR & VAR & AR \\ \hline
\multicolumn{3}{l}{Canada} \\
in-sample 1-step & 0.199 & 0.199 & 0.201 & 0.201 & 0.201 & 0.199 & 0.193 & 0.203 & 0.201 & 0.199 & 0.214 & 0.223 \\
1-step ahead & 0.180 & 0.179 & 0.176 & 0.176 & 0.180 & 0.180 & 0.176 & 0.178 & 0.176 & 0.173 & 0.198 & 0.208 \\
3-step ahead & 0.351 & 0.351 & 0.336 & 0.336 & 0.350 & 0.350 & 0.357 & 0.337 & 0.336 & 0.338 & 0.350 & 0.381 \\
6-step ahead & 0.549 & 0.548 & 0.511 & 0.510 & 0.549 & 0.546 & 0.563 & 0.511 & 0.510 & 0.511 & 0.518 & 0.550 \\
\hline
\multicolumn{3}{l}{France} \\
in-sample 1-step & 0.197 & 0.197 & 0.200 & 0.200 & 0.202 & 0.197 & 0.195 & 0.204 & 0.200 & 0.199 & 0.206 & 0.211 \\
1-step ahead & 0.208 & 0.208 & 0.206 & 0.205 & 0.208 & 0.205 & 0.204 & 0.208 & 0.204 & 0.202 & 0.206 & 0.218 \\
3-step ahead & 0.402 & 0.401 & 0.391 & 0.391 & 0.396 & 0.397 & 0.400 & 0.392 & 0.391 & 0.390 & 0.364 & 0.411 \\
6-step ahead & 0.634 & 0.631 & 0.604 & 0.604 & 0.614 & 0.622 & 0.630 & 0.604 & 0.603 & 0.603 & 0.534 & 0.625 \\
\hline
\multicolumn{3}{l}{Germany} \\
in-sample 1-step & 0.199 & 0.196 & 0.200 & 0.198 & 0.205 & 0.199 & 0.196 & 0.206 & 0.200 & 0.198 & 0.203 & 0.208 \\
1-step ahead & 0.208 & 0.205 & 0.205 & 0.202 & 0.211 & 0.204 & 0.202 & 0.209 & 0.201 & 0.199 & 0.202 & 0.214 \\
3-step ahead & 0.399 & 0.397 & 0.389 & 0.388 & 0.397 & 0.398 & 0.400 & 0.390 & 0.388 & 0.387 & 0.372 & 0.415 \\
6-step ahead & 0.616 & 0.616 & 0.598 & 0.598 & 0.611 & 0.617 & 0.617 & 0.599 & 0.598 & 0.598 & 0.553 & 0.631 \\
\hline
\multicolumn{3}{l}{Italy} \\
in-sample 1-step & 0.323 & 0.318 & 0.324 & 0.321 & 0.323 & 0.320 & 0.313 & 0.324 & 0.322 & 0.319 & 0.319 & 0.329 \\
1-step ahead & 0.348 & 0.351 & 0.341 & 0.341 & 0.341 & 0.341 & 0.337 & 0.342 & 0.339 & 0.337 & 0.335 & 0.356 \\
3-step ahead & 0.678 & 0.706 & 0.625 & 0.625 & 0.620 & 0.639 & 0.646 & 0.625 & 0.624 & 0.623 & 0.609 & 0.664 \\
6-step ahead & 0.900 & 0.956 & 0.863 & 0.865 & 0.850 & 0.884 & 0.884 & 0.863 & 0.862 & 0.862 & 0.796 & 0.908 \\
\hline
\multicolumn{3}{l}{Japan} \\
in-sample 1-step & 0.123 & 0.110 & 0.125 & 0.115 & 0.123 & 0.122 & 0.119 & 0.125 & 0.124 & 0.122 & 0.166 & 0.169 \\
1-step ahead & 0.089 & 0.085 & 0.089 & 0.086 & 0.089 & 0.088 & 0.086 & 0.089 & 0.089 & 0.085 & 0.150 & 0.148 \\
3-step ahead & 0.168 & 0.157 & 0.172 & 0.158 & 0.168 & 0.168 & 0.172 & 0.171 & 0.172 & 0.171 & 0.198 & 0.189 \\
6-step ahead & 0.258 & 0.228 & 0.273 & 0.236 & 0.258 & 0.259 & 0.270 & 0.273 & 0.273 & 0.273 & 0.261 & 0.240 \\
\hline
\multicolumn{3}{l}{United Kingdom} \\
in-sample 1-step & 0.214 & 0.214 & 0.217 & 0.217 & 0.256 & 0.222 & 0.212 & 0.256 & 0.223 & 0.216 & 0.238 & 0.240 \\
1-step ahead & 0.217 & 0.215 & 0.214 & 0.213 & 0.247 & 0.218 & 0.214 & 0.246 & 0.222 & 0.211 & 0.228 & 0.230 \\
3-step ahead & 0.436 & 0.431 & 0.418 & 0.419 & 0.427 & 0.415 & 0.432 & 0.427 & 0.423 & 0.418 & 0.412 & 0.421 \\
6-step ahead & 0.702 & 0.691 & 0.645 & 0.645 & 0.645 & 0.644 & 0.690 & 0.645 & 0.647 & 0.645 & 0.621 & 0.630 \\
\hline
\multicolumn{3}{l}{United States} \\
in-sample 1-step & 0.225 & 0.225 & 0.233 & 0.233 & 0.273 & 0.252 & 0.225 & 0.274 & 0.255 & 0.233 & 0.288 & 0.293 \\
1-step ahead & 0.239 & 0.236 & 0.233 & 0.232 & 0.272 & 0.262 & 0.230 & 0.272 & 0.257 & 0.230 & 0.239 & 0.248 \\
3-step ahead & 0.488 & 0.486 & 0.465 & 0.466 & 0.472 & 0.478 & 0.474 & 0.479 & 0.470 & 0.466 & 0.442 & 0.470 \\
6-step ahead & 0.775 & 0.770 & 0.731 & 0.733 & 0.716 & 0.743 & 0.760 & 0.737 & 0.732 & 0.733 & 0.676 & 0.722 \\
\hline
\end{tabular}
\hspace*{0.1ex} \parbox{15.5cm}{ \vspace{0.5ex} \scriptsize
Note: The average root mean square forecast errors from equation \eqref{eq:averageRMSFE} are presented. The first two rows indicate the selected number of factors and lags, and the third row indicates whether unrestricted VAR dynamics or AR dynamics are used. The results from the DNS model are given in the last two columns.
}
\end{center}
\label{tab:RMSFE2}
\end{table}
\section{Conclusion}
\label{sec:conc}
This paper provides an in-depth study of the factor model for functional time series, including identification, estimation, and prediction. From a practical point of view, the DFFM is an attractive modeling framework for infinite-dimensional temporal data, as it allows analyses and predictions to be performed via a low-dimensional common component of the data. Our results are useful for a broad range of applications in which the number of factors in the common component is unknown, and the idiosyncratic component potentially has strong cross-correlation and is weakly correlated with the common component. We have developed a novel, simple-to-use method that yields consistent estimates of the number of factors and their dynamics. A Monte Carlo study and an empirical illustration to yield curves show that our method provides an attractive modeling and predictive framework.
Several methodological problems await further analysis. The first is to develop distributional and inferential theory for the estimators beyond the consistency results obtained in this paper; for instance, in the empirical illustration to yield curves, it might be interesting to provide confidence bands or to test restrictions on the loading functions. The second is to go beyond the weak stationarity assumption on the factors, for instance by letting some of the factors have short memory while others are permitted to have long memory (persistence). The third is to develop a predictive methodology for the factors using semiparametric or nonparametric models.
\section*{Acknowledgments}
We are thankful to Jörg Breitung, Tobias Hartl, Alois Kneip, Malte Knüppel, and Dominik Liebl for very helpful comments and suggestions. Many thanks also to Justin Franken for his assistance in implementing the accompanying \texttt{R}-package.
The usage of the CHEOPS HPC cluster for parallel computing is gratefully acknowledged.
The first author also gratefully acknowledges the financial support from the Argelander Grants of the University of Bonn, and the second author gratefully acknowledges the financial support from Juan de la Cierva Incorporaci\'{o}n, IJC2019-041742-I.
\section*{Supporting Information}
An accompanying \texttt{R}-package is available at \url{https://github.com/ottosven/dffm}.
\subsection{Proof of Theorem \ref{thm:consistency}}
Before presenting the main proof of Theorem \ref{thm:consistency}, we show the following auxiliary lemma.
\begin{lemma}\label{lem:factorscovariancematrix}
Let Assumptions \ref{as:factors} and \ref{as:errors} hold for model \eqref{eq:factormodel}--\eqref{eq:VAR}. Then, for any $0\leq h<\infty$, as $T \to \infty$, we have
\begin{align*}
E\bigg\| \frac{1}{T} \sum_{t=h+1}^T F_t F_{t-h}' - E[F_t F_{t-h}'] \bigg\|_2^2 = O(T^{-1}).
\end{align*}
\end{lemma}
\begin{proof} The problem can be rewritten as follows:
\begin{align}\label{eq:problem}
E \bigg\| \frac{1}{T} \sum_{t=h+1}^T F_t F_{t-h}' - E[F_t F_{t-h}'] \bigg\|_2^2 =\frac{1}{T^2} \sum_{t,s=h+1}^T \sum_{m,l=1}^K Cov\big[F_{m,t}F_{l,t-h},F_{m,s}F_{l,s-h}\big].
\end{align}
Since the VAR($p$) process $F_t$ is stable by Assumption \ref{as:factors}(d), the inverse lag polynomial $B(L) = \sum_{j=0}^\infty B_j L^j = (I - \sum_{i=1}^p A_i L^i)^{-1}$ exists, and $F_t$ has the vector moving average representation
\begin{align*}
F_{t} = \sum_{j=0}^\infty B_j \eta_{t-j},
\end{align*}
where $\sum_{j=0}^\infty \left\Vert B_{j} \right\Vert_2 < \infty$, or, equivalently
\begin{align*}
F_{l,t} = \sum_{j=0}^\infty \sum_{k=1}^{K}b_j^{(l,k)} \eta_{k,t-j},
\end{align*}
where $b_j^{(l,k)}$ is the $(l,k)$ element of the matrix $B_j$ and $\eta_{k,t-j}$ is the $k$-th element of the vector $\eta_{t-j}$.
Then,
\begin{align*}
&Cov[F_{m,t} F_{l,t-h}, F_{m,s} F_{l,s-h}] \\
&= \sum_{i_1,i_2,i_3,i_4=0}^\infty \sum_{k_1,k_2,k_3,k_4=1}^{K} b_{i_1}^{(m,k_1)}b_{i_2}^{(l,k_2)}b_{i_3}^{(m,k_3)}b_{i_4}^{(l,k_4)} Cov[\eta_{k_1,t-i_1}\eta_{k_2,t-h-i_2},\eta_{k_3,s-i_3}\eta_{k_4,s-h-i_4}],
\end{align*}
and, by Assumption \ref{as:factors}(e), there exists a constant $C < \infty$ such that
\begin{align*}
\sup_{i_1, i_2, i_3, i_4 \in \mathbb N} \sum_{k_1, k_2, k_3, k_4=1}^K \bigg| \sum_{t,s=h+1}^T Cov\big[\eta_{k_1,t-i_1}\eta_{k_2,t-h-i_2},\eta_{k_3,s-i_3}\eta_{k_4,s-h-i_4}\big] \bigg| < T \cdot C.
\end{align*}
Consequently, for \eqref{eq:problem}, we obtain
\begin{align*}
\frac{1}{T^2} \sum_{t,s=h+1}^T \sum_{m,l=1}^K Cov\big[F_{m,t}F_{l,t-h},F_{m,s}F_{l,s-h}\big] \leq \frac{K^6 C}{T} \bigg( \sum_{i=0}^\infty \|B_i\|_{\infty} \bigg)^4 = O(T^{-1}),
\end{align*}
where $ \Vert A \Vert_{\infty}=\max_{i,j}\{|a_{i,j}|\}$ is the maximum norm, and the final step follows by the matrix inequality $\Vert A \Vert_{\infty}\leq\Vert A \Vert_{2}$ (see, e.g., \citealt{luetkepohl1996})
and the fact that $\sum_{j=0}^\infty \Vert B_{j} \Vert_2 < \infty$.
\end{proof}
\paragraph{Main proof of Theorem \ref{thm:consistency}.}
\noindent\emph{Proof of item (a).}
First, we decompose
\begin{equation*}
E \| \widehat \mu - \mu \|^2
= E \int_a^b \Big( \frac{1}{T} \sum_{t=1}^T \big( \Psi'(r) F_t + \epsilon_t(r) \big) \Big)^2 \,\mathrm{d} r = A_T + B_T + C_T,
\end{equation*}
where
\begin{align*}
A_T &= \frac{1}{T^2} \sum_{t,h=1}^T \sum_{l,m=1}^K E[F_{l,t} F_{m,h}] \langle \psi_l, \psi_m \rangle, \quad
B_T = \frac{2}{T^2} \sum_{t,h=1}^T \sum_{l=1}^K E[F_{l,t} \langle \psi_l, \epsilon_h \rangle], \\
C_T &= \frac{1}{T^2} \sum_{t,h=1}^T E[\langle \epsilon_t, \epsilon_h \rangle].
\end{align*}
By Assumption \ref{as:factors}(d), the factors follow a stable VAR($p$), implying that $F_t$ has the vector moving average representation
\begin{align*}
F_{t} = \sum_{j=0}^\infty B_j \eta_{t-j},
\end{align*}
such that $\sum_{j=0}^\infty \left\Vert B_{j} \right\Vert_2 < \infty$ and, by Assumption \ref{as:factors}(e),
$\sum_{h=-\infty}^\infty \left\Vert E[F_{t} F_{t-h}'] \right\Vert_2<\infty$.
Using Assumption \ref{as:factors}(a) we have
\begin{eqnarray*}
\left|A_T\right| &=& \bigg| \frac{1}{T^2} \sum_{t,h=1}^T \sum_{l=1}^K E[F_{l,t} F_{l,h}]\bigg|\leq \frac{C}{T^2} \sum_{t,h=1}^T \left\Vert E[F_{t} F_{h}']\right\Vert_2 = O(T^{-1}),
\end{eqnarray*}
where $C>0$ denotes a constant.
For the term $B_T$ we make use of the triangle and the Cauchy-Schwarz inequality, which yield
\begin{eqnarray*}
\left|B_T\right| &\leq& 2\sum_{l=1}^K\bigg| E\bigg[\bigg(\frac{1}{T} \sum_{t=1}^T F_{l,t}\bigg) \Big\langle \psi_l, \frac{1}{T} \sum_{h=1}^T\epsilon_h \Big\rangle\bigg]\bigg|\\
&\leq& 2\sum_{l=1}^K \sqrt{E\bigg[\Big(\frac{1}{T} \sum_{t=1}^T F_{l,t}\Big)^2\bigg]E\bigg[\Big\langle \psi_l, \frac{1}{T} \sum_{h=1}^T\epsilon_h \Big\rangle^2\bigg]}.
\end{eqnarray*}
Since $\sum_{h=-\infty}^\infty \left\Vert E[F_{t} F_{t-h}'] \right\Vert_2<\infty$, we have that $E[(T^{-1} \sum_{t=1}^T F_{l,t})^2]=O(T^{-1})$. From the triangle inequality, the orthonormality of the loadings, and the martingale difference sequence property of $\epsilon_t$, it follows that $E[\langle \psi_l, T^{-1} \sum_{h=1}^T\epsilon_h \rangle^2] \leq E[\| T^{-1} \sum_{h=1}^T \epsilon_h \|^2] = O(T^{-1})$. Hence, $B_T=O(T^{-1})$.
Finally, for the term $C_T$, Assumption \ref{as:errors}(a) implies
$$ |C_T| \leq \frac{1}{T^2} \sum_{t,h=1}^T |E[\langle \epsilon_t, \epsilon_h \rangle]| \leq \frac{1}{T^2}\sum_{t=1}^T E \Vert \epsilon_t \Vert^2 = O(T^{-1}),$$
and, consequently, $E \Vert \widehat \mu - \mu \Vert^2 = A_T+B_T+C_T=O(T^{-1})$.
\noindent\emph{Proof of item (b).}
Without loss of generality and for simplicity of exposition, we assume that $\widehat \mu(r) = \mu(r)$ for all $r\in[a,b]$.
The result for $\widehat \mu(r) \neq \mu(r)$ follows from (a).
Then, we have
\begin{align*}
E \| \widehat C_Y - C_Y \|^2_\mathcal{S}
= E \int_a^b \int_a^b (\widehat c(r,s) - c(r,s))^2 \,\mathrm{d} r \,\mathrm{d} s,
\end{align*}
where
\begin{align*}
\widehat c(r,s) &= \frac{1}{T} \sum_{t=1}^T \bigg( \sum_{l=1}^K F_{l,t} \psi_l(r) + \epsilon_t(r) \bigg) \bigg( \sum_{m=1}^K F_{m,t} \psi_m(s) + \epsilon_t(s) \bigg), \\
c(r,s) &= \sum_{l,m=1}^K \lambda_l \psi_l(r) \psi_m(s) 1_{\{l=m\}} + \delta(r,s).
\end{align*}
Consider the decomposition $$\widehat c(r,s) - c(r,s) = A_T(r,s) + B_T(r,s) + C_T(r,s) + D_T(r,s),$$ where
\begin{align*}
A_T(r,s) &= \frac{1}{T} \sum_{t=1}^T \sum_{l,m=1}^K \big( F_{l,t} F_{m,t} - \lambda_l 1_{\{l=m\}} \big) \psi_l(r) \psi_m(s), \\
B_T(r,s) &= \frac{1}{T} \sum_{t=1}^T \sum_{l=1}^K F_{l,t} \psi_l(r) \epsilon_t(s), \quad
C_T(r,s) = \frac{1}{T} \sum_{t=1}^T \sum_{l=1}^K F_{l,t} \psi_l(s) \epsilon_t(r), \\
D_T(r,s) &= \frac{1}{T} \sum_{t=1}^T \big( \epsilon_t(r) \epsilon_t(s) - \delta(r,s) \big).
\end{align*}
It suffices to show that the expected squared $L^2$ norms of these four terms are $O\left(T^{-1}\right)$.
The bound for the original problem $E\int_a^b \int_a^b (\widehat c(r,s) -c(r,s) )^2 \,\mathrm{d} r \,\mathrm{d} s=O\left(T^{-1}\right)$ then follows from the Cauchy-Schwarz inequality.
For the first term, we make use of the auxiliary Lemma \ref{lem:factorscovariancematrix}, i.e.,
\begin{align*}
E\int_a^b \int_a^b \left( A_T(r,s)\right)^2\,\mathrm{d} r \,\mathrm{d} s
&= E\int_a^b \int_a^b \bigg( \Psi'(r) \bigg(\frac{1}{T}\sum_{t=1}^{T}F_tF_t'-E[F_tF_t']\bigg)\Psi(s)\bigg)^2\,\mathrm{d} r \,\mathrm{d} s \\
&= E\bigg\| \frac{1}{T} \sum_{t=1}^T F_t F_{t}' - E[F_t F_{t}'] \bigg\|_2^2 = O(T^{-1}).
\end{align*}
For the second term we have
\begin{align*}
E\int_a^b \int_a^b \left( B_T(r,s)\right)^2 \,\mathrm{d} r \,\mathrm{d} s
&= E\int_a^b \sum_{l=1}^K \bigg( \frac{1}{T} \sum_{t=1}^T F_{l,t} \epsilon_t(s)\bigg)^2 \,\mathrm{d} s \\
&= E \int_a^b \bigg\| \frac{1}{T} \sum_{t=1}^T F_t \epsilon_t(s) \bigg\|_2^2 \,\mathrm{d} s = O(T^{-1}),
\end{align*}
where the first equality follows from Assumption \ref{as:factors}(a), and the last equality follows from Assumption \ref{as:errors}(d).
Since $C_T(r,s) = B_T(s,r)$, the proof for the third term follows analogously.
Finally, for the last term,
\begin{align*}
&E\int_a^b \int_a^b \left( D_T(r,s)\right)^2 \,\mathrm{d} r \,\mathrm{d} s
= \frac{1}{T^2} \sum_{t=1}^T E \int_a^b \int_a^b (\epsilon_t(r) \epsilon_t(s) - \delta(r,s))^2 \,\mathrm{d} r \,\mathrm{d} s \\
&\phantom{=} \ + \frac{2}{T^2} \sum_{t=1}^{T-1} \sum_{q=t+1}^T E \int_a^b \int_a^b (\epsilon_q(r) \epsilon_q(s) - \delta(r,s))(\epsilon_t(r) \epsilon_t(s) - \delta(r,s)) \,\mathrm{d} r \,\mathrm{d} s \\
&= \frac{1}{T^2} \sum_{t=1}^T \bigg[ E\|\epsilon_t\|^4 - \int_a^b \int_a^b \delta^2(r,s) \,\mathrm{d} r \,\mathrm{d} s \bigg] \\
&\phantom{=} \ + \frac{2}{T^2} \sum_{t=1}^{T-1} \sum_{q=t+1}^T \int_a^b \int_a^b \Big[ E[\epsilon_q(r)\epsilon_q(s)\epsilon_t(r)\epsilon_t(s)] - \delta^2(r,s) \Big] \,\mathrm{d} r \,\mathrm{d} s = O(T^{-1}),
\end{align*}
where the last equality follows from Assumption \ref{as:errors}(a), implying that
\begin{align*}
&\lim_{T \to \infty} \frac{1}{T^2} \sum_{t=1}^{T-1} \sum_{q=t+1}^T E[\epsilon_q(r)\epsilon_q(s)\epsilon_t(r)\epsilon_t(s)] \\
&= \lim_{T \to \infty} \frac{1}{T^2} \sum_{t=1}^{T-1} \sum_{q=t+1}^T E[E[\epsilon_q(r)\epsilon_q(s)| \mathcal{A}_{q-1}]\epsilon_t(r)\epsilon_t(s)]
= \delta^2(r,s).
\end{align*}
\noindent\emph{Proof of item (c).} Lemma 2.2 in \cite{horvath2012} implies
\begin{align*}
\max_{1 \leq l \leq K} |\widehat \lambda_l - \lambda_l| \leq \Vert \widehat C_Y - C_Y \Vert_\mathcal{S},
\end{align*}
and the result follows from (b). \\
\noindent\emph{Proof of item (d).}
Lemma 2.3 in \cite{horvath2012} implies
\begin{align*}
\max_{1 \leq l \leq K} \Vert s_l \widehat \psi_l - \psi_l \Vert \leq \frac{2 \sqrt 2}{\alpha} \Vert \widehat C_Y - C_Y \Vert_\mathcal{S},
\end{align*}
where $ \alpha = \min \{ \lambda_1 - \lambda_2, \lambda_2 - \lambda_3, \ldots, \lambda_{K-1} - \lambda_K, \lambda_K \}$, and the result follows from (b).
\subsection{Proof of Theorem \ref{thm:Bias}} \label{appendixThm3}
To facilitate the understanding of the main proof, we first introduce some notation and auxiliary results.
\begin{itemize}
\item[(i)] The numbers of selected factors and lags are given by $J$ and $m$, where $0 \leq J \leq K_{max}$ and $0 \leq m \leq p_{max}$.
\item[(ii)] The selected empirical FPCs are denoted as $\{\widehat \psi_1, \ldots, \widehat \psi_T\}$. The first $J$ empirical FPCs are stacked into the functional vector $\widehat \Psi^{(J)} = (\widehat \psi_1, \ldots, \widehat \psi_J)'$.
Note that the empirical FPCs are not uniquely defined since $\{- \widehat \psi_1, \ldots, - \widehat \psi_T\}$ are also orthonormal eigenfunctions of $\widehat C_Y$, so the analysis is affected by the selected signs of $\widehat \psi_1, \ldots, \widehat \psi_{K_{max}}$. In the further steps, we condition on these signs.
\item[(iii)]
Let $\{\phi_l\}$ be a sequence of orthonormal eigenfunctions of $\delta(r,s)$ that correspond to the descendingly ordered eigenvalues $\{\zeta_l\}$.
We determine the signs of the first $(K_{max} - K)$ orthonormal eigenfunctions by fixing the sign as $\text{sign}(\langle \phi_j, \widehat \psi_{K+j} \rangle) = 1$ for $1 \leq j \leq (K_{max} - K)$.
Then, under the conditions of Theorem \ref{thm:Bias}, $\phi_1, \ldots, \phi_{K_{max} - K}$ are uniquely determined conditionally on the sign of the chosen empirical FPCs for a given sample $\{Y_1, \ldots, Y_T\}$.
\item[(iv)]
Let $s_l = \text{sign}(\langle \widehat \psi_l, \psi_l \rangle)$.
The sequence $\{\varphi_l\}$ with $\varphi_l = s_l \psi_l$ for $l \leq K$ and $\varphi_l = \phi_{l-K}$ for $l > K$
forms a sequence of orthonormal eigenfunctions of $C_Y$.
Moreover, $\varphi_l$ is uniquely identified for $l=1, \ldots, K_{max}$ conditional on the sign of the selected empirical FPCs since all eigenvalues of $C_Y$ have multiplicity 1.
The first $J$ eigenfunctions are stacked into the functional vector $\Phi^{(J)} = (\varphi_1, \ldots, \varphi_J)'$.
\item[(v)] Define the true FPC scores as $\widetilde F_{l,t} = \langle Y_t - \mu, \varphi_l \rangle$, so that
\begin{align*}
\widetilde F_{l,t} = \begin{cases} s_l F_{l,t} + \langle \epsilon_t, \varphi_l \rangle, & \text{if} \ l \leq K, \\
\langle \epsilon_t, \varphi_l \rangle, & \text{if} \ l > K.
\end{cases}
\end{align*}
For $l=1, \ldots, K_{max}$, the scores $\widetilde F_{l,t}$ are uniquely identified conditional on the eigenfunctions $\varphi_l$ defined above.
Moreover, we use the notations
\begin{align*}
\widetilde F_t^{(J)}=(\widetilde F_{1,t},\widetilde F_{2,t},...,\widetilde F_{J,t})', \quad \widehat{F}_t^{(J)}=(\widehat{F}_{1,t},\widehat{F}_{2,t},...,\widehat{F}_{J,t})'
\end{align*}
where $\widehat{F}_{l,t}=\langle Y_t - \widehat\mu, \widehat\psi_l \rangle$ are the empirical FPC scores.
The stacked score vectors with $m$ lags are defined as
\begin{align*}
\bm{\widetilde x}_{t-1}^{(J,m)}=\big(\big(\widetilde {F}_{t-1}^{(J)}\big)',\big(\widetilde {F}_{t-2}^{(J)}\big)',...,\big(\widetilde {F}_{t-m}^{(J)}\big)'\big)', \quad \bm{ \widehat x}_{t-1}^{(J,m)}=\big(\big(\widehat{F}_{t-1}^{(J)}\big)',\big(\widehat{F}_{t-2}^{(J)}\big)',...,\big(\widehat{F}_{t-m}^{(J)}\big)'\big)'
\end{align*}
\item[(vi)] For a selected number of factors $J$ and lags $m$, we consider the completion matrices
\begin{equation*}
\bm{S}_J = \begin{cases} \big[\text{diag}(s_1, \ldots, s_K), \bm 0_{K,J-K} \big], & \text{if} \ J > K, \\
\text{diag}(s_1, \ldots, s_K), & \text{if} \ J \leq K,
\end{cases} \qquad
\bm R_J = \begin{cases}
\big[ I_K, \bm 0_{K,J-K} \big], & \text{if} \ J < K, \\
I_K, & \text{if} \ J \geq K,
\end{cases}
\end{equation*}
the aligned and sign-adjusted true stacked lag coefficient matrix
\begin{align*}
\bm A^* = \begin{cases}
\big[ \bm S_J'A_1 \bm S_J, \ldots, \bm S_J' A_p \bm S_J, \bm 0_{J, (m-p)J} \big], & \text{if} \ m > p, \\
\big[ \bm S_J' A_1 \bm S_J, \ldots, \bm S_J' A_p \bm S_J \big], & \text{if} \ m \leq p,
\end{cases}
\end{align*}
and the aligned stacked estimated lag matrix
\begin{align*}
\bm{\widehat A}^* =
\begin{cases}
\big[ \bm R_J' \widehat A_1^{(J)} \bm R_J, \ldots, \bm R_J' \widehat A_m^{(J)} \bm R_J, \bm 0_{J,(p-m)J} \big], & \text{if} \ m<p, \\
\big[ \bm R_J' \widehat A_1^{(J)} \bm R_J, \ldots, \bm R_J' \widehat A_m^{(J)} \bm R_J \big], & \text{if} \ m \geq p.
\end{cases}
\end{align*}
\item[(vii)] For the correct numbers of factors and lags, the estimated coefficient matrix can be represented as $\bm{\widehat A}_{(K,p)} = \widehat \Gamma_{(K,p)} \widehat \Sigma_{(K,p)}^{-1}$, where
\begin{align*}
\widehat\Gamma_{(J,m)}=\frac{1}{T}\sum_{t=m+1}^T \widehat{F}_t^{(J)}\big(\bm{\widehat x}_{t-1}^{(J,m)}\big)', \quad
\widehat\Sigma_{(J,m)}=\frac{1}{T}\sum_{t=m+1}^T \bm{\widehat x}_{t-1}^{(J,m)}\big(\bm{\widehat x}_{t-1}^{(J,m)}\big)'.
\end{align*}
Their counterparts with unknown FPC scores are
\begin{align*}
\widetilde\Gamma_{(J,m)}=\frac{1}{T}\sum_{t=m+1}^T \widetilde {F}_t^{(J)} \big(\bm{\widetilde x}_{t-1}^{(J,m)}\big)', \quad
\widetilde\Sigma_{(J,m)}=\frac{1}{T}\sum_{t=m+1}^T \bm{\widetilde x}_{t-1}^{(J,m)} \big(\bm{\widetilde x}_{t-1}^{(J,m)}\big)',
\end{align*}
and using the population moments, we define
\begin{align*}
\Gamma_{(J,m)}=\lim_{T \to \infty} \frac{1}{T} \sum_{t=m+1}^T E\big[\widetilde {F}_t^{(J)}\big(\bm{\widetilde x}_{t-1}^{(J,m)}\big)'\big], \quad
\Sigma_{(J,m)}= \lim_{T \to \infty} \frac{1}{T} \sum_{t=m+1}^T E\big[\bm{\widetilde x}_{t-1}^{(J,m)}\big(\bm{\widetilde x}_{t-1}^{(J,m)}\big)'\big].
\end{align*}
Note that all eigenvalues of $\Sigma_{(J,m)}$ are bounded and bounded away from zero so that its inverse exists.
Let the vector of stacked true and sign-adjusted lagged factors and the true sign-adjusted stacked lag matrix be abbreviated as
\begin{align*}
\bm x_{t-1} := ((\bm S_K F_{t-1})', \ldots, (\bm S_K F_{t-p})')', \quad
\bm{A}_{(S)} := \big[ \bm S_K A_1 \bm S_K, \ldots, \bm S_K A_p \bm S_K \big].
\end{align*}
The VAR($p$) process $F_t$ has the sign-adjusted representation
\begin{align*}
\bm S_K F_t = \sum_{i=1}^p \bm S_K A_i \bm S_K \bm S_K F_{t-i} + \bm S_K \eta_t = \bm{A}_{(S)} \bm x_{t-1} + \bm S_K \eta_t
\end{align*}
and satisfies the normal equation
\begin{align*}
E[\bm S_K F_t \bm x_{t-1}'] = \bm{A}_{(S)} E[\bm x_{t-1} \bm x_{t-1}'].
\end{align*}
By Assumption \ref{as:errors}(c),
\begin{align*}
&\frac{1}{T} \sum_{t=1}^T \big( E[\bm S_K F_t \bm x_{t-1}'] - E[\widetilde {F}_t^{(K)}(\bm{\widetilde x}_{t-1}^{(K,p)})'] \big) = O(T^{-1/2}), \\
&\frac{1}{T} \sum_{t=1}^T \big( E[\bm x_{t-1} \bm x_{t-1}'] - E[\bm{\widetilde x}_{t-1}^{(K,p)}(\bm{\widetilde x}_{t-1}^{(K,p)})'] \big) = O(T^{-1/2}).
\end{align*}
Therefore, the true stacked sign-adjusted coefficient matrix is represented as
\begin{align*}
\bm{A}_{(S)} = \Gamma_{(K,p)} \Sigma_{(K,p)}^{-1}.
\end{align*}
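To sketch the argument: by the displayed moment approximations, the limits defining $\Gamma_{(K,p)}$ and $\Sigma_{(K,p)}$ coincide with those of the true moments, so that
\begin{align*}
\Gamma_{(K,p)} = \lim_{T \to \infty} \frac{1}{T} \sum_{t=p+1}^T E[\bm S_K F_t \bm x_{t-1}'] = \bm{A}_{(S)} \lim_{T \to \infty} \frac{1}{T} \sum_{t=p+1}^T E[\bm x_{t-1} \bm x_{t-1}'] = \bm{A}_{(S)} \Sigma_{(K,p)},
\end{align*}
using the normal equation, and post-multiplying by $\Sigma_{(K,p)}^{-1}$, which exists since the eigenvalues of $\Sigma_{(K,p)}$ are bounded away from zero, yields the stated representation.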
Moreover, by Assumptions \ref{as:errors}(a) and (d), $T^{-1} \sum_{t=1}^T E[\widetilde F_{l,t} \bm{\widetilde x}_{t-1}^{(J,m)}] \to 0$ for $l > K$.
Therefore, if $J \geq K$ and $m \geq p$,
\begin{align}
\bm A^* = \Gamma_{(J,m)} \Sigma_{(J,m)}^{-1}, \quad \bm{\widehat A}^* = \bm{\widehat A}_{(J,m)} = \widehat \Gamma_{(J,m)} \widehat \Sigma_{(J,m)}^{-1}. \label{eq:lagmatrices}
\end{align}
\end{itemize}
\begin{lemma}\label{lem:Estfactorscovariancematrix}
Under the conditions of Theorem \ref{thm:Bias}, and for $0\leq s \leq m$, as $T\to\infty$, we have
\begin{align*}
\max_{\substack{0 \leq m\leq p_{max} \\ 0 \leq l,j\leq K_{max}}}\Bigg| \frac{1}{T} \sum_{t=m+1}^T \Big(\widehat{F}_{l,t}\widehat{F}_{j,t-s} - \widetilde{F}_{l,t} \widetilde {F}_{j,t-s}\Big) \Bigg| = O_p(T^{-1/2}).
\end{align*}
\end{lemma}
\begin{proof}
Note first that the empirical FPC scores admit the decomposition
\begin{align}
\widehat F_{l,t} &= \langle Y_t - \widehat \mu, \widehat \psi_l \rangle
= \widetilde F_{l,t} + \langle Y_t - \mu, \widehat \psi_l - \varphi_l \rangle + \langle \mu - \widehat \mu, \widehat \psi_l \rangle \nonumber \\
&= \widetilde F_{l,t} + \sum_{k=1}^K F_{k,t} \langle \psi_k, \widehat \psi_l - \varphi_l \rangle + \langle \epsilon_t, \widehat \psi_l - \varphi_l \rangle + \langle \mu - \widehat \mu, \widehat \psi_l \rangle
= \widetilde F_{l,t} + R_{l,t}, \label{A2.decomp}
\end{align}
where
\begin{align}
R_{l,t} = \sum_{k=1}^K F_{k,t} \langle \psi_k, \widehat \psi_l - \varphi_l \rangle + \langle \mu - \widehat \mu, \widehat \psi_l \rangle + \langle \epsilon_t, \widehat \psi_l - \varphi_l \rangle. \label{eq:decomp2}
\end{align}
Throughout the proof of this lemma, we repeatedly use the fact that
\begin{equation}\label{A2.eq1}
\big|\langle\mu-\widehat{\mu}, \widehat{\psi}_l\rangle\big|\leq\left\Vert\mu-\widehat{\mu}\right\Vert=O_p(T^{-1/2}),
\end{equation}
for all $l$, which follows from the Cauchy-Schwarz inequality, the orthonormality of $\{\widehat{\psi}_l\}$, and Theorem \ref{thm:consistency}(a). Moreover, using similar arguments, we have
\begin{equation}\label{A2.eq2}
\big|\langle \psi_k,\widehat{\psi}_l- \varphi_l\rangle\big| \leq \| \widehat{\psi}_l- \varphi_l \| = O_p(T^{-1/2}),
\end{equation}
where
\begin{align}
\max_{1 \leq l \leq K_{max}} E \| \widehat \psi_l - \varphi_l \| = O(T^{-1/2}) \label{eq:eigenfconvergence}
\end{align}
follows analogously to the proof of Theorem \ref{thm:consistency}(d) since the eigenvalues of $C_Y$ have multiplicity 1.
Using the decomposition \eqref{A2.decomp} we have
\begin{align}
\max_{\substack{m\leq p_{max} \\ l,j\leq K_{max}}} \bigg| \frac{1}{T} \sum_{t=m+1}^T \Big(\widehat{F}_{l,t}\widehat{F}_{j,t-s} -\widetilde F_{l,t} \widetilde {F}_{j,t-s}\Big) \bigg|
\leq \max_{\substack{m\leq p_{max} \\ l,j\leq K_{max}}} \big( S_{1,m,l,j} + S_{2,m,l,j} + S_{3,m,l,j} \big), \label{A2.eq3}
\end{align}
where
\begin{align*}
S_{1,m,l,j} = \bigg| \frac{1}{T} \sum_{t=m+1}^T R_{l,t} R_{j,t-s}\bigg|, \ S_{2,m,l,j} = \bigg| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{l,t} R_{j,t-s}\bigg|, \ S_{3,m,l,j} = \bigg| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{j,t-s} R_{l,t}\bigg|.
\end{align*}
The rest of the proof is split into two steps.
First, we show that $S_{1,m,l,j} = O_P(T^{-1})$ uniformly in $m$, $l$, and $j$; in the second step, we show that $S_{2,m,l,j} = O_P(T^{-1/2})$.
The rate of $S_{3,m,l,j}$ follows by the same arguments as for $S_{2,m,l,j}$, and its derivation is therefore omitted. \\
\noindent
\emph{Step 1:} Following equation \eqref{eq:decomp2}, the product $R_{l,t}R_{j,t-s}$ in $S_{1,m,l,j}$ expands into nine sub-terms, which, up to symmetry, reduce to the six cases studied below:
\begin{description}
\item[(i)] For the first sub-term we have
\begin{align*}
&\bigg|\frac{1}{T}\sum_{t=m+1}^T\sum_{k,n=1}^{K} \widetilde F_{k,t} \widetilde F_{n,t-s} \langle \psi_k,\widehat{\psi}_l- \varphi_l\rangle \langle \psi_n,\widehat{\psi}_j- \varphi_j\rangle \bigg| \\
&\leq K^2 \max_{\substack{ 1\leq k,n\leq K}} \bigg|\frac{1}{T}\sum_{t=m+1}^T \widetilde F_{k,t} \widetilde F_{n,t-s} \bigg| \left| \langle \psi_k,\widehat{\psi}_l- \varphi_l\rangle\right| \left| \langle \psi_n,\widehat{\psi}_j- \varphi_j\rangle \right|=O_p(T^{-1}),
\end{align*}
where the last equality follows from \eqref{A2.eq2} and the fact that, by Assumptions \ref{as:factors}(d), \ref{as:factors}(e) and \ref{as:errors}(a), $T^{-1}\sum_{t=m+1}^T \widetilde F_{k,t} \widetilde F_{n,t-s}=O_p(1)$.
\item[(ii)] For the second sub-term
\begin{align*}
&\bigg|\frac{1}{T}\sum_{t=m+1}^T\sum_{k=1}^{K} \widetilde F_{k,t} \langle \psi_k,\widehat{\psi}_l- \varphi_l\rangle \langle\mu-\widehat{\mu}, \widehat{\psi}_j\rangle\bigg| \\
&\leq K \max_{\substack{ 1\leq k\leq K}} \bigg|\frac{1}{T}\sum_{t=m+1}^T \widetilde F_{k,t}\bigg| \left| \langle \psi_k,\widehat{\psi}_l- \varphi_l\rangle\right| \left| \langle\mu-\widehat{\mu}, \widehat{\psi}_j\rangle \right|=O_p(T^{-1}),
\end{align*}
which follows from \eqref{A2.eq1}, \eqref{A2.eq2} and the fact that $T^{-1}\sum_{t=m+1}^T \widetilde F_{k,t}=O_p(1)$.
\item[(iii)] For the next sub-term
\begin{align*}
&\bigg|\frac{1}{T}\sum_{t=m+1}^T\sum_{k=1}^{K}\widetilde F_{k,t} \langle \psi_k,\widehat{\psi}_l- \varphi_l\rangle \langle \epsilon_{t-s} ,\widehat{\psi}_j - \varphi_j \rangle\bigg| \\
&\leq K \max_{\substack{ 1\leq k\leq K}} \bigg\| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{k,t} \epsilon_{t-s} \bigg\| \big\| \widehat{\psi}_j - \varphi_j \big\| \big| \langle \psi_k,\widehat{\psi}_l-\varphi_l\rangle \big| =O_p(T^{-1}),
\end{align*}
where the last equality follows from \eqref{A2.eq2} and Assumption \ref{as:errors}(d).
\item[(iv)] By \eqref{A2.eq1} we have $|T^{-1} \sum_{t=m+1}^T \langle\mu-\widehat{\mu}, \widehat{\psi}_l\rangle \langle\mu-\widehat{\mu}, \widehat{\psi}_j\rangle | \leq \| \mu - \widehat \mu \|^2 = O_p(T^{-1})$.
\item[(v)] Similarly, for the fifth term,
\begin{align*}
&\bigg|\frac{1}{T}\sum_{t=m+1}^T \langle\mu-\widehat{\mu}, \widehat{\psi}_l\rangle \langle\epsilon_{t-s}, \widehat{\psi}_j - \varphi_j\rangle\bigg|
\leq \ \frac{1}{T}\sum_{t=m+1}^T \| \epsilon_{t-s} \| \| \mu - \widehat \mu \| \| \widehat{\psi}_j - \varphi_j \| =O_p(T^{-1}),
\end{align*}
which follows from \eqref{A2.eq1}, \eqref{A2.eq2}, and Assumption \ref{as:errors}(a).
\item[(vi)] Finally, for the last sub-term, by Assumption \ref{as:errors}(c),
\begin{align*}
\bigg| \frac{1}{T}\sum_{t=m+1}^T \langle \epsilon_{t},\widehat{\psi}_l - \varphi_l \rangle \langle \epsilon_{t-s},\widehat{\psi}_j - \varphi_j \rangle \bigg|
\leq \frac{1}{T}\sum_{t=m+1}^T \| \epsilon_t \| \| \epsilon_{t-s} \| \| \widehat{\psi}_l - \varphi_l\| \| \widehat{\psi}_j - \varphi_j\|,
\end{align*}
which is $O_P(T^{-1})$ by \eqref{A2.eq2} and the fact that
\begin{align*}
E \Bigg[ \bigg| \frac{1}{T} \sum_{t=m+1}^T \left\Vert\epsilon_{t}\right\Vert \left\Vert\epsilon_{t-s}\right\Vert \bigg| \Bigg]
\leq \frac{1}{T}\sum_{t=m+1}^T E \big[ \|\epsilon_{t}\|^2 \big]^{1/2} E \big[ \| \epsilon_{t-s} \|^2 \big]^{1/2} = O(1).
\end{align*}
\end{description}
Putting all results (i)--(vi) together allows us to conclude Step 1 of the proof with
\begin{equation*}
\max_{\substack{m\leq p_{max} \\ l,j\leq K_{max}}} \bigg|\frac{1}{T}\sum_{t=m+1}^T R_{l,t}R_{j,t-s} \bigg|=O_p(T^{-1}).
\end{equation*}
\emph{Step 2:} For the second term on the r.h.s of \eqref{A2.eq3}, $S_{2,m,l,j} $, it holds that
\begin{align}
\bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} R_{j,t-s} \bigg|
&\leq \bigg| \frac{1}{T}\sum_{t=m+1}^T\sum_{n=1}^{K} \widetilde F_{l,t}\widetilde F_{n,t-s}\langle \psi_n,\widehat{\psi}_j- \varphi_j\rangle \bigg| \nonumber \\
&+ \bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \langle\mu-\widehat{\mu}, \widehat{\psi}_j\rangle \bigg|
+ \bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \langle \epsilon_{t-s} ,\widehat{\psi}_j - \varphi_j\rangle \bigg|. \label{A2.eq4}
\end{align}
For the first term on the r.h.s.\ of \eqref{A2.eq4}, using the same arguments as in Step 1, we have
\begin{align*}
\bigg| \frac{1}{T}\sum_{t=m+1}^T\sum_{n=1}^{K}\widetilde F_{l,t}\widetilde F_{n,t-s} \langle \psi_n,\widehat{\psi}_j- \varphi_j\rangle \bigg|
\leq K \max_{n\leq K} \bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \widetilde F_{n,t-s} \bigg| \| \widehat{\psi}_j- \varphi_j\| = O_P(T^{-1/2}).
\end{align*}
For the second term on the r.h.s of \eqref{A2.eq4} it holds that
\begin{align*}
\bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \langle\mu-\widehat{\mu}, \widehat{\psi}_j\rangle \bigg|
\leq \bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \bigg| \left|\langle\mu-\widehat{\mu}, \widehat{\psi}_j\rangle\right|=O_p(T^{-1/2}),
\end{align*}
and, for the last term, analogously to step 1 item (iii),
\begin{align*}
\bigg| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \langle \epsilon_{t-s} ,\widehat{\psi}_j - \varphi_j\rangle \bigg|
\leq \bigg\| \frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} \epsilon_{t-s} \bigg\| \|\widehat{\psi}_j - \varphi_j \| = O_P(T^{-1/2}),
\end{align*}
which concludes step 2 with
\begin{equation*}
\max_{\substack{m\leq p_{max} \\ l,j\leq K_{max}}} \left|\frac{1}{T}\sum_{t=m+1}^T \widetilde F_{l,t} R_{j,t-s}\right|=O_p(T^{-1/2}).
\end{equation*}
Hence, all terms on the r.h.s.\ of \eqref{A2.eq3} are $O_p(T^{-1/2})$, which concludes the proof.
\end{proof}
\paragraph{Main proof of Theorem \ref{thm:Bias}.}
We split the proof into four cases:
\textbf{(A)} $J \geq K$ and $m \geq p$;
\textbf{(B)} $J<K$ and $m<p$;
\textbf{(C)} $J \geq K$ and $m<p$;
and \textbf{(D)} $J<K$ and $m \geq p$. \\
\emph{Case (A): $J \geq K$ and $m \geq p$.}
Following \eqref{eq:lagmatrices}, the estimated lag coefficient matrix has the representation $\bm{\widehat{A}}^* =\bm{\widehat{A}}_{(J,m)}=\widehat\Gamma_{(J,m)}\widehat\Sigma_{(J,m)}^{-1}$, and the true stacked and sign-adjusted coefficient matrix is identified as $\bm A^{\ast}=\Gamma_{(J,m)}\Sigma_{(J,m)}^{-1}$.
Hence,
\begin{align}\label{eq:bound}
&\Vert\bm{\widehat{A}}^* - \bm{A}^{\ast} \Vert_{2}
\leq \| \widehat \Gamma_{(J,m)} \|_2 \|\widehat \Sigma^{-1}_{(J,m)} - \Sigma^{-1}_{(J,m)} \|_2 + \|\widehat \Gamma_{(J,m)} - \Gamma_{(J,m)} \|_2 \| \Sigma^{-1}_{(J,m)} \|_2.
\end{align}
Note that the eigenvalues of $\Sigma_{(J,m)}$ are bounded and bounded away from zero, which implies that $\Vert\Sigma_{(J,m)}^{-1}\Vert_2$ is bounded.
Since the fourth moments of the factors and errors are bounded, $\Gamma_{(J,m)}$ is a $J\times Jm$ matrix with bounded entries, which implies that $\Vert\Gamma_{(J,m)}\Vert_{2}$ is also bounded. Moreover, we have $\| \widehat \Gamma_{(J,m)} \|_2 \leq \| \Gamma_{(J,m)} \|_2 + \|\widehat \Gamma_{(J,m)} - \Gamma_{(J,m)} \|_2$.
The rates of convergence of $\|\widehat \Gamma_{(J,m)} - \Gamma_{(J,m)} \|_2$ and $\|\widehat \Sigma^{-1}_{(J,m)} - \Sigma^{-1}_{(J,m)} \|_2$ are established by Lemmas \ref{lem:factorscovariancematrix} and \ref{lem:Estfactorscovariancematrix}.
In more detail, we have
\begin{equation*}
\Vert\widehat\Gamma_{(J,m)}-\Gamma_{(J,m)}\Vert_2\leq \Vert\widehat\Gamma_{(J,m)}-\widetilde\Gamma_{(J,m)}\Vert_2+ \Vert\widetilde\Gamma_{(J,m)}-\Gamma_{(J,m)}\Vert_2.
\end{equation*}
By Lemma \ref{lem:factorscovariancematrix}
and the fact that $\widetilde F_{k,s}$ and $\widetilde F_{m,t}$ are uncorrelated for all $k,m > K$ with $k \neq m$, we have $\Vert\widetilde\Gamma_{(J,m)}-\Gamma_{(J,m)}\Vert_2=O_p(T^{-1/2})$.
Lemma \ref{lem:Estfactorscovariancematrix} yields
\begin{equation*}
\Vert\widehat\Gamma_{(J,m)}-\widetilde\Gamma_{(J,m)}\Vert_2\leq \sqrt{m}J \max_{\substack{s\leq p_{max} \\ l,j\leq K_{max}}}\left| \frac{1}{T} \sum_{t=1}^T \left(\widehat{F}_{l,t}\widehat{F}_{j,t-s} -\widetilde {F}_{l,t} \widetilde {F}_{j,t-s} \right) \right| = O_p(T^{-1/2}).
\end{equation*}
Using identical arguments we obtain
\begin{align}
\Vert \widehat\Sigma_{(J,m)}-\Sigma_{(J,m)}\Vert_2=O_p(T^{-1/2}). \label{eq:thm3covmatrix1}
\end{align}
Following the proof of Lemma 3 in \cite{berk1974} we define $q = \widehat \Sigma^{-1}_{(J,m)} - \Sigma^{-1}_{(J,m)}$. Then,
\begin{align*}
q = (\Sigma^{-1}_{(J,m)} + q) (\Sigma_{(J,m)} - \widehat \Sigma_{(J,m)}) \Sigma^{-1}_{(J,m)},
\end{align*}
which implies that
\begin{align} \label{eq:bound.aux1}
\|q\|_2 \leq \frac{\|\Sigma^{-1}_{(J,m)}\|_2^2 \|\Sigma_{(J,m)} - \widehat \Sigma_{(J,m)}\|_2}{1 - \|\Sigma^{-1}_{(J,m)}\|_2 \|\Sigma_{(J,m)} - \widehat \Sigma_{(J,m)}\|_2},
\end{align}
where the numerator of \eqref{eq:bound.aux1} is $O_P(T^{-1/2})$, and the denominator is bounded away from zero with probability approaching one. Thus,
\begin{align}
\|\widehat \Sigma^{-1}_{(J,m)} - \Sigma^{-1}_{(J,m)}\|_2 = O_P(T^{-1/2}), \label{eq:thm3covmatrix2}
\end{align}
and $\Vert \bm{\widehat{A}}^* - \bm{A}^{\ast} \Vert_{2} =O_p(T^{-1/2})$ follows by putting
together all rates into \eqref{eq:bound}. \\
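As an aside, the identity for $q$ borrowed from \cite{berk1974} can be verified directly: writing $\widehat \Sigma^{-1}_{(J,m)} = \Sigma^{-1}_{(J,m)} + q$, we obtain
\begin{align*}
(\Sigma^{-1}_{(J,m)} + q) (\Sigma_{(J,m)} - \widehat \Sigma_{(J,m)}) \Sigma^{-1}_{(J,m)}
= \widehat \Sigma^{-1}_{(J,m)} \Sigma_{(J,m)} \Sigma^{-1}_{(J,m)} - \widehat \Sigma^{-1}_{(J,m)} \widehat \Sigma_{(J,m)} \Sigma^{-1}_{(J,m)}
= \widehat \Sigma^{-1}_{(J,m)} - \Sigma^{-1}_{(J,m)} = q.
\end{align*}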
\emph{Cases (B) and (C): $m<p$.} In this scenario, we have
\begin{align*}
\bm{A}^* = \big[ \bm S_J' A_1 \bm S_J , \ldots, \bm S_J' A_p \bm S_J \big], \quad
\bm{\widehat A}^* = \big[ \bm R_J' \widehat A_1^{(J)} \bm R_J, \ldots, \bm R_J' \widehat A_m^{(J)} \bm R_J, \bm 0_{J, (p-m)J} \big].
\end{align*}
Then, for any $T$,
\begin{align*}
\big\Vert \bm{\widehat{A}}^{\ast} - \bm{A}^{\ast} \big\Vert_{2}^2
= \sum_{i=1}^{m}\big\Vert \bm R_J' \widehat A_i^{(J)} \bm R_J - \bm S_J' A_i \bm S_J \big\Vert_{2}^2 + \sum_{i=m+1}^{p}\Vert \bm S_J' {A}_i \bm S_J \Vert_{2}^2\geq \sum_{i=m+1}^{p}\left\Vert{A}_i \right\Vert_{2}^2 >0,
\end{align*}
where the last inequality follows by Assumption \ref{as:factors}(c). \\
\emph{Case (D): $J<K$ and $m \geq p$.} In this scenario, we have
\begin{align*}
\bm{A}^* = \big[ \bm S_J' A_1 \bm S_J , \ldots, \bm S_J' A_p \bm S_J, \bm 0_{J, (m-p)J} \big], \quad
\bm{\widehat A}^* = \big[ \bm R_J' \widehat A_1^{(J)} \bm R_J, \ldots, \bm R_J' \widehat A_m^{(J)} \bm R_J \big].
\end{align*}
Then,
\begin{align*}
\big\Vert \bm{\widehat{A}}^{\ast} - \bm{A}^{\ast} \big\Vert_{2}^2
= \sum_{i=1}^{p} \| \bm R_J' \widehat A_i^{(J)} \bm R_J - \bm S_J' A_i \bm S_J \|_2^2 + \sum_{i=p+1}^m \| \bm R_J' \widehat A_i^{(J)} \bm R_J \|_2^2,
\end{align*}
where the matrices can be partitioned as
\begin{align*}
\bm R_J' \widehat A_i^{(J)} \bm R_J = \begin{pmatrix}
\widehat{A}_i^{(J)} & 0 \\
0 & 0
\end{pmatrix}, \quad
\bm S_J' A_i \bm S_J = \begin{pmatrix}
\bm{a}_i & \bm{b}_{i} \\
\bm{c}_i & \bm{d}_i
\end{pmatrix}.
\end{align*}
Consequently, for any $T$,
\begin{align*}
\big\Vert \bm{\widehat{A}}^{\ast} - \bm{A}^{\ast} \big\Vert_{2}^2
\geq \sum_{i=1}^{p} \big(\left\Vert \bm{b}_i\right\Vert_{2}^2+\left\Vert \bm{c}_i\right\Vert_{2}^2+\left\Vert \bm{d}_i\right\Vert_{2}^2\big) > 0,
\end{align*}
where the last inequality follows by Assumption \ref{as:factors}(c), which concludes the proof of the theorem.
\subsection{Proof of Theorem \ref{thm:InformCriteria}}
The proof is structured as follows.
The key ingredient needed to show Theorem \ref{thm:InformCriteria} is given in Lemma \ref{lem:CR_aux}.
To prove Lemma \ref{lem:CR_aux}, Lemma \ref{lem:CR_aux2} is needed.
We explicitly take into account the dependence on $K$, $p$, $J$ and $m$ to keep the proofs tractable. Hence, in addition to the notation introduced in the proof of Theorem \ref{thm:Bias} at the top of Section \ref{appendixThm3}, the following notation will be used:
\begin{itemize}
\item[(i)] Given the selected numbers $J$ and $m$ we define $J^*=\max\{J,K\}$ and $m^* = \max\{m,p\}$.
\item[(ii)]
Combining the model equations \eqref{eq:factormodel} and \eqref{eq:VAR}, the functional time series can be written as
\begin{align*}
Y_{t}&= \mu + \Psi' F_{t} + \epsilon_{t}= \mu + \Psi' \sum_{i=1}^p A_i F_{t-i} + \Psi'\eta_{t}+\epsilon_{t} \\
&=\mu + \Psi' \bm S_K \sum_{i=1}^p \bm S_K A_i \bm S_K \bm S_K F_{t-i} + \Psi'\eta_{t}+\epsilon_{t}
= \mu + \big(\Phi^{(K)}\big)' \bm A_{(S)} \bm x_{t-1} + \Psi'\eta_{t}+\epsilon_{t}.
\end{align*}
\item[(iii)] The estimated one-step ahead predictor curve is
\begin{align*}
\widehat Y_{t|t-1}^{(J,m)} = \widehat \mu + \big(\widehat{\Psi}^{(J)}\big)' \widehat{\bm{A}}^{(J,m)} \widehat{\bm{x}}_{t-1}^{(J,m)} = \widehat \mu + \big(\widehat{\Psi}^{(J^*)}\big)' \widehat{\bm{A}}^* \widehat{\bm{x}}_{t-1}^{(J^*,m^*)}.
\end{align*}
\item[(iv)] We define its population counterparts, using the true factors, as
\begin{align*}
Y_{t|t-1} = \mu + \Psi' \sum_{i=1}^p A_i F_{t-i} =
\mu + \big(\Phi^{(K)}\big)' \bm{A}_{(S)} \bm{x}_{t-1}
\end{align*}
and using the true FPC scores as
\begin{align*}
\widetilde Y_{t|t-1} = \mu + \Psi' \sum_{i=1}^p A_i \widetilde F_{t-i}^{(K)} =
\mu + \big(\Phi^{(K)}\big)' \bm{A}_{(S)} \bm{\widetilde x}_{t-1}^{(K,p)} = \mu + \big(\Phi^{(J^*)}\big)'\bm A^* \bm{\widetilde x}_{t-1}^{(J^*,m^*)}.
\end{align*}
\item[(v)] The mean square error when using $J$ factors and $m$ lags is given as
\begin{align*}
MSE_T(J,m)=\frac{1}{T-m}\sum_{t=m+1}^{T} \Vert Y_{t}-\widehat{Y}_{t|t-1}^{(J,m)}\Vert^2.
\end{align*}
\end{itemize}
\begin{lemma} \label{lem:CR_aux2}
Under the conditions of Theorem \ref{thm:Bias}, for any $J \leq K_{max}$ and $m \leq p_{max}$,
\begin{itemize}
\item[(a)] $\| T^{-1} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} ( \bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)})' \|_2 = O_P(T^{-1/2})$
\item[(b)] $\| T^{-1} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} ( \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)})' \|_2 = O_P(T^{-1/2})$
\item[(c)] $\sum_{h=1}^K \| T^{-1} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} \eta_{h,t} \|_2 = O_P(T^{-1/2})$
\item[(d)] $\sum_{l=1}^{J} \| T^{-1} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} \langle \widehat \psi_l, \epsilon_t \rangle \|_2 = O_P(T^{-1/2})$
\end{itemize}
\end{lemma}
\begin{proof}
\textit{Item (a):}
By the triangle and Cauchy-Schwarz inequalities, we obtain
\begin{align*}
&\bigg\| \frac{1}{T} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} ( \bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)})' \bigg\|_2
\leq \sum_{i=1}^p \sum_{j=1}^{m} \sum_{l=1}^K \sum_{h=1}^{J} \bigg| \frac{1}{T} \sum_{t=m+1}^T \widehat F_{h,t-j} \langle \epsilon_{t-i}, \varphi_l \rangle \bigg| \\
&\leq \frac{1}{\sqrt T} \sum_{i=1}^p \sum_{j=1}^{m} \sum_{l=1}^K \sum_{h=1}^{J} \sqrt{ \Big( \frac{1}{T} \sum_{t=m+1}^T \widehat F_{h,t-j}^2 \Big) \Big( \frac{1}{\sqrt T} \sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_l \rangle^2 \Big)} = O_P(T^{-1/2}),
\end{align*}
since $\frac{1}{T} \sum_{t=m+1}^T \widehat F_{h,t-j}^2 = O_P(1)$, and $\sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_l \rangle^2 = O_P(T^{1/2})$ by Assumption \ref{as:errors}(c). \\
\textit{Item (b):} By the triangle inequality, we obtain
\begin{align*}
&\bigg\| \frac{1}{T} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} ( \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)})' \bigg\|_2
\leq \sum_{i=1}^p \sum_{j=1}^{m} \sum_{l=1}^K \sum_{h=1}^{J} \bigg| \frac{1}{T} \sum_{t=m+1}^T \widehat F_{h,t-j} \big( \langle Y_{t-i} - \mu, \varphi_l \rangle - \widehat F_{l,t-i} \big) \bigg| \\
&\leq p\,m\,K\,J \max_{\substack{ 1 \leq i,j \leq m^* \\ 1 \leq h,l \leq J^* }} \bigg| \frac{1}{T} \sum_{t=m+1}^T \widehat F_{h,t-j} \big( \langle Y_{t-i} - \mu, \varphi_l - \widehat \psi_l \rangle + \langle \widehat \mu - \mu, \widehat \psi_l \rangle \big) \bigg| \\
&\leq p\,m\,K\,J \max_{\substack{ 1 \leq i,j \leq m^* \\ 1 \leq h,l \leq J^* }} \frac{1}{T} \sum_{t=m^*+1}^T \big| \widehat F_{h,t-j} \big| \Big( \big\| Y_{t-i} - \mu \big\| + 1 \Big) \Big( \big\| \varphi_l - \widehat \psi_l \big\| + \big\| \widehat \mu - \mu \big\| \Big) = O_P(T^{-1/2}),
\end{align*}
where the last step follows by Theorem \ref{thm:consistency} and equation \eqref{eq:eigenfconvergence}. \\
\textit{Item (c):}
By the triangle inequality, we have
\begin{align}
&\sum_{h=1}^K \bigg\|\frac{1}{T} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} \eta_{h,t} \bigg\|_2 \nonumber \\
&\leq \sum_{h=1}^K \bigg\|\frac{1}{T} \sum_{t=m+1}^T \widetilde{\bm x}_{t-1}^{(J,m)} \eta_{h,t} \bigg\|_2 + \sum_{h=1}^K \bigg\|\frac{1}{T} \sum_{t=m+1}^T \big( \widehat{\bm x}_{t-1}^{(J,m)} - \widetilde{\bm x}_{t-1}^{(J,m)} \big) \eta_{h,t} \bigg\|_2 \nonumber \\
&\leq \sum_{h=1}^K \sum_{i=1}^m \sum_{l=1}^J \bigg( \Big| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{l,t-i} \eta_{h,t} \Big| + \Big| \frac{1}{T} \sum_{t=m+1}^T \big(\widetilde F_{l,t-i} - \widehat F_{l,t-i} \big) \eta_{h,t} \Big| \bigg).\label{eq:lem4.eq1}
\end{align}
Consider a fixed $h$, $i$, and $l$.
For the first term of \eqref{eq:lem4.eq1}, we treat the cases $l \leq K$ and $l > K$ separately.
If $l \leq K$, we have $\widetilde F_{l,t-i} = s_l F_{l,t-i} + \langle \epsilon_{t-i}, \varphi_l \rangle$, so that
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{l,t-i} \eta_{h,t} \bigg|
\leq \bigg| \frac{1}{T} \sum_{t=m+1}^T F_{l,t-i} \eta_{h,t} \bigg| + \bigg| \frac{1}{T} \sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_l \rangle \eta_{h,t} \bigg| \\
&\leq \bigg| \frac{1}{T} \sum_{t=m+1}^T F_{l,t-i} \eta_{h,t} \bigg|
+ \sqrt{ \frac{1}{T} \sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_l \rangle^2 } \sqrt{ \frac{1}{T} \sum_{t=m+1}^T \eta_{h,t}^2 },
\end{align*}
where $T^{-1} \sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_l \rangle^2 = O_P(T^{-1/2})$ by Assumption \ref{as:errors}(c), and $T^{-1} \sum_{t=m+1}^T \eta_{h,t}^2 = O_P(1)$ by Assumption \ref{as:factors}(e).
Since $F_t$ is a causal process with respect to $\eta_t$, and $\eta_t$ is a martingale difference sequence with respect to $\mathcal F_{t}$ with bounded $\kappa$-th moments, $F_{l,t-i} \eta_{h,t}$ is also a martingale difference sequence with respect to $\mathcal F_t$ with bounded $(\kappa/2)$-th moments, where $\kappa > 4$ by Assumption \ref{as:factors}(e).
Then, by the central limit theorem for martingale difference sequences (see, e.g., Corollary 5.2.6 in \citealt{White2001}),
\begin{align*}
\bigg| \frac{1}{T} \sum_{t=m+1}^T F_{l,t-i} \eta_{h,t} \bigg| = O_P(T^{-1/2}).
\end{align*}
Consequently, $T^{-1} \sum_{t=m+1}^T \widetilde F_{l,t-i} \eta_{h,t} = O_P(T^{-1/2})$ for all $l \leq K$.
For the case $l > K$, we have $\widetilde F_{l,t-i} = \langle \epsilon_{t-i}, \varphi_l \rangle$, and
\begin{align}
\bigg| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{l,t-i} \eta_{h,t} \bigg|
= \bigg|\Big \langle \frac{1}{T} \sum_{t=m+1}^T \epsilon_{t-i} \eta_{h,t}, \varphi_l \Big\rangle \bigg|
\leq \bigg\| \frac{1}{T} \sum_{t=m+1}^T \epsilon_{t-i} \eta_{h,t} \bigg\| = O_P(T^{-1/2}), \label{eq:lem4.eq2}
\end{align}
which follows by Assumption \ref{as:errors}(d) and the fact that $\eta_t = F_t - \sum_{j=1}^p A_j F_{t-j}$.
For the second term in \eqref{eq:lem4.eq1}, the difference of the factors can be rearranged as
\begin{align*}
\widetilde F_{l,t-i} - \widehat F_{l,t-i} = \langle Y_{t-i} - \mu, \varphi_l \rangle - \langle Y_{t-i} - \widehat \mu, \widehat \psi_l \rangle
= \langle Y_{t-i} - \mu, \varphi_l - \widehat \psi_l \rangle + \langle \widehat \mu - \mu, \widehat \psi_l \rangle
\end{align*}
for any $l=1, \ldots, J$, $i=1, \ldots, m$, and $t=m+1, \ldots, T$.
Then, by equation \eqref{eq:eigenfconvergence}, Theorem \ref{thm:consistency}(a), the Cauchy-Schwarz inequality, and the fact that $Y_t\in L_H^4$,
\begin{align}
&\bigg|\frac{1}{T} \sum_{t=m+1}^T (\widetilde F_{l,t-i} - \widehat F_{l,t-i}) \eta_{h,t} \bigg| \nonumber \\
&\leq \frac{1}{T} \sum_{t=m+1}^T \| Y_{t-i} - \mu \| \cdot | \eta_{h,t} | \cdot \|\varphi_l - \widehat \psi_l\| + \frac{1}{T} \sum_{t=m+1}^T | \eta_{h,t} | \cdot \| \widehat \mu - \mu \| = O_P(T^{-1/2}). \label{eq:lem4.eq3}
\end{align}
By combining \eqref{eq:lem4.eq2} and \eqref{eq:lem4.eq3}, it follows that \eqref{eq:lem4.eq1} is $O_P(T^{-1/2})$. \\
\textit{Item (d):}
By the triangle inequality, we have
\begin{align}
&\sum_{l=1}^J \bigg\| \frac{1}{T} \sum_{t=m+1}^T \widehat{\bm x}_{t-1}^{(J,m)} \langle \widehat \psi_l, \epsilon_t \rangle \bigg\|_2
\leq \sum_{l=1}^J \sum_{i=1}^m \sum_{h=1}^J \bigg| \frac{1}{T} \sum_{t=m+1}^T \widehat F_{h,t-i} \langle \widehat \psi_l, \epsilon_t \rangle \bigg| \nonumber \\
&\leq \sum_{l=1}^J \sum_{i=1}^m \sum_{h=1}^J \bigg( \Big| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{h,t-i} \langle \widehat \psi_l, \epsilon_t \rangle \Big| + \Big| \frac{1}{T} \sum_{t=m+1}^T \big(\widehat F_{h,t-i} - \widetilde F_{h,t-i} \big) \langle \widehat \psi_l, \epsilon_t \rangle \Big| \bigg). \label{eq:lem4.eq4}
\end{align}
We follow the same steps as in the proof of item (c).
For the first term in \eqref{eq:lem4.eq4}, the triangle and Cauchy-Schwarz inequalities imply
\begin{align}
\bigg| \frac{1}{T} \sum_{t=m+1}^T \widetilde F_{h,t-i} \langle \widehat \psi_l, \epsilon_t \rangle \bigg|
\leq \bigg\| \frac{1}{T} \sum_{t=m+1}^T F_{h,t-i} \epsilon_t \bigg\| + \bigg| \frac{1}{T} \sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_h \rangle \langle \widehat \psi_l, \epsilon_t \rangle \bigg|. \label{eq:lem4.eq6}
\end{align}
The first term in \eqref{eq:lem4.eq6} is $O_P(T^{-1/2})$ by Assumption \ref{as:errors}(d).
For the second term in \eqref{eq:lem4.eq6}, note that $\langle \epsilon_{t-i}, \varphi_h \rangle \langle \widehat \psi_l, \epsilon_t \rangle$ is a martingale difference sequence with respect to $\mathcal A_t$ with bounded $(\kappa/2)$-th moments, where $\kappa > 4$.
The central limit theorem for martingale difference sequences implies
\begin{align*}
\bigg| \frac{1}{T} \sum_{t=m+1}^T \langle \epsilon_{t-i}, \varphi_h \rangle \langle \widehat \psi_l, \epsilon_t \rangle \bigg| = O_P(T^{-1/2}),
\end{align*}
which implies that \eqref{eq:lem4.eq6} is $O_P(T^{-1/2})$.
Finally, for the second term in \eqref{eq:lem4.eq4}, arguing analogously to \eqref{eq:lem4.eq3} and using the fact that $\epsilon_t$ has bounded fourth moments, we obtain
\begin{align*}
\Big| \frac{1}{T} \sum_{t=m+1}^T \big(\widehat F_{h,t-i} - \widetilde F_{h,t-i} \big) \langle \widehat \psi_l, \epsilon_t \rangle \Big| = O_P(T^{-1/2}).
\end{align*}
\end{proof}
\begin{lemma} \label{lem:CR_aux}
Under the conditions of Theorem \ref{thm:Bias}, for any $J \leq K_{max}$ and $m \leq p_{max}$,
\begin{itemize}
\item[(a)] \begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\| \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 = \begin{cases} O_P(T^{-1}) & \text{if} \ J\geq K \ \text{and} \ m \geq p, \\ \Theta_P (1) & \text{otherwise,} \end{cases}
\end{align*}
\item[(b)] \begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\langle Y_t - \widetilde Y_{t|t-1}, \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\rangle
= \begin{cases} O_P(T^{-1}) & \text{if} \ J\geq K \ \text{and} \ m \geq p \\ O_P(T^{-1/2}) & \text{otherwise,} \end{cases}
\end{align*}
\item[(c)] \begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\langle \widetilde Y_{t|t-1} - \widehat Y_{t|t-1}^{(K,p)}, \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\rangle
= \begin{cases} O_P(T^{-1}) & \text{if} \ J\geq K \ \text{and} \ m \geq p \\ O_P(T^{-1/2}) & \text{otherwise,} \end{cases}
\end{align*}
\end{itemize}
where $\Theta_P(\cdot)$ denotes the exact order Landau symbol, that is, $a_T = \Theta_P(1)$ if and only if $a_T = O_P(1)$ and $a_T^{-1} = O_P(1)$.
\end{lemma}
\begin{proof}
\textit{Statement (a):}
The predictor curves can be represented as
\begin{align*}
\widehat Y_{t|t-1}^{(J,m)} = \widehat \mu + \big(\widehat \Psi^{(J)}\big)' \widehat{\bm A}^{(J,m)} \widehat{\bm x}_{t-1}^{(J,m)}
= \widehat \mu + \big(\widehat \Psi^{(J^*)}\big)' \widehat{\bm A}^* \widehat{\bm x}_{t-1}^{(J^*,m^*)},
\end{align*}
where $J^* = \max\{J,K\}$, and
\begin{align*}
\widehat Y_{t|t-1}^{(K,p)}
&= \widehat \mu + \big(\widehat \Psi^{(K)}\big)' \big( \widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big) \widehat{\bm x}_{t-1}^{(K,p)} + \big(\widehat \Psi^{(K)}\big)' \bm A_{(S)} \widehat{\bm x}_{t-1}^{(K,p)} \\
&= \widehat \mu + \big(\widehat \Psi^{(K)}\big)' \big( \widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big) \widehat{\bm x}_{t-1}^{(K,p)} + \big(\widehat \Psi^{(J^*)}\big)' \bm A^* \widehat{\bm x}_{t-1}^{(J^*,m^*)}.
\end{align*}
Then,
\begin{align*}
\widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} = Z_{(1)} + Z_{(2)},
\end{align*}
where
\begin{align*}
Z_{(1)} = \big(\widehat \Psi^{(J^*)}\big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat{\bm x}_{t-1}^{(J^*,m^*)}, \quad
Z_{(2)} =\big(\widehat \Psi^{(K)}\big)' \big( \widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big) \widehat{\bm x}_{t-1}^{(K,p)}.
\end{align*}
To simplify the exposition we ignore the additional indices $\{t,T,J,m,K,p\}$ on which $Z_{(1)}$ and $Z_{(2)}$ depend.
To disentangle the loading vectors and matrix products, let $e_l^{(J)}$ be the $l$-th unit vector of length $J$, where the $l$-th entry of $e_l^{(J)}$ is 1, and all other entries are zeros.
For the first term, we have
\begin{align*}
\big\|Z_{(1)} \big\|^2
&= \int_a^b \Big( \sum_{l=1}^{J^*} \widehat \psi_l(r) \big(e_l^{(J^*)}\big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat{\bm x}_{t-1}^{(J^*,m^*)} \Big)^2 \,\mathrm{d} r \\
&= \sum_{l=1}^{J^*} \Big( \big(e_l^{(J^*)}\big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat{\bm x}_{t-1}^{(J^*,m^*)} \Big)^2 \\
&= \big\| \big(\bm A^* - \widehat{\bm A}^* \big) \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big\|_2^2 \\
&= \tr\Big( \big(\widehat{\bm x}_{t-1}^{(J^*,m^*)}\big)' \big(\bm A^* - \widehat{\bm A}^* \big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat{\bm x}_{t-1}^{(J^*,m^*)} \Big) \\
&= \tr\Big( \big(\bm A^* - \widehat{\bm A}^* \big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\widehat{\bm x}_{t-1}^{(J^*,m^*)}\big)' \Big),
\end{align*}
and
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\|Z_{(1)} \big\|^2 = \tr\Big( \big(\bm A^* - \widehat{\bm A}^* \big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat \Sigma_{(J^*, m^*)} \Big).
\end{align*}
From \eqref{eq:thm3covmatrix1} and \eqref{eq:thm3covmatrix2} in the proof of Theorem \ref{thm:Bias} we have $\| \widehat{\Sigma}_{(J^*, m^*)} - \Sigma_{(J^*, m^*)} \|_2 = o_P(1)$ and $\| \widehat{\Sigma}_{(J^*, m^*)}^{-1} - \Sigma_{(J^*, m^*)}^{-1} \|_2 = o_P(1)$.
Consider the Cholesky decompositions $\widehat \Sigma_{(J^*, m^*)} = \widehat \Omega \widehat \Omega'$ and $\Sigma_{(J^*, m^*)} = \Omega \Omega'$, where $\|\Omega\|_2 < \infty$ and $\|\Omega^{-1}\|_2 < \infty$.
Then,
\begin{align*}
\tr\Big( \big(\bm A^* - \widehat{\bm A}^* \big)' \big(\bm A^* - \widehat{\bm A}^* \big) \widehat \Sigma_{(J^*, m^*)} \Big) = \|\big(\bm A^* - \widehat{\bm A}^* \big) \widehat \Omega\|_2^2,
\end{align*}
and
\begin{align*}
\frac{\|(\bm A^* - \widehat{\bm A}^* ) \widehat \Omega\|_2^2}{\|\bm A^* - \widehat{\bm A}^* \|_2^2}
&\leq \|\widehat \Omega\|_2^2 = O_P(1), \\
\frac{\|\bm A^* - \widehat{\bm A}^* \|_2^2}{\|(\bm A^* - \widehat{\bm A}^*) \widehat \Omega \|_2^2}
&= \frac{\|(\bm A^* - \widehat{\bm A}^* ) \widehat \Omega \widehat \Omega^{-1} \|_2^2}{\|(\bm A^* - \widehat{\bm A}^* ) \widehat \Omega \|_2^2}
\leq \|\widehat \Omega^{-1}\|_2^2 = O_P(1),
\end{align*}
which implies that $\frac{1}{T} \sum_{t=m^*+1}^T \big\|Z_{(1)} \big\|^2$ is of exactly the same order as $\| \bm A^* - \widehat{\bm A}^* \|_2^2$.
By Theorem \ref{thm:Bias}, we have $\| \bm A^* - \widehat{\bm A}^* \|_2^2 = O_P(T^{-1})$ for case I ($J \geq K$ and $m \geq p$) and $\| \bm A^* - \widehat{\bm A}^* \|_2^2 = \Theta_P(1)$ for case II (otherwise), which implies that
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\|Z_{(1)} \big\|^2 = \begin{cases} O_P(T^{-1}) & \text{for case I,} \\ \Theta_P (1) & \text{for case II.} \end{cases}
\end{align*}
For the second term,
\begin{align*}
\big\|Z_{(2)}\big\|^2 = \Big\| \sum_{l=1}^K \widehat \psi_l \big(e_l^{(K)}\big)' \big( \widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big) \widehat{\bm x}_{t-1}^{(K,p)} \Big\|^2
= \Big\| \big( \widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big) \widehat{\bm x}_{t-1}^{(K,p)} \Big\|_2^2,
\end{align*}
where $\|\widehat{\bm A}^{(K,p)} - \bm A_{(S)} \|_2^2 = O_P(T^{-1})$ by Theorem \ref{thm:Bias} and Lemma \ref{lem:Estfactorscovariancematrix}. Hence,
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\|Z_{(2)} \big\|^2 \leq \frac{1}{T} \sum_{t=m^*+1}^T \big\| \widehat{\bm x}_{t-1}^{(K,p)} \big\|_2^2 \big\|\widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big\|_2^2
= O_P(T^{-1})
\end{align*}
for both cases. Finally, for the cross term,
\begin{align*}
\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} , Z_{(2)} \rangle \bigg| &\leq \frac{1}{T} \sum_{t=m^*+1}^T \big\| \widehat{\bm x}_{t-1}^{(K,p)} \big\|_2 \big\| \widehat{\bm x}_{t-1}^{(J^*,m^*)}\big\|_2 \big\|\widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big\|_2 \big\|\bm A^* - \widehat{\bm A}^* \big\|_2,
\end{align*}
which is $O_P(T^{-1})$ for case I and $O_P(T^{-1/2})$ for case II by Theorem \ref{thm:Bias}.
Since
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big\| \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\|^2
&= \frac{1}{T} \sum_{t=m^*+1}^T \big( \|Z_{(1)}\|^2 + \|Z_{(2)}\|^2 + 2 \langle Z_{(1)} , Z_{(2)} \rangle \big),
\end{align*}
statement (a) follows. \\
\textit{Proof of statement (b):}
We decompose
\begin{align*}
Y_t - \widetilde Y_{t|t-1} = \big( \Phi^{(K)} \big)' \bm A_{(S)} (\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)}) + \Psi' \eta_t + \epsilon_t = Z_{(3)} + Z_{(4)} + Z_{(5)},
\end{align*}
where
\begin{align*}
Z_{(3)} = \big( \Phi^{(K)} \big)' \bm A_{(S)} (\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)}) = \sum_{l=1}^K \varphi_l \big( \bm A_{(S)}' e_l^{(K)} \big)' (\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)}),
\end{align*}
$Z_{(4)} = \Psi' \eta_t$, and $Z_{(5)} = \epsilon_t$.
Recall from the proof of statement (a) that
\begin{align*}
Z_{(1)} = \sum_{l=1}^{J^*} \widehat \psi_l \big( (\bm A^* - \widehat{\bm A}^*)' e_l^{(J^*)} \big)' \widehat{\bm x}_{t-1}^{(J^*,m^*)}, \quad
Z_{(2)} = \sum_{l=1}^K \widehat \psi_l \big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)})' e_l^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)}.
\end{align*}
It remains to show that
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} + Z_{(2)} , Z_{(3)} + Z_{(4)} + Z_{(5)} \rangle = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
We consider the six terms $\langle Z_{(i)} , Z_{(j)} \rangle$ for $i = 1,2$ and $j = 3,4,5$ separately.
First,
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)}, Z_{(3)}\rangle \bigg| \\
&\leq \sum_{l=1}^{J^*} \sum_{h=1}^K \big|\langle \widehat \psi_l, \varphi_h \rangle \big| \ \bigg| \frac{1}{T} \sum_{t=m^*+1}^T
\big( (\bm A^* - \widehat{\bm A}^*)' e_l^{(J^*)} \big)' \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)} \big)' \big( \bm A_{(S)}' e_h^{(K)} \big) \bigg|
\\
&\leq J^* K \big\|\bm A_{(S)} \big\|_2 \big\| \bm A^* - \widehat{\bm A}^* \big\|_2 \bigg\| \frac{1}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)} \big)' \bigg\|_2
= O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2),
\end{align*}
where the last step follows by Lemma \ref{lem:CR_aux2}(a).
Analogously, for the second term,
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(2)}, Z_{(3)}\rangle \bigg| \\
&\leq \sum_{l=1}^{K} \sum_{h=1}^K \big|\langle \widehat \psi_l, \varphi_h \rangle \big| \ \bigg| \frac{1}{T} \sum_{t=m^*+1}^T \big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)})' e_l^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)} \big(\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)} \big)' \big( \bm A_{(S)}' e_h^{(K)} \big) \bigg| \\
&\leq K^2 \big\|\bm A_{(S)} \big\|_2 \big\|\widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big\|_2
\bigg\| \frac{1}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(K,p)} \big(\bm x_{t-1} - \bm{\widetilde x}_{t-1}^{(K,p)} \big)' \bigg\|_2
= O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
For the third term,
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)}, Z_{(4)}\rangle \bigg| \\
&\leq \sum_{l=1}^{J^*} \sum_{h=1}^K \big|\langle \widehat \psi_l, \psi_h \rangle \big| \ \bigg| \frac{1}{T} \sum_{t=m^*+1}^T
\big( (\bm A^* - \widehat{\bm A}^*)' e_l^{(J^*)} \big)' \widehat{\bm x}_{t-1}^{(J^*,m^*)} \eta_{h,t} \bigg| \\
&\leq J^* \big\| \bm A^* - \widehat{\bm A}^* \big\|_2 \sum_{h=1}^K \bigg\| \frac{1}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(J^*,m^*)} \eta_{h,t} \bigg\|_2
= O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2),
\end{align*}
where the last step follows by Lemma \ref{lem:CR_aux2}(c), and, analogously, for the fourth term,
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(2)}, Z_{(4)}\rangle \bigg| \\
&\leq \sum_{l=1}^{K} \sum_{h=1}^K \big|\langle \widehat \psi_l, \psi_h \rangle \big| \ \bigg| \frac{1}{T} \sum_{t=m^*+1}^T
\big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)})' e_l^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)} \eta_{h,t} \bigg| \\
&\leq K \big\|\widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big\|_2 \sum_{h=1}^K \bigg\| \frac{1}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(K,p)} \eta_{h,t} \bigg\|_2
= O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
For the fifth term we have
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)}, Z_{(5)}\rangle \bigg|
\leq \bigg| \frac{1}{T} \sum_{t=m^*+1}^T \sum_{l=1}^{J^*} \Big( \big( (\bm A^* - \widehat{\bm A}^*)' e_l^{(J^*)} \big)' \widehat{\bm x}_{t-1}^{(J^*,m^*)} \Big) \langle \widehat \psi_l, \epsilon_t \rangle \bigg| \\
&\leq \big\| \bm A^* - \widehat{\bm A}^* \big\|_2 \sum_{l=1}^{J^*} \bigg\| \frac{1}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(J^*,m^*)} \langle \widehat \psi_l, \epsilon_t \rangle \bigg\|_2 = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2),
\end{align*}
where the last step follows by Lemma \ref{lem:CR_aux2}(d), and, analogously, for the sixth term,
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(2)}, Z_{(5)}\rangle \bigg|
\leq \bigg| \frac{1}{T} \sum_{t=m^*+1}^T \sum_{l=1}^{K} \Big( \big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)})' e_l^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)} \Big) \langle \widehat \psi_l, \epsilon_t \rangle \bigg| \\
&\leq \big\| \widehat{\bm A}^{(K,p)} - \bm A_{(S)} \big\|_2 \sum_{l=1}^{K} \bigg\| \frac{1}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(K,p)} \langle \widehat \psi_l, \epsilon_t \rangle \bigg\|_2 = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
Then, statement (b) follows with Theorem \ref{thm:Bias}. \\
\textit{Proof of statement (c):}
We decompose
\begin{align*}
\widetilde Y_{t|t-1} - \widehat Y_{t|t-1}^{(K,p)} &= \mu + \big( \Phi^{(K)} \big)' \bm A_{(S)} \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat \mu - \big( \widehat \Psi^{(K)} \big)' \widehat{\bm A}^{(K,p)} \widehat{\bm x}_{t-1}^{(K,p)} \\
&= Z_{(6)} + Z_{(7)} + Z_{(8)} + Z_{(9)},
\end{align*}
where $Z_{(6)} = \mu - \widehat \mu$,
\begin{align*}
Z_{(7)} &= \big( \Phi^{(K)} - \widehat \Psi^{(K)} \big)' \bm A_{(S)} \bm{ \widetilde x}_{t-1}^{(K,p)} = \sum_{l=1}^K \big( \varphi_l - \widehat \psi_l \big) \big( \bm A_{(S)}' e_l^{(K)} \big)' \bm{ \widetilde x}_{t-1}^{(K,p)},\\
Z_{(8)} &= \big(\widehat \Psi^{(K)}\big)' (\bm A_{(S)} - \widehat{\bm A}^{(K,p)}) \bm{\widetilde x}_{t-1}^{(K,p)} = \sum_{l=1}^K \widehat \psi_l \big( (\bm A_{(S)} - \widehat{\bm A}^{(K,p)})' e_l^{(K)} \big)' \bm{ \widetilde x}_{t-1}^{(K,p)}, \\
Z_{(9)} &= \big(\widehat \Psi^{(K)}\big)' \widehat{\bm A}^{(K,p)} (\bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)}) = \sum_{l=1}^K \widehat \psi_l \big( (\widehat{\bm A}^{(K,p)})' e_l^{(K)} \big)' \big( \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)} \big).
\end{align*}
It remains to show that
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} + Z_{(2)} , Z_{(6)} + Z_{(7)} + Z_{(8)} + Z_{(9)} \rangle = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
We consider the four terms $\langle Z_{(1)} + Z_{(2)}, Z_{(j)} \rangle$ for $j = 6,7,8,9$ separately.
First, from the proof of statement (a),
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \big( \| Z_{(1)} \| + \| Z_{(2)} \| \big) = O_P(\|\bm A^* - \widehat{\bm A}^* \|_2),
\end{align*}
which, together with Theorem \ref{thm:consistency}(a), implies that
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} + Z_{(2)}, Z_{(6)}\rangle \bigg|
\leq \frac{1}{T} \sum_{t=m^*+1}^T \big( \| Z_{(1)} \| + \| Z_{(2)} \| \big) \| \mu - \widehat \mu \| = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
For the second term, we have
\begin{align*}
&\langle Z_{(1)}, Z_{(7)} \rangle = \sum_{l=1}^K \sum_{h=1}^{J^*} \big( (\bm A^* - \widehat{\bm A}^* )' e_h^{(J^*)} \big)'
\widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\widetilde{\bm x}_{t-1}^{(K,p)}\big)' \big(\bm A_{(S)}' e_l^{(K)} \big) \langle \varphi_l - \widehat \psi_l, \widehat \psi_h \rangle, \\
&\langle Z_{(2)}, Z_{(7)} \rangle = \sum_{l=1}^K \sum_{h=1}^{K} \big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)} )' e_h^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)} \big(\widetilde{\bm x}_{t-1}^{(K,p)}\big)' \big(\bm A_{(S)}' e_l^{(K)} \big) \langle \varphi_l - \widehat \psi_l, \widehat \psi_h \rangle
\end{align*}
which, together with \eqref{eq:eigenfconvergence}, implies
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} + Z_{(2)}, Z_{(7)} \rangle \bigg| \\
&\leq \sum_{l=1}^K \big\| \varphi_l - \widehat \psi_l \big\| \big\| \bm A^* - \widehat{\bm A}^* \big\|_2 \big\| \bm A_{(S)} \big\|_2 \bigg\| \frac{2 J^*}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\widetilde{\bm x}_{t-1}^{(K,p)}\big)' \bigg\|_2 = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
For the third term,
\begin{align*}
&\langle Z_{(1)}, Z_{(8)} \rangle = \sum_{l=1}^K \sum_{h=1}^{J^*} \big( (\bm A^* - \widehat{\bm A}^* )' e_h^{(J^*)} \big)' \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\widetilde{\bm x}_{t-1}^{(K,p)}\big)' \big( (\bm A_{(S)} - \widehat{\bm A}^{(K,p)})' e_l^{(K)} \big) \langle \widehat \psi_l, \widehat \psi_h \rangle, \\
&\langle Z_{(2)}, Z_{(8)} \rangle = \sum_{l=1}^K \sum_{h=1}^{K} \big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)} )' e_h^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)} \big(\widetilde{\bm x}_{t-1}^{(K,p)}\big)' \big( (\bm A_{(S)} - \widehat{\bm A}^{(K,p)})' e_l^{(K)} \big) \langle \widehat \psi_l, \widehat \psi_h \rangle,
\end{align*}
which, by Theorem \ref{thm:Bias}, implies
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} + Z_{(2)}, Z_{(8)} \rangle \bigg| \\
&\leq \big\| \bm A^* - \widehat{\bm A}^* \big\|_2 \big\| \bm A_{(S)} - \widehat{\bm A}^{(K,p)} \big\|_2 \bigg\| \frac{2 K}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big(\widetilde{\bm x}_{t-1}^{(K,p)}\big)' \bigg\|_2 = O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2).
\end{align*}
For the fourth term, we have
\begin{align*}
&\langle Z_{(1)}, Z_{(9)} \rangle = \sum_{l=1}^K \sum_{h=1}^{J^*} \big( (\bm A^* - \widehat{\bm A}^* )' e_h^{(J^*)} \big)' \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big( \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)} \big)' \big( (\widehat{\bm A}^{(K,p)})' e_l^{(K)} \big) \langle \widehat \psi_l, \widehat \psi_h \rangle, \\
&\langle Z_{(2)}, Z_{(9)} \rangle = \sum_{l=1}^K \sum_{h=1}^{K} \big( (\widehat{\bm A}^{(K,p)} - \bm A_{(S)} )' e_h^{(K)} \big)' \widehat{\bm x}_{t-1}^{(K,p)} \big( \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)} \big)' \big( (\widehat{\bm A}^{(K,p)})' e_l^{(K)} \big) \langle \widehat \psi_l, \widehat \psi_h \rangle,
\end{align*}
and
\begin{align*}
&\bigg| \frac{1}{T} \sum_{t=m^*+1}^T \langle Z_{(1)} + Z_{(2)}, Z_{(9)} \rangle \bigg| \\
&\leq \big\| \bm A_{(S)} - \widehat{\bm A}^{(K,p)} \big\|_2 \big\| \widehat{\bm A}^{(K,p)} \big\|_2 \bigg\| \frac{2 K}{T} \sum_{t=m^*+1}^T \widehat{\bm x}_{t-1}^{(J^*,m^*)} \big( \bm{\widetilde x}_{t-1}^{(K,p)} - \widehat{\bm x}_{t-1}^{(K,p)} \big)' \bigg\|_2,
\end{align*}
which is $O_P(T^{-1/2} \|\bm A^* - \widehat{\bm A}^* \|_2)$ by Lemma \ref{lem:CR_aux2}(b).
Finally, (c) follows with Theorem \ref{thm:Bias}.
\end{proof}
\paragraph{Main proof of Theorem \ref{thm:InformCriteria}.}
In what follows, we shall show that
\begin{align*}
\lim\limits_{T\to\infty} \mathrm{P}\Big(\mathrm{CR}_{T}(J,m)<\mathrm{CR}_T(K,p)\Big)=0
\end{align*}
for all $J\leq K_{max}$ and $m\leq p_{max}$.
From the definition of the criterion, we have
\begin{eqnarray*}
\mathrm{CR}_{T}(J,m)-\mathrm{CR}_T(K,p)= MSE_T(J,m)-MSE_T(K,p)+ g_T(J,m)-g_T(K,p).
\end{eqnarray*}
It is sufficient to prove the result for the case when $f(x)=x$ as the proof for any other strictly increasing transformation $f(\cdot)$ is identical.
Hence, it remains to show that
\begin{eqnarray*}
\lim_{T \to \infty} \mathrm{P}\Big(MSE_T(K,p)-MSE_T(J,m)>g_T(J,m)-g_T(K,p)\Big) = 0.
\end{eqnarray*}
We split the proof into two cases.
We denote the case of overselection ($J\geq K$ and $m \geq p$) as case I, and the case of underselection ($J < K$ or $m < p$ or both) as case II.
From $\|\widehat \mu - \mu \|=O_P(T^{-1/2})$, $\|\widehat{\bm A}^* - \bm{A}^*\|_2 = O_P(1)$, the fact that $Y_t \in L_H^4$, the orthonormality of the loadings, and Lemma \ref{lem:Estfactorscovariancematrix}, it follows that
$MSE_T(J,m) = (T-m)^{-1} \sum_{t=m+1}^T \| Y_t - \widehat Y_{t|t-1}^{(J,m)}\|^2 = O_P(1)$.
Moreover, we have $MSE_T(J,m) - T^{-1}(T-m) MSE_T(J,m) = O_P(T^{-1})$, and
\begin{align*}
&\frac{T-m}{T} MSE_T(J,m) - \frac{T-p}{T} MSE_T(K,p) - \frac{1}{T} \sum_{t=m^*+1}^T \Big( \big\| Y_t - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 - \big\| Y_t - \widehat Y_{t|t-1}^{(K,p)} \big\|^2 \Big) \\
&= \frac{1}{T} \sum_{t=m+1}^{m^*} \big\| Y_t - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 - \frac{1}{T} \sum_{t=p+1}^{m^*} \big\| Y_t - \widehat Y_{t|t-1}^{(K,p)} \big\|^2 = O_P(T^{-1}),
\end{align*}
where $m^* = \max\{m,p\}$, which implies that
\begin{align}
&MSE_T(J,m) - MSE_T(K,p) \nonumber \\
&= \frac{1}{T} \sum_{t=m^*+1}^T \Big( \big\| Y_t - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 - \big\| Y_t - \widehat Y_{t|t-1}^{(K,p)} \big\|^2 \Big) + O_P(T^{-1}), \label{eq:lem5aux0a}
\end{align}
for both cases I and II, so it remains to study $T^{-1} \sum_{t=m^*+1}^T ( \| Y_t - \widehat Y_{t|t-1}^{(J,m)} \|^2 - \| Y_t - \widehat Y_{t|t-1}^{(K,p)} \|^2)$.
A useful decomposition is obtained by adding and subtracting $\widehat Y_{t|t-1}^{(K,p)}$, i.e.,
\begin{align}
&\big\|Y_t - \widehat Y_{t|t-1}^{(J,m)}\big\|^2
= \big\| Y_t - \widehat Y_{t|t-1}^{(K,p)} + \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 \nonumber \\
&= \big\| Y_t - \widehat Y_{t|t-1}^{(K,p)}\big\|^2 + \big\| \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 + 2 \big\langle Y_t - \widehat Y_{t|t-1}^{(K,p)}, \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\rangle. \label{eq:lem5aux0b}
\end{align}
Then, from Lemma \ref{lem:CR_aux}, it follows that
\begin{align*}
\frac{1}{T} \sum_{t=m^*+1}^T \Big( \big\| \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\|^2 + 2 \big\langle Y_t - \widehat Y_{t|t-1}^{(K,p)}, \widehat Y_{t|t-1}^{(K,p)} - \widehat Y_{t|t-1}^{(J,m)} \big\rangle \Big) = \begin{cases} O_P(T^{-1}) & \text{for case I}, \\ \Theta_P (1) & \text{for case II}, \end{cases}
\end{align*}
which, together with \eqref{eq:lem5aux0a} and \eqref{eq:lem5aux0b}, implies that
\begin{align}
MSE_T(J,m) - MSE_T(K,p) = \begin{cases} O_P(T^{-1}) & \text{for case I}, \\ \Theta_P (1) & \text{for case II}. \end{cases} \label{eq:thm4aux1}
\end{align}
For case I, we have $MSE_T(J,m)-MSE_T(K,p)=O_P(T^{-1})$.
If $J \geq K$ and $m>p$ or $J>K$ and $m \geq p$, it follows that $g_T(J,m)-g_T(K,p)>0$, which converges to zero at a slower rate than $T^{-1}$.
This follows from the condition that $T g_T(J,m) \to \infty$ as $T \to \infty$ for all $J$ and $m$.
Thus, $\mathrm{P}(\mathrm{CR}_{T}(J,m)<\mathrm{CR}_T(K,p))\to 0$ as $T\to\infty$.
If $(J,m) = (K,p)$, the result is trivially satisfied. \\
For case II, \eqref{eq:thm4aux1} implies $\pliminf_{T\to\infty} (MSE_T(J,m)-MSE_T(K,p)) > 0$, which yields
\begin{align*}
\plimsup\limits_{T\to\infty} \ (MSE_T(K,p)-MSE_T(J,m)) < 0.
\end{align*}
Since $\lim_{T\to\infty} (g_T(J,m)-g_T(K,p))=0$, which is implied by the condition that $g_T(J,m) \to 0$ for all $J$ and $m$, it follows that
$\mathrm{P}(\mathrm{CR}_{T}(J,m)<\mathrm{CR}_T(K,p))\to 0$ as $T\to\infty$, which concludes the proof of the theorem.
\section{Introduction}
Stochastic variance reduced methods~\citep{roux2012stochastic,johnson2013accelerating, shalev2013stochastic} have been recently proposed as an improved alternative to the venerable stochastic gradient descent (\textsc{Sgd}) method~\citep{robbins1951}.
Like \textsc{Sgd}, these methods only need to visit a small batch of random examples per iteration, which makes them ideally suited for large scale machine learning problems. Unlike \textsc{Sgd}, however, the variance of their updates decreases to zero (hence the name), and they converge with non-decreasing step sizes.
While initial stochastic variance reduced methods only considered smooth objectives, variants with support for a non-smooth term like Prox\textsc{Svrg}~\citep{xiao2014proximal} and \textsc{Saga}~\citep{defazio2014saga} were soon developed. These methods are highly efficient whenever the nonsmooth part is \emph{proximal}, that is, its proximal operator is available in closed form or at least fast to compute. This includes penalties such as the $\ell_1$ or group lasso norm, but not more complex ones like the overlapping group lasso~\citep{jacob2009group}, multidimensional total variation~\citep{barbero2014modular} or trend filtering~\citep{kim2009ell1}, to name a few.
A key observation is that many of these complex penalties can be
decomposed as a sum of proximal terms. Proximal splitting methods like the
three operator splitting \citep{davis2017three} or the Condat-V{\~u} algorithm~\citep{condat2013direct, vu2013splitting} then provide a principled
approach to incorporate these penalties into the optimization.
However, these methods require computing the full gradient of the smooth term at each iteration, which can become costly in large scale machine learning problems since it involves a full pass over the dataset. A question of key practical interest is whether these proximal splitting methods can be accelerated through the use of stochastic variance reduction techniques.
Our {\bfseries main contribution} is the development and analysis of \textsc{Vr-Tos}, a stochastic variance reduced method that can solve problems with a sum of proximal terms.
The proposed method bridges two previously distant families of algorithms and inherits the best of both:
like the three operator splitting of \citet{davis2017three}, it can solve problems with multiple proximal terms, and like variance reduced stochastic methods, its per-iteration cost is independent of the number of smooth terms, it converges with a fixed step size, and it reaches the same asymptotic convergence rate as full gradient methods.
Furthermore, we also develop a sparse
variant of the proposed algorithm which can take advantage of
the sparsity in the input data.
The paper is organized as follows:
\begin{itemize}[leftmargin=*]
\item \emph{Method.} \S\ref{sec:methods} describes the \textsc{Vr-Tos}\ algorithm, and extends it in \S\ref{scs:sparse} to leverage sparsity in the input data. \S\ref{scs:extension_k_terms} extends these methods to the case of an arbitrary number of proximal terms.
\item \emph{Analysis.} In \S\ref{scs:analysis} we provide a non-asymptotic convergence analysis of the proposed method. We show that, like other variance reduced methods, it converges with a fixed step size and can achieve the same asymptotic rate as the full gradient variants.
\item \emph{Experiments.} In \S\ref{scs:experiments} we compare the proposed method and related algorithms on a logistic regression problem with overlapping group lasso penalty on 4 datasets.
\end{itemize}
\subsection{Definitions and notation}
By convention, we denote vectors and vector-valued functions in lowercase boldface (e.g. ${\boldsymbol x}$) and matrices in uppercase boldface letters (e.g. $\boldsymbol D$).
The { proximal operator } of a convex lower semicontinuous function $h$ is defined as
$ \prox_{\gamma h}({\boldsymbol x}) \stackrel{\text{def}}{=} \argmin_{{\boldsymbol z} \in {\mathbb R}^p} \{ h({\boldsymbol z}) + \frac{1}{2 \gamma}\|{\boldsymbol x} - {\boldsymbol z}\|^2\}$.
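As a concrete illustration of this definition (our example, not taken from the paper), the proximal operator of the $\ell_1$ norm has a well-known closed form, soft-thresholding:

```python
import numpy as np

def prox_l1(x, gamma):
    # Soft-thresholding: closed-form solution of
    #   argmin_z { ||z||_1 + ||x - z||^2 / (2 * gamma) },
    # i.e. the proximal operator of gamma * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
```

For instance, `prox_l1(np.array([3.0, -0.5, 0.2]), 1.0)` shrinks every coordinate towards zero by `gamma` and clips small coordinates to exactly zero.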
We say a function $f$ is $L$-smooth if it is differentiable and its gradient is $L$-Lipschitz, while it is { $\mu$-strongly convex} if $f - \frac{\mu}{2}\|\cdot\|^2$ is convex.
We denote by $[\,{\boldsymbol x}\,]_b$ the $b$-th coordinate in ${\boldsymbol x}$. This notation is overloaded so that for a collection of blocks $T = \{B_1, B_2, \ldots\}$, $[ {\boldsymbol x} ]_T$ denotes the vector ${\boldsymbol x}$ restricted to the coordinates in the blocks of $T$. For convenience, when $T$ consists of a single block $B$ we use $[ {\boldsymbol x} ]_B$ as a shortcut of $[ {\boldsymbol x} ]_{\{B\}}$.
Finally, we distinguish ${\mathbb E}$, the full expectation taken with respect to all the randomness in the system, from $\mathbf{E}$, the conditional expectation with respect to the random index sampled at iteration $t$, conditioned on all randomness up to iteration $t$.
\section{Methods}\label{sec:methods}
In this section we present our main contribution, the variance reduced three operator splitting method.
We will first consider problems with only two non-smooth terms, and generalize this formulation to an arbitrary number in \S\ref{scs:extension_k_terms}.
We consider the following optimization problem
\begin{empheq}[box=\mybluebox]{equation}\tag{OPT}\label{eq:opt_problem}
\begin{aligned}
&\vphantom{\sum^i}\minimize_{{\boldsymbol x} \in {\mathbb R}^p}\, f({\boldsymbol x}) + g({\boldsymbol x}) + h({\boldsymbol x}) \,,\\
& \vphantom{\sum_i^n}\text{ with } f({\boldsymbol x}) = \textstyle\frac{1}{n} \sum_{i=1}^n \psi_i({\boldsymbol x}) + \omega({\boldsymbol x})~
\end{aligned}
\end{empheq}
where each $\psi_i$ is convex and $L_\psi$-smooth, $\omega$ is convex and $L_\omega$-smooth and $g, h$ are \emph{proximal}, i.e., convex and we have access to their proximal operator.
This formulation can express a broad range of problems arising in machine learning and signal processing: the finite-sum includes common loss functions such as least squares or logistic loss; the two proximal terms $g, h$ can be extended to an arbitrary number and include penalties such as the group lasso with overlap, total variation, $\ell_1$ trend filtering, etc. Furthermore, the proximal terms can be extended-valued, thus allowing for convex constraints through the use of the indicator function. With respect to previous work, this significantly enlarges the class of functions that stochastic variance reduced methods can solve efficiently.
We allow the terms inside the finite sum to be a sum of two terms: $\psi_i$ and $\omega$. This might seem superfluous since it is not more general than the standard formulation with a single term. However, in practice $\psi_i$ (e.g., a least squares or logistic loss, see \ref{apx:implementation}) can be highly structured and allow for reduced storage schemes and/or have sparse gradients (see \S\ref{scs:sparse}), properties which might not be shared by $\omega$ (e.g., an $\ell_2$ regularization term).
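As a concrete instance of a penalty that decomposes into proximal terms, the overlapping group lasso with groups $\{0,1\}$ and $\{1,2\}$ can be split into two non-overlapping collections, one per proximal term $g$ and $h$, each with an easy prox (block soft-thresholding). A minimal sketch, with names of our choosing:

```python
import numpy as np

def prox_group_lasso(x, gamma, groups):
    # Block soft-thresholding: prox of gamma * sum_B ||x_B||_2 for a
    # collection of NON-overlapping groups (index arrays). Overlapping
    # groups such as {0,1} and {1,2} are handled by assigning each
    # non-overlapping collection to a separate proximal term g or h.
    out = x.copy()
    for B in groups:
        nrm = np.linalg.norm(x[B])
        out[B] = 0.0 if nrm <= gamma else (1.0 - gamma / nrm) * x[B]
    return out
```

With this splitting, $g({\boldsymbol x}) = \|{\boldsymbol x}_{\{0,1\}}\|_2$ and $h({\boldsymbol x}) = \|{\boldsymbol x}_{\{1,2\}}\|_2$ are both proximal even though their sum is not.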
\begin{algorithm}[t]
\KwIn{${\boldsymbol y}_0 \in {\mathbb R}^p$, ${\boldsymbol \alpha}_0 \in {\mathbb R}^{n \times p}$, $\gamma > 0$}
{\bfseries Temporary storage}: ${\boldsymbol z}_t$, ${\boldsymbol v}_t$ and ${\boldsymbol x}_t$, all in ${\mathbb R}^p$
\KwResult{approximate solution to \eqref{eq:opt_problem} }
\For{$t=0, 1, \ldots $ }{
${\boldsymbol z}_t = \prox_{\gamma h}({\boldsymbol y}_t)$
Sample $i \in \{1, \ldots, n\}$ uniformly at random
${\boldsymbol v}_{t} = \nabla \psi_i({\boldsymbol z}_t) - {\boldsymbol \alpha}_{i, t} + \overline{{\boldsymbol \alpha}}_t + \nabla \omega({\boldsymbol z}_t)$
${\boldsymbol x}_t = \prox_{\gamma g}(2 {\boldsymbol z}_t - {\boldsymbol y}_t - \gamma {\boldsymbol v}_{t})$
${\boldsymbol y}_{t+1} = {\boldsymbol y}_t + {\boldsymbol x}_t - {\boldsymbol z}_t$
Update ${\boldsymbol \alpha}_{t+1}$ according to \eqref{eq:q_memorization}\label{l:update_alpha}
}
\Return ${\boldsymbol z}_t$
\caption{Variance Reduced \textsc{Tos}\ (\textsc{Vr-Tos})}\label{alg:vrtos}
\end{algorithm}
Central to our algorithm is the concept of $q$-memorization~\citep{hofmann2015variance}, which we recall below. It provides a convenient abstraction over common gradient memorization techniques like the ones in \textsc{Saga}\ and \textsc{Svrg}.
\begin{definition}
A \emph{uniform $q$-memorization} algorithm selects at each iteration $t$ a random index set $J_t$ of memory terms to update according to
\begin{equation}\label{eq:q_memorization}
{\boldsymbol \alpha}_{j, t+1} = \begin{cases}\nabla \psi_j({\boldsymbol z}_t) &\text{ if $j \in J_t$}\\{\boldsymbol \alpha}_{j, t} &\text{ otherwise\,,}\end{cases}
\end{equation}
such that any $j$ has the same probability $q/n$ of being updated.
\end{definition}
We now introduce the variance-reduced three operator splitting (\textsc{Vr-Tos}), a method to solve problems of the form \eqref{eq:opt_problem}. It is specified in Algorithm~\ref{alg:vrtos} and takes as input a vector of coefficients ${\boldsymbol y}_0 \in {\mathbb R}^p$, a table ${\boldsymbol \alpha}_0 \in {\mathbb R}^{n\times p}$ to store previous gradients and a step size $\gamma > 0$. Although in the general case this table is required to be of size $n\times p$, for linearly-parametrized loss functions like the logistic or least squares loss it can be reduced to size $n$ (\ref{apx:implementation}). Furthermore, the \textsc{Svrg}-like update detailed below avoids the need for this storage at the expense of a slightly increased per-iteration cost.
The proposed method performs one evaluation of each of the proximal terms and builds the gradient estimator ${\boldsymbol v}_t$ from the table of previous gradients ${\boldsymbol \alpha}_t$ and the index $i$ sampled uniformly at random. It is easy to see that ${\boldsymbol v}_t$ is an unbiased estimate of the gradient, that is, $\mathbf{E}\,{\boldsymbol v}_t = \nabla f({\boldsymbol z}_t)$.
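To make the iteration concrete, the following is a minimal NumPy sketch of Algorithm~\ref{alg:vrtos} with the \textsc{Saga}-like memory update ($J_t = \{i\}$) and $\omega = 0$; the function names and the toy setup are ours, not the paper's reference implementation:

```python
import numpy as np

def vr_tos_saga(grad_psi, prox_g, prox_h, n, p, gamma, n_iter, rng):
    # Sketch of Algorithm 1 (VR-TOS) with the SAGA-like q-memorization
    # update J_t = {i}; omega = 0 for simplicity.
    y = np.zeros(p)
    alpha = np.zeros((n, p))       # table of stored partial gradients
    alpha_bar = np.zeros(p)        # running mean of the table
    for _ in range(n_iter):
        z = prox_h(y, gamma)
        i = int(rng.integers(n))   # sample i uniformly at random
        g_new = grad_psi(i, z)
        v = g_new - alpha[i] + alpha_bar   # unbiased: E[v] = grad f(z)
        x = prox_g(2 * z - y - gamma * v, gamma)
        y = y + x - z
        alpha_bar = alpha_bar + (g_new - alpha[i]) / n  # update table mean
        alpha[i] = g_new                                # refresh memory i
    return prox_h(y, gamma)
```

With $g = h = 0$ (both prox operators the identity) the scheme reduces to plain \textsc{Saga}, so on a small consistent least-squares problem it recovers the exact solution.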
This method allows the memory terms to be updated using any scheme that verifies the $q$-memorization framework (line \ref{l:update_alpha}). Some common schemes are:
\begin{itemize}[leftmargin=*]
\item \emph{\textsc{Saga}-like update}. At each iteration, the algorithm updates the same coefficient that has been sampled, i.e. $J_t = \{i\}$. In this scheme each memory term has probability $1/n$ of being updated, and so $q=1$.
\item \emph{\textsc{Svrg}-like update}. Fix parameter $q > 0$ and draw at each iteration $r$ from a uniform distribution in the $[0, 1]$ interval. If $r < q/n$, the algorithm performs a complete update ${\boldsymbol \alpha}_{j, t+1} = \nabla \psi_j({\boldsymbol z}_t)$ for all $j$, otherwise they are left unchanged.
Like in the \textsc{Svrg}\ algorithm~\citep{johnson2013accelerating}, it is possible to avoid storing the memory terms, since $\overline{\boldsymbol \alpha}_t$ is constant unless a full refresh is triggered. In this setting, only the $p$-dimensional vectors $\overline{\boldsymbol \alpha}_t$ and $\widetilde{{\boldsymbol z}}_t$ need to be stored, where $\widetilde{{\boldsymbol z}}_t$ is the value of ${\boldsymbol z}_t$ at the last full refresh. This variant avoids the need to store ${\boldsymbol \alpha}_t$ at the expense of a slightly increased per-iteration cost, as ${\boldsymbol \alpha}_i = \nabla \psi_i(\widetilde{{\boldsymbol z}}_t)$ needs to be computed at each iteration.
This memory update scheme was proposed by \citet{hofmann2015variance}; unlike in the original \textsc{Svrg}\ algorithm, the number of iterations between two full refreshes is a random variable instead of a fixed number.
\end{itemize}
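Both schemes are instances of the $q$-memorization template: each index $j$ is refreshed with probability $q/n$. A small sketch of the random update-set draw (our naming), covering the two cases above:

```python
import numpy as np

def sample_update_set(scheme, n, q, rng):
    # Returns the random index set J_t of memory terms to refresh.
    if scheme == "saga":
        # J_t = {i}, i uniform: each j is updated with probability 1/n.
        return {int(rng.integers(n))}
    elif scheme == "svrg":
        # Full refresh with probability q/n, otherwise no update:
        # each j is again updated with probability q/n.
        return set(range(n)) if rng.random() < q / n else set()
    raise ValueError(scheme)
```

A quick Monte Carlo check confirms that under either scheme a fixed index is refreshed with the advertised frequency $q/n$.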
\subsection{Sparse \textsc{Vr-Tos}}\label{scs:sparse}
\paragraph{Need for a sparse variant.} Modern web-scale optimization problems that arise in machine learning are not only large, they are also often \emph{sparse}. For example, in the \texttt{LibSVM} datasets suite\footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}}, 8 out of the 11 datasets with more than a million samples have a density below $0.01\%$, and the largest one in number of samples has a density below 1 per million.
Linearly-parametrized loss functions of the form $ \psi_i({\boldsymbol x}) = l_i({\boldsymbol a}_i^T {\boldsymbol x})$ have a gradient of the form $\nabla \psi_i({\boldsymbol x}) = {\boldsymbol a}_i l_i'({\boldsymbol a}_i^T {\boldsymbol x})$, which inherits the same sparsity pattern as the data ${\boldsymbol a}_i$. Since the data might be extremely sparse, it is of great practical interest to leverage sparsity in the partial gradients. This is the case in generalized linear models such as least squares or logistic regression, where the ${\boldsymbol a}_i$ are the rows of a data matrix.
In this subsection we assume that $g$ and $\omega$ are block separable, i.e.,
can be decomposed block coordinate-wise as $g({\boldsymbol x}) = \textstyle{\sum_{B \in \mathcal{B}}} g_B([{\boldsymbol x}]_B)$ and $\omega({\boldsymbol x}) = \textstyle{\sum_{B \in \mathcal{B}}} \omega_B([{\boldsymbol x}]_B)$, where $\mathcal{B}$ is a partition of the coefficients into subsets which we will call \emph{blocks}, and $g_B, \omega_B$ only depend on coordinates in block $B$.
Furthermore, we will make use of the following notation:
\begin{itemize}[leftmargin=*]
\item \emph{Extended support}. We define the extended support of $\nabla \psi_i$, denoted $T_i$, as the set of blocks of $\mathcal{B}$ that intersect with its support, formally defined as $T_i \stackrel{\text{def}}{=} \{B: \text{supp}(\nabla \psi_i) \cap B \neq \varnothing, \,B\in\mathcal{B} \}$.
For totally separable penalties such as the $\ell_1$ norm, the blocks are individual coordinates and so the extended support covers the same coordinates as the support.
\item \emph{Reweighting constants}.
Let $\boldsymbol{P}_i$ be the projection onto the extended support, i.e., the diagonal matrix where $[{\boldsymbol P}_i]_{B, B}$ is the identity if $B \in T_i$ and zero otherwise.
For simplicity we assume that each block appears in at least one $T_i$, as otherwise the problem can be reformulated without it.
For each block $B \in \mathcal{B}$ we define $d_B$ as the inverse frequency of that block in the extended supports, i.e. $d_B = (\frac{1}{n}\sum_{i=1}^n \mathds{1}\{B \in T_i\})^{-1}$. For notational convenience we define the block-diagonal matrix ${\boldsymbol D}$ as $[{\boldsymbol D}]_{B, B} = d_B \boldsymbol{I}$ for each block $B \in \mathcal{B}$. Note that by definition $\frac{1}{n}\sum_{i=1}^n {\boldsymbol P}_i = {\boldsymbol D}^{-1}$. Computation of this diagonal matrix should be done as a preprocessing step of the algorithm.
\begin{algorithm}[t]
\KwIn{${\boldsymbol y}_0 \in {\mathbb R}^p$, ${\boldsymbol \alpha}_0 \in {\mathbb R}^{n \times p}$, $\gamma > 0$}
{\bfseries Temporary storage}: ${\boldsymbol z}_t$, ${\boldsymbol v}_t$ and ${\boldsymbol x}_t$, all in ${\mathbb R}^p$
\KwResult{approximate solution to \eqref{eq:opt_problem} }
\For{$t=0, 1, \ldots $ }{
Sample $i \in \{1, \ldots, n\}$ uniformly at random
$T_i = \text{extended support of } \nabla \psi_i$
$[{\boldsymbol z}_t]_{T_i} = [\prox^{{\boldsymbol D}^{-1}}_{\gamma h}({\boldsymbol y}_t)]_{T_i}$
$[{\boldsymbol v}_{t}]_{T_i} = [\nabla \psi_i({\boldsymbol z}_t)\!-\!{\boldsymbol \alpha}_{i, t} + {\boldsymbol D}(\overline{{\boldsymbol \alpha}}_t + \nabla \omega({\boldsymbol z}_t))]_{T_i}$
$[{\boldsymbol x}_t]_{T_i} = [\prox_{\gamma \varphi_i}(2 {\boldsymbol z}_t - {\boldsymbol y}_t - \gamma {\boldsymbol v}_{t})]_{T_i}$
$[{\boldsymbol y}_{t+1}]_{T_i} = [{\boldsymbol y}_t + {\boldsymbol x}_t - {\boldsymbol z}_t]_{T_i}$
Update ${\boldsymbol \alpha}_{t+1}$ according to \eqref{eq:q_memorization}
}
\Return $\prox^{{\boldsymbol D}^{-1}}_{\gamma h}({\boldsymbol y}_t)$
\caption{Sparse \textsc{Vr-Tos}}\label{alg:vrtos_sparse}
\end{algorithm}
\item The \emph{scaled proximal operator} is defined for a function $\varphi$, step size $\gamma > 0$, positive definite matrix $\boldsymbol{H}$ and norm $\|\cdot\|_{{\boldsymbol H}}^2 \stackrel{\text{def}}{=} \langle \cdot, {\boldsymbol H}\cdot \rangle$ as
\begin{equation}
\prox^{\boldsymbol{H}}_{\gamma \varphi}({\boldsymbol x}) \stackrel{\text{def}}{=} \argmin_{{\boldsymbol z} \in {\mathbb R}^p}\big\{\, \varphi({\boldsymbol z}) + \frac{1}{2\gamma}\|{\boldsymbol x} - {\boldsymbol z}\|_{\boldsymbol{H}}^2\,\big\}
\end{equation}
\end{itemize}
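As an illustration of the reweighting constants (our code, assuming a linearly-parametrized loss so that $\text{supp}(\nabla \psi_i) = \text{supp}({\boldsymbol a}_i)$, the $i$-th data row), the preprocessing step that computes $d_B$ can be sketched as:

```python
import numpy as np

def block_reweighting(A, blocks):
    # d_B = (frequency of block B among the extended supports T_i)^{-1},
    # where T_i is the set of blocks intersecting supp(a_i) (row i of A).
    n = A.shape[0]
    freq = np.zeros(len(blocks))
    for i in range(n):
        support = np.flatnonzero(A[i])
        for k, B in enumerate(blocks):
            if np.intersect1d(support, B).size > 0:  # block B is in T_i
                freq[k] += 1.0
    # Assumes every block appears in at least one T_i (freq > 0),
    # as stated in the text.
    return n / freq
```

For dense data every block intersects every row, so all $d_B = 1$ and ${\boldsymbol D}$ is the identity, consistent with the remark below.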
We now have all necessary ingredients to present the sparse variant of \textsc{Vr-Tos}. This is specified in Algorithm~\ref{alg:vrtos_sparse}. In this variant, all operations are restricted to the extended support.
The algorithm requires computing the scaled proximal operators of $g$ and $h$. By block separability of $g$, its scaled proximal operator can be computed block-wise as $[\prox^{{\boldsymbol D}^{-1}}_{\gamma g}({\boldsymbol x})]_{B} = \prox_{(d_B \gamma) g_B}([{\boldsymbol x}]_B)$ for all $B \in \mathcal{B}$.
Hence the cost of computing $[{\boldsymbol x}_t]_{T_i}$ will depend on the extended support size and not on the dimensionality.
We can unfortunately not guarantee the same complexity for $[{\boldsymbol z}_t]_{T_i}$ since we do not have a closed form for the scaled proximal operator of $h$ in general.
We review some specific cases in which it is possible to compute this scaled proximal operator in \ref{apx:optimizing_multiple_penalties}.
Alternatively, in the next subsection we propose a reformulation that avoids the need to compute this scaled proximal operator at the expense of higher memory usage.
In the case that one proximal term is zero, the proposed algorithm with the \textsc{Saga}-like update of the memory terms reduces to the Sparse \textsc{Saga}\ variant of \citet{pedregosa2017breaking}. With the \textsc{Svrg}-like update of the memory terms, it instead yields a novel sparse variant of Prox\textsc{Svrg}~\citep{xiao2014proximal}.
For both proposed algorithms, when the input is dense we have ${\boldsymbol P}_i = {\boldsymbol D} = \boldsymbol{I}$ and we recover Algorithm~\ref{alg:vrtos}.
\subsection{Extension to an arbitrary number of proximal terms}\label{scs:extension_k_terms}
The proposed method can be easily extended to the more general setting of an objective function with an arbitrary number of proximal terms of the form
\begin{empheq}[box=\mybluebox]{equation}\tag{OPT-$k$}\label{eq:obj_fun_k}
\begin{aligned}
&\minimize_{{\boldsymbol x} \in {\mathbb R}^p}\, f({\boldsymbol x}) + \textstyle\sum_{j=1}^k g_j({\boldsymbol x})\,,\nonumber \\
&\text{ with } f({\boldsymbol x}) = \textstyle\frac{1}{n} \sum_{i=1}^n \psi_i({\boldsymbol x}) + \omega({\boldsymbol x})~,
\end{aligned}
\end{empheq}
where $\psi_i$ and $\omega$ are as in \eqref{eq:opt_problem} and $g_1, \ldots, g_k$ are proximal.
This is done by expressing the above as a problem of the form~\eqref{eq:opt_problem} in an enlarged space and then applying the proposed algorithm to this reformulation. For this, we will introduce $k$ new variables which we will constrain to be equal via an indicator function. The above problem can be written equivalently as follows,
\begin{equation*}\label{eq:obj_extended}
\min_{{\boldsymbol X} \in {\mathbb R}^{k\times p}}\,f(\overline{\boldsymbol X}) + \underbrace{\textstyle\sum_{j=1}^k g_j({\boldsymbol X}_j)}_{\stackrel{\text{def}}{=} g({\boldsymbol X})} + \underbrace{\imath\{{\boldsymbol X}_1\!=\!\cdots\!=\!{\boldsymbol X}_k\}}_{\stackrel{\text{def}}{=} h({\boldsymbol X})}\,,
\end{equation*}
where we have split the original variable into $k$ variables ${\boldsymbol X}_1, \ldots, {\boldsymbol X}_k$ and constrained them to be equal using an indicator function in the last term.
In this formulation the first term is smooth and the other two terms are proximal: $g$ is proximal because the variables in the $g_j$ are decoupled and each $g_j$ is proximal by assumption, while $h$ is the indicator function of a linear subspace, whose scaled proximal operator can be computed in closed form as follows (Lemma \ref{lemma:projection_kterms}):
\begin{align}
&[\prox^{\boldsymbol{D}^{-1}}_{\gamma h}({\boldsymbol X})]_{i, j} = \left({\textstyle\sum_{l=1}^k} a_{l, j} {\boldsymbol X}_{l, j}\right) / \left({\textstyle\sum_{l=1}^k} a_{l, j}\right)\nonumber \\
&\text{ with } a_{l, j} = {\boldsymbol D}^{-1}_{l p + j, l p + j}~.
\end{align}
Hence, the problem with multiple proximal terms \eqref{eq:obj_fun_k} can be formulated as a problem with two proximal terms \eqref{eq:opt_problem}, and the method developed in the previous subsections applies.
This yields a variance-reduced method for problems with an arbitrary number of proximal terms.
It is worth noting that for the sparse variants this formulation avoids the potentially difficult computation of the scaled proximal operator of $h$.
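As a numerical sanity check of this consensus projection (an illustrative sketch with made-up weights, not the paper's implementation), the scaled proximal operator of the equality-constraint indicator is a per-column weighted average of the $k$ copies:

```python
import numpy as np

def prox_consensus(X, A):
    # Scaled prox of the indicator of {X_1 = ... = X_k} in a diagonal
    # metric: for each column j, all rows are set to the weighted
    # average of the k copies, with weights A[l, j] (the relevant
    # diagonal entries of D^{-1}).  A is hypothetical here.
    w = (A * X).sum(axis=0) / A.sum(axis=0)
    return np.tile(w, (X.shape[0], 1))

rng = np.random.default_rng(0)
k, p = 3, 4
X = rng.standard_normal((k, p))
A = rng.uniform(0.5, 2.0, size=(k, p))   # hypothetical weights
Z = prox_consensus(X, A)
```

The output has all rows equal (it lies in the constraint set), and the weighted average is the exact minimizer of the per-column weighted least-squares subproblem.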
\section{Related work}\label{scs:related_work}
{ \begin{table*}[t]
\centering
\footnotesize
\setlength\tabcolsep{5pt}\begin{tabular}{c c | c c c c |}
\cline{2-6}
\multicolumn{1}{c|}{} & \multirow{2}{*}{Methods} &
\multirow{1}{*}{incremental} & \multirow{1}{*}{non-decreasing} & multiple non-smooth & \multirow{2}{*}{sparse updates}\\
\multicolumn{1}{c|}{} & &\multirow{-1}{*}{updates} & \multirow{-1}{*}{step size} & terms & \\
\cline{2-6}
\multicolumn{1}{c|}{} & {\cellcolor{Gray} \textsc{Vr-Tos} } & \cellcolor{Gray} & \cellcolor{Gray} & \cellcolor{Gray} & \cellcolor{Gray} \\
\multicolumn{1}{c|}{\multirow{4}{*}}&\footnotesize(\emph{this work})
\cellcolor{Gray}&
\multirow{-2}{*}{\color{mydarkgreen}\large\cmark} \cellcolor{Gray}
\cellcolor{Gray}&
\cellcolor{Gray} \multirow{-2}{*}{\color{mydarkgreen}\large\cmark} & \cellcolor{Gray} \multirow{-2}{*}{\color{mydarkgreen}\large\cmark}& \cellcolor{Gray} \multirow{-2}{*}{\color{mydarkgreen}\large\cmark}\\
\multicolumn{1}{c|}{}&{ \textsc{Saga} } &
& & &
\\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\citep{defazio2014saga}}&
\multirow{-2}{*}{\color{mydarkgreen}\large\cmark}
&
\multirow{-2}{*}{\color{mydarkgreen}\large\cmark} &\multirow{-2}{*}{\color{mydarkred}\large\xmark} & \multirow{-2}{*}{{\color{mydarkgreen}{\large\cmark}}\scriptsize\citep{pedregosa2017breaking}}\\
\multicolumn{1}{c|}{\multirow{4}{*}}&\multirow{1}{*}{\cellcolor{Gray} Prox\textsc{Svrg} } &
\cellcolor{Gray} &\cellcolor{Gray} &\cellcolor{Gray} &\cellcolor{Gray} \\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\citep{xiao2014proximal}}\cellcolor{Gray}&
\multirow{-2}{*}{\color{mydarkgreen}{\large\cmark}} \cellcolor{Gray}
\cellcolor{Gray}&
\cellcolor{Gray} \multirow{-2}{*}{\color{mydarkgreen}{\large\cmark}} & \multirow{-2}{*}{\color{mydarkred}\large\xmark} \cellcolor{Gray} & \multirow{-2}{*}{{\color{mydarkred}\large\xmark}{\,${\dagger}$}} \cellcolor{Gray}\\
\multicolumn{1}{c|}{\multirow{4}{*}}&\multirow{1}{*}{ \textsc{Tos} } &
& & & \\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\citep{davis2017three}}&
\multirow{-2}{*}{\color{mydarkred}\large\xmark}
&
\multirow{-2}{*}{\color{mydarkgreen}{\large\cmark}} & \multirow{-2}{*}{\color{mydarkgreen}{\large\cmark}}& \multirow{-2}{*}{N/A}\\
\multicolumn{1}{c|}{\multirow{4}{*}}&\multirow{1}{*}{\cellcolor{Gray} Stochastic \textsc{Tos} } &
\cellcolor{Gray} &\cellcolor{Gray} & \cellcolor{Gray} & \cellcolor{Gray} \\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\citep{yurtsever2016stochastic}}\cellcolor{Gray}&
\multirow{-2}{*}{} \cellcolor{Gray}
\cellcolor{Gray}\multirow{-2}{*}{\large\color{mydarkgreen}\cmark}&
\cellcolor{Gray} \multirow{-2}{*}{{\color{mydarkred}\large\xmark}} & \cellcolor{Gray}\multirow{-2}{*}{\large\color{mydarkgreen}\cmark} & \cellcolor{Gray} \multirow{-2}{*}{{\color{mydarkred}\large\xmark}}\\
\cline{2-6}\vspace{-2.8ex}\\
\end{tabular}
\caption{{\bfseries Comparison with related work.} The proposed method is unique in that it combines the advantages of variance-reduced methods (incremental updates, non-decreasing step sizes and sparse updates) with the advantages of proximal splitting (support for multiple non-smooth terms). $\dagger$: a sparse variant of Prox\textsc{Svrg}\ follows as a special case of Algorithm~\ref{alg:vrtos_sparse} with $h=0$ and the SVRG-like update of the memory terms.}\label{table:related_work}
\end{table*}
}
We comment on the most closely related ideas, summarized in Table \ref{table:related_work}.
Methods that support objective functions of the form \eqref{eq:opt_problem}, with two or more proximal terms and a smooth term accessed via its gradient, have recently been proposed. Examples are
the primal-dual hybrid gradient method (also known as the Condat--V{\~u} algorithm)~\citep{condat2013primal,vu2013splitting},\footnote{We note that this method can optimize the more general objective function $f({\boldsymbol x}) + g({\boldsymbol x}) + h(\boldsymbol{L}{\boldsymbol x})$, for an arbitrary linear operator $\boldsymbol{L}$ that is fixed to the identity in our setting.}
the generalized forward-backward splitting~\citep{raguet2013generalized} and the three operator splitting~\citep{davis2017three}. Due to its excellent empirical performance and amenability to sparse updates, we have chosen this last method as the basis for the proposed method. The proposed \textsc{Vr-Tos}\ method can be seen as a generalization of the three operator splitting, as both methods are identical when $n=1$.
A different stochastic variant of the three operator splitting was proposed by \citet{yurtsever2016stochastic} for the slightly more general case in which $f$ is given by an expectation. Like the proposed algorithms, this method only needs to evaluate the gradient of one element in the finite sum per iteration.
Unlike the proposed methods, the variance of its updates does not decrease to zero, and the method requires --as other non-variance-reduced methods-- a decreasing step size. Furthermore, all its updates are dense even in the presence of sparse gradients, so the method performs poorly on large sparse problems.
\citet{balamurugan2016stochastic} proposed a variance-reduced method to solve a general class of saddle-point problems, including
$\min_{\boldsymbol x}\max_{\boldsymbol u} \frac{1}{n}\sum_{i=1}^n f_i({\boldsymbol x}) + M({\boldsymbol x}, {\boldsymbol u})$,
where $M(\cdot)$ is proximal. With $M({\boldsymbol x}, {\boldsymbol u}) = g({\boldsymbol x}) + \langle {\boldsymbol x}, {\boldsymbol u}\rangle - h^*({\boldsymbol u})$, this is equivalent to the problem in \eqref{eq:opt_problem}. However, the method requires $M$ to be strongly concave in ${\boldsymbol u}$, which is equivalent to $h$ being smooth, and so is not applicable to the same class of problems as the proposed method. We note that this requirement is not merely an artifact of the theory, as the algorithm requires knowledge of this smoothness parameter.
Stochastic variance-reduced variants of ADMM have also been proposed recently, see e.g.~\citep{zheng2016stochastic, yu2017fast}. Compared to the proposed methods, none of the existing variants supports sparse updates, and they require tuning more than one step-size parameter.
\section{Analysis}\label{scs:analysis}
In this section we provide a non-asymptotic convergence rate analysis for the proposed method:
\vspace{-0.5em}\begin{itemize}[leftmargin=*]
\item All the proposed variants converge with a step size $1/(3L_f)$, with $L_f \stackrel{\text{def}}{=} L_\psi + d_{\max}L_\omega$, where $d_{\max}$ is the maximum element in the diagonal matrix ${\boldsymbol D}$ ($d_{\max} = 1$ for non-sparse variants).
\item For \textsc{Vr-Tos}\ (Algorithm~\ref{alg:vrtos}) we obtain convergence rates that asymptotically match those of the full-gradient variant, i.e., $\mathcal{O}(1/t)$ convergence rate for convex problems (Theorem~\ref{thm:sublinear_convergence}) and a linear convergence rate under strong convexity of $f$ and smoothness of $h$ (Theorem~\ref{thm:linear_convergence}).
\item For the sparse variant, Sparse \textsc{Vr-Tos}\ (Algorithm~\ref{alg:vrtos_sparse}), we obtain a linear convergence rate under the same assumptions (Theorem~\ref{thm:linear_convergence}). However, for general convex objectives we could only obtain a worse $\mathcal{O}(1/\sqrt{t})$ convergence rate (Theorem~\ref{thm:sublinear_convergence_sparse}).
\end{itemize}
In this section we will use the following {\bfseries extra notation}.
We define the primal ($\mathcal{P}$) and dual ($\mathcal{D}$) functions as:
\begin{align}
&\mathcal{P}({\boldsymbol x}) \stackrel{\text{def}}{=} f({\boldsymbol x}) + g({\boldsymbol x}) + h({\boldsymbol x})~,\nonumber \\
&\mathcal{D}({\boldsymbol u}) \stackrel{\text{def}}{=} (f + g)^*(-{\boldsymbol u}) + h^*({\boldsymbol u})~,\label{eq:primal_dual_loss}
\end{align}
where $^*$ denotes the Fenchel conjugate. We denote by ${\boldsymbol x}^\star$ an arbitrary minimizer of the primal objective and define the ``dual iterate'' ${\boldsymbol u}_t \stackrel{\text{def}}{=} {\boldsymbol D}^{-1}({\boldsymbol y}_t - {\boldsymbol z}_t) / \gamma$ (${\boldsymbol D} = \boldsymbol{I}$ for the dense variants).
We also define the following generalized three operator splitting operator:
\begin{align}\label{eq:operator}
&\boldsymbol{G}_{\gamma}({\boldsymbol y}) \stackrel{\text{def}}{=} {\boldsymbol y} - {\boldsymbol z}_{\boldsymbol y} + \prox^{{\boldsymbol D}^{-1}}_{\gamma g}(2{\boldsymbol z}_{\boldsymbol y} - {\boldsymbol y} - \gamma {\boldsymbol D}\nabla f({\boldsymbol z}_{\boldsymbol y}))\,,\nonumber \\
&~\text{with ${\boldsymbol z}_{\boldsymbol y} = \prox^{{\boldsymbol D}^{-1}}_{\gamma h}({\boldsymbol y})$}~,
\end{align}
and its set of fixed points, which we denote $\Fix(\boldsymbol{G}_\gamma)$.
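For intuition on $\boldsymbol{G}_\gamma$ and its fixed points: with ${\boldsymbol D} = \boldsymbol{I}$ and $h = 0$, the operator reduces to the proximal-gradient map, and minimizers of $f + g$ have zero residual ${\boldsymbol y} - \boldsymbol{G}_\gamma({\boldsymbol y})$. A minimal sketch on a toy lasso-type problem (all problem data hypothetical):

```python
import numpy as np

def soft(x, t):
    # soft-thresholding, i.e. prox of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def G(y, gamma, grad_f, prox_g, prox_h):
    # Three operator splitting map G_gamma with D = I (dense case):
    #   z = prox_{gamma h}(y);  G(y) = y - z + prox_{gamma g}(2z - y - gamma*grad_f(z))
    z = prox_h(y)
    return y - z + prox_g(2 * z - y - gamma * grad_f(z))

# Toy problem: f(x) = 0.5*||x - b||^2, g = lam*||.||_1, h = 0.
b, lam, gamma = np.array([2.0, -0.3, 1.0]), 0.5, 0.2
grad_f = lambda z: z - b
prox_g = lambda v: soft(v, gamma * lam)
prox_h = lambda y: y                      # h = 0  =>  prox is the identity
x_star = soft(b, lam)                     # known minimizer of f + g
residual = np.linalg.norm(x_star - G(x_star, gamma, grad_f, prox_g, prox_h))
```

At the minimizer the residual vanishes, which is why $\min_k \|{\boldsymbol y}_k - \boldsymbol{G}_\gamma({\boldsymbol y}_k)\|$ is a sensible suboptimality measure in the sparse analysis below.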
Another quantity that will appear often in the analysis is $H_0 \stackrel{\text{def}}{=} {1}/{(2 n L_f)}\sum_{i=1}^n \|{\boldsymbol \alpha}_{i, 0} - \nabla\psi_{i}({\boldsymbol x}^\star)\|^2$.
Throughout this section we make the following two technical assumptions:
\vspace{-0.5em}\paragraph{Assumption 1: Regularity.} We assume that each $\psi_i$ is $L_\psi$-smooth, that $\omega$ is $L_{\omega}$-smooth, and that $g$ and $h$ are proper (i.e., have nonempty domain), lower semicontinuous (i.e., their sublevel sets are closed) convex functions. We recall that lower semicontinuity is a weak form of continuity that allows extended-valued functions with domain over a closed set.
{\bfseries Assumption 2: Qualification conditions.} We assume the relative interior of $\dom g$ and $\dom h$ have a non-empty intersection.
This is a very weak and standard assumption that rules out pathological cases such as disjoint domains and allows us to relate the primal and dual optimal values (see e.g.~\citep[Proposition 15.13]{bauschke2017convex} or \citep[Proposition 5.3.8]{bertsekas2015convex}), a property sometimes referred to as strong or total duality.
{\bfseries Sublinear convergence}.
The following theorem shows a $\mathcal{O}(1/t)$ convergence rate for \textsc{Vr-Tos}\ on arbitrary convex objectives.
One of the issues when analyzing the convergence of the three operator splitting is that the objective function might be $+\infty$, for example when both proximal terms are an indicator function.
Following \citet{chambolle2016ergodic,pedregosa2018adaptive}, we will state the convergence rate for general functions in terms of the \emph{saddle point suboptimality}, defined as
\begin{align}
&\mathcal{L}(\widetilde{\boldsymbol x}, {\boldsymbol u}) - \mathcal{L}({\boldsymbol x}, \widetilde{\boldsymbol u})\,,~\text{ with }~\nonumber \\
&\mathcal{L}({\boldsymbol x}, {\boldsymbol u}) \stackrel{\text{def}}{=} f({\boldsymbol x}) + g({\boldsymbol x})+ \langle {\boldsymbol x}, {\boldsymbol u}\rangle - h^*({\boldsymbol u})~,
\end{align}
where $\mathcal{L}$ is the Lagrangian associated with $\mathcal{P}$ and $\mathcal{D}$. Following \citet{davis2017three}, we also state convergence rates in terms of the objective suboptimality under a Lipschitz assumption on $h$ in \eqref{eq:sub_copnvergence_obj}.
\begin{theorem}\label{thm:sublinear_convergence}
Let $\overline{{\boldsymbol x}}_t$ denote the averaged (also known as ergodic) iterate, i.e., $\overline{{\boldsymbol x}}_t = {(\sum_{k=0}^t{\boldsymbol x}_k )/(t+1)}$ and $\overline{{\boldsymbol u}}_t = (\sum_{k=0}^t{\boldsymbol u}_k )/(t+1)$. Then the \textsc{Vr-Tos}\ method (Algorithm~\ref{alg:vrtos}) converges for any step size $\gamma \leq 1 / (3L_f)$, and for $\gamma = 1/ (3L_f)$ we have the following bound for all $({\boldsymbol x}, {\boldsymbol u}) \in \dom g \times \dom h^*$:
\begin{equation}\label{eq:sub_copnvergence_saddle}
{\mathbb E}\left[\mathcal{L}(\overline{\boldsymbol x}_t, {\boldsymbol u}) - \mathcal{L}({\boldsymbol x}, \overline{\boldsymbol u}_t)\right] \leq \frac{10 n }{q(t+1)}C_0,
\end{equation}
with ${\boldsymbol y} = {\boldsymbol x} + \gamma {\boldsymbol u}$, ${\boldsymbol y}^\star \in \Fix(\boldsymbol{G}_\gamma)$, and $C_0 = \left[\frac{3 L_{f}q}{20 n}\|{\boldsymbol y}_0 - {\boldsymbol y}\|^2 + \frac{3 L_{f}q}{2n}\|{\boldsymbol y}_0 - {\boldsymbol y}^\star\|^2 + H_0 \right]~$, where we recall $H_0 = {1}/{(2 n L_f)}\sum_{i=1}^n \|{\boldsymbol \alpha}_{i, 0} - \nabla\psi_{i}({\boldsymbol x}^\star)\|^2$.
Furthermore, if $h$ is $\beta_h$-Lipschitz we have the following rate in terms of the primal objective:
\begin{equation}\label{eq:sub_copnvergence_obj}
\mathcal{P}(\overline{{\boldsymbol x}}_t) - \mathcal{P}({\boldsymbol x}^\star) \leq \frac{10 n }{q(t+1)}\widetilde{C}_0 ~,
\end{equation}
with $\widetilde{C}_0 = \frac{6 L_{f}q}{20 n}\|{\boldsymbol z}_0 - {\boldsymbol x}^\star\|^2 + \frac{3 L_{f}q}{2n}\|{\boldsymbol y}_0 - {\boldsymbol y}^\star\|^2 + \frac{q}{15 n L_f}\beta_h^2 + H_0$.
\end{theorem}
The previous theorem gives a $\mathcal{O}(1/t)$ convergence rate in terms of the saddle point suboptimality for arbitrary convex functions, and a $\mathcal{O}(1/t)$ rate in function suboptimality under a Lipschitz assumption on $h$, matching the strongest bounds for \textsc{Saga}~\citep{defazio2014saga}.
For the sparse variant, however, we have only been able to prove a slower $\mathcal{O}(1/\sqrt{t})$ rate on the operator residual, despite the fact that in practice the algorithm exhibits much faster empirical convergence (see \S\ref{scs:experiments}). \ref{apx:fixed_point_characterization} contains a characterization of the fixed points of this operator that justifies why the residual is a meaningful suboptimality criterion for \eqref{eq:opt_problem}. Although there is no direct correspondence between rates on the gradient and rates on objective values, the corresponding lower bounds are asymptotically equivalent~\citep{nesterov2012make}.
\newcommand{\TheoremSubSparse}{
Sparse \textsc{Vr-Tos}\ (Algorithm~\ref{alg:vrtos_sparse}) converges for every step size ${\gamma \leq 1/(3L_f)}$. In particular, for $\gamma = {1}/{(3L_f)}$ and ${\boldsymbol y}_t$ obtained after $t \geq 1$ updates we have the bound
\begin{equation}
\min_{k=0, \ldots, t}\left\{ {\mathbb E} \|{\boldsymbol y}_k - \boldsymbol{G}_\gamma({\boldsymbol y}_k)\| \right\} \leq \sqrt{ \frac{C_0}{L_f q (t+1)}} = \mathcal{O}\left(\frac{1}{\sqrt{t}}\right),
\end{equation}
with $C_0 = 5 d_{\max}n \left[({2L_f q}/{n})\|{\boldsymbol y}_0 - {\boldsymbol y}^\star\|^2 + H_0\right]$.
}
\begin{theorem}\label{thm:sublinear_convergence_sparse}
\TheoremSubSparse
\end{theorem}
{\bfseries Linear convergence.}
The three operator splitting has been shown to have a linear convergence rate under the assumption of strong convexity of the smooth term and smoothness of one of the proximal terms~\citep[\S4.4]{davis2015three}. Although this last condition is rarely satisfied in practice --since the method's main applications involve non-smooth proximal terms-- it is instructive to see that the proposed method, despite its reduced cost per iteration, also enjoys a linear convergence rate under the same assumptions.
\newcommand{\TheoremLinear}{
Let $\psi_i$ be $\mu_\psi$-strongly convex and $\omega$ be $\mu_{\omega}$-strongly convex, where $\mu_\psi + \mu_\omega > 0$. Furthermore, let $h$ be $L_h$-smooth.
Then for any step size $\gamma \leq {1}/{(3 L_f)}$, all the proposed methods converge geometrically in expectation. For $\gamma = {1}/{(3 L_f)}$, we have the following bound for Algorithm \ref{alg:vrtos} ($d_{\max{}}=1$ in this case) and Algorithm~\ref{alg:vrtos_sparse}:
\begin{equation}
{\mathbb E} \|{\boldsymbol z}_{t+1} - {\boldsymbol x}^\star\|^2 \leq \left(1 - \min \Big \{ \frac{q}{4n}, \frac{1}{3 d^3_{\max}\delta^2 \kappa} \Big \}\right)^t D_0\quad,
\end{equation}
with $D_0 \stackrel{\text{def}}{=} {d_{\max}}\left[\frac{q}{2\gamma(1 - \gamma\mu)n} \|{\boldsymbol y}_0 - {\boldsymbol y}^\star\|^2 + H_0\right]$, $\delta = (1 + L_h/(3L_f))$, $\kappa = L_f/\mu$ and ${\boldsymbol y}^\star \in \Fix(\boldsymbol{G}_\gamma)$.
}
\begin{theorem}[Linear convergence]\label{thm:linear_convergence}
\TheoremLinear
\end{theorem}
{ \begin{table*}[t]
\centering
\footnotesize
\hspace*{-1.4cm}
\setlength\tabcolsep{5pt}\begin{tabular}{c c | c c c c |}
\cline{2-6}
\multicolumn{1}{c|}{} & {Method} &
{step size} & Proximal oracle & Convergence rate & {Extra assumptions}\\
\cline{2-6}
\multicolumn{1}{c|}{\multirow{2}{*}{\begin{sideways}\textbf{\sffamily Geometric}\end{sideways}}}
&
{\cellcolor{Gray} \textsc{Saga} } &
\cellcolor{Gray} &
\cellcolor{Gray} &
\cellcolor{Gray} &
\cellcolor{Gray} \\
\multicolumn{1}{c|}{\multirow{4}{*}}&\scriptsize\citep{defazio2014saga}
\cellcolor{Gray}&
\multirow{-2}{*}{\Large\nicefrac{$1$\,}{\,$3L_f$}} \cellcolor{Gray} &
\multirow{-2}{*}{$\mathbf{prox}_{\gamma(g + h)}$}
\cellcolor{Gray}&
\cellcolor{Gray} \multirow{-2}{*}{$\displaystyle \Big( 1 - \min\big\{\textstyle\frac{1}{4 n},\frac{1}{3 \kappa} \big\} \Big)^t C_0$} &
\cellcolor{Gray} \multirow{-2}{*}{{Each $\psi_i$ is $\mu$-cvx}} \\
\multicolumn{1}{c|}{}&{ Prox\textsc{Svrg} } &
&
&
&
\multirow{2}{*}{$f$ is $\mu$-cvx}\\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\scriptsize\citep{xiao2014proximal}}&
\multirow{-2}{*}{\Large\nicefrac{$1$\,}{\,$10L_f$}} &
\multirow{-2}{*}{$\mathbf{prox}_{\gamma(g + h)}$}
&
\multirow{-2}{*}{$\Big(\frac{1}{\kappa 0.6 m } + \frac{2}{3}\Big)^t C_0$} &\\
\multicolumn{1}{c|}{} &
{\cellcolor{Gray} \textsc{Vr-Tos} } &
\cellcolor{Gray} &
\cellcolor{Gray} &
\cellcolor{Gray} &
\cellcolor{Gray} \multirow{-1}{*}{{Each $\psi_i$ is $\mu$-cvx}}\\
\multicolumn{1}{c|}{\multirow{4}{*}}&{\scriptsize(this work)}
\cellcolor{Gray}&
\multirow{-2}{*}{\Large\nicefrac{$1$\,}{\,$3L_f$}} \cellcolor{Gray} &
\multirow{-2}{*}{$\mathbf{prox}_{\gamma g}~,~\mathbf{prox}_{\gamma h}$}
\cellcolor{Gray}&
\cellcolor{Gray} \multirow{-2}{*}{$\displaystyle \Big( 1 - \min\big\{\textstyle\frac{q}{4 n},\frac{1}{3 d_{\max{}}^2\delta^2 \kappa} \big\} \Big)^t C_0$} &
\cellcolor{Gray} \multirow{-1}{*}{{and $h$ is $L_h$-smooth}} \\
\cline{2-6}\addlinespace[0.2cm]
\cmidrule{2-6}\vspace{-3.2ex}\\
\cline{2-6}
\multicolumn{1}{c|}{\multirow{4}{*}}& {\cellcolor{Gray}\multirow{-1}{*}{ \textsc{Saga}} } &
\cellcolor{Gray} &
\cellcolor{Gray} &
\cellcolor{Gray} &
\cellcolor{Gray} \\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\scriptsize\citep{defazio2014saga}}\cellcolor{Gray}&
\multirow{-2}{*}{\Large\nicefrac{$1$\,}{\,$3L_f$}} \cellcolor{Gray} &
\multirow{-2}{*}{$\mathbf{prox}_{\gamma (g + h)}$}
\cellcolor{Gray}& \multirow{-2}{*}{$\mathcal{O}(1/t)$}
\cellcolor{Gray} \multirow{-2}{*}{} &
\cellcolor{Gray} \multirow{-2}{*}{None}\\
\multicolumn{1}{c|}{\multirow{-2}{*}{\begin{sideways}\textbf{\sffamily Sublinear}\end{sideways}}}
& {\multirow{-1}{*}{ Stochastic \textsc{Tos}} } &
&
&
&
\multirow{-1}{*}{$f$ is $\mu$-cvx +} \\
\multicolumn{1}{c|}{\multirow{4}{*}}&
\multirow{-1}{*}{\scriptsize \citep{yurtsever2016stochastic}}&
\multirow{-2}{*}{$\mathcal{O}${\Large(\nicefrac{1\,}{\,$t$})}} &
\multirow{-2}{*}{$\mathbf{prox}_{\gamma g}~,~\mathbf{prox}_{\gamma h}$}
&
\multirow{-2}{*}{$\mathcal{O}(1/t)$} &
\multirow{-1}{*}{bound on gradients} \\
\multicolumn{1}{c|}{\multirow{4}{*}} &
\cellcolor{Gray}\textsc{Vr-Tos} &
\cellcolor{Gray}&
\cellcolor{Gray}&
\cellcolor{Gray}\multirow{3}{*}{$\displaystyle $} & \cellcolor{Gray}\\
\multicolumn{1}{c|}{\multirow{4}{*}}& \cellcolor{Gray}{\scriptsize (this work, dense/sparse variant)}
&\cellcolor{Gray} \multirow{-2}{*}{\Large\nicefrac{$1$\,}{\,$3L_f$}}
&\cellcolor{Gray} \multirow{-2}{*}{$\mathbf{prox}_{\gamma g}~,~\mathbf{prox}_{\gamma h}$}
&\cellcolor{Gray} \multirow{-2}{*}{$\mathcal{O}(1/{t})$ / $\mathcal{O}(1/\sqrt{t})$}
&\cellcolor{Gray} \multirow{-2}{*}{None} \\
\cline{2-6}
\end{tabular}
\caption{{\bfseries Assumptions and properties of related incremental methods.} In every case, we take the step size recommended by the theory, where we assume $\omega=0$ to make them comparable. Proximal oracle is the proximal operators that are needed by the algorithm. Extra assumptions refer to those other than Assumptions 1 and 2. The linear rates use the quantities $\delta = (1 + \gamma L_h)$, $\kappa = L_f/\mu$. For Prox\textsc{Svrg}, $m$ denotes the epoch size and the convergence rate is relative to the number of epochs and not iterations like the rest.
}
\label{table:convergence_rates}
\end{table*}
}
\subsection{Discussion}\label{scs:discussion}
{\bfseries Comparison of convergence rates}. We summarize the obtained convergence rates for the proposed methods and compare them against the best known rates for related stochastic methods in Table~\ref{table:convergence_rates}.
In the linearly-convergent regime, we obtain rates similar to those of \textsc{Saga}, but with the rate factor multiplied by ${1}/{(\delta^2 d^3_{\max})}$, a quantity that depends on the smoothness of $h$ and on the sparsity of the gradients.
{\bfseries An improved Prox\textsc{Svrg}\ variant}. The analysis of Prox\textsc{Svrg}~\citep{xiao2014proximal} requires the step size to satisfy an implicit equation that depends, among other things, on the strong convexity parameter. For typical parameter choices this gives $1/(10L_f)$~\citep[Theorem 1]{xiao2014proximal}. In contrast, Sparse \textsc{Vr-Tos}\ with \textsc{Svrg}-like updates and $h=0$ yields a variant of Prox\textsc{Svrg}\ with more favorable properties. First, none of its parameters depends on the strong convexity constant, which is most often unknown (while still achieving a linear convergence rate, since $L_h=0$ in this case). Second, it admits the much larger step size $1/(3L_f)$, which is, to the best of our knowledge, the largest step size of any \textsc{Svrg}\ variant. Third, it can leverage sparsity in the input data through sparse updates.
{\bfseries Linear convergence without smoothness of the proximal term}. Theorem \ref{thm:linear_convergence} requires smoothness of one of the proximal terms to guarantee linear convergence. Despite this, linear convergence is observed in practice without this assumption (Figure~\ref{fig:bench_fused}). This has also been observed in the case of the original (non-variance reduced) three operator splitting~\citep{davis2017three, pedregosa2018adaptive}, although an explanation for this is still an open problem.
Furthermore, the lack of linear convergence when both proximal terms are non-smooth does not seem to be a limitation of the proof: a counterexample was provided in \citep[Appendix D.6]{davis2015three}, where the authors constructed a strongly monotone operator that converges only sublinearly.
{\bfseries Step size adaptivity to linear convergence}. A practical consequence of the above theorems is that with the same step size $\gamma={1}/{(3 L_f)}$ we obtain a sublinear convergence rate by Theorem \ref{thm:sublinear_convergence} and a linear rate (under additional assumptions) by Theorem \ref{thm:linear_convergence}. That is, one can use the ``universal'' step size ${1}/{(3L_f)}$ and automatically obtain linear convergence whenever the assumptions of Theorem \ref{thm:linear_convergence} are verified.
{\bfseries Limitations}. The following are some scenarios in which the proposed method is expected to perform poorly. Its computational and storage cost scales linearly with the number of proximal terms, so it cannot cope with objectives containing many non-smooth terms, such as empirical risk minimization with the hinge loss or a group lasso with a large number of overlapping groups (for instance $> 100$).
Also, some penalties cannot be reduced to a sum of proximal terms, such as the nuclear norm. Algorithms based on Frank-Wolfe~\citep{jaggi2013revisiting} or on approximate proximal operators~\citep{schmidt2011convergence} might be better suited in such regimes.
\section{Experiments}\label{scs:experiments}
\begin{figure*}[t]
\centering
\begin{tabular}{lrrrr}
\toprule
{\bfseries\sffamily Dataset} & \multicolumn{1}{c}{\#\tablefont{samples}} & \multicolumn{1}{c}{\#\tablefont{dimensions}
} & {\tablefont{density}} & \multicolumn{1}{c}{$L_f/\mu$}\\
\midrule
{\bfseries\sffamily RCV1 (full)}~\citep{lewis2004rcv1} & \hfill 697,641 & \hfill 47,236 & \hfill $1.5 \times 10^{-3}$ & \hfill 2.50 $\times 10^4$ \\
{\bfseries\sffamily URL}~\citep{ma2009identifying} & \hfill 2,396,130 & \hfill 3,231,961 & \hfill $3.5 \times 10^{-5}$ & \hfill 1.28 $\times 10^7$\\
{\bfseries\sffamily KDD10}~\citep{yu2010feature} & \hfill 19,264,097 & \hfill 29,890,095 & \hfill 9.8 $\times 10^{-7}$ & \hfill $5.2\times10^8$ \\
{\bfseries\sffamily Criteo}~\citep{juan2016field} & \hfill 45,840,617 & \hfill 1,000,000 & \hfill $3.8\times 10^{-5}$ & \hfill $1.1\times 10^7$\\
\bottomrule
\end{tabular}
\vspace*{1em}
\centering \includegraphics[width=0.92\linewidth]{main_fig.pdf}
\caption{{\bfseries Top}: Description of considered datasets. {\bfseries Bottom:} Suboptimality vs time of different algorithms on a logistic regression with overlapping group lasso penalty problem. }
\label{fig:bench_fused}
\end{figure*}
Although the proposed methods can be applied more broadly, we consider for the experiments a logistic regression problem with squared $\ell_2$ regularization and an overlapping group lasso penalty~\citep{jacob2009group}. Following \citet{jacob2009group}, we choose
groups of 10 variables with 2 variables of overlap between two successive groups: $\{\{1, \ldots, 10\}, \{8, \ldots, 18\}, \{16, \ldots, 26\}, \ldots\}$.
The amount of group regularization was chosen such that the solution has roughly $10\%$ non-zero coefficients, and the amount of $\ell_2$ regularization was fixed to ${1}/{n}$. We consider the following methods:
\begin{itemize}[leftmargin=*]
\item The proposed method Sparse \textsc{Vr-Tos}\ (Algorithm~\ref{alg:vrtos_sparse}), where the overlapping group lasso penalty is split as a sum of two non-overlapping group lasso penalties, for which the proximal operator is available in closed form. We used the formulation with 3 proximal terms of \S\ref{scs:extension_k_terms} to better leverage sparsity in the dataset and consider \textsc{Saga}\ and \textsc{Svrg}-like updates, denoted \textsc{Vr-Tos}\ (\textsc{Saga}\ variant) and \textsc{Vr-Tos}\ (\textsc{Svrg}\ variant) respectively. This implementation is publicly available in the C-OPT package.\footnote{\url{http://openopt.github.io/copt/}}
It is worth noting that while the original penalty is \emph{not} block separable, each of the two non-overlapping group lasso penalties in the splitting \emph{is} block separable. This allows a much more efficient use of sparsity than is possible with methods like \textsc{Saga}\ and Prox\textsc{Svrg}.
\item The three operator splitting (denoted \textsc{Tos}), in its recently proposed variant with adaptive step size~\citep{pedregosa2018adaptive}.
\item The stochastic three operator splitting of \citet{yurtsever2016stochastic} with the same splitting as \textsc{Vr-Tos}, denoted \textsc{Stos}.
\item \textsc{Saga}\ and Prox\textsc{Svrg}, where the proximal operator is evaluated approximately using 10 iterations of the Douglas-Rachford method.
\end{itemize}
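The splitting used above for Sparse \textsc{Vr-Tos}\ can be sketched as follows: alternating groups in the overlapping chain are disjoint, so each half admits the closed-form group soft-thresholding prox. This is a hedged illustration (0-indexed, with chain construction and threshold values chosen for the example, not the paper's code):

```python
import numpy as np

def make_groups(p, size=10, overlap=2):
    # Chain of groups of `size` variables overlapping by `overlap`:
    # {0..9}, {8..17}, {16..25}, ...  (0-indexed)
    step = size - overlap
    return [list(range(s, min(s + size, p))) for s in range(0, p - overlap, step)]

def prox_group_lasso(x, groups, t):
    # Group soft-thresholding: closed-form prox of t * sum_g ||x_g||_2,
    # valid only for NON-overlapping groups.
    z = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        z[g] = 0.0 if nrm <= t else (1 - t / nrm) * x[g]
    return z

p = 26
groups = make_groups(p)
# Alternating groups do not overlap, so each half has a closed-form prox.
even, odd = groups[0::2], groups[1::2]
```

Splitting the overlapping penalty into these two non-overlapping halves gives two proximal terms, matching the formulation with multiple proximal terms of \S\ref{scs:extension_k_terms}.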
The above methods were compared on 4 large-scale datasets described in the table of Figure~\ref{fig:bench_fused}. Further details and implementation aspects are discussed in \ref{apx:implementation}.
The best performing algorithms overall are the proposed \textsc{Vr-Tos}\ variants, which are over an order of magnitude faster than the second best method, the adaptive three operator splitting. The stochastic three operator splitting, unable to take advantage of the sparsity in the gradients, performs poorly in this benchmark, appearing as a straight line. \textsc{Saga}\ and Prox\textsc{Svrg}\ were the slowest, since they need to compute a costly approximate proximal operator at each iteration and are unable to leverage the sparsity of the dataset due to the non-block-separability of the non-smooth term.
It is worth noting from Figure~\ref{fig:bench_fused} that the two variants of Sparse \textsc{Vr-Tos}\ exhibit empirical linear convergence, despite the fact that the theory only predicts a much slower $\mathcal{O}(1/\sqrt{t})$ convergence rate in this regime (Theorem~\ref{thm:sublinear_convergence_sparse}).
We provide extra experiments in \ref{apx:overlapping_benchmarks}.
\section{Future work}
This work can be extended in several ways. As highlighted in \S\ref{scs:discussion}, a theoretical explanation for the empirical linear convergence without smoothess of any proximal term, even for the full gradient algorithm, is lacking. We conjecture \emph{partly smooth} is a sufficient condition on the penalties to ensure local linear convergence, as recently proven for related methods~\citep{liang2018local}.
Second, we conjecture that the convergence rate of the sparse variant can be improved to $\mathcal{O}(1/t)$.
A third direction for future work would be the development of an extension that allows for a linear operator inside one of the proximal terms, as in \citep{condat2013direct,zhao2018stochastic,yan2018new}.
\section*{Acknowledgements}
The authors warmly thank
Vincent Roulet, Vlad Niculae, R\'emi Leblond and Federico Vaggi for their feedback on the manuscript, as well as Adrien Taylor, Alexandre D'Aspremont, Gabriel Peyr\'e, Guillaume Obozinski, P. Balamurugan, Francis Bach and Marwa El Halabi for fruitful discussions.
This work has been done while FP was under funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement 748900. KF is funded through the project OATMIL ANR-17-CE23-0012 of the French National Research Agency (ANR).
Computing time was donated by Amazon through the program ``AWS Cloud Credits for Research''.
\bibliographystyle{apalike}
\section{Introduction}
It was recently suggested that hybrid HeCO~WDs might play a key role in the production of thermonuclear supernovae (SNe) \citep{PeretsZenati2019,Zenati2019}. Their disruption by a more massive CO~WD and the subsequent accretion of the debris onto the CO~WD was shown in 2D simulations to trigger a thermonuclear explosion of the CO~WD, giving rise to a type Ia supernova.
However, due to the limitations of 2D simulations, these models did not simulate the early phases of accretion from the HeCO~WD onto the CO~WD and the later disruption. In particular, accretion of helium from the hybrid WD onto the CO~WD could potentially give rise to a dynamical detonation of the accreted helium layer (or a pre-existing helium layer on the CO~WD if such exists). In such a case it is possible that the helium layer detonation may trigger a secondary carbon detonation of the CO~WD leading to its explosion, even before the companion hybrid WD is disrupted. This latter possibility follows ideas of double-detonation models dating back to the 1980s \citep{Ibe85,Iben+87}, and their more recent incarnation as dynamical double detonations \citep[see, e.g.,][]{Guillochon2010,Pakmor2013,Sato+15,Shen2018b}.
In order to understand the outcomes of a CO-HeCO double degenerate binary system, and to explore whether it gives rise to the disruption of the HeCO~WD, we employ dynamical 3D simulations that begin during the final phase of the inspiral of the binary CO-HeCO~WD system.
We specifically investigate the fate of a double degenerate binary system consisting of a primary pure CO~WD with a mass of $0.8\,\mathrm{M_\odot}$ and a secondary hybrid HeCO~WD with a total mass of $0.69\,\mathrm{M_\odot}$, made of a $0.59\,\mathrm{M_\odot}$ CO core and a massive $0.1\,\mathrm{M_\odot}$ helium shell. Note that both the mass ratio and the highly He-enriched HeCO~WD {\it differ} from those studied in 2D accretion disk simulations \citep{PeretsZenati2019}. In particular, the much higher He shell mass for this hybrid is attained through somewhat different evolution than that described in \citet{Zenati2019}, as we further discuss in Section~2 below.
We obtain the density profile and composition of both WDs using the 1D stellar evolution code \textsc{mesa} \citep{Paxton2011,Paxton2019} similarly to \citet{Zenati2019}. We follow the evolution of the hybrid WD starting from the main sequence as a $4.1\mathrm{M_\odot}$ star in a binary system with a separation of $a=4.18\,\mathrm{AU}$ with a more massive $7.5\,\mathrm{M_\odot}$ companion star that will become the primary WD, and adopt solar metallicity $\rm Z=Z_{\odot}=0.02$. We discuss the formation of the double degenerate binary system in detail in Sec.~\ref{sec:binary}~and~\ref{sec:popsynth}.
Starting from the 1D profiles we then generate 3D representations of these WDs in the moving mesh hydrodynamics code \textsc{arepo} \citep{Arepo,Pakmor2016} that includes a fully coupled nuclear reaction network. We initialise the binary system with an initial separation of $4\times 10^4\,\mathrm{km}$. We follow the inspiral of the binary system as the secondary hybrid WD fills its Roche lobe and transfers material to the primary WD. Eventually a detonation ignites on the surface of the primary WD in its accreted helium shell. We describe the inspiral phase and ignition of the helium detonation in Sec.~\ref{sec:inspiral}.
The helium detonation sweeps around the primary CO~WD and sends a shock wave into its center. As there is not a lot of helium around the primary WD the detonation is weak. Its shock wave still converges at the edge of the CO core of the primary WD but fails to detonate carbon. In contrast, at the same time the helium detonation propagates upwards through the dense accretion stream towards the secondary hybrid WD. When it arrives there the helium detonation also travels around the hybrid WD, burning its massive helium shell. Here the detonation is strong and fast and its shock wave converges close to the center of the hybrid WD where it manages to ignite the CO core. The emerging carbon detonation then burns and unbinds the whole secondary WD. We describe this phase and characterise the ejecta of the secondary hybrid WD and the properties of the surviving primary CO~WD in Sec.~\ref{sec:explosion}.
We then employ the radiation transfer code SuperNu \citep{Wollaeger2013,Wollaeger2014} to compute synthetic lightcurves for the explosion in Sec.~\ref{sec:observables}. We use population synthesis to estimate the frequency of similar events and discuss the observability of the explosion and the surviving WDs in Sec.~\ref{sec:popsynth}.
We finally summarise our results and provide an outlook on the most important questions that arise from our work in Sec.~\ref{sec:conclusion}.
\section{Formation of the binary system}
\label{sec:binary}
At the current age of the Universe single star evolution has produced CO~WDs in the mass range $\sim 0.50$--$1.05\,\mathrm{M_\odot}$ and oxygen-neon (ONe) WDs in the mass range $\sim 1.05$--$1.38\,\mathrm{M_\odot}$. However, binary evolution makes this picture much more complex and can give rise to WDs with very different properties, including very low mass (VLM) WDs \citep{Ist+16,Zha+18}. Moreover, there are two ways in which binary systems can produce hybrid WDs with a CO core and an outer helium shell: either through a phase of mass transfer via Roche-lobe overflow (RLOF) or through common envelope evolution (CEE) \citep[see e.g.][]{Iva13}.
During this binary interaction the hydrogen-rich envelope of the star that will become the hybrid WD is stripped following the formation of a helium core. The later evolution of the stripped star and its helium core is then significantly altered compared to the uninterrupted evolution of an identical but non-interacting (single) star. After most of the red giant envelope is removed the outer hydrogen shell burning is quenched, but the helium core keeps growing in mass \citep{Moroni09}, and the star begins to contract. If the helium core is sufficiently massive, the contraction will eventually trigger helium ignition \citep{Ibe85,Zenati2019} and the formation of a CO core. In this case the helium to CO abundance ratio of the final WD will be determined by the specific detailed evolution of helium burning as well as mass-loss through winds from the envelope \citep{Tut+92,Moroni09,Zenati2019}.
As shown in \citet{Zenati2019} such interacting binary systems can produce hybrid WDs in the mass range of $0.38$--$0.72\,\mathrm{M_\odot}$ with a helium envelope containing $\sim 2$--$20\%$ of the total mass of the WD. Recent observational evidence for the existence of such hybrid WDs has been mounting. The ZTF survey \citep{Kupfer20} reported the first short-period binary in which a hot subdwarf star (sdOB) has filled its Roche lobe and has started mass transfer to its companion. The binary system has a period of $P = 39.3401\,\mathrm{min}$, making it the most compact hot subdwarf binary currently known. \citet{Kupfer20} estimated that the hot subdwarf will become a hybrid WD (with a helium layer of $\sim 0.1\,\mathrm{M_\odot}$) and merge with its CO~WD companion in about $17$~Myr. In this case it may end in a thermonuclear explosion or form an R~CrB star. In addition \citet{Beuermann20} and \citet{Steven20} found eclipsing binaries that may have a hybrid WD as their primary star.
\subsection{A highly He-enriched hybrid WD}
In this work we explore the fate of a close binary system consisting of a $0.8\,\mathrm{M_\odot}$ CO~WD and a $0.69\,\mathrm{M_\odot}$ hybrid WD that has a helium shell with a mass of $0.1\,\mathrm{M_\odot}$ or $\sim 14\%$ of its mass.
Although this hybrid WD is produced through the same stellar evolution stages as the hybrid WD of the same total mass in \citet{Zenati2019}, some different choices of parameters give rise to a more He-enriched WD than those described in \citet{Zenati2019}. We follow the stellar evolutionary tracks of both binary components from the pre-main sequence stage to the final binary system consisting of two WDs. We stop the evolution once the star becomes a fully degenerate WD. This condition effectively translates to a WD luminosity and effective temperature of $L \leq 1.12\,L_\mathrm{\odot}$ and $T_\mathrm{eff} \leq 4.92\,T_\mathrm{\odot,eff}$, respectively.
The evolution of the binary system depends strongly on the initial conditions. Based on our population modelling, we describe the typical binary evolution in Section\,\ref{sec:popsynth}. It is important to note that the initial conditions of the binary system studied here are \emph{different} from those described in \citet{Zenati2019} for the formation of the $0.69\,\mathrm{M_\odot}$ HeCO~WD. Here we begin with an initial mass ratio $q= M_\mathrm{donor}/M_\mathrm{companion} \sim 0.58$ and an initial orbital period $P=4.37\,\mathrm{d}$. We also adopt a slightly different overshooting parameter ($0.0012$) and mixing-length parameter ($\alpha=1.3$).
We explored a range of hybrid WDs to choose these parameters and found that they give rise to very similar results to those described in \citet{Zenati2019} for WDs less massive than $0.63\,\mathrm{M_\odot}$. However, when considering these parameters for the formation of more massive hybrids, they give rise to the formation of an even more He-enriched HeCO~WD than discussed in \citet{Zenati2019}, such as the one considered here. These results will be discussed in depth in a dedicated paper.
\begin{figure}
\includegraphics[width=0.97\linewidth]{evolution_orbit.pdf}
\includegraphics[width=0.97\linewidth]{evolution_enuc.pdf}
\caption{The top panel shows the orbital evolution of the two WDs in the binary system in the plane of rotation. Dashed lines show their path while the additional force mimicking gravitational wave emission is active. Solid lines show the evolution of the primary WD (red) and the secondary WD (blue). The latter ends when the secondary WD is disrupted. The yellow star denotes the moment when the helium detonation first ignites on the surface of the primary WD. The red star marks the time when the carbon detonation ignites in the center of the secondary WD, after which it is quickly disrupted. The blue dotted line shows the movement of the center of mass of the ejecta of the secondary WD. The lower panel shows the cumulative total nuclear energy released. Stars again mark the ignition of the helium detonation (yellow) and the carbon detonation (red).}
\label{fig:evolution}
\end{figure}
\begin{figure*}
\includegraphics[width=0.81\textwidth]{det_helium.pdf}
\caption{Columns show slices through the midplane of the binary for the snapshots just before, at, and directly after the formation of the first detonation, the helium detonation on the surface of the primary CO~WD. The snapshots are separated by $0.1\,\mathrm{s}$. The top row shows a density slice of the whole binary; the bottom rows show slices zooming in on the position where the detonation forms, showing density, temperature, mass fraction of $^4\mathrm{He}$, and mean atomic weight, respectively.}
\label{fig:he_det}
\end{figure*}
\section{Inspiral and ignition}
\label{sec:inspiral}
Once a binary system with two WD components has formed, it slowly loses angular momentum via gravitational wave emission until eventually the two WDs get sufficiently close for the secondary (hybrid) WD to fill its Roche lobe. It then starts transferring mass onto the primary CO~WD. For a long time mass transfer is very slow and irrelevant for the properties of the system, while the orbit continues to shrink from further emission of gravitational waves, and the mass transfer rate increases. Eventually the binary becomes sufficiently close that the mass transfer rate is large enough to transfer substantial amounts of mass on a timescale of several orbits.
At this point, which serves as the initial starting point of our 3D simulation, the density of the accretion stream continues to increase, and unless interrupted by other processes this would eventually lead to the disruption of the secondary after a few tens of orbits. However, we find that in the configuration explored here, a thermonuclear detonation occurs beforehand and gives rise to very different outcomes, as we now describe.
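The statement that the gravitational-wave-driven phase is far too slow to follow directly can be quantified with the standard Peters (1964) merger timescale for a circular binary. The back-of-the-envelope script below (not part of the simulation code) uses the component masses and the initial separation of $4\times 10^4\,\mathrm{km}$ quoted above:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def peters_merger_time(m1, m2, a):
    """Gravitational-wave-driven merger time of a circular binary
    (Peters 1964), with masses in kg and separation in metres."""
    return 5.0 / 256.0 * C**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))

# 0.8 Msun CO WD + 0.69 Msun hybrid HeCO WD at a = 4e4 km
t_merge = peters_merger_time(0.80 * M_SUN, 0.69 * M_SUN, 4.0e7)
print(f"merger in ~{t_merge / 3.156e7:.0f} yr")  # about two thousand years
```

With an orbital period of order $10^2\,\mathrm{s}$, this corresponds to hundreds of millions of orbits, far beyond the reach of a 3D hydrodynamics simulation and hence the motivation for the accelerated inspiral term introduced below.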
\begin{figure*}
\includegraphics[width=\textwidth]{shock_primary.pdf}
\caption{Propagation of the shock originating from the helium detonation as it moves around the primary WD. The panels show the time evolution from the time of the ignition of the helium detonation (top left panel) to the time when the shock converges in the CO core of the primary WD (bottom right panel). The black dotted and gray solid contours indicate densities of $2\times 10^6\,\mathrm{g\,cm^{-3}}$ and $10^7\,\mathrm{g\,cm^{-3}}$, respectively.}
\label{fig:shock_pri}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{shock_secondary.pdf}
\caption{Propagation of the shock originating from the helium detonation as it moves around the secondary WD. The panels show the time evolution from the time when the helium detonation arrives at the secondary WD and starts to propagate around it (top left panel) to the time when the shock converges in the carbon-oxygen core of the secondary WD (bottom right panel). The black dotted and gray solid contours indicate densities of $2\times 10^6\,\mathrm{g\,cm^{-3}}$ and $10^7\,\mathrm{g\,cm^{-3}}$, respectively.}
\label{fig:shock_sec}
\end{figure*}
\subsection{Arepo}
To model the dynamical evolution of the binary system we use the moving-mesh code \textsc{arepo} \citep{Arepo,Pakmor2016,Weinberger2020}. It discretises the volume into cells and solves the equations of hydrodynamics using a second order finite volume scheme \citep{Pakmor2016}. Its Voronoi mesh is reconstructed in every timestep from a set of mesh-generating points, each of which gives rise to a single cell. Fluxes over interfaces are computed using the HLLC Riemann solver in the moving frame of the interface \citep{Pakmor2011b}. We use \textsc{arepo} in its pseudo-Lagrangian mode, i.e.\ the mesh-generating points follow the gas velocity with small corrections to keep the mesh regular. On top of the movement of the mesh-generating points we employ explicit refinement and de-refinement for cells whose mass deviates by more than a factor of two from the desired target gas mass. For the simulation presented here, our target mass resolution is always $m_\mathrm{target}=10^{-7}\,\mathrm{M_\odot}$. In addition we require that the volume of a cell is not more than $10$ times larger than that of its largest direct neighbour, otherwise the cell is refined.
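The refinement policy just described can be summarised as a small decision function. This is a schematic restatement with the thresholds quoted above, not actual \textsc{arepo} code:

```python
def refinement_action(m_cell, vol_cell, vol_largest_neighbour,
                      m_target=1.0e-7):
    """Schematic cell refinement policy: keep cell masses (in solar
    masses) within a factor of two of the target mass, and refine
    cells more than 10x the volume of their largest direct neighbour."""
    if m_cell > 2.0 * m_target or vol_cell > 10.0 * vol_largest_neighbour:
        return "refine"
    if m_cell < 0.5 * m_target:
        return "derefine"
    return "keep"
```

For example, a cell at the target mass but ten times larger in volume than its biggest neighbour is refined, while a cell at half the target mass is de-refined.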
In addition to hydrodynamics \textsc{arepo} includes self-gravity of the gas. Gravitational accelerations are computed using a tree solver and are coupled to the hydrodynamics via a leapfrog time integration scheme. The gravitational softening of the cells is set to $2.8$ times their radius, with a minimum softening of $10\,\mathrm{km}$. To improve the efficiency of the simulation we use local timesteps in \textsc{arepo}, i.e.\ every cell is integrated on the largest timestep from a discrete set of timesteps that is still smaller than the timestep demanded by its local timestep criteria. This way only a small number of cells is integrated on the smallest timestep in the simulation and the bulk of the cells can be integrated on much larger timesteps.
To model degenerate electron gases present in WDs we use the \textsc{helmholtz} equation of state \citep{Timmes2000} including Coulomb corrections. Moreover, we include a $55$~isotope nuclear reaction network fully coupled to the hydrodynamics \citep{Pakmor2012}. The included isotopes are $\mathrm{n}$, $\mathrm{p}$, $^4\mathrm{He}$, $^{11}\mathrm{B}$, $^{12-13}\mathrm{C}$, $^{13-15}\mathrm{N}$, $^{15-17}\mathrm{O}$, $^{18}\mathrm{F}$, $^{19-22}\mathrm{Ne}$, $^{22-23}\mathrm{Na}$, $^{23-26}\mathrm{Mg}$, $^{25-27}\mathrm{Al}$, $^{28-30}\mathrm{Si}$, $^{29-31}\mathrm{P}$, $^{31-33}\mathrm{S}$, $^{33-35}\mathrm{Cl}$, $^{36-39}\mathrm{Ar}$, $^{39}\mathrm{K}$, $^{40}\mathrm{Ca}$, $^{43}\mathrm{Sc}$, $^{44}\mathrm{Ti}$, $^{47}\mathrm{V}$, $^{48}\mathrm{Cr}$, $^{51}\mathrm{Mn}$, $^{52,56}\mathrm{Fe}$, $^{55}\mathrm{Co}$, $^{56,58-59}\mathrm{Ni}$. We use the JINA reaction rates \citep{Cyburt2010}. Nuclear reactions are computed for all cells with $T > 10^6\,\mathrm{K}$ except for cells that are part of the shock front which we assume to be the case when $\nabla \cdot \vec{v} < 0$ and $\left| \nabla P \right| r_\mathrm{cell} / P > 0.66$ \citep{Seitenzahl2009}. Note that we reran the simulation until the nuclear burning ceases with an additional limiter that artificially reduces nuclear reaction rates to guarantee that the nuclear timescale is always longer than the hydrodynamical timestep of a cell \citep{Kushnir2013,Shen2018}. As we show and discuss in appendix~\ref{app} the results are essentially identical, with the main difference that the helium detonation moves slower with the additional limiter that reduces the reaction rates.
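The condition under which the nuclear network is evaluated can be written compactly. The sketch below restates the criteria quoted above (the temperature floor and the shock detector of Seitenzahl et al. 2009); it is an illustrative sketch, not the actual \textsc{arepo} implementation:

```python
import numpy as np

def burning_allowed(temp, div_v, grad_p_mag, pressure, r_cell,
                    t_min=1.0e6, shock_thresh=0.66):
    """Boolean mask of cells in which nuclear reactions are computed:
    hot enough (T > 1e6 K) and not inside a detected shock front
    (div v < 0 and |grad P| * r_cell / P > 0.66)."""
    in_shock = (div_v < 0.0) & (grad_p_mag * r_cell / pressure > shock_thresh)
    return (temp > t_min) & ~in_shock

# example: shocked hot cell, unshocked hot cell, cold cell, mildly
# compressed hot cell -> [False, True, False, True]
temp = np.array([2.0e9, 2.0e9, 1.0e5, 5.0e8])
div_v = np.array([-1.0, 0.5, -1.0, -1.0])
grad_p = np.array([1.0, 1.0, 0.1, 0.1])
allowed = burning_allowed(temp, div_v, grad_p,
                          pressure=np.ones(4), r_cell=np.ones(4))
```

Masking out shocked cells prevents the unresolved numerical shock width from triggering spurious burning inside the detonation front itself.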
\subsection{Setup and Inspiral}
From the stellar evolution calculation we obtain the density profile, temperature profile, and composition profile of both WDs. To generate the 3D initial conditions in \textsc{arepo} we employ a HEALPix-based algorithm that generates roughly cubical initial cells \citep{Pakmor2012,Ohlmann2017}. We first put both stars individually into a box with a box size of $10^5\,\mathrm{km}$, a background density of $10^{-5}\,\mathrm{g\,cm^{-3}}$, and a background specific thermal energy of $10^{14}\,\mathrm{erg\,g^{-1}}$. We relax them for $40\,\mathrm{s}$ (for the $0.8\,\mathrm{M_\odot}$ CO~WD) and $60\,\mathrm{s}$ (for the $0.69\,\mathrm{M_\odot}$ hybrid WD). For the first $80\%$ of the relaxation we apply a friction force that damps out initial velocities that are introduced by noise in the original mesh. In the last $20\%$ of the relaxation time we disable the friction force and check that the density profiles of the WDs no longer change, i.e.\ that the relaxed stars are stable at their initial profile.
After the relaxation we take the final state of both WDs from their relaxation in isolation and place them together in a simulation box with a size of $10^7\,\mathrm{km}$. We put the WDs on a circular co-rotating orbit at a separation of $a=4.2\times10^4\,\mathrm{km}$, which sets the initial orbital period to $T=120\,\mathrm{s}$. We chose this initial orbit rather wide so that the initial tidal forces of the WDs on each other are small. We use a passive tracer fluid to track the material of each WD individually in the simulation. This also allows us to easily compute the centers of both WDs.
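As a consistency check, the quoted orbital period follows directly from Kepler's third law for the chosen separation and total mass (standard constants; not part of the simulation code):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def orbital_period(m_total, a):
    """Kepler's third law for a circular binary of total mass m_total
    (kg) at separation a (m)."""
    return 2.0 * math.pi * math.sqrt(a**3 / (G * m_total))

m_tot = (0.80 + 0.69) * M_SUN
print(orbital_period(m_tot, 4.2e7))  # ~120 s at a = 4.2e4 km
print(orbital_period(m_tot, 1.9e7))  # ~37 s at a = 1.9e4 km
```

The second call corresponds to the separation reached later in the simulation, where the period has shrunk to about $38\,\mathrm{s}$.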
At this separation gravitational wave inspiral is still relevant, though it would take far too much computing time to follow the system at its true inspiral rate. To circumvent this problem we add an artificial force that removes angular momentum from the binary system similar to gravitational waves. However, we choose the force such that the separation $a$ decreases at a constant rate $v_\mathrm{a}$, i.e.
\begin{equation}
\frac{da}{dt} = v_\mathrm{a}.
\end{equation}
This leads to a purely azimuthal acceleration on the primary WD given by
\begin{equation}
\vec{a}_{1} = - \frac{M_{2}^2}{M_1+M_2} \frac{G}{2a^2} \frac{\vec{v}_{1}}{\left|\vec{v}_{1}\right|^2} v_\mathrm{a},
\end{equation}
and vice versa for the secondary WD. We choose $v_\mathrm{a}=50\,\mathrm{km\,s^{-1}}$ so that the orbit shrinks fast compared to physical gravitational wave emission but slowly compared to the dynamical timescales of the WDs, so that the WDs can easily adapt to the changing tidal forces.
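A minimal sketch of this drag term, in SI units (an illustrative restatement, not the simulation code; note that dimensional consistency requires the $G/(2a^2)$ factor):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def inspiral_acceleration(m1, m2, v1, a, v_a):
    """Artificial azimuthal drag on the primary WD (velocity vector v1)
    that shrinks the orbital separation a at the constant rate v_a;
    for the secondary WD, swap m1 <-> m2 and pass its velocity v2."""
    return -(m2**2 / (m1 + m2)) * (G / (2.0 * a**2)) \
        * v1 / np.dot(v1, v1) * v_a
```

For a circular orbit the magnitude reduces to $|\vec{v}_1|\,v_\mathrm{a}/(2a)$, antiparallel to the orbital motion, so the drag removes angular momentum at exactly the rate needed for $\dot{a}=-v_\mathrm{a}$.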
The evolution of the orbits of the two WDs is shown in Fig.~\ref{fig:evolution}. At $t=430\,\mathrm{s}$, after about six orbits, the orbital period has decreased to $T=38\,\mathrm{s}$ and the separation to $a=1.9\times10^4\,\mathrm{km}$. At this point we switch off the angular momentum loss term. The point where it is switched off is chosen such that the accretion stream is dense enough to dynamically affect the surface of the primary WD.
At this time the secondary WD has donated $4\times10^{-3}\,\mathrm{M_\odot}$ of helium to the primary WD which now forms a helium shell on the surface of the primary WD. Moreover, about $4\times 10^{-5}\mathrm{M_\odot}$ of pure helium material has been unbound from the system through tidal tails.
\subsection{Ignition of the helium detonation}
A density slice through the midplane of the binary system is shown in the top row of Fig.~\ref{fig:he_det}, $10\,\mathrm{s}$ after the point when we switched off the angular momentum loss term. As the accretion stream shears along the surface of the primary CO~WD it generates Kelvin-Helmholtz instabilities that disturb the initially separated helium and carbon-oxygen layers. At this time the accreted helium layer and the CO core of the primary WD have mixed only very little.
A zoom-in on this interface (additional rows in Fig.~\ref{fig:he_det}) shows the base of one helium rich bubble on the surface of the primary CO~WD with a radius of about $10^3\,\mathrm{km}$.
The gas here is compressed and heated up and eventually reaches a temperature of about $10^9\,\mathrm{K}$ at a density larger than $2\times10^{5}\,\mathrm{g\,cm^{-3}}$. The cells in this hotspot have a typical radius of $15\,\mathrm{km}$. Under these conditions helium starts to burn explosively and a helium detonation forms quickly, consistent with ignition simulations of resolved helium detonations \citep{Shen2014b}. The helium is burned to intermediate mass elements. The helium detonation compresses the material as the shock runs over it and heats it up as shown in the right panel of the second row of Fig.~\ref{fig:he_det}. Note that the helium detonation forms at the same place when the additional burning limiter is applied (for details see Appendix~\ref{app}).
The burning helium layer is neither hot nor dense enough to also ignite the adjacent pure CO material, so the helium detonation first propagates outwards into the helium shell of the primary WD and then starts sweeping around it. At the same time the helium detonation sends a shock wave into the CO core of the primary WD. A summary of the global properties of the detonation is shown in Table~\ref{tab:det}.
\begin{table*}
\begin{centering}
\begin{tabular}{ccccccc}
\hline
Event & {$T_\mathrm{ign}$} & {$\rho_\mathrm{ign}$} & {$t_\mathrm{ign}$} & {$\Delta{Q}_\mathrm{nuc}$} & {IME} & {IGE}\tabularnewline
 & $[\mathrm{K}]$ & $[\mathrm{g\,cm^{-3}}]$ & $[\mathrm{s}]$ & $[\mathrm{erg}]$ & $[\mathrm{M_\odot}]$ & $[\mathrm{M_\odot}]$\tabularnewline
\hline
He det & $2.2\times 10^{9}$ & $2\times 10^{5} $ & $440.5$ & $1.2\times10^{50}$ & $4.7\times10^{-2}$ & $3.5\times10^{-3}$\tabularnewline
C det & $3.8\times 10^{9}$ & $2\times 10^{7}$ & $443.5$ & $4\times10^{50}$ & $3.4\times 10^{-1}$ & $2.3\times10^{-2}$ \tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Physical conditions that lead to ignition of the helium and carbon detonations and their global yields. The table shows the ignition temperature, ignition density, and ignition time of the detonations as well as the total energy release and the amount of intermediate mass elements and iron group elements that the detonation synthesises.}
\label{tab:det}
\end{table*}
\section{Explosion}
\label{sec:explosion}
After the formation of the helium detonation the situation is initially similar to the well-known double detonation scenario \citep{Fink2007,Guillochon2010,Pakmor2013}, in which the helium detonation travels around the primary WD while sending a shock wave into the carbon-oxygen core. In this scenario the eventually spherical shock-wave converges in a single point in the core where it ignites a carbon detonation that disrupts the WD.
The main difference in the system we investigate here is that the helium detonation does not detonate the primary WD, but detonates the secondary WD instead. Moreover, the secondary WD produces radioactive $^{56}\mathrm{Ni}$ despite its low mass, as it is strongly compressed prior to its explosion.
Since the helium detonation will travel all around the primary WD, it is unavoidable that the shock wave that it sends into the CO core will eventually converge at a single point. The strength of the shock, the position of this point, and most importantly the gas density at the convergence point decide whether a carbon detonation forms. The position of the convergence point depends on the speed of the shock wave moving through the CO core relative to the speed of the helium detonation moving around the core, and on the size of the core relative to its circumference.
If the helium detonation is much faster than the shock wave it generates at the surface that is traveling into the core, the shock wave starts roughly at the same time everywhere on the surface. In this case the convergence point is close to the center of the core. In contrast, if the helium detonation is comparable in speed or slower than the shock wave that is propagating into the core, the convergence point will be close to the interface between the helium shell and CO core opposite to the ignition point of the helium detonation.
Therefore, the position of the convergence point depends fundamentally on the properties of the helium shell and the CO core of the WD. In particular, a higher density at the base of the helium shell increases the speed and completeness of the nuclear burning: the helium detonation becomes stronger and faster, and also sends a stronger shock into the core. As a rule of thumb, if the shock wave converges in the CO core at a density $\geq 10^7\,\mathrm{g\,cm^{-3}}$ a carbon detonation will likely form, while carbon burning will likely not start if the density is below $3\times10^6\,\mathrm{g\,cm^{-3}}$ \citep{Seitenzahl2009,Shen2014}.
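The rule of thumb above can be stated as a tiny classifier (illustrative only; thresholds as quoted from Seitenzahl et al. 2009 and Shen \& Moore 2014):

```python
def carbon_ignition_outcome(rho_convergence):
    """Rule-of-thumb outcome of a converging shock in a CO core as a
    function of the gas density (g/cm^3) at the convergence point."""
    if rho_convergence >= 1.0e7:
        return "carbon detonation likely"
    if rho_convergence < 3.0e6:
        return "no carbon ignition"
    return "marginal"

# the two shock convergence events in this work:
print(carbon_ignition_outcome(2.0e6))  # primary WD   -> no carbon ignition
print(carbon_ignition_outcome(1.0e7))  # secondary WD -> carbon detonation likely
```

The two calls correspond to the convergence densities found for the primary and secondary WD in the sections that follow.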
\begin{figure*}
\includegraphics[width=\textwidth]{ejecta.pdf}
\caption{Unbound ejecta at the time of the ignition of the helium detonation on the surface of the CO~WD (top row) and at the end of the simulation, long after the carbon detonation (bottom row). The left panels show the projected density of the unbound material. The middle and right panels show the distributions of ejecta mass (middle) and composition (right) in velocity space.}
\label{fig:ejecta}
\end{figure*}
\subsection{Primary WD}
Our primary WD is significantly less massive (only $0.8\,\mathrm{M_\odot}$) than the WDs typically studied in attempts to make type Ia supernovae, which require roughly solar mass primary WDs \citep{Sim2010,Shen2018}. Note also that the central density of our pre-shocked primary WD barely reaches $10^{7}\,\mathrm{g\,cm^{-3}}$, below which a carbon detonation does not produce any radioactive $^{56}\mathrm{Ni}$.
The propagation of the detonation in the helium shell around the primary WD and through its CO core is shown in Fig.~\ref{fig:shock_pri}. It takes the helium detonation about $3\,\mathrm{s}$ to reach the opposite side of the point of ignition. In this time it releases $2\times 10^{49}\,\mathrm{erg}$ from burning the helium shell around the primary WD.
The shock wave it sends into the CO core compresses the core only marginally, temporarily increasing its central density by $\approx 5\%$ from $1.03\times 10^7\,\mathrm{g\,cm^{-3}}$ to $1.08\times 10^7\,\mathrm{g\,cm^{-3}}$.
The shock wave takes about the same time to cross the core as the helium detonation to burn around the core, so that the detonation converges roughly at the edge of the CO core at the opposite side of the ignition of the helium detonation. The shock convergence occurs far off-center at a radius of about $5000\,\mathrm{km}$ and at a density of only $\approx 2\times10^6\,\mathrm{g\ cm^{-3}}$. Owing to the low density and the weak detonation the converging shock fails to ignite a carbon detonation. It is possible that with significantly higher resolution a carbon detonation might form, though the conditions are probably just insufficient as far as we can tell from resolved 1D ignition simulations \citep{Seitenzahl2009,Shen2014}. Note that creating a carbon detonation also fails with an additional burning limiter (for details see Appendix~\ref{app}), so we are reasonably confident that this result does not depend on the details of the numerical treatment of the detonation.
Compared to \citet{Pakmor2013} who simulated a system with a similar primary WD of $1.0\,\mathrm{M_\odot}$ with a helium shell of $0.01\,\mathrm{M_\odot}$ our primary WD in this simulation is less massive, so the central density as well as the density at the base of the helium shell are lower. Therefore the nuclear burning in the helium detonation is less complete, it releases less energy, and the helium detonation is weaker and slower. For comparison, the helium detonation only needs $1\,\mathrm{s}$ to travel around the $1.0\,\mathrm{M_\odot}$ primary WD in \citet{Pakmor2013} compared to $3\,\mathrm{s}$ for our $0.8\,\mathrm{M_\odot}$ primary WD.
One way to improve the chances of creating a carbon detonation may be to increase the strength of the helium detonation through a larger He-shell mass. A higher mass could possibly be mediated by a longer period of mass transfer from the secondary hybrid~WD to the primary CO~WD prior to the dynamical interaction between both WDs that we model here. For this the binary system probably needs to transfer several $0.01\,\mathrm{M_\odot}$ of helium to the primary WD during the many orbits in which the secondary WD already fills its Roche lobe but the accretion rates are very low and the accretion is dynamically unimportant. Whether such a scenario is possible is unknown as, unfortunately, it is not feasible at the moment to properly simulate such a system for the millions of orbits that would be required to model this phase. If such a scenario works, its rates could be non-negligible \citep{Ruiter2014}. Moreover, the primary CO~WD could also have obtained a He-shell during the evolution of the binary system \citep{Neunteufel2016,Neunteufel2019}.
\subsection{Secondary WD}
The main difference in our system compared to previous simulations \citep[see, e.g.][]{Pakmor2013} is the hybrid nature of the secondary WD, which has a massive helium shell but is more massive than a pure helium WD and has a much higher central density. As we show, this difference gives rise to qualitatively different evolution and outcomes than previous models in which He~WD secondaries were considered.
At the time the helium detonation ignites on the primary, the accretion stream from the secondary WD to the primary WD consists mostly of helium and is degenerate with a density of about $10^5\,\mathrm{g\,cm^{-3}}$. When the helium detonation that is sweeping around the primary WD reaches the end of the accretion stream, it is able to travel upwards through the accretion stream and reach into the helium shell of the hybrid WD. Note that the total mass in the accretion stream is small so its energy release is negligible compared to the helium burned on the surface of the two WDs.
The propagation of the helium detonation around the secondary WD and the shock it sends into its CO core are shown in Fig.~\ref{fig:shock_sec}. Since most of the original $0.1\,\mathrm{M_\odot}$ of helium is still on the secondary WD and the base of the helium shell is at a comparably high density of $\sim 10^6\,\mathrm{g\,cm^{-3}}$, the helium detonation on the secondary WD is very energetic and fast. It sweeps around the WD in less than $0.5\,\mathrm{s}$, much faster than the shock wave it sends into the CO core, and releases $1.0\times 10^{50}\,\mathrm{erg}$ of energy from burning the helium shell, about five times the amount released from burning the helium shell around the primary WD.
As shown in Fig.~\ref{fig:shock_sec}, the shock wave traveling into the core therefore starts on an almost spherical surface. It converges close to the center of the secondary hybrid WD about $1\,\mathrm{s}$ after the helium detonation around it ceases. As the shock wave is quite energetic, it also compresses the center of the secondary WD prior to convergence and raises its central density significantly, by almost a factor of three, from $5.6\times10^6\,\mathrm{g\,cm^{-3}}$ to $1.5\times10^7\,\mathrm{g\,cm^{-3}}$.
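As a quick numerical sanity check (an illustration by us, not part of the simulation itself), the two quoted central densities indeed differ by close to a factor of three:

```python
# Illustrative check of the central compression quoted in the text.
rho_initial = 5.6e6     # g cm^-3, central density before shock compression
rho_compressed = 1.5e7  # g cm^-3, central density at shock convergence

factor = rho_compressed / rho_initial
print(f"compression factor: {factor:.2f}")  # 2.68, i.e. almost a factor of three
```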
When the shock wave converges close to the center at a density of about $10^7\,\mathrm{g\,cm^{-3}}$ it heats up carbon enough to ignite a carbon detonation. Similar to the convergence in the primary WD, the formation of the carbon detonation happens independently of the details of the treatment of the detonation (for details see Appendix~\ref{app}). Once the carbon detonation has been ignited, it quickly sweeps through the CO core of the hybrid WD. It releases $4\times 10^{50}\,\mathrm{erg}$ of nuclear energy and unbinds the secondary WD. Its ashes then expand and leave the intact primary WD behind.
\begin{figure*}
\includegraphics[width=\textwidth]{remnant.pdf}
\caption{Surviving primary WD at the end of the simulation. The left panel shows density slices in the plane of rotation and perpendicular to it. The middle panel shows the radial density (solid lines) and temperature (dashed lines) profile of the surviving WD as well as the initial density profile of the primary WD at the beginning of the simulation for comparison. The right panel shows the composition of the surviving WD.}
\label{fig:remnant}
\end{figure*}
\subsection{The unbound ejecta}
After the carbon detonation has burned the secondary WD completely, its hot ashes become unbound and expand. Eventually they reach homologous expansion. We show the ejecta $46\,\mathrm{s}$ after the ignition of the carbon detonation in the lower row of Fig.~\ref{fig:ejecta}. At this time the structure of the ejecta deviates only by a few percent from homologous expansion.
The ejecta contain a total of $0.6\,\mathrm{M_\odot}$, which is most of the mass of the secondary WD. The mass distribution of the ejecta is similar to that of a low-energy type Ia supernova, with most of the mass between $5000\,\mathrm{km/s}$ and $10000\,\mathrm{km/s}$ and a very low-mass tail up to $27000\,\mathrm{km/s}$. The ejecta consist mostly of oxygen ($0.21\,\mathrm{M_\odot}$), silicon ($0.16\,\mathrm{M_\odot}$), sulfur ($0.09\,\mathrm{M_\odot}$), helium ($0.07\,\mathrm{M_\odot}$), and carbon ($0.06\,\mathrm{M_\odot}$). They contain only $0.02\,\mathrm{M_\odot}$ of iron-group elements, of which $0.013\,\mathrm{M_\odot}$ is $^{56}\mathrm{Ni}$. The small amount of iron-group elements is a direct consequence of the low central density of the hybrid WD, which produced iron-group elements in the carbon detonation only because its center was compressed by the helium detonation prior to the ignition of the carbon detonation.
Almost all of the mass of the ejecta is at velocities $v \leq 15000\,\mathrm{km/s}$. The outer parts of the ejecta ($v \geq 10000\,\mathrm{km/s}$) are dominated by unburned helium and only have small trace contributions of carbon, oxygen, and intermediate-mass elements. The core of the ejecta ($v < 5000\,\mathrm{km/s}$) is dominated by intermediate-mass elements and contains all of the radioactive $^{56}\mathrm{Ni}$ in the ejecta. The part in between, which contains most of the mass, is dominated by oxygen and contains significant amounts of intermediate-mass elements.
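The quoted mass budget can be tallied directly (an illustrative bookkeeping sketch by us; masses taken from the text, and the totals agree to within rounding):

```python
# Ejecta composition as quoted in the text (masses in Msun).
# "IGE" = iron-group elements, which include 0.013 Msun of 56Ni.
ejecta = {"O": 0.21, "Si": 0.16, "S": 0.09, "He": 0.07, "C": 0.06, "IGE": 0.02}

total_listed = sum(ejecta.values())
print(f"listed species: {total_listed:.2f} Msun of 0.6 Msun total ejecta")  # 0.61, consistent within rounding
```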
We expect the remnant of this explosion to look similar to the remnant of an ordinary low-energy supernova, as it has a similar mass and kinetic energy.
In the top row of Fig.~\ref{fig:ejecta} we show, in a similar way, the unbound material at the time the helium detonation ignites, i.e. before any relevant amount of nuclear burning has happened. This material has been ejected from the outer Lagrange point of the secondary WD and forms an outflowing spiral structure around the binary system. It mostly consists of helium and has a typical velocity of $1000\,\mathrm{km/s}$, with a significant tail down to a few $100\,\mathrm{km/s}$ and up to $1500\,\mathrm{km/s}$. Although the ejected mass is unphysically low, as we only follow the binary system for a few orbits prior to the explosion, the velocity distribution provides an idea of the outflowing material in the system prior to the explosion that may become visible either by interaction with the ejecta of the explosion or as early absorption lines in the supernova spectra (see e.g. the discussion of CSM interaction in WD-WD mergers in \citealt{Jac+20} and Bobrick et al. in prep.).
We present detailed synthetic light-curves and spectra for the ejecta in Sec.~\ref{sec:observables}.
Note that the ejecta in Fig.~\ref{fig:ejecta} are shown in their rest-frame. This frame moves with a velocity of $v_\mathrm{kick,ejecta}=1600\,\mathrm{km/s}$ relative to the rest-frame of the original binary system. This relative velocity is a direct consequence of burning only one of the WDs: its material, as it is burned, moves with the orbital velocity of the secondary WD relative to the center of mass of the binary system. When the ashes of the secondary WD suddenly expand after the nuclear burning has ceased, they essentially keep this bulk velocity. Since an external observer sees the binary system, and therefore also the ejecta, from a random angle, we expect spectral lines in observed spectra of this explosion to exhibit a shift between $-v_\mathrm{kick,ejecta}$ and $v_\mathrm{kick,ejecta}$ relative to the rest-frame of the host galaxy, following a cosine distribution. The shift is large enough to be easily detected and is a clear prediction for our system.
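The predicted distribution of line-of-sight shifts can be sketched numerically (our illustration, assuming isotropically distributed observers, for which $\cos\theta$ is uniform on $[-1,1]$):

```python
# Monte Carlo sketch: line-of-sight shift v_kick * cos(theta) for random observers.
import random

random.seed(1)
v_kick = 1600.0  # km/s, bulk velocity of the ejecta relative to the binary rest-frame
shifts = [v_kick * random.uniform(-1.0, 1.0) for _ in range(200000)]

# Mean ~ 0 for isotropic observers; all shifts lie within +-1600 km/s.
print(f"mean shift: {sum(shifts)/len(shifts):.0f} km/s")
print(f"range: [{min(shifts):.0f}, {max(shifts):.0f}] km/s")
```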
Note that this velocity shift is inherently expected for any explosion of a single WD in a binary system and its magnitude will be roughly equal to the orbital velocity of the exploding WD. For non-degenerate companions the orbital velocity is typically of order $100\,\mathrm{km/s}$, so this shift will hardly be visible. In contrast, for a binary system of massive CO~WDs \citep{Pakmor2013,Shen2018b} a shift of order of $2000\,\mathrm{km/s}$ is expected that should be detectable in a sufficiently large sample of normal type Ia supernovae if their origin is dominated by such a scenario.
Furthermore, if one identifies a candidate supernova remnant (SNR) from any such scenario in the Galaxy, its center-of-mass velocity should show such a high velocity shift compared to its rest-frame velocity in the Galaxy. Moreover, if it is suggested to be related to a hypervelocity WD (e.g. in one of the cases suggested by \citeauthor{Shen2018b} 2018), the SNR center-of-mass velocity should be in the opposite direction to that of the hyper-velocity WD.
\subsection{The primary WD as surviving hyper-velocity WD}
After the secondary WD suddenly explodes and its ejecta become unbound, the primary WD is left behind. Without its companion, it continues to move on a straight line with the orbital velocity it had at the time of the explosion. For our system, it ends up moving with a velocity of $v_\mathrm{kick,WD}=1300\,\mathrm{km/s}$ relative to the rest-frame of the original binary system.
The density profile, temperature profile, and composition profiles of the primary CO~WD at the end of the simulation are shown in Fig.~\ref{fig:remnant}. The core of the WD is essentially undisturbed and very close to spherical for $r \leq 5000\,\mathrm{km}$. The central density has decreased by about a factor of two and the central temperature has increased to about $5\times10^7\,\mathrm{K}$.
At larger radii, $r > 5000\,\mathrm{km}$, the WD has changed more significantly. The helium it had accreted prior to the explosion has mostly been burned to heavier elements and become unbound, but part of it, as well as part of the ejecta of the secondary WD, has been captured by the primary WD and now constitutes a large fluffy envelope. This envelope has increased the radius of the CO~WD by an order of magnitude, from initially $\sim 10^4\,\mathrm{km}$ to $\sim 10^5\,\mathrm{km}$. The envelope is quite hot, with a peak temperature of $2\times10^8\,\mathrm{K}$ at its base and $10^6\,\mathrm{K}$ at its surface. Its composition is dominated by intermediate-mass elements with a small amount of helium. There is also a small amount, $5 \times 10^{-3}\,\mathrm{M_\odot}$, of radioactive $^{56}\mathrm{Ni}$ present in the envelope.
The surviving WD is slightly more massive than the original primary CO~WD. It lost $0.01\,\mathrm{M_\odot}$ but gained $0.04\,\mathrm{M_\odot}$ from the secondary hybrid WD, most of it from capturing low velocity ejecta of its explosion.
The properties of the surviving WD are particularly interesting in light of the recently found `D6' WDs that have been argued to be the surviving secondary WDs in the dynamically driven double degenerate double detonation (D6) scenario for normal type Ia supernovae \citep{Shen2018b}. These WDs appear to have a large fluffy envelope consisting of intermediate-mass and iron-group elements. They show neither hydrogen nor helium in their spectra, potentially consistent with our result of no hydrogen and at most a very small (not observable) amount of helium on the surviving WD. In the D6 model, the donor WD must have had a helium envelope. \citet{Shen2018b} suggest that helium is not observed in the hyper-velocity objects because of low temperatures. It is not clear what the temperature of the donor should be in the D6 model, but in any case we note that any helium shell might have burned following the explosion, similar to our model where the helium detonation propagates back to the donor.
The observed WDs move with velocities of $\sim 10^3\,\mathrm{km/s}$ relative to the Milky~Way, comparable to our findings. Moreover, their number is significantly lower than what would naively be expected if every normal type Ia supernova produces one of them, as suggested by the D6 model. These numbers, however, are potentially consistent with the rates we infer. We discuss this connection further in Sec.~\ref{sec:popsynth}, where we attempt to estimate the frequency of events like ours.
\begin{figure}
\includegraphics[width=0.98\linewidth]{lc_bol.pdf}
\caption{Bolometric light curves for the spherically symmetric 1D model (red), the angle-averaged light curve of the 2D model (brown), and three different viewing angles, including both polar directions and the plane of rotation.}
\label{fig:lc_bol}
\end{figure}
\section{Synthetic observables of the ejecta}
\label{sec:observables}
We map the ejecta including their detailed composition from the nucleosynthesis post-processing of $10^6$ Lagrangian tracer particles to a spherical 1D mesh as well as a cylindrical 2D mesh assuming axisymmetry.
We then run the radiation transfer code SuperNu \citep{Wollaeger2013,Wollaeger2014} to calculate synthetic light curves and spectra for the explosion. SuperNu uses Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) methods to stochastically solve the special-relativistic radiative transport and diffusion equations to first order in $v/c$ in up to three dimensions. The hybrid IMC and DDMC scheme makes SuperNu computationally efficient in regions of high optical depth. This approach allows SuperNu to solve for energy diffusion with very few approximations, which is very relevant for supernova light curves.
The bolometric light-curves for the 1D RT model, the angle-averaged light-curve of the 2D RT model, as well as the light-curves for three different lines of sight of the 2D RT model, are shown in Fig.~\ref{fig:lc_bol}. The bolometric light-curve peaks $16$d after the explosion at an absolute magnitude of $M_\mathrm{bol}\approx-15$, significantly fainter than even the faintest SNe~Ia. This faintness is expected given the tiny amount of radioactive material produced.
The bolometric light-curves show significant line of sight dependence, with a spread of about one magnitude at peak between the brightest line of sight (the negative $z$-axis, along the direction of the angular momentum vector of the binary system) and the faintest line of sight (the positive $z$-axis).
Observers in the plane of rotation see a bolometric light-curve that is similar to the angle-averaged bolometric light-curve at peak.
At late times the angle averaged bolometric light-curve is dominated by emission in the direction of the positive $z$-axis (which was the faintest direction at peak).
Since the ejecta contain a comparatively large amount ($2\times10^{-3}\,\mathrm{M_\odot}$) of $^{44}\mathrm{Ti}$, the transient will likely appear very red.
\section{Event rates and binary evolution}
\label{sec:popsynth}
In this section we estimate the occurrence rate of explosions similar to the one described in detail above based on binary population synthesis modelling. We use the binary evolution code \texttt{SeBa} \citep{Por96,Too12} to simulate the evolution of three million binaries per model starting from the zero-age main-sequence (ZAMS) until the merger of the double WD system. At every time-step, processes such as stellar winds, mass transfer, angular momentum loss, tides, and gravitational radiation are considered with the appropriate prescriptions.
\texttt{SeBa} is freely available through the Astrophysics MUlti-purpose Software Environment, or AMUSE \citep[][see also \href{http://amusecode.org/}{{\color{blue}amusecode.org}} ]{Por09, Por18}.
As the main cause for discrepancies between different binary population synthesis codes is found in the choice of input physics and initial conditions \citep{Too14}, we construct two models (model $\alpha\alpha$ and $\gamma\alpha$) that are typically used in double WD modelling with \texttt{SeBa} \citep[see e.g.][]{Too12,Reb19, Zenati2019}. These models differ with respect to the modelling of unstable mass transfer, i.e. common-envelope (CE) evolution \citep{Iva13}. Generally the CE-phase is modelled on the basis of energy conservation \citep{Pac76, Web84}. In this model orbital energy is consumed to unbind the CE with an efficiency $\alpha_{\rm CE}$ (Eq.\,\ref{eq:alpha-ce}). This recipe is used in model $\alpha\alpha$ for every CE-phase. In our alternative model ($\gamma\alpha$) of CE-evolution, we consider a balance of angular momentum with an efficiency parameter $\gamma$ \citep[Eq.\,\ref{eq:gamma-ce}, ][]{Nel00}.
The $\gamma$-recipe is used unless the binary contains a compact object or the CE is triggered by a tidal instability (rather than dynamically unstable Roche lobe overflow). More details on the models are given in Appendix\,\ref{app-bps}.
Previous work has already shown that hybrid WDs are common \citep{Zenati2019} and frequently merge with other WDs \citep{PeretsZenati2019}. Here we focus on mergers between a massive hybrid WD ($M_{\rm Hybrid} \gtrsim 0.63M_\odot$) and a CO WD ($M_{\rm Hybrid} <M_{\rm CO}\lesssim 0.85M_{\odot}$). On average, they make up several per cent of all double WD mergers, giving an integrated rate of several $10^{-5}$ events per solar mass of created stars over a Hubble time.
The typical evolution towards the merger consists of several phases of interaction. Generally the CO~WD forms first and the hybrid~WD is formed afterwards. This is the case for $76\%$ of the systems in our default models ($M_{\rm Hybrid}>0.63M_{\odot}$, $M_{\rm CO}<0.85M_{\odot}$), and $66-83\%$ with the variations described below.
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{rates.pdf}
\caption{Merger rates for all double WD mergers (DWD) (blue), the traditional DWD merger channel for SNe~Ia that assumes all merging DWD binaries with a total mass $M_\mathrm{tot}>1.38M_\odot$ produce a SN~Ia (red) and the scenario described in this paper (green). Thick lines use the $\gamma\alpha$ model, thin lines the $\alpha\alpha$ model for common envelope evolution.}
\label{fig:dtd}
\end{figure}
For the majority of these systems, the progenitor of the hybrid is initially (i.e. on the ZAMS) the more massive star in the system. Consequently, it is this star that evolves off the MS before the companion does. It fills its Roche lobe, initiates a phase of stable mass transfer, loses its hydrogen envelope, and becomes a low-mass hydrogen-poor helium-burning star (i.e. a stripped star). During the mass transfer, the companion accretes $\sim 1-2 M_{\odot}$. After the mass transfer phase has ended and the companion has evolved off the MS, the companion initiates a common-envelope phase. After the companion's hydrogen envelope is lost from the system, the binary consists of two stripped stars. At this stage the companion is more massive than the progenitor of the hybrid, due to the previous phase of mass accretion. Consequently, its evolutionary timescale as a stripped star is shorter than that of the hybrid progenitor, and it becomes a CO~WD first; the hybrid WD is formed afterwards.
This channel is referred to as the formation reversal channel by \citet[see their section 4.3]{Too12}.
In more detail, in our two models ($\gamma\alpha$ and $\alpha\alpha$) the total merger rate of double WDs is $3.1\times 10^{-3} M_{\odot}^{-1}$ and $3.2\times 10^{-3} M_{\odot}^{-1}$ respectively. Assuming that these events only occur for systems with $M_{\rm Hybrid}>0.63 M_{\odot}$ and $M_{\rm CO}<0.85M_{\odot}$, the synthetic event rate is $ (7-8.5)\times 10^{-5} M_{\odot}^{-1}$.
The event rate is not very sensitive to the exact minimum hybrid mass and maximum CO mass. If the minimum hybrid mass required for the scenario described in this paper is as high as $M_{\rm Hybrid}>0.68M_{\odot}$, as is the case for our specific simulation described above, the rate slightly decreases to $(2.9-3.4)\times 10^{-5} M_{\odot}^{-1}$. If instead the minimum hybrid mass can be as low as $M_{\rm Hybrid}>0.58M_{\odot}$\footnote{We also studied a 3D model for the case of a $0.58 M_{\odot}$ hybrid, in which the hybrid did not detonate but was instead disrupted later on by the primary, as we shall discuss in more detail in a future paper. It therefore provides a lower limit for these companion-detonation SNe.}, the rate increases somewhat to $(1.2-1.5)\times 10^{-4} M_{\odot}^{-1}$. On the other hand, if the mass of the CO~WD can be as high as $0.9M_{\odot}$, the event rate of our default models increases to $(7.8-9.4)\times 10^{-5} M_{\odot}^{-1}$ (and up to $(1.4-1.6)\times 10^{-4} M_{\odot}^{-1}$ for $M_{\rm Hybrid}>0.58M_{\odot}$).
When reducing the maximum CO mass to $M_{\rm CO}<0.8 M_{\odot}$, the event rate decreases slightly to $(6.5-7.3)\times 10^{-5} M_{\odot}^{-1}$ (and down to $(2.7-2.9)\times 10^{-5} M_{\odot}^{-1}$ for $M_{\rm Hybrid}>0.68 M_{\odot}$).
Overall these types of mergers comprise about 2.5$\%$ of all double WD mergers in our default models, and 0.9-5.1$\%$ with the variations in the limiting values of $M_{\rm Hybrid}$ and $M_{\rm CO}$.
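As a consistency check of these numbers (an illustrative calculation by us), dividing the default channel rate by the total double-WD merger rate reproduces the quoted $\sim2.5$ per cent:

```python
# Channel rate as a fraction of the total double-WD merger rate (values from the text).
total_dwd = [3.1e-3, 3.2e-3]  # per Msun formed, models gamma-alpha and alpha-alpha
channel = [7.0e-5, 8.5e-5]    # per Msun formed, default mass cuts

fractions = [c / t for c, t in zip(channel, total_dwd)]
print([f"{100 * f:.1f}%" for f in fractions])  # ['2.3%', '2.7%'], bracketing ~2.5%
```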
The event rates described above are based on stellar simulations at solar metallicity, here taken as $Z=Z_{\odot} = 0.02$.
At lower metallicities the synthetic event rates are not significantly different. The overall rate for the default models $\gamma\alpha$ and $\alpha\alpha$ is $(9.5-9.8)\times 10^{-5} M_{\odot}^{-1}$ at $Z=0.001$.
The event rate is about an order of magnitude lower than the estimated SNe~Ia rate \citep{Li2011, Mao17}. If all SNe~Ia originated from the D6 scenario, we would expect to find $\approx 20$ hypervelocity WDs that are surviving companion WDs of SNe~Ia. However, \citet{Shen2018b} only found three candidates, inconsistent with the D6 scenario but roughly consistent with optimistic rates for our scenario. A significantly larger number of such WDs would likely be inconsistent with a common origin in our scenario.
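This comparison can be made explicit (a back-of-the-envelope illustration by us; the factor-of-ten rate ratio is taken from the text):

```python
# Expected number of detectable hyper-velocity WD candidates under the two scenarios.
expected_if_all_d6 = 20  # quoted expectation if every normal SN Ia is a D6 event
rate_ratio = 0.1         # our channel rate is ~an order of magnitude below the SN Ia rate

expected_ours = expected_if_all_d6 * rate_ratio
print(f"expected in our scenario: ~{expected_ours:.0f} candidates (3 observed)")  # ~2
```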
\section{Summary and Outlook}
\label{sec:conclusion}
We presented a 3D hydrodynamical simulation of the final phase of a double WD binary about to merge, consisting of a hybrid~WD with a massive $0.1\,\mathrm{M_\odot}$ He shell and a CO~WD.
We find that after some initial mass transfer of helium from the secondary hybrid~WD onto the primary CO~WD, a thin helium shell builds up around the CO~WD. We showed that as the accretion stream becomes denser and its impact on the surface of the CO~WD becomes more violent, eventually a helium detonation forms on the surface of the CO~WD.
The helium detonation wraps around the CO~WD but fails to ignite the CO core, as the small amount of helium around the CO~WD creates only a weak detonation and the shock wave from the detonation converges far off-center in the CO~WD at low densities.
However, the helium detonation also travels up the accretion stream and burns the thick helium shell around the hybrid WD. We show that this generates a strong shock wave that converges close to the center of the CO core of the hybrid WD and ignites a carbon detonation.
The hybrid WD is completely burned and unbound by the carbon detonation. Owing to the compression of the core of the hybrid WD by the strong shock wave, the carbon detonation is able to synthesize $0.018\,\mathrm{M_\odot}$ of $^{56}\mathrm{Ni}$ despite the initially low central density of the hybrid WD. The ejecta lead to a very faint and likely very red transient that lasts for several tens of days.
We estimated the event rate of the scenario described here to be up to $10\%$ of the SNe~Ia rate, making these events an interesting target for more sensitive observations by future facilities like the Vera C. Rubin Observatory.
The CO~WD remains intact and is flung away to become a hyper-velocity WD, with an ejection velocity of the order of its original orbital velocity of $1300\,\mathrm{km/s}$ relative to the rest frame of the original binary system. It collects a thin outer layer from the ashes of the explosion of the hybrid WD that contains $5\times10^{-3}\,\mathrm{M_\odot}$ of $^{56}\mathrm{Ni}$, and provides an alternative origin for identified hyper-velocity WDs to the D6 model proposed by \citet{Shen2018b}. Moreover, the expected rates of such hyper-velocity WDs from our modelled scenario are consistent with the current observationally inferred rate of hyper-velocity WDs (using the GAIA catalogue; \citealt{Shen2018b}), which is significantly lower than suggested by the D6 model.
We expect the center-of-mass velocity of the ejecta of the exploding (secondary) WD to show a velocity shift of the order of $1600\,\mathrm{km\,s^{-1}}$, in the opposite direction to that of the hyper-velocity WD, which might be observable as a systematic shift of SN velocities compared with their host galaxies. In Galactic cases, one might find SNRs related to specific hyper-velocity WDs; in those cases the SNR center-of-mass velocity should be similarly high and directed opposite to that of the hyper-velocity WD.
Explosions of secondary WDs in double WD binary systems have been seen previously \citep{Pakmor2012,Papish2015,Tanikawa2019}, though always triggered by an explosion of the primary WD, which is absent in our scenario.
We conclude that the detonation of hybrid WDs mediated by He-detonations on primary companions opens up interesting scenarios for novel transients that need to be further investigated in 3D hydrodynamical simulations and searched for by future surveys.
\section*{Data availability}
The simulations underlying this article will be shared on reasonable request to the corresponding author.
\section*{Acknowledgements}
RP thanks Markus Kromer, Stefan Taubenberger, Wolfgang Hillebrandt, and Sasha Kozyreva for interesting and helpful discussions. ST acknowledges support from the Netherlands Research Council NWO (VENI 639.041.645 grants). YZ and HBP thank Daan Van Rossum for interesting discussions and acknowledge support for this project from the European Union's Horizon 2020 research and innovation program under grant agreement No 865932-ERC-SNeX.
\bibliographystyle{mnras}
\section{Introduction}
The existence of bright quasars~(QSOs) at $z \ga 6-7$
\citep{2006NewAR..50..665F,2010AJ....139..906W,2011Natur.474..616M,2015Natur.518..512W}
presents an intriguing question: how do supermassive black holes (SMBHs)
with masses $\ga {\rm a ~few} \times 10^9 ~M_\odot$ form within the first billion years after the Big Bang?
Perhaps the simplest explanation is that the $10-100 \ {\rm M}_\odot$ black hole remnants of the first generation of stars grow into these supermassive black holes via gas accretion. However, this requires essentially uninterrupted Eddington-limited accretion for the entire history of the Universe, which seems unlikely due to radiative feedback from the accreting BH
\citep[e.g.][]{2007MNRAS.374.1557J,2009ApJ...696L.146M,2011ApJ...739....2P,2012ApJ...747....9P,2012MNRAS.425.2974T}.
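The growth-time argument can be quantified with the standard Salpeter $e$-folding time (an illustrative estimate by us; the $\sim45\,\mathrm{Myr}$ value assumes a radiative efficiency of $\sim0.1$ and is not taken from the text):

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_Salpeter).
t_salpeter = 45.0             # Myr, e-folding time for radiative efficiency ~0.1
m_seed, m_final = 100.0, 3e9  # Msun: stellar-mass seed -> observed SMBH mass

t_grow = t_salpeter * math.log(m_final / m_seed)
print(f"required growth time: ~{t_grow:.0f} Myr")  # ~775 Myr, comparable to the age of the Universe at z ~ 6-7
```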
Major mergers of BHs are not expected to accelerate growth significantly.
This is because the kick velocity of the merged BH is typically larger than the escape velocity of its host galaxy
\citep[e.g.][]{2007ApJ...661..430H,2007PhRvL..99d1102K} leading to ejection and halting gas accretion
(but see also \citealt{TH09}).
An alternative scenario is the formation of supermassive stars~(SMSs) with a mass of $\ga 10^{5} \ {\rm M}_\odot$
\citep{1994ApJ...432...52L,2003ApJ...596...34B,2004MNRAS.354..292K,2006MNRAS.370..289B},
which directly collapse into SMBHs via general relativistic instability
\citep{1964ApJ...140..417C,1971reas.book.....Z,2002ApJ...572L..39S}. The larger mass of these seed BHs reduces the accretion time necessary to reach the masses implied by high-redshift QSOs.
Formation of a SMS requires a $\sim 10^{5} \ {\rm M}_\odot$ metal-poor gas cloud with no molecular hydrogen (H$_2$)
in a massive dark matter halo with virial temperature of $\sim 10^4$ K \citep{2003ApJ...596...34B}.
In the absence of H$_2$, the gas cloud can only cool by atomic hydrogen
(Ly$\alpha$, two-photon, and H$^-$ free-bound emissions)
and the temperature remains $\sim 8000$ K \citep[e.g.,][]{O01}.
Once such a massive gas cloud is assembled, it collapses monolithically and
isothermally without significant fragmentation
\citep{2003ApJ...596...34B, 2010MNRAS.402.1249S, 2009MNRAS.393..858R, 2009MNRAS.396..343R,
2013MNRAS.433.1607L, 2014MNRAS.445L.109I,2015MNRAS.446.2380B}.
After the collapse, a single protostar with a mass of $\sim 1~{\rm M}_\odot$ is formed at the center of the cloud.
The protostar grows via rapid gas accretion, $\ga 1~{\rm M}_\odot~{\rm yr}^{-1}$~\citep{2014MNRAS.445L.109I,2015MNRAS.446.2380B},
and becomes a SMS within $\sim 1$ Myr~\citep{2012ApJ...756...93H,2013ApJ...778..178H,2013A&A...558A..59S}.
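The quoted accretion rate and growth timescale are mutually consistent (a simple illustrative estimate by us):

```python
# Time for a ~1 Msun protostar to reach the SMS mass scale at the quoted accretion rate.
m_sms = 1e5  # Msun, supermassive-star mass scale
mdot = 1.0   # Msun/yr, lower end of the quoted rapid accretion rate

t_years = m_sms / mdot
print(f"growth time: ~{t_years:.0e} yr")  # ~1e+05 yr, well within the ~1 Myr quoted
```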
Suppressing molecular hydrogen cooling is the largest obstacle to high-redshift SMS formation. One way to achieve this is through H$_2$ photodissociation by far-ultraviolet (FUV) photons in the Lyman-Werner (LW) band ($11.2-13.6$ eV)
\citep[e.g.,][]{O01,2002ApJ...569..558O, 2003ApJ...596...34B,2010MNRAS.402.1249S,IO11,
2014MNRAS.443.1979L,2014MNRAS.445..544S}.
In order to dissociate H$_2$ in a massive dark matter halo, the required FUV intensity is $J_{\rm LW}^{\rm crit}\simeq 1500$
\citep[in units of $10^{-21}$ erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$ Hz$^{-1}$;][]{2014MNRAS.445..544S}.
Given that star-forming galaxies are also strong X-ray sources, $J_{\rm LW}^{\rm crit}$ may increase by a factor $\sim 10$
\citep{IO11,2014arXiv1411.2590I}.
This is due to the electron-catalyzed reactions (${\rm H}+{\rm e}^- \rightarrow {\rm H}^-+\gamma$;
$\rm{H}^- + {\rm H}\rightarrow \rm{H}_2+\rm{e}^-$),
which promote H$_2$ formation in low-metallicity gas.
Massive dark matter halos irradiated with $J_{\rm LW}^{\rm crit} \sim 10^4$ are expected to be extremely rare.
Direct-collapse BHs~(DCBHs) formed in this way may not be able to account for the abundance of observed high-$z$ quasars
\citep{Dijkstra+14,2014arXiv1411.2590I}.
We point out that a variation of this scenario, based on the synchronized formation and merger of pairs of $T_{\rm vir}\sim 10^4 ~ \rm{K}$ dark matter halos, may produce enough DCBHs to explain the observations \citep{2014MNRAS.445.1056V}.
However, additional numerical simulations are required to validate this possibility.
A different pathway to H$_2$ suppression is collisional dissociation (H$_2$ + H $\rightarrow$ 3H),
which can occur if the metal poor gas reaches high density and temperature, satisfying
$(n/10^2 \ {\rm cm^{-3}}) \times (T/10^6 \ {\rm K}) \ga 1$ (the so-called ``zone of no return'').
\cite{IO12} proposed that galactic-scale shocks can satisfy this condition.
If the shock happens at the central region of a massive halo $\la 0.1~R_{\rm vir}$,
the density and temperature of the shocked gas become $n\ga 10^4~{\rm cm}^{-3}$ and $T\ga 10^4$ K,
and efficient collisional dissociation of H$_2$ can occur.
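The criterion can be written as a small helper function (our sketch; the threshold is taken from the text):

```python
# "Zone of no return": (n / 10^2 cm^-3) * (T / 10^6 K) >= 1 enables collisional
# dissociation of H2 (H2 + H -> 3H) to suppress molecular hydrogen.
def in_zone_of_no_return(n_cm3: float, T_K: float) -> bool:
    return (n_cm3 / 1e2) * (T_K / 1e6) >= 1.0

# Shocked gas at the center of a massive halo (values from the text): marginal pass.
print(in_zone_of_no_return(1e4, 1e4))  # True
# Diffuse pre-shock gas at the same temperature: fails the criterion.
print(in_zone_of_no_return(1e0, 1e4))  # False
```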
However, the simulations of \cite{2014MNRAS.439.3798F} showed that, for several examples of less massive halos with $T_{\rm vir}\la 10^4$ K, shocks do not reach the center, preventing SMS formation.
It was also pointed out that in typical halos (not the high-velocity collisions discussed in this paper), the zone of no return cannot
be reached without radiative cooling, which may lead to star formation and prevent SMS formation \citep{Visbal+14}.
SMS formation may still be possible in larger halos ($T_{\rm vir}\ga 10^4$ K) if shocks reach the centers of the halos before significant amounts of stars have formed. This requires further study with numerical simulations.
Another proposed SMS formation scenario is based on massive-galaxy mergers~\citep{2010Natur.466.1082M, Mayer_et_al_2014}.
A merger can drive strong gas inflow and supersonic turbulence in the inner galactic core,
which prevents significant fragmentation of the gas even with some metals~\citep[but see][]{2013MNRAS.434.2600F}.
If efficient angular momentum transport can be sustained in the inner $\sim 0.1 \ \rm pc$ for a sufficiently long time (which requires confirmation from further numerical simulations) a SMS of $\ga 10^{8} \ {\rm M}_\odot$ could form.
DCBHs from such SMSs might explain the observed abundance of high-$z$ QSOs.
In this paper, we propose high-velocity collisions of protogalaxies as a new pathway to form SMSs and DCBHs at high redshift.
As observed in the local Universe, a fraction of galaxies and also clusters of galaxies collide with a much larger velocity than the
typical peculiar velocity (e.g., the Taffy galaxy and the bullet cluster;
\citealt{1993AJ....105.1730C,2002AJ....123.1881C,1998ApJ...496L...5T,2002ApJ...567L..27M}).
At the interface of such colliding galaxies, shock-induced starbursts have been confirmed
\citep[e.g.,][]{1978ApJ...219...46L, Saitoh_et_al_2009}.
We show that, when a similar collision with a high velocity of $\ga 200$ km s$^{-1}$ happens between metal-poor galaxies, hot gas ($\sim 10^6$ K) forms in the post-shock region and the subsequent radiative cooling makes the gas dense enough that H$_2$ molecules are destroyed by collisional dissociation.
Once a gas clump of $\sim 10^5~{\rm M}_\odot$ with a low concentration of H$_2$ due to collisional dissociation forms,
its gravitational collapse can be triggered and a SMS forms.
Note that our scenario does not require supersonic turbulence or extremely efficient angular momentum transfer
as in the galaxy merger scenario. We estimate the abundance of SMSs and DCBHs produced by high-velocity galaxy collisions,
and show that it can be comparable to that of high-$z$ QSOs.
This paper is organized as follows.
In \S2, we derive the necessary conditions to form SMSs in protogalaxy collisions.
In \S3, we estimate the number density of protogalaxy collisions resulting in the SMS formation,
and show that the DCBHs from such SMSs could be the seeds of high-redshift QSOs.
Finally, we summarize and discuss our results in \S4.
Throughout we assume a $\Lambda$CDM cosmology consistent with the latest constraints from \emph{Planck} \citep{2014A&A...571A..16P}: $\Omega_\Lambda=0.68$, $\Omega_{\rm m}=0.32$, $\Omega_{\rm b}=0.049$, $h=0.67$, $\sigma_8=0.83$, and $n_{\rm s} = 0.96$.
\section{SMS formation via protogalaxy collisions }
Generally speaking, a SMS can form from a $\ga 10^{5} \ {\rm M}_\odot$ metal-poor gas clump without H$_2$.
Since H$_2$ can form via the electron-catalyzed reactions
(${\rm H}+{\rm e}^- \rightarrow {\rm H}^- + \gamma; {\rm H}^- + {\rm H} \rightarrow {\rm H}_2 + {\rm e}^-$),
it must be efficiently dissociated.
We propose a SMS formation scenario where H$_2$ is dissociated from the shocks produced by a high-velocity collision of two dark matter halos. In this section, we show that SMS formation requires the relative velocity of the colliding protogalaxies to be in a specific range. If the collision velocity is too low, the gas will not be shocked to sufficient temperature and density to dissociate H$_2$. On the other hand, if the velocity is too high and the shock too violent, the gas will be disrupted before it can cool via atomic hydrogen and form a SMS. This velocity window depends on redshift due to the evolution of the typical gas properties of pre-shocked gas within dark matter halos.
\subsection{Protogalaxy properties}
Here, we describe the properties of dark matter halos and the gas within them as a function of redshift.
These properties set the collision-velocity bounds for SMS formation derived below.
We consider protogalaxies hosted by dark-matter halos with virial temperatures of $T_{\rm vir} \sim 10^4$ K, corresponding to the atomic cooling threshold.
Larger halos undergo radiative cooling, which triggers star formation and metal enrichment, inhibiting SMS formation~(see Sec. \ref{sec:metal_enrichment} for further discussion).
On the other hand, for halos with $T_{\rm vir} \ll 10^4 \ \rm K$, the gas mass is too small to form a SMS~(see equation \ref{eq:Mgas}).
The virial mass of a dark matter halo at the atomic-cooling threshold is given by
\begin{equation}
M_{\rm cool} \simeq 1.7 \times 10^7~{\rm M}_\odot \left(\frac{T_{\rm vir}}{10^4~{\rm K}}\right)^{3/2}
\left(\frac{1+z}{16}\right)^{-3/2},
\label{eq:Mcool}
\end{equation}
and the virial radius by
\begin{equation}
R_{\rm vir}\simeq 350~h^{-1}~{\rm pc}~\left(\frac{T_{\rm vir}}{10^4~{\rm K}}\right)^{1/2}
\left(\frac{1+z}{16}\right)^{-3/2}
\label{eq:Rvir}
\end{equation}
\citep{2001PhR...349..125B}.
Simulations show that before cooling becomes efficient the central regions of dark matter halos contain a gas core with
approximately constant density and radius \citep[see e.g.][]{Visbal+14}
\begin{equation}
R_{\rm core} \simeq 0.1~R_{\rm vir}.
\label{eq:Rcore}
\end{equation}
The gas core is surrounded by an envelope with a density profile roughly given by $\propto r^{-2}$.
The entropy profile, defined as $K=k_{\rm B}Tn_0^{-2/3}$, also has a core with $K/K_{\rm vir} \sim 0.1$.
Here $K_{\rm vir}=k_{\rm B}T_{\rm vir}\bar n_{\rm b}^{-2/3}$ and
$\bar n_{\rm b}$ is $200\Omega_{\rm m}^{-1}$ times the mean number density of baryons
\citep[e.g.,][]{Visbal+14}.
Since $T\simeq T_{\rm vir}$ in the (pre-shock) gas core, the gas density can be estimated as
\begin{equation}
n_0 \simeq \left(\frac{K}{K_{\rm vir}}\right)^{-1.5}{\bar n_b}
\simeq 22~{\rm cm}^{-3}\left(\frac{1+z}{16}\right)^3.
\label{eq:pre_n}
\end{equation}
The total core gas mass is
\begin{equation}\label{eq:Mgas}
M_{\rm gas, core} \sim 3.0\times 10^5~{\rm M}_\odot~\left(\frac{T_{\rm vir}}{10^4~{\rm K}}\right)^{3/2}\left(\frac{1+z}{16}\right)^{-3/2},
\end{equation}
which is $\sim 10$ per cent of the total gas mass inside the dark-matter halo.
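As a rough numerical cross-check (not part of the original derivation), the fiducial scalings in equations (\ref{eq:Mcool})$-$(\ref{eq:Mgas}) can be evaluated directly; the sketch below uses standard cgs constants and the cosmological parameters adopted in \S1, and simply compares the core gas mass implied by $R_{\rm core}$ and $n_0$ with equation (\ref{eq:Mgas}):

```python
import math

# Standard cgs constants and cosmological parameters adopted in the text
m_p, pc, Msun, h = 1.6726e-24, 3.086e18, 1.989e33, 0.67
Omega_b, Omega_m = 0.049, 0.32

def halo_properties(T_vir=1e4, z=15.0):
    """Fiducial halo scalings; prefactors are taken from the text."""
    x = (1.0 + z) / 16.0
    M_cool = 1.7e7 * (T_vir / 1e4) ** 1.5 * x ** -1.5        # Msun
    R_vir = 350.0 / h * (T_vir / 1e4) ** 0.5 * x ** -1.5     # pc
    R_core = 0.1 * R_vir                                     # pc
    n0 = 22.0 * x ** 3                                       # cm^-3
    M_gas_core = 3.0e5 * (T_vir / 1e4) ** 1.5 * x ** -1.5    # Msun
    return M_cool, R_vir, R_core, n0, M_gas_core

M_cool, R_vir, R_core, n0, M_gas_core = halo_properties()

# Core gas mass implied by a uniform sphere of radius R_core and density n0
# (mu ~ 1.22 for neutral primordial gas); should agree with the quoted
# M_gas,core to within a factor of order unity.
mu_n = 1.22
M_check = 4.0 / 3.0 * math.pi * (R_core * pc) ** 3 * mu_n * m_p * n0 / Msun

# Fraction of the halo's total gas mass (~ Omega_b/Omega_m x M_cool) in the core
core_fraction = M_gas_core / (Omega_b / Omega_m * M_cool)
```

At $z=15$ this gives $M_{\rm check}\sim 4\times 10^5~{\rm M}_\odot$ and a core gas fraction of $\sim 0.1$, consistent with the values quoted above.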
\subsection{Lower velocity limit}
The lower collision velocity limit is set by the requirement that H$_2$ is collisionally dissociated.
This will happen if the shock is strong enough for the gas to enter the so-called ``zone-of-no-return''~\citep[see][and Appendix A]{IO12}.
This region in temperature-density space is defined by
\begin{equation}
T\ga 5.2\times 10^5~{\rm K} \left(\frac{n}{100~{\rm cm}^{-3}}\right)^{-1},
\label{eq:zone}
\end{equation}
for $n\la10^4~{\rm cm}^{-3}$, where $T$ and $n$ are the post-shock temperature and density, respectively.
For a collision velocity, $v_0$, much larger than the sound speed of the pre-shocked gas ($\sim 10$ km s$^{-1}$),
\begin{equation}
T = \frac{3\mu m_{\rm p} v_0^2}{16k_{\rm B}} \simeq 8.5 \times 10^5~{\rm K}\left(\frac{v_0}{250~{\rm km~s^{-1}}}\right)^2,
\label{eq:post_T}
\end{equation}
and
\begin{equation}
n = 4n_0,
\label{eq:post_n}
\end{equation}
where we set the mean molecular weight as $\mu=0.6$.
From equations (\ref{eq:pre_n}), (\ref{eq:zone}), (\ref{eq:post_T}), and (\ref{eq:post_n}),
one can obtain the lower limit of the collision velocity for SMS formation, which is given by
\begin{equation}
v_0 \ga 210 \ {\rm km \ s^{-1}} \ \left(\frac{1+z}{16}\right)^{-3/2}.
\label{eq:lower_limit}
\end{equation}
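This lower limit can be reproduced numerically by combining equations (\ref{eq:pre_n}) and (\ref{eq:zone})$-$(\ref{eq:post_n}); the short sketch below (a cross-check with standard cgs constants, not a new result) recovers $v_0\simeq 210$ km s$^{-1}$ at $z=15$ and the $(1+z)^{-3/2}$ scaling:

```python
k_B, m_p = 1.3807e-16, 1.6726e-24   # Boltzmann constant, proton mass (cgs)
mu = 0.6                             # post-shock mean molecular weight

def v_min_kms(z):
    """Minimum collision velocity for the post-shock gas to enter the
    zone-of-no-return, combining the pre-shock core density, the
    strong-shock jump conditions, and the zone boundary."""
    n0 = 22.0 * ((1.0 + z) / 16.0) ** 3       # pre-shock core density, cm^-3
    n = 4.0 * n0                               # strong-shock compression
    T_req = 5.2e5 * (n / 100.0) ** -1.0        # required post-shock T, K
    # invert T = 3 mu m_p v0^2 / (16 k_B) for v0
    v0 = (16.0 * k_B * T_req / (3.0 * mu * m_p)) ** 0.5   # cm/s
    return v0 / 1e5                            # km/s
```

This returns $\simeq 208$ km s$^{-1}$ at $z=15$, matching equation (\ref{eq:lower_limit}) after rounding.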
\begin{figure}
\begin{center}
\includegraphics[height=58mm,width=80mm]{fig1.eps}
\caption{
The collision-velocity window for supermassive star formation in protogalaxy collisions (shaded light-blue region).
The solid curve shows the lowest velocity required to induce collisional dissociation of H$_2$~(equation \ref{eq:lower_limit}).
The dashed curve shows the highest velocity that meets the radiative shock condition~(equation \ref{eq:upper_limit}).
}
\label{fig:z_v}
\end{center}
\end{figure}
\subsection{Upper velocity limit}
The upper collision velocity limit leading to SMS formation is set by the radiative shock condition.
If the collision velocity is too large, the shocked gas starts to expand adiabatically before the radiative cooling sets in.
In this case, the shocked gas cannot become dense enough to dissociate H$_2$ collisionally, and the shock-induced SMS formation cannot be triggered.
Therefore, the radiative cooling time needs to be shorter than the dynamical time of the shock.
The dynamical time of the shock can be estimated as
\begin{equation}
t_{\rm dyn} = \frac{R_{\rm core}}{v_0} \simeq 2.0 \times 10^5~{\rm yr}
\left(\frac{R_{\rm core}}{50~{\rm pc}}\right) \left(\frac{v_0}{250~{\rm km~s^{-1}}}\right)^{-1},
\label{eq:t_sc}
\end{equation}
while the radiative cooling time of the shocked gas is given by
\begin{equation}
t_{\rm cool}=\frac{3nk_{\rm B}T}{2\Lambda_{\rm rad}}.
\label{eq:t_cool_gene}
\end{equation}
Here, $\Lambda_{\rm rad}(n,T)=n^2\bar \Lambda (T)$ is the cooling rate (in units of erg s$^{-1}$ cm$^{-3}$).
The cooling function $\bar \Lambda(T)$ consists of the contributions from
atomic H line emission at $T\sim 10^4$ K, atomic He$^+$ and He line emissions at $T\sim 10^5$ K,
and the bremsstrahlung emission $\propto T^{1/2}$ at $T\ga 10^6$ K
\citep{1993ApJS...88..253S,2007ApJ...666....1G}.
In our scenario, the shocked gas temperature initially lies in the range $5\times 10^5 \ {\rm K} \la T \la 5\times 10^6$ K,
which corresponds to collision velocities of $150 \ {\rm km \ s^{-1}} \la v_0 \la 500$ km s$^{-1}$ (see equation \ref{eq:post_T}).
For such gas, the cooling function can be well approximated by $\bar \Lambda_0 \simeq 5\times 10^{-24}$ erg s$^{-1}$ cm$^3$.
Given that $n \propto T^{-1}$ during the radiative contraction, the cooling time becomes shorter at lower temperatures,
i.e. the gas contracts in a thermally unstable way.
Thus, the radiative cooling time is essentially given by substituting $\bar \Lambda_0$ into equation (\ref{eq:t_cool_gene}):
\begin{equation}
t_{\rm cool,max}\simeq 2.8 \times 10^4~{\rm yr}~\left(\frac{v_0}{250~{\rm km~s^{-1}}}\right)^2 \left(\frac{n_0}{10~{\rm cm}^{-3}}\right)^{-1}.
\label{eq:t_cool}
\end{equation}
From equations (\ref{eq:Rvir} $-$ \ref{eq:pre_n}), (\ref{eq:t_sc}), and (\ref{eq:t_cool}), the radiative shock condition ($t_{\rm dyn}\ga t_{\rm cool}$) can be rewritten as
\begin{equation}
v_0 \la 620 \ {\rm km \ s^{-1}} \ \left(\frac{T_{\rm vir}}{10^4~{\rm K}}\right)^{1/6} \left(\frac{1+z}{16}\right)^{1/2}.
\label{eq:upper_limit}
\end{equation}
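The upper limit can likewise be verified numerically; the sketch below solves the radiative-shock condition $t_{\rm dyn}=t_{\rm cool}$ for $v_0$ (the Boltzmann constant cancels in the algebra) and recovers $v_0\simeq 620$ km s$^{-1}$, to within rounding, at $z=15$:

```python
m_p, pc, h = 1.6726e-24, 3.086e18, 0.67   # cgs proton mass, parsec; Hubble h
mu = 0.6                                   # post-shock mean molecular weight
Lambda0 = 5e-24                            # erg s^-1 cm^3, flat part of the cooling function

def v_max_kms(z, T_vir=1e4):
    """Maximum collision velocity satisfying t_dyn >= t_cool.
    With t_dyn = R_core/v0, n = 4 n0, and T = 3 mu m_p v0^2/(16 k_B),
    eliminating T and n gives v0^3 <= 128 n0 Lambda0 R_core/(9 mu m_p)."""
    x = (1.0 + z) / 16.0
    R_core = 0.1 * 350.0 / h * (T_vir / 1e4) ** 0.5 * x ** -1.5 * pc  # cm
    n0 = 22.0 * x ** 3                                                # cm^-3
    v0 = (128.0 * n0 * Lambda0 * R_core / (9.0 * mu * m_p)) ** (1.0 / 3.0)
    return v0 / 1e5                                                   # km/s
```

The $T_{\rm vir}^{1/6}$ and $(1+z)^{1/2}$ scalings of equation (\ref{eq:upper_limit}) follow directly from $v_0^3\propto n_0 R_{\rm core}$.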
Fig.~\ref{fig:z_v} shows the collision velocity window for SMS formation as a function of redshift (the shaded region).
For collisions of $T_{\rm vir}\sim 10^4$ K halos in this window, gas is shocked into the zone-of-no-return and H$_2$ molecules are destroyed by collisional dissociation. Additionally, the shocked gas radiatively cools to $\sim 10^{4} \ \rm K$ within the shock dynamical time.
Once the $\ga 10^{5} \ {\rm M}_\odot$ gas cloud is assembled and cools, it becomes unstable
due to self-gravity, and a SMS can be formed~\citep[e.g.][]{2014MNRAS.445L.109I,2015MNRAS.446.2380B}.
Note that since the total mass of the colliding gas in dark matter halos with $T_{\rm vir}\simeq 10^4$ K
can be as large as $\sim 10^{6} \ \rm {\rm M}_\odot$,
several SMSs may be formed at once in the collisions we consider.
These SMSs would result in DCBHs of $\ga 10^{5} \ {\rm M}_\odot$ at $z > 10$ and, as we show in the following section, could potentially be the seeds of high-$z$ QSOs.
\section{DCBH Abundance from High-Velocity Collisions}
Precisely estimating the number of collisions which result in SMSs and DCBHs is very challenging because
it depends on detailed nonlinear physics. This most likely necessitates N-body simulations;
however, the rarity of these collisions (we estimate $\sim 10^{-9}~{\rm Mpc^{-3}}$ from $z=10$ to $20$) requires simulations much larger than are feasible with current computers.
Here we address this issue by performing a simple order-of-magnitude estimate with an analytic formula based on idealized assumptions and calibrated with an N-body simulation.
We find that the number density of DCBHs could be high enough to explain observations of high-$z$ QSOs.
However, we emphasize that our estimate has large uncertainties which we discuss in \S4.
\subsection{Collision rate}
We estimate the high-velocity protogalaxy collision rate by considering
the number of dark matter halos just below the atomic cooling
threshold that collide with a relative velocity in the range shown in Fig.~\ref{fig:z_v}.
For simplicity, we consider one halo moving with a very high peculiar velocity and the other with a typical velocity (determined with the N-body simulation described below) in the opposite direction.
Making the idealized assumption that halo positions and velocities are randomly distributed (i.e. ignoring clustering and coherent velocities, which are discussed in Sec. \ref{sec:rate}), the collision rate per volume is given by
\begin{equation}
\frac{dn_{\rm coll}}{dt} = n_{\rm fast} n_{\rm h} \times v_{\rm fast} \times \pi b^2
= f_{\rm fast} n_{\rm h}^2 \times v_{\rm fast} \times \pi b^2,
\end{equation}
where $n_{\rm h}$ is the number density of all halos near the cooling threshold,
$v_{\rm fast}$ is the velocity of the fast-moving halo necessary to form one (or several) SMS(s),
$n_{\rm fast}=f_{\rm fast}n_{\rm h}$ is the number density of halos with peculiar velocity greater
than this value (but below the maximum value), and $b$ is the impact parameter required for SMS formation.
Note that these values are all initially calculated in physical units and the abundance is later converted to comoving units to compare with QSO observations.
We compute the halo number density with the Sheth-Tormen mass function \citep{1999MNRAS.308..119S} and consider a mass range of $(0.5-1)\times M_{\rm cool}$~(equation \ref{eq:Mcool}).
For the impact parameter, $b$, we use ten per cent of the virial radius (equation \ref{eq:Rcore}).
We assume $v_{\rm fast}=v_{\rm min} - v_{\rm typ}$, where $v_{\rm typ}$ is the typical halo peculiar velocity (assumed to be $40~{\rm km~s^{-1}}$)
and $v_{\rm min}$ is shown in Fig.~\ref{fig:z_v} (solid curve). The fraction of fast-moving halos is given by
\begin{equation}
f_{\rm fast} = \int_{v_{\rm min}-v_{\rm typ} }^{v_{\rm max}-v_{\rm typ} } p(v) dv,
\end{equation}
where $v_{\rm max}$ is given in Fig.~\ref{fig:z_v} (dashed curve) and $p(v)$ is the peculiar velocity probability
density function (PDF), which we estimate from an N-body simulation as described in the following subsection.
\begin{figure}
\begin{center}
\includegraphics[height=60mm,width=80mm]{fig2.eps}
\caption{The peculiar velocity PDF, $p(v)$, measured from our N-body simulation (solid curves)
and the best fits described in Section 3 (dashed curves) for redshifts $z=20$, 15, 12, 10 (from left to right).}
\label{v_pdf}
\end{center}
\end{figure}
\subsection{N-body simulation and velocity PDF}
To estimate the dark matter halo peculiar velocity PDF, we ran a cosmological N-body simulation
with the publicly available code \textsc{gadget2} \citep{2005MNRAS.364.1105S}.
The simulation has a comoving box of side length $10~h^{-1}$ Mpc and contains $768^3$ particles,
corresponding to a particle mass of $1.96\times10^5~h^{-1}~{\rm M}_\odot$. Snapshots were saved at $z=20$, 15, 12, and 10.
We used the \textsc{rockstar} halo finder \citep{2013ApJ...762..109B} to locate dark matter halos and determine their masses and velocities.
\begin{figure*}
\begin{center}
\includegraphics[height=60mm,width=80mm]{fig3_1.eps}
\hspace{4mm}
\includegraphics[height=60mm,width=80mm]{fig3_2.eps}
\caption{The estimated number density of DCBHs created through high-velocity collisions per unit redshift (left panel) and the cumulative number density as a function of redshift (right panel).}
\label{dc_abund}
\end{center}
\end{figure*}
In Fig.~\ref{v_pdf}, we plot the velocity PDF of $M=(0.5-1)\times M_{\rm cool}$ dark matter halos at $z=20$, 15, 12, and 10.
We also plot fits with a form guided by \cite{2001MNRAS.322..901S} and \cite{2003MNRAS.343.1312H},
who argue that the PDF is a Gaussian distribution in each velocity component
(Maxwell-Boltzmann distribution for the total 1D velocity) at fixed halo mass and local overdensity.
For halos near the cooling threshold this leads to
\begin{equation}
p(v) = \frac{\int d\delta p(\delta) (1+b_{\rm h} \delta) p(v | \delta)}{\int d\delta p(\delta) (1+b_{\rm h} \delta)},
\end{equation}
where $p(\delta)$ is the cosmological overdensity PDF, $b_{\rm h}$ is the Sheth-Tormen dark matter halo bias \citep{1999MNRAS.308..119S}, and $ p(v | \delta)$ is the velocity PDF at fixed $\delta$.
The overdensity PDF is assumed to be a lognormal distribution \citep[see e.g.][]{1991MNRAS.248....1C}
\begin{equation}
p(\delta) = \frac{1}{\sqrt{2\pi \sigma_\delta^2}} \exp{\left (- \frac{[\ln(1+\delta) + \sigma_\delta^2/2 ]^2}{2 \sigma_\delta^2} \right )} \frac{1}{1+\delta}.
\end{equation}
The velocity PDF at fixed overdensity is assumed to be a Maxwell-Boltzmann distribution with variance
\begin{equation}
\sigma^2 = \left ( 1 + \delta \right )^{2 \mu} \sigma_{v}^2.
\end{equation}
We set $\sigma_\delta^2 = \ln[1 + 0.25/(1+z)]$ \citep{2003MNRAS.343.1312H} (which determines the size of the region corresponding to $\delta$) and fit two parameters to our data, $\sigma_{v}$ and $\mu$.
For redshifts of $z=20$, 15, 12, and 10, we find $\sigma_{v}=$24.04, 27.27, 30.91, and 33.54 and $\mu=$ 0.8687, 1.2404, 1.0949, and 1.2081, respectively.
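The biased-convolution model above can be implemented directly; the sketch below uses simple midpoint quadrature over $u=\ln(1+\delta)$ with the $z=15$ best-fit parameters. The halo-bias value $b_{\rm h}=5$ is an assumed illustrative number, not a value quoted in the text:

```python
import math

def p_v(v, sigma_v=27.27, mu_exp=1.2404, z=15.0, b_h=5.0, n_u=400):
    """Peculiar-velocity PDF: a Maxwell-Boltzmann speed distribution whose
    variance scales as (1+delta)^(2 mu), convolved with a lognormal
    overdensity PDF and weighted by (1 + b_h * delta). b_h is illustrative."""
    s2 = math.log(1.0 + 0.25 / (1.0 + z))   # lognormal variance sigma_delta^2
    s = math.sqrt(s2)
    num = den = 0.0
    du = 10.0 * s / n_u
    for i in range(n_u):
        u = -5.0 * s - 0.5 * s2 + (i + 0.5) * du   # u = ln(1+delta)
        p_u = math.exp(-(u + 0.5 * s2) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
        delta = math.exp(u) - 1.0
        w = max(1.0 + b_h * delta, 0.0)            # linear-bias weight, clipped
        sig = math.exp(mu_exp * u) * sigma_v       # (1+delta)^mu * sigma_v
        mb = (math.sqrt(2.0 / math.pi) * v * v / sig ** 3
              * math.exp(-v * v / (2.0 * sig * sig)))
        num += p_u * w * mb * du
        den += p_u * w * du
    return num / den
```

The PDF is normalized by construction, and the high-$v$ tail is boosted by overdense regions through the $(1+\delta)^{\mu}$ scaling of the velocity dispersion; this boost is what makes $f_{\rm fast}$ non-negligible.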
\subsection{DCBH number density}
Using the best fit $p(v)$ discussed above, we find the number of DCBHs produced from $z=10-20$.
We assume that the LW background at $z<20$ suppresses
star formation and subsequent metal enrichment in halos below the atomic cooling threshold.
At higher redshift we assume that star formation in minihalo progenitors prevents DCBH formation.
The total number density formed as a function of redshift is given by
\begin{equation}
n_{\rm DCBH}(z) = \int dz \frac{dt}{dz} \frac{dn_{\rm coll}}{dt}.
\end{equation}
To get the velocity PDF at intermediate redshifts between our simulation snapshots,
we linearly interpolate the results from Fig.~\ref{v_pdf}.
In Fig.~\ref{dc_abund}, we plot the number density of DCBHs as a function of redshift.
Most DCBHs come from high redshift due to the lower minimum velocity given in Fig.~\ref{fig:z_v}.
We find a total density by $z=10$ of $10^{-9} ~\rm{Mpc}^{-3}$ (comoving).
Thus, these DCBHs could potentially explain the abundance of high-$z$ QSOs.
\section{Discussion and Conclusions}
We have shown that high-velocity collisions of metal-poor galaxies may result in the formation of supermassive stars (SMSs).
When dark matter halos with a virial temperature $\sim 10^4$ K collide with a relative velocity
$\ga 200$ km s$^{-1}$, gas is heated to very high temperature ($\sim 10^6$ K) in the shocked region.
The shocked gas cools isobarically via free-free emission and forms a dense sheet ($\ga 10^4~{\rm cm}^{-3}$).
In this dense gas, H$_2$ molecules are collisionally dissociated, and
the gas never cools below $\sim 10^4$ K.
Such a clump of gas with mass $\sim 10^5~{\rm M}_\odot$, once assembled, becomes gravitationally unstable and forms a SMS,
which would directly collapse into a black hole (DCBH) via general relativistic instability.
We estimated the abundance of DCBHs produced by this scenario
with a simple analytical argument calibrated with cosmological N-body simulations and found a
number density of $\sim 10^{-9}$ Mpc$^{-3}$ (comoving) by $z=10$. This is large enough to explain the abundance of high-redshift bright QSOs.
\subsection{Observational Signatures}
Next, we briefly discuss the possible observational signatures of SMSs formed through high-velocity collisions of protogalaxies.
The temperature of the shocked gas in the collisions we consider above is $T \la 10^{6} \ \rm K$ (equation \ref{eq:post_T}).
The gas cools initially via bremsstrahlung, then atomic He$^{+}$ and He line emissions, and finally atomic H line emission.
The intrinsic bolometric luminosity can be estimated as
$L_{\rm bol} \sim 10 M_{\rm gas, core} {v_0}^2/t_{\rm cool, max} \la 10^{43} \ \rm erg \ s^{-1}$
for our representative case (see equations \ref{eq:Mgas} and \ref{eq:t_cool}).
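As a quick sanity check on this estimate, using the representative values quoted above:

```python
Msun, yr = 1.989e33, 3.156e7          # solar mass (g), year (s)
M_gas_core = 3.0e5 * Msun             # core gas mass of one halo, g
v0 = 250e5                            # collision velocity, cm/s
t_cool = 2.8e4 * yr                   # maximum radiative cooling time, s
# Factor of 10 follows the text's estimate L_bol ~ 10 M_gas,core v0^2 / t_cool,max
L_bol = 10.0 * M_gas_core * v0 ** 2 / t_cool   # erg/s, a few x 10^42
```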
Given that the colliding gas in dark matter halos is mostly neutral,
the cooling radiation is reprocessed into various recombination lines, e.g., Ly$\alpha$, H$\alpha$, and He II $\lambda1640$.
The H$\alpha$ and He II $\lambda1640$ emission lines are particularly interesting, since the intergalactic medium would be optically thin to them.
If $\sim {\rm a \ few}$ per cent of the bolometric luminosity goes into these lines,
which is reasonable~\citep[see, e.g.,][for numerical simulations of cooling radiation from hot metal-poor gas with $\sim 10^{5} \ \rm K$]{Johnson+11},
the emission could be detected from $z \la 15$ by the Near-Infrared Spectrograph~(NIRSpec) on the James Webb Space Telescope (JWST) with an exposure time of $\ga 100 \ \rm h$.
Due to the high cooling temperature, the ratio of He II $\lambda1640$ to H$\alpha$ flux is expected to be large,
which may make it distinguishable from other sources (e.g. population III galaxies).
In principle it may also be possible to detect H Ly$\alpha$ emission from protogalaxy collisions. Ly$\alpha$ emission could constitute a large fraction of the bolometric luminosity (perhaps 10 per cent).
However, even if a collision is observed in Ly$\alpha$ it may be difficult to distinguish from other objects such as accreting massive dark matter halos.
A detailed radiative transfer calculation is required to accurately predict the emission spectrum,
which is beyond the scope of this paper.
Even though protogalaxy collisions may be bright enough to observe, recent collisions are expected to be extremely rare.
At most, there will be $\sim$ a few such events across the whole sky at any given time, given the event rate,
$dn_{\rm coll}/dt (z=15) \sim 10^{-11}$ Mpc$^{-3}$ (comoving) Myr$^{-1}$,
and the emission duration, $\sim \ 0.1 \ \rm Myr$.
Thus, it will be extremely challenging to detect the signal described above in the near future.
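The all-sky estimate above can be reproduced with a simple flat-$\Lambda$CDM volume calculation using the \emph{Planck} parameters adopted in \S1; the rate and duration are the values quoted in the text, and treating the entire volume out to $z\sim 15$ as emitting at the $z=15$ rate is of course only an order-of-magnitude simplification:

```python
import math

H0, Om, OL = 67.0, 0.32, 0.68   # Planck parameters adopted in Section 1
c = 2.998e5                      # speed of light, km/s

def comoving_distance(z, n=2000):
    """Flat-LCDM comoving distance in Mpc via a midpoint-rule integral."""
    dz = z / n
    return sum(c / (H0 * math.sqrt(Om * (1.0 + (i + 0.5) * dz) ** 3 + OL)) * dz
               for i in range(n))

D = comoving_distance(15.0)                  # ~ 1.0e4 Mpc
V = 4.0 / 3.0 * math.pi * D ** 3             # comoving volume out to z ~ 15, Mpc^3
rate = 1e-11                                  # events Mpc^-3 (comoving) Myr^-1
duration = 0.1                                # emission duration, Myr
N_sky = rate * duration * V                   # expected number shining on the sky
```

This gives $N_{\rm sky}\sim$ a few, consistent with the estimate above.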
\subsection{Impact of assumptions}
Here we discuss some of the key assumptions we made, and how changes to these assumptions would affect our results.
\subsubsection{Metal enrichment}
\label{sec:metal_enrichment}
In \S2, we calculated the thermodynamics of the shocked gas after protogalaxy collisions assuming zero metallicity.
This assumption is valid for gas metallicity smaller than $\la 10^{-3}~Z_\odot$ \citep{IO12}.
If the metallicity is higher than this critical value, the shocked gas can cool down to below $\sim 10^4$ K
via metal-line emissions (C$_{\rm II}$ and O$_{\rm I}$) and fragment into clumps of $\sim 10~{\rm M}_\odot$, preventing SMS formation.
In general, the metal-enrichment of gas in massive dark matter halos proceeds in two different ways.
The first is internal enrichment by in situ star formation. Although not yet completely understood, the earliest star formation is expected to be
triggered by H$_2$ cooling in progenitor ``minihalos'' with $T_{\rm vir}<10^{4} \ \rm K$,
which eventually assemble into the more massive dark matter halos we consider in this paper.
The level of self-enrichment in minihalos is sensitive to the initial mass function~(IMF) of population III stars
\citep[e.g.][]{2014ApJ...781...60H,2014ApJ...792...32S}.
If the IMF is extremely top heavy, the metal enrichment is predominantly provided by pair-instability SNe.
In this case, the metallicity at the gas core inside the dark matter halo could be as large as $\sim 10^{-4}-10^{-3}~Z_\odot$ at
$z \sim 10$ \citep[e.g.][]{2010ApJ...716..510G, 2012ApJ...745...50W}.
On the other hand, if the IMF is mildly top heavy, core-collapse SNe from $\sim 40~{\rm M}_\odot$ stars
would be the dominant source \citep{2011Sci...334.1250H,2012MNRAS.422..290S,2012ApJ...760L..37H}.
In this case, the metallicity may be one order of magnitude lower ($\la 10^{-4}~Z_\odot$)
at the same redshift \citep{2002ApJ...567..532H, 2006NuPhA.777..424N}.
In our abundance estimates of SMS formation through high-velocity collisions, we only consider dark matter halos
with $T_{\rm vir} \la 10^4 \ \rm K$.
We assume that below $z=20$ the abundance of H$_2$ required for star formation in minihalos is sufficiently suppressed by LW background radiation~\citep[e.g.][]{2000ApJ...534...11H,2001ApJ...548..509M,2007ApJ...671.1559W,2008ApJ...673...14O}.
The required LW background flux for this to occur is estimated to be $J_{\rm LW} \sim 0.2-2~(3\times 10^{-4}-4\times 10^{-2})$ at $z=15~(20)$~\citep{2014MNRAS.445.1056V}.
The anticipated LW background flux is $J_{\rm LW}\sim 0.1-10~(0.01-20)$ at $z\sim 15~(20)$
\citep[e.g.][]{2009ApJ...695.1430A,2013MNRAS.428.1857J,2014MNRAS.445..107V}, which depends on the detailed properties of Pop III stars and the efficiency with which they are produced. While there are certainly large uncertainties in the LW background, we find our assumption of minihalo star formation suppression to be reasonable.
The second way in which halos can obtain metals is through external enrichment by galactic winds from nearby massive galaxies.
Semi-analytic models predict that the intergalactic medium can be polluted by this effect leading to an average metallicity of $\langle Z\rangle \simeq 10^{-4}~Z_\odot$ by $z\ga 12$ \citep[e.g.][]{2007MNRAS.382..945T,2010MNRAS.407.1003M}.
However, the fraction of the intergalactic medium that has been polluted is expected to be small at the high redshifts important for our calculation (e.g. the estimated volume filling factor is $\sim 10^{-4}$ for $z>12$; \citealt{2014MNRAS.440.2498P}).
Thus, external metal enrichment is unlikely to impact our assumption of zero metallicity.
In summary, it is reasonable to neglect the effect of metal cooling for dark-matter halos with a virial temperature $T_{\rm vir} \la 10^{4} \ \rm K$ at $z \ga 10$ after the LW background suppresses star formation in minihalos.
\subsubsection{Gas thermodynamics}
In this paper, we derived the conditions for SMS formation in protogalaxy collisions (Fig. \ref{fig:z_v})
based on the ``zone of no return'' shown in Appendix A.
This zone is obtained from a one-zone calculation of thermodynamics of the shocked gas.
Of course, galaxy collisions are actually three-dimensional phenomena; detailed hydrodynamical simulations are necessary to confirm our scenario.
We obtained equation (\ref{eq:zone}) by assuming that the shock is plane-parallel and steady.
This assumption would be valid for nearly head-on collisions and timescales shorter
than the shock dynamical timescale.
Accordingly, we set the maximum impact parameter as $b \sim 0.1~R_{\rm vir}$,
which corresponds to the size of the gas core of an atomic-cooling halo.
However, galaxy collisions occur typically with a larger impact parameter $b \sim R_{\rm vir}$.
A critical impact parameter for SMS formation needs to be identified by numerical simulation of protogalaxy collisions.
We note that the formation rate of SMSs and DCBHs in our scenario is somewhat sensitive to this critical value ($\propto b^2$).
As mentioned in \S2, the shocked gas in the zone of no return is thermally unstable.
Once the instability is triggered, fluctuations in the shocked gas grow and form clumpy structures
with a length scale of $\la c_{\rm s}t_{\rm cool}$~\citep{1965ApJ...142..531F}. As a result, the structure of the shocked gas deviates from a plane-parallel sheet within a cooling time.
Unfortunately, our one-zone calculation cannot capture these effects.
Note that, as long as the H$_2$ abundance is suppressed, the cooling length is kept shorter than the Jeans length,
thus the thermal instability does not necessarily result in a smaller fragmentation mass.
Nevertheless, the effects of the thermal instability on SMS formation need to be studied using high-resolution simulations.
We implicitly assumed that after radiative cooling of the shocked gas becomes irrelevant (i.e. $t_{\rm cool} \ga t_{\rm ff}$),
the corresponding Jeans mass is assembled, perhaps within $\sim t_{\rm ff}$, and the gas clump collapses due to its self-gravity.
Our one-zone calculation cannot address how the mass assembly process actually proceeds in detail.
Even when the mass budget is large enough ($> 10^5~{\rm M}_\odot$), the mass assembly may be halted,
e.g., due to the angular momentum of the gas. A detailed numerical simulation is also required to clarify this point.
Subsequent mass accretion onto the DCBHs formed in protogalaxy collisions is also uncertain at this stage.
This needs to be clarified in order to address whether such DCBHs can be the seeds of high-$z$ QSOs.
The initial mass of the DCBHs is $\sim 10^5~{\rm M}_\odot$ whereas the total gas mass of each colliding galaxy is $\ga 10^6~{\rm M}_\odot$.
We also note that the DCBH is unlikely to be hosted by the dark matter halo, at least just after the formation,
because the collision velocity of the parent halos is much larger than the virial velocity.
Nevertheless, continuous mass accretion from the intergalactic medium may be expected since high-velocity collisions
typically occur in over-dense regions.
Additional galactic and intergalactic-scale calculations including radiative feedback from accreting BHs are required to confirm this.
\subsubsection{High-velocity collision rate estimate}\label{sec:rate}
There are a number of uncertainties associated with the various assumptions we made
to estimate the number density of DCBHs produced from high-velocity protogalaxy collisions.
First, we note that our estimate depends strongly on the precise values of $v_{\rm min}$.
Due to the steepness of $p(v)$ at large $v$, a 20 per cent decrease in $v_{\rm min}$ increases the abundance of DCBHs by more than an order of magnitude.
The abundance also depends strongly on the value of the impact parameter needed to create a black hole ($n_{\rm DCBH} \propto b^2$).
Future hydrodynamic simulations of individual collision events are needed to constrain $v_{\rm min}$ and $b$.
Additionally, the small box size of our simulation may systematically reduce the abundance of halos with high peculiar velocity.
This is because large-scale density fluctuations, corresponding to modes larger than the box are artificially removed.
We leave it to future work to determine how much this effect could enhance the number density we compute here.
\begin{figure}
\begin{center}
\includegraphics[height=60mm,width=80mm]{fig4.eps}
\caption{The velocity PDF computed directly from our N-body simulation at $z=15$ (solid curve) and computed after subtracting away the local coherent velocity as described in Section 4.2.3 (dashed curve).}
\label{coh_pdf}
\end{center}
\end{figure}
We considered the case of one fast-moving halo and one halo at typical peculiar velocity in the opposite direction.
Of course there can be other combinations of peculiar velocities and collision angles which lead to a DCBH.
We find that we get similar results performing the more complicated analysis of adding up the contribution
from all angles and different combinations of peculiar velocities.
There is a factor of a few enhancement compared to the simple calculation discussed above.
We note that our idealized assumptions of random positions and velocity directions are not expected to be accurate,
and we estimate their impact on our number density estimate here.
We expect that fast halos will preferentially be found in overdense regions, possibly regions which will soon virialize.
The halo density enhancement (ignored in our estimate) is given by $(1+b_{\rm h} \delta)$
and the density of DCBHs will depend on this value squared.
At $z=15$, in a region that is about to virialize ($\delta \approx \delta_{\rm c}=1.686$),
the density enhancement of DCBHs is roughly 100.
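For reference, the quoted enhancement follows from squaring the biased density factor; the bias value below is an assumed illustrative number for a cooling-threshold halo at $z=15$, not a value quoted in the text:

```python
delta_c = 1.686          # linear collapse threshold
b_h = 5.3                # assumed illustrative Sheth-Tormen bias at z = 15
# The DCBH rate scales as the halo density squared, hence (1 + b_h * delta)^2
enhancement = (1.0 + b_h * delta_c) ** 2   # ~ 1e2
```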
Our assumption of random velocities most likely overestimates the number density of DCBHs formed.
This is because on small scales there will be some velocity coherence between nearby halos, reducing their relative velocities.
To estimate the impact of this effect we recompute $p(v)$ from our simulation,
and for each halo subtract away the mass-weighted mean velocity of all other nearby halos in the mass range $M=(0.5-1)M_{\rm cool}$.
We include all halos within the typical separation length of these halos, $R_{\rm c}=n_{\rm h}^{-1/3}$.
Increasing the value of $R_{\rm c}$ by a factor of two does not significantly affect our results.
If this distance is taken to be significantly smaller there are not enough neighbors to compute the coherent velocity.
This PDF at $z=15$ is shown in Fig.~\ref{coh_pdf}.
At high $v$, it is more than an order of magnitude lower than $p(v)$ obtained without subtracting coherent velocities.
The typical peculiar velocity is reduced by $\sim 25~{\rm km~s^{-1}}$.
The relative changes at $z=20$ are similar.
We find that these two effects (the high-$v$ $p(v)$ and the typical $v$) lower the abundance of DCBHs by approximately
two orders of magnitude, which may roughly cancel when combined with the correction due to the density enhancement discussed above.
Despite the large uncertainties described above, the high-velocity collision of protogalaxies is an interesting pathway to form SMSs and DCBHs
without extremely strong LW radiation and could explain the abundance of high-$z$ bright QSOs.
In future work, we plan to perform detailed numerical studies on the gas dynamics of colliding galaxies and the event rate of appropriate collisions to determine if these events could truly be responsible for the first SMBHs.
\section*{Acknowledgements}
We thank Zolt\'an Haiman, Greg Bryan, Hidenobu Yajima, Eliot Quataert, P\'eter M\'esz\'aros and Renyue Cen for fruitful discussions.
This work is partially supported by the Grants-in-Aid by the Ministry of Education, Culture, and Science of Japan (KI),
and by NASA through Einstein Postdoctoral Fellowship grant number PF4-150123 awarded by the Chandra X-ray Center,
which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060~(KK). EV is supported by the Columbia Prize Postdoctoral Fellowship in the Natural Sciences.
Our N-body simulations were carried out at the Yeti High Performance
Computing Cluster at Columbia University.
\section{Introduction}
Let $T\in (0,\infty)$ be a fixed deterministic time horizon and $(\Omega, {\cal F}, P)$ a given probability space equipped with the completed filtration $({\cal F}_t)_{t\in [0,T]}$ of a $d$-dimensional Brownian motion $W$.
We denote by $L^a(X)$ the local time at level $a\in \mathbb{R}$ of the semimartingale $X$.
Given a signed Radon measure $\nu$ on (the Borel subsets of) $\mathbb{R}$ and a progressively measurable function $\sigma:[0,T]\times\Omega\times\mathbb{R}\to \mathbb{R}^d$, we are interested in studying pathwise uniqueness for solutions of the one-dimensional stochastic differential equation
\begin{equation}\label{eqmainlt}
X_t = x + \int_0^t\sigma_u(X_u)\,\mathrm{d} W_u + \int_{\mathbb{R}}L_t^a(X)\,\nu(\mathrm{d} a).
\end{equation}
Such equations first appeared in the work of \citet{LeGall83} and were subsequently developed, e.g., by \citet{Engl-Schm123} and \citet{Sto-Ypr}.
A strong interest in this type of equation, involving the local time of the unknown process, stems from its link to the so-called skew Brownian motion introduced and studied by \citet{Harr-She} and \citet{Blei-Engl}.
Pathwise uniqueness results for the SDE \eqref{eqmainlt} were obtained by \citet{Ouknine88} when $\sigma$ is of bounded variation.
In the case when $\nu$ is $\sigma$-finite and the diffusion coefficient is time-homogeneous, \citet{Blei-Engl} derived necessary and sufficient conditions for existence and uniqueness in law of a solution.
More recently, \citet{Ben-Bou-Ouk13} derived pathwise uniqueness results using the balayage formula.
Recall that the relevance of pathwise uniqueness of SDEs is stressed by the celebrated result of \citet{Yam-Wata}, which allows one to derive strong existence from pathwise uniqueness and weak existence.
If the measure $\nu$ is absolutely continuous w.r.t. the Lebesgue measure, a direct application of the occupation time formula shows that the SDE \eqref{eqmainlt} can be rewritten as
\begin{equation}
\label{eq:main_no.lt}
X_t = x + \int_0^t\sigma_u(X_u)\mathrm{d} W_u + \int_0^tf(X_u)\sigma_u^2(X_u)\mathrm{d} u,
\end{equation}
where the measurable function $f:\mathbb{R}\to \mathbb{R}$ is the density of $\nu$.
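For completeness, the rewriting above is a one-line application of the occupation time formula $\int_\mathbb{R} g(a)\,L^a_t(X)\,\mathrm{d} a = \int_0^t g(X_u)\,\mathrm{d}\langle X\rangle_u$:

```latex
\int_{\mathbb{R}} L_t^a(X)\,\nu(\mathrm{d} a)
  = \int_{\mathbb{R}} L_t^a(X)\, f(a)\,\mathrm{d} a
  = \int_0^t f(X_u)\,\mathrm{d}\langle X\rangle_u
  = \int_0^t f(X_u)\,\sigma_u^2(X_u)\,\mathrm{d} u.
```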
In the one-dimensional case, when the drift is bounded and Borel measurable, \citet{Zvonkin} derived existence and uniqueness of a strong solution. This result was generalised to the multidimensional case by \citet{Veret81}. Since then, there has been a strong research effort to derive \emph{existence, uniqueness} and \emph{regularity} properties for \emph{strong solutions} of SDEs with non-smooth coefficients; see for example \citet{Engl-Schm123,Eng-Sch85}, \citet{Kry-Roeck05}, \citet{Menoukeu-13} and the references therein.
A prevalent assumption in the literature to derive pathwise uniqueness (and strong existence) is a uniform ellipticity condition on the diffusion coefficient $\sigma$, that is, $\frac{1}{2}\mathrm{tr}(\sigma_t\sigma_t')(x)\ge c$ for some $c>0$ and all $(t,x)$.
The main objective of the present work is to study properties of solutions of the SDE \eqref{eqmainlt} without any a priori assumption of uniform ellipticity on the diffusion coefficient, which will be assumed to be merely measurable.
In this setting, the question of pathwise uniqueness of \eqref{eqmainlt} was studied by \citet{Engl-Schm123} and more recently, \citet{Cham-Jabin17} derived strong existence and pathwise uniqueness results for classical SDEs (i.e. without local time) when $\sigma$ is allowed to vanish.
Making ample use of the theory of local times -- more precisely, of the comparison theorem for local times of \citet{Ouknine90} and \citet{Ben-Bou-Ouk13} -- we improve existing results on pathwise uniqueness of \eqref{eqmainlt} with simpler arguments. In particular, we show that the so-called condition (LT) of \citet{Bar-Per84} guarantees pathwise uniqueness of SDEs with local time even in a more general setting than that of \eqref{eqmainlt}; see Theorem \ref{thm:lt_local_nonhom} and Proposition \ref{pro:lt}.
Assuming that the diffusion is deterministic and time-homogeneous i.e., $\sigma_t(\omega,x)=\sigma(x)$, we derive the well-known uniqueness result of \citet{Engl-Schm123} using the comparison theorem for local times and the occupation time formula, see Theorem \ref{thm:pathwise.unique}.
For illustration purposes, we also show how the comparison theorem can be used to derive uniqueness for reflected SDEs.
Using the result of \citet{Yam-Wata} along with a transformation that eliminates the drift, we derive, as applications of our uniqueness results, existence of strong solutions of \eqref{eqmainlt}.
We further study some properties of the solution, including continuity and PDE-representation.
In particular, we show the rather striking fact that even if the coefficients of the SDEs are not smooth, the solution is still a continuous function of time and of the initial condition.
This is in line with the results of \citet{Mohammed15}, \citet{Menoukeu-17} and \citet{Fed-Flan} obtained under uniform ellipticity and of \citet{Bah-Mer-Ouk98} in the case for classical SDEs.
The rest of this work is organized as follows:
In the next section, we study pathwise uniqueness of SDEs with local time of the unknown, first considering the time-inhomogeneous case and then the time-homogeneous case, and in the final section we apply our pathwise uniqueness results to derive properties of SDEs with local time as well as classical SDEs not involving the local time of the unknown.
\section{Pathwise uniqueness}
\label{sec:uniqueness}
\subsection{The time-inhomogeneous case}
In this section, we study SDEs of the form
\begin{equation}
\label{eq:SDE_local_nonhom}
X_t = x + \int_0^t\sigma_u(X_u) \mathrm{d} W_u + \int_0^t\int_{\mathbb{R}}\mathrm{d} L^a_u(X)\nu_u(\mathrm{d} a),
\end{equation}
where $\sigma:[0,T]\times \Omega\times \mathbb{R}\to \mathbb{R}^d$ is a progressively measurable function, $(\nu_t)_{t\in [0,T]}$ is a flow of Borel measures on $\mathbb{R}$, $W$ is a $d$-dimensional Brownian motion and $L^a(X)$ the local time at level $a\in \mathbb{R}$ of the process $X$.
The SDE \eqref{eq:SDE_local_nonhom} was studied in \cite{Weinryb} when $\sigma$ is constant and $\nu_t$ of the form $\alpha_t\delta_0(da)$ where $\delta_0$ is the Dirac mass at zero; see also Subsection \ref{subsec:exist}.
It is well known that uniqueness in law is weaker than pathwise uniqueness.
Our first result gives a condition under which the converse holds true for the SDE \eqref{eq:SDE_local_nonhom}.
The following condition was introduced in \cite{Bar-Per84} and further considered in \cite{Rut90}:
\begin{definition}[Condition (LT)]
The function $\sigma$ satisfies the condition (LT) if $L^0_t(X^1 - X^2)=0$ for all $t \in [0,T]$ and for all processes $X^1, X^2$ such that
\begin{equation}
\label{eq:lt}
X^i_t = X^i_0 + \int_0^t\sigma_u(X^i_u) \mathrm{d} W_u + V^i_t, \quad i=1,2,
\end{equation}
where $(V^i_t)_{t \in [0,T]}$ are continuous adapted processes with bounded variation.
\end{definition}
\begin{theorem}
\label{thm:lt_local_nonhom}
Suppose $\sigma$ satisfies (LT), and the SDE \eqref{eq:SDE_local_nonhom} satisfies uniqueness in law. Then it satisfies pathwise uniqueness.
\end{theorem}
\begin{proof}
Using the condition (LT), we first show that if $X^1$ and $X^2$ are solutions of \eqref{eq:SDE_local_nonhom}, then so are $X^1\wedge X^2$ and $X^1\vee X^2$.
In fact, since $\int_0^t\int_{\mathbb{R}}\mathrm{d} L^a_u(X)\nu_u(\mathrm{d} a)$ is continuous, adapted and of bounded variation, Tanaka's formula yields
\begin{align*}
X^1_t\vee X_t^2 &= X_t^2 + (X_t^1 - X_t^2)^+
= X_t^2 + \int_0^t1_{\{X^1_u > X^2_u\}}\mathrm{d} (X^1_u - X^2_u) + \frac{1}{2}L^0_t(X^1 - X^2)\\
&= x + \int_0^t1_{\{X^1_u > X^2_u\}}\mathrm{d} X_u^1 + \int_0^t1_{\{X^1_u\le X^2_u\}}\mathrm{d} X^2_u\\
&= x + \int_0^t\sigma_u(X^1_u\vee X_u^2 )\mathrm{d} W_u + \int_0^t\int_\mathbb{R}\nu_u(\mathrm{d} a)\left(1_{\{X^1_u>X^2_u\}}\mathrm{d} L^a_u(X^1) + 1_{\{X^1_u\le X^2_u\}}\mathrm{d} L^a_u(X^2)\right).
\end{align*}
By (a trivial adaptation of) the result of \cite{Ouknine90} on the local time of the maximum, one has
\begin{equation*}
L^a_t(X^1\vee X^2) = \int_0^t1_{\{X^1_u> X_u^2\}}\mathrm{d} L_u^a(X^1) + \int_0^t1_{\{X^1_u\le X^2_u\}}\mathrm{d} L_u^a(X^2).
\end{equation*}
Hence,
\begin{equation*}
X^1_t\vee X^2_t = x + \int_0^t\sigma_u(X^1_u\vee X_u^2)\mathrm{d} W_u + \int_0^t\int_\mathbb{R}\nu_u(\mathrm{d} a)\mathrm{d} L^a_u(X^1\vee X^2).
\end{equation*}
Using the identity $L^a(X^1\wedge X^2) = L^a(X^1) + L^a(X^2) - L^a(X^1\vee X^2)$ (see e.g. \cite{Ouknine90}), the argument above also shows that $X^1\wedge X^2$ is a solution.
Thus, by uniqueness in law, $X^1\wedge X^2$ and $X^1\vee X^2$ must have the same law.
Therefore, for every $t \in [0,T]$,
\begin{equation*}
E[|X^1_t - X^2_t|] = E[X^1_t\vee X^2_t] - E[X^1_t\wedge X^2_t] = 0.
\end{equation*}
That is, $X^1$ and $X^2$ are indistinguishable since they are continuous processes.
\end{proof}
The condition (LT) is standard in the study of time inhomogeneous SDEs, see e.g. \cite{Ben-Bou-Ouk13} and \cite{Bar-Per84}.
We present below an example of functions satisfying (LT).
\begin{example}
Let $d=1$ and let $\sigma:[0,T]\times \Omega\times \mathbb{R} \to \mathbb{R}$ be such that there is $\varepsilon>0$ with $\sigma \ge \varepsilon$, and there are two functions $\alpha^1, \alpha^2:[0,T]\times \Omega\times \mathbb{R} \to \mathbb{R}$, increasing in $x$ for all $t$ and of uniformly bounded variation in $t$ on every compact subset of $\mathbb{R}$, such that $1/\sigma = \alpha^1 - \alpha^2$.
Pathwise uniqueness for SDEs with diffusion coefficients satisfying these conditions was first studied in \cite{Nakao}.
In fact, set $Y^i_t:= F(t, X^i_t)$, with $F(t,x):= \int_0^x\frac{\mathrm{d} u}{\sigma_t(u)}$, where $X^i$ are processes satisfying \eqref{eq:lt}.
It follows by \cite[Theorem 1]{Ouknine_fonc_89} that for each $i=1,2$, there is a continuous bounded variation process $ V^i$ such that $Y^i_t = B_t + V^i_t$.
Thus, $Y^1_t - Y^2_t = V^1_t - V^2_t$.
Since the right-hand side of the latter equality is a continuous process of bounded variation, it holds that $L^0_t(Y^1 - Y^2) =0$.
Thus, by \cite{Ouknine88} one has $L^0_t(X^1 - X^2)=0$.
\end{example}
Next, we assume that the flow of measures $(\nu_t)_{t\in [0,T]}$ is constant, i.e. for all $t$, one has $\nu_t \equiv \nu$. Then the SDE \eqref{eq:SDE_local_nonhom} becomes
\begin{equation}
\label{eq:SDE.inhom}
X_t = x + \int_0^t\sigma_u(X_u)\mathrm{d} W_u + \int_{\mathbb{R}}L_t^a(X)\,\nu(\mathrm{d} a).
\end{equation}
In this case, the requirement on the uniqueness in law in Theorem \ref{thm:lt_local_nonhom} can be dropped.
The SDE \eqref{eq:SDE.inhom} has been considered in \cite{LeGall84} and subsequently in \cite{Engl-Schm123} and \cite{Blei-Engl}, under the conditions
\begin{itemize}
\item[(A1)] $|\nu|(\{a\})<1$ for all $a \in \mathbb{R}$,
\item[(A2)] $|\nu|(\mathbb{R})<\infty$.
\end{itemize}
We show in Proposition \ref{pro:lt} below that the above conditions can be weakened.
Consider the functions
\begin{equation*}
f_\nu(x):= \exp(-2\nu^c((-\infty,x]))\Pi_{y\le x}\left(\frac{1-\nu\{y\}}{1+\nu\{y\}} \right)\quad \text{and} \quad F_\nu(x):= \int_0^x f_\nu(z)\mathrm{d} z,
\end{equation*}
where $\nu^c$ denotes the continuous part of the measure $\nu$.
Recall that due to conditions (A1) and (A2), the function $f_\nu$ is well-defined, increasing, right-continuous and $0<\ubar{m}\le f_\nu\le \bar{m}$ for some $\ubar{m},\bar{m}\in \mathbb{R}$.
Furthermore, it can be checked that $F_\nu$ is invertible, and $F_\nu$ and $F_\nu^{-1}$ are Lipschitz continuous functions; see e.g. \cite{LeGall84} for details.
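To make $f_\nu$ and $F_\nu$ concrete, the following numerical sketch (a hypothetical illustration, not part of the paper's argument; the function names are ours) evaluates them in closed form for the single-atom measure $\nu = \alpha\delta_0$ with $0\le\alpha<1$, the skew Brownian motion case: the continuous part vanishes and the product over atoms $y\le x$ contributes the factor $(1-\alpha)/(1+\alpha)$ exactly when $x\ge 0$.

```python
def f_nu(x, alpha):
    """f_nu for the single-atom measure nu = alpha * delta_0: no
    continuous part, and the atom at 0 contributes iff x >= 0."""
    return 1.0 if x < 0 else (1.0 - alpha) / (1.0 + alpha)

def F_nu(x, alpha):
    """F_nu(x) = int_0^x f_nu(z) dz, piecewise linear in this case."""
    return x if x <= 0 else (1.0 - alpha) / (1.0 + alpha) * x

def F_nu_inv(y, alpha):
    """Inverse of F_nu; Lipschitz since f_nu is bounded away from 0."""
    return y if y <= 0 else (1.0 + alpha) / (1.0 - alpha) * y
```

One checks directly that $\ubar{m} = (1-\alpha)/(1+\alpha) \le f_\nu \le 1 = \bar{m}$, in line with the bounds quoted above.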
Denote by $N_\sigma$ the set of zeros of the function $\sigma$ defined as follows
$N_\sigma:= \{x \in \mathbb{R}: \sigma_t(x)=0\,\,\, \mathrm{d} t\text{-a.s.}\}$.
\begin{proposition}
\label{pro:lt}
Suppose $|\nu|(\{a\})<1$ for all $a \in N_\sigma$ and $|\nu|(N_\sigma^c)<\infty$. In addition, suppose $\sigma$ satisfies condition (LT). Then the SDE \eqref{eq:SDE.inhom} satisfies the pathwise uniqueness property.
\end{proposition}
The proof of Proposition \ref{pro:lt} uses the following lemma that gives conditions under which two continuous processes are indistinguishable:
\begin{lemma}
\label{lem:indist}
Let $\nu$ be a measure satisfying conditions \textup{(}A1\textup{)} and \textup{(}A2\textup{)} and
let $X^1$ and $X^2$ be two semimartingales of the form
\begin{equation*}
X^i_t = x + M_t^i + \int_\mathbb{R}L^a_t(X^i)\,\nu(\mathrm{d} a), \quad i=1,2,
\end{equation*}
where $M^i$ are continuous local martingales.
If $L^0(X^1 - X^2)=0$, then $X^1$ and $X^2$ are indistinguishable.
\end{lemma}
\begin{proof}
First recall that the function $F_\nu$ satisfies $\ubar{m}(x - y)^+\le (F_\nu(x) - F_\nu(y))^+\le \bar{m}(x - y)^+$ for all $x, y$; see e.g. \cite{Ben-Bou-Ouk13}.
Set $Y^i_t: = F_\nu(X^i_t)$, $i=1,2$, $t \in [0,T]$.
It follows from Tanaka's formula that
\begin{equation*}
Y^{i}_t = F_\nu(x) + \int_0^tf_\nu(X^i_u)Z^i_u\mathrm{d} W_u,
\end{equation*}
with $Z^i$ the predictable process such that $M^i_t = M^i_0 + \int_0^tZ^i_u\mathrm{d} W_u$.
Thus, $F_\nu(X^i)$ is a local martingale for each $i$.
Moreover, $\ubar{m}(X^1 - X^2)^+ \le (Y^1 - Y^2)^+\le \bar{m}(X^1 - X^2)^+$, so that since $L^0(X^1 - X^2)=0$, it follows from the comparison theorem for local times (see \cite{Ouknine88}) that $L^0_t(Y^1 - Y^2) =0$.
Thus, an application of Tanaka's formula again shows that
\begin{equation*}
|Y^1_t - Y^2_t| = \int_0^t\textrm{sign}(Y^1_u - Y^2_u)\mathrm{d} (Y^1_u - Y^2_u),
\end{equation*}
from which a simple localization argument shows that $E[|Y^1_t - Y^2_t|] =0$, i.e. $Y^1_t = Y^2_t$.
Since $F_\nu$ is invertible, this implies $X^1_t = X^2_t$ for every $t \in [0,T]$.
We therefore conclude that $X^1$ and $X^2$ are indistinguishable since they are continuous processes.
\end{proof}
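The cancellation behind the dynamics of $Y^i$ can be sketched as follows (formally, with the symmetric local time convention; the identity for the measure derivative of $f_\nu$ is the defining property of Le Gall's construction):

```latex
F_\nu(X^i_t) = F_\nu(x) + \int_0^t f_\nu(X^i_u)\,\mathrm{d} X^i_u
  + \frac{1}{2}\int_{\mathbb{R}} L^a_t(X^i)\, f_\nu'(\mathrm{d} a),
\qquad f_\nu'(\mathrm{d} a) = -2 f_\nu(a)\,\nu(\mathrm{d} a).
```

Since $\mathrm{d} L^a_u(X^i)$ is carried by $\{X^i_u = a\}$, the local-time part of $\int_0^t f_\nu(X^i_u)\,\mathrm{d} X^i_u$ equals $\int_\mathbb{R} f_\nu(a)\,L^a_t(X^i)\,\nu(\mathrm{d} a)$, which cancels the last term and leaves only the stochastic integral against $Z^i_u\,\mathrm{d} W_u$.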
We now turn to the proof of Proposition \ref{pro:lt}.
\begin{proof}[of Proposition \ref{pro:lt}]
First notice that
\begin{equation}
\label{eq:lemLT}
L^x_t(X)=0 \quad \text{for all $x \in N_\sigma$ and every solution $X$ of \eqref{eq:SDE.inhom}}.
\end{equation}
Indeed, let $x \in N_\sigma$.
Since $L^a_t(x)=0$ for all $(t,\omega)\in [0,T]\times \Omega$ and $a \in \mathbb{R}$, the (constant) process $x$ solves the SDE \eqref{eq:SDE.inhom} with initial condition $X_0 = x$.
Thus, for every solution $X$ of \eqref{eq:SDE.inhom}, it follows from condition (LT) that $L^x_t(X) = L^0_t(X-x)=0$.
Let $\tilde{\nu}$ denote the restriction of the measure $\nu$ on $N_\sigma^c$, i.e. $\tilde \nu (A):= \nu(A \cap N_\sigma^c)$ for all Borel subset $A$ of $\mathbb{R}$.
The measure $\tilde \nu$ satisfies the conditions (A1) and (A2) and by \eqref{eq:lemLT}, every solution $X$ of the SDE \eqref{eq:SDE.inhom} satisfies
\begin{equation*}
X_t = x + \int_0^t\sigma_u(X_u)\mathrm{d} W_u + \int_{\mathbb{R}}L_t^a(X)\,\tilde{\nu}(\mathrm{d} a).
\end{equation*}
Thus, the result follows by Lemma \ref{lem:indist}.
\end{proof}
\subsection{The time homogeneous case with deterministic coefficient}
In this subsection, we study the SDE
\begin{equation}
\label{eq:SDE.homogene}
X_t = x + \int_0^t\sigma(X_u)\,dW_u + \int_{\mathbb{R}}L^a(X)\nu(da),
\end{equation}
with $\sigma:\mathbb{R}\to \mathbb{R}^d$ a measurable function.
We show that in this case, the condition (LT) can essentially be replaced by integrability conditions on $\sigma$ to obtain pathwise uniqueness.
Consider the following conditions: There exist two functions $f:\mathbb{R}\to \mathbb{R}_+$ and $h:\mathbb{R}\to \mathbb{R}_+$ such that
\begin{itemize}
\item[(A3)] $\int_{0^+}\frac{\mathrm{d} a}{h^2(a)}=+\infty$ and $f/\sigma \in L^2_{\text{loc}}(\mathbb{R})$,
\item[(A4)] $|\sigma(x) - \sigma(y)| \le (f(x) + f(y))h(|x - y|)\quad \text{and} \quad N_\sigma \subseteq N_f:= \{x:f(x) = 0\}$.
\end{itemize}
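As a sanity check, the classical Yamada--Watanabe setting is recovered by the choice

```latex
h(a) = a^{1/2}, \qquad
\int_{0^+}\frac{\mathrm{d} a}{h^2(a)} = \int_{0^+}\frac{\mathrm{d} a}{a} = +\infty,
```

so that, with $f \equiv 1$, conditions (A3)--(A4) cover every $1/2$-H\"older continuous $\sigma$ with $1/\sigma \in L^2_{\rm loc}(\mathbb{R})$ and $N_\sigma = \emptyset$.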
\begin{theorem}
\label{thm:pathwise.unique}
Assume that the conditions \textup{(}A1\textup{)}-\textup{(}A4\textup{)} are satisfied.
Then, the SDE \eqref{eq:SDE.homogene} has the pathwise uniqueness property.
\end{theorem}
The proof of Theorem \ref{thm:pathwise.unique} uses the following lemma:
\begin{lemma}
\label{lem:int.zero}
Let $X^1$ and $X^2$ be two solutions of \eqref{eq:SDE.homogene}.
Suppose $\int_0^\cdot\frac{\mathrm{d} \langle X^1_u - X^2_u\rangle}{h^2(X^1_u - X^2_u)}<\infty$. Then $L^0(X^1-X^2)=0$.
\end{lemma}
\begin{proof}
Assume by contradiction that there is $t \in [0,T]$, $\delta>0$ and a set $A \in {\cal F}$ with $P(A)>0$ such that $L_t^0(X^1 - X^2)>\delta$ on $A$.
Since the function $a\mapsto L^a_t(X^1 - X^2)$ is right-continuous, there is $\varepsilon >0$ such that $L^a_t(X^1 - X^2)(\omega)>\delta/2$ for all $a \in [0,\varepsilon)$ and $\omega \in A$.
Thus, by (A3) one has
\begin{equation*}
+\infty =\frac{\delta}{2}\int_0^\varepsilon\frac{1}{h^2(a)}\mathrm{d} a \le\int_\mathbb{R}\frac{L^a_t(X^1 - X^2)}{h^2(a)}\mathrm{d} a = \int_0^t\frac{\mathrm{d} \langle X^1_u - X^2_u\rangle}{h^2(X^1_u - X^2_u)}<+\infty \quad \text{on } A,
\end{equation*}
where the second equality follows from the occupation time formula, see e.g. \cite{Revuz1999}.
Thus $P(A)=0$, which is a contradiction.
Therefore, $L^0(X^1-X^2)=0$.
\end{proof}
\begin{proof}[of Theorem \ref{thm:pathwise.unique}]
In light of Lemmas \ref{lem:indist} and \ref{lem:int.zero}, it remains to show that $\int_0^\cdot\frac{\mathrm{d} \langle X^1_u - X^2_u\rangle}{h^2(X^1_u - X^2_u)}<\infty$.
This follows again as an application of the occupation time formula.
In fact, by (A4) one has
\begin{align*}
\int_0^t\frac{\mathrm{d} \langle X^1_u - X^2_u\rangle}{h^2(X^1_u - X^2_u)} &= \int_0^t\frac{(\sigma(X^1_u) - \sigma(X^2_u))^2}{h^2(X^1_u-X^2_u)}\mathrm{d} u \le \int_0^t\left(f(X^1_u) + f(X^2_u)\right)^2\mathrm{d} u\\
& \le 2\int_0^t\frac{f^2(X^1_u) }{\sigma^2(X^1_u)}\sigma^2(X^1_u)1_{\{X^1_u \notin N_f\}}\mathrm{d} u + 2\int_0^t\frac{f^2(X^2_u)}{\sigma^2(X^2_u)}\sigma^2(X^2_u)1_{\{X^2_u \notin N_f\}}\mathrm{d} u\\
&= 2\int_{\mathbb{R}}\frac{f^2(a)}{\sigma^2(a)}\,L^a_t(X^1)1_{N_f^c}\mathrm{d} a + 2\int_{\mathbb{R}}\frac{f^2(a)}{\sigma^2(a)}\,L^a_t(X^2)1_{N_f^c}\mathrm{d} a,
\end{align*}
where the last equality follows from the occupation time formula.
Let $(t,\omega) \in [0,T]\times \Omega$.
Since the function $a\mapsto L^a_t(X^i)(\omega)$, $i=1,2$ has support on the compact $K^i_t(\omega):= [\inf_{0\le u\le t}X_u^i(\omega), \sup_{0\le u\le t}X^i_u(\omega)]$, due to (A3) it holds
\begin{equation*}
\int_\mathbb{R}\frac{f^2(a)}{\sigma^2(a)}L^a_t(X^i)(\omega)1_{N_f^c}\mathrm{d} a \le \sup_{a \in K^i_t(\omega)}L^a_t(X^i)\int_{K^i_t(\omega)\cap N^c_f}\frac{f^2(a)}{\sigma^2(a)}\mathrm{d} a<\infty.
\end{equation*}
This concludes the proof.
\end{proof}
\begin{remark}
Let us observe that the result of Theorem \ref{thm:pathwise.unique} is known when $\sigma^2>0$; see e.g. \cite{Engl-Schm123} and \cite{Cham-Jabin17}.
We allow $\sigma$ to vanish, and our proof is based on different, more direct arguments.
\end{remark}
A particularly interesting example where the conditions (A3) and (A4) are fulfilled arises when $\sigma$ belongs to a suitable Sobolev space.
In fact, let us consider the maximal operator ${\cal M}$ of a function $f:\mathbb{R}\to\mathbb{R}^d$ defined as
\begin{equation*}
{\cal M}f(x):=\sup_{r>0}\frac{1}{|B_r|}\int_{B_r}|f|(x+\mathrm{d} z),\quad x\in \mathbb{R}^d,
\end{equation*}
where $B_r$ is the ball of radius $r$ around the origin, and the derivative operator
\begin{equation*}
\partial_x^{1/2}f:= \mathscr{F}^{-1}|z|^{1/2}\mathscr{F}f,
\end{equation*}
with $\mathscr{F}$ the Fourier transform in $\mathbb{R}^d$.
The function ${\cal M}f$ is positive and Borel measurable; see for example \cite{Stein70}.
Hence its integral with respect to a Borel measure is well-defined, with value in $\mathbb{R}_+\cup\{+\infty\}$.
Moreover, for any locally integrable function $f$, the derivative $\partial_x^{1/2}f$ is well-defined.
\begin{corollary}
\label{cor:pathwise.uniqueness}
Assume that conditions (A1)-(A2) are satisfied. Assume further that $\partial^{1/2}_x\sigma$ is a locally finite Radon measure and
\[
{\cal M}\partial^{1/2}_x\sigma/\sigma \in L^2_{\text{loc}}(\mathbb{R}),\quad \sigma \in L^1_{\text{loc}}(\mathbb{R})\quad \text{and} \quad N_\sigma \subseteq \{x: {\cal M}\partial^{1/2}_x\sigma(x)=0\}.
\]
Then the SDE \eqref{eq:SDE.homogene} satisfies the pathwise uniqueness property.
\end{corollary}
\begin{proof}
It follows from \cite[Lemma 3.5]{Cham-Jabin17} that $\sigma$ satisfies
\begin{equation*}
|\sigma(x) - \sigma(y)|\le \left({\cal M}\partial^{1/2}_x\sigma(x) + {\cal M}\partial^{1/2}_x\sigma(y) \right)|x-y|^{1/2}.
\end{equation*}
Thus, the functions $h(x):=|x|^{1/2}$ and $f(x):= {\cal M}\partial_x^{1/2}\sigma(x)$ satisfy the conditions (A3) and (A4).
The result follows from Theorem \ref{thm:pathwise.unique}.
\end{proof}
\begin{remark}
The preceding corollary extends recent results of \cite{Cham-Jabin17}, in particular \cite[Corollary 2.22]{Cham-Jabin17}.
\end{remark}
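The maximal operator ${\cal M}$ used above can be illustrated numerically. The sketch below (a hypothetical illustration with names of our choosing; the paper applies ${\cal M}$ to Radon measures, while here we only treat functions sampled on a uniform grid) computes a discrete analogue: at each grid point, the supremum over window radii of the average of $|f|$ on that window.

```python
import numpy as np

def maximal_operator(f_vals):
    """Discrete sketch of the centred Hardy-Littlewood maximal operator
    on a uniform grid: for each point, the supremum over window radii of
    the average of |f| on that window (windows clipped at the boundary)."""
    absf = np.abs(np.asarray(f_vals, dtype=float))
    n = len(absf)
    out = np.empty(n)
    for i in range(n):
        best = absf[i]  # radius-zero window, so M f >= |f| pointwise
        for r in range(1, n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            best = max(best, absf[lo:hi].mean())
        out[i] = best
    return out
```

By construction ${\cal M}f \ge |f|$ pointwise, and for a constant function the averages all coincide with the constant, two basic properties one can use to test the sketch.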
\subsection{Reflected SDEs}
In this subsection, we consider the reflected SDE
\begin{equation}
\label{eq:reflected.sde}
\begin{cases}
X_t = x + \int_0^t\sigma_u(X_u)\mathrm{d} W_u + \frac{1}{2}L^0_t(X),\\
X\ge 0.\end{cases}
\end{equation}
As in the previous sections, we are interested in the pathwise uniqueness property of the above equation.
\begin{proposition}
\label{prop:reflected}
Suppose that uniqueness in law holds and that, for any two solutions $X$ and $Y$ of \eqref{eq:reflected.sde}, the measure $\mathrm{d} L^0_t(X - Y)$ is supported by the set $\{X = Y =0\}$. Then the SDE \eqref{eq:reflected.sde} satisfies the pathwise uniqueness property.
\end{proposition}
\begin{proof}
It follows from \cite[Lemma 1]{Ouknine93} that for every integer $n\ge 1$, and for any continuous semimartingales $X$ and $Y$, it holds
\begin{equation}
\label{eq:2n+1}
L^0_t(X^{2n+1} - Y^{2n+1}) = (2n+1)\int_0^t(X^{2n}_s + Y^{2n}_s)\mathrm{d} L_s^0(X - Y).
\end{equation}
In particular, for $n =1$,
\begin{equation*}
L^0_t(X^{3} - Y^{3}) = 3\int_0^t(X^{2}_s + Y^{2}_s)\mathrm{d} L_s^0(X - Y).
\end{equation*}
Thus, if $\text{supp}(dL^0_s(X - Y))\subseteq \{X = Y = 0\}$, it holds $L^0_t(X^{3} - Y^{3}) =0$.
Let $X$ and $Y$ be two solutions of \eqref{eq:reflected.sde}.
Then, as shown in \cite{Ouknine93}, $X^{3}$ and $Y^{3}$ satisfy the same SDE, so that $X^{3}\vee Y^{3}$ and $X^{3}\wedge Y^{3}$ are also solutions of \eqref{eq:reflected.sde} (cf. the proof of Theorem \ref{thm:lt_local_nonhom}).
But $X^{3}, Y^{3}$ and $X^{3}\vee Y^{3}$ have the same law.
Thus, $X = Y.$
\end{proof}
The subsequent example provides a diffusion for which the above result is valid.
Notice that $\sigma$ is not necessarily Lipschitz continuous.
\begin{example}
\label{exa:support}
Assume that there is an integer $n$ such that
\begin{equation*}
|x^{2n}\sigma_t(x) - y^{2n}\sigma_t(y)|^2 \le C|x^{2n+1} - y^{2n+1}|
\end{equation*}
for some $C\ge0$.
Thus, by the arguments in the proof of the main result of \cite{Ouknine93}, it holds that $L^0_t((X^1)^{2n+1} - (X^2)^{2n+1})=0$ for any two solutions $X^1$ and $X^2$ of \eqref{eq:reflected.sde}.
It then follows from \eqref{eq:2n+1} that the measure $\mathrm{d} L^0_s(X^1 - X^2)$ is supported by the set $\{X^1 = X^2 = 0\}$.
In fact,
if $A:= \{ (t,\omega): (X^1_t)^2 + (X^2_t)^2>0\} $, then we have $A=\cup A_n$ with $A_n = \{ (X^1)^2 + (X^2)^2>\frac 1n \}$.
This shows that
\begin{equation*}
0\le \int_{A_n}dL^0_s(X^1-X^2)\le n \int ((X^1_s)^2 + (X^2_s)^2)dL^0_s(X^1-X^2) = 0.
\end{equation*}
By monotone convergence, this implies $\int_AdL^0_s(X^1-X^2) = 0$. Thus, the support of the measure $dL^0_s(X^1-X^2)$ is in $A^c = \{ X^1 = X^2 = 0 \}$.
\end{example}
A similar method allows us to study the case of SDEs with jumps.
In fact, consider the SDE
\begin{equation}
\label{eq:reflexed-jump}
X_t = x + \int_0^t\sigma_u(X_u)\mathrm{d} W_u + \int_{[0,t)}b(X_{u-})\mathrm{d} u + \int_0^t\int_{\mathbb{R}\setminus\{0\}}\gamma(X_{u-},z)\tilde\eta(\mathrm{d} u, \mathrm{d} z) + \frac{1}{2}L^0_t(X),\quad X\ge0
\end{equation}
where $b:\mathbb{R}\to \mathbb{R}$ is bounded and measurable, $\gamma:\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ is bounded and measurable and $\tilde\eta$ a given signed measure on $[0,T]\times \mathbb{R}$.
\begin{proposition}
\label{prop:jump}
Suppose the following:
\begin{itemize}
\item[(i)] Uniqueness in law holds,
\item[(ii)] the function $x\mapsto \gamma(x,z)+x$ is increasing, $\eta(\mathrm{d} z)$-a.e. on $\{|z|<\varepsilon\}$ for some $\varepsilon>0$,
\item[(iii)] there is an odd number $n= 2k+1$, $k \in \mathbb{N}$, and a constant $C\ge0$ such that
\begin{equation*}
|x^n\sigma_t(x) - y^n\sigma_t(y)|^2+ \int_{\mathbb{R}\setminus \{0\}}(x^n \gamma(x,z) - y^n\gamma(y,z))^2\,\eta(\mathrm{d} z) \le C |x^{n+1}- y^{n+1}|.
\end{equation*}
\end{itemize}
Then the SDE \eqref{eq:reflexed-jump} has the pathwise uniqueness property.
\end{proposition}
\begin{proof}
First notice that for every two (strong) solutions $X^1$ and $X^2$, the measure $dL^0_t(X^1 - X^2)$ is supported by the set $\{X^1_- = X^2_- =0\}$.
In fact, this follows as in the case $\gamma=0$ studied in Example \ref{exa:support}, using the occupation time formula.
Indeed, since
\begin{equation*}
[X^1 - X^2]^c_t = \int_0^t(\sigma_u(X^1_u)-\sigma_u(X^2_u))^2\mathrm{d} u + \int_0^t\int_{\mathbb{R}\setminus\{0\}}(\gamma(X^1_{u-},z) - \gamma(X^2_{u-},z))^2\eta(\mathrm{d} z)\mathrm{d} u,
\end{equation*}
it follows that for every measurable function $f:\mathbb{R}\to \mathbb{R}$, we have
\begin{equation*}
\int_\mathbb{R}f(a)L^a_t(X^1 - X^2)\mathrm{d} a = \int_0^tf(X^1_u-X^2_u)\Big((\sigma_u(X^1_u)-\sigma_u(X^2_u))^2 + \int_{\mathbb{R}\setminus\{0\}}(\gamma(X^1_{u-},z) - \gamma(X^2_{u-},z))^2\eta(\mathrm{d} z)\Big) \mathrm{d} u.
\end{equation*}
Now, let $X^1$ and $X^2$ be two solutions of \eqref{eq:reflexed-jump}.
Using Tanaka's formula as in the proof of Theorem \ref{thm:lt_local_nonhom}, it holds
\begin{align*}
X^1\vee X^2
&= x + \int_0^t\sigma_u(X^1_u\vee X^2_u)\mathrm{d} W_u + \int_0^tb(X_u^1\vee X^2_u)\mathrm{d} u + \int_0^t\int_{\mathbb{R}\setminus\{0\}}\gamma((X^1\vee X^2)_{u-},z)\tilde\eta(\mathrm{d} u,\mathrm{d} z)\\
& + \frac{1}{2}\Big(\int_0^t1_{\{X^1_->X^2_-\}}\mathrm{d} L^0_u(X^1) + \int_0^t1_{\{X^1_-\le X^2_-\}}\mathrm{d} L^0_u(X^2) + \int_0^t1_{\{X^1_-=X^2_- = 0\}}\mathrm{d} L^0_u(X^1-X^2)\Big).
\end{align*}
This last expression is exactly $\frac{1}{2}L^0_t(X^1\vee X^2)$ as a consequence of \cite{Ouknine90} and the fact that the measure $\mathrm{d} L^0_t(X^1 - X^2)$ is supported by the set $\{X^1_- = X^2_- =0\}$.
The result now follows as in Proposition \ref{prop:reflected} by the uniqueness in law.
\end{proof}
\begin{remark}
Suppose that there exists a monotone function $f$ such that
\begin{equation*}
(\sigma_t(x) - \sigma_t(y))^2 + \int_{\mathbb{R}\setminus \{0\}}(\gamma(x,z) - \gamma(y,z))^2\eta(\mathrm{d} z) \le |f(x) - f(y)|,\quad \text{and } \sigma >\varepsilon.
\end{equation*}
Then $L^0_t(X^1-X^2) = 0$ whenever $X^1$ and $X^2$ are solutions.
Moreover, if $\tilde\eta(\mathrm{d} t,\mathrm{d} z) = N(\mathrm{d} t,\mathrm{d} z) - \eta(\mathrm{d} z)\mathrm{d} t$, where $N(t,A):= N([0,t]\times A)$ is a Poisson process with intensity $t\eta(A)$ for every $A$ whose closure $\bar{A}$ does not contain $0$, then in Equation \eqref{eq:reflexed-jump} the jump integral need not run over all of $\mathbb{R}\setminus \{0\}$ but only over a punctured neighborhood of $0$, i.e. $\{0< |z|<\varepsilon\}$.
\end{remark}
\section{Applications}
\label{sec:applications}
In the remainder of the paper, we apply the pathwise uniqueness results of the previous section to the theory of SDEs with and without local time of the unknown.
Most of our proofs rely on the well-known Zvonkin-type transform already introduced in Lemma \ref{lem:indist}.
Thus, the function
$$\tilde\sigma(x):= (f_\nu\cdot \sigma)\circ F^{-1}_\nu(x)$$
will play a central role in our arguments.
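The transform also suggests a simple numerical scheme, sketched below as a hypothetical illustration (not taken from the paper; function names and parameter choices are ours). Assuming $\nu = \alpha\delta_0$ and $\sigma \equiv 1$, so that $X$ is a skew Brownian motion, one simulates the driftless process $Y = F_\nu(X)$, whose coefficient is $\tilde\sigma = (f_\nu\cdot\sigma)\circ F_\nu^{-1}$, by Euler--Maruyama and maps back through $F_\nu^{-1}$.

```python
import numpy as np

def simulate_via_transform(x0=0.0, alpha=0.5, T=1.0, n=1000, seed=0):
    """Euler-Maruyama sketch for nu = alpha*delta_0, sigma = 1: simulate
    Y = F_nu(X) (no local-time term), then recover X = F_nu^{-1}(Y)."""
    beta = (1.0 - alpha) / (1.0 + alpha)         # value of f_nu on [0, oo)
    F = lambda x: x if x <= 0 else beta * x      # F_nu
    F_inv = lambda y: y if y <= 0 else y / beta  # F_nu^{-1}
    tilde_sigma = lambda y: 1.0 if y <= 0 else beta
    rng = np.random.default_rng(seed)
    dt = T / n
    y = F(x0)
    path = [F_inv(y)]
    for _ in range(n):
        y += tilde_sigma(y) * np.sqrt(dt) * rng.standard_normal()
        path.append(F_inv(y))
    return np.array(path)
```

Since $F_\nu$ and $F_\nu^{-1}$ are Lipschitz, the mapped-back path inherits the convergence behaviour of the Euler scheme for $Y$; we do not claim any rate here.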
\subsection{Existence}
\label{subsec:exist}
In this section, we establish existence and uniqueness of a strong solution for the SDE \eqref{eq:SDE_local_nonhom} when the measure $\nu$ is of a specific type. More precisely, assume that the measure $\nu$ has the following form:
\begin{equation*}
\nu(\mathrm{d} a) := \sum_{i=1}^n\alpha^i\delta_{a^i},
\end{equation*}
with $\alpha^i \in \mathbb{R}$ and $a^1< a^2<\ldots< a^n$.
Let us now set $\beta^i_t:=\alpha^i\alpha_t(a^i)$, so that the SDE \eqref{eq:SDE_local_nonhom} becomes
\begin{equation}
\label{eq:SDE.etore}
X_t = x + \int_0^t\sigma_u(X_u)\mathrm{d} W_u + \sum_{i = 1}^n\int_0^t\beta^i_u\,\mathrm{d} L^{a^i}_u(X).
\end{equation}
The above SDE \eqref{eq:SDE.etore} with an additional drift term was recently studied in \cite{Etore2017}.
For the case $\sigma = 1$, $n=1$ and $a^1 = 0$, we refer to \cite{Weinryb}. The next result generalises \cite[Theorem 3.5]{Etore2017}.
\begin{proposition}
\label{thm:etore}
Assume that there are $m, M \in \mathbb{R}_+$ such that $0<m\le \sigma\le M$, that $\sigma$ satisfies the condition (LT), and that for every $i = 1,\dots, n$, $\beta^i:[0,T]\to [\ubar{k}, \bar{k}]$ is of class $C^1$, with $-1< \ubar{k}<\bar{k}<1 $ and $|(\beta_t^i)'|\le M$ for all $t \in [0,T]$.
Then, the time inhomogeneous SDE \eqref{eq:SDE.etore} has a unique strong solution.
\end{proposition}
\begin{proof}
It follows from \cite{Etore2017} that we have uniqueness in law.
Since $\sigma$ additionally satisfies (LT), it follows from Theorem \ref{thm:lt_local_nonhom} that the SDE satisfies the pathwise uniqueness property.
The existence of a unique strong solution therefore follows by \citet{Yam-Wata}.
\end{proof}
\begin{remark}
As pointed out above, a similar result was obtained in \cite[Theorem 3.5]{Etore2017}.
Note, however, that the latter work requires smoothness of the diffusion coefficient; we relax this assumption here in that we only require $\sigma$ to satisfy (LT). In addition, our proof is based on the comparison theorem for local times.
\end{remark}
Let us now turn to the time-homogeneous case of the SDE \eqref{eq:SDE.homogene}.
Put
$$I_\sigma:= \left\{a\in \mathbb{R}: \int_{O} \sigma^{-2}(y)\mathrm{d} y<\infty \text{ for every open neighborhood $O$ of $a$}\right\} .$$
\begin{proposition}
\label{pro:exist_lt}
Under the assumptions of Theorem \ref{thm:pathwise.unique}, the SDE \eqref{eq:SDE.homogene} admits a unique (non-constant) strong solution if and only if $N_\sigma^c\subseteq I_\sigma$.
\end{proposition}
\begin{proof}
As in the proof of Lemma \ref{lem:indist}, the transformation $Y:= F_\nu(X)$ satisfies the dynamics
\begin{equation}
\label{eq:sde.transf}
Y_t = F_\nu(x) + \int_0^t\tilde{\sigma}(Y_u)\mathrm{d} W_u.
\end{equation}
Since $0<\ubar{m}\le f_\nu\le\bar{m}<\infty$, it can be checked that $F_\nu(N_\sigma) = N_{\tilde{\sigma}}$ and $F_{\nu}(I_\sigma) = I_{\tilde\sigma}$.
Thus, since $F_\nu$ is invertible, it holds $F_\nu(N_\sigma)^c = F_\nu(N_\sigma^c) \subseteq F_\nu(I_\sigma)$.
This implies that the function $\tilde{\sigma}^{-2}1_{N^c_{\tilde{\sigma}}}$ is locally integrable.
Hence, by \cite[Theorem 2.2]{Eng-Sch85} the SDE \eqref{eq:sde.transf} admits a weak solution.
The result follows now by Theorem \ref{thm:pathwise.unique} and the Yamada-Watanabe theorem.
Conversely, if the SDE \eqref{eq:SDE.homogene} admits a non-constant solution, then the SDE \eqref{eq:sde.transf} admits a solution as well.
Thus, it follows by \cite[Theorem 4.7]{Engl-Schm123} that $N^c_{\tilde\sigma}\subseteq I_{\tilde\sigma}$.
\end{proof}
\subsection{Path and space regularity}
In this section we study continuity of the solution of the SDE \eqref{eq:SDE.homogene} in time as well as w.r.t. the initial condition.
Let $\alpha>0$ and denote by $C^\alpha([0,T],\mathbb{R})$ the space of $\alpha$-H\"older continuous functions on $[0,T]$ with values in $\mathbb{R}$.
Its norm is defined by
\begin{equation*}
||f||_\alpha:= \sup_{0\le t\le T}|f(t)| + \sup_{0\le s\le t\le T}\frac{|f(s) - f(t)|}{|s-t|^\alpha}.
\end{equation*}
Let us denote by $X^x$ the solution of the SDE \eqref{eq:SDE.homogene} with initial condition $x$.
\begin{proposition}
\label{pro:cont}
Assume that \textup{(}A1\textup{)}-\textup{(}A4\textup{)} hold and that $\sigma$ is continuous and of linear growth, i.e. there is a constant $A\ge 0$ such that $|\sigma(x)|\le A(1 +|x|)$ for all $x \in \mathbb{R}$.
Then, for all $\alpha \in [0,1/2)$ and $\varepsilon>0$, one has
\begin{equation}
\lim_{x \to x_0}P\left(||X^x - X^{x_0}||_\alpha >\varepsilon\right) = 0 \quad\text{for all } x_0 \in \mathbb{R}.
\end{equation}
\end{proposition}
\begin{proof}
For each $x \in \mathbb{R}$ the transformation $Y^x:= F_\nu(X^x)$ satisfies
\begin{equation}
\label{eq:Y.SDE}
Y^x_t = F_\nu(x) + \int_0^t\tilde{\sigma}(Y_u^x)\mathrm{d} W_u.
\end{equation}
Since $f_\nu$ is bounded, $\sigma$ of linear growth and $F^{-1}_\nu$ Lipschitz continuous, one has
\begin{equation*}
|\tilde\sigma(y)|=|f_\nu(F^{-1}_\nu(y))\sigma(F^{-1}_\nu(y)) |\le \bar{m}A(1 + |F^{-1}_\nu(y)|)\le \bar{m}C(1 + |F^{-1}_\nu(0)| + \bar{m}|y|).
\end{equation*}
That is, the function $\tilde{\sigma}$ is continuous and of linear growth.
Thus, since by Theorem \ref{thm:pathwise.unique} pathwise uniqueness holds for the SDE \eqref{eq:Y.SDE}, it follows from \cite[Proposition 3.8]{Bah-Mer-Ouk98} that
\begin{equation*}
\lim_{x \to x_0}P\left(||Y^x - Y^{x_0}||_\alpha>\varepsilon\right) = 0.
\end{equation*}
By Lipschitz continuity of $F^{-1}_\nu$, it holds
\begin{equation*}
P\left(||X^x - X^{x_0}||_\alpha >\varepsilon\right) = P\left(||F^{-1}_\nu(Y^x) - F^{-1}_\nu(Y^{x_0})||_\alpha >\varepsilon\right)\le P\left(||Y^x - Y^{x_0}||_\alpha >\varepsilon\right).
\end{equation*}
This shows the desired result.
\end{proof}
The solution of the SDE \eqref{eq:SDE.homogene} is also H\"older continuous in time. The proof of the result is omitted.
\begin{proposition}
Assume that conditions \textup{(}A1\textup{)}-\textup{(}A4\textup{)} are satisfied.
If $\sigma$ is locally integrable, then there is a constant $C>0$ such that
\begin{equation*}
E\left[|X_t - X_s|^2\right] \le C|t-s|^{1/2}\quad \text{for all}\quad s,t \in [0,T].
\end{equation*}
\end{proposition}
\subsection{Feynman-Kac type formula}
Let $(s,x) \in [0,T]\times \mathbb{R}$ and $X^{s,x}$ be the solution (if it exists) of the SDE
\begin{equation*}
X^{s,x}_t = x +\int_s^t\sigma(X^{s,x}_u)\mathrm{d} W_u + \int_\mathbb{R}L^a_t(X^{s,x})\nu(\mathrm{d} a).
\end{equation*}
\begin{proposition}
Assume that the conditions \textup{(}A1\textup{)}-\textup{(}A4\textup{)} are satisfied.
Let $f:\mathbb{R}\to \mathbb{R}$ and $g:[0,T]\times \mathbb{R}\to \mathbb{R}$ be two bounded continuous functions.
Then the function
\begin{equation*}
v(s,x) := E\Big[f(X^{s,x}_T) + \int_s^Tg_u(X^{s,x}_u)\mathrm{d} u \Big]
\end{equation*}
satisfies $v(s,x)=u(s, F_\nu(x))$, where $u$ is a viscosity solution of the partial differential equation
\begin{equation}
\label{eq:pde}
\begin{cases}
&\partial_tu + \frac{1}{2}((f_\nu\cdot \sigma)\circ F^{-1}_\nu)^2\partial^2_{xx}u + g\circ F^{-1}_\nu =0\\
&u(T,x) = f\circ F^{-1}_\nu(x).
\end{cases}
\end{equation}
\end{proposition}
\begin{proof}
Putting $y:= F_\nu(x)$ and $Y^{s,y}:= F_\nu(X^{s,x})$, we have
\begin{equation}
Y_t^{s,y} = y + \int_s^t\tilde{\sigma}(Y_u^{s,y})\mathrm{d} W_u
\end{equation}
and $v(s,x) = E[f(F^{-1}_\nu(Y^{s,y}_T)) + \int_s^Tg_u(F^{-1}_\nu(Y_u^{s,y}))\mathrm{d} u] = :u(s,y)$.
Thus, $v(s,x) = u(s, F_\nu(x))$.
It remains to show that $u$ is a viscosity solution of \eqref{eq:pde}.
Let $(s,y)\in [0,T]\times \mathbb{R}$ and $\varphi \in C^{1,2}$ be a test function with bounded derivatives such that $\varphi - u$ attains a global maximum at $(s,y)$ with $\varphi(s,y) = u(s,y)$.
If $s = T$, we clearly have $\varphi(T,y) = u(T,y) = f\circ F^{-1}_\nu(y)$.
Let now $s<T$. For all $\varepsilon>0$ small enough, It\^o's formula yields
\begin{align*}
\varphi(s + \varepsilon, Y^{s,y}_{s + \varepsilon}) - \varphi(s, y) = \int_s^{s + \varepsilon}\Big\{\partial_t\varphi(t,Y^{s,y}_t) + \frac{1}{2}\tilde{\sigma}^2(Y^{s,y}_t)\partial_{xx}^2\varphi(t, Y^{s,y}_t)\Big\}\mathrm{d} t + \int_s^{s + \varepsilon}\partial_x\varphi(t, Y^{s,y}_t)\tilde{\sigma}(Y^{s,y}_t)\mathrm{d} W_t.
\end{align*}
Thus, taking expectation on both sides, one has
\begin{multline*}
E\Big[u(s+\varepsilon, Y^{s,y}_{s + \varepsilon})+\int_s^{s+\varepsilon}g_t( F^{-1}_\nu(Y^{s,y}_t))\mathrm{d} t\Big] - u(s, y)\\ \ge E\Big[\int_s^{s + \varepsilon}\Big\{\partial_t\varphi(t,Y^{s,y}_t) + \frac{1}{2}\tilde{\sigma}^2(Y^{s,y}_t)\partial_{xx}^2\varphi(t, Y^{s,y}_t) + g_t( F^{-1}_\nu(Y^{s,y}_t))\Big\}\mathrm{d} t \Big].
\end{multline*}
Since $Y^{s,y}$ is a Markov process, the left hand side above is $0$.
Thus, multiplying both sides by $1/\varepsilon$ and taking the limit as $\varepsilon$ goes to $0$ gives
\begin{equation*}
\partial_t\varphi(s,y) + \frac{1}{2}\tilde{\sigma}^2(y)\partial^2_{xx}\varphi(s,y) + g_s( F^{-1}_\nu(y)) \le 0.
\end{equation*}
That is, $u$ is a viscosity supersolution of \eqref{eq:pde}.
A similar argument, with $\varphi - u$ attaining a global minimum, shows that $u$ is also a viscosity subsolution.
\end{proof}
\subsection{Applications to classical SDEs}
In this final section, we consider the (classical) SDE
\begin{equation}
\label{eq:class.SDE}
X_t = x + \int_0^t\sigma(X_u)\mathrm{d} W_u + \int_0^tb(X_u)\mathrm{d} u
\end{equation}
where $b,\sigma:\mathbb{R}\to \mathbb{R}$ are two measurable functions.
It is well known that the SDE \eqref{eq:class.SDE} is a particular case of the SDE \eqref{eq:SDE.homogene} involving the local time, when the measure $\nu$ is absolutely continuous w.r.t. the Lebesgue measure.
This observation allows us to deduce, from Section \ref{sec:uniqueness}, pathwise uniqueness, strong existence and regularity results for the classical SDE \eqref{eq:class.SDE} with measurable coefficients.
Let $N_b:=\{x: b(x) = 0\}$ and consider the following condition:
\begin{itemize}
\item[(A2')] $N_\sigma \subseteq N_b$
and $\frac{b}{\sigma^2}1_{N^c_b} \in L^1(\mathbb{R})$.
\end{itemize}
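\noindent To illustrate condition \textup{(}A2'\textup{)}, consider for instance $\sigma \equiv 1$ and $b = 1_{[0,1]}$. Then $N_\sigma = \emptyset \subseteq N_b$ and $\frac{b}{\sigma^2}1_{N^c_b} = 1_{[0,1]} \in L^1(\mathbb{R})$, so \textup{(}A2'\textup{)} holds, and since $\mathrm{d}\langle X\rangle_t = \mathrm{d} t$, the occupation time formula rewrites the drift as a local time integral:
\begin{equation*}
\int_0^t 1_{[0,1]}(X_u)\mathrm{d} u = \int_0^1 L^a_t(X)\mathrm{d} a,
\end{equation*}
that is, the corresponding measure is $\nu(\mathrm{d} a) = 1_{[0,1]}(a)\mathrm{d} a$.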
\begin{corollary}[Pathwise uniqueness and continuity]
\label{cor:classical.unique}
In either of the following cases the SDE \eqref{eq:class.SDE} satisfies the pathwise uniqueness property:
\begin{itemize}
\item[(i)]The condition \textup{(}A2'\textup{)} is satisfied and the function $\sigma$ satisfies (LT),
\item[(ii)] The conditions \textup{(}A2'\textup{)}, \textup{(}A3\textup{)} and \textup{(}A4\textup{)} are satisfied.
\end{itemize}
Moreover, if $\sigma$ is continuous and there are $A,B\ge 0$ such that $|\sigma(x)|\le A(B +|x|)$ for all $x \in \mathbb{R}$, then under either of the above conditions, for all $\alpha \in [0,1/2)$ and $\varepsilon>0$ the solution $X^x$ of \eqref{eq:class.SDE} with initial condition $x$ satisfies
\begin{equation}
\lim_{x \to x_0}P\left(||X^x - X^{x_0}||_\alpha >\varepsilon\right) = 0 \quad\text{for all } x_0 \in \mathbb{R}.
\end{equation}
\end{corollary}
\begin{proof}
Consider the measure $\nu$ given by
\begin{equation*}
\nu(da):=\frac{b}{\sigma^2}1_{N^c_b}(a)\mathrm{d} a.
\end{equation*}
Since for every process $X$ satisfying \eqref{eq:class.SDE} it holds $d\langle X\rangle_t =\sigma^2(X_t)\,dt$, by the occupation time formula, one has
\begin{equation*}
\int_0^tb(X_u)\mathrm{d} u = \int_0^t\frac{b}{\sigma^2}\sigma^21_{N^c_b}(X_u)\mathrm{d} u = \int_\mathbb{R}\frac{b}{\sigma^2}1_{N^c_b}(a)L^a_t(X)\mathrm{d} a = \int_\mathbb{R}L^a_t(X)\,\nu(\mathrm{d} a).
\end{equation*}
Thus, the SDE \eqref{eq:class.SDE} can be rewritten as
\begin{equation}
\label{eq:class.SDE_local}
X_t = x + \int_0^t\sigma(X_u)\mathrm{d} W_u + \int_\mathbb{R}L^a_t(X)\,\nu(\mathrm{d} a).
\end{equation}
Therefore, the proof of pathwise uniqueness is a direct application of Proposition \ref{pro:lt} and Theorem \ref{thm:pathwise.unique} after identifying the SDE \eqref{eq:class.SDE} and the SDE \eqref{eq:class.SDE_local}.
Similarly, the proof of continuity follows from Proposition \ref{pro:cont}.
\end{proof}
\begin{corollary}[Strong existence]
Assume that the conditions \textup{(}A2'\textup{)}, \textup{(}A3\textup{)} and \textup{(}A4\textup{)} are satisfied.
Assume in addition that $N^c_\sigma \subseteq I_\sigma$. Then the SDE \eqref{eq:class.SDE} admits a unique strong solution.
\end{corollary}
\begin{proof}
The result follows from Proposition \ref{pro:exist_lt} after noticing that the SDE \eqref{eq:class.SDE} corresponds to \eqref{eq:SDE.homogene} with the measure $\nu(da):= \frac{b}{\sigma^2}1_{N_b^c}(a)\mathrm{d} a$.
\end{proof}
|
2210.12689
|
\section{Introduction}
Facial expression is one of the most visible indications of a person's feelings and emotions. According to psychological studies, only 7\% and 38\% of the information in daily conversation is communicated through words and sounds respectively, while up to 55\% is conveyed through facial expression \cite{Mehrabian1967InferenceOA}. It plays an important role in coordinating interpersonal relationships.
Ekman and Friesen \cite{Ekman1971ConstantsAC} identified six basic emotions in 1971 based on a cross-cultural study \cite{4}, which indicated that people perceive each basic emotion in the same fashion regardless of culture.
As a branch of the field of analyzing sentiment \cite{GAN2020104827}, facial expression recognition offers broad application prospects in a variety of domains, including the interaction between humans and computers \cite{8606936}, healthcare \cite{Ilyas2018FacialER}, and behavior monitoring \cite{Rabhi2018ARE}. Therefore, many researchers have devoted themselves to facial expression recognition.
In this paper, an effective hybrid data augmentation method is proposed. This approach is applied to two public datasets, and four benchmark models achieve remarkable improvements.
\section{RELATED WORKS}
\subsection{VggNet}
The VGG model \cite{Simonyan15} was proposed by the Visual Geometry Group at Oxford University. The primary goal of this architecture is to demonstrate how final performance can be improved by increasing network depth. In VGG, a 7×7 convolution kernel is replaced by three 3×3 convolution kernels, and a 5×5 convolution kernel is replaced by two 3×3 convolution kernels. This change deepens the network and improves its representational power while keeping the same receptive field.
\subsection{ResNet}
The ResNet \cite{666826} model won first place in the ImageNet competition \cite{networks} held in 2015. This work addressed the degradation problem, in which deepening a model can decrease its accuracy. Thanks to the proposed residual block, it is easy to learn the identity mapping even when many blocks are stacked: redundant blocks can simply learn the identity mapping through their shortcut connections. Furthermore, residual connections improve the effectiveness of SGD optimization, allowing deeper networks to be optimized. Moreover, no additional parameters or computational complexity are introduced: only a very simple addition operation is performed, whose cost is negligible compared to the convolution operation. The ResNet architecture is shown in Figure 1.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{structure.jpg}
\caption{The structure of ResNet}
\end{figure}
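As an illustration, the residual connection described above can be sketched as a minimal PyTorch module (a sketch for intuition only, not the exact ResNet18/ResNet50 blocks used in the experiments):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Identity shortcut: adding x makes the identity mapping easy to learn
        return self.relu(out + x)

x = torch.randn(2, 16, 48, 48)  # batch of 48x48 feature maps, as in the experiments below
y = BasicBlock(16)(x)
```

The addition `out + x` is the only extra operation, which is why the shortcut adds no parameters.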
\subsection{Xception}
The Xception \cite{8099678} model is an upgraded version of the InceptionV3 \cite{7780677} model. Chollet offers a new deep convolutional neural network structure named Xception that replaces the Inception module with a depthwise separable convolution. The residual network and the depthwise separable convolution are the fundamental components of this network. Xception is typically composed of 36 convolutional layers grouped into 14 blocks, with the 12 blocks in the middle containing linear residual connections. The model inherits the properties of the depthwise separable convolution \cite{7780459}: it performs a spatial convolution on every channel of the input individually, and then conducts a point-by-point (1×1) convolution on the output.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{xception.jpg}
\caption{The network structure of Xception}
\end{figure}
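The depthwise separable convolution just described can be sketched in PyTorch as follows (an illustrative sketch, not the exact Xception implementation):

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: per-channel spatial conv, then 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch applies one spatial filter to each input channel individually
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=padding,
                                   groups=in_ch, bias=False)
        # 1x1 convolution mixes the channels (the point-by-point step)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 48, 48)
y = SeparableConv2d(32, 64)(x)
```

Splitting the spatial and channel mixing in this way uses far fewer parameters than a full 3×3 convolution with the same input and output channels.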
\section{APPROACH}
\subsection{HDA: hybrid data augmentation}
\subsubsection{Horizontal Flip.}
Geometric transformation is one of the most basic methods to augment data. Because facial expression images in most cases do not undergo a large degree of distortion or rotation, the horizontal flip (HF) is used to keep the augmented images consistent with the originals. Every image in the original dataset is horizontally flipped to create a mirror image. The formula of this method is shown in Eq. (1).\\
$$
\left[
\begin{array}{l}
x^{'}\\
y^{'}
\end{array}
\right]
=
\left[
\begin{array}{l}
width-1\\
0
\end{array}
\right]
+
\left[
\begin{array}{ll}
-1 & 0\\
0 & 1
\end{array}
\right]\cdot
\left[
\begin{array}{l}
x\\
y
\end{array}
\right]
$$
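A minimal NumPy sketch of this mapping (illustrative, not the paper's implementation): each pixel $(x, y)$ is sent to $(width - 1 - x,\ y)$, exactly as in Eq. (1).

```python
import numpy as np

def horizontal_flip(img):
    """Mirror an image: pixel column x is mapped to column width - 1 - x (Eq. (1))."""
    h, w = img.shape[:2]
    out = np.empty_like(img)
    for x in range(w):
        out[:, w - 1 - x] = img[:, x]
    return out

img = np.arange(12).reshape(3, 4)
flipped = horizontal_flip(img)
```

Applying the flip twice restores the original image, so the augmented dataset simply doubles in size.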
\subsubsection{Gaussian Noise.}
Gaussian noise (GN) is a kind of statistical noise whose probability density function is that of the normal distribution. In the proposed approach, Gaussian noise is added to the training images to simulate noise that may occur in the real world, so that the model becomes more robust than when trained on the original images alone. The formula of Gaussian noise can be written as Eq. (2).
$$
GN(x,y)=\frac{1}{2\pi\sigma_{1}\sigma_{2}\sqrt{1-\rho^{2}}}e^{(-\frac{1}{2(1-\rho^{2})})(\frac{(x-\mu_{1})^{2}}{\sigma_{1}^{2}}-\frac{2\rho(x-\mu_{1})(y-\mu_{2})}{\sigma_{1}\sigma_{2}}+\frac{(y-\mu_{2})^{2}}{\sigma_{2}^{2}})}
$$
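In practice, the augmentation amounts to adding zero-mean Gaussian noise and clipping back to the valid pixel range; a sketch follows (the value of `sigma` is an assumption, as the paper does not state the noise parameters):

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise to a uint8 image and clip to [0, 255]."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=0.0, scale=sigma, size=img.shape)
    noisy = img.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((48, 48), 128, dtype=np.uint8)  # flat gray test image
noisy = add_gaussian_noise(img, sigma=10.0, rng=np.random.default_rng(0))
```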
\section{EXPERIMENTS AND RESULTS}
\subsection{Datasets}
Following related work on facial emotion recognition, these experiments are conducted on the two benchmark public face emotion datasets: Ck+ dataset\cite{Lucey2010TheEC}, and Fer2013 dataset \cite{GOODFELLOW201559}.
Ck+: The most widely utilized laboratory-controlled dataset for facial expression recognition is the Extended Cohn–Kanade (Ck+) dataset \cite{Lucey2010TheEC} (some samples are shown in Figure 3). Sequences that change from neutral to peak expression are included in the Ck+ dataset. The most common data selection approach for evaluation is to extract the last 1 to 3 frames, which show the peak expression, together with the first frame of every sequence. Then, subjects are divided into n groups for person-independent n-fold cross-validation experiments, where n is typically 5, 8, or 10.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{ck+.jpg}
\caption{Some samples of the Ck+ dataset}
\end{figure}
Fer2013: Fer2013 \cite{GOODFELLOW201559} is a large-scale dataset acquired automatically by the Google image search API (some samples are shown in Figure 4). 35887 images are contained in the Fer2013 dataset, and each image is labeled as one of the seven basic emotions. All of the images in this dataset are grayscale images. Furthermore, this dataset contains 547 disgusted images, 5121 fear images, 4953 angry images, 8989 happy images, 4002 surprised images, 6077 sad images, and 6198 neutral images.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fer.jpg}
\caption{Some samples of the Fer2013 dataset}
\end{figure}
Some information about the datasets is shown in Table 1.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{Number}& \textbf{Number of Emotions} &\textbf{Gender} \\
\hline
JAFFE & 213 images & 7 & Female \\
\hline
CK+ & 593 videos & 7 &Female \& Male \\
\hline
Fer2013 & 35887 images & 7 &Female \& Male \\
\hline
\end{tabular}
\label{tab2}
\end{center}
\caption{Datasets information}
\end{table}
\subsection{Experimental settings}
The experiments are implemented via PyTorch \cite{NEURIPS2019_bdbca288}, and an NVIDIA RTX 2080 Ti with 4 CPU cores and 13 gigabytes of RAM is used. The Adam optimizer \cite{Kingma2015AdamAM} is used to train the networks with a learning rate of 2e-4 and betas of 0.9 and 0.999. The best model is selected as the one achieving the best accuracy over several runs.
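These settings correspond to an optimizer configuration like the following sketch (the placeholder linear model is illustrative, standing in for the CNNs above):

```python
import torch

# Placeholder model: a linear classifier over flattened 48x48 images, 7 emotion classes
model = torch.nn.Linear(48 * 48, 7)

# Adam with the hyperparameters stated in the paper
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
```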
\subsection{Experimental evaluation metric}
Face emotion recognition can be viewed as a multi-classification problem. Accuracy (Acc) is used as the evaluation metric in this paper, and the calculation formula is as follows:
$$
Acc=\frac{TP+TN}{TP+TN+FP+FN}
$$
In this formula, TP stands for a positive sample predicted by the model as a positive sample, TN stands for a negative sample predicted by the model as a negative sample, FP stands for a negative sample predicted by the model as a positive sample, and FN stands for a positive sample predicted by the model as a negative sample.
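For multi-class recognition, this formula reduces to the fraction of correctly classified samples; a minimal sketch:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Acc = (TP + TN) / (TP + TN + FP + FN): fraction of correct predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
acc = accuracy(y_true, y_pred)  # 5 of 6 correct
```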
\subsection{Performance on Ck+ dataset}
The Ck+ dataset consists of 8 emotions with a total of 981 trainable images, all of which are 640 × 490 pixels in size. For the experiment, 593 trainable images from 7 expressions are selected (shown in Figure 5). However, the background around the volunteers occupies a larger region of each picture than the face itself. Therefore, the images are processed to 48×48 pixels, so that the model input size is uniform.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{different_expressions.jpg}
\caption{Images of different expressions in the Ck+ dataset}
\end{figure}
The results of the Ck+ dataset and the HDACK+ dataset are shown in Table 2, where both the Ck+ dataset and the HDACK+ dataset are divided in the ratio of 8:1:1 for training, validating, and testing respectively. The batch size used is 32, and 20 epochs are used for training.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Model} & \textbf{Ck+}& \textbf{HDACK+} \\
\hline
Vgg19 & 74.19\% & 97.80\% \\
\hline
Resnet18 & 90.32\% & 100\% \\
\hline
Resnet50 & 95.70\% & 99.73\% \\
\hline
Xception & 83.87\% & 99.73\% \\
\hline
\end{tabular}
\label{tab2}
\end{center}
\caption{Accuracy Comparisons of Different Models on Ck+ dataset}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{acc.jpg}
\caption{Accuracy comparison of different models on Ck+ dataset and HDACK+ dataset. (Best viewed in color)}
\end{figure}
Table 2 and Figure 6 show the accuracies on the Ck+ dataset and on the HDACK+ dataset. The dataset with data augmentation yields higher performance for every model, and the ResNet18 model achieves 100\% testing accuracy on the HDACK+ dataset. Significant improvement in accuracy can be seen for all models, especially for Vgg19, whose accuracy improves by 23.61\%. The ResNet50 network improves the least, but its accuracy is still 4.03\% higher than on the original data.
Furthermore, the figures show that the data augmentation improves feature learning for every model. In the left plot of each column, the Ck+ dataset is used: the training accuracies show an upward tendency, but the validation accuracies fluctuate and increase only slowly; for ResNet18 in particular, the curve fluctuates around 70\%. In the right plot of each column, the HDACK+ dataset is used: both the training and validation accuracy curves increase steadily, and the validation accuracies peak at around 98\%.
\subsection{Performance on Fer2013 dataset}
After rejecting wrongly labeled frames, each image was registered and scaled to 48×48 pixels. There are 28,709, 3,589 and 3,589 images respectively for training, validation, and testing with seven expression labels in the Fer2013 dataset (shown in Figure 7).
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fer2.jpg}
\caption{Images of different expressions in the Fer2013 dataset}
\end{figure}
The results of the Fer2013 dataset and the HDAFer2013 dataset are presented in Table 3, where both the Fer2013 dataset and the HDAFer2013 dataset are divided in the ratio of 8:1:1 for training, validating, and testing respectively. The batch size used is 32, and 20 epochs are used for training.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Model} & \textbf{Fer2013}& \textbf{HDAFer2013} \\
\hline
Vgg19 & 62.13\% & 84.87\% \\
\hline
Resnet18 & 65.67\% & 88.32\% \\
\hline
Resnet50 & 62.55\% & 88.17\% \\
\hline
Xception & 60.35\% & 82.68\% \\
\hline
\end{tabular}
\label{tab2}
\end{center}
\caption{Accuracy Comparisons of Different Models on Fer2013 dataset}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{acc2.jpg}
\caption{Accuracy comparison of different models on Fer2013 dataset and HDAFer2013 dataset. (Best viewed in color)}
\end{figure}
Table 3 and Figure 8 show the accuracies on the Fer2013 dataset and on the HDAFer2013 dataset. The dataset with data augmentation yields higher performance for every model. The accuracy comparison shows that ResNet50 improves by 25.62\%. The Xception network improves the least, but its accuracy is still 22.33\% higher than on the original data.
Furthermore, the figures show that the data augmentation improves feature learning for every model. In the left plot of each column, the Fer2013 dataset is used: the training accuracies show an upward tendency, but the validation accuracies fluctuate around 63\%. In the right plot of each column, the HDAFer2013 dataset is used: both the training and validation accuracy curves increase steadily, and the validation accuracies peak above 82\%.
\section{CONCLUSION}
In this study, a hybrid data augmentation method, which yields high performance with several models, is presented. Because convolutional neural networks require many samples for training to obtain accurate and robust results, the hybrid data augmentation method is used to enlarge the number of samples. After applying the technique, the numbers of images in both the Ck+ dataset and the Fer2013 dataset are increased, and four benchmark models achieve higher performance than before. This approach is simple and robust, which makes it applicable to real-world settings.
\bibliographystyle{ACM-Reference-Format}
|
2210.12665
|
\section{Introduction}
\noindent Polyominoes are plane figures obtained by joining unitary squares along their edges. They raise many combinatorial problems; for instance, tiling a certain region or the whole plane with polyominoes, and related problems, are of interest to mathematicians and computer scientists. Even though problems like the enumeration of pentominoes have their origins in antiquity, polyominoes were formally defined by Golomb, first in 1953 and later, in 1996, in his monograph~\cite{golomb}.\\
\noindent The study of polyominoes reveals many connections to different subjects. For instance, in the theory of formal languages there is a nice relation between polyominoes and Dyck and Motzkin words \cite{delest}, while in statistical physics polyominoes (and their higher-dimensional analogues, known in the literature as lattice animals) appear as models of branched polymers and of percolation clusters \cite{animals}.\\
\noindent A classic topic in commutative algebra is the study of determinantal ideals. These are the ideals generated by the $t$-minors of a matrix, and special attention has been given to the case of the minors of a generic matrix, whose entries are indeterminates; see for instance \cite{Bruns_Herzog} and \cite{bernd}. More generally, ideals of $t$-minors of 2-sided ladders were studied, see \cite{conca1}, \cite{conca2}, \cite{conca3}, \cite{gorla}. When considering the case of 2-minors, these classes of ideals are special cases of the ideal $I_{\mathcal{P}}$ of inner 2-minors of a polyomino $\mathcal{P}$ in the polynomial ring over a field $K$ in the variables $x_v$, where $v$ is a vertex of $\mathcal{P}$. This type of ideal, called a \textit{polyomino ideal}, was introduced in 2012 by Qureshi~\cite{Qureshi}. Since then, the study of the main algebraic properties of the polyomino ideal and of its quotient ring $K[\mathcal{P}]=S/I_{\mathcal{P}}$ in terms of the shape of $\mathcal{P}$ has become an exciting area of research. For instance, several mathematicians have studied the primality of $I_\mathcal{P}$, see \cite{Cisto_Navarra_closed_path}, \cite{Cisto_Navarra_weakly}, \cite{Cisto_Navarra_CM_closed_path}, \cite{def balanced}, \cite{Not simple with localization}, \cite{Trento}, \cite{Trento2}, \cite{Shikama}. Moreover, in \cite{Simple equivalent balanced} and \cite{Simple are prime}, the authors showed that $K[\mathcal{P}]$ is a normal Cohen-Macaulay domain if $\mathcal{P}$ is a simple polyomino, i.e. a polyomino without holes; a precise definition will be given in Section~\ref{Section: Introduction}. See also the references \cite{Herzog rioluzioni lineari}, \cite{L-convessi}, \cite{Ene-qureshi}, \cite{Kummini rook polynomial}, \cite{Trento3}.\\
\noindent Not many properties are known for non-simple polyominoes. However, the reader may consult \cite{Shikama} and \cite{Shikam rettangolo meno }, and, for the special class of closed path polyominoes, there are several interesting results that can be found in \cite{Cisto_Navarra_closed_path}, \cite{Cisto_Navarra_CM_closed_path} and \cite{Cisto_Navarra_Hilbert_series}.
In the paper \cite{Def. Konig type}, the authors introduced graded ideals of K\"onig type with respect to a monomial order $<$, i.e., graded ideals $I$ for which there exist $\operatorname{height}(I)$ homogeneous polynomials, forming part of a minimal system of generators of $I$, whose initial monomials with respect to $<$ form a regular sequence. The authors presented interesting consequences that may occur when working with a graded ideal having this property. Moreover, in the paper \cite{Hibi - Herzog Konig type polyomino}, Herzog and Hibi showed that if $\mathcal{P}$ is a simple thin polyomino, then its polyomino ideal is of K\"onig type.\\
\noindent We are interested in understanding the K\"onig type property for non-simple polyominoes, following the path initiated by Herzog and Hibi. We will focus on a specific class of non-simple thin polyominoes, namely closed path polyominoes. \\
\noindent Not all the polyominoes have their ideals of K\"onig type and there is no known classification of the polyominoes which have this property. In particular, a class of polyominoes for which this property does not hold is given by the parallelogram polyominoes. Indeed, this follows by \cite[Proposition 2.3]{Parallelogram Hilbert series}, where the authors showed that parallelogram polyominoes are simple planar distributive lattices, and by using the classification of distributive lattices of K\"onig type provided in \cite[Theorem 4.1]{Hibi - Herzog Konig type polyomino}.\\
\noindent The paper is organized as follows. In Section~\ref{Section: Introduction}, we present a detailed introduction to polyominoes and polyomino ideals, and in Lemma~\ref{Lemma: Closed path number of vertices and cells} we prove that, if $\mathcal{P}$ is a closed path polyomino, then its number of vertices is twice the number of its cells, a fact that will be useful in the next sections.\\
\noindent In order to study closed path polyominoes of K\"onig type, a combinatorial formula to compute the height of $I_{\mathcal{P}}$ is needed. Section~\ref{Krull} is devoted to this scope. In Theorem~\ref{Thm: Dimension closed path}, we give a combinatorial formula for the Krull dimension of $K[\mathcal{P}]$, and we prove it by using the theory of simplicial complexes. Essential here is the fact that, by \cite[Section 6]{Cisto_Navarra_closed_path}, $\mathcal{P}$ contains some specific configurations that will be analyzed in our proof. As a consequence, in Corollary~\ref{Coro: height of P}, we prove that the height of $I_{\mathcal{P}}$ is the number of cells of the closed path polyomino. We conjecture that this formula holds for any non-simple polyomino.\\
\noindent The last section, Section~\ref{Konig}, is devoted to the proof of the K\"onig type property of $I_{\mathcal{P}}$ for any closed path polyomino, see Theorem~\ref{konigfinal}. In Definition~\ref{Procedure: to define Y}, we define a suitable order on the vertices of the closed path polyomino $\mathcal{P}$ with respect to which the desired property holds. There are two cases to be examined: either $\mathcal{P}$ contains a configuration of four cells (treated in Proposition~\ref{Lemma: A closed path with a tetromino is Konig type}) or $\mathcal{P}$ has an $L$-configuration in every change of direction (treated in Proposition~\ref{Lemma: A closed path with a L-conf is Konig type}). Moreover, we present concrete examples to illustrate our procedures.
%
\section{Polyominoes and polyomino ideals}\label{Section: Introduction}
\noindent Let $(i,j),(k,l)\in \numberset{Z}^2$. We say that $(i,j)\leq(k,l)$ if $i\leq k$ and $j\leq l$. Consider $a=(i,j)$ and $b=(k,l)$ in $\numberset{Z}^2$ with $a\leq b$. The set $[a,b]=\{(m,n)\in \numberset{Z}^2: i\leq m\leq k,\ j\leq n\leq l \}$ is called an \textit{interval} of $\numberset{Z}^2$.
Moreover, if $i< k$ and $j<l$, then $[a,b]$ is a \textit{proper} interval. In this case, we say $a$ and $b$ are the \textit{diagonal corners} of $[a,b]$, and $c=(i,l)$ and $d=(k,j)$ are the \textit{anti-diagonal corners} of $[a,b]$. If $j=l$ (or $i=k$), then $a$ and $b$ are in \textit{horizontal} (or \textit{vertical}) \textit{position}. We denote by $]a,b[$ the set $\{(m,n)\in \numberset{Z}^2: i< m< k,\ j< n< l\}$. A proper interval $C=[a,b]$ with $b=a+(1,1)$ is called a \textit{cell} of ${\mathbb Z}^2$; moreover, the elements $a$, $b$, $c$ and $d$ are called respectively the \textit{lower left}, \textit{upper right}, \textit{upper left} and \textit{lower right} \textit{corners} of $C$. The set of vertices of $C$ is $V(C)=\{a,b,c,d\}$ and the set of edges of $C$ is $E(C)=\{\{a,c\},\{c,b\},\{b,d\},\{a,d\}\}$. Let $\mathcal{S}$ be a non-empty collection of cells in $\numberset{Z}^2$. Then $V(\mathcal{S})=\bigcup_{C\in \mathcal{S}}V(C)$ and $E(\mathcal{S})=\bigcup_{C\in \mathcal{S}}E(C)$, while the rank of $\mathcal{S}$ is the number of cells belonging to $\mathcal{S}$. If $C$ and $D$ are two distinct cells of $\mathcal{S}$, then a \textit{walk} from $C$ to $D$ in $\mathcal{S}$ is a sequence $\mathcal{C}:C=C_1,\dots,C_m=D$ of cells of ${\mathbb Z}^2$ such that $C_i \cap C_{i+1}$ is an edge of $C_i$ and $C_{i+1}$ for $i=1,\dots,m-1$. Moreover, if $C_i \neq C_j$ for all $i\neq j$, then $\mathcal{C}$ is called a \textit{path} from $C$ to $D$. Moreover, if we denote by $(a_i,b_i)$ the lower left corner of $C_i$ for all $i=1,\dots,m$, then $\mathcal{C}$ has a \textit{change of direction} at $C_k$ for some $2\leq k \leq m-1$ if $a_{k-1} \neq a_{k+1}$ and $b_{k-1} \neq b_{k+1}$. In addition, a path can change direction in one of the following ways:
\begin{enumerate}
\item North, if $(a_{i+1}-a_i,b_{i+1}-b_i)=(0,1)$ for some $i=1,\dots,m-1$;
\item South, if $(a_{i+1}-a_i,b_{i+1}-b_i)=(0,-1)$ for some $i=1,\dots,m-1$;
\item East, if $(a_{i+1}-a_i,b_{i+1}-b_i)=(1,0)$ for some $i=1,\dots,m-1$;
\item West, if $(a_{i+1}-a_i,b_{i+1}-b_i)=(-1,0)$ for some $i=1,\dots,m-1$.
\end{enumerate}
\noindent We say that $C$ and $D$ are \textit{connected} in $\mathcal{S}$ if there exists a path of cells in $\mathcal{S}$ from $C$ to $D$. A \textit{polyomino} $\mathcal{P}$ is a non-empty, finite collection of cells in $\numberset{Z}^2$ where any two cells of $\mathcal{P}$ are connected in $\mathcal{P}$. For instance, see Figure \ref{Figure: Polyomino introduction}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{Example_polyomino_introduction.png}
\caption{A polyomino.}
\label{Figure: Polyomino introduction}
\end{figure}
\noindent Let $\mathcal{P}$ be a polyomino. A \textit{sub-polyomino} of $\mathcal{P}$ is a polyomino whose cells belong to $\mathcal{P}$. We say that $\mathcal{P}$ is \textit{simple} if for any two cells $C$ and $D$ not in $\mathcal{P}$ there exists a path of cells not in $\mathcal{P}$ from $C$ to $D$. A finite collection of cells $\mathcal{H}$ not in $\mathcal{P}$ is a \textit{hole} of $\mathcal{P}$ if any two cells of $\mathcal{H}$ are connected in $\mathcal{H}$ and $\mathcal{H}$ is maximal with respect to set inclusion. For example, the polyomino in Figure \ref{Figure: Polyomino introduction} is not simple. Obviously, each hole of $\mathcal{P}$ is a simple polyomino, and $\mathcal{P}$ is simple if and only if it has no hole. A polyomino is said to be \textit{thin} if it does not contain the square tetromino. The \textit{rank} of a polyomino, $\rank(\mathcal{P})$, is given by the number of its cells.
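Although the development here is purely combinatorial, the basic objects above are straightforward to encode; the following Python sketch (not part of the paper) represents a collection of cells by their lower left corners and computes $V(\mathcal{S})$ and the rank:

```python
def cell_vertices(a):
    """Vertices of the cell with lower left corner a = (i, j)."""
    i, j = a
    return {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)}

def vertices(cells):
    """V(S): the union of the vertex sets of all cells of S."""
    return set().union(*(cell_vertices(c) for c in cells))

# An L-shaped polyomino (an L-tromino) given by 3 lower left corners
P = [(0, 0), (1, 0), (0, 1)]
rank = len(P)       # rank(P) = number of cells
V = vertices(P)     # 8 distinct vertices for this tromino
```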
Let $A$ and $B$ be two cells of $\numberset{Z}^2$ whose lower left corners are $a=(i,j)$ and $b=(k,l)$, respectively, with $a\leq b$. The \textit{cell interval} $[A,B]$ is the set of the cells of $\numberset{Z}^2$ with lower left corner $(r,s)$ such that $i\leq r\leq k$ and $j\leq s\leq l$. If $(i,j)$ and $(k,l)$ are in horizontal (or vertical) position, we say that the cells $A$ and $B$ are in \textit{horizontal} (or \textit{vertical}) \textit{position}.\\
Let $\mathcal{P}$ be a polyomino. Consider two cells $A$ and $B$ of $\mathcal{P}$ in vertical or horizontal position.
The cell interval $[A,B]$, containing $n>1$ cells, is called a
\textit{block of $\mathcal{P}$ of rank $n$} if all the cells of $[A,B]$ belong to $\mathcal{P}$. The cells $A$ and $B$ are called the \textit{extremal} cells of $[A,B]$. Moreover, a block $\mathcal{B}$ of $\mathcal{P}$ is \textit{maximal} if there does not exist any block of $\mathcal{P}$ which properly contains $\mathcal{B}$. It is clear that an interval of ${\mathbb Z}^2$ identifies a cell interval of ${\mathbb Z}^2$ and vice versa, hence we can associate to an interval $I$ of ${\mathbb Z}^2$ the corresponding cell interval, denoted by $\mathcal{P}_{I}$. A proper interval $[a,b]$ is called an \textit{inner interval} of $\mathcal{P}$ if all the cells of $\mathcal{P}_{[a,b]}$ belong to $\mathcal{P}$. We denote by $\mathcal{I}(\mathcal{P})$ the set of all inner intervals of $\mathcal{P}$. An interval $[a,b]$ with $a=(i,j)$, $b=(k,j)$ and $i<k$ is called a \textit{horizontal edge interval} of $\mathcal{P}$ if the sets $\{(\ell,j),(\ell+1,j)\}$ are edges of cells of $\mathcal{P}$ for all $\ell=i,\dots,k-1$. In addition, if $\{(i-1,j),(i,j)\}$ and $\{(k,j),(k+1,j)\}$ do not belong to $E(\mathcal{P})$, then $[a,b]$ is called a \textit{maximal} horizontal edge interval of $\mathcal{P}$. We define similarly a \textit{vertical edge interval} and a \textit{maximal} vertical edge interval. \\
\noindent We follow \cite{Trento} and we call a \textit{zig-zag walk} of $\mathcal{P}$ a sequence $\mathcal{W}:I_1,\dots,I_\ell$ of distinct inner intervals of $\mathcal{P}$ where, for all $i=1,\dots,\ell$, the interval $I_i$ has either diagonal corners $v_i$, $z_i$ and anti-diagonal corners $u_i$, $v_{i+1}$, or anti-diagonal corners $v_i$, $z_i$ and diagonal corners $u_i$, $v_{i+1}$, such that:
\begin{enumerate}
\item $I_1\cap I_\ell=\{v_1=v_{\ell+1}\}$ and $I_i\cap I_{i+1}=\{v_{i+1}\}$, for all $i=1,\dots,\ell-1$;
\item $v_i$ and $v_{i+1}$ are on the same edge interval of $\mathcal{P}$, for all $i=1,\dots,\ell$;
\item for all $i,j\in \{1,\dots,\ell\}$ with $i\neq j$, there exists no inner interval $J$ of $\mathcal{P}$ such that $z_i$, $z_j$ belong to $J$.
\end{enumerate}
\noindent Following \cite{Cisto_Navarra_closed_path}, we recall the definition of a \textit{closed path polyomino} and the configurations of cells characterizing its primality. We say that a polyomino $\mathcal{P}$ is a \textit{closed path} if there exists a sequence of cells $A_1,\dots,A_n, A_{n+1}$, $n>5$, such that:
\begin{enumerate}
\item $A_1=A_{n+1}$;
\item $A_i\cap A_{i+1}$ is a common edge, for all $i=1,\dots,n$;
\item $A_i\neq A_j$, for all $i\neq j$ and $i,j\in \{1,\dots,n\}$;
\item For all $i\in\{1,\dots,n\}$ and for all $j\notin\{i-2,i-1,i,i+1,i+2\}$, we have $V(A_i)\cap V(A_j)=\emptyset$, where $A_{-1}=A_{n-1}$, $A_0=A_n$, $A_{n+1}=A_1$ and $A_{n+2}=A_2$.
\end{enumerate}
\begin{figure}[h]
\centering
\subfloat{\includegraphics[scale=0.4]{Example_closed_path.png}}\qquad\qquad
\subfloat{\includegraphics[scale=0.4]{Example_closed_path_not_prime.png}}
\caption{An example of two closed paths.}
\label{Figure: Example closed paths}
\end{figure}
\noindent A path of five cells $C_1, C_2, C_3, C_4, C_5$ of $\mathcal{P}$ is called an \textit{L-configuration} if the two sequences $C_1, C_2, C_3$ and $C_3, C_4, C_5$ go in two orthogonal directions. A set $\mathcal{B}=\{\mathcal{B}_i\}_{i=1,\dots,n}$ of maximal horizontal (or vertical) blocks of rank at least two, with $V(\mathcal{B}_i)\cap V(\mathcal{B}_{i+1})=\{a_i,b_i\}$ and $a_i\neq b_i$ for all $i=1,\dots,n-1$, is called a \textit{ladder of $n$ steps} if $[a_i,b_i]$ is not on the same edge interval as $[a_{i+1},b_{i+1}]$ for all $i=1,\dots,n-2$. We recall that a closed path has no zig-zag walks if and only if it contains an L-configuration or a ladder of at least three steps (see \cite[Section 6]{Cisto_Navarra_closed_path}). For instance, Figure \ref{Figure: Example closed paths} displays a prime closed path on the left and another one, containing zig-zag walks, on the right.
\begin{lemma}\label{Lemma: Closed path number of vertices and cells}
Let $\mathcal{P}$ be a closed path polyomino. Then $|V(\mathcal{P})|=2\rank(\mathcal{P})$.
\end{lemma}
\begin{proof}
Consider the sub-polyominoes of $\mathcal{P}$ arranged in Figure \ref{Figure:sub-polyomino for prove number vertices are double of number cells}.
\begin{figure}[h!]
\subfloat[]{\includegraphics[scale=0.8]{conf1_number_vertices_cells.png}}\qquad
\subfloat[]{\includegraphics[scale=0.8]{conf2_number_vertices_cells.png}}\qquad
\subfloat[]{\includegraphics[scale=0.8]{conf3_number_vertices_cells.png}}\qquad
\subfloat[]{\includegraphics[scale=0.8]{conf4_number_vertices_cells.png}}
\caption{}
\label{Figure:sub-polyomino for prove number vertices are double of number cells}
\end{figure}
\noindent In (A) and (B), we attach to the cell $A_i$ the red vertices. In (C), we attach to $A_{i-1}$, $A_{i}$ and $A_{i+1}$ the blue, the black and the red vertices, respectively. In (D), we attach to $A_{i-1}$ and $A_{i}$ the blue and the black vertices, respectively. It is easy to see that a closed path is made up of suitable arrangements of the sub-polyominoes in Figure \ref{Figure:sub-polyomino for prove number vertices are double of number cells} and of their rotations or reflections. Once we label the cells of $\mathcal{P}$, we can attach to every cell two distinct vertices as before. Therefore, we obtain that $|V(\mathcal{P})|=2\rank(\mathcal{P})$.
\end{proof}
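\begin{exa}\rm
For instance, consider the closed path consisting of the eight cells of a $3\times 3$ square of cells with the central cell removed. Its vertex set consists of all sixteen lattice points of the corresponding $4\times 4$ grid, so $|V(\mathcal{P})|=16=2\rank(\mathcal{P})$, in accordance with Lemma \ref{Lemma: Closed path number of vertices and cells}.
\end{exa}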
\noindent Let $\mathcal{P}$ be a polyomino. We set $S_\mathcal{P}=K[x_v : v\in V(\mathcal{P})]$, where $K$ is a field. If $[a,b]$ is an inner interval of $\mathcal{P}$, where $a$, $b$ and $c$, $d$ are respectively the diagonal and the anti-diagonal corners, then the binomial $x_ax_b-x_cx_d$ is called an \textit{inner 2-minor} of $\mathcal{P}$. We define $I_{\mathcal{P}}$ as the ideal in $S_\mathcal{P}$ generated by all the inner 2-minors of $\mathcal{P}$ and we call it the \textit{polyomino ideal} of $\mathcal{P}$. We also set $K[\mathcal{P}] = S_\mathcal{P}/I_{\mathcal{P}}$, which is called the \textit{coordinate ring} of $\mathcal{P}$. \\
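\begin{exa}\rm
For instance, if $[a,b]$ is the inner interval with diagonal corners $a=(1,1)$, $b=(2,2)$ and anti-diagonal corners $c=(1,2)$, $d=(2,1)$, that is, a single cell of $\mathcal{P}$, then the corresponding inner 2-minor is $x_{(1,1)}x_{(2,2)}-x_{(1,2)}x_{(2,1)}$.
\end{exa}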
\section{Krull dimension of closed path polyominoes}\label{Krull}
\noindent In this section we compute the Krull dimension of the coordinate ring attached to a closed path polyomino. We recall some basic facts on simplicial complexes. A \textit{finite simplicial complex} $\Delta$ on the vertex set $[n]=\{1,\dots,n\}$ is a
collection of subsets of $[n]$ satisfying the following two conditions:
\begin{enumerate}
\item if $F'\in \Delta$ and $F \subseteq F'$ then $F \in \Delta$;
\item $\{i\}\in \Delta$ for all $i=1,\dots,n$.
\end{enumerate} The elements of $\Delta$ are called \textit{faces}, and the dimension of a face is one less than
its cardinality. An \textit{edge} of $\Delta$ is a face of dimension 1, while a \textit{vertex} of $\Delta$ is a face
of dimension 0. The maximal faces of $\Delta$ with respect to the set inclusion are called \textit{facets}. The dimension
of $\Delta$ is the dimension of a facet.
Let $\Delta$ be a simplicial complex on $[n]$ and $R=K[x_1,\dots,x_n]$ be the polynomial ring in $n$ variables over a field $K$. To every
collection $F=\{i_1,\dots,i_r\}$ of $r$ distinct vertices of $\Delta$, we
associate a monomial $x^F$ in $R$ where
$x^F=x_{i_1}\dots x_{i_r}.$
The monomial ideal generated by all monomials $x^F$ such that $F$ is not a face of $\Delta$ is called the \textit{Stanley-Reisner ideal} of $\Delta$ and is denoted by $I_{\Delta}$. The \textit{face ring} of $\Delta$, denoted by $K[\Delta]$, is defined to be the quotient ring $R/I_{\Delta}$. From \cite[Corollary 6.3.5]{Villareal}, if $\Delta$ is a simplicial complex on $[n]$ of dimension $d$, then $\dim K[\Delta]=d+1=\max\{s: x_{i_1}\dots x_{i_s}\notin I_{\Delta},\ i_1<\dots<i_s\}$.
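\begin{exa}\rm
For instance, let $\Delta$ be the simplicial complex on $[3]$ whose facets are $\{1,2\}$ and $\{3\}$. Its minimal non-faces are $\{1,3\}$ and $\{2,3\}$, so $I_{\Delta}=(x_1x_3,x_2x_3)$. Since $x_1x_2\notin I_{\Delta}$ while $x_1x_2x_3\in I_{\Delta}$, we get $\dim K[\Delta]=2=\dim \Delta +1$.
\end{exa}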
\begin{defn}\rm \label{Definition: Gamma-path...}
A polyomino $\mathcal{R}$ is called a \textit{$\Gamma$-path} with middle cell $D$ and hooking vertex $w$ if it consists of a horizontal maximal block $\mathcal{B}_1=[C_1,B_1]$ of rank greater than three, a vertical maximal block $\mathcal{B}_2=[B_2,C_2]$ of rank greater than three and a cell $D$ not belonging to $\mathcal{B}_{1}\cup\mathcal{B}_2$, with $V(\mathcal{B}_1)\cap V(\mathcal{B}_2)=\{w\}$, where $w$ is the upper left corner of $D$. This is illustrated in Figure~\ref{Figure:configurations of W-paths} (A). With reference to Figure \ref{Figure:configurations of W-paths}, we define \textit{$\tau$-paths}, \textit{$W$-paths} and \textit{$\zeta$-paths} in a similar way.
\begin{figure}[h!]
\subfloat[$\Gamma$-path]{\includegraphics[scale=0.7]{RW_heptomino1.png}} \subfloat[$\tau$-path]{\includegraphics[scale=0.7]{RW_heptomino2.png}}
\subfloat[$W$-path]{\includegraphics[scale=0.7]{RW_heptomino3.png}}
\subfloat[$\zeta$-path]{\includegraphics[scale=0.7]{RW_heptomino4.png}}
\caption{}
\label{Figure:configurations of W-paths}
\end{figure}
\noindent Moreover, we say that a pair $(\mathcal{F},\mathcal{G})$ of the previous polyominoes is \textit{compatible} if $\mathcal{F}$ is one of the four paths above ($\Gamma$-, $\tau$-, $W$- or $\zeta$-path) and $\mathcal{G}$ is a path of one of the other three types.
\end{defn}
\begin{defn}\rm \label{Definition: Skew path...}
A polyomino $\mathcal{S}$ is called an \textit{LU-skew path} with hooking vertices $a,b$ if it is made up of two maximal blocks $\mathcal{B}_1=[C_1,D_1]$ and $\mathcal{B}_2=[C_2,D_2]$, both of rank greater than three, with $V(\mathcal{B}_1)\cap V(\mathcal{B}_2)=\{a,b\}$, where $a$ and $b$ are the upper right and the upper left corners of $D_1$, respectively. For instance, see Figure~\ref{Figure:configurations of Skew} (A). With reference to Figure~\ref{Figure:configurations of Skew}, we define \textit{$LD$-skew}, \textit{$DU$-skew} and \textit{$UD$-skew paths} in a similar way.
\begin{figure}[h!]
\begin{minipage}{.3\textwidth}
\subfloat[$LU$-skew]{\includegraphics[scale=0.7]{Skew_hexomino1.png}}
\vspace{0.3cm}
\subfloat[$LD$-skew]{\includegraphics[scale=0.7]{Skew_hexomino4.png}}
\end{minipage}\
\begin{minipage}{.3\textwidth}
\subfloat[$DU$-skew]{\includegraphics[scale=0.7]{Skew_hexomino2.png}}
\subfloat[$UD$-skew]{\includegraphics[scale=0.7]{Skew_hexomino3.png}}
\end{minipage}
\caption{}
\label{Figure:configurations of Skew}
\end{figure}
\end{defn}
\noindent Let $\mathcal{P}$ be a closed path and let $(\mathcal{F},\mathcal{G})$ be a pair of two sub-polyominoes of $\mathcal{P}$ as in Definition \ref{Definition: Gamma-path...}. Without loss of generality, we may assume that the middle cells of $\mathcal{F}$ and $\mathcal{G}$ are $A_1$ and $A_k$, respectively, with $k\in [n-1]$. Then a sub-polyomino $\mathcal{Q}$ of $\mathcal{P}$ as in Definition \ref{Definition: Skew path...} is said to be \textit{between $\mathcal{F}$ and $\mathcal{G}$} if $\mathcal{Q}$ is contained in $\{A_i:1<i<k\}$.
\\
\begin{thm}\label{Thm: Dimension closed path}
Let $\mathcal{P}$ be a closed path. Then the Krull dimension of $K[\mathcal{P}]$ is $|V(\mathcal{P})|-\rank(\mathcal{P})$.
\end{thm}
\begin{proof}
We recall from \cite{Cisto_Navarra_Hilbert_series} that, if $\mathcal{P}$ does not contain any zig-zag walk, then $K[\mathcal{P}]$ is a normal Cohen-Macaulay domain and its Krull dimension is given by $|V(\mathcal{P})|-\rank(\mathcal{P})$. Hence, we need to examine only the case when $\mathcal{P}$ contains a zig-zag walk, which we assume from now on.\\
\noindent Let $<^1$ be the total order on $V(\mathcal{P})$ defined as follows: $u<^1 v$ if and only if, writing $u = (i,j)$ and $v = (k,l)$, either $i < k$, or $i = k$ and $j < l$. Let $Y\subset V(\mathcal{P})$ and let $<^Y_{\mathrm{lex}}$ be the lexicographic order on $S_\mathcal{P}$ induced by the following order on the variables of $S_\mathcal{P}$:
\[ \mbox{for}\ u,v \in V(\mathcal{P})\qquad
x_u<^Y_{\mathrm{lex}} x_v \Leftrightarrow
\left\{
\begin{array}{l}
u\notin Y\ \mbox{and}\ v\in Y \\
u,v\notin Y\ \mbox{and}\ u<^1 v \\
u,v\in Y\ \mbox{and}\ u<^1 v
\end{array}
\right.
\]
From \cite[Theorem 4.9]{Cisto_Navarra_CM_closed_path}, we know that there exists a suitable set $Y\subset V(\mathcal{P})$ such that the set of generators of $I_{\mathcal{P}}$ forms the reduced Gr\"obner basis of $I_\mathcal{P}$ with respect to $<^Y_{\mathrm{lex}}$, defined in \cite[Algorithm 4.7, Definition 4.8]{Cisto_Navarra_CM_closed_path}. The square-free monomial ideal $J=\mathrm{in}_{<^Y_{\mathrm{lex}}}(I_{\mathcal{P}})$ can be viewed as the Stanley-Reisner ideal of a simplicial complex $\Delta(J)$. In order to determine the dimension of $K[\mathcal{P}]$, we compute the dimension of the simplicial complex $\Delta(J)$, which is the cardinality of a facet of $\Delta(J)$ minus 1. Hence, in what follows, our aim is to define a suitable facet of $\Delta(J)$.\\
From \cite[Section 6]{Cisto_Navarra_closed_path}, it follows easily that $\mathcal{P}$ contains just the configurations defined in Definitions \ref{Definition: Gamma-path...} and \ref{Definition: Skew path...}, arranged in a suitable way. Let $\mathcal{S}_1$ be a $\Gamma$-path. Referring to Figure \ref{Figure:configurations of W-paths} (A), we label the cells of $\mathcal{P}$ setting $D=A_1$, $C_1=A_2$ and so on. Let $k>1$ be the minimum integer such that $\mathcal{S}_2$ is either a $W$-path or a $\tau$-path with middle cell $A_k$. The hooking vertices of $\mathcal{S}_1$ and $\mathcal{S}_2$ are on the same maximal horizontal edge interval $V$ of $\mathcal{P}$ and, if they exist, the hooking vertices of the $LU$-skew or $LD$-skew paths between $\mathcal{S}_1$ and $\mathcal{S}_2$ belong to $V$. Moreover, by the minimality of $k$, there does not exist any $DU$-skew or $UD$-skew path between $\mathcal{S}_1$ and $\mathcal{S}_2$ whose hooking vertices belong to $V$. We want to define a suitable set of vertices of $\mathcal{P}$ in order to find a facet of $\Delta(J)$. We distinguish two cases, depending on whether $\mathcal{S}_2$ is a $W$-path or a $\tau$-path. Moreover, we have $z_T,x_T\in Y$ and $y_T\notin Y$, with reference to Figure \ref{Figure: I case of I case, proof dimension}.\\
\textsl{Case I:} Assume that $\mathcal{S}_2$ is a $W$-path.
\begin{enumerate}
\item Suppose that there does not exist any $LU$-skew or $LD$-skew path between $\mathcal{S}_1$ and $\mathcal{S}_2$. Let $\mathcal{B}_1$ be the maximal horizontal block $[A_2,A_{k-1}]$. Denote the upper right corner of $A_{i}$ by $a_i^1$ for all $i=2,\dots,k-1$ and the upper left corner of $A_{2}$ by $a_{1}^1$. We set
$$V(\mathcal{S}_1,\mathcal{S}_2)=\{a_i^1:i\in [\rank(\mathcal{B}_1)]\},$$ as in Figure \ref{Figure: I case of I case, proof dimension}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{RW_heptomino1_proof_dimension.png}
\caption{}
\label{Figure: I case of I case, proof dimension}
\end{figure}
\item Suppose that there exist $LD$-skew or $LU$-skew paths between $\mathcal{S}_1$ and $\mathcal{S}_2$ and, in particular, that there are $m$ maximal horizontal blocks $\mathcal{B}_i$ between $\mathcal{S}_1$ and $\mathcal{S}_2$. Set $\mathcal{B}_j=[A_{k_j},A_{k_{j+1}}]$, where $k_1=2$ and $k_j<k_{j+1}$ for all $j\in [m]$. For all odd $j\in [m]$ and for all $i=1,\dots,\rank(\mathcal{B}_j)$, we denote by $a_1^j$ the upper left corner of $A_{k_j}$ and by $a_i^j$ the upper right corner of $A_{k_j+i}$ in $\mathcal{B}_j$.
For all even $j\in [m]$ and for all $i=1,\dots,\rank(\mathcal{B}_j)$, we denote by $a_i^j$ the lower right corner of $A_{k_j+i}$ in $\mathcal{B}_j$. See Figure \ref{Figure: II case of I case, proof dimension}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{RW_heptomino2_proof_dimension.png}
\caption{}
\label{Figure: II case of I case, proof dimension}
\end{figure}
\noindent Recall that for all $j\in [m]$, $a^j_{|\mathcal{B}_j|-2},a^j_{|\mathcal{B}_j|-1}\notin Y$ and $a^{j+1}_{1},a^{j+1}_{2}\in Y$ with $a^{j+1}_{2}>a^{j+1}_{1}$.
Then we set
$$ V(\mathcal{S}_1,\mathcal{S}_2)=\bigcup_{\substack{j\in [m]\\ j\ \mathrm{even}}} \{a_i^j:i\in [\rank(\mathcal{B}_j)]\} \cup \biggl(\bigcup_{\substack{j\in [m]\\ j\ \mathrm{odd}}} \{a_i^j:i\in [\rank(\mathcal{B}_j)+1]\}\biggr)\backslash \bigcup_{\substack{j\in [m]\\ j\ \mathrm{odd}}}\{a_2^j\}. $$
\end{enumerate}
\textsl{Case II:} Assume that $\mathcal{S}_2$ is a $\tau$-path.
\begin{enumerate}
\item Suppose that there exists just an $LD$-skew path between $\mathcal{S}_1$ and $\mathcal{S}_2$. For all $i=0,\dots,\rank(\mathcal{B}_1)-1$, we denote by $a_1^1$ the upper left corner of $A_2$ and by $a_i^1$ the upper right corner of $A_{2+i}$ in $\mathcal{B}_1$.
For all $i=1,\dots,\rank(\mathcal{B}_2)-1$, we denote by $a_i^2$ the lower right corner of $A_{t+i}$ in $\mathcal{B}_2$. See Figure \ref{Figure: I case of II case, proof dimension}. In this case, we set
$$ V(\mathcal{S}_1,\mathcal{S}_2)=\{a_i^1:i\in [\rank(\mathcal{B}_1)]\} \cup\{a_i^2:i\in [\rank(\mathcal{B}_2)]\}.$$
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{RW_heptomino3_proof_dimension.png}
\caption{}
\label{Figure: I case of II case, proof dimension}
\end{figure}
\item Suppose that there exist $LD$-skew or $LU$-skew paths between $\mathcal{S}_1$ and $\mathcal{S}_2$. With the same notations as in (2) of \textsl{Case I} (see Figure \ref{Figure: II case of II case, proof dimension}), we set
$$ V(\mathcal{S}_1,\mathcal{S}_2)=\bigcup_{\substack{j\in [m]\\ j\ \mathrm{even}}} \{a_i^j:i\in [\rank(\mathcal{B}_j)]\} \cup \biggl(\bigcup_{\substack{j\in [m]\\ j\ \mathrm{odd}}} \{a_i^j:i\in [\rank(\mathcal{B}_j)]\}\biggr)\backslash \bigcup_{\substack{j\in [m]\\ j\ \mathrm{odd}}}\{a_2^j\}. $$
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{RW_heptomino4_proof_dimension.png}
\caption{}
\label{Figure: II case of II case, proof dimension}
\end{figure}
\end{enumerate}
\noindent In each case, we can define the following bijective correspondence $\phi_{1,2}:V(\mathcal{S}_1,\mathcal{S}_2)\to \{A_1\}\cup \bigcup_{i=1}^m \mathcal{B}_i$ with $\phi_{1,2}(a_1^1)=A_1$ and $\phi_{1,2}(a_i^j)=A_{k_j+i}$ for all $j\in [m]$ and for all $i\in[\rank(\mathcal{B}_j)]$; in particular, $|V(\mathcal{S}_1,\mathcal{S}_2)|=\sum_{i=1}^{m}\rank(\mathcal{B}_i)+1$. Once we have defined the set $V(\mathcal{S}_1,\mathcal{S}_2)$, we can argue for every pair $(\mathcal{S}_i,\mathcal{S}_{i+1})$ of compatible paths, where $\mathcal{S}_{i+1}$ is taken by minimality with respect to $\mathcal{S}_i$, as done for $(\mathcal{S}_1,\mathcal{S}_2)$. Assume that there exist $t$ such pairs, where $\mathcal{S}_{t+1}=\mathcal{S}_1$. Hence, for all $i\in [t]$, we can define $V(\mathcal{S}_i,\mathcal{S}_{i+1})$ by arguments similar to those used for $V(\mathcal{S}_1,\mathcal{S}_2)$, and we set $V=\bigcup_{i=1}^t V(\mathcal{S}_i,\mathcal{S}_{i+1})$.
Observe that $|V|=\rank (\mathcal{P})$. We want to prove that $F=\{x_v:v\in V\}$ is a facet of $\Delta(J)$. Obviously $F$ is a face of $\Delta(J)$, so we need to prove its maximality with respect to set inclusion. Suppose, by contradiction, that $F$ is not maximal. Then there exists $p\in V(\mathcal{P})\backslash V$ such that $F\cup \{x_p\}$ is a face of $\Delta(J)$. Without loss of generality, we may assume that $p$ is a vertex of the sub-polyomino $\{A_1\}\cup \bigcup_{i=1}^m \mathcal{B}_i$ between $\mathcal{S}_1$ and $\mathcal{S}_2$.
\begin{enumerate}
\item Assume that we are in $(1)$ of \textsl{Case I}. Suppose $p\in V(\mathcal{B}_1)$. If $p$ is the lower left corner of $A_2$, then $x_px_q\in J$, where $q$ is the lower left corner of $A_n$, since $y_T\notin Y$; as a consequence, $\{x_p,x_q\}\notin \Delta(J)$. But $x_q\in F$ and $F\cup\{x_p\}$ is a face of $\Delta(J)$, so $\{x_p,x_q\}\in \Delta(J)$, which is a contradiction. In the other cases we obtain a contradiction similarly, considering $\{x_{a_1^1},x_p\}$, as well as when $p$ is the lower left or lower right corner of $A_1$.
\item Assume that we are in $(2)$ of \textsl{Case I}. The only case which we discuss is $p=a_{2}^{j+1}$ for some $j\in[m]$. In this case, since $a_{2}^{j+1}>a_{1}^{j+1}$, we have $x_{a^j_{|\mathcal{B}_j|-1}}x_{a_2^{j+1}}\in J$. But $x_{a^j_{|\mathcal{B}_j|-1}}\in F$ and $F\cup\{x_p\}$ is a face of $\Delta(J)$, so $\{x_{a^j_{|\mathcal{B}_j|-1}},x_{a_2^{j+1}}\}\in \Delta(J)$, and we get a contradiction.
\item In the subcases $(1)$ and $(2)$ of \textsl{Case II}, we can argue in a similar way, and we obtain a contradiction.
\end{enumerate}
\noindent Therefore, $F$ is a facet of $\Delta(J)$. Hence, $\dim \Delta(J)=\rank(\mathcal{P})-1$, so $\dim K[\mathcal{P}]=\rank(\mathcal{P})$. From Lemma \ref{Lemma: Closed path number of vertices and cells}, we obtain that $\dim K[\mathcal{P}]=|V(\mathcal{P})|-\rank(\mathcal{P})$.
\end{proof}
\begin{coro}\label{Coro: height of P}
Let $\mathcal{P}$ be a closed path polyomino. Then $\mathrm{ht}(I_{\mathcal{P}})=\rank(\mathcal{P})$.
\end{coro}
\begin{proof}
It follows from Theorem~\ref{Thm: Dimension closed path} and \cite[Corollary 3.1.7]{Villareal}.
\end{proof}
\noindent It is known that if $\mathcal{P}$ is a simple polyomino, then $\dim(K[\mathcal{P}])=|V(\mathcal{P})|-\rank(\mathcal{P})$, so $\mathrm{ht}(I_{\mathcal{P}})=\rank(\mathcal{P})$. The following conjecture then arises naturally.
\begin{conj}
Let $\mathcal{P}$ be a non-simple polyomino. Then $\mathrm{ht}(I_{\mathcal{P}})=\rank(\mathcal{P})$.
\end{conj}
\section{Closed paths and K\"onig type property}\label{Konig}
\noindent Let $R=K[x_1,\dots,x_n]$ and let $I$ be a graded ideal in $R$ of height $h$. According to \cite{Def. Konig type}, we say that $I$ is of \textit{K\"onig type} if the following two conditions hold:
\begin{enumerate}
\item there exists a sequence $f_1,\dots, f_h$ of homogeneous polynomials forming part of a minimal system of homogeneous generators of $I$;
\item there exists a monomial order $<$ on $R$ such that $\mathop{\rm in}\nolimits_<(f_1),\dots,\mathop{\rm in}\nolimits_<(f_h)$ is a regular sequence.
\end{enumerate}
\noindent If $\mathcal{P}$ is a polyomino and $I_{\mathcal{P}}$ is its polyomino ideal, then we say that $\mathcal{P}$ is of \textit{K\"onig type} if $I_{\mathcal{P}}$ is an ideal of K\"onig type. In \cite{Hibi - Herzog Konig type polyomino} it is proved that all simple thin polyominoes are of K\"onig type. Obviously, a closed path is a thin non-simple polyomino. The aim of this paper is to show that this class of polyominoes also satisfies this property; moreover, it seems that the same is true for all non-simple thin polyominoes.
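\begin{exa}\rm
As a minimal illustration, let $\mathcal{P}$ be the polyomino consisting of a single cell with diagonal corners $a,b$ and anti-diagonal corners $c,d$. Then $I_{\mathcal{P}}=(x_ax_b-x_cx_d)$ has height $1=\rank(\mathcal{P})$, the binomial $f_1=x_ax_b-x_cx_d$ is a minimal generator of $I_{\mathcal{P}}$, and its initial term $x_ax_b$, with respect to any monomial order for which $x_ax_b>x_cx_d$, is a regular sequence of length one. Hence $\mathcal{P}$ is of K\"onig type, in agreement with the result of \cite{Hibi - Herzog Konig type polyomino} on simple thin polyominoes.
\end{exa}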
\begin{rmk}\rm \label{Remark: For the Konig Type}
Let $\mathcal{P}$ be a closed path polyomino having $n$ distinct cells. Let $<_{\mathrm{lex}}$ be the lexicographic order induced by a total order on $\{x_v:v\in V(\mathcal{P})\}$. Suppose that there exist $n$ generators $f_1,\dots,f_n$ of $I_{\mathcal{P}}$ whose initial terms pairwise have no variable in common. Then $I_{\mathcal{P}}$ is of K\"onig type. In fact, from Corollary~\ref{Coro: height of P}, we know that $\mathrm{ht}(I_{\mathcal{P}})=n$. Moreover, $\gcd(\mathop{\rm in}\nolimits_{<_{\mathrm{lex}}}(f_i),\mathop{\rm in}\nolimits_{<_{\mathrm{lex}}}(f_j))=1$ for all $i\neq j$, so $\mathop{\rm in}\nolimits_{<_{\mathrm{lex}}}(f_1),\dots,\mathop{\rm in}\nolimits_{<_{\mathrm{lex}}}(f_n)$ is a regular sequence. Therefore we have the desired conclusion.
\end{rmk}
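\begin{exa}\rm
The last assertion of Remark \ref{Remark: For the Konig Type} rests on the standard fact that monomials with pairwise trivial greatest common divisor form a regular sequence. For instance, in $K[x_1,x_2,x_3,x_4]$ the monomials $x_1x_2$ and $x_3x_4$ have no variable in common, and indeed $x_1x_2,\ x_3x_4$ is a regular sequence, since $x_3x_4$ is a nonzerodivisor modulo $(x_1x_2)$.
\end{exa}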
\begin{defn}\rm \label{Procedure: to define Y}
Let $\mathcal{P}:A_1,\dots,A_n$ be a closed path polyomino. In order to define a suitable total order on $\{x_v:v\in V(\mathcal{P})\}$, Table~\ref{Table2} will be very useful. Let $Y^{(1)}\subset V(\mathcal{P})$. Let $j\geq 2$ and assume that $Y^{(j-1)}$ is known; we want to define $Y^{(j)}$. We refer to Table~\ref{Table2} up to suitable rotations and reflections of $\mathcal{P}$. If one of the configurations in the left column of Table~\ref{Table2} occurs, where the blue vertices are in $Y^{(j-1)}$, then we denote by $k$ the maximum integer such that $m+k$ is an orange vertex in the picture displayed in the corresponding right column. Hence, we set $Z^{(j)}_1=\{x_{m},\dots,x_{m+k}\}$ and $Z^{(j)}_2=\{x_{m'},\dots,x_{(m+k)'}\}$, where for all $a\in Y^{(j-1)}_1$ we put $x_a>x_{h_1}>x_{h_2}>x_{1'}$ with $m<h_1<h_2\leq m+k$ and for all $b\in Y^{(j-1)}_2$ we put $x_b>x_{t_1}>x_{t_2}$ with $m'<t_1<t_2\leq (m+k)'$. Therefore, we define $Y^{(j)}=Y^{(j)}_1\sqcup Y^{(j)}_2$ where $Y^{(j)}_1=Y^{(j-1)}_1\sqcup Z^{(j)}_1$ and $Y^{(j)}_2=Y^{(j-1)}_2\sqcup Z^{(j)}_2$.
\end{defn}
\begin{table}[H]
\centering
\renewcommand\arraystretch{1.3}{ \begin{tabular}{c|c|c}
& \textbf{IF} it occurs ... & \textbf{THEN} we refer to ... \\
\hline
I & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case1_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case1_THEN.png}
\end{minipage}\\
\hline
II & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case2_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case2_THEN.png}
\end{minipage}\\
\hline
III & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case3_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case3_THEN.png}
\end{minipage}\\
\hline
IV & \begin{minipage}{0.20\textwidth}
\includegraphics[scale=0.68]{Case4_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case4_THEN.png}
\end{minipage}\\
\hline
V & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case5_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case5_THEN.png}
\end{minipage}\\
\hline
VI & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case6_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case6_THEN.png}
\end{minipage}\\
\hline
VII & \begin{minipage}{0.20\textwidth}
\includegraphics[scale=0.68]{Case7_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case7_THEN.png}
\end{minipage}\\
\hline
VIII & \begin{minipage}{0.20\textwidth}
\includegraphics[scale=0.68]{Case8_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case8_THEN.png}
\end{minipage}\\
\hline
IX & \begin{minipage}{0.20\textwidth}
\includegraphics[scale=0.68]{Case9_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case9_THEN.png}
\end{minipage}\\
\hline
X & \begin{minipage}{0.20\textwidth}
\includegraphics[scale=0.68]{Case10_IF.png}
\end{minipage} & \begin{minipage}{0.19\textwidth}
\includegraphics[scale=0.68]{Case10_THEN.png}
\end{minipage}\\
\hline
\end{tabular}}
\caption{}
\label{Table2}
\end{table}
\noindent We need to distinguish just two cases depending on the changes of direction in $\mathcal{P}$, so we have the following two results.
\begin{prop}\label{Lemma: A closed path with a tetromino is Konig type}
Let $\mathcal{P}:A_1,\dots,A_n$ be a closed path polyomino. Suppose that $\mathcal{P}$ contains a configuration of four cells as in Figure \ref{Figure: particular tetromino} (A), up to reflections or rotations of $\mathcal{P}$, or up to a relabelling of the cells of $\mathcal{P}$. Then $I_{\mathcal{P}}$ is of K\"onig type.
\end{prop}
\begin{proof}
We distinguish two cases depending on the position of $A_3$ with respect to $A_2$.\\
\noindent \textsl{Case I:} We assume that $A_3$ is to the North of $A_2$.
\begin{figure}[h]
\centering
\subfloat[]{\includegraphics[scale=0.8]{particular_tetromino.png}}\qquad
\subfloat[]{\includegraphics[scale=0.8]{tetromino_A3_at_North_A2_I.png}}
\subfloat[]{\includegraphics[scale=0.8]{tetromino_A3_at_North_A2_II.png}}
\caption{}
\label{Figure: particular tetromino}
\end{figure}
\noindent We set $Y^{(1)}=Y^{(1)}_1 \sqcup Y^{(1)}_2$, where $Y^{(1)}_1=\{x_1,x_2\}$ and $Y^{(1)}_2=\{x_{1'},x_{2'}\}$ with $x_1>x_2>x_{1'}>x_{2'}$, with reference to Figure~\ref{Figure: particular tetromino} (B) if $A_4$ is to the East of $A_3$, or to Figure~\ref{Figure: particular tetromino} (C) if $A_4$ is to the North of $A_3$. Starting from this choice of $Y^{(1)}$, we apply the procedure described in Definition~\ref{Procedure: to define Y}.
Since $\mathcal{P}$ has a finite number of cells, the procedure, which stops at the configuration $\{A_{n-1}, A_n, A_1, A_2\}$, consists of a finite number of steps, say $p$. In particular, in Figure~\ref{Figure: tetromino in the last part CASE 1} we summarize all cases which may appear in the last step, where the blue vertices represent the points which are in $Y^{(p-1)}$ at the penultimate step. Let $Y=Y^{(p)}$ be the ordered set of variables obtained by the previous arguments and let $|Y|=2r$ with $r\in {\mathbb N}$. We have $x_1>x_2>\dots>x_r>x_{1'}>x_{2'}>\dots>x_{r'}$ and we set $Y_1=\{x_1,x_2,\dots,x_r\}$ and $Y_2=\{x_{1'},x_{2'},\dots,x_{r'}\}$. Moreover, observe that all vertices of $\mathcal{P}$ are covered two by two, so $r=n$ by Lemma~\ref{Lemma: Closed path number of vertices and cells}. Hence, we obtain $n$ generators of $I_{\mathcal{P}}$ whose initial terms have no variable in common; by Remark~\ref{Remark: For the Konig Type}, it follows that $I_{\mathcal{P}}$ is of K\"onig type.
\begin{figure}[h]
\centering
\subfloat[Note $a\in Y_2$]{\includegraphics[scale=1]{tetromino_last_part_Case1_I.png}}
\subfloat[Note $a\in Y_2$]{\includegraphics[scale=1]{tetromino_last_part_Case1_II.png}}
\subfloat[Note $b\in Y_2$]{\includegraphics[scale=1]{tetromino_last_part_Case1_III.png}}
\caption{The cases in the last step.}
\label{Figure: tetromino in the last part CASE 1}
\end{figure}
\noindent \textsl{Case II:} We assume that $A_3$ is to the East of $A_2$. We set $Y^{(1)}=Y^{(1)}_1 \sqcup Y^{(1)}_2$, where $Y^{(1)}_1=\{x_1\}$ and $Y^{(1)}_2=\{x_{1'}\}$ with $x_1>x_{1'}$, with reference to Figure \ref{Figure:tetromino A_3 is at East of A_2} (A). As before, we start from this choice of $Y^{(1)}$ and apply the procedure described in Definition~\ref{Procedure: to define Y}. Let $q$ be the number of steps until the configuration $\{A_{n-1},A_n,A_1,A_2\}$ is reached. In Figure~\ref{Figure:tetromino A_3 is at East of A_2} (B), (C) and (D) we show all cases in the last step, and we point out that we set $x_0>x_1$. Hence, with the same arguments as before, we get the desired conclusion.
\begin{figure}[h]
\centering
\subfloat[]{\includegraphics[scale=0.8]{tetromino_A3_at_East_A2.png}}
\subfloat[]{\includegraphics[scale=0.8]{tetromino_last_part_Case2_I.png}}
\subfloat[Note $a\in Y_2$]{\includegraphics[scale=0.8]{tetromino_last_part_Case2_II.png}}
\subfloat[Note $b\in Y_2$]{\includegraphics[scale=0.9]{tetromino_last_part_Case2_III.png}}
\caption{}
\label{Figure:tetromino A_3 is at East of A_2}
\end{figure}
\end{proof}
\begin{exa}\rm
An example of the procedure described in Proposition~\ref{Lemma: A closed path with a tetromino is Konig type} can be seen in Figure~\ref{Figure: example closed path of konig type 1}. In particular, $\mathcal{P}$ is of K\"onig type with respect to the lexicographic order induced by
$$ x_1>x_2>\dots>x_{30}>x_{1'}>x_{2'}>\dots>x_{30'}$$
and to the thirty generators of $I_{\mathcal{P}}$ corresponding to the inner intervals having $i$ and $i^{\prime}$ as diagonal or anti-diagonal corners, for all $i\in [30]$.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{Esempio_1_Kong_type.png}
\caption{}
\label{Figure: example closed path of konig type 1}
\end{figure}
\noindent We describe the algorithm given in Proposition~\ref{Lemma: A closed path with a tetromino is Konig type} and we show how it works step by step, with reference to Figure \ref{Figure: example closed path of konig type 1}.\\
\noindent \textsl{I step)} Starting from the tetromino $\{A_1,A_2,A_3,A_4\}$, observe that $A_4$ is to the East of $A_3$, so we are in the case of Figure \ref{Figure: particular tetromino} (B). Hence we set $Y^{(1)}=Y^{(1)}_1\sqcup Y^{(1)}_2$ with $Y^{(1)}_1=\{x_1,x_2\}$ and $Y^{(1)}_2=\{x_{1'},x_{2'}\}$, where $x_1>x_2>x_{1'}>x_{2'}$.\\
\noindent \textsl{II step)} Consider now the tetromino $\{A_3,A_4,A_5,A_6\}$, for which the case (V) of Table \ref{Table2} occurs. Hence we set $Y^{(2)}=Y^{(2)}_1\sqcup Y^{(2)}_2$ with $Y^{(2)}_1=Y^{(1)}_1\sqcup Z^{(2)}_1$, where $Z^{(2)}_1=\{x_3,x_4\}$, and $Y^{(2)}_2=Y^{(1)}_2\sqcup Z^{(2)}_2$, where $Z^{(2)}_2=\{x_{3'},x_{4'}\}$, with $x_1>x_2>x_3>x_4>x_{1'}>x_{2'}>x_{3'}>x_{4'}$.\\
\noindent \textsl{III step)} Focusing on $\{A_5,A_6,A_7,A_8\}$, we are in the case (III) of Table \ref{Table2} after suitable rotations of $\mathcal{P}$. Hence $Z^{(3)}_1=\{x_5,x_6\}$ and $Z^{(3)}_2=\{x_{5'},x_{6'}\}$, where $x_1>\dots>x_4>x_5>x_6>x_{1'}>\dots>x_{4'}>x_{5'}>x_{6'}$.\\
\noindent \textsl{IV--V steps)} Arguing as before, step IV contributes $x_7$. Then take the tromino $\{A_8,A_9,A_{10}\}$, so we have the case (I) of Table \ref{Table2} after a reflection of $\mathcal{P}$ with respect to the $y$-axis. Hence $Z^{(5)}_1=\{x_8\}$ and $Z^{(5)}_2=\{x_{8'}\}$, with $x_1>\dots>x_7>x_8>x_{1'}>\dots>x_{7'}>x_{8'}$.\\
\noindent \textsl{VI--IX steps)} We can argue as in the previous steps, so we obtain $x_1>\dots>x_{11}>x_{12}>x_{1'}>\dots>x_{11'}>x_{12'}$. From this point on, it should be clear to the reader how we continue. \\
\noindent \textsl{X step)} Consider $\{A_{13},A_{14},A_{15},A_{16}\}$, so we are in the case (III) of Table \ref{Table2} after suitable rotations of $\mathcal{P}$. Hence $Z^{(10)}_1=\{x_{13},x_{14}\}$ and $Z^{(10)}_2=\{x_{13'},x_{14'}\}$.\\
\noindent \textsl{XI step)} Let $\{A_{14},A_{15},A_{16},A_{17}\}$, so we are in the case (VIII) of Table \ref{Table2} after suitable rotations of $\mathcal{P}$. Hence $Z^{(11)}_1=\{x_{15}\}$ and $Z^{(11)}_2=\{x_{15'}\}$.\\
\noindent \textsl{XII step)} Considering $\{A_{16},A_{17},A_{18},A_{19}\}$, we get the case (IV) of Table \ref{Table2} up to rotations of $\mathcal{P}$. Hence $Z^{(12)}_1=\{x_{16},x_{17}\}$ and $Z^{(12)}_2=\{x_{16'},x_{17'}\}$.\\
\noindent \textsl{XIII step)} Take the tromino $\{A_{18},A_{19},A_{20}\}$, so we are in the case (II) of Table \ref{Table2} after a suitable rotation of $\mathcal{P}$. Hence $Z^{(13)}_1=\{x_{18}\}$ and $Z^{(13)}_2=\{x_{18'}\}$.\\
\noindent \textsl{XIV step)} Take $\{A_{19},A_{20},A_{21},A_{22}\}$, so we have the case (III) of Table \ref{Table2} after a suitable rotation of $\mathcal{P}$. Hence $Z^{(14)}_1=\{x_{19},x_{20}\}$ and $Z^{(14)}_2=\{x_{19'},x_{20'}\}$.\\
\noindent \textsl{XV step)} Focus on the tetromino $\{A_{20},A_{21},A_{22},A_{23}\}$, so we get the case (VIII) of Table \ref{Table2} after a suitable rotation of $\mathcal{P}$. Hence $Z^{(15)}_1=\{x_{21}\}$ and $Z^{(15)}_2=\{x_{21'}\}$.\\
\noindent \textsl{XVI step)} Considering the tromino $\{A_{22},A_{23},A_{24}\}$, we are in the case (II) of Table \ref{Table2}, and $Z^{(16)}_1=\{x_{22}\}$ and $Z^{(16)}_2=\{x_{22'}\}$.\\
\noindent \textsl{XVII step)} Take $\{A_{23},A_{24},A_{25},A_{26}\}$, so we are in the case (III) of Table \ref{Table2}. Hence $Z^{(17)}_1=\{x_{23},x_{24}\}$ and $Z^{(17)}_2=\{x_{23'},x_{24'}\}$.\\
\noindent \textsl{XVIII step)} Consider $\{A_{24},A_{25},A_{26},A_{27}\}$; we are in the case (VII) of Table \ref{Table2}. Therefore $Z^{(18)}_1=\{x_{25}\}$ and $Z^{(18)}_2=\{x_{25'}\}$.\\
\noindent \textsl{XIX step)} Take $\{A_{26},A_{27},A_{28},A_{29}\}$; we are in the case (III) of Table \ref{Table2}, after a reflection with respect to the $x$-axis. Hence $Z^{(19)}_1=\{x_{26},x_{27}\}$ and $Z^{(19)}_2=\{x_{26'},x_{27'}\}$.\\
\noindent \textsl{XX step)} Consider $\{A_{27},A_{28},A_{29},A_{30}\}$, so we get the case (VII) of Table \ref{Table2}, after a reflection with respect to the $x$-axis. Hence $Z^{(20)}_1=\{x_{28}\}$ and $Z^{(20)}_2=\{x_{28'}\}$.\\
\noindent \textsl{Last step)} Consider $\{A_{29},A_{30},A_{1},A_{2}\}$, so we are in the case of Figure \ref{Figure: tetromino in the last part CASE 1} (B), or equivalently of (III) in Table \ref{Table2}. Hence $Z^{(21)}_1=\{x_{29},x_{30}\}$ and $Z^{(21)}_2=\{x_{29'},x_{30'}\}$.\\
In conclusion, we obtain the ordered set of variables as
$$ x_1>x_2>\dots>x_{30}>x_{1'}>x_{2'}>\dots>x_{30'}.$$
\end{exa}
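The bookkeeping in the example above can be sketched in a few lines of code. The helper below is our own illustrative notation (not from the paper): it concatenates the per-step blocks $Z^{(q)}_1$ into $Y_1$ and mirrors them into $Y_2$ with primed labels, which is how the final order arises.

```python
def build_order(blocks):
    """Concatenate the per-step index blocks into Y_1, then mirror them into
    Y_2 with primed labels; the lexicographic order is Y_1 followed by Y_2."""
    unprimed = [i for block in blocks for i in block]
    return [f"x_{i}" for i in unprimed] + [f"x_{i}'" for i in unprimed]

# First two steps of the example: {x_1, x_2}, then {x_3, x_4}.
order = build_order([[1, 2], [3, 4]])
print(order)  # ['x_1', 'x_2', 'x_3', 'x_4', "x_1'", "x_2'", "x_3'", "x_4'"]
```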
\begin{prop}\label{Lemma: A closed path with a L-conf is Konig type}
Let $\mathcal{P}:A_1,\dots,A_n$ be a closed path polyomino. Suppose that $\mathcal{P}$ has an $L$-configuration in every change of direction. Consider such an $L$-configuration as in Figure~\ref{Figure: L configuration A4 at north of A3} (A), up to relabelling of the cells of $\mathcal{P}$. Then $I_{\mathcal{P}}$ is of K\"onig type.
\end{prop}
\begin{proof}
We distinguish three cases depending on the position of $A_4$ with respect to $A_3$. First of all, we assume that $A_4$ is at North of $A_3$. We set $Y^{(1)}=Y^{(1)}_1 \sqcup Y^{(1)}_2$ where $Y^{(1)}_1=\{x_1,x_2\}$ and $Y^{(1)}_2=\{x_{1'},x_{2'}\}$ with $x_1>x_2>x_{1'}>x_{2'}$, with reference to Figure~\ref{Figure: L configuration A4 at north of A3} (A). The procedure described in Definition~\ref{Procedure: to define Y} finishes with one of the two cases displayed in Figure~\ref{Figure: L configuration A4 at north of A3} (B) and (C). As done in Proposition~\ref{Lemma: A closed path with a tetromino is Konig type}, the desired conclusion follows.
\begin{figure}[h]
\centering
\subfloat[]{\includegraphics[scale=1]{Lconf_A4_at_North_A3.png}}\qquad
\subfloat[Note $a \in Y_2$]{\includegraphics[scale=1]{Lconf_A4_at_North_A3_I.png}}\qquad
\subfloat[Note $b\in Y_2$]{\includegraphics[scale=1]{Lconf_A4_at_North_A3_II.png}}
\caption{}
\label{Figure: L configuration A4 at north of A3}
\end{figure}
\noindent We assume now that $A_4$ is at South of $A_3$. In such a case we set $Y^{(1)}=Y^{(1)}_1 \sqcup Y^{(1)}_2$ where $Y^{(1)}_1=\{x_1\}$ and $Y^{(1)}_2=\{x_{1'}\}$ with $x_1>x_{1'}$, with reference to Figure \ref{Figure: L configuration A4 at south of A3} (A). Observe that the only two possible last cases are those in Figure \ref{Figure: L configuration A4 at south of A3} (B) and (C).
\begin{figure}[h]
\centering
\subfloat[]{\includegraphics[scale=0.8]{Lconf_A4_at_South_A3.png}}
\subfloat[Note $a\in Y_2$]{\includegraphics[scale=0.8]{Lconf_A4_at_South_A3_I.png}}
\subfloat[Note $b\in Y_2$]{\includegraphics[scale=0.8]{Lconf_A4_at_South_A3_II.png}}
\caption{}
\label{Figure: L configuration A4 at south of A3}
\end{figure}
\noindent Finally, we assume that $A_4$ is at East of $A_3$. In such a case we set $Y^{(1)}=Y^{(1)}_1 \sqcup Y^{(1)}_2$ where $Y^{(1)}_1=\{x_3\}$ and $Y^{(1)}_2=\{x_{3'}\}$ with $x_3>x_{3'}$, with reference to Figure~\ref{Figure: L configuration A4 at east of A3} (A). Observe that the only two possible last cases are those in Figure \ref{Figure: L configuration A4 at east of A3} (B) and (C), where we set:
$$ x_1>x_2>x_3>\dots>x_r>x_{1'}>x_{2'}>x_{3'}>\dots>x_{r'}.$$
The conclusion follows arguing as before.
\begin{figure}[h]
\centering
\subfloat[]{\includegraphics[scale=0.8]{Lconf_A4_at_East_A3.png}}\quad
\subfloat[]{\includegraphics[scale=0.8]{Lconf_A4_at_East_A3_I.png}}\quad
\subfloat[]{\includegraphics[scale=0.8]{Lconf_A4_at_East_A3_II.png}}
\caption{}
\label{Figure: L configuration A4 at east of A3}
\end{figure}
\end{proof}
\begin{exa}\rm
In Figure~\ref{Figure: example closed path of konig type 2} we show two examples of the procedure described in Proposition~\ref{Lemma: A closed path with a L-conf is Konig type}. In particular, $\mathcal{P}_1$ is of K\"onig type with respect to the lexicographic order induced by
$$ x_1>x_2>\dots>x_{26}>x_{1'}>x_{2'}>\dots>x_{26'}$$
and to the twenty-six generators of $I_{\mathcal{P}_1}$ corresponding to the inner intervals having $i$ and $i'$ as diagonal or anti-diagonal corners, for all $i\in [26]$; a similar description holds for the polyomino $\mathcal{P}_2$.
\begin{figure}[h!]
\centering
\subfloat[Closed path $\mathcal{P}_1$]{\includegraphics[scale=1]{Esempio_2_Kong_type.png}}
\subfloat[Closed path $\mathcal{P}_2$]{\includegraphics[scale=1]{Esempio_2_Kong_type_II.png}}
\caption{}
\label{Figure: example closed path of konig type 2}
\end{figure}
\end{exa}
\begin{thm}\label{konigfinal}
Let $\mathcal{P}$ be a closed path polyomino. Then $I_{\mathcal{P}}$ is of K\"onig type.
\end{thm}
\begin{proof}
If $\mathcal{P}$ contains a configuration of four cells as in Figure~\ref*{Figure: particular tetromino} (A), then $I_{\mathcal{P}}$ is of K\"onig type by Proposition~\ref{Lemma: A closed path with a tetromino is Konig type}. If $\mathcal{P}$ does not contain any such configuration, then it is easy to see that $\mathcal{P}$ has an $L$-configuration in every change of direction, so $I_{\mathcal{P}}$ is of K\"onig type by Proposition~\ref{Lemma: A closed path with a L-conf is Konig type}. Hence, the desired claim is proved.
\end{proof}
\begin{conj}
Let $\mathcal{P}$ be a non-simple thin polyomino. Then $I_{\mathcal{P}}$ is of K\"onig type.
\end{conj}
\section*{Acknowledgement} RD was supported by the Alexander von Humboldt Foundation and a grant of the Ministry of Research, Innovation and Digitization, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2021-1633, within PNCDI III.
\section{Introduction}
\label{sec:intro}
The detection of large primordial B-modes by the BICEP2 experiment, with a tensor-to-scalar ratio $r=0.2^{+0.07}_{-0.05}$ \citep{BICEP2results}, has excited the cosmology community for the past month.
Not only does this large value for $r$ suggest that sub-orbital B-mode experiments like SPIDER \citep{SPIDER}, CLASS \citep{CLASS}, Polarbear \citep{Polarbear}, SPTpol \citep{SPTpol} and ACTpol \citep{ACTPol} should be able to characterize the B-mode power spectrum to high precision over the next few years, but it also points towards a slight tension with the upper limit $r<0.11$ (95\% c.l.) deduced from measurements of the temperature power spectrum by the Planck team \citep{Planck2013params}. One simple extension that restores the consistency between these measurements is to allow for a small negative running of the scalar spectrum, pushing us into the regime of non-standard inflation scenarios, since the simplest slow-roll models predict negligible running.
These recent findings have spurred much discussion. Could the large B-modes be due to foregrounds or some unaccounted temperature-polarization leakage \citep{Liu2014}? Is the tensor power spectrum blue-tilted \citep{Brandenberger2014, Gerbino2014, Biswas2014}? Do the BICEP2 results require non-standard inflation scenarios or more general early-universe models \citep{Harigaya2014, Nakayama2014, Contaldi2014, Abazajian2014, Miranda2014, McDonald2014, Scott2014}? Should we worry about large-field excursions \citep{Kehagias2014, Choudhury2014, Lyth2014} violating the Lyth bound? Maybe primordial magnetic fields rather than gravity waves generate the B-mode signal \citep{Bonvin2014}? Is a sterile neutrino the culprit \citep{Zhang2014, Dvorkin2014}? What about topological defects \citep{Lizarraga2014, Moss2014}? Clearly, more data are needed to refine the polarized foreground model, further tighten the constraints on the B-modes and answer these questions; the community is eagerly awaiting the next round of results from Planck, SPTpol and the Keck array.
In this paper, we suggest yet another possible explanation for the large value of $r$ found by BICEP2 that is also consistent with the all-sky upper limit from Planck. The idea is motivated by the fact that the BICEP2 footprint is about $60$ degrees away from the maximum of the hemispherical power asymmetry \citep{Eriksen2004, Bennett:2010jb, Planck2013power, Aslanyan2013, Akrami2014}. As suggested by \citet{Dai2013}, a spatial variation of $r$ could provide one viable explanation for at least part of the temperature power asymmetry and its scale dependence \citep{Hirata2009power, Flender2013}. This could, e.g., be caused by an exotic super-horizon tensor mode \citep{Abolhasani2013}, a modulated preheating scenario \citep{Bethke2013}, dissipative processes \citep{DAmico2013}, or more generally in multi-field inflation models that possibly independently generate scalar and tensor perturbations.
One expectation is that a detection of primordial B-modes will be easier in the Southern hemisphere, since there the value of $r$ lies above the average. This also suggests that in the Northern hemisphere the tensor contribution should be much smaller. This hypothesis could be tested by future B-mode experiments with sufficient sky coverage, providing a check of the stationarity of the tensor contribution across the sky and even probing cases beyond a simple dipolar power asymmetry.
\vspace{-3.5mm}
\section{Linking the power asymmetry to spatially varying tensor modes}
\label{sec:spatial_var_r}
The hemispherical asymmetry is consistent with a dipolar modulation of an otherwise statistically isotropic cosmic microwave background (CMB) sky \citep{Prunet2005, Gordon2005, Gordon2007}, where the best-fitting dipole in galactic coordinates has a direction $(l, b) \approx (227, -27)^\circ$ and amplitude (in terms of r.m.s. temperature fluctuations on large angular scales, multipoles $\ell \lesssim 64$) of $A = 0.072 \pm 0.022$ \citep{Planck2013power}.
Explicitly, the CMB temperature fluctuation in a direction $\hat n$ can be written as $\Delta T(\hat n)=\Delta T_{\rm iso}(\hat n)[1+A \,\hat n\cdot \hat p]$ in this case, where $\Delta T_{\rm iso}(\hat n)$ denotes the temperature fluctuation for a statistically isotropic sky and $\hat p$ defines the dipole axis. Then $A$ can be defined as $A \simeq (1/2)[\sum_{\ell=2}^{\ell_{\rm max}}(2\ell +1) \Delta C_\ell^{TT}/C_\ell^{TT}]/\sum_{\ell=2}^{\ell_{\rm max}}(2\ell +1)$, where $\Delta C_\ell^{TT}/C_\ell^{TT}$ is the fractional correction to the CMB temperature power spectrum with respect to the sky average \citep[see][for more details]{Dai2013}.
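The definition of $A$ above is a weighted average that is easy to check numerically. In the following sketch the constant fractional change of $0.032$ is an illustrative value of ours, not a number derived from the data:

```python
import numpy as np

# A = (1/2) * [sum_{l=2}^{lmax} (2l+1) * dC_l/C_l] / [sum_{l=2}^{lmax} (2l+1)]
lmax = 64
ell = np.arange(2, lmax + 1)
weight = 2 * ell + 1
frac_change = np.full(ell.shape, 0.032)  # toy scale-independent dC_l^TT / C_l^TT
A = 0.5 * np.sum(weight * frac_change) / np.sum(weight)
print(A)  # 0.016 for a scale-independent fractional change
```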
In galactic coordinates, the central region of the BICEP2 footprint lies at $(l, b) \simeq (310, -59)^\circ$, which is roughly $60^\circ$ away from the power maximum. Assuming that $r$ varies spatially as $r(\theta)=r_0+\Delta r \cos\theta$, with $\theta$ defining the angle relative to the maximum of the hemispherical power asymmetry, this suggests $r_{\rm BICEP}\approx r_0+ \Delta r/2$. Assuming $r_0\simeq \Delta r\simeq 0.11$ one thus finds $r_{\rm BICEP}\approx0.17$, consistent with the BICEP2 result. This model by construction is also consistent with the Planck all-sky constraint. It furthermore suggests that in the direction $(l, b) \approx (227, -27)^\circ$ the contribution of tensor modes is close to $r_{\rm max}\approx 0.22$, while in the opposite direction $r_{\rm min}\approx 0$, a hypothesis that can be checked by future B-mode experiments. For this it will be important to distribute the measurements over both hemispheres, sampling a sufficient fraction of the whole sky, and to combine different experiments. In the near future, this question could potentially be addressed by CLASS and SPIDER, which independently cover large parts of the CMB sky. Looking farther ahead, a CMB polarization measurement from space using PIXIE \citep{Kogut2011PIXIE}, LiteBird \citep{LiteBIRD} or a mission similar to PRISM \citep{PRISM2013WPII} could also allow testing this scenario.
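The arithmetic behind these numbers can be reproduced directly from the dipolar model $r(\theta)=r_0+\Delta r\cos\theta$, using the values quoted in the text:

```python
import numpy as np

r0, dr = 0.11, 0.11  # values assumed in the text

def r_of_theta(theta_deg):
    """Dipolar model r(theta) = r0 + dr*cos(theta), with theta measured from
    the maximum of the hemispherical power asymmetry."""
    return r0 + dr * np.cos(np.radians(theta_deg))

r_bicep = r_of_theta(60.0)   # BICEP2 field, ~60 deg from the maximum
r_max = r_of_theta(0.0)      # towards (l, b) ~ (227, -27) deg
r_min = r_of_theta(180.0)    # opposite direction
print(r_bicep, r_max, r_min)  # ~0.165, 0.22, 0.0
```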
\begin{figure}
\centering
\includegraphics[width=0.94\columnwidth]{./eps/TT.eps}
\\[1.5mm]
\includegraphics[width=0.94\columnwidth]{./eps/EE.eps}
\\[1.5mm]
\includegraphics[width=0.94\columnwidth]{./eps/BB.eps}
\caption{Relative changes in the $TT$ and $EE$ power spectra caused by the spatial variation of $r$ (upper two panels) with respect to the all-sky average with $r\simeq 0.11$. The lower panel directly shows the $BB$ power spectrum for $r=0, 0.11$ and $0.22$. If a dipolar modulation of $r$ is present, measurements of the $EE$ and $BB$ power spectra will add additional direct information. The curves were obtained using CAMB \citep{CAMB} for the Planck cosmology \citep{Planck2013params}.}
\label{fig:CMB_TT}
\end{figure}
In Fig.~\ref{fig:CMB_TT}, we show the temperature and polarization power spectra at large angular scales ($\ell\lesssim100$). For the $TT$ and $EE$ power spectra the relative differences with respect to the sky average are shown, while for $BB$ we show the power spectrum directly.
Using $r_{\rm max}\approx 0.22$ and $r_{\rm min}\approx 0$, at $\ell\lesssim 64$ we find an overall power asymmetry amplitude of $A\simeq 0.016$, which explains part of the power asymmetry found by the Planck team \citep{Planck2013power}. To explain the full power asymmetry, one requires $r_0\simeq \Delta r \simeq 0.65$ \citep{Dai2013}, which is already in strong tension with the all-sky average from Planck. It would furthermore predict $r_{\rm BICEP}\simeq 0.94$, which is ruled out by the BICEP2 measurement at more than $10 \sigma$. We note that $r_0\simeq \Delta r$ maximizes the overall tensor modulation, while more generally $0<\Delta r< r_0$ gives models with $r>0$. A model with $r_0\simeq \Delta r\simeq 0.2$ is furthermore consistent with the $1\sigma$ upper limit of BICEP2, $r_{\rm BICEP}\simeq 0.27$, but in this case the tension with the full-sky limit from Planck is not avoided. Still, this possibility could be constrained by future B-mode experiments.
While an explanation for the spatial variation of the tensor-to-scalar ratio points towards non-standard early-universe scenarios, the suggested model links two otherwise independent phenomena, providing a simple way to test, and possibly exclude, this hypothesis.
It is furthermore clear that a simple dipolar scaling of the power modulation might not be sufficient \citep{Planck2013power}, and that even combinations of spatially varying cosmological parameters might be at work.
These aspects require more data and careful statistical tests which are beyond the scope of this paper. Measurements of the $EE$ and $BB$ power spectra as well as the $TE$, $TB$ and $EB$ power spectra would shed additional light on the underlying physical mechanism, allowing us to rule out different possibilities. In particular, the extra information could be used to increase the significance of a detection and push below the $TT$ cosmic variance limit if indeed $r$ is varying spatially at the level discussed here. Finally, even if spatially varying $r$ can only explain part of the hemispherical power asymmetry, it could render the scalar contribution to the anomaly less significant, dropping it below the $2\sigma$ level.
\vspace{-3mm}
\section{Conclusion}
\label{sec:conclusions}
We are entering a new era of CMB cosmology, with searches for B-modes turning into precise measurements. It is thus important to think about physical scenarios that can be tested with future polarization measurements, taking the clues given by the current data seriously. Here, we discussed the idea of a spatially varying tensor-to-scalar ratio connecting the recent BICEP2 result and the hemispherical power asymmetry. While this possibility requires a non-standard early-universe scenario, the model makes predictions that can be tested with future B-mode experiments which cover a large fraction of the sky. We argued that a spatial variation of $r$, while consistent with the current Planck full-sky limit as well as the BICEP2 result, cannot fully account for the amplitude of the hemispherical asymmetry. In fact, variations of $r$ as a full description of the asymmetry can be ruled out at more than $10\sigma$.
However, a simple dipolar modulation of $r$ and even more complicated spatial dependencies are consistent with current measurements, and could still account for some portion of the hemispherical asymmetry. If it is present, the contribution from tensor fluctuations to the CMB polarization signal should be much smaller in the Northern hemisphere, pushing it close to $r\simeq 0$ in the extreme case, while suggesting a value for $r$ that is slightly larger than for BICEP2 towards the direction of the hemispherical power asymmetry maximum, $(l, b) \approx (227, -27)^\circ$. This would inevitably point us beyond the standard inflation scenario, providing a direct link for one of the CMB temperature anomalies with an underlying physical process, a possibility that should be further explored.
Future work will investigate the possible connection between this and other temperature anomalies, such as the quadrupole and octopole alignments \citep{Oliveira2004, Copi2004, Schwarz2004}. This is motivated by the fact that tensor modes also contribute to the temperature power spectrum at multipoles $\ell \lesssim 100$. It is furthermore important that spatial variations of tensor fluctuations are not constrained by large-scale structure surveys so that B-mode measurements can provide unique insights in this direction. Also, even if a spatial variation of $r$ can only explain part of the hemispherical power asymmetry, it could decrease the scalar contribution to the anomaly below the $2\sigma$ level.
\small
\vspace{-3.5mm}
\section*{Acknowledgments}
JC cordially thanks Glenn Starkman for stimulating discussions of this problem. He is also grateful to Saroj Adhikari, Grigor Aslanyan, Guido D'Amico, Anupam Mazumdar, Arthur Kosowsky and Subodh Patil for helpful comments on the manuscript. This work was supported by NSF Grant No. 0244990 and by the John Templeton Foundation.
\bibliographystyle{mn2e}
\section{Introduction}
In conventional information theory and cryptography, it is taken for granted that a digital message can always be copied easily, even by someone ignorant of its meaning. Analog messages (e.g.~handwritten signatures) are somewhat harder to copy, but not really infeasible, and digital data can be protected to a considerable extent by interposing a restrictive hardware interface between the data and the outside world (e.g.~smart credit cards); but in both these cases, the difficulty of copying is only technological, not fundamental. However, when elementary quantum systems such as polarized photons are used as the transmission medium, routine copying of messages is no longer possible even in principle. In particular, there are ways of encoding messages so that they can be copied reliably only with the help of certain key information used in forming the message.
Quantum coding was first described in~\cite{W}, along with two applications: making money that is in principle impossible to counterfeit, and multiplexing two or three messages in such a way that only one can be read. More recently~\cite{BBBW}, quantum coding has been used in conjunction with public key cryptographic techniques to yield several schemes for unforgeable subway tokens. Here we show that quantum coding considerably enhances the usefulness of another standard cryptographic device, the one-time pad.
Mathematically, a polarized photon acts like a two-bit read-once memory one of whose bits ($k$) serves as a read key for the other ($m$). Querying the memory with the correct $k$ yields the correct value of $m$. Querying with the wrong $k$ yields a random bit instead of $m$, and in either case querying resets the memory so that subsequent queries yield no new information. Even after a query, it is generally impossible to infer the initial state of \mbox{either} bit, because the memory gives no indication of whether its response was the correct response to the correct key or a random response to the wrong key. Because it represents the behavior
of an elementary quantum system, this kind of restricted-access memory should be thought of as a natural information-processing primitive,
not as a complex technological device that could probably be circumvented in principle.
Ordinarily, when one thinks of a technological restricted-access memory, one has in mind an information-storage device. Photons can also be stored (e.g.~between mirrors, or in a closed optical fiber), but they cannot in practice be stored for very long, and their natural application is in the transmission of information. We thus have a situation in which restricted-access memory, as a storage device, is possible in practice but not in principle via conventional technology, and in principle but not in practice via storage of polarized photons. On the other hand, restricted-access transmissions, which can be read or copied only with the help of a key, are possible both in principle and in practice using polarized photons.
\section{Essential Properties of Polarized Photons}
Polarized light can be produced by sending ordinary light through a polariz\-ing apparatus such as a Polaroid filter or Nicol prism. A~beam of polarized light is characterized by its polar\-i\-zation axis, which is determined by the orientation of the polarizing apparatus in which the beam originates. \mbox{Although} polarization is a continuous variable, and in principle can be measured as accu\-rately as desired by passing the polarized beam through a second polar\-izing apparatus, the uncertainty principle forbids measurements on any \mbox{single} photon from revealing more than one bit about the beam's polarization. In~particular, if a beam with polar\-i\-zation axis $\alpha$ is sent into a polarizer oriented at angle $\beta$, the individual photons behave dichotomously and probabilistically, being transmitted with probability $\cos^2(\alpha - \beta)$ and \mbox{absorbed} with the complementary probability $\sin^2 (\alpha - \beta)$. The photons behave deter\-min\-is\-ti\-cally only when the two axes are parallel (certain transmission) or perpendicular (certain absorption).
If the two axes are not perpendicular, so that some photons are transmitted, one might hope to learn additional information about $\alpha$ by measuring the transmitted photons again with a polarizer oriented at some third angle; but this is to no avail, because the transmitted photons, in passing through the $\beta$ polarizer, emerge with exactly $\beta$ polarization, having lost all memory of their previous polarization $\alpha$. Any other elementary two-state quantum system, such as a spin-$1/2$ atom, behaves similarly dichotomously and probabilistically.
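The dichotomous single-photon behaviour described above — transmission with probability $\cos^2(\alpha-\beta)$, with any transmitted photon re-polarized along the polarizer axis and all memory of $\alpha$ lost — can be illustrated with a toy Monte Carlo. The angles and sample size below are our own illustrative choices:

```python
import math
import random

def measure(photon_angle_deg, polarizer_angle_deg, rng):
    """Simulate one photon hitting a polarizer: transmitted with probability
    cos^2(alpha - beta); a transmitted photon emerges polarized at beta,
    having lost all memory of its previous polarization alpha."""
    delta = math.radians(photon_angle_deg - polarizer_angle_deg)
    if rng.random() < math.cos(delta) ** 2:
        return True, polarizer_angle_deg  # transmitted, re-polarized at beta
    return False, None                    # absorbed

rng = random.Random(0)
# Parallel axes: certain transmission; perpendicular axes: certain absorption.
assert measure(0, 0, rng)[0] and not measure(0, 90, rng)[0]
# 45-degree offset: roughly half the photons get through.
n = 100_000
hits = sum(measure(0, 45, rng)[0] for _ in range(n))
print(hits / n)  # close to 0.5
```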
Another way one might hope to learn more than one bit from a single photon would be not to measure it directly, but rather somehow amplify it into a clone of identically polarized photons, then perform measurements on these; but this hope is also vain, because such cloning can be shown to be inconsistent with the foundations of quantum mechanics~\cite{WZ}.
\section{Quantum Coding}
In order to encode a message bit $m$ into a photon that can be read or copied reliably only with the help of a key bit $k$, we generate a photon with a selected one of the four polarization directions $0$, $45$, $90$ and $135$ degrees. [Generating a single photon of known polarization is possible by a variation of the Einstein-Podolsky-Rosen setup~\cite{Bo}, in which a decaying atom emits two oppositely polarized photons. By polarizing and counting one photon, the other's presence is assured and its polarization fixed without measuring it directly.] If the key bit is a $0$, then the photon is polarized rectilinearly, i.e.~$0$ or $90$ degrees according to whether the message bit is $0$ or $1$. If the key bit is a $1$, then the photon is polarized diagonally, i.e.~$45$ or $135$ degrees according to the message bit.
\vspace{1.5ex}
\noindent
\emph{Def.}\ The \emph{quantum encoding} $Q_K (M)$ of a message $M$ by a key $K$ of equal length is the train of photons obtained by applying the above procedure bitwise to $M$ and $K$.
\vspace{1.5ex}
To read a quantum-encoded message with the help of its key, one simply reads each photon with a polarizer oriented so as to cause it to behave deter\-mi\-nis\-ti\-cally, for example, reading the rectilinear photons with a $0$-degree polarizer and the diagonal photons with a $45$-degree polarizer. An attempt to read a photon with the wrong key causes it to behave randomly, losing its stored information. For example, if a $45$- or $135$-degree photon is read with a $0$-degree polarizer, it will be transmitted with $50$ per cent probability in either case, and all evidence of its original polarization will be lost.
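The encoding and readout just described can be mimicked classically: a photon is modelled as one of the four angles, the key bit selects the measurement basis, and a wrong-basis readout is a fair coin with loss of the stored bit. This toy model (our own construction, not a description of apparatus) might look as follows:

```python
import random

ANGLES = {(0, 0): 0, (0, 1): 90, (1, 0): 45, (1, 1): 135}  # (key, message) -> degrees

def encode(m, k):
    """Quantum-encode message bit m with key bit k as a polarization angle."""
    return ANGLES[(k, m)]

def read(angle, k, rng):
    """Read a photon with key guess k: deterministic if the basis matches,
    a fair coin (and loss of the stored bit) otherwise."""
    basis = 0 if angle in (0, 90) else 1
    if basis == k:
        return 0 if angle in (0, 45) else 1
    return rng.randrange(2)

rng = random.Random(1)
# Correct key: every bit is recovered.
assert all(read(encode(m, k), k, rng) == m for m in (0, 1) for k in (0, 1))
# Wrong key: about half the readouts are wrong.
n = 100_000
errs = sum(read(encode(0, 0), 1, rng) != 0 for _ in range(n))
print(errs / n)  # close to 0.5
```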
Suppose an eavesdropper intercepts and attempts to read a quantum transmission $Q_K(M)$ without being detected. Consider first the case in which the message $M$ and key $K$ are both random. Not knowing $K$, the eavesdropper makes the wrong measurement on half the photons, and thus obtains a message $M'$ differing from $M$ in $1/4$ of its bit positions (of course the eavesdropper does not know which ones). Having destroyed the original transmission by reading it, the eavesdropper must now, in order to remain undetected, inject a forged transmission designed to approximate the intercepted one as well as possible. Not knowing which measurements are wrong, the eavesdropper's best strategy is to produce a new train of photons in agreement with the results of the measurements, as if they had all been right. Half of the photons in such a forged transmission will be correct; the other half have wrong key values (i.e.~will be diagonal when they should be rectilinear, or vice versa), and when subsequently measured with the correct key by the intended receiver, these will give wrong answers half the time. Thus the error probability is $1/4$ per bit, both for reading the quantum transmission without knowing its key, and for having a forged replacement agree with what the original message would have said when decoded by the intended receiver. Of course, if the intended receiver knew only $K$ but had no prior knowledge of $M$, the eavesdropping would still\,\footnote{\,This word (``still'') appears to be superfluous. The authors do not understand in 2014 what they could have meant by it when they wrote the original 1982 manuscript.} go undetected, since a random message with random errors still looks random. Quantum money~\cite{W} corresponds to the case where the intended receiver (the bank) has perfect knowledge of both $M$ and $K$, while the counterfeiter knows neither. 
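The $1/4$ error probability for an intercept-resend eavesdropper can be checked by a short Monte Carlo, modelling each photon by its basis (the key bit) and stored bit — a classical toy of ours, sufficient for the probability bookkeeping:

```python
import random

rng = random.Random(2)

def measure(basis, bit, guess_basis):
    """Measure a photon in guess_basis: the correct basis returns the stored
    bit; the wrong basis returns a fair coin. Either way the photon is
    re-prepared in guess_basis with the observed bit (intercept-resend)."""
    out = bit if basis == guess_basis else rng.randrange(2)
    return out, (guess_basis, out)

n = 200_000
errors = 0
for _ in range(n):
    k, m = rng.randrange(2), rng.randrange(2)  # random key and message bits
    eve_basis = rng.randrange(2)               # Eve guesses the basis
    _, (b2, m2) = measure(k, m, eve_basis)     # Eve reads and resends
    received, _ = measure(b2, m2, k)           # receiver reads with the true key
    errors += received != m
print(errors / n)  # close to 0.25
```

When Eve guesses the basis correctly (probability $1/2$) no error is introduced; when she guesses wrongly, the resent photon gives the receiver a random bit, wrong with probability $1/2$ — hence the overall $1/4$.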
The usual message $M$ sent over communication channels is intermediate between these extremes: the receiver has partial prior knowledge of it (e.g.~expecting it to be in English).
Simply encoding an arbitrary message $M$ with a random quantum key $K$ has two disadvantages: 1)~if~the message is too random the receiver won't be able to detect eavesdropping, for the reason mentioned above; 2)~if~the message is too redundant (e.g.~English), eavesdropping will be detected, but by then the eavesdropper will have gained significant information about the message, perhaps even enough to decrypt it uniquely, because eavesdropping \mbox{induces} errors in only $1/4$ of the bits. (In this respect quantum coding differs from
ordinary one-time pad encryption, where ignorance of the key prevents the eavesdropper from learning anything about the encrypted message,\footnote{\,In~2014, the authors realize that the phrase ``the encrypted message'' was ambiguous and confusing. They intended it to mean the ``the message whose meaning had been concealed by encryption''---i.e.~the plaintext---rather than what would nowadays be seen as its more likely meaning in a cryptologic context, ``the message in encrypted form''---i.e.~the ciphertext. Eavesdropping on a classical one-time pad transmission of course yields complete information on the ciphertext but none on the plaintext.} though of course it can be freely copied.)
We now define a stronger kind of coding that overcomes both these disadvantages. The trick is to make the message redundant with an error-detecting code $M \rightarrow E(M)$, then hide the redundancy from the eavesdropper by an ordinary one-time pad $J$, before applying quantum coding.
\pagebreak
\vspace{1.5ex}
\noindent
\emph{Def.}\ For any error-detecting code $E$ (assumed known to the eavesdropper) let the \emph{strong} \emph{quantum code} $S^E$ be defined as follows: let $J$ and $K$ be two
random key strings of length $|E(M)|$ not known to the eavesdropper.\footnote{\,In 2014, the authors noticed a possible ambiguity in this sentence. It~is the random key strings $J$ and $K$ that are unknown to the eavesdropper, not their length~$|E(M)|$.} Then the \emph{strong quantum encoding} $S^E_{J,K}(M)$ of message $M$ is the train of photons $Q_k(J \text{ xor } E(M))$.
\vspace{1.5ex}
It is obvious (because of the one-time pad $J$\,)
that the eavesdropper can learn \mbox{nothing} about $M$ from $S^E_{J,K}(M)$. Moreover, for suitable error-correcting codes,\footnote{\,In 2014, the authors noticed that they had meant ``error-detecting codes'' here.} eavesdropping \mbox{incurs} a high risk of being detected. Even the rudimentary code of repeating the message twice $E(M)=MM$ suffices to detect eavesdropping with probability at least $1-0.79^k$ when $k$ photons have been intercepted, quite close to the optimum $1-0.75^k$ implied by the independent, probabilistic nature of eavesdropping-induced errors.
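A purely classical sketch of the strong quantum encoding may help fix ideas. Here `E` is the repetition code from the text, photons are again represented as $(\text{bit},\text{basis})$ pairs, and the quantum behavior of wrong-basis measurements is of course not modeled; all function names are illustrative.

```python
import secrets

def E(m):
    """Rudimentary error-detecting code from the text: repeat the message."""
    return m + m

def strong_encode(m, j, k):
    """Classical bookkeeping for S^E_{J,K}(M) = Q_K(J xor E(M)): returns the
    (bit, basis) pair describing each photon's polarization."""
    c = [mi ^ ji for mi, ji in zip(E(m), j)]    # one-time pad hides redundancy
    # basis 0 = rectilinear (0/90 deg), basis 1 = diagonal (45/135 deg)
    return list(zip(c, k))

def strong_decode(photons, j, k):
    """Receiver reads each photon in the correct basis (key k only matters
    physically, not in this classical sketch), removes the pad, and checks the
    repetition code; returns (message, eavesdrop_or_tampering_detected)."""
    c = [bit for bit, _ in photons]             # correct key: deterministic read
    e = [ci ^ ji for ci, ji in zip(c, j)]
    n = len(e) // 2
    return e[:n], e[:n] != e[n:]

m = [1, 0, 1, 1]
j = [secrets.randbelow(2) for _ in range(2 * len(m))]
k = [secrets.randbelow(2) for _ in range(2 * len(m))]
photons = strong_encode(m, j, k)
print(strong_decode(photons, j, k))   # ([1, 0, 1, 1], False)
```

Any single-bit alteration of the photon train makes the two halves of $E(M)$ disagree after unpadding, so it is flagged by the second return value.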
Although the simple code $E(M)=MM$ is nearly optimal for eaves\-drop\-ping-detection, a more complex code would be preferable for another reason: the detection of deliberate message alteration. Although randomly quantum-coded photons cannot be read reliably, they can be altered reliably. For example, the polarization axis of a photon can be rotated by $90$ degrees, without measuring or otherwise disturbing it, by passing the photon through an appropriate sequence of mirrors (or, more mysteriously, through a sugar solution). If this manipulation were applied to the first and \mbox{$(n+1)$}st
photons of a $2n$-photon transmission coded as above, both would be altered with certainty in such a way as to induce an undetected alteration in the message. A~more complex error-detecting code, e.g.~concatenating $MM$ with a check sum of the addresses of the ones in $M$, would make such alterations unlikely to escape detection. In the next section, where quantum transmissions are used to carry key information for future transmissions, it will be necessary to use an error-correcting code\,\footnote{\,In 2014, the authors noticed that they had meant ``error-detecting code'' here as well.} that provides some `diffusion', in the sense of making each bit of $E(M)$ depend on many bits of~$M$\@. This prevents the eavesdropper who has luckily guessed a few bits of the present key from thereby efficiently inferring any bits of future keys. Finally, in section~\ref{practical}, we will need a code $E$ that corrects errors as well as detecting them, to make up for photons that arrive at the receiver but fail to be detected.
\section{\mbox{Reusing a One-Time Pad Safely with the} \mbox{Help of Quantum Coding}}
We consider a situation in which two users of an insecure communications channel, who initially share a finite secret key, wish to communicate \mbox{secretly} as long as they can. In a classical channel, where eavesdropping is unde\-tect\-able in principle, they must assume that all their communications are being listened to, and the volume of safe communication is only \mbox{linear} in the size of the key, unless they resort to pseudorandom key-expansion
schemes~\cite{BM,Y},
which are at best (assuming $\text{P} \neq \text{NP}$) only computationally secure.
We show that by strongly quantum-coding their messages with suitable error-detecting codes, the sender and receiver can safely reuse the same $J$ and $K$ keys indefinitely until an eavesdrop is detected. (The safety is not absolute. There is an exponentially small chance ($O(2^{-|K|})$) that the eavesdropper, having guessed the entire quantum key $K$ correctly, will be able to eavesdrop on all the transmissions without detection and go on to break the reused $J$ key in the usual manner, as well as a moderate chance for the eavesdropper to learn
a few bits of the $K$ key correctly and go on to intercept and decrypt a few bits of each message; but these risks do not increase with the number of times an apparently secure key is reused.) An eavesdropper who tampers with or suppresses messages will also be detected with high probability, as will one who injects false messages.
When an eavesdrop is detected, the sender and receiver can go on communicating with only slightly diminished safety by replacing their compromised keys by fresh random information sent over the channel in previous uncompromised transmissions. With high probability they will be able to continue communicating in this fashion for an exponential ($2^{O(|K|)}$) number of key changes,\footnote{\,Note added in 2014: this 1982 use of the asymptotic notation ``$O$'' was an example of the common physics usage, where it may mean either an upper or lower bound depending on context.
Here, we intended a lower bound, which in modern computer science usage would be denoted~$2^{\Omega(|K|)}$.} unless the eavesdrops become so frequent before then that they are forced to use up key information faster than they can replace it, in which case they will (with high probability) be able to cease communication before any of their transmissions have become uniquely decodable by the eavesdropper.
Because the replacement keys are truly random, rather than being pseudorandomly computed from an original seed, the security of the scheme would not be reduced by allowing the eavesdropper unlimited computing power. Neither would it be compromised by technological improvements in the art of eavesdropping. The scheme does incorporate one technologically unrealistic assumption, viz.\ that photons can be detected with perfect efficiency
(cf.~section~\ref{practical}, where this assumption is not made).
We sketch how these advantages can be achieved. The ability to send many messages without loss of security (when no eavesdropping is detected) follows from the exponential decline of the probability of escaping detection with the number of distinct bit positions ever subjected to eavesdropping, whether these bit positions are listened
to all at once, or a few at a time over the course of many transmissions. For this reason, a quantum channel could even be used to safely send arbitrarily many copies of the \emph{same} strongly coded transmission, without the eavesdropper being able to forge it accurately, provided the copies were sent one at a time, each only on confirmation that the preceding one had apparently not been listened to. By contrast, if many identical transmissions were sent all at once, the eavesdropper could intercept them all, reliably determine each polarization by multiple measurements, and then escape detection by forging many trains of photons with the now known sequence of polarizations.
In order to be sure that no key is reused after a detected eavesdropping, the two communicating parties must alternate strictly in their use of each key. Otherwise the eavesdropper could, for example, intercept and absorb a message from A without forwarding it to B and then wait for B to use the same key on a subsequent message. The effect of absorbing a message is thus the same as that of spoiling it through eavesdropping: neither party reuses the key with which it was sent. If the initial body of shared key information included several keys reserved for first use by A and several for first use by B, the parties could alternate in the use of each key without strictly alternating transmissions. Of course if multiple keys were in use, and particularly if some transmissions were being absorbed by the eavesdropper, the communicating parties would have to prefix each quantum transmission by a (cleartext) indication of which key it was encoded with, to avoid reading a message with the wrong quantum key and spoiling it.
The ability to change keys without serious loss of security depends on using a somewhat diffusive error-detecting code when new key information is transmitted. With the simple non-diffusive code $E(M)=MM$, an eavesdropper who by good luck has correctly guessed the first and \mbox{$(n+1)$}st bits of the current $J$ and $K$ keys
will know what measurements to make to reliably read and forge the corresponding bits of a fresh pair of random keys $J'$, $K'$ when these are sent through the channel in four transmissions strongly encoded by $J$ and $K$; as well as confirming, by the consistency of decoding of the error-detecting code, that the guessed bits of $J$
and $K$ are indeed correct. Subsequent lucky guessing on further generations of keys (along with unlucky guessing causing some keys to be rejected due to detected eavesdropping) could be used to discover additional bits until, in linear time, some pair of keys $J''$ and $K''$ became entirely known.
To delay this collapse for an expected exponential number\,\footnote{\,Note added in 2014: as in the previous footnote, modern computer science usage would have us write an expected $2^{\Omega(n)}$ number of key changes.} of key changes $2^{O(n)}$, it suffices to use an error-detecting code that diffuses information about each bit of its argument $M$ among many bits of its value $E(M)$; so that knowledge, say, of any $n/4$ bits of $E(M)$ reveals little or nothing about any bit of $M$. Many error-detecting codes have this property, e.g.~a~random mapping from $n$-bit strings to $2n$-bit strings, or the linear code obtained by mod-$2$ multiplying $MM$ by an appropriate nonsingular matrix. With a diffusive code, knowledge of a few bits of $J$ and $K$ would not enable the eavesdropper to make reliable measurements of any bits of the replacement keys $J'$ and $K'$.
\section{A Practical Implementation}\label{practical}
Although visible light photons can be polarized with nearly perfect efficiency (e.g.~a~Nicol prism can split a beam into two beams, very nearly perfectly polar\-ized at right angles to each other, whose total intensity is scarcely less than that of the incoming beam), and transmitted with nearly perfect effi\-ciency (in a vacuum the only significant losses are due to diffraction, and these can be made negligible by using a beam diameter considerably greater than the square root of the product of the transmission distance and the wavelength of light), current technology allows them to be detected with only about thirty per cent efficiency.\footnote{\,Note added in 2014: this was the approximate quantum efficiency of photomultiplier tubes available
in~1982.}
Fortunately, the scheme of the preceding section can be modified to accom\-mo\-date finite detector efficiency, at the cost of using a more complicated error-correcting code $M \rightarrow E(M)$ in place of the error-detecting code, and a more complicated criterion for key rejection than the detection of a single error on decoding $E(M)$. Somewhat surprisingly,
the modified scheme remains secure against an eavesdropper with a more efficient, or even perfectly efficient, photon detector. The volume of safe communication for this scheme is more than linear, but may be less than exponential, in the initial key size.
The main modification is to use standard faint pulses of polarized light instead of single photons, each pulse being of such a size that when it is sent into a detector of the given efficiency (e.g.~$30$ per cent), or split into several fainter pulses (e.g.~by~a half-silvered mirror, or a Nicol prism) and sent into several such detectors, the total number of photons detected obeys a Poisson
distribution of mean~$1$. Such a standard faint pulse can easily be produced by filtering a standard bright pulse of polarized light to reduce its intensity by the requisite constant factor.
A standard faint pulse of a given polarization resists copying nearly as well as a single photon would. The best strategy for an eavesdropper to copy a faint pulse is to use a half-silvered mirror and two Nicol prisms to split the incoming pulse into four beams, one of each canonical polarization, and monitor each beam by a photon detector. Most of the time, only one of the detectors will register a photon, and the eavesdropper will be no better off than in the single photon case. Occasionally two or three detectors will register, yielding more information. Only when three detectors register will the pulse's polarization be known unambiguously (e.g.~if~both diagonal detectors and the vertical detector register, then the pulse must have been vertically polarized). The faint pulse works well because the chance of three detectors responding to the same pulse is only about $2$ per cent (for a Poisson distribution of mean $1$). The other $98$ per cent of the time, the eavesdropper does not learn the pulse's polarization unambiguously, and, as with single photons, cannot reliably copy~it. Even a technologically advanced eavesdropper, with perfectly efficient photon detectors, could not copy faint pulses reliably. For example, if the advanced eavesdropper uses $100$ per cent detectors to analyze
a pulse intended for $30$ per cent detectors, an average of $3.3$ photons will be detected per pulse, but the chance that these will appear in three different beams, and thus reveal the pulse's polarization unambiguously, would still be only about $25$ per cent.
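The quoted probabilities follow from a simple Poisson model: assuming the half-silvered mirror sends half the pulse to each Nicol prism, a (say) vertically polarized pulse of total mean photon number $\mu$ yields independent Poisson counts with means $\mu/2$ at the matching rectilinear detector, $0$ at the orthogonal one, and $\mu/4$ at each diagonal detector. A sketch of the calculation (the function name is ours):

```python
from math import exp

def p_unambiguous(total_mean):
    """Probability that a pulse of definite (say vertical) polarization fires
    three detectors, revealing its polarization unambiguously.  Beam means:
    half the intensity reaches the matching rectilinear detector, none the
    orthogonal one, and a quarter each of the two diagonal detectors."""
    v, d = total_mean / 2, total_mean / 4       # Poisson means per beam
    hit = lambda mu: 1 - exp(-mu)               # P(at least one photon detected)
    return hit(v) * hit(d) ** 2                 # vertical AND both diagonals

print(round(p_unambiguous(1.0), 3))        # 0.019  (about 2 per cent)
print(round(p_unambiguous(1 / 0.3), 3))    # 0.259  (about 25 per cent)
```

The second line corresponds to the advanced eavesdropper: perfect detectors see a mean of $1/0.3 \approx 3.3$ photons per pulse, yet the unambiguous-read probability is still only about $25$ per cent.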
The converse phenomenon, namely statistical failure to detect even one photon when a pulse arrives, requires that the rejection test be made more complicated. Even if a transmission
is not subjected to eavesdropping, about $1/e$ of its light pulses go undetected, due to normal bad luck at the detectors. The rejection test must begin by deciding whether the number of missing light pulses is so great as to raise the suspicion of eavesdropping (a~wise eavesdropper now might not bother to forge replacements for the intercepted pulses, but instead let them remain missing, hoping to pass them off as pulses that arrived but were not detected). If the number of missing pulses is not too great, the error-correcting code must reliably restore the data they would have carried, as well as checking for polarization \mbox{errors}, which as before would indicate interception and forgery of some of the pulses. A~convolutional code~\cite{G} appears most suitable for achieving the desired high efficiency of error-correction in a channel with a large number of erasures~($1/e$). \mbox{Depending} on the purity of polarization available from the Nicol prisms, the code could be made to tolerate and correct a small number of polarization errors, but reject a larger number as evidence of forgery. Since the capacity of a binary channel with $1/e$ erasure probability is $0.632$, a four-fold expansion in $E(M)$ offers ample room for efficient error detection and correction. This in turn means that eight transmissions, each containing $n$ fresh key bits, would have to be accepted to replace the $8n$ bits sacrificed in a rejection.
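The capacity figure quoted above can be checked in one line (the variable names are ours): a binary erasure channel with erasure probability $p$ has capacity $1-p$ bits per use.

```python
from math import exp

erasure_p = 1 / exp(1)          # fraction of pulses that arrive but go undetected
capacity = 1 - erasure_p        # capacity of a binary erasure channel (bits/use)
code_rate = 1 / 4               # rate of a four-fold expansion M -> E(M)
print(round(capacity, 3))       # 0.632
print(code_rate < capacity)     # True: ample room for detection and correction
```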
The most problematical aspect of the modified scheme is the decision of when to reject a transmission and change keys. By contrast with the scheme of the previous section, it is now necessary to change keys periodically (at least every $n^{1/2}$ transmissions) even in the absence of any evidence of eavesdropping, in order to prevent an eavesdropper from intercepting all of the bit positions, a few at a time, over the course of many apparently safe transmissions. The expected number of safe key changes has not been worked out, but it is not implausible that it is still exponential in the key size.\footnote{\,Decades after these words were written, the basic idea behind this paper was reinvented \emph{independently} by Ivan Damg{\aa}rd, Thomas Pedersen and Louis Salvail, but they worked out the complete analysis of its security, which is missing here. Fittingly, these two papers will appear together in a special issue of \emph{Natural Computing} celebrating 30~years of~BB84.}
\vspace{1ex}
\pagebreak
\section*{Acknowledgements}
We wish to thank Stephen Wiesner for numerous helpful discussions
of quantum theory, John Denker for drawing our attention to the analogy between choice of basis (e.g.~rectilinear vs.\ diagonal) and a cryptographic key, \mbox{Andrew} Greenberg for pointing out that photons in flight could be used to test a channel for eavesdropping, and Lalit Bahl for advice on error-correcting codes.\footnote{\,We also thank Ilana Frank Mor for typesetting this paper in 2014 from the original 1982 manuscript and for detecting most of the typographical mistakes that have been corrected here.}
\section{Introduction}
Phase transitions are remarkable phenomena in both equilibrium and non-equilibrium systems. While traditional thermodynamics can be utilized
to study the static phase transitions and fluctuations associated with configurations of a system, the {\em thermodynamics of trajectories},
sometimes known as Ruelle's thermodynamics~\cite{Ruelle}, can be used to study the dynamical phase behavior. The latter approach
has recently been adapted to stochastic systems~\cite{LAW}. According to this approach one first considers an ensemble for trajectories of the dynamics over which
the time-extensive order parameters are defined. The order parameters are physical observables whose fluctuation behavior determines the dynamics
of the system. In the large deviation limit the probability distributions of the dynamical order parameters are fully captured by large deviation functions
which play the role of dynamical free-energies, hence they are sometimes called the topological or Ruelle pressure~\cite{Ruelle}.
An important dynamical order parameter, which can be used to classify the various time realizations of the system, is the dynamical activity.
It is defined as the total number of configuration changes in a trajectory during the observation time interval~\cite{LAW}. This physical observable
has recently received considerable attention~\cite{HJGC09}. Specifically, it has been employed in studies of the dynamical phase transitions in
glass former models and lattice proteins~\cite{G09,G1213}.
In a recent paper~\cite{MTJ14} the authors have studied the fluctuations of a non-entropic current in a one-dimensional
stochastic system of classical particles which can be considered as a variant of the zero temperature Glauber model.
In the bulk of the system the particles are subjected to asymmetric branching and death processes. The particles can also
enter (leave) the system from the left (right) boundary. It is known that, in the long-time limit, the system relaxes into a
stationary state, where it undergoes a static phase transition from a high-density into a low-density phase, depending on
the values of the bulk reaction rates. This is characterized as a bulk-induced phase transition.
In~\cite{MTJ14} the authors have shown that despite the simple nature of the process, the fluctuations of this non-entropic current show a highly
non-trivial behavior. While the probability for observing a lower-than-typical current in the steady-state is generated by those
configurations consisting of a single domain wall, the probability for observing a higher-than-typical current is generated by those
configurations consisting of multiple domain walls. However, a careful examination of the structure of the large deviation function
for the probability distribution of this current suggests that the system might undergo dynamical phase transitions.
This has been inferred from the existence of discontinuities in the derivatives of the scaled cumulant generating function of the current, which is
closely related to that of the dynamical activity of the system.
The main goal of the present paper is to study the dynamical phase transitions in the above mentioned system by investigating the
behavior of the average activity of the system, as a dynamical observable, below its typical value (i.e. its value in the steady-state).
We have found that, in the limit of long observation time, the system indeed undergoes both continuous and discontinuous dynamical
phase transitions. In other words, there are four different behaviors for the scaled cumulant generating function of the activity
as a function of the counting field. These different behaviors are associated with the existence of three different dynamical phases in the system.
In this lower-than-typical activity region, we have characterized different phases according to the configuration of the system at the beginning and the end
of each trajectory during the observation time.
At a first-order phase transition point, the system transits from a phase in which the average activity is generated solely by those
trajectories whose initial and final configurations correspond to a fully occupied lattice, into another phase where the average activity
is generated solely by those trajectories whose initial and final configurations correspond to a completely empty lattice, and vice versa.
At a second-order phase transition point, in contrast, one encounters the following scenario: in one phase the average activity
comes solely from those trajectories whose initial and final configurations correspond to either a fully occupied or a completely empty lattice, while in the
other phase the trajectories that contribute to the average activity begin and end in configurations that are neither completely empty
nor fully occupied.
Despite the simplicity of the system studied in this paper, it shows an interesting dynamical phase behavior. While the static phase transition in
this system is solely determined by the bulk transition rates, the dynamical phase transitions are determined by both the boundary and bulk
transition rates.
This paper is organized as follows. In section 2 we start with the mathematical preliminaries. We will then define the model and review the known
results in section 3. The exact expression for the typical activity is given in section 4. The dynamical phase behavior of the model
is studied in section 5. A comparison between the different dynamical phases is presented in section 6. In section 7 the large deviation function
for the probability distribution of the activity is calculated analytically. The final section is devoted to the concluding remarks.
\section{Mathematical preliminaries}
In order to present a self-contained paper, we begin with a brief review of the known results on the theory of ensembles of trajectories~\cite{LAW}.
Let us start with a continuous-time Markov process whose configuration space is given by $\{ C \}$. We define $P(C,K,t)$ as the probability of
being in the configuration $C$ at the time $t$ considering that the system has changed its configuration $K$ times during the time interval $[0,t]$.
The parameter $K$ is in fact the activity of the system at the time $t$. We assume that a spontaneous transition from configuration
$C$ to $C'$ takes place with a time-independent transition rate $\omega_{C \to C'}$. It is easy to see that $P(C,K,t)$ satisfies the
following master equation
$$
\frac{d}{dt} P(C,K,t) = \sum_{C'\neq C} \omega_{C' \to C} P(C',K-1,t) - \sum_{C' \neq C} \omega_{C \to C'} P(C,K,t)\; .
$$
Multiplying the above equation by $e^{-sK}$ and summing over all values of the activity $K \in [0,+\infty)$ we find
$$
\frac{d}{dt} \tilde{P}(C,s,t) = \sum_{C'\neq C} e^{-s}\omega_{C' \to C} \tilde{P}(C',s,t) - \sum_{C' \neq C} \omega_{C \to C'} \tilde{P}(C,s,t)
$$
in which we have defined
\begin{equation}
\tilde{P}(C,s,t)=\sum_{K=0}^{\infty} e^{-sK} P(C,K,t)\; .
\end{equation}
The parameter $s$ is called the counting field in related literature. Using the quantum Hamiltonian formalism~\cite{Sch} the
latter master equation can be written as follows
\begin{equation}
\label{SME}
\frac{d}{dt} | \tilde{P}(t) \rangle_s = \tilde{\cal H}_s |\tilde{P}(t)\rangle_s\; .
\end{equation}
Considering the complete basis vector $\{ | C \rangle \}$ the matrix elements of $ \tilde{\cal H}_s $ in this basis are
$$
\langle C | \tilde{\cal H}_s | C' \rangle = e^{-s}\omega_{C' \to C} - r(C) \delta_{C,C'}
$$
where $r(C)$, which is the total escape rate from the configuration $C$, is
$$
r(C)=\sum_{C' \neq C} \omega_{C \to C'} \; .
$$
The formal solution of Eq.~(\ref{SME}) is given by
\begin{equation}
| \tilde{P}(t) \rangle_s = e^{\tilde{\cal H}_s t} | \tilde{P}(0) \rangle_s \; .
\end{equation}
Since at $t=0$ the activity $K$ is zero, then we have $| \tilde{P}(0) \rangle_s=| P(0) \rangle$ in which
$| P(0) \rangle$ is the probability vector at $t=0$. Note that the probability for being in $C$
at $t=0$ is given by $P(C,K=0,t=0)=\langle C | P(0) \rangle$. It is usually assumed that the system is
in its steady-state at $t=0$; therefore, we choose $| P(0) \rangle=| P^\ast \rangle$ so that
$$
\tilde{\cal H}_{s=0} | P^\ast \rangle=0 \; .
$$
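As a concrete illustration (a toy example of ours, not a model from this paper), the construction above can be assembled for a hypothetical two-configuration chain with rates $a$ ($0\to1$) and $b$ ($1\to0$), using `scipy` for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-configuration chain (not from the paper):
# transition 0 -> 1 with rate a, transition 1 -> 0 with rate b.
a, b = 1.0, 2.0

def tilted_H(s):
    """Modified Hamiltonian: <C|H_s|C'> = exp(-s) w(C'->C) - r(C) delta_{C,C'}."""
    off = np.exp(-s) * np.array([[0.0, b],
                                 [a, 0.0]])   # exp(-s) * w(C' -> C)
    return off - np.diag([a, b])              # subtract the escape rates r(C)

p_star = np.array([b, a]) / (a + b)           # stationary distribution |P*>

print(tilted_H(0.0) @ p_star)                 # ~ [0, 0]:  H_{s=0} |P*> = 0
t, s = 3.0, 0.5
Z = np.ones(2) @ expm(tilted_H(s) * t) @ p_star   # <exp(-sK)> = sum_C <C|e^{H_s t}|P*>
print(Z)                                      # a number strictly between 0 and 1
```

At $s=0$ the tilted generator reduces to the ordinary one, so the same expression returns exactly $1$, reflecting conservation of probability.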
Let us consider an ensemble of trajectories in the configuration space of the system
during the time interval $[0,t]$. Every member of this ensemble starts, at $t=0$, from a given configuration $C$ with the probability
$\langle C | P^{\ast} \rangle$ and after the course of time $t$ has elapsed, it might have the activity $K$. We denote the probability
of having a given activity $K$ at the time $t$ by $P(K,t)$. It is clear that $P(K,t)=\sum_{C}P(C,K,t)$. The moment generating function
of the activity can now be calculated as follows
\begin{eqnarray*}
\langle e^{-sK} \rangle &=& \sum_{K=0}^{\infty}P(K,t) e^{-sK} \\
&=& \sum_{K=0}^{\infty} \sum_{C} P(C,K,t) e^{-sK} \\
&=& \sum_{C} \tilde{P}(C,s,t) \\
&=& \sum_{C}\langle C | e^{\tilde{\cal H}_s t} | P^{\ast} \rangle \; .
\end{eqnarray*}
Denoting the right and the left eigenvectors of $\tilde{\cal H}_s$ by $| \Lambda(s) \rangle$ and $\langle \tilde{\Lambda}(s) |$ respectively, we find
\begin{eqnarray}
\label{ExactGF}
\langle e^{-sK} \rangle &=& \sum_{C}\sum_{\Lambda} \langle C | \Lambda(s) \rangle \langle \tilde{\Lambda}(s) | e^{\tilde{\cal H}_s t} | P^{\ast} \rangle\nonumber \\
& = & \sum_{C}\sum_{\Lambda} \langle C | \Lambda(s) \rangle \langle \tilde{\Lambda}(s) | P^{\ast} \rangle e^{\Lambda(s) t}
\end{eqnarray}
where the $\Lambda(s)$'s are the eigenvalues of $\tilde{\cal H}_s$. The above relation can be simplified even further if we assume
that the large deviation principle holds for a very large observation time. This means that
we asymptotically have
\begin{equation}
\label{Asym}
\langle e^{-sK} \rangle \asymp \sum_{C} \langle C | \Lambda^{\ast}(s) \rangle \langle \tilde{\Lambda}^{\ast}(s) | P^{\ast} \rangle e^{\Lambda^{\ast}(s)t}
\end{equation}
in which $\Lambda^{\ast}(s)$ is the largest eigenvalue of $\tilde{\cal H}_s$. On the
other hand, $| \Lambda^{\ast}(s) \rangle$ ($\langle \tilde{\Lambda}^{\ast}(s) |$) is also the right (left) eigenvector
of $\tilde{\cal H}_s$ corresponding to the eigenvalue $\Lambda^{\ast}(s)$. It is known that for systems
with an unbounded configuration space,~(\ref{Asym}) should be used with care since one of the quantities
$\sum_{C} \langle C | \Lambda^{\ast}(s) \rangle$ or $\langle \tilde{\Lambda}^{\ast}(s) | P^{\ast} \rangle$ might
diverge~\cite{K98,HRS}. In this case the scaled cumulant generating function of the dynamical observable defined as
$$
\lim_{t\to \infty} \frac{1}{t} \ln \langle e^{-sK} \rangle
$$
is no longer given by the largest eigenvalue of the modified Hamiltonian. This means that the expression~(\ref{ExactGF}) should
be calculated exactly. Note that the infinite dimensionality of the configuration space is only a necessary condition for the violation of~(\ref{Asym}).
In fact it highly depends on the dynamical observable in question. For an explicit example we refer the reader to the models studied in~\cite{HRS}.
In summary, we first calculate the largest eigenvalue $\Lambda^{\ast}(s)$ of the modified Hamiltonian $\tilde{\cal H}_s$ and then check whether any of
the quantities $\sum_{C} \langle C | \Lambda^{\ast}(s) \rangle$ or $\langle \tilde{\Lambda}^{\ast}(s) | P^{\ast} \rangle$ diverge depending on the value of
$s$ or the initial distribution defined by $| P^{\ast} \rangle$. The divergence of any of these quantities might result in the emergence of new dynamical phases.
Nevertheless, as long as none of the above quantities diverge, the dynamical phase structure of the system is merely determined by the largest eigenvalue
of the modified Hamiltonian. In this case the role of the initial distribution (which, as we mentioned, is usually assumed to be the steady-state) is marginal.
This means that the probability distribution function of the dynamical observable will be independent of the initial condition. As we will see this is what happens
for the system studied in this paper.
Defining the activity per unit time $k\equiv \frac{K}{t}$ and assuming a large deviation form for $P(k,t)$
in the long-time limit, the Legendre transformation of $\Lambda^{\ast}(s)$ gives the large deviation function
for this probability distribution~\cite{DL98,LS99}
\begin{equation}
P(k,t) \asymp e^{-t I(k)}
\end{equation}
in which
\begin{equation}
\label{LDF}
I(k)=-\min_s(\Lambda^{\ast}(s)+k s)\; .
\end{equation}
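For a hypothetical two-state chain with rates $a$ ($0\to1$) and $b$ ($1\to0$), the largest eigenvalue of the tilted generator is available in closed form, and the Legendre transformation~(\ref{LDF}) can be carried out on a grid; a minimal sketch (toy example of ours):

```python
import numpy as np

# Hypothetical two-state chain: 0 -> 1 with rate a, 1 -> 0 with rate b.
a, b = 1.0, 2.0

def Lambda_star(s):
    """Largest eigenvalue of the tilted 2x2 generator, in closed form."""
    return 0.5 * (-(a + b) + np.sqrt((a - b) ** 2 + 4 * a * b * np.exp(-2 * s)))

def I(k, s_grid=np.linspace(-2.0, 8.0, 20001)):
    """Rate function I(k) = -min_s (Lambda*(s) + k s), minimized on a grid."""
    return -np.min(Lambda_star(s_grid) + k * s_grid)

k_typ = 2 * a * b / (a + b)          # typical activity, -Lambda*'(0)
print(abs(I(k_typ)) < 1e-9)          # True: the rate function vanishes at <k>_{s=0}
print(I(0.5 * k_typ) > 0.0)          # True: atypical activities are exponentially rare
```

The rate function vanishes exactly at the typical activity and is positive elsewhere, as required for a large deviation principle around the steady state.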
It is assumed that none of the above mentioned quantities in~(\ref{Asym}) diverge; hence, the large deviation function
can be obtained from the Legendre transformation of the largest eigenvalue of the modified Hamiltonian. It is worth mentioning
that whenever the scaled cumulant generating function $\Lambda^{\ast}(s)$ is not differentiable at some point, the
rate function obtained from the Legendre transformation will contain a linear segment there. One should note that the actual rate function might
be a nonconvex function instead of a linear one~\cite{T09}. In this case the information embedded in the linear part can only be
recovered by other methods, such as direct kinetic Monte Carlo simulation or the population dynamics methods~\cite{GKLT11,TL09}.
The above mentioned formulation has a simple physical interpretation. Let us consider an ensemble of trajectories in the configuration
space of the system for which the counting field $s$ has been kept fixed during the course of the time. This ensemble is
known as $s$-ensemble in related literature. The dynamical partition function of the $s$-ensemble is given by
\begin{equation}
\label{SPF}
Z(s,t)=\langle e^{-sK} \rangle
\end{equation}
which, as we saw, has a large deviation form in the long-time limit. On the one hand, the largest eigenvalue of
$\tilde{\cal H}_s$ plays the role of a dynamical free energy whose singularities (or discontinuities of its derivatives with respect to $s$)
determine the phase behavior of the $s$-ensemble. On the other hand, it can be shown that the right (left) eigenvector of
$\tilde{\cal H}_s$ associated with its largest eigenvalue, denoted by $| \Lambda^{\ast}(s) \rangle$ ($\langle \tilde{\Lambda}^{\ast}(s) |$),
is in fact the probability vector of the final (initial) configuration along a trajectory in the space of configurations, given that the
$s$-ensemble average of the activity per unit time has been $\langle k \rangle_s=-\frac{d}{ds}\Lambda^{\ast}(s)$~\cite{Laz13,S09}.
The $s$-ensemble average of the activity per unit time, or its first cumulant, is defined as
\begin{eqnarray}
\label{SAverage}
\langle k \rangle_s &=& \frac{1}{t} \langle K \rangle_s \nonumber \\
&=& \frac{1}{t}\frac{\langle K e^{-sK}\rangle }{\langle e^{-sK}\rangle} \\
&=& -\frac{1}{t} \frac{d}{d s} \ln \langle e^{-sK} \rangle \nonumber
\end{eqnarray}
which is equal to $-\frac{d}{ds}\Lambda^{\ast}(s)$ in the large observation time limit.
The $n$'th cumulant of the activity per unit time can be obtained by taking the $n$'th derivative of $\Lambda^{\ast}(s)$ with
respect to $s$ and multiplying by $(-1)^n$. Averages in the $s$-ensemble with $s = 0$ correspond to the steady-state averages, i.e. the typical
values in the steady-state~\cite{G09,FG13}. The typical activity per unit time in the steady-state is given by
\begin{equation*}
\langle k \rangle_{s=0} = -\frac{d}{d s} \Lambda^{\ast}(s)\Big |_{s=0}\; .
\end{equation*}
This is the value of the activity per unit time which minimizes $I(k)$ defined in~(\ref{LDF}) and corresponds to the most probable activity
observed in the steady state.
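The tilted average~(\ref{SAverage}) can be made concrete on the same toy Poisson example (again our own illustration, not the model of this paper): for $K$ Poisson-distributed with mean $\lambda t$, the exponentially tilted distribution is again Poisson with mean $\lambda t e^{-s}$, so $\langle k \rangle_s=\lambda e^{-s}=-\frac{d}{ds}\Lambda^{\ast}(s)$. The sketch below evaluates $\langle K e^{-sK}\rangle/\langle e^{-sK}\rangle$ by direct summation over a truncated range.

```python
import math

lam, t, s = 2.0, 50.0, 0.4
mean = lam * t

# Tilted average <K e^{-sK}> / <e^{-sK}> over a Poisson(lam*t) distribution,
# evaluated by direct summation over a sufficiently wide truncated range.
num = den = 0.0
for K in range(0, 1000):
    logp = -mean + K * math.log(mean) - math.lgamma(K + 1)  # log of the Poisson pmf
    w = math.exp(logp - s * K)                              # tilted weight e^{-sK} p(K)
    num += K * w
    den += w

k_s = num / den / t
assert abs(k_s - lam * math.exp(-s)) < 1e-9   # <k>_s = lam * e^{-s}
```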
In summary, the concept of the $s$-ensemble enables us to study how the dynamical phase of the system changes
when the activity deviates from its typical value. A given $s$ fixes the average activity, so one can
construct an ensemble of trajectories whose average activity is not necessarily the typical value in
the steady state. The necessary information to describe the dynamical phase of the system is embedded in the largest
eigenvalue of the modified Hamiltonian and its corresponding left and right eigenvectors.
\section{The model: Known results}
The model we are studying in this paper is a one-dimensional stochastic system of classical particles defined on a finite lattice of length $L$
with open boundaries. This is a special case of the model studied in~\cite{J04} which can be considered as a variant of the zero temperature Glauber
model~\cite{SA}. The dynamical rules between two consecutive sites on the
lattice consist of asymmetric death and branching processes
\begin{equation}
\label{rules}
\begin{array}{ll}
A \; \emptyset \; \longrightarrow \; \emptyset \; \emptyset \quad \mbox{with the rate} \quad \omega_1 \; ,\\
A \; \emptyset \; \longrightarrow \; A \; A \quad \mbox{with the rate} \quad \omega_2
\end{array}
\end{equation}
in which a particle (vacancy) is denoted by $A$ ($\emptyset$). Particles are also injected into the system at the left
boundary of the lattice (the first lattice site) with rate $\alpha$, provided that this site is empty.
They are extracted at the right boundary (the last lattice site) with rate $\beta$, provided that this site is occupied.
The dynamics of the system is irreducible and it has a unique (equilibrium) steady-state.
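A minimal kinetic Monte Carlo sketch of the dynamics~(\ref{rules}) is given below (standard Gillespie algorithm; the parameter values are illustrative choices of ours, not fixed by the model). Besides simulating the process, it makes a structural point explicit: starting from the empty lattice, every visited configuration remains a product shock measure $1\cdots 10\cdots 0$, i.e. the set of shock configurations is closed under the dynamics, and the activity simply counts the configuration changes.

```python
import math
import random

def gillespie(L, w1, w2, alpha, beta, t_max, rng):
    """Simulate A0->00 (rate w1), A0->AA (rate w2) with boundary injection/extraction."""
    tau = [0] * L          # empty initial lattice
    t, K = 0.0, 0          # time and activity (number of configuration changes)
    visited = []
    while t < t_max:
        # list all possible moves as (rate, site, kind)
        moves = []
        if tau[0] == 0:
            moves.append((alpha, 0, 'inject'))
        if tau[L - 1] == 1:
            moves.append((beta, L - 1, 'extract'))
        for i in range(L - 1):
            if tau[i] == 1 and tau[i + 1] == 0:        # an A0 pair
                moves.append((w1, i, 'death'))
                moves.append((w2, i, 'branch'))
        R = sum(r for r, _, _ in moves)
        t += -math.log(rng.random()) / R               # exponential waiting time
        x, acc = rng.random() * R, 0.0                 # pick a move proportionally to its rate
        for r, i, kind in moves:
            acc += r
            if x < acc:
                break
        if kind == 'inject':
            tau[0] = 1
        elif kind == 'extract':
            tau[L - 1] = 0
        elif kind == 'death':
            tau[i] = 0
        else:                                          # branch: A0 -> AA
            tau[i + 1] = 1
        K += 1
        visited.append(list(tau))
    return K, visited

rng = random.Random(1)
K, visited = gillespie(L=8, w1=3.0, w2=1.0, alpha=2.0, beta=5.0, t_max=50.0, rng=rng)
# every visited configuration is a shock measure 1...10...0
for tau in visited:
    i = sum(tau)
    assert tau == [1] * i + [0] * (8 - i)
```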
It is known that by fine-tuning the microscopic reaction rates, the system can undergo a static bulk-driven phase transition from a high-density
phase (for $\omega_1<\omega_2$) to a low-density phase (for $\omega_1>\omega_2$) in the long-time limit~\cite{J04}. In
the steady-state with $\omega_1<\omega_2$ the lattice is almost full of particles except near the right boundary,
where the particles have a chance to leave the system. As the particles leave the system from the right boundary, the second reaction
in~(\ref{rules}) creates new particles near the right boundary and keeps the system almost full. One should note that
the reactions in~(\ref{rules}) do not change the density of particles in a fully occupied region. In summary, the average particle
density is equal to $1$ throughout the bulk of the lattice, while it falls off exponentially near the right boundary.
Since the average density of particles in the middle of the lattice is $1$, this is called the high-density phase~\cite{J04}.
In contrast, in the steady-state with $\omega_1>\omega_2$ the system is almost empty except near the left boundary, where the particles
still have a chance to enter the system. However, as soon as the particles enter the system the first reaction in~(\ref{rules}) removes them. This
results in an almost empty lattice except near the left boundary, where the particle density decays exponentially with the distance from the boundary. One should also note that
the reactions in~(\ref{rules}) do not change the density of particles in a completely empty region. Since the density of particles is
equal to $0$ in the middle of the lattice, this is called the low-density phase~\cite{J04}.
Considering the following basis kets
$$
\vert \emptyset \rangle = \left( \begin{array}{c}
1\\
0\\
\end{array} \right)\; ,\;\;
\vert A \rangle=\left( \begin{array}{c}
0\\
1\\
\end{array} \right)\;
$$
which will be used throughout this paper, let us define a product shock measure with a shock front at the lattice site $i$ as
\begin{equation}
\label{SM}
| \{1\}_{i}\{0\}_{L-i} \rangle \equiv
|A \rangle^{\otimes i}\otimes
|\emptyset \rangle^{\otimes(L-i)}
\end{equation}
for $0 \le i \le L$. Note that throughout the paper we define $\vert \{X\}_{0}\{Y\}_{L} \rangle \equiv \vert \{Y\}_{L} \rangle$.
The measure $| \{1\}_{i}\{0\}_{L-i} \rangle$ has a simple interpretation. It corresponds to a
configuration of the system in which the lattice is occupied by particles from the first lattice site up to the $i$'th lattice site.
The rest of the lattice sites are empty.
It is known that in the steady-state the probability vector $| P^{\ast} \rangle$ can be written as a linear superposition of the
product shock measures defined in~(\ref{SM}) with the property that the shock position $i$ performs a biased random walk on the lattice.
On the other hand, it is known that $| P^{\ast} \rangle$ can also be calculated using a matrix method~\cite{BE07}.
By assigning the operators $E$ and $D$ to a vacancy and a particle at a given
lattice site, respectively, the steady-state probability of a given configuration $\{ \tau \} = \{ \tau_1,\cdots,\tau_L \}$ is given by
\begin{equation}
\label{Weight}
P(\{ \tau \}) \propto \langle\langle W \vert \prod_{i=1}^{L}(\tau_{i} D + (1-\tau_{i}) E) \vert V \rangle\rangle
\end{equation}
in which $\tau_{i}=0$ ($\tau_{i}=1$) if the $i$'th lattice site is empty (occupied).
It has been shown that the auxiliary vectors $\vert V \rangle\rangle$ and $\langle \langle W \vert$ besides the operators $E$ and $D$
have a two-dimensional matrix representation given by~\cite{J04}
\begin{equation}
\begin{array}{ll}
\label{Representation}
D=\left( \begin{array}{cc}
0 & 0\\
d & \frac{\omega_{2}}{\omega_{1}}\\
\end{array} \right),\;\;
E=\left( \begin{array}{cc}
1 & 0\\
-d & 0\\
\end{array} \right),\\ \\
\vert V \rangle\rangle=\left( \begin{array}{cc} \frac{-\beta \omega_{2}}{(\omega_{2}-\omega_{1}+ \beta) d\omega_{1}}\\
1 \end{array} \right),\;\;
\langle\langle W \vert=\left( \begin{array}{cc} \frac{(\omega_{1}-\omega_{2}+ \alpha)d}{\alpha} & 1 \end{array} \right)
\end{array}
\end{equation}
in which $d$ is a free parameter. This matrix representation allows us to calculate the typical value of any physical quantity
in the long-time limit, such as the typical activity of the system in the steady-state, which will be calculated in the next section.
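The representation~(\ref{Representation}) is straightforward to verify numerically. The sketch below (with illustrative rates $\omega_1=3$, $\omega_2=1$, $\alpha=2$, $\beta=5$ and the free parameter set to $d=1$; these values are our own choice) evaluates the weights~(\ref{Weight}) for every configuration of a small lattice using exact rational arithmetic. Since $ED=0$ in this representation, only the shock configurations $1\cdots 10\cdots 0$ acquire a nonzero weight, and in the bulk the normalized shock-position probabilities decay geometrically with ratio $\omega_2/\omega_1$.

```python
from fractions import Fraction as F
from itertools import product

w1, w2, alpha, beta, d = F(3), F(1), F(2), F(5), F(1)

D = [[F(0), F(0)], [d, w2 / w1]]
E = [[F(1), F(0)], [-d, F(0)]]
V = [-beta * w2 / ((w2 - w1 + beta) * d * w1), F(1)]
W = [(w1 - w2 + alpha) * d / alpha, F(1)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def weight(tau):
    """Unnormalized steady-state weight <<W| prod_i (tau_i D + (1 - tau_i) E) |V>>."""
    v = V
    for t in reversed(tau):                 # apply the matrix product right to left
        v = matvec(D if t else E, v)
    return W[0] * v[0] + W[1] * v[1]

L = 4
weights = {tau: weight(tau) for tau in product((0, 1), repeat=L)}

# only shock configurations 1...10...0 carry a nonzero weight (ED = 0)
for tau, w in weights.items():
    shock = tuple([1] * sum(tau) + [0] * (L - sum(tau)))
    assert (w != 0) == (tau == shock)

# normalized shock-position probabilities decay with ratio w2/w1 in the bulk
Z = sum(weights.values())
P = [weights[tuple([1] * i + [0] * (L - i))] / Z for i in range(L + 1)]
assert sum(P) == 1
assert P[2] / P[1] == w2 / w1 and P[3] / P[2] == w2 / w1
```

The geometric decay of the shock-position distribution is the numerical counterpart of the biased random walk performed by the shock front.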
\section{Typical activity in the steady-state}
As we mentioned in the introduction, we consider the dynamical activity, defined as the number of configuration changes in a dynamical trajectory,
as the dynamical order parameter to study the dynamical phase transitions in the system defined by~(\ref{rules}). The reader should note that
although the reactions defined in~(\ref{rules}), as well as the injection and extraction processes at the boundaries, are microscopically irreversible,
the average entropy production of this system in the steady-state is zero.
It is known that a naive calculation of the entropy production for a system with microscopically irreversible transitions results in an infinite
amount of entropy produced in the environment. During the last couple of years there have been efforts by different authors to overcome this
ambiguity~\cite{PJ}; nevertheless, how the entropy production should be defined for these systems remains an open question.
In what follows we will show that the typical value of the activity in the steady-state $\langle k \rangle_{s=0}$, for which the large deviation
function $I(k)$ defined in~(\ref{LDF}) is minimum, can be calculated exactly using the matrix method explained in the previous section. The
typical value of the activity in the steady-state is given by
\begin{equation}
\langle k \rangle_{s=0} =\sum_{C}\sum_{C' \ne C}\omega_{C \to C'} P(C)=\sum_{C} r(C)P(C) \; .
\end{equation}
This can be rewritten as
\begin{equation}
\begin{array}{ll}
\langle k \rangle_{s=0} = & \alpha \sum_{\{ \tau \}} P(\tau_1=0,\tau_2,\cdots,\tau_L)+
\beta \sum_{\{ \tau \}} P(\tau_1,\tau_2,\cdots,\tau_L=1) \\ \\
& +(\omega_1+\omega_2) \sum_{\{ \tau \} } \sum_{i=1}^{L-1} P(\tau_1,\cdots,\tau_i=1,\tau_{i+1}=0,\cdots,\tau_L) \; .
\end{array}
\end{equation}
It turns out that this expression can be calculated exactly using~(\ref{Weight}) and~(\ref{Representation}). After some straightforward
calculations we find
\begin{equation}
\label{TActivity}
\langle k \rangle_{s=0} =\frac{ (\frac{\omega_2}{\omega_1})^L-1}{
(\frac{\beta -\omega_1+\omega_2}{2\beta\omega_2}) (\frac{\omega_2}{\omega_1})^L-
(\frac{\alpha -\omega_2+\omega_1}{2\alpha\omega_1})} \; .
\end{equation}
In the thermodynamic limit $L \to \infty$ we have
\begin{equation}
\label{average activity}
\langle k \rangle_{s=0} =\left\{
\begin{array}{ll}
\frac{2\alpha\omega_1}{\alpha-\omega_2+\omega_1} & \quad \mbox{for}\quad \omega_1 > \omega_2 \; , \\ \\
2\omega_{1} = 2\omega_{2} & \quad \mbox{for} \quad \omega_{1}=\omega_{2}\; , \\ \\
\frac{2\beta\omega_2}{\beta-\omega_1+\omega_2} & \quad \mbox{for} \quad \omega_1 < \omega_2 \; .
\end{array}
\right.
\end{equation}
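Both the finite-$L$ expression~(\ref{TActivity}) and its thermodynamic limit can be checked against a brute-force evaluation of $\langle k \rangle_{s=0}=\sum_{C} r(C)P(C)$ using the matrix representation~(\ref{Representation}). The sketch below (illustrative rates $\omega_1=3$, $\omega_2=1$, $\alpha=2$, $\beta=5$, $d=1$, chosen by us) enumerates all $2^L$ configurations of a small lattice exactly.

```python
from fractions import Fraction as F
from itertools import product

w1, w2, alpha, beta, d = F(3), F(1), F(2), F(5), F(1)
L = 4

D = [[F(0), F(0)], [d, w2 / w1]]
E = [[F(1), F(0)], [-d, F(0)]]
V = [-beta * w2 / ((w2 - w1 + beta) * d * w1), F(1)]
W = [(w1 - w2 + alpha) * d / alpha, F(1)]

def weight(tau):
    """Unnormalized matrix-product weight of configuration tau."""
    v = V
    for t in reversed(tau):
        M = D if t else E
        v = [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]
    return W[0] * v[0] + W[1] * v[1]

def escape_rate(tau):
    """r(C): total rate of leaving configuration C."""
    r = (alpha if tau[0] == 0 else 0) + (beta if tau[-1] == 1 else 0)
    r += (w1 + w2) * sum(1 for i in range(L - 1) if tau[i] == 1 and tau[i + 1] == 0)
    return r

configs = list(product((0, 1), repeat=L))
Z = sum(weight(tau) for tau in configs)
k_enum = sum(escape_rate(tau) * weight(tau) for tau in configs) / Z

# finite-L closed form (TActivity)
r = w2 / w1
k_formula = (r**L - 1) / ((beta - w1 + w2) / (2 * beta * w2) * r**L
                          - (alpha - w2 + w1) / (2 * alpha * w1))
assert k_enum == k_formula == F(800, 267)
# thermodynamic limit for w1 > w2: 2*alpha*w1/(alpha - w2 + w1) = 3
assert abs(float(k_formula) - 3) < 0.01
```

Already at $L=4$ the finite-$L$ value $800/267\approx 2.996$ is close to its thermodynamic limit $3$.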
Because of the symmetry of the model we will only study the low-density phase where $\omega_1 > \omega_2$. It is easy to
find the corresponding results in the high-density phase by considering the following transformation
\begin{eqnarray*}
&& \omega_1 \to \omega_2 \\
&& \alpha \to \beta \\
&& \mbox{Lattice site number} \;\; i \to L-i+1 \; .
\end{eqnarray*}
From now on, we will drop the $s$-ensemble average subscript and simply write $k$ (instead of $\langle k \rangle_s$) for
the $s$-ensemble average of the activity. The most probable activity, i.e. its typical value, will be denoted by $k^{\ast}$.
\section{Dynamical phase behavior of the model}
It is known that the fluctuations of a physical observable in a dynamical system can be captured from the analytic properties of the
largest eigenvalue of its associated modified stochastic Hamiltonian $\tilde{\cal H}_s$, denoted here by $\Lambda^{\ast}(s)$.
Discontinuities in the first and the second derivatives of $\Lambda^{\ast}(s)$ are associated with first- and second-order dynamical
phase transitions, these derivatives being related to the average and the variance of the activity per unit time, respectively.
On the other hand, positive (negative) values of the counting field $s$ favor histories with atypically low (high) values of the
activity~\cite{LAW,G09,FG13,S09,L13}.
Throughout the forthcoming sections we will mainly concentrate on positive values
of the counting field $s$; that is, we will deal with $s$-ensemble averages of the activity smaller than the typical value.
As we will see, depending on the values of the microscopic transition rates, the model might undergo both continuous and discontinuous
dynamical phase transitions in this region.
\subsection{Eigenvalues and Eigenvectors of the modified stochastic Hamiltonian}
Let us start by calculating the eigenvalues and eigenvectors of the modified stochastic Hamiltonian $\tilde{\cal H}_s$, a $2^L\times 2^L$
irreducible matrix. We will then select the largest eigenvalue and its corresponding eigenvector in the $s \ge 0$ region.
We have found that the largest eigenvalue of $\tilde{\cal H}_s$ in the $s \ge 0$ region can be obtained by exploiting the fact that the
model has an $(L+1)$-dimensional subspace of the configuration space, spanned by the vectors of type~(\ref{SM}), which
is invariant under the evolution generated by $\tilde{\cal H}_s$. In other words, acting with $\tilde{\cal H}_s$ on any of these
vectors results in a linear combination of vectors in the same subspace. More precisely, we have
\begin{equation}
\label{EVQ}
\begin{array}{lll}
\tilde{\cal H}_s\vert \{0\}_{L} \rangle & = & \alpha e^{-s}\vert \{1\}_{1}\{0\}_{L-1} \rangle - \alpha \vert \{0\}_{L} \rangle \; , \\
\tilde{\cal H}_s\vert \{1\}_{L} \rangle & = & \beta e^{-s}\vert \{1\}_{L-1}\{0\}_{1} \rangle-\beta \vert \{1\}_{L} \rangle \; ,\\
\tilde{\cal H}_s\vert \{1\}_{i}\{0\}_{L-i} \rangle & = & \omega_{1} e^{-s} \vert \{1\}_{i-1}\{0\}_{L-i+1} \rangle
\\ & & + \omega_{2} e^{-s} \vert \{1\}_{i+1}\{0\}_{L-i-1} \rangle
\\ & & -(\omega_{1}+\omega_{2})\vert \{1\}_{i}\{0\}_{L-i} \rangle
\end{array}
\end{equation}
with $1 \le i \le L-1$. One can now easily construct the right eigenvector of $\tilde{\cal H}_s$ by writing
\begin{equation}
\label{EV}
| \Lambda (s) \rangle =\sum_{i=0}^{L} C_{i}(s) | \{1\}_{i}\{0\}_{L-i} \rangle
\end{equation}
for which
\begin{equation}
\label{EVE}
\tilde{\cal H}_s | \Lambda (s) \rangle =\Lambda(s) | \Lambda (s) \rangle \; .
\end{equation}
Although this gives only $L+1$ of the $2^L$ eigenvalues of $\tilde{\cal H}_s$, our careful numerical investigations
have confirmed that the largest eigenvalue of $\tilde{\cal H}_s$ in the $s \ge 0$ region lies among these eigenvalues.
The reader should note that the largest eigenvalue of $\tilde{\cal H}_s$ is equal to zero at $s=0$.
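These statements are easy to verify numerically for a small system. The sketch below (illustrative rates $\omega_1=3$, $\omega_2=1$, $\alpha=2$, $\beta=5$, chosen by us) builds the $(L+1)$-dimensional restriction of $\tilde{\cal H}_s$ to the shock subspace directly from~(\ref{EVQ}), checks that its largest eigenvalue vanishes at $s=0$, and recovers the finite-$L$ typical activity~(\ref{TActivity}) as $-\frac{d}{ds}\Lambda^{\ast}(s)|_{s=0}$ by a central difference.

```python
import math

def shock_matrix(L, w1, w2, alpha, beta, s):
    """(L+1)x(L+1) restriction of the modified Hamiltonian to shock measures, Eq. (EVQ)."""
    es = math.exp(-s)
    M = [[0.0] * (L + 1) for _ in range(L + 1)]
    M[0][0], M[1][0] = -alpha, alpha * es          # action on the empty lattice
    M[L][L], M[L - 1][L] = -beta, beta * es        # action on the full lattice
    for i in range(1, L):                          # bulk shock positions
        M[i][i] = -(w1 + w2)
        M[i - 1][i] = w1 * es
        M[i + 1][i] = w2 * es
    return M

def largest_eigenvalue(M, shift, n_iter=5000):
    """Power iteration on M + shift*I (entrywise nonnegative); returns the top eigenvalue of M."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(n_iter):
        w = [sum(M[i][j] * v[j] for j in range(n)) + shift * v[i] for i in range(n)]
        lam = sum(w)                               # v is normalized to sum 1
        v = [x / lam for x in w]
    return lam - shift

L, w1, w2, alpha, beta = 4, 3.0, 1.0, 2.0, 5.0
shift = alpha + beta + w1 + w2
assert abs(largest_eigenvalue(shock_matrix(L, w1, w2, alpha, beta, 0.0), shift)) < 1e-9

h = 1e-4
k_typ = -(largest_eigenvalue(shock_matrix(L, w1, w2, alpha, beta, h), shift)
          - largest_eigenvalue(shock_matrix(L, w1, w2, alpha, beta, -h), shift)) / (2 * h)
assert abs(k_typ - 800 / 267) < 1e-4   # finite-L formula (TActivity) gives 800/267 here
```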
Substituting~(\ref{EV}) in~(\ref{EVE}) and using~(\ref{EVQ}) one finds the equations governing the $C_{i}(s)$'s:
a homogeneous linear recursion of order $2$ supplemented by
four boundary recursions. The standard approach to solving such recursions, sometimes called the plane
wave ansatz method, is to consider a solution of the form~\cite{Sch}
$$
C_{i}(s) = A(z_{1}) z_{1}^{i}+B(z_{2}) z_{2}^{-i} \;\; \mbox{for} \;\; 0\le i \le L \; .
$$
The coefficients $C_{0}(s)$ and $C_{L}(s)$ are assumed to have the same structure up to a multiplicative factor.
The relation between the complex parameters $z_1$ and $z_2$ can be obtained from the bulk recursion. The coefficients
$A$ and $B$ can then be calculated from the boundary recursions.
This method enables us to calculate both the eigenvalues and eigenvectors of the modified Hamiltonian in the $s \ge 0$ region.
By substituting these solutions into the recursions and after some straightforward calculations we obtain
\begin{equation}
\label{Cs}
C_{i}(s)=\eta^{i} \frac{a(z) z^{i}+a(z^{-1})z^{-i}}{(1-\zeta)^{\delta_{i,0}}(1-\xi)^{\delta_{i,L}}}
\end{equation}
in which we have defined
$$
\eta \equiv \sqrt{\frac{\omega_{2}}{\omega_{1}}} \; ,
\quad \zeta \equiv 1-\frac{\alpha}{\omega_{2}} \; ,
\quad \xi \equiv 1-\frac{\beta}{\omega_{1}} \; .
$$
From the boundary recursions one finds
\begin{equation}
\label{As}
\frac{a(z)}{a(z^{-1})}=-\frac{{\cal F}(z,\eta,\zeta)}{{\cal F}(z^{-1},\eta,\zeta)} =-z^{-2L}\frac{{\cal F}(z^{-1},\eta^{-1},\xi)}{{\cal F}(z,\eta^{-1},\xi)}
\end{equation}
where
\begin{equation}
\label{F}
{\cal F}(x,y,z) \equiv (x+\frac{z}{x})e^{-s} -(yz+\frac{1}{y})\; .
\end{equation}
The eigenvalues are also given by
\begin{equation}
\label{eigenvalues}
\Lambda(s)=-(\omega_{1}+\omega_{2})+e^{-s}\sqrt{\omega_{1} \omega_{2}}(z+z^{-1})
\end{equation}
where the equation governing $z$ is
\begin{equation}
\label{eq for zs}
z^{2L}=\frac{{\cal F}(z^{-1},\eta,\zeta){\cal F}(z^{-1},\eta^{-1},\xi)}{{\cal F}(z,\eta,\zeta){\cal F}(z,\eta^{-1},\xi)} \; .
\end{equation}
This equation obviously has $2L+4$ solutions. It is clear that if $z$ is a solution, then $z^{-1}$ is also a solution. On the other hand,
the trivial solutions, i.e. $z=\pm 1$, have to be eliminated since they result in vanishing eigenvectors. This equation finally gives us $L+1$
solutions, which correspond to the same number of eigenvalues out of the total $2^L$ eigenvalues of the modified Hamiltonian $\tilde{\cal H}_s$.
As we have mentioned above, it turns out that the largest eigenvalue of $\tilde{\cal H}_s$ in the $s\ge 0$ region lies among these $L+1$
eigenvalues.
\begin{figure}[t]
\begin{centering}
\includegraphics[width=3in]{Fig1}
\caption{\label{Fig1} The dynamical phase diagram of the model in the low-density phase $\omega_1>\omega_2$ for
$\omega_1=3$ and $\omega_2=1$. Three different phases are denoted by $I$, $II$ and $III$. }
\end{centering}
\end{figure}
For a finite system one can solve~(\ref{eq for zs}) numerically; in the
thermodynamic limit $L \to \infty$, however, it can be solved analytically. Assuming $|z|>1$, we have found that in the thermodynamic limit Eq.~(\ref{eq for zs})
has two real solutions and a phase solution. The real solutions are given by
\begin{eqnarray}
\label{Zs1}
z_{1} &=& {\cal G}(\eta^{-1},\zeta),\; \\
\label{Zs2}
z_{2} &=& {\cal G}(\eta,\xi)
\end{eqnarray}
in which we have defined
$$
{\cal G} (x,y) \equiv e^s (\frac{x}{2}+\frac{y}{2x})+\sqrt{e^{2 s}(\frac{x}{2}+\frac{y}{2x})^2-y} \; .
$$
By substituting these real solutions in~(\ref{eigenvalues}) one finds two eigenvalues $\Lambda_{1}(s)$ and $\Lambda_{2}(s)$ which are
given by the following expressions
\begin{eqnarray}
\label{eigen1}
\Lambda_{1}(s) &=& \omega_{2} {\cal R}(\eta^{-1},\zeta)\; , \\
\label{eigen2}
\Lambda_{2}(s) &=& \omega_{1} {\cal R}(\eta,\xi)
\end{eqnarray}
where we have defined
$$
{\cal R}(x,y) \equiv -\frac{(1-y)}{2 y} \left(\sqrt{\left(x^2+y\right)^2-4 e^{-2 s} x^2 y}-x^2+y\right) \; .
$$
The phase solution of Eq.~(\ref{eq for zs}) results in an eigenvalue whose maximum value is
given by
\begin{equation}
\label{eigenph}
\Lambda_{\mbox{Ph}}(s)=-(\omega_{1}+\omega_{2})+2\sqrt{\omega_{1} \omega_{2}} e^{-s} \; .
\end{equation}
It turns out that one can also calculate the exact analytical expressions for the normalized right eigenvectors
of $\tilde{\cal H}_s$ corresponding to the eigenvalues $\Lambda_{1}(s) $, $\Lambda_{2}(s) $ and $\Lambda_{\mbox{Ph}}(s)$
in the large-$L$ limit. The physical interpretations of these vectors will be given in the forthcoming sections.
By considering~(\ref{EV}) the coefficients for $\vert \Lambda_{1}(s) \rangle$ are
$$
C_i^1(s)=
\frac{ (z-\eta ) (1-\eta z)
\left(\eta \left(1+\zeta z^2\right)
-\eta z^{2 i} \left(\zeta +z^2\right)
-e^s z \left(1+\zeta \eta ^2\right) \left(1-z^{2 i}\right)\right)}
{ \eta ^{1-i} z^{i+1}
\left(1-e^s\right) \left(1-z^2\right) \left(1+\zeta \eta ^2\right)
(1-\zeta )^{\delta _{i,0}} (1-\xi)^{\delta _{i,L}}}
$$
for $0 \le i \le L$ and $z=z_1$. The coefficients for $\vert \Lambda_{2}(s) \rangle$, which will be called $C_{i}^{2}(s)$, can be obtained
from the above expression by applying the following transformation
\begin{eqnarray*}
&& \eta \to \eta^{-1} \\
&& \zeta \to \xi \\
&& \xi \to \zeta \\
&& i \to L-i
\end{eqnarray*}
which also transforms $z_1$ to $z_2$. Finally for $\vert \Lambda_{\mbox{Ph}}(s) \rangle$ we have
$$
C_i^{\mbox{Ph}}(s)=\frac{(\eta -1)^2
\left(\eta (i+1)-\eta \zeta (i-1)-i e^s \left(1+\zeta \eta^2\right)\right)}
{\eta ^{1-i}\left(1-e^s\right) \left(1+\zeta \eta ^2\right)
(1-\zeta )^{\delta _{i,0}} (1-\xi )^{\delta _{i,L}}}
$$
for $0 \le i \le L$.
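The closed form~(\ref{eigen1}) can be checked against a direct numerical diagonalization of the shock-subspace restriction of $\tilde{\cal H}_s$ for a large lattice (illustrative rates $\omega_1=3$, $\omega_2=1$, $\alpha=2$, $\beta=5$, which correspond to point $A$ defined below, lying in region $I$, and an arbitrary $s=0.5$). Since that restriction is tridiagonal with positive products of opposite off-diagonal entries, it is similar to a symmetric tridiagonal matrix, whose largest eigenvalue can be found by Sturm-sequence bisection.

```python
import math

def top_eig_sym_tridiag(a, b):
    """Largest eigenvalue of the symmetric tridiagonal matrix with diagonal a
    and off-diagonal b, via Sturm-sequence bisection."""
    n = len(a)
    radius = max(abs(x) for x in a) + 2 * max(abs(x) for x in b)  # Gershgorin bound
    lo, hi = -radius, radius
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        count, d = 0, 1.0
        for i in range(n):                       # pivots of the LDL^T factorization
            d = a[i] - mid - (b[i - 1] ** 2 / d if i > 0 else 0.0)
            if d == 0.0:
                d = 1e-300
            if d < 0.0:                          # negative pivots = eigenvalues below mid
                count += 1
        if count == n:                           # all eigenvalues below mid
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

L, w1, w2, alpha, beta, s = 100, 3.0, 1.0, 2.0, 5.0, 0.5
es = math.exp(-s)
# symmetrized shock-subspace restriction: same spectrum as the nonsymmetric one
a = [-alpha] + [-(w1 + w2)] * (L - 1) + [-beta]
b = ([math.sqrt(alpha * w1) * es]
     + [math.sqrt(w1 * w2) * es] * (L - 2)
     + [math.sqrt(beta * w2) * es])
lam_num = top_eig_sym_tridiag(a, b)

# closed-form Lambda_1(s), Eq. (eigen1)
eta = math.sqrt(w2 / w1)
zeta = 1 - alpha / w2
x, y = 1 / eta, zeta
R = -((1 - y) / (2 * y)) * (math.sqrt((x * x + y) ** 2
                                      - 4 * math.exp(-2 * s) * x * x * y) - x * x + y)
assert abs(lam_num - w2 * R) < 1e-6
```

At $L=100$ the finite-size corrections are far below the tolerance, so the bisection result coincides with the thermodynamic-limit expression.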
\subsection{The largest eigenvalue in the $s \ge 0$ region}
In the previous section we diagonalized $\tilde{\cal H}_s$ in the invariant sector in which its largest eigenvalue in the $s \ge 0$ region lies.
In the thermodynamic limit the largest eigenvalue of $\tilde{\cal H}_s$ depends strongly on both the microscopic
transition rates of the model and the counting field $s$.
\begin{figure*}[t]
\begin{centering}
\includegraphics[width=6in]{Fig2}
\caption{\label{Fig2} Three different cross sections of the dynamical phase diagram given in FIG.~\ref{Fig1}.
The leftmost figure is plotted for $\beta=5$ while the middle figure is plotted for $\alpha=10$. Finally, the rightmost
figure is plotted for $s=1.5$. The largest eigenvalue of $\tilde{\cal H}_s$ is clearly specified in each region.}
\end{centering}
\end{figure*}
It turns out that in the low-density phase $\omega_1 > \omega_2$ and for $s \ge 0$, three different regions can be distinguished
depending on the values of $s$, $\alpha$ and $\beta$. In each region the largest eigenvalue of $\tilde{\cal H}_s$ in the
thermodynamic limit is given by one of the expressions $\Lambda_{1}(s)$, $\Lambda_{2}(s)$ or $\Lambda_{\mbox{Ph}}(s)$. The
boundaries of these three regions are calculated explicitly in~\ref{app}. The results are summarized in a
$3$-dimensional dynamical phase diagram in FIG.~\ref{Fig1}. In summary, we have
\begin{equation}
\label{LEigenvalue}
\Lambda^{\ast}(s) =\left\{
\begin{array}{ll}
\Lambda_{1}(s) \;\; \mbox{in the region} \;\; I \; ,\\
\Lambda_{2}(s) \;\; \mbox{in the region} \;\; II \; ,\\
\Lambda_{\mbox{Ph}}(s) \;\; \mbox{in the region} \;\; III \; .
\end{array}
\right.
\end{equation}
The reader can easily check that minus the first derivative of $\Lambda_{1}(s)$ with respect to $s$ at $s=0$ gives
the result obtained from the matrix method in~(\ref{average activity}) for $\omega_1 > \omega_2$. For
$\omega_2 > \omega_1$ one can use the transformation introduced in the previous section; the same result
is then obtained from minus the first derivative of $\Lambda_{2}(s)$ with respect to $s$ at $s=0$.
In order to have a better understanding of the dynamical phase structure of the model, we have specifically
considered four different points in the space of the parameters defined as follows
$$
\begin{array}{cccccc}
& \vline &\omega_1&\omega_2&\alpha&\beta \\ \hline
A& \vline & 3 & 1 & 2 & 5\\
B& \vline& 3 & 1 & 12 & 2\\
C& \vline & 3& 1 & 12 & 3.5\\
D& \vline & 3 &1 & 12 & 8
\end{array}
$$
These points correspond to the vertical lines parallel to the $s$-axis in FIG.~\ref{Fig1}. In FIG.~\ref{Fig2} we have plotted three different cross sections
of the dynamical phase diagram of the model, namely the planes $\beta=5$, $\alpha=10$ and $s=1.5$
of FIG.~\ref{Fig1}.
Along the line $A$ which lies in the region $I$ of FIG.~\ref{Fig1}, we always have $\Lambda^{\ast}(s)=\Lambda_{1}(s)$.
No dynamical phase transition takes place along this line. Along the line $B$, which is both in the region $I$ and the region $II$,
a first-order dynamical phase transition takes place at
\begin{equation}
\label{sc}
s_c=\frac{1}{2} \ln \left(\frac{(\alpha \omega_{1}-\beta \omega_{2})^2}{(\alpha -\beta ) \left(-\alpha \beta
(\omega_{1}-\omega_{2})+\alpha \omega_{1}^2-\beta \omega_{2}^2\right)}\right) \; .
\end{equation}
It turns out that we have $\Lambda^{\ast}(s)=\Lambda_{1}(s)$ for $0<s<s_c$ while $\Lambda^{\ast}(s)=\Lambda_{2}(s)$
for $s > s_c$. The line $C$ goes through the regions $I$, $II$ and $III$. Moving along the line $C$ one encounters two
second-order dynamical phase transitions, at $s=s_{\alpha}$ and $s=s_{\beta}$, given by the following expressions
\begin{eqnarray}
\label{sasb1}
s_{\alpha} &=& \ln \left( \sqrt{\frac{\omega_{1}}{\omega_{2}}} \frac{\alpha-2\omega_{2}}{\alpha-\omega_{1}-\omega_{2}} \right) \; ,\\
\label{sasb2}
s_{\beta} &=& \ln \left( \sqrt{\frac{\omega_{2}}{\omega_{1}}} \frac{\beta-2\omega_{1}}{\beta-\omega_{1}-\omega_{2}} \right) \; .
\end{eqnarray}
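The nature of these transitions can be checked numerically from the closed forms of $\Lambda_{1}$, $\Lambda_{2}$ and $\Lambda_{\mbox{Ph}}$, using the rates of the table above ($\omega_1=3$, $\omega_2=1$): at $s_c$ on line $B$ the two branches cross with equal values (first order), while at $s_{\alpha}$ and $s_{\beta}$ on line $C$ the bound-state branches merge continuously with the phase branch (second order).

```python
import math

w1, w2 = 3.0, 1.0
eta = math.sqrt(w2 / w1)

def R(x, y, s):
    return -((1 - y) / (2 * y)) * (math.sqrt((x * x + y) ** 2
             - 4 * math.exp(-2 * s) * x * x * y) - x * x + y)

def lam1(s, alpha):   # Lambda_1(s), Eq. (eigen1)
    return w2 * R(1 / eta, 1 - alpha / w2, s)

def lam2(s, beta):    # Lambda_2(s), Eq. (eigen2)
    return w1 * R(eta, 1 - beta / w1, s)

def lam_ph(s):        # Lambda_Ph(s), Eq. (eigenph)
    return -(w1 + w2) + 2 * math.sqrt(w1 * w2) * math.exp(-s)

# line B: alpha = 12, beta = 2 -> first-order crossing at s_c, Eq. (sc)
alpha, beta = 12.0, 2.0
sc = 0.5 * math.log((alpha * w1 - beta * w2) ** 2
                    / ((alpha - beta) * (-alpha * beta * (w1 - w2)
                                         + alpha * w1 ** 2 - beta * w2 ** 2)))
assert abs(lam1(sc, alpha) - lam2(sc, beta)) < 1e-9

# line C: alpha = 12, beta = 3.5 -> second-order transitions at s_alpha < s_beta
alpha, beta = 12.0, 3.5
s_a = math.log(math.sqrt(w1 / w2) * (alpha - 2 * w2) / (alpha - w1 - w2))
s_b = math.log(math.sqrt(w2 / w1) * (beta - 2 * w1) / (beta - w1 - w2))
assert s_a < s_b
assert abs(lam1(s_a, alpha) - lam_ph(s_a)) < 1e-9
assert abs(lam2(s_b, beta) - lam_ph(s_b)) < 1e-9
```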
For $0<s<s_\alpha$ the largest eigenvalue is given by $\Lambda^{\ast}(s)=\Lambda_{1}(s)$, for $s_{\alpha} < s <s_{\beta}$
it is given by $\Lambda^{\ast}(s)=\Lambda_{\mbox{Ph}}(s)$ and for $s > s_{\beta}$ it is given by $\Lambda^{\ast}(s)=\Lambda_{2}(s)$.
Finally, along the line $D$ a second-order dynamical phase transition takes place at $s=s_{\alpha}$ given by~(\ref{sasb1}).
Along this line and for $0 < s < s_{\alpha}$ we have $\Lambda^{\ast}(s)=\Lambda_{1}(s)$ while for $s>s_{\alpha}$ the largest eigenvalue of
$\tilde{\cal H}_s$ is given by $\Lambda^{\ast}(s)=\Lambda_{\mbox{Ph}}(s)$.
As $s \to \infty$, the dynamical phase diagram of the model in terms of $\alpha$ and $\beta$, similar to the one given in the third column of
FIG.~\ref{Fig2} for a finite $s$, approaches the following picture. The phase $I$ is confined to the region
$\alpha < \omega_1+\omega_2$ and $\beta > \alpha$. The phase $II$, on the other hand, is confined to the region
$\beta < \omega_1+\omega_2$ and $\alpha > \beta$. The phase $III$ is given by the region
$\alpha > \omega_1+\omega_2$ and $\beta > \omega_1+\omega_2$.
The comparison between the numerical and analytical results obtained for the largest eigenvalue of $\tilde{\cal H}_s$ and its derivatives
along the three lines $B$, $C$ and $D$ is given in FIG.~\ref{Fig3}. Here we would like to emphasize that by
{\em numerical calculations} we mean the exact numerical diagonalization of the modified Hamiltonian and finding its
largest eigenvalue as a function of $s$ for different system sizes. As can be seen in FIG.~\ref{Fig3}, the numerical and analytical
results agree. The interpolation of the numerical data converges well to the analytical solution.
The first and the second derivatives of the largest eigenvalue obtained from the analytical calculations are also plotted in the same
figure. Signatures of the discontinuities of the first and second derivatives of the largest eigenvalue are visible even for system sizes
as small as $L=8$ (not shown in the figure); they become sharper as the system size increases. The loci of the
discontinuities of $\Lambda^{\ast}(s)$ obtained from the analytical and numerical approaches also coincide.
\begin{figure}[t]
\begin{minipage}[t]{.5\textwidth}
\includegraphics[width=\textwidth]{Fig3}
\caption{The largest eigenvalue of $\tilde{\cal H}_s$ as a function of $s$ for $s \ge 0$ along the three
lines $B$, $C$ and $D$ from the top to the bottom respectively. The dotted lines are the numerically obtained
results for a system of length $L=8$ while the full lines are the analytical results obtained in the thermodynamic limit.
The insets show the first and the second derivatives of the largest eigenvalue with respect to $s$ which is obtained
analytically in the thermodynamic limit.}
\label{Fig3}
\end{minipage}
\hfill
\begin{minipage}[t]{.5\textwidth}
\includegraphics[width=\textwidth]{Fig4}
\caption{The large deviation function $I(k)$ for the probability distribution of the activity along the lines
$B$, $C$ and $D$ from the top to the bottom. The length of the system is $L=8$. The black dotted line is the numerically
obtained result. The colored lines are the analytical results. For the critical values of the activity $k$ see inside the text.}
\label{Fig4}
\end{minipage}
\end{figure}
\section{Comparison between different dynamical phases}
In this section we would like to comment on the differences between the dynamical phases (or the regions $I$, $II$ and $III$)
introduced in the previous section. Before going into the details, let us remind the reader that the steady-state of our model is
unique and that the dynamical phase structure of the model is determined solely by the largest eigenvalue of the modified Hamiltonian.
Assuming that none of the terms in~(\ref{Asym}) is divergent (which turns out to be the case for our model), the large deviation function of the
system is also unique and independent of the initial configuration of the system, and it can be obtained from the Legendre transformation
of the largest eigenvalue of the modified Hamiltonian.
The second point is that, as we have already mentioned, the right and the left eigenvectors of the modified Hamiltonian corresponding to the
largest eigenvalue carry the full information about the possible dynamical trajectories which start from or end in any configuration, given that the
average activity is kept fixed during the observation time. This information is encoded in the modified Hamiltonian itself and is independent
of the initial configuration of the system provided that none of the terms in~(\ref{Asym}) is divergent. In what follows we will study the general
properties of the modified Hamiltonian and discuss all possible events regardless of the initial configuration of the system, which was assumed
to be the steady-state.
The final remark is that we are studying the low-density phase $\omega_1 > \omega_2$ under the condition
$s \ge 0$ which means that our results will be valid for $k <k^{\ast}$. Note also that $s$ and $k$ are related through $k=-\frac{d}{ds}\Lambda^{\ast}(s)$.
As we mentioned, the right (left) eigenvector of $\tilde{\cal H}_s$ associated with its largest eigenvalue is the probability vector of the
final (initial) configurations, given that $k$ has been kept fixed through the evolution of the system.
Although the right eigenvector of $\tilde{\cal H}_s$ associated with its largest eigenvalue, $| \Lambda^{\ast}(s) \rangle$,
can be calculated exactly for any arbitrary positive $s$ in the thermodynamic limit, its left eigenvector $\langle \tilde{\Lambda}^{\ast}(s)|$
is much harder to calculate. As a matter of fact, as we have already seen, the right eigenvector $| \Lambda^{\ast}(s) \rangle$
lies in a small invariant subspace of the total configuration space. In contrast, the left eigenvector $\langle \tilde{\Lambda}^{\ast}(s)|$ has to be written
as a linear combination of $2^L$ properly chosen basis vectors associated with all possible configurations of the system. It turns out
that finding a closed analytical expression for the coefficients of this expansion for finite $L$ is a formidable task.
The probability of being in the configuration $\{\tau\}=\{ \tau_1,\ldots,\tau_L \}$ at the end of a trajectory, provided that a fixed $k$ is observed during
the observation time, can be calculated as follows. Considering the fact that for $s \ge 0$ the final configuration of the
system can only be one of the states defined in~(\ref{SM}), we define the expression
\begin{equation}
\label{probfinal}
P_{\mbox{final}}(i|k)\equiv \frac{C_{i}(s)}{\sum_{j=0}^{L}C_{j}(s)} \;\; \mbox{for} \;\; 0 \le i \le L
\end{equation}
which gives the probability that the final configuration in a trajectory is $\{ \tau_1,\ldots,\tau_L \}=\{ \{1\}_{i}, \{0\}_{L-i} \}$
provided that the average activity has been $k$. The coefficients $C_{i}(s)$'s are given in~(\ref{Cs}). On the other hand, the coefficients
$a(z)$ and $a(z^{-1})$ in~(\ref{Cs}) are given in~(\ref{As}) and the proper $z$ in each region should be obtained from~(\ref{eq for zs}),
hence for a system of size $L$,~(\ref{probfinal}) can be calculated, in principle, using~(\ref{Cs})-(\ref{eq for zs}) in each region.
The probability that the initial configuration in a trajectory with a fixed $k$ is $\{\tau\}=\{ \tau_1,\ldots,\tau_L \}$ will be denoted by $P_{\mbox{initial}}(\{\tau\}|k)$.
It turns out that in the thermodynamic limit this quantity can be calculated analytically only for $s\to +\infty$ and $s=0$; however, it can be
obtained numerically for any $L$ and $s$.
Let us start with $s=0$ where the only dynamical phase is the region $I$. Regardless of the values of $\alpha$ and $\beta$ the system
is in the low-density phase and the average activity is given by $k^{\ast}$ in the first line of~(\ref{average activity}). Since at $s=0$ we
have $|\Lambda^{\ast}(s=0)\rangle=| P^{\ast}\rangle$, using the matrix method one finds that
\begin{equation}
\label{Prob}
P_{\mbox{final}}(i|k^{\ast})=\frac{\left(1-\eta ^2\right) (1-\zeta )^{1-\delta _{0,i}} (1-\xi )^{1-\delta _{L,i}}\eta ^{2 i} }{(1-\xi )
\left(1-\frac{\zeta }{\eta ^{-2}}\right)-(1-\zeta ) \left(1-\frac{\xi }{\eta ^2}\right) \eta ^{2 L+2}}
\end{equation}
for $0\le i \le L$. It is clear that in the low-density phase, where $\eta <1$, and in the large-$L$ limit, this exponentially decaying function
is almost zero everywhere except near the left boundary, where $i$ is close to zero. This means that the lattice is almost
empty, with at most a few particles present near the left boundary. The left eigenvector $\langle \tilde{\Lambda}^{\ast}(s=0)|$
can be obtained by noting that $\tilde{\cal H}_{s=0}$ is a stochastic matrix, hence we have
$$
\langle \tilde{\Lambda}^{\ast}(s=0)|=\Big( 1\;1\;1\; \cdots \; 1\;1\Big)_{1\times 2^{L}}\; .
$$
This implies that the probability for being in any configuration $\{\tau\}$ at the beginning of a trajectory
is $P_{\mbox{initial}}(\{\tau\}|k^{\ast})=2^{-L}$. This simply means that in order to observe
the typical activity during the observation time, the system can start a trajectory, in principle, from any configuration: no initial
configuration of the system is preferred. There is no
contradiction here with the fact that in~(\ref{Asym}) the initial configuration of the system can be the steady-state.
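As a quick numerical sanity check of~(\ref{Prob}), one can verify that it is properly normalized over $0\le i\le L$. The sketch below transcribes the closed-form expression directly; the parameter values are purely illustrative and not tied to particular transition rates.

```python
def p_final(i, L, eta, zeta, xi):
    # transcription of the closed-form P_final(i|k*) at s = 0
    num = ((1 - eta**2)
           * (1 - zeta) ** (0 if i == 0 else 1)
           * (1 - xi) ** (0 if i == L else 1)
           * eta ** (2 * i))
    den = ((1 - xi) * (1 - zeta * eta**2)
           - (1 - zeta) * (1 - xi / eta**2) * eta ** (2 * L + 2))
    return num / den

# illustrative values in the low-density regime (eta < 1)
L, eta, zeta, xi = 20, 0.6, 0.3, 0.4
total = sum(p_final(i, L, eta, zeta, xi) for i in range(L + 1))
assert abs(total - 1.0) < 1e-12  # the distribution is normalized
```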
Our numerically exact investigations show that at finite $s$ the initial configuration of the system at the beginning of a
trajectory can essentially be any configuration $\{\tau\}$. However, as $s$ increases toward positive
infinity, the initial configurations of different trajectories can only be one of the states defined in~(\ref{SM}).
Let us now define $\epsilon \equiv e^{-s}$, so that $0 \le \epsilon \le 1$ for $s\ge 0$. The large-$s$ limit corresponds to very small activities
and hence to rare events. Using either the exact results obtained in the previous section in the thermodynamic limit
or conventional perturbation theory, one finds, up to order $\epsilon^2$,
\begin{equation}
\label{EigenS}
\begin{array}{lll}
\Lambda_{1} (\epsilon) \simeq -\alpha-\frac{\alpha \omega_{1}}{\alpha-\omega_{1}-\omega_{2}}\epsilon^2+{\cal O}(\epsilon^4)\; , \\ \\
\Lambda_{2} (\epsilon) \simeq -\beta-\frac{\beta \omega_{2}}{\beta-\omega_{1}-\omega_{2}}\epsilon^2+{\cal O}(\epsilon^4)\; , \\ \\
\Lambda_{\mbox{Ph}} (\epsilon) = -(\omega_{1}+\omega_{2})+2\sqrt{\omega_{1}\omega_{2}}\epsilon \; .
\end{array}
\end{equation}
As can be seen in~(\ref{EigenS}), the first correction to the largest eigenvalue of $\tilde{\cal H}_\infty$ is
of order $\epsilon^2$ in the regions $I$ and $II$, while it is of order $\epsilon$ in the region $III$.
We have also found that, up to the order $\epsilon^2$, the right eigenvector of $\tilde{\cal H}_s$ in the region $I$ is given by
\begin{eqnarray}
\label{lambda1}
|\Lambda^{\ast}(\epsilon)\rangle & \propto& |000\cdots 0 \rangle- \nonumber \\
&& \frac{\alpha \epsilon}{\alpha-\omega_{1}-\omega_{2}} | 100\cdots 0 \rangle+ \nonumber \\
&& \frac{\omega_{1}(\alpha-2\omega_{2})\epsilon^2}{(\alpha-\omega_1-\omega_2)^2} | 000\cdots 0 \rangle+\\
&& \frac{\alpha\omega_{2}\epsilon^2}{(\alpha-\omega_1-\omega_2)^2} | 110\cdots 0 \rangle+ {\cal O} (\epsilon^3) \nonumber
\end{eqnarray}
while its corresponding eigenvalue is given by the first expression in~(\ref{EigenS}). The eigenvector $|\Lambda^{\ast} (\epsilon)\rangle$
associated with the largest eigenvalue of $\tilde{\cal H}_s$ in the region $II$, given by the second expression in~(\ref{EigenS}), is
\begin{eqnarray}
|\Lambda^{\ast} (\epsilon)\rangle & \propto& |1\cdots 111 \rangle- \nonumber \\
&& \frac{\beta \epsilon}{\beta-\omega_{1}-\omega_{2}} | 1\cdots 110 \rangle+ \nonumber\\
&& \frac{\omega_{2}(\beta-2\omega_{1})\epsilon^2}{(\beta-\omega_1-\omega_2)^2} | 111\cdots 1 \rangle+\\
&& \frac{\beta\omega_{1}\epsilon^2}{(\beta-\omega_1-\omega_2)^2} | 1\cdots 100 \rangle + {\cal O} (\epsilon^3) \nonumber \; .
\end{eqnarray}
We have found that in the region $III$, up to the order $\epsilon^0$, the eigenvector of $\tilde{\cal H}_s$ associated with its largest
eigenvalue, which is given by $ \Lambda_{\mbox{Ph}} (\epsilon)$ in~(\ref{EigenS}), has the following form
\begin{equation}
\label{pheigenvector}
| \Lambda^{\ast} (\epsilon)\rangle= \sum_{i=1}^{\infty} i(\eta-1)^2\eta^{i-1} | \{1\}_{i} 0 \cdots 0 \rangle \; .
\end{equation}
The reader should note that the above expansion does not contain a fully occupied lattice. It is also worth mentioning that for a system
of size $L$ the largest eigenvalue of $\tilde{\cal H}_{s}$ and its corresponding eigenvector in the region $III$ can be obtained using
the perturbation method and one finds
\begin{eqnarray}
\label{III}
\Lambda_{\mbox{Ph}}(\epsilon) &\simeq& -(\omega_1+\omega_2)+2\sqrt{\omega_1\omega_2} \cos (\frac{\pi}{L}) \epsilon +{\cal O} (\epsilon^2) \; ,\\
| \Lambda^{\ast} (\epsilon)\rangle &=& \sum_{i=1}^{L-1}\frac{1+\eta^2-2\eta\cos (\frac{\pi}{L})}{1+\eta^L}\times
\frac{\sin (\frac{\pi i}{L})}{\sin (\frac{\pi}{L})} \times \eta^{i-1} | \{1\}_{i}\{0\}_{L-i} \rangle
+{\cal O} (\epsilon) \nonumber
\end{eqnarray}
up to the order $\epsilon$ and $\epsilon^0$ respectively. One can readily check that in the thermodynamic limit $L \to \infty$
these expressions converge to the third expression in~(\ref{EigenS}) and~(\ref{pheigenvector}) respectively.
As $s \to +\infty$, in the region $I$ the largest eigenvalue of $\tilde{\cal H}_\infty$
is equal to $-\alpha$. Its corresponding right eigenvector will be denoted by $| 00\cdots 0 \rangle$ which represents an empty lattice.
It turns out that the left eigenvector of $\tilde{\cal H}_\infty$ with the eigenvalue $-\alpha$ is also given by $\langle 00\cdots 0 |$. This
means that the trajectories with almost zero activity start from an empty lattice and end in an empty lattice. In the region $II$, the
right (left) eigenvector of $\tilde{\cal H}_\infty$ is given by $| 11\cdots 1 \rangle$ ($\langle 11\cdots 1 |$) representing a fully occupied lattice.
The corresponding eigenvalue is $-\beta$. In this phase the zero activity comes from those trajectories which start from a completely
occupied lattice and end in the same configuration. The situation in the region $III$ is slightly different. The right eigenvector
of $\tilde{\cal H}_\infty$ with the eigenvalue $-(\omega_1+\omega_2)$ is highly degenerate although this degeneracy can be easily
removed as we have done in~(\ref{III}) for a finite system. For an infinite system by using~(\ref{pheigenvector}) one finds
\begin{equation}
P_{\mbox{final}}(i|0)=i(\eta-1)^2\eta^{i-1}\;\;\mbox{for}\;\; 1\le i \le \infty \;.
\end{equation}
This distribution is peaked around the point $i^{\ast}=| \ln \eta |^{-1}$ and is almost zero elsewhere.
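The location of this peak follows from maximizing the logarithm of $i(\eta-1)^2\eta^{i-1}$ with respect to $i$:
$$
\frac{d}{di}\Big[ \ln i +(i-1)\ln \eta \Big]=\frac{1}{i}+\ln \eta =0
\quad \Longrightarrow \quad i^{\ast}=-\frac{1}{\ln \eta}=| \ln \eta |^{-1}\; ,
$$
where we have used the fact that $\ln \eta <0$ for $\eta <1$.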
In summary, for a finite $s$ and in the large-$L$ limit in the region $I$, it is more probable that the configuration of the system at the beginning
of a trajectory is an almost empty lattice. Considering the fact that the largest eigenvalue of $\tilde{\cal H}_s$ in this region, given
by $\Lambda_{1}(s)$ in~(\ref{eigen1}), depends only on $\alpha$, one might conclude that the activity of the system in this
region is mainly produced by trajectories during which the system has been almost empty, so that only the particle
injection has generated the activity. In the region $II$ we have found that it is more probable that the initial
configuration in a trajectory is an almost fully occupied lattice. The largest eigenvalue of $\tilde{\cal H}_s$
in this region is given by~(\ref{eigen2}), which clearly depends only on $\beta$. This implies that the right
boundary, i.e.\ the extraction of particles, plays the major role in creating the activity of the system in this region.
Finally, in the region $III$ it is more probable that the trajectories start from (and end in) those configurations in which the lattice is neither fully
occupied by the particles nor completely empty. The fact that the largest eigenvalue of $\tilde{\cal H}_s$, given by~(\ref{eigenph}), does not depend
on $\alpha$ and $\beta$ confirms the idea that the activity actually comes from the bulk of the system and that the boundaries do not play a role.
As a closing remark we would like to comment on the dependence of the largest eigenvalue $\Lambda^{\ast}(s)$ in~(\ref{EigenS}) on $\epsilon$.
Let us write the right eigenvector of $\tilde{\cal H}_s$ associated with its largest eigenvalue in each phase as
\begin{equation}
| \Lambda^{\ast}(s) \rangle=| \Lambda^{0} \rangle+\epsilon | \Lambda^{1} \rangle+ \epsilon^2 | \Lambda^{2}\rangle+\cdots
\end{equation}
where $| \Lambda^{0} \rangle =| \Lambda^{\ast}(\infty) \rangle$. The conventional perturbation theory gives the largest eigenvalue as
\begin{equation}
\Lambda^{\ast}(s) = \Lambda^{\ast}(\infty) +\epsilon \langle \Lambda^{0} | \tilde{\cal H}_s^{\mbox{o}} | \Lambda^{0} \rangle+
\epsilon^2 \langle \Lambda^{0} |\tilde{\cal H}_s^{\mbox{o}} | \Lambda^{1} \rangle +\cdots
\end{equation}
in which $\tilde{\cal H}_s^{\mbox{o}}$ is the off-diagonal part of the matrix $ \tilde{\cal H}_s$.
In the region $I$ we have $| \Lambda^{0} \rangle=| 00\cdots 0 \rangle$. The first correction to $\Lambda^{\ast}(\infty)=-\alpha$
is clearly zero since $ \tilde{\cal H}_s^{\mbox{o}}$ cannot connect an empty lattice to an empty lattice. However, it does connect
$|10\cdots 0 \rangle$ to $|00\cdots 0 \rangle$ (see~(\ref{lambda1})). This is the reason why the first correction to the eigenvalue in the
phase $I$ is of order $\epsilon^2$. Similar reasoning applies to the phase $II$. In the phase $III$, in contrast, the first
correction to the eigenvalue $\Lambda^{\ast}(\infty)=-(\omega_1+\omega_2)$ is of order $\epsilon$ since the operator
$ \tilde{\cal H}_s^{\mbox{o}}$ can connect $| \Lambda^{0} \rangle$ given in~(\ref{pheigenvector}) to itself.
\section{The large deviation function}
Having the largest eigenvalue of $\tilde{\cal H}_s$ in each region, one can easily use~(\ref{LDF}) to calculate
the {\em convex part} of the large deviation function for the activity of the system along the lines $A$, $B$, $C$ and $D$ analytically.
We have also done the same calculations along the above four lines numerically. As we have already
mentioned the numerical calculations mean finding the largest eigenvalue of the modified Hamiltonian as a
function of $s$ numerically and then applying the Legendre transformation to the resulting interpolating function.
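As a minimal illustration of this numerical recipe, consider a toy two-state jump process (not the model studied here): one diagonalizes the tilted generator on a grid of $s$ values and then performs the Legendre transformation numerically.

```python
import numpy as np

def scgf(s, a, b):
    # largest eigenvalue of the tilted (modified) generator of a
    # two-state jump process whose activity counts every jump
    eps = np.exp(-s)
    H_s = np.array([[-a, b * eps],
                    [a * eps, -b]])
    return np.linalg.eigvals(H_s).real.max()

def rate_function(k, s_grid, a, b):
    # numerical Legendre transform I(k) = sup_s [ -s*k - Lambda(s) ]
    return max(-s * k - scgf(s, a, b) for s in s_grid)

a, b = 1.0, 2.0
s_grid = np.linspace(-3.0, 6.0, 2001)
# typical activity k* = -Lambda'(0) = 2ab/(a+b); the rate function vanishes there
h = 1e-5
k_typ = -(scgf(h, a, b) - scgf(-h, a, b)) / (2 * h)
assert abs(k_typ - 2 * a * b / (a + b)) < 1e-6
assert abs(rate_function(k_typ, s_grid, a, b)) < 1e-3
```

For this $2\times 2$ example the largest eigenvalue is known in closed form, $\Lambda(s)=\frac{1}{2}\big[-(a+b)+\sqrt{(a-b)^2+4ab\,e^{-2s}}\big]$, which has the same functional form (a constant plus a term linear in $e^{-s}$ when $a=b$) as $\Lambda_{\mbox{Ph}}(\epsilon)$ above.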
Along the line $A$ the largest eigenvalue of $\tilde{\cal H}_s$ is given by $\Lambda_1(s)$ for $0 \le s < +\infty$,
since this line lies completely in the region $I$. There is no discontinuity in $\Lambda_1(s)$ of any type in this
case. The large deviation function for the probability distribution of activity can be calculated by applying the Legendre
transformation~(\ref{LDF}) to $\Lambda_1(s)$ given by~(\ref{eigen1}) which results in
\begin{equation}
I_{1}(k)=\omega_2 {\cal M}(\eta^{-1},\zeta,k\omega_{2}^{-1})
\end{equation}
in which we have defined
\begin{eqnarray*}
{\cal M} (x,y,z) & \equiv & \frac{1-y}{2y} \Big( y-x^2 +\frac{(y+x^2)^2(1-y)}{zy+\sqrt{z^2y^2+(1-y)^2(y+x^2)^2}} \\
&-& \frac{z}{2} \ln \frac{2x^2( zy+\sqrt{z^2y^2+(1-y)^2(y+x^2)^2})}{z(y+x^2)^2} \Big).
\end{eqnarray*}
Along the line $B$ we have a first-order phase transition at $s_{c}$ given by the expression in~(\ref{sc}) (see also the first row of FIG.~\ref{Fig3}).
As is known~\cite{T09}, the fact that the first derivative of the largest eigenvalue with respect to $s$ is discontinuous at $s_{c}$ will
result in a linear behavior for the large deviation function. This is actually the nonconvex part of the large deviation function
which cannot be accessed using the Legendre transformation of the largest eigenvalue of the modified Hamiltonian. In summary, the
large deviation function has three parts: two nonlinear parts $I_1(k)$ and $I_2(k)$ given by
\begin{eqnarray}
I_1(k) &=& \omega_2 {\cal M}(\eta^{-1},\zeta,k\omega_{2}^{-1})\;\; \mbox{for}\;\; k>k_{c_2}\; , \\
I_2(k) &=& \omega_1 {\cal M}(\eta,\xi,k\omega_{1}^{-1})\;\; \mbox{for}\;\; k <k_{c_1}
\end{eqnarray}
in which
\begin{eqnarray}
k_{c_1} &=& \frac{2 \alpha \omega_{1} (\alpha -\beta ) (-\alpha \beta (\omega_{1}-\omega_{2})+\alpha \omega_{1}^2-\beta
\omega_{2}^2)}{| (\alpha +\omega_{1}-\omega_{2}) (\alpha ^2 \omega_{1}^2-\beta ^2 \omega_{2}^2)-2 \alpha
\beta \omega_{1} (\alpha \omega_{1}-\beta \omega_{2})| } \; , \\
k_{c_2} &=& \frac{2 \beta \omega_{2} (\beta -\alpha ) \left(-\alpha \beta (\omega_{2}-\omega_{1})-\alpha \omega_{1}^2+\beta
\omega_{2}^2\right)}{| (\beta -\omega_{1}+\omega_{2}) (\beta ^2 \omega_{2}^2-\alpha ^2 \omega_{1}^2)-2 \alpha
\beta \omega_{2} (\beta \omega_{2}-\alpha \omega_{1})| }
\end{eqnarray}
and a linear part which connects these two parts. In FIG.~\ref{Fig4} (the first row) we have plotted the large deviation function $I(k)$ as a
function of $k$ obtained from both analytical and numerical calculations.
The large deviation function for $k < k_{c_1}$ ($k > k_{c_2}$) is given by $I_{2}(k)$ ($I_{1}(k)$). They both lie on the numerically obtained results.
For $k_{c_1} < k < k_{c_2}$ it can be seen that the numerical results do not lie on either of the analytical functions. This is the interval where the
large deviation function is a linear function of $k$.
Along the line $C$ the second derivative of the largest eigenvalue of $\tilde{\cal H}_s$ with respect to $s$ has two discontinuities
at $s_{\alpha}$ and $s_{\beta}$ (see the second row of FIG.~\ref{Fig3}). Since the first derivative of $\Lambda^{\ast}(s)$ is
continuous along this line, no linear behavior is observed in its corresponding large deviation function. The large deviation
function has three parts along the line $C$. For $0 \le k \le k_{\beta}$ it is given by $I_{2}(k)$ while for $k \ge k_{\alpha}$
it is given by $I_{1}(k)$ in which
\begin{eqnarray}
k_{\alpha} &=& \frac{2\omega_2(\alpha-\omega_1-\omega_2)}{\alpha-2\omega_2} \; , \\
k_{\beta} &=&\frac{2\omega_1(\beta-\omega_1-\omega_2)}{\beta-2\omega_1} \; .
\end{eqnarray}
Finally, for $k_{\beta}\le k \le k_{\alpha}$ the large deviation function is given by
\begin{equation}
I_{\mbox{Ph}}(k)=\omega_{1}+\omega_{2}-k(1+\ln \frac{2\sqrt{\omega_{1}\omega_{2}}}{k})\; .
\end{equation}
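This expression follows from applying the Legendre transformation~(\ref{LDF}) to the third line of~(\ref{EigenS}) (assuming the convention $I(k)=\sup_{s}\,[-sk-\Lambda^{\ast}(s)]$): the stationary point of $-sk-\Lambda_{\mbox{Ph}}(s)$ lies at $e^{-s^{\ast}}=k/(2\sqrt{\omega_{1}\omega_{2}})$, so that
$$
I_{\mbox{Ph}}(k)=-s^{\ast}k-\Lambda_{\mbox{Ph}}(s^{\ast})
=\omega_{1}+\omega_{2}-k\Big(1+\ln \frac{2\sqrt{\omega_{1}\omega_{2}}}{k}\Big)\; .
$$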
As in the previous case, we have plotted both the analytical and numerical results for the large deviation function along the line $C$
in FIG.~\ref{Fig4}. In this case all three parts of the analytical large deviation function lie perfectly on the numerical results.
As we explained above, along the line $D$ we encounter a second-order phase transition when we move from the region $I$ to
the region $III$. In terms of the largest eigenvalue of $\tilde{\cal H}_s$ it takes place at $s_{\alpha}$. Using the Legendre
transformation~(\ref{LDF}) we have found that for $k > k_{\alpha}$ the large deviation function is given by $I_{1}(k)$ while for
$0 \le k \le k_{\alpha}$ it is given by $I_{\mbox{Ph}}(k)$. As can be seen in FIG.~\ref{Fig4} (the third row) the numerical results
lie on the theoretical predictions along this line.
\section{Concluding remarks}
In this paper we have studied the dynamical phase transitions in a one-dimensional stochastic system which can be considered as a
variant of the zero-temperature Glauber model. The bulk reactions
consist of asymmetric death and branching of particles. The particles can enter and leave the system from the boundaries.
This system is non-ergodic; however, its configuration space is irreducible and it has a unique (equilibrium) steady-state. In the steady-state
the system undergoes a static phase transition in the thermodynamic limit which is controlled by the bulk reaction rates. Considering the activity
of the system as a dynamical order parameter, we have found that the system might undergo a dynamical phase transition in the thermodynamic limit which
is determined by both the boundary and bulk transition rates. It turns out that the dynamical phase diagram has three different regions or phases.
The physical properties of each phase are studied in detail in the thermodynamic limit. We have found that the activity of the system is generated either by the
reactions at the boundaries (in the regions $I$ and $II$) or by the bulk reactions of the system (in the region $III$).
We have started with a system of length $L$, which clearly has a finite configuration space. Taking the limit $t \to \infty$ one finds
that the scaled cumulant generating function is given by the largest eigenvalue of the modified Hamiltonian. We then take the
thermodynamic limit $L \to \infty$. In the thermodynamic limit the configuration space of our system becomes infinitely
large; however, we have carefully checked that neither $\sum_{C} \langle C | \Lambda^{\ast}(s) \rangle$
nor $\langle \tilde{\Lambda}^{\ast}(s) | P^{\ast} \rangle$ diverges (at least for $s \ge 0$).
On the one hand, since $ | \Lambda^{\ast}(s) \rangle$ is calculated
exactly in all three dynamical phases in the thermodynamic limit, one can easily check that
$\sum_{C} \langle C | \Lambda^{\ast}(s) \rangle$ never diverges. On the other hand, if we consider
$\langle \tilde{\Lambda}^{\ast}(s) | P^{\ast} \rangle$ as a power series of $\epsilon$
we can easily see that for $\epsilon =0$ the expression $\langle \tilde{\Lambda}^{\ast}(\infty) | P^{\ast} \rangle$ is clearly
finite and equal to the probability of having a completely empty or a fully occupied lattice in the phases $I$ and $II$ respectively.
Our exact analytical calculations in the phase $III$ show that $\langle \tilde{\Lambda}^{\ast}(\infty) | P^{\ast} \rangle$ is also convergent.
For $\epsilon=1$ one finds that $\langle \tilde{\Lambda}^{\ast}(s) |=\sum_{C} \langle C | $ which results in
$\langle \tilde{\Lambda}^{\ast}(s) | P^{\ast} \rangle=1$.
In summary, the expression~(\ref{Asym}) is not divergent regardless of the value of
$s$ or the initial configuration of the system. Hence, the dominant eigenvalue of the modified Hamiltonian of the system generates, through
the G\"artner-Ellis Theorem, a large deviation function for the activity of the system~\cite{T09}.
We have calculated the convex part of the large deviation function for the probability distribution function of the activity
in each phase. Our analytical calculations are compared with the results obtained from numerical diagonalization of the
modified Hamiltonian.
In~\cite{HPS} the authors have studied the dynamics of instantaneous condensation in a single-site zero-range process conditioned
on an atypical current. The effective dynamics of their model maps to a biased random walk on a semi-infinite lattice. Since the dynamics
of our model for $s>0$ (activities below the typical value) can be explained in terms of the dynamics of a single shock moving on a finite
lattice with reflecting boundaries, it can be considered as a conserved zero-range process with two sites and $L$ particles.
From this point of view, the condensation (accumulation of the particles in a single site) in this zero-range process can be seen in the
dynamical phases $I$ and $II$ in the limit $s \to \infty$, where the activity goes to zero. Note that this phenomenon occurs in the
low-density phase $\omega_1 > \omega_2$, where the system is typically empty.
Finally, in~\cite{BS} the authors have studied the finite-time evolution of shocks and antishocks in the asymmetric
simple exclusion process on a ring conditioned on an atypically low particle current. It seems that the lower-than-typical activity
region in our model ($s>0$) is governed by the evolution of a single shock which performs random walk on the lattice.
The higher-than-typical activities, on the other hand, should be generated by the evolution of multiple shocks in the system.
One should note that the antishocks in our model do not evolve in time under the dynamical rules~(\ref{rules}) unless one
considers the reactions $\emptyset A \to AA$ and $\emptyset A \to \emptyset \emptyset$.
There are still many open problems that can be studied separately. Most of our calculations are performed in the thermodynamic limit.
It would be interesting to study the finite-size effects, i.e., the dependence of the eigenvalues and the eigenvectors of the modified Hamiltonian of
the system on the length of the lattice $L$ in all three dynamical phases~\cite{ABN}. On the other hand, the case $s < 0$, which corresponds to
the average activity above the typical value, has not been studied in this paper and requires a careful and detailed study. The largest eigenvalue
of the modified Hamiltonian for negative values of $s$ generates the large deviation function for the fluctuations of the activity above its typical value.
\section{Introduction}
Let ${\mathcal C}_t$ be one of the categories
$\text{Rep}(S_t)$, $\text{Rep}(GL_t)$, $\text{Rep}(O_t)$, $\text{Rep}(Sp_{2t})$,
obtained by interpolating the classical representation categories
${\bf Rep}(S_n)$, ${\bf Rep}(GL_n)$, ${\bf Rep}(O_n)$, ${\bf Rep}(Sp_{2n})$
to complex values of $n$, defined by Deligne-Milne and
Deligne (\cite{DM,De1,De2}).\footnote{Note that, to avoid confusion, we denote ordinary representation categories by
${\bf Rep}$, and interpolated ones by $\text{Rep}$.} In \cite{E1}, by analogy with
the representation theory of real reductive groups, we proposed to consider
various categories of ``Harish-Chandra modules'' based on ${\mathcal C}_t$,
whose objects $M$ are objects of ${\mathcal C}_t$ equipped with
additional morphisms satisfying certain relations. In this situation,
the structure of an object of ${\mathcal C}_t$ on $M$
is analogous to the action of the ``maximal compact subgroup'',
while the additional morphisms play the role of the
``noncompact part''. The papers \cite{E1, EA, Ma} study examples of such categories
based on the category $\text{Rep}(S_t)$
(which could be viewed as doing ``algebraic combinatorics in
complex rank''). This paper is a sequel to these works, and its goal is to
start developing ``Lie theory in complex rank'', extending
the constructions of \cite{E1} to ``Lie-theoretic''
categories $\text{Rep}(GL_t)$, $\text{Rep}(O_t)$, $\text{Rep}(Sp_{2t})$ (based on the ideas outlined in \cite{E2}).
Namely, we define complex rank analogs of
the parabolic category O and the representation categories of real reductive Lie groups and supergroups,
affine Lie algebras, and Yangians.
We develop a framework and language for studying these categories,
prove basic results about them, and outline a number of directions of
further research. We plan to pursue these directions in future papers.
The organization of the paper is as follows.
In Section 2, we give some preliminaries on groups in tensor categories,
and then give background on Deligne categories $\text{Rep}(GL_t)$, $\text{Rep}(O_t)$,
$\text{Rep}(Sp_{2t})$. In particular, we show that the groups $GL_t$, $O_t$, $Sp_{2t}$ are connected,
and therefore can be effectively studied by looking at their Lie algebras.
In Section 3, we explain how to interpolate classical symmetric pairs,
and then proceed to discuss the basic theory of Harish-Chandra modules.
In Section 4, we develop the interpolation of
the basic theory of parabolic category O.
In Section 5, we discuss the interpolations of classical Lie supergroups,
$GL_{t|s}$ and $OSp_{t|2s}$.
In Section 6, we discuss the interpolation of the representation theory of affine Lie algebras.
Finally, in Section 7, we describe the interpolation of Yangians of classical Lie algebras.
{\bf Acknowledgments.} The author is grateful to
I. Entova-Aizenbud and V. Ostrik for many useful discussions.
The work of the author was partially supported by the NSF grants
DMS-0504847 and DMS-1000113.
\section{Preliminaries}
\subsection{Symmetric tensor categories}
Throughout the paper, we work over the field $\Bbb C$ of complex numbers.
By a symmetric tensor category, we will mean a $\Bbb C$-linear
artinian\footnote{An artinian category is an abelian category whose objects have finite length and morphism spaces are finite dimensional.}
rigid symmetric monoidal category ${\mathcal{C}}$, with biadditive tensor product and ${\rm End}(\bold 1)=\Bbb C$, as in \cite{DM}
(such categories are also called {\it pre-Tannakian categories}, or {\it tensor categories satisfying finiteness assumptions} (\cite{De1}, 2.12.1)).
We will not keep track of bracket positions in tensor products of several objects.
For $V\in {\mathcal{C}}$, we will denote by ${\rm ev}_V : V^*\otimes V\to \bold 1$ and ${\rm coev}_V: \bold 1\to V\otimes V^*$
the evaluation and coevaluation morphisms of $V$.
Recall that in a symmetric tensor category ${\mathcal{C}}$, it makes sense to talk about any linear algebraic structure
(such as a (commutative) associative algebra, Lie algebra, module over such an algebra, etc).
We will also routinely consider ordinary algebraic structures (over $\Bbb C$) as those in ${\mathcal{C}}$,
by using the functor $A\mapsto A\otimes \bold 1$.
If ${\mathcal{D}}$ is an artinian category, by ${\rm Ind}{\mathcal{D}}$ we will mean the ind-completion of ${\mathcal{D}}$;
it consists of inductive limits of objects of ${\mathcal{D}}$ (for instance, the ind-completion of the category of finite dimensional vector spaces is the category of all vector spaces).
\subsection{Affine group schemes in symmetric tensor categories and their Lie algebras}
Recall that an affine group scheme $G$ in a symmetric tensor category
${\mathcal{C}}$ corresponds to a commutative Hopf algebra $H$ in ${\mathcal{C}}$; we write $H=O(G)$ and $G={\rm Spec}H$.
To such a group scheme $G$ we can attach the category ${\bf Rep}(G)$ of representations of $G$ in ${\mathcal{C}}$, which is, by definition, the category of
(left) $O(G)$-comodules in ${\mathcal{C}}$.
Note that $O(G)$ carries two commuting actions of $G$
preserving the algebra structure -- left translations and right translations; as coactions, they are both defined by the coproduct $\Delta: O(G)\to O(G)\otimes O(G)$.
The corresponding diagonal action is called the {\it adjoint action}, and it preserves the Hopf algebra structure.
For an affine group scheme $G$ we can define its Lie algebra
in the same way as in classical Lie theory.
Namely, let $I$ be the augmentation
ideal in $H$. Then $I/I^2$ has a natural Lie coalgebra structure
(this is a general property of Hopf algebras), so $\mathfrak{g}:=(I/I^2)^*$ (which is, in general,
a pro-object) is a Lie algebra, denoted by $\mathfrak{g}={\rm Lie}G$.
Moreover, if $M$ is a locally algebraic $G$-module (i.e., a left $H$-comodule in ${\rm Ind}{\mathcal{C}}$) then we have a natural
map $\zeta_M: M\to I/I^2\otimes M$ (the categorical analog of the map $x\mapsto \rho(x)-1\otimes x$ taken modulo $I^2$ in the
first component, where $\rho: M\to H\otimes M$ is the coaction) which defines an action $\mathfrak{g}\otimes M\to M$
of $\mathfrak{g}$ on $M$ (the categorical analog of the derivative of a Lie group representation).
This gives rise to the ``derivative'' functor $D: {\rm Ind}{\bf Rep}(G)\to {\rm Ind}{\bf Rep}(\mathfrak{g})$.
In particular, taking $M=H$ and $\rho=\Delta$ (the coproduct), we get an action $\delta: \mathfrak{g}\otimes H\to H$
of $\mathfrak{g}$ on $H$ by algebra derivations (this is a categorical analog of infinitesimal left translations).
\begin{definition} We say that an affine group scheme $G$ in ${\mathcal{C}}$ is connected
if $H^\mathfrak{g}:={\rm Ker}\zeta_H$ is isomorphic to $\bold 1$.\footnote{For classical algebraic groups $G$ over $\Bbb C$, this definition coincides
with the usual definition of connectedness: it says that a regular function on $G$ annihilated by all the right-invariant vector fields on $G$ must be constant.}
\end{definition}
In particular, we see that the connectedness property
is preserved under symmetric tensor functors.
\begin{proposition}\label{ff}
$G$ is connected if and only if the functor
$$
D: {\rm Ind}{\bf Rep}(G)\to {\rm Ind}{\bf Rep}(\mathfrak{g})
$$
is fully faithful.
\end{proposition}
\begin{proof}
Suppose $G$ is connected. We need to show that for any $X,Y\in {\bf Rep}(G)$, any $\mathfrak{g}$-homomorphism
$f: X\to Y$ is actually a $G$-homomorphism. For this, it suffices to show that
for any $G$-module $U$ in ${\mathcal{C}}$, one has $U^G=U^\mathfrak{g}$; indeed, then we can take $U=X^*\otimes Y$
(note that a priori, we only know that $U^G\subset U^\mathfrak{g}\subset \text{Hom}_{\mathcal{C}}(\bold 1,U)$).
We have an inclusion $U\to O(G)\otimes U_{\rm obj}$ given by the coaction of $O(G)$ on $U$, where $U_{\rm obj}$ is the underlying object of $U$ with the trivial $G$-action.
Thus, it suffices to check that
$O(G)^G=O(G)^\mathfrak{g}$ (i.e., the invariants under the usual and infinitesimal left translations coincide), i.e., that $O(G)^\mathfrak{g}=\bold 1$, which is the definition of connectedness.
Conversely, if the functor $D$ is fully faithful then $O(G)^G=O(G)^\mathfrak{g}$, so $O(G)^\mathfrak{g}=\bold 1$, i.e., $G$ is connected.
\end{proof}
\subsection{Classical groups in symmetric tensor categories}
Let us now define the general linear, orthogonal, and symplectic groups
in symmetric tensor categories.
\begin{definition}\label{clgroups}
(i) Let $V$ be an object in a symmetric tensor category ${\mathcal{C}}$.
The group scheme $GL(V)$ is cut out inside $V\otimes V^*\oplus V^*\otimes V$
by the equations $AB=BA={\rm Id}$ (i.e., $O(GL(V))$ is the quotient of $S(V^*\otimes V\oplus V\otimes V^*)$
by the ideal $J$ defined by these equations).
(ii) Suppose that $V$ is equipped with a symmetric (respectively, skew-symmetric)
isomorphism $\psi: V\to V^*$.
The group $O(V)$ (respectively, $Sp(V)$) is cut out inside
$V\otimes V^*$ by the equations $AA^*=A^*A={\rm Id}$, where $A^*$ is
the adjoint of $A$ with respect to $\psi$.
\end{definition}
The structure of an affine group scheme on $GL(V)$, $O(V)$, $Sp(V)$ is defined in the same way as in classical
Lie theory.
\begin{remark} 1. The equations in Definition \ref{clgroups} are to be understood categorically;
e.g., $AB=BA={\rm Id}$ means that the ideal $J$ is generated by
the images of $V^*\otimes V$ inside $S(V^*\otimes V\oplus V\otimes V^*)$
under the morphisms
$$
\sigma_{3,4}\circ({\rm Id}_{V^*}\otimes {\rm coev}_V\otimes {\rm Id}_V)-{\rm ev}_V\text{ and }
\sigma_{3,4}\sigma_{12,34}\circ ({\rm Id}_{V^*}\otimes {\rm coev}_V\otimes {\rm Id}_{V})-{\rm ev}_V
$$
(where $\sigma$ denotes the permutation of the appropriate components).
2. In classical Lie theory one of the two equations $AB=BA={\rm Id}$ or $AA^*=A^*A={\rm Id}$ suffices (and implies the other), but
we don't expect that this is the case in general. The proof of this implication uses determinants,
which are not available in general, and the statement fails for infinite dimensional
vector spaces (which don't form a rigid category, however).
\end{remark}
Now let us describe the Lie algebras of the groups $GL(V)$, $O(V)$, $Sp(V)$.
First of all, for any $V$, the object ${\mathfrak{gl}}(V):=V\otimes V^*$
is naturally an associative algebra and thus a Lie algebra, with the bracket being the commutator.
Next, if $V$ is equipped with a symmetric (respectively, skew-symmetric) isomorphism
$\psi: V\to V^*$, then we can define an automorphism of Lie algebras
$\theta: {\mathfrak{gl}}(V)\to {\mathfrak{gl}}(V)$ given by
$$
\theta=-\sigma(\psi\otimes \psi^{-1}),
$$
and one can define the Lie algebra ${\mathfrak{o}}(V)$ (respectively, ${\mathfrak{sp}}(V)$) to be ${\rm Ker}(\theta-{\rm Id})$.
Note that as objects we have ${\mathfrak{o}}(V)=\wedge^2V$ and ${\mathfrak{sp}}(V)=S^2V$.
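For instance, in the classical case ${\mathcal{C}}={\rm Vec}$, writing elements of ${\mathfrak{gl}}(V)$ as matrices $A$ and $\psi$ as an invertible (skew-)symmetric matrix $\Psi$, the automorphism $\theta$ unwinds (up to the identifications above) to
$$
\theta(A)=-\Psi^{-1}A^{T}\Psi, \qquad \theta(A)=A \;\Longleftrightarrow\; A^{T}\Psi+\Psi A=0,
$$
so its fixed points recover the usual orthogonal (respectively, symplectic) matrix Lie algebra.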
\begin{proposition}\label{leaal}
We have ${\rm Lie}GL(V)={\mathfrak{gl}}(V)$, ${\rm Lie}O(V)={\mathfrak{o}}(V)$,
${\rm Lie}Sp(V)={\mathfrak{sp}}(V)$.
\end{proposition}
\begin{proof}
This is readily obtained as in classical Lie theory, by linearizing the equations defining the corresponding groups.
\end{proof}
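To illustrate the linearization in the orthogonal case (a sketch of our own, with signs as in the symmetric situation): write $A={\rm Id}+\varepsilon X$ with $\varepsilon^2=0$ and $X\in{\mathfrak{gl}}(V)$. Then

```latex
\[
AA^*=({\rm Id}+\varepsilon X)({\rm Id}+\varepsilon X^*)
={\rm Id}+\varepsilon\,(X+X^*),
\]
```

so $AA^*={\rm Id}$ forces $X^*=-X$. Since the adjoint is given by $X^*=\sigma(\psi\otimes \psi^{-1})(X)$, this condition reads $\theta(X)=X$, i.e. $X\in {\rm Ker}(\theta-{\rm Id})={\mathfrak{o}}(V)$, as claimed.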
Observe that we have a Lie subalgebra ${\mathfrak{sl}}(V)\subset {\mathfrak{gl}}(V)$, where ${\mathfrak{sl}}(V)$
is the kernel of the evaluation morphism. This Lie algebra is the Lie algebra of the group scheme
$PGL(V)=GL(V)/\Bbb C^*$, defined by the equality $O(PGL(V))=O(GL(V))_0$, where the subscript $0$ means the degree zero part
under the $\Bbb Z$-grading on $O(GL(V))$ in which $A$ has degree $1$ and $B$ has degree $-1$.
Note, however, that in general, we cannot define the group scheme $SL(V)$ (as the determinant
character of $GL(V)$ is not defined).
\subsection{The fundamental group of a symmetric tensor category}
Let us recall the basic theory of fundamental groups of symmetric tensor categories (\cite{De1}, Section 8).
If ${\mathcal{C}}$ is a symmetric tensor category, then one can define
a commutative algebra $R_{\mathcal{C}}$ in ${\rm Ind}({\mathcal{C}}\boxtimes {\mathcal{C}})$ by the formula
$$
R_{\mathcal{C}}:=(\oplus_{X\in {\mathcal{C}}}X\boxtimes X^*)/E,
$$
where $E$ is the sum of the images of the morphisms
$$
f\boxtimes {\rm Id}_{Y^*}-{\rm Id}_X\boxtimes f^*
$$
over all objects $X,Y\in {\mathcal{C}}$ and morphisms $f: X\to Y$.
The multiplication in $R_{\mathcal{C}}$
is just the tensor product (i.e., it tautologically maps
$(X\boxtimes X^*)\otimes (Y\boxtimes Y^*)$ to $(X\otimes Y)\boxtimes (Y^*\otimes X^*)$).
If ${\mathcal{C}}$ is semisimple, then $R_{\mathcal{C}}=\oplus_X X\boxtimes X^*$,
where $X$ runs over the isomorphism classes of simple objects of ${\mathcal{C}}$.
Let $H_{\mathcal{C}}=T(R_{\mathcal{C}})\in {\rm Ind}{\mathcal{C}}$,
where $T: {\mathcal{C}}\boxtimes {\mathcal{C}}\to {\mathcal{C}}$ is the tensor product functor
(so $H_{\mathcal{C}}=\oplus_X X\otimes X^*$ in the semisimple case).
Then $H_{\mathcal{C}}$ is a commutative Hopf algebra. Indeed, the coproduct maps
$X\otimes X^*$ to $(X\otimes X^*)\otimes (X\otimes X^*)$ by means of
the morphism ${\rm Id}_X\otimes {\rm coev}_{X^*}\otimes {\rm Id}_{X^*}$.
This Hopf algebra can be viewed as the algebra $O(\pi({\mathcal{C}}))$ of regular functions
on an affine group scheme $\pi({\mathcal{C}})$ in ${\mathcal{C}}$, which is called the {\it fundamental group}
of ${\mathcal{C}}$. Note that every object $X\in {\mathcal{C}}$ has a natural action of $\pi({\mathcal{C}})$
(i.e., a coaction of $H_{\mathcal{C}}$).
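For orientation, here are standard examples from \cite{De1}, Section 8, stated without proof: the fundamental group of the category of vector spaces is trivial; for an affine group scheme $G$ over $\Bbb C$, the fundamental group of ${\bf Rep}(G)$ is $G$ itself, regarded as a group scheme in ${\bf Rep}(G)$ via the conjugation action on $O(G)$; and for supervector spaces one obtains the group $\Bbb Z/2$ generated by the parity automorphism:

```latex
\[
\pi({\rm Vec})=\{1\},\qquad
\pi({\bf Rep}(G))\cong G,\qquad
\pi({\rm sVec})\cong \Bbb Z/2 .
\]
```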
If $F: {\mathcal{C}}\to {\mathcal{D}}$ is a symmetric tensor functor between two
symmetric tensor categories, then we have a natural homomorphism of Hopf algebras
$\xi_F: F(H_{\mathcal{C}})\to H_{\mathcal{D}}$.
Consider the category ${\bf Rep}_{\mathcal{D}}(\pi({\mathcal{C}}))$
of representations of $\pi({\mathcal{C}})$ in ${\mathcal{D}}$, which by definition is
the category of $Y\in {\mathcal{D}}$ with a coaction $\tau: Y\to Y\otimes F(H_{\mathcal{C}})$
such that $({\rm Id}_Y\otimes \xi_F)\circ \tau: Y\to Y\otimes H_{\mathcal{D}}$ is the canonical coaction of $H_{\mathcal{D}}$ on $Y$.
The following theorem comes out of the standard formalism of fundamental groups:
\begin{theorem} (\cite{De1}, Theorem 8.17)
The functor $F$ defines an equivalence of categories
${\mathcal C}\to {\bf Rep}_{\mathcal{D}}(\pi({\mathcal{C}}))$.
\end{theorem}
\subsection{The category $\text{Rep}(GL_t)$}
Let us review the definition and known results about $\text{Rep}(GL_t)$.
The category $\text{Rep}(GL_t)$ was first defined in \cite{DM}, Examples 1.26, 1.27
(see also \cite{De1,De2} and \cite{CW} for a review). It is obtained
by interpolating ${\bf Rep}(GL_n)$ to non-integer values of $n$, as follows.
Recall that in the classical category ${\bf Rep}(GL_n)$ we have the
vector representation $V=\Bbb C^n$, and every irreducible representation
of $GL_n$ occurs in $V^{\otimes r}\otimes V^{*\otimes s}$ for some $r$, $s$.
Now,
$$
\text{Hom}(V^{\otimes r_1}\otimes V^{*\otimes s_1},V^{\otimes r_2}\otimes V^{*\otimes s_2})=
\text{Hom}(V^{\otimes r_1+s_2},V^{\otimes r_2+s_1}),
$$
so it is nonzero only if $r_1+s_2=r_2+s_1=m$, and
in the latter case is spanned by elements
of $\Bbb C[S_m]$, by the Fundamental Theorem of Invariant Theory
(this spanning set is a basis if $n\ge m$).
The category ${\bf Rep}(GL_n)$ can then be defined
as the (additive) Karoubian closure of the subcategory
with objects $[r,s]:=V^{\otimes r}\otimes V^{*\otimes s}$
and morphisms as above.
Now consider composition of morphisms.
To do so, note that the elements of $S_m$ defining
morphisms can be depicted as oriented planar tangles (with possibly intersecting strands)
with $r_1$ inputs and $s_1$ outputs on the bottom and
$r_2$ inputs and $s_2$ outputs on the top, and
$m$ arrows, each going from an input to an output.
The composition of morphisms is then defined
as concatenation of tangles, followed by closed loop removal,
with each removed loop earning a factor of $n$.
For example, if $A: [1,1]\to [1,1]$
is given by $A={\rm coev}_V\circ {\rm ev}_V$,
then $A^2=nA$.
Now, given $t\in \Bbb C$, one can define the category $\widetilde{\text{Rep}}(GL_t)$
with objects $[r,s]$, $r,s\in \Bbb Z_+$,
and the space of morphisms
$\text{Hom}([r_1,s_1],[r_2,s_2])$
being spanned by planar tangles as above, with the same composition law
as above, except that every removed closed loop earns a factor of $t$.
\begin{remark} The endomorphism algebra $\text{End}([r,s])$ is called the {\it walled Brauer algebra} and denoted $B_{r,s}(t)$.
\end{remark}
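For example (an illustration of our own): the walled Brauer algebra $B_{1,1}(t)={\rm End}([1,1])$ is two-dimensional, spanned by the identity and the "cup-cap" tangle $e$ obtained by composing ${\rm ev}_V$ with ${\rm coev}_V$. Concatenating $e$ with itself produces one closed loop, so

```latex
\[
e^2=t\,e .
\]
```

For $t\ne 0$ the element $e/t$ is an idempotent, and its image (one of the summands added when passing to the Karoubian closure below) interpolates the trivial summand $\bold 1\subset V\otimes V^*$.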
Note that the category $\widetilde{\text{Rep}}(GL_t)$ has a natural
strict symmetric monoidal structure. Namely, the tensor product functor is just the addition of pairs of integers
for objects and taking the union of planar tangles for morphisms, with the obvious symmetric braiding.
The unit object is the object $[0,0]$.
\begin{definition}
The category $\text{Rep}(GL_t)$
is the Karoubian closure of $\widetilde{\text{Rep}}(GL_t)$
(i.e., it is obtained from $\widetilde{\text{Rep}}(GL_t)$
by adding the images of all the idempotent morphisms).
\end{definition}
Clearly, $\text{Rep}(GL_t)$ is a Karoubian category
(i.e., an idempotent-closed additive category) over $\Bbb C$, which inherits
the tensor structure from $\widetilde{\text{Rep}}(GL_t)$. Moreover, it is not hard to show that
this category is rigid (with $[r,s]^*=[s,r]$).
Moreover, it is easy to check that $\dim [r,s]=t^{r+s}$;
this is just the interpolation of the equality $\dim(V^{\otimes r}\otimes V^{*\otimes s})=n^{r+s}$.
\begin{theorem} (\cite{DM,De1,De2}) (i)
The category $\text{Rep}(GL_t)$ is a semisimple abelian symmetric
tensor category if $t\notin \Bbb Z$.
(ii) The category $\text{Rep}(GL_t)$ has the following universal property:
if ${\mathcal C}$ is a rigid tensor category then
isomorphism classes of (possibly non-faithful) symmetric tensor functors $\text{Rep}(GL_t)\to {\mathcal C}$
are in bijection with isomorphism classes of objects $X\in {\mathcal C}$ of dimension $t$, via
$F\mapsto F([1,0])$.
(iii) If $t=n\in \Bbb Z$, and if $p,q$ are nonnegative integers with $p-q=n$,
then the category $\text{Rep}(GL_{t=n})$ (which is not abelian)
admits a non-faithful symmetric tensor functor to the representation category
${\bf Rep}(GL_{p|q})$ of the supergroup $GL_{p|q}$, which sends $[1,0]$ to the supervector space $V=\Bbb C^{p|q}$.
(iv) We have a natural symmetric tensor functor ${\rm Res}: \text{Rep}(GL_t)\to \text{Rep}(GL_{t-1})$.
\end{theorem}
Note that (iii) and (iv) are easy consequences of (ii).
Let's consider the case $t\notin \Bbb Z$.
In this case, simple objects in $\text{Rep}(GL_t)$ are labeled by pairs of arbitrary partitions, $(\lambda,\mu)$,
$\lambda=(\lambda_1,...,\lambda_r)$, $\mu=(\mu_1,...,\mu_s)$.
Namely, letting $V=[1,0]$ be the tautological object
(the interpolation of the defining representation),
we have simple objects $X_{\lambda,\mu}$
which are direct summands in $S^\lambda V\otimes S^\mu V^*$,
where $S^\lambda$ is the Schur functor corresponding to the partition
$\lambda$. More specifically, $X_{\lambda,\mu}$ is the only direct summand
in $S^\lambda V\otimes S^\mu V^*$
which does not occur in $S^{\lambda'}V\otimes S^{\mu'}V^*$
with $|\lambda'|<|\lambda|$. This summand occurs with multiplicity $1$.
All of this is readily seen by noting that this is the case
in ${\bf Rep}(GL_n)$ for large $n$, in which case
$X_{\lambda,\mu}$ is the irreducible representation
$V_{[\lambda,\mu]_n}$ of $GL_n$, with highest weight $[\lambda,\mu]_n$, where
$$
[\lambda,\mu]_n=(\lambda_1,...,\lambda_r,0,...,0,-\mu_s,...,-\mu_1)
$$
(here, the string of zeros in the middle has length $n-r-s$).
Thus, we should think of $X_{\lambda,\mu}$ as the interpolation of
the representation $V_{[\lambda,\mu]_n}$ to complex values of $n$; in particular, $X_{\lambda,\mu}^*=X_{\mu,\lambda}$.
Consequently, the dimension of $X_{\lambda,\mu}$ is given by the interpolation of the
Weyl dimension formula:
\begin{equation}\label{dimfor}
\dim X_{\lambda,\mu}(t)=
d_\lambda d_\mu \prod_{i=1}^r\frac{\binom{t+\lambda_i-i-s}{\lambda_i}}{\binom{\lambda_i+r-i}{\lambda_i}}
\prod_{j=1}^s\frac{\binom{t+\mu_j-j-r}{\mu_j}}{\binom{\mu_j+s-j}{\mu_j}}
\prod_{i=1}^r\prod_{j=1}^s\frac{t+1+\lambda_i+\mu_j-i-j}{t+1-i-j},
\end{equation}
where
$$
d_\lambda=\dim V_\lambda=\prod_{1\le i<j\le r}\frac{\lambda_i-\lambda_j+j-i}{j-i}
$$
is the dimension of
the irreducible representation of $GL_r$ with highest weight $\lambda$.
Note that since this function takes integer values at large positive integer $t$,
it is an integer-valued polynomial (a linear combination of binomial coefficients $\binom{t}{k}$).
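As a sanity check (our own; the function names are illustrative and not from the text), formula (\ref{dimfor}) can be evaluated with exact rational arithmetic. The sketch below implements it verbatim and confirms, at the integer value $t=10$, the expected specializations $\dim X_{(1),\varnothing}=t$, $\dim X_{(1),(1)}=t^2-1$, $\dim X_{(2),\varnothing}=t(t+1)/2$, and $\dim X_{(1,1),\varnothing}=t(t-1)/2$.

```python
from fractions import Fraction
from math import factorial

def binom(x, k):
    """Generalized binomial coefficient binom(x, k) for rational x, integer k >= 0."""
    res = Fraction(1)
    for i in range(k):
        res *= Fraction(x) - i
    return res / factorial(k)

def d(lam):
    """Weyl dimension formula for the GL_r-irreducible with highest weight lam, r = len(lam)."""
    r = len(lam)
    res = Fraction(1)
    for i in range(r):
        for j in range(i + 1, r):
            res *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return res

def dim_X(lam, mu, t):
    """Interpolated dimension of X_{lam,mu} in Rep(GL_t), following formula (dimfor)."""
    t = Fraction(t)
    r, s = len(lam), len(mu)
    res = d(lam) * d(mu)
    for i in range(1, r + 1):
        li = lam[i - 1]
        res *= binom(t + li - i - s, li) / binom(li + r - i, li)
    for j in range(1, s + 1):
        mj = mu[j - 1]
        res *= binom(t + mj - j - r, mj) / binom(mj + s - j, mj)
    for i in range(1, r + 1):
        for j in range(1, s + 1):
            res *= (t + 1 + lam[i - 1] + mu[j - 1] - i - j) / (t + 1 - i - j)
    return res

# Sanity checks at t = 10: the vector object, sl(V), S^2 V, and wedge^2 V.
assert dim_X([1], [], 10) == 10
assert dim_X([1], [1], 10) == 99
assert dim_X([2], [], 10) == 55
assert dim_X([1, 1], [], 10) == 45
```

Since the function is built from generalized binomial coefficients, evaluating it at a non-integer rational $t$ returns the value of the interpolating polynomial directly.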
\subsection{The categories $\text{Rep}(O_t)$ and $\text{Rep}(Sp_{2t})$.}
The category $\text{Rep}(O_t)$ is defined similarly to the category $\text{Rep}(GL_t)$.
Namely, recall that in the classical category ${\bf Rep}(O_n)$ we have the
vector representation $V=\Bbb C^n$, and every irreducible representation
of $O_n$ occurs in $V^{\otimes r}$ for some $r$.
Now,
$$
\text{Hom}(V^{\otimes r_1},V^{\otimes r_2})=
(V^{\otimes r_1+r_2})^{O_n},
$$
so it is nonzero only if $r_1+r_2=2m$, in which case it can be written as $\text{End}_{O_n}(V^{\otimes m})$ and
is the image of the Brauer algebra $B_m(n)$, by the Fundamental Theorem of Invariant Theory
for orthogonal groups (this image is isomorphic to the Brauer algebra if $n\ge m$).
The category ${\bf Rep}(O_n)$ can then be defined
as the Karoubian closure of the subcategory
with objects $[r]:=V^{\otimes r}$
and morphisms as above.
Now consider composition of morphisms.
A basis in the Brauer algebra $B_m(n)$
is formed by matchings of $2m$ points,
so we have a spanning set in $\text{Hom}(V^{\otimes r_1},V^{\otimes r_2})$
formed by unoriented planar tangles (with possibly intersecting strands) connecting $r_1$ points at the bottom and $r_2$ points at
the top, which define a perfect matching. Then composition is the concatenation of tangles,
followed by removal of closed loops, so that each removed loop is replaced by a factor of $n$.
Now, given $t\in \Bbb C$, one can define the category $\widetilde{\text{Rep}}(O_t)$
with objects $[r]$, $r\in \Bbb Z_+$,
and the space of morphisms
$\text{Hom}([r_1],[r_2])$
being spanned by planar tangles as above, with the same composition law
as above, except that every removed closed loop earns a factor of $t$.
Thus, for instance, the endomorphism algebra $\text{End}([m])$ is the
Brauer algebra $B_m(t)$.
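For orientation (a standard count, not spelled out in the text): a basis of $B_m(t)$ is indexed by perfect matchings of $2m$ points, so

```latex
\[
\dim B_m(t)=(2m-1)!!=1\cdot 3\cdot 5\cdots (2m-1);
\]
```

e.g., $B_2(t)$ is three-dimensional, spanned by the identity, the crossing, and the cup-cap tangle $e$, which again satisfies $e^2=t\,e$.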
Similarly to $\widetilde{\text{Rep}}(GL_t)$, the category $\widetilde{\text{Rep}}(O_t)$ has a natural
strict symmetric monoidal structure. Namely, the tensor product functor is just the addition of integers
for objects and taking the union of planar tangles for morphisms, with the obvious symmetric braiding.
The unit object is the object $[0]$.
\begin{definition}
The category $\text{Rep}(O_t)$
is the Karoubian closure of $\widetilde{\text{Rep}}(O_t)$.
\end{definition}
Clearly, $\text{Rep}(O_t)$ is a Karoubian category over $\Bbb C$, which inherits
the tensor structure from $\widetilde{\text{Rep}}(O_t)$. Moreover, it is not hard to show that
this category is rigid (with $[r]^*=[r]$).
Moreover, it is easy to check that $\dim [r]=t^{r}$.
The category $\text{Rep}(Sp_{2t})$ is defined in a completely parallel way,
starting from the representation category of the symplectic group $Sp_{2n}$.
It is in fact easy to see that the categories $\text{Rep}(O_t)$ and
$\text{Rep}(Sp_{-t})$ are equivalent as tensor categories,
and differ only by a change of the commutativity isomorphism.
Namely, define an involutive tensor automorphism $u$ of the identity functor
of $\text{Rep}(O_t)$ (called the parity automorphism) by $u|_{[r]}=(-1)^r$,
and define a new commutativity isomorphism on $\text{Rep}(O_t)$
which differs by sign from the old one if both factors are odd (i.e., $u=-1$ on them),
and is the same as the old one if one of the factors is even (i.e., has $u=1$). Then it is easy to see that
$\text{Rep}(O_t)$ with this new commutativity is equivalent to $\text{Rep}(Sp_{-t})$
as a symmetric tensor category.\footnote{There is a similar relationship between the categories $\text{Rep}(GL_t)$ and $\text{Rep}(GL_{-t})$.}
\begin{theorem} (\cite{De1,De2}) (i)
The category $\text{Rep}(O_t)$ is a semisimple abelian symmetric
tensor category if $t\notin \Bbb Z$.
(ii) The category $\text{Rep}(O_t)$ (respectively, $\text{Rep}(Sp_t)$) has the following universal property:
if ${\mathcal C}$ is a rigid tensor category then isomorphism classes of (possibly non-faithful)
symmetric tensor functors $\text{Rep}(O_t)\to {\mathcal C}$ (respectively $\text{Rep}(Sp_t)\to {\mathcal C}$)
are in bijection with isomorphism classes of objects $X\in {\mathcal C}$ of dimension $t$ with a symmetric (respectively, skew-symmetric) isomorphism $X\to X^*$, via
$F\mapsto F([1])$.
(iii) If $t=n\in \Bbb Z$, and if $p,q$ are nonnegative integers with $p-2q=n$,
then the category $\text{Rep}(O_{t=n})$ (which is not abelian)
admits a non-faithful symmetric tensor functor to the representation category
${\bf Rep}(OSp_{p|2q})$ of the supergroup $OSp_{p|2q}$,
which sends $[1]$ to the supervector space $V=\Bbb C^{p|2q}$.
(iv) We have a natural symmetric
tensor functor ${\rm Res}: \text{Rep}(O_t)\to \text{Rep}(O_{t-1})$ and
$\text{Rep}(Sp_{2t})\to \text{Rep}(Sp_{2t-2})$.
\end{theorem}
Again, (iii) and (iv) follow from (ii).
Now assume $t\notin \Bbb Z$ and let us describe the simple objects.
The simple objects $X_\lambda$ of $\text{Rep}(O_t)$ are labelled by all partitions
$\lambda=(\lambda_1,...,\lambda_r)$; namely, $X_\lambda$ is the
unique direct summand in $S^\lambda V$ which does not occur in $S^{\lambda'}V$ for any $\lambda'$ with
$|\lambda'|<|\lambda|$ (it occurs with multiplicity $1$). The object $X_\lambda$ is the interpolation
of the representation $V_\lambda$ of $O_n$ with highest weight $\sum \lambda_i\omega_i$, where
$\omega_i$ are the fundamental weights corresponding to the representation
$\wedge^i V$.
Thus, the dimension of $X_\lambda$ is given by the interpolation of the Weyl dimension formula:
$$
\dim X_\lambda(t)=
\prod_{i=1}^r\frac{ (\frac{t}{2}+\lambda_i-i) \binom{\lambda_i+t-r-i-1}{\lambda_i} } { (\frac{t}{2}-i) \binom{\lambda_i+r-i}{\lambda_i} }
\prod_{1\le i<j\le r}\frac{(\lambda_i-\lambda_j+j-i)(\lambda_i+\lambda_j+t-i-j)}{(j-i)(t-i-j)}.
$$
Note that since this function takes integer values at large positive integer $t$,
it is an integer-valued polynomial.
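As in the $GL_t$ case, the formula can be checked numerically (a sketch of our own; function names are illustrative). At $t=10$ it reproduces $\dim X_{(1)}=t$ (the vector object), $\dim X_{(1,1)}=t(t-1)/2$ ($\wedge^2V$), and $\dim X_{(2)}=t(t+1)/2-1$ (the nontrivial summand of $S^2V$).

```python
from fractions import Fraction
from math import factorial

def binom(x, k):
    """Generalized binomial coefficient binom(x, k) for rational x, integer k >= 0."""
    res = Fraction(1)
    for i in range(k):
        res *= Fraction(x) - i
    return res / factorial(k)

def dim_X_O(lam, t):
    """Interpolated Weyl dimension formula for X_lambda in Rep(O_t)."""
    t = Fraction(t)
    r = len(lam)
    res = Fraction(1)
    for i in range(1, r + 1):
        li = lam[i - 1]
        res *= (t / 2 + li - i) * binom(li + t - r - i - 1, li)
        res /= (t / 2 - i) * binom(li + r - i, li)
    for i in range(1, r + 1):
        for j in range(i + 1, r + 1):
            li, lj = lam[i - 1], lam[j - 1]
            res *= Fraction(li - lj + j - i, j - i) * (li + lj + t - i - j) / (t - i - j)
    return res

# Sanity checks at t = 10.
assert dim_X_O([1], 10) == 10        # V
assert dim_X_O([1, 1], 10) == 45     # wedge^2 V
assert dim_X_O([2], 10) == 54        # S^2 V minus the trivial summand
```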
We will refer to $\text{Rep}(GL_t)$, $\text{Rep}(O_t)$, $\text{Rep}(Sp_t)$ as {\it Deligne categories}.
In this paper we will consider these categories only in the semisimple case $t\notin \Bbb Z$,
but many of our constructions can be extended to the general case.
\subsection{Tensor subcategories}
Proper tensor subcategories of the Deligne categories are easy to classify, since
they are seen at the level of the Grothendieck ring.
The category $\text{Rep}(GL_t)$ is $\Bbb Z$-graded (by $\deg X_{\lambda,\mu}=|\lambda|-|\mu|$).
So for every positive integer $N$ we have the subcategory $\text{Rep}(GL_t/\Bbb Z_N)$ of representations of degrees
divisible by $N$, and the subcategory $\text{Rep}(PGL_t)$ of representations of degree zero.
The categories $\text{Rep}(O_t)$ and $\text{Rep}(Sp_{2t})$ are $\Bbb Z_2$-graded, by $\deg(V)=1$, so
we have the subcategories $\text{Rep}(O_t/(\pm 1))$ and $\text{Rep}(Sp_{2t}/(\pm 1))$
of even representations.
It is easy to check that these are the only nontrivial tensor subcategories
of the Deligne categories.
\subsection{The fundamental groups of Deligne categories}
Denote the fundamental groups of $\text{Rep}(GL_t)$, $\text{Rep}(PGL_t)$, $\text{Rep}(O_t)$, $\text{Rep}(Sp_{2t})$
by $GL_t$, $PGL_t$, $O_t$, $Sp_{2t}$, respectively. The following proposition provides
an explicit description of these fundamental groups.
Recall that $V$ denotes the tautological object of the Deligne category.
\begin{proposition}\label{expdes}
(i) $GL_t=GL(V)$, $PGL_t=PGL(V)$.
(ii) $O_t=O(V)$, and $Sp_{2t}=Sp(V)$.
\end{proposition}
\begin{proof}
Since $\text{Rep}(GL_t)$ is tensor-generated\footnote{A tensor category ${\mathcal{C}}$ is said to be tensor-generated by
objects $X_1,...,X_m$ if any object of ${\mathcal{C}}$ is a subquotient of a direct sum of objects of the form
$X_{i_1}\otimes...\otimes X_{i_n}$, $1\le i_1,...,i_n\le m$.}
by $V$ and $V^*$
and $\text{Rep}(O_t)$, $\text{Rep}(Sp_{2t})$ are tensor-generated by
$V$, we find that $GL_t$ is a closed subscheme of $V\otimes V^*\oplus V^*\otimes V$,
while $O_t$ and $Sp_{2t}$ are closed subschemes of $V\otimes V^*$.
It's easy to see that the defining equations are satisfied on each of these subschemes, and
one can check that they are sufficient (i.e., modulo these equations
one already obtains the Hopf algebra $H=\oplus_X X\otimes X^*$).
\end{proof}
\begin{example}
Let us explain how this works in the example of $GL_t$.
Let us work in $\text{Rep}(GL_t)\boxtimes \text{Rep}(GL_t)$.
In this case, the algebra in question is
$$
R:=S(V^*\boxtimes V\oplus V\boxtimes V^*)/(AB=BA={\rm Id}),
$$
where $V^*\boxtimes V$ corresponds to ``matrix elements of $A$'' and $V\boxtimes V^*$ to
``matrix elements of $B$''. This algebra has a filtration by degree in $A$ and $B$. In degree $0$, we have just $\bold 1$.
In degree $1$, we additionally have $V^*\boxtimes V$ and $V\boxtimes V^*$ corresponding to $A$ and $B$, respectively.
In degree $2$, before imposing the relations, we additionally have
$S^2(V^*\boxtimes V)\oplus (V^*\boxtimes V)\otimes (V\boxtimes V^*)\oplus S^2(V\boxtimes V^*)$.
Note that $S^2(V^*\boxtimes V)=S^2V^*\boxtimes S^2V\oplus \wedge^2V^*\boxtimes \wedge^2V$. Now, the two relations
$AB-{\rm Id}=0$ and $BA-{\rm Id}=0$ kill the two subobjects
$\bold 1\boxtimes (V\otimes V^*)$ and $(V^*\otimes V)\boxtimes \bold 1$
in $(V^*\boxtimes V)\otimes (V\boxtimes V^*)$ (which intersect in $\bold 1$, as $\text{Tr}(AB)=\text{Tr}(BA)$),
which leaves us with ${\mathfrak{sl}}(V)\boxtimes {\mathfrak{sl}}(V)^*$.
Thus, the additional summands in degree 2 are:
$$
S^2V^*\boxtimes S^2V\oplus \wedge^2V^*\boxtimes \wedge^2V\oplus S^2V\boxtimes S^2V^*\oplus \wedge^2V\boxtimes \wedge^2V^*\oplus
{\mathfrak{sl}}(V)\boxtimes {\mathfrak{sl}}(V)^*.
$$
Similarly, one can show that in higher degrees $d>2$ we get one copy of $X\boxtimes X^*$ for each simple $X$ which occurs in $V^{\otimes r}\otimes V^{*\otimes s}$
with $r+s\le d$.
\end{example}
\begin{corollary} \label{liealg}
We have ${\rm Lie}GL_t={\mathfrak{gl}}(V)$, ${\rm Lie}PGL_t={\mathfrak{sl}}(V)$, ${\rm Lie}O_t={\mathfrak{o}}(V)$,
${\rm Lie}Sp_{2t}={\mathfrak{sp}}(V)$.
\end{corollary}
\begin{proof} This follows from Proposition \ref{expdes} and Proposition \ref{leaal}.
\end{proof}
We will denote these Lie algebras by ${\mathfrak{gl}}_t$, ${\mathfrak{sl}}_t$, ${\mathfrak{o}}_t$,
${\mathfrak{sp}}_{2t}$. As we have shown above, they act naturally (i.e., functorially with respect to $M$)
on every (ind-)object $M$ of the corresponding Deligne category.
\subsection{Connectedness of $GL_t$, $PGL_t$, $O_t$, $Sp_{2t}$}
Let us denote any of the group schemes $GL_t$, $PGL_t$, $O_t$, $Sp_{2t}$ by $K$ and the corresponding Lie algebra by $\mathfrak{k}$.
\begin{proposition}\label{conne}
The group scheme $K$ is connected.
\end{proposition}
\begin{remark}
Note that if $n$ is a positive integer then the group $O_n$ is not connected. However, this is due to the
existence of the determinant character for $O_n$, which does not exist for $O_t$.
\end{remark}
\begin{proof}
Let $X$ be a nontrivial simple object of $\text{Rep}(K)$. Then it is easy to see that $\mathfrak{k}$ acts nontrivially on $X$.
This implies that $O(K)^\mathfrak{k}=\bold 1$, as desired.
\end{proof}
\begin{remark}
On the contrary, it is easy to show that the group scheme $S_t$ in $\text{Rep}(S_t)$ (defined in \cite{De2}) is ``totally disconnected''
in the sense that ${\rm Lie}(S_t)=0$.
\end{remark}
\section{Interpolation of the representation theory of real
reductive groups}
\subsection{Interpolation of classical symmetric pairs}
Let $(\mathfrak{g},\mathfrak{k})$ be a symmetric pair, i.e. $\mathfrak{g}$ is a complex
reductive Lie algebra, and $\mathfrak{k}$ the fixed subalgebra
of an involution $\theta: \mathfrak{g}\to \mathfrak{g}$. Let $K$ be a reductive group
whose Lie algebra is $\mathfrak{k}$. The main algebraic objects of study
in the representation theory of real reductive groups
are $(\mathfrak{g},K)$-modules. These, by definition, are locally algebraic $K$-modules with a
compatible action of $\mathfrak{g}$. The category of such modules will be denoted by $\text{Rep}(\mathfrak{g},K)$.
We would like to define the interpolation of the category
$\text{Rep}(\mathfrak{g},K)$ to complex rank in the case when the Lie algebra $\mathfrak{g}$ is
of classical type. To do so, let us give a ``categorically
friendly'' formulation of the additional structure on a locally
algebraic $K$-module $M$ that gives it a compatible $\mathfrak{g}$-action.
To this end, note that $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$, where $\mathfrak{p}$ is the
$-1$-eigenspace of $\theta$ (which is a $K$-module), and we have a bracket map
$$
\eta: \wedge^2\mathfrak{p}\to \mathfrak{k},
$$
which is a morphism of $K$-modules.
Then, a structure of a $(\mathfrak{g},K)$-module on a $K$-module $M$ is
just a morphism of $K$-modules
$$
b: \mathfrak{p}\otimes M\to M
$$
such that
\begin{equation} \label{commrel}
b\circ ({\rm Id}_\mathfrak{p} \otimes b)=a_M\circ (\eta\otimes {\rm Id}_M),
\end{equation}
as morphisms $\wedge^2\mathfrak{p}\otimes M\to M$
(where $a_M: \mathfrak{k}\otimes M\to M$ denotes the action of $\mathfrak{k}$ on $M$, and on the left hand side we regard $\wedge^2\mathfrak{p}$ as a subobject of $\mathfrak{p}\otimes \mathfrak{p}$).
So for each symmetric pair with classical $\mathfrak{g}$ (and hence $\mathfrak{k}$) we can define
the interpolation of its category of $(\mathfrak{g},K)$-modules as the
category of ind-objects $M$ of the Deligne category interpolating the category ${\bf Rep}(K)$
with a morphism $b: \mathfrak{p}\otimes M\to M$ satisfying
(\ref{commrel}). The only thing we have to do for this is to
define the appropriate object $\mathfrak{p}$ with the morphism $\eta$.
Let us explain how this works in examples, following the classification of symmetric spaces (\cite{He}).
\begin{example}\label{gt} Group type (the symmetric pair $(K\times K,K)$).
This example works in any of the Deligne categories
$\text{Rep}(GL_t)$, $\text{Rep}(O_t)$, $\text{Rep}(Sp_{2t})$.
Namely, $\mathfrak{p}=\mathfrak{k}$, and the map $\eta: \wedge^2\mathfrak{p}\to \mathfrak{k}$
is the commutator. In other words, in this case a $(\mathfrak{g},K)$-module
is simply a $\mathfrak{k}$-module in the Deligne category
(i.e., an (ind-)object $M$ of the Deligne category with a $\mathfrak{k}$-action, which does not necessarily
coincide with the natural action of $\mathfrak{k}$ on $M$).
We denote the category of such modules by
$\text{Rep}(\mathfrak{k}_t\times \mathfrak{k}_t, K_t)$ for the corresponding $\mathfrak{k}=\mathfrak{k}_t={\mathfrak{gl}}_t,{\mathfrak{o}}_t,{\mathfrak{sp}}_{2t}$.
\end{example}
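Concretely (our own unpacking): in the group type case $\mathfrak{p}=\mathfrak{k}$ and $\eta$ is the commutator, so relation (\ref{commrel}) restricted to $\wedge^2\mathfrak{k}\otimes M$, evaluated on elements in the classical situation, reads

```latex
\[
b(x\otimes b(y\otimes m))-b(y\otimes b(x\otimes m))=a_M([x,y]\otimes m),
\]
```

which is exactly the axiom of a Lie algebra action of $\mathfrak{k}$ on $M$; this justifies describing group type $(\mathfrak{g},K)$-modules as $\mathfrak{k}$-modules in the Deligne category.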
\begin{example}\label{AI} Type AI (the symmetric pair $(GL_n,O_n)$).
The appropriate Deligne category is
$\text{Rep}(O_t)$, and $\mathfrak{p}=S^2V$, with the bracket
$$
\eta: \wedge^2\mathfrak{p}=\wedge^2(S^2V)\to \mathfrak{k}=\wedge^2V
$$
given by the formula
$$
\eta={\rm Id}_V\otimes (,)\otimes {\rm Id}_V.
$$
We denote the resulting category of $(\mathfrak{g},K)$-modules by
$\text{Rep}({\mathfrak{gl}}_t,O_t)$.
\end{example}
\begin{example}\label{AII} Type AII (the symmetric pair $(GL_{2n},Sp_{2n})$).
The appropriate Deligne category is
$\text{Rep}(Sp_{2t})$, and $\mathfrak{p}=\wedge^2V$, with the bracket
$$
\eta: \wedge^2\mathfrak{p}=\wedge^2(\wedge^2V)\to \mathfrak{k}=S^2V
$$
given by the same formula as in Example \ref{AI}.
We denote the resulting category of $(\mathfrak{g},K)$-modules by
$\text{Rep}({\mathfrak{gl}}_{2t},Sp_{2t})$.
\end{example}
\begin{example}\label{AIII} Type AIII (the symmetric pair $(GL_{n+m},GL_n\times
GL_m$)).
The appropriate Deligne category is
$\text{Rep}(GL_t)\boxtimes \text{Rep}(GL_s)$
(so we have two complex parameters).
Let $V$ and $U$ be the tautological objects
of these two categories.
Then $\mathfrak{p}=V\otimes U^*\oplus U\otimes V^*$, with the bracket
$$
\eta: \wedge^2\mathfrak{p}\to \mathfrak{k}=V\otimes V^*\oplus
U\otimes U^*
$$
given by the formula
$$
\eta=({\rm Id}_V\otimes {\rm ev}_U\otimes {\rm Id}_{V^*})\circ (p_1\otimes p_2)
-({\rm Id}_U\otimes {\rm ev}_V\otimes {\rm Id}_{U^*})\circ (p_2\otimes p_1),
$$
where $p_1,p_2$ are the projections to the first and
second summand of $\mathfrak{p}$. We denote the resulting category by
$\text{Rep}({\mathfrak{gl}}_{t+s},GL_t\times GL_s)$.
\end{example}
\begin{example}\label{BDI} Type BDI (the symmetric pair ($O_{n+m},O_n\times
O_m$)). The appropriate Deligne category is
$\text{Rep}(O_t)\boxtimes \text{Rep}(O_s)$.
Let $V$ and $U$ be the tautological objects
of these two categories.
Then $\mathfrak{p}=V\otimes U$, with the bracket
$$
\eta: \wedge^2\mathfrak{p}\to \mathfrak{k}=\wedge^2 V\oplus
\wedge^2 U
$$
given by the formula
$$
\eta=({\rm Id}_V\otimes (,)_U\otimes {\rm Id}_V)\circ \sigma_{34}
-({\rm Id}_U\otimes (,)_V\otimes {\rm Id}_U)\circ \sigma_{12}.
$$
We denote the resulting category by
$\text{Rep}({\mathfrak o}_{t+s},O_t\times O_s)$.
\end{example}
\begin{example}\label{CII} Type CII (the symmetric pair $(Sp_{2(n+m)},Sp_{2n}\times
Sp_{2m})$). The appropriate Deligne category is
$\text{Rep}(Sp_{2t})\boxtimes \text{Rep}(Sp_{2s})$.
Let $V$ and $U$ be the tautological objects
of these two categories.
Then $\mathfrak{p}=V\otimes U$, with the bracket
$$
\eta: \wedge^2\mathfrak{p}\to \mathfrak{k}=S^2 V\oplus
S^2 U
$$
given by the same formula as in Example \ref{BDI}.
We denote the resulting category by
$\text{Rep}({\mathfrak{sp}}_{2(t+s)},Sp_{2t}\times Sp_{2s})$.
\end{example}
\begin{remark}
Note that in the last three examples,
one can freeze one of the parameters ($t$ or $s$) to
be a positive integer (i.e., use the usual representation category of the corresponding Lie group, rather than the Deligne category),
and consider the interpolation only with
respect to the other parameter.
\end{remark}
\begin{example} Type DIII (the symmetric pair $(O_{2n},GL_n)$).
The appropriate Deligne category is
$\text{Rep}(GL_t)$, and $\mathfrak{p}=\wedge^2V\oplus \wedge^2V^*$,
with the bracket
$$
\eta: \wedge^2\mathfrak{p}=\wedge^2(\wedge^2V\oplus \wedge^2V^*)\to
\mathfrak{k}=V\otimes V^*
$$
given by the formula
$$
\eta=({\rm Id}_V\otimes (,)\otimes {\rm Id}_{V^*})\circ P,
$$
where $P: \wedge^2(\wedge^2V\oplus \wedge^2V^*)\to \wedge^2 V\otimes \wedge^2V^*$ is the projection.
We denote the resulting category by
$\text{Rep}({\mathfrak o}_{2t},GL_t)$.
\end{example}
\begin{example} Type CI (the symmetric pair $(Sp_{2n},GL_n)$).
The appropriate Deligne category is
$\text{Rep}(GL_t)$, and $\mathfrak{p}=S^2V\oplus S^2V^*$,
with the bracket
$$
\eta: \wedge^2\mathfrak{p}=\wedge^2(S^2V\oplus S^2V^*)\to
\mathfrak{k}=V\otimes V^*
$$
given by the formula
$$
\eta=({\rm Id}_V\otimes (,)\otimes {\rm Id}_{V^*})\circ P,
$$
where $P: \wedge^2(S^2V\oplus S^2V^*)\to S^2 V\otimes S^2V^*$ is the projection.
We denote the resulting category by
$\text{Rep}({\mathfrak{sp}}_{2t},GL_t)$.
\end{example}
Note that all the above complex rank categories $\text{Rep}(\mathfrak{g},K)$ can be defined using a slightly different language.
Namely, we have a Lie algebra $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$ in $\text{Rep}(K)$, whose commutator
is composed of the usual commutator on $\mathfrak{k}$, the action of $\mathfrak{k}$ on $\mathfrak{p}$, and the map $\eta$,
and $\text{Rep}(\mathfrak{g},K)$ is nothing but the category of $\mathfrak{g}$-modules $M$ in ${\rm Ind}\text{Rep}(K)$, such that
the restriction of the $\mathfrak{g}$-action on $M$ to $\mathfrak{k}$ coincides with the natural action of $\mathfrak{k}$ on $M$.
For example, in the group type case (Example \ref{gt}), we have $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{k}$, and $\mathfrak{k}$
sits in $\mathfrak{g}$ as the diagonal subalgebra. Thus, the objects of $\text{Rep}(\mathfrak{k}\oplus \mathfrak{k},K)$
can be viewed as $\mathfrak{k}$-bimodules in $\text{Rep}(K)$ such that the diagonal $\mathfrak{k}$-action is the natural one.
In fact, the above definition becomes more natural in light of the following construction,
which also provides examples of finite dimensional $(\mathfrak{g},K)$-modules.
\begin{example}\label{fdhc}
1. Consider the setting of Example \ref{gt}. It is easy to see that we have a symmetric tensor functor
$F: \text{Rep}(K)\boxtimes \text{Rep}(K)\to \text{Rep}(\mathfrak{k}\oplus \mathfrak{k},K)$ given by $X\boxtimes Y\mapsto X\otimes Y$.
The additional action of $\mathfrak{k}$ on $X\otimes Y$ is just the action on the left component, while the natural action of $\mathfrak{k}$ is the
diagonal one. Thus, the two actions coincide iff $Y$ is a multiple of $\bold 1$.
2. In the non-group type examples above, by the universal property of Deligne categories,
we have a symmetric tensor functor $F: \text{Rep}(G)\to \text{Rep}(\mathfrak{g},K)$, where $\text{Rep}(G)$
is the Deligne category corresponding to $\mathfrak{g}=\mathfrak{g}_t$ in each of the cases (i.e. $G=GL_t$
if $\mathfrak{g}={\mathfrak{gl}}_t$, etc.)
\end{example}
\begin{proposition} \label{ff1}
The functor $F$ of Example \ref{fdhc} is fully faithful.
\end{proposition}
\begin{proof}
This follows from Propositions \ref{ff} and \ref{conne}.
\end{proof}
\subsection{The center of $U(\mathfrak{g})$.}
A fundamental role in the representation theory of real reductive groups is played by the center $Z(\mathfrak{g})$ of the enveloping algebra $U(\mathfrak{g})$.
So, let us discuss the structure of this center in our setting.
Note that $U(\mathfrak{g})$ is a bimodule over the ordinary algebra $U(\mathfrak{g})^\mathfrak{k}=\text{Hom}(\bold 1,U(\mathfrak{g}))$.
By definition, $Z(\mathfrak{g})$ is the subalgebra of $U(\mathfrak{g})^\mathfrak{k}$
consisting of elements
$z$ whose left and right action on $U(\mathfrak{g})$ are the same.
In particular, $Z(\mathfrak{g})$ is an ordinary commutative algebra (over $\Bbb C$).
Since we are in characteristic zero, we have a symmetrization map $S\mathfrak{g}\to U(\mathfrak{g})$, which is a map of $\mathfrak{g}$-modules.
Hence, the center of $U(\mathfrak{g})$ is identified, as a vector space, with $(S\mathfrak{g})^\mathfrak{g}$, and thus we have that ${\rm gr}Z(\mathfrak{g})=
(S\mathfrak{g})^\mathfrak{g}$ (where we take the associated graded algebra under the usual PBW filtration on the enveloping algebra).
\begin{proposition}\label{cent} (i) The algebra $(S\mathfrak{k})^\mathfrak{k}=\text{Hom}(\bold 1,S\mathfrak{k})$
is a polynomial algebra in infinitely many homogeneous generators $C_i$,
of degrees $i=1,2,3,...$ if $\mathfrak{k}={\mathfrak{gl}}_t$
and degrees $i=2,4,6,...$ if $\mathfrak{k}={\mathfrak{o}}_t$ or ${\mathfrak{sp}}_{2t}$.
So in Example \ref{gt}, $(S\mathfrak{g})^\mathfrak{g}$ is a polynomial algebra in two strings of such generators,
$C_i^{\rm left}$ and $C_i^{\rm right}$.
(ii) In all the other examples,
the algebra $(S\mathfrak{g})^\mathfrak{g}$ is a polynomial algebra
in infinitely many homogeneous generators $C_i$ of degrees $i=1,2,3,...$ if $\mathfrak{g}$ is of type ${\mathfrak{gl}}$
and degrees $i=2,4,6,...$ if $\mathfrak{g}$ is of type ${\mathfrak{o}}$ or ${\mathfrak{sp}}$.
(iii) The center $Z(\mathfrak{g})$ is a polynomial algebra, whose generators are obtained from the generators of $(S\mathfrak{g})^\mathfrak{g}$ by the symmetrization map.
\end{proposition}
\begin{proof}
(i) Since $K$ is connected by Proposition \ref{conne}, $(S\mathfrak{k})^\mathfrak{k}=(S\mathfrak{k})^K$, so this is just a calculation of invariants in
the Deligne category $\text{Rep}(K)$. Thus, the statement follows from classical invariant theory by looking at large integer $t$.
Namely, identifying $\mathfrak{g}$ with $\mathfrak{g}^*$, the generators of $(S\mathfrak{g})^\mathfrak{g}=(S\mathfrak{g}^*)^\mathfrak{g}$ may be written as $C_i=\text{Tr}(A^i)$.
(ii) By Proposition \ref{ff1}, the algebra $(S\mathfrak{g})^\mathfrak{g}$ may be computed in the Deligne category $\text{Rep}(G)$ as
the algebra $(S\mathfrak{g})^G$. Thus, (ii) follows from (i).
(iii) follows from (i),(ii) and the fact that ${\rm gr}Z(\mathfrak{g})=(S\mathfrak{g})^\mathfrak{g}$.
\end{proof}
In fact, we can generalize Proposition \ref{cent}(i) to the algebra $((S\mathfrak{g})^{\otimes m})^\mathfrak{g}$.
\begin{proposition}\label{multiinv}
(i) If $G=GL_t$ then $((S\mathfrak{g})^{\otimes m})^\mathfrak{g}$ is the polynomial algebra in the generators $C_w$
labeled by cyclic words (=necklaces) $w$ in letters $A_1,...,A_m$ (namely,
$C_w$ is the interpolation of $\text{Tr}(w)$). The degree of $C_w$
is the length of $w$.
(ii) If $G=O_t$ or $Sp_{2t}$ then $((S\mathfrak{g})^{\otimes m})^\mathfrak{g}$ is the polynomial algebra in the generators $C_w$
labeled by cyclic words $w$ in letters $A_1,...,A_m$ modulo reversal, except palindromic words $w$ of odd length (namely,
$C_w$ is the interpolation of $\text{Tr}(w)$). The degree of $C_w$
is the length of $w$.
\end{proposition}
\begin{proof} It suffices to check this in $\text{Rep}(G)$, where it follows from the invariant theory for classical groups.
Namely, to settle the $GL_t$-case, recall that by Weyl's Fundamental Theorem of Invariant Theory,
the ring of invariants of $m$ square matrices $A_1,...,A_m$ is generated by traces of cyclic words of these matrices,
and these traces are asymptotically independent when the matrix size goes to infinity (see e.g. \cite{CEG}, Section 11).
For $O_t$ and $Sp_{2t}$, the proof is similar; namely, one needs to use the well known fact that there are no polynomial identities
satisfied by skew-symmetric matrices (with respect to an orthogonal or symplectic form) of arbitrary size.
\end{proof}
Using standard combinatorics (necklace counting), we get
\begin{corollary}\label{hilser}
For $G=GL_t$ the Hilbert series of $((S\mathfrak{g})^{\otimes m})^\mathfrak{g}$ is
$$
h_m(q)=\prod_{j=1}^\infty (1-mq^j)^{-1}.
$$
\end{corollary}
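As a low-degree consistency check of Corollary \ref{hilser} against Proposition \ref{multiinv}: for $m=2$ there are $2$ necklaces of length $1$, $3$ of length $2$, and $4$ of length $3$, so
$$
h_2(q)=(1-q)^{-2}(1-q^2)^{-3}(1-q^3)^{-4}\cdots=1+2q+6q^2+14q^3+\dots,
$$
in agreement with the expansion $\prod_{j=1}^\infty(1-2q^j)^{-1}=(1+2q+4q^2+8q^3+\dots)(1+2q^2+\dots)(1+2q^3+\dots)$.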
\subsection{Kostant's theorem}
Now we would like to generalize the results of Kostant \cite{Ko}
to Deligne categories.
\begin{proposition}\label{kos}
$S\mathfrak{g}$ is a free module over $(S\mathfrak{g})^\mathfrak{g}$. More precisely, there exists a $\mathfrak{g}$-stable graded subobject
$E\subset S\mathfrak{g}$ such that the multiplication map $E\otimes (S\mathfrak{g})^\mathfrak{g}\to S\mathfrak{g}$ is an isomorphism.
\end{proposition}
\begin{proof} It suffices to prove the result in $\text{Rep}(G)$. It is sufficient to show
that for each simple $X\in \text{Rep}(G)$, the space $\text{Hom}_\mathfrak{g}(X,S\mathfrak{g})$ is a free module over $(S\mathfrak{g})^\mathfrak{g}$.
This follows from the fact that $(S\mathfrak{g}\otimes S\mathfrak{g})^\mathfrak{g}$ is a free $(S\mathfrak{g})^\mathfrak{g}$-module, which
is a consequence of Proposition \ref{multiinv} (for $m=2$).
\end{proof}
In fact, similarly to the classical case, there is a nice choice for $E$ (at least for generic $t$).
Namely, we can define the harmonic part $H(\mathfrak{g})\subset S\mathfrak{g}$, which
by definition is the kernel of the action of the positive degree elements $(S\mathfrak{g})_+^\mathfrak{g}\subset
(S\mathfrak{g})^\mathfrak{g}$ by constant coefficient differential operators (using the identification $\mathfrak{g}\cong \mathfrak{g}^*$).
In other words, one has $H(\mathfrak{g})=((S\mathfrak{g})_+^\mathfrak{g} S\mathfrak{g})^\perp$, where the orthogonal complement is taken under
the natural nondegenerate form on $S\mathfrak{g}$ (the interpolation of the form
defined in the classical case by the formula $(f,g)=f(\partial)g(x)|_{x=0}$).
We have the multiplication map $\mu: H(\mathfrak{g})\otimes (S\mathfrak{g})^\mathfrak{g} \to S\mathfrak{g}$.
\begin{proposition}\label{kos1} If $t$ is transcendental, then the map $\mu$ is an isomorphism.
In other words, in Proposition \ref{kos}, one can choose $E=H(\mathfrak{g})$.
Moreover, the Hilbert series of $E$ and $H(\mathfrak{g})$ are the same for any $t\notin \Bbb Z$.
\end{proposition}
\begin{proof} The first statement follows from its validity for large integer $t$
(for classical representation categories), which is a classical result of Kostant \cite{Ko}.
To prove the second statement, note that, as explained above, we have a perfect pairing $H(\mathfrak{g})\otimes S\mathfrak{g}/(S\mathfrak{g})^\mathfrak{g}_+S\mathfrak{g}\to \bold 1$, which implies
that $H(\mathfrak{g})\cong (S\mathfrak{g}/(S\mathfrak{g})^\mathfrak{g}_+S\mathfrak{g})^\ast$ as graded objects, where by $\ast$ we denote the restricted dual.
Since by Proposition \ref{kos}, $S\mathfrak{g}$ is freely generated by $E$ over $(S\mathfrak{g})^\mathfrak{g}$, the result follows.
\end{proof}
Computing the Hilbert series of isotypic components of $E$ (or, equivalently, $H(\mathfrak{g})$) is an interesting problem.
This is the complex rank analog of computing Kostant's generalized exponents of representations (=$q$-analogs
of the zero weight multiplicity), and it leads to stable limits of these $q$-weight multiplicities for classical groups, studied
by R. Gupta, P. Hanlon and R. Stanley in the 1980s (\cite{Gu1,Gu2,Ha,St}). Namely, for instance, for $G=GL_t$ we have the following result.
\begin{proposition}\label{sta} (\cite{St}, Proposition 8.1) Let $\lambda,\mu$ be partitions such that $|\lambda|=|\mu|$.
Then the Hilbert series of $\text{Hom}(X_{\lambda,\mu},E)$ is given by the formula
$$
h_{\text{Hom}(X_{\lambda,\mu},E)}(q)=(s_\lambda*s_\mu)(q,q^2,...),
$$
where $s_\lambda$ are the Schur polynomials, and $*$ denotes the Kronecker product
(corresponding to the tensor product of representations of the symmetric group).
\end{proposition}
\begin{example}\label{adjointrep} Let $\lambda=\mu=(1)$. Then $s_\lambda*s_\mu=s_1*s_1=s_1=\sum x_i$, so
$$
h_{\text{Hom}({\mathfrak{sl}}(V),E)}(q)=s_1(q,q^2,q^3,...)=q+q^2+q^3+...=\frac{q}{1-q}.
$$
\end{example}
Note that Proposition \ref{sta} implies the following combinatorial identity
(which is easy to obtain from \cite{St} by interpolation):
$$
\sum_{\lambda,\mu: |\lambda|=|\mu|}(s_\lambda*s_\mu)(q,q^2,...)\dim X_{\lambda,\mu}(t)=\frac{1}{(1-q)^{t^2}}\prod_{j=1}^\infty (1-q^j),
$$
where $\dim X_{\lambda,\mu}(t)$ is given by formula \eqref{dimfor}.
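In lowest degree this identity is easy to verify directly. The coefficient of $q$ on the left hand side comes only from $\lambda=\mu=(1)$: we have
$$
(s_1*s_1)(q,q^2,...)=q+q^2+\dots,\qquad \dim X_{(1),(1)}(t)=t^2-1,
$$
so the left hand side is $1+(t^2-1)q+O(q^2)$; the right hand side has the same expansion, since $\prod_{j=1}^\infty(1-q^j)=1-q+O(q^2)$ and the prefactor contributes $t^2q$ in degree $1$.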
Generalizations of Proposition \ref{sta}
to $O_t$ and $Sp_{2t}$ can be found in \cite{Ha} (Theorem 5.21, Corollary 5.17).
\subsection{Harish-Chandra modules}
By analogy with the classical case, we make the following definition.
\begin{definition} A $(\mathfrak{g},K)$-module $M$ is said to be a Harish-Chandra module if
it is finitely generated as a $\mathfrak{g}$-module (i.e., is a quotient of $U(\mathfrak{g})\otimes X$ for some
object $X\in \text{Rep}(G)$) and is finite under the action of $Z(\mathfrak{g})$ (i.e., has a finite filtration
such that $Z(\mathfrak{g})$ acts by a scalar on the successive quotients).
The category of Harish-Chandra modules is denoted by $HC(\mathfrak{g},K)$.
\end{definition}
For instance, any finite dimensional $(\mathfrak{g},K)$-module (e.g., one coming from an object of $\text{Rep}(G)$)
is automatically a Harish-Chandra module.
\begin{remark} We will see below that the finite $K$-type condition, satisfied automatically in the classical case,
does not always hold in the setting of Deligne categories.
\end{remark}
\subsection{Dual principal series Harish-Chandra bimodules}
Let us now give examples of infinite-dimensional Harish-Chandra
bimodules for $K=K_t$. Namely, let us construct
the dual principal series modules.
We start with the spherical case.
For a general categorical symmetric pair,
let us say that $M\in \text{Rep}(\mathfrak{g},K)$ is {\it spherical} if it contains
a copy of the unit object $\bold 1$ of $\text{Rep}(K)$ (``the spherical vector'') which generates $M$.
Let $Z=U(\mathfrak{k})^\mathfrak{k}$, $\chi$ be a character of $Z$, and
$$
U_\chi:=U(\mathfrak{k})/(z-\chi(z),z\in Z).
$$
It is easy to see that $U_\chi\in HC(\mathfrak{k}\oplus \mathfrak{k},K)$.
Also, we see that ${\rm gr}U_\chi\cong H(\mathfrak{k})$.
We call $U_\chi$ the {\it dual spherical principal series Harish-Chandra bimodule} with central character $\chi$.
Moreover, for generic $\chi,t$ (in a suitable sense), the module
$U_\chi$ is irreducible. Indeed, for each simple $\text{Rep}(K)$-subobject
$X\subset U_\chi$ there exists $m$ such that $U^{(m)}_\chi X U^{(m)}_\chi$
(where $U_\chi^{(m)}$ is the degree $m$ part of $U_\chi$ under the PBW filtration on $U(\mathfrak{k})$)
contains $\bold 1$ for large integer $t$ and generic $\chi$, which implies the statement.
\begin{remark}
Since ${\rm gr}U_\chi\cong H(\mathfrak{k})$, $U_\chi$ does not have a finite $K$-type
(see Example \ref{adjointrep}). This shows that in general, we should not expect finite $K$-type
for irreducible Harish-Chandra modules in Deligne categories (although, as we will see below, some interesting
Harish-Chandra modules do have finite $K$-type).
\end{remark}
\begin{proposition}\label{sphe}
Any irreducible spherical Harish-Chandra bimodule $M\in \text{Rep}(\mathfrak{g},K)$ is a quotient of $U_\chi$ for some $\chi$.
In particular, $M$ contains a unique copy of $\bold 1$.
\end{proposition}
\begin{proof}
By Dixmier's version of Schur's lemma (in the categorical setting),
the center $Z$ acts in $M$ by some character $\chi$. Since $M$ contains a copy of $\bold 1$,
we have a nonzero morphism of bimodules
$$
(U_\chi\otimes U(\mathfrak{k}))/(U_\chi\otimes U(\mathfrak{k}))\mathfrak{k}_{diag}=U_\chi\to M.
$$
Since $M$ is irreducible, this morphism is surjective, as desired.
\end{proof}
The following problem is therefore interesting.
\begin{problem} Determine the set $\Sigma=\Sigma_\mathfrak{k}$
of central characters $\chi$ for which $U_\chi$ is a reducible bimodule, i.e., is not a simple algebra
in $\text{Rep}(K)$.
\end{problem}
We have just seen that this set is not everything (at least for transcendental $t$), but it is also nonempty.
Indeed, consider e.g. the case $K=GL_t$. If $\chi$ equals the central character $\chi_{\lambda,\mu}$ of the object $X_{\lambda,\mu}$,
then $U_\chi$ is not simple, as it projects onto $X_{\lambda,\mu}\otimes X_{\lambda,\mu}^*$.
Let us compute $\chi_{\lambda,\mu}$ explicitly. To do so, we should choose generators $C_i$ of
$U(\mathfrak{k})^\mathfrak{k}=Z(\mathfrak{k})$. We have the Duflo isomorphism of algebras
$$
{\rm Duf}: S(\mathfrak{k})^\mathfrak{k}\to Z(\mathfrak{k}),
$$
defined in the same way as in the classical case (\cite{Du}; see \cite{CR} for a review).
Set $C_i={\rm Duf}(\text{Tr}(A^i))$. Then $C_i|_{X_{\lambda,\mu}}$ will be the interpolation
of $\sum_j ([\lambda,\mu]_n+\rho_n)_j^i$, where $\rho_n$ is the half-sum of positive roots,
i.e.,
$$
\chi_{\lambda,\mu}(C_i)=
\sum_j \left(\left(\lambda_j+\frac{t+1}{2}-j\right)^i-\left(\frac{t+1}{2}-j\right)^i\right)+
$$
$$
\sum_j\left(\left(-\mu_j-\frac{t+1}{2}+j\right)^i-\left(-\frac{t+1}{2}+j\right)^i\right)+P_i(t),
$$
where $P_i(t)$ is the modified Bernoulli polynomial, defined for positive integer $t$ by the formula
$$
P_i(n)=\sum_{k=1}^n \left(\frac{n+1}{2}-k\right)^i;
$$
it is derived from the exponential generating function
$$
\sum_{i\ge 0}P_i(t)\frac{z^i}{i!}=\frac{\sinh(tz/2)}{\sinh(z/2)}.
$$
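For instance, directly from the definition one finds $P_0(t)=t$, $P_1(t)=0$, and $P_2(t)=\frac{t(t^2-1)}{12}$ (e.g., $P_2(3)=1+0+1=2$, in agreement with this formula). In particular, since $P_1=0$, the above formula for $\chi_{\lambda,\mu}(C_i)$ specializes for $i=1$ to
$$
\chi_{\lambda,\mu}(C_1)=\sum_j\lambda_j-\sum_j\mu_j=|\lambda|-|\mu|.
$$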
Thus, we have $\chi_{\lambda,\mu}\in \Sigma$.
In fact, it is not hard to see that $\chi_{\lambda,\mu}\in \Sigma$ not just when $\lambda$ and $\mu$ are partitions,
but actually for any complex values of $\lambda_j$ and $\mu_j$. For instance,
consider the case when $\lambda=(\ell)$ and
$\mu=0$, i.e., $X_{\lambda,\mu}=S^\ell V$.
Then we have a surjective algebra map
$$
\phi_\ell: U(\mathfrak{k})\to S^\ell V\otimes S^\ell V^*,
$$
and $\phi_\ell |_Z=\chi_{\ell,0}$.
Let $I_\ell$ be the kernel of this map. Then $I_\ell$ contains an $\ell$-independent subobject $Y_2$ of $U(\mathfrak{k})$ sitting in filtration
degree $2$, which at the graded level gives the ``rank 1'' equation $\wedge^2A=0$, $A\in \mathfrak{k}$ (in the categorical setting), see
\cite{BJ} (for ${\mathfrak{sl}}(V)\subset \mathfrak{k}$ this relation interpolates the quantization of the minimal coadjoint orbit).
For $\lambda\in \Bbb C$ let $\widetilde{I}_\lambda$
be the ideal $(Y_2)+(C_1-\lambda)$. Then $Q_\lambda:=U(\mathfrak{k})/\widetilde{I}_\lambda$ is a spherical
Harish-Chandra bimodule which is a quotient of $U_{\chi_{\lambda,0}}$.
It is easy to check that as an object of $\text{Rep}(K)$, $Q_\lambda$ has a decomposition
$$
Q_\lambda=\oplus_{m\ge 0}X_{m,m}
$$
(where $X_{m,m}$ is the simple summand of $S^mV\otimes S^mV^*$ not occurring in
$V^{\otimes j}\otimes V^{*\otimes j}$ for $j<m$). In particular, $Q_\lambda$ has finite $K$-type.
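Note that the interpolated dimensions of the summands are easy to compute: since $S^mV\otimes S^mV^*=\oplus_{j=0}^m X_{j,j}$ and $\dim S^mV=\binom{t+m-1}{m}$, we get, for generic $t$,
$$
\dim X_{m,m}(t)=\binom{t+m-1}{m}^2-\binom{t+m-2}{m-1}^2,
$$
which for $m=1$ recovers $\dim X_{1,1}(t)=t^2-1$.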
Using this decomposition, it is not hard to
check that $Q_\lambda$ is irreducible if $\lambda$ is not an integer.
On the other hand, if $\lambda=\ell$ is a positive integer, then
$Q_\lambda$ is a length $2$ module which can be included in the non-split exact sequence
$$
0\to \overline{Q}_\ell\to Q_\ell\to S^\ell V\otimes S^\ell V^*\to 0,
$$
where $\overline{Q}_\ell=\oplus_{m\ge \ell+1}X_{m,m}$,
and $S^\ell V\otimes S^\ell V^*=\oplus_{m=0}^\ell X_{m,m}$
are irreducible composition factors. Note that the Harish-Chandra bimodule
$\overline{Q}_\ell$ is not spherical, even though its left and right central characters coincide.
Similarly, if the length of $\lambda$ is $r$, the length of $\mu$ is $s$, and $r+s=p$, then there is a subobject $Y_{p+1}$ of $U(\mathfrak{k})$ sitting in degree $p+1$ quantizing
the relation $\wedge^{p+1}A=0$ in $S\mathfrak{k}$ that is annihilated by the homomorphism
$$
U(\mathfrak{k})\to X_{\lambda,\mu}\otimes X_{\lambda,\mu}^*,
$$
and we can define the quotient of $U(\mathfrak{k})$ by the ideal generated by $Y_{p+1}$ and the relations $C_i=\gamma_i$, $i=1,...,p$, which gives a $p$-parameter family
of spherical Harish-Chandra bimodules that are nontrivial quotients of the corresponding $U_\chi$. These are interpolations to complex rank
of quantizations of coadjoint orbits of ${\mathfrak{gl}}_n$ consisting of matrices of rank $p$ with fixed eigenvalues.
We obtain the following proposition.
\begin{proposition}\label{pointsinsigma}
For any complex $\lambda,\mu$,
the character $\chi_{\lambda,\mu}$
belongs to $\Sigma$.
\end{proposition}
It would be interesting to know if $\Sigma$ contains any points other than $\chi_{\lambda,\mu}$.
\subsection{Non-spherical Harish-Chandra bimodules}
Many more Harish-Chandra bimodules can be obtained from dual spherical principal series
by applying functors of tensoring with finite dimensional bimodules.
Namely, to each finite dimensional Harish-Chandra bimodule $Y$,
we can attach the functor $T_Y: \text{Rep}(\mathfrak{g},K)\to \text{Rep}(\mathfrak{g},K)$ given by
$T_Y(M)=M\otimes Y$ (the usual tensor product of $\mathfrak{k}\oplus \mathfrak{k}$-modules).
For an irreducible Harish-Chandra bimodule $M$, the bimodule $T_Y(M)$ typically won't be irreducible,
but one can look at its quotients corresponding to particular central characters
of the left and right action of $U(\mathfrak{k})$ (which will be Harish-Chandra bimodules, but in general will not be spherical).
In general, if $M$ is irreducible, we don't expect $T_Y(M)$ to have finite length.
However, it is not hard to check, for instance, that $Q_\lambda\otimes Y$ has finite length.
For example, take $Y=V$ (the tautological object under the left action of $\mathfrak{k}$ with the trivial right action).
We have $X_{m,m}\otimes V=X_{m+1,m}\oplus X_{(m,1),m}$ (the last summand is missing for $m=0$).
So, as ind-objects of $\text{Rep}(K)$, we have
$$
Q_\lambda\otimes V=Q_\lambda'\oplus Q_\lambda'',
$$
where $Q_\lambda'=\oplus_{m\ge 0}X_{m+1,m}$ and $Q_\lambda''=\oplus_{m\ge 1}X_{(m,1),m}$.
Interpolating from positive integer $\lambda$, one can easily show that this is in fact a decomposition of Harish-Chandra bimodules,
and the subbimodules $Q_\lambda'$ and $Q_\lambda''$ are the eigenobjects of the left action of the center.
Moreover, one can check that $Q_\lambda',Q_\lambda''$ are irreducible for non-integer $\lambda$.
More generally, given two central characters $\chi_1,\chi_2$ of $Z$, we can define the category $HC(\mathfrak{k}\oplus\mathfrak{k},K)_{\chi_1,\chi_2}$ of Harish-Chandra
bimodules in which the left action of $Z$ is via $\chi_1$ and the right action via $\chi_2$. Clearly, every
irreducible Harish-Chandra bimodule belongs to one of such categories. The following question is interesting.
\begin{question} Which of the categories $HC(\mathfrak{k}\oplus\mathfrak{k},K)_{\chi_1,\chi_2}$ are nonzero?
\end{question}
Note that if $M\in HC(\mathfrak{k}\oplus\mathfrak{k},K)_{\chi_1,\chi_2}$ and $X\subset M$ is a simple object of $\text{Rep}(K)$, then we have a nonzero morphism
$$
N(\chi_1,\chi_2,X):=(U_{\chi_1}\otimes U_{\chi_2}^{\rm op})\otimes_{U(\mathfrak{k})}X\to M
$$
(where $\mathfrak{k}$ is embedded diagonally), so
we see that $HC(\mathfrak{k}\oplus \mathfrak{k},K)_{\chi_1,\chi_2}$ is nonzero iff $N(\chi_1,\chi_2,X)\ne 0$ for some simple object $X\in \text{Rep}(K)$.
\subsection{Dual spherical principal series in the general case}
The above constructions can be generalized to the case of symmetric pairs which are not of group type.
Indeed, let us construct dual spherical principal series modules. Namely, given a character
$\chi$ of $Z=U(\mathfrak{g})^\mathfrak{g}$, consider the tensor product
${\mathcal{I}}(\chi)=U_\chi\otimes_{U(\mathfrak{k})}\bold 1$,
where $U_\chi=U(\mathfrak{g})/(z-\chi(z),z\in Z)$. Then ${\mathcal{I}}(\chi)\in HC(\mathfrak{g},K)$.
As in the group case, it is easy to show that any spherical irreducible Harish-Chandra module
is a quotient of ${\mathcal{I}}(\chi)$ for a unique $\chi$.
\begin{remark} 1. As in the classical case, the module ${\mathcal{I}}(\chi)$ may sometimes be zero. This happens whenever $\chi$ does not vanish on the ideal
$J=Z\cap U(\mathfrak{g})\mathfrak{k}\subset Z$, which may occur in case AIII (the symmetric pair $({\mathfrak{gl}}_{t+s},GL_t\times GL_s)$, $t,s\in \Bbb C$) and also in cases BDI, CII when one of the
two parameters $t,s$ is fixed to be an integer.
2. It is explained in \cite{He2} that
for classical symmetric pairs $(G,K)$,
the map $Z(\mathfrak{g})\to D(G/K)^G$ from the center of $U(\mathfrak{g})$
to the algebra of invariant differential operators
on $G/K$ is onto. This implies that in the classical case,
$U_\chi \otimes_{U(\mathfrak{k})}\Bbb C$ is the usual dual principal series
Harish-Chandra module for $G$.
\end{remark}
This gives rise to the following problem.
\begin{problem} Find the set of $\chi$ for which ${\mathcal{I}}(\chi)$ is reducible, and describe irreducible quotients of ${\mathcal{I}}(\chi)$.
\end{problem}
Also, more general Harish-Chandra modules may be obtained from ${\mathcal{I}}(\chi)$ and its quotients by tensoring with finite dimensional
Harish-Chandra modules (coming from objects of $\text{Rep}(G)$), and then taking quotients by various central characters.
Given $Y\in \text{Rep}(G)$ and a Harish-Chandra module $M$ with central character $\chi_1$, it is an interesting question for which
central characters $\chi_2$ the module
$$
(M\otimes Y)/(z-\chi_2(z),z\in Z)(M\otimes Y)
$$
is nonzero.
\subsection{Holomorphic discrete series}
In the special cases of Hermitian symmetric spaces, i.e., when $\mathfrak{p}=\u_+\oplus \u_-$ (namely, cases AIII, DIII, CI),
we can define the subcategory $HC_+(\mathfrak{g},K)$ of $HC(\mathfrak{g},K)$ of modules with a locally nilpotent action of $\u_+$.
A basic example of an object of $HC_+(\mathfrak{g},K)$ is the parabolic Verma module
$M_+(X)=U(\mathfrak{g})\otimes_{U(\mathfrak{k}\oplus \u_+)}X$, where $X\in \text{Rep}(K)$ is a simple object and $\u_+$ acts on $X$ by zero.
In this case we have the Harish-Chandra homomorphism $HC : Z(\mathfrak{g})\to Z(\mathfrak{k})$ defined as in the classical case
(namely, by the condition that $z\in HC(z)+U(\mathfrak{g})\u_+$), and the central character of $M_+(X)$ is defined by $\chi_X(z):=HC(z)|_X$.
The objects $M_+(X)$ are interpolations of holomorphic discrete series modules
in the classical case. It would be interesting to study the reducibility of $M_+(X)$.
The category $HC_+(\mathfrak{g},K)$ is a subcategory of the appropriate parabolic category O, discussed in the next section.
\section{Parabolic category O}
We would now like to extend the theory of category O to the setting of Deligne categories. Unfortunately, it is not clear how to define category O in this setting,
since for $\mathfrak{g}={\mathfrak{gl}}_t, {\mathfrak{o}}_t, {\mathfrak{sp}}_{2t}$ there is no obvious notion of a Borel subalgebra.
However, one can define the parabolic category O attached to a parabolic subalgebra.
Before doing so, let us review examples of parabolic subalgebras that can be defined
in the setting of Deligne categories.
\begin{example}\label{glpar} Let $G=GL_t$, $\mathfrak{g}={\mathfrak{gl}}_t$, and $t=t_1+...+t_m$, with $t_i,t\notin \Bbb Z$. Then we have a forgetful functor $\text{Rep}(G)\to \text{Rep}(L)$, where
$L=GL_{t_1}\times...\times GL_{t_m}$ is a ``Levi subgroup''
(and by $\text{Rep}(L)$ we mean $\text{Rep}(GL_{t_1})\boxtimes...\boxtimes \text{Rep}(GL_{t_m})$). Let $V_i$ be the tautological objects of $\text{Rep}(GL_{t_i})$.
Then in $\text{Rep}(L)$ we have a decomposition $\mathfrak{g}=\u_+\oplus \l\oplus \u_-$, where $\u_+=\oplus_{i<j}V_i\otimes V_j^*$,
$\u_-=\oplus_{i<j}V_i^*\otimes V_j$, and $\l=\oplus_i V_i\otimes V_i^*={\rm Lie}L$. So we have the parabolic subalgebra
$\mathfrak{p}_+=\l\oplus \u_+$, with Levi subalgebra $\l$ and unipotent radical $\u_+$ (in $\text{Rep}(L)$).
Note that we have a modification of this example, where a subset of the numbers $t_i$, e.g. $t_1,...,t_s$, are positive integers,
and we use the classical representation categories ${\bf Rep}(GL_{t_i})$ instead of $\text{Rep}(GL_{t_i})$ for $1\le i\le s$.
\end{example}
\begin{example}\label{osppar} Now let $G=O_{2t}$. Given a decomposition $t=t_0+t_1+...+t_m$ with $t_i\notin \Bbb Z$ for $i\ge 1$ and $2t_0,2t\notin \Bbb Z$,
we have a Levi subgroup $L=O_{2t_0}\times GL_{t_1}\times...\times GL_{t_m}$. Let $V_0,V_1,...,V_m$ be the corresponding tautological objects.
Then we have $\l={\rm Lie}L$, $\u_+=\oplus_{i<j}V_i\otimes V_j^*$, $\u_-=\oplus_{i<j}V_i^*\otimes V_j$, and $\mathfrak{g}=\u_+\oplus \l\oplus \u_-$.
Also, as in Example \ref{glpar}, we may fix a subset of the numbers $2t_0,t_1,...,t_m$
to be positive integers, and use the classical representation categories instead of Deligne categories.
This includes the case $t_0=0$, when the factor $O_{2t_0}$ is absent.
The same definition can be made for $G=Sp_{2t}$, with $L=Sp_{2t_0}\times GL_{t_1}\times...\times GL_{t_m}$.
In all of these cases, we have a parabolic subalgebra
$\mathfrak{p}_+=\l\oplus \u_+$, with Levi subalgebra $\l$ and unipotent radical $\u_+$ (in $\text{Rep}(L)$).
\end{example}
Now we can define the parabolic category O. Let $\l=\mathfrak{z}\oplus [\l,\l]$, where $\mathfrak{z}$ is the center of $\l$.
\begin{definition} The category $O(\mathfrak{g},L,\u_+)$, called the parabolic category O, is the category of finitely generated $U(\mathfrak{g})$-modules $M$ in ${\rm Ind}\text{Rep}(L)$ such that
(i) the action of $\u_+$ on $M$ is locally nilpotent;
(ii) the action of $[\l,\l]$ on $M$ coincides with its natural action (via the embedding $[\l,\l]\subset \l$),
and the action of $\mathfrak{z}$ on $M$ is semisimple.
\end{definition}
Typical objects in $O(\mathfrak{g},L,\u_+)$ are parabolic Verma modules. Namely, fix a weight $\lambda\in \mathfrak{z}^*$ and a simple object $X\in \text{Rep}(L)$.
Let $\bold 1_\lambda$ be the $\l$-module which is $\bold 1$ as an object of $\text{Rep}(L)$, and such that $\mathfrak{z}$ acts via $\lambda$, while $[\l,\l]$ acts trivially.
\begin{definition}
The parabolic Verma module $M_+(X,\lambda)$ is defined by the formula
$M_+(X,\lambda)=U(\mathfrak{g})\otimes_{U(\l\oplus \u_+)}(X\otimes \bold 1_\lambda)$, where
$\u_+$ acts on $X\otimes \bold 1_\lambda$ by zero.
We will mostly use the abbreviated notation $M(X,\lambda)$.
\end{definition}
Thus, as an ind-object of $\text{Rep}(L)$, we have $M(X,\lambda)=U(\u_-)\otimes X$.
Let $z_1,...,z_m$ be the natural basis of $\mathfrak{z}$ (namely, $z_i$ is the unit of ${\mathfrak{gl}}_{t_i}$).
Then the Lie algebra $\mathfrak{g}$ in $\text{Rep}(L)$ has a $\Bbb Z^m$-grading by eigenvalues of
the adjoint action of $z_1,...,z_m$, and the eigenobjects are finite dimensional
(i.e., are finite length objects of $\text{Rep}(L)$). Thus, for every module $M\in O(\mathfrak{g},L,\u_+)$
we can define its character with values in the Grothendieck ring of $\text{Rep}(L)$:
$$
{\rm ch}M(x_1,...,x_m)=\sum_{a_1,...,a_m} M[a_1,...,a_m]x_1^{a_1}...x_m^{a_m},
$$
where $M[a_1,...,a_m]$ is the common eigenobject of $z_1,...,z_m$ on $M$
with eigenvalues $a_1,...,a_m$.
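For instance, the character of a parabolic Verma module is explicit. Namely, since $M(X,\lambda)\cong U(\u_-)\otimes X$ as graded ind-objects of $\text{Rep}(L)$ and $S\u_-=\bigotimes_{i<j}S(V_i^*\otimes V_j)$, one obtains
$$
{\rm ch}\,M(X,\lambda)=x^{\mu}\sum_{(n_{ij})_{i<j}}\left([X]\otimes\bigotimes_{i<j}S^{n_{ij}}(V_i^*\otimes V_j)\right)\prod_{i<j}(x_j/x_i)^{n_{ij}},
$$
where $x^\mu:=\prod_i x_i^{\mu_i}$ records the eigenvalues $\mu_i$ of $z_i$ on $X\otimes \bold 1_\lambda$, and we use that the summand $V_i^*\otimes V_j$ of $\u_-$ has degree $-1$ with respect to $z_i$ and $+1$ with respect to $z_j$.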
Let $M_-(X,\lambda)$ be the module defined in the same way as
$M_+(X,\lambda)$, replacing $\u_+$ with $\u_-$. As objects we have $M_-(X,\lambda)=U(\u_+)\otimes X$.
\begin{proposition}\label{stanO}
(i) The module $M_+(X,\lambda)=M(X,\lambda)$ has a maximal proper submodule $J(X,\lambda)$ and a unique irreducible quotient $L(X,\lambda)=M(X,\lambda)/J(X,\lambda)$.
(ii) Every irreducible object in $O(\mathfrak{g},L,\u_+)$ is of the form $L(X,\lambda)$ for a unique $X,\lambda$.
(iii) There is a unique $\mathfrak{g}$-invariant pairing
$$
(,)_\lambda: M_+(X,\lambda)\otimes M_-(X^*,-\lambda)\to \bold 1
$$
(the Shapovalov form) which coincides with the evaluation morphism on $X\otimes X^*$.
The left kernel of $(,)_\lambda$ is $J(X,\lambda)$.
(iv) For generic $\lambda$ (i.e., outside of countably many hypersurfaces), $M(X,\lambda)$ is irreducible (i.e., $M(X,\lambda)=L(X,\lambda)$).
\end{proposition}
\begin{proof}
The proof is standard.
Namely, (i) is proved using that every submodule of $M(X,\lambda)$ is graded by $\Bbb Z^m$, and
thus the sum of all proper submodules of $M(X,\lambda)$ is a proper submodule.
(ii) follows from the fact that any simple object of $O(\mathfrak{g},L,\u_+)$ contains a subobject annihilated by $\u_+$
(by the local nilpotence of the action of $\u_+$).
(iii) follows since we can view the pairing in question as a homomorphism $M_+(X,\lambda)\to M_-(X^*,-\lambda)^*$, and
it is easy to see that such a homomorphism acting by the identity on the highest component exists and is unique, and its image is $L(X,\lambda)$.
Finally, (iv) follows (in the same way as in the classical case, using (iii)) from the fact that for generic $\lambda$, the pairing $B: \u_+\otimes \u_-\to \bold 1$
defined by $B:=\lambda\circ [,]$ is nondegenerate.
\end{proof}
\begin{remark}
In fact, one can determine the set of $\lambda$ for which (iv) fails by interpolating the Jantzen determinant formula
for the Shapovalov form on parabolic Verma modules (\cite{Ja}), to get a formula for the determinant of $(,)_\lambda$ on isotypic components.
\end{remark}
This gives rise to the following problem.
\begin{problem} Compute the characters of $L(X,\lambda)$ for all $X,\lambda,t,t_i$.
\end{problem}
This should, in particular, lead to some stable limits of Kazhdan-Lusztig polynomials.
\section{Lie supergroups}
Representation categories of classical Lie supergroups are interpolated to complex rank quite similarly to categories of $(\mathfrak{g},K)$-modules;
namely, as before, we have a decomposition $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p}$, and the only difference is that $\mathfrak{p}$ is odd.
Indeed, let us define the representation categories
of the classical Lie supergroups $GL_{t|s}$ and $OSp_{t|2s}$.
To define $G=GL_{t|s}$ for $t,s\notin \Bbb Z$, let $K=GL_t\times GL_s$.
Namely, let $V,U$ be the tautological objects of $\text{Rep}(GL_t)$ and $\text{Rep}(GL_s)$, respectively. Then we have a Lie superalgebra
$\mathfrak{g}={\mathfrak{gl}}_{t|s}=\mathfrak{k}\oplus \mathfrak{p}$ in $\text{Rep}(K)$, where $\mathfrak{k}={\rm Lie}K$ and $\mathfrak{p}=V\otimes U^*\oplus U\otimes V^*$, with the supercommutator
$S^2\mathfrak{p}\to \mathfrak{k}$ defined in the obvious way (pairing the two summands in $\mathfrak{p}$).
Similarly, to define $G=OSp_{t|2s}$ for $t,2s\notin \Bbb Z$, let $K=O_t\times Sp_{2s}$.
Let $V,U$ be the tautological objects in $\text{Rep}(O_t)$, $\text{Rep}(Sp_{2s})$. Then we can define $\mathfrak{g}={\mathfrak{osp}}_{t|2s}=\mathfrak{k}\oplus \mathfrak{p}$, where $\mathfrak{k}={\rm Lie}K$, and
$\mathfrak{p}=V\otimes U$, with the obvious supercommutator $S^2\mathfrak{p}\to \mathfrak{k}$.
\begin{definition} In both cases, the category $\text{Rep}(G)$ is the category of $\mathfrak{g}$-modules in ${\rm Ind}\text{Rep}(K)$,
such that the action of $\mathfrak{k}\subset \mathfrak{g}$ is the natural action.
\end{definition}
\begin{remark} In the classical case, the algebra $\wedge \mathfrak{p}$ is finite dimensional,
and hence any $G$-module is locally finite (i.e., a sum of finite dimensional modules).
However, this is no longer the case in the Deligne category setting, which is why we are considering
$\mathfrak{g}$-modules in ${\rm Ind}\text{Rep}(K)$ rather than $\text{Rep}(K)$.
\end{remark}
We can also define this category for $t$ or $s$ being a positive integer, using the usual representation category of the corresponding group
instead of the Deligne category.
\begin{remark}
Note that any object $Y\in \text{Rep}(G)$ has a natural $\Bbb Z_2$-grading, by the number of $U$-factors mod 2.
\end{remark}
An interesting question is to compute the structure of irreducible representations of $G$.
For instance, for $G=GL_{t|s}$, we can look at irreducible representations
with locally nilpotent action of $V\otimes U^*$ (i.e., those which lie in a suitable parabolic category O).
Any such representation $L$ has a unique simple $\text{Rep}(K)$-subobject $X$ killed by $V\otimes U^*$, which determines $L$; we write $L=L(X)$.
The representations $L(X)$ are $\Bbb Z$-graded (by the number of $V$-factors) with finite dimensional homogeneous
components, so one may raise the following problem.
\begin{problem} Compute the character of $L(X)$ for each $X$, i.e., the Hilbert series of $\text{Hom}(Y,L(X))$ for all $Y\in \text{Rep}(K)$.
\end{problem}
In the classical case $(t,s\in \Bbb Z)$, this problem was solved in \cite{Se},
in terms of a particular kind of Kazhdan-Lusztig polynomials. It would be interesting
to interpolate this result to complex values of $t$ and $s$.
\section{Affine Lie algebras}
Let $\mathfrak{g}$ be a Lie algebra in a symmetric tensor category ${\mathcal{C}}$. In this case, given any commutative algebra $R$ in ${\mathcal{C}}$
(for example, an ordinary algebra over $\Bbb C$), we can form a new Lie algebra $\mathfrak{g}\otimes R$. In particular, if $R=\Bbb C[z,z^{-1}]$, we obtain
the loop algebra $L\mathfrak{g}=\mathfrak{g}[z,z^{-1}]$.
Now let $\mathfrak{g}$ be a quadratic Lie algebra, i.e., a Lie algebra with a symmetric nondegenerate inner product $B: \mathfrak{g}\otimes \mathfrak{g}\to \bold 1$
(in other words, we have a symmetric isomorphism of $\mathfrak{g}$-modules $\mathfrak{g}\cong \mathfrak{g}^*$). In this case, one can define a 1-dimensional central extension
$\widehat{\mathfrak{g}}$ of $L\mathfrak{g}$, using the 2-cocycle $\omega: L\mathfrak{g}\otimes L\mathfrak{g}\to \bold 1$, given by
$\omega|_{\mathfrak{g} z^m\otimes \mathfrak{g} z^n}=m\delta_{m,-n}B$. The Lie algebra $\widehat{\mathfrak{g}}$ is called the {\it affine Lie algebra} attached to $\mathfrak{g}$.
Moreover, we have an action of the Virasoro algebra ${\rm Vir}$ on $\widehat{\mathfrak{g}}$ by $L_n|_{\mathfrak{g} z^m}=-mz^n: \mathfrak{g} z^m\to \mathfrak{g} z^{m+n}$, and we can form the semidirect
product ${\rm Vir}{\triangleright\!\!\!<} \widehat{\mathfrak{g}}$.
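In the classical case, the 2-cocycle identity for $\omega$ is a standard computation (it is equivalent to the invariance of $B$). For concreteness, here is a machine check in the example $\mathfrak{g}={\mathfrak{gl}}_2$ with $B$ the trace form; all code below is an ad hoc illustration, not part of the text.

```python
# Numerical check (ad hoc) that omega(x z^m, y z^n) = m delta_{m,-n} B(x,y)
# is a 2-cocycle on the loop algebra, for g = gl_2 with B = trace form.
# Loop-algebra elements are dicts {power of z: 2x2 integer matrix}.

import random

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

def bracket(x, y):
    """Pointwise commutator of loop-algebra elements."""
    out = {}
    for m, xm in x.items():
        for n, yn in y.items():
            c = mat_sub(mat_mul(xm, yn), mat_mul(yn, xm))
            out[m + n] = mat_add(out.get(m + n, [[0, 0], [0, 0]]), c)
    return out

def omega(x, y):
    """omega(x, y) = sum_m m * tr(x_m y_{-m})."""
    return sum(m * trace(mat_mul(xm, y[-m]))
               for m, xm in x.items() if -m in y)

def rand_elt():
    return {n: [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]
            for n in (-2, -1, 0, 1, 2)}

random.seed(1)
for _ in range(20):
    a, b, c = rand_elt(), rand_elt(), rand_elt()
    assert omega(a, b) == -omega(b, a)                       # antisymmetry
    assert (omega(bracket(a, b), c) + omega(bracket(b, c), a)
            + omega(bracket(c, a), b)) == 0                  # cocycle identity
```

The cocycle identity reduces, on monomials $Xz^i$, $Yz^j$, $Zz^k$ with $i+j+k=0$, to $-(i+j+k)\,{\rm tr}([X,Y]Z)=0$, which is why the check passes exactly in integer arithmetic.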
In this setting, we can generalize some standard results about affine Lie algebras. For instance, we have the Sugawara construction.
Namely, for any Lie algebra $\mathfrak{g}$ in ${\mathcal{C}}$ one can define the (symmetric) Killing form ${\rm Kil}: \mathfrak{g}\otimes \mathfrak{g}\to \bold 1$ by the formula
$$
{\rm Kil}={\rm ev}_{\mathfrak{g}^*}\circ ([,]\otimes {\rm Id}_{\mathfrak{g}^*})\circ ({\rm Id}_\mathfrak{g}\otimes [,]\otimes {\rm Id}_{\mathfrak{g}^*})\circ ({\rm Id}_{\mathfrak{g}\otimes\mathfrak{g}}\otimes {\rm coev}_\mathfrak{g}).
$$
Now for a quadratic $\mathfrak{g}$, let us call a number $k\in \Bbb C$ {\it non-critical} if the form $B_k:=kB+\frac{1}{2}{\rm Kil}: \mathfrak{g}\otimes \mathfrak{g}\to \bold 1$
is nondegenerate. Also, for any $i,j\in \Bbb Z$ define a morphism $C_{ij}(k): \bold 1\to U(\widehat{\mathfrak{g}})$ by the formula
$$
C_{ij}(k)={\rm mult}(B_k^{-1}\cdot(z^i\otimes z^j))\text{ if }i\le j,\qquad C_{ij}(k)=C_{ji}(k)\text{ if }i>j.
$$
Let us say that a $\widehat{\mathfrak{g}}$-module $M$ in ${\rm Ind}{\mathcal{C}}$ is {\it of level $k$} if the central subobject $\bold 1$ of $\widehat{\mathfrak{g}}$ acts by $k$ in $M$.
Also, let us say that $M$ is {\it admissible} if for every finite length ${\mathcal{C}}$-subobject $X\subset M$
there exists $N\in \Bbb N$ such that for all $n\ge N$, the action map $\mathfrak{g} z^n\otimes X\to M$ is zero.
\begin{proposition}\label{sugawa} Let $M$ be an admissible ${\widehat{\mathfrak{g}}}$-module in ${\mathcal{C}}$ of non-critical level $k$.
Then the action of $\widehat{\mathfrak{g}}$ on $M$ extends to an action of ${\rm Vir}{\triangleright\!\!\!<} \widehat{\mathfrak{g}}$
via the Sugawara formula:
$$
L_n=\frac{1}{2}\sum_{i+j=n}C_{ij}(k).
$$
Moreover, the Virasoro central charge of this action equals
$$
c=kB\circ B_k^{-1}.
$$
\end{proposition}
\begin{proof} The proof is standard, see e.g. \cite{Ka}.
\end{proof}
In particular, if $\mathfrak{g}$ is a simple Lie algebra (i.e., $\mathfrak{g}$ is a simple $\mathfrak{g}$-module), then ${\rm Kil}=gB$, where $g$ is the ``dual Coxeter number'' of $\mathfrak{g}$
(with respect to $B$). Then we get that $k$ is non-critical iff $k\ne -g$ (so $-g$ is called the critical level), and $c=\frac{k \dim\mathfrak{g}}{k+g}$.
Let us now specialize to the case when ${\mathcal{C}}=\text{Rep}(G)$, where $G=GL_t$, $O_t$, or $Sp_{2t}$, and $\mathfrak{g}={\rm Lie}(G)$,
with the form $B$ being the interpolation of the trace form (in the first case, we will also consider $\mathfrak{g}_0={\mathfrak{sl}}_t$,
which, unlike ${\mathfrak{gl}}_t$, is a simple Lie algebra). Then the Sugawara construction applies,
with the dual Coxeter numbers $g=t$ for ${\mathfrak{sl}}_t$, $g=t-2$ for ${\mathfrak{o}}_t$, and $g=2t+2$ for
${\mathfrak{sp}}_{2t}$ (note that the dual Coxeter number of ${\mathfrak{sp}}_{2n}$ is $n+1$, but our normalization of the bilinear form
differs from the standard one by a factor of two, which doubles this value).
One can also consider the theory of parabolic category O for $\widehat{\mathfrak{g}}$, similarly to Section 4. Namely, we define the category
$O_k(\widehat{\mathfrak{g}})$ to be the category of $\widehat{\mathfrak{g}}$-modules of level $k$ in $\text{Rep}(G)$ on which the action of $\mathfrak{g}$
is the natural one, and the action of $z\mathfrak{g}[z]$ is locally nilpotent. Typical objects
of $O_k(\widehat{\mathfrak{g}})$ are Verma modules
$$
M(X,k)=U(\widehat{\mathfrak{g}})\otimes_{U(\mathfrak{g}[z]\oplus \bold 1)}X=U(z^{-1}\mathfrak{g} [z^{-1}])\otimes X,\ X\in \text{Rep}(G),
$$
where
$\bold 1$ acts on $X$ by $k$ and $z\mathfrak{g} [z]$ by zero. Similarly to Section 4, this module
admits a Shapovalov form and has a unique simple quotient $L(X,k)$, and $M(X,k)=L(X,k)$ for all but countably many $k$.
In fact, by looking at the action of the Casimir operator $L_0+d$ (where $d$ is the degree operator),
one can check that the numbers $k$ for which $M(X,k)\ne L(X,k)$ are all of the form $r_1t+r_2$, where $r_1,r_2\in \Bbb Q$.
This gives rise to the following problem:
\begin{problem} Determine the characters of $L(X,k)$ in the case when $L(X,k)\ne M(X,k)$.
\end{problem}
As an example, consider the basic representation of $\widehat{\mathfrak{g}}_0$, where $G=GL_t$ and $\mathfrak{g}_0={\mathfrak{sl}}_t$, namely, $\bold V:=L(\bold 1,k=1)$.
Note that this representation is graded by powers of $z$. Thus, for any $X\in \text{Rep}(G)$, we can ask for the Hilbert series of the isotypic component of $X$,
$$
h_X(q)=\sum_n \dim\text{Hom}(X,\bold V)[-n]q^n.
$$
Let us determine $h_X(q)$. To do so, recall that in the classical case of ${\mathfrak{sl}}_n$, we have the Frenkel-Kac vertex operator construction (\cite{Ka}), which
gives the character formula
$$
{\rm ch}\bold V=\frac{\sum_{\beta\in Q}q^{\beta^2/2}e^\beta}{\prod_{j\ge 1}(1-q^j)^{n-1}},
$$
where $Q$ is the root lattice.
So we would like to interpolate this formula. For this purpose, we write this sum as a linear combination of irreducible characters $\chi_\lambda$ of ${\mathfrak{sl}}_n$.
This formula is well known (see \cite{Ka}, Exercise 12.17), but we recall its derivation for the reader's convenience. We have
$$
{\rm ch}\bold V=\sum_{\lambda \in Q\cap P_+}C_{\lambda,n}(q)\chi_\lambda,
$$
where
$$
C_{\lambda,n}(q)=|W|^{-1}\frac{\sum_{\beta\in Q}q^{\beta^2/2}(\Delta^2e^\beta,\chi_\lambda)}{\prod_{j\ge 1}(1-q^j)^{n-1}},
$$
$P_+$ is the set of dominant integral weights,
$(,)$ denotes the inner product defined by $(e^\beta,e^{\gamma})=\delta_{\beta,-\gamma}$, $W=S_n$ is the Weyl group, and $\Delta$ is the Weyl denominator.
By the Weyl character formula, we have
$$
\Delta \chi_\lambda=m_{\lambda+\rho}^-:=\sum_{w\in W}(-1)^w e^{w(\lambda+\rho)}.
$$
Thus we get
$$
C_{\lambda,n}(q)=|W|^{-1}\frac{\sum_{\beta\in Q}q^{\beta^2/2}(\Delta e^\beta, m_{\lambda+\rho}^-)}{\prod_{j\ge 1}(1-q^j)^{n-1}}.
$$
Now by the Weyl denominator formula, $\Delta=\sum_{w\in W}(-1)^w e^{w\rho}$, so we get
$$
C_{\lambda,n}(q)=\frac{\sum_{w\in W}(-1)^wq^{(\lambda+\rho-w\rho)^2/2}}{\prod_{j\ge 1}(1-q^j)^{n-1}}=
q^{\lambda^2/2}\frac{\prod_{\alpha>0}(1-q^{(\lambda+\rho,\alpha)})}{\prod_{j\ge 1}(1-q^j)^{n-1}}.
$$
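The equality of the two numerators above is a specialization of the Weyl denominator formula. As an illustration (ad hoc code, not part of the text), it can be machine-checked for ${\mathfrak{sl}}_3$, using $\mathfrak{gl}$-style coordinates with $\rho=(2,1,0)$ and positive roots $e_i-e_j$, $i<j$ (both sides are invariant under shifting $\rho$ by a multiple of $(1,1,1)$):

```python
# Sanity check (ad hoc) of the identity
#   sum_{w in W} (-1)^w q^{(lambda+rho-w rho)^2/2}
#     = q^{lambda^2/2} prod_{alpha>0} (1 - q^{(lambda+rho, alpha)})
# for sl_3, with rho = (2, 1, 0) and positive roots e_i - e_j (i < j).
# Polynomials in q are stored as dicts {exponent: coefficient}.

from itertools import permutations

RHO = (2, 1, 0)

def sign(p):
    """Sign of the permutation p (given as a tuple of images)."""
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def clean(d):
    return {e: c for e, c in d.items() if c != 0}

def lhs(lam):
    """Alternating sum over the Weyl group W = S_3."""
    mu = tuple(l + r for l, r in zip(lam, RHO))
    out = {}
    for p in permutations(range(3)):
        wrho = tuple(RHO[p[i]] for i in range(3))
        e = sum((m - w) ** 2 for m, w in zip(mu, wrho)) // 2
        out[e] = out.get(e, 0) + sign(p)
    return clean(out)

def rhs(lam):
    """q^{lam^2/2} prod_{i<j} (1 - q^{mu_i - mu_j}), with mu = lam + rho."""
    mu = tuple(l + r for l, r in zip(lam, RHO))
    poly = {sum(x * x for x in lam) // 2: 1}
    for i in range(3):
        for j in range(i + 1, 3):
            new = {}
            for e, c in poly.items():
                new[e] = new.get(e, 0) + c
                new[e + mu[i] - mu[j]] = new.get(e + mu[i] - mu[j], 0) - c
            poly = new
    return clean(poly)

# lam ranges over a few weights in the root lattice (coordinates sum to 0)
for lam in [(0, 0, 0), (1, 0, -1), (2, -1, -1), (2, 0, -2)]:
    assert lhs(lam) == rhs(lam)
```

For $\lambda=0$ both sides equal $1-2q+2q^3-q^4$, which the code reproduces exactly.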
Now we can see that if $\lambda$ and $\mu$ are partitions with $|\lambda|=|\mu|$, then
$C_{[\lambda,\mu]_n,n}(q)$ has a limit $C_{\lambda,\mu,\infty}(q)$ as $n\to \infty$.
For example, if $\lambda=\mu=0$, we get
$$
C_{0,0,\infty}(q)=\prod_{j=2}^\infty (1-q^j)^{-j+1}.
$$
In general, if $\lambda=(\lambda_1,...,\lambda_r)$ and $\mu=(\mu_1,...,\mu_s)$, we get
$$
C_{\lambda,\mu,\infty}(q)=
q^{\frac{\lambda^2+\mu^2}{2}}C_{0,0,\infty}(q)\prod_{1\le i<j\le r}\frac{1-q^{\lambda_i-\lambda_j+j-i}}{1-q^{j-i}}\prod_{1\le i<j\le s}\frac{1-q^{\mu_i-\mu_j+j-i}}{1-q^{j-i}}\times
$$
$$
\prod_{i=1}^r\prod_{j=0}^{\lambda_i-1}(1-q^{r+1+j-i})^{-1}
\prod_{i=1}^s\prod_{j=0}^{\mu_i-1}(1-q^{s+1+j-i})^{-1}
$$
For example, if $\lambda=\mu=(p)$, we get
$$
C_{p,p,\infty}(q)=q^{p^2}C_{0,0,\infty}(q)\prod_{j=1}^p (1-q^j)^{-2}=
$$
$$
q^{p^2}(1-q)^{-2}(1-q^2)^{-3}...(1-q^p)^{-p-1}(1-q^{p+1})^{-p}(1-q^{p+2})^{-p-1}...
$$
Thus, we obtain the following proposition.
\begin{proposition}\label{charform}
The Hilbert series of $\text{Hom}(X_{\lambda,\mu},\bold V)$ equals $C_{\lambda,\mu,\infty}(q)$.
\end{proposition}
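The closed formulas above are easy to test by machine. The following sketch (illustrative, with ad hoc helper names) expands $C_{\lambda,\mu,\infty}(q)$ as a truncated power series from the general product formula and checks the specialization $\lambda=\mu=(p)$ against the product displayed before the proposition:

```python
# Sanity check (ad hoc): expand C_{lambda,mu,infty}(q) mod q^M from the
# general formula, and compare lambda = mu = (p) with the specialization
# q^{p^2} C_{0,0,infty}(q) prod_{j=1}^p (1-q^j)^{-2}.
# Truncated power series are lists of integer coefficients.

M = 18  # work modulo q^M

def mul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < M:
                    c[i + j] += ai * bj
    return c

def inv_one_minus(k):
    """The series 1/(1 - q^k) mod q^M."""
    return [1 if i % k == 0 else 0 for i in range(M)]

def C00():
    """C_{0,0,infty}(q) = prod_{j>=2} (1 - q^j)^{-(j-1)}."""
    s = [1] + [0] * (M - 1)
    for j in range(2, M):
        for _ in range(j - 1):
            s = mul(s, inv_one_minus(j))
    return s

def C_general(lam, mu):
    """The general product formula for C_{lambda,mu,infty}(q)."""
    e0 = (sum(x * x for x in lam) + sum(x * x for x in mu)) // 2
    s = [0] * M
    if e0 < M:
        s[e0] = 1                       # prefactor q^{(lambda^2+mu^2)/2}
    s = mul(s, C00())
    for part in (lam, mu):
        n = len(part)
        for i in range(n):
            for j in range(i + 1, n):
                # factor (1 - q^{l_i - l_j + j - i}) / (1 - q^{j - i})
                num = [0] * M
                num[0] = 1
                e = part[i] - part[j] + (j - i)
                if e < M:
                    num[e] -= 1
                s = mul(mul(s, num), inv_one_minus(j - i))
        for i in range(1, n + 1):
            for j in range(part[i - 1]):
                s = mul(s, inv_one_minus(n + 1 + j - i))
    return s

def C_pp(p):
    """The specialization for lambda = mu = (p)."""
    s = [0] * M
    s[p * p] = 1
    s = mul(s, C00())
    for j in range(1, p + 1):
        s = mul(s, mul(inv_one_minus(j), inv_one_minus(j)))
    return s

for p in (1, 2, 3):
    assert C_general([p], [p]) == C_pp(p)
```

All arithmetic is over the integers, so the agreement of the two expansions is exact up to the truncation order.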
Similarly, if $\widetilde{\bold V}$ is the basic representation of $\widehat{\mathfrak{g}}$, where
$\mathfrak{g}={\mathfrak{gl}}_t$, then $\widetilde{\bold V}={\bold V}\otimes {\mathcal{F}}$, where ${\mathcal{F}}$ is the standard Fock space, so the Hilbert series of $\text{Hom}(X_{\lambda,\mu},\widetilde{\bold V})$
is $\widetilde{C}_{\lambda,\mu,\infty}(q)$, where
$\widetilde{C}_{\lambda,\mu,\infty}(q)=C_{\lambda,\mu,\infty}(q)\prod_{i=1}^\infty (1-q^i)^{-1}.$
For example,
$\widetilde{C}_{0,0,\infty}(q)=\prod_{j=1}^\infty (1-q^j)^{-j}.$
\begin{remark} Note that in the classical situation, $\widetilde{\bold V}$ is a vertex operator algebra, and $\widetilde{\bold V}^G$
is known to be the affine $W$-algebra $W_n=W({\mathfrak{gl}}_n)$ of central charge $n$ (\cite{F}; see also \cite{FKRW}). Similarly,
in the Deligne category setting, $\widetilde{\bold V}$ is a vertex operator algebra in ${\rm Ind}\text{Rep}(G)$,
and $\widetilde{\bold V}^G$ is the $W_{1+\infty}$ vertex operator algebra with central charge $t$, see \cite{FKRW}.
Moreover, the spaces $\text{Hom}(X_{\lambda,\mu},\widetilde{\bold V})$ are modules over this vertex operator algebra.
\end{remark}
\section{Yangians}
As in the previous section, let $\mathfrak{g}$ be a quadratic Lie algebra in a symmetric tensor category ${\mathcal{C}}$.
In this case, following Drinfeld (\cite{Dr}; see \cite{CP} for a detailed discussion), one can define an algebra $Y(\mathfrak{g})$ in ${\mathcal{C}}$ called the {\it Yangian} of $\mathfrak{g}$, which
is a Hopf algebra deformation of $U(\mathfrak{g}[z])$. More precisely, Drinfeld gave a definition of $Y(\mathfrak{g})$ when $\mathfrak{g}$ is a simple Lie algebra in the category of vector spaces, but the definition extends verbatim to our more general setting. In particular, this construction allows us to define $Y(\mathfrak{g})$ for $\mathfrak{g}={\rm Lie}G$, where $G=GL_t$, $O_t$, or $Sp_{2t}$.
This $Y(\mathfrak{g})$ is a Hopf algebra in ${\rm Ind}\text{Rep}(G)$.
The PBW theorem for $Y(\mathfrak{g})$ says that $Y(\mathfrak{g})$ has a filtration such that ${\rm gr}Y(\mathfrak{g})=U(\mathfrak{g}[z])$ (with grading by powers of $z$); in particular, it contains $U(\mathfrak{g})$ as a Hopf subalgebra.
As usual, it is not hard to see that there is a surjective map $U(\mathfrak{g}[z])\to {\rm gr}Y(\mathfrak{g})$
(as the defining relations of the Yangian deform the defining relations of $U(\mathfrak{g}[z])$), but it
is harder to show that this map is also injective. This result was proved by Drinfeld in the classical setting,
and therefore follows in Deligne categories by interpolation.
Drinfeld's relations for $Y(\mathfrak{g})$ are rather complicated, so let us give a different, simpler presentation of $Y(\mathfrak{g})$ for $G=GL_t$, which is
an interpolation of the Faddeev-Reshetikhin-Takhtajan presentation (see \cite{Mo} for a review and references). To introduce this presentation, let $V$ be the tautological object of $\text{Rep}(G)$,
and start with the tensor algebra $A:={\bold T}(\oplus_{i\ge 0} (V\otimes V^*)_{(i)})$. Let $T_i\in \text{Hom}(\bold 1,(V^*\otimes V)\otimes A)$
be the coevaluation map
$$
\bold 1\to (V^*\otimes V)\otimes (V\otimes V^*)_{(i)}.
$$
Let $T(u)=1+T_0u^{-1}+T_1u^{-2}+...$
be the generating function of $T_i$, and let $R(u)=1+\frac{\sigma}{u}$, where $\sigma: V\otimes V\to V\otimes V$ is the permutation.
Consider the series
$$
Q(u,v):=(u-v)(R^{12}(u-v)T^{13}(u)T^{23}(v)-T^{23}(v)T^{13}(u)R^{12}(u-v))
$$
in $u^{-1}$ and $v^{-1}$ with coefficients $Q_{ij}\in \text{Hom}(\bold 1, (V^*\otimes V)\otimes (V^*\otimes V)\otimes A)$
(so that $Q=\sum Q_{ij}u^iv^j$). Regard $Q_{ij}$ as a morphism
$$
Q_{ij}: (V\otimes V^*)\otimes (V\otimes V^*)\to A
$$
(landing in degree $2$). Let $J$ be the ideal in $A$ generated by the images of all the $Q_{ij}$.
\begin{definition}\label{yanggln} The algebra $Y(\mathfrak{g}):=A/J$ is called the Yangian of $\mathfrak{g}$.
\end{definition}
One can show that Definition \ref{yanggln} is equivalent to the general definition above (this is proved in the same way as in the classical case).
In particular, the copy of $U(\mathfrak{g})$ inside $Y(\mathfrak{g})$ is generated by the image of $T_0$ (regarded as a morphism $V\otimes V^*\to Y(\mathfrak{g})$);
moreover, $T_i$ corresponds in the associated graded algebra to $\mathfrak{g} z^i$. Finally, the Hopf algebra structure
is written very simply in terms of this presentation: $\Delta(T(u))=T^{12}(u)T^{13}(u)$, $\varepsilon(T(u))=1$, $S(T(u))=T(u)^{-1}$
(where $\Delta$ is the coproduct, $\varepsilon$ the counit, and $S$ is the antipode).
Besides greater simplicity than the general definition, Definition \ref{yanggln} has the important advantage that
it comes with a family of representations. Namely, since $R$ satisfies the quantum Yang-Baxter equation
$$
R^{12}(u_1-u_2)R^{13}(u_1-u_3)R^{23}(u_2-u_3)=R^{23}(u_2-u_3)R^{13}(u_1-u_3)R^{12}(u_1-u_2),
$$
we find that the assignment
$T(u)\mapsto R(u-z)$, $z\in \Bbb C$, defines a homomorphism ${\rm ev}_z: Y(\mathfrak{g})\to U(\mathfrak{g})$,
called the {\it evaluation homomorphism}. For $X\in \text{Rep}(G)$, denote by $X(z)$ the pullback ${\rm ev}_z^*X$ to a representation of $Y(\mathfrak{g})$.
Then, given any simple objects $X_1,...,X_k\in \text{Rep}(G)$ and $z_1,...,z_k\in \Bbb C$, we can construct the representation
$X_1(z_1)\otimes...\otimes X_k(z_k)$ of $Y(\mathfrak{g})$. As in the classical case, these representations are irreducible for generic
parameter values, but become reducible for special values, and other irreducible representations
are obtained as their composition factors.
This gives rise to the following problem.
\begin{problem} Classify irreducible representations of $Y(\mathfrak{g})$ in $\text{Rep}(G)$ on which the action of $\mathfrak{g}$ is the natural one
(generalizing the theory of Drinfeld polynomials, \cite{CP}) and compute their decompositions into simple objects of $\text{Rep}(G)$.
\end{problem}
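As a concrete anchor for the constructions above (an illustration, not part of the text): for integer $t$, the $R$-matrix $R(u)=1+\frac{\sigma}{u}$ is the classical Yang solution of the quantum Yang-Baxter equation, which can be verified numerically, e.g. for $V=\Bbb C^2$:

```python
# Numerical illustration (ad hoc): Yang's R-matrix R(u) = 1 + sigma/u
# satisfies the quantum Yang-Baxter equation. We check this for the
# classical case V = C^2, on the triple tensor product C^8.

def swap_matrix(pos1, pos2):
    """8x8 permutation matrix exchanging tensor slots pos1 and pos2."""
    M = [[0.0] * 8 for _ in range(8)]
    for i in range(8):
        bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
        bits[pos1], bits[pos2] = bits[pos2], bits[pos1]
        j = (bits[0] << 2) | (bits[1] << 1) | bits[2]
        M[j][i] = 1.0
    return M

def R(u, P):
    """R^{ij}(u) = Id + P^{ij}/u on the triple tensor product."""
    return [[(1.0 if a == b else 0.0) + P[a][b] / u for b in range(8)]
            for a in range(8)]

def matmul(A, B):
    return [[sum(A[a][k] * B[k][b] for k in range(8)) for b in range(8)]
            for a in range(8)]

P12, P13, P23 = swap_matrix(0, 1), swap_matrix(0, 2), swap_matrix(1, 2)
u1, u2, u3 = 0.7, -1.3, 2.4     # generic spectral parameters

lhs = matmul(matmul(R(u1 - u2, P12), R(u1 - u3, P13)), R(u2 - u3, P23))
rhs = matmul(matmul(R(u2 - u3, P23), R(u1 - u3, P13)), R(u1 - u2, P12))
err = max(abs(lhs[a][b] - rhs[a][b]) for a in range(8) for b in range(8))
assert err < 1e-12
```

The identity holds exactly at the algebraic level, so the residual `err` is pure floating-point noise.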
It is known that in the classical setting, these irreducible representations have a rich structure, related to
the geometry of quiver varieties, cluster algebras, Hirota bilinear relations, etc.
% arXiv:2111.05254
\section{Introduction}
Deterministic field theories (such as hydrodynamics, classical electrodynamics and general relativity) find application in all areas of physics, ranging from condensed matter physics to string theory. Recently, the whole area of classical field theory has received a new boost, due to experimental advances such as the discovery of the Quark-Gluon Plasma at RHIC and LHC \cite{QGPreview2017} and the now commonplace detection of GW mergers from compact objects by LIGO, Virgo and KAGRA \cite{Liguz2019}, which have driven the development of an ever-increasing number of fluid-like theories describing exotic phenomena of all kinds \cite{Heller2014,Sadoogi2018,Florskoski2019,
GavassinoKhalatnikov2021,GavassinoQuasiHydro2022}. Most notably, relativistic dissipative hydrodynamics is becoming a standard tool in the study of a host of physical problems, from high-energy physics \cite{FlorkowskiReview2018} to astrophysics \cite{Shibatuz2017,CamelioBulk1_2022,CamelioBulk2_2022}.
The search for the ``correct'' field theory for describing a given phenomenon typically involves formulating a large number of alternative candidate theories, many of which are then ruled out, or proven to be equivalent to others. Usually, there is so much freedom in the construction of a phenomenological theory, that it is easy to get lost in the landscape of alternative formulations. For example, there are at least 11 different formulations of relativistic viscous hydrodynamics \cite{Eckart40,landau6,Israel_Stewart_1979,LindblomRelaxation1996,Liu1986,
carter1991,BemficaDNDefinitivo2020,BaierRom2008,
Denicol2012Boltzmann,SrickerOttinger2019,VanStableFirst2012}, 7 formulations of superfluid hydrodynamics
\cite{khalatnikov_book,prix2004,cool1995,lebedev1982,
Son2001,koby2018PhRvC,Gusakov08}, and 6 formulations of radiation hydrodynamics
\cite{Thomas1930,Weinberg1971,UdeyIsrael1982,
AnileRadiazion1992,GavassinoRadiazione,Farris2008}. However, in a relativistic setting, all this freedom comes at a price: most of the theories that one can formulate lead to completely unphysical predictions \cite{Hiscock_Insatibility_first_order}. For example, since the flow of energy equals the density of momentum, in some (unphysical) theories a fluid can spontaneously accelerate, departing from equilibrium, and pushing heat in the opposite direction to conserve the total momentum \cite{Hishcock1988,GavassinoLyapunov_2020}. Pathologies of this kind constitute a serious problem for numerical simulations, because unphysical artefacts cannot be separated from physical effects.
Luckily, there is a standard procedure that allows us to test the reliability of a relativistic theory and rule out a considerable fraction of candidate theories: the causality-stability assessment. The idea is simple: a theory can be considered reliable only if signals do not propagate faster than light (causality\footnote{{In the theory of relativity, the word ``causality'' stands for ``subluminal propagation of information'' \cite{Hawking1973,Wald,BemficaCausality2018,
Susskind1969,Fox1970}. This concept was introduced because the additional term ``$\, vx_B \,$'' in the relativistic transformation of time, $t_A=\gamma (t_B + v x_B)$, can push a future event to the past (or vice versa), provided that $x_B^2 > t_B^2$. Hence, if information could propagate faster than light, there would be an observer for whom it is propagating towards the past. To avoid grandfather-like paradoxes, it was conjectured that superluminal communication, just like communication to the past, should be impossible, because the effect must always follow the cause (hence the term causality). Logically speaking, this reasoning is not very rigorous \cite{kessence}. But it worked! The principle of causality is built into the mathematical structure of the Standard Model of particle physics \cite{Peskin_book,Eberhard1988,Keister1996} and, to date, it has never been falsified.}}), and if the state of thermodynamic equilibrium (or the vacuum, for zero-temperature theories) is stable against (possibly large\footnote{Throughout the article, we use the generic word ``perturbation'' as a synonym of ``disturbance'', namely an alteration (i.e. displacement) of a region of the medium from its equilibrium state. According to this terminology, a perturbation is not necessarily small, unless we say it explicitly. In general, the thermodynamic equilibrium state should be stable against all kinds of perturbations, both small and large \cite{Hishcock1988}, although in most situations one is able to rigorously assess stability only in the linear regime.}) perturbations. For decades, there has been a whole line of research devoted to assessing these two properties
\cite{Stuekelberg1962,
Hishcock1983,OlsonLifsh1990,GerochLindblom1990,
Geroch_Lindblom_1991_causal,Pu2010,
Kovtun2019,Lopez11,Bemfica2019_conformal1,
BrutoThird2021,GavassinoGibbs2021,
GavassinoCausality2021}. Unfortunately, the assessment procedure is complicated (especially as concerns stability), and the proposed theories far outnumber those that are then effectively tested. It is clear that a universal and easily applicable criterion, which can be used to quickly assess whether a theory is stable or not, would be a breakthrough for the field (which is exactly what this paper provides).
One aspect of the assessment is particularly problematic. When we study the dynamics of small perturbations around the vacuum state, the linearised field equations are the same in all reference frames, because we are linearising a Lorentz-covariant theory around a Lorentz-invariant state. On the other hand, if the unperturbed state has finite temperature (or chemical potential), its total four-momentum defines a preferred reference frame, so that the linearised field equations look different in different frames. This opens the door to a counter-intuitive fact: at finite temperature, the equilibrium state may be stable in one reference frame, but unstable in another one. This paradox is possible only because different observers impose their initial data on different constant-time hypersurfaces, and hence deal with different initial-value problems \cite{Kost2000,GavassinoLyapunov_2020,GavassinoUEIT2021}. The result is that one needs to test the stability of the equilibrium in all reference frames, to be sure that a theory really makes sense. This is unfortunate, because the stability analysis in a reference frame in which the system is moving can be very cumbersome (the background is anisotropic).
The goal of this paper is to finally resolve the paradox of systems that are stable in one reference frame and unstable in others. We will prove that this can happen \textit{only if} the principle of causality is violated. The intuition behind this fact is that two observers can disagree on whether a perturbation is growing or decaying only if (by relativity of simultaneity \cite{special_in_gen}) the perturbation can be chronologically reordered, so that the two observers disagree on which part of the perturbation is in the past, and which is in the future. Since this can happen only if the perturbation propagates outside the light-cone, it follows that you need to violate causality, if you want to have two observers disagreeing on a stability assessment. This simple idea, once formulated in mathematical terms, will result into two theorems, according to which causal theories that are stable in one reference frame are also stable in any other frame.
In the following, I will first describe the physical setup of the problem, and the general physical mechanisms at the origin of the instability of dissipative theories. I will then rigorously prove the main result in section \ref{causalstablerelazione}. A reader not interested in the technical details may, however, skip to section \ref{unishiipunti}, where I will provide a simple argument that summarizes the essence of the whole paper, or directly to section \ref{apliacia}, where I will present 14 examples of concrete applications of our results to theories that are commonly used in a number of fields.
Readers who wish to have a brief summary of how the relativistic stability assessment usually works may consult Appendix \ref{AAAAAAAAA} for a quick overview,
with particular emphasis on the differences with the non-relativistic case. The mathematical and logical foundations of the method were laid in \cite{Hiscock_Insatibility_first_order}.
Throughout the paper we adopt the signature $ ( - , +, + , + ) $ and work in natural units $c=1$. The space-time is Minkowski, with metric $g$; we use global inertial coordinates, generically denoted by $x^a$ (so that $\nabla_a=\partial_a$). Finally, all observers are inertial observers, i.e. they do not accelerate and they do not rotate.
\section{Some pertinent context}
The idea that there could be a connection between causality violations and instabilities has a long history, which may be summarised in the words of \citet{Israel_2009_inbook}: ``\textit{If the source of an effect can be delayed, it should be possible for a system to borrow energy from its ground state, and this implies instability}''. This argument is a restatement of the Hawking-Ellis vacuum conservation theorem \cite{Hawking1973}, according to which, if energy can enter an empty region faster than the speed of light, then the dominant energy condition is violated, and the energy density may become negative in some reference frame. Unfortunately, these ideas are not applicable to our case, because we are not studying the stability of the vacuum state, but that of a finite-temperature equilibrium state. More importantly, causality violations can occur even in systems that obey the dominant energy condition. For example, take a barotropic perfect fluid with equation of state\footnote{The reader should not be concerned about the fact that $dP/d\rho <0$ (thermodynamic inconsistency) for some $\rho$: in our proof of principle, we only need an acausal field theory, well-defined for any $\rho \geq 0 $, with smooth coefficients in the field equations, and such that $\rho >|P|$.}
\begin{equation}\label{prruz}
P(\rho) = \dfrac{\rho}{3} \big[1+ \sin(\rho^2) \big],
\end{equation}
where $P$ is the pressure and $\rho$ is the energy density, in some fixed units. This fluid is consistent with the dominant energy condition ($\rho>|P|$), but its equations are acausal, because the squared speed of sound $dP/d\rho$ is unbounded from above.
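Both claims about this toy equation of state are elementary to verify; here is an illustrative numerical check (the function names are ad hoc, not from the paper):

```python
# Illustrative check (ad hoc) of the two claims about the toy equation of
# state P(rho) = (rho/3) * (1 + sin(rho^2)):
# (i)  dominant energy condition rho > |P| for rho > 0, since |P| <= 2 rho/3;
# (ii) dP/drho = (1 + sin(rho^2))/3 + (2 rho^2/3) cos(rho^2) is unbounded
#      above (sample it at rho^2 = 2 pi n, where cos(rho^2) = 1).

import math

def P(rho):
    return rho / 3.0 * (1.0 + math.sin(rho ** 2))

def dP_drho(rho):
    return (1.0 + math.sin(rho ** 2)) / 3.0 \
        + (2.0 * rho ** 2 / 3.0) * math.cos(rho ** 2)

# (i) dominant energy condition on a grid of densities
for k in range(1, 2001):
    rho = 0.01 * k
    assert abs(P(rho)) < rho

# (ii) the "sound speed" grows without bound along rho_n = sqrt(2 pi n),
#      where dP/drho = (1 + 4 pi n)/3
for n in (1, 10, 100):
    assert dP_drho(math.sqrt(2.0 * math.pi * n)) > math.pi * n
```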
Luckily, it is not so hard to modify the idea of Israel, adapting it to our case of interest: we only need to replace ``energy'' with ``entropy'' and ``ground state'' with ``equilibrium state'' \cite{GavassinoCausality2021}. Let us see in more detail how this works with a simple qualitative argument.
\subsection{Acausality + Dissipation = Instability?}\label{ilprimoluiluilui}
Imagine that a signal travels between two events $p$ and $q$, which are space-like separated, i.e. $g(p-q,p-q)>0$. By relativity of simultaneity \cite{special_in_gen}, we know that there are some reference frames in which $p$ happens before $q$, and other reference frames in which $q$ happens before $p$. Hence, in some reference frames the signal is travelling superluminally from $p$ to $q$, while in other frames it travels superluminally from $q$ to $p$.
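In coordinates, the reordering is just a Lorentz boost; here is a minimal numerical illustration (ad hoc code, units $c=1$, arbitrary event coordinates):

```python
# Minimal illustration (ad hoc, not from the paper) of the relativity of
# simultaneity invoked above: for two spacelike-separated events p and q,
# a Lorentz boost can invert their time ordering. Coordinates are (t, x).

import math

def boost(event, v):
    """Lorentz boost with velocity v along the x-axis (c = 1)."""
    t, x = event
    g = 1.0 / math.sqrt(1.0 - v * v)
    return (g * (t - v * x), g * (x - v * t))

p, q = (0.0, 0.0), (1.0, 3.0)
# spacelike separation: g(p - q, p - q) = (dx)^2 - (dt)^2 > 0
assert (q[1] - p[1]) ** 2 > (q[0] - p[0]) ** 2

# In the original frame, q happens after p ...
assert q[0] > p[0]
# ... but for an observer moving with v = 0.5, q happens before p:
assert boost(q, 0.5)[0] < boost(p, 0.5)[0]
```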
Now, imagine repeating this experiment, placing between $p$ and $q$ a dissipative medium, which absorbs the signal along the way. Then, the signal is emitted from, say, $p$. It travels in the direction of $q$, but it decays before reaching $q$. But in those reference frames in which $q$ happens before $p$, we observe that the signal is spontaneously generated in the middle of the medium, grows without any external influence (nothing happens at $q$), and travels to $p$. Thus, the medium is unstable to the spontaneous generation of perturbations! One may argue that this type of perturbation is not really spontaneous, because we still need an emitter/receiver at $p$ for it to occur. However, the argument still works if we send $p$ to space-like infinity, so that we are left with a medium that absorbs/emits a space-like beam, which travels from/to infinity.
The idea of the argument above is the same as that of \citet{Israel_2009_inbook}: if the cause of a signal (i.e. $p$) can be delayed, then the system can spontaneously generate a perturbation, borrowing entropy from the equilibrium state, and reversing the dissipative processes that should, instead, damp the perturbation. This implies instability.
Besides this qualitative argument, what are the concrete indications that causality and instabilities may be related? Let us have a brief summary of the present understanding of the causality-stability problem.
\subsection{Breakdown of causality and stability in infrared theories}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{JDR.png}
\caption{ Geometric visualization of the principle of causality. Take an arbitrary spacelike Cauchy 3D-surface $\Sigma$. The simplest example of such a surface is the hyperplane $\{ t=0 \}$. Divide $\Sigma$ into two regions: $\mathcal{R}$ (dark blue) and $\mathcal{R}^c$ (light blue). ``Paint in red'' all the timelike and lightlike curves that originate from $\mathcal{R}$ and propagate towards the future. The red paint will cover a set $J^+(\mathcal{R})$, called ``domain of influence of $\mathcal{R}$'', or ``causal future of $\mathcal{R}$'', or ``future light-cone of $\mathcal{R}$''. Causality demands the following: if we compare two arbitrary solutions (of the field equations) whose initial data differ on $\mathcal{R}$, but coincide on $\mathcal{R}^c$, such solutions can differ only inside $J^+(\mathcal{R})$. This is equivalent to saying that information coming from $\mathcal{R}$ can \textit{never} exit $J^+(\mathcal{R})$. }
\label{fig:JDR}
\end{center}
\end{figure}
For deterministic field theories, the principle of causality reduces to a mathematical condition on the field equations: a variation of the initial data in a region of space $\mathcal{R}$ cannot affect the solution outside the future light-cone of $\mathcal{R}$ \cite{Hawking1973,Wald,BemficaCausality2018}, see figure \ref{fig:JDR}. If the equations are linear, causality also means that the retarded Green's function has support within the future light-cone \cite{Susskind1969,Fox1970}. It turns out that many phenomenological equations in physics are not consistent with this
causality criterion and, therefore, allow for super-luminal propagation of signals. The best known example is the diffusion equation $\partial_t T =D \partial_x^2 T$, whose Green's function is
\begin{equation}\label{greenfunctionheat}
\mathcal{G}(t,x)= \dfrac{1}{\sqrt{4\pi Dt}} \exp\bigg( -\dfrac{x^2}{4Dt} \bigg) \, ,
\end{equation}
whose tails extend far beyond the future light-cone, propagating energy and information at infinite speeds. Such unphysical violations of the principle of causality usually occur in theories that are the low-frequency limit of some ``more complete'' causal theories \cite{Weymann1967,LindblomRelaxation1996,GavassinoUEIT2021}. This is, indeed, the case of the diffusion equation, that is (at least in ideal gases \cite{Israel_Stewart_1979,Denicol2012Boltzmann,Denicol_Relaxation_2011}) the low frequency limit of the telegraph equation \cite{cattaneo1958,Jou_Extended,rezzolla_book}, which is known to be causal. The same is true for the Schr{\"o}dinger equation, which is the acausal low frequency limit of the Klein-Gordon equation (with the field redefinition $\phi=e^{-imt}\psi$ \cite{Zee2003}).
\begin{equation}\label{super!}
\begin{matrix*}[l]
\text{Telegraph:} \\
\\
\text{Klein-Gordon:} \\
\end{matrix*}
\quad
\begin{matrix*}[l]
(\tau \partial_t^2 +\partial_t) T = D \partial_x^2 T \\
\\
(\partial_t^2 +m^2)\phi =\partial_x^2 \phi \\
\end{matrix*}
\quad \quad \quad \xrightarrow{\partial_t \rightarrow 0} \quad \quad \quad
\begin{matrix*}[l]
\text{Diffusion:} \\
\\
\text{Schr{\"o}dinger:} \\
\end{matrix*}
\quad
\begin{matrix*}[l]
\partial_t T =D \partial_x^2 T \\
\\
i\partial_t \psi = - \partial_x^2 \psi/2m \\
\end{matrix*}
\end{equation}
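The contrast between the causal equations on the left and their acausal limits on the right can also be seen at the level of dispersion relations: for modes $e^{ikx-i\omega t}$, the diffusion equation gives $\omega=-iDk^2$, with phase speed $Dk$ unbounded in $k$, while the telegraph equation gives $\tau\omega^2+i\omega-Dk^2=0$, whose propagating roots stay below the front speed $\sqrt{D/\tau}$. An illustrative numerical check (ad hoc code, arbitrary units):

```python
# Dispersion-relation sketch (ad hoc, not from the paper): the telegraph
# equation has a bounded phase speed, the diffusion equation does not.
# Telegraph: tau w^2 + i w - D k^2 = 0;  diffusion: w = -i D k^2.

import cmath
import math

D, TAU = 1.0, 0.25          # arbitrary units; front speed sqrt(D/TAU) = 2

def speed_diffusion(k):
    """Phase speed |w|/k for w = -i D k^2 (grows without bound)."""
    return D * k

def speed_telegraph(k):
    """Phase speed of the propagating root of tau w^2 + i w - D k^2 = 0."""
    w = (-1j + cmath.sqrt(-1.0 + 4.0 * TAU * D * k * k)) / (2.0 * TAU)
    return abs(w.real) / k

front = math.sqrt(D / TAU)
for k in (10.0, 100.0, 1000.0):
    assert speed_telegraph(k) < front       # saturates at the front speed
assert speed_diffusion(1000.0) > 100.0 * front
```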
For the reason above, causality violations usually occur only on very short time-scales, where the predictions of the acausal equation differ from those of its causal progenitor. In other words, causality violations usually happen outside the regime of validity of the ``infrared approximation'', upon which the acausal equation is built. Hence, one may argue that, as long as we manage to keep the high-frequency part of the solutions small, the predictions of the acausal equation should be reliable, and the causality violations negligible \cite{Fichera1992,Day1997,PanB2008,Auriault2017}.
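For the Green's function \eqref{greenfunctionheat}, this statement can be quantified: the fraction of the kernel's mass lying outside the light cone $|x|>ct$ is $2\int_{ct}^\infty\mathcal{G}\,dx={\rm erfc}\big(\sqrt{c^2t/4D}\big)$, which is $O(1)$ at early times but decays super-exponentially once $t\gg 4D/c^2$. A quick numerical illustration (ad hoc code, units $c=D=1$):

```python
# Numerical illustration (ad hoc, not from the paper): the fraction of
# the heat kernel outside the light cone |x| > c t is
# erfc( sqrt(c^2 t / (4 D)) ): large at short times, negligible later,
# matching the claim that causality violations live at short time-scales.

import math

def acausal_fraction(t, D=1.0, c=1.0):
    """Mass of the heat kernel outside the light cone |x| > c*t."""
    return math.erfc(math.sqrt(c * c * t / (4.0 * D)))

for t in (0.1, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f}   fraction outside light cone = "
          f"{acausal_fraction(t):.3e}")

assert acausal_fraction(0.1) > acausal_fraction(1.0) > acausal_fraction(10.0)
assert acausal_fraction(100.0) < 1e-10
```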
Unfortunately, in a relativistically covariant context, keeping the acausal high-frequency part of the solutions small is almost impossible (at least in some reference frames), if the equation is acausal \textit{and} dissipative. The first authors who noticed this issue were \citet{Hiscock_Insatibility_first_order}, who verified that any Fick-type diffusion law becomes unstable in some reference frame, due to the fast growth of some unphysical high-frequency modes (see appendix \ref{AAAAAAAAA} for a quick overview of their methodology). A similar mechanism has been observed in several other systems of equations \cite{Hishcock1983,OlsonLifsh1990,GerochLindblom1990,
Geroch_Lindblom_1991_causal,Pu2010,Kovtun2019}:
if causality is violated, and the system is dissipative, there is some reference frame in which the system becomes unstable, due to the appearance of fast-growing modes.
The fact that these instabilities usually depend on the frame of reference (i.e. the growing modes exist in some reference frames, but not in others) is deeply counterintuitive. Hence, it seemed natural to regard the unphysical growing modes as a mere ``mathematical pathology'' of the equations. Indeed, acausal field equations often do not admit a well-posed Cauchy problem for arbitrary data on space-like 3D-surfaces \cite{Susskind1969}; hence, it is not surprising that there is some reference frame in which an acausal theory ``misbehaves'' \cite{BaierRom2008}. However, this does not explain why dissipative systems are so exceptionally problematic: while non-dissipative acausal theories (like the one considered by \citet{Susskind1969}) are singular only when the initial data is imposed on a characteristic surface, dissipative acausal systems are usually unstable in a continuum of reference frames \cite{Hiscock_Insatibility_first_order}. Hence, one may wonder whether acausality and dissipation are fundamentally incompatible. This is what we aim to understand here.
\section{Causality-Stability relations}\label{causalstablerelazione}
We have finally reached the central part of the paper. This section is arranged into three subsections, each of which is a separate, stand-alone result. In particular:
\begin{enumerate}
\item In subsection \ref{thetought} we present a more rigorous version of the argument given in subsection \ref{ilprimoluiluilui}, according to which, if a system is acausal and dissipative, then there is a reference frame in which it is unstable. Although linearity of the equations is never invoked explicitly, this argument is expected to be particularly useful for linear stability analyses (we also provide a concrete example in the Supplementary Material).
\item In subsection \ref{sez3} we present the following theorem: if a localised deviation from equilibrium decays over time uniformly in one reference frame, and its support does not exit the light cone, then it decays over time in all reference frames. This theorem is valid for both linear and non-linear field equations.
\item In subsection \ref{SiNNuoz} we present another theorem: if (in the linear regime) a causal theory predicts the existence of a growing sinusoidal plane-wave solution in one reference frame, then this theory is linearly unstable in all reference frames.
\end{enumerate}
Taken together, these results should lead us to a simple stability criterion: \textit{a dissipative theory which is stable in one reference frame is causal if and only if it is stable in all reference frames}. Note that this ``causality-stability relation'' is strongly corroborated by all the explicit stability analyses performed to date (of which the author is aware) for many different theories, including the Israel-Stewart theory (both in the Eckart \cite{Hishcock1983} and in the Landau \cite{OlsonLifsh1990} flow-frame), divergence-type theories \cite{GerochLindblom1990}, Geroch-Lindblom theories \cite{Geroch_Lindblom_1991_causal}, inviscid theories for heat conduction \cite{OlsonRegularCarter1990}, first-order viscous hydrodynamics \cite{Kovtun2019,BemficaDNDefinitivo2020}, second-order viscous hydrodynamics \cite{Pu2010}, third-order viscous hydrodynamics \cite{BrutoThird2021}, and Carter's multifluid theory \cite{GavassinoStabilityCarter2022}.
Since the three arguments presented in this section are stand-alone, in each subsection we will work under slightly different assumptions (e.g. in subsection \ref{sez3} we deal with non-linear deviations from equilibrium with compact support, whereas in subsection \ref{SiNNuoz} we study a linear plane wave with infinite support). However, there are three fundamental ideas that remain the same across the whole paper:
\begin{itemize}
\item ``Causality''$\, = \,$information cannot exit the light-cone \cite{Hawking1973,Wald,BemficaCausality2018,Susskind1969,Fox1970};
\item ``Instability''$\, = \,$there is a reference frame in which deviations from equilibrium can grow in time \cite{Hiscock_Insatibility_first_order};
\item ``Dissipation''$\, = \,$there is a reference frame in which deviations from equilibrium decay in time.
\end{itemize}
Eventually, this will allow us to construct a simple ``unified argument'' (in section \ref{unishiipunti}), which combines together the three main results of this section.
\subsection{Acausal dissipative systems are not covariantly stable}\label{thetought}
We consider a small perturbation that is travelling super-luminally across a medium, disturbing the equilibrium state and violating causality. We assume that such a perturbation can be modelled as a localised wave-packet (like a sound pulse), which moves along a space-like world-line. If the wave-packet is highly oscillating (ultra-violet limit), such a world-line is a characteristic of the field equations. Let us also assume that there is an observer $A$ (say, Alice), in whose reference frame the system exhibits a dissipative behaviour. Since the unperturbed state is the equilibrium state, a reasonable definition of ``dissipative behaviour'' is that all localized perturbations eventually decay to zero for large times. Hence, we can require that, in the reference frame of Alice, the intensity of the perturbation is a decreasing function of time. The Minkowski diagram of this process is presented in figure \ref{fig:fig} (left panel).
Now we immediately see the problem: since the perturbation is travelling along a space-like path, which part of this path happens ``earlier'' and which happens ``later'' depends on the frame of reference. Hence, we can surely find a second observer $B$ (say, Bob), in motion with respect to Alice, in whose reference frame the perturbation is growing in time (figure \ref{fig:fig}, right panel). Let us show it analytically.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{AliceBob.png}
\caption{Minkowski diagrams of the argument outlined in section \ref{thetought}. Reference frame of Alice (left panel): the perturbation moves super-luminally from the left to the right and its intensity decreases with time as a result of dissipation. Reference frame of Bob (right panel): the perturbation moves from the right to the left and its intensity grows with time. The two points of view are connected by a Lorentz boost. The shades of red are a color-map of the intensity of the perturbation (red large, white small); the arrows have the orientation induced by $\varphi$ (see the main text); the blue dashed lines are the light-cone.}
\label{fig:fig}
\end{center}
\end{figure}
At each point $p$ of the space-like world-line drawn by the center of the wave-packet, we may quantify the intensity of the perturbation using a Lorentz scalar $\varphi(p)$.\footnote{For example, if $T^{ab}$ is the stress-energy tensor, $\varphi(p)$ may be the typical deviation from equilibrium (averaged over the local oscillations) of the scalar field $T^{ab}T_{ab}$, in a neighbourhood of $p$. One may also take its square, to make sure that $\varphi$ is always non-negative and plays the role of a sort of ``norm'' of the perturbation.} The inverse of the relation $\varphi(p)$ defines a Lorentz-invariant parametrization on the world-line: $p(\varphi)$. Using this parametrization, and approximating the world-line to a straight line passing through the origin, we can write a relation of the form $x_A(\varphi) = w \, t_A(\varphi)$, with $w>1$ (space-like condition). If we boost this relation to Bob's frame, we obtain
\begin{equation}\label{varfuz}
t_B(\varphi) =\gamma \, (1-vw) \, t_A(\varphi) \, ,
\end{equation}
where $v$ and $\gamma$ are the boost's velocity and Lorentz factor. Taking the derivative of \eqref{varfuz}, and inverting the result, we find
\begin{equation}\label{rubbuz}
\dfrac{d\varphi}{dt_B} = \dfrac{1}{\gamma(1-vw)} \, \dfrac{d\varphi}{d t_A} \, .
\end{equation}
We see that, if $w^{-1}<v<1$, then the sign of $d\varphi/dt_B$ is opposite to that of $d\varphi/dt_A$. Thus, if the perturbation is damped in the reference frame of Alice ($d\varphi/dt_A <0$), it grows in the reference frame of Bob ($d\varphi/dt_B >0$), meaning that the equilibrium state is unstable in Bob's frame.
We can draw several conclusions from the argument above. First of all, we see that the instability can occur only if the system is both acausal and dissipative. In fact, if it were causal, then $w \leq 1$, and the factor $1-vw$ would always be positive; if it were non-dissipative, then $\varphi=\text{const}$, and equation \eqref{rubbuz} would reduce to the identity $0=0$. This also immediately explains why the reference frames in which the system is unstable form a continuum: they are all those reference frames in which the chronological order of the events inside the perturbation is inverted, with respect to the chronological order perceived by Alice. Finally, by looking at equation \eqref{rubbuz}, we see that the instability is most violent close to $v=w^{-1}$, namely at the unstable-to-stable transition frame, where one has $d\varphi /dt_B = \infty$. This is a well-known feature of this kind of instability: rather than the growth rate, it is the growth time (the inverse of the rate!) that changes sign smoothly as we move from an unstable to a stable frame of reference \cite{Hiscock_Insatibility_first_order,GavassinoLyapunov_2020,GavassinoUEIT2021}.
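Equation \eqref{rubbuz} can be checked numerically in two lines. A minimal sketch (units $c=1$; the values of $v$ and $w$ are illustrative):

```python
import numpy as np

def boost_factor(v, w):
    """Factor relating the two rates in Eq. (rubbuz):
    d(phi)/dt_B = boost_factor(v, w) * d(phi)/dt_A."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return 1.0 / (gamma * (1.0 - v * w))

w = 2.0                                  # super-luminal drift speed
for v in (0.3, 0.49, 0.51, 0.9):
    print(v, boost_factor(v, w))         # sign flips across v = 1/w = 0.5
```

For a causal drift ($w \leq 1$) the factor is positive for every $|v|<1$, so no boost can turn damping into growth; for $w>1$ it changes sign at $v=w^{-1}$ and diverges there, reproducing $d\varphi/dt_B=\infty$ at the transition frame.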
In the Supplementary Material, we apply this argument to the super-luminal telegraph equation, showing that one can correctly predict the onset and the quantitative aspects of the instability without performing the whole stability analysis explicitly.
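The Supplementary Material is not reproduced here, but the flavour of that calculation can be conveyed by a minimal numerical sketch, assuming the standard telegraph equation $\tau\,\partial_{t_A}^2\varphi + \partial_{t_A}\varphi = D\,\partial_{x_A}^2\varphi$ (characteristic speed $w=\sqrt{D/\tau}$, units $c=1$) and illustrative parameter values:

```python
import numpy as np

def telegraph_im_omega(k, v, D, tau):
    """Im(omega) of the boosted telegraph equation
    tau d^2(phi)/dt_A^2 + d(phi)/dt_A = D d^2(phi)/dx_A^2  (units c = 1),
    for a plane wave exp(-i omega t_B + i k x_B) in the boosted frame."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    coeffs = [g**2 * (D * v**2 - tau),
              2.0 * v * k * g**2 * (D - tau) - 1j * g,
              g**2 * k**2 * (D - tau * v**2) - 1j * g * v * k]
    return np.imag(np.roots(coeffs))

ks = np.logspace(-1, 2, 40)
# Acausal case: w = sqrt(D/tau) = 2 > 1, boosted with v = 0.9 > 1/w.
acausal = max(telegraph_im_omega(k, 0.9, D=4.0, tau=1.0).max() for k in ks)
# Causal case: w = 0.5 < 1, same boost: no growing mode at any sampled k.
causal = max(telegraph_im_omega(k, 0.9, D=0.25, tau=1.0).max() for k in ks)
print(acausal, causal)   # acausal > 0, causal <= 0 (up to round-off)
```

With $w>1$ and $v>w^{-1}$, a growing mode appears at large wave-numbers, as predicted; the causal version ($w<1$) remains damped for every sampled $k$.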
We can also make some additional comments:
\begin{itemize}
\item When $v>w^{-1}$, the perturbation grows with time in Bob's frame ($d\varphi/dt_B>0$); hence, we may say that the system looks ``anti-dissipative'' in Bob's frame. On the other hand, the obedience to the second law of thermodynamics ($\nabla_a s^a \geq 0$, where $s^a$ is the entropy current) is a Lorentz-invariant property of the system. This implies that the entropy grows also in the reference frame of Bob ($dS_B/dt_B \geq 0$). It follows that, in Bob's frame, the entropy is an increasing function of the intensity of the perturbation:
\begin{equation}
\dfrac{dS_B}{d\varphi} = \dfrac{dS_B}{dt_B} \, \dfrac{dt_B}{d\varphi} \geq 0 \, .
\end{equation}
In other words, the equilibrium state is not the maximum entropy state in Bob's frame\footnote{The frame-dependence of the maximum entropy state is not in contradiction with the Lorentz-invariance of the entropy \cite{GavassinoLorentzInvariance2021}, because the total entropy is Lorentz-invariant only at equilibrium \cite{Becattini2016}. Indeed, it is easy to see from figure \ref{fig:fig} (considering that $\nabla_a s^a \neq 0$ along the red arrows) that any attempt to use the Gauss theorem to prove that $S_A =S_B$ is doomed to fail.}. The recently discovered connection between instabilities and violations of the maximum entropy principle \cite{GavassinoLyapunov_2020,GavassinoGibbs2021,GavassinoCausality2021}
can be understood in the light of this simple argument.
\item It is evident from figure \ref{fig:fig} that, for the argument to be rigorous, the whole shape of the perturbation, and not just its peak, must be drifting super-luminally. Hence, our argument cannot be extended to causal systems whose group velocity happens to be super-luminal for some specific frequency (like those studied in \cite{Guoy1996,WangSuper2000,Withayachumnankul2010}, which can be stable \cite{Pu2010}). Only genuinely acausal systems \cite{Susskind1969} are affected by the present instability mechanism.
\item Since the high-frequency wave-packets travel on the ``acoustic cone'' (a.k.a. characteristic cone) of the field equations \cite{kessence}, we can conclude that the instability appears whenever the hyperplane $ \{ t_B=\text{const} \}$ is tilted more than the acoustic cone, so that a part of the future acoustic cone dips below the hyperplane. Therefore, if the material is isotropic in the reference frame of Alice, the acausal dissipative theory is unstable in Bob's frame if the hyperplane $\{ t_B = \text{const} \}$ is ``time-like'' with respect to the acoustic metric
\begin{equation}
\tilde{g}^{ab} = g^{ab} + (1-w^{-2})u_A^a u_A^b \quad \quad \quad (u_A^a = \text{Alice's four-velocity}).
\end{equation}
We will explore this point in more detail in section \ref{unishiipunti}.
\item The instability mechanism described here differs profoundly from the condensation instability of the tachyon field. In fact, the tachyon field is a causal system \cite{Susskind1969}, which is unstable in all reference frames, whereas here we are dealing with acausal systems, which are stable in some reference frames and unstable in others.
\end{itemize}
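The acoustic-cone criterion in the list above can be verified directly. A minimal sketch, assuming a mostly-plus signature and Alice at rest, so that the inverse acoustic metric for sound speed $w$ reduces to $\mathrm{diag}(-w^{-2},1,1,1)$ in her coordinates, while the normal covector to Bob's hyperplanes is $n_a = \partial_a t_B = \gamma(1,-v,0,0)$; the hyperplane $\{t_B=\text{const}\}$ is ``time-like'' with respect to the acoustic metric (space-like normal, $\tilde{g}^{ab}n_a n_b>0$) precisely when $v>w^{-1}$:

```python
import numpy as np

def normal_norm2(v, w):
    """Acoustic norm g~^{ab} n_a n_b of the normal n_a = d_a t_B.

    Inverse acoustic metric in Alice's rest frame (mostly-plus signature,
    sound speed w, units c = 1): diag(-1/w^2, 1).  Positive norm means
    the hyperplane {t_B = const} cuts into the acoustic cone."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    n = gamma * np.array([1.0, -v])          # n_a = gradient of t_B
    G = np.diag([-1.0 / w**2, 1.0])          # inverse acoustic metric
    return n @ G @ n

w = 2.0   # super-luminal characteristic speed
print(normal_norm2(0.4, w), normal_norm2(0.6, w))  # sign change at v = 1/w
```

The sign change at $v=w^{-1}$ reproduces the onset of the instability found analytically above; for $w\to 1$ the acoustic cone coincides with the light-cone and no sub-luminal boost produces a positive norm.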
\subsection{Lorentz-invariance of dissipation}\label{sez3}
We have seen that causality violations lead to instabilities. Now we will prove that frame-dependent instabilities (namely, deviations from equilibrium that grow in Bob's frame while they decay in Alice's frame) are forbidden, if the principle of causality is respected. In this section, we will focus our attention on a localised (possibly large) ``perturbation'', namely a compactly-supported deviation of the hydrodynamic fields from their equilibrium value.
Take an arbitrary space-like Cauchy 3D-surface $\Sigma$, and decompose it into two regions $\mathcal{R}$ and $\mathcal{R}^c$, such that
\begin{equation}
\mathcal{R} \cup \mathcal{R}^c = \Sigma \quad \quad \quad \quad \mathcal{R} \cap \mathcal{R}^c = \emptyset \quad \quad \quad \quad \mathcal{R} \text{ is compact} .
\end{equation}
Using $\Sigma$ as the initial-data hypersurface, suppose that there is an initial (linear or non-linear) displacement from equilibrium, confined within $\mathcal{R}$. This is what we mean by ``localised perturbation''. Physically, such a perturbation can be any kind of non-equilibrium phenomenon, like a hot spot, a soliton, a vortex ring, a chemical imbalance, or even an ``explosion'' (in $\mathcal{R}$). We construct a non-negative scalar field $\varphi$, which measures how far the system is from equilibrium at each spacetime event, and vanishes wherever the perturbation is absent (hence $\varphi =0$ on $\mathcal{R}^c$). If the theory is well-behaved, such a ``perturbation-intensity field'' (namely, $\varphi$) can always be constructed, see Appendix \ref{appendoxB} (a rigorous mathematical definition of ``perturbation'' is provided in Appendix \ref{techno}). The following definition is natural \cite{Hawking1973,Wald,BemficaCausality2018}:
\begin{definition}[sub-luminality]\label{sub}
The perturbation is \textit{sub-luminal} if $\varphi(p) =0$ for any event $p \in \mathcal{D}^+(\mathcal{R}^c)$, the future Cauchy development of $\mathcal{R}^c$.
\end{definition}
\noindent An equivalent definition of sub-luminality is that $\varphi \neq 0$ only on $J^+(\mathcal{R})$ (the causal future of $\mathcal{R}$), see figure \ref{fig:fig2}, left panel. Now, if $u_A^a$ is Alice's four-velocity, we can define Alice's time-coordinate in a Lorentz-covariant fashion:
\begin{equation}
t_A = -x_a u_A^a \, .
\end{equation}
Hence, interpreting $t_A$ as a scalar field, we can define the sets
\begin{equation}
J^+_A (t):= \{ \, \text{events }p \, | \, t_A(p) \geq t \, \} \, .
\end{equation}
Each set $J^+_A (t)$ is simply the causal future of the hyperplane $t_A=t$. Then, we can make a second definition:
\begin{definition}[dissipation]\label{diss}
A sub-luminal perturbation is dissipated in the reference frame of Alice if, $\forall \, \varepsilon >0$, there exists $t_\varepsilon \in \mathbb{R}$ such that $\varphi(p)<\varepsilon$ for any event $p \in J^+(\mathcal{R}) \cap J^+_A (t_\varepsilon)$.
\end{definition}
This is a condition of uniform convergence of the perturbation to zero: after a certain time $t_\varepsilon$ (in Alice's rest frame), the intensity of the perturbation falls below $\varepsilon$ everywhere, and stays below $\varepsilon$ for $t_A \geq t_\varepsilon$ (see shades of red in figure \ref{fig:fig2}, left panel). Think of $\varepsilon$ as the instrumental resolution: at $t_\varepsilon$, the system is back in equilibrium within resolution $\varepsilon$. Analogous definitions can be made for Bob: just replace $A$ with $B$. We can finally present our theorem:
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Development.png}%
\includegraphics[width=0.48\textwidth]{CCt.png}
\caption{Left panel: Minkowski diagram of a sub-luminal perturbation (in Alice's coordinates). The blue segment is $\mathcal{R}$, where the perturbation is initially located, the black line is $\mathcal{R}^c$, where the perturbation is absent; together, $\mathcal{R}$ and $\mathcal{R}^c$ constitute the initial-data hypersurface $\Sigma$. The shaded red region is $J^+(\mathcal{R})$, where $\varphi$ can propagate. The white region above $\Sigma$ is $\mathcal{D}^+(\mathcal{R}^c)$, where $\varphi=0$. The shades of red are a color-map of $\varphi$ (red large, white small). The hyperplanes at constant $t_A$ and $t_B$ are respectively horizontal and oblique lines. Right panel: visualization of the sets $\mathcal{C}$ and $\tilde{\mathcal{C}}$ constructed in the proof of Theorem \ref{theo}.}
\label{fig:fig2}
\end{center}
\end{figure}
\begin{theorem}[Lorentz-invariance of dissipation]\label{theo}
If a sub-luminal perturbation is dissipated in the reference frame of Alice, it is also dissipated in the reference frame of Bob.
\end{theorem}
\begin{proof}
Let us assume that the sub-luminal perturbation is dissipated in Alice's frame. Then, given an arbitrary $\varepsilon >0$, we can find a time $t_\varepsilon$, future to $\mathcal{R}$, such that $\varphi < \varepsilon$ in $J^+(\mathcal{R}) \cap J^+_A (t_\varepsilon)$. Let $\mathcal{C}$ be the closure of $J^+(\mathcal{R}) \cap [J^+_A(t_\varepsilon)]^c$. Since $\mathcal{R}$ is bounded, $\mathcal{C}$ is compact (see figure \ref{fig:fig2}, right panel). On the other hand, $t_B$ is a continuous function; hence, the image set $t_B(\mathcal{C}) \subset \mathbb{R}$ is also compact. This implies that, for any fixed $\eta>0$, the real number
\begin{equation}
\tilde{t}_\varepsilon := \eta + \max[t_B(\mathcal{C})]
\end{equation}
exists and is finite. Defining $\tilde{\mathcal{C}}:=J^+(\mathcal{R}) \cap J^+_B(\tilde{t}_\varepsilon)$, we have that $\mathcal{C} \cap \tilde{\mathcal{C}} = \emptyset$, because
\begin{equation}
\min[t_B(\tilde{\mathcal{C}})]= \tilde{t}_\varepsilon = \eta + \max[t_B(\mathcal{C})] > \max[t_B(\mathcal{C})] \, .
\end{equation}
Considering that, by definition, $\tilde{\mathcal{C}} \subset J^+(\mathcal{R}) \subseteq \mathcal{C} \cup [J^+(\mathcal{R}) \cap J^+_A (t_\varepsilon)]$, it follows that
\begin{equation}
\tilde{\mathcal{C}} \subseteq J^+(\mathcal{R}) \cap J^+_A (t_\varepsilon) \, .
\end{equation}
However, if $ \tilde{\mathcal{C}} = J^+(\mathcal{R}) \cap J^+_B(\tilde{t}_\varepsilon)$ is a subset of $J^+(\mathcal{R}) \cap J^+_A (t_\varepsilon)$, then $\varphi <\varepsilon$ in $J^+(\mathcal{R}) \cap J^+_B(\tilde{t}_\varepsilon)$.
\end{proof}
The essence of the proof can be easily understood by looking at the color-map in figure \ref{fig:fig2} (left panel): if the horizontal line $t_A = \text{const}$ is far enough in the future, the field $\varphi$ becomes arbitrarily small in the shaded region above it; then, we can always find an oblique line $t_B=\text{const}$ which slices $J^+(\mathcal{R})$ \textit{above} the horizontal line, as in the figure; in this way, we are sure that $\varphi$ is small also in Bob's frame, for a given time $t_B$ (and for later times).
Figure \ref{fig:fig2} (left panel) also shows why the condition of sub-luminality is needed: the lines $t_A=\text{const}$ and $t_B=\text{const}$ always intersect somewhere; hence, an infinite portion of the line $t_B=\text{const}$ lies in the \textit{past} of $t_A=\text{const}$, where there is no bound on $\varphi$. Therefore, if $\varphi \rightarrow +\infty$ in the down-left corner of the figure (which is possible only if causality is violated), there is no limit on how large $\varphi$ can get in Bob's frame. This is exactly what happens in the argument of section \ref{thetought}. On the other hand, causality demands that $\varphi=0$ outside $J^+(\mathcal{R})$, so that, by pushing up the oblique line, we can make sure that $t_B=\text{const}$ is in the future of $t_A=\text{const}$ within the support of $\varphi$.
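Theorem \ref{theo} can be illustrated with a toy example. The profile below is not a solution of any particular field equations; it is just a hand-made sub-luminal perturbation (units $c=1$) that is dissipated in Alice's frame, for which we verify numerically that the supremum over the half-spaces $\{t_B \geq \tilde{t}\}$ also decreases:

```python
import numpy as np

# Toy sub-luminal perturbation, in Alice's coordinates (units c = 1):
# phi lives inside the causal future of R = [-1, 1] and decays uniformly.
t = np.linspace(0.0, 60.0, 601)
x = np.linspace(-65.0, 65.0, 1301)
T, X = np.meshgrid(t, x, indexing="ij")
phi = np.exp(-T) * (np.abs(X) <= 1.0 + T)

v = 0.9                                   # Bob's velocity
gamma = 1.0 / np.sqrt(1.0 - v**2)
TB = gamma * (T - v * X)                  # Bob's time coordinate

def sup_after(tb):
    """sup of phi over the half-space {t_B >= tb} (cf. Definition 2)."""
    return phi[TB >= tb].max()

print([sup_after(tb) for tb in (0.0, 5.0, 10.0)])   # decreasing sequence
```

Because $\varphi$ vanishes outside $J^+(\mathcal{R})$, the oblique slices $\{t_B=\text{const}\}$ can only intersect the support where the Alice-frame bound already applies, and the supremum decreases in Bob's frame as well, exactly as the theorem requires.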
\subsection{Lorentz-invariance of linear instability}\label{SiNNuoz}
Theorem \ref{theo} deals with non-linear perturbations, which are initially localised in space. However, in the linear approximation, it is usually convenient to study the evolution of sinusoidal plane waves, which have infinite support. Is there a straightforward analogue of Theorem \ref{theo} for sinusoidal plane waves?
We work with linear perturbations to a homogeneous stationary state, and call $\varphi:=\{ \delta \psi_i \}$ the array of perturbation fields $\delta \psi_i$. We take a global solution (i.e. a solution that is well defined across all Minkowski space-time) of the form
\begin{equation}\label{plainez}
\varphi = \text{``periodic field''} \times e^{\Gamma_B t_B} \quad \quad \quad (\Gamma_B \in \mathbb{R}) \, ,
\end{equation}
where the periodic part is periodic both in space and in time. On hyperplanes $\{ t_B =\text{const} \}$, we have $\varphi=\text{``periodic field''} $, which implies that the perturbation may be a plane wave (i.e. a Fourier mode) in Bob's frame. This is the type of solution that one considers while performing a linear stability analysis in Bob's frame \cite{Hiscock_Insatibility_first_order,Kost2000}. Depending on the sign of $\Gamma_B$, the perturbation grows (if $\Gamma_B >0$), decays (if $\Gamma_B <0$), or has constant intensity (if $\Gamma_B=0$), in Bob's frame. Working in Alice's frame, $\varphi$ is no longer a Fourier mode (unless $\Gamma_B=0$, see Appendix \ref{appendixB1}), but it takes the form
\begin{equation}\label{varfuzzo}
\varphi = \text{``periodic field''} \times e^{\Gamma_B \gamma (t_A-vx_A)} \, .
\end{equation}
We can orient the $x_A$-axis in a way that $v>0$. Now, let us make two assumptions:
\begin{itemize}
\item The field equations are causal \cite{Hawking1973,Wald,BemficaCausality2018};
\item The perturbation grows in Bob's frame: $\Gamma_B >0$.
\end{itemize}
Our goal is to prove that the system is linearly unstable also in Alice's frame.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{CausalIgnorace.png}
\caption{Left panel: an observer inside the red region $\mathcal{D}^+(\mathcal{R})$ cannot know what the initial state of the system was for $x_A<0$ (at $t_A=0$). Right panel: therefore, both $\varphi(t_A=0)$ and $\varphi^\star(t_A=0)=\Theta(x_A)\, \varphi(t_A=0)$ are initial states which are consistent with the data available to such observer; no experiment performed inside $\mathcal{D}^+(\mathcal{R})$ can tell $\varphi$ and $\varphi^\star$ apart. }
\label{fig:figIgnor}
\end{center}
\end{figure}
Consider an event $p \in \mathcal{D}^+(\mathcal{R})=\{t_A \geq 0 \} \cap \{x_A > t_A \}$, where $\mathcal{R}$ is the half-hyperplane (see figure \ref{fig:figIgnor})
\begin{equation}
\mathcal{R}:= \{ t_A=0 \} \cap \{ x_A >0 \} \, .
\end{equation}
By causality, $\varphi(p)$ cannot depend on the initial state of the system outside $\mathcal{R}$. In particular, if we consider an alternative solution $\varphi^\star$, whose initial data (for $t_A=0$) agrees with $\varphi$ on $\mathcal{R}$ and vanishes outside $\mathcal{R}$, i.e.
\begin{equation}
\varphi^\star(t_A=0)= \Theta(x_A) \, \varphi(t_A=0) \quad \quad \quad (\Theta = \text{Heaviside step function}) \, ,
\end{equation}
then we must have $\varphi^\star=\varphi$ on $\mathcal{D}^+(\mathcal{R})$. It follows that (for any $\varepsilon >0$, $t_A \geq 0$)
\begin{equation}
\varphi^\star\big|_{x_A=t_A+\varepsilon} \, = \, \varphi\big|_{x_A=t_A+\varepsilon} \, \propto \, e^{\Gamma_B \gamma (1-v)t_A} \, \xrightarrow{t_A \rightarrow +\infty} \infty \, ,
\end{equation}
which means that both $\varphi$ and $\varphi^\star$ have divergent amplitude at future light-like infinity (see figure \ref{fig:fig3}).
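A quick numerical check of this divergence (with illustrative parameter values): along the ray $x_A = t_A + \varepsilon$, the amplitude grows like $e^{\Gamma_B \gamma (1-v)t_A}$, even though the initial data of $\varphi^\star$ is bounded by $1$:

```python
import numpy as np

# Amplitude of phi (and hence of phi*) along the ray x_A = t_A + eps,
# for a mode growing in Bob's frame: |phi| ~ exp(Gamma_B * t_B),
# with t_B = gamma * (t_A - v * x_A).  Illustrative parameters:
Gamma_B, v, eps = 0.5, 0.6, 0.1
gamma = 1.0 / np.sqrt(1.0 - v**2)

def amp_on_ray(tA):
    tB = gamma * (tA - v * (tA + eps))
    return np.exp(Gamma_B * tB)   # = exp(Gamma_B*gamma*((1-v)*tA - v*eps))

print([amp_on_ray(tA) for tA in (0.0, 10.0, 100.0)])  # grows without bound
# Meanwhile sup_x |phi*(t_A = 0)| <= 1, since phi*(t_A=0) is
# Theta(x_A) * exp(-Gamma_B*gamma*v*x_A) times a periodic factor.
```

Since $|v|<1$ implies $1-v>0$, the growth along the ray survives for any sub-luminal boost, which is the content of the divergence displayed above.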
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{Glue.png}
\caption{Minkowski diagram of the two solutions, $\varphi$ (left panel) and $\varphi^\star$ (right panel), in Bob's coordinates. The shades of red are a colormap of the perturbation (the oscillatory behaviour of the periodic part is averaged out). On the grey area, we do not know the actual intensity of $\varphi^\star$. Left panel: $\varphi$ is an unstable Fourier mode (i.e. a growing plane wave) in Bob's frame; it is well-defined across the whole space-time; its oscillation amplitude is constant along hyperplanes $ t_B = \text{const} $ (horizontal lines), and grows exponentially for growing $t_B$ ($\varphi \propto e^{\Gamma_B t_B}$). Right panel: $\varphi^\star$ is constructed on the half space-time $ \{ t_A \geq 0 \}$, by ``gluing'' initial data at $t_A=0$. On the right (on $\mathcal{R}$), we take $\varphi^\star(t_A=0)=\varphi(t_A=0)$, so that (by causality) $\varphi^\star=\varphi$ on $\mathcal{D}^+(\mathcal{R})$. On the left, we set $\varphi^\star(t_A=0)=0$ (hence $\varphi^\star=0$ on the respective Cauchy development). In this way, $\varphi^\star$ has a well-defined Fourier transform on $\{ t_A=0 \}$, but it diverges on $\mathcal{D}^+(\mathcal{R})$ (in the upper-right corner), signalling an instability in Alice's frame. }
\label{fig:fig3}
\end{center}
\end{figure}
Now, it is not so surprising that $\varphi$ diverges somewhere in the future: in Alice's reference frame one has $\varphi(t_A=0) \propto \exp(-\Gamma_B \gamma v x_A)$, which is divergent at $x_A=-\infty$. Indeed, it is well-known that, if a perturbation has a divergent tail at $t_A=0$, its later exponential growth cannot be taken as an indication of instability of the field equations\footnote{For example, a perturbation of the form $\varphi=e^{t-x}$ is an exponentially growing solution ($\varphi \propto e^t$) of the causal wave equation $\nabla_a \nabla^a \varphi=0$ (that is obviously stable), with initial profile $\varphi(t=0)=e^{-x}$, which exhibits a divergent tail at $x=-\infty$.}. On the other hand, $\varphi^\star$ has a much more ``innocent'' initial state\footnote{For example, $\varphi^\star(t_A=0)$ has a well-defined Fourier transform. The reader should not be concerned about the discontinuity at $x_A=0$, because the step function can be replaced by any smooth function $\tilde{\Theta}$ such that $\tilde{\Theta}(x_A)=\Theta(x_A)$ for $x_A \in(-\infty, -1) \cup (0,+\infty)$, without affecting the result.}:
\begin{equation}
\varphi^\star(t_A=0) = \text{``periodic field''} \times \Theta(x_A) \, e^{ -\Gamma_B \gamma v x_A} \, .
\end{equation}
It is evident that, if such a perturbation diverges for later times, the system must be unstable in Alice's frame. We have, therefore, proven the following theorem:
\begin{theorem}[Lorentz-invariance of instability]\label{theo2}
If a causal (linear) theory presents a growing Fourier mode in one reference frame, then it is linearly unstable in all reference frames.
\end{theorem}
Equivalently, if a causal theory is stable in one reference frame, there cannot be any growing Fourier mode in the boosted frames (analogue of Theorem 1 for plane waves). This result generalizes Theorem III of \citet{BemficaDNDefinitivo2020} to arbitrary systems of linear field equations. Theorem \ref{theo2} is also a generalization of the ``inverse argument'' of \citet{GavassinoCausality2021} to theories that do not have an entropy current with strictly non-negative divergence, such as DNMR \cite{Denicol2012Boltzmann} and BDNK \cite{Bemfica2019_conformal1}. Note that, for Theorem \ref{theo2} to hold, the unperturbed state does not need to be the state of global thermodynamic equilibrium; instead, it may just be a homogeneous and stationary background state.
Let us, finally, give a less rigorous, but more intuitive, explanation of Theorem \ref{theo2}. Assume that, working in Alice's frame, we can split a given solution of the field equations into the product
\begin{equation}
\varphi = (\text{Intrinsic growth}) \times (\text{Drift}) = e^{\Gamma_A t_A} \times \varphi_D(x_A-w \, t_A) \, .
\end{equation}
Stability means $\Gamma_A <0$, causality requires $|w| \leq 1$. If we assume that $\varphi_D(x_A)=\text{``periodic field''} \times \exp(-\alpha x_A)$, with $\alpha>0$, we obtain
\begin{equation}\label{gagugo}
\varphi \propto e^{-\alpha x_A + (\Gamma_A+\alpha w)t_A} \, .
\end{equation}
Consistently with what we said before, we see that the fact that the perturbation grows in Alice's frame ($\Gamma_A +\alpha w>0$) does not necessarily mean that the theory is unstable ($\Gamma_A>0$), because a perturbation with an infinite tail (namely $\varphi =\infty$ at $x_A=-\infty$) can mimic an effective growth by drifting its tail. However, since $|w| \leq 1$, such effective growth cannot be too large in causal theories. Indeed, if we rewrite the perturbation \eqref{varfuzzo} in the form \eqref{gagugo}, we find that
\begin{equation}
\Gamma_A = \Gamma_B \gamma (1-vw) >0 \quad (\text{by causality}) \, ,
\end{equation}
signalling instability in Alice's frame. The reader can see the Appendix of \citet{GavassinoLyapunov_2020} for a similar argument.
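The matching between \eqref{varfuzzo} and \eqref{gagugo} can be verified numerically (the parameter values are illustrative; $w=0.8$ is chosen sub-luminal, consistently with causality):

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma_B, v, w = 0.7, 0.6, 0.8            # illustrative values (units c = 1)
gamma = 1.0 / np.sqrt(1.0 - v**2)

alpha   = Gamma_B * gamma * v            # spatial decay rate of Eq. (gagugo)
Gamma_A = Gamma_B * gamma * (1.0 - v*w)  # intrinsic growth rate, Alice's frame

# The two exponents agree at random space-time points:
t, x = rng.uniform(-5, 5, 100), rng.uniform(-5, 5, 100)
boosted    = Gamma_B * gamma * (t - v * x)            # exponent in Eq. (varfuzzo)
drift_form = -alpha * x + (Gamma_A + alpha * w) * t   # exponent in Eq. (gagugo)
print(np.allclose(boosted, drift_form))               # True
```

Matching the spatial decay fixes $\alpha = \Gamma_B\gamma v$, and matching the time dependence then forces $\Gamma_A = \Gamma_B\gamma(1-vw)$, which is positive whenever $|w|\leq 1$ and $\Gamma_B>0$.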
\section{Acoustic-cone argument}\label{unishiipunti}
There is one ``global argument'', which unifies elegantly all the previous results, and gives a clear physical intuition of the underlying mechanism relating acausality and instability.
We start from a well-known fact: the outer characteristics that pass through a space-time point $p$ bound the domain of influence of $p$ \cite{Susskind1969,Kost2000}. This implies that, if we perturb a system at $p$ (e.g., by coupling the field equations with an external source), the induced disturbance will be confined within a cone-like region called the (future) acoustic cone\footnote{By ``acoustic cone'' we actually mean the outermost cone: the fastest characteristic. Here, we are using the evocative word ``acoustic'' to mean that observers inside it can ``feel'' the disturbance. But such a disturbance does not need to be sound in a strict sense. It may also be a shear or an Alfv\'{e}n wave. Also, note that the acoustic cone is an actual 3D-cone (like the light-cone) only if the field equations are hyperbolic, and the medium is isotropic in some frame. For anisotropic media, the shape of the acoustic cone may be distorted. Furthermore, if the field equations are parabolic, the acoustic cone degenerates to a 3D-hyperplane. But the argument still applies.} \cite{kessence,DisconziAcoustic2019}. In addition, if the unperturbed state is a state of global thermodynamic equilibrium, and if the theory is dissipative, we can assume that the perturbation will be more intense at the tip of the cone (i.e. closer to $p$), and it will become smaller as we move far away from $p$.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{SonicCone.png}
\caption{Minkowski diagram of the ``acoustic-cone argument''. An external source at $p$ (yellow star) generates a perturbation in the medium, which propagates within the outermost characteristic cone of the field equations (acoustic cone), and decays, by dissipation, as we move away from the source. If the field equations are acausal, the acoustic cone extends outside the light-cone. Hence, there is an observer Bob in whose frame the perturbation exists before $p$ has occurred. On the region $\{ t_B<t_B(p) \}$, Bob observes a solution of the source-less field equations, which is at equilibrium for $t_B \ll t_B(p)$, but grows \textit{spontaneously} as $t_B$ approaches $t_B(p)$ from below.}
\label{fig:fig55}
\end{center}
\end{figure}
Let us first consider the case in which the theory is causal. Then, the acoustic cone is contained within (or coincides with) the light-cone. Therefore, \textit{all observers} experience the events in the following order: first $p$ (external source), then the tip of the cone (``intense perturbation''), then the rest of the cone (``damped perturbation''). Hence, all observers will agree that the equilibrium state is stable against perturbations. We have recovered Theorem 1 (at least qualitatively). Furthermore, if we assume that the source at $p$ excites all the Fourier modes, it is easy to recover Theorem 2.
Let us now move to the case in which the theory is acausal. In this case, a portion of the acoustic cone exits the light-cone. Thus, there is an observer (Bob) who measures the perturbation \textit{before} $p$ has occurred. In Bob's frame, as $t_B$ approaches $t_B(p)$ from below, the portion of the acoustic cone that intersects the hyperplane $\{ t_B = \text{const} \}$ gets closer to the tip of the cone, see figure \ref{fig:fig55}. This implies that:
\begin{itemize}
\item for $t_B \ll t_B(p)$, the system is at equilibrium (the hyperplane $t_B = \text{const}$ is far from the tip of the cone);
\item for $t_B < t_B(p)$, the perturbation grows for increasing $t_B$;
\item at $t_B = t_B(p)$, the perturbation has a peak of intensity.
\end{itemize}
On the other hand, on the space-time region $\{ t_B<t_B(p) \}$, the perturbation is a solution of the field equations \textit{without} sources, because the only source is located at $p$. Therefore, we have shown that there is a solution of the source-less field equations, with initial data close to equilibrium [for $t_B \ll t_B(p)$], which departs from equilibrium at finite $t_B$ [just before $t_B(p)$]. This is a signature of instability, in Bob's frame. We have recovered the argument of section \ref{ilprimoluiluilui}: if the source of a perturbation can be delayed, then the system can spontaneously depart from equilibrium, in advance. But we have also recovered the argument of section \ref{thetought}: just identify the wave-packet of figure \ref{fig:fig} with the front of the perturbation induced by $p$ (like a discontinuity, the front travels along the boundary of the acoustic cone \cite{Hishcock1983}).
At this point, we need to make a clarification. \citet{kessence} have suggested that, if the acoustic cone is larger than the light-cone, then one should just use the acoustic cone, in place of the light-cone, to define the causal structure of the space-time, and treat observers like Bob (figure \ref{fig:fig55}) as ``inappropriate'' observers, because they are not free to set the initial data at will. In this way, all paradoxes are avoided, and one has a new notion of causality. Their reasoning is valid, but we are working in different contexts. They are interested in what would happen in a universe in which there was some physical field which breaks the general-relativistic notion of causality at the \textit{fundamental level}: for them, the limitations of Bob are real. On the other hand, here we are assuming that general-relativistic causality is fundamentally valid in our Universe (hence, Bob is physically capable of shaping the system), but we are using a field theory that contradicts such principle. This is the actual origin of all paradoxes: not equations that break causality, but Cauchy problems that combine acausal theories with initial data on arbitrary space-like surfaces \cite{Kost2000}.
\subsection{Example: the boosted heat equation anti-diffuses!}\label{bheannn}
Using the ``acoustic-cone argument'' outlined above, we are finally able to show that the instability of the heat equation in moving reference frames \cite{Kost2000} is a consequence of its acausality. To this end, we consider the following thought experiment. A heat-conductive medium is at rest in Alice's frame. For $t_A<0$, the temperature is everywhere zero. At $t_A=0$, Alice injects a Dirac-delta of energy at the location $x_A=0$. For $t_A>0$, the spike of energy diffuses across the medium, according to the heat equation. The temperature field is therefore given by \cite{Morse1953}
\begin{equation}
T(t_A,x_A) = \dfrac{\Theta(t_A)}{\sqrt{4\pi D t_A}} \exp \bigg( -\dfrac{x^2_A}{4Dt_A}\bigg) \, .
\end{equation}
It can be easily verified (see \citet{Rauch_book}, section 1.7, Problem 3) that this function is indeed a $C^\infty$ solution of the heat equation for all values of $t_A$ and $x_A$, except at the point $p=(0,0)$, which is where the spike of energy is injected by Alice. Thus, when we boost to Bob's frame (treating $T$ as a scalar field \cite{MTW_book}),
\begin{equation}
T(t_B,x_B) =\dfrac{\Theta(t_B + v x_B)}{\sqrt{4\pi D \gamma (t_B + v x_B)}} \exp \bigg[ -\dfrac{\gamma (x_B + v t_B)^2}{4D(t_B + v x_B)}\bigg] \, ,
\end{equation}
and we restrict our attention to the spacetime region $\{ t_B<0 \}$, we obtain a $C^\infty$ solution of the boosted heat equation. In figure \ref{fig:green}, we show some snapshots of this solution.
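The growth of this solution can be verified numerically. The sketch below evaluates the boosted Green function above (with the same illustrative parameter values $D=30$, $v=3/4$ used in figure \ref{fig:green}) on successive slices $t_B=\mathrm{const}<0$, scanning in the variable $u=t_B+v x_B$, which measures the depth behind the superluminal front; the peak temperature grows monotonically as $t_B \to 0^-$.

```python
import numpy as np

D, v = 30.0, 0.75                       # illustrative values from the figure
gamma = 1.0 / np.sqrt(1.0 - v**2)

def peak_T(tB, n=20000):
    """Maximum of the boosted Green function on the slice t_B = const < 0.
    We scan u = t_B + v x_B > 0 (depth behind the front) on a log-spaced grid."""
    u = np.geomspace(1e-9, 50.0, n)
    xB = (u - tB) / v
    T = np.exp(-gamma * (xB + v * tB)**2 / (4 * D * u)) \
        / np.sqrt(4 * np.pi * D * gamma * u)
    return T.max()

# The perturbation "anti-diffuses": the peak grows as t_B approaches 0 from below.
peaks = [peak_T(tB) for tB in (-1.0, -0.5, -0.1)]
assert peaks[0] < peaks[1] < peaks[2]
```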
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{HeatUnstable-eps-converted-to}
\caption{Boosted Green function of the heat equation, for $t_B<0$. We have set $D=30$ and $v=3/4$. Each curve represents a snapshot of $T(t_B,x_B)$, for different choices of $t_B$. If someone knows the entire history of the system, the interpretation of this figure is quite straightforward: Alice injects a spike of energy at $t_B=x_B=0$; because of acausality, a portion of such spike propagates towards the past; as it travels backward in time, the spike diffuses and flattens. On the other hand, to Bob (who cannot predict the decisions of Alice) the situation looks very different. From his perspective, the material is initially in thermodynamic equilibrium (at $t_B=-\infty$). Then, a perturbation builds up spontaneously, developing a superluminal front on the characteristic line $x_B=-t_B/v$. As time goes on, the perturbation ``anti-diffuses'', becoming more and more peaked. Eventually, when $t_B \rightarrow 0$, the peak diverges at $x_B=0$. What we are observing is just an inversion of chronology (see figure \ref{fig:fig}).}
\label{fig:green}
\end{center}
\end{figure}
As we can see, the qualitative behaviour of $T(t_B,x_B)$ is consistent with our ``acoustic-cone argument''. Before Alice injects the spike, the temperature is already non-zero in Bob's frame: heat travels to the past! The characteristic line $x_B=-t_B/v$ (which is just the line $t_A=0$ expressed in Bob's coordinates) defines the ``acoustic cone'', and plays the role of a superluminal wave-front. There is a ``temperature wave'' to the right of this front, which is initially infinitesimal (for $t_B \ll 0$), and grows with time, ``anti-diffusing'', and becoming more and more peaked. In the end, $T$ develops a singularity at $t_B=0^-$. The very existence of a solution of this kind tells us that the boosted heat equation is ``anti-dissipative'' and unstable.
But there is more. Let us focus on the infinite strip $ \{ t_B, x_B \} \in [-1,0) \times \mathbb{R}$. As we said, $T$ is $C^\infty$ on such strip. In addition, the right tail of $T$ decays faster than exponentially, while the left tail is identically zero. Therefore, we have constructed a solution of the boosted heat equation, whose initial data at $t_B =-1$ is regular (i.e. smooth and with well-defined Fourier transform), which nevertheless develops a singularity as $t_B \rightarrow 0$. It follows that the boosted heat equation must be ill-posed \cite{Kost2000}. This fact is not surprising. The boost has inverted the chronology of the heat equation (see subsection \ref{thetought}), converting it from diffusive to ``anti-diffusive''. Hence, the boosted heat equation should share some similarities with the ``backward heat equation'', $-\partial_t T = D \partial_x^2 T$, which is renowned for its ill-posedness.
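The analogy with the backward heat equation can be made concrete with a trivial Fourier-mode computation: for $-\partial_t T = D \partial_x^2 T$, each mode $e^{ikx}$ evolves as $e^{+Dk^2 t}$, so refining the resolution of the initial data amplifies the shortest resolved wavelength without bound, which is the hallmark of ill-posedness. A minimal sketch (grid sizes and parameters are illustrative):

```python
import numpy as np

# Backward heat equation -dT/dt = D dxx T: the mode e^{ikx} grows as e^{+D k^2 t}.
D, t, L = 1.0, 1e-3, 2 * np.pi
growth_at_kmax = []
for n in (64, 256, 1024):                        # successive grid refinements
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers resolved by the grid
    growth_at_kmax.append(np.exp(D * k**2 * t).max())

# The amplification of the finest resolved mode grows without bound with resolution,
# no matter how small the time t: the Cauchy problem is ill-posed.
assert growth_at_kmax[0] < growth_at_kmax[1] < growth_at_kmax[2]
```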
\section{Some quick applications}\label{apliacia}
As we said in the introduction, a relativistic theory should pass three tests, to be considered reliable:
\begin{itemize}
\item[(i)] Causality,
\item[(ii)] Stability in the background's rest frame,
\item[(iii)] Stability in reference frames in which the background is moving.
\end{itemize}
Usually, one is content with verifying these properties at least for linear deviations from equilibrium, although in principle conditions (i,ii,iii) should also hold in the non-linear regime.
The main message of this paper is that, once properties (i,ii) have been tested, assessing property (iii) is superfluous. In fact, if causality is violated, we know from the argument of section \ref{thetought} that the theory will be unstable (if dissipative). Furthermore, from the acoustic-cone argument of section \ref{unishiipunti}, we are also able to predict exactly in which reference frames the problems appear. If, on the other hand, (i,ii) are respected, then, by Theorems 1 and 2, (iii) follows automatically. Below we list some direct applications of the present results, which span all areas of relativistic physics, including heavy-ion collision simulations (point 1), accretion-disk simulations (point 2), alternative theories for dissipation (points 3-7), models for turbulent flow (point 8), Chern-Simons magnetohydrodynamics (point 9) and multi-constituent fluids (points 10-14).
\begin{enumerate}
\item \citet{Plumberg2021} have shown that viscous heavy-ion collision simulations explore regimes of causality violation. This surely introduces uncertainty, but how much uncertainty? Each discrete time step in a simulation introduces error, and may ``activate'' Fourier modes. Picture this error as a small source on the right-hand side of the field equations. As shown in figure \ref{fig:fig55}, the effect of a source is dissipated away in those reference frames in which the acoustic cone points \textit{entirely} towards the future. However, in the remaining frames, it triggers growing modes. Hence, a simulation is truly unreliable if and only if a part of the acoustic cone ``sinks'' below the numerical time-step hyper-surfaces. Plotting the acoustic cone will, thus, show the real extent of the problem (the formula for the acoustic cone can be deduced from the causality analysis of \citet{BemficaPRL2021}).
\item \citet{Fragile2018} have performed relativistic viscous hydrodynamic simulations of accretion disks, adopting the \citet{landau6} theory, which is acausal: the acoustic cone is the normal hyperplane to the fluid's velocity \cite{Kost2000}. Thus, our reliability criterion (see point 1) is violated at any point where the flow velocity is not normal to the 3+1 foliation: these simulations are probably unreliable. However, the choice of approximating the viscous stress as constant (during the primitive solve) may have had the effect of erasing the second time-derivatives, effectively collapsing the acoustic cone upon the foliation, removing the pathologies. This would explain why some of their simulations predict the existence of stable disks, which is surprising, given the violence of the acausality-induced instabilities (see section \ref{thetought}). We believe that this issue needs further investigation.
\item \citet{Pu2010} have shown that second-order viscous hydrodynamics is stable if and only if it is causal (in the linear regime). An analogous result has been found by \citet{BrutoThird2021} for third-order viscous hydrodynamics. We are in the position to predict that the same will also be true for higher-order viscous hydrodynamics.
\item \citet{Lopez11} have formulated a relativistic theory for heat conduction, proving that it satisfies conditions (i,ii). Theorem 2 implies that condition (iii) is also satisfied: the theory is stable.
\item \citet{SrickerOttinger2019} have formulated a relativistic viscous theory for liquids. In \cite{SrickerOttinger2019}, they verify that, for some choice of parameters, condition (ii) is respected. However, we can see from figures 1,2,3 of \cite{SrickerOttinger2019} that, for this same choice of parameters, the front velocity of some Fourier modes is super-luminal. Since the signal velocity is not smaller than the front velocity \cite{Krotscheck1978}, we can conclude that the liquid under consideration violates causality and is, therefore, unstable in some reference frames.
\item \citet{VanStableFirst2012} have formulated a relativistic theory for viscosity and heat conduction, showing that it respects condition (ii). However, upon inspection of the last column of their matrix $\textbf{R}$ [equation (34)], we see that the field equations are not hyperbolic \cite{Hishcock1983}, suggesting the presence of causality violations and, thus, of instabilities. Indeed, if (in $\textbf{R}$) we impose $\Gamma=\gamma \tilde{\Gamma}$ and $k=i\gamma v \tilde{\Gamma}$ (spatially homogeneous solution in a boosted frame \cite{Hiscock_Insatibility_first_order}), we find that there is one growing solution for any $v \neq 0$.
\item \citet{VanBiro2014} have formulated another theory for viscous hydrodynamics, similar to that discussed above. Unfortunately, it suffers from exactly the same problems as the previous one: the matrix $\textbf{R}$ [equation (38)] models acausal perturbations, which become unstable when boosted.
\item The Smagorinsky model \cite{Smagornki1963} is a filtered theory for modelling turbulent flows in Newtonian large-eddy simulations.
\citet{Celora2021} have shown that, if the same approach is lifted to a relativistic setting, the resulting model is not ``covariantly stable'', i.e. it satisfies condition (ii) but not condition (iii). Applying Theorem 2, we can conclude that the relativistic Smagorinsky model is acausal.
\item \citet{Kiamari2021} have shown that Chern-Simons magnetohydrodynamics is causal, but unstable in the rest frame. Using Theorem 2, we can conclude that the theory must be unstable in every reference frame.
\item Many relativistic fluids can be modelled as reacting mixtures \cite{Burrows1986,BulkGavassino,Alford2020}. For a perfect-fluid reacting mixture, the rest-frame stability conditions coincide with the ``textbook'' conditions for thermodynamic stability \cite{GavassinoGibbs2021}, while the causality condition is simply the requirement that the sound-speed at frozen chemical fractions should not exceed the speed of light \cite{CamelioBulk1_2022}. Under these assumptions, by Theorem 2, a mixture is stable in all reference frames.
\item Most models for radiation hydrodynamics assume that there is a matter fluid with stress-energy tensor $M^{ab}$ and a radiation fluid with stress-energy tensor $R^{ab}$, which interact dissipatively through the equation $\nabla_a M^{ab}=-\nabla_a R^{ab}=G^b$, where $G^b$ is a hydrodynamic force \cite{Farris2008,Sadowski2013,GavassinoRadiazione}. Since $G^b$ usually does not depend on the gradients, its presence does not modify the characteristic determinant of the system. Therefore, the causality properties of the two fluids are unaffected by the coupling: if the dynamics of the matter fluid is acausal, the total radiation-hydrodynamic theory will also be acausal. On the other hand, radiation hydrodynamics is dissipative by construction \cite{Weinberg1971,GavassinoRadiazione}. Therefore, invoking the argument of section \ref{thetought}, we can conclude that all acausal fluids become unstable, when coupled with radiation through $G^b$.
\item The argument above can be easily generalised: assume that an arbitrary number of fluids and classical fields interact dissipatively through some equations $\nabla_a T^{ab}_{n}=G_{n}^b$ ($n$ is an index counting the fluids), where $G_{n}^b$ does not depend on the gradients. Then, if any of these fluids is acausal (and its dissipative coupling with the other fluids is not zero), the resulting composite system is unstable.
\item \citet{Carter_starting_point} have formulated a relativistic theory for superfluid mixtures. The simplest way of implementing dissipation in their theory is by coupling the currents through hydrodynamic forces which do not contain gradients \cite{GavassinoUEIT2021} (analogously to the case above). It follows that dissipative superfluid mixtures (and, more generally, ``multifluids'') are stable only if their non-dissipative analogue is causal. The only exception is when the dissipative coupling is mediated by quantum vortices \cite{langlois98,GavassinoIordanskii2021}, in which case the drag force depends non-linearly on the gradients, completely changing the causal structure of the system.
\item Superfluid neutron stars exhibit a phenomenon called ``entrainment'', according to which the superfluid momentum of the paired neutrons is not collinear to the flow of neutrons \cite{prix2004}. If we imagine removing this effect, the acoustic cone becomes that of Carter's regular theory for heat conduction \cite{noto_rel}, which can be acausal, for certain equations of state \cite{OlsonRegularCarter1990}. Hence, the existence of entrainment may be necessary to guarantee the stability of the equilibrium. The thermodynamic origin of this fact is studied in another work \cite{GavassinoStabilityCarter2022}.
\end{enumerate}
\section{Conclusions}
In this paper, we have identified the physical mechanism that connects causality, stability, and dissipation. Our reasoning can be summarised as follows. First, we have abstracted from the general notion of ``dissipation'' its key feature, namely the existence of a decaying-over-time scalar field (which measures ``how large'' a perturbation is at a point). Next, we have interpreted the word ``stability'' as the statement that all possible observers agree on the fact that such field is non-increasing with respect to their proper time. Finally, we have set up a simple argument: suppose that a perturbation moves superluminally (i.e., outside the light cone) and decays over time from the point of view of one observer. Because the perturbation is superluminal, it links causally disconnected space-time points which can, via a Lorentz transformation,
be chronologically inverted, making the decaying quantity appear increasing from the point of view
of another observer. In a nutshell, the lack of causality always allows one to transform dissipation
into ``anti-dissipation'' (i.e. dissipation backward in time). This also explains why acausal theories always turn out to be thermodynamically unstable \cite{GavassinoCausality2021}.
As a concrete example, we have studied how the retarded Green function of the heat equation transforms under Lorentz boosts. We have found that, due to relativity of simultaneity, one of its Gaussian tails must always ``sink'' to the past (no matter how small the boost velocity), so that the boosted Green function presents an advanced part. This acausal precursor undergoes an inversion of chronology: it ``anti-diffuses'', instead of diffusing (see figure \ref{fig:green}). As a consequence, thermodynamics is now time-reversed: spikes tend to pinch (instead of flattening), energy tends to concentrate (instead of spreading), and the medium wants to move away from equilibrium (rather than towards it). That is why the boosted heat equation is unstable \cite{Hiscock_Insatibility_first_order}, anti-dissipative \cite{GavassinoUEIT2021}, and ill-posed \cite{Kost2000}.
With a similar reasoning, we have rigorously proved that, instead, if a \textit{causal} theory is stable in one reference frame, it is stable in all reference frames. The reason is that Lorentz transformations can never invert the chronological order of causally connected events: a decaying subluminal perturbation cannot be Lorentz-transformed into a growing one. In other words, causality guarantees that the ``thermodynamic arrow of time'' points towards the future in all reference frames, not only in the rest frame.
Our analysis reveals that the causality-stability assessment is much easier than we thought, because the boosted-frame stability analysis (which is notoriously the most difficult part) is superfluous. Causality alone takes care of ensuring the Lorentz-invariance of a stability assessment, which can just be performed in a preferred reference frame. This result is a more general formulation of Theorem III of \citet{BemficaDNDefinitivo2020} and of the ``inverse argument'' of \citet{GavassinoCausality2021}. The main advantage of our Theorems 1 and 2 is that they do not make \textit{any assumption} about the structure of the field equations, besides causality.
We have also formulated a general criterion, based on the notion of ``acoustic cone'', which allows one to predict exactly in which reference frames an acausal theory becomes problematic. This criterion can be used to understand whether the reliability of state-of-the-art heavy-ion collision simulations \cite{Plumberg2021} is really compromised by the causality violations of Israel-Stewart-type theories.
This paper has clarified several fundamental aspects of relativistic hydrodynamics and thermodynamics, providing a definitive answer to some old open questions:
\begin{itemize}
\item[1)] \textit{What is the ``physical interpretation'' of the instabilities that we observe in relativistic hydrodynamics?} They are just dissipative processes under time reversal. Without causality, there is no absolute notion of chronology, because the ``cause'' and the ``effect'' may be exchanged via a Lorentz boost. As a result, the ``thermodynamic arrow of time'' may point towards the past, for some observers. When this happens, systems evolve away from equilibrium, rather than towards it. That is why these instabilities are present in some reference frames and not in others.
\item[2)] \textit{Is it possible to make these instabilities small enough to be irrelevant?} No! If the beginning and the end of a process can be chronologically reordered via a boost, then there is an intermediate reference frame in which they are simultaneous. In such frame, the whole process occurs instantaneously. Therefore, one cannot hope that the instabilities will grow ``slowly'' (for a given acausal theory), because there is always some reference frame in which the growth rate is infinite.
\item[3)] \textit{Why does this problem appear only when we turn on dissipation?} Because non-dissipative theories are invariant under time reversal (strictly speaking, they are invariant under CPT \cite{weinbergQFT_1995}). Hence, in the absence of dissipation, an inversion of chronology does not produce any observable effect on the laws of thermodynamics.
\item[4)] \textit{Is it possible to observe a similar phenomenon in Newtonian physics?} No. In Newtonian physics, time (and in particular chronology) is absolute. As a consequence, the thermodynamic arrow of time is Galilei-invariant, and all observers agree on whether a system is stable or not.
\end{itemize}
Theorems \ref{theo} and \ref{theo2} are also interesting from the point of view of the foundations of relativistic thermodynamics. In fact, the essence of these theorems may be summarised as follows: \textit{if a system exhibits a tendency to evolve towards thermodynamic equilibrium in one frame of reference, it exhibits the same tendency in all frames, provided that the principle of causality holds}. This suggests that, once thermodynamics is valid in one reference frame, it should ``look the same'' in all reference frames. This is perfectly in line with our recent proof \cite{GavassinoLorentzInvariance2021} of van Kampen's argument \cite{vanKampen1968} for the existence of a relativistically covariant theory of thermodynamics. There, causality and stability were implicitly assumed when the concept of ``kick'' was introduced.
\section*{Acknowledgements}
This work was supported by the Polish National Science Centre grant OPUS 2019/33/B/ST9/00942. The author thanks M. Antonelli, B. Haskell and F. Bemfica for reading the manuscript and providing useful comments. I am particularly grateful to M. Disconzi, whose observations have allowed me to significantly improve the mathematical rigour of the discussion. My gratitude also goes to the editorial team and the referees of PRX: their recommendations and criticisms played a crucial role in giving this paper its final form.
2103.06951
\section{Introduction} \label{sec:intro}
In the classic Davis-Greenstein (`D-G') theory \citep{DG51}, paramagnetic
dissipation in rotating grains drives the grains
into alignment with the interstellar
magnetic field. However, disalignment due to random collisions with gas-phase
particles renders this mechanism ineffective if the grain rotation is
excited by those same collisions.
\citet{Purcell79} noted that grains are subject to systematic
torques, fixed relative to the grain body, that can potentially drive
them to suprathermal rotation. That is, the grain's angular speed could
exceed, by a factor of several or more, the thermal rotation rate
$\omega_T$ that results when random collisions with gas particles excite
the rotation. Suprathermally rotating grains can avoid being disaligned by
these same random collisions.
\citet{Purcell79} also noted that internal mechanisms can dissipate
rotational kinetic energy into heat within the grain, driving it to rotate
around its principal axis of greatest moment of inertia, henceforth denoted
$\bmath{\hat{a}}_1$. \citet{Purcell79} described a previously unexamined
mechanism,
`Barnett dissipation', in which a paramagnetic grain attempts to magnetize
along the direction of the Barnett-equivalent field
$\mathbfit{B}_{\mathrm{BE}} = \bmath{\omega}/\gamma_g$. Here $\bmath{\omega}$
is the grain's angular velocity vector, which rapidly varies as observed in
a reference frame fixed to the grain when the grain does not rotate about
a principal axis, and $\gamma_g$ is the gyromagnetic ratio of the microscopic
spins that give rise to the grain's paramagnetism. \citet{Purcell79}
considered dissipation associated with electron paramagnetism.
\citet{LD99b} described the related phenomenon of `nuclear relaxation',
associated with nuclei such as H that are likely incorporated within grains,
and found that it can be much more efficient than Barnett dissipation for
thermally rotating grains of a wide range of sizes.
The dominant systematic torque considered by \citet{Purcell79} is due to
the recoil from H$_2$ molecules that form on the grain surface and
are subsequently ejected into the gas, with some of the released binding
energy converted to translational kinetic energy. If the molecules form only at
certain special sites on the surface, then the net recoil torque is
non-zero. In this work, we neglect gas-grain drift and photodesorption of
adatoms; with these assumptions, the net recoil torque is fixed relative to
the grain body. The special
sites are not permanent; new sites form and old sites disappear as the grain
undergoes resurfacing, e.g.~due to the accretion of atoms from the gas. As
a result, the component $\Gamma_1$ of the mean systematic torque along
$\bmath{\hat{a}}_1$ can sometimes change sign. For a suprathermally rotating
grain, in which Barnett dissipation is highly efficient, $\bmath{\omega}$ is
either nearly parallel to or nearly anti-parallel to $\bmath{\hat{a}}_1$.
Thus, when $\Gamma_1$ changes sign, the grain enters a period of spin-down,
and may ultimately spend some time rotating thermally. During these episodes,
known as `crossovers', the grain is again susceptible to disalignment via
random collisions with gas particles.
In the first study of crossovers, \citet{SM79} concluded that a grain would
become disaligned after passing through a small number of crossovers.
In the inverse process of Barnett dissipation,
thermal fluctuations prevent $\bmath{\omega}$
from lying exactly along $\bmath{\hat{a}}_1$ during periods of suprathermal
rotation. Surprisingly, \citet{LD97} found that this limits the extent of
disalignment during crossovers. However, \citet{LD99a} concluded that Barnett
fluctuations are so strong during periods of thermal rotation that a grain
can flip. That is, the sign of $\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_1$
can change as a result of thermal fluctuations; $\mathbfit{J}$ is the grain's
angular momentum. \citet{LD99a} called this process `thermal flipping'.
When the grain flips, so does the direction of the systematic torque relative
to the angular momentum vector $\mathbfit{J}$. Denoting the interval
between consecutive flips as an `f-step', the time-averaged systematic torque
will equal zero if the mean f-step duration is the same for the case that
$\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_1 < 0$ as for the case that
$\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_1 > 0$. (In making this statement,
we neglect any grain resurfacing during a crossover, as well as the
dependence of the systematic torque on the grain kinetic energy at fixed
angular momentum, and assume that the f-step duration is much less than the
crossover duration.) As a result, the time required for the grain to emerge
from a crossover may be much longer than for the
case where thermal flipping does not occur
and the effect of the systematic torque compounds uniformly over time.
\citet{LD99a} called this phenomenon `thermal trapping'.
Analyzing nuclear relaxation, \citet{LD99b} concluded that all grains that
contribute to starlight polarization would likely undergo thermal trapping,
arguing against Davis-Greenstein alignment aided by suprathermal spin-up
due to torques fixed relative to the grain body.
For simplicity, the above studies focused on oblate grains exhibiting dynamic
symmetry. That is, $I_1 > I_2 = I_3$, where $I_i$ is the moment of inertia
associated with principal axis $\bmath{\hat{a}}_i$. \citet{W09} showed that
thermal flipping does not actually occur for grains with dynamic symmetry.
\citet{W09} noted that external processes (e.g.~collisions with gas atoms
and the ejection of H$_2$ molecules from the grain surface) might induce
grain flipping; \citet{HL09} examined this possibility quantitatively.
\citet{KW17} showed that thermal flipping does
occur for grains that lack dynamic symmetry. They only provided quantitative
results for the relaxation rate for Barnett relaxation. In Section
\ref{sec:nuclear} of this work, we extend their analysis to also treat nuclear
relaxation.
Furthermore, we challenge the assumption noted above, that the mean f-step
duration is the same for the case that
$\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_1 < 0$ as for the case that
$\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_1 > 0$. Because of the systematic
torque, the magnitude $J$ of the angular momentum is more likely to increase
when $\mathbfit{J} \bmath{\cdot} \Gamma_1 \bmath{\hat{a}}_1 > 0$ and is more
likely to decrease when
$\mathbfit{J} \bmath{\cdot} \Gamma_1 \bmath{\hat{a}}_1 < 0$. We will refer to
an f-step with $\mathbfit{J} \bmath{\cdot} \Gamma_1 \bmath{\hat{a}}_1 > 0$
as an `up-step' and an f-step with
$\mathbfit{J} \bmath{\cdot} \Gamma_1 \bmath{\hat{a}}_1 < 0$ as a `down-step'.
(Note that $J$ can decrease during an up-step or increase during a down-step
since gas-atom collisions and H$_2$-formation events occur stochastically.)
In addition, the flipping probability per unit time decreases as $J$ increases.
Thus, the mean f-step duration is longer for f-steps in which $J$ increases
than for f-steps in which $J$ decreases. We conclude that the mean duration is
longer for up-steps than for down-steps. In other words, there is
a bias for f-steps to have longer duration when the systematic torque acts
to spin the grain up than when it acts to spin the grain down. Consequently,
the systematic torque does not average to zero, potentially circumventing
thermal trapping. The main purpose of this work is to examine this
possibility, employing detailed numerical simulations.
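The mechanism can already be seen in a deliberately crude toy model, which is not the detailed simulation developed below: the angular momentum magnitude $J$ feels a systematic torque of sign $s=\pm 1$ (up-step/down-step), an ad hoc drag term standing in for gas damping, stochastic kicks, and a flip rate that decreases with $J$ (all functional forms and parameters here are illustrative placeholders, not the physical rates used in our simulations).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy f-step model: dJ = (s*Gamma - J/tau) dt + sigma dW, with J >= 0,
# and a flip probability per unit time lam(J) that decreases with J.
Gamma, tau, sigma, dt = 1.0, 2.0, 0.5, 0.01
lam = lambda J: np.exp(-J)              # toy flip rate (decreasing in J)

J, s, t_step = 1.0, 1, 0.0
durations = {+1: [], -1: []}            # f-step durations, keyed by the step's sign
for _ in range(500_000):
    J = max(J + (s * Gamma - J / tau) * dt + sigma * rng.normal(0.0, np.sqrt(dt)), 0.0)
    t_step += dt
    if rng.random() < lam(J) * dt:      # thermal flip ends the current f-step
        durations[s].append(t_step)
        s, t_step = -s, 0.0

mean_up, mean_down = np.mean(durations[+1]), np.mean(durations[-1])
# Up-steps let J grow, suppressing flips; hence they last longer on average.
assert mean_up > mean_down
```

Even this caricature reproduces the bias: during up-steps $J$ drifts upward and the flip rate falls, so up-steps systematically outlast down-steps and the torque does not average to zero.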
For simplicity, we consider an oblate spheroidal grain with an inhomogeneous
mass distribution. Thus, the grain exhibits geometric symmetry but not dynamic
symmetry. We assume that the mass is distributed such that the grain's center
of mass remains at its geometric center and the principal axis of largest
moment of inertia, $\bmath{\hat{a}}_1$, lies along the symmetry axis, but
$I_2 \ne I_3$.
The equations and quantities used in the simulations are derived in Sections
\ref{sec:spheroid-properties}--\ref{sec:external-processes}. Section
\ref{sec:simulations} describes the fundamental elements of the simulations.
Section \ref{sec:code-test}
presents a test of the simulation code. Two simplified versions of the code
are used to examine the case where the grain is in thermal equilibrium, for
which analytic results are available for comparison. The codes successfully
reproduce the expected results. Section \ref{sec:H2-formation}
describes our treatment of H$_2$ formation (and subsequent ejection) at special
surface sites. Sections \ref{sec:bias}--\ref{sec:D-G-alignment} contain our
main simulation results, focusing on grain dynamics in the cold neutral medium
of the diffuse ISM. Section \ref{sec:bias} presents simulation results for one
specific case, demonstrating that the mean up-step duration exceeds the mean
down-step duration in that case. Section \ref{sec:crossovers} presents
results from large suites of simulations examining crossovers, specifically
their duration and the associated grain disorientation. We find that thermal
trapping is not prevalent. The simulations in Section \ref{sec:D-G-alignment}
examine the efficiency of Davis-Greenstein alignment in the cold neutral
medium. We find that thermal trapping does not inhibit suprathermal spin-up and
D-G alignment, but the alignment time-scale (for grains lacking
superparamagnetic inclusions) is uncomfortably long. Conclusions and future
work are summarized in Section \ref{sec:conclusions}.
\section{Nuclear relaxation in non-symmetric grains} \label{sec:nuclear}
Consider a non-symmetric grain with $I_1 > I_2 > I_3$ and denote
$r_2 = I_1/I_2$ and $r_3 = I_1/I_3$. It is convenient to define a dimensionless
measure of the grain's rotational kinetic energy $E$ (when $J$ is constant),
\be
\label{eq:q}
q = \frac{2 I_1 E}{J^2} .
\ee
The kinetic energy is minimized (maximized) for rotation about
$\bmath{\hat{a}}_1$ ($\bmath{\hat{a}}_3$); thus, $1 \le q \le r_3$.
In the absence of external processes, the evolution of $q$ due to internal
relaxation can be described using the Langevin equation,
\be
\label{eq:internal-Langevin-eq}
\mathrm{d}q = A(q) \, \mathrm{d}t + \sqrt{D(q)} \, \mathrm{d}w_{\mathrm{int}} ,
\ee
where $\mathrm{d}t$ is a time step, $\mathrm{d}w_{\mathrm{int}}$ is a Gaussian
random variable with variance $\mathrm{d}t$, the drift coefficient $A(q)$ is
proportional to the average energy dissipation rate, and $D(q)$ is the
diffusion coefficient.
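A Langevin equation of this form can be integrated with a standard Euler-Maruyama scheme. The sketch below is purely illustrative: the drift and diffusion coefficients shown are placeholders (a linear drift toward the minimum-energy state $q = 1$ and a constant diffusion coefficient), not the expressions derived by \citet{KW17}, and $q$ is reflected at the physical boundaries $1$ and $r_3$.

```python
import numpy as np

def evolve_q(q0, A, D, dt, n_steps, r3, seed=0):
    """Euler-Maruyama integration of dq = A(q) dt + sqrt(D(q)) dw,
    reflecting q off the physical boundaries q = 1 and q = r3."""
    rng = np.random.default_rng(seed)
    q = q0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Gaussian increment, variance dt
        q += A(q) * dt + np.sqrt(D(q)) * dw
        if q < 1.0:            # reflect at the minimum-energy boundary
            q = 2.0 - q
        elif q > r3:           # reflect at the maximum-energy boundary
            q = 2.0 * r3 - q
    return q

# Placeholder coefficients (NOT the KW17 expressions): drift toward the
# minimum-energy state q = 1 with a constant diffusion coefficient.
q_final = evolve_q(q0=1.4, A=lambda q: -(q - 1.0), D=lambda q: 0.01,
                   dt=1e-3, n_steps=10_000, r3=1.5)
```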
\citet{KW17} examined Barnett relaxation in non-symmetric
grains, deriving expressions for the drift and diffusion coefficients in the
low-frequency limit, i.e.~when the angular frequency of the grain's rotation
is much less than $T_2^{-1}$, where $T_2$ is the spin-spin relaxation time.
For Barnett relaxation, associated with electrons in, e.g., Fe atoms, this
approximation is excellent. However, in the case of nuclear relaxation, it
can fail for thermally rotating grains. Nuclear relaxation dominates
Barnett relaxation for most thermally rotating grains. Thus, in this
section, we relax the low-frequency limit.
Readers who are not familiar with \citet{KW17} may want to proceed
directly to the results in the final two paragraphs of this section.
The only term in the \citet{KW17} expression for the drift coefficient $A(q)$
that changes when the low-frequency limit is relaxed is the following function,
which is equivalent to the expression on the second line in their equation
(49):
\begin{multline}
\Theta(q, \Psi) = \left( T_2^{\prime} \right)^{-2} \int_0^{4 K(k^{\pm 2})} \mathrm{d}u
\left\{
c_1(q) \mathcal{F}_{\pm}\left(u, k^{\pm 2}\right) \times \right. \\
\left[ c_1(q) \mathcal{F}_{\pm}
\left(u, k^{\pm 2}\right) - M_1^{\prime}(q, u) \right]
+ c_2(q) \mathrm{sn} \left(u, k^{\pm 2}\right) \times \\
\left[ c_2(q) \mathrm{sn} \left(u, k^{\pm 2}\right) - M_2^{\prime}(q, u) \right]
+ c_3(q) \mathcal{F}_{\mp}\left(u, k^{\pm 2}\right) \times \\
\left. \left[ c_3(q) \mathcal{F}_{\mp}
\left(u, k^{\pm 2}\right) - M_3^{\prime}(q, u) \right] \right\}
\label{eq:define-Theta}
\end{multline}
where the $+$ ($-$) sign in $k^{\pm 2}$ and $\mathcal{F}_{\pm}$
is for $1 < q < r_2$ ($r_2 < q < r_3$);
\be
T_2^{\prime} = \Psi \times \begin{cases}
\left[ (r_2 - 1) (r_3 - q) \right]^{1/2} & 1 < q < r_2 \\
\left[ (r_3 - r_2) (q-1) \right]^{1/2} & r_2 < q < r_3
\end{cases} ;
\ee
\be
\Psi = \frac{J T_2}{I_1} ;
\ee
$K(k^{\pm 2})$ is the complete elliptic integral of the first kind,
\be
k^2 = \frac{(r_3 - r_2) (q-1)}{(r_2 -1) (r_3 -q)} ;
\ee
$\mathcal{F}_+(u, k^{\pm 2}) = \mathrm{dn}(u, k^{\pm 2})$;
$\mathcal{F}_-(u, k^{\pm 2}) = \mathrm{cn}(u, k^{\pm 2})$;
$\mathrm{sn}(u, k^{\pm 2})$, $\mathrm{cn}(u, k^{\pm 2})$, and
$\mathrm{dn}(u, k^{\pm 2})$, are the Jacobi elliptic functions;
\be
c_1(q) = \begin{cases}
[(r_2-1) (r_3-1)]^{-1/2} & , 1 < q < r_2 \\
(r_3-q)^{1/2} [(r_3-1) (r_3-r_2) (q-1)]^{-1/2} & , r_2 < q < r_3
\end{cases} ;
\ee
\be
c_2(q) = \begin{cases}
- r_2 (q-1)^{1/2} (r_2-1)^{-1} (r_3-q)^{-1/2} & , 1 < q < r_2 \\
- r_2 (r_3-q)^{1/2} (r_3-r_2)^{-1} (q-1)^{-1/2} & , r_2 < q < r_3
\end{cases} ;
\ee
\be
c_3(q) = r_3 \left( \frac{q-1}{r_3 -q} \right)^{1/2} c_1(q) ;
\ee
and $M_i^{\prime}(q, u)$ are the steady-state solutions of the following
differential equations:
\begin{multline}
\frac{\mathrm{d}
M_1^{\prime}}{\mathrm{d}u} = c_3(q) M_2^{\prime}(q,u) \mathcal{F}_{\mp}(u, k^{\pm 2})
- c_2(q) M_3^{\prime}(q,u) \mathrm{sn}(u, k^{\pm 2}) \\
+ \left(T_2^{\prime}\right)^{-1}
\left[ c_1(q) \mathcal{F}_{\pm}(u, k^{\pm 2}) - M_1^{\prime}(q, u) \right] ,
\label{eq:modified-Bloch1}
\end{multline}
\begin{multline}
\frac{\mathrm{d}
M_2^{\prime}}{\mathrm{d}u} = c_1(q) M_3^{\prime}(q,u) \mathcal{F}_{\pm}(u, k^{\pm 2})
- c_3(q) M_1^{\prime}(q,u) \mathcal{F}_{\mp}(u, k^{\pm 2}) \\
+ \left(T_2^{\prime}
\right)^{-1} \left[ c_2(q) \mathrm{sn}(u, k^{\pm 2}) - M_2^{\prime}(q, u) \right] ,
\end{multline}
\begin{multline}
\frac{\mathrm{d}M_3^{\prime}}{\mathrm{d}u}
= c_2(q) M_1^{\prime}(q,u) \mathrm{sn}(u, k^{\pm 2})
- c_1(q) M_2^{\prime}(q,u) \mathcal{F}_{\pm}(u, k^{\pm 2}) \\
+ \left(T_2^{\prime}
\right)^{-1} \left[ c_3(q) \mathcal{F}_{\mp}(u, k^{\pm 2}) - M_3^{\prime}(q, u) \right]
.
\label{eq:modified-Bloch3}
\end{multline}
We adopt the same conventions for the Jacobi elliptic functions as
\citet{WD03}.
From \citet{KW17}, in the low-frequency limit
\be
\Theta(q, \Psi \ll 1) =
\frac{4 \left\{ z_1 [E(k^2) + (k^2-1)
K(k^2)] + k^2 z_2 E(k^2) \right\} (q-1)}{3 k^2 (r_2-1)^2 (r_3-1) (r_3-q)}
\ee
when $1 < q < r_2$ and
\begin{multline}
\Theta(q, \Psi \ll 1) =
4 \left\{ z_2 [E(k^{-2}) + (k^{-2}-1) K(k^{-2})] + k^{-2} z_1 E(k^{-2})
\right\} \\
\times \frac{(r_3 - q)}{3 k^{-2} (r_3 - r_2)^2 (r_3-1) (q-1)}
\end{multline}
when $r_2 < q < r_3$;
$E(k^2)$ is the complete elliptic integral of the second kind,
\be
z_1 = 2 (r_3-r_2) - r_3^2 (r_2-1) + r_2^2 (r_3-1) ,
\ee
and
\be
z_2 = - (r_3-r_2) +2 r_3^2 (r_2-1) + r_2^2 (r_3-1) .
\ee
For the specific case that $r_2 = 1.3$ and $r_3 = 1.5$, we find the
steady-state solution of equations
(\ref{eq:modified-Bloch1})--(\ref{eq:modified-Bloch3}) and perform the
integration in equation (\ref{eq:define-Theta}) numerically. We
find that $\Theta(q, \Psi)$ is very close to $\Theta(q, \Psi \ll 1)$
when $\Psi \la 0.1$ and $\Theta(q, \Psi)/\Theta(q, \Psi \ll 1)$ drops
to $\approx 6 \times 10^{-3}$ when $\Psi = 10$. For a given value of
$\Psi$, $\Theta(q, \Psi)/\Theta(q, \Psi \ll 1)$ varies by less than
30 per cent as $q$ ranges from 1 to $r_3$.
\citet{KW17} expressed the drift coefficient $A(q)$ in the low-frequency
limit in the form $A(q) = - \tau^{-1}_{\mathrm{int}} A_1(q)$, where $A_1(q)$ is
a dimensionless function of $q$, given in equations (55) and (61) in
\citet{KW17}, and $\tau_{\mathrm{int}}$ is the internal relaxation
time-scale.
Given the gross uncertainties in the
theoretical modeling of Barnett and nuclear relaxation, we do not adjust the
functional form of $A_1(q)$ from its low-frequency form.
Rather, we simply adjust the relaxation time-scale according to
\be
\label{eq:tau-int-all-freq}
\tau_{\mathrm{int}}(\Psi) = \tau_{\mathrm{int}}(\Psi \ll 1) \ \left[
\frac{\Theta(q, \Psi \ll 1)}{\Theta(q, \Psi)} \right]_{\mathrm{av}} .
\ee
In the final term in equation (\ref{eq:tau-int-all-freq}), the ratio
$\Theta(q, \Psi \ll 1)/\Theta(q, \Psi)$ is averaged over $q$ for a fixed value
of $\Psi$. To within 0.5 per cent,
\be
\label{eq:tau-nuc-factor}
\left[ \frac{\Theta(q, \Psi \ll 1)}{\Theta(q, \Psi)} \right]_{\mathrm{av}} =
\left( 1 + 1.67 \, \Psi^{1.96} \right)^{1.02} .
\ee
Note that this fit is specifically for grains with $r_2 = 1.3$ and
$r_3 = 1.5$.
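The fit in equation (\ref{eq:tau-nuc-factor}) is straightforward to encode. A minimal sketch (the function name is ours, not from the paper); note that it reproduces the limiting behaviour quoted above: the factor tends to unity when $\Psi \ll 1$ and reaches $\approx 1/(6 \times 10^{-3}) \approx 170$ at $\Psi = 10$.

```python
def nuclear_slowdown_factor(psi):
    """Fit to [Theta(q, Psi<<1)/Theta(q, Psi)]_av for r2 = 1.3, r3 = 1.5:
    the factor by which the nuclear relaxation time-scale is lengthened
    when the low-frequency limit is relaxed."""
    return (1.0 + 1.67 * psi**1.96) ** 1.02
```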
From equation (56) in \citet{KW17},
\be
\label{eq:tau-int-low-freq}
\tau_{\mathrm{int}}(\Psi \ll 1) = \frac{\gamma_g^2 I_1^3}{2 \chi_0 V T_2 J^2} .
\ee
As in \citet{WD03}, we take
$\gamma_g = -1.76 \times 10^7 \, \mathrm{s}^{-1} \, \mathrm{G}^{-1}$ and
$\chi_0 T_2 = 10^{-13} (15 \, \mathrm{K}/T_d) \, \mathrm{s}$ for Barnett
relaxation ($T_d$ is the dust temperature) and
$\gamma_g = 1.3 \times 10^4 \, \mathrm{s}^{-1} \, \mathrm{G}^{-1}$,
$\chi_0 = 4 \times 10^{-11} (15 \, \mathrm{K}/T_d)$, and
$T_2 = 10^{-4} \, \mathrm{s}$ for nuclear relaxation. For Barnett relaxation,
we assume that the low-frequency limit always applies.
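With the constants quoted above, the ratio of the two low-frequency time-scales follows directly from equation (\ref{eq:tau-int-low-freq}): the grain-dependent factor $I_1^3/(2 V J^2)$ is common to both mechanisms and cancels. The quick check below (an illustration, not part of the paper's code) shows that $\tau_{\mathrm{Bar}}/\tau_{\mathrm{nuc}} \sim 7 \times 10^4$ at $T_d = 15$ K, i.e. nuclear relaxation is far faster, consistent with its dominance for thermally rotating grains.

```python
# Material constants quoted in the text, for T_d = 15 K.
gamma_Bar, chi0T2_Bar = -1.76e7, 1e-13          # s^-1 G^-1; s
gamma_nuc, chi0T2_nuc = 1.3e4, 4e-11 * 1e-4     # s^-1 G^-1; s

# tau(Psi<<1) = gamma^2 I1^3 / (2 chi0 V T2 J^2); the common factor
# I1^3 / (2 V J^2) cancels in the ratio tau_Bar / tau_nuc.
ratio = (gamma_Bar**2 / chi0T2_Bar) / (gamma_nuc**2 / chi0T2_nuc)
```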
\section{Spheroid Properties} \label{sec:spheroid-properties}
Consider a spheroid characterized by the radius $a_{\mathrm{eff}}$
of a sphere with equal volume, the ratio $\delta$ of the semilength $a$ along
the (geometric) symmetry axis to the semilength $b$ along a perpendicular
axis, and the
average mass density $\bar{\rho}$. (Recall that we take the density to vary
throughout the grain, so that dynamic symmetry is violated.)
When $\delta > 1$ the spheroid is prolate and when
$\delta < 1$ the spheroid is oblate. In either case,
$a = a_{\mathrm{eff}} \, \delta^{2/3}$. Denote the principal axes
$\bmath{\hat{a}}_1$, $\bmath{\hat{a}}_2$, and $\bmath{\hat{a}}_3$, with
corresponding moments of inertia
\be
\label{eq:I_i}
I_i = \frac{8}{15} \, \upi \bar{\rho} a_{\mathrm{eff}}^5 \, \alpha_i ,
\ee
and take the symmetry axis to lie along $\bmath{\hat{a}}_1$. We take
$\alpha_1 = \delta^{-2/3}$, its value for a uniform spheroid, and assign
smaller, but unequal, values to both $\alpha_2$ and $\alpha_3$. For a
uniform spheroid, these would be given by
$\alpha_2 = \alpha_3 = (\alpha_1 + \alpha_1^{-2})/2$. Here, we consider an
oblate grain with $\delta = 0.5$ and take
$r_2 = \alpha_1/\alpha_2 = 1.3$ and $r_3 = \alpha_1/\alpha_3 = 1.5$.
(One simple, albeit unrealistic, mass distribution reproducing these values
for $\alpha_1$, $r_2$, and $r_3$ consists of a uniform mass distribution
throughout the spheroid plus three point particles located at the surface of
the grain along the $a_1$-, $a_2$-, and $a_3$-axes, with mass fractions,
i.e.~the mass of the point particle divided by the total mass of the grain, of
0.2229, 0.0538, and 0.0949, respectively.)
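The quoted mass fractions can be verified directly. In units where $I_i = \frac{2}{5} M a_{\mathrm{eff}}^2 \alpha_i$ (equivalent to equation \ref{eq:I_i}), a point mass with fraction $f$ at distance $d$ from the centre adds $\frac{5}{2} f (d/a_{\mathrm{eff}})^2$ to the $\alpha_i$ of the two perpendicular axes. The sketch below (not from the paper) recovers $\alpha_1 \approx \delta^{-2/3}$, $r_2 \approx 1.3$, and $r_3 \approx 1.5$ to the precision of the quoted fractions.

```python
delta = 0.5
a, b = delta**(2/3), delta**(-1/3)        # semilengths / a_eff
f1, f2, f3 = 0.2229, 0.0538, 0.0949       # point-particle mass fractions
u = 1.0 - f1 - f2 - f3                    # uniform-spheroid mass fraction

# alpha_i of a uniform spheroid (I_i = (2/5) M a_eff^2 alpha_i):
a1_u = delta**(-2/3)
a23_u = (a1_u + a1_u**(-2)) / 2

# Each point mass adds 2.5 f d^2 to the two perpendicular moments.
alpha1 = u * a1_u + 2.5 * (f2 * b**2 + f3 * b**2)
alpha2 = u * a23_u + 2.5 * (f1 * a**2 + f3 * b**2)
alpha3 = u * a23_u + 2.5 * (f1 * a**2 + f2 * b**2)

r2, r3 = alpha1 / alpha2, alpha1 / alpha3   # approximately 1.3 and 1.5
```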
It will be convenient to also denote
$(\bmath{\hat{a}}_2, \bmath{\hat{a}}_3, \bmath{\hat{a}}_1)$ by
$(\bmath{\hat{x}}, \bmath{\hat{y}}, \bmath{\hat{z}})$.
In order to evaluate mean torques and diffusion coefficients associated
with gas-atom collisions and H$_2$ formation, it is most convenient to
adopt oblate spheroidal coordinates $(\eta, \phi^{\prime})$. The
transformation to grain-body Cartesian coordinates is
\be
\label{eq:x-oblate-spheroidal}
x = a_{\mathrm{eff}} \, \delta^{-1/3} \cos \eta \cos \phi^{\prime} ,
\ee
\be
y = a_{\mathrm{eff}} \, \delta^{-1/3} \cos \eta \sin \phi^{\prime} ,
\ee
\be
z = a_{\mathrm{eff}} \, \delta^{2/3} \sin \eta ,
\label{eq:z-oblate-spheroidal}
\ee
with $-\upi/2 \le \eta \le \upi/2$ and $0 \le \phi^{\prime} < 2 \upi$.
The surface area element is
\be
\label{eq:surf-area-element}
\mathrm{d}S =
a_{\mathrm{eff}}^2 \, \delta^{-2/3} \left[ \delta^2 + \left(1-\delta^2 \right)
\sin^2 \eta \right]^{1/2} \cos \eta \, \mathrm{d}\eta \, \mathrm{d}\phi^{\prime}
\ee
and the outward-pointing unit normal is
\begin{multline}
\bmath{\hat{N}} = \left[ \delta^2 + \left(1-\delta^2 \right) \sin^2 \eta
\right]^{-1/2} \times \\
\left[
\delta \cos \eta \, \left(\bmath{\hat{x}} \cos \phi^{\prime} + \bmath{\hat{y}}
\sin \phi^{\prime} \right) + \bmath{\hat{z}} \sin \eta \right] .
\label{eq:N-hat}
\end{multline}
Along with $\bmath{\hat{N}}$, the following two vectors form an orthonormal
basis:
\be
\bmath{\hat{\phi}^{\prime}} = - \bmath{\hat{x}} \sin \phi^{\prime} +
\bmath{\hat{y}} \cos \phi^{\prime} ,
\ee
\begin{multline}
\bmath{\hat{t}} = \bmath{\hat{\phi}^{\prime}} \bmath{\times} \bmath{\hat{N}}
= \left[ \delta^2 + \left(1-\delta^2 \right) \sin^2 \eta \right]^{-1/2} \times \\
\left[ \sin \eta \, \left(\bmath{\hat{x}} \cos
\phi^{\prime} + \bmath{\hat{y}} \sin \phi^{\prime} \right) - \bmath{\hat{z}} \delta
\cos \eta \right] .
\label{eq:t-hat}
\end{multline}
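Equations (\ref{eq:N-hat})--(\ref{eq:t-hat}) should yield an orthonormal triad with $\bmath{\hat{t}} = \bmath{\hat{\phi}^{\prime}} \bmath{\times} \bmath{\hat{N}}$. A quick numerical check (an illustration; the function name is ours):

```python
import numpy as np

def surface_basis(delta, eta, phi):
    """N-hat, t-hat, phi'-hat at the surface point (eta, phi')."""
    A = (delta**2 + (1 - delta**2) * np.sin(eta)**2) ** -0.5
    N = A * np.array([delta * np.cos(eta) * np.cos(phi),
                      delta * np.cos(eta) * np.sin(phi),
                      np.sin(eta)])
    t = A * np.array([np.sin(eta) * np.cos(phi),
                      np.sin(eta) * np.sin(phi),
                      -delta * np.cos(eta)])
    phihat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return N, t, phihat

N, t, ph = surface_basis(0.5, 0.3, 1.1)
```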
For later use, we define the following integrals over the coordinate $\eta$:
\be
\label{eq:app-i1}
\mathcal{I}_1(\delta) =
\int_{-\upi/2}^{\upi/2} \mathrm{d}\eta \, \cos \eta \, [A(\delta, \eta)]^{-1}
\sin^2 \eta = \frac{2 - \delta^2 - \delta^4 g(\delta)}{4 (1 - \delta^2)} ,
\ee
\be
\label{eq:app-i2}
\mathcal{I}_2(\delta) =
\int_{-\upi/2}^{\upi/2} \mathrm{d}\eta \, \cos \eta \, [A(\delta, \eta)]^{-1} =
1 + \delta^2 g(\delta) ,
\ee
\be
\label{eq:app-i3}
\mathcal{I}_3(\delta) = \int_{-\upi/2}^{\upi/2} \mathrm{d}\eta \, \cos \eta \,
A(\delta, \eta) = 2 g(\delta) ,
\ee
\begin{multline}
\mathcal{I}_4(\delta) = \int_{-\upi/2}^{\upi/2} \mathrm{d}\eta \, \cos \eta \,
A(\delta, \eta) \sin^2 \eta \cos^2 \eta \\
= \frac{2 + \delta^2 - \delta^2 (4 - \delta^2) g(\delta)}{4 (1 - \delta^2)^2} ,
\label{eq:app-i4}
\end{multline}
where
\be
\label{eq:A-delta-eta}
A(\delta, \eta) = \left[ \delta^2 + \left( 1 - \delta^2 \right) \sin^2 \eta
\right]^{-1/2}
\ee
and
\be
g(\delta) = \frac{1}{2} \left( 1 - \delta^2 \right)^{-1/2} \ln \left[
\frac{1 + (1 - \delta^2)^{-1/2}}{-1 + (1 - \delta^2)^{-1/2}} \right] .
\ee
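The closed forms of $\mathcal{I}_1$--$\mathcal{I}_4$ can be checked against direct quadrature. The SciPy sketch below (illustrative, not from the paper) does so for $\delta = 0.5$:

```python
import numpy as np
from scipy.integrate import quad

def g(d):
    s = (1 - d**2) ** -0.5
    return 0.5 * s * np.log((1 + s) / (s - 1))

def A(d, e):
    return (d**2 + (1 - d**2) * np.sin(e)**2) ** -0.5

d = 0.5
# Numerical values of I_1 .. I_4 (note [A]^{-1} in I_1 and I_2):
num = [quad(lambda e: np.cos(e) / A(d, e) * np.sin(e)**2, -np.pi/2, np.pi/2)[0],
       quad(lambda e: np.cos(e) / A(d, e), -np.pi/2, np.pi/2)[0],
       quad(lambda e: np.cos(e) * A(d, e), -np.pi/2, np.pi/2)[0],
       quad(lambda e: np.cos(e) * A(d, e) * np.sin(e)**2 * np.cos(e)**2,
            -np.pi/2, np.pi/2)[0]]
# Closed forms quoted in the text:
closed = [(2 - d**2 - d**4 * g(d)) / (4 * (1 - d**2)),
          1 + d**2 * g(d),
          2 * g(d),
          (2 + d**2 - d**2 * (4 - d**2) * g(d)) / (4 * (1 - d**2)**2)]
```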
\section{Coordinate systems}
\label{sec:coord-systems}
We already introduced grain-body coordinates $(x,y,z)$, fixed with respect
to the grain, in Section \ref{sec:spheroid-properties}. Now consider an inertial
coordinate system, which we call `alignment coordinates'
$(x_B, y_B, z_B)$, with its origin also at the center of the spheroidal grain.
The orientation of the grain in space depends on its angular momentum
$\mathbfit{J}$ and rotational kinetic energy $E$. We denote the spherical
coordinates of $\mathbfit{J}$ in alignment coordinates by
$(J, \xi, \phi_B)$. We define `angular-momentum coordinates'
$(x_J, y_J, z_J)$ by
\be
\label{eq:x-J}
\bmath{\hat{x}}_J = \bmath{\hat{\xi}} = \bmath{\hat{x}}_B \cos \xi \cos
\phi_B + \bmath{\hat{y}}_B \cos \xi \sin \phi_B - \bmath{\hat{z}}_B \sin \xi ,
\ee
\be
\bmath{\hat{y}}_J = \bmath{\hat{\phi}}_B = - \bmath{\hat{x}}_B \sin \phi_B
+ \bmath{\hat{y}}_B \cos \phi_B ,
\ee
\be
\bmath{\hat{z}}_J = \bmath{\hat{J}} = \bmath{\hat{x}}_B \sin \xi \cos
\phi_B + \bmath{\hat{y}}_B \sin \xi \sin \phi_B + \bmath{\hat{z}}_B \cos \xi .
\label{eq:z-J}
\ee
The orientation of the grain body in angular-momentum coordinates can be
expressed using Eulerian angles $(\alpha, \gamma, \zeta)$.
We adopt the same prescription for the Eulerian angles as in section 2.5.3 in
\citet{WD03}:
Start with the grain axes
$(\bmath{\hat{a}}_2, \bmath{\hat{a}}_3, \bmath{\hat{a}}_1)$ aligned
with $(\bmath{\hat{x}}_J, \bmath{\hat{y}}_J, \bmath{\hat{z}}_J)$.
Then apply the following operations
to the grain: (1) rotate through angle $\zeta$ about
$\bmath{\hat{a}}_1 = \bmath{\hat{z}}_J$, (2) rotate through angle
$\gamma$ about $\bmath{\hat{a}}_2$, (3) rotate through angle $\alpha$ about
$\bmath{\hat{a}}_1$. Thus, the transformation between grain-body and
angular-momentum coordinates is
\begin{multline}
\bmath{\hat{x}} = \bmath{\hat{a}}_2 = \bmath{\hat{x}}_J (\cos \alpha \cos
\zeta - \sin \alpha \sin \zeta \cos \gamma) \\
+ \bmath{\hat{y}}_J (\cos \alpha
\sin \zeta + \sin \alpha \cos \zeta \cos \gamma) +
\bmath{\hat{z}}_J \sin \alpha \sin \gamma ,
\label{eq:x-hat}
\end{multline}
\begin{multline}
\bmath{\hat{y}} = \bmath{\hat{a}}_3 = - \bmath{\hat{x}}_J (\sin \alpha \cos
\zeta + \cos \alpha \sin \zeta \cos \gamma) \\
+ \bmath{\hat{y}}_J (\cos \alpha
\cos \zeta \cos \gamma - \sin \alpha \sin \zeta) + \bmath{\hat{z}}_J \cos
\alpha \sin \gamma ,
\label{eq:y-hat}
\end{multline}
\be
\bmath{\hat{z}} = \bmath{\hat{a}}_1 = \bmath{\hat{x}}_J \sin \zeta \sin \gamma
- \bmath{\hat{y}}_J \cos \zeta \sin \gamma + \bmath{\hat{z}}_J \cos \gamma .
\label{eq:z-hat}
\ee
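The three-rotation prescription can be verified numerically against the explicit transformation, equations (\ref{eq:x-hat})--(\ref{eq:z-hat}), by applying each rotation with the Rodrigues formula. A sketch (illustrative only):

```python
import numpy as np

def rodrigues(v, n, th):
    """Rotate vector v about unit axis n by angle th."""
    return (v * np.cos(th) + np.cross(n, v) * np.sin(th)
            + n * np.dot(n, v) * (1 - np.cos(th)))

alpha, gamma, zeta = 0.7, 1.2, 2.1
a2, a3, a1 = np.eye(3)          # start with (a2, a3, a1) = (x_J, y_J, z_J)
# (1) rotate by zeta about a1, (2) by gamma about the new a2,
# (3) by alpha about the new a1; each rotation moves all three axes.
for idx, th in [(2, zeta), (0, gamma), (2, alpha)]:
    n = (a2, a3, a1)[idx].copy()
    a2, a3, a1 = [rodrigues(v, n, th) for v in (a2, a3, a1)]

# Explicit results of equations (x-hat) and (z-hat):
x_hat = np.array([np.cos(alpha)*np.cos(zeta) - np.sin(alpha)*np.sin(zeta)*np.cos(gamma),
                  np.cos(alpha)*np.sin(zeta) + np.sin(alpha)*np.cos(zeta)*np.cos(gamma),
                  np.sin(alpha)*np.sin(gamma)])
z_hat = np.array([np.sin(zeta)*np.sin(gamma),
                  -np.cos(zeta)*np.sin(gamma),
                  np.cos(gamma)])
```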
\section{Grain rotation}
\label{sec:grain-rotation}
A description of the free rotation of a non-symmetric grain, for a given
$J$ and $q$, and its flipping dynamics can be found in section 2.5 of
\citet{WD03}. As described there, the components of the grain's
angular velocity $\bmath{\omega}$ along the principal axes involve the Jacobi
elliptic functions. For later convenience, we reproduce the expressions for
$\omega_i$ from \citet{KW17} here.
When
$1 < q < r_2$,
\be
\label{eq:omega1-low-q}
\omega_1 = \pm \frac{J}{I_1} \left( \frac{r_3-q}{r_3-1} \right)^{1/2}
\mathrm{dn}(\omega_{\mathrm{rot}} t, k^2) ,
\ee
\be
\label{eq:omega2-low-q}
\omega_2 = - \frac{J}{I_1} r_2 \left( \frac{q-1}{r_2-1} \right)^{1/2}
\mathrm{sn}(\omega_{\mathrm{rot}} t, k^2) ,
\ee
\be
\omega_3 = \pm \frac{J}{I_1} r_3 \left( \frac{q-1}{r_3-1} \right)^{1/2}
\mathrm{cn}(\omega_{\mathrm{rot}} t, k^2) ,
\label{eq:omega3-low-q}
\ee
where
\be
\label{eq:k2}
k^2 = \frac{(r_3-r_2) (q-1)}{(r_2-1) (r_3-q)}
\ee
and
\be
\omega_{\mathrm{rot}} = \frac{J}{I_1} \left[ (r_2-1) (r_3-q) \right]^{1/2} .
\ee
The grain is in the positive flip state with respect to $\bmath{\hat{a}}_1$
(i.e.~$\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_1 > 0$) when the plus sign
is chosen
in both equations (\ref{eq:omega1-low-q}) and (\ref{eq:omega3-low-q}). It is
in the negative flip state with respect to $\bmath{\hat{a}}_1$ when the minus
sign is chosen in both of those cases. When $r_2 < q < r_3$,
\be
\label{eq:omega1-high-q}
\omega_1 = \pm \frac{J}{I_1} \left( \frac{r_3-q}{r_3-1} \right)^{1/2}
\mathrm{cn}(\omega_{\mathrm{rot}} t, k^{-2}) ,
\ee
\be
\label{eq:omega2-high-q}
\omega_2 = - \frac{J}{I_1} r_2 \left( \frac{r_3 - q}{r_3-r_2} \right)^{1/2}
\mathrm{sn}(\omega_{\mathrm{rot}} t, k^{-2}) ,
\ee
\be
\omega_3 = \pm \frac{J}{I_1} r_3 \left( \frac{q-1}{r_3-1} \right)^{1/2}
\mathrm{dn}(\omega_{\mathrm{rot}} t, k^{-2}) ,
\label{eq:omega3-high-q}
\ee
with $k^2$ as defined in equation (\ref{eq:k2}) and
\be
\omega_{\mathrm{rot}} = \frac{J}{I_1} \left[ (r_3-r_2) (q-1) \right]^{1/2} .
\ee
The grain is in the positive flip state with respect to $\bmath{\hat{a}}_3$
(i.e.~$\mathbfit{J} \bmath{\cdot} \bmath{\hat{a}}_3 > 0$) when the plus sign
is chosen
in both equations (\ref{eq:omega1-high-q}) and (\ref{eq:omega3-high-q}). It is
in the negative flip state with respect to $\bmath{\hat{a}}_3$ when the minus
sign is chosen in both of those cases.
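As a consistency check on equations (\ref{eq:omega1-low-q})--(\ref{eq:omega3-low-q}), the components must conserve $|\mathbfit{J}|$ and reproduce $q = 2 I_1 E/J^2$ at every phase of the rotation. The SciPy sketch below (not from the paper) verifies this for the $1 < q < r_2$ branch, in units $J = I_1 = 1$:

```python
import numpy as np
from scipy.special import ellipj

r2, r3 = 1.3, 1.5
q = 1.15                          # branch 1 < q < r2
k2 = (r3 - r2) * (q - 1) / ((r2 - 1) * (r3 - q))

nu = np.linspace(0.0, 10.0, 101)  # nu = omega_rot * t
sn, cn, dn, _ = ellipj(nu, k2)

# omega_i in units J/I_1 = 1 (positive flip state):
w1 = np.sqrt((r3 - q) / (r3 - 1)) * dn
w2 = -r2 * np.sqrt((q - 1) / (r2 - 1)) * sn
w3 = r3 * np.sqrt((q - 1) / (r3 - 1)) * cn

# |J|^2 = sum (I_i w_i)^2, with I_2 = I_1/r2 and I_3 = I_1/r3:
J2 = w1**2 + (w2 / r2)**2 + (w3 / r3)**2     # should be 1 everywhere
# q = 2 I_1 E / J^2 = sum I_1 I_i w_i^2 / J^2:
q_check = w1**2 + w2**2 / r2 + w3**2 / r3    # should be q everywhere
```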
The components $\omega_i$ can also be expressed in terms of the Eulerian angles:
\be
\label{eq:omega_1}
\omega_1 = \frac{J}{I_1} \cos \gamma ,
\ee
\be
\label{eq:omega_2}
\omega_2 = \frac{J}{I_2} \sin \gamma \sin \alpha ,
\ee
\be
\label{eq:omega_3}
\omega_3 = \frac{J}{I_3} \sin \gamma \cos \alpha .
\ee
As seen in equations (\ref{eq:omega1-low-q})--(\ref{eq:omega3-low-q}) and
(\ref{eq:omega1-high-q})--(\ref{eq:omega3-high-q}), the angular velocity in
grain-body coordinates is periodic in the variable $\nu = \omega_{\mathrm{rot}} t$,
with period $4 K(k^{\pm 2})$, where the $+$ ($-$) sign is for the case that
$1 < q < r_2$ ($r_2 < q < r_3$). Thus, from equations
(\ref{eq:omega_1})--(\ref{eq:omega_3}), the Eulerian angles $\alpha$ and
$\gamma$ are likewise periodic. The Eulerian angle $\zeta$ can be expressed
as the sum of two periodic functions, one with the same period as for
$\alpha$ and $\gamma$ and the other with an incommensurate period. When
evaluating drift and diffusion coefficients associated with external
processes, we will average over the grain rotation, since the rotation
time-scale is orders of magnitude smaller than all other relevant
time-scales. Denoting the average of a function $F$ over grain
rotation by $\langle F \rangle$,
\be
\label{eq:avg-grain-rot}
\langle F \rangle = \left[ 8 \upi K \left( k^{\pm 2} \right) \right]^{-1}
\int_0^{4 K(k^{\pm 2})} \mathrm{d}\nu \int_0^{2 \upi} \mathrm{d}\zeta \, F
\left[ \alpha(\nu), \gamma(\nu), \zeta \right] .
\ee
For a grain with dynamic symmetry, $\gamma$ is constant when $J$ and $E$ are
fixed. As seen above, $\gamma$ varies periodically for a grain that lacks
dynamic symmetry. From equations (\ref{eq:omega_1}),
(\ref{eq:omega1-low-q}), and (\ref{eq:omega1-high-q}), the average value of
$\cos^2 \gamma$ is given by
\be
\label{eq:avg-cos2-gamma}
\langle \cos^2 \gamma \rangle = \frac{r_3 - q}{r_3 - 1} \times
\begin{cases}
\langle \mathrm{dn}^2(\nu, k^2) \rangle & , 1 < q < r_2 \\
\langle \mathrm{cn}^2(\nu, k^{-2}) \rangle & , r_2 < q < r_3
\end{cases} .
\ee
The averages on the right-hand side of equation (\ref{eq:avg-cos2-gamma})
can be expressed as
\be
\label{eq:dn2-avg}
\langle \mathrm{dn}^2(\nu, k^2) \rangle = \frac{E(k^2)}{K(k^2)}
\ee
and
\be
\label{eq:cn2-avg}
\langle \mathrm{cn}^2(\nu, k^2) \rangle = \frac{\langle \mathrm{dn}^2(\nu,
k^2) \rangle -1 + k^2}{k^2} .
\ee
We will employ this result in Section \ref{sec:external-processes}, where
we will also need the following results, all derived using equations
(\ref{eq:omega1-low-q})--(\ref{eq:cn2-avg}):
\be
\label{eq:sin-2-gamma-sin-2-alpha}
\langle \sin^2 \gamma \sin^2 \alpha \rangle =
\begin{cases}
\left( 1 - \langle \mathrm{dn}^2(\nu, k^2) \rangle \right)
\frac{r_3-q}{r_3 - r_2} & , 1 < q < r_2 \\
\left( 1 - \langle \mathrm{dn}^2(\nu, k^{-2}) \rangle \right)
\frac{q-1}{r_2 - 1} & , r_2 < q < r_3
\end{cases} ,
\ee
\be
\label{eq:sin-2-gamma-cos-2-alpha}
\langle \sin^2 \gamma \cos^2 \alpha \rangle = \frac{q - 1}{r_3 - 1} \times
\begin{cases}
\langle \mathrm{cn}^2(\nu, k^2) \rangle & , 1 < q < r_2 \\
\langle \mathrm{dn}^2(\nu, k^{-2}) \rangle & , r_2 < q < r_3
\end{cases} ,
\ee
\be
\label{eq:cos-gamma-av}
\langle \cos \gamma \rangle = \begin{cases}
\pm \left( \frac{r_3-q}{r_3-1}\right)^{1/2}
\frac{\upi}{2 K(k^2)} & , 1 < q < r_2 \\
0 & , r_2 < q < r_3
\end{cases} ,
\ee
\be
\label{eq:cos-alpha-sin-gamma-av}
\langle \sin \gamma \cos \alpha \rangle = \begin{cases}
0 & , 1 < q < r_2 \\
\pm \left( \frac{q-1}{r_3-1} \right)^{1/2} \frac{\upi}{2 K(k^{-2})} & , r_2 < q
< r_3
\end{cases} ,
\ee
$\langle \sin \gamma \sin \alpha \rangle = 0$,
$\langle \sin \gamma \cos \gamma \sin \alpha \rangle = 0$,
$\langle \sin \gamma \cos \gamma \cos \alpha \rangle = 0$, and
$\langle \sin^2 \gamma \sin \alpha \cos \alpha \rangle = 0$.
In equations (\ref{eq:cos-gamma-av}) and (\ref{eq:cos-alpha-sin-gamma-av}),
the $+$ ($-$) signs are for the positive (negative) flip states with respect to
$\bmath{\hat{a}}_1$ and $\bmath{\hat{a}}_3$, respectively.
From the definition of $q$ in equation (\ref{eq:q}), the expression
$E = \frac{1}{2} \sum_i I_i \omega_i^2$, and equations
(\ref{eq:omega_1})--(\ref{eq:omega_3}),
\be
q = \cos^2 \gamma + r_2 \sin^2 \gamma + r_3 \sin^2 \gamma \cos^2 \alpha .
\ee
Thus, for all values of $q$,
\be
\label{eq:avg-quantity}
r_2 \langle \sin^2 \gamma \sin^2 \alpha \rangle + r_3 \langle \sin^2 \gamma
\cos^2 \alpha \rangle = q - \langle \cos^2 \gamma \rangle .
\ee
This result will be useful in Section \ref{sec:external-processes}.
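The period averages in equations (\ref{eq:dn2-avg}) and (\ref{eq:cn2-avg}) can be confirmed numerically with SciPy's elliptic routines (note that `ellipk`/`ellipe` take the parameter $m = k^2$). A sketch:

```python
from scipy.integrate import quad
from scipy.special import ellipj, ellipk, ellipe

m = 0.4                                   # parameter m = k^2
K = ellipk(m)
# Averages of dn^2 and cn^2 over one full period 4K:
dn2_avg = quad(lambda u: ellipj(u, m)[2]**2, 0.0, 4*K)[0] / (4*K)
cn2_avg = quad(lambda u: ellipj(u, m)[1]**2, 0.0, 4*K)[0] / (4*K)
```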
\section{External processes}
\label{sec:external-processes}
We assume that every gas-phase particle that strikes the grain returns to
the gas, via either thermal evaporation or incorporation into an H$_2$ molecule
that forms on the grain surface and is ejected. We take the arrival and
departure rates to be equal and apply a stochastic treatment for these
processes. Of the two torques associated with the interstellar magnetic field,
the Davis-Greenstein torque is treated deterministically and the Barnett
torque is omitted, since it only yields a precession in the angle
$\phi_B$, which is not relevant for any of the other dynamics under
consideration.
\subsection{Langevin equation}
Since the grain rotation time-scale is orders of magnitude smaller than all
other relevant time-scales, we average over grain rotation, assuming free
rotation, as described in Section \ref{sec:grain-rotation}, as a highly
accurate approximation. Thus, at any time $t$, the grain is characterized by
its angular momentum $\mathbfit{J}$, rotational kinetic energy $E$ or
its dimensionless measure $q$, and
flip state (with respect to either $\bmath{\hat{a}}_1$ or
$\bmath{\hat{a}}_3$, depending on the value of $q$). The time-scale for
internal relaxation (i.e.~Barnett plus nuclear) is
$\tau_{\mathrm{int}} = (\tau^{-1}_{\mathrm{Bar}} + \tau^{-1}_{\mathrm{nuc}})^{-1}$,
where the Barnett relaxation time-scale $\tau_{\mathrm{Bar}}$ is found from
equation (\ref{eq:tau-int-low-freq}) and the nuclear relaxation time-scale
is found using equations
(\ref{eq:tau-int-all-freq})--(\ref{eq:tau-int-low-freq}). Since
$\tau_{\mathrm{int}}$ is orders of magnitude shorter than the time-scales
associated with external processes, we neglect the role of external processes
in the evolution of the grain's rotational energy. That is, we simply evolve
$q$ using the Langevin equation (\ref{eq:internal-Langevin-eq}).
The angular momentum $\mathbfit{J}$ must, of course, be tracked in
alignment coordinates, which are fixed in space (see Section
\ref{sec:coord-systems}). However, for the processes with stochastic
treatments, it is easier to evaluate the change
$\mathrm{d}\mathbfit{J}$ in instantaneous angular-momentum coordinates and then
transform the result to alignment coordinates. In this manner,
$\mathrm{d}\mathbfit{J}$ is found from three coupled Langevin equations:
\be
\mathrm{d}J_{i, J} = \langle \Gamma_{i, J}(\mathbfit{J}, q, \mathrm{fs}) \rangle
\, \mathrm{d}t + \sum_{j=1}^3 \langle B_{ij, J}(\mathbfit{J}, q, \mathrm{fs})
\rangle \, \mathrm{d}w_{j, J} \ \ \ \ (i = 1, 2, 3)
\ee
where $\mathrm{d}t$ is the time step and
$\mathrm{d}w_{j, J}$ are Gaussian random variables with variance
$\mathrm{d}t$.
The subscript `$J$' indicates that quantities are evaluated in
angular-momentum coordinates,
angle brackets denote averages over grain rotation, `fs' denotes the
flip state, which is positive or negative (equations \ref{eq:omega1-low-q},
\ref{eq:omega3-low-q}, \ref{eq:omega1-high-q}, \ref{eq:omega3-high-q}),
and $\langle B_{ij, J}(\mathbfit{J}, q, \mathrm{fs}) \rangle$ are components of
the matrix square root of the rotationally averaged diffusion tensor.
The components
$\langle \Gamma_{i, J}(\mathbfit{J}, q, \mathrm{fs}) \rangle$ of the rotationally
averaged mean torque and the components
$\langle C_{ij, J}(\mathbfit{J}, q, \mathrm{fs}) \rangle$ of the rotationally
averaged diffusion tensor can depend on $\mathbfit{J}$, $q$, and the flip
state. These quantities are evaluated in the following subsections.
\subsection{Collisions}
Suppose the gas, with temperature $T_{\mathrm{gas}}$,
consists of particles with mass $m$ and number density $n$.
The gas thermal speed is defined as
\be
\label{eq:v-th}
v_{\mathrm{th}} = \left( \frac{2 k_B T_{\mathrm{gas}}}{m} \right)^{1/2}
\ee
where $k_B$ is Boltzmann's constant.
The velocity of a gas particle is $\mathbfit{v} = v_{\mathrm{th}} \, \mathbfit{s}$.
The `reduced velocity' $\mathbfit{s}$ is characterized
by polar angle $\theta_{\mathrm{in}}$ and azimuthal angle $\phi_{\mathrm{in}}$,
with $\bmath{\hat{N}}$ as the polar axis and $\bmath{\hat{t}}$ as the
azimuthal axis (recall equations \ref{eq:N-hat} and \ref{eq:t-hat}). Thus,
\be
\label{eq:s-hat}
\bmath{\hat{s}} = - \left( \bmath{\hat{N}} \cos \theta_{\mathrm{in}} +
\bmath{\hat{t}} \sin
\theta_{\mathrm{in}} \cos \phi_{\mathrm{in}} + \bmath{\hat{\phi}^{\prime}} \sin
\theta_{\mathrm{in}} \sin \phi_{\mathrm{in}} \right) .
\ee
The Maxwell velocity distribution is
\be
P(\mathbfit{s}) s^2 \, \mathrm{d}s \, \mathrm{d}\Omega = \upi^{-3/2} \exp(-s^2)
s^2 \, \mathrm{d}s \, \mathrm{d}\Omega
\ee
where $\mathrm{d}\Omega$ is the solid-angle element.
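The reduced-speed distribution is correctly normalized: integrating $\upi^{-3/2} \exp(-s^2) s^2$ over all speeds and the full $4 \upi$ steradians gives unity, with mean reduced speed $\langle s \rangle = 2/\sqrt{\upi}$ (i.e. $\langle v \rangle = 2 v_{\mathrm{th}}/\sqrt{\upi}$). A quadrature check (illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Normalization: 4 pi * pi^{-3/2} * integral of s^2 exp(-s^2) ds = 1
norm = 4 * np.pi * np.pi**-1.5 * quad(lambda s: s**2 * np.exp(-s**2),
                                      0, np.inf)[0]
# Mean reduced speed <s> = 2/sqrt(pi):
s_mean = 4 * np.pi * np.pi**-1.5 * quad(lambda s: s**3 * np.exp(-s**2),
                                        0, np.inf)[0]
```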
The velocity of the gas particle relative to a patch on the grain surface is
\be
\mathbfit{V} = v_{\mathrm{th}} s \, \bmath{\hat{s}} - \bmath{\omega}
\bmath{\times} \mathbfit{r}
\ee
where $\bmath{\omega}$ is the grain's angular velocity and $\mathbfit{r}$ is
the displacement from the grain's center of mass to the surface patch, with
components given by equations
(\ref{eq:x-oblate-spheroidal})--(\ref{eq:z-oblate-spheroidal}). Thus, the
rate at which gas particles with reduced speeds between $s$ and
$s + \mathrm{d}s$ collide with a surface patch with area $\mathrm{d}S$ (equation
\ref{eq:surf-area-element}) from within solid angle
$\mathrm{d}\Omega = \mathrm{d}(\cos \theta_{\mathrm{in}}) \, \mathrm{d}\phi_{\mathrm{in}}$ about the direction characterized by
$(\theta_{\mathrm{in}}, \phi_{\mathrm{in}})$ is
\be
\label{eq:dR-col}
\mathrm{d}R_{\mathrm{col}} = \upi^{-3/2} n v_{\mathrm{th}} \, \mathrm{d}s \, s^2
\exp(-s^2) \, \mathrm{d}(\cos \theta_{\mathrm{in}}) \, \mathrm{d}\phi_{\mathrm{in}}
\, V^{\prime} \mathrm{d}S
\ee
when $V^{\prime} > 0$ (and zero otherwise), where
\be
V^{\prime} = \left( s \bmath{\hat{s}} - \frac{\bmath{\omega} \bmath{\times}
\mathbfit{r}}{v_{\mathrm{th}}} \right) \bmath{\cdot} \left( - \bmath{\hat{N}}
\right) .
\ee
From equations (\ref{eq:x-oblate-spheroidal})--(\ref{eq:z-oblate-spheroidal}),
(\ref{eq:N-hat}), (\ref{eq:omega_1})--(\ref{eq:omega_3}), and
(\ref{eq:s-hat}),
\begin{multline}
V^{\prime} = s \cos \theta_{\mathrm{in}} + \frac{J a_{\mathrm{eff}} \delta^{-1/3}}
{I_1 v_{\mathrm{th}}} \left( 1 - \delta^2 \right) A(\delta, \eta) \sin \eta
\cos \eta \sin \gamma \\
\times \left( r_2 \sin \phi^{\prime} \sin \alpha - r_3
\cos \phi^{\prime} \cos \alpha \right) ;
\label{eq:V-prime}
\end{multline}
$A(\delta, \eta)$ is defined in equation (\ref{eq:A-delta-eta}).
We assume that $J a_{\mathrm{eff}}/I_1 v_{\mathrm{th}} \ll 1$. Consequently, the
second term in the expression for $V^{\prime}$ can be neglected unless the first
term yields a vanishing integral, and the lower limit in integrals over $s$ can
simply be taken to be zero.
The angular momentum acquired by the grain when a gas-phase particle collides
and sticks to the surface, as observed in an inertial frame, is
\be
\label{eq:Delta-J-col}
\bmath{\Delta J}_{\mathrm{col}} = m v_{\mathrm{th}} s \, \mathbfit{r} \bmath{\times}
\bmath{\hat{s}} = m v_{\mathrm{th}} a_{\mathrm{eff}} \delta^{-1/3}
\bmath{\Delta J^{\prime}}_{\mathrm{col}} ,
\ee
where, from equations
(\ref{eq:x-oblate-spheroidal})--(\ref{eq:z-oblate-spheroidal}),
(\ref{eq:N-hat})--(\ref{eq:t-hat}), and (\ref{eq:s-hat}),
\begin{multline}
\Delta J^{\prime}_{x, \mathrm{col}} = s \left[ - A(\delta, \eta) \left( 1 -
\delta^2 \right) \sin \eta \cos \eta \sin \phi^{\prime} \cos \theta_{\mathrm{in}}
\right. \\
+ A(\delta, \eta) \delta \sin \phi^{\prime} \sin \theta_{\mathrm{in}} \cos
\phi_{\mathrm{in}} + \delta \sin \eta \cos \phi^{\prime} \sin \theta_{\mathrm{in}}
\sin \phi_{\mathrm{in}} \Big] ,
\label{eq:Delta-J-prime-x}
\end{multline}
\begin{multline}
\Delta J^{\prime}_{y, \mathrm{col}} = s \left[ A(\delta, \eta) \left( 1 -
\delta^2 \right) \sin \eta \cos \eta \cos \phi^{\prime} \cos \theta_{\mathrm{in}}
\right. \\
- A(\delta, \eta) \delta \cos \phi^{\prime} \sin \theta_{\mathrm{in}} \cos
\phi_{\mathrm{in}} + \delta \sin \eta \sin \phi^{\prime} \sin \theta_{\mathrm{in}}
\sin \phi_{\mathrm{in}} \Big] ,
\label{eq:Delta-J-prime-y}
\end{multline}
\be
\label{eq:Delta-J-prime-z}
\Delta J^{\prime}_{z, \mathrm{col}} = - s \cos \eta \sin \theta_{\mathrm{in}}
\sin \phi_{\mathrm{in}} .
\ee
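The component expressions (\ref{eq:Delta-J-prime-x})--(\ref{eq:Delta-J-prime-z}) follow from the cross product in equation (\ref{eq:Delta-J-col}) and can be checked numerically. A sketch (the function name is ours) that evaluates both $s \, \mathbfit{r}^{\prime} \bmath{\times} \bmath{\hat{s}}$, with $\mathbfit{r}^{\prime} = \mathbfit{r}/(a_{\mathrm{eff}} \delta^{-1/3})$, and the closed-form components:

```python
import numpy as np

def delta_J_prime(delta, eta, phi, th_in, ph_in, s=1.0):
    """Return (numeric, closed-form) values of Delta-J'_col."""
    A = (delta**2 + (1 - delta**2) * np.sin(eta)**2) ** -0.5
    N = A * np.array([delta*np.cos(eta)*np.cos(phi),
                      delta*np.cos(eta)*np.sin(phi), np.sin(eta)])
    t = A * np.array([np.sin(eta)*np.cos(phi),
                      np.sin(eta)*np.sin(phi), -delta*np.cos(eta)])
    phihat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    shat = -(N*np.cos(th_in) + t*np.sin(th_in)*np.cos(ph_in)
             + phihat*np.sin(th_in)*np.sin(ph_in))
    r = np.array([np.cos(eta)*np.cos(phi), np.cos(eta)*np.sin(phi),
                  delta*np.sin(eta)])      # r / (a_eff delta^{-1/3})
    numeric = s * np.cross(r, shat)
    closed = s * np.array([
        -A*(1-delta**2)*np.sin(eta)*np.cos(eta)*np.sin(phi)*np.cos(th_in)
        + A*delta*np.sin(phi)*np.sin(th_in)*np.cos(ph_in)
        + delta*np.sin(eta)*np.cos(phi)*np.sin(th_in)*np.sin(ph_in),
        A*(1-delta**2)*np.sin(eta)*np.cos(eta)*np.cos(phi)*np.cos(th_in)
        - A*delta*np.cos(phi)*np.sin(th_in)*np.cos(ph_in)
        + delta*np.sin(eta)*np.sin(phi)*np.sin(th_in)*np.sin(ph_in),
        -np.cos(eta)*np.sin(th_in)*np.sin(ph_in)])
    return numeric, closed

dj_num, dj_closed = delta_J_prime(0.5, 0.4, 1.3, 0.8, 2.2)
```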
The mean torque on the grain due to collisions with gas particles is
\be
\label{eq:Gamma-col-general}
\bmath{\Gamma}_{\mathrm{col}} = \int \mathrm{d}R_{\mathrm{col}} \, \bmath{\Delta
J}_{\mathrm{col}} .
\ee
From equations (\ref{eq:dR-col}), (\ref{eq:V-prime}),
(\ref{eq:Delta-J-col})--(\ref{eq:Delta-J-prime-z}), and
(\ref{eq:Gamma-col-general}),
\begin{multline}
\bmath{\Gamma}_{\mathrm{col}} = - \frac{\sqrt{\upi}}{2} n m v_{\mathrm{th}}
a_{\mathrm{eff}}^4 \delta^{-4/3} \left( 1 - \delta^2 \right)^2
\mathcal{I}_4(\delta) \frac{J}{I_1} \sin \gamma \\
\times \left( r_2 \sin \alpha \,
\bmath{\hat{x}} + r_3 \cos \alpha \bmath{\hat{y}} \right) .
\end{multline}
From equations (\ref{eq:x-hat}), (\ref{eq:y-hat}), and
(\ref{eq:avg-grain-rot}), the rotationally averaged torque is
\begin{multline}
\langle \bmath{\Gamma}_{\mathrm{col}} \rangle = - \frac{\sqrt{\upi}}{2} mn
v_{\mathrm{th}} a_{\mathrm{eff}}^4 \delta^{-4/3} \left( 1 - \delta^2 \right)^2
\mathcal{I}_4(\delta) \, \frac{\mathbfit{J}}{I_1} \\
\times \left( r_2 \langle
\sin^2 \gamma \sin^2 \alpha \rangle + r_3 \langle \sin^2 \gamma \cos^2 \alpha
\rangle \right) ;
\label{eq:Gamma-col-avg}
\end{multline}
expressions for $\mathcal{I}_4(\delta)$ and
$r_2 \langle \sin^2 \gamma \sin^2 \alpha \rangle + r_3 \langle \sin^2 \gamma
\cos^2 \alpha \rangle$ are given in equations (\ref{eq:app-i4}) and
(\ref{eq:avg-quantity}).
Note that $\langle \bmath{\Gamma}_{\mathrm{col}} \rangle = 0$ for steady
rotation about the geometric symmetry axis (i.e.~$\gamma = 0$), in
agreement with previous results \citep{PS71, RDF93}.
The diffusion coefficients are given by
\be
C_{ij, \mathrm{col}} = \int \mathrm{d}R_{\mathrm{col}} \, \Delta J_{i, \mathrm{col}}
\Delta J_{j, \mathrm{col}} .
\ee
The diffusion tensor in grain-body coordinates is diagonal, with
\be
\label{eq:C_zz-body}
C_{zz, \mathrm{col}} = \frac{2 \sqrt{\upi}}{3} nm^2
v_{\mathrm{th}}^3 a_{\mathrm{eff}}^4 \delta^{-4/3} Z_1(\delta) ,
\ee
\be
\label{eq:C_xx-body}
C_{xx, \mathrm{col}} = C_{yy, \mathrm{col}} = \frac{2 \sqrt{\upi}}{3} nm^2
v_{\mathrm{th}}^3 a_{\mathrm{eff}}^4 \delta^{-4/3} Z_2(\delta) ,
\ee
where
\be
\label{eq:Z-1}
Z_1(\delta) = \frac{3}{4} \left[ \mathcal{I}_2(\delta) - \mathcal{I}_1(\delta)
\right] = \frac{3}{16} \left[ 3 + 4 \delta^2 g(\delta) - \frac{1 - \delta^4
g(\delta)}{1 - \delta^2} \right]
\ee
and
\begin{multline}
Z_2(\delta) = \frac{3}{8} \left\{ 2 \left( 1 - \delta^2 \right)^2
\mathcal{I}_4(\delta) + \delta^2 \left[ \mathcal{I}_1(\delta) +
\mathcal{I}_3(\delta) \right] \right\} \\
= \frac{3}{32} \ \frac{4 - 3 \delta^4 + \delta^4 (2 - 3 \delta^2) g(\delta)}
{1 - \delta^2} .
\label{eq:Z-2}
\end{multline}
The integrals $\mathcal{I}_i(\delta)$ and the function $g(\delta)$ are
defined in Section \ref{sec:spheroid-properties}.
Note that, to within the approximations adopted here, $C_{zz, \mathrm{col}}$ and
$C_{xx, \mathrm{col}}$ are independent of the grain rotation and that the results
in equations (\ref{eq:C_zz-body}) and (\ref{eq:C_xx-body}) are identical to
those found by \citet{RDF93}.
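As a quick numerical check, $Z_1(\delta)$ and $Z_2(\delta)$ can be evaluated directly. The sketch below (illustrative only) uses the closed form of $g(\delta)$ inferred from the expression for $u_{\mathrm{max}}$ later in this section, $g(\delta) = (1-\delta^2)^{-1/2} \ln[(1+\sqrt{1-\delta^2})/\delta]$; both functions approach unity in the spherical limit $\delta \rightarrow 1$.

```python
import math

def g(delta):
    # Closed form inferred from the expression for u_max in this paper:
    # g(delta) = ln[(1 + sqrt(1 - delta^2)) / delta] / sqrt(1 - delta^2).
    s = math.sqrt(1.0 - delta**2)
    return math.log((1.0 + s) / delta) / s

def Z1(delta):
    gd = g(delta)
    return (3.0 / 16.0) * (3.0 + 4.0 * delta**2 * gd
                           - (1.0 - delta**4 * gd) / (1.0 - delta**2))

def Z2(delta):
    gd = g(delta)
    return (3.0 / 32.0) * (4.0 - 3.0 * delta**4
                           + delta**4 * (2.0 - 3.0 * delta**2) * gd) / (1.0 - delta**2)

for d in (0.3, 0.5, 0.7, 0.9999):
    print(d, Z1(d), Z2(d))   # both approach 1 as delta -> 1
```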
Transforming the diffusion tensor to angular-momentum coordinates and
averaging over grain rotation,
\be
\label{eq:C-zz-J-avg}
\langle C_{zz, J, \mathrm{col}} \rangle = \frac{2 \sqrt{\upi}}{3} nm^2
v_{\mathrm{th}}^3
a_{\mathrm{eff}}^4 \delta^{-4/3} \left[ Z_1(\delta) \langle \cos^2 \gamma
\rangle + Z_2(\delta) \langle \sin^2 \gamma \rangle \right]
\ee
and
\begin{multline}
\langle C_{xx, J, \mathrm{col}} \rangle = \langle C_{yy, J, \mathrm{col}} \rangle =
\frac{2
\sqrt{\upi}}{3} nm^2 v_{\mathrm{th}}^3 a_{\mathrm{eff}}^4 \delta^{-4/3} \\
\times \frac{1}{2}
\left[ Z_2(\delta) \left( 1 + \langle \cos^2 \gamma \rangle \right)
+ Z_1(\delta) \langle \sin^2 \gamma \rangle \right] .
\label{eq:C-xx-J-avg}
\end{multline}
\subsection{Evaporation}
\label{sec:evap}
Consider thermal evaporation of particles of mass $m_{\mathrm{ev}}$, distributed
uniformly across the grain
surface. The total evaporation rate must equal the total collision
rate, and detailed balance applies when the evaporation temperature
$T_{\mathrm{ev}}$ equals the gas temperature $T_{\mathrm{gas}}$. Thus, as described
in Appendix B in \citet{RDF93}, the rate at which particles evaporate from a
surface patch with area $\mathrm{d}S$, with speeds between
$v_{\mathrm{th, ev}} \, s$ and $v_{\mathrm{th, ev}} (s + \mathrm{d}s)$ and from within
solid angle
$\mathrm{d}\Omega = \mathrm{d}(\cos \theta_{\mathrm{in}}) \mathrm{d}\phi_{\mathrm{in}}$
about the direction characterized by $(\theta_{\mathrm{in}}, \phi_{\mathrm{in}})$, is
\be
\label{eq:dR-evap}
\mathrm{d}R_{\mathrm{ev}} = \upi^{-3/2} \frac{m}{m_{\mathrm{ev}}} n v_{\mathrm{th}} \,
\mathrm{d}s \, s^3 \exp(-s^2) \, \mathrm{d}(\cos \theta_{\mathrm{in}}) \, \cos
\theta_{\mathrm{in}} \, \mathrm{d}\phi_{\mathrm{in}} \, \mathrm{d}S .
\ee
The evaporative thermal speed $v_{\mathrm{th, ev}}$ is defined identically to
the gas thermal speed, except $T_{\mathrm{gas}}$ and $m$ in equation
(\ref{eq:v-th}) are replaced with $T_{\mathrm{ev}}$ and $m_{\mathrm{ev}}$.
The angular momentum imparted to the grain following an evaporation event
is
\be
\label{eq:Delta-J-evap}
\bmath{\Delta J}_{\mathrm{ev}} = m_{\mathrm{ev}} \mathbfit{r} \bmath{\times} \left(
v_{\mathrm{th, ev}} \, s \, \bmath{\hat{s}} - \bmath{\omega} \bmath{\times}
\mathbfit{r} \right) = m v_{\mathrm{th}} a_{\mathrm{eff}} \delta^{-1/3}
\bmath{\Delta J^{\prime}}_{\mathrm{ev}}
\ee
where, from equations
(\ref{eq:x-oblate-spheroidal})--(\ref{eq:z-oblate-spheroidal}),
(\ref{eq:N-hat})--(\ref{eq:t-hat}), (\ref{eq:omega_1})--(\ref{eq:omega_3}), and
(\ref{eq:s-hat}),
\be
\bmath{\Delta J^{\prime}}_{\mathrm{ev}} = \bmath{\Delta J^{\prime}}_{\mathrm{ev}}(1) +
\bmath{\Delta J^{\prime}}_{\mathrm{ev}}(2) ,
\ee
\be
\label{eq:Delta-J-prime-ev-1}
\bmath{\Delta J^{\prime}}_{\mathrm{ev}}(1) = \frac{m_{\mathrm{ev}} v_{\mathrm{th, ev}}}
{m v_{\mathrm{th}}} \, \bmath{\Delta J^{\prime}}_{\mathrm{col}} ,
\ee
\begin{multline}
\Delta J^{\prime}_{x, \mathrm{ev}}(2) = \frac{m_{\mathrm{ev}}}{m} \ \frac{J
a_{\mathrm{eff}} \delta^{-1/3}}{I_1 v_{\mathrm{th}}} \left( r_3 \sin \gamma \cos
\alpha \cos^2 \eta \sin \phi^{\prime} \cos \phi^{\prime} \right. \\
- r_2 \sin \gamma \sin
\alpha \cos^2 \eta \sin^2 \phi^{\prime} - \delta^2 r_2 \sin \gamma \sin \alpha
\sin^2 \eta \\
+ \delta \cos \gamma \sin \eta \cos \eta \cos \phi^{\prime} \Big) \, ,
\label{eq:Delta-J-prime-x-ev-2}
\end{multline}
\begin{multline}
\Delta J^{\prime}_{y, \mathrm{ev}}(2) = \frac{m_{\mathrm{ev}}}{m} \ \frac{J
a_{\mathrm{eff}} \delta^{-1/3}}{I_1 v_{\mathrm{th}}} \left( r_2 \sin \gamma \sin
\alpha \cos^2 \eta \sin \phi^{\prime} \cos \phi^{\prime} \right. \\
- r_3 \sin \gamma \cos
\alpha \cos^2 \eta \cos^2 \phi^{\prime} - \delta^2 r_3 \sin \gamma \cos \alpha
\sin^2 \eta \\
+ \delta \cos \gamma \sin \eta \cos \eta \sin \phi^{\prime} \Big) \, ,
\end{multline}
\begin{multline}
\Delta J^{\prime}_{z, \mathrm{ev}}(2) = \frac{m_{\mathrm{ev}}}{m} \ \frac{J
a_{\mathrm{eff}} \delta^{-1/3}}{I_1 v_{\mathrm{th}}} \Big[ \delta \sin \gamma \sin
\eta \cos \eta \\
\left. \times \left( r_2 \sin \alpha \cos \phi^{\prime} + r_3 \cos \alpha
\sin \phi^{\prime} \right) - \cos \gamma \cos^2 \eta \right] .
\label{eq:Delta-J-prime-z-ev-2}
\end{multline}
Evaluation of the mean torque and diffusion tensor associated with evaporation
proceeds in the same way as for collisions, yielding
\begin{multline}
\langle \bmath{\Gamma}_{\mathrm{ev}} \rangle = - \frac{\sqrt{\upi}}{2} mn
v_{\mathrm{th}} a_{\mathrm{eff}}^4 \delta^{-4/3} \frac{\mathbfit{J}}{I_1}
\left\{ 2 \left[ \mathcal{I}_2(
\delta) - \mathcal{I}_1(\delta) \right] \langle \cos^2 \gamma \rangle +
\right. \\
\left. \left[ \mathcal{I}_2(\delta) - \left( 1 - 2 \delta^2
\right) \mathcal{I}_1(\delta) \right] \left( r_2 \langle \sin^2 \gamma
\sin^2 \alpha \rangle + r_3 \langle \sin^2 \gamma \cos^2 \alpha
\rangle \right) \right\}
\label{eq:Gamma-ev-avg}
\end{multline}
and
\be
\label{eq:C-ij-ev}
C_{ij, \mathrm{ev}} = \frac{T_{\mathrm{ev}}}{T_{\mathrm{gas}}} \, C_{ij, \mathrm{col}} .
\ee
The prefactor in equation (\ref{eq:C-ij-ev}) arises as follows:
$C_{ij, \mathrm{col}} \propto m^2 v_{\mathrm{th}}^3$ and
$C_{ij, \mathrm{ev}} \propto m m_{\mathrm{ev}} v_{\mathrm{th}} v_{\mathrm{th, ev}}^2$,
so $C_{ij, \mathrm{ev}}/C_{ij, \mathrm{col}} = m_{\mathrm{ev}} v_{\mathrm{th, ev}}^2 /
(m v_{\mathrm{th}}^2) = T_{\mathrm{ev}}/T_{\mathrm{gas}}$.
When $r_2 = r_3$ and $\gamma = 0$, equations (\ref{eq:Gamma-ev-avg}) and
(\ref{eq:C-ij-ev}) reduce to equations C17 and C21 in \citet{RDF93} for
the mean torque and diffusion tensor for an oblate spheroid with
dynamic symmetry rotating steadily about the symmetry axis.
\subsection{Formation and ejection of H$_2$}
Now consider the case that particles depart the grain as newly formed
H$_2$ molecules. For simplicity, we will assume that the molecules have a
fixed kinetic energy $E_{\mathrm{H}2}$. With $m_{\mathrm{H}2} = 2 m$, the
departure speed is $v_{\mathrm{H}2} = (2 E_{\mathrm{H}2}/m_{\mathrm{H}2})^{1/2}$.
For convenience, define $T_{\mathrm{H}2} = E_{\mathrm{H}2}/k$.
\subsubsection{Uniformly distributed formation sites}
\label{sec:uniform-sites}
First suppose that the formation sites fully cover the grain surface, with
a fixed number density per unit surface area. If the departing molecules
are distributed uniformly in solid angle, then the analysis in Section
\ref{sec:evap} applies with minor modification. Integrating equation
(\ref{eq:dR-evap}) over $s$, the departure rate from a surface patch is
\be
\label{eq:dR-H2}
\mathrm{d}R_{\mathrm{H}2} = \frac{\upi^{-3/2}}{2} \frac{m}{m_{\mathrm{H}2}} n
v_{\mathrm{th}} \, \mathrm{d}(\cos \theta_{\mathrm{in}}) \, \cos \theta_{\mathrm{in}} \,
\mathrm{d}\phi_{\mathrm{in}} \, \mathrm{d}S .
\ee
The angular momentum imparted to the grain when an H$_2$ molecule departs
is identical to that in equation (\ref{eq:Delta-J-evap}) except that
$m_{\mathrm{ev}}$ is replaced with $m_{\mathrm{H}2}$ and
$v_{\mathrm{th, ev}} \, s$ is replaced with $v_{\mathrm{H}2}$. Thus,
\be
\bmath{\Delta J^{\prime}}_{\mathrm{H}2} = \bmath{\Delta J^{\prime}}_{\mathrm{H2}}(1) +
\bmath{\Delta J^{\prime}}_{\mathrm{H2}}(2) ,
\ee
where the components of $\bmath{\Delta J^{\prime}}_{\mathrm{H2}}(2)$ are given by
equations (\ref{eq:Delta-J-prime-x-ev-2})--(\ref{eq:Delta-J-prime-z-ev-2}),
except with $m_{\mathrm{ev}}$ replaced by $m_{\mathrm{H}2}$, and
$\bmath{\Delta J^{\prime}}_{\mathrm{H2}}(1)$ is given by equation
(\ref{eq:Delta-J-prime-ev-1}), except with
$m_{\mathrm{ev}} v_{\mathrm{th, ev}}$ replaced by $m_{\mathrm{H}2} \, v_{\mathrm{H}2}/s$.
In evaluating the mean torque, the term involving
$\bmath{\Delta J^{\prime}}_{\mathrm{H2}}(1)$ vanishes upon integration. Thus,
\be
\label{eq:Gamma-H2}
\langle \bmath{\Gamma}_{\mathrm{H}2} \rangle = \langle \bmath{\Gamma}_{\mathrm{ev}}
\rangle .
\ee
With the assumption that $J a_{\mathrm{eff}}/I_1 v_{\mathrm{th}} \ll 1$, only
the term involving $\bmath{\Delta J^{\prime}}_{\mathrm{H2}}(1)$ contributes to
the diffusion tensor. In the calculation of $C_{ij, \mathrm{ev}}$, the speed
integral $\int_0^{\infty} \mathrm{d}s \, s^5 \exp(-s^2)$ arises and evaluates
to unity; in the calculation of $C_{ij, \mathrm{H}2}$, the departure speed is
fixed and the speed integration has already been performed, leaving the
factor $\frac{1}{2}$ in equation (\ref{eq:dR-H2}).
Thus,
\be
\frac{C_{ij, \mathrm{H}2}}{C_{ij, \mathrm{ev}}} = \frac{\frac{1}{2} m
m_{\mathrm{H}2} v_{\mathrm{th}} v_{\mathrm{H}2}^2}{m m_{\mathrm{ev}} v_{\mathrm{th}}
v_{\mathrm{th, ev}}^2} .
\ee
Finally,
\be
\label{eq:C-ij-H2}
C_{ij, \mathrm{H}2} = \frac{1}{2} \, \frac{T_{\mathrm{H}2}}{T_{\mathrm{ev}}} \
C_{ij, \mathrm{ev}} = \frac{1}{2} \,
\frac{T_{\mathrm{H}2}}{T_{\mathrm{gas}}} \ C_{ij, \mathrm{col}} .
\ee
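The speed integral quoted above can be verified directly; the following minimal sketch checks $\int_0^{\infty} \mathrm{d}s \, s^5 \mathrm{e}^{-s^2} = \Gamma(3)/2 = 1$ both analytically (via the substitution $t = s^2$) and by quadrature.

```python
import math

# Substituting t = s^2 gives (1/2) * Gamma(3) for the evaporation speed integral.
exact = math.gamma(3) / 2.0   # = 1.0

# Midpoint-rule quadrature of int_0^inf s^5 exp(-s^2) ds, truncated at s = 10.
n, smax = 100000, 10.0
h = smax / n
numeric = sum(((i + 0.5) * h)**5 * math.exp(-((i + 0.5) * h)**2)
              for i in range(n)) * h
print(exact, numeric)   # both ~ 1
```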
\subsubsection{Special formation sites}
\label{sec:special-sites}
Now consider a grain where H$_2$ formation occurs only at a set of $N_s$
special surface sites. We randomly select the position
$(\eta_i, \phi^{\prime}_i)$ of
each site from a uniform distribution (in surface area) over the surface.
From equation (\ref{eq:surf-area-element}), the surface area element is
\be
\mathrm{d}S = a_{\mathrm{eff}}^2 \delta^{-2/3} \mathrm{d}u \, \mathrm{d}\phi^{\prime}
\ee
with
\be
\mathrm{d}u = \left[ \delta^2 + (1-\delta^2)
\sin^2 \eta \right]^{1/2} \cos \eta \, \mathrm{d}\eta .
\ee
Thus, for each site $\phi^{\prime}_i$ is selected randomly from a uniform
distribution in $\phi^{\prime}$ (0 to $2 \upi$) and $\eta_i$ is selected
randomly from a uniform distribution in $u$, where $u$ and $\eta$ are
related by
\begin{multline}
u = \frac{1}{2} \left[ 1 + \sin \eta \sqrt{\delta^2 + \left(1-\delta^2
\right) \sin^2 \eta} \right]
+ \frac{\delta^2}{2 \sqrt{1-\delta^2}} \\
\times \left\{ \ln \left[
\left( 1 + \sqrt{1-\delta^2} \right) \left( \sqrt{1-\delta^2} \sin \eta +
\sqrt{\delta^2 + \left(1-\delta^2 \right) \sin^2 \eta} \right) \right] \right.
\\
- 2 \ln \delta \bigg\} .
\end{multline}
As $\eta$ ranges from $-\upi/2$ to $\upi/2$, $u$ ranges from 0 to
\be
u_{\mathrm{max}} = 1 + \frac{\delta^2}{\sqrt{1-\delta^2}} \ln \left( \frac{1 +
\sqrt{1-\delta^2}}{\delta} \right) = 1 + \delta^2 g(\delta) .
\ee
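The inverse-transform sampling of site positions described above can be sketched as follows (a minimal illustration; since $u(\eta)$ is monotonic in $\eta$, it is inverted here by bisection):

```python
import math, random

def u_of_eta(eta, delta):
    """u(eta): cumulative surface area, equal to 0 at eta = -pi/2."""
    s2 = 1.0 - delta**2
    se = math.sin(eta)
    root = math.sqrt(delta**2 + s2 * se**2)
    term1 = 0.5 * (1.0 + se * root)
    term2 = (delta**2 / (2.0 * math.sqrt(s2))) * (
        math.log((1.0 + math.sqrt(s2)) * (math.sqrt(s2) * se + root))
        - 2.0 * math.log(delta))
    return term1 + term2

def sample_site(delta, rng=random):
    """Draw (eta_i, phi_i) uniformly in surface area by inverting u(eta)."""
    s2 = 1.0 - delta**2
    umax = 1.0 + (delta**2 / math.sqrt(s2)) * math.log((1.0 + math.sqrt(s2)) / delta)
    u_target = rng.uniform(0.0, umax)
    lo, hi = -math.pi / 2.0, math.pi / 2.0
    for _ in range(60):                      # bisection: u is monotonic in eta
        mid = 0.5 * (lo + hi)
        if u_of_eta(mid, delta) < u_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), rng.uniform(0.0, 2.0 * math.pi)
```

At the endpoints, $u(-\upi/2) = 0$ and $u(\upi/2) = u_{\mathrm{max}}$, as stated in the text.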
We will assume that molecule formation occurs at the same rate at each
surface site. Thus, the rate per site at which molecules are ejected is
\be
R_{\mathrm{H}2}(\mathrm{per \ site}) = \frac{R_{\mathrm{col}}}{2 N_s} ,
\ee
where $R_{\mathrm{col}}$ is the total rate at which gas atoms collide with the
grain and the factor of $\frac{1}{2}$ appears since there are 2 H atoms per
H$_2$ molecule. From equations (\ref{eq:dR-col}), (\ref{eq:V-prime}),
(\ref{eq:surf-area-element}), and (\ref{eq:app-i2}),
\be
R_{\mathrm{col}} = \sqrt{\upi} n v_{\mathrm{th}} a_{\mathrm{eff}}^2 \delta^{-2/3} \left[ 1
+ \delta^2 g(\delta) \right] .
\ee
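For concreteness, the total collision rate can be evaluated numerically. The sketch below assumes H atoms with $v_{\mathrm{th}} = (2kT_{\mathrm{gas}}/m)^{1/2}$ and illustrative CNM-like conditions, with $g(\delta)$ taken from the closed form implied by the expression for $u_{\mathrm{max}}$ above.

```python
import math

def R_col(n, v_th, a_eff, delta):
    """Total gas-atom collision rate with an oblate spheroidal grain (cgs)."""
    s = math.sqrt(1.0 - delta**2)
    g = math.log((1.0 + s) / delta) / s
    return (math.sqrt(math.pi) * n * v_th * a_eff**2
            * delta**(-2.0 / 3.0) * (1.0 + delta**2 * g))

# CNM-like values (illustrative): n = 30 cm^-3, T_gas = 100 K, a_eff = 0.1 um.
k_B, m_H = 1.380649e-16, 1.6726e-24                  # cgs
v_th = math.sqrt(2.0 * k_B * 100.0 / m_H)            # ~1.3e5 cm/s
rate = R_col(30.0, v_th, 1.0e-5, 0.5)
print(rate)                       # ~1.5e-3 collisions per second
per_site = rate / (2.0 * 100)     # per-site H2 ejection rate for N_s = 100 (illustrative)
```

In the near-spherical limit the bracket approaches 2, recovering the geometric rate $2\sqrt{\upi}\, n v_{\mathrm{th}} a_{\mathrm{eff}}^2 = \upi a_{\mathrm{eff}}^2 n \bar{v}$ with mean speed $\bar{v} = 2 v_{\mathrm{th}}/\sqrt{\upi}$.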
A systematic torque can maintain suprathermal grain rotation only if it
has a non-zero component along
$\bmath{\hat{z}} = \bmath{\hat{a}}_1$. Otherwise, from equations
(\ref{eq:x-hat}) and (\ref{eq:y-hat}), the component of the rotationally
averaged systematic
torque along $\bmath{\hat{J}} = \bmath{\hat{z}}_J$ vanishes in the limit
$\gamma \rightarrow 0$, which characterizes suprathermal rotation.
From equation (\ref{eq:Delta-J-prime-z}), the component of the torque along
$\bmath{\hat{z}}$ vanishes if the outgoing molecules are uniformly
distributed in solid angle, or if they depart along the surface normal
$\bmath{\hat{N}}$. This is a consequence of the spheroidal shape, for which
the components of $\mathbfit{r}$ (the displacement from the grain center to
the surface patch) and $\bmath{\hat{N}}$ that are
perpendicular to $\bmath{\hat{z}}$ lie in the same direction, so that
$\mathbfit{r} \bmath{\times} \bmath{\hat{N}}$ has zero component along
$\bmath{\hat{z}}$.
Rather than taking the outgoing molecules to be uniformly distributed in
solid angle, we therefore randomly pick angles $\theta_{\mathrm{out}, i}$ (from a
uniform distribution in $\cos^2 \theta_{\mathrm{out}}$ between
$[(\cos \theta_{\mathrm{out}})_{\mathrm{min}}]^2$ and 1) and
$\phi_{\mathrm{out}, i}$ (uniformly distributed between 0 and $2 \upi$) for each
site such that a molecule departing site $i$ has velocity
\begin{multline}
\mathbfit{v}_{\mathrm{H}2} = v_{\mathrm{H}2} \left( \bmath{\hat{N}}_i \cos
\theta_{\mathrm{out}, i} + \bmath{\hat{t}}_i \sin \theta_{\mathrm{out}, i} \cos
\phi_{\mathrm{out}, i} \right. \\
\left. + \bmath{\hat{\phi}^{\prime}}_i \sin \theta_{\mathrm{out}, i}
\sin \phi_{\mathrm{out}, i} \right) .
\label{eq:v-H2-special-sites}
\end{multline}
The form of the distribution for $\theta_{\mathrm{out}, i}$ was chosen for
consistency with the treatment of the case with uniformly distributed
formation sites. In our simulations, we adopt
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}} = 0.8$.
In contrast to the case where the surface is uniformly covered with
H$_2$-formation sites, the mean torque associated with
$\bmath{\Delta J^{\prime}}_{\mathrm{H}2}(1)$ does not vanish. This term accounts
for the systematic torque, while the term associated with
$\bmath{\Delta J^{\prime}}_{\mathrm{H}2}(2)$ combines with the torque due to
collisions with atoms to account for the drag torque. The mean systematic
torque is
\be
\bmath{\Gamma}_{\mathrm{H2, \, sys}} = R_{\mathrm{H2}}(\mathrm{per \ site})
\sum_{i=1}^{N_s} \bmath{\Delta J}_{\mathrm{H2}}(1)_{i} .
\ee
The rotationally averaged systematic torque is
\begin{multline}
\langle \bmath{\Gamma}_{\mathrm{H2, \, sys}} \rangle = \sqrt{\upi} m n
v_{\mathrm{th}} v_{\mathrm{H2}} a^3_{\mathrm{eff}} \delta^{-1} \left[ 1 + \delta^2
g(\delta) \right] \\
\times \left( Q_1 \langle \cos \gamma \rangle + Q_2
\langle \sin \gamma \cos \alpha \rangle \right) \bmath{\hat{J}} ,
\label{eq:Gamma-H2-sys-rot-avg}
\end{multline}
where
\be
Q_1 = - \frac{1}{N_s} \sum_{i=1}^{N_s} \cos \eta_i \sin \theta_{\mathrm{out}, i}
\sin \phi_{\mathrm{out}, i} ,
\ee
\begin{multline}
Q_2 = \frac{1}{N_s} \sum_{i=1}^{N_s} \left[ \left( 1 - \delta^2 \right)
A(\delta, \eta_i) \sin \eta_i \cos \eta_i \cos \phi^{\prime}_i \cos
\theta_{\mathrm{out}, i} \right. \\
- \delta A(\delta, \eta_i) \cos \phi^{\prime}_i
\sin \theta_{\mathrm{out}, i} \cos \phi_{\mathrm{out}, i} \\
+ \delta \sin \eta_i
\sin \phi^{\prime}_i \sin \theta_{\mathrm{out},i} \sin \phi_{\mathrm{out},i} \Big] ,
\end{multline}
and $\langle \cos \gamma \rangle$ and
$\langle \sin \gamma \cos \alpha \rangle$ are evaluated in equations
(\ref{eq:cos-gamma-av}) and (\ref{eq:cos-alpha-sin-gamma-av}).
The contribution to the drag torque is evaluated similarly, except that
$\bmath{\Delta J}_{\mathrm{H2}}(1)$ is replaced with
$\bmath{\Delta J}_{\mathrm{H2}}(2)$. After some algebra, we find that
\begin{multline}
\langle \bmath{\Gamma}_{\mathrm{H2, \, drag}} \rangle = - \sqrt{\upi} m n
v_{\mathrm{th}} a^4_{\mathrm{eff}} \delta^{-4/3} \left[ 1 + \delta^2
g(\delta) \right] \frac{\mathbfit{J}}{I_1} \\
\times \left( Q_3 \langle \cos^2 \gamma
\rangle + Q_4 \langle \sin^2 \gamma \sin^2 \alpha \rangle +
Q_5 \langle \sin^2 \gamma \cos^2 \alpha \rangle \right) ,
\label{eq:Gamma-H2-drag}
\end{multline}
where
\be
Q_3 = \frac{1}{N_s} \sum_{i=1}^{N_s} \cos^2 \eta_i ,
\ee
\be
Q_4 = \frac{r_2}{N_s} \sum_{i=1}^{N_s} \left( \cos^2 \eta_i \sin^2 \phi^{\prime}_i
+ \delta^2 \sin^2 \eta_i \right) ,
\ee
\be
Q_5 = \frac{r_3}{N_s} \sum_{i=1}^{N_s} \left( \cos^2 \eta_i \cos^2 \phi^{\prime}_i
+ \delta^2 \sin^2 \eta_i \right) ,
\ee
and $\langle \cos^2 \gamma \rangle$,
$\langle \sin^2 \gamma \sin^2 \alpha \rangle$,
and $\langle \sin^2 \gamma \cos^2 \alpha \rangle$ are evaluated in
equations (\ref{eq:avg-cos2-gamma}), (\ref{eq:sin-2-gamma-sin-2-alpha}),
and (\ref{eq:sin-2-gamma-cos-2-alpha}).
In the limit of uniform surface coverage of H$_2$-formation sites, equation
(\ref{eq:Gamma-H2-drag}) reproduces equation (\ref{eq:Gamma-H2}).
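The drag efficiency factors can be estimated for randomly placed sites; the illustrative sketch below samples site latitudes by rejection against the surface-area density $\mathrm{d}u/\mathrm{d}\eta$ given above and accumulates $Q_3$--$Q_5$. For a near-spherical grain with many sites, $Q_3 \rightarrow 2/3$, $Q_4 \rightarrow 2r_2/3$, and $Q_5 \rightarrow 2r_3/3$.

```python
import math, random

def sample_eta(delta, rng):
    """Rejection-sample eta with density proportional to
    du/deta = sqrt(delta^2 + (1 - delta^2) sin^2 eta) * cos eta  (<= 1)."""
    while True:
        eta = rng.uniform(-math.pi / 2.0, math.pi / 2.0)
        w = math.sqrt(delta**2 + (1.0 - delta**2) * math.sin(eta)**2) * math.cos(eta)
        if rng.random() < w:
            return eta

def Q345(delta, r2, r3, n_sites, seed=0):
    """Monte Carlo estimate of the drag efficiency factors Q_3, Q_4, Q_5."""
    rng = random.Random(seed)
    q3 = q4 = q5 = 0.0
    for _ in range(n_sites):
        eta = sample_eta(delta, rng)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        c2, s2 = math.cos(eta)**2, math.sin(eta)**2
        q3 += c2
        q4 += r2 * (c2 * math.sin(phi)**2 + delta**2 * s2)
        q5 += r3 * (c2 * math.cos(phi)**2 + delta**2 * s2)
    return q3 / n_sites, q4 / n_sites, q5 / n_sites

print(Q345(0.999, 1.3, 1.5, 100000))   # near-spherical: ~(2/3, 2*1.3/3, 2*1.5/3)
```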
The diffusion tensor in grain-body coordinates is given by
\begin{multline}
C_{ij, \mathrm{H}2} = 2 \sqrt{\upi} n m^2 v_{\mathrm{th}} v^2_{\mathrm{H}2}
a^4_{\mathrm{eff}} \delta^{-4/3} \left[ 1 + \delta^2 g \left( \delta \right)
\right] \\
\times \frac{1}{N_s} \sum_{\rho = 1}^{N_s} \Delta
\tilde{J}_{\mathrm{H}2, i}(1)_{\rho} \Delta \tilde{J}_{\mathrm{H}2, j}(1)_{\rho} ,
\label{eq:C-ij-H2-special-sites}
\end{multline}
where
\begin{multline}
\Delta \tilde{J}_{\mathrm{H}2, x}(1)_{\rho} = - A(\delta, \eta_{\rho}) \left( 1 -
\delta^2 \right) \sin \eta_{\rho} \cos \eta_{\rho} \sin \phi_{\rho}^{\prime} \cos
\theta_{\mathrm{out}, \rho} \\
+ A(\delta, \eta_{\rho}) \delta \sin \phi_{\rho}^{\prime}
\sin \theta_{\mathrm{out}, \rho} \cos \phi_{\mathrm{out}, \rho} \\
+ \delta \sin \eta_{\rho}
\cos \phi_{\rho}^{\prime} \sin \theta_{\mathrm{out}, \rho} \sin \phi_{\mathrm{out}, \rho} ,
\label{eq:Delta-J-tilde-x}
\end{multline}
\begin{multline}
\Delta \tilde{J}_{\mathrm{H}2, y}(1)_{\rho} = A(\delta, \eta_{\rho}) \left( 1 -
\delta^2 \right) \sin \eta_{\rho} \cos \eta_{\rho} \cos \phi_{\rho}^{\prime} \cos
\theta_{\mathrm{out}, \rho} \\
- A(\delta, \eta_{\rho}) \delta \cos \phi_{\rho}^{\prime}
\sin \theta_{\mathrm{out}, \rho} \cos \phi_{\mathrm{out}, \rho} \\
+ \delta \sin \eta_{\rho}
\sin \phi_{\rho}^{\prime} \sin \theta_{\mathrm{out}, \rho} \sin \phi_{\mathrm{out}, \rho} ,
\label{eq:Delta-J-tilde-y}
\end{multline}
\be
\label{eq:Delta-J-tilde-z}
\Delta \tilde{J}_{\mathrm{H}2, z}(1)_{\rho} = - \cos \eta_{\rho} \sin
\theta_{\mathrm{out}, \rho} \sin \phi_{\mathrm{out}, \rho} .
\ee
Transforming to angular-momentum coordinates and averaging over grain rotation,
\begin{multline}
\langle C_{xx, J, \mathrm{H}2} \rangle = \langle C_{yy, J, \mathrm{H}2} \rangle =
\frac{1}{2} C_{xx, \mathrm{H}2} \left( 1 - \langle \sin^2 \gamma \sin^2 \alpha
\rangle \right) \\
+ \frac{1}{2} C_{yy, \mathrm{H}2} \left( 1 - \langle \sin^2 \gamma
\cos^2 \alpha \rangle \right) + \frac{1}{2} C_{zz, \mathrm{H}2} \langle \sin^2
\gamma \rangle ,
\label{eq:C-xx-J-H2-sites}
\end{multline}
\begin{multline}
\langle C_{zz, J, \mathrm{H}2} \rangle = C_{xx, \mathrm{H}2} \langle \sin^2 \gamma
\sin^2 \alpha \rangle + C_{yy, \mathrm{H}2} \langle \sin^2 \gamma \cos^2 \alpha
\rangle \\
+ C_{zz, \mathrm{H}2} \langle \cos^2 \gamma \rangle ,
\label{eq:C-zz-J-H2-sites}
\end{multline}
and the off-diagonal elements all vanish. If
$C_{xx, \mathrm{H}2} = C_{yy, \mathrm{H}2}$, then equations (\ref{eq:C-xx-J-H2-sites})
and (\ref{eq:C-zz-J-H2-sites}) adopt the form of equations
(\ref{eq:C-xx-J-avg}) and (\ref{eq:C-zz-J-avg}).
Since only the three diagonal elements of $C_{ij, \mathrm{H}2}$ are needed, we
define three additional dimensionless efficiency factors,
\be
Q_6 = \frac{1}{N_s} \sum_{\rho = 1}^{N_s} \left[ \Delta \tilde{J}_{\mathrm{H}2, x}
(1)_{\rho} \right]^2 ,
\label{eq:Q_6-def}
\ee
\be
Q_7 = \frac{1}{N_s} \sum_{\rho = 1}^{N_s} \left[ \Delta \tilde{J}_{\mathrm{H}2, y}
(1)_{\rho} \right]^2 ,
\label{eq:Q_7-def}
\ee
\be
Q_8 = \frac{1}{N_s} \sum_{\rho = 1}^{N_s} \left[ \Delta \tilde{J}_{\mathrm{H}2, z}
(1)_{\rho} \right]^2 .
\label{eq:Q_8-def}
\ee
From equation (\ref{eq:C-ij-H2-special-sites}),
$C_{xx, \mathrm{H}2} \propto Q_6$,
$C_{yy, \mathrm{H}2} \propto Q_7$, and
$C_{zz, \mathrm{H}2} \propto Q_8$.
\subsection{Davis-Greenstein torque} \label{sec:D-G}
\citet{DG51} evaluated the rotation-averaged torque due to paramagnetic
dissipation in the interstellar magnetic field for the case of an oblate grain
with dynamic symmetry:
\be
\label{eq:DG-torque}
\langle \bmath{\Gamma}_{\mathrm{DG}} \rangle = -
\tau_{\mathrm{DG}}^{-1} \left[ 1 + \left( r_2 - 1 \right) \sin^2 \gamma \right]
\left( J_{x, B} \, \bmath{\hat{x}}_B + J_{y,B} \bmath{\hat{y}}_B \right) ,
\ee
where the Davis-Greenstein timescale is
\begin{multline}
\tau_{\mathrm{DG}} = \frac{2 \alpha_1 \bar{\rho} a_{\mathrm{eff}}^2}{5 \chi_0 T_2 B^2}
\\
\approx 1.52 \times 10^6 \, \mathrm{yr} \left( \frac{\alpha_1 \bar{\rho}}
{3 \, \mathrm{g} \, \mathrm{cm}^{-3}} \right) \left( \frac{a_{\mathrm{eff}}}
{0.1 \, \mu \mathrm{m}} \right)^2
\left( \frac{T_d}{15 \, \mathrm{K}} \right) \left( \frac{B}{5 \mu \mathrm{G}}
\right)^{-2} .
\label{eq:tau-DG}
\end{multline}
We assume that for electron paramagnetism,
$\chi^{\prime \prime}/\omega = \chi_0 T_2 = 10^{-13} (T_d/15 \, \mathrm{K})^{-1}
\, \mathrm{s}$, where $\chi^{\prime \prime}$ is the imaginary component of the
magnetic susceptibility. Recall that $\gamma$ is constant for a freely
rotating grain with
dynamic symmetry. Rather than attempt a detailed analysis of the torque
for the case of a grain lacking dynamic symmetry, we simply adopt the
Davis-Greenstein result in equation (\ref{eq:DG-torque}), replacing
$\sin^2 \gamma$ with $\langle \sin^2 \gamma \rangle$.
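Equation (\ref{eq:tau-DG}) is straightforward to evaluate; a minimal numerical sketch (cgs units, with the adopted $\chi_0 T_2$) reproduces the quoted coefficient.

```python
import math

def tau_DG_years(alpha1, rho_bar, a_eff_cm, T_d, B_gauss):
    """Davis-Greenstein time, with chi''/omega = chi_0 T_2 = 1e-13 (T_d/15 K)^-1 s."""
    chi0_T2 = 1.0e-13 * (T_d / 15.0) ** -1               # s
    tau_s = 2.0 * alpha1 * rho_bar * a_eff_cm**2 / (5.0 * chi0_T2 * B_gauss**2)
    return tau_s / 3.156e7                               # seconds -> years

print(tau_DG_years(1.0, 3.0, 1.0e-5, 15.0, 5.0e-6))      # ~1.52e6 yr
```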
\section{Simulations} \label{sec:simulations}
In this section, we describe the fundamental elements of the simulation
codes used in this work.
\subsection{Dimensionless variables}
For numerical integration of the equations of motion, we adopt dimensionless
variables
\be
J^{\prime} = \frac{J}{I_1 \omega_T}
\ee
and
\be
t^{\prime} = \frac{t}{\tau_{\mathrm{drag}}} ,
\ee
where the thermal rotation rate is
\begin{multline}
\omega_T = \left( \frac{15 k T_{\mathrm{gas}}}{8 \upi \bar{\rho} a_{\mathrm{eff}}^5}
\right)^{1/2} \\
= 1.6573 \times 10^5 \left( \frac{\bar{\rho}}{3 \, \mathrm{g} \,
\mathrm{cm}^{-3}} \right)^{-1/2} \left( \frac{T_{\mathrm{gas}}}{100 \,
\mathrm{K}} \right)^{1/2} \left( \frac{a_{\mathrm{eff}}}{0.1 \, \mu
\mathrm{m}} \right)^{-5/2} \mathrm{s}^{-1}
\label{eq:omega-T}
\end{multline}
and the drag time-scale is
\begin{multline}
\tau_{\mathrm{drag}} = \frac{3 I_1 \delta^{4/3}}{4 \sqrt{\upi} m n v_{\mathrm{th}}
a_{\mathrm{eff}}^4} = 1.045 \times 10^5 \, \alpha_1 \delta^{4/3} \\
\times \left(
\frac{\bar{\rho}}{3 \, \mathrm{g} \, \mathrm{cm}^{-3}} \right) \left(
\frac{a_{\mathrm{eff}}}{0.1 \, \mu \mathrm{m}} \right) \left( \frac{n}
{30 \, \mathrm{cm}^{-3}} \right)^{-1} \left( \frac{T_{\mathrm{gas}}}{100 \,
\mathrm{K}} \right)^{-1/2} \mathrm{yr} .
\label{eq:tau-drag}
\end{multline}
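The numerical coefficients in equations (\ref{eq:omega-T}) and (\ref{eq:tau-drag}) can be reproduced as follows; this sketch assumes $I_1 = \alpha_1 (8\upi/15) \bar{\rho} a_{\mathrm{eff}}^5$ and $v_{\mathrm{th}} = (2kT_{\mathrm{gas}}/m)^{1/2}$ for H atoms (standard definitions not restated in this section).

```python
import math

k_B  = 1.380649e-16      # erg/K
m_H  = 1.6726e-24        # g (gas particles assumed to be H atoms)
YEAR = 3.156e7           # s

def omega_T(rho_bar, T_gas, a_eff):
    """Thermal rotation rate, equation for omega_T (cgs)."""
    return math.sqrt(15.0 * k_B * T_gas / (8.0 * math.pi * rho_bar * a_eff**5))

def tau_drag_years(alpha1, delta, rho_bar, a_eff, n, T_gas):
    # Assumes I_1 = alpha_1 (8 pi / 15) rho_bar a_eff^5 and v_th = sqrt(2 k T_gas / m).
    I1 = alpha1 * (8.0 * math.pi / 15.0) * rho_bar * a_eff**5
    v_th = math.sqrt(2.0 * k_B * T_gas / m_H)
    tau = (3.0 * I1 * delta**(4.0 / 3.0)
           / (4.0 * math.sqrt(math.pi) * m_H * n * v_th * a_eff**4))
    return tau / YEAR

print(omega_T(3.0, 100.0, 1.0e-5))                          # ~1.6573e5 s^-1
print(tau_drag_years(1.0, 1.0, 3.0, 1.0e-5, 30.0, 100.0))   # ~1.045e5 yr
```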
\subsection{Internal relaxation} \label{subsec:int-relax}
In dimensionless variables, the Langevin equation for internal relaxation
becomes
\be
\label{eq:internal-Langevin-eq-dimensionless}
\mathrm{d}q = - A_1(J^{\prime}, q) \,
\frac{\tau_{\mathrm{drag}}}{\tau_{\mathrm{int}}(J^{\prime})}
\, \mathrm{d}t^{\prime} + B_1[b(J^{\prime}), q] \,
\sqrt{\frac{\tau_{\mathrm{drag}}}{\tau_{\mathrm{int}}(J^{\prime})}} \,
\mathrm{d}w^{\prime}_{\mathrm{int}}
\ee
where $\mathrm{d}w^{\prime}_{\mathrm{int}}$ is a Gaussian random variable with
variance $\mathrm{d}t^{\prime}$,
\be
A_1(J^{\prime}, q) = - \tau_{\mathrm{int}}(J^{\prime}) \, A(q) ,
\ee
and
\be
B_1[b(J^{\prime}), q] = \sqrt{\tau_{\mathrm{int}}(J^{\prime}) \, D[b(J^{\prime}), q]} .
\ee
Random numbers and Gaussian random variables are computed using modified
versions of the routines {\sc ran2} and {\sc gasdev} from \citet{Press92}.
The internal drift and diffusion coefficients $A_1(q)$ and $B_1(b,q)$
are computed using equations (55), (61), and (81) in \citet{KW17}, adopting
their assumption that $D[b(J^{\prime}), q] = 0$ when $q=r_3$. The function
$b(J^{\prime})$ is defined by
\be
\label{eq:b-def}
b(J^{\prime}) = \frac{J^2}{2 I_1 k T_d} =
\frac{\alpha_1}{2} \, \frac{T_{\mathrm{gas}}}{T_{\mathrm{d}}}(J^{\prime})^2 .
\ee
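The two forms in equation (\ref{eq:b-def}) can be checked numerically; the sketch again assumes $I_1 = \alpha_1 (8\upi/15) \bar{\rho} a_{\mathrm{eff}}^5$, so that $I_1 \omega_T^2 = \alpha_1 k T_{\mathrm{gas}}$ exactly.

```python
import math

k_B, rho_bar, a_eff = 1.380649e-16, 3.0, 1.0e-5     # cgs
T_gas, T_d, alpha1 = 100.0, 15.0, 1.0

# Assumed standard definitions (not restated in this section):
I1 = alpha1 * (8.0 * math.pi / 15.0) * rho_bar * a_eff**5
omega_T = math.sqrt(15.0 * k_B * T_gas / (8.0 * math.pi * rho_bar * a_eff**5))

for Jp in (0.5, 1.0, 3.0):
    J = I1 * omega_T * Jp                            # physical angular momentum
    b_direct = J**2 / (2.0 * I1 * k_B * T_d)
    b_scaled = 0.5 * alpha1 * (T_gas / T_d) * Jp**2
    print(Jp, b_direct, b_scaled)                    # the two forms agree
```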
From the final two paragraphs in Section \ref{sec:nuclear}, we take
\be
\tau_{\mathrm{int}}(J^{\prime}) = \left\{ \left[ \tau_{\mathrm{Bar}}(J^{\prime})
\right]^{-1} + \left[ \tau_{\mathrm{nuc}}(J^{\prime}) \right]^{-1} \right\}^{-1} ,
\label{eq:tau-int-defined}
\ee
\begin{multline}
\tau_{\mathrm{Bar}}(J^{\prime}) = 6.77 \times 10^{6} \, \alpha_1 \left(
\frac{\bar{\rho}}{3 \, \mathrm{g} \, \mathrm{cm}^{-3}} \right)^2 \left(
\frac{a_{\mathrm{eff}}}{0.1 \, \mu \mathrm{m}} \right)^7 \left(
\frac{T_{\mathrm{gas}}}{100 \, \mathrm{K}} \right)^{-1} \\
\times \left(
\frac{T_{\mathrm{d}}}{15 \, \mathrm{K}} \right) \left( J^{\prime} \right)^{-2}
\, \mathrm{s} \ ,
\end{multline}
\begin{multline}
\tau_{\mathrm{nuc}}(J^{\prime}) = 1.36 \times 10^{-5} \, \tau_{\mathrm{Bar}}(J^{\prime})
\bigg\{ 1 + 410 \bigg[ \left( \frac{\bar{\rho}}{3 \, \mathrm{g} \,
\mathrm{cm}^{-3}} \right)^{-1/2} \\
\times \left( \frac{T_{\mathrm{gas}}}{100 \, \mathrm{K}}
\right)^{1/2} \left( \frac{a_{\mathrm{eff}}}{0.1 \, \mu \mathrm{m}} \right)^{-5/2}
\bigg]^{1.96} \left( J^{\prime} \right)^{1.96} \bigg\}^{1.02} .
\label{eq:tau-nuc-defined}
\end{multline}
Fig.~\ref{fig:int-relax-times} shows the internal relaxation times
from the above three equations versus $J^{\prime}$, adopting parameter values
suitable for a silicate grain in the cold neutral medium (CNM):
$r_2 = 1.3$, $r_3 = 1.5$, $\bar{\rho} = 3 \, \mathrm{g} \, \mathrm{cm}^{-3}$,
$T_{\mathrm{gas}} = 100 \, \mathrm{K}$, $T_d = 15 \, \mathrm{K}$, and
$a_{\mathrm{eff}} = 0.2 \, \mu \mathrm{m}$.
(Throughout this paper, we will denote the base-10 logarithm by `log' and the
natural logarithm by `ln'.) When following the evolution of
$q$ in simulations, we must ensure that the time step size
$\mathrm{d}t^{\prime} \ll \tau^{\prime}_{\mathrm{int}}$.
\begin{figure}
\includegraphics[width=90mm]{fig1.eps}
\caption{
Internal relaxation times normalized to the drag time, from equations
(\ref{eq:tau-int-defined})--(\ref{eq:tau-nuc-defined}), for
$r_2 = 1.3$, $r_3 = 1.5$, $\bar{\rho} = 3 \, \mathrm{g} \, \mathrm{cm}^{-3}$,
$T_{\mathrm{gas}} = 100 \, \mathrm{K}$, $T_d = 15 \, \mathrm{K}$, and
$a_{\mathrm{eff}} = 0.2 \, \mu \mathrm{m}$.
}
\label{fig:int-relax-times}
\end{figure}
We construct interpolation tables for $A_1(q)$ and $B_1(b, q)$, using
{\sc mathematica}. As seen in
Fig. 1 in \citet{KW17}, these functions approach zero very steeply at
$q=r_2$. They also approach zero at $q=r_3$, as does $A(q)$ at $q=1$.
In order to obtain precise values of $A_1(q)$ and $B_1(b, q)$ for all $q$,
we construct tables for $B_1(b,q)$ for six separate ranges
of $q$: (1) $q \in [1 + 10^{-15}, 1.01]$ with uniform spacing in
$\ln(q-1)$, (2) $q \in [1.01, 1.28]$ with uniform spacing in
$q$, (3) $q \in [1.28, 1.3 - 10^{-15}]$ (recall that $r_2=1.3$) with
uniform spacing in $\ln(r_2-q)$, (4) $q \in [1.3 + 10^{-15}, 1.32]$
with uniform spacing in $\ln(q-r_2)$, (5) $q \in [1.32, 1.49]$ with
uniform spacing in $q$, (6) $q \in [1.49, 1.5 - 10^{-15}]$ (recall
that $r_3 = 1.5$) with uniform spacing in $\ln(r_3-q)$.
For each range, we take 2000 values of $q$. We
take 451 values of $b$, spaced uniformly in $\ln b$, from
$b_{\mathrm{min}} = 4.45 \times 10^{-10}$ to $b_{\mathrm{max}}= 5 \times 10^5$.
As seen in equation (81) in \citet{KW17}, $B_1(b, q)$ becomes independent of
$b$ as $b \rightarrow 0$. Thus, for $b < b_{\mathrm{min}}$, the minimum value of
$b$ in the interpolation tables, we simply take $b = b_{\mathrm{min}}$. Over the
full range of $q$, the fractional error due to this approximation is always
$< 10^{-10}$.
We also construct interpolation tables for $A_1(q)$ in ranges (2)--(5).
For $A_1(q)$ in ranges (1) and (6), we employ the asymptotic formulas in
\citet{KW17}, their equations (59) and (62). These are modified slightly
as $q$ deviates from 1 and $r_3$ to ensure that the asymptotic formulas
yield exactly the same result as the interpolation table when $q = 1.01$
and $q = 1.49$, which are the (1)--(2) and (5)--(6) boundaries, respectively.
Thus, we take
\be
A_1(q) = c_{A<} \, (q-1) \left[ 1 - \upsilon_< (q-1) \right] \ \ \ ,
\ \ \ q < 1.01
\ee
and
\be
A_1(q) = c_{A>} \, (r_3 - q) \left[ 1 - \upsilon_> (r_3 - q) \right] \ \ \ ,
\ \ \ q > 1.49
\ee
with
\be
c_{A<} = \frac{r_3^2 (r_2-1) + r_2^2 (r_3 - 1)}{2} ,
\ee
\be
c_{A>} = \frac{r_3 - r_2 + r_2^2 (r_3 - 1)}{2} ,
\ee
\be
\upsilon_< = 100 \left[ 1 - \frac{100 A_1(q = 1.01)}{c_{A<}} \right] ,
\ee
and
\be
\upsilon_> = 100 \left[ 1 - \frac{100 A_1(q = 1.49)}{c_{A>}} \right] .
\ee
These expressions are exact when $q=1.01$ and $1.49$ and have fractional
errors less than $3 \times 10^{-6}$ in range (1) and $2 \times 10^{-5}$ in
range (6).
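By construction, the calibrated asymptotic form reproduces the tabulated value at the range boundary. The following sketch illustrates this for the lower boundary, with a purely hypothetical table value for $A_1(q{=}1.01)$; the $q > 1.49$ branch is the mirror image in $(r_3 - q)$.

```python
def A1_near_1(q, c, upsilon):
    """Asymptotic form near q = 1: c (q - 1) [1 - upsilon (q - 1)]."""
    return c * (q - 1.0) * (1.0 - upsilon * (q - 1.0))

r2, r3 = 1.3, 1.5
c_lt = (r3**2 * (r2 - 1.0) + r2**2 * (r3 - 1.0)) / 2.0    # = 0.76

A1_table = 0.009   # hypothetical table value at q = 1.01 (illustration only)
upsilon_lt = 100.0 * (1.0 - 100.0 * A1_table / c_lt)

print(A1_near_1(1.01, c_lt, upsilon_lt))   # equals A1_table by construction
```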
Lacking a first-principles theory of Barnett relaxation, it is not clear
how to treat the boundaries at $q=1$ and $q=r_3$. In test runs, we
found frequent overshooting of $q=1$ (where the diffusion coefficient is
non-zero) but not of $q=r_3$ (where the diffusion coefficient is taken to
be zero). In test simulations assuming thermal equilibrium, we found that the
distribution function for $q$ resulting from the simulation best agrees with
the theoretical distribution if a `reflecting' boundary condition is adopted
at $q=1$. That is, if the value of $q$ resulting from the Langevin equation
is $1-\epsilon$, then we instead set $q=1+\epsilon$.
The treatment of the boundary at $q=r_3$ does not significantly
affect the results. Thus, we adopt `reflecting' boundary conditions at both
$q=1$ and $q=r_3$.
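The reflecting boundary condition can be sketched as a modified Euler-Maruyama update; the drift and diffusion functions below are hypothetical toy stand-ins for the tabulated $A_1$ and $B_1$ terms, for illustration only.

```python
import math, random

def step_q(q, drift, diff, dt, rng, q_lo=1.0, q_hi=1.5):
    """One Euler-Maruyama step for q with 'reflecting' boundaries: an
    overshoot to q_lo - eps is mapped back to q_lo + eps (likewise at q_hi)."""
    dw = rng.gauss(0.0, math.sqrt(dt))
    q_new = q + drift(q) * dt + diff(q) * dw
    if q_new < q_lo:
        q_new = 2.0 * q_lo - q_new
    if q_new > q_hi:
        q_new = 2.0 * q_hi - q_new
    return q_new

rng = random.Random(42)
q, q_min, q_max = 1.2, 1.2, 1.2
for _ in range(20000):
    # Toy Ornstein-Uhlenbeck drift and constant diffusion (not the real A_1, B_1).
    q = step_q(q, drift=lambda x: -(x - 1.2), diff=lambda x: 0.3, dt=1e-3, rng=rng)
    q_min, q_max = min(q_min, q), max(q_max, q)
print(q_min, q_max)   # trajectory stays within [1.0, 1.5]
```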
In contrast,
\citet{W09} and \citet{KW17} reduced the time step
and repeated the step whenever the simulation overshot $q=1$. Since this
prescription involves discarding randomly chosen variables,
it can introduce statistical biases. Thus, we reject that prescription
here.
\subsection{Angular momentum evolution}
In dimensionless variables, the Langevin equations for the grain's
angular-momentum components become
\be
\label{eq:external-Langevin-eq-dimensionless}
\mathrm{d}J^{\prime}_{i, J} = \langle \Gamma^{\prime}_{i, J}(\mathbfit{J}^{\prime}, q,
\mathrm{fs}) \rangle \, \mathrm{d}t^{\prime} + \sum_{j=1}^3 \langle
B^{\prime}_{ij, J}(\mathbfit{J}^{\prime}, q, \mathrm{fs}) \rangle
\, \mathrm{d}w^{\prime}_{j, J} \ \ \ \ (i = 1, 2, 3) ,
\ee
where $\mathrm{d}w^{\prime}_{j, J}$ are Gaussian random variables with
variance $\mathrm{d}t^{\prime}$,
\be
\label{eq:Gamma-prime}
\langle \Gamma^{\prime}_{i, J}(\mathbfit{J}^{\prime}, q, \mathrm{fs}) \rangle =
\frac{\tau_{\mathrm{drag}} \langle \Gamma_{i, J}(\mathbfit{J}^{\prime}, q,
\mathrm{fs}) \rangle}{I_1 \omega_T} ,
\ee
\be
\label{eq:B-prime}
\langle B^{\prime}_{ij, J}(\mathbfit{J}^{\prime}, q, \mathrm{fs}) \rangle =
\frac{\tau^{1/2}_{\mathrm{drag}} \langle B_{ij, J}(\mathbfit{J}^{\prime}, q,
\mathrm{fs}) \rangle}{I_1 \omega_T} ,
\ee
and
$\langle \Gamma_{i, J}(\mathbfit{J}^{\prime}, q, \mathrm{fs}) \rangle$ and
$\langle B_{ij, J}(\mathbfit{J}^{\prime}, q, \mathrm{fs}) \rangle$ are the
components of the rotationally averaged mean torque and diffusion tensor
resulting from all of the external processes under consideration,
respectively.
In the following sections, we will assume that all of the particles departing
the grain surface do so either via thermal evaporation or within H$_2$
molecules formed at special surface sites.
\subsection{Collisions and thermal evaporation}
First consider the case that all of the particles departing the grain surface
do so via thermal evaporation.
From equations (\ref{eq:Gamma-col-avg}), (\ref{eq:Gamma-ev-avg}),
(\ref{eq:avg-quantity}), (\ref{eq:app-i1}), (\ref{eq:app-i2}),
(\ref{eq:app-i4}), and (\ref{eq:Gamma-prime}), the dimensionless rotationally
averaged mean torque, arising from both collisions and evaporation, is given by
\begin{multline}
\langle \bmath{\Gamma^{\prime}} \rangle_{\mathrm{col + ev}} = \langle
\bmath{\Gamma^{\prime}}_{\mathrm{col}} \rangle + \langle
\bmath{\Gamma^{\prime}}_{\mathrm{ev}} \rangle \\
= - \left[ Z_1(\delta)
\langle \cos^2 \gamma \rangle + Z_2(\delta) \left( q - \langle \cos^2 \gamma
\rangle \right) \right] \bmath{J^{\prime}} ,
\label{eq:mean-torque-thermal-eq}
\end{multline}
where $Z_1(\delta)$ and $Z_2(\delta)$ are defined in equations (\ref{eq:Z-1})
and (\ref{eq:Z-2}), respectively.
Note that $Z_1(\delta) \rightarrow 1$ and $Z_2(\delta) \rightarrow 1$
as $\delta \rightarrow 1$ (i.e.~as the spheroid approaches a sphere).
Thus, $\langle \bmath{\Gamma^{\prime}} \rangle = - \mathbfit{J}^{\prime}$
for a sphere, motivating the definition of the drag time-scale in equation
(\ref{eq:tau-drag}).
From equations (\ref{eq:C-zz-J-avg}), (\ref{eq:C-xx-J-avg}),
(\ref{eq:C-ij-ev}), and (\ref{eq:B-prime}),
the dimensionless rotationally averaged diffusion tensor, arising from both
collisions and evaporation, is given by
\be
\label{eq:Czz-thermal-eq}
\langle C^{\prime}_{zz, J} \rangle_{\mathrm{col + ev}}
= \alpha_1^{-1} \left( 1 + \frac{T_{\mathrm{ev}}}
{T_{\mathrm{gas}}} \right) \left[ Z_1(\delta) \langle \cos^2 \gamma
\rangle + Z_2(\delta) \langle \sin^2 \gamma \rangle \right] ,
\ee
\begin{multline}
\langle C^{\prime}_{xx, J} \rangle_{\mathrm{col + ev}} = \langle C^{\prime}_{yy, J}
\rangle_{\mathrm{col + ev}} \\
= \alpha_1^{-1} \left( 1 + \frac{T_{\mathrm{ev}}}{T_{\mathrm{gas}}} \right) \
\frac{1}{2}
\left[ Z_2(\delta) \left( 1 + \langle \cos^2 \gamma \rangle \right)
+ Z_1(\delta) \langle \sin^2 \gamma \rangle \right] .
\label{eq:Cxx-thermal-eq}
\end{multline}
For thermal equilibrium, $T_{\mathrm{ev}} = T_{\mathrm{gas}}$.
\citet{RL99} derived the mean torque and diffusion coefficients due to
collisions and evaporation for an oblate spheroid with dynamic symmetry. They
presented results in an inertial frame, corresponding to our alignment
coordinates. With $T_{\mathrm{ev}} = T_d$ and
$r_2=r_3$, our expressions (equations
\ref{eq:mean-torque-thermal-eq}--\ref{eq:Cxx-thermal-eq}, with the diffusion
tensor transformed from angular-momentum to alignment coordinates using
equations \ref{eq:x-J}--\ref{eq:z-J}) reduce to
the \citet{RL99} expressions (their equations A10--A17). Note that
our $Z_1(\delta)$ and $Z_2(\delta)$ correspond to their functions
$\Gamma_{\parallel}(e)$ and $\Gamma_{\perp}(e)$, respectively. Note also that
they define their dimensionless quantities somewhat differently than we do,
so that the ratio of our dimensionless mean torque to theirs equals
$Z_1(\delta)$ and the ratio of our dimensionless diffusion tensor components
to theirs equals $Z_1(\delta)/\alpha_1$.
\subsection{Collisions and H$_2$ formation at special sites}
\label{sec:coll-H2-special}
Now consider the case that all of the particles departing the grain surface
do so within H$_2$ molecules formed at special surface sites.
From equations (\ref{eq:Gamma-col-avg}), (\ref{eq:Gamma-H2-sys-rot-avg}),
and (\ref{eq:Gamma-H2-drag}), the dimensionless systematic and drag torques are
\begin{multline}
\langle \bmath{\Gamma^{\prime}}_{\mathrm{H2, sys}} \rangle = \frac{3 v_{\mathrm{H2}}}
{4 a_{\mathrm{eff}} \omega_T} \delta^{1/3} \left[ 1 + \delta^2 g(\delta) \right] \\
\times
\left( Q_1 \langle \cos \gamma \rangle + Q_2 \langle \sin \gamma \cos \alpha
\rangle \right) \bmath{\hat{J}} ,
\label{eq:H2-sys-dimensionless}
\end{multline}
\begin{multline}
\langle \bmath{\Gamma^{\prime}}_{\mathrm{drag}} \rangle_{\mathrm{col + H2}} =
\langle \bmath{\Gamma^{\prime}}_{\mathrm{col}} \rangle +
\langle \bmath{\Gamma^{\prime}}_{\mathrm{H2, drag}} \rangle = \\
- \frac{3}{8} J^{\prime} \bmath{\hat{J}} \left[ \left( 1 - \delta^2 \right)^2
\mathcal{I}_4(\delta) q + \left\{ 2 \left[ 1 + \delta^2 g(\delta) \right]
Q_3 - \left( 1 - \delta^2 \right)^2 \mathcal{I}_4(\delta) \right\} \right. \\
\times \langle \cos^2 \gamma \rangle
+ 2 \left[ 1 + \delta^2 g(\delta) \right] \Big( Q_4
\langle \sin^2 \gamma \sin^2 \alpha \rangle \\
+ Q_5 \langle \sin^2 \gamma
\cos^2 \alpha \rangle \Big) \bigg] .
\label{eq:torque-drag-plus-H2}
\end{multline}
From equations (\ref{eq:C-xx-J-H2-sites}) and (\ref{eq:C-zz-J-H2-sites}),
only the diagonal elements of $C_{ij, \mathrm{H}2}$ are needed. From equation
(\ref{eq:C-ij-H2-special-sites}),
\be
C^{\prime}_{xx, \mathrm{H}2} = \frac{3 m v^2_{\mathrm{H2}}}{2 k_B T_{\mathrm{gas}}} \
\alpha_1^{-1} \left[ 1 + \delta^2 g(\delta) \right] Q_6 \, ;
\label{eq:C-xx-prime-H2}
\ee
$C^{\prime}_{yy, \mathrm{H}2}$ and $C^{\prime}_{zz, \mathrm{H}2}$ are of identical form,
with $Q_6$ replaced by $Q_7$ and $Q_8$, respectively.
The components of the dimensionless rotationally averaged diffusion tensor for
collisions are given in equations (\ref{eq:Czz-thermal-eq}) and
(\ref{eq:Cxx-thermal-eq}),
omitting the term $T_{\mathrm{ev}} / T_{\mathrm{gas}}$.
\subsection{Davis-Greenstein torque}
From equation (\ref{eq:DG-torque}), and the last sentence in Section
\ref{sec:D-G}, we take the dimensionless Davis-Greenstein torque to be
\be
\label{eq:DG-torque-dimensionless}
\langle \bmath{\Gamma}^{\prime}_{\mathrm{DG}} \rangle = - \frac{\tau_{\mathrm{drag}}}
{\tau_{\mathrm{DG}}} \left[ 1 + \left( r_2 - 1 \right) \langle \sin^2 \gamma
\rangle \right]
\left( J^{\prime}_{x, B} \, \bmath{\hat{x}}_B + J^{\prime}_{y,B}
\bmath{\hat{y}}_B \right) .
\ee
It is somewhat more efficient to compute $\mathrm{d}\mathbfit{J}$ due to the
Davis-Greenstein torque in alignment coordinates rather than in
angular-momentum coordinates.
\subsection{Omission of the Barnett torque}
The term
$J^{\prime}_{x, B} \, \bmath{\hat{x}}_B + J^{\prime}_{y,B} \bmath{\hat{y}}_B$
in equation (\ref{eq:DG-torque-dimensionless}) can also be expressed as
$J^{\prime} \sin \xi ( \bmath{\hat{\xi}} \cos \xi + \bmath{\hat{J}} \sin \xi)$.
As seen in equations
(\ref{eq:mean-torque-thermal-eq})--(\ref{eq:C-xx-prime-H2})
and the preceding sentence,
none of the torques or diffusion coefficients considered so far depend
explicitly on the coordinate $\phi_B$. The
Barnett torque, due to the interaction of the grain's Barnett magnetic moment
with the interstellar magnetic field, yields a rapid precession of
the grain angular momentum about the field direction, i.e.~rapid change of
$\phi_B$. Since none of the processes here depend on $\phi_B$, for simplicity
we omit the Barnett torque and set $\phi_B = 0$ at the end of each time step.
\subsection{Rotational averages}
As described in Section \ref{sec:external-processes} and seen in equations
(\ref{eq:mean-torque-thermal-eq})--(\ref{eq:DG-torque-dimensionless}),
several rotational
averages involving the Eulerian angles (e.g.~$\langle \cos^2 \gamma \rangle$)
are needed. As derived in Section \ref{sec:grain-rotation}, these are
expressed in terms of $\langle \mathrm{dn}^2(\nu, k^2) \rangle$,
$\langle \mathrm{cn}^2(\nu, k^2) \rangle$, and $\upi/[2 K(k^2)]$. We tabulate
each of these quantities for $10^4$ values of $k^2$ between 0 and 0.9999
and interpolate. When $k^2 > 0.9999$, we employ the approximations
\be
\langle \mathrm{dn}^2(\nu, k^2) \rangle \approx \left[ \ln \left(
\frac{4}{\sqrt{1 - k^2}} \right) \right]^{-1}
\ee
and
\be
K(k^2) \approx \left( 1 + \frac{1-k^2}{4} \right) \ln \left(
\frac{4}{\sqrt{1 - k^2}} \right) - \frac{1-k^2}{4} ;
\ee
$\langle \mathrm{cn}^2(\nu, k^2) \rangle$ is obtained from identity
(\ref{eq:cn2-avg}).
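The asymptotic forms above can be checked numerically. A minimal sketch, with the standard arithmetic--geometric-mean identity $K(m) = \upi/[2\,\mathrm{agm}(1, \sqrt{1-m})]$ serving as the reference value; the function names are ours.

```python
import math

def K_exact(m):
    """K(m) via the arithmetic-geometric mean: K = pi / (2 agm(1, sqrt(1-m)))."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def K_approx(m):
    """Asymptotic form quoted in the text for k^2 > 0.9999."""
    L = math.log(4.0 / math.sqrt(1.0 - m))
    return (1.0 + (1.0 - m) / 4.0) * L - (1.0 - m) / 4.0

def dn2_avg_approx(m):
    """Asymptotic <dn^2(nu, k^2)> quoted in the text for k^2 > 0.9999."""
    return 1.0 / math.log(4.0 / math.sqrt(1.0 - m))
```

At $k^2 = 0.9999$ the approximation for $K$ agrees with the AGM value to better than one part in $10^6$.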
\subsection{Thermal averages over $q$}
For sufficiently low $J^{\prime}$, $q$ can reach values higher than $r_2$
and flipping can occur. Of course, it is necessary to follow the evolution
of $q$ in this case. For high $J^{\prime}$, the flipping probability is
negligible, so it is sufficient to average the torque and diffusion
coefficients, assuming a thermal distribution of $q$ values.
Fig.~\ref{fig:prob-q-r2} shows the probability that $q>r_2$ for a thermal
distribution, as a function of $b$. We follow the evolution of $q$ when
$b < b_{\mathrm{crit}} = 300$ and average over $q$ when $b > b_{\mathrm{crit}}$,
using equation (63) in \citet{KW17} for the thermal-equilibrium distribution
of $q$. Interpolation tables for thermally averaged quantities were generated
using {\sc mathematica}. Whenever $b$ crosses $b_{\mathrm{crit}}$ from above,
a value of $q$ is randomly chosen from its thermal-equilibrium distribution
for $b = b_{\mathrm{crit}}$.
\begin{figure}
\includegraphics[width=90mm]{fig2.eps}
\caption{
The probability that $q > r_2$ for a thermal distribution, as a function of
$b$.
}
\label{fig:prob-q-r2}
\end{figure}
\subsection{Time step size} \label{sec:time-step-size}
In the high-$b$ regime, for which we average over $q$, we take a constant
time step size $\mathrm{d}t^{\prime} = k_{\mathrm{high}}$. In the low-$b$ regime,
we take the step size
$\mathrm{d}t^{\prime} = k_{\mathrm{low}} \tau_{\mathrm{int}}^{\prime}$.
In most simulations, we take $k_{\mathrm{low}} = 10^{-2}$ and
$k_{\mathrm{high}} = 10^{-4}$.
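In code form, the step-size rule is simply the following (a sketch; the function name and keyword defaults are ours, using the fiducial values quoted above):

```python
def time_step(b, tau_int, b_crit=300.0, k_low=1e-2, k_high=1e-4):
    """Step-size rule: a constant step dt' = k_high in the high-b regime
    (torques thermally averaged over q) and dt' = k_low * tau'_int in the
    low-b regime, where q is followed explicitly."""
    return k_high if b > b_crit else k_low * tau_int
```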
\subsection{Grain flipping}
When $q < r_2$ ($q > r_2$), the grain is in either the positive or negative
flip state with respect to $\bmath{\hat{a}}_1$ ($\bmath{\hat{a}}_3$).
Whenever $q=r_2$ is crossed, the flip state is chosen randomly, with equal
probability to be positive or negative. A grain flips when it starts with
$q<r_2$ in one flip state with respect to $\bmath{\hat{a}}_1$, makes an
excursion to $q > r_2$, and returns to $q<r_2$ in the opposite flip state
with respect to $\bmath{\hat{a}}_1$.
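The flip-state bookkeeping can be sketched as follows; the function and the synthetic $q$ series are illustrative assumptions. Because the flip state is re-drawn with equal probability at every crossing of $q = r_2$, a completed excursion flips the grain with probability one half.

```python
import random

def track_flips(q_series, r2, rng):
    """Count grain flips: the flip state is re-drawn with equal probability
    at every crossing of q = r2; a flip is recorded when the grain leaves
    q < r2 in one flip state and returns in the opposite one."""
    state = 1                 # flip state with respect to a_1
    state_before = state
    below = True              # start in the q < r2 region
    flips = 0
    for q in q_series:
        now_below = q < r2
        if now_below != below:                    # crossed q = r2
            if below:                             # excursion begins
                state_before = state
            state = 1 if rng.random() < 0.5 else -1
            if now_below and state != state_before:
                flips += 1                        # returned flipped
            below = now_below
    return flips

# A synthetic q series with 2e4 excursions beyond r2 = 1.3: each completed
# excursion should flip the grain with probability 1/2.
rng = random.Random(0)
frac = track_flips([1.2, 1.4] * 20000, 1.3, rng) / 20000
```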
\section{Code test: thermal equilibrium}
\label{sec:code-test}
In order to test both the theoretical development in Section
\ref{sec:external-processes} and our simulation code, we performed simulations
with two simplified codes adopting thermal equilibrium.
\subsection{Distribution of $q$}
In the first code, external processes are omitted; the angular
momentum and grain temperature are held constant and $q$ is evolved using
equation (\ref{eq:internal-Langevin-eq-dimensionless}). In these
simulations, the time step is taken to be $10^{-3} \, \tau_{\mathrm{int}}$ and
the total duration of the simulation is $10^8 \, \tau_{\mathrm{int}}$.
The code returns the distribution function $f(q)$; the fraction of the time
that the grain has dimensionless energy parameter between $q$ and
$q+\mathrm{d}q$ is $f(q) \mathrm{d}q$. These are compared with the
theoretical distribution $f_{\mathrm{TE}}(q)$, given by equation (63) in
\citet{KW17}. Fig.~\ref{fig:q-hist-eq} shows results from the simulations
(heavy dashed curves) and theory (light solid curves) for two values of
$b$ (defined in eq.~\ref{eq:b-def}),
$b=1.031$ and $b=10.39$. There is a vertical asymptote at $q=r_2=1.3$,
which the analytic curve captures better (given the finite number of values
of $q$ at which $f_{\mathrm{TE}}$ is evaluated) than the simulation. Otherwise,
apart from slight deviations at $q=1$ and $q=r_3$, the simulation and theory
agree closely.
\begin{figure}
\includegraphics[width=90mm]{fig3.eps}
\caption{
The thermal-equilibrium
distribution function $f(q)$ from simulations (heavy dashed curves) and
theory (light solid curves) for two values of $b$, as indicated.
}
\label{fig:q-hist-eq}
\end{figure}
\subsection{Distribution of $J^{\prime}$}
In the second test, we check that the simulation correctly reproduces the
distribution $f(J^{\prime})$, defined such that the fraction of the time that
the grain has dimensionless angular momentum between $J^{\prime}$ and
$J^{\prime} + \mathrm{d} J^{\prime}$ equals $f(J^{\prime}) \, \mathrm{d} J^{\prime}$,
in thermal equilibrium.
\subsubsection{Simulation} \label{sec:thermal-equil-J-dist-sim}
In this code, the only
external processes are collisions and thermal evaporation. Thus, the mean
torque and diffusion coefficients are given by equations
(\ref{eq:mean-torque-thermal-eq})--(\ref{eq:Cxx-thermal-eq}), with
$T_{\mathrm{ev}} = T_{\mathrm{gas}} = T_d$. The actual value of the temperature does
not matter, since we are examining the distribution of $J^{\prime}$ rather than
the distribution of $J$. In order to obtain results in a reasonable run time,
we average all quantities over a thermal distribution of $q$, for all values
of $b$. We take the time step and duration of the simulation equal to
$10^{-5} \, \tau_{\mathrm{drag}}$ and $10^5 \, \tau_{\mathrm{drag}}$, respectively.
Table \ref{tab:parameters} indicates the adopted parameter values.
\begin{table*}
\caption{Adopted parameter values for simulations.}
\label{tab:parameters}
\begin{tabular}{llllll}
\hline
symbol &
quantity &
section \ref{sec:thermal-equil-J-dist-sim} &
section \ref{sec:bias} &
section \ref{sec:crossovers} &
section \ref{sec:D-G-alignment} \\
& & thermal equil & f-step bias & crossovers suite 1 & D-G case 1 \\
\hline
$\delta$ & spheroid semilength ratio & 0.5 & 0.5 & 0.5 & 0.5 \\
$r_2$ & $I_1/I_2$ & 1.3 & 1.3 & 1.3 & 1.3 \\
$r_3$ & $I_1/I_3$ & 1.5 & 1.5 & 1.5 & 1.5 \\
$\bar{\rho}$ & grain mean density (g$\,$cm$^{-3}$) & 3.0 & 3.0 & 3.0 & 3.0 \\
$n_{\mathrm{H}}$ & gas H number density (cm$^{-3}$) & 30 & 30 & 30 & 30 \\
$a_{\mathrm{eff}}$ & grain effective radius ($\mu$m) & 0.2 & 0.2 & 0.2 & 0.2 \\
$T_{\mathrm{gas}}$ & gas temperature (K) & NA & 100 & 100 & 100 \\
$T_d$ & dust temperature (K) & NA & 15 & 15 & 15 \\
$N_s$ & number of H$_2$-formation sites & NA & $5.5 \times 10^5$ &
$5.5 \times 10^5$ & $5.5 \times 10^5$ \\
$E_{\mathrm{H}2}$ & H$_2$ kinetic energy (eV) & NA & 0.2 & 0.2 & 0.2 \\
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}}$ & see Section \ref{sec:H2-formation} &
NA & 0.8 & 0.8 & 0.8 \\
$B$ & interstellar magnetic field ($\mu$G) & NA & NA & NA & 5.0 \\
$t^{\prime}_{\mathrm{life}}$ & H$_2$-formation site lifetime & NA & NA & NA & 1 \\
$k_{\mathrm{low}}$ & low-$b$ time-step parameter & NA & $10^{-2}$ & $10^{-2}$ &
$10^{-2}$ \\
$k_{\mathrm{high}}$ & high-$b$ time-step parameter & $10^{-5}$ & NA & $10^{-4}$ &
$10^{-4}$ \\
$t^{\prime}$ & duration of simulation (if fixed) & $10^5$ & NA & NA & $10^3$ \\
\hline
\end{tabular}
\end{table*}
\subsubsection{Theoretical distribution function}
\label{sec:theoretical-dist-funcs}
For a freely rotating body, the Lagrangian $L$ equals the rotational kinetic
energy $E$,
\be
\label{eq:Lagrangian}
L = E = \frac{1}{2} I_1 \omega_1^2 + \frac{1}{2} I_2 \omega_2^2 +
\frac{1}{2} I_3 \omega_3^2
\ee
and the square of the angular momentum is
\be
J^2 = I_1^2 \omega_1^2 + I_2^2 \omega_2^2 + I_3^2 \omega_3^2 .
\ee
The components of the angular velocity along the principal axes can be
expressed in terms of the Eulerian angles:
\be
\omega_1 = \dot{\zeta} \cos \gamma + \dot{\alpha} ,
\ee
\be
\omega_2 = \dot{\zeta} \sin \alpha \sin \gamma + \dot{\gamma} \cos \alpha ,
\ee
\be
\omega_3 = \dot{\zeta} \cos \alpha \sin \gamma - \dot{\gamma} \sin \alpha ,
\ee
where dots denote time derivatives.
Inserting these into equation (\ref{eq:Lagrangian}) for the Lagrangian,
the momenta conjugate to the Eulerian angles are easily obtained;
e.g.~$p_{\zeta} = \partial L / \partial \dot{\zeta}$. After some algebra, we
find $E$ and $J$ in terms of the Eulerian angles and their conjugate
momenta, yielding
\begin{multline}
J^{\prime} = \left\{ \left( p^{\prime}_{\alpha} \right)^2 + \left[ \cos \alpha \,
p^{\prime}_{\gamma} + \frac{\sin \alpha}{\sin \gamma} \left( p^{\prime}_{\zeta} -
\cos \gamma \, p^{\prime}_{\alpha} \right) \right]^2 + \right. \\
\left. \left[ \sin \alpha \, p^{\prime}_{\gamma}
- \frac{\cos \alpha}{\sin \gamma} \left( p^{\prime}_{\zeta} - \cos \gamma \,
p^{\prime}_{\alpha} \right) \right]^2 \right\}^{1/2} ,
\end{multline}
\begin{multline}
q = \left( J^{\prime} \right)^{-2}
\left\{ \left( p^{\prime}_{\alpha} \right)^2 + r_2 \left[ \cos \alpha \,
p^{\prime}_{\gamma} + \frac{\sin \alpha}{\sin \gamma} \left( p^{\prime}_{\zeta} -
\cos \gamma \, p^{\prime}_{\alpha} \right) \right]^2 \right. \\
\left. + r_3 \left[ \sin \alpha \,
p^{\prime}_{\gamma}
- \frac{\cos \alpha}{\sin \gamma} \left( p^{\prime}_{\zeta} - \cos \gamma \,
p^{\prime}_{\alpha} \right) \right]^2 \right\} ,
\end{multline}
where $p_i^{\prime} = p_i / (I_1 \omega_T)$.
The grain rotational states are uniformly distributed in the
6-dimensional phase space defined by the Eulerian angles and their
conjugate momenta; since neither $J^{\prime}$ nor $q$ depends explicitly on
$\zeta$, that angle can be integrated out trivially.
Define the density of states $\rho(J^{\prime}, q)$ such that the number of
states with dimensionless angular momentum and rotational energy between
$J^{\prime}$ and $J^{\prime} + \mathrm{d}J^{\prime}$ and $q$ and $q+\mathrm{d}q$ is
proportional to
$\rho(J^{\prime}, q) \, \mathrm{d}J^{\prime} \, \mathrm{d}q$.
To estimate the density of states, we calculate $J^{\prime}$
and $q$ for $(230)^5$ combinations of
$(\gamma, \alpha, p^{\prime}_{\zeta}, p^{\prime}_{\gamma}, p^{\prime}_{\alpha})$,
with the angles uniformly distributed from $0$ to $2 \upi$ and the conjugate
momenta distributed logarithmically from $7 \times 10^{-4}$ to $3$. We have
tried other values for the number of combinations and range of values of the
momenta and found that the distribution function for the relevant range of
$J^{\prime}$ and $q$ is well converged.
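A sketch of this Monte Carlo evaluation, with a small random sample standing in for the full $(230)^5$ grid; the function name and the guard against $\sin \gamma = 0$ are ours.

```python
import math, random

def Jprime_and_q(gamma, alpha, p_zeta, p_gamma, p_alpha, r2, r3):
    """J' and q from the Eulerian angles and their dimensionless conjugate
    momenta, per the two displayed expressions."""
    A = p_alpha
    B = math.cos(alpha) * p_gamma + math.sin(alpha) / math.sin(gamma) \
        * (p_zeta - math.cos(gamma) * p_alpha)
    C = math.sin(alpha) * p_gamma - math.cos(alpha) / math.sin(gamma) \
        * (p_zeta - math.cos(gamma) * p_alpha)
    J2 = A * A + B * B + C * C
    return math.sqrt(J2), (A * A + r2 * B * B + r3 * C * C) / J2

# A small random sample; momenta drawn logarithmically from 7e-4 to 3,
# as in the text.
rng = random.Random(0)
r2, r3 = 1.3, 1.5
samples = []
for _ in range(10000):
    gamma = rng.uniform(1e-3, math.pi - 1e-3)   # guard against sin(gamma)=0
    alpha = rng.uniform(0.0, 2.0 * math.pi)
    p = [10.0 ** rng.uniform(math.log10(7e-4), math.log10(3.0))
         for _ in range(3)]
    samples.append(Jprime_and_q(gamma, alpha, p[0], p[1], p[2], r2, r3))
```

By construction $q$ is a weighted mean of $1$, $r_2$, and $r_3$, so every sample must satisfy $1 \le q \le r_3$.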
The thermal-equilibrium distribution of $J^{\prime}$ is
\be
f(J^{\prime}) =
\int_1^{r_3} \mathrm{d}q \, \rho(J^{\prime}, q) \, \exp \left[ - \frac{\alpha_1}{2}
\left( J^{\prime} \right)^2 \right] ;
\ee
the form of the Boltzmann factor follows from equations (\ref{eq:q}),
(\ref{eq:I_i}), and (\ref{eq:omega-T}). The temperature does not explicitly
appear because the variable is $J^{\prime}$ rather than $J$.
\subsubsection{Results}
Fig.~\ref{fig:J-hist-eq} shows the results from both simulation and theory,
which are in excellent agreement.
\begin{figure}
\includegraphics[width=90mm]{fig4.eps}
\caption{
$J^{\prime} f(J^{\prime})$, normalized to its maximum value,
for thermal equilibrium. The solid curve is for
the simulation results and the boxes are the theoretical result.
}
\label{fig:J-hist-eq}
\end{figure}
\section{H$_2$ formation} \label{sec:H2-formation}
In the following sections, we will assume that all H atoms depart the grain
surface within H$_2$ molecules, formed at special surface sites.
\citet{DW97} took the surface area per H$_2$-formation site equal
to $l^2$, with $l^2 = 100 \,$\AA$^2$ as a fiducial value. They also
considered larger values of $l^2$. The surface area of an oblate spheroid is
\be
S = 2 \upi a_{\mathrm{eff}}^2 \delta^{-2/3} \left[ 1 + \delta^2 g(\delta) \right] .
\ee
For $a_{\mathrm{eff}} = 0.2 \, \mu \mathrm{m}$ and $\delta = 0.5$,
$S = 5.5 \times 10^7 \,$\AA$^2$. Thus, for
$l^2 = 100 \,$\AA$^2$, the number of H$_2$-formation sites is
$N_s = 5.5 \times 10^5$. We will adopt this value in our simulations.
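A quick numerical check of these values (a sketch; we assume the closed form $g(\delta) = \mathrm{artanh}(e)/e$ with $e = \sqrt{1-\delta^2}$, which reproduces the quoted $S = 5.5 \times 10^7\,$\AA$^2$, since $g(\delta)$ is defined earlier in the paper):

```python
import math

def oblate_area(a_eff_um, delta):
    """Surface area (Angstrom^2) of an oblate spheroid with effective
    radius a_eff and semilength ratio delta, per the expression in the
    text.  ASSUMES g(delta) = artanh(e)/e with e = sqrt(1 - delta^2)."""
    a_eff = a_eff_um * 1.0e4          # micron -> Angstrom
    e = math.sqrt(1.0 - delta ** 2)
    g = math.atanh(e) / e
    return 2.0 * math.pi * a_eff ** 2 * delta ** (-2.0 / 3.0) * (
        1.0 + delta ** 2 * g)

S = oblate_area(0.2, 0.5)             # Angstrom^2
N_s = S / 100.0                       # one site per l^2 = 100 Angstrom^2
```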
The translational kinetic energy of the ejected H$_2$ molecule is not well
known. In most simulations, we take $E_{\mathrm{H}2} = 0.2 \, \mathrm{eV}$,
but we also perform some simulations with $E_{\mathrm{H}2} = 0.05 \, \mathrm{eV}$
and $1.0 \, \mathrm{eV}$.
At the start of each simulation, the locations of all $N_s$ sites, as well
as the departure directions at the sites, are chosen randomly as described
in Section \ref{sec:special-sites}, taking
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}} = 0.8$.
To provide a sense for the magnitude of the dimensionless efficiency factors
$Q_i$ ($i$ = 1--8) defined in Section \ref{sec:special-sites}, Table
\ref{tab:special-sites} tabulates their values for
one particular grain realization with $N_s = 5.5 \times 10^5$.
\begin{table}
\caption{Efficiency factors associated with H$_2$ formation for one grain
realization with number of special sites $N_s = 5.5 \times 10^5$.}
\label{tab:special-sites}
\begin{tabular}{ll}
\hline
$Q_1$ & $-4.00 \times 10^{-4}$ \\
$Q_2$ & $3.13 \times 10^{-4}$ \\
$Q_3$ & $0.600$ \\
$Q_4$ & $0.521$ \\
$Q_5$ & $0.599$ \\
$Q_6$ & $9.04 \times 10^{-2}$ \\
$Q_7$ & $9.01 \times 10^{-2}$ \\
$Q_8$ & $5.40 \times 10^{-2}$ \\
\hline
\end{tabular}
\end{table}
If a grain reaches suprathermal rotation, then $q \approx 1$ and
$\gamma \approx 0$. Setting the systematic and drag torques
(equations \ref{eq:H2-sys-dimensionless} and \ref{eq:torque-drag-plus-H2})
equal yields the equilibrium value of the dimensionless angular momentum:
\be
J^{\prime}_{\mathrm{eq}} = \frac{\delta^{1/3} Q_1 v_{\mathrm{H2}}}{Q_3 a_{\mathrm{eff}}
\omega_T} .
\label{eq:Jp_eq_from_Qs}
\ee
In this expression, a positive (negative) value corresponds to rotation with
positive (negative) flip state with respect to $\bmath{\hat{a}}_1$.
Given the relevant parameters in Table \ref{tab:parameters} and the efficiency
factors in Table \ref{tab:special-sites},
$J^{\prime}_{\mathrm{eq}} = 395$. In equilibrium, this grain is in
the negative flip state with respect to $\bmath{\hat{a}}_1$.
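A heavily hedged numerical sketch of this expression: since $\omega_T$ depends on $I_1$, defined earlier in the paper, the value of $\omega_T$ below is only an assumed order of magnitude, and we check just the sign (which sets the flip state) and the rough scale.

```python
import math

def jp_eq(Q1, Q3, v_H2, a_eff, omega_T, delta):
    """Equilibrium dimensionless angular momentum, per the expression in
    the text; the sign gives the flip state with respect to a_1."""
    return delta ** (1.0 / 3.0) * Q1 * v_H2 / (Q3 * a_eff * omega_T)

# v_H2 for E_H2 = 0.2 eV; omega_T below is an ASSUMED rough value, since
# the paper's omega_T depends on I_1, defined earlier.
v_H2 = math.sqrt(2.0 * 0.2 * 1.602e-12 / (2.0 * 1.6605e-24))  # cm/s
J_eq = jp_eq(Q1=-4.00e-4, Q3=0.600, v_H2=v_H2,
             a_eff=2.0e-5, omega_T=2.3e4, delta=0.5)
```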
Fig.~\ref{fig:Jp-eq-hist} shows the distribution of $J^{\prime}_{\mathrm{eq}}$
derived from $6.4 \times 10^6$ different grain realizations randomly
constructed as described in Section \ref{sec:special-sites}, with the
relevant parameters as in Table \ref{tab:parameters}, except for
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}}$ equal to both 0 and 0.8.
\begin{figure}
\includegraphics[width=90mm]{fig5.eps}
\caption{
Distribution of $J^{\prime}_{\mathrm{eq}}$ for randomly constructed grain
realizations with $(\cos \theta_{\mathrm{out}})_{\mathrm{min}} = 0$ or 0.8
(see text). Parameters are as in Table \ref{tab:parameters}, with
$E_{\mathrm{H2}} = 0.2 \, \mathrm{eV}$, and $N_s = 5.5 \times 10^5$. The
distribution function $f(J^{\prime}_{\mathrm{eq}})$ is defined such that the
fraction of grains with dimensionless equilibrium angular momentum between
$J^{\prime}_{\mathrm{eq}}$ and
$J^{\prime}_{\mathrm{eq}} + \mathrm{d}J^{\prime}_{\mathrm{eq}}$ equals
$f(J^{\prime}_{\mathrm{eq}}) \, \mathrm{d}J^{\prime}_{\mathrm{eq}}$.
}
\label{fig:Jp-eq-hist}
\end{figure}
From equations (\ref{eq:C_zz-body}), (\ref{eq:C_xx-body}),
(\ref{eq:C-ij-H2-special-sites}), and (\ref{eq:Q_6-def})--(\ref{eq:Q_8-def}),
in the limit of uniform surface coverage of formation sites with
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}} = 0$, $Q_6$ and
$Q_7 \rightarrow Q_{6, \mathrm{un}}$ and $Q_8 \rightarrow Q_{8, \mathrm{un}}$, where
\be
Q_{6, \mathrm{un}} = \frac{Z_2(\delta)}{3 [1 + \delta^2 g(\delta)]} ,
\ee
\be
Q_{8, \mathrm{un}} = \frac{Z_1(\delta)}{3 [1 + \delta^2 g(\delta)]} .
\ee
Recall that the functions $Z_1(\delta)$ and $Z_2(\delta)$ are defined in
equations (\ref{eq:Z-1}) and (\ref{eq:Z-2}). For $\delta = 0.5$,
$Q_{6, \mathrm{un}} = 0.1187$ and $Q_{8, \mathrm{un}} = 0.1501$.
In our suite of $6.4 \times 10^6$ different grain realizations with
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}} = 0$, $Q_6$ and $Q_7$ are always
within 1\% of $Q_{6, \mathrm{un}}$ and $Q_8$ is always within 1\% of
$Q_{8, \mathrm{un}}$. For the suite with
$(\cos \theta_{\mathrm{out}})_{\mathrm{min}} = 0.8$, $Q_6$ and $Q_7$ are always
close to $0.76 \, Q_{6, \mathrm{un}}$ and $Q_8$ is always close to
$0.36 \, Q_{8, \mathrm{un}}$.
In Section \ref{sec:crossovers}, where we examine the duration of and
disalignment during a single crossover, the efficiency factors
$Q_1$--$Q_8$ associated with H$_2$ formation are held fixed throughout the
simulation. In Section \ref{sec:D-G-alignment}, examining Davis-Greenstein
alignment, existing formation sites are destroyed and new formation sites
are formed, so that $Q_1$--$Q_8$ vary throughout the simulation. For
simplicity, and to keep the total number of special formation sites,
$N_s$, constant, we assume that when a site is destroyed, another
site forms at the same time. The probability that no sites are destroyed
during time $\Delta t_{\mathrm{rep}}$ is given by
\be
P_{\mathrm{none}} = \exp \left( - \frac{N_s \Delta t_{\mathrm{rep}}}{t_{\mathrm{life}}}
\right) ,
\ee
where $t_{\mathrm{life}}$ is the site lifetime. Thus, we take the time until
the next site-replacement event to be
\be
\Delta t_{\mathrm{rep}} = - \frac{t_{\mathrm{life}} \ln P}{N_s} ,
\ee
where $P$ is a random number between 0 and 1. During a site-replacement
event, one existing site is randomly chosen for destruction. A new site
is immediately formed, with parameters chosen randomly following the same
prescription for constructing the sites at the start of the simulation.
We perform simulations with three values of
$t^{\prime}_{\mathrm{life}} = t_{\mathrm{life}}/\tau_{\mathrm{drag}} = 1$, 10, and
$10^3$.
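The waiting-time prescription can be sketched as follows (the function name is ours); the sample mean should approach $t_{\mathrm{life}}/N_s$.

```python
import math, random

def next_replacement(t_life, N_s, rng):
    """Waiting time to the next site-replacement event:
    Delta t = -t_life ln(P) / N_s, with P uniform on (0, 1]."""
    return -t_life * math.log(1.0 - rng.random()) / N_s

rng = random.Random(0)
t_life, N_s = 1.0, 5.5e5
waits = [next_replacement(t_life, N_s, rng) for _ in range(200000)]
mean_wait = sum(waits) / len(waits)
```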
\section{F-step duration bias} \label{sec:bias}
In this section, we present an example to illustrate the f-step duration
bias described in Section \ref{sec:intro}. Recall that we define an f-step
as the interval between consecutive flips and define up-steps and down-steps
as f-steps with
$\mathbfit{J} \bmath{\cdot} \Gamma_1 \bmath{\hat{a}}_1 > 0$ and
$\mathbfit{J} \bmath{\cdot} \Gamma_1 \bmath{\hat{a}}_1 < 0$, respectively.
In an up-step, the systematic torque acts so as to spin the grain up to
higher angular momentum; the opposite applies in a down-step.
We ran four suites of simulations, with parameter values as given in Table
\ref{tab:parameters}. The special sites are held fixed and
are identical for all of the simulations. The values of the efficiency
factors associated with H$_2$ formation are given in Table
\ref{tab:special-sites}. The Davis-Greenstein torque is not included. The
time step parameter $k_{\mathrm{low}} = 10^{-2}$; $k_{\mathrm{high}}$ is not
relevant, since the high-$b$ regime never applies.
Each suite consists of $4 \times 10^6$ simulations and each simulation starts
with $J^{\prime} = 2$ and $q = 1.01$. Each simulation is terminated when the
flip state with respect to $\bmath{\hat{a}}_1$ changes. Thus, each simulation
corresponds approximately to one f-step. Note that
\citet{KW17} adopted a more realistic,
but also more complicated, definition of a flip; the simple approximation
here is sufficient for our purpose. Suites 1 and 2 are for an up-step
and a down-step, respectively. Likewise for suites 3 and 4,
except that the parameter $b$, which depends on $J^{\prime}$ and is defined in
equation (\ref{eq:b-def}), is held fixed throughout the simulation in these
cases. To be clear, the value of $J^{\prime}$ changes in all of the simulations,
but in suites 3 and 4, we do not adjust the value of $b$ accordingly. Only
the internal relaxation is affected by the value of $b$; by holding it constant,
we artificially eliminate the dependence of the flipping probability per unit
time on $J^{\prime}$.
Table \ref{tab:bias} shows the mean value of the (dimensionless) f-step
duration, $t^{\prime}_{\mathrm{av}}$, and the mean value of the change in the
(dimensionless) angular momentum, $(\Delta J^{\prime})_{\mathrm{av}}$, for each
suite. Both $t^{\prime}_{\mathrm{av}}$ and $|(\Delta J^{\prime})_{\mathrm{av}}|$ are
nearly identical for suites 3 and 4, as expected since the internal relaxation
is taken to be independent of $J^{\prime}$ in these simulations. Also as
expected, the
values of $t^{\prime}_{\mathrm{av}}$ and $(\Delta J^{\prime})_{\mathrm{av}}$ are larger
for suite 1 (up-steps) than for suites 3 and 4, and are both smaller for suite
2 (down-steps).
\begin{table}
\caption{Mean f-step duration and change in angular momentum.}
\label{tab:bias}
\begin{tabular}{llll}
\hline
Suite & Description & $t^{\prime}_{\mathrm{av}}$ & $(\Delta J^{\prime})_{\mathrm{av}}$ \\
\hline
1 & up-step & $2.54 \times 10^{-5}$ & $5.78 \times 10^{-3}$ \\
2 & down-step & $2.39 \times 10^{-5}$ & $-5.44 \times 10^{-3}$ \\
3 & up-step (fixed $b$) & $2.46 \times 10^{-5}$ & $5.59 \times 10^{-3}$ \\
4 & down-step (fixed $b$) & $2.46 \times 10^{-5}$ & $-5.60 \times 10^{-3}$ \\
\hline
\end{tabular}
\end{table}
The bottom panel of Fig.~\ref{fig:bias} shows the histogram $N/N_{\mathrm{max}}$
of $\log t^{\prime}$ for suite 3, normalized at its peak. The top panel shows
the difference $\Delta N/N_{\mathrm{max}}$ between the histogram for each of the
other three suites and suite 3. The difference is relatively small for suite 4;
it should approach zero as the number of simulations per suite increases.
For suite 1 (up-steps), there is an excess at the longest times, compensated
by a deficit at shorter times. The opposite trend applies for suite 2
(down-steps).
\begin{figure}
\includegraphics[width=90mm]{fig6.eps}
\caption{
Bottom panel: Histogram $N/N_{\mathrm{max}}$ of $\log t^{\prime}$ for suite 3
(up-steps, constant $b$), normalized at its peak. Top panel: The difference
$\Delta N/N_{\mathrm{max}}$ between the histograms for suites 1 (up-steps, dashed),
2 (down-steps, solid), and 4 (down-steps, fixed $b$, dotted) and the
histogram for suite 3.
}
\label{fig:bias}
\end{figure}
\section{Crossovers} \label{sec:crossovers}
In this section, we examine the duration of individual crossovers, as well
as the disalignment during the crossover. Each simulation begins with the
construction of the special sites, as described in Section
\ref{sec:H2-formation}. We demand that $J^{\prime}_{\mathrm{eq}} \ge 50$;
otherwise, the construction is discarded and a new one is generated.
The initial dimensionless angular momentum is set equal to
$J^{\prime}_c = (1 - e^{-1}) J^{\prime}_{\mathrm{eq}} \approx 0.632 \, J^{\prime}_{\mathrm{eq}}$. The flip state is chosen such that the systematic torque spins the
grain down, to lower values of $J^{\prime}$. During the simulation, the special
sites are held fixed. The simulation ends when
$J^{\prime} = J^{\prime}_c$, in the flip state such that the systematic torque
is spinning the grain up.
These simulations include internal relaxation and only two external
processes: collisions and H$_2$ formation at special sites (section
\ref{sec:coll-H2-special}). The Davis-Greenstein torque is not included.
Initially, the grain is oriented such that the alignment angle
$\xi = 0$. Thus, the value of $\cos \xi$ at the end of the simulation
indicates the disalignment: $\cos \xi = 1$ implies no angular deviation of
the angular momentum vector during the crossover.
Now suppose that the grain is constrained to always rotate about
$\bmath{\hat{a}}_1$ and is only subject to the mean systematic and drag
torques. That is, stochastic elements are neglected. In this case, the
grain spins down from $J^{\prime}_c$ to $J^{\prime} = 0$ and then back up to
$J^{\prime}_c$, without any flip and with the final angular momentum pointing
in the opposite direction as the initial angular momentum. In this case,
the duration $t^{\prime}_s$ of the crossover (normalized to the drag time)
can be found analytically:
\be
t^{\prime}_s = \frac{4}{3 [1 + \delta^2 g(\delta)] Q_3} \ln \left( \frac{1 +
J^{\prime}_c/J^{\prime}_{\mathrm{eq}}}{1 - J^{\prime}_c/J^{\prime}_{\mathrm{eq}}} \right)
\approx \frac{1.987}{[1 + \delta^2 g(\delta)] Q_3} .
\ee
If thermal trapping is important, then we would expect that
$t^{\prime}/t^{\prime}_s \gg 1$, where $t^{\prime}$ is the actual duration of the
crossover as found in the simulation.
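A quick check of the analytic duration (a sketch; we again assume the closed form $g(\delta) = \mathrm{artanh}(e)/e$ with $e = \sqrt{1-\delta^2}$, and the function name is ours):

```python
import math

def t_prime_s(delta, Q3, x):
    """Crossover duration (units of the drag time) for rotation locked to
    a_1 under the mean torques only; x = J'_c / J'_eq.  ASSUMES
    g(delta) = artanh(e)/e with e = sqrt(1 - delta^2)."""
    e = math.sqrt(1.0 - delta ** 2)
    g = math.atanh(e) / e
    return 4.0 / (3.0 * (1.0 + delta ** 2 * g) * Q3) \
        * math.log((1.0 + x) / (1.0 - x))

x = 1.0 - math.exp(-1.0)                 # J'_c / J'_eq = 1 - 1/e
coef = (4.0 / 3.0) * math.log((1.0 + x) / (1.0 - x))
```

The coefficient reproduces the quoted factor of $1.987$.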
We ran four suites of simulations. The parameter values for suite 1
are given in Table \ref{tab:parameters}. Each suite consists of
$5.6 \times 10^4$ separate simulations with identical input parameter values;
only the seed for the random number generator differs among the simulations
within a suite. For suites 2 and 3, different values are adopted for the
kinetic energy of the outgoing H$_2$ molecules:
$E_{\mathrm{H}2} = 0.05 \, \mathrm{eV}$ and
$1.0 \, \mathrm{eV}$. For suites 1--3, the step-size parameters
(see Section \ref{sec:time-step-size}) are taken to be $k_{\mathrm{low}} = 10^{-2}$
and $k_{\mathrm{high}} = 10^{-4}$. Suite 4 serves as a convergence
check, with $E_{\mathrm{H}2} = 0.2 \, \mathrm{eV}$, $k_{\mathrm{low}} = 10^{-3}$,
and $k_{\mathrm{high}} = 10^{-5}$.
Figures \ref{fig:cross-hist-cxi}--\ref{fig:cross-hist-Jp-min} show the
following three distribution functions for suites 1--3:
(1) $\cos \xi$; (2) $t^{\prime}/t^{\prime}_s$, the ratio
of the crossover duration $t^{\prime}$ to the duration $t^{\prime}_s$ for the
simple case described above; and (3) $\log J^{\prime}_{\mathrm{min}}$, where
$J^{\prime}_{\min}$ is the minimum value of $J^{\prime}$ during the crossover.
In each case, $f_i(u) \, \mathrm{d}u$ equals the fraction of simulations for
which the argument lies between $u$ and $u+\mathrm{d}u$. The three distribution
functions are denoted by subscripts `$\xi$', `$t$', and `$J$', respectively,
for $\cos \xi$, $t^{\prime}/t^{\prime}_s$, and $\log J^{\prime}_{\mathrm{min}}$.
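Each of these distribution functions can be estimated from a suite as a normalized histogram of the per-simulation values. A minimal Python sketch (ours; the function name is illustrative), where `density=True` enforces the unit-integral normalization $\int f_i(u)\,\mathrm{d}u = 1$:

```python
import numpy as np

def distribution_function(samples, bins=50, take_log10=False):
    """Estimate f(u) such that f(u) du is the fraction of simulations
    whose argument lies in [u, u + du]; normalized to unit integral."""
    x = np.log10(samples) if take_log10 else np.asarray(samples, float)
    f, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, f, edges

# e.g. f_J would be distribution_function(J_min_values, take_log10=True),
# and f_xi would be distribution_function(cos_xi_values).
```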
\begin{figure}
\includegraphics[width=90mm]{fig7.eps}
\caption{
The distribution function $f_{\xi}(\cos \xi)$ for suites of crossover simulations
with the energy $E_{\mathrm{H}2}$ of the departing H$_2$ molecule as
indicated. See the text for other parameter values.
}
\label{fig:cross-hist-cxi}
\end{figure}
\begin{figure}
\includegraphics[width=90mm]{fig8.eps}
\caption{
Same as Figure \ref{fig:cross-hist-cxi} except for the distribution function
$f_t(t^{\prime}/t^{\prime}_s)$. Line types as in Figure \ref{fig:cross-hist-cxi}.
}
\label{fig:cross-hist-rat}
\end{figure}
\begin{figure}
\includegraphics[width=90mm]{fig9.eps}
\caption{
Same as Figure \ref{fig:cross-hist-cxi} except for the distribution function
$f_J(\log J^{\prime}_{\mathrm{min}})$. Line types as in Figure
\ref{fig:cross-hist-cxi}.
}
\label{fig:cross-hist-Jp-min}
\end{figure}
Generally, the disalignment is mild and the duration is comparable to
$t^{\prime}_s$. That is, thermal trapping is not a prevalent condition.
The distribution
functions display clear trends as a function of the H$_2$ kinetic energy.
As $E_{\mathrm{H}2}$ increases, the distributions shift towards more
disalignment, longer duration, and lower minimum value of $J^{\prime}$.
Table \ref{tab:outliers} shows the fraction of the simulations within each
suite for which $\cos \xi$ and $J^{\prime}_{\mathrm{min}}$ are less than the
lower limits in Figs.~\ref{fig:cross-hist-cxi} and
\ref{fig:cross-hist-Jp-min} and for which $t^{\prime}/t^{\prime}_s$ is greater
than the upper limit in Fig.~\ref{fig:cross-hist-rat}. The tails of the
distributions favor longer durations and lower $J^{\prime}_{\mathrm{min}}$ for
lower $E_{\mathrm{H}2}$, though the precision far out in the tails is, of
course, low.
\begin{table}
\caption{Fraction of the crossover simulations satisfying outlying
conditions.}
\label{tab:outliers}
\begin{tabular}{llll}
\hline
Condition & $E_{\mathrm{H}2} = 0.05 \, \mathrm{eV}$ &
$E_{\mathrm{H}2} = 0.2 \, \mathrm{eV}$ &
$E_{\mathrm{H}2} = 1.0 \, \mathrm{eV}$ \\
\hline
$\cos \xi < 0.9$ & $3.55 \times 10^{-2}$ &
$6.24 \times 10^{-2}$ & $0.195$ \\
$\cos \xi < 0.8$ & $2.66 \times 10^{-2}$ &
$2.67 \times 10^{-2}$ & $8.93 \times 10^{-2}$ \\
$\cos \xi < 0.5$ & $1.79 \times 10^{-2}$ &
$1.10 \times 10^{-2}$ & $2.35 \times 10^{-2}$ \\
$\cos \xi < 0$ & $1.07 \times 10^{-2}$ &
$5.32 \times 10^{-3}$ & $6.62 \times 10^{-3}$ \\
$\cos \xi < -0.5$ & $4.80 \times 10^{-3}$ &
$2.00 \times 10^{-3}$ & $2.43 \times 10^{-3}$ \\
$t^{\prime}/t^{\prime}_s > 1.05$ & $4.91 \times 10^{-2}$ &
$3.30 \times 10^{-2}$ & $1.78 \times 10^{-2}$ \\
$t^{\prime}/t^{\prime}_s > 2.0$ & $1.98 \times 10^{-2}$ &
$7.68 \times 10^{-4}$ & $0$ \\
$t^{\prime}/t^{\prime}_s > 3.0$ & $1.24 \times 10^{-2}$ &
$5.36 \times 10^{-5}$ & $0$ \\
$t^{\prime}/t^{\prime}_s > 4.0$ & $7.73 \times 10^{-3}$ &
$1.79 \times 10^{-5}$ & $0$ \\
$t^{\prime}/t^{\prime}_s > 10.0$ & $9.82 \times 10^{-4}$ &
$0$ & $0$ \\
$J^{\prime}_{\mathrm{min}} < 10^{-1}$ & $9.88 \times 10^{-3}$ &
$2.05 \times 10^{-3}$ & $1.34 \times 10^{-3}$ \\
$J^{\prime}_{\mathrm{min}} < 3 \times 10^{-2}$ & $5.07 \times 10^{-3}$ &
$9.29 \times 10^{-4}$ & $5.89 \times 10^{-4}$ \\
$J^{\prime}_{\mathrm{min}} < 10^{-2}$ & $3.11 \times 10^{-3}$ &
$5.00 \times 10^{-4}$ & $2.86 \times 10^{-4}$ \\
$J^{\prime}_{\mathrm{min}} < 3 \times 10^{-3}$ & $1.89 \times 10^{-3}$ &
$2.32 \times 10^{-4}$ & $7.14 \times 10^{-5}$ \\
$J^{\prime}_{\mathrm{min}} < 10^{-3}$ & $7.14 \times 10^{-4}$ &
$3.57 \times 10^{-5}$ & $0$ \\
\hline
\end{tabular}
\end{table}
As noted above, simulation suites 1--3 take step-size parameters
$k_{\mathrm{low}} = 10^{-2}$ and $k_{\mathrm{high}} = 10^{-4}$. To check if these
yield sufficient convergence, suite 4 repeats suite 1, except with
$k_{\mathrm{low}} = 10^{-3}$ and $k_{\mathrm{high}} = 10^{-5}$. We find that the
distribution functions are virtually identical for these two suites.
The largest discrepancy is for $f_{\xi}(\cos \xi)$. To illustrate the
close agreement even in this case, we examine
the cumulative distribution function,
$f_{\xi, \mathrm{cum}}(\cos \xi)$, starting at $\cos \xi = 1$. That is,
$f_{\xi, \mathrm{cum}}(\cos \xi) = \int_{\cos \xi}^1 f_{\xi}(u) \mathrm{d}u$.
Figure
\ref{fig:cross-conv} shows $\Delta(\cos \xi)$ versus $\cos \xi$, where
$\Delta(\cos \xi)$ is the fractional difference between the cumulative
distribution functions for the two suites.
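Schematically, this convergence diagnostic amounts to building each suite's cumulative distribution starting at $\cos \xi = 1$ and comparing. The sketch below is ours, and the exact normalization of the fractional difference is an assumption (we divide by one suite's cumulative value):

```python
import numpy as np

def cum_from_one(cos_xi, grid):
    """f_cum(x): fraction of simulations with cos(xi) >= x,
    i.e. the integral of f_xi from x up to cos(xi) = 1."""
    s = np.sort(np.asarray(cos_xi, float))
    return 1.0 - np.searchsorted(s, grid, side='left') / s.size

def fractional_difference(suite_a, suite_b, grid):
    """Delta(cos xi) between two suites' cumulative distributions.
    Normalizing by suite_b's value is our assumption; grid points
    where that value is zero (near cos xi = 1) must be avoided."""
    fa, fb = cum_from_one(suite_a, grid), cum_from_one(suite_b, grid)
    return (fa - fb) / fb
```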
\begin{figure}
\includegraphics[width=90mm]{fig10.eps}
\caption{
The fractional difference $\Delta(\cos \xi)$ between the cumulative
distribution function (starting at $\cos \xi = 1$) for two simulation suites
with $E_{\mathrm{H}2} = 0.2 \, \mathrm{eV}$. The time-step parameters for the
two suites are $(k_{\mathrm{low}}, k_{\mathrm{high}}) = (10^{-2}, 10^{-4})$ and
$(10^{-3}, 10^{-5})$.
}
\label{fig:cross-conv}
\end{figure}
As noted in Section \ref{subsec:int-relax}, we adopt `reflecting' boundary
conditions at $q=1$ and $q=r_3$, since this choice best reproduces the
distribution function for $q$ in the case of thermal equilibrium. Otherwise,
this choice is arbitrary. As a check on the extent to which this choice
affects the results, we repeated all four simulation suites with a different
prescription for the boundaries. In these runs, if a time step yields
$q < 1$ or $q > r_3$, then a new value of $\mathrm{d}w^{\prime}_{\mathrm{int}}$ is
selected
randomly and $\mathrm{d}q$ is recomputed. The distribution functions resulting
from suites with the two different boundary prescriptions differ very slightly.
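The alternative boundary prescription amounts to a rejection step. A minimal sketch (ours), with a simplified Euler-Maruyama update in which `drift_dt` and `noise_amp` stand in for the actual drift and diffusion coefficients:

```python
import numpy as np

def step_q_with_resampling(q, drift_dt, noise_amp, r3, rng, max_tries=1000):
    """One internal-coordinate step with the alternative boundary rule:
    if the step would take q outside [1, r3], redraw the random
    increment dw'_int and recompute dq, rather than reflecting."""
    for _ in range(max_tries):
        q_new = q + drift_dt + noise_amp * rng.normal()
        if 1.0 <= q_new <= r3:
            return q_new
    # For time steps small compared with the interval this is not reached.
    raise RuntimeError("no admissible step found")
```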
\section{Davis-Greenstein alignment} \label{sec:D-G-alignment}
In this section, we examine the efficiency of Davis-Greenstein alignment.
We constructed seven suites of simulations which include internal relaxation,
collisions with gas atoms, H$_2$ formation at time-varying special sites
(Section \ref{sec:H2-formation}), and the Davis-Greenstein torque. Each
suite consists of 504 simulations. At the start of each simulation, the
grain is randomly constructed as in previous sections, the alignment angle
$\xi$ is randomly chosen from a uniform distribution in $\cos \xi$
(with $-1 \le \cos \xi \le 1$), and $J^{\prime}$ is set equal to 2.5, which
corresponds to the peak of the distribution $f_J(\log J^{\prime}_{\mathrm{min}})$
for the suite of crossover simulations with
$E_{\mathrm{H}2} = 0.2 \, \mathrm{eV}$ (Fig.~\ref{fig:cross-hist-Jp-min}).
For our reference suite (`case 1'), we take parameter values as in Table
\ref{tab:parameters}; the total
duration of the simulation (normalized to the drag time-scale)
$t^{\prime} = 10^3$. For each of the other suites, one or more of the
parameter values are changed, as indicated in Table \ref{tab:D-G-runs}.
In cases 2 and 3, the site lifetime is increased. In case 4, thermal trapping
is artificially prohibited, in the following extreme manner:
Each time the low-$b$ regime is entered, after the first $q=r_2$ crossing, the
flip state (with respect to both $\bmath{\hat{a}}_1$ and $\bmath{\hat{a}}_3$)
is always chosen such as to spin the grain up. Case 5 is a convergence check,
with the two parameters $k_{\mathrm{low}}$ and $k_{\mathrm{high}}$ both reduced
by an order of magnitude; the duration $t^{\prime}$ of the simulation is also
reduced by an order of magnitude to avoid unmanageable run times. In case 6,
the dust temperature is increased to $T_d = 20 \, \mathrm{K}$. Finally, in case
7, a smaller grain size, $a_{\mathrm{eff}} = 0.05 \, \mu \mathrm{m}$,
is considered.
In this case, the number of special sites $N_s$ is reduced in proportion to
the grain surface area (i.e.~by a factor of 16). Since the internal relaxation
time $\tau_{\mathrm{int}}$ decreases for the smaller grain, the duration of the
simulation is also decreased, to 100 drag times, to avoid unmanageable run
times.
\begin{table*}
\caption{Cases for Davis-Greenstein simulations.}
\label{tab:D-G-runs}
\begin{tabular}{lllll}
\hline
Case & Difference from Case 1 & $\langle \mathrm{RRF_{av}}(10) \rangle$ &
$\langle \mathrm{RRF_{av}}(10^2) \rangle$ &
$\langle \mathrm{RRF_{av}}(10^3) \rangle$ \\
\hline
1 & NA & 0.038 & 0.187 & 0.265 \\
2 & $t^{\prime}_{\mathrm{life}} = 10^3$ & 0.046 & 0.269 & 0.698 \\
3 & $t^{\prime}_{\mathrm{life}} = 10$ & 0.014 & 0.190 & 0.413 \\
4 & prohibited thermal trapping (see text) & 0.028 & 0.201 & 0.413 \\
5 & $k_{\mathrm{low}}= 10^{-3}$, $k_{\mathrm{high}} = 10^{-5}$, $t^{\prime} =10^2$ &
0.039 & 0.152 & NA \\
6 & $T_d = 20 \, \mathrm{K}$ & 0.049 & 0.131 & 0.214 \\
7 & $a_{\mathrm{eff}} = 0.05 \, \mu \mathrm{m}$; $N_s = 34,400$; $t^{\prime} =10^2$
& 0.094 & 0.412 & NA \\
\hline
\end{tabular}
\end{table*}
As an illustration of the grain dynamics,
Figs.~\ref{fig:example-case1} and \ref{fig:example-case2} show
$J^{\prime}_{\mathrm{eq}}$, $\log J^{\prime}$, and $\cos \xi$ versus $t^{\prime}$ for
one of the 504 Davis-Greenstein simulations, for cases 1 and 2,
respectively. These cases differ only in that $t^{\prime}_{\mathrm{life}} = 1$
for case 1 and $10^3$ for case 2. Consequently, $J^{\prime}_{\mathrm{eq}}$ and
$J^{\prime}$ fluctuate much more rapidly for case 1 than for case 2. The plots
are generated using $10^3$ output times (one per drag time); the fluctuations
of $J^{\prime}_{\mathrm{eq}}$ and $J^{\prime}$ are actually more pronounced than
indicated in the figure, as would be seen if more output times were used in
the figure construction.
\begin{figure}
\includegraphics[width=90mm]{fig11.eps}
\caption{
$J^{\prime}_{\mathrm{eq}}$, $\log J^{\prime}$, and $\cos \xi$ versus $t^{\prime}$
for one of the 504 case-1 Davis-Greenstein simulations.
}
\label{fig:example-case1}
\end{figure}
\begin{figure}
\includegraphics[width=90mm]{fig12.eps}
\caption{
$J^{\prime}_{\mathrm{eq}}$, $\log J^{\prime}$, and $\cos \xi$ versus $t^{\prime}$
for one of the 504 case-2 Davis-Greenstein simulations.
}
\label{fig:example-case2}
\end{figure}
The Rayleigh reduction factor,
\be
\mathrm{RRF} = \frac{3}{2} \left( \cos^2 \xi - \frac{1}{3} \right) ,
\label{eq:RRF-defined}
\ee
is a useful measure of the grain alignment efficiency.
In the Rayleigh limit, the linear dichroism of a grain rotating about
$\bmath{\hat{a}}_1$ is proportional to RRF \citep{LD85}.
For alignment, RRF must exceed
zero; larger values of RRF correspond to higher degrees of alignment.
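Equation (\ref{eq:RRF-defined}) and its time average are straightforward to compute from a simulation's $\cos \xi$ output; a short sketch (ours):

```python
import numpy as np

def rayleigh_reduction_factor(cos_xi):
    """RRF = (3/2) (cos^2 xi - 1/3): 1 for perfect alignment,
    0 on average for an isotropic distribution, -1/2 for xi = 90 deg."""
    return 1.5 * (np.asarray(cos_xi, float) ** 2 - 1.0 / 3.0)

def rrf_time_average(cos_xi_series):
    """RRF_av(t'): mean RRF over the output times up to t'."""
    return float(np.mean(rayleigh_reduction_factor(cos_xi_series)))
```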
Fig.~\ref{fig:RRF} shows $\mathrm{RRF_{av}}(10)$,
$\mathrm{RRF_{av}}(10^2)$, and $\mathrm{RRF_{av}}(10^3)$ versus $\cos \xi_0$ for
case 1,
where $\mathrm{RRF_{av}}(t^{\prime})$ is the time-averaged Rayleigh reduction
factor, from the start of the simulation to dimensionless time
$t^{\prime} = t/\tau_{\mathrm{drag}}$, and $\xi_0$ is the initial value of $\xi$.
That is, the time average is taken over the
first 10 drag times, the first $10^2$ drag times, and the entire simulation
(with duration $10^3$ drag times). Each point represents one of the 504
simulations. For the adopted parameters,
$\tau_{\mathrm{DG}}/\tau_{\mathrm{drag}} = 73.3$. That is, the alignment time-scale
equals 73.3 times the drag time-scale.
Thus, the plot of $\mathrm{RRF_{av}}(t^{\prime})$
versus $\cos \xi_0$ resembles that of equation (\ref{eq:RRF-defined}) for
early $t^{\prime}$. By $t^{\prime} = 10^3$, the time-averaged Rayleigh reduction
factor no longer shows a dependence on the initial value of the alignment
angle $\xi_0$ and is positive for most simulations.
\begin{figure}
\includegraphics[width=90mm]{fig13.eps}
\caption{
The Rayleigh reduction factor time-averaged over the first 10 drag times, the
first 100 drag times, and the full $10^3$-drag time simulation duration,
versus $\cos \xi_0$, for the 504 case-1 Davis-Greenstein simulations.
}
\label{fig:RRF}
\end{figure}
For each of the seven simulation suites, Table \ref{tab:D-G-runs} provides
the values of $\langle \mathrm{RRF_{av}}(t^{\prime}) \rangle$, the average of
$\mathrm{RRF_{av}}(t^{\prime})$ over all 504 simulations (with $t^{\prime} = 10$,
$10^2$, and $10^3$, where applicable). On times
$\sim 10^2$--$10^3 \, \tau_{\mathrm{drag}}$, paramagnetic dissipation does yield
partial grain alignment. As expected, the alignment is more efficient for
larger values of the H$_2$-formation site lifetime (cases 1--3). Thermal
trapping does not prevent D-G alignment, though the alignment may be somewhat
more efficient in case 4, where thermal trapping is artificially prohibited in
an extreme manner. Comparing the results for cases 1 and 5, we conclude that
the adopted values for the time-step parameters $k_{\mathrm{low}}$ and
$k_{\mathrm{high}}$ are reasonable. That is, the simulations appear to be
well converged.
For all except case 7, the grain size
$a_{\mathrm{eff}} = 0.2 \, \mu \mathrm{m}$, for which (given the adopted
CNM-like parameter values) $\tau_{\mathrm{drag}} = 1.3 \times 10^5 \, \mathrm{yr}$
and $\tau_{\mathrm{DG}}/\tau_{\mathrm{drag}} = 73$. Thus, times
$\sim 10^2$--$10^3 \, \tau_{\mathrm{drag}}$ equate to 13--130$\,$Myr.
From simulations of the multiphase interstellar medium, \citet{Peters17}
find a broad distribution of dust residence times in the CNM, with a median
around $7 \,$Myr. Thus, the time required for D-G alignment of grains with
$a_{\mathrm{eff}} = 0.2 \, \mu \mathrm{m}$ seems uncomfortably long.
Furthermore, $\tau_{\mathrm{drag}} \propto a_{\mathrm{eff}}$ and
$\tau_{\mathrm{DG}} \propto a_{\mathrm{eff}}^2$ (eqs.~\ref{eq:tau-drag} and
\ref{eq:tau-DG}). Thus, for $a_{\mathrm{eff}} = 0.05 \, \mu \mathrm{m}$,
$\tau_{\mathrm{drag}} = 3.3 \times 10^4 \, \mathrm{yr}$ and
$\tau_{\mathrm{DG}}/\tau_{\mathrm{drag}} = 18$. As seen from the case-7 result,
moderately efficient alignment is achieved on a time
$\sim 10^2 \, \tau_{\mathrm{drag}} \sim 3.3 \,$Myr. Thus, D-G alignment in the CNM
is more plausible for relatively small grains than for relatively large
grains, whereas observations reveal that only the relatively large grains are
well aligned \citep{KM95}. This contradiction between the model and
observations has long plagued the D-G theory, even prior to the modifications
by Purcell. In their original work on thermal trapping, considering only
Barnett relaxation, \citet{LD99a} found thermal trapping
to be more severe for smaller grains, possibly resolving the contradiction.
However, with the introduction of nuclear relaxation, \citet{LD99b} concluded
that all grains are likely thermally trapped. We conclude that thermal
trapping is not prevalent for either $a_{\mathrm{eff}} = 0.2 \, \mu \mathrm{m}$
or $a_{\mathrm{eff}} = 0.05 \, \mu \mathrm{m}$.
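The scaling argument can be made explicit: with $\tau_{\mathrm{drag}} \propto a_{\mathrm{eff}}$ and $\tau_{\mathrm{DG}} \propto a_{\mathrm{eff}}^2$, the ratio $\tau_{\mathrm{DG}}/\tau_{\mathrm{drag}}$ scales linearly with $a_{\mathrm{eff}}$. A sketch (ours) using the reference values quoted above for CNM-like conditions:

```python
def scaled_timescales(a_eff_um, a_ref_um=0.2,
                      tau_drag_ref_yr=1.3e5, ratio_ref=73.0):
    """Scale tau_drag (proportional to a_eff) and tau_DG/tau_drag
    (also proportional to a_eff, since tau_DG goes as a_eff^2) from
    the reference grain size."""
    x = a_eff_um / a_ref_um
    return tau_drag_ref_yr * x, ratio_ref * x

# For a_eff = 0.05 um this reproduces tau_drag ~ 3.3e4 yr and
# tau_DG/tau_drag ~ 18, as quoted in the text.
```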
\citet{JS67} noted that if grains contain superparamagnetic
inclusions, then the D-G alignment time-scale could be dramatically reduced.
If only the relatively large grains contain superparamagnetic inclusions,
then D-G alignment could be consistent with the \citet{KM95} results
\citep{M86}. We will examine D-G alignment for the case of grains with
superparamagnetic inclusions in future work. From the results in this work,
we conclude that D-G alignment without superparamagnetic inclusions is
unlikely to account for alignment of the relatively large grains
responsible for the observed optical and infrared starlight polarization in
the diffuse ISM. However, it is the long
alignment time, rather than any thermal trapping effect, that renders it
unlikely.
\citet{HLM14} argued that small grains must be aligned to some extent
in order to explain the observed ultraviolet starlight polarization and
proposed that the observations could be used to estimate the interstellar
magnetic field strength. They examined D-G alignment of small grains, assuming
that these grains are thermally trapped. Our results show that suprathermal
spin-up may be important for the small grains, potentially yielding higher
degrees of alignment. However, this conclusion is sensitive to the details of
the H$_2$-formation model. We adopted $t^{\prime}_{\mathrm{life}} = 1$ in our
simulation with $a_{\mathrm{eff}} = 0.05 \, \mu \mathrm{m}$ (case 7), but smaller
values are plausible and would yield a smaller degree of alignment.
Furthermore, \citet{WD01force} argued that a model in which the grain surface
is saturated in chemisorption sites, as considered in section
\ref{sec:uniform-sites} above, is plausible. In this case, any systematic
torque would be negligible, precluding suprathermal spin-up. We will more
carefully examine D-G alignment of small grains in future work.
\section{Conclusions} \label{sec:conclusions}
In this study, we first extended the analysis of Barnett relaxation in
\citet{KW17} beyond the low-frequency limit, enabling an approximate treatment
of both Barnett and nuclear relaxation in thermally rotating grains. Since
no first-principles theory of Barnett or nuclear relaxation has been developed
to date, there is considerable uncertainty in the quantitative expressions
for the drift and diffusion coefficients. We followed \citet{KW17} in
assuming that the diffusion coefficient goes to zero at $q=r_3$. We also
neglected any deviation of the functional forms for the drift and diffusion
coefficients from their low-frequency forms.
Next, we developed theoretical expressions for the mean torque and diffusion
coefficients for several external processes, including collisions of gas-phase
particles with the grain, thermal evaporation from the grain surface, and
the formation of H$_2$ molecules, followed by their ejection from the grain
surface. These apply for the special case of a spheroidal grain with a
non-uniform mass density (with the center of mass at the center of the
spheroid and the principal axis of greatest moment of inertia lying along
the spheroid symmetry axis). We adopted several simplifications in the
analysis of H$_2$-formation. The translational kinetic energy of the ejected
molecules is taken to be constant. In the case of special formation sites on
the grain surface, the ejection rate is taken to be equal at all of the sites
and the molecules depart along a single direction at each site.
From large simulation suites, in which the Langevin equations for both
internal and external processes are integrated, we reach the following
conclusions. First, the mean duration of up-steps (when the systematic torque
acts so as to spin the grain up) exceeds the mean duration of down-steps.
Second, thermal trapping is not prevalent during crossovers. Third, the
Davis-Greenstein mechanism, with suprathermal spin-up, can drive grains into
alignment in the cold neutral medium, without significant impediment from
thermal trapping. However, it does not appear to be a viable explanation of
grain alignment in the diffuse ISM, at least for the relatively large
grains that are responsible for optical and infrared starlight polarization,
since the alignment time-scale is long. The D-G mechanism could, however,
potentially yield some small-grain alignment, with observable consequences
for ultraviolet starlight polarization. Future work will examine this
possibility in greater detail.
Currently, the consensus view is that radiative torques dominate in the
alignment of relatively large grains; see \citet{L07} and \citet{ALV15}
for reviews. This view has resulted, in part, from the conclusion of
\citet{LD99b} that grains subjected only to torques that are fixed relative
to the grain body are thermally trapped. We conclude that, even without thermal
trapping, D-G alignment does not effectively align large grains.
Detailed models have found that,
in the radiative-torque alignment scenario, grains can pass through
crossovers and can reach aligned states characterized by either suprathermal
or thermal rotation \citep{WD03, HL09}. Thus, the main result of this paper,
that the mean duration of up-steps exceeds the mean duration of down-steps,
could have significant implications for radiative-torque alignment as well
as for D-G alignment. In future work, we will adapt the computational and
theoretical tools developed here to a study of radiative-torque alignment.
We will also consider grains with superparamagnetic inclusions.
Finally, \citet{Purcell79} focused on grain alignment, but also noted that
the tensile stress within a suprathermally rotating grain could possibly
disrupt the grain, depending on its structure. Recently, Hoang and
collaborators \citep[e.g.][]{HTLA19, H19, H20, LH20} have developed this idea
in detail, with a focus on radiative torques. The grain equilibrium rotational
rates for the model of H$_2$-formation torques adopted here are shown in
Fig.~\ref{fig:Jp-eq-hist} and equation (\ref{eq:Jp_eq_from_Qs}). Comparing
with Fig.~13 in \citet{LH20} and Figs.~12--15 in \citet{DW97}, these are
comparable to results from previous studies of H$_2$-formation torques and
can exceed the rotational rates arising from radiative torques, for conditions
typical of the CNM. Thus, our conclusion that thermal trapping is not prevalent
could have significant implications for the rotational disruption of grains,
especially in environments where radiative torques are weak. See Section 9.4
in \citet{LH20} for more details. The uncertainties associated with the
H$_2$-formation model, noted at the end of Section \ref{sec:D-G-alignment}
above, will need to be resolved in order to clarify the importance of the
resulting torques to grain disruption.
\section*{Acknowledgements}
We are grateful to Bruce Draine and the anonymous referee for helpful comments
on the manuscript.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the
corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:intro}
Cepheid variables are a key component of astrophysical research, as they are an indispensable link in the extragalactic distance ladder. However, for the most part, current research focuses on Classical Cepheids with periods less than 60 days. There are good reasons for this: they require less observing time to achieve sufficient phase coverage, and they have a well defined period-luminosity (PL) relation. Also, Cepheids with periods greater than 100 days obey a different relation (\cite{bird}), which has not been nearly as well documented. In this work, we follow the same conventions as \cite{bird}, defining these ``Ultra Long Period Cepheids'' (ULPCs) to be any fundamental mode Cepheids with a pulsation period greater than 80 days.
\par
Even though ULPCs have generally not been studied as intensively as shorter period Cepheids, there are many advantages to acquiring data on them. As the PL relation indicates, the longer the period of a star, the brighter it is, so these variables can be observed at much greater distances than are currently possible, allowing Cepheid-based distance measurements out to 100 Mpc and beyond (\cite{bird}). In this work, our goal is to study the structural properties of ULPC light curves. We are interested in comparing the light curves of ULPCs to those of shorter period Classical Cepheids as well as to longer period Mira-like variables (Miras).
\begin{figure*}[!t]
\centering
\includegraphics[height=\textwidth, angle=-90]{fig1.eps}
\caption{ULPC light curves with their respective Fourier fits. The absolute V (open squares) and I (filled squares) magnitudes are represented along the y-axis, and the phase of pulsation is along the x-axis.}
\label{fig:ulplcv}
\end{figure*}
\section[]{Data}
\label{sec:data}
\subsection{ULPCs}
\label{sec:ulpcs}
Data for ULPCs are very limited at present, but we were able to compile observations of 17 ULPCs in 6 different galaxies: the Large and Small Magellanic Clouds (LMC/SMC), NGC 6822, NGC 55, NGC 300, and I Zw 18.
\par
For the six stars in the LMC and the SMC, we obtained the V-band data from \cite{bird} and the I-band data from \cite{moffett}. We also acquired additional V- and I-band data through the McMaster Cepheid Photometry Archive\footnote{http://crocus.physics.mcmaster.ca/Cepheid/} for four ULPCs in the LMC. \cite{freedman} had some additional data points for HV 883 and HV 2447. Also, for HV 2883 and HV 5497, we acquired data from \cite{martin}, again through the McMaster Archive. \cite{bird} also had V- and I-band data for the five stars in NGC 55, the three stars in NGC 300, and the one star in NGC 6822. Again, using the McMaster Archive, additional photometry was acquired for NGC 300-1 (as named by \cite{bird}) from \cite{freedman}. The photometric V- and I-band data for the two stars in I Zw 18 were obtained from \cite{fiorentino}.
\par
The periods for these stars were also compiled in \cite{bird}. In Table \ref{tab:ulpcs} we list all the ULPCs in our data set together with their periods and host galaxies. The names of the stars (Column 1) are consistent with those used by \cite{bird}. Column 3 contains the distance modulus (DM) values for the host galaxies listed in Column 2. The magnitudes (Column 5) are the absolute, reddening-corrected, mean V-band magnitudes as calculated by our Fourier analysis, with orders as shown in Column 6. All the data for these stars were obtained from \cite{bird} and references therein; footnotes mark the stars for which additional data were compiled. Figure \ref{fig:ulplcv} presents the ULPC light curves used in this work.
\begin{table}
\begin{center}
\begin{minipage}{\columnwidth}
\renewcommand{\thefootnote}{\thempfootnote}
\caption{Data on the ULPCs that we used.}
\centering
\begin{tabular}{@{}llrrrr@{}}
\hline
Name & Galaxy & DM & Period [days] & $M_V$ & N\\
\hline
HV 883\footnote{Additional data from \cite{freedman}.\label{fn:f}} & LMC & \multirow{4}{*}{18.50} & 133.6 & -6.30 & 4 \\
HV 2447\footref{fn:f} & LMC & & 118.7 & -6.47 & 4 \\
HV 2883\footnote{Additional data from \cite{martin}.\label{fn:m}} & LMC & & 109.2 & -5.98 & 3 \\
HV 5497\footref{fn:m} & LMC & & 98.6 & -6.62 & 4 \\
HV 821 & SMC & \multirow{2}{*}{18.93} & 127.5 & -6.99 & 4 \\
HV 829 & SMC & & 84.4 & -7.00 & 4 \\
NGC 6822 - 1 & NGC 6822 & 23.31 & 123.9 & -5.41 & 3 \\
NGC 55 - 1 & NGC 55 & \multirow{5}{*}{26.43} & 175.9 & -7.17 & 4 \\
NGC 55 - 2 & NGC 55 & & 152.1 & -6.87 & 3 \\
NGC 55 - 3 & NGC 55 & & 112.7 & -6.21 & 3 \\
NGC 55 - 4 & NGC 55 & & 97.7 & -5.89 & 4 \\
NGC 55 - 5 & NGC 55 & & 85.1 & -5.58 & 3 \\
NGC 300 - 1\footref{fn:f} & NGC 300 & \multirow{3}{*}{26.37} & 115.8 & -6.15 & 3 \\
NGC 300 - 2 & NGC 300 & & 89.1 & -6.66 & 3 \\
NGC 300 - 3 & NGC 300 & & 83.0 & -7.11 & 2 \\
V1 & I Zw 18 & \multirow{2}{*}{31.30} & 129.8 & -7.30 & 2 \\
V15 & I Zw 18 & & 125.0 & -7.62 & 2 \\
\hline
\label{tab:ulpcs}
\end{tabular}
\end{minipage}
\end{center}
Note: DM is the distance modulus of the host galaxy, $M_V$ is the absolute $V$-band averaged magnitude after reddening correction, and $N$ is the order of Fourier fit.
\end{table}
\subsection{OGLE-III Catalog}
\label{sec:ogle}
The third installment of the Optical Gravitational Lensing Experiment (OGLE-III) contains photometric data for many variable stars in the Large and Small Magellanic Clouds. We were interested in the V- and I-band photometric light-curve data for fundamental mode Cepheids and longer period Mira-like variables; we also obtained the periods for all the stars we used from the catalog. From the OGLE-III LMC Cepheid catalog (\cite{soslmc}), we chose 1,804 (out of 1,849) Cepheids, based on the data available for these stars, and obtained their V- and I-band photometric light curves. Similarly, of the 2,626 available fundamental mode Cepheids in the SMC (\cite{sossmc}), we selected 2,596. The OGLE-III catalog also contains photometric and period data for ``long period variables'' (LPVs). These LPVs are classified into three different types: Miras, OSARGs, and SRVs (\cite{sos}). We chose 1,407 of the listed 1,663 Mira variables and again used both their V- and I-band photometric data.
\subsection{NGC 300 Cepheid Data}
\label{sec:ngc}
\cite{gieren} acquired V- and I-band photometric data for 64 Cepheids in NGC 300 with periods ranging from 11 to 115 days. Three of these Cepheids have periods greater than 80 days, so they are included as ULPCs rather than as regular Cepheids of NGC 300. The remaining 61 stars and their light curve data were added to our Classical Cepheid data set.
\section[]{Fourier Analysis}
\label{sec:fourier}
We characterized the structure of all the observed light curves in our data set using the technique of Fourier decomposition. That is, the following function was fit to the observed data for
a given star:
\begin{equation}
\mathrm{mag}(t)=A_{0} + \sum_{k=1}^{N} A_{k} \sin(k\omega t + \phi_{k})
\label{eq:fourier}
\end{equation}
Here $\omega = {2\pi /P}$, where $P$ is the period in days, $A_k$ and $\phi_k$ represent the amplitude and phase-shift for the $k^{\mathrm{th}}$ order respectively, and $N$ is the order of the fit. To determine the order, we used several different techniques. For the ULPCs, we looked at the Fourier fits of each star individually and visually determined which order gave the best representation. The resulting orders are presented in the last Column of Table \ref{tab:ulpcs}. To determine the optimum orders for the Classical Cepheids and the Miras, we ran the Fourier analysis with several different orders and, after inspecting some of the fits, concluded that the orders listed in Column 5 of Table \ref{tab:ccmiradata} gave the best representations of our data. This was a universal value, used for all the stars of the given type in the given galaxy. In order to quantify the structure of the light curve, we used the Fourier parameters $R_{k1}$ and $\phi_{k1}$ defined as follows (\cite{simon}):
\begin{equation}
R_{k1} = \frac{A_{k}}{A_{1}}; \quad \phi_{k1} = \phi_{k} - k\phi_{1}
\label{eq:rk1def}
\end{equation}
where $k$ is set to be $2$ and $3$.
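Since equation (\ref{eq:fourier}) is linear in $A_k \cos\phi_k$ and $A_k \sin\phi_k$, the fit reduces to linear least squares in a sin/cos basis. The sketch below is our illustration of the procedure, not the pipeline actually used in this work:

```python
import numpy as np

def fourier_decompose(t, mag, period, order):
    """Fit mag(t) = A0 + sum_k A_k sin(k w t + phi_k) by linear least
    squares, using A sin(kwt + phi) = a sin(kwt) + b cos(kwt) with
    a = A cos(phi), b = A sin(phi)."""
    w = 2.0 * np.pi / period
    t = np.asarray(t, float)
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.sin(k * w * t), np.cos(k * w * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols),
                               np.asarray(mag, float), rcond=None)
    a, b = coef[1::2], coef[2::2]
    A, phi = np.hypot(a, b), np.arctan2(b, a)   # amplitudes, phase shifts
    return coef[0], A, phi

def fourier_params(A, phi, k):
    """R_k1 and phi_k1 of equation (2)."""
    return A[k - 1] / A[0], phi[k - 1] - k * phi[0]
```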
\begin{table}
\centering
\begin{minipage}{\columnwidth}
\renewcommand{\thefootnote}{\thempfootnote}
\caption{Information on the Classical Cepheids and Miras that we used in comparison with the ULPCs.}
\centering
\begin{tabular}{@{} l l r c r @{}}
\hline
Galaxy & Star Type & Stars & Band & Order \\
\hline
\multirow{4}{*}{LMC} & \multirow{2}{*}{Cepheids} & \multirow{2}{*}{1,804} & V & 5 \\
& & & I & 8 \\
& \multirow{2}{*}{Miras} & \multirow{2}{*}{1,407} & V & 3 \\
& & & I & 3 \\
\multirow{2}{*}{SMC} & \multirow{2}{*}{Cepheids} & \multirow{2}{*}{2,598} & V & 4 \\
& & & I & 8 \\
\multirow{2}{*}{NGC 300} & \multirow{2}{*}{Cepheids} & \multirow{2}{*}{61} & V & 3 \\
& & & I & 3 \\
\hline
\label{tab:ccmiradata}
\end{tabular}
\end{minipage}
\end{table}
\subsection{Error on Fourier Parameters}
\label{sec:fouriererr}
In order to determine the accuracy of our results, we used the techniques developed by \cite{petersen} to calculate the errors on the $R_{k1}$ and $\phi_{k1}$ parameters as follows:
\begin{equation}
\sigma^2(R_{k1})=A_{1}^{-4}\epsilon^2(A_1^2+A_k^2)
\end{equation}
\begin{equation}
\sigma^2(\phi_{k1})=\epsilon^2(A_k^{-2}+k^2 A_1^{-2})
\end{equation}
where $A$ is the coefficient of the Fourier decomposition as shown in Equation (\ref{eq:fourier}) and $\epsilon$ is related to the sum of the squared residuals, $[vv]$:
\begin{equation}
\epsilon^2=\frac{2}{J}\frac{[vv]}{J-2N-1}, \quad \mathrm{and}\quad[vv]=\sum_{j=1}^Jv_jv_j
\end{equation}
In the above equation, $J$ is the number of data points and $v_j$ is the $j^{\mathrm{th}}$ residual. Although these expressions are approximations, \cite{petersen} states that they still provide valuable information on the accuracy of the Fourier parameters.
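These error expressions translate directly into code. A sketch (ours; the array layout, with `A[0]` holding $A_1$, is an implementation choice):

```python
import numpy as np

def fourier_param_errors(A, residuals, order):
    """sigma(R_k1) and sigma(phi_k1) for k = 2, 3, following the
    expressions above, with eps^2 = (2/J) [vv] / (J - 2N - 1)."""
    v = np.asarray(residuals, float)
    J = v.size
    eps2 = (2.0 / J) * np.sum(v * v) / (J - 2 * order - 1)
    A1 = A[0]
    out = {}
    for k in (2, 3):
        Ak = A[k - 1]
        out[f"sigma_R{k}1"] = np.sqrt(eps2 * (A1**2 + Ak**2) / A1**4)
        out[f"sigma_phi{k}1"] = np.sqrt(eps2 * (Ak**-2 + k**2 * A1**-2))
    return out
```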
\begin{figure}
\centering
\includegraphics[height=\columnwidth, angle=-90]{fig2.eps}
\caption{The V-band (open squares) and I-band (solid squares) light curves for OGLE-LMC-LPV-01408, as classified in the OGLE-III database. Note that the I-band data show large dispersion, owing to the variation in period and amplitude of the Miras' light curves. The Fourier fit (solid black lines) is an approximate ``average'' of the data points.}
\label{fig:miralcv}
\end{figure}
\subsection{Fourier Fit to the Miras Data}
\label{sec:miraerr}
Miras are known to have periods and amplitudes that are not constant, unlike the Classical Cepheids. This made compiling the light curves slightly more difficult, because the data points did not always form a single, well-defined curve when the measurement times were converted to phases of one period. The I-band data were especially problematic. We dealt with this by using a low enough order of the Fourier fit (as given in Table \ref{tab:ccmiradata}) so as to essentially take an average of the light curve. This is demonstrated in Figure \ref{fig:miralcv}, where the solid squares are the I-band measurements. The solid black lines are the calculated Fourier fits: the I-band fit is approximately an average of the spread of data points. In order to calculate the Fourier parameters stated above, we needed terms of at least third order (for some of the Miras, a higher-order fit produced artifacts in the fitted light curves). Therefore, when we calculated the Fourier parameters and plotted the fitted light curves, we restricted ourselves to a third-order fit.
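The fitting procedure reduces to a linear least-squares problem. The sketch below (a generic implementation of the standard decomposition $m(\phi)=A_0+\sum_k A_k\sin(2\pi k\phi+\phi_k)$; function names are our own, and conventions for wrapping $\phi_{k1}$ into $[0,2\pi)$ vary between authors) fits a phased light curve and extracts $R_{21}$ and $\phi_{21}$:

```python
import numpy as np

def fit_fourier(phase, mag, order=3):
    """Least-squares fit m(phi) = a0 + sum_k A_k sin(2 pi k phi + phi_k).

    Returns (A, phi): amplitudes A_k and phases phi_k for k = 1..order.
    """
    # Design matrix: a constant column plus sin/cos pairs for each harmonic.
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.sin(2 * np.pi * k * phase), np.cos(2 * np.pi * k * phase)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
    s, c = coef[1::2], coef[2::2]        # sine and cosine coefficients
    A = np.hypot(s, c)                   # A_k = sqrt(s_k^2 + c_k^2)
    phi = np.arctan2(c, s)               # phase offset of each harmonic
    return A, phi

def fourier_params(A, phi, k):
    """Amplitude ratio R_k1 = A_k/A_1 and phase difference phi_k1 = phi_k - k*phi_1."""
    Rk1 = A[k - 1] / A[0]
    phik1 = (phi[k - 1] - k * phi[0]) % (2 * np.pi)
    return Rk1, phik1
```

For Miras, running this fit at low order on the phased data is what produces the ``average'' curve shown in Figure \ref{fig:miralcv}.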
\begin{table*}
\centering
\renewcommand{\thefootnote}{\thempfootnote}
\caption{Fitted Fourier parameters for ULPCs.}
\centering
\begin{tabular}{@{}lccccc@{}}
\hline
Name & $\log(P)$ & $R_{21}$ & $\phi_{21}$ & $R_{31}$ & $\phi_{31}$ \\
\hline
\multicolumn{6}{c}{V-band Results} \\
\hline
HV 883 & 2.126 & $0.230\pm0.003$ & $3.464\pm0.064$ & $0.103\pm0.003$ & $18.737\pm0.293$ \\
HV 2447 & 2.074 & $0.066\pm0.005$ & $3.495\pm1.204$ & $0.027\pm0.005$ & $\ast 13.280\pm7.182$ \\
HV 2883 & 2.038 & $0.342\pm0.005$ & $3.876\pm0.053$ & $0.095\pm0.004$ & $14.692\pm0.500$ \\
HV 5497 & 1.994 & $0.242\pm0.005$ & $3.766\pm0.094$ & $0.034\pm0.004$ & $12.617\pm3.840$ \\
HV 821 & 2.106 & $0.070\pm0.005$ & $4.175\pm0.975$ & $0.046\pm0.005$ & $14.026\pm2.261$ \\
HV 829 & 1.926 & $0.391\pm0.007$ & $3.935\pm0.061$ & $0.125\pm0.006$ & $14.065\pm0.419$ \\
NGC6822-1 & 2.093 & $0.345\pm0.004$ & $3.477\pm0.040$ & $0.104\pm0.003$ & $18.843\pm0.329$ \\
NGC55-1 & 2.245 & $0.185\pm0.024$ & $2.534\pm0.783$ & $0.080\pm0.024$ & $17.162\pm3.891$ \\
NGC55-2 & 2.182 & $0.227\pm0.108$ & $2.243\pm2.401$ & $0.099\pm0.104$ & $\ast 13.699\pm11.396$ \\
NGC55-3 & 2.052 & $0.139\pm0.010$ & $4.069\pm0.543$ & $0.110\pm0.010$ & $15.427\pm0.889$ \\
NGC55-4 & 1.990 & $0.160\pm0.087$ & $4.732\pm3.636$ & $0.040\pm0.085$ & $\ast 14.177\pm52.827$ \\
NGC55-5 & 1.930 & $0.417\pm0.120$ & $3.567\pm0.993$ & $0.138\pm0.104$ & $\ast 18.434\pm6.275$ \\
NGC300-1 & 2.064 & $0.339\pm0.022$ & $3.561\pm0.248$ & $0.117\pm0.020$ & $18.318\pm1.611$ \\
NGC300-2 & 1.950 & $0.310\pm0.055$ & $3.870\pm0.727$ & $0.273\pm0.054$ & $14.172\pm1.132$ \\
NGC300-3 & 1.919 & $0.289\pm0.169$ & $3.491\pm2.494$ & $\cdots$ & $\cdots$ \\
V1 & 2.113 & $0.072\pm0.027$ & $\ast 2.938\pm5.207$& $\cdots$ & $\cdots$ \\
V15 & 2.097 & $0.102\pm0.032$ & $3.985\pm3.198$ & $\cdots$ & $\cdots$ \\
\hline
\multicolumn{6}{c}{I-band Results} \\
\hline
HV 883 & 2.126 & $0.230\pm0.003$ & $3.991\pm0.771$ & $0.100\pm0.019$ & $14.480\pm2.085$ \\
HV 2447 & 2.074 & $0.066\pm0.005$ & $6.239\pm2.215$ & $0.155\pm0.075$ & $12.815\pm3.684$ \\
HV 2883 & 2.038 & $0.342\pm0.005$ & $4.593\pm0.101$ & $0.055\pm0.006$ & $15.922\pm1.968$ \\
HV 5497 & 1.994 & $0.242\pm0.005$ & $4.421\pm0.522$ & $0.059\pm0.021$ & $\ast 18.001\pm6.235$ \\
HV 821 & 2.106 & $0.070\pm0.005$ & $4.010\pm0.435$ & $0.071\pm0.009$ & $15.023\pm1.915$ \\
HV 829 & 1.926 & $0.391\pm0.007$ & $4.675\pm0.409$ & $0.031\pm0.042$ & $\ast 15.066\pm42.719$ \\
NGC6822-1 & 2.093 & $0.345\pm0.004$ & $4.298\pm0.073$ & $0.086\pm0.004$ & $14.215\pm0.575$ \\
NGC55-1 & 2.245 & $0.185\pm0.024$ & $2.540\pm3.850$ & $0.048\pm0.069$ & $\ast 17.758\pm30.560$ \\
NGC55-2 & 2.182 & $0.227\pm0.108$ & $\ast 3.934\pm23.429$ & $0.182\pm0.219$ & $13.983\pm8.321$ \\
NGC55-3 & 2.052 & $0.139\pm0.010$ & $4.665\pm0.635$ & $0.062\pm0.017$ & $\ast 16.176\pm4.574$ \\
NGC55-4 & 1.990 & $0.160\pm0.087$ & $\ast 4.380\pm4.331$ & $0.149\pm0.104$ & $15.050\pm5.481$ \\
NGC55-5 & 1.930 & $0.417\pm0.120$ & $4.401\pm1.088$ & $0.053\pm0.094$ & $\ast 14.927\pm34.289$ \\
NGC300-1 & 2.064 & $0.339\pm0.022$ & $4.467\pm0.312$ & $0.093\pm0.014$ & $14.352\pm1.767$ \\
NGC300-2 & 1.950 & $0.310\pm0.055$ & $4.035\pm3.804$ & $0.211\pm0.092$ & $14.703\pm2.784$ \\
NGC300-3 & 1.919 & $0.289\pm0.169$ & $\ast 4.278\pm5.438$ & $\ast 0.329\pm0.607$ & $\ast 13.559\pm9.979$ \\
V1 & 2.113 & $0.072\pm0.027$ & $5.354\pm1.773$ & $0.166\pm0.033$ & $18.158\pm1.441$ \\
V15 & 2.097 & $0.102\pm0.032$ & $4.475\pm2.454$ & $0.035\pm0.083$ & $\ast 14.684\pm69.933$ \\
\hline
\label{tab:param}
\end{tabular}
\\
Note: $P$ in column 2 represents the period. Entries with $\ast$ are not included in Figure \ref{fig:fp}.
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textheight, angle=-90]{fig4.eps}
\caption{Graphs of the Fourier parameters as calculated by the formulas stated in Section \ref{sec:fourier}. The x-axis is the base 10 logarithm of the period of the star in days, and the y-axis is the value of the various Fourier parameters. The black points are the ULPCs with the gray points being the Classical Cepheids and Miras from the galaxies stated in Section \ref{sec:data}. All the data also have error bars as calculated using the method developed by \cite{petersen} described in Section \ref{sec:fouriererr}.}
\label{fig:fp}
\end{figure*}
\subsection{Fourier Parameters}
\label{sec:fourierresults}
Our main goal was to compare the Classical Cepheids, ULPCs and Miras through the structure of their light curves, using the Fourier analysis technique (see, for example, \cite{simon}). Figure \ref{fig:fp} shows plots of the $\phi_{21}$, $\phi_{31}$, $R_{21}$, and $R_{31}$ Fourier parameters in both the V- and I-bands, where the values for the ULPCs are given in Table \ref{tab:param}. In this figure, the solid black squares are the ULPCs, and the various shades of gray are the Classical Cepheids and the Miras from their respective galaxies. Although the error bars are large, and there are many outliers in the Mira data, there does seem to be a trend in the LMC and SMC Classical Cepheids that leads fairly continuously to the ULPCs and beyond to the Miras.
\par
We were able to reconstruct well-known features of these Fourier parameter plots (\cite{simon}). In both the V- and I-band graphs of $\phi_{21}$, the resonance at a value of $\log(P)=1$, or a period $P$ of $10$ days, is clearly visible. After this dramatic drop in the values of $\phi_{21}$, the Classical Cepheids seem to lie along a fairly flat relation, and although we do not have much data between $1.8<\log(P)<2.0$, the ULPCs seem to follow a similar trend. We see similar results for the other Fourier parameter graphs. For example, in the $\phi_{31}$ graphs we again see a resonance at $\log(P)\sim 1$, followed by a flattening of the relation (though in this case, the ULPCs do not line up quite as well). Even though there is not much data for stars between $1.8<\log(P)<2.0$, and the error bars on most of the data are quite large, the Fourier parameters ($R_{21}$, $R_{31}$, $\phi_{21}$ \& $\phi_{31}$) of the ULPCs appear to extend the trends defined by the Classical Cepheids and to overlap with the parameter space occupied by the Miras.
\section{Discussion \& Conclusion}
\begin{figure*}
\centering
\includegraphics[height=\columnwidth]{VPL.eps}
\includegraphics[height=\columnwidth]{IPL.eps}
\caption{Distance modulus (DM; the distance modulus for different galaxies are given in Table \ref{tab:ulpcs}) corrected V- and I-band PL relations. The LMC and SMC Cepheids are shown as crosses in cyan (or light) and blue (or dark) colors, respectively. The Miras data are displayed as (red) open squares. The black filled circles are the data for ULPCs compiled in Section \ref{sec:data}, which are clearly an extension of the PL relations defined by the Classical Cepheids.}
\label{fig:plcompare}
\end{figure*}
By analyzing the light curves of Ultra Long Period Cepheid variable stars, we looked for a relation between shorter period variables (Classical Cepheids) and longer period variables (Miras). We used Fourier analysis to quantitatively measure the structural properties of the ULPCs' light curves and compared them to the light curves of Classical Cepheids as well as of long period Mira variables in the LMC. It is interesting to note the locations of the ULPCs more specifically, with respect to the trends produced by the Miras and the Classical Cepheids. In the graphs of the Fourier parameters (Figure \ref{fig:fp}), the ULPCs seem to be more consistent with the Miras: they lie with the longer period variables rather than along the relations created by the Classical Cepheids. This suggests that when using light-curve information to classify variable stars with periods longer than 80 days, some ULPCs might be mis-classified as Miras (and vice versa) because of their overlapping Fourier parameters. In contrast, plots of period and luminosity can be used to distinguish Classical Cepheids and ULPCs from Miras, as they are well separated in the PL relation (see Figure \ref{fig:plcompare}). Unfortunately, the error bars in Figure \ref{fig:fp} are quite large, and we have a very small sample of ULPCs. However, once we obtain more data points, both for the light curves of known ULPCs and for many more as yet undiscovered ULPCs, we will be able to learn much more about how this class of variables relates to currently known types such as the Classical Cepheids and Mira variables.
\section*{Acknowledgment}
The authors thank SUNY Oswego, National Central University (NCU) and the Graduate Institute for Astronomy at NCU, and the National Science Foundation's Office of International Science \& Engineering (award 1065093). CCN thanks the National Science Council of Taiwan for funding under contract NSC101-2112-M-008-017-MY3. We would also like to thank G. Fiorentino for her helpful comments when drafting this paper. Finally, this project would not have been possible without the ULPC data compiled by J. Bird and G. Fiorentino.
% arXiv:1309.4464
\section{Introduction to Weyl semimetals}
The earliest classification of the forms of matter in nature, typically
presented to us in our early school days, consists of \emph{solids},
\emph{liquids} and \emph{gases}. High school physics textbooks and
experience later teach us that solids can be further classified based
on their electronic properties as \emph{conductors} and \emph{insulators}.
They tell us that as long as the electrons in a solid are non-interacting,
solids with partially filled bands are metals or conductors while
those with no partially filled bands and a gap between the valence
and the conduction bands are insulators or semiconductors. Solid state
physics courses in college add another phase to this list: if the
gap is extremely small or vanishing, or if there is a tiny overlap
between the valence and the conduction bands, the material is \emph{semimetallic}
and has markedly different electronic properties from metals and insulators.
Graphene (\citet{GeimGraphene}) -- a two dimensional (2D) sheet of
carbon atoms -- is the most celebrated example of a semimetal with
a vanishing gap. In this system, the conduction and valence bands
intersect at certain points in momentum space known as Dirac points.
The dispersion near these points is linear and electrons at nearby
momenta act like massless relativistic particles, thus stimulating
the interest of condensed matter and high-energy physicists alike.
In the last couple of years, there has been growing interest in a
seemingly close cousin of graphene -- the so-called \emph{Weyl semimetal}
(WSM) (\citet{PyrochloreWeyl,KrempaWeyl,ChenIridate,TurnerTopPhases,VafekDiracReview,VolovikBook,VolovikFlatBands}).
Like graphene, its band structure has a pair of bands crossing at
certain points in momentum space; unlike graphene, this is a three-dimensional
(3D) system. Near each such \emph{Weyl point} or \emph{Weyl node},
the Hamiltonian resembles the Hamiltonian for the Weyl fermions that
are well-known in particle physics literature:
\begin{equation}
H_{\mbox{Weyl}}=\sum_{i,j\in\{x,y,z\}}\hbar v_{ij}k_{i}\sigma_{j}\label{eq:Weyl Hamiltonian}
\end{equation}
where $\hbar$ is the reduced Planck's constant, $v_{ij}$ have dimensions
of velocity, $k_{i}$ is the momentum relative to the Weyl point and
$\sigma_{j}$ are Pauli matrices in the basis of the two bands involved.
Thus, the name \emph{Weyl semimetal}.
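Concretely, (\ref{eq:Weyl Hamiltonian}) is just a $2\times2$ matrix at each momentum. A minimal numerical sketch (our own, in units with $\hbar=1$) confirms the conical dispersion $\epsilon_{\pm}=\pm v|\boldsymbol{k}|$ for the isotropic case $v_{ij}=v\delta_{ij}$, and evaluates $\mbox{sgn}[\mbox{det}(v_{ij})]$, the quantity that distinguishes the two chiralities discussed below:

```python
import numpy as np

# Pauli matrices in the basis of the two crossing bands.
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def h_weyl(k, v=np.eye(3), hbar=1.0):
    """H = hbar * sum_ij v_ij k_i sigma_j near a single Weyl node."""
    return hbar * sum(v[i, j] * k[i] * PAULI[j]
                      for i in range(3) for j in range(3))

def chirality(v):
    """chi = sgn[det(v_ij)] of the node."""
    return int(np.sign(np.linalg.det(v)))
```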
A closer look, however, unveils a plethora of differences between
graphene and WSMs because of their different dimensionality. An immediate
consequence of the form of $H_{\mbox{Weyl}}$ is that the Weyl points
are topological objects in momentum space. Since all three Pauli matrices
have been used up in $H_{\mbox{Weyl}}$, there is no matrix that anticommutes
with $H_{\mbox{Weyl}}$ and gaps out the spectrum. There are then
only two ways a Weyl point can be destroyed. The first is by annihilating
it with another Weyl point of opposite chirality, either by explicitly
moving the Weyl points in momentum space and merging them or by allowing
for scattering to occur between different Weyl nodes; the latter requires
the violation of translational invariance. The second way is by violating
charge conservation via superconductivity. Thus, given a band structure,
the Weyl nodes are stable to arbitrary perturbations as long as charge
conservation and translational invariance are preserved. Disorder,
in general, does not preserve the latter symmetry; however, if the
disorder is smooth, many properties of the WSM that rely on the topological
nature of the band structure should survive. In contrast, Dirac nodes
in graphene can be destroyed individually by breaking lattice point
group symmetries.
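This stability is easy to check directly: in the toy check below (our own; isotropic node, $\hbar=v=1$), an arbitrary constant perturbation $\boldsymbol{b}\cdot\boldsymbol{\sigma}$ merely shifts the node to $\boldsymbol{k}=-\boldsymbol{b}$ rather than opening a gap:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def gap(k, b):
    """Direct band gap of H = k.sigma + b.sigma at momentum k."""
    q = np.asarray(k, dtype=float) + np.asarray(b, dtype=float)
    e = np.linalg.eigvalsh(q[0] * SX + q[1] * SY + q[2] * SZ)
    return e[1] - e[0]          # analytically equal to 2|k + b|
```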
The topological stability of the Weyl nodes crucially relies on the
intersecting bands being non-degenerate. For degenerate bands, terms
that hybridize states within a degenerate subspace can in general
gap out the spectrum. Thus, the WSM phase necessarily breaks at least
one out of time-reversal and inversion symmetries, as the presence
of both will make each state doubly degenerate.
Based on (\ref{eq:Weyl Hamiltonian}), each Weyl point in a WSM can
be characterized by a \emph{chirality quantum number} $\chi$ defined
as $\chi=\mbox{sgn}[\mbox{det}(v_{ij})]$. The physical significance
of the chirality is as follows. An electron living in a Bloch band
feels an effective vector potential $\boldsymbol{A}(\boldsymbol{k})=i\left\langle u(\boldsymbol{k})|\boldsymbol{\nabla_{k}}u(\boldsymbol{k})\right\rangle $
because of spatial variations of the Bloch state $|u(\boldsymbol{k})\rangle$
within a unit cell. The corresponding field strength $\boldsymbol{F}(\boldsymbol{k})=\boldsymbol{\nabla}_{\boldsymbol{k}}\times\boldsymbol{A}(\boldsymbol{k})$,
known as the Berry curvature or the Chern flux, plays the role of
a magnetic field corresponding to the vector potential $\boldsymbol{A}(\boldsymbol{k})$.
It can be shown that $\boldsymbol{F}(\boldsymbol{k})$ near a Weyl
node of chirality $\chi$ satisfies
\begin{equation}
\frac{1}{2\pi}\oint_{FS}\boldsymbol{F}(\boldsymbol{k})\cdot\mathrm{d}\boldsymbol{S}(\boldsymbol{k})=\chi\label{eq:monopole chirality}
\end{equation}
where the integral is over any Fermi surface enclosing the Weyl node
and the area element $\mathrm{d}\boldsymbol{S}(\boldsymbol{k})$ is
defined so as to point away from the occupied states. Since $\boldsymbol{F}(\boldsymbol{k})$
acts like a magnetic field in momentum space, (\ref{eq:monopole chirality})
suggests that a Weyl node acts like a magnetic monopole in momentum
space whose magnetic charge equals its chirality. Equivalently, a
Fermi surface surrounding the Weyl node is topologically non-trivial;
it has a Chern number, defined as the Berry curvature integrated over
the surface as in (\ref{eq:monopole chirality}), of $\chi$ ($-\chi$)
for an electron (a hole) Fermi surface.
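Eq. (\ref{eq:monopole chirality}) can be verified numerically by discretizing a sphere around the node and summing gauge-invariant plaquette Berry phases, in the spirit of the standard link-variable method of Fukui, Hatsugai and Suzuki. The sketch below (our own; $H=\boldsymbol{k}\cdot\boldsymbol{\sigma}$ with $\hbar=v=1$; note that the overall sign of the Berry curvature depends on conventions) recovers a unit monopole charge, with opposite sign for the two bands:

```python
import numpy as np

def band_state(k, band):
    """Eigenvector of H = k.sigma; band = 0 (lower) or 1 (upper)."""
    kx, ky, kz = k
    H = np.array([[kz, kx - 1j * ky], [kx + 1j * ky, -kz]])
    return np.linalg.eigh(H)[1][:, band]

def berry_flux_on_sphere(band, n=60):
    """Berry flux / 2pi through a unit sphere enclosing the node at k = 0."""
    th = np.linspace(1e-3, np.pi - 1e-3, n)   # exclude the gauge-singular poles
    ph = np.linspace(0.0, 2 * np.pi, n)
    u = np.array([[band_state([np.sin(t) * np.cos(p),
                               np.sin(t) * np.sin(p),
                               np.cos(t)], band) for p in ph] for t in th])
    flux = 0.0
    for a in range(n - 1):
        for b in range(n - 1):
            # gauge-invariant Berry phase around one plaquette of the grid
            U = (np.vdot(u[a, b], u[a + 1, b]) *
                 np.vdot(u[a + 1, b], u[a + 1, b + 1]) *
                 np.vdot(u[a + 1, b + 1], u[a, b + 1]) *
                 np.vdot(u[a, b + 1], u[a, b]))
            flux += np.angle(U)
    return flux / (2 * np.pi)
```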
\begin{figure}
\begin{centering}
\includegraphics[width=0.5\columnwidth]{FermiArcs}
\par\end{centering}
\caption{Weyl semimetal with a pair of Weyl nodes of opposite chirality (denoted
by different colors green and blue) in a slab geometry. The surface
has unusual Fermi arc states (shown by red curves) that connect the
projections of the Weyl points on the surface. $C$ is the Chern number
of the 2D insulator at fixed momentum along the line joining the Weyl
nodes. The Fermi arcs are nothing but the gapless edge states of the
Chern insulators strung together. \label{fig:Weyl-semimetal}}
\end{figure}
\citet{NielsenFermionDoubling1,NielsenFermionDoubling2} showed that
the total magnetic charge in a band structure must be zero, which
implies that the total number of Weyl nodes must be even, with half
of each chirality. The argument is simple and runs as follows. Each
2D slice in momentum space that does not contain any Weyl nodes can
be thought of as a Chern insulator. Since Weyl nodes emit Chern flux,
the Chern number changes by $\chi$ as one sweeps the slices past
a Weyl node of chirality $\chi$. Clearly, the Chern numbers of slices
will be periodic across the Brillouin zone if and only if there are
as many Weyl nodes of chirality $\chi$ as there are of chirality
$-\chi$. Such a notion of chirality does not exist for graphene or
the surface states of topological insulators, which also consist of
2D Dirac nodes, because the Berry phase around a Fermi surface is
$\pi$ which is indistinguishable from $-\pi$.
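The slicing argument can be made concrete with a minimal two-band lattice model (a toy model of our own choosing, hosting two nodes at $(0,0,\pm\pi/3)$): the Chern number of each fixed-$k_z$ slice, computed with the link method on the 2D Brillouin-zone torus, is $\pm1$ between the nodes and $0$ outside, and is therefore periodic across the zone precisely because the nodes come in a pair of opposite chirality:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_state(kx, ky, kz, M=2.5):
    """Occupied band of a two-band lattice WSM with nodes at (0, 0, +-pi/3)."""
    H = (np.sin(kx) * SX + np.sin(ky) * SY
         + (M - np.cos(kx) - np.cos(ky) - np.cos(kz)) * SZ)
    return np.linalg.eigh(H)[1][:, 0]

def slice_chern(kz, n=24):
    """Chern number of a fixed-kz slice via link variables on the 2D BZ torus."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.array([[lower_state(kx, ky, kz) for ky in ks] for kx in ks])
    c = 0.0
    for a in range(n):
        for b in range(n):
            a2, b2 = (a + 1) % n, (b + 1) % n   # periodic wrapping of the torus
            U = (np.vdot(u[a, b], u[a2, b]) * np.vdot(u[a2, b], u[a2, b2]) *
                 np.vdot(u[a2, b2], u[a, b2]) * np.vdot(u[a, b2], u[a, b]))
            c += np.angle(U)
    return c / (2 * np.pi)
```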
The fact that each Weyl node is chiral and radiates Chern flux leads
to another marvelous phenomenon absent in two dimensions -- the \emph{chiral
anomaly}. The statement is as follows: suppose the universe (or, for
condensed matter purposes, the band structure) consisted only of Weyl
electrons of chirality $\chi$ and none of chirality $-\chi$. Then,
the electromagnetic current $j_{\chi}^{\mu}$ of these electrons in
the presence of electromagnetic fields $\boldsymbol{E}$ and $\boldsymbol{B}$
would satisfy ($e>0$ is the unit electric charge and $\hbar$ is
the reduced Planck's constant)
\begin{equation}
\partial_{\mu}j_{\chi}^{\mu}=-\chi\frac{e^{3}}{4\pi^{2}\hbar^{2}}\boldsymbol{E}\cdot\boldsymbol{B}\label{eq:chiral anomaly}
\end{equation}
i.e., charge would not be conserved! (\ref{eq:chiral anomaly}) can
equivalently be written in terms of the electromagnetic field strength
$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$, where $A_{\mu}$
is the vector potential, as
\begin{equation}
\partial_{\mu}j_{\chi}^{\mu}=-\chi\frac{e^{3}}{32\pi^{2}\hbar^{2}}\epsilon^{\mu\nu\rho\lambda}F_{\mu\nu}F_{\rho\lambda}\label{eq:chiral anomaly-1}
\end{equation}
where $\epsilon^{\mu\nu\rho\lambda}$ is the antisymmetric tensor.
(\ref{eq:chiral anomaly}) and (\ref{eq:chiral anomaly-1}) seem absurd;
however, they make sense instantly when one recalls that, in reality,
Weyl nodes always come in pairs of opposite chiralities and the total
current $j_{+}^{\mu}+j_{-}^{\mu}$ is therefore conserved. In fact,
the requirement of current conservation is an equally good argument
for why the total chirality of the Weyl nodes must vanish. Classically,
currents are always conserved no matter what the dispersion. Thus,
(\ref{eq:chiral anomaly}) is a purely quantum phenomenon and is an
upshot of the path integral for Weyl fermions coupled to an electromagnetic
field not being invariant under separate gauge transformations on
left-handed and right-handed Weyl fermions, even though the action
is. This will be explained in more detail in Sec. \ref{sec:The-chiral-anomaly}.
The purpose of this brief review is to recap some of the strange transport
phenomena associated with the chiral anomaly in WSMs that have been
discussed in the literature so far. The field is mushrooming, so we
make no attempt to be exhaustive. Instead, we describe results that
are relatively simple, experiment-friendly and \emph{firsts}, to the
best of our knowledge. This is an introductory review targeted mainly
towards readers new to the subject. Thus, the results are sketched
rather than expounded, and readers interested in further details of
any result are encouraged to follow up by consulting the original
work.
Before embarking on the review, we skim over another striking feature
of WSMs -- surface states known as \emph{Fermi arcs}. Although this
review does not focus on the Fermi arcs, they are such a unique and
remarkable characteristic of WSMs that it would be grossly unfair
to review WSMs without mentioning Fermi arcs.
Topological band structures are invariably endowed with topologically
protected surface states, and WSMs are no exception. The Fermi surface
of a WSM on a slab consists of unusual states known as Fermi arcs.
These states essentially form a 2D Fermi surface; however, part of this Fermi
surface is glued to the top surface and the rest to the bottom.
On each surface, Fermi arcs connect the projections of the bulk Weyl
nodes of opposite chiralities onto the surface, as shown in Figure
\ref{fig:Weyl-semimetal} for the case of two Weyl nodes. A simple
way to understand the presence of Fermi arcs is by recalling that
momentum space slices not containing Weyl nodes are Chern insulators
whose Chern numbers change by unity as one sweeps the slices past
a Weyl node. Thus, if the slices far away from the nodes have a Chern
number of $0$, i.e., the insulators are trivial, the slices between
the Weyl nodes are all Chern insulators with unit Chern number. The
Fermi arcs, then, are simply the edge states of these insulators.
Once WSMs with clean enough surfaces are found, the Fermi arcs should
be observable in routine photoemission experiments.
Alternatively, Fermi arcs can also be understood as the states left
behind by gapping out a stack of 2D Fermi surfaces by interlayer tunneling
in a chiral fashion (\citet{HosurFermiArcs}). In particular, consider
a toy model consisting of a stack of alternating electron and hole
Fermi surfaces. For short-ranged interlayer tunneling, each point
on each Fermi surface can hybridize in two ways -- either with a state
in the layer above or with a state in the layer below. If the interlayer
tunneling is momentum dependent, such that the preferred hybridization
is different for different parts of the Fermi surface, then the points
at which the hybridization preference changes become Weyl nodes in
the bulk while the end layers have leftover segments that do not have
partners to hybridize with and survive as the Fermi arcs. This is
shown in Fig \ref{fig:Fermi-arcs-layering}. In this picture, the
topological nature of the Fermi arcs is not apparent; however, the
way they connect projections of Weyl nodes on the surface becomes
transparent. This is in contrast to the previous description of Fermi
arcs, in which figuring out what boundary conditions correspond to
what connectivity for the Fermi arcs is a highly non-trivial task.
The layering picture gives a systematic way to generate Fermi arcs
of the desired shape and connectivity, thus facilitating theoretical
studies of Fermi arcs significantly.
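This picture can be checked numerically with a generic two-band toy lattice model (our own choice, with nodes at $k_z=\pm\pi/3$), opened along $y$: the slab spectrum has states pinned near zero energy only when $k_z$ lies between the projections of the Weyl nodes, which is exactly the Fermi arc:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def slab_min_energy(kx, kz, Ny=40, M=2.5):
    """Smallest |E| of a WSM slab (open along y) at fixed (kx, kz).

    Bulk model: H = sin(kx) SX + sin(ky) SY
                    + (M - cos kx - cos ky - cos kz) SZ,
    with the ky-dependent terms converted to nearest-neighbor hoppings in y.
    """
    onsite = np.sin(kx) * SX + (M - np.cos(kx) - np.cos(kz)) * SZ
    hop = -0.5j * SY - 0.5 * SZ            # forward (y -> y+1) hopping block
    H = np.zeros((2 * Ny, 2 * Ny), dtype=complex)
    for y in range(Ny):
        H[2*y:2*y+2, 2*y:2*y+2] = onsite
        if y + 1 < Ny:
            H[2*y:2*y+2, 2*y+2:2*y+4] = hop
            H[2*y+2:2*y+4, 2*y:2*y+2] = hop.conj().T
    return np.min(np.abs(np.linalg.eigvalsh(H)))
```

Sweeping $k_z$ at fixed $k_x$ traces out the interval of near-zero-energy surface states connecting the projected nodes, i.e., the arc of Figure \ref{fig:Weyl-semimetal}.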
\begin{figure}
\begin{centering}
\includegraphics[width=0.3\columnwidth]{evenlayers}~~\includegraphics[width=0.3\columnwidth]{oddlayers}
\par\end{centering}
\caption{Fermi arcs as the residual states of the process of gapping out a
stack of 2D Fermi surfaces with momentum-dependent interlayer tunneling.
Dotted (dashed) blue curves $C$ represent 2D electron (hole) Fermi
surfaces, solid red lines on the end layers labeled $S$ or $S^{\prime}$
denote Fermi arcs, and $\Delta_{\boldsymbol{k}}$ and $t_{\boldsymbol{k}}$
are momentum-dependent interlayer hopping amplitudes whose relative
magnitudes change at the black dots $\boldsymbol{K}_{1}$ and $\boldsymbol{K}_{2}$
as one moves along the Fermi surface in any layer. Changing the boundary
conditions in the left figure by peeling off the topmost layer gives
the right figure, which has a different Fermi arc structure. (\citet{HosurFermiArcs})
\label{fig:Fermi-arcs-layering}}
\end{figure}
The rest of the review is organized as follows. We begin by recapping
electric transport in WSMs, which characterizes the linear dispersion
and not the chiral anomaly \emph{per se}, in Sec. \ref{sec:Electric-transport}.
This is followed by an intuitive explanation for the anomaly in Sec.
\ref{sec:Poor-man's-approach}. A more formal derivation of the anomaly
ensues in Sec. \ref{sec:The-chiral-anomaly}, and is succeeded by
a description of several simple but striking transport experiments
that can potentially serve as signatures of the anomaly in Sec. \ref{sec:Anomaly-induced-magnetotransport}.
We conclude with a discussion of the systems in which WSMs have been
predicted and the promise the field of WSMs and gapless topological
phases holds.
\section{Electric transport -- bad metal or bad insulator?\label{sec:Electric-transport}}
The real, longitudinal part of the optical conductivity of metals is
characterized by a zero-frequency Drude peak, whose width is determined
by the dominant current relaxation process. The Drude peak appears because
a metal has gapless excitations which carry current and, generically,
have non-zero total electron momentum. In the DC limit, relaxation
from a current-carrying state to the ground state typically involves
electrons scattering off of impurities or, at finite temperatures,
interactions with phonons. Each scattering process produces its
own characteristic temperature ($T$) dependence -- the disorder dependent
DC conductivity is $T$ independent as long as the impurities are
dilute and static, while the rate of inelastic scattering off of phonons
grows rapidly with temperature, giving rise to a $T^{5}$ dependence
of the DC resistivity. Importantly, both these processes, besides
relaxing the current, relax the total electron momentum as well. On
the other hand, electron-electron interactions conserve momentum and
therefore cannot relax the current on their own.
Band insulators, on the other hand, have a vanishing DC conductivity
simply because they have a band gap, but show a bump in the optical
conductivity when the frequency becomes large enough to excite electrons
across the gap. A similar bump occurs in the temperature dependence
as well when electrons can be excited across the gap thermally. Weak
disorder does not change either behavior significantly.
How do WSMs behave? They obviously must rank somewhere between metals
and insulators. But are they better thought of as conductors with
a vanishingly small density of states at the Fermi level, or as insulators
with a vanishing band gap? This question was addressed recently in
the continuum limit (\citet{HosurWeylTransport,BurkovNodalSemimetal})
as well as using a lattice model with eight Weyl nodes (\citet{RosensteinTransport}).
\citet{HosurWeylTransport} showed that WSMs, like graphene (\citet{FritzGrapheneConductivity})
and 3D Dirac semimetals (\citet{GoswamiDiracTransport}), actually
exhibit a phenomenon that neither metals nor insulators do -- DC transport
driven by Coulomb interactions between electrons alone, even in a
clean system. This is because all these systems possess a particle-hole
symmetry about the charge neutrality points, at least to linear order
in deviations from these points in momentum space. As a result, there
exist current carrying states consisting of electrons and holes moving
in opposite directions with equal and opposite momenta. Since the
total momentum is zero, Coulomb interactions can indeed relax these
states. A quantum Boltzmann calculation gives the DC conductivity
at temperature $T$
\begin{equation}
\sigma_{dc}(T)=\frac{e^{2}}{h}\frac{k_{B}T}{\hbar v_{F}(T)}\frac{1.8}{\alpha_{T}^{2}\log\alpha_{T}^{-1}}\label{eq:DCcoulomb}
\end{equation}
where $v_{F}(T)$ and $\alpha_{T}$ are the Fermi velocity and fine
structure constant renormalized (logarithmically) to energy $k_{B}T$.
(\ref{eq:DCcoulomb}) can be understood within a picture of thermally
excited electrons diffusing, as follows. Einstein's relation $\sigma_{dc}(T)=e^{2}D(T)\frac{dn(T)}{d\mu}$
expresses $\sigma_{dc}$ in terms of the density of states at energy
$k_{B}T$, $\frac{dn(T)}{d\mu}\sim\frac{(k_{B}T)^{2}}{(\hbar v_{F})^{3}}$,
and the diffusion constant $D(T)=v_{F}^{2}\tau(T)$, where $\tau(T)$
is the temperature dependent transport lifetime. Now, the scattering
cross-section for Coulomb interactions must be proportional to $\alpha^{2}$
because scattering matrix elements are proportional to $\alpha$.
Since $T$ is the only energy scale in the problem, $\tau^{-1}(T)\sim\alpha^{2}T$
on dimensional grounds. This immediately gives (\ref{eq:DCcoulomb})
up to logarithmic factors. This behavior has already been seen approximately
in the pyrochlore iridates Y$_{2}$Ir$_{2}$O$_{7}$ (\citet{WeylResistivityMaeno}),
Eu$_{2}$Ir$_{2}$O$_{7}$ under pressure (\citet{EuIridateExperiments})
and Nd$_{2}$Ir$_{1-x}$Rh$_{x}$O$_{7}$ ($x\approx0.02\mbox{-}0.05$)
(\citet{UedaNd2Ir2O7}), all of which are candidate WSMs.
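For orientation, (\ref{eq:DCcoulomb}) is straightforward to evaluate numerically (a sketch with illustrative parameter values of our own choosing; the logarithmic renormalization of $v_F$ and $\alpha$ with $T$ is ignored here, so the linear-in-$T$ scaling is exact by construction):

```python
import numpy as np

E_CH = 1.602176634e-19      # electron charge (C)
H_PL = 6.62607015e-34       # Planck constant (J s)
HBAR = H_PL / (2 * np.pi)
K_B = 1.380649e-23          # Boltzmann constant (J/K)

def sigma_dc(T, v_F, alpha):
    """Interaction-limited DC conductivity of a clean WSM, eq. (DCcoulomb), in S/m."""
    return (E_CH**2 / H_PL) * (K_B * T / (HBAR * v_F)) \
           * 1.8 / (alpha**2 * np.log(1.0 / alpha))
```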
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\columnwidth]{longconductivities}
\par\end{centering}
\caption{Linear-log plot of the optical conductivity of a WSM with Gaussian
disorder computed within a Born approximation. The disorder strength
is characterized by $\gamma$ (or $\omega_{0}=2\pi v_{F}^{3}/\gamma$).
(\citet{HosurWeylTransport})\label{fig:disorder conductivity}}
\end{figure}
On the other hand, non-interacting WSMs with chemical potential disorder
act like bad metals in some of their transport properties (\citet{HosurWeylTransport,BurkovNodalSemimetal,WeylMultiLayer}).
Fig \ref{fig:disorder conductivity} shows the temperature and frequency
dependence of the optical conductivity of disordered WSMs. Like metals,
it has a Drude peak whose height is set by the disorder strength.
However, the width of the peak goes as $T^{2}$, unlike metals where
the peak width has a weaker dependence -- $\sqrt{T}$ -- on temperature.
Thus, the conductivity at small non-zero frequency falls faster as
the temperature is lowered in WSMs as compared to ordinary metals.
At high frequencies, the behavior is entirely different. At $\hbar\omega\gg k_{B}T$,
the conductivity grows linearly with the frequency: $\sigma_{xx}(\omega)=\frac{e^{2}}{12h}\frac{\omega}{v_{F}}$
per Weyl node. This is expected from dimensional analysis since the
only physical energy scale under these circumstances is the frequency.
Importantly, such behavior is unparalleled in metals or insulators.
In summary, WSMs are neither metallic nor insulating in most of their
electric transport properties. However, if one is forced to put a
finger on which of the two more common phases they behave like, (bad)
`metals' is more accurate than (bad) `insulators'.
\section{The chiral anomaly -- poor man's approach\label{sec:Poor-man's-approach}}
We now turn to the main focus of this review -- the chiral anomaly
and related anomalous magnetotransport. This is where the story really
starts to get fascinating and WSMs start displaying a slew of exotic
properties unheard of in conventional electronic phases. To start
off, we present a quick caricaturistic derivation of the anomaly to
give the reader a feel for the microscopic physics that is at play
here.
\begin{figure}
\begin{centering}
\includegraphics[width=0.3\columnwidth]{chargepumping}
\par\end{centering}
\caption{Charge pumping between Weyl nodes in parallel electric and magnetic
fields in the quantum limit. Each point in the dispersions is a Landau
level. Filled (empty) circles denote occupied (unoccupied) states.
Only the occupation of the chiral zeroth Landau levels is shown because
they are the only ones that participate in the pumping.\label{fig:Charge-pumping}}
\end{figure}
In a magnetic field $\boldsymbol{B}$, the spectrum of the Hamiltonian
for a single Weyl node consists of Landau levels of degeneracy $g=\frac{BA_{\perp}}{h/e}$,
where $A_{\perp}$ is the cross-section transverse to $\boldsymbol{B}$,
dispersing along $\boldsymbol{B}$. Crucially, the zeroth Landau level disperses
only one way, with the direction of dispersion depending on the chirality
of the Weyl node:
\begin{eqnarray}
\epsilon_{n} & = & v_{F}\mbox{sign}(n)\sqrt{2\hbar|n|eB+(\hbar\boldsymbol{k}\cdot\hat{\boldsymbol{B}})^{2}},\, n=\pm1,\pm2,\dots\nonumber \\
\epsilon_{0} & = & -\chi\hbar v_{F}\boldsymbol{k}\cdot\hat{\boldsymbol{B}}
\end{eqnarray}
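The spectrum above is simple to evaluate. A short sketch, with an assumed Fermi velocity and field strength, showing the one-way dispersion of the $n=0$ level and the gap that defines the quantum limit:

```python
import math

# Landau-level spectrum of a single Weyl node in a field B along z:
# eps_n = v_F * sign(n) * sqrt(2*hbar*|n|*e*B + (hbar*k)^2),  n = +-1, +-2, ...
# eps_0 = -chi * hbar * v_F * k
# The material parameters below are illustrative assumptions.

e = 1.602176634e-19        # elementary charge (C)
hbar = 1.054571817e-34     # reduced Planck constant (J s)
v_F = 3.0e5                # assumed Fermi velocity (m/s)
B = 10.0                   # magnetic field (T)

def eps(n, k, chi=+1):
    """Energy (J) of Landau level n at wavevector k along B, for chirality chi."""
    if n == 0:
        return -chi * hbar * v_F * k   # chiral level: linear, one-way dispersion
    return v_F * math.copysign(1, n) * math.sqrt(
        2 * hbar * abs(n) * e * B + (hbar * k) ** 2)

# Gap between n=0 and n=1 at k=0, i.e. v_F*sqrt(2*hbar*e*B); the "quantum
# limit" of the text requires T and mu well below this scale.
gap = eps(1, 0) - eps(0, 0)
print(f"n=0 to n=1 gap at k=0: {gap / e * 1000:.1f} meV")
```

The sign of the slope of `eps(0, k, chi)` flips with `chi`, which is the chirality of the node.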
Suppose the temperature and the chemical potential are much smaller
than $v_{F}\sqrt{\hbar eB}$. Then, only the zeroth Landau level is
relevant for the low energy physics and we are in the so-called ``quantum
limit''. If an electric field $\boldsymbol{E}$ is now applied in
the same direction as $\boldsymbol{B}$, all the states move along
the field according to $\hbar\dot{\boldsymbol{k}}=-e\boldsymbol{E}$.
The key point is that the zeroth Landau level is chiral, i.e., it
disperses only one way for each Weyl node. Therefore, motion of the
states along $\boldsymbol{E}$ corresponds to electrons disappearing
from the right-moving band and reappearing in the left-moving one, as
depicted in Fig \ref{fig:Charge-pumping}. In other words, the charge
in each of the $g$ chiral Landau bands is non-conserved, and each
of these bands exhibits a \emph{1D chiral anomaly}, given by $\partial Q_{\chi}^{1D}/\partial t=e\chi L_{B}|\dot{\boldsymbol{k}}|/2\pi=-e^{2}\chi L_{B}|\boldsymbol{E}|/h$,
where $L_{B}$ is the system size in the direction of $\boldsymbol{B}$.
Multiplying by $g$ gives the 3D result
\begin{equation}
\frac{\partial Q_{\chi}^{3D}}{\partial t}=g\frac{\partial Q_{\chi}^{1D}}{\partial t}=-\chi V\frac{e^{3}}{4\pi^{2}\hbar^{2}}\boldsymbol{E}\cdot\boldsymbol{B}
\end{equation}
where $V=A_{\perp}L_{B}$ is the system volume. This is the same as
(\ref{eq:chiral anomaly}) in the special case of a translationally
invariant system, in which the current due to a single Weyl node is
divergence free $\boldsymbol{\nabla}\cdot\boldsymbol{j}_{\chi}=0$.
The above ``quantum limit'' derivation is the simplest and most
intuitive way to understand (\ref{eq:chiral anomaly}). The result,
however, is not restricted to the quantum limit. Indeed, charge pumping
between Weyl nodes of opposite chiralities was recently shown in a
purely semiclassical formalism as well, by \citet{SonSpivakWeylAnomaly}.
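The algebra of multiplying the 1D rate by the Landau degeneracy can be checked numerically; the field and sample values below are arbitrary illustrative choices:

```python
import math

# Consistency check of the charge-pumping derivation: multiplying the 1D
# anomaly rate per chiral Landau band, dQ/dt = -e^2 * L_B * E / h (chi = +1),
# by the degeneracy g = B * A_perp / (h/e) must reproduce the 3D coefficient
# e^3 / (4*pi^2*hbar^2) in front of V * E.B. All field values are arbitrary.

e = 1.602176634e-19
h = 6.62607015e-34
hbar = h / (2 * math.pi)

E, B = 1.0e3, 5.0            # parallel E (V/m) and B (T)
L_B, A_perp = 1e-3, 1e-6     # sample length along B and cross-section (m, m^2)
V = L_B * A_perp             # sample volume

g = B * A_perp / (h / e)     # Landau-level degeneracy
rate_1d = -e ** 2 * L_B * E / h          # pumping rate of one chiral band
rate_3d = g * rate_1d                    # total rate for this node

rate_formula = -V * e ** 3 / (4 * math.pi ** 2 * hbar ** 2) * E * B
print(rate_3d, rate_formula)             # the two agree
```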
\section{The chiral anomaly -- general derivation\label{sec:The-chiral-anomaly}}
A more general, field-theoretic way to understand the chiral anomaly
is to make the following observation. Just as there exists a quantum
Hall state in two dimensions which carries chiral 1D gapless edge
states, there is an analogous 4D state whose surface has chiral 3D
gapless edge states with opposite chiralities localized on opposite
surfaces (\citet{Zhang4DQH}). Now, in the absence of any internode
scattering, the two nodes of a WSM can be thought of as such surface
states, and the anomaly must emerge from the surface theory of the
hypothetical 4D quantum Hall state.
The bulk effective theory for electromagnetic fields for the 4D state
contains the topological third Chern-Simons term:
\begin{equation}
S_{QH}^{4D}=\int\mathrm{d}^{5}x\left(\frac{e^{3}}{8\pi^{2}\hbar^{2}}\epsilon^{\mu\nu\rho\sigma\lambda}A_{\mu}\partial_{\nu}A_{\rho}\partial_{\sigma}A_{\lambda}+A_{\mu}j^{\mu}\right)
\end{equation}
just as that in a 2D quantum Hall state contains the second Chern-Simons
term%
\footnote{It also contains the usual Maxwell term $S_{Max}\sim\int\mathrm{d}^{5}xF_{\mu\nu}F^{\mu\nu}$.
As we shall see in a moment, the anomaly stems from broken gauge invariance
of the Chern-Simons action at the boundary. $S_{Max}$ is gauge invariant
everywhere, so it does not contribute to the anomalous physics and
will thus be dropped. An important difference between 2+1D and 4+1D
Chern-Simons theories is that the former contains only one derivative
and hence, dominates the Maxwell term in the long-distance physics,
whereas the latter has two derivatives and competes with the Maxwell
term even at long distances.
}. $S_{QH}^{4D}$ is well-defined in the bulk, but it violates gauge
invariance at the boundary. As a consequence, some of the gauge degrees
of freedom become physical and survive as gapless fermionic states
glued to the boundary, viz., the Weyl nodes. To see the anomaly in
this picture, let us generalize the procedure outlined by \citet{Maeda2DAnomaly}
in two dimensions and insert a step function $\Theta(x_{4})$ to simulate
a surface normal to $x_{4}$, the fictitious extra dimension, into the action:
\begin{equation}
\tilde{S}_{QH}^{4D}=\int\mathrm{d}^{5}x\left(\frac{e^{3}}{8\pi^{2}\hbar^{2}}\Theta(x_{4})\epsilon^{\mu\nu\rho\sigma\lambda}A_{\mu}\partial_{\nu}A_{\rho}\partial_{\sigma}A_{\lambda}+A_{\mu}j^{\mu}\right).
\end{equation}
The equation of motion $\frac{\partial\mathcal{L}}{\partial A_{\mu}}=\partial_{\nu}\left(\frac{\partial\mathcal{L}}{\partial(\partial_{\nu}A_{\mu})}\right)$
now contains an extra term on the right hand side proportional to
$\partial_{4}\Theta(x_{4})=\delta(x_{4})$, which is clearly localized
on the boundary. After an integration by parts, the boundary current
can be shown to satisfy (\ref{eq:chiral anomaly}). This is the essence
of the Callan-Harvey mechanism, which is well-known in the context
of the 2D integer quantum Hall effect but is equally well applicable
here. In this picture, the charge carried by the boundary states is
not conserved because it can always vanish into the bulk and reappear
on a different boundary.
A third approach to understanding the chiral anomaly entails encapsulating
the anomaly in the action itself rather than computing the chiral
current. The advantage of such an approach is that once the effective
action for the electromagnetic fields is known, physical transport
properties can be derived immediately. Below, we sketch one such approach,
known as the Fujikawa rotation technique (see, for example, \citet{HosurRyuChiralTISC}),
for deriving such an action. The technique transforms the action of
a WSM with two Weyl nodes into a massless Dirac action supplemented
by a topological $\theta$-term. The latter is a consequence of the
anomaly and is absent in an ordinary Dirac action. This was the approach
adopted in some recent works (\citet{ZyuninBurkovWeylTheta,GoswamiFieldTheory,ChenAxionResponse}).
Consider the continuum Euclidean action
\begin{equation}
S_{W}=\int\mathrm{d}^{4}x\bar{\psi}\gamma^{\mu}(\hbar i\partial_{\mu}-eA_{\mu}-b_{\mu}\gamma^{5})\psi
\end{equation}
where $\psi$ and $\bar{\psi}=\psi^{\dagger}\gamma^{0}$ are Grassman
spinor fields, $b_{\mu}$ is a constant 4-vector, $\gamma^{\mu},\,\mu=0\dots3$
are the standard $4\times4$ Dirac matrices and $\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$
is the chirality or ``handedness'' operator. In other words, right-handed
and left-handed Weyl nodes correspond to eigenstates of $\gamma^{5}$
with eigenvalues $+1$ and $-1$, respectively. All the five $\gamma$-matrices
anti-commute with one another. $S_{W}$ describes a Weyl metal (not
a WSM, unless $b_{0}=0$), coupled to the electromagnetic field, with
two Weyl nodes at $\pm\boldsymbol{b}=\pm(b_{x},b_{y},b_{z})$ in momentum
space (i.e., separated by $2\boldsymbol{b}$) and at energies $\pm b_{0}$.
$S_{W}$ is clearly invariant
under a chiral gauge transformation
\begin{equation}
\psi\to e^{-i\theta(x)\gamma^{5}/2}\psi\mbox{ or }\psi_{\pm}\to e^{\mp i\theta(x)/2}\psi_{\pm},\gamma^{5}\psi_{\pm}=\pm\psi_{\pm}\label{eq:chiralgt}
\end{equation}
which suggests that the chiral current $j_{ch}^{\mu}=e\bar{\psi}\gamma^{\mu}\gamma^{5}\psi$
is conserved. This is clearly wrong because we know that $j_{ch}^{\mu}$
conservation is violated according to (\ref{eq:chiral anomaly}).
What went wrong?
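Before answering, it is worth verifying concretely the matrix identities that make (\ref{eq:chiralgt}) naively a symmetry: $\gamma^{5}$ squares to one and anticommutes with every $\gamma^{\mu}$, so the rotation slides through $\gamma^{\mu}$ with $\theta\to-\theta$ and can absorb the $b_{\mu}\gamma^{5}$ term. A check with one standard (Dirac) representation, which is a choice made here for illustration:

```python
import numpy as np

# Verify the gamma-matrix algebra behind the chiral rotation
# psi -> exp(-i*theta*gamma5/2) psi, using the Dirac representation.

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.kron(sz, I2)                                          # gamma^0
gammas = [g0] + [1j * np.kron(sy, s) for s in (sx, sy, sz)]   # gamma^0..gamma^3
g5 = 1j * gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]       # gamma^5

theta = 0.7
# Since g5^2 = 1:  exp(-i*theta*g5/2) = cos(theta/2) - i*sin(theta/2)*g5
U = np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * g5
Uinv = np.cos(theta / 2) * np.eye(4) + 1j * np.sin(theta / 2) * g5

sq_ok = np.allclose(g5 @ g5, np.eye(4))                       # (gamma5)^2 = 1
anti_ok = all(np.allclose(g5 @ g + g @ g5, 0) for g in gammas)
# gamma^mu exp(-i theta g5/2) = exp(+i theta g5/2) gamma^mu:
slide_ok = all(np.allclose(g @ U, Uinv @ g) for g in gammas)
print(sq_ok, anti_ok, slide_ok)   # -> True True True
```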
The flaw in the above argument becomes obvious when one realizes that
in a real condensed matter system, the right-handed and left-handed
Weyl nodes are connected at higher energies, so $\psi_{+}$ and $\psi_{-}$
cannot be gauge transformed separately as in (\ref{eq:chiralgt}).
In other words, the true action of the system changes under (\ref{eq:chiralgt}),
and the change comes from the regularization of the theory at high
energies.
To compute this change, let us perform such a transformation and see
what happens. If we choose $\theta(x)=(2\boldsymbol{b}\cdot\boldsymbol{r}-2b_{0}t)/\hbar$,
$b_{\mu}$ gets eliminated from $S_{W}$, leaving behind a massless
Dirac action $S_{D}=\int\mathrm{d}^{4}x\bar{\psi}i\gamma^{\mu}(\hbar\partial_{\mu}+ieA_{\mu})\psi$
in which both the chiral current $j_{ch}^{\mu}$ as well as the total
current $j^{\mu}=e\bar{\psi}\gamma^{\mu}\psi$ are truly conserved.
However, the measure of the path integral $\mathcal{Z}=\int\mathcal{D}\psi\mathcal{D}\bar{\psi}e^{-S_{W}[\psi,\bar{\psi}]}$
is not invariant under (\ref{eq:chiralgt}), which signals an anomaly.
More precisely, the Jacobian of the transformation is non-trivial
and can be interpreted as an additional term in the Dirac action:
\begin{equation}
\mathcal{D}\psi\mathcal{D}\bar{\psi}\to\mathcal{D}\psi\mathcal{D}\bar{\psi}\mathrm{det}\left[e^{i\theta(x)\gamma^{5}}\right]\equiv\mathcal{D}\psi\mathcal{D}\bar{\psi}e^{-S_{\theta}/\hbar}\implies S_{\theta}=-i\hbar\mathrm{Tr}\left[\theta(x)\gamma^{5}\right]\label{eq:Sthetadef}
\end{equation}
thus giving $S_{W}=S_{D}+S_{\theta}$%
\footnote{Strictly speaking, the gauge transformation (\ref{eq:chiralgt}) to
remove $b_{\mu}$ from $S_{W}$ must be done in a series of infinitesimal
steps. However, the contribution to $S_{\theta}$ from each step happens
to be the same, so we are justified in doing the transformation at
once.
}. Information about the violation of chiral gauge invariance at high
energies is now expected to be contained in $S_{\theta}$.
The meaning of the trace in (\ref{eq:Sthetadef}) is highly subtle,
and it is not a simple trace of the matrix $\gamma^{5}$ multiplied
by a scalar function $\theta(x)$. If it were, $S_{\theta}$ would
vanish because $\gamma^{5}$ is traceless. Rather, it represents a
sum over a complete basis of fermionic states with suitable regularization.
A natural basis choice is the eigenstates of the Dirac operator $\slashed D=\gamma^{\mu}(\hbar\partial_{\mu}+ieA_{\mu})$;
a regularization method traditionally used in particle physics literature
is the heat kernel regularization, which exponentially suppresses
states at high energy. Thus,
\begin{eqnarray}
\mathrm{Tr}\left[\theta(x)\gamma^{5}\right] & = & \lim_{M\to\infty}\int\mathrm{d}^{4}x\sum_{n}\phi_{n}^{*}(x)e^{-\epsilon_{n}^{2}/M^{2}}\theta(x)\gamma^{5}\phi_{n}(x)\nonumber \\
& = & \lim_{M\to\infty}\int\mathrm{d}^{4}x\sum_{n}\phi_{n}^{*}(x)e^{-\slashed D^{2}/M^{2}}\theta(x)\gamma^{5}\phi_{n}(x)\label{eq:trace for Stheta}
\end{eqnarray}
where
\begin{eqnarray}
\slashed D\phi_{n}(x) & = & \epsilon_{n}\phi_{n}(x)\nonumber \\
\int d^{4}x\phi_{n}^{*}(x)\phi_{m}(x)=\delta_{nm} & , & \sum_{n}\phi_{n}^{*}(x)\phi_{n}(y)=\delta(x-y)\label{eq:fermionbasis}
\end{eqnarray}
The right hand side in (\ref{eq:trace for Stheta}) can be evaluated
by Fourier transforming to momentum space and using the completeness
relations in (\ref{eq:fermionbasis}) (See \citet{ZyuninBurkovWeylTheta}
or \citet{GoswamiFieldTheory} for details). The result is $S_{\theta}$
in terms of the electromagnetic fields:
\begin{equation}
S_{\theta}=\frac{ie^{2}}{32\pi^{2}\hbar^{2}}\int\mathrm{d}^{4}x\theta(x)\epsilon^{\mu\nu\rho\lambda}F_{\mu\nu}F_{\rho\lambda}=\frac{ie^{2}}{4\pi^{2}\hbar^{2}}\int\mathrm{d}^{4}x\theta(\boldsymbol{r},t)\boldsymbol{E}\cdot\boldsymbol{B}\label{eq:theta-term}
\end{equation}
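The numerical factor relating the two forms of $S_{\theta}$ rests on the identity $\epsilon^{\mu\nu\rho\lambda}F_{\mu\nu}F_{\rho\lambda}=8\,\boldsymbol{E}\cdot\boldsymbol{B}$ (up to an overall sign fixed by metric and orientation conventions). A brute-force check, with conventions chosen here as $F_{0i}=E_{i}$, $F_{ij}=-\epsilon_{ijk}B_{k}$, $\epsilon^{0123}=+1$:

```python
import itertools
import numpy as np

# Verify eps^{mu nu rho lam} F_{mu nu} F_{rho lam} = 8 E.B up to sign,
# which is why 1/(32 pi^2) in the first form becomes 1/(4 pi^2).

def perm_sign(idx):
    """Levi-Civita symbol: sign of the permutation idx of (0,1,2,3)."""
    idx = list(idx)
    if len(set(idx)) != 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if idx[i] > idx[j]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return sign

E = np.array([1.0, 2.0, -0.5])    # arbitrary field values
B = np.array([0.3, -1.0, 2.0])

F = np.zeros((4, 4))
F[0, 1:], F[1:, 0] = E, -E                          # F_{0i} = E_i
F[1, 2], F[2, 3], F[3, 1] = -B[2], -B[0], -B[1]     # F_{ij} = -eps_{ijk} B_k
F[2, 1], F[3, 2], F[1, 3] = B[2], B[0], B[1]

contraction = sum(perm_sign(p) * F[p[0], p[1]] * F[p[2], p[3]]
                  for p in itertools.product(range(4), repeat=4))
ratio = contraction / np.dot(E, B)
print(ratio)   # magnitude 8
```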
Note that electromagnetic fields entered the derivation via the Dirac
operator $\slashed D$ in the regularization step in (\ref{eq:Sthetadef}).
One could choose a different regularization; a natural choice for
condensed matter systems would be to add non-linear terms to the dispersion.
However, electromagnetic fields will still enter the derivation via
minimal coupling and the final result for $S_{\theta}$ should be
the same. In fact, it turns out that some sort of regularization is
unavoidable, because the right hand side of (\ref{eq:Sthetadef})
without it is the difference of two divergent terms and is thus ill-defined.
We had earlier anticipated precisely this fact -- the chiral anomaly
is nothing but a shadow of the violation of chiral symmetry at high
energies in the low energy physics, and must originate in the high
energy regularization of the theory.
(\ref{eq:theta-term}) is eerily similar to the magnetoelectric term
that appears in the action of a 3D topological insulator. There is
an important difference, however. In topological insulators, $\theta(\boldsymbol{r},t)$
takes a constant value $\theta=\pi$ whereas here it is a spacetime
dependent scalar field. This subtle difference immediately leads to
novel topological properties in WSMs, some of which we discuss below.
Moreover, if translational symmetry is broken and the Weyl nodes are
gapped out by charge density wave order at the wavevector that connects
them, it can be shown that $\theta(\boldsymbol{r},t)$ survives as
the phase degree of freedom of the density wave, which is the Goldstone mode
of the translational symmetry breaking process (\citet{WeylCDW}).
\section{Anomaly induced magnetotransport\label{sec:Anomaly-induced-magnetotransport}}
Having understood the basic idea of the chiral anomaly, we now describe
several transport signatures of this effect.
\subsection{Negative magnetoresistance\label{sub:Negative-magnetoresistance}}
One of the first transport signatures of the chiral anomaly was pointed
out more than 30 years ago by Nielsen and Ninomiya (\citet{NielsenABJ}).
They noted that since Weyl nodes are separated in momentum space,
any charge imbalance created between them by an $\boldsymbol{E}\cdot\boldsymbol{B}$
field or otherwise requires large momentum scattering processes in
order to relax. In a sufficiently clean system, such processes are
relatively weak, resulting in a large relaxation time. An immediate
consequence of this is that the longitudinal conductivity along an
applied magnetic field, which is proportional to the relaxation time,
is extremely large. Moreover, a WSM in a magnetic field reduces to
a large number -- equal to the degeneracy of the Landau levels --
of decoupled 1D chains dispersing along the field as shown in Fig \ref{fig:Charge-pumping}.
Therefore, the conductivity is proportional to the magnetic field
or, equivalently, the resistivity decreases with increasing magnetic
field. This phenomenon is termed \emph{negative magnetoresistance}.
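The linear-in-$B$ growth of the conductivity can be sketched as follows; the per-channel contribution, which is set by the internode relaxation time, is left as an assumed constant here, so only the scaling with $B$ is meaningful:

```python
# Minimal sketch of the quantum-limit magnetoconductivity: the WSM decouples
# into g = B*A_perp/(h/e) independent 1D channels along the field, so the
# total conductance scales linearly with B. "sigma_1d" is an assumed,
# field-independent per-channel contribution (arbitrary units).

e = 1.602176634e-19
h = 6.62607015e-34

A_perp = 1e-6          # sample cross-section (m^2), illustrative
sigma_1d = 1.0         # assumed per-channel contribution (arbitrary units)

def conductance(B):
    g = B * A_perp / (h / e)   # number of degenerate chiral channels
    return g * sigma_1d

print(conductance(10.0) / conductance(5.0))   # -> 2.0: linear in B
```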
Being one of the simplest signatures of the anomaly, negative magnetoresistance
has also been the first one to have been observed experimentally (\citet{BiSbKimMagnetoTransportExpt}).
The material on which the experiment was performed was Bi$_{0.97}$Sb$_{0.03}$.
Bi (Sb) is known to have topologically trivial (non-trivial) valence
bands in the sense of a 3D strong topological insulator. Therefore,
it is possible to fine tune Bi$_{1-x}$Sb$_{x}$ to the critical point
separating the two phases (\citet{FuKaneTIInversion,Murakami2007,HsiehTIDirac}).
The critical point has a single Dirac node in its band structure.
\citet{BiSbKimMagnetoTransportExpt} applied a magnetic field at the
critical point, which not only created Landau levels but also split
the Dirac node into two Weyl nodes. The magnetoconductivity was subsequently
measured, the main result of which for our purposes is shown in Fig
\ref{fig:Magnetoresistance-BISb}. Beyond a very small field strength,
there is a clear negative contribution to the resistivity that is
enormous for longitudinal fields but very small for transverse ones.
The positive contributions to the resistivity at very small fields
are attributed to weak anti-localization of the Dirac node at the
critical point, while those at very large fields probably arise because
the many 1D modes dispersing along the field behave as independent
1D systems and are easily localized.
\begin{figure}
\begin{centering}
\includegraphics[width=0.5\columnwidth]{MagnetoResistanceExpt}
\par\end{centering}
\caption{Magnetoresistance (MR) of Bi$_{1-x}$Sb$_{x}$ tuned to a quantum
critical point as a function of the magnetic field $B$. $\theta=90^{\circ}$
($\theta=0$) corresponds to longitudinal (transverse) magnetoresistance.
$B$ seemingly splits the Dirac node into two Weyl nodes in addition
to creating Landau levels. The initial rise in MR at very small fields
is attributed to weak antilocalization of the Dirac node, while the
chiral anomaly is responsible for the subsequent flattening or decrease
in it, as explained in Sec \ref{sub:Negative-magnetoresistance}.
The effect of the anomaly is clearly more pronounced for longitudinal
configurations. The upturn at very large fields is probably due to
localization of the 1D modes that constitute the quantum limit picture
described in the text. (Figure from \citet{BiSbKimMagnetoTransportExpt})\textbf{\label{fig:Magnetoresistance-BISb}}}
\end{figure}
\subsection{Anomalous Hall and chiral magnetic effects \label{sub:AHECME}}
The next set of effects we shall discuss are the anomalous Hall effect
(AHE) and the chiral magnetic effect (CME). Although the effects appear
to be quite different, their derivation using the field theory outlined
in Sec \ref{sec:The-chiral-anomaly} shows that they are simply related
by Lorentz transformation and are conveniently discussed together.
By carrying out an integration by parts in (\ref{eq:theta-term}),
dropping boundary terms and Wick rotating to real time, $S_{\theta}$
can be written as
\begin{equation}
S_{\theta}=-\frac{e^{2}}{8\pi^{2}\hbar}\int\mathrm{d}t\mathrm{d}\boldsymbol{r}\partial_{\mu}\theta\epsilon^{\mu\nu\rho\lambda}A_{\nu}\partial_{\rho}A_{\lambda}
\end{equation}
Varying with respect to the vector potential gives the currents $j_{\nu}=\frac{e^{2}}{4\pi^{2}\hbar}\partial_{\mu}\theta\epsilon^{\mu\nu\rho\lambda}\partial_{\rho}A_{\lambda}$.
Recalling that $\theta(x)=(2\boldsymbol{b}\cdot\boldsymbol{r}-2b_{0}t)/\hbar$
gives
\begin{equation}
\boldsymbol{j}=\frac{e^{2}}{2\pi^{2}\hbar^{2}}\boldsymbol{b}\times\boldsymbol{E}+\frac{e^{2}}{2\pi^{2}\hbar^{2}}b_{0}\boldsymbol{B}\label{eq:ahecme}
\end{equation}
The first term in (\ref{eq:ahecme}) is the AHE in the plane perpendicular
to $\boldsymbol{b}$. This can be understood based on the picture
for Fermi arcs presented in the introduction. We quickly repeat the
argument here: each 2D slice of momentum space perpendicular to $\boldsymbol{b}$
can be thought of as an insulator with a gap that depends on the momentum
parallel to $\boldsymbol{b}$. Since Weyl nodes are sources of unit
Chern flux with a sign proportional to their chirality, the slices
between $\boldsymbol{k}=-\boldsymbol{b}$ and $\boldsymbol{k}=\boldsymbol{b}$
are Chern insulators with unit Chern number, while those outside this
region are trivial insulators. Each of these Chern insulators has
unit Hall conductivity and as a result, the WSM has a Hall conductivity
proportional to $\boldsymbol{b}$. This result has been derived in
several recent works, both in lattice as well as in continuum models
(\citet{ChenAxionResponse,FangChernSemimetal,GoswamiFieldTheory,RanQHWeyl,VafekDiracReview,WeylMultiLayer,ZyuninBurkovWeylTheta}),
and has been accepted unanimously.
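The slicing argument can be checked against the coefficient in (\ref{eq:ahecme}) numerically, taking the Weyl nodes at $\pm\boldsymbol{b}$ (the separation generated by the chiral rotation with $\theta=2\boldsymbol{b}\cdot\boldsymbol{r}/\hbar$); the node wavevector below is an illustrative assumption:

```python
import math

# Stacking Chern-insulator slices, each with sigma_xy = e^2/h, over the
# momentum window between Weyl nodes at +-b reproduces the bulk coefficient
# sigma_xy = e^2 * b / (2 * pi^2 * hbar^2) from (eq:ahecme).

e = 1.602176634e-19
h = 6.62607015e-34
hbar = h / (2 * math.pi)

k0 = 1.0e9                 # assumed Weyl nodes at wavevectors +-k0 (1/m)
b = hbar * k0              # corresponding momentum offset b

# density of slices in k_z is dk/(2*pi); nontrivial slices span 2*k0
sigma_slices = (e ** 2 / h) * (2 * k0) / (2 * math.pi)

# field-theory coefficient, first term of (eq:ahecme)
sigma_theta = e ** 2 * b / (2 * math.pi ** 2 * hbar ** 2)
print(sigma_slices, sigma_theta)   # the two agree
```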
The second term in (\ref{eq:ahecme}), known as the chiral magnetic
effect (CME), is subtler, and has created some controversy. It predicts
an equilibrium dissipationless current parallel to the magnetic field
if the two Weyl nodes are at different energies. \citet{ZhouSemiclassicalTransport}
obtained the same result in a semiclassical limit. The CME is (deceptively,
as we will see) easy to understand in the DC continuum limit: if the
Weyl nodes are at different energies, the chiral zeroth Landau level
states from the two nodes will have different occupations and their
currents will not cancel each other.
However, as pointed out by \citet{VazifehEMResponse}, this seems
to be at odds with some basic results of band theory. In particular,
a DC magnetic field reduces the 3D WSM to a highly degenerate 1D system
dispersing along the field. The total DC current along the field is
\begin{equation}
j_{B}=\frac{e}{h}\int_{k_{L}}^{k_{R}}g\frac{\partial\epsilon}{\partial k}\mathrm{d}k=\frac{e}{h}g(\epsilon_{R}-\epsilon_{L})\label{eq:jB}
\end{equation}
where $k_{L}$ and $k_{R}$ are the momenta of the Fermi points of
the 1D system, $g=\frac{BA_{\perp}}{h/e}$ is the Landau level degeneracy
and $\epsilon(k)$ is the 1D dispersion. At equilibrium, $\epsilon_{R}=\epsilon_{L}$,
which implies $j_{B}$ vanishes. \citet{VazifehEMResponse} argued
that the CME was an artifact of linearizing the dispersion near the
Weyl nodes, and is in fact absent in a full lattice model. The CME
seems wrong for another reason too. If a state carries a net DC current
$\boldsymbol{J}$, then Ohmic power $\sim\boldsymbol{J}\cdot\boldsymbol{E}$
can be supplied to or extracted from it by applying an appropriate
electric field $\boldsymbol{E}$. But a ground state is already the
lowest energy state, so it is not possible to extract energy from
it. Therefore it cannot carry a DC current (\citet{BasarTriangleAnomaly}).
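The band-theory argument of (\ref{eq:jB}) is easy to illustrate: for a periodic 1D band, the summed group velocity of the occupied states telescopes to $\epsilon(k_{R})-\epsilon(k_{L})$ and vanishes when both Fermi points sit at the same chemical potential. The dispersion below is an arbitrary periodic example, not a model of any particular material:

```python
import numpy as np

# Equilibrium current of a filled 1D band region vanishes; a shifted
# ("boosted") occupation carries a current. Dispersion is illustrative.

k = np.linspace(-np.pi, np.pi, 20001)
dk = k[1] - k[0]
eps = -np.cos(k) + 0.3 * np.sin(2 * k)     # some periodic 1D dispersion
v = np.gradient(eps, dk)                   # group velocity d(eps)/dk

mu = 0.2                                   # chemical potential
occ = eps < mu                             # equilibrium occupation
j_eq = v[occ].sum() * dk                   # ~ integral of v over filled states
print(j_eq)                                # close to zero

# A shifted occupation, eps(k - q) < mu, is out of equilibrium and does
# carry a net current:
q = 0.3
occ_boost = (-np.cos(k - q) + 0.3 * np.sin(2 * (k - q))) < mu
j_boost = v[occ_boost].sum() * dk
print(j_boost)                             # nonzero
```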
Soon after, though, this claim was countered by \citet{ChenAxionResponse}
who showed, by rederiving the CME in a lattice model, that while the
CME is unambiguously non-zero at finite momentum $\boldsymbol{q}$
and frequency $\omega$, its DC limit depends on the order of the
limits $\omega\to0$ and $\boldsymbol{q}\to0$. If $\omega\to0$ is
taken first, one obtains a static system in which the electrons are
always in equilibrium and the CME vanishes, as predicted by \citet{VazifehEMResponse}.
However, the effect survives if the $\boldsymbol{q}\to0$ limit is
evaluated first, and is precisely that predicted by (\ref{eq:ahecme}).
In this case, the electrons are not in equilibrium except \emph{at
the limit}, and neither the band theory argument nor the Ohmic dissipation
argument presented above applies.
\subsection{Non-local transport\label{sub:Non-local-transport}}
Consider a 3D piece of ordinary metal with four contacts attached
to it, as shown in Fig \ref{fig:nonlocaltransport} (left). The two
contacts on the left, labeled source ($S$) and drain ($D$) inject
a current $J$ through the sample, whose typical path is indicated
by the arrows connecting the $S$ and the $D$ leads. As one moves
away from $S$ and $D$, the voltage drop between the upper and lower
surfaces falls because smaller and smaller segments of the current
lines are encountered by a path that goes vertically from the top
to the bottom surface. In particular, it can be shown that in the
ballistic limit, where the mean free path is limited only by the
contacts, the nonlocal voltage drop $V_{nl}$ decays on the scale
of the sample thickness $d$.
\citet{ParameswaranNonLocalTransport} showed that the situation in
WSMs in the presence of local magnetic fields is strikingly different.
As a consequence of the anomaly, a local magnetic field applied parallel
to the injected current generates an imbalance in the occupation numbers
of the two Weyl nodes locally. This imbalance diffuses over a length
scale $l$ determined by the rate of internode scattering processes
and hence, can be quite large -- even larger than the sample thickness
$d$. Thus, at distances $L\sim l\gg d$ there are no Ohmic voltages
but there exists a \emph{valley} imbalance, borrowing nomenclature
from semiconductor physics. Detecting the valley imbalance is challenging,
however, because it does not couple to electric fields. The anomaly
comes to the rescue, and the last piece of the puzzle entails applying
a local probe magnetic field. This couples to the valley imbalance
and produces a local electric field which can then be measured by
conventional methods.
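The essential point is a comparison of two decay lengths, which a toy calculation makes explicit; all numbers below are illustrative assumptions:

```python
import math

# Toy comparison of the two decays: the Ohmic nonlocal voltage falls off on
# the sample thickness d, while the anomaly-induced valley imbalance decays
# on the internode-scattering diffusion length l >> d.

d = 1e-6        # sample thickness (m), illustrative
l = 50e-6       # valley diffusion length (m), long if internode scattering is weak

def ohmic(L):
    return math.exp(-L / d)    # V_nl ~ V_sd * exp(-L/d)

def valley(L):
    return math.exp(-L / l)    # valley imbalance ~ exp(-L/l)

L = 10e-6       # probe placed ten sample-thicknesses from the injection region
print(ohmic(L), valley(L))     # Ohmic signal is negligible; valley signal survives
```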
\begin{figure}
\begin{centering}
\includegraphics[width=0.3\columnwidth]{nonlocalmetal}~~\includegraphics[width=0.5\columnwidth]{ParameswaranNonLocalTransport}
\par\end{centering}
\caption{(Left) A current $J$ driven between the source ($S$) and drain ($D$)
leads in an ordinary piece of metal takes the path indicated by the
arrows inside the sample. The Ohmic voltage between the top and the
bottom surfaces decays over the length scale of the sample thickness
$d$ as one moves away from the injection region, so that $V_{nl}\sim V_{sd}e^{-L/d}$.
(Right) In a WSM, a local magnetic field $B_{g}$ along the injected
current generates a valley imbalance as a result of the anomaly. This
imbalance diffuses slowly if the intervalley scattering processes
are weak, and can be detected far away from the injection region by
a probe magnetic field $B_{p}$ which converts it into an electric
field. (Right figure from \citet{ParameswaranNonLocalTransport})\label{fig:nonlocaltransport}}
\end{figure}
\subsection{Chiral gauge anomaly\label{sub:Chiral-gauge-anomaly}}
The final effect we will describe is the response to a chiral gauge
field, i.e., a gauge field that has opposite signs for Weyl nodes
of opposite chirality. One way to create such a field is by applying
appropriate strains, as has been done in graphene (\citet{PeresGrapheneElectronicRev})
to apply opposite gauge fields to the two Dirac nodes there. More
generally, chiral gauge fields can be created in WSMs by exploiting
the fact that a general Weyl metal can be created from a Dirac semimetal
by breaking time-reversal and inversion symmetries. Then, fluctuations
in the perturbations that break these symmetries naturally act like
chiral gauge fields.
\citet{QiWeylAnomaly} pointed out that chiral gauge fields in a WSM
induce an anomaly not only in the chiral current but also in the total
electric current. The argument is simple. Suppose $A_{\mu}$ and $a_{\mu}$
are the electromagnetic and the chiral gauge fields, respectively,
and $F_{\mu\nu}$ and $f_{\mu\nu}$ the corresponding field strengths.
Then, a Weyl node of chirality $\chi$ feels a total gauge field $\mathcal{A}_{\mu}^{\chi}=A_{\mu}+\chi a_{\mu}$
and hence, suffers from an anomaly $\partial_{\mu}j_{\chi}^{\mu}=-\chi\frac{e^{3}}{8\pi^{2}\hbar^{2}}\epsilon^{\mu\nu\rho\lambda}\partial_{\mu}\mathcal{A}_{\nu}^{\chi}\partial_{\rho}\mathcal{A}_{\lambda}^{\chi}$
according to (\ref{eq:chiral anomaly-1}). Consequently, the conservation
laws for the chiral current $j_{ch}^{\mu}=j_{+}^{\mu}-j_{-}^{\mu}$
and the total current $j^{\mu}=j_{+}^{\mu}+j_{-}^{\mu}$ read
\begin{eqnarray}
\partial_{\mu}j_{ch}^{\mu} & = & -\frac{e^{3}}{16\pi^{2}\hbar^{2}}\epsilon^{\mu\nu\rho\lambda}\left(F_{\mu\nu}F_{\rho\lambda}+f_{\mu\nu}f_{\rho\lambda}\right)\label{eq:chiral chiral anomaly}\\
\partial_{\mu}j^{\mu} & = & -\frac{e^{3}}{8\pi^{2}\hbar^{2}}\epsilon^{\mu\nu\rho\lambda}F_{\mu\nu}f_{\rho\lambda}\label{eq:chiral gauge anomaly}
\end{eqnarray}
(\ref{eq:chiral chiral anomaly}) is the usual chiral anomaly, which
receives an additional contribution from the chiral gauge fields. (\ref{eq:chiral gauge anomaly}),
however, states that even the total current is not conserved if both
electromagnetic and chiral gauge fields are present. This can be rewritten
in terms of the chiral ``magnetic field'' $\boldsymbol{\beta}=\boldsymbol{\nabla}\times\boldsymbol{a}$
and chiral ``electric field'' $\boldsymbol{\varepsilon}=-\boldsymbol{\nabla}a_{0}-\frac{\partial\boldsymbol{a}}{\partial t}$
as
\begin{equation}
\partial_{\mu}j^{\mu}=-\frac{e^{3}}{2\pi^{2}\hbar^{2}}\left(\boldsymbol{\beta}\cdot\boldsymbol{E}+\boldsymbol{\varepsilon}\cdot\boldsymbol{B}\right)\label{eq:chiral gauge anomaly-1}
\end{equation}
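The factor of 4 between (\ref{eq:chiral gauge anomaly}) and (\ref{eq:chiral gauge anomaly-1}) comes from the cross-term identity $\epsilon^{\mu\nu\rho\lambda}F_{\mu\nu}f_{\rho\lambda}=4(\boldsymbol{E}\cdot\boldsymbol{\beta}+\boldsymbol{\varepsilon}\cdot\boldsymbol{B})$ up to an overall convention-dependent sign. A direct check, with the same conventions one would use for $F$ alone ($X_{0i}=$ electric part, $X_{ij}=-\epsilon_{ijk}\times$ magnetic part):

```python
import itertools
import numpy as np

# Verify eps^{mu nu rho lam} F_{mu nu} f_{rho lam} = 4 (E.beta + eps_c.B)
# up to sign, so -e^3/(8 pi^2 hbar^2) becomes -e^3/(2 pi^2 hbar^2).

def perm_sign(idx):
    """Levi-Civita symbol: sign of the permutation idx of (0,1,2,3)."""
    idx = list(idx)
    if len(set(idx)) != 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if idx[i] > idx[j]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return sign

def field_tensor(elec, mag):
    """Antisymmetric tensor with X_{0i} = elec_i, X_{ij} = -eps_{ijk} mag_k."""
    X = np.zeros((4, 4))
    X[0, 1:], X[1:, 0] = elec, -elec
    X[1, 2], X[2, 3], X[3, 1] = -mag[2], -mag[0], -mag[1]
    X[2, 1], X[3, 2], X[1, 3] = mag[2], mag[0], mag[1]
    return X

E, B = np.array([1.0, -0.2, 0.7]), np.array([0.4, 1.1, -0.3])       # arbitrary
eps_c, beta = np.array([0.9, 0.1, -1.2]), np.array([-0.6, 0.8, 0.5])

F, f = field_tensor(E, B), field_tensor(eps_c, beta)
contraction = sum(perm_sign(p) * F[p[0], p[1]] * f[p[2], p[3]]
                  for p in itertools.product(range(4), repeat=4))
ratio = contraction / (np.dot(E, beta) + np.dot(eps_c, B))
print(ratio)   # magnitude 4
```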
\begin{figure}
\begin{centering}
\includegraphics[width=0.3\columnwidth]{ferrovortexanomaly}
\par\end{centering}
\caption{A vortex in the magnetization (small blue arrows) results in a chiral
magnetic field $\boldsymbol{\beta}$ along the vortex axis. An electric
field in the same direction then generates a one way current moving
along the vortex axis. (\citet{QiWeylAnomaly})}
\end{figure}
If time-reversal symmetry in the system is broken by ferromagnetism,
then $\boldsymbol{a}$ is proportional to the magnetic moment and
the chiral magnetic field corresponds to a ferromagnetic vortex. The
first term above, then, predicts a chiral current propagating along
the vortex axis. In the quantum limit picture of Sec \ref{sec:Poor-man's-approach},
this current is carried by zeroth Landau levels states, which now
disperse in the same direction for both Weyl nodes. An immediate question
is, how can charge not be conserved in a real system? The answer comes
from the realization that unlike $A_{\mu}$, $a_{\mu}$ is a physical
field and must be single valued. Since $a_{\mu}=0$ in vacuum, the
total flux $\boldsymbol{\beta}=\boldsymbol{\nabla}\times\boldsymbol{a}$
must vanish in a finite system. Equivalently, the total winding number
of all the vortices must be zero. Thus, there are equal numbers of
chiral and antichiral modes so that the total current is, in fact,
conserved. These modes are in different spatial regions, so the gauge
anomaly (\ref{eq:chiral gauge anomaly}) is still meaningful locally.
The second term is generated by a time-dependent magnetization, and
describes charge and current modulations in response to magnetization
fluctuations in the presence of $\boldsymbol{B}$. In other words,
it describes a coupling between plasmons (charge fluctuations) and
magnons (magnetic fluctuations), which is not present in ordinary
metals or semimetals.
\section{Materials realizations}
Theorists have enthused over 3D Dirac (\citet{Abrikosov})
and Weyl (\citet{NielsenFermionDoubling1,NielsenFermionDoubling2,NielsenABJ})
band structures for decades, and it has long been known that the A-phase
of superfluid Helium-3 has Weyl fermions (\citet{VolovikBook,MengWeylSC}).
However, interest in them has recently been rekindled by the prediction
that some simple electronic systems realize them.
The first materials prediction was in the pyrochlore iridates family
-- R$_{2}$Ir$_{2}$O$_{7}$ -- where `R' is a rare earth element
(\citet{PyrochloreWeyl,KrempaWeyl,ChenIridate}). These candidate
WSMs are inversion symmetric but break time-reversal symmetry via
a special kind of anti-ferromagnetic ordering -- the `all-in/all-out'
ordering -- in which all the spins on a given tetrahedron point either
towards the center or away from it, and the ordering alternates on
adjacent tetrahedra. Available transport data on these materials are
roughly consistent with linearly dispersing bands (\citet{WeylResistivityMaeno,EuIridateExperiments,UedaNd2Ir2O7});
however, the evidence is far from conclusive as yet. Several proposals
to engineer WSMs in topological insulator heterostructures followed.
Alternately stacking topological and ferromagnetic
insulators was shown to produce inversion symmetric WSMs in a certain
parameter regime with Weyl nodes separated along the stacking direction
(\citet{WeylMultiLayer}), while replacing the ferromagnetic insulators
with trivial, time-reversal symmetric ones and applying an electric
field perpendicular to the layers could give time-reversal symmetric
WSMs (\citet{HalaszWeyl}). A third option is to magnetically dope
a quantum critical point separating a topological and trivial insulator
(\citet{ChoTItoWeyl}). An advantage of this realization, as we saw
in Sec \ref{sub:Chiral-gauge-anomaly}, is that magnetization can
be used as a handle to dynamically modify the band structure; in return,
magnetic textures and fluctuations are endowed with physical properties
that uniquely characterize the topological nature of the underlying
bands (\citet{QiWeylAnomaly}).
Some closely related phases have been predicted in real materials
as well. A ferromagnetic spinel HgCr$_{2}$Se$_{4}$ has been predicted
to form a double WSM, i.e., a WSM in which the Weyl nodes have magnetic
charge of $\pm2$ (\citet{FangChernSemimetal}). Passing light through
a cleverly designed photonic crystal gives it a dispersion that can
be fine-tuned to have ``Weyl'' line nodes, i.e., a pair of
non-degenerate bands intersecting along a line (\citet{PhotonicCrystalWeyl}).
The surface states of such a crystal have flat bands, implying photon
states with zero velocity. Unlike electronic systems, such a band
structure can be obtained while preserving time-reversal and inversion
symmetries because there is no Kramers degeneracy for photons. Breaking
these symmetries reduces the line node to Weyl points. Finally, ab
initio calculations have predicted dispersions with degenerate Weyl
nodes in $\beta$-cristobalite BiO$_{2}$ (\citet{YoungDiracSemimetal}),
A$_{3}$Bi where A=Na or K (\citet{WangA3Bi}) and Cd$_{3}$As$_{2}$
(\citet{WangCd3As2}). Whereas the first two are inversion symmetric
Dirac semimetals, a combination of broken inversion symmetry and unbroken
crystal symmetries in Cd$_{3}$As$_{2}$ holds Weyl nodes of opposite
chiralities degenerate but gives them distinct Fermi velocities.
\section{Summary}
We have made a humble attempt at introducing Weyl semimetals and recapping
the recent theoretical and experimental developments in the transport
studies of this phase. The basics of WSMs can be summarized in three
main points. These points are interrelated and any one can be deduced
from any other.
\begin{itemize}
\item The first is the definition itself: a WSM has a band structure with
non-degenerate bands intersecting at arbitrary points in momentum
space. An immediate consequence of such band intersections is that
Weyl points are topologically robust as long as translational symmetry
is present. Moreover, the dispersion in the neighborhood of these
points is linear. We reviewed the transport properties contingent
on the linear dispersion in Sec \ref{sec:Electric-transport}. There,
we stated that unlike metals, WSMs can have a finite DC conductivity
driven purely by electron-electron interactions. We concluded the
section by stating that WSMs resemble metals more than insulators
based on disorder dependent transport.
\item The second key feature of WSMs is the chiral anomaly. Weyl nodes can
be characterized by a chirality quantum number of $\pm1$; the chiral
anomaly says that chiral charge, i.e., the number of quasiparticles
around Weyl nodes of fixed chirality, is not conserved in the presence
of parallel electric and magnetic fields. In other words, an $\boldsymbol{E}\cdot\boldsymbol{B}$
field pumps charge between Weyl nodes of opposite chiralities. We
presented two detailed derivations of the anomaly in Sec \ref{sec:The-chiral-anomaly}.
The first treats Weyl nodes as the surface states of a 4D quantum
Hall system. In this picture, the anomaly is understood as charge
pumping between opposite surfaces of a topological phase. The second
works entirely in three dimensions, and captures the anomaly in a
term in the action that supplements the one that gives the Weyl nodes
a linear dispersion. In this derivation, it becomes apparent that
the anomaly exists because separate gauge transformations on Weyl
fermions of opposite chiralities are forbidden by states at higher
energies.\\
Once the anomaly was introduced in some detail, we presented several
recent theoretical predictions for anomalous transport. These included
a negative contribution to magnetoresistance that has nothing to do
with weak localization (Sec \ref{sub:Negative-magnetoresistance}),
anomalous Hall effect as well as a current along an applied magnetic
field (Sec \ref{sub:AHECME}), extremely slow decay of a voltage in
the presence of a magnetic field in the same direction as the voltage
(Sec \ref{sub:Non-local-transport}), and non-conservation of ordinary
current due to fluctuations in the background fields that split a
Dirac node into Weyl nodes thus giving a WSM (Sec \ref{sub:Chiral-gauge-anomaly}).
We also highlighted experimental results on Bi$_{1-x}$Sb$_{x}$, which
show striking negative magnetoresistance consistent with the chiral
anomaly.
\item The third key characteristic of WSMs is peculiar surface states known
as Fermi arcs. These can either be thought of as the edge states of
Chern insulators layered in momentum space, or as the remnants of
the process of destroying a stack of electron and hole Fermi surfaces
in a chiral fashion. Once suitable materials are found, these states
should be easily observable in photoemission experiments.
\end{itemize}
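For reference, the charge pumping described in the second point above is compactly summarized by the standard anomaly equation for the chiral quasiparticle density $n_{5}$ (written here in Gaussian units; this is the textbook form rather than a result specific to this review):

```latex
% Rate of chiral charge pumping by parallel E and B fields
\begin{equation*}
\frac{\partial n_{5}}{\partial t}
  = \frac{e^{2}}{2\pi^{2}\hbar^{2}c}\,\boldsymbol{E}\cdot\boldsymbol{B}
\end{equation*}
```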
We wrapped up the review by touching upon several systems in which
WSMs have been predicted to occur. So far, these include certain pyrochlore
iridates, heterostructures based on topological insulators and systems
with Dirac nodes perturbed by suitable symmetries. Bi$_{0.97}$Sb$_{0.03}$
placed in a magnetic field belongs to the last category. Magnetotransport
data on it are the best evidence thus far that the WSM phase is realized
in a real system. Nonetheless, the naturalness of 3D band crossings
and the number of materials predictions that have been made in a short
time are compelling indications that the field of WSMs and gapless
topological phases is well and truly blossoming.
\section{Acknowledgments}
We are indebted to Jerome Cayssol for inviting us to write the review.
We would like to thank Daniel Bulmash and Ashvin Vishwanath for valuable
feedback on the manuscript and the David and Lucile Packard Foundation
for financial support.
\bibliographystyle{plainnat}
1309.4317
\section{Introduction}
\label{sec:intro}
The use of highly resistive material for Kinetic Inductance Detectors is promising because it is expected to increase their responsivity through the high kinetic inductance L$_{s}$ and allow a more efficient direct optical coupling \cite{LeducAPL10}.
To make KIDs based on highly resistive materials, one needs to adapt the microwave design typically used for aluminium KIDs. A KID chip consists essentially of multiple resonators coupled to a microwave throughline, which is connected to the readout electronics.
The throughline has a characteristic impedance $Z_{0}$ which must match the 50 $\Omega$ impedance of the readout circuit: a mismatch is expected to cause losses in the transmission and box resonances that appear as parasitic dips.
$Z_{0}$ depends in part on the surface inductance L$_{s}$, which is related to the resistivity $\rho$ of the metallic film. A 100 nm thick Al film has typically L$_{s} \sim$ 0.1 pH, whereas L$_{s}\sim$ 44 pH for the TiN film used in this study.
The $Z_{0}$ variations due to the increase of L$_{s}$ can be determined using SONNET electromagnetic 2D simulations. For a typical throughline on a silicon substrate having a central line width s = 10 $\mu m$ and a gap width w = 6 $\mu m$, $Z_{0}$ $\approx 53$ $\Omega$ for Al, whereas for L$_{s}$ = 40 pH, $Z_{0}$ becomes frequency dependent, with a mean value of $\approx$ 180 $\Omega$.
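As a rough back-of-the-envelope illustration (not from the paper) of why the simulated 180 $\Omega$ line is problematic, one can compute the reflection at a single step between the 50 $\Omega$ readout impedance and the line; the helper names below are ours:

```python
import math

def reflection_coeff(z_line, z0=50.0):
    """Voltage reflection coefficient at a single impedance step."""
    return (z_line - z0) / (z_line + z0)

def mismatch_loss_db(z_line, z0=50.0):
    """Power lost to reflection at one impedance step, in dB."""
    gamma = reflection_coeff(z_line, z0)
    return -10.0 * math.log10(1.0 - gamma ** 2)

print(mismatch_loss_db(53.0))   # ~0.004 dB: essentially matched (Al line)
print(mismatch_loss_db(180.0))  # ~1.7 dB lost per interface (TiN line)
```

In practice the two mismatched ends of the line form a cavity, so the loss shows up as frequency-dependent ripple rather than a flat offset, consistent with the Mono-chip behaviour reported below.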
To solve the problem of impedance mismatch, we have tested the option of a hybrid design, with a throughline in aluminum to read the TiN resonators. Two slightly different designs have been made and measured: the transmission obtained is close to the lossless transmission. The dark KID properties have been studied on a large number of resonators and the best NEP measured is as low as $4.4$ $\times$ $10^{-20}$ $W/\sqrt{Hz}$.
\section{Method}
\label{sec:Method}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{Fig_chip.eps}
\caption{a,b,c) Photographs of the throughline and one resonator coupler for the three designs studied. The materials used for the KID central line and the groundplanes are indicated on each panel. The insets give a larger-scale view of each chip. (d) Zoom on the coupling area for the Duo chip: a 2 $\mu m$ gap separates the TiN and Al groundplanes.}
\label{fig:design}
\end{figure*}
The three designs used for this study are shown in Fig.~\ref{fig:design}.
The first design, Mono, is a standard non-hybrid design and has been measured for comparison, to observe experimentally the effect of an impedance mismatch.
The second design is essentially made of aluminium, with only the KID central lines in TiN. It is called KIDmix because the resonators correspond to a microwave TiN line having an Al groundplane. With this design, the KID properties may be affected by the presence of aluminium, especially if the superconducting critical temperature T$_{c}$ of TiN is larger than that of Al. Our Al films have T$_{c}$ = 1.2 K and TiN has 0 K $\leq T_{c} \leq$ 4.5 K depending on its stoichiometry. The third design, Duo, has been made for future use of TiN and other highly resistive materials having $T_{c} \geq$ 1.2 K.
It consists of an Al throughline and TiN resonators which are included in rectangular TiN groundplanes.
All designs have a 2 cm long straight throughline, with a central line width s = 10 $\mu m$ and a gap width w = 6 $\mu m$.
The throughline is surrounded by 16 half-wave meandered CPW resonators, having a 3 $\mu m$ central line width, a 2 $\mu m$ gap and a length between 3.1 and 4.5 mm. The length sets the resonance frequencies, which we want between 4 and 9 GHz to match our setup specifications.
The length calculation takes into account the geometrical inductance $ L_{g} $ but also the frequency shift $ f_{exp}/f_{g} $ due to the kinetic inductance L$_{s}$.
The resonators are capacitively coupled to the throughline via a T-shaped coupler. The coupling length sets the coupling quality factor $ Q_{c} $ of each resonator, which is related to the measured (loaded) quality factor Q by the relation $Q^{-1}=Q_{c}^{-1}+Q_{i}^{-1}$ \cite{Mazin_thesis}.
The internal quality factor $Q_{i}$ is not known a priori so the coupling lengths are between 70 $\mu m$ and 560 $\mu m$ corresponding to $Q_{c}$ between 6 $\times$ $10^{3}$ and 3 $\times$ $10^{6}$.
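The parallel combination of quality factors above can be sketched numerically (the helper name is ours); with $Q_{c}$ spanning the quoted range and an internal $Q_{i}$ of order $10^{5}$, the loaded Q goes from coupling-limited to $Q_{i}$-limited:

```python
def loaded_q(q_c, q_i):
    """Measured (loaded) quality factor: 1/Q = 1/Qc + 1/Qi."""
    return 1.0 / (1.0 / q_c + 1.0 / q_i)

print(loaded_q(6e3, 1.8e5))   # ~5.8e3: coupling-limited
print(loaded_q(3e6, 1.8e5))   # ~1.7e5: Qi-limited
```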
In the Duo design, the TiN and Al groundplanes are separated near the throughline by a gap, to avoid the presence of a possibly dirty interface close to the coupler, which might create some excess noise in the KID transmission \cite{note2umgap}. Ideally, the Al groundplane width on both sides of the throughline should be as large as its central line and gaps, to avoid any effect of the TiN groundplanes on $Z_{0}$. But this implies a larger distance between throughline and resonators, thereby reducing $Q_{c}$. To minimize the effect of the TiN groundplanes while keeping the desired coupling, the Al groundplane width has been set at 4 $\mu m$ near the resonators and the TiN groundplanes have been reduced to rectangles of 700 $\mu m$ width separated by 1300 $\mu m$ of Al groundplane.
\\
The 50 nm thick TiN film has been produced by magnetron sputtering onto a highly resistive silicon substrate, with a $N_{2}$/Ar flow rate of 3.58/30, and patterned using standard contact lithography and dry etching with an SF$_{6}$/Ar gas mixture.
This is followed by deposition of a 100 nm Al film which is patterned using wet etching.
The TiN film has been characterized by XPS depth profiling and XRD: except on the top 1 nm layer, the film is contaminant free under the sensitivity of 0.5 at$\%$ and the (111) and (200) crystallographic orientations are present, giving rise to diffraction peaks of similar amplitudes.
Resistivity measurements have been made at low temperature on 2 different chips: $ \rho $ $ \approx $ 130 $ \mu\Omega cm $ for both measurements, but the superconducting transition has been seen at 0.8 K on the first chip and 1.1 K on the second.
This discrepancy is unlikely to be due to the film quality according to the XPS results, and may come from the strong dependence of the TiN T$_{c}$ on nitrogen content, especially around T$_{c}$ = 1 K.
Indeed, T$_{c}$ = 1.5 K has been measured in another TiN film from the same source and of the same quality, the only difference between the two films being a change of $ 10^{-3} $ in the $N_{2}$/Ar flow rate ratio used during the deposition.
The chips used in this study were located, on the wafer, near the first chip having $ T_{c} $ = 0.8 K, which is the value used in the following analysis. The superconducting gap $\Delta$ = 125 $\mu eV$ is determined from the relation $\Delta$ = 1.81 k$_{B}$T$_{c}$ \cite{EscoffierPRL04}. The kinetic inductance L$_{s}$ $\approx$ 44 pH is estimated using the relation L$_{s} \approx \hbar \rho/\pi\Delta t$ \cite{LeducAPL10}, with t the film thickness.
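The two estimates quoted above follow directly from the stated relations; a short numerical sketch (constants and helper names are ours):

```python
import math

HBAR = 1.0545718e-34    # J*s
K_B  = 1.380649e-23     # J/K
EV   = 1.602176634e-19  # J

def gap_from_tc(tc_kelvin):
    """Superconducting gap Delta = 1.81 kB Tc, returned in micro-eV."""
    return 1.81 * K_B * tc_kelvin / EV * 1e6

def kinetic_inductance_pH(rho_uohm_cm, delta_ueV, thickness_nm):
    """Surface kinetic inductance Ls ~ hbar*rho/(pi*Delta*t), in pH."""
    rho = rho_uohm_cm * 1e-8       # micro-ohm*cm -> ohm*m
    delta = delta_ueV * 1e-6 * EV  # micro-eV -> J
    t = thickness_nm * 1e-9        # nm -> m
    return HBAR * rho / (math.pi * delta * t) * 1e12

delta = gap_from_tc(0.8)                        # ~125 micro-eV
ls = kinetic_inductance_pH(130.0, delta, 50.0)  # ~44 pH
print(delta, ls)
```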
\\
All chips have been measured with the same setup, during separate cooldowns. The chip measured is mounted on a sample holder cooled to 100 mK in an adiabatic demagnetization refrigerator. The sample holder is placed in a light-tight box \cite{BaselmansJLTP08}, surrounded by a cryoperm and a superconducting magnetic shield. The complex transmission is measured using a commercial Vector Network Analyzer, the noise is measured using an Agilent synthesizer coupled to an IQ mixer, and the lifetime is obtained through the response to an optical pulse. More details on the setup and the measurements can be found in Ref. \cite{BaselmansJLTP08}.
\section{Design comparison}
\label{Results1}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{Fig_S21.eps}
\caption{(Color online) (a) Magnitude of the complex transmission $S_{21}$ measured at 100 mK on chips having the Mono, Duo and KIDmix designs, compared to an aluminum-only chip taken as a reference for the setup noise. (b) $S_{21}$ temperature dependence. Frequency sweeps measured at 100 mK and 500 mK for the Mono chip. Inset: temperature sweep at 5 GHz on the Mono and Duo chips. The transmission of Mono exhibits frequency- and temperature-dependent losses. In contrast, the Duo curve follows closely the lossless transmission. KIDmix has a temperature-independent transmission and the shallow dip seen at 7-8 GHz is thus attributed to additional setup loss during this measurement.}
\label{fig:S21}
\end{figure*}
The magnitude of the complex transmission $S_{21}$ measured for the three different designs between 4 and 9 GHz, at 100 mK and with a readout power of -100 dBm, is shown in Fig.~\ref{fig:S21}a. Only a few sharp KID dips are visible due to the low frequency resolution of the sweeps. The result of the same measurement on an aluminum chip is added as a reference for the setup transmission: $S_{21}$ is flat at 60 dB, corresponding to the system transmission, and decreases exponentially above 7 GHz due to one amplifier cutoff \cite{Baselmans_AIP09}.
The Mono chip exhibits an additional frequency-dependent reduction of the transmission, up to 15 dB around 7 GHz.
On the other hand, the Duo chip curve follows closely the lossless transmission. The KIDmix chip is also close, but shows an unexpected 15 dB reduction in the 7 to 8 GHz region.
In Fig.~\ref{fig:S21}b., the $S_{21}$ transmission of the Mono chip is presented at 100 mK and 500 mK. The $S_{21}$ reduction due to the impedance mismatch is frequency dependent but also temperature dependent. For comparison, the inset presents the temperature dependence of the transmission, between 100 mK and 1.3 K, for the three chips at the fixed frequency of 5 GHz. For the Duo and KIDmix chips, the curve is dominated by the aluminum transmission: the superconducting transition appears at 1.25 K and $S_{21}$ is almost temperature independent below 800 mK. In the Mono chip, the TiN superconducting transition is seen at 800 mK and the transmission depends on T even below 300 mK.
The fact that the KIDmix transmission is temperature independent shows that the reduction seen between 7 and 8 GHz is not due to the presence of TiN. Indeed, it has been identified in subsequent experiments as a parasitic setup reflection present during the measurement of the KIDmix chip: it has been cancelled by retightening one coaxial connector on the 4 K stage.
These results show that the throughline impedance matches the 50 $\Omega$ circuit impedance for the Duo and KIDmix chips.
\section{KID properties}
\label{Results2}
\begin{table}
\centering
\caption{Summary of the KID properties obtained for all KIDs: internal quality factor $Q_{i}$, internal power $P_{i}$ in dBm, frequency response $\delta x/\delta N_{qp}$, phase $S_{\theta}$ and amplitude $S_{R}$ noise at 100 Hz in dBc/Hz, lifetime $\tau$ in ms. The frequency response is obtained from the fit at T $ \geq $ 0.2 K of x=(f(T)-f(0))/f(0) versus $N_{qp}(T)$, calculated with $N_{0}$=8.7 $\times$ $10^{9}$ $eV^{-1}\mu m^{-3}$ \cite{LeducAPL10}; for more details see Ref. \cite{BaselmansJLTP08}.}
\label{tab:1}
\begin{tabular}{c|c|c|c|c|c|c}
\hline\noalign{\smallskip}
-- & $Q_{i}$ & $P_{i}$ & $\delta x/\delta N_{qp}$ & $S_{\theta}$ & $S_{R}$ & $\tau$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Mean value & 1.8 $\times$ $10^{5}$ & -62 & 6.7 $\times$ $10^{-10}$ & -62 & -70 & 1.2 \\
Best value & 4.6 $\times$ $10^{5}$ & -56 & 2.4 $\times$ $10^{-9}$ & -77 & -83 & 5.6 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
The properties of the microresonators have been measured and analysed following the method described in detail in Ref. \cite{BaselmansJLTP08}.
For each chip, multiple dips have been detected; the measured frequencies of all KIDs in all chips correspond to the fundamental and first harmonic of the geometric frequencies, shifted by a factor $f_{exp}/f_{g}$ = 0.2 $\pm$ 0.02. In this study, 5, 13 and 18 KID dips have been fully characterized in, respectively, the Mono, Duo and KIDmix chips.
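Assuming the usual LC-resonator relation $f_{exp}/f_{g}=\sqrt{L_{g}/(L_{g}+L_{k})}$ — a standard result, though not stated explicitly here — the measured shift factor of 0.2 implies a total kinetic inductance about 24 times the geometric one:

```python
def kinetic_fraction(shift):
    """Invert f_exp/f_g = sqrt(Lg / (Lg + Lk)) for the ratio Lk/Lg."""
    return 1.0 / shift ** 2 - 1.0

print(kinetic_fraction(0.2))  # 24.0: kinetic inductance dominates Lg
```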
In the Mono chip, almost all KID dips were asymmetric. This is probably another consequence of the impedance mismatch: the standing waves induce impedance variations near the KID coupling arms. This gives rise to an overestimation of the quality factors, but should not affect the determination of the other properties.
The mean and best values of the KID properties are summarized in Table~\ref{tab:1} for all KIDs of the three designs. They vary slightly between KIDs on the same chip, and we have not seen any clear systematic change in properties between the three designs.
The mean value for the internal quality factor $Q_{i}$ is 1.8 $\times$ $10^{5}$, which agrees well with the values reported by Vissers et al.\cite{Vissers_APL10} on TiN films having both (111) and (100) XRD peaks: our results thus support their conclusions on the dependence of $Q_{i}$ on crystalline orientation.
The internal powers, calculated for the highest readout power (typically -100 dBm) above which an excess noise due to current saturation is observed, are about 10 dB lower than in the case of similar Al KIDs having the same thickness.
On the other hand, the lifetime measured at 100 mK varies widely from one KID to another, even on the same chip. Half of the KIDs (distributed over all the chips) exhibit a lifetime between 0.2 and 0.3 ms, in good agreement with Ref. \cite{LeducAPL10}, but a surprisingly long lifetime has been observed in some KIDs, up to 5.3 ms at 100 mK. The result of this measurement for KID $\#1$, having the longest lifetime, is presented in Fig.~\ref{fig:lifetime}.
Fig.~\ref{fig:lifetime} also shows the temperature dependence of the lifetime for KID $\#1$ and KID $\#2$, the latter having the typical 0.2 ms lifetime at 100 mK. Both curves have a non-monotonic temperature dependence, with a maximum value at T $\approx$ 100 mK $\approx$ 0.13 T$_{c}$. Also, neither temperature dependence follows the Kaplan theory \cite{KaplanPRB76} above this value.
However, we believe that the relaxation time measured is the actual recombination time because 1) a careful optimization of the setup has been done to screen/absorb straylight and illuminate the superconducting film only [Jochem LTD14], 2) with the same setup, we have measured on a low-noise Al KID a lifetime from the pulse technique in good agreement with the one determined from the fluctuations in the noise spectra [deVissers PRL10], 3) as shown in Fig.~\ref{fig:lifetime}, a very similar lifetime value is obtained from both phase and amplitude data, and is independent of the optical pulse power.
The electrical Noise Equivalent Power is calculated following Ref. \cite{BaselmansJLTP08}. The mean value obtained for the KIDs having the short 0.2 to 0.3 ms lifetime is 5.4 $\times$ $10^{-19}$ $W/\sqrt{Hz}$. The NEP being inversely proportional to the lifetime, an even smaller value is obtained in the KIDs having the longer lifetimes, and the best NEP is as low as 4.4 $\times$ $10^{-20}$ $W/\sqrt{Hz}$.
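Taking at face value the stated inverse proportionality between NEP and lifetime, the gain from a long-lived KID can be estimated; a hypothetical scaling sketch (all other KID-to-KID differences are ignored, which is why the result only matches the measured best value to within a factor of two):

```python
def scale_nep(nep_ref, tau_ref_ms, tau_ms):
    """Rescale an NEP assuming NEP is inversely proportional to lifetime."""
    return nep_ref * tau_ref_ms / tau_ms

# Mean dark NEP 5.4e-19 W/sqrt(Hz) at tau ~ 0.25 ms; a 5.6 ms lifetime
# would then give ~2.4e-20 W/sqrt(Hz), the same order of magnitude as
# the best measured value of 4.4e-20 W/sqrt(Hz).
print(scale_nep(5.4e-19, 0.25, 5.6))
```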
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Fig_lifetime.eps}
\caption{(Color online) (Left) Phase and amplitude relaxation in KID $\#1$ (Duo chip, f$_{exp}$ $\approx$ 5.7 GHz, Q$_{i}$ $\approx$ 2 $\times$ $10^{4}$, P$_{i}$ $\approx$ -59 dBm) under an optical pulse, for different pulse powers (blue). The lifetimes deduced from the low-angle fits \cite{BaselmansJLTP08} (red) are 5.6 ms and 5.1 ms from the phase and the amplitude, respectively. (Right) Temperature dependence of the lifetime in phase (closed symbols) and amplitude (open symbols) for KID $\#1$ (squares) and KID $\#2$ (circles) (KIDmix chip, f$_{exp}$ $\approx$ 3.0 GHz, Q$_{i}$ $\approx$ 5 $\times$ $10^{5}$, P$_{i}$ $\approx$ -62 dBm).}
\label{fig:lifetime}
\end{figure}
\section{Conclusion}
\label{Conclusion}
Two new microwave hybrid designs have been made in order to read high L$_{s}$ microresonators via an Al throughline.
They have been tested experimentally using a TiN film having L$_{s}\approx$ 44 pH and T$_{c}$ = 0.8 K.
No impedance mismatch is seen in the $S_{21}$ transmission and the KID dips are symmetric.
The dark KID properties have been measured in 36 resonators. The lifetime varies from one KID to another, from 0.2 to 5.7 ms. The long lifetime observed in some KIDs gives rise to a NEP as low as 4.4 $\times$ $10^{-20}$ $W/\sqrt{Hz}$.
The origin of the discrepancy in the lifetime values in a high-quality TiN film is still open to question, and may be due to the strong dependence of the superconducting properties on stoichiometry in this material.
1006.5004
\define\pa{\partial}
\define\em{\emptyset}
\define\imp{\implies}
\define\ra{\rangle}
\define\n{\notin}
\define\iy{\infty}
\define\m{\mapsto}
\define\do{\dots}
\define\la{\langle}
\define\bsl{\backslash}
\define\lras{\leftrightarrows}
\define\lra{\leftrightarrow}
\define\Lra{\Leftrightarrow}
\define\hra{\hookrightarrow}
\define\sm{\smallmatrix}
\define\esm{\endsmallmatrix}
\define\sub{\subset}
\define\bxt{\boxtimes}
\define\T{\times}
\define\ti{\tilde}
\define\nl{\newline}
\redefine\i{^{-1}}
\define\fra{\frac}
\define\un{\underline}
\define\ov{\overline}
\define\ot{\otimes}
\define\bbq{\bar{\QQ}_l}
\define\bcc{\thickfracwithdelims[]\thickness0}
\define\ad{\text{\rm ad}}
\define\Ad{\text{\rm Ad}}
\define\Hom{\text{\rm Hom}}
\define\End{\text{\rm End}}
\define\Aut{\text{\rm Aut}}
\define\Ind{\text{\rm Ind}}
\define\IND{\text{\rm IND}}
\define\ind{\text{\rm ind}}
\define\Res{\text{\rm Res}}
\define\res{\text{\rm res}}
\define\Ker{\text{\rm Ker}}
\define\Gal{\text{\rm Gal}}
\redefine\Im{\text{\rm Im}}
\define\sg{\text{\rm sgn}}
\define\tr{\text{\rm tr}}
\define\dom{\text{\rm dom}}
\define\supp{\text{\rm supp}}
\define\card{\text{\rm card}}
\define\bst{\bigstar}
\define\he{\heartsuit}
\define\clu{\clubsuit}
\define\di{\diamond}
\define\a{\alpha}
\redefine\b{\beta}
\redefine\c{\chi}
\define\g{\gamma}
\redefine\d{\delta}
\define\e{\epsilon}
\define\et{\eta}
\define\io{\iota}
\redefine\o{\omega}
\define\p{\pi}
\define\ph{\phi}
\define\ps{\psi}
\define\r{\rho}
\define\s{\sigma}
\redefine\t{\tau}
\define\th{\theta}
\define\k{\kappa}
\redefine\l{\lambda}
\define\z{\zeta}
\define\x{\xi}
\define\vp{\varpi}
\define\vt{\vartheta}
\define\vr{\varrho}
\redefine\G{\Gamma}
\redefine\D{\Delta}
\define\Om{\Omega}
\define\Si{\Sigma}
\define\Th{\Theta}
\redefine\L{\Lambda}
\define\Ph{\Phi}
\define\Ps{\Psi}
\redefine\aa{\bold a}
\define\bb{\bold b}
\define\boc{\bold c}
\define\dd{\bold d}
\define\ee{\bold e}
\define\bof{\bold f}
\define\hh{\bold h}
\define\ii{\bold i}
\define\jj{\bold j}
\define\kk{\bold k}
\redefine\ll{\bold l}
\define\mm{\bold m}
\define\nn{\bold n}
\define\oo{\bold o}
\define\pp{\bold p}
\define\qq{\bold q}
\define\rr{\bold r}
\redefine\ss{\bold s}
\redefine\tt{\bold t}
\define\uu{\bold u}
\define\vv{\bold v}
\define\ww{\bold w}
\define\zz{\bold z}
\redefine\xx{\bold x}
\define\yy{\bold y}
\redefine\AA{\bold A}
\define\BB{\bold B}
\define\CC{\bold C}
\define\DD{\bold D}
\define\EE{\bold E}
\define\FF{\bold F}
\define\GG{\bold G}
\define\HH{\bold H}
\define\II{\bold I}
\define\JJ{\bold J}
\define\KK{\bold K}
\define\LL{\bold L}
\define\MM{\bold M}
\define\NN{\bold N}
\define\OO{\bold O}
\define\PP{\bold P}
\define\QQ{\bold Q}
\define\RR{\bold R}
\define\SS{\bold S}
\define\TT{\bold T}
\define\UU{\bold U}
\define\VV{\bold V}
\define\WW{\bold W}
\define\ZZ{\bold Z}
\define\XX{\bold X}
\define\YY{\bold Y}
\define\ca{\Cal A}
\define\cb{\Cal B}
\define\cc{\Cal C}
\define\cd{\Cal D}
\define\ce{\Cal E}
\define\cf{\Cal F}
\define\cg{\Cal G}
\define\ch{\Cal H}
\define\ci{\Cal I}
\define\cj{\Cal J}
\define\ck{\Cal K}
\define\cl{\Cal L}
\define\cm{\Cal M}
\define\cn{\Cal N}
\define\co{\Cal O}
\define\cp{\Cal P}
\define\cq{\Cal Q}
\define\car{\Cal R}
\define\cs{\Cal S}
\define\ct{\Cal T}
\define\cu{\Cal U}
\define\cv{\Cal V}
\define\cw{\Cal W}
\define\cz{\Cal Z}
\define\cx{\Cal X}
\define\cy{\Cal Y}
\define\fa{\frak a}
\define\fb{\frak b}
\define\fc{\frak c}
\define\fd{\frak d}
\define\fe{\frak e}
\define\ff{\frak f}
\define\fg{\frak g}
\define\fh{\frak h}
\define\fii{\frak i}
\define\fj{\frak j}
\define\fk{\frak k}
\define\fl{\frak l}
\define\fm{\frak m}
\define\fn{\frak n}
\define\fo{\frak o}
\define\fp{\frak p}
\define\fq{\frak q}
\define\fr{\frak r}
\define\fs{\frak s}
\define\ft{\frak t}
\define\fu{\frak u}
\define\fv{\frak v}
\define\fz{\frak z}
\define\fx{\frak x}
\define\fy{\frak y}
\define\fA{\frak A}
\define\fB{\frak B}
\define\fC{\frak C}
\define\fD{\frak D}
\define\fE{\frak E}
\define\fF{\frak F}
\define\fG{\frak G}
\define\fH{\frak H}
\define\fJ{\frak J}
\define\fK{\frak K}
\define\fL{\frak L}
\define\fM{\frak M}
\define\fN{\frak N}
\define\fO{\frak O}
\define\fP{\frak P}
\define\fQ{\frak Q}
\define\fR{\frak R}
\define\fS{\frak S}
\define\fT{\frak T}
\define\fU{\frak U}
\define\fV{\frak V}
\define\fZ{\frak Z}
\define\fX{\frak X}
\define\fY{\frak Y}
\define\ta{\ti a}
\define\tb{\ti b}
\define\tc{\ti c}
\define\td{\ti d}
\define\te{\ti e}
\define\tf{\ti f}
\define\tg{\ti g}
\define\tih{\ti h}
\define\tj{\ti j}
\define\tk{\ti k}
\define\tl{\ti l}
\define\tm{\ti m}
\define\tn{\ti n}
\define\tio{\ti\o}
\define\tp{\ti p}
\define\tq{\ti q}
\define\ts{\ti s}
\define\tit{\ti t}
\define\tu{\ti u}
\define\tv{\ti v}
\define\tw{\ti w}
\define\tz{\ti z}
\define\tx{\ti x}
\define\ty{\ti y}
\define\tA{\ti A}
\define\tB{\ti B}
\define\tC{\ti C}
\define\tD{\ti D}
\define\tE{\ti E}
\define\tF{\ti F}
\define\tG{\ti G}
\define\tH{\ti H}
\define\tI{\ti I}
\define\tJ{\ti J}
\define\tK{\ti K}
\define\tL{\ti L}
\define\tM{\ti M}
\define\tN{\ti N}
\define\tO{\ti O}
\define\tP{\ti P}
\define\tQ{\ti Q}
\define\tR{\ti R}
\define\tS{\ti S}
\define\tT{\ti T}
\define\tU{\ti U}
\define\tV{\ti V}
\define\tW{\ti W}
\define\tX{\ti X}
\define\tY{\ti Y}
\define\tZ{\ti Z}
\define\sha{\sharp}
\define\Mod{\text{\rm Mod}}
\define\Ir{\text{\rm Irr}}
\define\sps{\supset}
\define\app{\asymp}
\define\uP{\un P}
\define\bnu{\bar\nu}
\define\bc{\bar c}
\define\bp{\bar p}
\define\br{\bar r}
\define\bg{\bar g}
\define\hC{\hat C}
\define\bE{\bar E}
\define\bS{\bar S}
\define\bP{\bar P}
\define\bce{\bar\ce}
\define\tce{\ti\ce}
\define\bul{\bullet}
\define\uZ{\un Z}
\define\che{\check}
\define\cha{\che{\a}}
\define\bfU{\bar\fU}
\define\Rf{\text{\rm R}}
\define\tfK{\ti{\fK}}
\define\tfD{\ti{\fD}}
\define\tfC{\ti{\fC}}
\define\bfK{\bar{\fK}}
\define\ufK{\un{\fK}}
\define\bat{\bar\t}
\define\ucl{\un\cl}
\define\ucm{\un\cm}
\define\dcl{\dot{\cl}}
\define\udcl{\un{\dot{\cl}}}
\define\udcm{\un{\dot{\cm}}}
\define\cir{\circ}
\define\ndsv{\not\dsv}
\define\prq{\preceq}
\define\tss{\ti{\ss}}
\define\tSS{\ti{\SS}}
\define\ciss{\ci_\ss}
\define\cits{\ci_{\tss}}
\define\Upss{\Up^\ss}
\define\Uptss{\Up^{\tss}}
\define\tcj{\ti{\cj}}
\define\tcp{\ti{\cp}}
\define\tcf{\ti{\cf}}
\define\tcb{\ti{\cb}}
\define\tcy{\ti{\cy}}
\define\y{\ti r}
\define\ttir{\ti t_{\y}}
\define\tgtir{\tg{\y}}
\define\sscj{\ss_\cj}
\define\tsstcj{\tss_{\tcj}}
\define\ccl{\check{\cl}}
\define\uucl{\un{\ucl}}
\define\bUp{\bar{\Up}}
\define\tip{\ti\p}
\define\chR{\check R}
\define\wtW{\widetilde{W}}
\define\AO{A}
\define\BRI{B1}
\define\BR{B2}
\define\BT{BT}
\define\BO{Bo}
\define\BTI{TB}
\define\DL{DL}
\define\HC{H}
\define\CHT{C1}
\define\CH{C2}
\define\EH{E}
\define\IW{I}
\define\IM{IM}
\define\GN{GN}
\define\MA{M}
\define\RO{R}
\define\SH{Sh}
\define\ST{S}
\define\LU{L}
\subhead 1. Statement\endsubhead
Let $\kk$ be a field. We assume that $\kk$ is algebraically closed, unless otherwise specified. Let $G$ be a
connected reductive algebraic group over $\kk$. Let $\cb$ be the variety of Borel subgroups of $G$. Let $B\in\cb$,
let $T$ be a maximal torus of $B$ and let $N$ be the normalizer of $T$ in $G$. Let $W=N/T$ be the Weyl group. For
any $w\in W$ let $G_w=B\dw B$ where $\dw\in N$ represents $w$. The following is a restatement of Theorem 7.1 in
Bruhat's 1956 paper \cite{\BR}.
($*$) {\it The sets $G_w$ (with $w\in W$) form a partition of $G$.}
\nl
(Actually in \cite{\BR} it is assumed that $\kk=\CC$ and that $G$ is semisimple; the name "Borel subgroup" is not
used in \cite{\BR}. A variant of ($*$) over real numbers is also considered in \cite{\BR}.)
The partition $G=\sqc_{w\in W}G_w$ has been called the "Bruhat decomposition" in the Chevalley Seminar
\cite{\CH, p.148}. In this talk we will examine the history (see no.2) and applications (see no.3,4) of this
decomposition.
\subhead 2. History\endsubhead
In 1809, Gauss introduced his elimination method for solving systems of linear equations. As a consequence,
almost any matrix in $G=GL_n(\CC)$ is a product $LU$ where $L$ (resp. $U$) is a lower (resp. upper) triangular
matrix. The consideration of the subset $LU$ of $G$ is equivalent (up to translation) to the consideration of the
piece $G_{w_0}$ ($w_0$ being the element of $W$ of maximal length for the standard length function $l:W@>>>\NN$)
in the Bruhat decomposition of $G$.
Actually, the Gauss elimination method and the $LU$ decomposition already appear in Chapter 8 of the Chinese
classic "The Nine Chapters on the Mathematical Art" (from the 2nd century, Han dynasty). Here the method of
solving systems of linear equations is presented by means of examples involving systems with up to five unknowns.
(This is also the first place where negative numbers appear in the literature.)
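The elimination method described above is, in modern terms, the $LU$ factorization; a minimal sketch in Doolittle form, without pivoting (hence valid only for the "almost any" matrices whose leading principal minors are nonzero):

```python
def lu_decompose(a):
    """Doolittle LU factorization without pivoting: a = L @ U,
    with L unit lower triangular and U upper triangular."""
    n = len(a)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of U from rows already eliminated.
        for j in range(i, n):
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L (the elimination multipliers).
        for j in range(i + 1, n):
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_decompose([[2.0, 1.0], [4.0, 5.0]])
# L = [[1, 0], [2, 1]], U = [[2, 1], [0, 3]]
```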
In his 1934 paper \cite{\EH}, Ehresmann shows that any partial flag manifold of $G=GL_n(\CC)$ admits a
decomposition into finitely many complex cells, generalizing earlier work of Schubert (around 1880) for the
Grassmannians. The decomposition is in terms of a fixed full flag and the pieces of the decomposition are clearly
stable under the stabilizer of that fixed flag. Ehresmann's decomposition of the full flag manifold can now be
viewed as induced by the Bruhat decomposition. Ehresmann also parametrizes his cells in terms of certain tableaux
of integers (for the full flag manifold of $GL_4(\CC)$ he describes explicitly the $4!$ tableaux which appear)
but he does not interpret these tableaux in terms of the Weyl group.
In his 1951 paper \cite{\ST} (submitted in 1949), Steinberg identifies the orbits of $GL_n(F_q)$ on pairs of
complete flags in $F_q^n$ with permutations of $n$ objects (see p.275,276), a result very close to $(*)$ for
$GL_n$ over a finite field.
In their 1950 book \cite{\GN, p.122}, Gelfand and Naimark state and prove $(*)$ for $G=SL_n(\CC)$.
In his 1954 announcement \cite{\BRI}, Bruhat formulates for the first time ($*$) for general semisimple groups
over $\CC$ and states that he has verified it for all classical groups.
One of the consequences of ($*$) is that (when $\kk=\CC$), $\cb$ has no odd integral homology and no torsion in
even integral homology. This was first proved by Bott \cite{\BO} in 1954 (independently of ($*$)) using Morse
theory.
A proof of ($*$) (with $\kk=\CC$) valid for any $G$ was given in the 1956 paper \cite{\HC} of Harish-Chandra;
this is the proof reproduced in Bruhat's 1956 paper \cite{\BR}. A proof of ($*$) for arbitrary $\kk$ was given in
Chevalley's 1955 paper \cite{\CHT}; in this paper Chevalley mentions the existence of Harish-Chandra's proof
(which was unpublished at the time). In \cite{\BTI}, Borel and Tits proved a version of ($*$) valid over an
arbitrary field.
\subhead 3. Significance\endsubhead
By allowing one to reduce many questions about $G$ to questions about the Weyl group $W$, Bruhat
decomposition is indispensable for the understanding of both the structure and representations of $G$.
We shall illustrate this by several examples. (A further example is given in no.4.)
The order of a Chevalley group over a finite field was computed in \cite{\CHT} (using Bruhat decomposition) in
terms of the exponents of the Weyl group.
In the representation theory of a reductive group over a finite field a key role is played by the Iwahori-Hecke
algebra (introduced in \cite{\IW}); this is a deformation of the group algebra of the Weyl group whose definition
is based on the Bruhat decomposition. In the same theory the varieties (introduced in \cite{\DL}) obtained by
taking inverse image of a Bruhat double coset under the Lang map have turned out to be very useful.
In the representation theory of complex reductive groups,
to understand the character of irreducible representations one needs the local intersection cohomology of
the closure of a Bruhat double coset.
In the representation theory of split $p$-adic reductive groups, a key role is played by the affine Hecke algebra
whose definition is based on the generalization of the Bruhat decomposition given by Iwahori and Matsumoto
\cite{\IM}. (The non-split case was treated by Bruhat and Tits in \cite{\BT}.)
The following is a reformulation of ($*$):
(i) the orbits of $G$ acting on $\cb\T\cb$ by simultaneous conjugation are in natural bijection with $W$.
\nl
Assume now that $G$ is semisimple, simply connected and $\kk=\CC$. Note that (i) has been extended in two
different directions as follows.
(ii) Let $G_\RR$ be the group of real points for a fixed real structure on $G$. In 1966 Aomoto \cite{\AO} showed
that the conjugation action of $G_\RR$ on $\cb$ has finitely many orbits and in 1979 Rossmann \cite{\RO}
gave a parametrization of the orbits in terms of Weyl groups.
(iii) Let $K$ be the identity component of the group of fixed points of an involution $\s$ of $G$ (as an
algebraic group). In 1979 Matsuki \cite{\MA} showed that the conjugation action of $K$ on $\cb$ has finitely many
orbits and gave an explicit parametrization of the orbits (they are in bijection with a set of orbits as in (ii)).
The local intersection cohomology of the orbit closures in (iii) plays a key role for understanding the character
of irreducible representations of $G_\RR$ as in (ii).
\subhead 4. Bruhat decomposition and conjugacy classes\endsubhead
Assume that $G$ is semisimple and that the characteristic of $\kk$ is not a bad prime for $G$. By studying the
interaction of Bruhat decomposition with conjugacy classes in $G$ we obtain a surprising connection between the
set $\uuG$ of unipotent conjugacy classes in $G$ and the set $\uW$ of conjugacy classes in $W$.
Let $C\in\uW$, let $d_C=\min(l(w);w\in C)$ and let $C_{min}=\{w\in C;l(w)=d_C\}$. We have the following result,
see \cite{\LU, 0.4(i)}:
(a) {\it Let $w\in C_{min}$. There is a unique $\g\in\uuG$ such that $\g\cap G_w\ne\em$ and such that whenever
$\g'\in\uuG$, $\g'\cap G_w\ne\em$, we have $\g\sub\bar\g'$. Moreover, $\g$ depends only on $C$, not on $w$; we
denote it by $\g_C$.}
\nl
Thus we obtain a map $\Ph:\uW@>>>\uuG$, $C\m\g_C$.
Let $\uW_{el}$ be the set of conjugacy classes in $W$ which are elliptic (that is consist of elements with no
eigenvalue $1$ in the reflection representation of $W$). Here are some properties of $\Ph$.
(b) $\Ph$ is surjective;
(c) $\Ph|_{\uW_{el}}:\uW_{el}@>>>\uuG$ is injective.
(d) If $C\in\uW_{el}$ and $w\in C_{min}$, then $\Ph(C)$ is the unique unipotent class $\g$ of $G$ such that
$\g\cap G_w$ is a union of finitely many $B$-orbits for the conjugacy action
of $B$ on $G_w$. Moreover, if $g\in\Ph(C)\cap G_w$,
the dimension of the centralizer of $g$ in $B$ (resp. $G$) is equal to $0$ (resp. $d_C$).
(e) If $C\in\uW-\uW_{el}$, then $\Ph(C)$ has a simple description in terms of the map analogous to $\Ph$ for a
Levi subgroup of a proper parabolic subgroup of $G$.
\nl
There is substantial evidence that the definition and properties of $\Ph$ are valid without assumption on the
characteristic of $\kk$ and that the first assertion of (d) is valid for any $C\in\uW$.
For example, if $G=GL_n(\kk)$ then both $\uW$ and $\uuG$ may be identified with the set $P_n$ of partitions of
$n$ (using the cycle types for $\uW$ and the sizes of the Jordan blocks for $\uuG$) and $\Ph$ becomes the
identity map.
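As an illustration (a standard example spelled out here as a sketch; it is not taken verbatim from the text above), for $G=GL_3(\kk)$ both sets are the partitions of $3$ and $\Ph$ reads:

```latex
% Conjugacy classes in W = S_3 (cycle types) versus unipotent classes
% in GL_3 (Jordan types); \Ph matches the two labellings of partitions of 3.
$$
\underbrace{\{e\}}_{(1,1,1)}\ \mapsto\ \text{trivial class},\qquad
\underbrace{\{\text{transpositions}\}}_{(2,1)}\ \mapsto\ \text{Jordan type }(2,1),\qquad
\underbrace{\{\text{$3$-cycles}\}}_{(3)}\ \mapsto\ \text{regular unipotent}.
$$
% Only the class of 3-cycles is elliptic: a 3-cycle has eigenvalues
% \zeta, \zeta^2 (no eigenvalue 1) in the 2-dimensional reflection
% representation of S_3.
```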
If $G=Sp_{2n}(\kk)$ then $W$ can be naturally identified with a subgroup of the symmetric group in $2n$ letters
hence we have a natural map $i:\uW@>>>P_{2n}$ (neither injective nor surjective in general); also, via the
obvious imbedding $G\sub GL_{2n}(\kk)$ we obtain an imbedding of $\uuG$ into the set of unipotent classes of
$GL_{2n}(\kk)$ hence we have a natural injective map $j:\uuG@>>>P_{2n}$. It turns out that the image of $i$
coincides with the image of $j$ and $\Ph:\uW@>>>\uuG$ is characterized by $j\Ph(C)=i(C)$ for all $C$.
\widestnumber\key{\GN}
\Refs
\ref\key{\AO}\by K.Aomoto\paper On some double coset decompositions of complex semisimple groups\jour J. Math.
Soc. Japan\vol18\yr1966\pages1-44\endref
\ref\key{\BTI}\by A.Borel and J.Tits\paper Groupes r\'eductifs\jour Publ. Math. IHES\vol27\yr1965\pages55-152
\endref
\ref\key{\BO}\by R.Bott\paper On torsion in Lie groups\jour Proc. Nat. Acad. Sci.\vol40\yr1954\pages586-588\endref
\ref\key{\BRI}\by F.Bruhat\paper Repr\'esentations induites des groupes de Lie semisimples complexes\jour Comptes
Rendus Acad. Sci. Paris\vol238\yr1954\pages437-439\endref
\ref\key{\BR}\by F.Bruhat\paper Sur les repr\'esentations induites des groupes de Lie\jour Bull. Soc. Math. France
\vol84\yr1956\pages97-205\endref
\ref\key{\BT}\by F.Bruhat and J.Tits\paper Groupes r\'eductifs sur un corps local,I\jour Publ. Math. IHES\vol41
\yr1972\pages5-276\endref
\ref\key{\CHT}\by C.Chevalley\paper Sur certains groupes simples\jour Tohoku Math. J.\vol7\yr1955\endref
\ref\key{\CH}\by C.Chevalley \book Classification des groupes alg\'ebriques semisimples\publ Springer\endref
\ref\key{\DL}\by P.Deligne and G.Lusztig\paper Representations of reductive groups over finite fields\jour Ann.
Math.\vol103\yr1976\pages103-161\endref
\ref\key{\EH}\by C.Ehresmann\paper Sur la topologie de certains espaces homog\`enes\jour Ann. Math.\yr1934\endref
\ref\key{\GN}\by I.M.Gelfand and M.A.Naimark\book Unitarnye predstavleniya klassiceskih grupp\bookinfo Trudy Mat.
Inst. Steklov\vol36\publaddr Moscow\yr1950\endref
\ref\key{\HC}\by Harish-Chandra\paper On a lemma of Bruhat\jour J. Math. Pures Appl.\vol35\yr1956\pages203-210
\endref
\ref\key{\IW}\by N.Iwahori\paper On the structure of a Hecke ring of Chevalley groups over a finite field\jour
J. Fac. Sci. Univ.Tokyo\vol10\yr1964\endref
\ref\key{\IM}\by N.Iwahori and H.Matsumoto\paper On some Bruhat decomposition and the structure of the Hecke
rings of $p$-adic Chevalley groups\jour Publ. Math. IHES\vol25\yr1965\pages5-48\endref
\ref\key{\LU}\by G.Lusztig\paper From conjugacy classes in the Weyl group to unipotent classes\jour
arxiv:1003.0412\endref
\ref\key{\MA}\by T.Matsuki\paper The orbits of affine symmetric spaces under the action of minimal parabolic
subgroups\jour J. Math. Soc. Japan\vol31\yr1979\pages331-357\endref
\ref\key{\RO}\by W.Rossmann\paper The structure of semisimple symmetric spaces\jour Canad. J. Math.\vol31\yr1979
\pages157-180\endref
\ref\key{\SH}\by K.Shen\book The Nine Chapters on the Mathematical Art\publ Oxford Univ. Press\yr1999\endref
\ref\key{\ST}\by R.Steinberg\paper A geometric approach to the representations of the full linear group over a
Galois field\jour Trans. Amer. Math. Soc.\vol71\yr1951\pages274-282\endref
\endRefs
\enddocument
\section{Introduction}
\subsection{Exponential mixing of the geodesic flow}
Let $\mathbb{H}^{d+1}$ be the hyperbolic $(d+1)$-space. Let $G=\operatorname{SO}(d+1,1)^{\circ}$, which is the group of orientation preserving isometries of $\mathbb{H}^{d+1}$. Let
$\Gamma<G$ be a non-elementary, torsion-free, geometrically finite discrete subgroup with parabolic elements. Denote by $\delta$ the critical exponent of $\Gamma$, which is defined as the abscissa of convergence of the Poincar\'e series $\sum_{\gamma\in \Gamma}e^{-sd(o,\gamma o)}$. Set $M=\Gamma\backslash \H^{d+1}$, so $M$ contains cusps. We consider the geodesic flow $(\mathcal{G}_t)_{t\in \mathbb{R}}$ acting on the unit tangent bundle $\operatorname{T}^1(M)$ over $M$. The invariant measure for the flow we will work with is the Bowen-Margulis-Sullivan measure $m^{\operatorname{BMS}}$, which is supported on the non-wandering set of the geodesic flow and is known to be the unique probability measure with maximal entropy $\delta$ \cite{OtPe}.
Our main result establishes exponential mixing of the geodesic flow.
\begin{thm}
\label{main thm}
The geodesic flow is exponentially mixing with respect to $m^{\operatorname{BMS}}$: there exists $\eta>0$ such that for any functions $\phi, \psi\in C^1(\operatorname{T}^1(M))$ and any $t>0$, we have
\begin{equation*}
\int_{\operatorname{T}^1(M)} \phi\cdot\psi\circ\mathcal G_t\ \ddm^{\operatorname{BMS}} =m^{\operatorname{BMS}} (\phi) m^{\operatorname{BMS}} (\psi)+O(\lVert \phi \rVert_{C^1} \lVert \psi\rVert_{C^1}e^{-\eta t}),
\end{equation*}
where $\|\cdot\|_{C^1}$ is the $C^1$-norm with respect to the Riemannian metric on $\operatorname{T}^1(M)$.
\end{thm}
For a geometrically finite discrete subgroup $\Gamma$, Sullivan~\cite{Sul} proved the ergodicity of the geodesic flow with respect to $m^{\operatorname{BMS}}$ and Babillot~\cite{Bab} proved that the geodesic flow is mixing with respect to $m^{\operatorname{BMS}}$.
When $\delta>d/2$, Theorem \ref{main thm} was proved by Mohammadi-Oh~\cite{MoOh} and Edwards-Oh \cite{EO} using the representation theory of $L^2(M)$ and the spectral gap of the Laplace operator~\cite{LP}.
When $\Gamma$ is convex cocompact, i.e., geometrically finite without parabolic elements, Theorem~\ref{main thm} and its corollaries were proved by Naud \cite{Nau}, Stoyanov \cite{Sto} and Sarkar-Winter \cite{SaWi}, building on the work of Dolgopyat \cite{Dol}. The main contribution of our work therefore lies in treating groups with small critical exponent and with parabolic elements, completing the story of exponential mixing of the geodesic flow on a geometrically finite hyperbolic manifold.
Using Roblin's transverse intersection argument \cite{Rob, OS, OhWi}, we obtain the decay of matrix coefficients (Theorem \ref{thm:matrix}) from Theorem \ref{main thm}. Theorems \ref{main thm} and \ref{thm:matrix} are known to have many immediate applications in number theory and geometry; to name a few, see \cite{MMO} for counting closed geodesics and \cite{KeOh} for shrinking target problems.
\subsection{Resonance free region}
Recall $M=\Gamma\backslash\H^{d+1}$. Consider the Laplace operator $\Delta_M$ on $M$. Lax and Phillips completely described its spectrum on $L^2(M)$. The half line $[d^2/4,\infty)$ is the continuous spectrum and contains no embedded eigenvalues. The rest of the spectrum (the point spectrum) is finite; its lowest eigenvalue is $\delta(d-\delta)$ if $\delta>d/2$, and it is empty if $\delta\leq d/2$. Let $S$ be the set of eigenvalues of $\Delta_M$. The resolvent of the Laplacian $$R_M(s)=(\Delta_M-s(d-s))^{-1}:L^2(M)\to L^2(M)$$ is well-defined and analytic on $\{\Re s>d/2,\ s(d-s)\notin S\}$.
Guillarmou and Mazzeo showed that $R_M(s)$ has a meromorphic continuation to the whole complex plane as an operator from $C^\infty_c(M)$ to $C^\infty(M)$ with poles of finite rank \cite{GM}. These poles are called resonances. Patterson showed that on the line $\Re s= \delta$, the point $s=\delta$ is the unique pole of $\Gamma(s-\frac{d}{2}+1)R_M(s)$ and it is a simple pole \cite{Pat1}. We use Theorem \ref{main thm} to further obtain a resonance free region.
\begin{thm}\label{cor:resonance}
There exists $\eta>0$ such that on the half-plane $ \Re s>\delta-\eta$, $s=\delta$ is the only resonance for the resolvent $R_M(s)$ if $\delta\notin d/2-\mathbb N_{\geq 1}$; otherwise, $R_M(s)$ is analytic on $\Re s>\delta-\eta$.
\end{thm}
In the convex cocompact case, a resonance free region for the resolvent is closely related to a zero free region for the Selberg zeta function. In the geometrically finite case, however, such a relation is not well understood except for surfaces.
\subsection{On the proof of the main theorem}
The proof of Theorem \ref{main thm} can be reduced to the case when $\Gamma$ is Zariski dense and then the proof falls into two parts:
we code the geodesic flow and prove a Dolgopyat-type spectral estimate for the corresponding transfer operator.
When the non-wandering set of the geodesic flow is compact, the coding is well-studied: we have, for example, the Bowen-Series coding \cite{BKS}, Bowen's coding \cite{Bowen} and Ratner's coding \cite{Rat}. When the manifold contains cusps, only partial results are available. Dal'bo-Peign\'{e} \cite{DaPe, DaPe1} and Babillot-Peign\'{e} \cite{BaPe} provided codings for generalized Schottky groups. Stadlbauer \cite{Sta} and Ledrappier-Sarig \cite{LeSa} provided codings for non-uniform lattices in $\operatorname{SO}(2,1)^{\circ}$, making use of the fact that such a discrete subgroup is a free group and has a nice fundamental domain in $\mathbb{H}^2$. Our coding works for general geometrically finite discrete subgroups with parabolic elements and is partly inspired by the works of Lai-Sang Young \cite{You} and Burns-Masur-Matheus-Wilkinson \cite{BMMW}. We show that the study of the geodesic flow $(\mathcal{G}_t)_{t\in \mathbb{R}}$ on $\operatorname{T}^1(M)$ can be reduced to that of an expanding map on the boundary. The intuitive idea behind the scenes is to find a $2d$-dimensional submanifold $\widetilde{\mathcal{S}}$ in $\operatorname{T}^1(\H^{d+1})$ transversal to the geodesic flow such that its image $\mathcal{S}$ in $\operatorname{T}^1(M)$ is a Poincar\'{e} section for the geodesic flow. Let $\pi(\widetilde{\mathcal{S}})$ be the image of $\widetilde{\mathcal{S}}$ in the boundary $\partial \H^{d+1}$ under the visual map $\pi$, which sends each vector to the forward endpoint of the geodesic it determines. The key observation is that the return map of $\mathcal{S}$ can be understood as an expanding map on $\pi(\widetilde{\mathcal{S}})$. This is in fact not a complete surprise: a similar idea appeared in the Bowen-Series coding, where the cutting sequence of an oriented geodesic is related to a sequence of expanding maps applied to the forward endpoint of the geodesic.
In previous works proving exponential mixing of flows, the arguments were usually delicate and technical. The boundary expanding-map point of view allows us not only to carry out the analysis but to do so in a neat way, and this advantage becomes more apparent for higher-dimensional manifolds.
The precise description of the expanding map on the boundary is as follows. We consider the upper-half space model for $\mathbb{H}^{d+1}$ and, without loss of generality, we may assume that $\infty$ is a parabolic fixed point of $\Gamma$. Let $\operatorname{Stab}_{\infty}(\Gamma)$ be the stabilizer of $\infty$ in $\Gamma$ and let $\Gamma_{\infty}$ be a maximal normal abelian subgroup of $\operatorname{Stab}_{\infty}(\Gamma)$. Set $\Delta_{0}:=\Delta_{\infty}$ to be a fundamental domain of $\Gamma_{\infty}$ in $\partial \mathbb{H}^{d+1}\backslash \{\infty\}$ (see Section \ref{sec:cusps} for details). Denote by $\Lambda_{\Gamma}$ the limit set of $\Gamma$ and by $\mu$ the Patterson-Sullivan measure, a finite measure supported on $\Lambda_{\Gamma}$.
\begin{customprop}{\ref{prop:coding}}
There are constants $C_1>0$, $\lambda,\,\epsilon_0\in (0,1)$, a countable collection of disjoint, open subsets $\sqcup_j\Delta_j$ in $\Delta_0$ and an expanding map $T:\sqcup_j\Delta_j\to \Delta_0$ such that:
\begin{enumerate}
\item $\sum_{j}\mu(\Delta_j)=\mu(\Delta_0)$.
\item For each $j$, there is an element $\gamma_j\in \Gamma$ such that $\Delta_j=\gamma_j \Delta_0$ and $T|_{\Delta_j}=\gamma_j^{-1}$.
\item Each $\gamma_j$ is a uniform contraction: $|\gamma_j'(x)|\leq \lambda$ for all $x\in \Delta_0$.
\item For each $\gamma_j$, $|\mathrm D(\log |\gamma_j'|)(x)|<C_1$ for all $x\in \Delta_0$, where $\mathrm D(\log |\gamma_j'|)(x)$ is the differential of the map $z\mapsto \log |\gamma_j'(z)|$ at $x$.
\item Let $R$ be the function on $\sqcup_j \Delta_j$ given by $R(x)=\log|\mathrm D T(x)|$. Then
$\int e^{\epsilon_0 R}\mathrm{d}\mu<\infty.$
\end{enumerate}
\end{customprop}
Moreover, this coding satisfies the \textbf{uniform nonintegrable condition (UNI)} (Lemma~\ref{lem:uni}).
The construction of the coding proceeds by induction. There are three main ingredients: separation between parabolic fixed points (Lemma \ref{lem:pdistance}), the doubling property and friendliness of the Patterson-Sullivan measure (Sections \ref{sec:double} and \ref{sec:friendliness}), and the recurrence of the geodesic flow (Lemma \ref{lem:recurrence}). The main result is Proposition \ref{keylemma}, which says that the measure of the set that has not been partitioned at time $n$ decays exponentially. To prove this proposition, a fine structure on the remaining set is introduced to analyze how the good part and the bad part at time $n$ evolve at time $n+1$. One of the difficulties comes from the Patterson-Sullivan measure: only its doubling property and friendliness are available to us, which estimate the measure of a ball and of the neighborhood of an affine subspace, but the bad part need not be of either of these two shapes. We overcome this difficulty by introducing a notion of equivalence classes of parabolic fixed points. With these three main ingredients available, we believe our coding can be adapted to other settings, such as variable curvature; it seems to us that the main work there would be proving the desired properties of the Patterson-Sullivan measure. In the current setting, we obtain these properties using the results of Stratmann-Velani \cite{StrVel} and Das-Fishman-Simmons-Urba\'{n}ski \cite{DFSU}.
Proposition \ref{lem:loc} is the bridge between the geodesic flow $(\mathcal{G}_t)_{t\in \mathbb{R}}$ on $\operatorname{T}^1(M)$ and the expanding map $T$ on $\Delta_0$. We show that the geodesic flow is a factor of a hyperbolic skew product flow constructed using $T$.
Lastly, our proof of the Dolgopyat-type spectral estimate is influenced by those in~\cite{ArMe, AGY, BaVa, Dol, Nau, Sto}. In the current setting, the set where nontrivial dynamics takes place is the set of limit points of $\Gamma$ in $\Delta_0$, which is a fractal set. It is not a John domain, so it does not fall into the general framework described in \cite{AGY}. We provide an argument tailored to this setting, and here we again use the doubling property and friendliness of the Patterson-Sullivan measure.
\begin{comment}
There are other approaches to studying exponential mixing of the geodesic flow and the resolvent of the Laplacian. For example, Bourgain-Dyatlov proved that the Fourier transform of the Patterson-Sullivan measure of a convex cocompact discrete subgroup $\Gamma$ in $\operatorname{SO}(2,1)^{\circ}$ decays polynomially and, as an application, obtained an essential spectral gap for the Selberg zeta function which depends only on the critical exponent of $\Gamma$. Magee-Naud used Bourgain-Dyatlov's result as an alternative to Dolgopyat's argument. Generalizing Bourgain-Dyatlov's result to general geometrically finite discrete subgroups in $\operatorname{SO}(d+1,1)^{\circ}$ is challenging. Li-Naud-Pan proved the Fourier decay of the Patterson-Sullivan measure for a Zariski dense Schottky group in $\operatorname{SO}(3,1)^{\circ}$, where the new
ingredients include some representation theory and regularity estimates for stationary measures of certain random walks on linear groups.
\end{comment}
\textbf{Organization of the paper.}
\begin{itemize}
\item In Section \ref{sec:pre}, we gather the basic facts and preliminaries about hyperbolic spaces, geometrically finite discrete subgroups, the structure of cusps, Patterson-Sullivan measure, and Bowen-Margulis-Sullivan measure.
\item In Section \ref{sec:reduction zariski}, we prove that Theorem \ref{main thm} can be reduced to Zariski dense case.
\item In Section \ref{sec:geo}, we state the results of the coding (Proposition \ref{prop:coding}, Lemmas \ref{lem:uni}, \ref{lem:l1}). We construct a hyperbolic skew product flow and state the result that it is exponentially mixing (Theorem \ref{thm:skew}). We show that the geodesic flow on $\operatorname{T}^1(M)$ is a factor of this hyperbolic skew product flow (Proposition \ref{lem:loc}) and deduce the exponential mixing of the geodesic flow from Theorem \ref{thm:skew}.
\item In Section \ref{sec:parmea}, we provide an explicit description of the action of an element $\gamma\in \Gamma$ on $\partial \mathbb{H}^{d+1}$ and an estimate on the norm of the derivative of $\gamma$ (Section \ref{sec:explicit}). We list the basics for the multi-cusp case (Section \ref{sec:multi}). The doubling property and the friendliness of the Patterson-Sullivan measure are proved in Sections \ref{sec:double} and \ref{sec:friendliness}.
\item In Section \ref{sec:code}, we start with the construction of the coding in the one-cusp case, which is also the first step for the multi-cusp case. The main result is the exponential decay of the remaining set (Proposition \ref{keylemma}). Sections \ref{sec:sep}-\ref{sec:energy} are devoted to the proof of Proposition \ref{keylemma}. The coding for the multi-cusp case is provided in Sections \ref{sec:exptail} and \ref{sec:codmulti}. The results of the coding (Proposition \ref{prop:coding}, Lemmas \ref{lem:uni}, \ref{lem:l1}) are established in Sections \ref{sec:exptail}-\ref{sec:UNI}.
\item In Section \ref{sec:spegap}, we prove a Dolgopyat-type spectral estimate for the corresponding transfer operator and the main result is an $L^2$-contraction proposition (Proposition \ref{L2contracting}).
\item In Section \ref{sec:expmix}, we finish the proof of Theorem \ref{thm:skew}.
\item In Section \ref{sec:res}, we prove the application of obtaining a resonance free region for the resolvent $R_{M}(s)$ (Theorem \ref{cor:resonance}).
\end{itemize}
\textbf{Notation.}
In the paper, given two real functions $f$ and $g$, we write $f\ll g$ if there exists a constant $C>0$ depending only on $\Gamma$ such that $f\leq Cg$. We write $f\approx g$ if $f\ll g$ and $g\ll f$.
\subsection*{Acknowledgement}
We would like to thank S\'{e}bastien Gou\"{e}zel and Carlos Matheus for helpful discussions and Amie Wilkinson for suggesting the paper by Lai-Sang Young. The second author would like to express her gratitude to Hee Oh for introducing her to this circle of areas.
Part of this work was done while the two authors were at the Bernoulli Center for the workshop ``Dynamics, Geometry and Combinatorics''; we would like to thank the organizers and the center for its hospitality.
\section{Preliminaries on hyperbolic spaces and the PS measure}\label{sec:pre}
\subsection{Hyperbolic spaces}
We will use the upper-half space model for $\H^{d+1}$: $$\H^{d+1}=\{x=(x_1,\ldots,x_{d+1})\in\mathbb R^{d+1}:\,x_{d+1}>0 \}.$$ Let $o=(0,\cdots,0,1)\in\H^{d+1}$. For $x\in \H^{d+1}$, write $h(x)$ for the height of the point $x$, which is the last coordinate of $x$. The Riemannian metric on $\H^{d+1}$ is given by $$\mathrm{d} s^2=\frac{\mathrm{d} x_1^2+\cdots+\mathrm{d} x_{d+1}^2}{x_{d+1}^2}.$$
Let $\partial \H^{d+1}$ be the visual boundary. On $\partial\H^{d+1}=\mathbb R^d\cup\{\infty \}$, we have the spherical metric, denoted by $d_{\mathbb{S}^d}(\cdot,\cdot)$.
We also have the Euclidean metric, denoted by
$d_{E}(x,x')$ or $|x-x'|$ for any $x,x'\in \partial \H^{d+1}$. As this metric will be used most frequently, we will simply write $d(\cdot, \cdot)$ when there is no confusion.
Each $g\in G$ acts conformally on $\partial\H^{d+1}$. For $x\in\partial\H^{d+1}$, let $|g'(x)|$ be the linear distortion of the conformal action of $g$ at $x$ with respect to the Euclidean metric, which is also the norm of the derivative seen as a linear map on tangent spaces. Let $|g'(x)|_{\S^d}$ be the norm with respect to the spherical metric.
We have the relation
\begin{equation}\label{equ:change}
|g'(x)|_{\S^d}=\frac{1+|x|^2}{1+|gx|^2}|g'(x)|.
\end{equation}
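Relation \eqref{equ:change} is simply the conformal change-of-metric computation (recorded here as a sketch): the spherical metric on $\mathbb{R}^d\cup\{\infty\}$ is $2\,|\mathrm{d}x|/(1+|x|^2)$, so comparing the conformal factors at $x$ and at $gx$ gives

```latex
% Linear distortion with respect to the spherical metric: rescale the
% Euclidean distortion |g'(x)| by the conformal factor 2/(1+|.|^2)
% at the target point gx and divide by the factor at the source point x.
$$
|g'(x)|_{\S^d}
= \frac{\dfrac{2}{1+|gx|^2}\,|g'(x)|}{\dfrac{2}{1+|x|^2}}
= \frac{1+|x|^2}{1+|gx|^2}\,|g'(x)|.
$$
```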
Another formula for $|g'(x)|_{\mathbb{S}^d}$ is
\[ |g'(x)|_{\S^d}=e^{-\beta_x(g^{-1}o,o)} ,\]
where $\beta_x(\cdot,\cdot)$ is the Busemann function given by $\beta_x(z,z')=\lim_{t\to +\infty}d(z,x_t)-d(z',x_t)$ with $x_t$ an arbitrary geodesic ray tending to $x$.
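For concreteness (a standard computation in the upper-half space model, included as a sketch), when the base point is $\infty$ the Busemann function is expressed in terms of heights:

```latex
% Along the vertical ray x_t = (0,...,0,t) one has
% d(z, x_t) = log t - log h(z) + o(1) as t -> infinity, hence
$$
\beta_{\infty}(z,z')=\lim_{t\to+\infty}\big(d(z,x_t)-d(z',x_t)\big)
=\log\frac{h(z')}{h(z)},
$$
% where h(.) denotes the height (the last coordinate) introduced above.
```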
\subsection{Geometrically finite discrete subgroups}
Let $\Gamma$ be a torsion-free, non-elementary discrete subgroup in $G$. We list some basics of geometrically finite discrete subgroups.
The \textbf{limit set} $\Lambda_{\Gamma}$ is the set of accumulation points in $\H^{d+1}\cup\partial\H^{d+1}$ of an orbit $\Gamma x$ for some $x\in \mathbb{H}^{d+1}$.
As we assume $\Gamma$ is torsion-free, $\Lambda_{\Gamma}$ is contained in $\partial \mathbb{H}^{d+1}$. The convex core of $M$ is $C(M)=\Gamma\backslash \text{hull}(\Lambda_{\Gamma})\subset M$, where $\text{hull}(\Lambda_{\Gamma})$ is the convex hull of $\Lambda_\Gamma$.
A limit point $x\in\Lambda_{\Gamma}$ is called conical if there exist a geodesic ray tending to $x$ and a sequence of elements $\gamma_n\in\Gamma$ such that $\gamma_no$ converges to $x$ and the distance between $\gamma_no$ and the geodesic ray is bounded. A subgroup $\Gamma'$ of $\Gamma$ is called parabolic if $\Gamma'$ fixes only one point in $\partial\H^{d+1}$. A point $x\in\Lambda_\Gamma$ is called a {\textbf{parabolic fixed point}} if its stabilizer ${\rm{Stab}}_{\Gamma}(x)$ is parabolic. A parabolic fixed point is called {\textbf{bounded parabolic}} if the quotient $\mathrm{Stab}_{\Gamma}(x)\backslash(\Lambda_{\Gamma}-\{x\})$ is compact.
A horoball based at $x\in \partial \mathbb{H}^{d+1}$ is a set $\{y\in\H^{d+1}:\,\ \beta_x(y,o)<t \}$ for some $t\in\mathbb R$. The boundary of a horoball is called a horosphere. We call a horoball $H$ based at a parabolic fixed point $x\in\Lambda_{\Gamma}$ a horocusp region if $\gamma H\cap H=\emptyset$ for any $\gamma\in\Gamma-\mathrm{Stab}_{\Gamma}(x)$. Then the image of $H$ in $M$ under the quotient map, $\Gamma\backslash \Gamma H$, is isometric to $\mathrm{Stab}_{\Gamma}(x)\backslash H$ and is called a proper horocusp of $M$.
\begin{defn}[Geometrically finite discrete subgroup~\cite{Bow},~\cite{Ratc}]\label{def:geofinite} A non-elementary discrete subgroup $\Gamma<\operatorname{SO}(d+1,1)^{\circ}$ is called geometrically finite if it satisfies one of the following equivalent conditions:
\begin{itemize}
\item There is a (possibly empty) finite union $V$ of proper horocusps of $M$, with disjoint closures, such that $C(M)-V$ is compact.
\item Every limit point of $\Gamma$ is either conical or bounded parabolic.
\end{itemize}
\end{defn}
\subsection{Structure of cusps}\label{sec:cusps}
Assume that $\Gamma$ is a geometrically finite discrete subgroup with parabolic elements. We describe a fundamental region for a parabolic fixed point. Suppose $\infty$ is a parabolic fixed point of $\Gamma$. Let $\Gamma_\infty^{'}=\mathrm{Stab}_\Gamma(\infty)$ be the parabolic subgroup of $\Gamma$ fixing $\infty$. Then $\Gamma_\infty^{'}$ acts isometrically on each horosphere $\{x\in\H^{d+1}:\ x_{d+1}=a\}\simeq \mathbb R^d$ with the induced Euclidean metric. The following is a result of Bieberbach (see~\cite[Page 5]{GM} or~\cite[Section 2.2]{Bow}).
\begin{lem}[Bieberbach]
\label{lem:biberbach}
Let $\Gamma_\infty'$ be a discrete subgroup of the isometry group of $\mathbb R^d$. There exist a maximal normal abelian subgroup $\Gamma_\infty \subset \Gamma_\infty^{'}$ of finite index and an affine subspace $Z\subset \mathbb{R}^d$ of dimension $k$, invariant under $\Gamma_{\infty}'$, such that $\Gamma_{\infty}$ acts as a group of translations of rank $k$ on $Z$. Moreover, using orthogonal decomposition $(y,z)\in Y\times Z\simeq\mathbb R^{d-k}\times \mathbb{R}^k$, we can write each element $\gamma\in \Gamma_\infty^{'}$ in the form
\begin{equation*}
\gamma(y,z)=(A_\gamma y, R_\gamma z+b_\gamma),
\end{equation*}
where $A_{\gamma}\in \operatorname{O}(d-k)$ and $R_{\gamma}\in \operatorname{O}(k)$ with $R_{\gamma}^m=\operatorname{Id}$ for some $m\in \mathbb{N}$.
If $R_\gamma=\operatorname{Id}$, then $\gamma\in\Gamma_\infty$.
\end{lem}
The dimension $k$ is called the \textbf{rank} of the parabolic fixed point $\infty$.
As $\Gamma$ is geometrically finite, $\infty$ is a bounded parabolic fixed point. So we can take a fundamental domain $\Delta_\infty$ for the action $\Gamma_{\infty}$ on $\mathbb{R}^d\subset \H^{d+1}$ in the form
\[\Delta_\infty=B_Y(C)\times \Delta_\infty', \]
such that
$$\Lambda_{\Gamma}\subset \{\infty\}\cup \left(\cup_{\gamma\in\Gamma_\infty}\gamma\overline{\Delta}_\infty\right),$$
where $B_Y(C)=\{y\in \mathbb R^{d-k}:\, |y|\leq C \}$ and $\Delta_\infty'\subset \mathbb{R}^k$ is a fundamental domain for the translation action of $\Gamma_{\infty}$ on $\mathbb{R}^k$.
\subsection{PS measure and BMS measure}\label{sec:PS}
\textbf{Patterson-Sullivan measure.} Recall $\delta$ is the critical exponent of $\Gamma$. Patterson~\cite{Pat2} and Sullivan~\cite{Sul1} constructed a $\Gamma$-invariant conformal density $\{\mu_y \}_{y\in\H^{d+1}}$ of dimension $\delta$ on $\Lambda_{\Gamma}$, which is a set of finite Borel measures such that for any $y,z\in \H^{d+1}$, $x\in \partial \H^{d+1}$ and $\gamma \in \Gamma$,
\begin{equation}
\label{ps quasi}
\frac{\mathrm{d}\mu_y}{\mathrm{d}\mu_z}(x)=e^{-\delta\beta_{x}(y,z)}\,\,\,\text{and}\,\,\, (\gamma)_{*}\mu_y=\mu_{\gamma y},
\end{equation}
where $\gamma_{*}\mu_{y}(E)=\mu_{y}(\gamma^{-1}E)$ for any Borel subset $E$ of $\partial \H^{d+1}$. This family of measures is unique up to homothety and the action of $\Gamma$ on $\partial \H^{d+1}$ is ergodic relative to the measure class defined by these measures.
As $\mu_y$'s are absolutely continuous with respect to each other, for most of the paper, we will consider $\mu_{o}$ and denote it by $\mu$ for short. We call it the Patterson-Sullivan measure (or PS measure). The following quasi-invariance property of the PS measure will be frequently used: for any Borel subset $E$ of $\partial \H^{d+1}$ and any $\gamma\in \Gamma$,
\begin{equation}
\mu(\gamma E)=\int_E|\gamma'(x)|_{\S^d}^\delta\mathrm{d}\mu(x).
\end{equation}
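The quasi-invariance property follows from \eqref{ps quasi} together with the formula $|\gamma'(x)|_{\S^d}=e^{-\beta_x(\gamma^{-1}o,o)}$ recalled above; we record the short computation (a sketch):

```latex
% Step 1: change density from \mu_o to \mu_{\gamma o} via (ps quasi);
% Step 2: change variables y = \gamma x using \gamma_* \mu_o = \mu_{\gamma o};
% Step 3: \beta_{\gamma x}(o,\gamma o) = \beta_x(\gamma^{-1}o, o), since the
%         Busemann function is invariant under isometries.
$$
\mu(\gamma E)
=\int_{\gamma E} e^{-\delta\beta_{y}(o,\gamma o)}\,\mathrm{d}\mu_{\gamma o}(y)
=\int_{E} e^{-\delta\beta_{\gamma x}(o,\gamma o)}\,\mathrm{d}\mu(x)
=\int_{E} |\gamma'(x)|_{\S^d}^{\delta}\,\mathrm{d}\mu(x).
$$
```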
\textbf{Bowen-Margulis-Sullivan measure.} Let $\partial^2 (\H^{d+1})=\partial\H^{d+1}\times\partial\H^{d+1}-\text{Diagonal}$.
The Hopf parametrization of $\operatorname{T}^1(\mathbb{H}^{d+1})$ as $\partial^2 (\H^{d+1})\times \mathbb{R}$ is given by
\begin{equation*}
v\mapsto (x,x_-,s=\beta_{x}(o,v_*)),
\end{equation*}
where $x$ and $x_-$ are the forward and backward endpoints of $v$ respectively, and $v_*\in \mathbb{H}^{d+1}$ is the base point of $v$. The geodesic flow on $\operatorname{T}^1(\H^{d+1})$ is represented by translation in the $\mathbb{R}$-coordinate.
The Bowen-Margulis-Sullivan measure (or BMS measure) on $\operatorname{T}^1(\H^{d+1})$ is defined by
\begin{equation*}
\mathrm{d}\tilde{m}^{\operatorname{BMS}}(x,x_-,s)=e^{\delta \beta_{x}(o,x_*)} e^{\delta \beta_{x_-}(o,x_*)}\mathrm{d} \mu(x) \mathrm{d}\mu(x_-)\mathrm{d} s,
\end{equation*}
where $x_*$ is the base point of the unit tangent vector given by $(x,x_-,s)$.
It is invariant under the geodesic flow $\mathcal G_t$ from the definition.
The group $\Gamma$ acts on $\partial^2 (\H^{d+1})\times \mathbb{R}$ by
\begin{equation*}
\gamma (x,x_-,s)=(\gamma x, \gamma x_-,s-\beta_{x}(o,\gamma^{-1}o)).
\end{equation*}
This together with \eqref{ps quasi} implies that $\tilde{m}^{\operatorname{BMS}}$ is left $\Gamma$-invariant; hence $\tilde{m}^{\operatorname{BMS}}$ induces a measure $m^{\operatorname{BMS}}$ on $\operatorname{T}^1(M)$, the BMS measure on $\operatorname{T}^1(M)$. For geometrically finite discrete subgroups, Sullivan showed that $m^{\operatorname{BMS}}$ is finite and ergodic with respect to the action of the geodesic flow~\cite{Sul}. Otal and Peign\'{e} showed that $m^{\operatorname{BMS}}$ is the unique measure supported on the non-wandering set of the geodesic flow with maximal entropy $\delta$~\cite{OtPe}.
After normalization, we suppose that $m^{\operatorname{BMS}}$ is a probability measure.
\section{Reduction to Zariski dense case}
\label{sec:reduction zariski}
The group $\operatorname{SO}(d+1,1)$ is Zariski closed and connected, and the subgroup $\operatorname{SO}(d+1,1)^\circ$ is its analytic connected component containing the identity. A subgroup $\Gamma$ of $\operatorname{SO}(d+1,1)^\circ$ is said to be Zariski dense in $\operatorname{SO}(d+1,1)^\circ$ if it is Zariski dense in $\operatorname{SO}(d+1,1)$. The proof of Theorem \ref{main thm} can be reduced to the Zariski dense case.
\begin{thm}
\label{zariski}
Assume that $\Gamma<\operatorname{SO}(d+1,1)^{\circ}$ is a Zariski dense, torsion-free, geometrically finite subgroup with parabolic elements. The geodesic flow $(\mathcal{G}_t)_{t\in \mathbb{R}}$ on $\operatorname{T}^1(M)$ is exponentially mixing with respect to $m^{\operatorname{BMS}}$: there exists $\eta>0$ such that for any functions $\phi, \psi\in C^1(\operatorname{T}^1(M))$ and any $t>0$, we have
\begin{equation*}
\int_{\operatorname{T}^1(M)} \phi\cdot\psi\circ\mathcal G_t\ \ddm^{\operatorname{BMS}} =m^{\operatorname{BMS}} (\phi) m^{\operatorname{BMS}} (\psi)+O(\lVert \phi \rVert_{C^1} \lVert \psi\rVert_{C^1}e^{-\eta t}).
\end{equation*}
\end{thm}
\begin{proof}[\textbf{From Theorem \ref{zariski} to Theorem \ref{main thm}}]
Suppose $\Gamma$ is not Zariski dense. Let $H$ be the Zariski closure of $\Gamma$ in $\operatorname{SO}(d+1,1)$ and let $H_1$ be the Zariski connected component of $H$ containing the identity. Let $\Gamma_1=\Gamma\cap H_1$. Then $\Gamma_1$ is a finite-index subgroup of $\Gamma$ and the Zariski closure of $\Gamma_1$ is $H_1$. It suffices to consider $\Gamma_1$, because the exponential mixing for $\Gamma$ follows from the same statement for $\Gamma_1$ by passing to the corresponding finite cover.
Let $H_o$ be the analytic connected component of $H_1$ containing the identity. Since $\Gamma$ is non-elementary, the group $H_o$ does not fix any point on the boundary. By a classical result (see~\cite{BZ} for example), up to conjugacy, $H_o$ preserves a hyperbolic subspace $\H^m$ with $m\leq d$ and the restriction of $H_o$ to $\H^m$ contains $\operatorname{SO}(m,1)^{\circ}$ with compact kernel. Since preserving a subspace is a Zariski closed condition, $H_1$ also preserves $\H^m$ and the restriction of $H_1$ to $\H^m$ satisfies the same properties as that of $H_o$. Since $\Gamma_1$ is a torsion-free discrete subgroup, the restriction map $\Gamma_1\rightarrow \Gamma_1|_{\H^m}$ is injective. Then the Zariski closure of $\Gamma_1|_{\H^m}$ also contains $\operatorname{SO}(m,1)^{\circ}$.
After possibly passing to a subgroup of index at most $4$, we may assume that $\Gamma_1|_{\H^m}$ is a subgroup of $\operatorname{SO}(m,1)^\circ$. Hence $\Gamma_1|_{\H^m}$ is Zariski dense in $\operatorname{SO}(m,1)^{\circ}$ and geometrically finite (Definition \ref{def:geofinite} (2) implies that $\Gamma_1|_{\H^m}$ is still geometrically finite). The BMS measure $m^{\operatorname{BMS}}$ of $\Gamma_1$ on the unit tangent bundle $\Gamma_1\backslash \operatorname{T}^1\H^{d+1}$ is actually supported on $\Gamma_1\backslash \operatorname{T}^1\H^m$, which reduces the statement to the Zariski dense case.
\end{proof}
\section{The geodesic flow and the boundary map}
\label{sec:geo}
For the rest of the paper, our standing assumption is
\begin{align*}
&\Gamma<G\,\,\,\text{Zariski dense, torsion-free, geometrically finite with parabolic elements}\\
&\text{and}\,\,\,\infty\,\,\,\text{is a parabolic fixed point of}\,\,\, \Gamma.
\end{align*}
Let $\Delta_0:=\Delta_\infty$ be a fundamental domain for $\infty$ described in Section~\ref{sec:cusps}. In Section~\ref{sec:code}, we will construct a coding of the limit set satisfying the following properties.
\begin{prop}
\label{prop:coding}
There are constants $C_1>0$, $\lambda,\,\epsilon_o\in (0,1)$, a countable collection of disjoint, open subsets $\sqcup_j\Delta_j$ of $\Delta_0$ and an expanding map $T:\sqcup_j\Delta_j\to \Delta_0$ such that:
\begin{enumerate}
\item $\sum_{j}\mu(\Delta_j)=\mu(\Delta_0)$.
\item For each $j$, there is an element $\gamma_j\in \Gamma$ such that $\Delta_j=\gamma_j \Delta_0$ and $T|_{\Delta_j}=\gamma_j^{-1}$.
\item Each $\gamma_j$ is a uniform contraction: $|\gamma_j'(x)|\leq \lambda$ for all $x\in \Delta_0$.
\item For each $\gamma_j$, $|\mathrm D(\log |\gamma_j'|)(x)|<C_1$ for all $x\in \Delta_0$, where $\mathrm D(\log |\gamma_j'|)(x)$ is the differential of the map $z\mapsto \log |\gamma_j'(z)|$ at $x$.
\item Let $R$ be the function on $\sqcup_j \Delta_j$ given by $R(x)=\log|\mathrm D T(x)|$. Then
$\int e^{\epsilon_o R}\mathrm{d}\mu<\infty.$
\end{enumerate}
\end{prop}
Denote by $\mathcal{H}=\{\gamma_j\}_j$ the set of inverse branches of $T$. The last property is known as the exponential tail property; instead of proving it directly, we will establish the following equivalent form:
\begin{equation}
\label{sum}
\sum_{\gamma\in\mathcal H }|\gamma'|_\infty^{\delta-\epsilon_o}<\infty,
\end{equation}
where $|\gamma'|_\infty=\sup_{x\in\Delta_0}|\gamma'(x)|$. Indeed, \eqref{sum} is equivalent to Proposition~\ref{prop:coding} (5): Proposition~\ref{prop:coding} (5) can be deduced from \eqref{sum} using the quasi-invariance of the PS measure;
conversely, Proposition~\ref{prop:coding} (4) implies that $|\gamma'(x)|\approx |\gamma'|_{\infty}$ for every $x\in \Delta_0$, so Proposition~\ref{prop:coding} (5) implies \eqref{sum}.
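To make the last comparability claim concrete, assume (as one may arrange in our coding) that $\Delta_0$ is convex with finite diameter. Then Proposition~\ref{prop:coding} (4) and the mean value inequality give, for all $x,y\in \Delta_0$,
\begin{equation*}
\left|\log|\gamma'(x)|-\log|\gamma'(y)|\right|\leq C_1\, d_E(x,y)\leq C_1 \operatorname{diam}(\Delta_0),
\end{equation*}
so that $|\gamma'(x)|\geq e^{-C_1\operatorname{diam}(\Delta_0)}\,|\gamma'|_{\infty}$ for every $x\in\Delta_0$.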
Using Proposition~\ref{prop:coding}, it can be shown that there exists a $T$-invariant ergodic probability measure $\nu$ on $\Delta_0$ which is absolutely continuous with respect to the PS measure, and the density function $\bar{f}_0$ is a positive Lipschitz function bounded away from $0$ and $\infty$ on $\Delta_0\cap\Lambda_\Gamma$ (see for example~\cite[Lemma 2]{You}).
The coding satisfies the \textbf{uniform non-integrability condition (UNI)}.
Let
\begin{align*}
&R_n(x):=\sum_{0\leq k\leq n-1}R(T^k(x))\,\,\,\text{for}\,\,\,x\,\,\,\text{with}\,\,\,T^k(x)\in \sqcup_j \Delta_j\,\,\,\text{for all}\,\,\,0\leq k\leq n-1,\\
&\mathcal{H}_n=\{\gamma_{j_1}\cdot \ldots \cdot \gamma_{j_n}:\,\gamma_{j_k}\in \mathcal{H}\,\,\,\text{for}\,\,\,1\leq k\leq n\}.
\end{align*}
For $\gamma=\gamma_{j_1}\cdots\gamma_{j_n}\in\mathcal H_n$, we have
$R_n(\gamma x)=-\log|\gamma'(x)|$. Set
\begin{equation}
\label{constant c2}
C_2=C_1/(1-\lambda).
\end{equation}
Then by Proposition~\ref{prop:coding} (3) and (4), we obtain for any $\gamma\in \mathcal{H}_n$,
\begin{equation}
\label{uniform contraction}
\sup_{x\in \Delta_0}\left| \mathrm D(\log |\gamma'|)(x)\right|\leq C_2.
\end{equation}
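We sketch the derivation of this bound. For $\gamma=\gamma_{j_1}\cdots\gamma_{j_n}\in\mathcal{H}_n$, the chain rule gives $\log|\gamma'(x)|=\sum_{k=1}^{n}\log|\gamma_{j_k}'(y_k)|$, where $y_k=\gamma_{j_{k+1}}\cdots\gamma_{j_n}x$ and $y_n=x$. When differentiating in $x$, the $k$-th term contributes at most $C_1\lambda^{n-k}$: its differential at $y_k$ is bounded by $C_1$ by Proposition~\ref{prop:coding} (4), while $y_k$ depends on $x$ through a composition of $n-k$ maps, each contracting by a factor $\lambda$ by Proposition~\ref{prop:coding} (3). Summing the geometric series,
\begin{equation*}
\sup_{x\in \Delta_0}\left| \mathrm D(\log |\gamma'|)(x)\right|\leq C_1\sum_{k=1}^{n}\lambda^{n-k}\leq \frac{C_1}{1-\lambda}=C_2.
\end{equation*}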
\begin{lem}[UNI]\label{lem:uni}
There exist $\r>0$ and $\epsilon_0>0$ such that for any $C>1$ the following holds for all sufficiently large $n_0$. There exist $j_0\in\mathbb N$ and $\{\gamma_{mj}:1\leq m\leq 2, 1\leq j\leq j_0\}$ in $\mathcal H_{n_0}$ such that for any $x\in \Lambda_\Gamma\cap\overline\Delta_0$ and any unit vector $e\in\mathbb R^d$, there exists $j$ such that for all $y\in B(x,\r)$
\begin{equation}\label{equ:uni}
|\partial_e(\tau_{1j}-\tau_{2j})(y)|\geq \epsilon_0,
\end{equation}
where $\tau_{mj}(x)=R_{n_0}(\gamma_{mj}x)$.
Moreover, for all $m,j$,
\begin{equation}\label{equ:hm}
|\mathrm D\tau_{mj}|_\infty\leq C_2, \,\,\,
|\gamma_{mj}'|_\infty\leq \epsilon_0/C.
\end{equation}
\end{lem}
The expanding map in the coding gives a contracting action in a neighborhood of $\infty$.
\begin{lem}\label{lem:l1}
There exist $0<\lambda<1$ and a neighborhood $\Lambda_-$ of $\infty$ in $\Lambda_{\Gamma}$ such that $\Lambda_{-}$ is disjoint from $\overline{\Delta}_0$, and for any $\gamma\in\mathcal H$ and any $y,y'\in \Lambda_-$,
\begin{equation}\label{equ:lam1}
\gamma^{-1}(\Lambda_{-})\subset\Lambda_{-},\ \
d_{\S^d}(\gamma^{-1}y,\gamma^{-1}y')\leq \lambda d_{\S^d}(y,y').
\end{equation}
\end{lem}
The proofs of these results will be postponed to Section~\ref{sec:code}. Proposition~\ref{prop:coding} and Lemma~\ref{lem:l1} will be proved at the end of Section~\ref{sec:codmulti} and Lemma~\ref{lem:uni} will be proved in Section~\ref{sec:UNI}.
\subsection{A semiflow over hyperbolic skew product}
\textbf{Hyperbolic skew product.} We construct a hyperbolic skew product using Lemma~\ref{lem:l1}.
Let $\Lambda_{+}=\Lambda_{\Gamma}\cap\left(\sqcup_j\Delta_j\right)$ and let $\Lambda_{-}$ be given as in Lemma~\ref{lem:l1}.
Define the map $\hat{T}$ on $\Lambda_+\times \Lambda_-$ by
\begin{equation}
\label{equ:expanding}
\hat{T}(x,x_-)=(\gamma_j^{-1}x,\gamma_j^{-1}x_-)\,\,\,\text{for}\,\,\, (x,x_-)\in \Lambda_+\times \Lambda_- \,\,\,\text{with}\,\,\, x\in\Delta_j,
\end{equation}
where $\gamma_j$ is given as in Proposition \ref{prop:coding} (2). Lemma~\ref{lem:l1} implies $\gamma^{-1}\Lambda_-\subset \Lambda_-$ for any $\gamma \in \mathcal{H}$. So $\hat{T}$ is well-defined.
Let $p:\Lambda_+\times \Lambda_-\to \Lambda_+$ be the projection to the first coordinate.
It gives a semiconjugacy between $\hat{T}$ and $T$, namely $p\circ\hat{T}=T\circ p$. We equip $\Lambda_+\times\Lambda_-$ with the metric
$$d((x,x_-),(x',x_-'))=d_E(x,x')+d_{\S^d}(x_-,x_-').$$
\eqref{equ:lam1} implies that the action of $\hat{T}$ on the fibre $\{ x\}\times\Lambda_-$ is contracting. Using this observation, we obtain
\begin{prop}\label{prop:dis}
\begin{enumerate}
\item
There exists a unique $\hat{T}$-invariant, ergodic probability measure $\hat\nu$ on $\Lambda_+\times \Lambda_-$ whose projection to $\Lambda_+$ is $\nu$.
\item
We have a disintegration of $\hat{\nu}$ over $\nu$: for any continuous function $w$ on $\Lambda_+\times\Lambda_-$,
\begin{equation*}
\int_{\Lambda_+\times\Lambda_-} w\mathrm{d}\hat{\nu}=\int_{\Lambda_+}\int_{\Lambda_-}w\mathrm{d}\nu_x(x_-)\mathrm{d}\nu(x).
\end{equation*}
Moreover, there exists $C>0$ such that for any Lipschitz function $w$ on $\Lambda_+\times \Lambda_-$, defining $\bar{w}(x)=\int w\mathrm{d}\nu_x$, we have
\begin{equation*}
\|\bar{w}\|_{\rm Lip}\leq C\|w\|_{\rm Lip}.
\end{equation*}
\end{enumerate}
\end{prop}
\begin{proof}
For the first statement, see~\cite[Theorem A]{Kloeckner} or~\cite[Proposition 1]{BM}. For the second statement, see~\cite[Proposition 3, Proposition 6]{BM}; there the authors consider the Riemannian manifold case, but the same proofs also work in our fractal setting.
\end{proof}
\begin{rem*}
The measure $\hat\nu$ is actually independent of the choice of the stable direction $\Lambda_-$: any $\Lambda_-$ satisfying Lemma~\ref{lem:l1} will lead to the same measure $\hat{\nu}$.
\end{rem*}
\textbf{Hyperbolic skew product flow.}
Let $R:\Lambda_+\to \mathbb{R}_+$ be the function given in Proposition~\ref{prop:coding}. By abusing notation, define $R:\Lambda_+\times\Lambda_-\to \mathbb R_+$ by setting $R(x,x_-)=R(x)$. Define the space
\begin{equation*}
\Lambda^{R}=\{(x,x_-,s)\in \Lambda_+\times\Lambda_-\times \mathbb{R}:\,0\leq s< R(x,x_-)\}.
\end{equation*}
Let $R_n=\sum_{j=0}^{n-1}R\circ \hat{T}^j$. The hyperbolic skew product flow $\{\hat{T}_t\}_{t\geq 0}$ on $\Lambda^R$ is defined by $\hat{T}_t(x,x_-,s)=(\hat{T}^n(x,x_-),s+t-R_n(x,x_-))$ for $\hat{\nu}$-almost every $(x,x_-)$, where $n$ is the nonnegative integer such that $0\leq s+t-R_n(x,x_-)<R(\hat{T}^n(x,x_-))$. We equip $\Lambda^R$ with the measure $\mathrm{d}\hat{\nu}^R:=\mathrm{d}\hat{\nu}\times \mathrm{d} t/\bar{R}$, where $\mathrm{d} t$ is the Lebesgue measure on $\mathbb{R}_+$ and $\bar{R}=\int_{\Lambda_+\times\Lambda_-}R\mathrm{d}\hat{\nu}$. This is a $\hat{T}_t$-invariant ergodic probability measure.
\begin{rem*}
We do not use the commonly used ``suspension space'' construction to construct $\Lambda^R$.
The reason is that we will use a cutoff function in the proof of Theorem~\ref{main thm} and such cutoff functions are ill-defined in the suspension space, which is a quotient space of $\Lambda_+\times \Lambda_-\times \mathbb{R}$.
\end{rem*}
For any $L^{\infty}$ function $w:\Lambda^{R}\to \mathbb{R}$, the Lipschitz norm of $w$ is defined by
\begin{equation}
\label{equ:function norm}
\|w\|_{\operatorname{Lip}}=|w|_\infty+\sup_{ (y,a)\neq(y',a')\in \Lambda^R}\frac{|w(y,a)-w(y',a')|}{d(y,y')+|a-a'|}.
\end{equation}
In Section~\ref{sec:expmix}, we will prove that $\hat{T}_t$ is exponentially mixing with respect to $\hat{\nu}^R$.
\begin{thm}\label{thm:skew}
There exist $\epsilon_1>0$ and $C>1$ such that for any Lipschitz functions $u,w$ on $\Lambda^R$ and any $t>0$, we have
\begin{equation*}
\left|\int u\ w\circ\hat{T}_t\mathrm{d} \hat \nu^R-\int u \mathrm{d} \hat \nu^R\int w\mathrm{d} \hat \nu^R\right|\leq Ce^{-\epsilon_1 t}\|u\|_{\operatorname{Lip}}\|w\|_{\operatorname{Lip}}.
\end{equation*}
\end{thm}
\subsection{Exponential mixing of geodesic flow}
\textbf{The map from $\Lambda^R$ to $\operatorname{T}^1(M)$.} We construct a map from $\Lambda^R$ to $\operatorname{T}^1(M)$ which allows us to deduce the exponential mixing of the geodesic flow from that of $\hat{T}_t$.
Recall the Hopf parametrization in Section~\ref{sec:PS}. We introduce the following time-change map, chosen so that the roof function $R$ from Proposition~\ref{prop:coding} is realized by the geodesic flow:
\begin{equation*}
\tilde{\Phi}:\Lambda^R\to \partial^2(\H^{d+1})\times \mathbb{R},\,\,\,
(x,x_-,s)\mapsto (x,x_-,s+\log(1+|x|^2)).
\end{equation*}
Note that $\Lambda_+\times \{\infty\}\times \{0\}$ is mapped into the unstable horosphere based at $\infty$ and passing through $o$: the unit tangent vector with backward endpoint $\infty$, forward endpoint $x$ and base point $(x,1)$ has time coordinate $\beta_{x}(o,(x,1))=\log(1+|x|^2)$. The map $\tilde{\Phi}$ induces a map $\Phi:\Lambda^R\to \operatorname{T}^1(M)$, where we use the Hopf parametrization to identify $\operatorname{T}^1(M)$ with $\Gamma\backslash(\partial^2(\H^{d+1})\times \mathbb{R})$. The map $\Phi$ defines a semiconjugacy between the two flows:
\begin{equation}
\label{equ:flow conj}
\Phi\circ \hat{T}_t=\mathcal{G}_t\circ \Phi,\,\,\,\text{for}\,\,\,t\geq 0.
\end{equation}
To see this, note that for any $(x,x_-,s)\in \Lambda^R$, we have the expression
\begin{align*}
&\hat{T}_t(x,x_-,s)=(\hat{T}^n(x,x_-),s+t-R_n(x,x_-)),\ \ \hat{T}^n(x,x_-)=\gamma^{-1}(x,x_-)\,\,\,\text{for some}\,\,\,\gamma\in \mathcal H_n.
\end{align*}
By straightforward computation, we obtain
\begin{equation*}
\tilde{\Phi}\circ \hat{T}_t(x,x_-,s)=\mathcal{G}_t\circ \gamma^{-1}\tilde{\Phi}(x,x_-,s),
\end{equation*}
which leads to (\ref{equ:flow conj}) by passing to the quotient space.
\vspace{2mm}
\textbf{Relating $\hat{\nu}^{R}$ with $m^{\operatorname{BMS}}$.}
The map $\Phi$ is not injective in general. Nevertheless, we are able to use $(\Lambda^R,\hat{T}_t,\hat{\nu}^R)$ to study $(\operatorname{T}^1(M), \mathcal{G}_t,m^{\operatorname{BMS}})$. The main result is the following proposition.
\begin{prop}
\label{lem:loc}
The map $\Phi:(\Lambda^R,\hat{T}_t,\hat{\nu}^R)\to (\operatorname{T}^1(M),\mathcal{G}_t,m^{\operatorname{BMS}})$ is a factor map, i.e.,
\begin{equation*}
\Phi_*\hat{\nu}^R=m^{\operatorname{BMS}}\,\,\, \text{and}\,\,\,\Phi\circ \hat{T}_t=\mathcal{G}_t\circ \Phi\,\,\,\text{for all}\,\,\,t\geq 0.
\end{equation*}
\end{prop}
We need two lemmas to prove this proposition.
\begin{lem}\label{lem:V}
There exists a measurable subset $U$ of $\Lambda^R$ such that, setting $V=\Phi(U)$ in $\operatorname{T}^1(M)$, the restriction of $\Phi$ to $U$ gives a bijection between $U$ and $V$. Moreover, the set $V$ has positive BMS measure.
\end{lem}
\begin{proof}
We make use of the following commutative diagram
\begin{equation*}
\begin{tikzcd}
\Lambda^R \arrow[d,"\Phi"] \arrow[r, "\tilde{\Phi}"] & \partial^2(\H^{d+1})\times \mathbb{R}\arrow[ld,"\pi"] \\
\operatorname{T}^1(M) &
\end{tikzcd}
\end{equation*}
where $\pi$ is the covering map. Let $\epsilon>0$ be a number such that $\epsilon<\inf_{(x,x_-)\in \Lambda_+\times \Lambda_-}R(x,x_-)$. Set $S=\Lambda_+\times \Lambda_-\times [0,\epsilon)$. The restriction map $\tilde{\Phi}|_S$ gives a bijection between $S$ and its image. Pick any $x\in S$. As $\pi$ is a covering map, there exists an open set $W\subset \partial^2(\H^{d+1})\times \mathbb{R}$ containing $\tilde{\Phi}(x)$ such that the restriction map $\pi|_W$ is a bijection. The sets $U=\tilde{\Phi}^{-1}(W\cap \tilde{\Phi}(S))$ in $\Lambda^R$ and $V=\pi(W\cap \tilde{\Phi}(S))$ satisfy the lemma.
\end{proof}
\begin{lem}\label{lem:stable}
Let $\cal Q'$ be any subset in $\Lambda^R$ with full $\hat{\nu}^R$ measure and $\cal Q$ be any subset in $\operatorname{T}^1 (M)$ with full $m^{\operatorname{BMS}}$ measure. Then there exist $x\in \cal Q'$ and $y\in \cal Q$ such that $\Phi(x)$ and $y$ are in the same stable leaf.
\end{lem}
\begin{proof}
The idea of the proof is straightforward: we make use of the local product structure of $\hat{\nu}^R$ and $m^{\operatorname{BMS}}$.
Let $\Phi_U$ be the restriction of $\Phi$ to $U$. In view of Lemma~\ref{lem:V}, we can consider the measure $\Phi_U^*(m^{\operatorname{BMS}}|_V)$ on $U$, the pullback of $m^{\operatorname{BMS}}|_V$, and denote it by $m$ for simplicity. We can choose $U$ and $V$ sufficiently small so that $m$ is given by
\begin{equation*}
\mathrm{d} m(x,x_-,t)=cD(x,x_-)^{-2\delta}\mathrm{d}\mu(x)\mathrm{d}\mu(x_-)dt,
\end{equation*}
where $c$ is a positive constant and $D(x,x_-)=e^{\beta_{x}(o,x_*)/2} e^{\beta_{x_-}(o,x_*)/2}$ is known as the visual distance.
By abusing notation, let $p:\Lambda^R\to \Lambda_+\times \mathbb{R}$ also denote the projection map forgetting the $\Lambda_-$-coordinate. Then the pushforward measure $p_*m$ is given by
\begin{equation*}
\mathrm{d} p_*m(x,t)=c\mathrm{d}\mu(x)\mathrm{d} t\int_{\{x\}\times \Lambda_-\times \{t\}\cap U} D(x,x_-)^{-2\delta}\mathrm{d}\mu(x_-).
\end{equation*}
So it is absolutely continuous with respect to the measure $\mathrm{d}\nu\otimes \mathrm{d} t$.
We can find a set of the form $B=B_+\times \Lambda_-\times (t_1,t_2)\subset \Lambda^R$ such that $\nu(B_+)>0$ and $m(B\cap U)>0$. The pushforward measure $p_*(\hat{\nu}^R|_B)$ is given by
\begin{equation}
\label{eqn:absolute continuity}
\mathrm{d} p_*(\hat{\nu}^R|_B)=\bar{R}^{-1}\,\mathrm{d}\nu|_{B_+} \otimes \mathrm{d} t|_{(t_1,t_2)}.
\end{equation}
On the one hand, $p({\cal{Q}}'\cap B)$ is a conull set in $p(B)$ with respect to $p_*(\hat{\nu}^R|_B)$.
On the other hand, we consider $B':=\Phi_U^{-1}({\cal{Q}}\cap V)\cap B$. It is a set of positive $m$ measure, hence $p(B')$ is of positive $p_*m$ measure. Since $p_*m$ is absolutely continuous with respect to $\mathrm{d}\nu\otimes \mathrm{d} t$, the set $p(B')$ has positive measure with respect to $\mathrm{d}\nu\otimes\mathrm{d} t$, and by (\ref{eqn:absolute continuity}) it is of positive $p_*(\hat{\nu}^R|_B)$ measure. Therefore,
$$p({\cal{Q}}'\cap B)\cap p(B')\neq \emptyset.$$
Let $(x,t)$ be a point in the intersection. Then there exist $x_-,x_-'\in \Lambda_-$ such that $(x,x_-,t)\in \cal{Q}'$ and $(x,x_-',t)\in B'$, so that $\Phi(x,x_-',t)\in \cal{Q}$. Since $\Phi(x,x_-,t)$ and $\Phi(x,x_-',t)$ have the same forward endpoint and time coordinate, they lie in the same stable leaf, and these two points satisfy the conditions of the lemma.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{lem:loc}]
Let $f$ be a $C^1$ function on $\operatorname{T}^1(M)$ with finite $C^1$-norm. Since $m^{\operatorname{BMS}}$ is ergodic~\cite{Sul}, by the Birkhoff ergodic theorem, for $m^{\operatorname{BMS}}$-a.e. $y$ in $\operatorname{T}^1(M)$,
\begin{equation}\label{equ:bms}
\lim_{T\rightarrow+\infty}\frac{1}{T}\int_{0\leq t\leq T} f(\mathcal G_t y)\mathrm{d} t=\int f\mathrm{d} m^{\operatorname{BMS}}.
\end{equation}
Let $\cal Q$ be the set of points at which (\ref{equ:bms}) holds; it is a set of full $m^{\operatorname{BMS}}$ measure.
We consider $f\circ \Phi$, which can be thought of as the lift of $f$ to $\Lambda^R$. It is $\hat{\nu}^R$-integrable. Since $\hat{\nu}^R$ is $\hat{T}_t$-ergodic, by the Birkhoff ergodic theorem, for $\hat{\nu}^R$-a.e. $x$,
\[ \lim_{T\rightarrow+\infty}\frac{1}{T}\int_{0\leq t\leq T} f\circ\Phi(\hat{T}_t x)\mathrm{d} t=\int f\circ\Phi\mathrm{d} \hat{\nu}^R. \]
Using the semiconjugacy $\Phi\circ\hat{T}_t=\mathcal G_t\circ\Phi$, we actually have
\begin{equation}
\label{equ:nu}
\lim_{T\rightarrow+\infty}\frac{1}{T}\int_{0\leq t\leq T} f(\mathcal G_t\Phi x)\mathrm{d} t=\int f\circ\Phi\mathrm{d} \hat{\nu}^R.
\end{equation}
Let $\cal Q'$ be the set of points at which (\ref{equ:nu}) holds; it is a set of full $\hat{\nu}^R$ measure.
By Lemma~\ref{lem:stable}, there exist points $x\in \cal Q'$ and $y\in \cal Q$ such that $\Phi(x)$ and $y$ are in the same stable leaf. Since $d(\mathcal G_t y,\mathcal G_t\Phi x)\rightarrow 0$ as $t\rightarrow+\infty$ and $f$ is uniformly continuous,
\[ \lim_{T\rightarrow+\infty}\left(\frac{1}{T}\int_{0\leq t\leq T} f(\mathcal G_t y)\mathrm{d} t-\frac{1}{T}\int_{0\leq t\leq T} f(\mathcal G_t\Phi x)\mathrm{d} t\right)=0.\]
Therefore, we can deduce that
\[\int f\mathrm{d} m^{\operatorname{BMS}}=\int f\circ\Phi\mathrm{d} \hat{\nu}^R. \]
The above equation holds for every $C^1$ function $f$ on $\operatorname{T}^1(M)$ with finite $C^1$-norm; since such functions are dense, this yields $\Phi_*\hat{\nu}^R=m^{\operatorname{BMS}}$. The proof is complete.
\end{proof}
\textbf{Proof of Theorem~\ref{zariski}.} We are ready to prove Theorem~\ref{zariski}. With Theorem~\ref{thm:skew} and Proposition~\ref{lem:loc} available, the remaining work lies in comparing the norms of functions on $\Lambda^{R}$ with those of the corresponding functions on $\operatorname{T}^1(M)$. This is not obvious. Consider two points of the form $(y,a)$ and $(y',a)$ in $\Lambda^R$. By (\ref{equ:function norm}), $d((y,a),(y',a))$ does not change when $a$ changes. But if these two points are projected to $\operatorname{T}^1(M)$, changing $a$ amounts to flowing both points by the geodesic flow, and $d(\Phi(y,a),\Phi(y',a))$ will change. Moreover, the function $R$ used to define $\Lambda^R$ is unbounded, which makes the argument more delicate.
\begin{proof}[Proof of Theorem~\ref{zariski}]
Let $u,w$ be any two $C^1$-functions on $\operatorname{T}^1(M)$ with finite $C^1$-norm. Without loss of generality, we may assume that $m^{\operatorname{BMS}}(u)=0$. Set $U=u\circ \Phi$ and $W=w\circ \Phi$. Using the semiconjugacy property of $\Phi$, we obtain
\begin{equation*}
\int u\cdot w\circ \mathcal{G}_t \mathrm{d} m^{\operatorname{BMS}}=\int U \cdot W\circ \hat{T}_t\mathrm{d}\hat{\nu}^R.
\end{equation*}
We use a cutoff function to relate the norms of $U,W$ with those of $u,w$. Let $\epsilon>0$ be a constant less than $\epsilon_1/2$. Let $\tau_t$ be a Lipschitz function on $[0,\infty)$ such that $\tau_t=1$ on $[0,\epsilon t]$, $\tau_t =0$ on $(\epsilon t+1,\infty)$ and $|\tau_t|_{\operatorname{Lip}}<2$. Set $U_t=U\cdot \tau_t$ and $W_t=W\cdot \tau_t$. For any two points $(y,a)$ and $(y',a')$ (we may assume $a\geq a'$), we have
\begin{align*}
&|U_t(y,a)-U_t(y',a')|\\
\leq& |U_t(y,a)-U_t(y,a')|+|U_t(y,a')-U_t(y',a')|\\
\leq &\tau_t(a')|U(y,a)-U(y,a')|+|u|_{\infty} |\tau_t(a)-\tau_t (a')|+\tau_t(a')|U(y,a')-U(y',a')|\\
\ll & |u|_{C^1}|a-a'|+|u|_{\infty} |a-a'|+e^{\epsilon t} |u|_{C^1} d(y,y'),
\end{align*}
where to obtain the last inequality, we use the fact that $d(\Phi(y,a'),\Phi(y',a'))\leq e^{a'}d(y,y')$ and the fact that $\tau_t\neq 0$ only on $[0,\epsilon t+1]$. Therefore, we have
\begin{equation}
\label{UT}
\lVert U_t\rVert_{\operatorname{Lip}}\ll e^{\epsilon t} \lVert u\rVert_{C^1}.
\end{equation}
A verbatim repetition of the above argument also yields $\lVert W_t\rVert_{\operatorname{Lip}}\ll e^{\epsilon t} \lVert w\rVert_{C^1}$.
We also need the following $L^1$-estimate. Using the exponential tail property (Proposition~\ref{prop:coding} (5)), we obtain
\begin{align}\label{ut}
\nonumber&\lvert U_t-U\rvert_{L^1(\hat{\nu}^R)}\leq |u|_{\infty} \int \max\{R(x)-\epsilon t,0\}\mathrm{d}\nu(x)\\
\ll & |u|_{\infty} \int e^{\epsilon_o (R(x)-\epsilon t)}\mathrm{d}\nu(x)\ll e^{-\epsilon_o \epsilon t}|u|_{\infty}.
\end{align}
A similar estimate holds for $W_t-W$.
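The second inequality in \eqref{ut} follows from the elementary bound $\max\{a,0\}\leq \epsilon_o^{-1}e^{\epsilon_o a}$, valid for all $a\in\mathbb{R}$, applied with $a=R(x)-\epsilon t$:
\begin{equation*}
\int \max\{R(x)-\epsilon t,0\}\mathrm{d}\nu(x)\leq \epsilon_o^{-1}e^{-\epsilon_o\epsilon t}\int e^{\epsilon_o R(x)}\mathrm{d}\nu(x)\ll e^{-\epsilon_o\epsilon t},
\end{equation*}
where the last integral is finite because $\nu$ has a bounded density with respect to $\mu$ and $\int e^{\epsilon_o R}\mathrm{d}\mu<\infty$ by the exponential tail property.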
As $m^{\operatorname{BMS}}(u)=0$, Proposition~\ref{lem:loc} gives $\int U\mathrm{d}\hat{\nu}^R=0$, and hence by \eqref{ut},
\begin{equation}
\label{UL}
|\int U_t\mathrm{d}\hat{\nu}^R|=|\int (U_t-U)\mathrm{d}\hat{\nu}^R|\ll e^{-\epsilon_o \epsilon t}|u|_{\infty}.
\end{equation}
Using Theorem~\ref{thm:skew} together with \eqref{UT}, \eqref{ut} and \eqref{UL}, we obtain
\begin{align*}
&|\int U \cdot W\circ \hat{T}_t\mathrm{d}\hat{\nu}^R|\\
\leq &|\int U_t\cdot W_t\circ\hat{T}_t\mathrm{d}\hat{\nu}^R|+|\int (U-U_t)\cdot W_t\circ\hat{T}_t\mathrm{d}\hat{\nu}^R|+|\int U\cdot (W-W_t)\circ\hat{T}_t\mathrm{d}\hat{\nu}^R |\\
\ll &|\int U_t\mathrm{d}\hat{\nu}^R|\cdot|\int W_t\mathrm{d}\hat{\nu}^R|+e^{-\epsilon_1 t}\|U_t\|_{\operatorname{Lip}}\|W_t\|_{\operatorname{Lip}}+|w|_\infty|U-U_t|_{L^1(\hat{\nu}^R)}+|u|_\infty|W-W_t|_{L^1(\hat{\nu}^R)}\\
\ll&(e^{-(\epsilon_1-2\epsilon)t}+e^{-\epsilon_o\epsilon t} )|u|_{C^1}|w|_{C^1}.
\end{align*}
Due to $\epsilon<\epsilon_1/2$, the proof is complete.
\end{proof}
\section{Parabolic fixed points and Measure estimate}
\label{sec:parmea}
In this section, we provide a detailed description of the $\Gamma$-action on $\partial\H^{d+1}$ and several types of estimates for the PS measure.
\subsection{Explicit computation}
\label{sec:explicit}
Let $\mathcal P$ be the set of parabolic fixed points in $\partial\H^{d+1}$. Two parabolic fixed points are called equivalent if they are in the same $\Gamma$-orbit. Let $\mathbf P$ be a complete set of inequivalent parabolic fixed points. Geometric finiteness implies that $\mathbf P$ is finite.
We fix a collection of pairwise disjoint horoballs based at parabolic fixed points as follows.
Let $H_\infty$ be the horoball based at $\infty$ given by $\mathbb R^d\times \{x\in\mathbb R:\, x> 1\}$. Without loss of generality, we may assume that $H_{\infty}$ is a horocusp region for $\infty$. We attach the horoball $\gamma H_{\infty}$ to the parabolic fixed point $\gamma \infty$. For each of the other parabolic fixed points $p$ in $\mathbf{P}$, we fix a horoball $H_p$ based at $p$ which is a horocusp region for $p$, and attach $\gamma H_p$ to $\gamma p$. We choose the horoballs in such a way that they are pairwise disjoint.
For $p\in \mathcal{P}$, we define $h_p$ as the height of $H_p$ based at $p$, that is
\[h_p:=h(H_p)=\sup_{y\in H_p} h(y). \]
\begin{lem}\label{lem:gammax}
If $\gamma\in \Gamma$ does not fix $\infty$, then for any $x\in \mathbb{H}^{d+1}\cup \partial \mathbb{H}^{d+1}$, we have
\begin{equation*}
\gamma x=h_p \frac{x-(p',0)}{|x-(p',0)|^2}\begin{pmatrix} A & 0\\ 0 & 1\end{pmatrix}+(p,0),
\end{equation*}
\begin{equation}
\label{equ:gamma-1}
\gamma^{-1}x=h_p\frac{x-(p,0)}{|x-(p,0)|^2}\begin{pmatrix}A^{-1} & 0\\ 0 & 1\end{pmatrix}+(p',0),
\end{equation}
where $p=\gamma\infty$, $p'=\gamma^{-1}\infty$ and $A$ is in $\operatorname{SO}(d)$.
\end{lem}
\begin{proof}
By Proposition A.3.9 (2) in~\cite{BP}, the action of $\gamma$ on the upper half space is given by
\begin{equation*}
\gamma x=\lambda \iota(x) \begin{pmatrix}
A & 0\\ 0 &1
\end{pmatrix}+(b,0),
\end{equation*}
where $A$ is in $\operatorname{SO}(d)$, $\lambda\in\mathbb R^+$, $b\in \mathbb R^d$ and $\iota(x)$ either equals $x$ or is given by an inversion with respect to a unit sphere centered at a point of $\mathbb R^d\times\{0\}$. In fact, this is the Bruhat decomposition of $G$. Since $\gamma$ does not fix $\infty$, $\iota$ is an inversion. We have for any $x\in \mathbb{H}^{d+1}$
\begin{equation*}
\gamma x=\lambda \frac{x-(x',0)}{|x-(x',0)|^2} \begin{pmatrix}
A & 0\\ 0 &1
\end{pmatrix}+(b,0)
\end{equation*}
with $x'\in \mathbb{R}^d$. Hence $b=\gamma\infty=p$ and $x'=\gamma^{-1}\infty=p'$.
Note that
\[ h(\gamma x)=\lambda h(x)/|x-(p',0)|^2. \]
Since $\gamma$ maps the original horoball $H_\infty=\mathbb R^d\times \{ x> 1\}$ to the horoball $H_p$ based at $p$, it follows from the above formula that
\[h_{p}=h(H_p)=h(\gamma H_\infty)=\sup_{x\in \mathbb R^d\times \{1\} }h (\gamma x)=\lambda.\qedhere \]
\end{proof}
For $x\in\partial\H^{d+1}$ and $r>0$, set $B(x,r)$ to be the ball centred at $x$ of radius $r$ in $\partial\H^{d+1}$ with respect to Euclidean metric.
\begin{lem}[Explicit computation]\label{lem:explicit}
If $\gamma\in \Gamma$ does not fix $\infty$, then
for any $r>0$ and $x\in\partial\H^{d+1}$,
\begin{itemize}
\item $\gamma^{-1}B(p,r)=B(p',h_p/r)^c$,
\item $|(\gamma^{-1})'(x)|=h_p/d(x,p)^2,\,\,\, |\gamma'(x)|=h_p/d(x,p')^2$
\end{itemize}
where $p=\gamma\infty$ and $p'=\gamma^{-1}\infty$.
\end{lem}
\begin{proof}
The first equation follows from \eqref{equ:gamma-1} easily.
In view of Lemma~\ref{lem:gammax}, the computation of the derivative of inversion maps gives the expression of $|\gamma'(x)|$ and $|(\gamma^{-1})'(x)|$.
\end{proof}
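For instance, the derivative computation in the second item can be made explicit as follows. The inversion $\iota(x)=x/|x|^2$ has differential
\begin{equation*}
\mathrm D\iota(x)=\frac{1}{|x|^2}\left(\operatorname{Id}-2\frac{xx^{\mathsf T}}{|x|^2}\right),
\end{equation*}
which is $|x|^{-2}$ times an orthogonal reflection. Composing with the scaling by $h_p$, the rotation and the translations in \eqref{equ:gamma-1}, all of which do not affect the norm of the differential beyond the factor $h_p$, we obtain $|(\gamma^{-1})'(x)|=h_p/|x-(p,0)|^2=h_p/d(x,p)^2$ for $x\in\partial\H^{d+1}$, and the formula for $|\gamma'(x)|$ follows in the same way.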
\begin{lem}\label{lem:rg1}
Let $p=\gamma \infty$ and $q$ be two distinct parabolic fixed points. If $\gamma$ does not fix $\infty$, then
\begin{align}
\label{upper bound height}
&h_{\gamma^{-1}q}\geq \frac{h_p h_q}{d(p,q)^2+h_q^2},\\
\label{lower bound height}
&h_{\gamma^{-1}q}\leq \frac{h_p h_q}{(d(p,q)-h_q/2)^2}.
\end{align}
\end{lem}
\begin{proof}
Using \eqref{equ:gamma-1}, we obtain
\[h_{\gamma^{-1}q}=h(H_{\gamma^{-1}q})=h(\gamma^{-1}H_q)\geq h(\gamma^{-1}(q,h_q))=\frac{h_ph_q}{d(q,p)^2+h_q^2}. \]
For \eqref{lower bound height}, we have
\begin{equation*}
h_{\gamma^{-1}q}=\sup_{y\in \partial H_q}h(\gamma^{-1}y)=\sup_{y\in \partial H_q} \frac{h_p h(y)}{|y-(p,0)|^2}.
\end{equation*}
Note that for every $y\in \partial H_q$, we have $|y-(p,0)|^2\geq d_{E}(y',p)^2\geq (d(p,q)-h_q/2)^2$, where $y'$ is the projection of $y$ to $\partial \mathbb{H}^{d+1}$: indeed, $H_q$ is a Euclidean ball of diameter $h_q$ tangent to $\partial\H^{d+1}$ at $q$, so $d_E(y',q)\leq h_q/2$. This yields \eqref{lower bound height}.
\end{proof}
\subsection{Multiple cusps}\label{sec:multi}
Suppose that $\mathbf{P}$, a complete set of inequivalent parabolic fixed points, consists of $j$ elements, and set $p_1=\infty$. For each $p_i$, we consider a change of coordinates:
let $g_i$ be an element in $G$ such that $g_ip_i=\infty$. Such a $g_i$ is not unique, and we can choose $g_i$ such that $g_iH_{p_i}=H_\infty=\mathbb R^d\times\{x> 1 \}$. We will frequently make use of the following commutative diagram:
\begin{equation}
\label{commutative diagram}
\begin{tikzcd}
\H^{d+1} \arrow{r}{g_i} \arrow[swap]{d}{\Gamma} & \H^{d+1} \arrow{d}{g_i \Gamma g_{i}^{-1}} \\%
\H^{d+1} \arrow{r}{g_i}& \H^{d+1}.
\end{tikzcd}
\end{equation}
On the right hand side of the diagram, the acting group is $g_i\Gamma g_i^{-1}$ and $\infty$ is a parabolic fixed point of the group.
Once $g_i$'s are fixed, set the horoball $H_{g_ip}:=g_iH_p$ for $p\in\mathcal P$ and the height $h_{g_ip}$ is defined as before.
The results in Section~\ref{sec:cusps} hold for each $p_i$. We have the group $(g_i\Gamma g_i^{-1})_\infty$, which is a maximal normal abelian subgroup of $\operatorname{Stab}_{g_i\Gamma g_i^{-1}}(\infty)$.
Write
\begin{equation}
\label{pistabilizer}
\Gamma_{p_i}=g_i^{-1}(g_i\Gamma g_i^{-1})_\infty g_i.
\end{equation}
Let $g_i\Delta_{p_i}$ be a fundamental region for $\infty$ under the $g_i\Gamma g_i^{-1}$-action. We can choose $g_i\Delta_{p_i}$ in such a way that
\begin{equation}\label{glpi}
\{p_i,\ 1\leq i\leq j\}\cap (\cup_{1\leq k\leq j}\overline{\Delta}_{p_k})=\emptyset.
\end{equation}
Set $$\Delta=\cup_{1\leq k\leq j}\Delta_{p_k}.$$
For every parabolic fixed point $p=\gamma p_i$ with $\gamma\in\Gamma$, we fix a choice of $\gamma$ such that $\gamma^{-1}p_i\in\overline{\Delta}_{p_i}$ and we call $\gamma$ a \textbf{top representation} of $p$. Set $$x_p:=\gamma^{-1}p_i\,\,\, \text{and}\,\,\, x_{g_ip}:=g_ix_p\in g_i\overline{\Delta}_{p_i}.$$
As $g_i\gamma^{-1}g_i^{-1}\infty=g_i\gamma^{-1}p_i=x_{g_ip}$, the element $g_i\gamma g_i^{-1}$ is also a top representation of $g_ip$ for the group $g_i\Gamma g_i^{-1}$. In this way, we fix the top representations for parabolic fixed points of $g_i\Gamma g_i^{-1}$.
By the same computation as in Lemma~\ref{lem:gammax}, we obtain
\begin{align}\label{equ:coordinate}
&g_iy=h_{p_i}\frac{y-(p_i,0)}{|y-(p_i,0)|^2}\begin{pmatrix}
A & 0\\ 0 &1
\end{pmatrix}+(g_i\infty,0),\\
\label{equ:coordinate2}
&g^{-1}_iy=h_{p_i}\frac{y-(g_i\infty,0)}{|y-(g_i\infty,0)|^2}\begin{pmatrix}
A^{-1} & 0\\ 0 &1
\end{pmatrix}+(p_i,0).
\end{align}
\begin{lem}\label{lem:height}
There exists $C>1$ such that
for $1\leq i\leq j$ and for any parabolic fixed point $p$ in $\Delta$, we have
\begin{equation*}
1/C\leq h_{g_ip}/h_p\leq C.
\end{equation*}
\end{lem}
\begin{proof}
Consider the action of $g_i^{-1}$ on $\partial \mathbb{H}^{d+1}$. Applying the same computation as in the proof of \eqref{upper bound height} to the points $p_i=g_i^{-1}\infty$ and $p$, we obtain
\begin{equation*}
h_{g_ip}=h(g_iH_p)\geq \frac{h_{p_i} h_{p}}{d(p_i,p)^2+h_{p}^2}.
\end{equation*}
It follows from \eqref{glpi} that $d(p_i,p)$ is bounded for $p\in\Delta$. Then $h_{g_ip}\geq h_p/C$.
For the other inequality, applying the same computation as in the proof of \eqref{upper bound height} to $g_ip_1=g_i\infty$ and $g_ip$, we have
\[h_p=h(g_i^{-1}H_{g_ip})\geq \frac{h_{g_ip_1}h_{g_ip} }{d(g_ip_1,g_ip)^2+h_{g_ip}^2 }. \]
It follows from \eqref{glpi} that $d(g_ip_1,g_ip)$ is bounded for $p\in\Delta$. Then $h_{p}\geq h_{g_ip}/C$.
\end{proof}
\begin{lem}\label{lem:bilip}
For $1\leq i\leq j$, the map $g_i:\Delta\to g_i\Delta$ is bi-Lipschitz.
\end{lem}
\begin{proof}
By \eqref{equ:coordinate}, we have
$$d(g_ix,g_iy)=h_{p_i}d\left(\frac{x-p_i}{|x-p_i|^2},\frac{y-p_i}{|y-p_i|^2}\right)\leq C|x-y|=Cd(x,y),$$
where the inequality is due to \eqref{glpi}.
For the other direction, we use \eqref{equ:coordinate2} to obtain
\[ d(g_i^{-1}x,g_i^{-1}y)=h_{p_i}d\left(\frac{x-g_ip_1}{|x-g_ip_1|^2},\frac{y-g_ip_1}{|y-g_ip_1|^2}\right)\leq C|x-y|=Cd(x,y), \]
where the inequality holds because $d(x,g_ip_1)$ is bounded from below for $x\in g_i\Delta$ by \eqref{glpi}.
\end{proof}
\textbf{Patterson-Sullivan measure under conjugation.} In the presence of multiple cusps,
we need to consider the Patterson-Sullivan measure for conjugates of $\Gamma$. Recall that $\{\mu_{y}\}_{y\in \H^{d+1}}$ is the $\Gamma$-invariant conformal density of dimension $\delta$ and we denote $\mu_o$ by $\mu$ for short. For each $g_i$ with $1<i\leq j$, set $\Gamma_i=g_i\Gamma g_i^{-1}$. The limit set $\Lambda_{\Gamma_i}$ is $g_i\Lambda_{\Gamma}$ and the critical exponent of $\Gamma_i$ equals $\delta$. For every $y\in \H^{d+1}$, define the measure
\begin{equation*}
\tilde{\mu}_{y}:=(g_i)_{*}\mu_{g_i^{-1}y},
\end{equation*}
where $(g_i)_{*}\mu_{g_i^{-1}y}(E)=\mu_{g_i^{-1}y}(g_i^{-1}E)$ for any Borel subset $E$ of $\partial \H^{d+1}$. It is easy to check that $\tilde{\mu}_{y}$ is supported on $\Lambda_{\Gamma_i}$ and for any $y,z\in \H^{d+1}$, $x\in \partial \H^{d+1}$ and $\gamma \in \Gamma_i$,
\begin{equation*}
\frac{\mathrm{d}\tilde{\mu}_{y}}{\mathrm{d}\tilde{\mu}_{z}}(x)=e^{-\delta\beta_{x}(y,z)}\,\,\, \text{and}\,\,\,(\gamma)_{*}\tilde{\mu}_{y}=\tilde{\mu}_{\gamma y}.
\end{equation*}
It follows from the uniqueness of $\Gamma_{i}$-invariant conformal density that this construction gives exactly the $\Gamma_i$-invariant conformal density on $\Lambda_{\Gamma_i}$ of dimension $\delta$. In later sections, we will denote $\tilde{\mu}_o$ by $\mu_{\Gamma_i}$ and the above analysis yields that for any Borel subset $E$ of $\partial\H^{d+1}$
\begin{equation}\label{conjugation}
e^{-\delta d(o,g_io)} \mu(g_i^{-1}E)\leq \mu_{\Gamma_i}(E)\leq e^{\delta d(o,g_io)} \mu(g_i^{-1}E) .
\end{equation}
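For completeness, \eqref{conjugation} can be verified directly from the conformality: for any Borel subset $E$ of $\partial\H^{d+1}$,
\begin{equation*}
\mu_{\Gamma_i}(E)=\mu_{g_i^{-1}o}(g_i^{-1}E)=\int_{g_i^{-1}E}e^{-\delta\beta_{x}(g_i^{-1}o,\,o)}\,\mathrm{d}\mu(x),
\end{equation*}
and $|\beta_{x}(g_i^{-1}o,o)|\leq d(g_i^{-1}o,o)=d(o,g_io)$, so the integrand lies between $e^{-\delta d(o,g_io)}$ and $e^{\delta d(o,g_io)}$.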
\subsection{Doubling property of PS measure}
\label{sec:double}
We start with two results, Proposition~\ref{double} and Lemma~\ref{lem:bpr}, both deduced from~\cite[Theorem 2]{StrVel}. That result is stated in the spherical metric, but locally the spherical metric is equivalent to the Euclidean metric.
\begin{prop}\label{double}
\begin{itemize}
\item (Doubling property) For every $C>1$, there exists $\epsilon\in(0,1)$ such that for every $x\in\Lambda_\Gamma\cap\Delta$ and $1/C\geq r>0$,
\[\mu(B(x,r))>\epsilon \mu(B(x,Cr)). \]
\item (Growth of measure) There exists $C_{3}>1$, such that for every $x\in\Lambda_\Gamma\cap\Delta$ and $r<1/C_{3}$,
\[2\mu(B(x,r))<\mu(B(x,C_{3}r)). \]
\end{itemize}
\end{prop}
\begin{lem}\label{lem:bpr}
Let $p$ be a parabolic fixed point in $\Delta$ of rank $k$. For $0<r\leq h_p$,
\begin{equation}\label{equ:bpr}
\mu(B(p,r))\approx r^{2\delta-k}h_p^{k-\delta}.
\end{equation}
\end{lem}
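As a quick consistency check of \eqref{equ:bpr}, at the endpoint $r=h_p$ the two factors combine to give
\begin{equation*}
\mu(B(p,h_p))\approx h_p^{2\delta-k}\,h_p^{k-\delta}=h_p^{\delta},
\end{equation*}
so the rank $k$ only enters at scales below the height $h_p$.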
\begin{lem}
\label{lem:annulusquasi}
For every $C>1$, there exists $C'>0$ such that for every parabolic fixed point $p=\gamma\infty\in \Delta$ with $\gamma$ a top representation, for $r\leq h_p$ and for any Borel subset $E\subset B(p, Cr)-B(p, r/C)$, we have
\begin{equation*}
h_p^{\delta}\mu(\gamma^{-1}E)/C'\leq \mu(E)\leq C' h_p^{\delta}\mu(\gamma^{-1}E).
\end{equation*}
\end{lem}
\begin{proof}
As the PS measure is quasi-invariant, we have
\begin{align*}
\mu(\gamma^{-1}E)=\int_{x\in E} |(\gamma^{-1})' x|^{\delta} \left(\frac{1+| x|^2}{1+|\gamma^{-1} x|^2}\right)^{\delta}\mathrm{d}\mu(x).
\end{align*}
Then the lemma can be proved by using Lemma~\ref{lem:explicit} to estimate $|(\gamma^{-1})' x|^{\delta}$ and $|\gamma^{-1} x|^2$.
\end{proof}
\begin{lem}\label{lem:rre}
There exist constants $c>0$ and $C>1$ such that for every parabolic fixed point $p\neq \infty$, if $r\leq h_p/C$, then
\[\mu(B(p,r)-B(p,r/\sqrt{e}))\geq c\mu(B(p,r)). \]
\end{lem}
\begin{proof}
We first notice that it suffices to consider $p\in \Delta_{0}$: since $\Gamma_\infty\Delta_{0}$ covers the limit set in $\mathbb R^d$, we can always find a $\gamma_1$ in $\Gamma_{\infty}$ such that $\gamma_1 p\in\Delta_{0}$. Then using the derivative of $\gamma_1$ and the quasi-invariance of the PS measure, we obtain
\[\frac{\mu(B(p,r)-B(p,r/\sqrt{e}))}{\mu(B(p,r))}\approx \frac{\mu(B(\gamma_1p,r)-B(\gamma_1p,r/\sqrt{e}))}{\mu(B(\gamma_1p,r))}. \]
It then suffices to give a lower bound for $\mu(B(p,r)-B(p,r/\sqrt{e}))$ and use \eqref{equ:bpr} to obtain Lemma \ref{lem:rre}.
Assume $p\in \Delta_0$ is of rank $k$. First consider the case when $p=\gamma \infty$ with $\gamma\in \Gamma$ a top representation of $p$.
We claim that there exists a constant $C>1$ such that for $\gamma_1\in \Gamma_{\infty}$,
with $\gamma \gamma_1\Delta_0\subset B(p,r)-B(p,r/\sqrt{e})$, we have
\begin{equation}\label{equ:gammah}
\mu(\gamma \gamma_1\Delta_0)\gg \frac{h_p^{\delta}}{(d(\gamma_1\Delta_0,x_p)+C)^{2\delta}}.
\end{equation}
Proof of the claim: By Lemma~\ref{lem:annulusquasi}, we have $\mu(\gamma \gamma_1\Delta_0)\approx h_p^\delta\mu(\gamma_1\Delta_0)$. Using the derivative of $\gamma_1$ and the quasi invariance of PS measure, we have $$\mu(\gamma_1\Delta_0)\geq 1/(d(\gamma_1\Delta_0,x_p)+C)^{2\delta}$$ for some constant $C>1$.
By Lemma~\ref{lem:explicit}, we have $\gamma^{-1}(B(p,r)-B(p,r/\sqrt e))=B(x_p,\sqrt eh_p/r)-B(x_p,h_p/r)$. Let $C'=\operatorname{diam}(\Delta_0)$. The number of $\gamma_1\Delta_0$'s in such a region is at least
\begin{equation}\label{equ:volest}
\operatorname{Vol}_{\mathbb{R}^k}\left(B\left(x_p,\sqrt{e}h_p/r-C'\right)-B\left(x_p,h_p/r+C'\right)\right)/\operatorname{Vol}_{\mathbb{R}^k}(\Delta_0)\gg h_p^kr^{-k}
\end{equation}
where $\mathbb{R}^k$ is the subspace described in Lemma~\ref{lem:biberbach}.
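Note also that each $\gamma_1$ counted above satisfies $d(\gamma_1\Delta_0,x_p)\leq \sqrt{e}h_p/r$, so by \eqref{equ:gammah} each corresponding term is bounded from below:
\begin{equation*}
\mu(\gamma\gamma_1\Delta_0)\gg \frac{h_p^{\delta}}{(\sqrt{e}h_p/r+C)^{2\delta}}\gg h_p^{\delta}\left(\frac{r}{h_p}\right)^{2\delta}=h_p^{-\delta}r^{2\delta},
\end{equation*}
where we use $h_p/r\geq C$ from the hypothesis of the lemma.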
\eqref{equ:volest} and \eqref{equ:gammah} imply
\begin{align*}
\mu(B(p,r)-B(p,r/\sqrt{e}))\geq \sum_{\gamma_1\Delta_0\subset B(x_p,\sqrt eh_p/r)-B(x_p,h_p/r)}\mu(\gamma \gamma_1\Delta_0)\gg h_p^{k-\delta} r^{2\delta-k}.
\end{align*}
Consider the general case when $p=\gamma p_i$ with $\gamma\in \Gamma$ a top representation of $p$. We estimate the measure $\mu(\gamma \gamma_1 \Delta_{p_i})$ for any $\gamma_1\in \Gamma_{p_i}$ satisfying $\gamma \gamma_1\Delta_{p_i}\subset B(p,r)-B(p,r/\sqrt{e})$. Using~\eqref{conjugation}, we have
\begin{equation*}
\mu(\gamma \gamma_1\Delta_{p_i})\approx \mu_{\Gamma_i} (g_i \gamma \gamma_1 \Delta_{p_i}),
\end{equation*}
where $\Gamma_i:=g_i\Gamma g_i^{-1}$. Lemma~\ref{lem:bilip} yields
\begin{equation}
\label{measure inclusion}
g_i\gamma \gamma_1 \Delta_{p_i}\subset g_i(B(p,r)-B(p,r/\sqrt{e}))\subset B(g_ip, Cr)-B(g_ip,r/(C\sqrt{e})).
\end{equation}
So we can use the argument for the previous case to obtain
\begin{equation}
\label{fundamental domain measure}
\mu_{\Gamma_i}(g_i\gamma \gamma_1\Delta_{p_i})\approx r^{2\delta} h_{g_ip}^{-\delta}.
\end{equation}
Then we count the number of $\gamma\gamma_1\Delta_{p_i}$'s in $B(p,r)-B(p,r/\sqrt{e})$, which equals the number of $g_i\gamma \gamma_1\Delta_{p_i}$'s in $g_i(B(p,r)-B(p,r/\sqrt{e}))$. The map $g_i$ sends $B(p,r)$ and $B(p,r/\sqrt{e})$ to two spheres, and the distance between $g_i B(p,r)$ and $g_i B(p,r/\sqrt{e})$ is at least $(1-1/\sqrt{e})r/C$. The map $g_i\gamma^{-1}g_i^{-1}$ sends $g_i B(p,r)$ and $g_i B(p,r/\sqrt{e})$ to two spheres; let $R$ and $p'$ be the radius and the center of the outer sphere, respectively. Using \eqref{measure inclusion} and Lemma~\ref{lem:explicit}, we have
\begin{equation}
\label{outer radius}
R\in (h_{g_ip}/(Cr), C\sqrt{e}h_{g_ip}/r).
\end{equation}
For every $x\in B(g_ip,Cr)-B(g_ip, r/(C\sqrt{e}))$, we have $|(g_i\gamma^{-1}g_i^{-1})'(x)|\in (h_{g_ip}/(C^2r^2), C^2eh_{g_ip}/r^2)$. So the distance between $g_i\gamma^{-1}B(p,r)$ and $g_i\gamma^{-1}B(p,r/\sqrt{e})$ is at least $(1-1/\sqrt{e}) h_{g_ip}/(C^3r)$. This distance estimate together with \eqref{outer radius} implies there exists some constant $c\in (0,1)$ such that
\begin{equation*}
g_i\gamma^{-1} (B(p,r)-B(p,r/\sqrt{e}))\supset B(p', R)-B(p',cR).
\end{equation*}
The number of $g_i\gamma_1\Delta_{p_i}$ in $g_i\gamma^{-1}(B(p,r)-B(p,r/\sqrt{e}))$ is at least
\begin{equation}
\label{number of fundamental domain}
\operatorname{Vol}_{\mathbb{R}^k}\left(B\left(p',R-C''\right)-B\left(p',cR+C''\right)\right)/\operatorname{Vol}_{\mathbb{R}^k}(g_i\Delta_{p_i})\gg R^k\gg h_{g_ip}^k r^{-k},
\end{equation}
where $C''=\operatorname{diam}(g_i\Delta_{p_i})$. A lower bound for $\mu(B(p,r)-B(p,r/\sqrt{e}))$ can be obtained using Lemma~\ref{lem:height}, \eqref{fundamental domain measure} and \eqref{number of fundamental domain}.
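Concretely, multiplying \eqref{fundamental domain measure} by the count \eqref{number of fundamental domain} and using Lemma~\ref{lem:height} to replace $h_{g_ip}$ by $h_p$ up to constants:
\begin{equation*}
\mu(B(p,r)-B(p,r/\sqrt{e}))\gg h_{g_ip}^{k}r^{-k}\cdot r^{2\delta}h_{g_ip}^{-\delta}=h_{g_ip}^{k-\delta}r^{2\delta-k}\approx h_p^{k-\delta}r^{2\delta-k},
\end{equation*}
which together with \eqref{equ:bpr} yields the lemma.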
\end{proof}
\subsection{Friendliness of PS measure}
\label{sec:friendliness}
For any $r>0$, set
\begin{equation}
\label{equ:def neighborhood1}
N_r(\Delta_0):=\{x\in \Delta_0:\,\ d(x,\partial \Delta_0)\leq r \}.
\end{equation}
\begin{lem}\label{lem:boud}
There exist
$0<\epsilon,\lambda<1$ such that for all $r<1$
\begin{equation}\label{equ:bou}
\mu(N_{\epsilon r}(\Delta_0))\leq \lambda\mu(N_r(\Delta_0)).
\end{equation}
\end{lem}
Recall from Section~\ref{sec:cusps} that $\Delta_0=B_Y(C)\times \Delta_0'$. Let $l'$ be a facet of $\Delta_0'$ and set $l=B_Y(C)\times l'$. Let $\gamma$ be the element in $\Gamma_{\infty}$ identifying $l'$ with the opposite facet $l''$, so that $\gamma$ also identifies $B_Y(C)\times l'$ with $B_Y(C)\times l''$. Set
\begin{equation*}
N_r(l):=\{x\in \Delta_0\cup \gamma^{-1}\Delta_0:\,d(x,l)\leq r\}.
\end{equation*}
Lemma \ref{lem:boud} is deduced from the following lemma.
\begin{lem}\label{lem:part}
There exist $0<\epsilon,\lambda<1$ such that for all $r<1$
\begin{equation*}
\mu(N_{\epsilon r}(l))\leq \lambda \mu(N_r(l)).
\end{equation*}
\end{lem}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:boud}}]
Assume that $\infty$ is a rank $k$ cusp. If $\infty$ is not a cusp of maximal rank, then note that $\partial B_Y(C)\times \Delta_0'=\{|y|=C\}\times \Delta_0'$ does not intersect $\Lambda_{\Gamma}$, so a small neighborhood of this boundary has zero PS measure. Therefore, we only need to consider the neighborhoods of the $l$'s. Using the quasi-invariance of the PS measure and Lemma~\ref{lem:part}, we obtain
\begin{equation*}
\mu(N_{\epsilon r}(\Delta_0))\leq \sum_{l}\mu(N_{\epsilon r}(l))\leq \lambda \sum_{l}\mu(N_{r}(l))\leq Ck\lambda \mu(N_r(\Delta_0)).
\end{equation*}
Replacing $\epsilon$ by $\epsilon^n$ and applying Lemma \ref{lem:part} repeatedly yields an arbitrarily small $\lambda$ in Lemma~\ref{lem:boud}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:part}]
The proof is similar to the argument used to deduce Lemma 3.10 from Lemma 3.11 in~\cite{DFSU}. Let $L$ be the hyperplane containing $l$ and let $N_r(L)$ be the $r$-neighborhood of $L$. \cite[Lemma 3.11]{DFSU} is stated in the spherical metric, but locally the spherical metric is equivalent to the Euclidean metric. So~\cite[Lemma 3.11]{DFSU} implies that there exists $\epsilon>0$ such that
for every $\xi\in E:= \Lambda_\Gamma\cap N_{\epsilon r}(l)$, there exists $0<\rho_{\xi}<1$ satisfying
\begin{equation}\label{equ:nlr}
\mu(B(\xi,\rho_{\xi})\cap (N_r(L)-N_{\epsilon r}(L)))\geq c\mu (B(\xi,\rho_{\xi})),
\end{equation}
where $0<c<1$ is a constant only depending on $\Gamma$. The family $\{B(\xi,\rho_{\xi})\}_{\xi\in E}$ forms a covering of $E$.
It follows from the Vitali covering lemma that there exists a disjoint subcollection $\{B(\xi,\rho_{\xi}) \}_{\xi\in I}$ with $I\subset E$ countable, such that
\[\cup_{\xi\in I} B(\xi,5\rho_{\xi})\supset \cup_{\xi\in E} B(\xi,\rho_{\xi})\supset E. \]
The set $B(\xi,\rho_{\xi})\cap (N_r(L)-N_{\epsilon r}(L))$ may not be contained in $N_r(l)-N_{\epsilon r}(l)$, but we can cover it by translates of $N_r(l)-N_{\epsilon r}(l)$.
By an elementary computation, no more than $k_0$ elements $\gamma_j$ in $\Gamma_{\infty}$ are needed, with $k_0$ depending only on $\Delta_0$, such that
\begin{equation*}
\cup_{j}\gamma_j(N_r(l)-N_{\epsilon r}(l))\supset B(\xi,\rho_\xi)\cap (N_r(L)-N_{\epsilon r}(L)).
\end{equation*}
Using this inclusion, the quasi-invariance of the PS measure, \eqref{equ:nlr} and doubling property in Proposition~\ref{double}, we have
\begin{align*}
&k_0\mu(N_r(l)-N_{\epsilon r}(l))
\geq c\sum_{\xi\in I}\mu(B(\xi,\rho_\xi)\cap(N_r(L)-N_{\epsilon r}(L)))\\
\geq &c\sum_{\xi\in I}\mu(B(\xi,\rho_{\xi}))
\geq c\epsilon'\sum_{\xi \in I} \mu(B(\xi,5\rho_{\xi}))\geq c\epsilon'\mu(N_{\epsilon r}(l)),
\end{align*}
where $\epsilon'>0$ is a constant given in Proposition~\ref{double}.
We conclude that
\begin{equation*}
\mu (N_{\epsilon r}(l))\leq \lambda\mu(N_r(l)). \qedhere
\end{equation*}
\end{proof}
\subsection*{Definition and properties of the ``flower'' $J_p$}
We introduce the ``flower'' $J_p$, the building block of the coding. In fact, the $J_p$'s are almost the union of a countable subcollection of the open subsets $\Delta_j$ in the coding. The advantage of considering $J_p$ is that it has a clean boundary, which makes it possible to estimate its measure.
{\textbf{For the rest of the section, let $p=\gamma\infty$ be a parabolic fixed point in $\Delta$ with $\gamma\in \Gamma$ a top representation of $p$ and $x_{p}=\gamma^{-1}\infty$.}} Let $\eta\in (0,1)$. We define the set $J_{p,\eta}$ as follows. By Lemma~\ref{lem:explicit}, we have
$$\gamma^{-1}B(p,\eta h_p)=B(x_p,1/\eta)^c. $$
First suppose that $\infty$ is a parabolic fixed point of maximal rank. Then $\mathbb{R}^d\subset \partial \mathbb{H}^{d+1}$ is tessellated by the translations of $\Delta_0$. Take $R_{p,\eta}$ to be the smallest parallelotope tiled by the translations of $\Delta_0$ that contains $B(x_p,1/\eta)$. Let
\begin{equation}
J_{p,\eta}=\gamma R_{p,\eta}^c,
\end{equation}
and set
\begin{equation}
N_p=\{\gamma_1\in \Gamma_{\infty}:\, \gamma_1\Delta_0\subset R_{p,\eta}^c\}
=\{\gamma_1\in \Gamma_{\infty}:\,\gamma \gamma_1\Delta_0\subset J_{p,\eta}\}.
\end{equation}
For the general case when $\infty$ is a rank $k$ cusp, let $Z$ be the affine subspace in $\partial \mathbb{H}^{d+1}$ described in Lemma~\ref{lem:biberbach} where elements in $\Gamma_{\infty}$ act as translations, and $\Delta_{0}=B_{Y}(C)\times \Delta_0'$. So $Z$ is tessellated by the translations of $\Delta_0'$.
Take $R_{p,\eta}$ in $Z$ to be the smallest parallelotope tiled by the translations of $\Delta_0'$ such that $B_Y(2/\eta)\times R_{p,\eta}$ contains $B(x_p,1/\eta)$. Set \begin{equation}\label{flower}
J_{p,\eta}=\gamma (B_Y(2/\eta)\times R_{p,\eta})^c\subset B(p,\eta h_p),
\end{equation}
\begin{equation}
\label{flower group}
N_p=\{\gamma_1\in\Gamma_{\infty}:\, \gamma_1\Delta_0\subset (B_Y(2/\eta)\times R_{p,\eta})^c \}=\{\gamma_1\in \Gamma_{\infty}:\,\gamma \gamma_1\Delta_0\subset J_{p,\eta}\}.
\end{equation}
This construction yields
\begin{equation}\label{flower1}
J_{p,\eta}\cap\Lambda_\Gamma=\gamma(\cup_{\gamma_1\in N_p}\gamma_1(\overline{\Delta_0}\cap\Lambda_{\Gamma})),
\end{equation}
\begin{equation}\label{flower3}
d((\gamma \gamma_1)^{-1}\infty,\Delta_0)=d(\gamma_1^{-1}x_p,\Delta_0)\geq 1/\eta \,\,\,\text{for any}\,\,\, \gamma_1\in N_p.
\end{equation}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=2.5]
\path[fill=olive] (3.5,0) rectangle (5.5,1.5);
\draw[fill= white] (3.75,0.25) rectangle (5.25,1.25);
\draw (0,0) rectangle (2,1.5);
\path[fill=olive] (1,0.375)circle [radius=0.125];
\path[fill=olive] (1,0.625)circle [radius=0.125];
\path[fill=olive] (0.875,0.5)circle [radius=0.125];
\path[fill=olive] (1.125,0.5)circle [radius=0.125];
\node[above] at (1,0.75) {$p=\gamma\infty$};
\path[fill=black] (1,0.5) circle [radius=0.01];
\draw [thick, <-] (2.25,0.5) -- (3.25,0.5);
\node[above] at (2.75, 0.5) {$\gamma$};
\end{tikzpicture}
\end{center}
\caption{Flower}\label{fig:flower}
\end{figure}
\begin{rem*}
The shape of $J_{p,\eta}$ when $p$ is of maximal rank may differ from that when $p$ is not of maximal rank, but they satisfy the same properties and estimates. Hence we can treat both cases together.
\end{rem*}
\begin{lem}\label{lem:jp}
There exists $0<c_{4}<1$ such that
\[B(p,c_{4}\eta h_p)\subset J_{p,\eta}\subset B(p,\eta h_p), \]
\[B(x_p,1/\eta)\subset (\gamma^{-1} J_{p,\eta})^c\subset B(x_p, 1/(c_{4}\eta)). \]
\end{lem}
\begin{proof}
Due to the compactness of $\Delta_0$, there exists $c_{4}$ such that $(\gamma^{-1}J_{p,\eta})^c=B_Y(2/\eta)\times R_{p,\eta}$ (or $R_{p,\eta}$ in the maximal rank case) is contained in $B(x_p,1/(c_{4}\eta))$, which implies the lemma.
\end{proof}
\textbf{In the following, we abbreviate $J_{p,\eta}$ to $J_p$.} For $r>0$, let
\begin{align}
\label{equ:def neighborhood 2}
&N_r(\partial J_p):=\{x\in J_p^c:\, d(x,\partial J_p)\leq r \},\\
&N_r(\partial\gamma^{-1} J_p):=\{x\in (\gamma^{-1}J_p)^c:\, d(x,\partial \gamma^{-1}J_p)\leq r \}.\nonumber
\end{align}
\begin{lem}\label{lem:jpr}
Fix $C>1$. There exists $0<c<1$ depending on $\eta$ such that for any $r<h_p$,
\begin{equation}
\label{recdouble}
\mu(N_{C\eta r}(\partial J_p))\leq c\mu(N_r(\partial J_p)).
\end{equation}
Moreover, $c$ tends to zero as $\eta$ tends to zero.
\end{lem}
\begin{proof}
We divide into cases according to which interval $r$ lies in. Let $\beta=C\sqrt{\eta}$.
\begin{itemize}
\item Case $A$: $r\leq\eta h_p$.
By Lemma~\ref{lem:explicit}, we have $|(\gamma^{-1})'x|=h_p/d(x,p)^2$. Using Lemma~\ref{lem:jp}, we have
\begin{equation}
\label{location}
N_r(\partial J_p)\subset B(p, 2\eta h_p)-B(p,c_{4}\eta h_p).
\end{equation}
Hence for $x\in N_r(\partial J_p)$
\begin{equation*}
|(\gamma^{-1})'x|\in [1/(4\eta^2h_p),1/(c_{4}^2\eta^2 h_p)].
\end{equation*}
Then
\begin{equation}\label{equ:gammaJ}
N_{r/(4\eta^2h_p)} (\partial\gamma^{-1}J_p)\subset \gamma^{-1}N_r(\partial J_p) \subset N_{r/(c^2_4\eta^2h_p)}(\partial\gamma^{-1}J_p).
\end{equation}
Notice that $\partial\gamma^{-1} J_p=\partial (B_Y(2/\eta)\times R_{p,\eta})$ and $R_{p,\eta}$ is a parallelotope tiled by the translations of $\Delta_0'$.
\begin{itemize}
\item Case $A_1$: $r\leq \eta^2h_p$.
Recall that Lemma \ref{lem:boud} is proved using Lemma \ref{lem:part}. Using the same argument, we obtain an analog of Lemma \ref{lem:boud} for $N_r(\partial \gamma^{-1} J_p)$. Using this version of Lemma~\ref{lem:boud} with $\epsilon=4\beta/c_{4}^2$, the inequality $r/(4\eta^2 h_p)< 1$ and \eqref{equ:gammaJ}, we have
\begin{align*}
\mu( \gamma^{-1}N_{\beta r}(\partial J_p))\leq \mu(N_{\beta r/(c_{4}^2 \eta^2 h_p)}(\partial\gamma^{-1}J_p))\leq \lambda \mu(N_{r/(4\eta^2 h_p)}(\partial\gamma^{-1}J_p))\leq \lambda\mu( \gamma^{-1}N_r(\partial J_p)).
\end{align*}
Using Lemma~\ref{lem:annulusquasi}, we obtain
\begin{equation*}
\frac{\mu(N_{\beta r}(\partial J_p))}{\mu(N_r(\partial J_p))}\leq C\frac{\mu(\gamma^{-1}N_{\beta r}(\partial J_p))}{\mu(\gamma^{-1} N_{r}(\partial J_p))} \leq C\lambda,
\end{equation*}
where $\lambda$ tends to zero as $\eta$ tends to zero.
\item Case $A_2$: $\eta^{3/2} h_p<r\leq \eta h_p$.
We compute the measure by counting the number of translations of $\Delta_0$. Let $\gamma' \Delta_0$ be any fundamental domain contained in $\gamma^{-1}N_r(\partial J_p)$ with $\gamma'\in \Gamma_{\infty}$. Using the quasi-invariance of PS measure and (\ref{location}), we obtain that $\mu(\gamma'\Delta_0)\approx \eta^{2\delta}\mu(\Delta_0)$.
By Lemma~\ref{lem:annulusquasi}, \eqref{equ:gammaJ} and by counting the number of fundamental domains which intersect the neighborhood of $\partial \gamma^{-1}J_p$, we have
\begin{equation*}
\mu(N_{\beta r}(\partial J_p))\ll h_{p}^{\delta}\mu(N_{\beta r/(c_{4}^2 \eta^2 h_p)}(\partial\gamma^{-1}J_p))\ll h_p^{\delta}\cdot(1/\eta)^{k-1} \cdot (\beta r/(c_{4}^2 \eta^2 h_p))\cdot\eta^{2\delta}\mu(\Delta_0).
\end{equation*}
Meanwhile, as $r/(4\eta^2h_p)> 1/(4\eta^{1/2})$, we have
\begin{equation*}
\mu(N_r(\partial J_p))\gg h_p^{\delta}\mu(N_{r/(4\eta^2 h_p)}(\partial\gamma^{-1}J_p))\gg h_p^{\delta}\cdot (1/\eta)^{k-1}\cdot (r/(4\eta^2 h_p))\cdot \eta^{2\delta}\mu(\Delta_0).
\end{equation*}
Therefore,
\[ \mu(N_{\beta r}(\partial J_p))\ll \beta \mu(N_r(\partial J_p)). \]
\end{itemize}
\item Case $B$: $\eta^{1/2} h_p\leq r\leq h_p$.
We handle this case using \eqref{equ:bpr}. By Lemma~\ref{lem:jp} and the inequality $\beta r\geq \eta h_p$, we have
\begin{equation*}
\mu(N_{\beta r}(\partial J_p))\leq \mu(J_p\cup N_{\beta r}(\partial J_p))\leq \mu (B(p,\eta h_p+\beta r))\leq \mu (B(p,2\beta r))\ll (2\beta r)^{2\delta-k}h_p^{k-\delta}.
\end{equation*}
Meanwhile, we have
\begin{equation*}
\mu(J_p\cup N_r (\partial J_p))\geq \mu (B(p,r))\gg r ^{2\delta-k}h_p^{k-\delta}.
\end{equation*}
Hence
\begin{equation*}
\frac{\mu(N_r(\partial J_p)-N_{\beta r}(\partial J_p))}{\mu (N_{\beta r}(\partial J_p))}=\frac{\mu (J_p\cup N_r(\partial J_p))-\mu (J_p\cup N_{\beta r}(\partial J_p))}{\mu (N_{\beta r}(\partial J_p))} \gg \frac{r ^{2\delta-k}h_p^{k-\delta}-(2\beta r)^{2\delta-k}h_p^{k-\delta} }{(2\beta r)^{2\delta-k}h_p^{k-\delta}}.
\end{equation*}
Therefore
\begin{equation*}
\mu(N_{\beta r}(\partial J_p))\ll \beta^{2\delta-k} \mu(N_r(\partial J_p)).
\end{equation*}
\end{itemize}
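The three ranges above leave the gaps $(\eta^2h_p,\eta^{3/2}h_p)$ and $(\eta h_p,\eta^{1/2}h_p)$ untreated, but a direct check (for $\eta<1$) shows that the rescaling $r\mapsto \eta^{1/2}r$ bridges them:
\begin{align*}
r\in(\eta^2h_p,\eta^{3/2}h_p)&\implies \eta^{1/2}r\in(\eta^{5/2}h_p,\eta^{2}h_p),\\
r\in(\eta h_p,\eta^{1/2}h_p)&\implies \eta^{1/2}r\in(\eta^{3/2}h_p,\eta h_p).
\end{align*}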
Now, to prove \eqref{recdouble},
we consider $\eta^{1/2} r$ and $r$; at least one of them belongs to $(0,\eta^2 h_p)\cup [ \eta^{3/2} h_p,\eta h_p]\cup [\eta^{1/2} h_p,h_p]$. Inequality \eqref{recdouble} then follows from the observation that
\begin{equation*}
\frac{\mu(N_{C\eta r}(\partial J_p))}{\mu(N_r(\partial J_p))}
\leq \min\left\{\frac{\mu(N_{C\eta r}(\partial J_p))}{\mu(N_{\beta r}(\partial J_p))}, \frac{\mu(N_{\beta r}(\partial J_p))}{\mu(N_r(\partial J_p))}\right \}. \qedhere
\end{equation*}
\end{proof}
\section{Coding of limit set}\label{sec:code}
In this section, we construct the coding and prove Proposition~\ref{prop:coding}, Lemma~\ref{lem:uni} and Lemma~\ref{lem:l1}.
\subsection{Coding procedure}
\label{coding procedure}
The visual map $\pi:\operatorname{T}^1(\mathbb{H}^{d+1})\to \partial \mathbb{H}^{d+1}$ is defined by
\begin{equation*}
\pi(x)=\lim_{t\to \infty} \mathcal{G}_t(x),
\end{equation*}
which maps $x$ to the forward endpoint in $\partial \mathbb{H}^{d+1}$ of the geodesic defined by $x$.
Recall that we fix $p_1$ as $\infty$ and $H_{\infty}$ is the horoball based at $\infty$ given by $\mathbb{R}^d\times \{x\in \mathbb{R}:\,x>1\}$. Let $\widetilde{H_{\infty}}$ be the corresponding unstable horosphere, that is, the unstable horosphere based at $\infty$ whose unit tangent vectors have basepoints on $\partial H_{\infty}$. Set
\begin{equation*}
\widetilde{\Omega_0}=\{x\in \widetilde{H_{\infty}}:\,\pi(x)\in \Delta_0\}.
\end{equation*}
Take
\begin{align*}
&h_n=e^{-n},\\
& \eta\in (0,1)\text{ a sufficiently small constant to be specified at the end of the proof of Proposition \ref{keylemma}}.
\end{align*}
All the constants appearing later will be independent of $\eta$ unless we state it explicitly.
Let
\begin{align*}
&H_p(\eta)\,\,\, \text{be the horoball based at}\,\,\, p\,\,\, \text{with height equal to}\,\,\, \eta h_{p},\\
& \mathcal C_\eta=\Gamma\backslash\cup_{p\in \mathcal P}\Gamma \operatorname{T}^1(H_p(\eta)).
\end{align*}
The construction is by induction. Let $\Omega_0:=\Delta_{0}$.
\begin{itemize}
\item
For $n\in\mathbb N$, let
\begin{equation}
\label{good parabolic fixed points}
P_{n+1}=\{p\in\mathcal P:\,\ \eta h_p\in(h_{n+1},h_n],\ B(p,\eta h_p)\subset \Omega_n,\ d(p,\partial \Omega_n)>h_n/(4\eta) \}.
\end{equation}
\item For any $p\in P_{n+1}$, write $p=\gamma p_i$ with $\gamma\in \Gamma$ a top representation of $p$. If $p_i=\infty$, let $J_p$ and $N_p$ be defined as in \eqref{flower} and \eqref{flower group} respectively. Otherwise, we use the following commutative diagram to define $J_p$:
\[ \begin{tikzcd}
\H^{d+1} \arrow{r}{g_i} \arrow[swap]{d}{\Gamma} & \H^{d+1} \arrow{d}{g_i \Gamma g_{i}^{-1}} \\%
\H^{d+1} \arrow{r}{g_i}& \H^{d+1}.
\end{tikzcd}
\]
Note that $g_ip=(g_i\gamma g_i^{-1})\infty\in g_i\Delta_0$. So we can define $J_{g_ip}$ as in \eqref{flower} and set
\begin{equation*}
J_p:=g_i^{-1}J_{g_ip},\ \
N_p=\{\gamma_1\in \Gamma_{p_i}:\,\gamma \gamma_1\Delta_{p_i}\subset J_{p}\},
\end{equation*}
where $\Gamma_{p_i}$ is a subgroup of $\Gamma$ defined in \eqref{pistabilizer}.
\item Set
\begin{align*}
&\Omega_{n+1}=\Omega_n-D_{n+1}=\Omega_n-\cup_{p\in P_{n+1}}J_{p},\\
&\widetilde{\Omega_{n+1}}=\{x\in \widetilde{\Omega_0}:\,\pi(x)\in \Omega_{n+1}\}.
\end{align*}
\end{itemize}
It is worthwhile to point out that in this construction, we use Lemmas~\ref{lem:jp},~\ref{lem:height} and~\ref{lem:bilip} to obtain the relation
\begin{equation}
\label{equ:jp}
B\left(p,\eta h_p/C_{5}\right)\subset g_i^{-1}B\left(g_ip, c_{4}\eta h_{g_ip}\right)\subset J_p\subset g_i^{-1}B\left(g_ip,\eta h_{g_ip}\right)\subset B\left(p, C_{5}\eta h_{p}\right),
\end{equation}
for some constant $C_{5}>1$. Here we can take $\eta$ to be small enough such that $C_{5}\eta<1$ and $B(p,C_{5}\eta h_p)\subset \Delta_0$.
Using the definition of $J_p$, \eqref{equ:jp} and the separation property (Lemma \ref{lem:pdistance}), it can be shown that the sets $J_p$ with $p\in P_n$, $n\in \mathbb{N}$, are mutually disjoint. In Proposition \ref{keylemma}, it will be shown that the union $\cup_n\cup_{p\in P_n}J_p$ is conull in $\Delta_0$ with respect to the PS measure $\mu$. By \eqref{flower1} and the construction of $J_p$, we have $J_{p}\cap\Lambda_\Gamma=\cup_{\gamma_1\in N_p}\gamma\gamma_1\overline{\Delta}_{p_i}\cap\Lambda_\Gamma$. So the countable disjoint union
\[ \bigcup_{n\in\mathbb N}\bigcup_{p=\gamma p_i\in P_n}\bigcup_{\gamma_1\in N_p}\gamma\gamma_1\Delta_{p_i} \]
is also conull in $\Delta_0$ with respect to the PS measure. On each set $\gamma\gamma_1\Delta_{p_i}$ we have an expanding map given by $(\gamma\gamma_1)^{-1}$, which maps this set to $\Delta_{p_i}$. In the one-cusp case, these form the countable collection of disjoint open subsets and the expanding map. When there are multiple cusps, this is the first step of the construction of the coding; the rest will be given in Sections \ref{sec:exptail} and \ref{sec:codmulti}.
\bigskip{}
The main result of this section is the following proposition.
\begin{prop}\label{keylemma}
There exist $\epsilon_0>0$ and $N>0$ such that for all $n>N$, we have
\[\mu(\Omega_n)\leq (1-\epsilon_0)^n. \]
\end{prop}
In the one-cusp case, this yields Proposition \ref{prop:coding} (1). Moreover, the exponential tail \eqref{sum} follows from Proposition \ref{keylemma} rather directly; it will be proved in Section~\ref{sec:exptail}. The proof of Proposition~\ref{keylemma} requires substantial preparation, and we postpone it to the end of Section~\ref{sec:energy}.
\subsection{Separation}
\label{sec:sep}
\begin{lem}[Separation property]
\label{lem:pdistance}
For any two different parabolic fixed points $p, p'$, we have
\begin{equation*}
d(p,p')>\sqrt{h_p h_{p'}}.
\end{equation*}
\end{lem}
The proof is elementary, following from the disjointness of the horoballs. This property plays a key role in the construction of the coding and the proof of Proposition~\ref{keylemma}.
\begin{lem}
\label{lem:separation}
The distance between any two connected components of $\partial\Omega_n$ is strictly greater than $h_n/(2\eta)$.
\end{lem}
\begin{proof}
For any two distinct $p,p'\in P_n$, using \eqref{equ:jp} and Lemma~\ref{lem:pdistance}, we obtain that the distance between $J_p$ and $J_{p'}$ is at least
\begin{align*}
\sqrt{h_ph_{p'}}-C_{5}\eta (h_p+h_{p'})\geq \frac{h_n}{\eta}-2C_{5} h_{n-1}\geq \frac{h_n}{2\eta}.
\end{align*}
The distance between $J_{p}$ and $\partial\Omega_{n-1}$ is also greater than
\begin{align*}
\frac{h_{n-1}}{4\eta} -C_{5}\eta h_p \geq \frac{h_{n-1}}{4\eta} -C_{5}h_{n-1}>\frac{h_n}{2\eta}.
\end{align*}
By induction, the distance between different connected components of $\partial\Omega_{n}$ is strictly greater than $h_{n}/(2\eta)$.
\end{proof}
This lemma is the reason why we require the parabolic fixed points in $P_{n+1}$ to stay away from the boundary of $\Omega_{n}$. This condition keeps different components of $\partial \Omega_{n}$ well-separated, which in turn makes it possible to apply the friendliness of the PS measure.
\subsection{Equivalence classes in $Q_n$}
\label{sec:equivalent classes}
We need more knowledge of $\Omega_n$ and we introduce the following set: for $n\in\mathbb N$, define
\begin{equation*}
Q_{n+1}=\{p\in\mathcal P:\, \eta h_{p}\in (h_{n+1},h_n],\ B(p,\eta h_{p})\cap \Omega_n\neq\emptyset,\ d(p,\partial \Omega_n)\leq h_n/(4\eta)\}.
\end{equation*}
The points in neighborhoods of points in $Q_{n+1}$ correspond to vectors in $\widetilde{\Omega}_0$ entering the cusps during the time interval $[n,n+1)$; they form the bad part of $\Omega_n$.
We consider a subset of $Q_n$:
\begin{equation*}
Q_{n}'=\{p\in Q_{n}:\,\ B(p,\sqrt{\eta h_nh_p})\cap\partial \Omega_{n-1}\neq\emptyset \}.
\end{equation*}
The geometric meaning of this artificial radius $\sqrt{\eta h_nh_p}$ will be explained in Lemma~\ref{lem:np}.
The proof of Proposition \ref{keylemma} consists of estimating the measure $\mu(B(p,\sqrt{\eta h_nh_p})\cap\Omega_{n-1})$. As $B(p,\sqrt{\eta h_nh_p})\cap\Omega_{n-1}$ may not be a full ball, we will pair it with another partial ball and use the doubling property of the PS measure. Before going into the details, it will be illuminating to sketch the ideas. For $p\in Q_n'$, the component of $\partial \Omega_{n-1}$ closest to it is either $\partial \Omega_0$ or some $\partial J_q$. If it is $\partial \Omega_0$, notice that $\partial \mathbb{H}^{d+1}$ is tessellated by the translations of $\Delta_0$, so the symmetry of these translations gives the point $p'$ to pair with $p$ (if $p$ is near a corner of $\partial \Omega_0$, we may need more than one point to pair with $p$). If it is some $\partial J_q$, write $q=\gamma^{-1}p_i$ with $\gamma^{-1}\in \Gamma$ a top representation of $q$. We map $B(p,\sqrt{\eta h_n h_p})$ and $\partial J_q$ by $g_i\gamma$ and obtain a picture similar to the previous case: in particular, $g_i\gamma \partial J_q$ is a parallelotope. We find the pairing point of $g_i\gamma p$ and map it back to obtain the one of $p$. The work lies in modifying the radius $\sqrt{\eta h_n h_p}$ so that it is suitable both for $p$ and for its pairing point.
\bigskip{}
\textbf{Finding the radius}
\textbf{For Lemmas~\ref{lem:rprq}--\ref{lem:pjq}, we consider $p\in Q_n'$ such that the component of $\partial \Omega_{n-1}$ closest to $p$ is $\partial J_{q}$ with $q\in \cup_{l=1}^{n-1}P_l$. Write $q= \gamma^{-1}p_i$ with $\gamma^{-1}\in \Gamma$ a top representation of $q$.}
\begin{lem}
\label{lem:rprq}
There exists $C>1$ such that
\begin{align*}
\eta h_p\leq h_{n-1}\leq C\eta^3 h_q,\ \
\frac{\eta h_{g_i p}}{C}\leq h_{n-1}\leq C \eta^3 h_{g_iq}.
\end{align*}
\end{lem}
\begin{proof}
It follows from Lemma~\ref{lem:pdistance} that
\begin{equation*}
d(p,q)\geq \sqrt{h_p h_q}\geq \sqrt{h_{n-1} h_q/(e\eta)}.
\end{equation*}
Meanwhile by \eqref{equ:jp}, we have
\begin{equation*}
d(p,q)\leq d(p,\partial J_{q})+d(\partial J_{q}, q)\leq h_{n-1}+C_{5}\eta h_q\leq (1+C_{5})\eta h_q.
\end{equation*}
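Explicitly, combining the two displayed bounds on $d(p,q)$ and squaring gives
\begin{equation*}
\frac{h_{n-1}h_q}{e\eta}\leq (1+C_{5})^2\eta^2h_q^2,\,\,\,\text{that is,}\,\,\,h_{n-1}\leq e(1+C_{5})^2\eta^3 h_q,
\end{equation*}
while $\eta h_p\leq h_{n-1}$ holds by the definition of $Q_n$.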
So the above two inequalities lead to the first statement. The second statement follows easily from the first statement and Lemma~\ref{lem:height}.
\end{proof}
\begin{lem}
\label{lem:annulus}
There exists $C>1$ such that
\begin{equation*}
B(g_ip, h_{g_ip})\subset B(g_iq, C\eta h_{g_iq})-B(g_iq, \eta h_{g_iq}/C).
\end{equation*}
\end{lem}
\begin{proof}
For any $\xi\in \partial B(g_ip, h_{g_ip})$, an upper bound for $d(\xi, g_iq)$ is given by
\begin{align*}
d(\xi, g_iq)\leq &d(\xi, g_ip)+d(g_ip,\partial J_{g_iq})+d(\partial J_{g_iq},g_iq)
\leq h_{g_ip}+C h_{n-1}+\eta h_{g_iq} \leq C\eta h_{g_iq}.
\end{align*}
A lower bound for $d(\xi, g_iq)$ is given by
\begin{align*}
d(\xi, g_iq)\geq &d(g_iq, \partial J_{g_iq}) -d(g_ip,\partial J_{g_iq})-h_{g_ip}
\geq c_{4} \eta h_{g_iq} -Ch_{n-1}-h_{g_ip} \geq c_{4} \eta h_{g_iq} -C\eta^2h_{g_iq}.
\end{align*}
Hence by taking $\eta$ sufficiently small, we reach the conclusion.
\end{proof}
\begin{lem}
\label{lem:rgammaprp}
There exists $C>1$ such that
\begin{equation*}
\frac{ h_{g_ip}}{C\eta^2 h_{g_iq}}\leq h_{g_i\gamma p}\leq \frac{Ch_{g_ip}}{\eta^2 h_{g_i q}}.
\end{equation*}
\end{lem}
Note that $(g_i\gamma g_i^{-1})g_iq=\infty$. This lemma can be shown by a straightforward computation using Lemmas~\ref{lem:rg1},~\ref{lem:rprq} and~\ref{lem:annulus}.
For any $m\geq n$, set
\begin{equation}
\label{equ:rpm}
\tilde{r}_{p,m}=\sqrt{\frac{h_mh_{g_i\gamma p}}{\eta h_{g_i q}}}.
\end{equation}
This serves as a replacement for the radius $\sqrt{\eta h_{m}h_p}$: consider the radius of the ball $g_i\gamma B(p,\sqrt{\eta h_{m} h_p})$. Using Lemmas~\ref{lem:explicit},~\ref{lem:height},~\ref{lem:bilip} and~\ref{lem:annulus} to estimate the derivative of $g_i\gamma g_i^{-1}$, and then Lemma~\ref{lem:rgammaprp}, one deduces that this radius equals $\tilde{r}_{p,m}$ up to a constant. The following lemma picks a constant $C_{6}>1$ so that the radius $\tilde{r}_{p,n}/C_{6}$ guarantees that the equivalence classes introduced later are well-defined. A further advantage of using $\tilde{r}_{p,n}/C_{6}$ is that it is independent of the choice of the point within an equivalence class.
\begin{lem}\label{lem:pjq}
There exists $C_{6}>1$ such that for any point $p'$, if $d(g_i\gamma p',g_i\gamma \partial J_q)\leq \tilde{r}_{p,n}/C_{6}$, then $d(p',\partial J_q)\leq h_n$.
\end{lem}
\begin{proof}
By Lemma~\ref{lem:rgammaprp},~\ref{lem:height} and~\ref{lem:rprq}, we have
\begin{equation}\label{equ:rpmless}
\tilde{r}_{p,n}\leq \frac{C\sqrt{\eta h_nh_{g_ip}}}{\eta^2h_{g_i q}}\leq \frac{Ch_{g_ip}}{\eta h_{g_iq}}\leq C\eta.
\end{equation}
Hence it follows from Lemma~\ref{lem:jp} that for any $C_{6}>1$, if $d(g_i\gamma p', g_i\gamma \partial J_q)\leq \tilde{r}_{p,n}/C_{6}$, then $$g_i\gamma p'\in B(x_{g_iq}, 2/(c_{4}\eta))-B(x_{g_iq},1/(2\eta)).$$
For any $x$ in the line segment between $g_i\gamma p'$ and $g_i\gamma \partial J_{q}$, we have $|(g_i\gamma^{-1} g_{i}^{-1})'x|\leq 4\eta^2 h_{g_iq}$.
By Lemma~\ref{lem:bilip} and \eqref{equ:rpmless}, we obtain
\begin{align*}
d(p',\partial J_q)\leq Cd(g_ip',g_i\partial J_{q})\leq C\eta^2 h_{g_iq} d(g_i\gamma p',g_i\gamma \partial J_q)\leq C\eta^2\tilde{r}_{p,n}h_{g_iq}/C_{6} \leq C\eta h_{g_i p}/C_{6}.
\end{align*}
By taking $C_{6}>1$ large enough, we have $d(p', \partial J_q)\leq h_n$.
\end{proof}
\bigskip{}
\textbf{Definition of equivalence classes}
Now we define equivalence classes in $Q_n$, by induction. For $Q_1$,
\begin{itemize}
\item for $p\in Q_1-Q_1'$, set the equivalence class $C(p)$ of $p$ to be $\{p\}$.
\item for $p\in Q_1'$, set
\begin{equation*}
C(p)=\{\gamma_1 p:\,\gamma_1 B(p,\sqrt{\eta h_1 h_p})\cap \partial \Omega_0\neq \emptyset,\,\,\, \gamma_1\in \Gamma_{\infty}\}.
\end{equation*}
\end{itemize}
Set
\begin{equation*}
Q_1'':=\cup_{p\in Q_1} C(p),
\end{equation*}
and for any $p\in Q_1''$ and $m\geq 1$, define
\begin{align*}
r_{p,m}=\sqrt{\eta h_m h_p},\ \ B_{p,m}=B(p,r_{p,m}).
\end{align*}
Suppose we have defined $Q_n''$. We define the equivalence classes in $Q_{n+1}$ and the set $Q_{n+1}''$ as follows:
\begin{itemize}
\item[Case 1] for $p\in Q_{n+1}'-\cup_{l\leq n}Q_l''$ such that the component in $\partial \Omega_n$ closest to $p$ is $\partial \Omega_0$, set
\begin{equation*}
C(p)=\{\gamma_1p:\, \gamma_1B(p,\sqrt{\eta h_{n+1}h_{p}})\cap \partial \Omega_0\neq\emptyset,\,\, \gamma_1\in \Gamma_{\infty}\}.
\end{equation*}
For any $p'\in C(p)$ and $m\geq n+1$, define
\begin{align*}
r_{p',m}=\sqrt{\eta h_m h_{p'}},\ \ B_{p',m}=B(p',r_{p',m}).
\end{align*}
\item[Case 2] for $p\in Q_{n+1}'-\cup_{l\leq n}Q_l''$ such that the component in $\partial \Omega_n$ closest to $p$ is some $J_{q}$, write $q=\gamma^{-1}p_i$ with $\gamma^{-1}$ a top representation of $q$. Let $\tilde{r}_{p,n}$ and $C_{6}$ be as given in (\ref{equ:rpm}) and Lemma~\ref{lem:pjq} respectively. Set
\begin{equation}
\label{equ:cp}
C(p)=\{(g_i\gamma)^{-1} \gamma_1g_i\gamma p:\, \gamma_1B(g_i\gamma p,\tilde{r}_{p,n}/C_{6})\cap \,g_i\gamma\partial J_q\neq\emptyset,\,\,\gamma_1\in (g_i\Gamma g_i^{-1})_{\infty}\}.
\end{equation}
If $C(p)=\emptyset$ under \eqref{equ:cp}, let $C(p)=\{ p\}$. In this case, the point $p$ is near the boundary but not within the distance given in \eqref{equ:cp}.
For any $p'\in C(p)$ and $m\geq n+1$, define
\begin{align}
\label{correct radius}
&r_{p',m}=\frac{1}{C_{6}}\sqrt{\frac{h_m h_{g_i\gamma p'}}{\eta h_{g_i q}}}\,\,\,(\text{which equals}\,\,\,r_{p,m}),\\
\label{correct ball}
&B_{p',m}=(g_i\gamma)^{-1}B(g_i\gamma p', r_{p',m}).
\end{align}
\item[Case 3] for $p\in Q_{n+1}- \cup_{l\leq n}Q_l''$ such that $p$ does not belong to the union of equivalence classes defined in the previous two cases, set $C(p)=\{p\}$ and for any $m\geq n+1$, define
\begin{align*}
r_{p,m}=\sqrt{\eta h_m h_p},\ \ B_{p,m}=B(p,r_{p,m}).
\end{align*}
\end{itemize}
Set
\begin{equation*}
Q_{n+1}''=\bigcup_{p\in Q_{n+1}-\cup_{l\leq n}Q_l''}C(p).
\end{equation*}
Then $\cup_{1\leq l\leq (n+1)}Q_l''\supset \cup_{1\leq l\leq (n+1)}Q_l$.
In the following discussion of points $p\in Q_{n}''$, if the definition of $p$ involves a boundary component of $\partial \Omega_{n-1}$, we will frequently need to refer to this boundary component. For simplicity, we call the boundary component used to define $p\in Q_{n}''$ \textbf{the associated boundary component of $p$}.
\bigskip{}
\textbf{Uniformity among equivalence classes}
For $p\in Q_{n}''$, its equivalence class $C(p)$ may contain points whose associated horospheres do not appear in the time interval $[n-1,n)$. In the following lemmas, we show that, up to a constant, the points in $C(p)$ are ``uniform''.
\begin{lem}
\label{lem:puniform}
There exists $C_{7}>1$ such that for any $p\in Q_{n}''$ and any $p'\in C(p)$ we have
\begin{equation*}
1/C_{7}\leq h_{p}/h_{p'}\leq C_{7}.
\end{equation*}
\end{lem}
It suffices to prove Lemma~\ref{lem:puniform} in the case when $\# C(p)\geq 2$ and the associated component of $p$ in $\partial \Omega_{n-1}$ is some $\partial J_q$. Write $q=\gamma^{-1}p_i$ with $\gamma^{-1}$ a top representation of $q$. Let $r_{p,m}$ and $B_{p,m}$ be defined as in \eqref{correct radius} and \eqref{correct ball} respectively. We first show the following estimate.
\begin{lem}[Location of balls]
\label{lem:eclocation}
There exists a constant $C>1$ such that
\begin{align}
&B(g_i\gamma p, r_{p,m})\subset B \left(x_{g_iq}, C/\eta\right)-B(x_{g_i q},1/(C\eta)),\label{inverseinclusion} \\
&g_iB_{p,m}\subset B(g_iq, C\eta h_{g_iq})-B(g_iq,\eta h_{g_i q}/C). \label{inclusion}
\end{align}
\end{lem}
\begin{proof}
We deduce from Lemma~\ref{lem:explicit} and \eqref{equ:jp} that
\begin{align*}
g_i\gamma g_i^{-1}( \partial J_{g_iq,\eta})\subset B(x_{g_iq},1/(c_{4}\eta))-B(x_{g_i q},1/\eta).
\end{align*}
It follows from \eqref{equ:rpmless} that there exists a point $p'\in C(p)$ such that $r_{p',n}\leq C\eta$. Meanwhile, the construction of the equivalence class $C(p)$ implies that $r_{p,m}=r_{p',m}$. Hence we obtain (\ref{inverseinclusion}). We use Lemma~\ref{lem:explicit} again to obtain (\ref{inclusion}).
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:puniform}]
We prove the following explicit estimate:
\begin{equation}
\label{bounded interval}
h_{g_i p}\approx \eta^2 h_{g_iq} h_{g_i\gamma p}.
\end{equation}
This together with Lemma~\ref{lem:height} will lead to Lemma~\ref{lem:puniform}. Note that $h_{g_i\gamma p}\leq C$, with $C$ a constant depending on $\Gamma$. Essentially we apply Lemma~\ref{lem:rg1} to the points $g_i q=(g_i\gamma^{-1}g_i^{-1})\infty$, $g_ip=(g_i \gamma^{-1} g_i^{-1}) g_i\gamma p$ and the map $g_i\gamma^{-1} g_i^{-1}$ to obtain
\begin{equation*}
\begin{split}
&h_{g_ip}\geq
\frac{h_{g_iq} h_{g_i\gamma p}}{d(g_i\gamma p, x_{g_iq})^2+h_{g_i\gamma p}^2} \geq \frac{\eta^2 h_{g_iq} h_{g_i\gamma p}}{C},\\
&h_{g_ip}\leq \frac{h_{g_iq} h_{g_i\gamma p}}{(d(g_i\gamma p, x_{g_iq})-h_{g_i\gamma p}/2)^2} \leq C\eta^2 h_{g_iq} h_{g_i\gamma p}.\qedhere
\end{split}
\end{equation*}
\end{proof}
\begin{lem}
\label{lem:universal}
There exists $C_{8}>1$ such that for any $p\in Q''_{n}$ and any $m\geq n$, the ball $B_{p,m}$ satisfies
\begin{equation*}
B(p, \sqrt{\eta h_p h_m}/C_{8})\subset B_{p,m} \subset B(p, C_{8}\sqrt{\eta h_p h_m}).
\end{equation*}
\end{lem}
\begin{proof}
It is enough to prove the case where $\# C(p)\geq 2$ and the associated component of $p$ in $\partial \Omega_{n-1}$ is some $\partial J_q$. Write $q=\gamma^{-1}p_i$ with $\gamma^{-1}$ a top representation of $q$.
Note that by Lemmas~\ref{lem:explicit} and~\ref{lem:eclocation}, for every $x\in g_iB_{p,m}$ we have $|(g_i\gamma g_i^{-1})' x|\approx 1/(\eta^2 h_{g_iq})$.
We use this derivative estimate and \eqref{bounded interval} to control the map $g_{i}\gamma^{-1}g_i^{-1}$ on $B(g_i\gamma p, r_{p,m})$ and obtain
\begin{equation*}
B(g_ip, \sqrt{\eta h_{g_ip}h_m}/C)\subset g_i B_{p,m}\subset B(g_ip, C\sqrt{\eta h_{g_ip}h_m}).
\end{equation*}
We use Lemma~\ref{lem:height} and~\ref{lem:bilip} to finish the proof.
\end{proof}
\bigskip{}
\textbf{Well-definedness of equivalence classes}
\begin{lem}\label{lem:welldefine}
Any two equivalence classes $C(p')$ and $C(p'')$ are either identical or disjoint.
\end{lem}
\begin{proof}
Suppose for contradiction that the two equivalence classes are distinct but their intersection is nonempty.
\medskip{}
\textbf{Case 1}:
Suppose one of the two equivalence classes consists of a single point,
say $\#C(p')=1$ and $\#C(p'')\geq 2$. We may assume that $p''\in Q_n'$ for some $n$.
In view of the construction of equivalence classes, we assume further that the associated component of $p''$ is some $\partial J_q$; write $q=\gamma^{-1}p_i$.
The same argument works for $\partial\Omega_0$. As $p'$ belongs to the equivalence class $C(p'')$, it follows from the definition that
\begin{equation}
\label{eqn:welldef}
B(g_i\gamma p', r_{p',n})\cap g_i \gamma \partial J_q\neq \emptyset,\,\,\, B(g_i\gamma p'', r_{p'',n})\cap g_i \gamma \partial J_q\neq \emptyset,
\end{equation}
where $r_{p',n}$ is defined as in \eqref{correct radius} and equals $r_{p'',n}$.
The fact that $C(p')$ consists only of $p'$ implies $p'\in Q_l-\cup_{i<l}Q_i''$ for some $l<n$. Meanwhile, as $\partial J_q$ is the associated component of $p''$, by Lemmas~\ref{lem:rprq} and~\ref{lem:puniform}, we have
\begin{equation*}
h_{q}\geq h_{p''}/(C\eta^2)\geq h_{p'}/(C\eta^2)\geq h_l/(C\eta^3).
\end{equation*}
Hence $\partial J_q\subset \partial\Omega_l$.
It follows from (\ref{eqn:welldef}) and Lemma~\ref{lem:pjq} that
$$d(p',\partial J_q)\leq h_n< \sqrt{\eta h_l h_{p'}}.$$
So $p'\in Q_l'-\cup_{i<l}Q_i''$. \eqref{eqn:welldef} yields
\begin{equation*}
B(g_i\gamma p',r_{p',l})\cap g_i\gamma \partial J_q\neq \emptyset,\,\,\, B(g_i\gamma p'',r_{p'',l})\cap g_i\gamma \partial J_q\neq \emptyset.
\end{equation*}
As $l<n$, $C(p')$ contains $p''$, which is a contradiction.
\textbf{Case 2}: Suppose that $\# C(p'),\,\#C(p'')\geq 2$. Without loss of generality, we may assume that $p'\in Q_m'-\cup_{l<m}Q_l''$, $p''\in Q_n'-\cup_{l<n}Q_l''$ and $m\leq n$.
Let $p\in C(p')\cap C(p'')$. Then it follows from the construction of equivalence classes and Lemma~\ref{lem:pjq} that there are boundary components $\partial_1$ and $\partial_2$ in $\partial \Omega_{n-1}$ such that
\[d(p,\partial_1)\leq h_{m-1},\,\,\,\ d(p,\partial_2)\leq h_{n-1}. \]
On the one hand, as $\partial_1$ and $\partial_2$ are in $\partial \Omega_{n-1}$, if they are distinct, Lemma~\ref{lem:separation} states that their distance is greater than $h_{n-1}/(2\eta)$. On the other hand, using Lemma~\ref{lem:puniform}, we obtain
\begin{equation*}
h_n/h_m\geq h_{p''}/(eh_{p'})=(h_{p'}/h_{p})(h_{p}/h_{p''})\geq 1/(eC_{7}^2).
\end{equation*}
Then
\begin{equation*}
d(\partial_1,\partial_2)\leq h_{m-1}+h_{n-1}\leq (1+eC_{7}^2)h_{n-1}<h_{n-1}/(2\eta).
\end{equation*}
We conclude that $\partial_1=\partial_2$.
There are two possibilities for $\partial_1$. One possibility is that $\partial_1$ is some $\partial J_q$. Write $q=\gamma^{-1}p_i$ with $\gamma^{-1}$ a top representation of $q$. As $g_i\gamma p$ is related to $g_i\gamma p'$ and $g_i\gamma p''$ by elements of $(g_i\Gamma g_i^{-1})_{\infty}$, we have $\gamma_1 g_i\gamma p''=g_i\gamma p'$ for some $\gamma_1\in (g_i\Gamma g_i^{-1})_{\infty}$. As $m\leq n$, we have
\begin{equation*}
\emptyset\neq B(g_i\gamma p'', r_{p'',n})\cap g_i\gamma \partial J_{q}\subset B(g_i\gamma p'', r_{p'',m})\cap g_i \gamma \partial J_{q}.
\end{equation*}
As a result, we have $C(p')= C(p'')$.
The other possibility is that $\partial_1=\partial \Omega_0$. It follows directly from the construction of equivalence classes that
\begin{equation*}
\emptyset\neq B(p'', r_{p'',n})\cap \partial \Omega_0\subset B(p'', r_{p'',m})\cap \partial \Omega_0.
\end{equation*}
Hence $C(p')=C(p'')$.
\end{proof}
\subsection{Auxiliary sets $A_n$ and $B_n$ in $\Omega_n$}
\label{sec:auxiliary sets}
We introduce auxiliary sets $A_n$ and $B_n$ in $\Omega_n$. By Lemma~\ref{lem:welldefine}, the set $Q_n''$ is disjoint from $\cup_{1\leq l\leq n-1}Q_l''$. For any $p\in Q_n''$ and any $m\geq n$, we have defined the ball $B_{p,m}$.
Note that it follows from the construction of $Q_{n}''$ that if $\#C(p)=1$, then the full ball $B_{p,n}$ is contained in $\Omega_n$.
For each $n$, we define
\begin{align*}
B_{n}=\Omega_n\cap \bigcup_{p\in \cup_{1\leq l\leq n}Q_l''} B_{p,n}\,\,\,\text{and}\,\,\,\ \ A_{n}=\Omega_{n}-B_{n}.
\end{align*}
\subsubsection*{$B_n$ and cusps}
We establish the relation between the set $B_n$ and the points in the cusps at time $t=n$.
\begin{lem}\label{lem:np}
For $x\in\widetilde{\Omega_0}$, if $\mathcal{G}_n x\in \mathcal C_\eta$, then there exists a parabolic fixed point $p$ with $\eta h_p>h_n$ such that
\[d(\pi(x),p)<\sqrt{\eta h_ph_n}. \]
\end{lem}
\begin{proof}
By assumption, in the universal cover $\operatorname{T}^1(\mathbb{H}^{d+1})$, the point $\mathcal G_n x$ is contained in a horoball $H_p(\eta)$. Hence $h_n<\eta h_p$. If $h_n\leq \eta h_p/2$, then we can use the Pythagorean theorem to conclude that $d(\pi(x),p)\leq \sqrt{\eta h_ph_n}$ (see Figure~\ref{fig:radius}). If $h_n\geq \eta h_p/2$, then $d(\pi(x),p)\leq \eta h_p/2\leq \sqrt{\eta h_ph_n}$.
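For completeness, the Pythagorean step can be written out in the upper half-space model (see Figure~\ref{fig:radius}): let $O$ be the Euclidean center of the horoball $H_p(\eta)$, which has Euclidean radius $\eta h_p/2$. Since $\mathcal G_n x$ lies in the horoball at Euclidean height $h_n\leq \eta h_p/2$, on the vertical line through $\pi(x)$, we obtain
\begin{equation*}
d(\pi(x),p)^2+\Big(\frac{\eta h_p}{2}-h_n\Big)^2\leq \Big(\frac{\eta h_p}{2}\Big)^2,
\qquad\text{so}\qquad
d(\pi(x),p)^2\leq \eta h_p h_n-h_n^2\leq \eta h_p h_n.
\end{equation*}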
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=2]
\draw [domain=0:2*pi, samples=200] plot ({cos(\x r)},{sin(\x r)});
\draw (-2,-1) -- (2,-1);
\draw (0,0) node[above]{$O$};
\draw [fill] (0,0) circle [radius=.02];
\draw (0,-1/2) node[above left]{$D$};
\draw [fill] (0,-1/2) circle [radius=.02];
\draw (-{sqrt(3)/2},-1/2) -- ({sqrt(3)/2},-1/2);
\draw (0,-1) node[below]{$p$};
\draw [fill] (0,-1) circle [radius=.02];
\draw (0,0) -- (0,-1);
\draw (0,0) -- ({sqrt(3)/2},-1/2);
\draw ({sqrt(3)/2},-1/2) node[right]{$C$};
\draw (-2/3,-1/2) node[above left]{$\mathcal G_nx$};
\draw [fill] (-2/3,-1/2) circle [radius=.02];
\draw (-2/3,-1) node[below left]{$\pi(x)$};
\draw [fill] (-2/3,-1) circle [radius=.02];
\draw [->, dashed] (-2/3,1.5) -- (-2/3,-1);
\draw (-2/3,1.5) node[left]{$x$};
\draw [fill] (-2/3,1.5) circle [radius=.02];
\end{tikzpicture}
\end{center}
\caption{Radius}\label{fig:radius}
\end{figure}
\begin{lem}\label{lem:cuspBn}
Fix $c_{9}<\min\{1/C_{5},1/C_{8}^2\}$, where $C_{5}$ and $C_{8}$ are constants given in \eqref{equ:jp} and Lemma \ref{lem:universal} respectively. For any $x\in\widetilde{\Omega_n}$, if $\mathcal{G}_n x\in \mathcal C_{c_{9}\eta}$, then $\pi(x)\in B_n$.
\end{lem}
\begin{proof}
For $x\in \widetilde{\Omega_n}$, if $\mathcal{G}_n x\in\mathcal C_{c_{9}\eta}$, then it follows from Lemma~\ref{lem:np} that there exists a parabolic fixed point $p$ with $c_{9}\eta h_p>h_n$ such that
\begin{equation}\label{equ:pix}
d(\pi(x),p)<\sqrt{c_{9}\eta h_p h_n}\leq \eta h_p.
\end{equation}
By the definition of $P_n$ and $Q_n$, this $p$ must belong to $\bigcup_{j< n}(P_j\cup Q_j)$.
If $p$ is in some $P_j$, then by (\ref{equ:jp}) we have
\[ \eta h_p/C_{5}<d(\pi(x),p), \]
contradicting the assumption that $c_{9}<1/C_{5}$. So $p$ must be in some $Q_j$. We use the construction of $B_n$, Lemma~\ref{lem:universal} and \eqref{equ:pix} to conclude that $\pi(x)\in B_n$.
\end{proof}
\subsubsection*{Parabolic fixed points, $B_n$ and different generations}
\begin{lem}\label{lem:Pn}
We have $P_{n+1}\cap(\cup_{l\leq n} Q_l'')=\emptyset$.
\end{lem}
\begin{proof}
If not, suppose $p\in P_{n+1}$ is also contained in an equivalence class $C(p')$ with $p'\in Q_m'-(\cup_{1\leq l\leq m-1}Q_l'')$ and $m\leq n$. Let $\partial$ be the associated boundary component of $p'$. If $\partial$ is some $\partial J_q$, we use Lemma~\ref{lem:pjq} to deduce that $d(p,\partial J_q)\leq h_m$. If $\partial$ is $\partial \Omega_0$, we use the construction of the equivalence classes to obtain $d(p, \partial \Omega_0)<h_m$. By Lemma~\ref{lem:puniform}, we have $h_m/h_n\leq h_{p'}/h_p\leq C_{7}$. Hence by the definition of $P_{n+1}$,
\begin{equation*}
d(p,\partial)\geq h_n/(2\eta)\geq h_m/(2C_{7}\eta)> h_m,
\end{equation*}
which is a contradiction.
\end{proof}
\begin{lem}\label{lem:pbn}
There exists a constant $0<c_{10}<1$ such that for any $p\in P_{n+1}\cup Q_{n+1}''$, we have
\begin{equation}\label{equ:pbn}
d(p,B_n)\geq c_{10}h_n/\eta.
\end{equation}
\end{lem}
\begin{proof}
Let $p\in P_{n+1}\cup Q_{n+1}''$ and $B_{q,n}$ be a ball in $B_n$. By Lemma~\ref{lem:Pn}, $p$ and $q$ are two different parabolic fixed points. We have
\begin{align*}
d(p, B_{q,n})&\geq d(p,q)-C\sqrt{\eta h_q h_n}\,\,\,\,\,\,\,\,\,\,\,\, \text{(by Lemma~\ref{lem:universal})}\\
&\geq \sqrt{h_ph_q}-C\sqrt{\eta h_q h_n}\,\,\,\,\,\,\,\,\,\,\,\, \text{(by Lemma~\ref{lem:pdistance})}\\
&=\sqrt{h_q}(\sqrt{h_p}-C\sqrt{\eta h_n})\\
&\geq \sqrt{h_n/\eta}\left(\sqrt{h_n/(e\eta)}-C\sqrt{\eta h_n}\right)\,\,\,\,\,\,\,\,\,\,\,\, \text{(by Lemma~\ref{lem:universal})}\\
&\geq h_n/(C\eta).\qedhere
\end{align*}
\end{proof}
Recall that $D_{n+1}=\cup_{p\in P_{n+1}} J_p$ and $\Omega_{n+1}=\Omega_n-D_{n+1}$.
\begin{lem}\label{lem:fullball}
1. We have the following:
\begin{align}
\label{equ:generation}
D_{n+1}\cap B_n&=\emptyset\,\,\,\text{and}\,\,\, \left(\cup_{p\in Q''_{n+1}} B_{p,n+1}\right)\cap B_n=\emptyset,
\\\nonumber
A_{n+1}&=(A_n-D_{n+1}-A_n\cap B_{n+1})\cup(A_{n+1}\cap B_n),
\\
\label{equ:anbn}
A_n\cap B_{n+1}&=\cup_{p\in Q_{n+1}''}(B_{p,n+1}\cap\Omega_{n+1}),\ A_{n+1}\cap B_n=B_n-B_{n+1}.
\end{align}
2. For $p\in Q_n''$ and $m\geq n$, we have $B_{p,m}\cap \Omega_m=B_{p,m}\cap\Omega_n$.
\end{lem}
\begin{proof}
We obtain \eqref{equ:generation} using Lemmas~\ref{lem:jp},~\ref{lem:universal} and~\ref{lem:pbn}. The rest of the first statement can be obtained easily.
For $m>l\geq n$, by $D_{l+1}\cap B_l=\emptyset$ and $B_{p,m}\cap \Omega_l\subset B_{l}$, we know that
\[ B_{p,m}\cap\Omega_{l+1}=B_{p,m}\cap (\Omega_{l}-D_{l+1})=B_{p,m}\cap\Omega_{l}, \]
which implies the second part of the statement.
\end{proof}
\subsection{Energy exchange argument}\label{sec:energy}
We are ready to prove Proposition \ref{keylemma}.
\begin{lem}
\label{lem:sub2}
There exists $c_{11}>0$ such that
\begin{equation}
\label{equ:sub2}
\mu(B_{n}\cap A_{n+1})>c_{11}\mu(B_{n}).
\end{equation}
\end{lem}
The definition of equivalence classes is used mainly in this lemma. The idea is that the left-hand side of \eqref{equ:sub2} can be expressed as a sum over equivalence classes, and over each equivalence class we obtain a full ball whose measure we can estimate.
\begin{proof}[Proof of Lemma \ref{lem:sub2}]
We claim that for any two distinct $p,p'\in \cup_{1\leq l\leq n}Q''_l$, we have $B_{p,n}\cap B_{p',n}=\emptyset$. The first equation in Lemma~\ref{lem:fullball} verifies the case when $p\in Q''_l$ and $p'\in Q''_j$ with $l\neq j$.
When $p,p'\in Q''_l$, using Lemmas~\ref{lem:pdistance} and~\ref{lem:universal}, we have
\begin{align*}
&d(B_{p,l}, B_{p',l})\geq d(p,p')-C\sqrt{\eta h_p h_l}-C\sqrt{\eta h_{p'}h_l}\\
\geq & \sqrt{h_p h_{p'}}-C\sqrt{\eta h_p h_l}-C\sqrt{\eta h_{p'}h_l}
\geq h_l/(C\eta) -2Ch_{l-1}>0,
\end{align*}
showing the claim.
So $\mu(B_n\cap A_{n+1})$ can be written as a sum over $p\in \cup_{1\leq l\leq n}Q''_l$. Using Lemma~\ref{lem:welldefine}, we can further group the sum into equivalence classes. Due to \eqref{equ:anbn}, $\mu(B_n\cap A_{n+1})=\mu(B_n-B_{n+1})$. Then the proof of \eqref{equ:sub2} is reduced to showing that there exists $c_{11}>0$ such that for each equivalence class $C(p)$, we have
\[ \sum_{p'\in C(p)} \mu((B_{p',n}-B_{p',n+1})\cap\Omega_n)\geq c_{11} \sum_{p'\in C(p)}\mu(B_{p',n}\cap \Omega_n).\]
We first consider the equivalence classes defined in Case 1 and Case 3 on page \pageref{correct radius}.
Then by the definition of equivalence classes and the quasi-invariance of the PS measure, we obtain
\begin{align*}
\frac{\sum_{p'\in C(p)} \mu((B_{p',n}-B_{p',n+1})\cap\Omega_n)}{\sum_{p'\in C(p)}\mu(B_{p',n}\cap \Omega_n)}\geq \frac{\mu(B_{p,n}-B_{p,n+1})}{C\mu(B_{p,n})}=\frac{\mu(B(p,r_{p,n})-B(p,r_{p,n+1}))}{C\mu(B(p,r_{p,n}))}\geq \frac{c}{C},
\end{align*}
where the last inequality follows from Lemma~\ref{lem:rre}.
Next we consider the equivalence classes defined in Case 2 on page \pageref{correct radius}. Suppose the associated boundary component of $p$ is $\partial J_{q}$ with $q=\gamma^{-1}p_i$, where $\gamma^{-1}$ is a top representation of $q$. We first assume that $p_i=\infty$. By Lemma~\ref{lem:annulusquasi} and \eqref{inclusion}, for any Borel subset $E\subset B_{p,n}$, we have
\begin{align*}
h^{\delta}_q\mu(\gamma E)/C\leq\mu(E)\leq Ch^{\delta}_q\mu(\gamma E).
\end{align*}
We have
\begin{align}
\label{energyba}
&\frac{\sum_{p'\in C(p)} \mu((B_{p',n}-B_{p',n+1})\cap\Omega_n)}{\sum_{p'\in C(p)}\mu(B_{p',n}\cap \Omega_n)}
\geq \frac{\sum_{p'\in C(p)} \mu((B (\gamma p',r_{p',n})-B(\gamma p',r_{p',n+1}))\cap\gamma J_q^c)}{C\sum_{p'\in C(p)}\mu(B (\gamma p',r_{p',n})\cap \gamma J_q^c)}
\end{align}
For each $p'\in C(p)$, we can write $p'=\gamma^{-1} \gamma_1 \gamma p$ with $\gamma_1\in \Gamma_{\infty}$. We have
\begin{align*}
&\mu (B (\gamma p',r_{p',n})\cap \gamma J_q^c)=\mu (\gamma_1 B (\gamma p, r_{p,n})\cap \gamma J_q^c)
\approx \mu (B (\gamma p, r_{p,n})\cap \gamma^{-1}_1\gamma J_q^c),
\end{align*}
where we use the quasi-invariance of the PS measure and (\ref{inverseinclusion}) to compute the derivative of $\gamma_1$. Summing over $p'\in C(p)$, we obtain a full ball. Similarly, we have
\begin{align*}
\mu((B (\gamma p', r_{p',n})-B (\gamma p',r_{p',n+1}))\cap \gamma J_q^c)
\approx \mu((B(\gamma p, r_{p,n})-B(\gamma p,r_{p,n+1}))\cap \gamma_1^{-1}\gamma J_q^c).
\end{align*}
We use these two observations and Lemma~\ref{lem:rre} to conclude
\begin{align*}
(\ref{energyba})\geq &\frac{\mu(B(\gamma p, r_{p,n})-B(\gamma p, r_{p,n+1}))}{C\mu(B (\gamma p, r_{p,n}))}\geq \frac{c}{C}.
\end{align*}
For general $p_i$, let $g_ip_i=\infty$ and $\Gamma_i=g_i\Gamma g_i^{-1}$. Using \eqref{conjugation}, we obtain
\begin{align}
\label{energyba2}
\frac{\sum_{p'\in C(p)} \mu((B_{p',n}-B_{p',n+1})\cap \Omega_n)}{\sum_{p'\in C(p)} \mu (B_{p',n}\cap \Omega_n)}
\approx\frac{\sum_{p'\in C(p)} \mu_{\Gamma_i}(g_i(B_{p',n}-B_{p',n+1})\cap g_i\Omega_n)}{\sum_{p'\in C(p)} \mu_{\Gamma_i} (g_iB_{p',n}\cap g_i\Omega_n)}.
\end{align}
This fraction can be estimated in the same way as (\ref{energyba}). So
\begin{equation*}
(\ref{energyba2})\geq c/C.\qedhere
\end{equation*}
\end{proof}
Let $C_{12}=2C_{5} C_{3}+4C_{8}$, where $C_{3}$, $C_{5}$ and $C_{8}$ are constants given by Proposition \ref{double}, \eqref{equ:jp} and Lemma \ref{lem:universal} respectively.
Let
\begin{equation*}
\Omega_n'=\{x\in\Omega_n:\ d(x,\partial\Omega_n)\leq C_{12}h_n \}.
\end{equation*}
This is the set of points whose distance to the boundary of $\Omega_{n}$ is at most $C_{12} h_n$.
\begin{lem}[Boundary estimate]\label{lem:bou}
There exists $c_{13}>0$ depending on $C_{12}\eta$ such that
\begin{equation*}
\mu(\Omega_n')\leq c_{13}\mu(\Omega_n)
\end{equation*}
and $c_{13}$ tends to 0 as $C_{12}\eta$ tends to 0.
\end{lem}
\begin{proof}
The boundary $\partial\Omega_n$ consists of
$\partial\Omega_0$ and $\partial J_p$ with $p\in\cup_{1\leq l\leq n}P_l$. For any $p\in \cup_{1\leq l\leq n}P_l$, write $p=\gamma p_i$ with $\gamma\in \Gamma$ a top representation of $p$ and $\Gamma_i=g_i\Gamma g_i^{-1}$. Recall the definitions \eqref{equ:def neighborhood1} and \eqref{equ:def neighborhood 2}. Note that $h_{n}/(4\eta)\leq h_p$. It follows from Lemmas~\ref{lem:height} and~\ref{lem:bilip} that there exists $C>1$ such that $h_{n}/(C\eta)<h_{g_ip}$ and
\begin{equation*}
g_iN_{C_{12}h_n}(\partial J_p)\subset N_{CC_{12} h_n}(\partial J_{g_ip}),\,\,\, N_{h_n/(C\eta)}(\partial J_{g_ip})\subset g_i N_{h_n/(4\eta)}(\partial J_p).
\end{equation*}
It follows from \eqref{conjugation}, Lemmas \ref{lem:boud} and
\ref{lem:jpr} that there exists $c>0$ such that
\begin{align*}
&\mu(\Omega_n')=\mu(N_{C_{12} h_n}(\partial{\Omega_0}))+\sum_{p\in \cup_{1\leq l\leq n}P_l}\mu(N_{C_{12}h_n}(\partial J_p))\\
\leq &c\mu(N_{h_n/(4\eta)}(\partial \Omega_0))+C'\sum_{p\in \cup_{1\leq l\leq n}P_l}\mu_{\Gamma_i}(N_{CC_{12}h_n}(\partial J_{g_ip}))\\
\leq &c\mu(N_{h_n/(4\eta)}(\partial \Omega_0))+cC'\sum_{p\in \cup_{1\leq l\leq n}P_l}\mu_{\Gamma_i}(N_{h_n/(C\eta)}(\partial J_{g_ip}))\\
\leq &c\mu(N_{h_n/(4\eta)}(\partial \Omega_0))+cC'^2\sum_{p\in \cup_{1\leq l\leq n}P_l}\mu(N_{h_n/(4\eta)}(\partial J_p))
\leq cC'^2\mu(\Omega_n),
\end{align*}
where the last inequality is due to Lemma~\ref{lem:separation} and $C'=\max_{i}e^{\delta d(o,g_io)}$.
\end{proof}
\begin{lem}
\label{lem:sub1i}
There exists $0<c_{14}<1$ such that
\[\mu(A_n\cap (D_{n+1}\cup B_{n+1} ))\leq c_{14}\mu(A_n)+\mu(\Omega_n'). \]
\end{lem}
\begin{proof}
By Lemma~\ref{lem:fullball}, we have
\begin{equation*}
A_n\cap B_{n+1}\cap (\Omega_n-\Omega_n')\subset \bigcup_{p\in Q_{n+1}''}B_{p,n+1}.
\end{equation*}
We consider the points $p\in Q_{n+1}''$ such that $B_{p,n+1}$ intersects the set on the left. Denote the set of such points by $Q_{n+1}'''$. By Lemma~\ref{lem:universal}, we have
\begin{equation*}
B_{p,n+1}\subset B(p,C_{8}\sqrt{\eta h_ph_{n+1}})\subset B(p,C_{8}h_n).
\end{equation*}
Then the distance from $B_{p,n+1}$ to $\partial\Omega_n$ is greater than $(C_{12}-2C_{8})h_n\geq C_{12}h_n/2$. So $Q_{n+1}'''$ must be a subset of the points in Case 3 on page \pageref{correct radius},
and $B_{p,n+1}=B(p,\sqrt{\eta h_ph_{n+1}})\subset B(p,h_n)$.
For $p\in P_{n+1}$, by \eqref{equ:jp}, we have $J_p\subset B(p,C_{5}\eta h_p)\subset B(p,C_{5}h_n)$.
For points in the set $P_{n+1}\cup Q_{n+1}'''$, by a computation similar to that in Lemma~\ref{lem:separation}, the balls $B(p,C_{5}h_n)$ are at distance at least $h_{n+2}/(2\eta)$ from each other.
By \eqref{equ:pbn}, for $p\in P_{n+1}\cup Q_{n+1}'''$
\begin{equation}\label{equ:Hp}
d(p,B_n)\geq c_{10}h_n/\eta\geq C_{12}h_n/2.
\end{equation}
Hence
\[B(p, C_{5}h_{n})\subset B(p,C_{12} h_n/2)\subset A_n. \]
Then by the doubling property in Proposition~\ref{double},
\[\mu(B(p, C_{5}h_{n}))\leq c_{14}\mu(B(p,C_{12} h_n/2)). \]
The balls $B(p,C_{12}h_n/2)$ are disjoint.
Summing over them, we obtain
\begin{align*}
&\mu(A_n\cap(D_{n+1}\cup B_{n+1})-\Omega_n')\leq \sum_{p\in P_{n+1}\cup Q_{n+1}'''}\mu(B(p,C_{5}h_n))\\
\leq& c_{14} \sum_{p\in P_{n+1}\cup Q_{n+1}'''}\mu(B(p,C_{12}h_n/2))\leq c_{14}\mu(A_n).\qedhere
\end{align*}
\end{proof}
\bigskip{}
Set $A_n'=A_n-\Omega_n'$, the set of points in $A_n$ whose distance to the boundary $\partial\Omega_n$ is greater than $C_{12}h_n$.
\begin{lem}
\label{lem:sub1ii}
There exist $N$ and $c_{15}>0$ depending on $\eta$ such that
\begin{equation*}
\mu(\cup_{l=1}^N D_{n+l})\geq c_{15}\mu(A_n').
\end{equation*}
\end{lem}
Let $\widetilde{A_n}$ be the subset of $\widetilde{\Omega_n}$ such that $\pi(\widetilde{A_n})=A_n$. The key point of the proof is that we can use the recurrence property of the geodesic flow on $\mathcal G_n(\widetilde{A_n})$, since Lemma~\ref{lem:cuspBn} tells us that $\mathcal G_n(\widetilde{A_n})$ stays in a compact subset. We introduce some notation. We assumed that there are $j$ cusps in $M$ and that $\{p_i\}_{1\leq i\leq j}$ is a complete set of inequivalent parabolic fixed points. We used the notation $H_{p_i}$ to denote the horoball based at $p_i$. Let $H^s_{p_i}\subset \operatorname{T}^1(\mathbb{H}^{d+1})$ be the strong stable horosphere, that is,
\begin{equation*}
H_{p_i}^s:=\{x\in \operatorname{T}^1(H_{p_i}):\,\text{the basepoint of}\,x\,\text{is at}\,\partial H_{p_i}\,\text{and}\,\pi(x)=p_i\}.
\end{equation*}
By abusing notation, we also use $H_{p_i}^s$ to denote its image in the quotient space $\operatorname{T}^1(M)$.
For every $x\in \operatorname{T}^1(\mathbb{H}^{d+1})$ and $\epsilon>0$, set $W^u(x,\epsilon)$ to be the local strong unstable manifold at $x$, that is,
\begin{equation*}
W^u(x,\epsilon):=\{y\in\operatorname{T}^1(\mathbb{H}^{d+1}):\,\lim_{t\to -\infty}d(\mathcal{G}_{t}x,\mathcal{G}_ty)=0,\ d^u(y,x)\leq \epsilon\},
\end{equation*}
where $d(\cdot,\cdot)$ is the Riemannian metric on $\operatorname{T}^1(\mathbb{H}^{d+1})$ and $d^u(\cdot,\cdot)$ is the Riemannian metric restricted on the strong unstable manifold.
Denote by $W\subset\operatorname{T}^1(M)$ the non-wandering set of the geodesic flow.
\begin{lem}\label{lem:recurrence}
Let $K$ be any compact subset in $W$. Then there exists $U_0>0$ such that for every $x$
in $K$ and every $H^s_{p_i}$ in $\operatorname{T}^1(M)$, there exists a time $t\in[0,U_0]$ such that $\mathcal G_t(W^u(x,1))$ meets $H^s_{p_i}$.
\end{lem}
\begin{proof}
Let $\epsilon<1/10$ and consider $Z_1=\cup_{x\in \partial H^s_{p_i}}W^u(x,\epsilon)$ and $Z_2=\cup_{x\in \partial H^s_{p_i}}W^u(x,5\epsilon)$ in $\operatorname{T}^1(M)$. Then $Z_1$ is a transversal section to the geodesic flow. By ergodicity of the geodesic flow on the non-wandering set $W$, there exists a point $y$ whose negative-time orbit is dense, and there exists $t_0\geq 0$ such that
$\mathcal G_{t_0} y\in Z_1 $.
We can cover the compact set $K$ with a finite number of balls of radius $\epsilon$. There exists $t_1>0$ such that $\mathcal G_{[-t_1,0]}y$ intersects every ball.
Fix any $x$ in $K$. There exist $x'\in W^u(x,\epsilon)$ and $s\in [0,t_1]$ such that $x'$ and $\mathcal G_{-s}y$ are on the same strong stable manifold (that is to say, $\lim_{t\to \infty}d(\mathcal{G}_t x', \mathcal{G}_t (\mathcal{G}_{-s}y))=0$) and $d(x',\mathcal{G}_{-s}y)\leq 2\epsilon$. Therefore
\[d(\mathcal G_{s+t_0}x',\mathcal G_{t_0} y)\leq 2\epsilon. \]
Using $\mathcal G_{t_0} y\in Z_1$ and the local product structure, we have $\mathcal G_{s+t_0}x'\in \mathcal G_{s_1}Z_2$ for some $s_1\in[-\epsilon,\epsilon]$. By the definition of $Z_2$, we can find $x''\in W^u(x,6\epsilon)$ such that $\mathcal G_{s+t_0-s_1}x''\in H^s_{p_i}$.
\end{proof}
The following lemma is a straightforward corollary of Lemma~\ref{lem:recurrence}. Recall that $c_{9}>0$ is the constant given in Lemma \ref{lem:cuspBn}. Let $K_{c_{9}\eta}=W-\mathcal C_{c_{9}\eta}$. The base of the non-wandering set in $M$ is the convex core $C(M)$, and the base of $\mathcal C_{c_{9}\eta}$ is a union of proper horocusps. By Definition~\ref{def:geofinite}, we know that $K_{c_{9}\eta}$ is compact.
\begin{lem}\label{lem:parabolic}
Let $U_0$ be the constant in Lemma~\ref{lem:recurrence} with $K=K_{c_{9}\eta}$. For every $x$ in $\Delta_0\cap\Lambda_\Gamma$ and $n\in\mathbb N$, if $\mathcal G_n \tilde{x}$ is in $K_{c_{9}\eta}$, where $\tilde{x}$ is the point in $\widetilde{\Omega_0}$ such that $\pi(\tilde{x})=x$, then the ball $B(x,h_n)$ contains a parabolic fixed point with height in $h_n[e^{-U_0},1]$.
\end{lem}
\begin{proof}
Let $\tilde{B}$ be the set in $\widetilde{\Omega_0}$ such that $\pi(\tilde{B})=B(x,h_n)$. We have $\mathcal G_n\tilde{B}=W^u(\mathcal G_n\tilde{x},1)$. As $\mathcal G_n\tilde{x}\in K_{c_{9}\eta}$, by Lemma~\ref{lem:recurrence}, there exists $t\in[0,U_0]$ such that $\mathcal G_tW^u(\mathcal G_n\tilde{x},1)$ intersects some $H^s_{p_i}$. Hence in the universal cover $\operatorname{T}^1(\mathbb{H}^{d+1})$, the unstable leaf $\mathcal G_tW^u(\mathcal G_n\tilde{x},1)$ is tangent to a horoball. Let $q$ be the basepoint of the horoball. Then $q$ is in $B(x,h_n)$.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:sub1ii}}]
Set $N=U_0+2\lfloor -\log \eta\rfloor+2$. We claim that there exists $C'>1$ depending on $\eta$ such that $\mathop{\cup}_{1\leq l\leq N}P_{n+l}$ is a $C'h_n$-dense set in $A_n'\cap\Lambda_\Gamma$; that is, for every $x\in A_n'\cap\Lambda_\Gamma$, there exists some $p\in \bigcup_{1\leq l\leq N}P_{n+l}$ such that $d(x,p)\leq C'h_n$.
\medskip{}
Let $k=\lfloor -\log \eta\rfloor$. Fix any point $x\in A_n'\cap\Lambda_\Gamma$. We consider the position of $x$ in $\Omega_{n+k}$.
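As a side remark on this choice of $k$ (a sketch, under the normalization $h_m=e^{-m}$, which is consistent with the factors of $e$ appearing in the earlier estimates):
\begin{equation*}
e^{-k}=e^{-\lfloor -\log\eta\rfloor}\leq e\eta,
\qquad\text{so}\qquad
\frac{3h_{n+k}}{\eta}=\frac{3e^{-k}h_n}{\eta}\leq 3e\,h_n,
\end{equation*}
which is why $3h_{n+k}/\eta$ is comparable to $h_n$ in Case 2 below.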
\begin{itemize}
\item[Case 1] Suppose $x\notin \Omega_{n+k}$. Then $x\in J_{p}$ for some $p\in \cup_{1\leq l\leq k}P_{n+l}$. So $d(x,p)\leq C_{5}\eta h_p\leq C_{5}h_n$.
\item[Case 2] Suppose $x\in \Omega_{n+k}$ and $d(x,\partial \Omega_{n+k})<3h_{n+k}/\eta$. As $x\notin \Omega_n'$, we have $d(x,\partial \Omega_n)\geq C_{12}h_n$. Meanwhile, we have $3h_{n+k}/\eta< C_1 h_n$. Consequently, the connected component of $\partial \Omega_{n+k}$ closest to $x$ is some $\partial J_p$ with $p\in \cup_{1\leq l\leq k} P_{n+l}$. Hence
\begin{equation*}
d(x,p)\leq d(x,\partial J_p)+d(\partial J_p,p)\leq 3h_{n+k}/\eta+C_{5} \eta h_p\leq Ch_n.
\end{equation*}
\item[Case 3]
Suppose $x\in A_{n+k}\cap\Lambda_\Gamma$ and $d(x,\partial\Omega_{n+k})>2h_{n+k}/\eta$. By Lemma~\ref{lem:cuspBn}, $\mathcal{G}_{n+k}\tilde x\in K_{c_{9}\eta}$. It then follows from Lemma~\ref{lem:parabolic} that $B(x, h_{n+k})$ contains a parabolic fixed point $p$ with height in $h_{n+k}[e^{-U_0},1]$. Let $j=\lfloor-\log (\eta h_p)\rfloor$; then $j\in n+2k+[0,U_0+1]$. We consider the position of $p$.
\begin{itemize}
\item Suppose $p\in P_{j+1}$. Then $d(x,p)\leq h_{n+k}$.
\item Suppose $p\notin P_{j+1}$ and $p\notin \Omega_j$. Note that the conditions on $x$ and $p\in B(x,h_{n+k})$ imply that $p\in \Omega_{n+k}$. So there exists some $q\in \cup_{l=1}^{j-n-k}P_{n+k+l}$ such that $p\in J_q$. We obtain
\begin{equation*}
d(x,q)\leq d(x,p)+d(p,q)\leq h_{n+k}+C_{5}\eta h_q\leq C h_{n+k}.
\end{equation*}
\item Suppose $p\notin P_{j+1}$ and $p\in\Omega_j$.
Because $\eta h_p\in (h_{j+1},h_j]$, we must have $p\in Q_{j+1}$.
By the definition of $Q_{j+1}$, we have $d(p,\partial\Omega_j)\leq h_j/\eta$. Observe that
\begin{equation*}
d(p,\partial\Omega_{n+k})\geq d(x,\partial \Omega_{n+k})-d(x,p)>2h_{n+k}/\eta-h_{n+k}>h_j/\eta.
\end{equation*}
So there exists $q\in \cup_{l=1}^{j-n-k}P_{n+k+l}$ such that $d(p,J_{q,\eta})\leq h_j/\eta$. This implies
\begin{equation*}
d(x,q)\leq d(x,p)+d(p,q)\leq h_{n+k}+h_j/\eta+C_{5}\eta h_q\leq Ch_{n+k}.
\end{equation*}
\end{itemize}
\item[Case 4]
Suppose $x\in B_{n+k}$ and $d(x,\partial \Omega_{n+k})\geq 3h_{n+k}/\eta$. As $x\in A_n$, we have $x\in B_{n+k}-B_n$. So there exists $p\in \cup_{1\leq l \leq k}Q_{n+l}''$ such that $x\in B_{p, n+k}$. By Lemma~\ref{lem:universal}, we have $$B_{p,n+k}\subset B(p,C_{8}\sqrt{\eta h_p h_{n+k}})\subset B(p,C_{8}\sqrt{h_n h_{n+k}}).$$
Since $h_{k}\geq \eta$, for any $y\in \partial B_{n+k}$, using Lemma~\ref{lem:universal}, we have
\begin{equation*}
d(y,\partial \Omega_{n+k})\geq d(x,\partial \Omega_{n+k})-d(x,y)\geq 3h_{n+k}/\eta-2C_{8}\sqrt{h_n h_{n+k}}\geq 2h_{n+k}/\eta.
\end{equation*}
So the point $y$ falls into Case 3. It follows that there exists $q\in \cup_{1\leq l\leq N}P_{n+l}$ such that
\begin{equation*}
d(x,q)\leq d(x,y)+d(y,q)\leq C_{8}\sqrt{h_{n}h_{n+k}}+d(y,q)\leq Ch_n.
\end{equation*}
\end{itemize}
Finally, by \eqref{equ:jp} we know that for $p\in\cup_{1\leq l\leq N}P_{n+l}$, the balls $B(p,\eta h_p/C_{5})$ are pairwise disjoint.
Using the claim and the doubling property (Proposition~\ref{double}), we obtain
\begin{align*}
&\mu(\cup_{l=1}^N D_{n+l})\geq \sum_{p\in\cup_{1\leq l\leq N}P_{n+l}}\mu(B(p,\eta h_p/C_{5}))\\
\geq& c_{15}\sum_{p\in\cup_{1\leq l\leq N}P_{n+l}}\mu(B(p,C' h_n ))\geq c_{15}\mu(A_n'\cap\Lambda_\Gamma)=\mu(A_n'),
\end{align*}
finishing the proof.
\end{proof}
\begin{proof}[\textbf{Proof of Proposition~\ref{keylemma}}]
We will prove the following statement, of which Proposition~\ref{keylemma} is a direct consequence: for $\eta$ sufficiently small, there exist $N$ and $c_0>0$ depending on $\eta$ such that
\[\mu(\cup_{l=1}^N D_{n+l})\geq c_0\mu(\Omega_n). \]
Recall that $c_{11},c_{13}$ and $c_{14}$ are the constants given in Lemmas \ref{lem:sub2}, \ref{lem:bou} and \ref{lem:sub1i} respectively. We can take $c_{13}$ small enough that $c_{13}<c_{11}$ and $c_{13}+c_{14}<1$.
Write $t_n=\frac{\mu(A_n)}{\mu(B_n)}$, which makes sense even if $\mu(B_n)=0$. Then by Lemmas~\ref{lem:fullball},~\ref{lem:sub2},~\ref{lem:sub1i} and~\ref{lem:bou},
\begin{align*}
t_{n+1}&=\frac{\mu(A_{n+1})}{\mu(B_{n+1})}=\frac{\mu(A_n)+\mu(B_n\cap A_{n+1})-\mu(A_n\cap(D_{n+1}\cup B_{n+1}))}{\mu(B_n)-\mu(B_n\cap A_{n+1})+\mu(A_n\cap B_{n+1})}\\
&\geq \frac{\mu(A_n)+c_{11}\mu(B_n)-(c_{14}\mu(A_n)+c_{13}\mu(\Omega_n))}{\mu(B_n)-c_{11}\mu(B_n)+(c_{14}\mu(A_n)+c_{13}\mu(\Omega_n))}=\frac{t_n-(c_{14}+c_{13})t_n+(c_{11}-c_{13})}{1+(c_{14}+c_{13})t_n-(c_{11}-c_{13})}=f(t_n).
\end{align*}
Here $f$ is a fractional linear function of the form $f(t)=\frac{a_1t+a_2}{b_1t+b_2}$ with $a_i,b_i>0$, and such a function is monotone on $\mathbb R^+$. Hence
$$\inf_{t\in\mathbb R^+ }f(t)\geq\min\{\frac{a_1}{b_1},\ \frac{a_2}{b_2} \}=\min\{\frac{1-(c_{14}+c_{13})}{c_{14}+c_{13}},\frac{c_{11}-c_{13}}{1-(c_{11}-c_{13})} \}=q(c_{13}). $$
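This lower bound for $f$ on $\mathbb R^+$ is elementary: differentiating gives
\begin{equation*}
f'(t)=\frac{a_1b_2-a_2b_1}{(b_1t+b_2)^2},
\end{equation*}
which has constant sign, so $f$ is monotone on $\mathbb R^+$ and
\begin{equation*}
\inf_{t\in\mathbb R^+}f(t)=\min\Big\{f(0),\ \lim_{t\to\infty}f(t)\Big\}=\min\Big\{\frac{a_2}{b_2},\ \frac{a_1}{b_1}\Big\}.
\end{equation*}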
Since $t_0>0$ and $t_{n+1}\geq f(t_n)\geq q(c_{13})$ for every $n$, there is a uniform positive lower bound of $t_n$ for all $n\in\mathbb N$.
We then use Lemma~\ref{lem:sub1ii} to obtain the desired statement:
\begin{align*}
&\mu(\cup_{l=1}^N D_{n+l})\geq c_{15}\mu(A_n')\geq c_{15}(\mu(A_n)-\mu(\Omega_n'))\\
\geq &c_{15}(\mu(A_n)-c_{13}\mu(\Omega_n))=c_{15}(\frac{t_n}{1+t_n}-c_{13})\mu(\Omega_n).
\end{align*}
If $c_{13}$ is small enough, then $\frac{t_n}{1+t_n}\geq\frac{q(c_{13})}{1+q(c_{13})}>c_{13}$. We can then fix a small $\eta$ in Lemma~\ref{lem:bou} such that $c_{13}$ satisfies these restrictions.
\end{proof}
\subsection{Exponential tail}\label{sec:exptail}
In the one-cusp case, we described in Section \ref{coding procedure} how to construct the countable collection of disjoint open subsets in $\Delta_0$ and the expanding map. When there are multiple cusps, the coding is constructed in two steps; we describe the first step here and finish the rest in Section \ref{sec:codmulti}.
Suppose that there are $j$ cusps. For each cusp, we apply the construction in Section \ref{coding procedure} to the group $\Gamma_i=g_i\Gamma g_i^{-1}$ and the region $g_i\Delta_{p_i}$. In particular, Proposition \ref{keylemma} holds for $g_i\Delta_{p_i}$. Mapping this construction back to $\Delta_{p_i}$ by $g_i^{-1}$ and putting the constructions for all cusps together, we obtain: there is a countable collection of disjoint open subsets $\sqcup_{i,k}\Delta_{p_i,k}$ in $\Delta(=\sqcup_i \Delta_{p_i})$ and an expanding map $T_0:\sqcup_{i,k}\Delta_{p_i,k}\to \Delta$ such that
\begin{itemize}
\item $\sum_{i,k}\mu(\Delta_{p_i,k})=\mu(\Delta)$.
\item Each $\Delta_{p_i,k}$ is a subset of $\Delta_{p_i}$. The expanding map $T_0$ maps $\Delta_{p_i,k}$ to some $\Delta_{p_l}$, and there is an element $\gamma_0\in \Gamma$ such that $\Delta_{p_i,k}=\gamma_0 \Delta_{p_l}$ and $T_0=\gamma_0^{-1}$ on $\Delta_{p_i,k}$. Denote by $\mathcal{H}_0$ the set of inverse branches of $T_0$.
\end{itemize}
For an element $\gamma_0$ in $\mathcal H_0$, if $\gamma_0$ maps $\Delta_{p_l}$ into some $\Delta_{p_i,k}$, then we define
\begin{equation}\label{infty}
|\gamma_0'|_\infty=\sup_{x\in\Delta_{p_l}}|\gamma_0'(x)|.
\end{equation}
The infinity norm of the derivative of a composition of maps is defined similarly.
We prove the following.
\begin{lem}
\label{lem:h0}
There exists $\epsilon>0$ such that
\begin{equation}\label{h0}
\sum_{\gamma_0\in\mathcal H_0} |\gamma_0'|_{\infty}^{\delta-\epsilon}<\infty.
\end{equation}
\end{lem}
In the one-cusp case, this gives the exponential tail \eqref{sum}. When there are multiple cusps, \eqref{h0} says that the map $T_0$ satisfies the exponential tail property.
We start the proof of Lemma \ref{lem:h0} with the following result. Denote by $\cup_n P_n$ the set of ``good parabolic fixed points'' which appear in the first step of the construction of the coding in the multi-cusp case and are defined similarly to \eqref{good parabolic fixed points}.
\begin{lem}\label{lem:ng}
There exists $C>0$ such that for any parabolic fixed point $p=\gamma p_i\in \Delta_0\cap \cup_n P_n$, we have for any $\epsilon\in (0,\delta-k/2)$,
\begin{equation*}
\sum_{\gamma_1\in N_p}|(\gamma \gamma_1)'|_{\infty}^{\delta-\epsilon}\leq C(2\delta-k-2\epsilon)^{-1}h_p^{-\epsilon} \eta^{-2\epsilon}\mu(J_{p}),
\end{equation*}
where $k$ is the rank of the parabolic fixed point $p$ and $N_p$ is defined in \eqref{flower group}.
\end{lem}
\begin{proof}
We first consider the case when $p=\gamma \infty$. By Lemma~\ref{lem:explicit}, we have for every $x\in \Delta_0$ and every $\gamma_1\in N_p$,
\begin{equation*}
|(\gamma \gamma_1)'(x)|=|\gamma'(\gamma_1x)|=\frac{h_p}{d(\gamma_1x,x_{\gamma})^2}.
\end{equation*}
As $\cup_{\gamma_1\in N_p}\gamma_1\Delta_0\subset B(x_{\gamma}, 1/\eta)^c$ where $x_{\gamma}=\gamma^{-1}\infty$, we use general polar coordinates to obtain
\begin{equation}
\label{equ:tail1}
\sum_{\gamma_1\in N_p}|(\gamma \gamma_1)'|^{\delta-\epsilon}_{\infty}\ll h_p^{\delta-\epsilon}\sum_{\gamma_1\in N_p}\frac{1}{d(\gamma_1\Delta_0,x_{\gamma})^{2\delta-2\epsilon}}\ll \frac{h_p^{\delta-\epsilon} \eta^{2\delta-2\epsilon-k}}{2\delta-2\epsilon-k}.
\end{equation}
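To sketch the polar-coordinates computation behind \eqref{equ:tail1}: the group $N_p$ has rank $k$, so the number of $\gamma_1\in N_p$ with $d(\gamma_1\Delta_0,x_{\gamma})\asymp r$ grows like $r^{k-1}\,\mathrm{d}r$, while all translates stay outside $B(x_{\gamma},1/\eta)$. Hence
\begin{equation*}
\sum_{\gamma_1\in N_p}\frac{1}{d(\gamma_1\Delta_0,x_{\gamma})^{2\delta-2\epsilon}}\ll \int_{1/\eta}^{\infty}r^{k-1-2\delta+2\epsilon}\,\mathrm{d}r=\frac{\eta^{2\delta-2\epsilon-k}}{2\delta-2\epsilon-k},
\end{equation*}
where the integral converges because $\epsilon<\delta-k/2$.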
Meanwhile, by the quasi-invariance of PS measure and \eqref{equ:change}, we have for every $\gamma_1\in N_p$
\begin{equation*}
\mu(\gamma \gamma_1 \Delta_0)=\int_{x\in \Delta_0}|(\gamma \gamma_1)'(x)|^{\delta}_{\mathbb{S}^n}\mathrm{d}\mu(x)\approx \int_{x\in \Delta_0}|(\gamma \gamma_1)'(x)|^{\delta}\mathrm{d}\mu(x)\approx\frac{\mu(\Delta_0)h_p^{\delta}}{d(\gamma_1\Delta_0,x_{\gamma})^{2\delta}}.
\end{equation*}
Therefore,
\begin{equation}
\label{equ:tail2}
\mu(\cup_{\gamma_1\in N_p} \gamma\gamma_1\Delta_0)\gg \mu(\Delta_0)h_p^{\delta}\sum_{\gamma_1\in N_p}\frac{1}{d(\gamma_1\Delta_0,x_{\gamma})^{2\delta}}\gg \frac{h_p^{\delta}\eta^{2\delta-k}}{2\delta-k}.
\end{equation}
Hence (\ref{equ:tail1}) and (\ref{equ:tail2}) together yield the statement in the case $p=\gamma \infty$.
Now consider the general case $p=\gamma p_i$ with $g_i p_i=\infty$. Note that for every $\gamma_1\in N_p$, we have
$\gamma \gamma_1=g_i^{-1}(g_i\gamma \gamma_1 g_i^{-1}) g_i$. Hence by Lemma~\ref{lem:height} and~\ref{lem:bilip}
\begin{equation*}
h_{g_ip}\approx h_p,\ \ |(\gamma \gamma_1)'|_{\infty}=\sup_{x\in\Delta_{p_i}}|(\gamma\gamma_1)'(x)|\approx \sup_{x\in g_i\Delta_{p_i}}|(g_i \gamma \gamma_1 g_i^{-1})'(x)|= |(g_i \gamma \gamma_1 g_i^{-1})'|_{\infty}.
\end{equation*}
Write $\Gamma_i=g_i\Gamma g_i^{-1}$. Using \eqref{conjugation}, we obtain
\begin{equation*}
\mu(\gamma \gamma_1 \Delta_{p_i})\approx \mu_{\Gamma_i}(g_i\gamma \gamma_1g_i^{-1}(g_i\Delta_{p_i})).
\end{equation*}
We have $x_{g_ip}=(g_i\gamma g_i^{-1})^{-1}\infty\in g_i\overline{\Delta}_{p_i}$.
Because $g_ip=g_i\gamma g_i^{-1}\infty$ and $g_i \gamma_1 g_i^{-1}(g_i\Delta_{p_i})\subset B(x_{g_ip},1/\eta)^c$ for any $\gamma_1\in N_p$, we can compare $\sum_{\gamma_1} |(g_i \gamma \gamma_1 g_{i}^{-1})'|_{\infty}$ with $\mu_{\Gamma_i}(\cup_{\gamma_1} g_i \gamma \gamma_1 g_i^{-1}(g_i \Delta_{p_i}))$ as above, which proves Lemma \ref{lem:ng} in the general case.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma \ref{lem:h0}}]
We only need to sum over the inverse branches in $\mathcal{H}_0$ whose images are in $\Delta_{0}$. For a general inverse branch whose image is in $\Delta_{p_j}$, we consider the group $g_j\Gamma g_j^{-1}$, and the inequality can be proved in the same fashion. By Lemma~\ref{lem:ng} and Proposition~\ref{keylemma}, for any sufficiently small $\epsilon\in (0,1)$,
\begin{align*}
&\sum_{n\in\mathbb N}\sum_{p=\gamma p_i\in P_n\cap \Delta_0}\sum_{\gamma_1\in N_p}|(\gamma \gamma_1)'|_{\infty}^{\delta-\epsilon}
\ll \sum_{n\in \mathbb N}\sum_{p\in P_n}\mu(J_{p})h_p^{-\epsilon}\eta^{-2\epsilon}\\
\leq& \eta^{-2\epsilon}\sum_{n\in \mathbb N}\mu(\Omega_n)e^{\epsilon(n+1)}\leq \eta^{-2\epsilon}\sum_{n\geq N}(1-\epsilon_0)^ne^{\epsilon(n+1)}+\eta^{-2\epsilon}\sum_{n< N}\mu(\Omega_n)e^{\epsilon(n+1)}.
\end{align*}
Choosing $\epsilon$ small enough that $(1-\epsilon_0)e^{\epsilon}<1$ makes the above sum finite.
\end{proof}
\subsection{Coding for multi cusps}\label{sec:codmulti}
We caution the reader that the symbol $\gamma$ denoted a group element in the previous subsection, whereas here it will denote an inverse branch.
Without loss of generality, we may suppose that $T_0$ is irreducible, meaning that there is no nonempty proper subset $I_1\subsetneq \{1,\cdots,j \}$ such that
$$T_0(\cup_{i\in I_1}\Delta_{p_i})\subset \cup_{i\in I_1}\Delta_{p_i}.$$
Otherwise, we can restrict $T_0$ to such a subcollection and work with the restriction instead.
In the multi-cusp case, we use $T_0$ to construct the countable collection of disjoint open subsets $\sqcup_k \Delta_k$ and the expanding map $T:\sqcup_k \Delta_k\to \Delta_0$ in Proposition \ref{prop:coding}. For $x\in \Delta_0=\Delta_{p_1}$, define the first return time
\[n(x)=\inf\{n\in\mathbb N:\,T_0^n(x)\in\Delta_{p_1} \}.\]
Set $n(x)=\infty$ if $T_0^n(x)$ does not return to $\Delta_{p_1}$ for any $n\in\mathbb N$, or if $T_0^n(x)$ lies outside the domain of definition of $T_0$ for some $n$.
The expanding map $T$ is given by
\begin{equation*}
T(x)=T_0^{n(x)}(x)\,\,\,\text{for}\,\,\, x\,\,\,\text{such that}\,\,\, n(x)<\infty.
\end{equation*}
Recall that we have a countable collection of disjoint open subsets $\sqcup_{l,k}\Delta_{p_l,k}$ and $T_0|_{\Delta_{p_l,k}}=\gamma^{-1}$ for some $\gamma^{-1}\in \mathcal{H}_0$.
As $T$ is a composition of finitely many iterates of $T_0$, we have
\begin{itemize}
\item either $T(x)=\gamma^{-1}x$ with $\gamma\in \mathcal{H}_0$ and $\gamma: \Delta_{p_1}\to \Delta_{p_1}$,
\item or $T(x)=\gamma_{n(x)}^{-1}\cdots \gamma_1^{-1} x$ with $\gamma_{l}\in \mathcal{H}_0$ for $l=1,\ldots,n(x)$, $\gamma_1: \Delta_{p_k}\to \Delta_{p_1}$ for some $k\neq 1$ and $\gamma_{n(x)}:\Delta_{p_1}\to \Delta_{p_l}$ for some $l\neq 1$.
\end{itemize}
The string $\gamma_{n(x)}^{-1}\cdots \gamma_1^{-1}$ gives an open subset $\gamma_1\cdots \gamma_{n(x)}\Delta_{p_1}$, which is an open subset for the coding, and on this subset, $T$ is given by $\gamma_{n(x)}^{-1}\cdots \gamma_1^{-1}$.
To prove (1), (3) and (4) in Proposition \ref{prop:coding}, we start with a lemma similar to Lemma~\ref{lem:l1}. Define
\begin{equation*}
U_i=g_i^{-1}B(g_i\Delta_{p_i},1/(2\eta))^c\,\,\,\text{for}\,\,\,1\leq i\leq j.
\end{equation*}
\begin{lem}\label{lem:l1'}
If $\gamma$ is an inverse branch in $\mathcal H_0$ which maps $\Delta_{p_i}$ into $\Delta_{p_l}$, then $\gamma^{-1}U_l\subset U_i$.
\end{lem}
\begin{proof}
As $\gamma p_i\in \Delta_{p_l}$, the definition of $U_l$ implies
\begin{equation*}
g_lU_l\subset B(g_l\gamma p_i,1/(2\eta))^c.
\end{equation*}
Because the maps $g_i$'s are bi-Lipschitz (Lemma~\ref{lem:bilip}), we obtain
\begin{equation}
\label{invcon1}
g_i U_l\subset B(g_i \gamma p_i,1)^c=B(g_i\gamma g_i^{-1}\infty,1)^c.
\end{equation}
By Lemma~\ref{lem:explicit}, we have
\begin{align}
\label{invcon2}
(g_i\gamma g_i^{-1})^{-1} B(g_i\gamma g_{i}^{-1}\infty, 1)^c=B((g_i\gamma g_i^{-1})^{-1}\infty, h_{g_i\gamma g_{i}^{-1}\infty})
\subseteq B((g_i\gamma g_i^{-1})^{-1}\infty,1).
\end{align}
By \eqref{flower3}, we obtain
\begin{equation}
\label{invcon3}
d(g_i\Delta_{p_i}, (g_i\gamma g_i^{-1})^{-1}\infty)\geq 1/\eta.
\end{equation}
Combining (\ref{invcon1})-(\ref{invcon3}) together, we conclude that
\begin{equation*}
g_i \gamma^{-1}U_l \subset B((g_i\gamma g_i^{-1})^{-1}\infty,1)\subset B(g_i\Delta_{p_i},1/(2\eta))^c.\qedhere
\end{equation*}
\end{proof}
We prove Proposition \ref{prop:coding} (1) and \eqref{sum}.
The idea of the proof is to consider an induced map, reducing the number of cusps by one at a time.
Let $q=p_j$. Denote $\cup_{1\leq i\leq j-1}\Delta_{p_i}=\Delta-\Delta_{q}$ by $X_1$ and for $x\in X_1$, define
\[n_1(x)=\inf\{n\in\mathbb N:\, T_0^n(x)\in X_1 \}.\] The map $T_1$ is given by $T_1(x)=T_0^{n_1(x)}(x)$ for $x$ such that $n_1(x)<\infty$. Since $T_0$ is irreducible, this induced system is also irreducible on $X_1$.
Write
\begin{align*}
&\mathcal{H}_q:=\text{the set of the inverse branches of}\,\,\,T_0\,\,\,\text{which are from}\,\,\,\Delta_q\,\,\,\text{to}\,\,\,\Delta_q,\\
&\mathcal{H}_p:=\text{the set of the inverse branches of}\,\,\,T_0\,\,\,\text{which are from}\,\,\,X_1\,\,\,\text{to}\,\,\,X_1,\\
&\mathcal{H}_{pq}:=\text{the set of the inverse branches of}\,\,\,T_0\,\,\,\text{which are from}\,\,\,X_1\,\,\,\text{to}\,\,\,\Delta_q,\\
&\mathcal{H}_{qp}:=\text{the set of the inverse branches of}\,\,\,T_0\,\,\,\text{which are from}\,\,\,\Delta_q\,\,\,\text{to}\,\,\,X_1.
\end{align*}
As $T_1$ is a composition of finitely many iterates of $T_0$, we have
\begin{itemize}
\item either $T_1(x)=\gamma^{-1}x$ with $\gamma\in \mathcal{H}_p$,
\item or $T_1(x)=\gamma_{n_1(x)}^{-1}\cdots \gamma_1^{-1} x$ with $\gamma_1\in \mathcal{H}_{qp}$, $\gamma_{n_1(x)}\in \mathcal{H}_{pq}$ and $\gamma_l\in \mathcal{H}_q$ for $l=2,\ldots, n_1(x)-1$.
\end{itemize}
The string $\gamma_{1}\cdots \gamma_{n_1(x)}$ is an inverse branch of $T_1$. Set
\begin{align*}
&\mathcal H_1:=\text{the set of all inverse branches of}\,\,\, T_1,\\
&\mathcal{H}_q^n:=\{\gamma_1\cdots \gamma_n:\,\gamma_i\in\mathcal{H}_q\,\,\,\text{for}\,\,\,1\leq i\leq n\}\,\,\,\text{for every}\,\,\,n\in \mathbb{N}.
\end{align*}
\begin{lem}\label{lem:gammacoding}
There exists $C>0$ such that for every $n\in \mathbb{N}$ and for every $\gamma\in\mathcal H_q^n$, we have
\begin{equation*}
|\gamma' (x)|\geq |\gamma'|_\infty /C \,\,\,\text{for any}\,\,\,x\in \Delta_q.
\end{equation*}
\end{lem}
\begin{proof}
We first notice that $|\gamma'(x)|\approx |(g_j\gamma g_j^{-1})'(g_jx)|$.
Write $p=\gamma p_j$. By Lemma~\ref{lem:explicit}, we have
\[| (g_j\gamma g_j^{-1})' (y)|=\frac{h_{g_jp}}{d(y,g_j\gamma^{-1}p_j)^2}.\]
By Lemma~\ref{lem:l1'}, we have $d(g_j\Delta_q,g_j\gamma^{-1}p_j)>1/(2\eta)$. Then for every $y\in g_j\Delta_{p_j}=g_j\Delta_q$, the distance
$d(y,g_j\gamma^{-1}p_j)\in [d(g_j\Delta_q,g_j\gamma^{-1}p_j)\pm \operatorname{diam}(g_j\Delta_q)]$, which implies the lemma.
\end{proof}
\begin{lem}\label{lem:hl}
There exist $C>0$, $\epsilon>0$ such that for every $l\in\mathbb N$
\begin{equation*}
\sum_{1\leq k\leq l,\ \gamma_k\in\mathcal H_q}|(\gamma_1\cdots \gamma_l)'|_\infty^{\delta}<C(1-\epsilon)^l.
\end{equation*}
\end{lem}
\begin{proof}
Claim: there exists $\epsilon>0$ such that for every $n\in \mathbb{N}$ and for any $h\in \mathcal H_q^n$, we have
\begin{equation}
\label{measure decrease}
\sum_{\gamma\in \mathcal H_q}\mu(h\gamma\Delta_q)\leq (1-\epsilon)\mu(h\Delta_q).
\end{equation}
Proof of the claim: for a measurable set $E\subset \Delta_q$, by Lemma~\ref{lem:gammacoding},
\begin{equation}\label{equ:hE}
\mu(hE)=\int_E |h'(x)|_{\S^d}^\delta\mathrm{d}\mu(x)\approx \int_E |h'(x)|^\delta\mathrm{d}\mu(x) \in \mu(E) |h'|^{\delta}_\infty[1/C,1].
\end{equation}
Write $F=\cup_{\gamma\in\mathcal H_q} \gamma\Delta_q$. Since $T_0$ is irreducible, we have $\mu(F)<\mu(\Delta_q)$. By \eqref{equ:hE}
\[\sum_{\gamma\in\mathcal H_q}\mu(h\gamma\Delta_q)=\mu(hF)\leq |h'|_\infty^{\delta} \mu(F)=\frac{\mu(F)}{\mu(\Delta_q-F)}|h'|_\infty^{\delta} \mu(\Delta_q-F)\leq C' \mu(h(\Delta_q-F)) .\]
So we have
\begin{equation*}
(1+1/C')\sum_{\gamma\in \mathcal{H}_q}\mu(h\gamma \Delta_q)\leq \mu(hF)+\mu (h(\Delta_q-F))\leq \mu(h\Delta_q).
\end{equation*}
Using \eqref{measure decrease}, Lemma~\ref{lem:gammacoding} and \eqref{equ:hE} with $E=\Delta_q$, we obtain
\begin{align*}
&\sum_{1\leq k\leq l,\ \gamma_k\in\mathcal H_q}|(\gamma_1\cdots \gamma_l)'|_\infty^{\delta}\leq C \sum_{1\leq k\leq l,\ \gamma_k\in\mathcal H_q}\mu(\gamma_1\cdots \gamma_l\Delta_q)\\
\leq &C \sum_{1\leq k\leq l-1,\ \gamma_k\in\mathcal H_q}(1-\epsilon)\mu(\gamma_1\cdots\gamma_{l-1}\Delta_q) \leq C (1-\epsilon)^l.\qedhere
\end{align*}
\end{proof}
\begin{proof}[\textbf{Proof of Proposition \ref{prop:coding} (1) and \eqref{sum}}] We first use Proposition \ref{keylemma}, \eqref{h0}, Lemma \ref{lem:gammacoding} and \ref{lem:hl} to prove that for the expanding map $T_1$, we have
\begin{enumerate}
\item There exists $\epsilon_1>0$ such that
\begin{equation}
\label{induce map 1}
\sum_{\gamma\in\mathcal H_1}|\gamma'|_\infty^{\delta-\epsilon_1}<\infty,
\end{equation}
where $|\gamma'|_\infty$ is defined as in \eqref{infty}.
\item There exists $\epsilon>0$ such that for every $n\in \mathbb{N}$,
\begin{equation}
\label{induce map 2}
\mu(\{x\in X_1:\,n_1(x)>n+1\})\ll (1-\epsilon)^n.
\end{equation}
\end{enumerate}
The second statement in particular implies that the map $T_1$ is defined almost everywhere in $\Delta-\Delta_q$.
Due to Lemma~\ref{lem:hl}, we can find a large $l_0$ such that $\sum_{1\leq k\leq l_0,\ \gamma_k\in\mathcal H_q}|(\gamma_1\cdots \gamma_{l_0})'|_\infty^{\delta}<1$. Then using \eqref{h0} and submultiplicativity $|(\gamma_1\gamma_2)'|_\infty\leq |(\gamma_1)'|_\infty|(\gamma_2)'|_\infty$, we obtain
$$\sum_{1\leq k\leq l_0,\ \gamma_k\in\mathcal H_q}|(\gamma_1\cdots \gamma_{l_0})'|_\infty^{\delta-\epsilon}<\infty,$$
where $\epsilon>0$ is the constant given by \eqref{h0}. Hence we can find $0<\epsilon_1<\epsilon$ small such that
\begin{equation*}
\sum_{1\leq k\leq l_0,\ \gamma_k\in\mathcal H_q}|(\gamma_1\cdots \gamma_{l_0})'|_\infty^{\delta-\epsilon_1}<1.
\end{equation*}
Using submultiplicativity, we obtain constants $C>0,\rho<1$ such that for $l\in\mathbb N$
\begin{equation}\label{equ:1kl}
\sum_{1\leq k\leq l,\ \gamma_k\in\mathcal H_q}|(\gamma_1\cdots \gamma_{l})'|_\infty^{\delta-\epsilon_1}\leq C\rho^l.
\end{equation}
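Explicitly, writing $S_l$ for the left-hand side of \eqref{equ:1kl} and $l=ml_0+r$ with $0\leq r<l_0$, submultiplicativity gives
\begin{equation*}
S_l\leq S_{l_0}^{\,m}S_r\leq C\rho^{l},\qquad \rho=S_{l_0}^{1/l_0}<1,\ \ C=\max_{0\leq r<l_0}S_r\rho^{-r},
\end{equation*}
where each $S_r$ (with $S_0:=1$) is finite because $S_1$ is finite by \eqref{h0}.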
Denote $\sum_{\gamma\in \mathcal{H}_1}|\gamma'|_{\infty}^{\delta-\epsilon_1}$ by $E_q$. Every inverse branch of $T_1$ either belongs to $\mathcal H_p$ or can be uniquely decomposed as $\gamma_0\gamma_1\cdots\gamma_{l}\gamma_{l+1}$ with $\gamma_0\in\mathcal H_{qp}$, $\gamma_{i}\in\mathcal H_q$ for $i=1,\cdots, l$, and $\gamma_{l+1}\in\mathcal H_{pq}$.
Using this decomposition and submultiplicativity,
we obtain
\begin{equation}\label{equ:ep}
E_q\leq \sum_{\gamma\in\mathcal H_p}|\gamma'
|_\infty^{\delta-\epsilon_1}+\sum_{l\geq 1} (\sum_{\gamma_0\in\mathcal H_{qp}} |\gamma_0'|_\infty^{\delta-\epsilon_1})(\sum_{\gamma_{l+1}\in\mathcal H_{pq}} |\gamma_{l+1}'|_\infty^{\delta-\epsilon_1})
(\sum_{\gamma_i\in\mathcal H_q,1\leq i\leq l}|(\gamma_1\cdots \gamma_l)'|_\infty^{\delta-\epsilon_1}).
\end{equation}
Therefore $E_q$ is also finite due to \eqref{equ:ep}, \eqref{equ:1kl} and \eqref{h0}.
The set of $x$ such that $T_0^n(x)$ lies outside the domain of definition of $T_0$ for some $n$ has zero PS measure by Proposition~\ref{keylemma}. So we only need to consider the set of $x$ such that $T_0^n(x)$ is in the domain of definition for every $n$.
If $x\in X_1$ with $n_1(x)>n+1$, then $x$ must be in $\gamma_0\gamma_1\cdots \gamma_n\Delta_q$ with $\gamma_0\in\mathcal H_{qp}$ and $\gamma_i\in\mathcal H_q$ for $1\leq i\leq n$. Therefore
Lemma~\ref{lem:hl} implies
\begin{align*}
&\mu(\{x\in X_1:\,n_1(x)>n+1\})\leq \sum_{\gamma_0\in\mathcal H_{qp}}\sum_{\gamma_i\in\mathcal H_q,1\leq i\leq n}\mu( \gamma_0\gamma_1\cdots\gamma_n\Delta_q)\\
\leq& (\sum_{\gamma_0\in\mathcal H_{qp}} |\gamma_0'|_\infty^{\delta})(\sum_{\gamma_i\in\mathcal H_q,1\leq i\leq n}|(\gamma_1\cdots \gamma_n)'|_\infty^{\delta})\ll (1-\epsilon)^n.
\end{align*}
We keep reducing the number of cusps by considering the set $X_2:=X_1-\Delta_{p_{j-1}}$ and the induced map $T_2:X_2\to X_2$, constructed in the same way as $T_1$; in particular, the inverse branches of $T_2$ are compositions of elements in $\mathcal{H}_1$. Analogues of Lemmas~\ref{lem:gammacoding} and \ref{lem:hl} hold for $T_2$, with \eqref{induce map 2} and \eqref{induce map 1} playing the roles of Proposition \ref{keylemma} and \eqref{h0} respectively. Using these ingredients, we can show that properties analogous to \eqref{induce map 1} and \eqref{induce map 2} also hold for $T_2$. Repeating this procedure finishes the proof of Proposition~\ref{prop:coding}(1) and \eqref{sum}.
\end{proof}
We now prove the remaining results for the coding, except Lemma~\ref{lem:uni} (UNI).
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:l1}}]
Take
\begin{equation}\label{equ:Lambda-}
\Lambda_{-}=\Lambda_\Gamma\cap \{|x|>1/(2\eta) \}=\Lambda_\Gamma\cap U_1.
\end{equation}
The contracting map $\gamma$ from $\Delta_0$ to $\Delta_0$ is a composition of maps in $\mathcal H_0$, so the inclusion follows directly from Lemma~\ref{lem:l1'}.
Write $p=\gamma \infty$. By Lemma~\ref{lem:explicit},
\[|(\gamma^{-1})'(x)|_{\S^d}=\frac{h_p}{d(x,p)^2}\frac{1+|x|^2}{1+|\gamma^{-1}x|^2}. \]
For $x\in\Lambda_{-}$,
as $p=\gamma\infty\in\Delta_0$, we have
\begin{equation*}
\frac{1+|x|^2}{d(x,p)^2}\leq \frac{1+|x|^2}{(|x|-\operatorname{diam}(\Delta_0))^2}.
\end{equation*}
The right-hand side is bounded by a constant close to $1$, since $|x|\geq 1/(2\eta)$.
For $\gamma^{-1}x\in \Lambda_{ -}$, we have $|\gamma^{-1}x|\geq 1/(2\eta)$. Hence $|(\gamma^{-1})'(x)|_{\S^d}\leq\lambda$ for some $\lambda$ independent of $\gamma$.
\end{proof}
\begin{proof}[\textbf{Proof of Proposition~\ref{prop:coding} (3)}]
By Lemma~\ref{lem:explicit}, we have
\[|\gamma'(x)|=\frac{h_p}{d(x,\gamma^{-1}\infty)^2}. \]
By Lemma~\ref{lem:l1}, we have $\gamma^{-1}\infty\in\Lambda_{-}$, which implies $d(\gamma^{-1}\infty,x)\geq 1/(2\eta)$. Hence
\[
|\gamma'(x)|\leq (2\eta)^2h_p\leq 4\eta^2.\qedhere\]
\end{proof}
{\textbf{Proof of Proposition~\ref{prop:coding} (4).}} We need the following lemma, which will also be needed in later sections.
\begin{lem}\label{lem:der}
Let $\gamma$ be any element in $\Gamma$ which does not fix $\infty$. For any $x\in \Delta_0$ and any unit vector $e\in\mathbb R^d$, we have
\[\partial_e\log | \gamma'(x)|=-\frac{2\langle x-\xi,e \rangle}{|x-\xi|^2} \]
where $\xi=\gamma^{-1}\infty$.
\end{lem}
\begin{proof}
This follows from Lemma~\ref{lem:explicit} by an elementary computation.
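Indeed, by Lemma~\ref{lem:explicit} we have $|\gamma'(x)|=h_{\gamma\infty}/|x-\xi|^2$ with $\xi=\gamma^{-1}\infty$, so
\begin{equation*}
\log|\gamma'(x)|=\log h_{\gamma\infty}-2\log|x-\xi|,\qquad \partial_e\log|\gamma'(x)|=-2\,\partial_e\log|x-\xi|=-\frac{2\langle x-\xi,e\rangle}{|x-\xi|^2}.
\end{equation*}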
\end{proof}
For $\gamma\in\mathcal H$, Proposition~\ref{prop:coding} (4) can be deduced using Lemma~\ref{lem:der} and the observation that $|\gamma^{-1}\infty|\geq 1/(2\eta)$ (Lemma~\ref{lem:l1} and~\eqref{equ:Lambda-}) .
\subsection{Verifying UNI}\label{sec:UNI}
We prove Lemma~\ref{lem:uni} in this part.
Let $\Gamma_f$ be the semigroup generated by the elements $\gamma\in\mathcal H$, and let $\Gamma_b$ be the semigroup generated by the $\gamma^{-1}$ with $\gamma\in\mathcal H$. Let $\Lambda_f$ and $\Lambda_b$ be the limit sets of $\Gamma_f$ and $\Gamma_b$ in $\partial\H^{d+1}$, that is, the sets of accumulation points of the orbits $\Gamma_f o$ and $\Gamma_b o$ for some $o\in\H^{d+1}$, respectively. It follows from the definition that $\Lambda_f$ is $\Gamma_f$-invariant and $\Lambda_b$ is $\Gamma_b$-invariant. Due to~\cite[Proposition 3.19]{Kap} (the convergence property of M\"obius transformations), $\Lambda_f$ is a $\Gamma_f$-minimal set and $\Lambda_{b}$ is a $\Gamma_b$-minimal set.
\begin{lem}\label{lem:dense}
The limit set $\Lambda_b$ is not contained in an affine subspace in $\mathbb R^d\cup\{\infty \}$ or a sphere in $\mathbb R^d$.
\end{lem}
\begin{proof}
Let $A$ be an affine subspace or a sphere of minimal dimension containing $\Lambda_b$. Because $\Lambda_b$ is $\Gamma_b$-invariant, the semigroup $\Gamma_b$ must preserve $A$, and so does the Zariski closure of $\Gamma_b$. The Zariski closure of a semigroup is a group (see for example~\cite[Lemma 6.15]{BQ}). The Zariski closures of $\Gamma_f$ and $\Gamma_b$ are the same. Hence $\Gamma_f$ also preserves $A$ and $\Lambda_f$ is contained in $A$. \textbf{We claim: $\mu(\Lambda_f)=\mu(\Lambda_{\Gamma}\cap \Delta_0)>0$.} Then, because $\Gamma$ is Zariski dense, by~\cite[Corollary 1.4]{FS} we conclude that $\mu(A)$ is nonzero if and only if $A=\mathbb R^d$, finishing the proof.
Proof of the claim: Let $x$ be any point in $\Lambda_{\Gamma}\cap \Delta_0$ such that $T^nx\in\Lambda_{\Gamma}\cap \Delta_0$ for every $n\in\mathbb N$. We can write $x=\gamma_n T^n(x)\in \gamma_n\Delta_0$ for some $\gamma_n\in\mathcal H^n$. Fix any $y\in \Lambda_f$; it follows from Proposition~\ref{prop:coding} (3) that $d(\gamma_n y, \gamma_n T^n(x))\to 0$. So $\gamma_n y\rightarrow x$ and $x\in\Lambda_f$. Due to Proposition~\ref{prop:coding} (1), the set of such $x$ is a conull set in $\Lambda_{\Gamma}\cap \Delta_0$. Hence $\mu(\Lambda_f)=\mu(\Lambda_{\Gamma}\cap \Delta_0)$.
\end{proof}
\begin{lem}
\label{nonvanish}
For every $x\in\Lambda_\Gamma\cap\overline\Delta_0$, there exist pairs of points $(\xi_{1m},\xi_{2m}),\ m=1,\cdots, k_x$ in the limit set $\Lambda_b$ and $\epsilon_x'>0$ such that for every unit vector $e\in \mathbb R^d$ there exists $m$,
\[|\langle \frac{x-\xi_{1m}}{|x-\xi_{1m}|^2}-\frac{x-\xi_{2m}}{|x-\xi_{2m}|^2},e \rangle|>2\epsilon_x'>0. \]
\end{lem}
\begin{proof}
The map $inv_x:\xi \mapsto \frac{x-\xi}{|x-\xi|^2}$ is an inversion, and in particular injective. If there exists a unit vector $e\in\mathbb R^d$ such that
\[\langle \frac{x-\xi_{1}}{|x-\xi_{1}|^2}-\frac{x-\xi_{2}}{|x-\xi_{2}|^2},e \rangle=0 \]
for all $\xi_1,\xi_2$ in $\Lambda_b$, then $inv_x(\Lambda_b)$ is contained in an affine subspace parallel to $e^\perp$. Hence $\Lambda_b$ itself is contained in an affine subspace in $\mathbb R^d\cup\{\infty \}$ or a sphere in $\mathbb R^d$, which contradicts Lemma~\ref{lem:dense}. Therefore, for every unit vector $e\in\mathbb R^d$, there exist $\xi_1,\xi_2$ in $\Lambda_b$ such that
\[\langle \frac{x-\xi_{1}}{|x-\xi_{1}|^2}-\frac{x-\xi_{2}}{|x-\xi_{2}|^2},e \rangle\neq0. \]
We use continuity and compactness to finish the proof.
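More precisely, for each unit vector $e$ there is a pair $(\xi_1,\xi_2)$ with nonzero pairing, and by continuity the same pair works for all unit vectors in a neighborhood of $e$; by compactness of $\mathbb S^{d-1}$, finitely many pairs $(\xi_{1m},\xi_{2m})$, $m=1,\ldots,k_x$, suffice, and we may take
\begin{equation*}
2\epsilon_x'=\min_{e\in\mathbb S^{d-1}}\max_{1\leq m\leq k_x}\Big|\Big\langle \frac{x-\xi_{1m}}{|x-\xi_{1m}|^2}-\frac{x-\xi_{2m}}{|x-\xi_{2m}|^2},e \Big\rangle\Big|>0,
\end{equation*}
which is positive as the minimum of a positive continuous function on a compact set.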
\end{proof}
\begin{lem}\label{lem:Lam2}
Let $\xi$ be any point in $\Lambda_b$. For any $\epsilon_2,\epsilon_3>0$, there exists $n_\xi\in\mathbb N$ such that for any $n\geq n_\xi$, there exists $\gamma$ in $\mathcal H^n$ satisfying
\[ d_{\S^d}(\gamma^{-1}\infty,\xi)\leq \epsilon_2,\ |\gamma'|_\infty\leq \epsilon_3. \]
\end{lem}
\begin{proof}
Since $\Lambda_b$ is $\Gamma_b$-minimal, for any point $\xi'\in\Lambda_2$, there exists a sequence $\{\gamma_n^{-1}\}$ in $\Gamma_b$ such that $\gamma_n^{-1}\xi'$ converges to $\xi$ and $|\gamma_n'|_\infty$ tends to zero.
By Lemma~\ref{lem:l1}, the sets $\gamma_n^{-1}\Lambda_{-}$ also converge to $\xi$. Hence we can find a $\gamma$ in $\Gamma_f$ with $|\gamma'|_\infty\leq \epsilon_3$ and $d_{\S^d}(\gamma^{-1}\Lambda_{-},\xi)\leq \epsilon_2$. Let $n_\xi$ be the unique number such that $\gamma\in\mathcal H^{n_\xi}$.
For any $\gamma_1\in \cup_{n\geq 1}\mathcal H^{n}$, we have $|(\gamma_1\gamma)'|_\infty\leq |\gamma_1'|_\infty|\gamma'|_\infty\leq |\gamma'|_\infty$ and
\[d_{\S^d}((\gamma_1\gamma)^{-1}\infty,\xi)=d_{\S^d}(\gamma^{-1}(\gamma_1^{-1}\infty),\xi)\leq d_{\S^d}(\gamma^{-1}\Lambda_{-},\xi)\leq\epsilon_2. \]
Therefore, for any $m= n_\xi+n$ with $n\in\mathbb N$, choose any $\gamma_1\in\mathcal H^n$; then $\gamma_1\gamma\in\mathcal H^m$ satisfies the conclusion of Lemma~\ref{lem:Lam2}.
\end{proof}
Combining the above two lemmas with Lemma~\ref{lem:der} and the formula $R_n(\gamma x)=-\log|\gamma'(x)|$ for $\gamma\in\mathcal H^n$, we have
\[|\partial_e(R_n\circ \gamma_{1m}-R_n\circ \gamma_{2m})(y)|=|\langle \frac{y-\gamma_{1m}^{-1}\infty}{|y-\gamma_{1m}^{-1}\infty|^2}-\frac{y-\gamma_{2m}^{-1}\infty}{|y-\gamma_{2m}^{-1}\infty|^2},e \rangle|. \]
Using this expression, Lemmas \ref{nonvanish} and \ref{lem:Lam2}, and continuity, we obtain
\begin{lem}\label{lem:gamma}
For every $x\in\Lambda_\Gamma\cap\overline\Delta_0$, there exist $\epsilon_x, \epsilon_x'>0$ such that for any $\epsilon_3>0$, there exists $n_x\in\mathbb N$ such that the following holds for any $n\geq n_x$. There exist $k_x\in\mathbb N$, $\gamma_{im}\in\mathcal H^n$ with $i=1,2$ and $m=1,\ldots, k_x$ satisfying
\begin{itemize}
\item $|\gamma'_{im}|<\epsilon_3$ for every $i=1,2$ and $m=1,\ldots,k_x$.
\item for any unit vector $e\in\mathbb R^d$, there exists $m\in\{1,\ldots,k_x\}$ such that for any $y\in B(x,\epsilon_x)$,
\[|\partial_e(R_n\circ \gamma_{1m}-R_n\circ \gamma_{2m})(y)|\geq\epsilon_x'>0. \]
\end{itemize}
\end{lem}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:uni}}]
For every $x\in \Lambda_\Gamma\cap\overline\Delta_0$, we apply Lemma \ref{lem:gamma} to $x$ and obtain two constants $\epsilon_x, \epsilon_x'>0$. Since $\Lambda_\Gamma\cap\overline\Delta_0$ is compact, we can find a finite set $\{x_1,\cdots,x_l \}$ such that $\cup B(x_j,\epsilon_{x_j}/2)\supset \Lambda_\Gamma\cap\overline\Delta_0$. Let $\epsilon_0=\inf\{\epsilon_{x_j}' \}$ and $r=\inf\{\epsilon_{x_j}/2 \}$. Take $\epsilon_3=\epsilon_0/C$
and $n_0\geq \sup_{1\leq j\leq l}\{n_{x_j} \}$. Fix any $n\geq n_0$. Then for every $x_j$, there exists a finite set $\{\gamma_{im}\}$ in $\mathcal{H}^{n}$ satisfying the conclusions of Lemma~\ref{lem:gamma}. The union of all these $\gamma_{im}$'s is the finite set in $\mathcal{H}^n$ described in Lemma \ref{lem:uni}. Any $x\in\Lambda_\Gamma\cap\overline\Delta_0$ is contained in some $B(x_j,\epsilon_{x_j}/2)$, and then $B(x,r)\subset B(x_j,\epsilon_{x_j})$. The family $\{\gamma_{im}\}$ for $x_j$ satisfies the non-vanishing condition on $B(x,r)$, that is, for every unit vector $e\in\mathbb R^d$ there exists $m$ such that
for any $y\in B(x,r)$
\[|\partial_e(R_{n}\circ \gamma_{1m}-R_{n}\circ \gamma_{2m})(y)|\geq\epsilon_0>0. \]
Finally, the inequality $|\mathrm D\tau_{im}|_\infty\leq C_2$ is due to \eqref{uniform contraction}.
\end{proof}
\section{Spectral gap and Dolgopyat-type spectral estimate}\label{sec:spegap}
\begin{comment}
Let us comment the difference of Dolgopyat's argument between our case and that in~\cite{ArMe},~\cite{Dol},~\cite{Nau} and~\cite{Sto}.
First difference is cancellation lemma.
In~\cite{ArMe} or~\cite{Dol}, the measure on the unstable leaf is Lebesgue or absolute continuous. So they only need to find a positive portion of intervals such that the angles of fixed two branches has positive distance to $2\pi\mathbb Z$. In~\cite{Nau} and~\cite{Sto}, they use the separation property of the limit set and of the coding. They roughly proved that at a suitable scale, for three nearby cylinders, there exists at least one cylinder such that the angles are different. The separation property is the key to obtain different angles.
In our case, the separation property is still true by using an important property of Patterson-Sullivan measure for geometrically finite case, that is Proposition~\ref{double}
Second difference is a boundary issue. Because we use measure property to find separation, our cancellation lemma cannot apply to points near the boundary of $\Delta_0$. Hence we need to carefully treat points near boundary using friendliness of PS measure.
\end{comment}
In this section, we prove a Dolgopyat-type spectral estimate; the main result is Proposition~\ref{L2contracting}. Our argument is influenced by those in~\cite{ArMe, AGY, BaVa,Dol, Nau,Sto}, with some technical variation in the current setting. The proof consists of establishing a cancellation lemma (Lemma~\ref{key}) and using it to obtain $L^2$ contraction. The rough idea is as follows. Denote the set $\Delta_0\cap \Lambda_{\Gamma}$ by $\Lambda_0$. With the UNI property (Lemma~\ref{lem:uni}) available, for each ball $B(y,r)$ with $y\in \Lambda_0$, one uses the doubling property of the PS measure to find a point $x\in B(y,r)\cap \Lambda_0$ such that cancellation happens on $B(x,r')$. Then, to run the classical argument, one needs to find finitely many such pairwise disjoint balls $B(x_i,r')$ contained in $\Delta_0$ such that $\sqcup B(x_i,Dr')$ covers $\Delta_0$ for some $D>1$. The difficulty is that the balls $B(x_i,r')$ are produced using the PS measure, so their positions are in some sense random, and some $B(x_i,r')$ may not be fully contained in $\Delta_0$. To overcome this, we find balls $B(x_i,r')$ which only cover a subset of $\Delta_0$ and divide the proof of Proposition~\ref{L2contracting} into the cases of small iteration and large iteration.
\subsection{Twisted transfer operators}
For $s\in \mathbb{C}$, let $L_s$ be the twisted transfer operator defined by
\begin{equation}
L_s(u)(x)=\sum_{\gamma\in\mathcal H} |\gamma'(x)|^{\delta+s}u(\gamma x).
\end{equation}
For $u:\Delta_0\to \mathbb{C}$, define
\begin{equation*}
\lVert u\rVert_{\text{Lip}}=\max \{|u|_{\infty},\,\lvert u\rvert_{\text{Lip}}\},
\end{equation*}
where $\lvert u\rvert_{\text{Lip}}=\sup_{x\neq y} |u(x)-u(y)|/d(x,y)$ and $d(\cdot, \cdot)$ is the Euclidean distance. Denote by $\text{Lip}(\Delta_0)$ the space of functions $u:\Delta_0\to \mathbb{C}$ with $\lVert u\rVert_{\text{Lip}} <\infty$. We also introduce a family of equivalent norms on $\text{Lip}(\Delta_0)$:
\begin{equation*}
\lVert u\rVert_b=\max\{|u|_{\infty},\,\lvert u\rvert_{\text{Lip}}/(1+|b|)\},\,\,\,b\in \mathbb{R}.
\end{equation*}
With Proposition~\ref{prop:coding} available, we obtain the following lemma by repeating verbatim the proof of~\cite[Proposition 2.5]{ArMe}.
\begin{lem}
\label{well-defined}
Write $s=\sigma+i b$.
The family $s\mapsto L_s$ of operators on $\operatorname{Lip}(\Delta_0)$ is continuous on $\{s\in\mathbb C:\ \sigma>-\epsilon_o\}$, where $\epsilon_o$ is given as in Proposition~\ref{prop:coding} (4). Moreover, $\sup_{|\sigma|<\epsilon_o} \lVert L_s\rVert_b<\infty$.
\end{lem}
\vspace{2mm}
Define the PS measure $\mu_E$ on $\Delta_0$ with respect to the Euclidean metric by
\[\mathrm{d}\mu_E(x)=(1+|x|^2)^\delta\mathrm{d} \mu(x). \]
By the quasi-invariance of the PS measure $\mu$, a straightforward computation shows that the operator $L_0$ preserves the measure $\mu_E$.
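Let us indicate this computation. Writing the conformality of $\mu$ as $\mathrm{d}\mu(\gamma x)=|\gamma'(x)|_{\S^d}^{\delta}\,\mathrm{d}\mu(x)$, with the derivative taken in the spherical metric, and using the standard identity $|\gamma'(x)|_{\S^d}=|\gamma'(x)|\frac{1+|x|^2}{1+|\gamma x|^2}$, we get
\begin{align*}
\int L_0(u)\,\mathrm{d}\mu_E&=\sum_{\gamma\in\mathcal H}\int u(\gamma x)\,|\gamma'(x)|^{\delta}(1+|x|^2)^{\delta}\,\mathrm{d}\mu(x)\\
&=\sum_{\gamma\in\mathcal H}\int u(\gamma x)\,(1+|\gamma x|^2)^{\delta}\,|\gamma'(x)|_{\S^d}^{\delta}\,\mathrm{d}\mu(x)=\sum_{\gamma\in\mathcal H}\int_{\gamma\Delta_0}u\,\mathrm{d}\mu_E=\int u\,\mathrm{d}\mu_E,
\end{align*}
the last equality holding because, up to a $\mu$-null set, the images $\gamma\Delta_0$, $\gamma\in\mathcal H$, tile $\Lambda_\Gamma\cap\Delta_0$ by the coding (Proposition~\ref{prop:coding}).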
The main result of the Dolgopyat argument is the following $L^2$-contraction proposition.
\begin{prop}\label{L2contracting}
There exist $C>0,\ \beta <1, \epsilon>0$ and $b_0>0$ such that for all $v$ in $\operatorname{Lip}(\Delta_0)$, $m\in\mathbb N$ and $s=\sigma+ib$ with $|\sigma|\leq\epsilon$ and $|b|>b_0$, we have
\begin{equation*}
\int|L_s^{m}v|^2\mathrm{d}\mu_E\leq C\beta^m\|v\|^2_b.
\end{equation*}
\end{prop}
The proof will be given at the end of Section~\ref{sec:L2contractoin}.
\begin{comment}
Let
For any measurable subset $E$ of $\Delta_0$, we have
\begin{align*}
\tilde\nu(T^{-1}E)=\sum_{j}\tilde\nu(\gamma_j(E))=\int_E\sum_j \frac{f_0(\gamma_jx)}{f_0(x)}|\mathrm D \gamma_j'x|^\delta \mathrm{d}\tilde\nu(x)=\int_E \tilde{L}_01(x)\mathrm{d}\tilde\nu(x)=\int_E 1(x)\mathrm{d}\tilde\nu(x)=\tilde\nu(E)
\end{align*}
So in fact the invariant measure $\nu$ is given by the formula \eqref{nu}.
\end{comment}
Recall that $\nu$ is the unique $T$-invariant ergodic probability measure on $\Delta_0\cap \Lambda_{\Gamma}$ which is absolutely continuous with respect to the PS measure $\mu$, with a positive Lipschitz density function $\bar f_0$. So $\nu$ is also absolutely continuous with respect to $\mu_{E}$, with a positive Lipschitz density function $f_0$. Based on these facts, it is a classical result that the operator $L_0$ acting on $\operatorname{Lip}(\Delta_0)$ has a spectral gap: $1$ is a simple isolated eigenvalue, with corresponding eigenfunction $f_0$.
For $\sigma\in \mathbb{R}$ close enough to $0$, $L_{\sigma}$ acting on $\text{Lip}(\Delta_0)$ is a continuous perturbation of $L_0$ (see Lemma~\ref{well-defined}). Hence, it has a unique eigenvalue $\lambda_{\sigma}$ close to $1$, and the corresponding eigenfunction $f_{\sigma}$ (normalized so that $\int f_{\sigma}=1$) belongs to $\text{Lip}(\Delta_0)$, is strictly positive, and tends to $f_0$ in $\text{Lip}(\Delta_0)$ as $\sigma\to 0$. Choose a sufficiently small $\epsilon\in (0,\epsilon_o)$ such that for $\sigma\in (-\epsilon,\epsilon)$, $f_{\sigma}$ is well defined and
\begin{equation*}
1/2\leq \lambda_{\sigma}\leq 2,\,\,\, f_{0}/2\leq f_{\sigma}\leq 2 f_{0},\,\,\, |f_0|_{\text{Lip}}/2\leq |f_{\sigma}|_{\text{Lip}}\leq 2|f_0|_{\text{Lip}}.
\end{equation*}
For $s=\sigma+ib$ with $|\sigma|<\epsilon$ and $b\in \mathbb{R}$, define a modified transfer operator $\tilde{L}_{s}$ by
\begin{equation}
\tilde{L}_{s}(u)=(\lambda_{\sigma}f_{\sigma})^{-1}L_{s}(f_{\sigma}u).
\end{equation}
It satisfies $\tilde{L}_{\sigma}1=(\lambda_{\sigma}f_{\sigma})^{-1}L_{\sigma}(f_{\sigma})=1$, and, since $\bigl||\gamma'(x)|^{\delta+s}\bigr|=|\gamma'(x)|^{\delta+\sigma}$ and $f_\sigma>0$, also $|\tilde{L}_{s}u|\leq \tilde{L}_{\sigma}|u|$.
\begin{lem}[Lasota-Yorke inequality]
\label{Lasota-Yorke}
There is a constant $C_{16}>1$ such that
\begin{equation}
|\tilde{L}^n_s v|_{\operatorname{Lip}} \leq C_{16} (1+|b|) |v|_{\infty} +C_{16} \lambda^n |v|_{\operatorname{Lip}}
\end{equation}
holds for any $s=\sigma+ib$ with $|\sigma|<\epsilon$, and all $n\geq 1,\,v\in \operatorname{Lip}(\Delta_0)$, where $\lambda$ is given as in Proposition~\ref{prop:coding}.
\end{lem}
The proof of this lemma repeats verbatim that of \cite[Lemma 2.7]{ArMe}. The following lemma can be deduced from
Lemma~\ref{Lasota-Yorke} by a straightforward computation.
\begin{lem}\label{lem:Lb}
We have $\lVert \tilde{L}^n_s\rVert_b\leq 2C_{16}$ for all $s=\sigma+ib$ with $|\sigma|<\epsilon$ and all $n\geq 1$.
\end{lem}
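Let us indicate the computation. Since $\tilde{L}_\sigma 1=1$ and $|\tilde{L}^n_s v|\leq \tilde{L}^n_\sigma|v|$, we have $|\tilde{L}^n_s v|_\infty\leq|v|_\infty\leq\|v\|_b$. By Lemma~\ref{Lasota-Yorke} and $\lambda\leq 1$,
\[\frac{|\tilde{L}^n_s v|_{\operatorname{Lip}}}{1+|b|}\leq C_{16}|v|_\infty+C_{16}\lambda^n\frac{|v|_{\operatorname{Lip}}}{1+|b|}\leq 2C_{16}\|v\|_b. \]
Since $C_{16}>1$, both bounds are at most $2C_{16}\|v\|_b$, which gives $\|\tilde{L}^n_s v\|_b\leq 2C_{16}\|v\|_b$.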
\subsection{Cancellation lemma}
\begin{comment}
\begin{defn}
For $b\in\mathbb R$, let
\begin{align*}
\mathcal{C}_b=&\{(u,v):u,v\in Lip(\Delta_0),u>0,0\leq |v|\leq u,\\
& |u(x)-u(y)|,|v(x)-v(y)|\leq |b|u(x)|x-y| \}.
\end{align*}
\end{defn}
\end{comment}
The main result of this subsection is the cancellation lemma (Lemma~\ref{key}); the proof is inspired by the proofs of analogous results in~\cite{Nau} and~\cite{Sto}. We start by specifying all the constants.
Let $C_{17}$ be the constant which will be specified in \eqref{cone const 3}. We define the cone
\begin{defn}
For $b\in \mathbb{R}$, let
\begin{align*}
\mathcal{C}_b=&\{(u,v):\,u,\,v\in \text{Lip}(\Delta_0),\,u>0,\,0\leq |v|\leq u,\,|\log u|_{\text{Lip}}\leq C_{17} |b|,\\
&|v(x)-v(y)|\leq C_{17} |b| u(y)d(x,y)\,\,\,\text{for all}\,\,\,x,y\in \Delta_0\}.
\end{align*}
\end{defn}
Let $r>0$ and $\epsilon_0>0$ be the same constants as the ones in Lemma~\ref{lem:uni}. We apply Lemma~\ref{lem:uni} with $C=16C_{17}$. Let $n_0$ be a sufficiently large integer which satisfies Lemma~\ref{lem:uni} and the inequality
\begin{equation}
\label{cone const 1}
\lambda^{n_0}C_{17}(1+\operatorname{diam}(\Delta_0))\leq 1.
\end{equation}
Let $\gamma_{mj}$, with $m=1,2$, $j=1,\ldots,j_0$ be the inverse branches given by Lemma~\ref{lem:uni}.
Let $k\in\mathbb N$ be such that
\begin{equation}
\label{equ:k}
k\epsilon_0>16(C_2+\epsilon_0),
\end{equation}
where $C_2$ is given in \eqref{constant c2}.
Note that the measure $\nu$ is absolutely continuous with respect to the PS measure $\mu$. Let $D>0$ be such that for all $x\in\Lambda_\Gamma\cap \Delta_0$ and all $r'\leq 1/C_{3}$, with $C_{3}$ given in Proposition \ref{double},
\begin{equation}
\label{equ:D}
\nu(B(x,Dr'))>\nu(B(x,(k+2)r')).
\end{equation}
Let $\epsilon_2>0$ be such that
\begin{equation}
\label{equ:epsilon3}
(2C_{17}\epsilon_2+1/4)e^{2C_{17}\epsilon_2}\leq 3/4.
\end{equation}
Let $\epsilon_3>0$ be such that
\begin{equation}\label{equ:epsilon2}
\epsilon_3(D+2)< \min\{\epsilon_2,\ r,\ 1/C_{3}\},\ \epsilon_3(D+2)(C_2+\epsilon_0)<3\pi/2,\ \epsilon_3k \epsilon_0<\pi.
\end{equation}
Recall the notation $\tau_{mj}$ introduced in Lemma \ref{lem:uni}. For $s=\sigma+ib\in\mathbb C$, define
$$A_{s,\gamma_{mj}}(v)(x)=e^{(s+\delta)\tau_{mj}(x)}f_\sigma(\gamma_{mj}x)v(\gamma_{mj}x).$$
\begin{lem}\label{key}
There exists $0< \eta_0<1$ such that the following holds. For $s=\sigma+ib$ with $|\sigma|\leq \epsilon$ and $|b|>1$, for $(u,v)\in \mathcal{C}_b$, and for any $y\in\Lambda_0$, there exists $x\in B(y,\epsilon_3D/|b|)\cap \Lambda_\Gamma$ such that we have the following:
there exists $j\in \{1,\ldots,j_0\}$ such that one of the following inequalities holds on $B(x,\epsilon_3/|b|)$:
\begin{align*}
\textbf{type } \gamma_{1j}:\ |A_{s,\gamma_{1j}}(v)+A_{s,\gamma_{2j}}(v)|\leq\eta_0A_{\sigma,\gamma_{1j}}(u)+A_{\sigma,\gamma_{2j}}(u),\\
\textbf{type } \gamma_{2j}:\ |A_{s,\gamma_{1j}}(v)+A_{s,\gamma_{2j}}(v)|\leq A_{\sigma,\gamma_{1j}}(u)+\eta_0 A_{\sigma,\gamma_{2j}}(u).
\end{align*}
\end{lem}
We first prove a quick estimate.
\begin{lem}\label{lem:inf}
Let $\epsilon_2$ be the constant defined in \eqref{equ:epsilon3}.
For any $|b|>1$, for $(u,v)\in \mathcal{C}_b$ and for a ball $Z$ of radius $\epsilon_2/|b|$, we have
\begin{enumerate}
\item
$\inf_Zu\geq e^{-2C_{17}\epsilon_2}\sup_Zu;$
\item
either $|v|\leq \frac{3}{4}u $
for all $x\in Z$
or $|v|\geq \frac{1}{4}u $
for all $x\in Z$.
\end{enumerate}
\end{lem}
\begin{proof}
The first inequality is due to $|\log u(x)-\log u(y)|\leq C_{17}|b||x-y|$ for every $x,y\in\Delta_0$.
Suppose there exists $x_0\in Z$ such that $|v(x_0)|\leq \frac{1}{4}u(x_0)$. Then
\begin{align*}
|v(x)|&\leq |v(x)-v(x_0)|+\frac{1}{4}u(x_0)\leq C_{17}|x-x_0||b|\sup_Zu+\frac{1}{4}\sup_Zu\\
&\leq (2C_{17}\epsilon_2+\frac{1}{4})\sup_Zu\leq (2C_{17}\epsilon_2+\frac{1}{4})e^{2C_{17}\epsilon_2}\inf_Z u\leq \frac{3}{4}u(x).\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{key}]
It follows from \eqref{equ:D} that there exists $x_0\in (B(y,\epsilon_3 D/|b|)-B(y,(k+2)\epsilon_3/|b|))\cap\Lambda_\Gamma$.
Let $B_1=B(y,\epsilon_3/|b|)$, $B_2=B(x_0,\epsilon_3/|b|)$ and $\hat{B}$ the smallest ball containing $B_1\cup B_2$. For all $x\in B_1,x'\in B_2$, we have
\begin{equation}\label{equ:xx'}
d(x,x')\in \frac{\epsilon_3}{|b|}[k,D+2].
\end{equation}
In view of \eqref{equ:epsilon2}, the radius of $\hat{B}$ is smaller than $\epsilon_2/|b|$ and it is contained in $B(y,r)$. Let $e_0=(y-x_0)/|y-x_0|$.
By Lemma~\ref{lem:uni} applied at the point $y$, there exists $j$ in $\{1,\cdots,j_0\}$ such that \eqref{equ:uni} holds on $B(y,r)$ with $e=e_0$. From now on, $j$ is fixed, so we abbreviate $(\gamma_{1j},\gamma_{2j})$ to $(\gamma_1,\gamma_2)$ and $(\tau_{1j},\tau_{2j})$ to $(\tau_1,\tau_2)$.
Due to $|\gamma_m'|_\infty\leq \lambda\leq 1$, the radius of $\gamma_{m}\hat{B}$ is smaller than $\epsilon_{2}/|b|$. So we can apply Lemma~\ref{lem:inf} to $\gamma_{m}\hat{B}$ and obtain that
either
$|v(\gamma_mx)|\geq\frac{1}{4} u(\gamma_mx)$ for all $x\in \hat{B}$, or $|v(\gamma_mx)|\leq\frac{3}{4} u(\gamma_mx)$ for all $x\in \hat{B}$. Suppose first that
\[|v(\gamma_mx)|\leq \frac{3}{4}u(\gamma_mx) \]
holds for some $m\in\{1,2 \}$ and all $x\in \hat B$. Then Lemma~\ref{key} follows by a straightforward computation.
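Let us spell out this computation, say for $m=1$ and with $\eta_0\geq 3/4$ (which we may assume): for every $x\in\hat B$,
\begin{align*}
|A_{s,\gamma_{1}}(v)(x)+A_{s,\gamma_{2}}(v)(x)|&\leq e^{(\sigma+\delta)\tau_{1}(x)}f_\sigma(\gamma_{1}x)|v(\gamma_{1}x)|+e^{(\sigma+\delta)\tau_{2}(x)}f_\sigma(\gamma_{2}x)|v(\gamma_{2}x)|\\
&\leq \tfrac{3}{4}A_{\sigma,\gamma_{1}}(u)(x)+A_{\sigma,\gamma_{2}}(u)(x)\leq \eta_0A_{\sigma,\gamma_{1}}(u)(x)+A_{\sigma,\gamma_{2}}(u)(x),
\end{align*}
where we used $|e^{(s+\delta)\tau_{m}}|=e^{(\sigma+\delta)\tau_{m}}$ and $|v|\leq u$ for the second branch. This is the type $\gamma_{1j}$ inequality.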
Suppose that for $x\in \hat{B}$ and $m=1,2$
\begin{equation}\label{equ:notsmall}
|v(\gamma_mx)|\geq \frac{1}{4}u(\gamma_mx).
\end{equation}
\textbf{Claim:} Under the assumption of \eqref{equ:notsmall}, there exists $C_{18}>0$ independent of $b$ and $(u,v)$ such that for $l\in \{1,2\}$, we have
\begin{equation}\label{claim}
\text{either}\,\,\,\left|\frac{A_{s,\gamma_1}(v)}{A_{s,\gamma_2}(v)}\right|\leq C_{18}\text{ for all }x\in B_l\text{ or }\left|\frac{A_{s,\gamma_2}(v)}{A_{s,\gamma_1}(v)}\right|\leq C_{18}\text{ for all }x\in B_l.
\end{equation}
\begin{proof}[Proof of the claim]
Fix any $z_0\in \Delta_0$. Due to $|\tau_m'|_{\infty}\leq C_2$ (see \eqref{uniform contraction}), we have for any $x\in \hat{B}$, $$|\tau_1(x)-\tau_2(x)|\leq |\tau_1(z_0)-\tau_2(z_0)|+2C_2|x-z_0|.$$ Hence there exists a constant $C({\tau_1,\tau_2})$ depending on $\tau_1,\tau_2$ such that
\[\left|\frac{A_{s,\gamma_1}(v)}{A_{s,\gamma_2}(v)}\right|\leq C(\tau_1,\tau_2)\frac{f_\sigma(\gamma_1x)u(\gamma_1x)}{f_\sigma(\gamma_2x)u(\gamma_2x)}. \]
For the middle term,
\[\frac{f_\sigma(\gamma_1x)}{f_\sigma(\gamma_2x)}\leq 4\frac{\sup f_0}{\inf f_0}. \]
Since the radius of $\gamma_2B_l$ is less than $\epsilon_2/|b|$, using Lemma~\ref{lem:inf}, we have for every $x$ in $B_l$
\[ \frac{u(\gamma_1x)}{u(\gamma_2x)}\leq \frac{\sup_{B_l}u(\gamma_1)}{\inf_{B_l}u(\gamma_2)}\leq e^{2C_{17}\epsilon_2}\frac{\sup_{B_l}u(\gamma_1)}{\sup_{B_l}u(\gamma_2)}. \]
Putting these together, we have
\[\left|\frac{A_{s,\gamma_1}(v)}{A_{s,\gamma_2}(v)}\right|\leq C_{18}\frac{\sup_{B_l}u(\gamma_1)}{\sup_{B_l}u(\gamma_2)} \]
where $C_{18}=4C(\tau_1,\tau_2)e^{2C_{17}\epsilon_2}\frac{\sup f_0}{\inf f_0}$. We have a similar inequality for $\left|\frac{A_{s,\gamma_2}(v)}{A_{s,\gamma_1}(v)}\right|$. Note that either $\frac{\sup_{B_l}u(\gamma_1)}{\sup_{B_l}u(\gamma_2)}\leq 1$ or $\frac{\sup_{B_l}u(\gamma_2)}{\sup_{B_l}u(\gamma_1)}\leq 1$. This finishes the proof of the claim.
\end{proof}
We now estimate the angles; the following definitions are only for $x\in\hat{B}$. The function $\arg(v(\gamma_mx))$ is well defined because $|v(\gamma_mx)|\geq u(\gamma_mx)/4>0$. Let
\[\Theta(x)=b(\tau_1(x)-\tau_2(x)), \ V(x)=\arg(v(\gamma_1x))-\arg(v(\gamma_2x)), \]
and let $$\Phi(x)=\Theta(x)+V(x).$$
We apply Lemma~\ref{lem:uni} to $\hat{B}$ and obtain that for $x\in \hat{B}$,
\[|\partial_{e_0}\Theta(x)|\geq |b|\epsilon_0, \,\,\,|\Theta'(x)|\leq |b|C_2 . \]
For the angle function,
by \eqref{equ:notsmall} and \eqref{equ:hm}, we have for $i\in\{1,2\}$ and $x,x'\in\hat{B}$
\begin{align*}
|\arg v(\gamma_ix)-\arg v(\gamma_ix')|&=|\operatorname{Im}(\log v(\gamma_ix)-\log v(\gamma_ix'))|\leq \frac{|v(\gamma_ix)-v(\gamma_ix')|}{|v(\gamma_ix)|}\\
&\leq C_{17}|b|\frac{u(\gamma_ix)|\gamma_ix-\gamma_ix'|}{|v(\gamma_ix)|}\leq |b|\epsilon_0|x-x'|/4.
\end{align*}
This implies that for $x,x'\in\hat{B}$
\[ |V(x)-V(x')|\leq |b|\epsilon_0|x-x'|/2. \]
Combining the estimates for $\Theta$ and $V$, we obtain for $x,x'\in\hat{B}$
\begin{equation}\label{equ:phileq}
|\Phi(x)-\Phi(x')|\leq |b|(C_2+\epsilon_0)|x-x'|,
\end{equation}
and for $x, x+te_0\in \hat{B}$ with $t\in\mathbb R^+$,
\[|\Phi(x)-\Phi(x+te_0)|\geq |b|\epsilon_0 t/2.\]
Hence for $x_1=y,\ x_2=x_0$ which are the centers of $B_1$ and $B_2$ respectively, by \eqref{equ:xx'},
\begin{equation}\label{equ:phixx'}
|\Phi(x_1)-\Phi(x_2)|\in \epsilon_3[k\epsilon_0/2,(D+2)(C_2+\epsilon_0)].
\end{equation}
Let
$\epsilon_4=\epsilon_3k\epsilon_0/8.$
We claim that there exists $l\in \{1,2\}$ such that
\begin{equation}\label{equ:phiz}
d(\Phi(x_l),2\pi\mathbb Z)>\epsilon_4.
\end{equation}
If not so, then both the distance from $\Phi(x_1)$ to $2\pi \mathbb Z$ and that from $\Phi(x_2)$ to $2\pi\mathbb Z$ are less than $\epsilon_4$. By \eqref{equ:phixx'} and \eqref{equ:epsilon2}
\[|\Phi(x_1)-\Phi(x_2)|\leq \epsilon_3(D+2)(C_2+\epsilon_0) \leq 3\pi/2< 2\pi-2\epsilon_4. \]
Hence $\Phi(x_1)$ and $\Phi(x_2)$ lie in the same interval $(2n\pi-\epsilon_4,2n\pi+\epsilon_4)$ for some $n\in\mathbb Z$. This implies that
\[|\Phi(x_1)-\Phi(x_2)|\leq 2\epsilon_4=\epsilon_3k\epsilon_0/4, \]
contradicting \eqref{equ:phixx'}.
Without loss of generality, we may assume \eqref{equ:phiz} holds for $x_1$. For any $x$ in the ball $B_1$, by \eqref{equ:phileq} and \eqref{equ:k},
\[|\Phi(x)-\Phi(x_1)|\leq (C_2+\epsilon_0)\epsilon_3\leq k\epsilon_3\epsilon_0/16=\epsilon_4/2. \]
Combined with \eqref{equ:phiz}, we have
\begin{equation}
\label{equ:awayinteger}
d(\Phi(x),2\pi\mathbb Z)\geq\epsilon_4/2.
\end{equation}
In conclusion, there exists $l\in \{1,2\}$ such that for all $x\in B_l$, $d(\Phi(x),2\pi\mathbb Z)>\epsilon_4/2$ and \eqref{claim} holds. Without loss of generality, we may assume $|A_{s,\gamma_1}(v)(x)/A_{s,\gamma_2}(v)(x)|\leq C_{18}$ for all $x\in B_l$. By an elementary inequality \cite[Lemma 5.12]{Nau}, there exists $0<\eta_0<1$ depending on $\epsilon_4$ and $C_{18}$ such that on $B_l$
\begin{equation*}
|A_{s,\gamma_1}(v)+A_{s,\gamma_2}(v)|\leq \eta_0|A_{s,\gamma_1}(v)|+|A_{s,\gamma_2}(v)|\leq\eta_0A_{\sigma,\gamma_1}(u)+A_{\sigma,\gamma_2}(u).\qedhere
\end{equation*}
\end{proof}
For $b$ with $|b|\geq 1$, let
\begin{equation}\label{equ:deltab}
\Delta_b=\{x\in\Delta_0|\ d(x,\partial\Delta_0)>\frac{\epsilon_3(D+1)}{|b|} \}.
\end{equation}
For any $(u,v)\in \mathcal C_b$, we can find $\{x_i\}_{1\leq i\leq l_0}\subset\Lambda_0:=\Lambda_{\Gamma}\cap \Delta_0$ such that the $B(x_i,\epsilon_3/|b|)$'s are disjoint balls contained in $\Delta_0$,
\[\Lambda_0\cap\Delta_b\subset \cup_{1\leq i\leq l_0}B(x_i,2\epsilon_3D/|b|), \]
and on each $B(x_i,\epsilon_3/|b|)$ one of the $2j_0$ inequalities in Lemma~\ref{key} holds. Indeed, suppose we have already found some points $x_i$ but $\cup B(x_i,2\epsilon_3D/|b|)$ does not cover the set $\Lambda_0\cap\Delta_b$. Then for a point $y\in \Lambda_0\cap \Delta_b-\cup B(x_i,2\epsilon_3D/|b|)$, we apply Lemma~\ref{key} to $y$ and obtain a point $x\in B(y,\epsilon_3 D/|b|)\cap \Lambda_0$ such that Lemma~\ref{key} holds on $B(x,\epsilon_3/|b|)$. Moreover, by the definition of $\Delta_b$, the ball $B(x,\epsilon_3/|b|)$ is contained in $\Delta_0$, and it is disjoint from $\cup B(x_i,\epsilon_3/|b|)$.
Let $B_i=B(x_i,\epsilon_3/|b|)$ and $\tilde{B_i}=B(x_i,\epsilon_3/(3|b|))$ for $i=1,\cdots,l_0$. Let $\eta\in[\eta_0,1)$ and define a $C^1$ function $\chi:\Delta_0\rightarrow[\eta,1]$ as follows:
it equals $1$ outside of $\cup_{m,j,i} \gamma_{mj} B_i$; for each $B_i$, if $B_i$ is of type $\gamma_{mj}$, we set $\chi(\gamma_{mj}(y))=\eta$ for $y\in \tilde{B_i}$ and $\chi\equiv 1$ on the other sets $\gamma_{m'j'}B_i$. We can choose $\eta$ close to $1$ and independent of $b$ such that $|\chi'(x)|\leq |b|$ for all $x\in \Delta_0$.
\begin{cor}
\label{cone ineq}
Under the same assumptions as in Lemma~\ref{key}, for $(u,v)\in \mathcal{C}_b$ and $\chi=\chi(b,u,v)$ a $C^1$ function described as above, we have
\begin{equation*}
\label{equ:con ineq}
|\tilde{L}_s^{n_0} v|\leq \tilde{L}_\sigma^{n_0}(\chi u).
\end{equation*}
\end{cor}
Define $J_i=B(x_i,2\epsilon_3D/|b|)$ for $i=1,\cdots,l_0$ and let $\tilde{B}=\cup_i\tilde{B_i}$.
\begin{prop}
\label{contr int}
Suppose that $w$ is a positive Lipschitz function with $|\log w(x)-\log w(y)|\leq K|b||x-y|$ for some $K>0$. Then
\begin{equation}
\label{equ:contr int}
\int_{\tilde{B}}w\mathrm{d}\nu \geq \epsilon_4\int_{\Delta_b} w\ \mathrm{d}\nu,
\end{equation}
with $\epsilon_4=\epsilon_5 e^{-4\epsilon_3DK}$, where $\epsilon_5$ comes from the doubling property and depends only on $D$ and $\nu$.
\end{prop}
\begin{proof}
Since $\cup_i J_i$ covers $\Delta_b$, it is sufficient to prove the corresponding inequality for each $i$.
By the hypothesis on $w$, we obtain $\inf_{\tilde{B_i}}w\geq e^{-4\epsilon_3DK}\sup_{J_i}w$. By the doubling property, there exists $\epsilon_5$ depending on $D$ such that
\[\nu(\tilde{B_i})\geq\epsilon_5\nu(J_i). \]
Therefore
\[\int_{\tilde{B_i}}w\ \mathrm{d}\nu\geq \nu(\tilde{B_i})\inf_{\tilde{B_i}}w\geq \epsilon_5\nu(J_i) e^{-4\epsilon_3DK}\sup_{J_i}w\geq \epsilon_4\int_{J_i}w\ \mathrm{d}\nu.\qedhere \]
\end{proof}
\subsection{Invariance of Cone Condition}
We define the constants
\begin{align}
\label{cone const 2}
&C_{17}'=16(\delta+\epsilon)C_2|f_0|_{\infty} |f_0^{-1}|_{\infty}+16 |f_0^{-1}|_{\infty}|f_0|_{\text{Lip}}+4C_2+2,\\
\label{cone const 3}
&C_{17}=\max\{8|f_0^{-1}|_{\infty} |f_0|_{\text{Lip}} +(\delta+3)C_2+1+4|f_0|_{\infty}|f_0^{-1}|_{\infty}C_{17}', 6C_{16}\}.
\end{align}
\begin{lem}
\label{cone invariance}
Let $C_{17}>0$ be the constant defined in (\ref{cone const 3}) and $n_0$ be the constant defined in (\ref{cone const 1}). For $s=\sigma+ib$ with $|\sigma|<\epsilon$ and $|b|>1$, for $(u,v)\in \mathcal{C}_b$,
we have
\begin{equation}
(\tilde{L}^{n_0}_{\sigma}(\chi u),\,\tilde{L}^{n_0}_s v)\in \mathcal{C}_b,
\end{equation}
where $\chi=\chi (b,u,v)$ is the same as the one in Corollary~\ref{cone ineq}.
\end{lem}
The proof repeats verbatim that of~\cite[Lemma 2.12]{ArMe}.
\subsection{$L^2$ contraction for bounded iterations}
In this part, we prove Proposition~\ref{L2contracting} in the case when $m$ is bounded by $\log |b|$. Compared with~\cite{AGY}, where the analogue of Proposition~\ref{L2contracting} can be finished at this stage, we face a difficulty coming from the boundary. More precisely, Proposition \ref{contr int} is one of the ingredients used to obtain Proposition \ref{L2contracting}, but the integration region on the right-hand side of \eqref{equ:contr int} is $\Delta_b$, which is smaller than $\Delta_0$; this only yields $L^2$ contraction for boundedly many iterations. For large iterations, we will use a Lipschitz contraction lemma (Lemma \ref{lem:Lipcontracting}) to obtain $L^2$ contraction in the next subsection.
\begin{lem}\label{lem:vLip}
For $|b|>1$ and $v\in \operatorname{Lip}(\Delta_0)$, if $|v|_{\operatorname{Lip}}\geq C_{17} |b| |v|_{\infty}$, then
\[\|\tilde{L}_s^{n_0}v\|_b\leq \frac{9}{10}\|v\|_b. \]
\end{lem}
\begin{proof}
We have
\[|\tilde{L}_s^{n_0}v|_\infty\leq |v|_\infty\leq \frac{1}{C_{17}|b|}|v|_{\operatorname{Lip}}\leq \frac{2}{C_{17}}\|v\|_b. \]
By Lemma~\ref{Lasota-Yorke}, we obtain
\[|\tilde{L}_s^{n_0}v|_{\operatorname{Lip}}\leq C_{16}(1+|b|)|v|_\infty+C_{16}\lambda^{n_0}|v|_{\operatorname{Lip}}\leq (1+|b|)(\frac{C_{16}(1+|b|)}{C_{17}|b|}+C_{16}\lambda^{n_0})\|v\|_b. \]
The proof is complete by using the inequalities $C_{17}\geq 6C_{16}$ and $\lambda^{n_0}C_{17}\leq 1$.
\end{proof}
\begin{lem}
\label{L2bounded}
There exist $C_{19}>0$ and $\beta<1$ such that for all $s=\sigma+ib$ with $|\sigma|<\epsilon$, all $|b|$ large enough, and all $m\leq [C_{19}\log|b|]$,
\begin{equation}\label{equ:L2Linfinty}
\int |\tilde{L}^{mn_0}_{s} v|^2\mathrm{d}\nu \leq \beta^m\|v\|_b^2.
\end{equation}
\end{lem}
\begin{proof}
If for all $0\leq p\leq m-1$, we have $|\tilde{L}_s^{pn_0}v|_{\text{Lip}}\geq C_{17} |b| |\tilde{L}_s^{pn_0}v|_{\infty}$, then by Lemma~\ref{lem:vLip},
\[\int |\tilde{L}_s^{mn_0}v|^2\mathrm{d}\nu\leq \|\tilde{L}_s^{mn_0}v\|_b^2\leq (\frac{9}{10})^m\|v\|_b^2. \]
Otherwise, let $p$ be the smallest integer such that $|\tilde{L}_s^{pn_0}v|_{\operatorname{Lip}}\leq C_{17}|b||\tilde{L}_s^{pn_0}v|_\infty$, and consider $v'=\tilde{L}_s^{pn_0}v$. Then Lemma~\ref{lem:vLip} implies $\|v'\|_b\leq (\frac{9}{10})^p\|v\|_b$. We only need to show that
\[\int|\tilde{L}_s^{(m-p)n_0}v'|^2\mathrm{d}\nu\leq \beta^{m-p}\|v'\|_b^2. \]
We are thus reduced to the case $p=0$, that is, $|v|_{\operatorname{Lip}}\leq C_{17}|b||v|_\infty$.
Define $u_0\equiv 1,\,v_0=v/|v|_{\infty}$ and inductively,
\begin{equation*}
u_{m+1}=\tilde{L}^{n_0}_{\sigma}(\chi_{m}u_{m}),\,\,\, v_{m+1}=\tilde{L}^{n_0}_{s}(v_m),
\end{equation*}
where $\chi_m=\chi (b,u_m,v_m)$. It is immediate that $(u_0,v_0)\in \mathcal{C}_b$, and it follows from Lemma~\ref{cone invariance} that $(u_m,v_m)\in \mathcal{C}_b$ for all $m$. Hence in particular the $\chi_m$'s are well defined.
We will show that there exist $\beta_1\in (0,1)$, $\kappa>0$ and $C>0$ such that for all $m$
\begin{equation}
\label{induction eq}
\int u_{m+1}^2 \mathrm{d}\nu \leq \beta_1 \int u_m^2 \mathrm{d}\nu+C|b|^{-\kappa}.
\end{equation}
Then note that
\begin{equation*}
|\tilde{L}^{mn_0}_s v|=|v|_{\infty} |\tilde{L}^{mn_0}_s v_0|=|v|_{\infty} |v_m|\leq |v|_{\infty} u_m.
\end{equation*}
As a result,
\begin{equation*}
\int |\tilde{L}^{mn_0}_s v|^2\mathrm{d}\nu\leq |v|_{\infty}^2 \int u_m^2 \mathrm{d}\nu \leq |v|^2_{\infty}(\beta_1^m \int u^2_0 \mathrm{d}\nu+C|b|^{-\kappa}\sum_{0\leq l\leq m-1}\beta_1^l ) \leq(\beta_1^m+C|b|^{-\kappa}/(1-\beta_1)) |v|_{\infty}^2.
\end{equation*}
We can find $C_{19}>0$ and $\beta<1$ such that for all large enough $|b|$, \eqref{equ:L2Linfinty} holds for all $m\leq [C_{19}\log|b|]$.
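One admissible choice of the constants is as follows: fix any $\beta\in(\beta_1,1)$ and any $C_{19}<\kappa/|\log\beta|$. For $1\leq m\leq C_{19}\log|b|$ we have $\beta^m\geq|b|^{-C_{19}|\log\beta|}$, hence
\[\beta_1^m+\frac{C|b|^{-\kappa}}{1-\beta_1}\leq \beta^m\Bigl(\frac{\beta_1}{\beta}+\frac{C}{1-\beta_1}|b|^{C_{19}|\log\beta|-\kappa}\Bigr)\leq\beta^m \]
for all $|b|$ large enough, since $\beta_1/\beta<1$ and $C_{19}|\log\beta|-\kappa<0$; the case $m=0$ is trivial because $\int|v|^2\mathrm{d}\nu\leq|v|_\infty^2\leq\|v\|_b^2$.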
Now we prove \eqref{induction eq}. By definition
\begin{align*}
u_{m+1}(x)&=\lambda_{\sigma}^{-n_0} f_{\sigma}^{-1}(x) \sum_{\gamma\in \mathcal H^{n_0}} |\gamma'(x)|^{\delta+\sigma} f_{\sigma}(\gamma x) \chi_{m}(\gamma x) u_m(\gamma x)\\
&= \lambda_{\sigma}^{-n_0} f_{\sigma}^{-1}(x)\sum_{\gamma\in \mathcal H^{n_0}} \left(|\gamma'(x)|^{\delta/2} f_{\sigma}^{1/2}(\gamma x) u_m(\gamma x)\right) \left(|\gamma'(x)|^{\delta/2+\sigma} f_{\sigma}^{1/2}(\gamma x) \chi_m(\gamma x)\right),
\end{align*}
so by Cauchy-Schwarz
\begin{align*}
u_{m+1}^2(x)\leq& (\lambda_{\sigma}^{n_0} f_{\sigma}(x))^{-2}\left(\sum_{\gamma\in \mathcal H^{n_0}} |\gamma'(x)|^{\delta} f_{\sigma}(\gamma x) u_{m}^2(\gamma x)\right) \left(\sum_{\gamma\in \mathcal H^{n_0}} |\gamma'(x)|^{\delta+2\sigma} f_{\sigma}(\gamma x) \chi_m^2(\gamma x)\right)\\
\leq & \xi (\sigma) \tilde{L}^{n_0}_0 (u^2_m) \tilde{L}^{n_0}_{2\sigma} (\chi_m^2),
\end{align*}
where (noting that $\lambda_0=1$)
\begin{equation*}
\xi (\sigma) =(\lambda^{-2}_{\sigma} \lambda_{2\sigma})^{n_0} \left|\frac{f_0}{f_{\sigma}}\right|_{\infty} \left|\frac{f_{2\sigma}}{f_{\sigma}}\right|_{\infty} \left| \frac{f_{\sigma}}{f_0}\right|_{\infty} \left|\frac{f_{\sigma}}{f_{2\sigma}}\right|_{\infty}.
\end{equation*}
As in Proposition~\ref{contr int}, we write $\Delta_0=\tilde{B}\sqcup \tilde{B}^c$. Let $\mathcal{H}_c$ be the set of inverse branches given by Lemma~\ref{lem:uni}. If $y\in \tilde{B}$, then there exists $\gamma_i\in \mathcal{H}_{c}$ such that
\begin{align*}
(\tilde{L}^{n_0}_{2\sigma}\chi^2_{m})(y)\leq & \lambda^{-n_0}_{2\sigma} f_{2\sigma} (y)^{-1} \left(\eta^2 |\gamma'_i(y)|^{\delta+2\sigma} f_{2\sigma}(\gamma_iy)+\sum_{\gamma\in \mathcal H^{n_0}\backslash \{\gamma_i\}} |\gamma'(y)|^{\delta+2\sigma} f_{2\sigma}(\gamma y)\chi_m^2(\gamma y) \right)\\
\leq& (\tilde{L}^{n_0}_{2\sigma} 1)(y)-(1-\eta^2) \lambda^{-n_0}_{2\sigma} f_{2\sigma}(y)^{-1} |\gamma'_i(y)|^{\delta+2\sigma}f_{2\sigma}(\gamma_iy)\\
\leq & 1-(1-\eta^2) 2^{-(n_0+2)} \inf f_0 \cdot |f_0|_{\infty}^{-1} \cdot \inf_{\{\gamma_i\in \mathcal{H}_c\}} |\gamma'_i|^{\delta+2\sigma}=:\eta_1<1.
\end{align*}
In this way we obtain that there exists $\eta_1<1$ such that
\begin{equation*}
u^2_{m+1}(y)\leq
\begin{cases}
\xi (\sigma) \eta_1 (\tilde{L}^{n_0}_0 u_m^2)(y), & y\in \tilde{B},\\
\xi (\sigma) (\tilde{L}^{n_0}_0 u_m^2)(y), & y\in \tilde{B}^c.
\end{cases}
\end{equation*}
Since $(u_m,v_m)\in \mathcal{C}_b$, it follows in particular that $|\log u_m|_{\text{Lip}}\leq C_{17} |b|$. Hence by \eqref{cone const 1},
\begin{equation*}
u_m^2(\gamma x)/u_m^2(\gamma y)\leq \exp (2C_{17}\lambda^{n_0} |b|d(x,y))\leq \exp (2|b|d(x,y)).
\end{equation*}
Let $w=\tilde{L}^{n_0}_0(u_m^2)$. Then
\begin{align*}
\frac{w(x)}{w(y)}=\frac{f_0(y)\sum_{\gamma\in \mathcal H^{n_0}}|\gamma'(x)|^{\delta} f_0(\gamma x)u_m^2(\gamma x) }{f_0(x) \sum_{\gamma\in \mathcal H^{n_0}} |\gamma'(y)|^{\delta} f_0(\gamma y) u_m^2(\gamma y)}\leq \exp \left(\left(2|f_0^{-1}|_{\infty} |f_0|_{\text{Lip}} +\delta C_2 +2|b|\right) d(x,y)\right).
\end{align*}
Hence $|\log w|_{\text{Lip}}\leq K|b|$ with $K=2|f_0^{-1}|_{\infty} |f_0|_{\text{Lip}} +\delta C_2+2$. Using Proposition~\ref{contr int}, we have
\begin{equation*}
(1-\eta_1) \int_{\tilde{B}} w\mathrm{d}\nu \geq \epsilon_4 (1-\eta_1)\int_{\Delta_b} w \mathrm{d}\nu.
\end{equation*}
Setting $\beta'=1-\epsilon_4(1-\eta_1)$, we can further write
\begin{equation*}
\eta_1\int_{\tilde{B}} w\mathrm{d}\nu+\int_{\Delta_b-\tilde{B}} w\mathrm{d}\nu\leq \beta' \int_{\Delta_b} w\mathrm{d}\nu\leq \beta'\int_{\Delta_0} w \mathrm{d}\nu.
\end{equation*}
Hence
\begin{align}
\int_{\Delta_b} u^2_{m+1}\mathrm{d}\nu \leq & \xi (\sigma) \left(\eta_1\int_{\tilde{B}} \tilde{L}^{n_0}_0(u^2_m)\mathrm{d}\nu +\int_{\Delta_b-\tilde{B}} \tilde{L}^{n_0}_0 (u^2_m)\mathrm{d}\nu\right)\nonumber\\
\label{bounded contraction 1}
\leq &\xi(\sigma) \beta' \int_{\Delta_0} \tilde{L}^{n_0}_0(u^2_m)\mathrm{d}\nu =\xi (\sigma) \beta' \int_{\Delta_0} u^2_m\mathrm{d}\nu.
\end{align}
By \eqref{equ:bou}, \eqref{equ:deltab} and $|u_{m+1}|\leq 1$,
\begin{equation}
\label{bounded contraction 2}
\int_{\Delta_0-\Delta_b}u_{m+1}^2 \mathrm{d}\nu\leq \nu(\Delta_0-\Delta_b)\leq C|b|^{-\kappa}.
\end{equation}
Finally we can shrink $\epsilon$ if necessary so that $\xi (\sigma)\beta'<1$ for $|\sigma|<\epsilon$ and then \eqref{bounded contraction 1} and \eqref{bounded contraction 2} imply \eqref{induction eq}.
\end{proof}
\subsection{Proof of Proposition~\ref{L2contracting}}\label{sec:L2contractoin}
\begin{lem}\label{lem:Linfty2}
There exist $\epsilon\in (0,1),\, \tau\in (0,1)$ and $C_{20}>0$ such that for all $s=\sigma+ib$ with $|\sigma|<\epsilon$, $n\geq 1$ and $v\in \operatorname{Lip}(\Delta_0)$, we have
\begin{equation*}
|\tilde{L}^n_s v|^2_{\infty} \leq C_{20} (1+|b|)\tau^n |v|_{\infty} \lVert v\rVert_b+C_{20}B^n|v|_{\infty} \int |v| \mathrm{d}\nu
\end{equation*}
where $B>1$ is a constant depending on $\epsilon$ and it tends to $1$ as $\epsilon\to 0$.
\end{lem}
\begin{proof}
We have
\begin{align*}
|\tilde{L}^n_s v(x)|\leq & \lambda_{\sigma}^{-n} f_{\sigma}^{-1}(x) \sum_{\gamma\in \mathcal H^n} |\gamma'(x)|^{\delta+\sigma} f_{\sigma}(\gamma x) |v|(\gamma x)\\
=& \lambda^{-n}_{\sigma} f_{\sigma}^{-1}(x) \sum_{\gamma\in \mathcal H^n} \left(|\gamma'(x)|^{\delta/2+\sigma} f_{\sigma}^{1/2}(\gamma x) |v|^{1/2}(\gamma x)\right) \left(|\gamma'(x)|^{\delta/2} f^{1/2}_{\sigma}(\gamma x) |v|^{1/2}(\gamma x)\right).
\end{align*}
Using Cauchy-Schwarz, we obtain
\begin{align*}
|\tilde{L}^n_s v(x)|^2 \leq & \lambda_{\sigma}^{-2n} f_{\sigma}^{-2}(x) \left(\sum_{\gamma\in \mathcal H^n} |\gamma'(x)|^{\delta+2\sigma} f_{\sigma}(\gamma x)|v|(\gamma x)\right) \left(\sum_{\gamma\in \mathcal H^n} |\gamma'(x)|^{\delta} f_{\sigma}(\gamma x)|v|(\gamma x)\right)\\
\leq & (\lambda_{\sigma}^{-2}\lambda_{2\sigma})^n \xi(\sigma)\, \tilde{L}^n_{2\sigma}(|v|) (x) \cdot \tilde{L}^n_0(|v|)(x),
\end{align*}
where $\xi(\sigma)=|f_0/f_{\sigma}|_{\infty} |f_{2\sigma}/f_{\sigma}|_{\infty} |f_{\sigma}/f_0|_{\infty} |f_{\sigma}/f_{2\sigma}|_{\infty}\leq 64$. Hence
\begin{equation}
\label{bound}
|\tilde{L}^n_s v|^2_{\infty} \leq 64 B^n |v|_{\infty} |\tilde{L}^n_0(|v|)|_{\infty},
\end{equation}
where $B>1$ is a constant depending on $\epsilon$ with $B\to 1$ as $\epsilon\to 0$.
Since $\tilde{L}_0$ is the normalized transfer operator of the uniformly expanding map $T$, there exists $\tau_1\in (0,1)$ such that $|\tilde{L}^n_0 v|_{\infty}\leq C\tau_1^n\lVert v\rVert_{\text{Lip}}$ for all $v\in \text{Lip}(\Delta_0)$ with $\int v \mathrm{d}\nu=0$. (This is a consequence of the spectral gap of the quasi-compact operator $\tilde{L}_0$.) Hence, decomposing $|v|$ as $(|v|-\int |v|\mathrm{d}\nu)+\int |v|\mathrm{d}\nu$, we obtain
\begin{equation*}
| \tilde{L}^n_0(|v|)|_{\infty} \leq 2C\tau_1^n \lVert v\rVert_{\text{Lip}}+\int |v|\mathrm{d}\nu.
\end{equation*}
Substituting into (\ref{bound}), we have
\begin{equation*}
|\tilde{L}^n_s v|_{\infty}^2\leq 128 C(B\tau_1)^n (1+|b|)|v|_{\infty} \lVert v\rVert_b +64 B^n |v|_{\infty} \int |v| \mathrm{d}\nu.
\end{equation*}
Finally, shrink $\epsilon$ if necessary so that $\tau=B\tau_1<1$.
\end{proof}
\begin{lem}\label{lem:Lipcontracting}
There exist $C>0,\ \epsilon \in (0,1),\, A>0$ and $\beta\in (0,1)$ such that
\begin{equation*}
\lVert \tilde{L}^{mn_0}_s v\rVert_b\leq C\beta^m \lVert v\rVert_b
\end{equation*}
for all $m\geq A\log |b|,\, s=\sigma+ib$ with $|\sigma|<\epsilon$ and $|b|$ large enough, and all $v\in \operatorname{Lip}(\Delta_0)$.
\end{lem}
\begin{proof}
Let $N=[C_{19}\log|b|]n_0$. Applying Lemma~\ref{lem:Linfty2} to $\tilde{L}_s^N v$ with $n=lN$, and using Lemma~\ref{lem:Lb} and \eqref{equ:L2Linfinty}, we obtain
\begin{align*}
|\tilde{L}_s^{(l+1)N}v |_\infty^2&\leq C_{20}(1+|b|)\tau^{lN}|\tilde{L}_s^N v|_\infty\|\tilde{L}_s^Nv\|_b+C_{20}B^{lN}|\tilde{L}_s^Nv|_\infty \Big(\int |\tilde{L}_s^Nv|^2\mathrm{d} \nu\Big)^{1/2}\\
&\leq 2C_{16}C_{20}(1+|b|)\tau^{lN}|v|_\infty\|v\|_b+2C_{16}C_{20}B^{lN}|v|_\infty \beta^{N/2}\|v\|_b.
\end{align*}
We fix $l$, depending on $\tau$, $C_{19}$ and $n_0$, such that $(1+|b|)\tau^{lN/2}\leq 1$. Then, shrinking $\epsilon$ (and hence $B$) if necessary, there exists $\beta_1<1$ such that
\begin{equation}\label{equ:l+1}
|\tilde{L}_s^{(l+1)N}v|_\infty\leq \beta_1^{(l+1)N}\|v\|_b.
\end{equation}
For Lipschitz norm, we have
\begin{align*}
|\tilde{L}_s^{(l+2)N}v|_{\operatorname{Lip}}&\leq C_{16}(1+|b|)|\tilde{L}_s^{(l+1)N}v|_\infty+C_{16}\lambda^N|\tilde{L}_s^{(l+1)N}v|_{\operatorname{Lip}}\\
&\leq C_{16}(1+|b|)\beta_1^{(l+1)N}\|v\|_b+C_{16}^2\lambda^N((1+|b|)|v|_\infty+\lambda^{(l+1)N}|v|_{\operatorname{Lip}})\\
&\leq C_{16}^2(1+|b|)\|v\|_b(\beta_1^{(l+1)N}+\lambda^N+\lambda^{(l+2)N})\leq 3C_{16}^2(1+|b|)\beta_2^N\|v\|_b,
\end{align*}
for some $\beta_2<1$, where we use Lemma~\ref{Lasota-Yorke} to get the first inequality and \eqref{equ:l+1} to get the second one. For the infinity norm, by \eqref{equ:l+1} and Lemma~\ref{lem:Lb}, we obtain
\[|\tilde{L}_s^{(l+2)N}v|_\infty\leq 2C_{16}\beta_1^{(l+1)N}\|v\|_b. \]
Combining these two norm estimates, we obtain
\begin{equation}\label{equ:l+2}
\|\tilde{L}_s^{(l+2)N}v\|_b\leq C_{16}^2(2\beta_1^{(l+1)N}+3\beta_2^{N})\|v\|_b\leq \beta_3^{(l+2)N/n_0}\|v\|_b,
\end{equation}
for some $\beta_3<1$ if $|b|$ is large enough to absorb the constant $6C_{16}^2$.
Let $A=2(l+2)C_{19}$ and $N_1=(l+2)N/n_0=(l+2)[C_{19}\log|b|]\leq A\log|b|$. For $m\geq A\log|b|$, we can write $m=dN_1+r$ with $r\in\mathbb N$ and $r< N_1$. Therefore by \eqref{equ:l+2} and Lemma~\ref{lem:Lb},
\begin{equation*}
\|\tilde{L}_s^{mn_0}v\|_b=\| \tilde{L}_s^{dN_1n_0}(\tilde{L}_s^{rn_0}v)\|_b\leq\beta_3^{dN_1}\|\tilde{L}_s^{rn_0}v\|_b\leq 2C_{16}\beta_3^{dN_1}\|v\|_b\leq 2C_{16}(\sqrt{\beta_3})^m\|v\|_b.\qedhere
\end{equation*}
\end{proof}
\begin{proof}[\textbf{Proof of Proposition~\ref{L2contracting}}]
It is sufficient to prove that for all $m\in \mathbb{N}$,
\begin{equation}\label{LLs}
\int |\tilde{L}_s^{mn_0}v|^2\mathrm{d}\nu\leq C\beta^m\|v\|^2_b.
\end{equation}
Then for any $k\in \mathbb{N}$, write $k=mn_0+r$ with $0\leq r<n_0$. We have
\[\int |L_s^kv|^2\mathrm{d}\mu_E\leq C\lambda_\sigma^k\int |\tilde{L}_s^k(f_{\sigma}^{-1}v)|^2\mathrm{d}\nu\leq C\lambda_\sigma^k \beta^m\|\tilde{L}_s^r(f_{\sigma}^{-1}v)\|_b^2\leq C\lambda_\sigma^k\beta^m\|f_{\sigma}^{-1}v\|_b^2\leq C\lambda_{\sigma}^{k}\beta^m \|v\|_b^2.\]
By choosing $\epsilon$ small such that $\lambda_\sigma^{n_0}\beta<1$ for any $|\sigma|<\epsilon$, we obtain Proposition~\ref{L2contracting}.
It remains to prove \eqref{LLs}. For $m> A\log|b|$, by Lemma~\ref{lem:Lipcontracting}, we obtain
\[\int|\tilde{L}_s^{mn_0}v|^2\mathrm{d}\nu\leq \|\tilde{L}_s^{mn_0}v \|_b^2\leq C\beta^m\|v\|_b^2. \]
For $C_{19}\log|b|\leq m\leq A\log|b|$, by \eqref{equ:L2Linfinty} and Lemma~\ref{lem:Lb}, we have
\[\int|\tilde{L}_s^{mn_0}v|^2\mathrm{d}\nu\leq \beta^{[C_{19}\log|b|]}\|\tilde{L}_s^{(m-[C_{19}\log|b|])n_0}v \|_b^2\leq 2C_{16}\beta^{[C_{19}\log|b|]}\|v\|^2_b\leq 2C_{16}\beta_1^m\|v\|^2_b
\]
for some $\beta_1=\beta^{C_{19}/A}<1$.
The case when $m\leq C_{19}\log |b|$ has been verified in Lemma~\ref{L2bounded}.
\end{proof}
\section{Exponential mixing}
\label{sec:expmix}
In this section, we prove Theorem~\ref{thm:skew}. As a first step, we prove an analogous result for an expanding semiflow. Let $T:\Lambda_+\to \Lambda_+$ be the uniformly expanding map and $R:\Lambda_+\to \mathbb{R}_+$ be the roof function as defined in Proposition~\ref{prop:coding}. Set $\Lambda_+^{R}=\{(x,t)\in \Lambda_+\times \mathbb{R}: 0\leq t<R(x)\}$. We define a semiflow $T_t:\Lambda_+^R\to \Lambda_+^R$ by $T_s(x,t)=(T^nx, t+s-R_n(x))$, where $R_n=\sum_{j=0}^{n-1}R\circ T^j$ denotes the $n$-th Birkhoff sum of $R$ and $n$ is the unique integer satisfying $R_n(x)\leq t+s<R_{n+1}(x)$. Recall that $\nu$ is the unique $T$-invariant ergodic probability measure on $\Lambda_+$. The semiflow $T_t$ preserves the probability measure $\nu^{R}=\nu\times \operatorname{Leb}/(\nu\times \operatorname{Leb})(\Lambda_+^R)$. We will also use the probability measure $\mu_E^{R}=\mu_E\times \operatorname{Leb}/(\mu_E\times \operatorname{Leb})(\Lambda_+^R)$ on $\Lambda_+^R$. We show that $T_t$ is exponentially mixing.
For a bounded function on $\Lambda_+^R$, we define two norms. Set
\begin{align*}
&\|U\|_{\mathcal B_0}=|U|_\infty+\sup_{ (x,a)\neq(x',a')\in\Lambda_+^R}\frac{|U(x,a)-U(x',a')|}{d(x,x')+|a-a'|},\\
&\|V\|_{\mathcal B_1}=|V|_\infty+\sup_{x\in \Lambda_+} \frac{\operatorname{Var}_{(0,R(x))}\{t\mapsto V(x,t) \}}{R(x)},
\end{align*}
where $\operatorname{Var}_{(0,R(x))}\{t\mapsto V(x,t) \}$ is the total variation of the function $t\mapsto V(x,t)$ on the interval $(0,R(x))$.
\begin{thm}\label{semiflow}
There exist $C>0,\ \epsilon>0$ such that for all $t>0$ and for any two functions $U,\ V$ on $\Lambda_{+}^R$ with $\|U\|_{\mathcal B_0},\ \|V\|_{\mathcal B_1}$ finite, we have
\[\left|\int U\cdot V\circ T_t\mathrm{d}\mu_E^R-\left(\int U\mathrm{d}\mu_E^R\right)\left(\int V\mathrm{d}\nu^R\right)\right|\leq Ce^{-\epsilon t}\|U\|_{\mathcal B_0}\|V\|_{\mathcal B_1}. \]
\end{thm}
\begin{rem}
Applying this theorem with $(x,t)\mapsto U(x,t)\frac{\mathrm{d}\nu}{\mathrm{d}\mu_E}(x)$ in place of $U$, we obtain
\begin{equation}
\label{semiflownu}
\left|\int U\cdot V\circ T_t\mathrm{d}\nu^R-\left(\int U\mathrm{d}\nu^R\right) \left(\int V\mathrm{d}\nu^R\right)\right|\leq Ce^{-\epsilon t}\|U\|_{\mathcal B_0}\|V\|_{\mathcal B_1}.
\end{equation}
\end{rem}
With Proposition~\ref{L2contracting} available, Theorem~\ref{semiflow} can be proved essentially along the same lines as \cite[Theorem 7.3]{AGY} (see also \cite[Section 7.5]{AGY}); we sketch the argument. For a pair of functions $U,V$, let $\rho(t)=\int U\cdot V\circ T_t\mathrm{d}\mu_E^R$ be the correlation function. The key observation is that the Laplace transform of $\rho$, denoted by $\hat{\rho}$, can be expressed as a sum of twisted transfer operators $L_s$ \cite[Lemma 7.17]{AGY}. One shows that $\hat{\rho}$ admits an analytic continuation to a neighborhood of each point $s=ib$; this part of the argument uses the quasi-compactness of the twisted transfer operators \cite[Lemmas 7.21, 7.22]{AGY}. When $|b|$ is large, the Dolgopyat-type estimate (Proposition~\ref{L2contracting}), which replaces~\cite[Proposition 7.7]{AGY} in the current setting, shows that $\hat{\rho}$ admits an analytic extension to a strip $\{s=\sigma+ib\in \mathbb{C}: |\sigma|<\sigma_0\}$ for all sufficiently small $\sigma_0$ \cite[Corollary 7.20]{AGY}. Exponential mixing then follows from the classical Paley-Wiener theorem \cite[Theorem 7.23]{AGY}.
The difference between our result and that in~\cite{AGY} lies in the classes of functions considered. The only adjustment we need to make is to~\cite[Lemma 7.18]{AGY}, which in their paper is a norm estimate for $C^1$ functions, whereas in the current setting it must hold for functions with finite $\mathcal B_0$ norm. The precise statement is as follows. For a function $U:\Lambda_+^R\to \mathbb{R}$ with $\| U\|_{\mathcal B_0}<\infty$ and $s\in \mathbb{C}$, set $\hat{U}_{s}(x)=\int_0^{R(x)} e^{-ts}U(x,t)\mathrm{d} t$.
\begin{lem}
There exists $C>0$ such that for $s=\sigma+ib$ with $|\sigma|\leq\epsilon_o/4$ ($\epsilon_o$ is given as in Proposition \ref{prop:coding} (5)), the function $L_s\hat{U}_{-s}$ is Lipschitz on $\Delta_0$ and
\[\|L_s\hat{U}_{-s}\|_b\leq \frac{C\|U\|_{\mathcal B_0}}{\max\{ 1,|b|\}}. \]
\end{lem}
\begin{proof}
We first prove that for $x\in\Lambda_{+}$ we have
\begin{equation*}
\label{laplaceU}
|\hat{U}_{-s}(x)|\leq\frac{Ce^{\epsilon_oR(x)/2}}{\max\{1,|b|\}}\|U\|_{\mathcal B_0}.
\end{equation*}
By definition, we have
\begin{equation*}
\hat{U}_{-s}(x)=\int_0^{R(x)}U(x,t)e^{ts}\mathrm{d} t.
\end{equation*}
The case when $|b|\leq 1$ is easy. When $|b|>1$, one uses integration by parts and the fact that $U$ is Lipschitz with respect to $t$ to obtain
\[ |\hat{U}_{-s}(x)|\leq (2|U|_\infty e^{\epsilon_oR(x)/4}+|U|_{\operatorname{Lip}}R(x)e^{\epsilon_o R(x)/4})/\max\{1,|b|\}.\]
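In more detail, writing $e^{ts}=\frac1s \frac{\mathrm{d}}{\mathrm{d} t}e^{ts}$ and integrating by parts (the function $t\mapsto U(x,t)$ is Lipschitz, hence absolutely continuous) gives
\[
\hat{U}_{-s}(x)=\frac1s\left(U(x,R(x))e^{R(x)s}-U(x,0)\right)-\frac1s\int_0^{R(x)}\partial_t U(x,t)\, e^{ts}\mathrm{d} t,
\]
and the displayed bound follows since $|e^{ts}|=e^{t\sigma}\leq e^{\epsilon_o R(x)/4}$, $|\partial_t U|\leq |U|_{\operatorname{Lip}}$ almost everywhere, and $|s|\geq |b|$.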
Then
\[|L_s\hat{U}_{-s}|\leq \frac{C\|U\|_{\mathcal B_0}}{\max\{1,|b|\}}L_\sigma(e^{\epsilon_oR/2}). \]
Observe that by \eqref{sum}
\begin{equation}
\label{boundedeigen}
L_\sigma(e^{\epsilon_oR/2})=\sum_{\gamma\in\mathcal H}|\gamma'(x)|^{\delta+\sigma}e^{\epsilon_o R(\gamma x)/2}\leq \sum_{\gamma\in\mathcal H}|\gamma'(x)|^{\delta-3\epsilon_o/4}<\infty.
\end{equation}
So $|L_s\hat{U}_{-s}|\leq \frac{C\|U\|_{\mathcal B_0}}{\max\{1,|b|\}} $.
We estimate the Lipschitz norm of $L_s\hat{U}_{-s}$. We have
\[L_s\hat{U}_{-s}(x)-L_s\hat{U}_{-s}(y)=\sum_{\gamma\in\mathcal H}\Big(|\gamma'(x)|^{\delta+s}(\hat{U}_{-s}(\gamma x)-\hat{U}_{-s}(\gamma y))+(|\gamma'(x)|^{\delta+s}-|\gamma'(y)|^{\delta+s})\hat{U}_{-s}(\gamma y)\Big). \]
The term $|\gamma'(x)|^{\delta+s}-|\gamma'(y)|^{\delta+s}$ can be estimated using Proposition~\ref{prop:coding} (4).
For the term $\hat{U}_{-s}(\gamma x)-\hat{U}_{-s}(\gamma y)$, assume without loss of generality that $R(\gamma x)\geq R(\gamma y)$; using Proposition~\ref{prop:coding} (4) again, we get
\begin{align*}
|\hat{U}_{-s}(\gamma x)-\hat{U}_{-s}(\gamma y)|&\leq |R(\gamma x)-R(\gamma y)|\,|U|_{\infty}e^{\sigma R(\gamma x)}+\int_0^{R(\gamma y)}|U(\gamma x,t)-U(\gamma y,t)|e^{t\sigma}\mathrm{d} t\\
&\leq (C_1 e^{\sigma R(\gamma x)}+R(\gamma y)e^{\sigma R(\gamma y)})\|U\|_{\mathcal B_0}d(x,y).
\end{align*}
Then we use \eqref{boundedeigen} to conclude that there exists some $C$ (independent of $U$) such that
\[|L_s\hat{U}_{-s}|_{\operatorname{Lip}}\leq C \|U\|_{\mathcal B_0}.\qedhere \]
\end{proof}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:skew}}]
Theorem~\ref{thm:skew} can now be proved along the same lines as~\cite[Theorem 2.7]{AGY} (see also~\cite[Section 8.2]{AGY}). In particular, in the proof of~\cite[Lemma 8.3]{AGY}, we use \eqref{semiflownu} to replace~\cite[Theorem 7.3]{AGY} and Proposition~\ref{prop:dis} (2) to relate the measures $\hat{\nu}^{R}$ and $\nu^R$.
\end{proof}
\section{Resonance free region}
\label{sec:res}
Recall that $\Gamma$ is a geometrically finite discrete subgroup in $G=\operatorname{SO}(d+1,1)^{\circ}$. We begin by defining the measures $m^{\operatorname{BR}}$, $m^{\operatorname{BR_*}}$ and $m^{\operatorname{Haar}}$. Recall the definition of the BMS measure on $\operatorname{T}^1(\mathbb{H}^{d+1})\cong \partial^2(\mathbb{H}^{d+1})\times \mathbb{R}$:
\begin{equation*}
\mathrm{d}\tilde{m}^{\operatorname{BMS}}(x,x_-,s)=e^{\delta \beta_x(o,x_*)} e^{\delta \beta_{x_-}(o,x_*)} \mathrm{d}\mu(x) \mathrm{d} \mu(x_-)\mathrm{d} s,
\end{equation*}
where $x_*$ is the base point of the unit tangent vector given by $(x,x_-,s)$. We define the measures $\tilde{m}^{\operatorname{BR}}$, $\tilde{m}^{\operatorname{BR_*}}$ and $\tilde{m}^{\operatorname{Haar}}$ on $\operatorname{T}^1(\mathbb{H}^{d+1})\cong \partial^2(\mathbb{H}^{d+1})\times \mathbb{R}$ in a similar way, as follows:
\begin{align*}
\mathrm{d}\tilde{m}^{\operatorname{BR}}(x,x_-,s)&=e^{d\beta_x(o,x_*)} e^{\delta \beta_{x_-}(o,x_*)} \mathrm{d} m_o(x) \mathrm{d} \mu(x_-)\mathrm{d} s;\\
\mathrm{d}\tilde{m}^{\operatorname{BR_*}}(x,x_-,s)&=e^{\delta \beta_x(o,x_*)} e^{\delta \beta_{x_-}(o,x_*)} \mathrm{d}\mu(x) \mathrm{d} m_o(x_-)\mathrm{d} s;\\
\mathrm{d}\tilde{m}^{\operatorname{Haar}}(x,x_-,s)&=e^{d \beta_x(o,x_*)} e^{d\beta_{x_-}(o,x_*)} \mathrm{d} m_o(x) \mathrm{d} m_o(x_-)\mathrm{d} s,
\end{align*}
where $m_o$ is the unique probability measure on $\partial(\mathbb{H}^{d+1})$ which is invariant under the stabilizer of $o$ in $G$.
These measures are all left $\Gamma$-invariant and induce measures on $\operatorname{T}^1(\Gamma\backslash \mathbb{H}^{d+1})$, which we denote by $m^{\operatorname{BR}}$, $m^{\operatorname{BR_*}}$ and $m^{\operatorname{Haar}}$, respectively. In contrast to the previous sections, here we do not normalize the BMS measure to a probability measure.
By \cite[Theorem 5.8]{OhWi}, Theorem \ref{main thm} implies exponential decay of matrix coefficients.
\begin{thm}
\label{thm:matrix}
There exists $\eta>0$ such that for any compactly supported functions $\phi, \psi\in C^1(\operatorname{T}^1(M))$, we have
\begin{equation*}
e^{(d-\delta)t}\int_{\operatorname{T}^1(M)} \phi\cdot\psi\circ\mathcal G_t\ \mathrm{d} m^{\operatorname{Haar}}=\frac{m^{\operatorname{BR_*}} (\phi) m^{\operatorname{BR}} (\psi)}{m^{\operatorname{BMS}}(\operatorname{T}^1(M))}+O(\lVert \phi \rVert_{C^1} \lVert \psi\rVert_{C^1}e^{-\eta t})
\end{equation*}
for all $t>0$, where the implied constant depends on the supports of $\phi,\psi$.
\end{thm}
For $x,y\in\H^{d+1}$ and $T>0$, let
\[N(T,x,y)=\#\{\gamma\in\Gamma\,|\,d(x,\gamma y)\leq T \}, \]
where $d$ is the hyperbolic distance on $\H^{d+1}$.
In \cite{MoOh}, it was shown that Theorem \ref{thm:matrix} implies the following:
\begin{cor}\label{cor:counting}
There exists $\eta>0$ such that for any $x,y\in\H^{d+1}$ and $T>0$, we have
\[N(T,x,y)=c_{x,y}e^{\delta T}+O(e^{(\delta-\eta)T}), \]
where $c_{x,y}>0$ is a constant depending on $x,y$.
\end{cor}
\begin{proof}[\textbf{Proof of Corollary~\ref{cor:resonance}}]
For $x,y\in\H^{d+1}$ and $s\in\mathbb C$ with $\Re s>\delta$, let $P_s(x,y)$ be the Poincar\'e series defined by
\[P_s(x,y)=\sum_{\gamma\in\Gamma} e^{-sd(x,\gamma y)}. \]
We first prove that $P_s(x,y)$ is meromorphic on $\Re s>\delta-\eta$ with a unique pole at $s=\delta$.
Since $e^{-s\,d(x,\gamma y)}=s\int_{d(x,\gamma y)}^{\infty}e^{-sT}\mathrm{d} T$, summing over $\gamma\in\Gamma$ and applying Fubini's theorem yields
\[P_s(x,y)=\int_{0}^\infty s\,e^{-sT}N(T,x,y)\mathrm{d} T=\int_{0}^\infty s\,e^{-(s-\delta)T}c_{x,y}\mathrm{d} T+\int_{0}^\infty s\,e^{-sT}(N(T,x,y)-c_{x,y}e^{\delta T})\mathrm{d} T. \]
The first part is a meromorphic function of $s$ with a unique pole at $s=\delta$. For the second part, it follows from Corollary~\ref{cor:counting} that the integral converges absolutely for $\Re s>\delta-\eta$, hence it is analytic on $\Re s>\delta-\eta$. Then we use~\cite[Theorem 7.3]{GM} to deduce that the resolvent $R_M(s)$ is also analytic on $\{s\in \mathbb{C}:\, \delta-\eta<\Re s<\delta\}$.
\end{proof}
\section{#1}}
\newcommand\setulen[2]{\setlength\unitlength{.#1#2pt}}
\def\SK {S_\HK}
\def\slz {\ensuremath{\mathrm{SL}(2,\zet)}}
\def\sse {\scriptsize }
\newcommand\Surf[2] {\Sigma_{#1,#2}}
\def\tauHH {\tau^{}_{\!H,H}}
\def\tauHHv {\tau^{}_{\!H,H^*_{}}}
\def\tauHvH {\tau^{}_{\!H^*_{}\!,H}}
\def\Times {\,{\times}\,}
\def\TK {T_\HK}
\def\To {\,{\to}\,}
\def\twodim {two-di\-men\-si\-o\-nal}
\def\uvi {{t}}
\def\Vect {\ensuremath{\mathcal V}\mbox{\sl ect}}
\def\Vectk {\ensuremath{\mathcal V\mbox{\sl ect}_\ko}}
\def\Vee {{}^{\vee\!}}
\def\Xs {X^{*_{}}_{\phantom:}}
\def\zet {{\ensuremath{\mathbb Z}}}
\def\ak {\ensuremath{a_k}}
\def\bk {\ensuremath{b_k}}
\def\dk {\ensuremath{d_k}}
\def\ek {\ensuremath{e_k}}
\def\sk {\ensuremath{S_k}}
\def\tjk {\ensuremath{t_{j,k}}}
\def\wi {\ensuremath{\omega_i}}
\def\Ri {\ensuremath{\R_i}}
\newcommand\includepichtft[1] {{\begin{picture}(0,0)(0,0)
\scalebox{.304}{\includegraphics{imgs/pic_htft_#1.eps}}\end{picture}}}
\newcommand\Includepichtft[1] {{\begin{picture}(0,0)(0,0)
\scalebox{.38}{\includegraphics{imgs/pic_htft_#1.eps}}\end{picture}}}
\newcommand\INcludepichtft[2] {{\begin{picture}(0,0)(0,0)
\scalebox{.#2}{\includegraphics{imgs/pic_htft_#1.eps}}\end{picture}}}
\newcommand\includepichopf[1] {{\begin{picture}(0,0)(0,0)
\scalebox{.304}{\includegraphics{imgs/pic_hopf_#1.eps}}\end{picture}}}
\newcommand\Includepichopf[1] {{\begin{picture}(0,0)(0,0)
\scalebox{.38}{\includegraphics{imgs/pic_hopf_#1.eps}}\end{picture}}}
\newcommand\eqpic[4]{\begin{eqnarray}
\begin{picture}(#2,#3){}\end{picture}\nonumber\\
\raisebox{-#3pt}{ \begin{picture}(#2,#3) #4 \end{picture} }
\label{#1} \\~\nonumber \end{eqnarray} }
\newcommand\Eqpic[4]{\begin{eqnarray}
\begin{picture}(#2,#3){}\end{picture}\nonumber\\
\raisebox{-#3pt}{ \begin{picture}(#2,#3) #4 \end{picture} }
\nonumber \\[-3pt]~\label{#1} \end{eqnarray} }
\documentclass[12pt]{article}
\usepackage{latexsym, amsmath, amsthm, amsfonts, enumerate, amssymb, bbm, xspace, xypic }
\usepackage{fancybox}
\usepackage[all]{xy}
\usepackage[mathscr]{eucal}
\usepackage{graphicx} \usepackage{rotating}
\usepackage{epstopdf,hyperref}
\setlength\textwidth{17cm} \hoffset -20mm
\setlength\textheight{23.3cm} \topmargin= -21mm
\newtheorem{thm}{Theorem}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{conv}[thm]{Convention}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\theoremstyle{definition}
\newtheorem{rem}[thm]{Remark}
\newtheorem{defi}[thm]{Definition}
\begin{document}
\def\cir{\,{\circ}\,}
\numberwithin{equation}{section}
\numberwithin{thm}{section}
\begin{flushright}
{\sf ZMP-HH/12-13}\\
{\sf Hamburger$\;$Beitr\"age$\;$zur$\;$Mathematik$\;$Nr.$\;$447}\\[2mm]
July 2012
\end{flushright}
\vskip 3.5em
\begin{center}
\begin{tabular}c \Large\bf Higher genus mapping class group invariants \\[2mm]
\Large\bf from factorizable Hopf algebras
\end{tabular}
\end{center}\vskip 2.1em
\begin{center}
~J\"urgen Fuchs\,$^{\,a}$,~
~Christoph Schweigert\,$^{\,b}$,~
~Carl Stigner\,$^{\,a}$
\end{center}
\vskip 9mm
\begin{center}\it$^a$
Teoretisk fysik, \ Karlstads Universitet\\
Universitetsgatan 21, \ S\,--\,651\,88\, Karlstad
\end{center}
\begin{center}\it$^b$
Organisationseinheit Mathematik, \ Universit\"at Hamburg\\
Bereich Algebra und Zahlentheorie\\
Bundesstra\ss e 55, \ D\,--\,20\,146\, Hamburg
\end{center}
\vskip 5.3em
\noindent{\sc Abstract}
\\[3pt]
Lyubashenko's construction associates to a factorizable ribbon category
representations of the mapping class groups \Mapgn\ of Rie\-mann surfaces of
any genus $g$ with any number $n$ of holes. We consider this construction as
applied to the category of bimodules over a finite-dimensional factorizable
ribbon Hopf algebra $H$. For any such Hopf algebra we find an invariant of
\Mapgn\ for all values of $g$ and $n$. More generally, we obtain such invariants
for any pair $(H,\omega)$, where $\omega$ is a ribbon automorphism of $H$.
\\
Our results are motivated by the quest to understand higher genus
correlation functions of bulk fields in two-dimensional conformal field
theories with chiral algebras that are not necessarily semisimple, so-called
logarithmic conformal field theories.
\newpage
\section{Introduction}
The mapping class groups of Riemann surfaces with holes form an interesting
system with deep properties and rich relations to geometry and arithmetic.
It is therefore remarkable that a relatively simple algebraic structure -- a
finite-dimensional factorizable ribbon Hopf algebra $H$ -- gives rise
\cite{lyub6} to a family of (projective) representations of all these mapping
class groups. The construction of mapping class group representations in
\cite{lyub6} does not require $H$ to be semisimple.
For semisimple $H$, the system of representations obtained in this way obeys even tighter
constraints: it is part of a so-called modular functor, or a three-dimensional
topological field theory. In the present article we do \emph{not} require
semisimplicity.
Another algebraic structure leading to a system of representations of mapping
class groups is that of a vertex algebra, as it arises in chiral conformal
field theories. More specifically, in this case the mapping class group
representations are derived from the monodromies of the conformal blocks
\cite{FRbe} associated with the vertex algebra. In fortunate situations
the representation category of a vertex algebra is equivalent, at least as an
abelian category, to the one of a factorizable ribbon Hopf algebra.
Surprisingly, such a ``Kazhdan-Lusztig correspondence'' seems to work
particularly well for some classes of vertex algebras for which the category in
question is \emph{not} semisimple \cite{fgst2,naTs2}. The chiral conformal field
theories associated with these cases are known as {\em logarithmic} theories.
In a full, local conformal field theory, one aims in particular at constructing
correlators of bulk fields as specific \emph{bilinear} combinations of conformal
blocks. Translating the situation to the Hopf algebra setting, this amounts to
considering the mapping class group representations coming from the
factorizable ribbon Hopf algebra $H\oti H\op_{}$, the enveloping algebra of $H$.
(By factorizability, the category of $H\oti H\op_{}$-modules is equivalent as a
ribbon category to the Drinfeld center of \HMod.)
In \cite{fuSs3} we have constructed, for any ribbon automorphism $\omega$ of
$H$, an invariant of the mapping class group of the torus with one hole. The
construction in \cite{fuSs3} is based on a family of symmetric Frobenius
algebras $\Fomega$ in the braided monoidal category \HBimod,
which is braided equivalent to $(H{\otimes} H\op_{})$\Mod.
In \cite{fuSs4} we have in addition
derived, for the case that the automorphism $\omega$ is the identity morphism,
integrality properties of the partition function, i.e.\ of the correlator for
the torus without holes, relating it to the Cartan matrix of the category \HMod.
This shows in particular that the invariants so obtained
are non-zero.
In the present paper we solve the general problem of obtaining mapping class
group invariants at \emph{arbitrary} genus. Given a factorizable Hopf algebra
$H$ and a ribbon automorphism $\omega$ of $H$, we identify,
for each non-negative value of $g$ and of $n$, a natural invariant
$\Corw gn$ under the action of the mapping class group \Mapgn\ of Riemann
surfaces of genus $g$ with $n$ holes on a space of morphisms
that is obtained by the construction of Lyubashenko \cite{lyub6}
when taking all $n$ insertions to be given by the $H$-bimodule $\Fomega$.
Rephrased in conformal field theory terms, we identify natural candidates for
bulk correlation functions in a full, local conformal field theory and prove
their modular invariance, for any number of insertions and at any genus.
This paper is organized as follows. In Section \ref{sec:basic} we introduce
pertinent concepts and notations that are needed to describe the morphisms
$\Corw gn$ and to state our main result. This assertion, Theorem \ref{thm:main},
is formulated in Section \ref{sec:thm}. To establish it requires quite a few
detailed calculations which, for the case $\omega \eq \id_H$, take up Sections
\ref{sec:lemmata} and \ref{sec:proofmain}. Section \ref{sec:lemmata} is
essentially a collection of lemmas that are instrumental in Section
\ref{sec:proofmain}; their proofs can be safely skipped by readers primarily
interested in the results. Finally, in Section \ref{sec:omega} we complete
our main result by extending the analysis of Sections \ref{sec:lemmata} and
\ref{sec:proofmain}, and thus the proof of Theorem \ref{thm:main}, to the
case of non-trivial ribbon automorphisms.
We expect that our considerations generalize from the categories \HBimod\ to a
larger class of braided finite tensor categories \C. In particular, the analogue
of the $H$-bimodule $\Fomega$ should be the coend of a natural functor from
$\C\op \Times \C$ to the enveloping category $\C \,{\boxtimes}\, \C^{\rm rev}$.
Accordingly we formulate various statements in such a more general context, e.g.\
we give the invariants $\Corw gn$ first as morphisms in \HBimod\ in entirely
categorical terms (see formula \erf{Sk_morph}) before we present their concrete
expressions as linear maps (see \erf{CorrgnH}). However, a generalization of
our main result to such a context is still elusive. Concretely, the explicit
expressions for the coalgebra structure of the coend in \erf{pic-Hb-Frobalgebra}
involve the integral of the Hopf algebra $H$ over the field \ko\
defining the category. What we
are missing is a corresponding structure of the \emph{category} \HMod\ that
comes from the integral of $H$ and endows the coend with a coalgebra structure.
\section{Background}\label{sec:basic}
In this section we collect some basic definitions and notation for a class of
Hopf algebras and for representations of mapping class groups associated with
these Hopf algebras, which will be needed for formulating our main result,
Theorem \ref{thm:main}.
\subsection{Factorizable Hopf algebras}
Throughout this paper, the symbol \ko\ stands for an algebraically closed field
of characteristic zero, while $H$ is a \findim\ ribbon Hopf algebra over \ko,
which in addition is factorizable. In the sequel, for brevity we will refer
to $H$ simply as a \emph{factorizable ribbon Hopf algebra}, suppressing the
finite-dimensionality over \ko.
All modules and bimodules in this paper will be finite-dimensional as \ko-vector
spaces as well. Similarly, all categories to be considered are assumed to be
abelian and \ko-linear, with all morphism sets being \findim\ \ko-vector spaces.
We denote by $m$, $\eta$, $\Delta$, $\eps$ and $\apo$ the product,
unit, coproduct, counit and antipode of the Hopf algebra $H$.
Let us recall what it means for a Hopf algebra to be factorizable ribbon:
\begin{defi} ~\nxl1
(a)\, A Hopf algebra $H \,{\equiv}\, (H,m,\eta,\Delta,\eps,\apo)$ is called
\emph{quasitriangular} iff it comes with an invertible element $R$ of $H\otik H$
such that the coproduct and opposite coproduct are intertwined by $R$, i.e.\
$R\, \Delta\, R^{-1} \eq \tauHH \cir \Delta\,{\equiv}\,\Delta^{\!\rm op}_{}$, and
\be
(\Delta \oti \id_H) \circ R = R_{13}\cdot R_{23} \qquand
(\id_H \oti \Delta) \circ R = R_{13}\cdot R_{12} \,.
\labl{deqf-qt}
The element $R$ is called the \emph{R-matrix} of $H$.
\\[2pt]
(b)\, For $(H,R)$ a quasitriangular Hopf algebra, the invertible element
$Q \,{:=}\, R_{21}\,{\cdot}\, R$ of $H\otik H$
is called the \emph{monodromy matrix} of $H$.
\\[2pt]
(c)\, A quasitriangular Hopf algebra $(H,R)$ is called a \emph{ribbon} Hopf
algebra iff it comes with a central invertible element $v\iN H$ obeying
\be
\apo \circ v = v \,, \qquad \eps \circ v = 1 \qquand
\Delta \circ v = (v\oti v) \cdot Q^{-1} .
\labl{def-ribbon}
$v$ is called the \emph{ribbon element} of $H$.
\\[2pt]
(d)\, A quasitriangular Hopf algebra $(H,R)$ is called \emph{factorizable}
iff the monodromy matrix $Q$ can be expressed as $\sum_\ell h_\ell \oti k_\ell$,
where $\{h_\ell\}$ and $\{k_\ell\}$ are two vector space bases of $H$.
\end{defi} \smallskip
Here for \ko-vector spaces $V$ and $W$ the linear map $\tau_{V,W}\colon V\otik W
\,{\stackrel\simeq\to}\, W\otik V$ is the flip map that exchanges the factors
in a tensor product. Also recall that a \findim\ Hopf algebra $H$ has a
left integral $\Lambda\iN H \,{\equiv}\, \Homk(\ko,H)$
and a right cointegral $\lambda\iN\Hs \,{\equiv}\, \Homk(H,\ko)$ (as well as
a right integral and a left cointegral), which are unique up to scalars.
Moreover, if $H$ is factorizable, then it is unimodular, i.e.\ the
integral $\Lambda\iN H$ is two-sided.
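As a simple illustration of integrals and cointegrals (though not of
factorizability), consider the group algebra $H=\ko[G]$ of a finite group $G$,
with coproduct $\Delta(g)=g\otik g$: here
\be
\Lambda = \sum_{g\in G} g \qquand \lambda=\delta_e \,,
\ee
with $\delta_e$ the functional extracting the coefficient of the unit element,
are a two-sided integral and cointegral, respectively, normalized such that
$\lambda\cir\Lambda=1$.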
With applications in logarithmic conformal field theory in mind \cite{fgst}, it
should be appreciated that factorizability can be formulated for Hopf algebras
which do not have an R-matrix, but still have a monodromy matrix.
Factorizability of $H$ is equivalent to invertibility of the \emph{Drinfeld map}
$f_Q \iN \Homk(\Hs,H)$, which is given by
$f_Q \,{:=}\, (d_H\oti \id_H) \cir (\idHs\oti Q)$, with $d$ the
evaluation morphism in \Vectk. The Drinfeld map, and likewise the map
$f_{Q^{-1}}$ that is obtained when replacing the monodromy matrix by its
inverse, is not just a linear map from \Hs\ to $H$, but it also intertwines
the left-coadjoint action of $H$ on \Hs\ and the left-adjoint action of $H$ on
itself. If $f_Q$ is invertible, then $f_Q(\lambda)$ is a non-zero multiple of
$\Lambda$. The normalizations of the integral and cointegral can then be chosen
in such a way that $\lambda\cir\Lambda \eq 1$ and $f_Q(\lambda) \eq \Lambda$
(this determines $\lambda$ and $\Lambda$ uniquely up to a common sign factor).
Doing so one arrives at the following identities \cite[(5.18)]{fuSs3},
which we present graphically:
\Eqpic{fQS_Psi} {430} {41} { \put(-4,-4){
\put(0,0) { \Includepichtft{97e}
\put(-2.5,100) {\sse$ H $}
\put(14.5,3.4) {\sse$ Q $}
\put(43.6,35) {\sse$ \apo $}
\put(73.1,75.5) {\sse$ \lambda $}
\put(69.4,59.9) {\sse$ m $}
\put(128.3,3.2) {\sse$ Q^{-1} $}
\put(130,100) {\sse$ H $}
}
\put(167,46) {$ = $}
\put(193,13) { \Includepichtft{97g}
\put(-2.2,79) {\sse$ H $}
\put(7.7,-1) {\sse$ \Lambda $}
\put(14.4,40.2) {\sse$ \Delta $}
\put(30.8,79) {\sse$ H $}
}
\put(256,46) {$ = $}
\put(294,0) { \Includepichtft{97fA}
\put(-3.1,100) {\sse$ H $}
\put(31.1,3.2) {\sse$ Q^{-1} $}
\put(43.6,35) {\sse$ \apo $}
\put(73.1,75.5) {\sse$ \lambda $}
\put(69.4,59.9) {\sse$ m $}
\put(112.5,3.4) {\sse$ Q $}
\put(128.8,100) {\sse$ H $}
} } }
Such diagrams are to be read from bottom to top.
Below we will often suppress the labels indicating the product and
coproduct of $H$, as well as those for the antipode $\apo$ (which
is drawn as an empty circle) and its inverse $\apoi$ (full circle).
The following facts about the category \HMod\ of finite-dimensional left modules
over a factorizable ribbon
Hopf algebra $H$ are well-known: \HMod\ is a braided
rigid monoidal category, and even a factorizable ribbon category. Moreover, it
is a finite tensor category in the sense of \cite{etos}, i.e.\ it has finitely
many isomorphism classes of simple objects, each of them has a projective
cover, and every object has finite length.
We endow the category \HBimod\ of finite-dimensional $H$-bimodules with the
structure of a finite factorizable ribbon category in an analogous manner:
We use the pull-back of left and right actions along the coproduct to obtain
the structure of a monoidal category: for $H$-bimodules $(X,\rho_X,\ohr_X)$
and $(Y,\rho_Y,\ohr_Y)$ we define the left and right actions of $H$ on the
tensor product vector space $X \otik Y$ by
\be
\bearl
\rho_{X\otimes Y}^{} := (\rho_X \oti \rho_Y) \circ (\id_H \oti \tau_{H,X} \oti
\id_Y) \circ (\Delta \oti \id_X \oti \id_Y) \qquand
\Nxl3
\ohr_{X\otimes Y}^{} := (\ohr_X \oti \ohr_Y) \circ (\id_X \oti \tau_{Y,H} \oti
\id_H) \circ (\id_X \oti \id_Y \oti \Delta) \,.
\eear
\labl{def-tp}
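In Sweedler notation, with $\Delta(h) \eq h_{(1)} \oti h_{(2)}$ and with dots
denoting the left and right actions, the prescription \erf{def-tp} is just the
familiar pull-back along the coproduct:
\be
h \,.\, (x \oti y) = (h_{(1)} .\, x) \oti (h_{(2)} .\, y)
\qquand
(x \oti y) \,.\, h = (x \,. h_{(1)}) \oti (y \,. h_{(2)}) \,.
\ee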
The monoidal unit for this tensor product is the one-dimensional vector
space \ko\ with left and right $H$-action given by the counit,
$\one_{H\text{-Bimod}} \eq (\ko,\eps,\eps)$. To endow the monoidal category
\HBimod\ with a braiding, we use the action of the R-matrix of $H$ from the
right and the action of its inverse from the left. When regarding the resulting
braiding morphism $c_{X,Y}^{}$ in \HBimod\ as a linear map, i.e.\ as a morphism
in the category \Vect\ of \ko-vector spaces, we represent it pictorially as
\eqpic{bibraid} {130} {46} {
\put(0,52) {$ c_{X,Y}^{} ~= $}
\put(59,0) {\Includepichtft{90}
\put(-17,9) {\sse$ R^{-1} $}
\put(15.5,93) {\sse$ \ohr_Y^{} $}
\put(24,-9.2) {\sse$ X $}
\put(25.8,108) {\sse$ Y $}
\put(29.8,25) {\sse$ \rho_X^{} $}
\put(42,56) {\sse$ \tau_{X,Y}^{} $}
\put(42.3,-9.2) {\sse$ Y $}
\put(42.8,108) {\sse$ X $}
\put(47.2,38.8) {\sse$ \rho_Y^{} $}
\put(50.2,94) {\sse$ \ohr_X^{} $}
\put(75,68) {\sse$ R $}
} }
Here the quarter disks refer to left or right actions of the Hopf
algebra $H$, while the crossing is just the flip map of vector spaces.
(As the braiding on the category of vector spaces is symmetric, the use of
over- and under-crossings in these pictures does not contain any mathematical
information, but is merely for graphical convenience.)
Using the equivalence of $H$-bimodules with $H\Otik H$-modules, it is not hard
to verify that these prescriptions endow
\HBimod\ with the structure of a braided monoidal category.
It is this braided monoidal category that we study in the present paper. We
endow it with further structure: we introduce right and left duals by associating
to a bimodule $X \eq (X,\rho,\ohr)\iN\HBimod$ the bimodules
\be
X^\vee := (X^*,\rhov,\ohrv) \qquand {}^{\vee}\!X:= (X^*,\rhoV,\ohrV\,)
\labl{def-duals}
with left and right $H$-actions defined by
\Eqpic{Hbim_dualactions} {420} {49} { \put(-21,19){
\put(0,35) {$\rhov ~:= $}
\put(34,0) {\Includepichtft{96c}
\put(-2.5,-9.2) {\sse$ H $}
\put(10,-9.2) {\sse$ X^*_{} $}
\put(31.3,49.8) {\sse$ \rho $}
\put(42.2,94) {\sse$ X^*_{\phantom|} $}
}
\put(126,35) {$\ohrv ~:= $}
\put(175,0) {\Includepichtft{96d}
\put(-5,-9.2) {\sse$ X^*_{} $}
\put(11,60.7) {\sse$ \ohr $}
\put(29.8,94) {\sse$ X^*_{\phantom|} $}
\put(41,-9.2) {\sse$ H $}
}
\put(259,35) {$\rhoV ~:= $}
\put(298,0) {\Includepichtft{96e}
\put(-2.4,-9.2) {\sse$ H $}
\put(13.6,94) {\sse$ X^*_{\phantom|} $}
\put(35,54.7) {\sse$ \rho $}
\put(44,-9.2) {\sse$ X^*_{} $}
}
\put(390,35) {$\ohrV ~:= $}
\put(429,0) {\Includepichtft{96f}
\put(11,54.1) {\sse$ \ohr $}
\put(27,-9.2) {\sse$ X^*_{\phantom|} $}
\put(-2.9,94) {\sse$ X^*_{\phantom|} $}
\put(45,-9.2) {\sse$ H $}
} } }
The left and right dualities are compatible: the category \HBimod\ carries
a natural structure of a sovereign tensor category. To see this, denote
by $t \,{:=}\, uv^{-1}$ the product of the Drinfeld element
\be
u := m \circ (\apo\oti\id_H) \circ \tauHH \circ R ~\in H
\labl{u-R}
with the inverse of the ribbon element $v$ of $H$; $t$ is a group-like element
of $H$. Then the family of endomorphisms
\eqpic{pivX} {170} {38} {
\put(0,39) {$ \pi_X ~:= $}
\put(50,0) {\Includepichtft{97a}
\put(-4.5,-8) {\sse$ \Xs $}
\put(-3,88) {\sse$ \Xs $}
\put(28,28) {\sse$ \uvi $}
\put(58,20) {\sse$ \uvi $}
}
\put(127,39) {$\in \Endk(X^*_{\phantom|}) $}
}
is a natural monoidal isomorphism between the left and right duality functors
and thus endows \HBimod\ with the structure of a sovereign tensor category.
Being sovereign, \HBimod\ is also endowed with a balancing and thus has the
structure of a ribbon category. From here on, the symbol \HBimod\ stands for this
sovereign ribbon category. Explicitly, the twist endomorphism $\theta_X$ of an
$H$-bimodule $(X,\rho,\ohr)$ is given by acting with the ribbon element $v$ from
the left and with its inverse $v^{-1}$ from the right \cite[Lemma\,4.8]{fuSs3},
\be
\theta_X = \rho \circ (\id_H \oti \ohr) \circ (v \oti \id_X \oti v^{-1}) \,.
\labl{deftwist}
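At the level of elements, the formula \erf{deftwist} just says that the twist
acts by conjugation with the ribbon element:
\be
\theta_X^{}(x) = v \,.\, x \,.\, v^{-1} \qquad {\rm for}~ x \iN X \,.
\ee
Naturality of $\theta$ is then immediate, since any morphism of bimodules
commutes with both actions.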
\begin{rem}
The category \HBimod\ with this structure of sovereign ribbon category is
braided equivalent to $(H{\otimes_\ko} H\op_{})$\Mod\ and thus to
$\HMod \,{\boxtimes}\, \HMod^{\rm rev}$. This is a very
natural category indeed: it can be regarded as a categorification of the notion
of enveloping algebra. Factorizability of \HBimod\ amounts to the statement
that \HBimod\ is braided equivalent to the Drinfeld center of \HMod.
\end{rem}
\subsection{The handle Hopf algebra and half-monodromies}\label{subsec:hHa}
We now recall \cite{maji25,lyub8} that any finite sovereign braided tensor
category \C\ contains a canonical Hopf algebra object. It can be constructed
as the coend
\be
\HK = \Coend FX = \coend X
\labl{defK}
of the functor $F$ that maps a pair $(X,Y)$ of objects of \C\ to the object
$X^\vee \oti Y \iN \C$. As a coend, $K$ comes with a dinatural family
$(\iHK_X)_{X\in\C}$ of morphisms $\iHK_X\iN\Hom_\C(X^\vee{\otimes}\, X, \HK)$.
The structure morphisms of the Hopf algebra object \HK\ are obtained with the
help of the family $\iHK$ and the braiding and duality of \C. They can be
found e.g.\ in \cite{lyub6,vire4}; we refrain from reproducing them here.
We refer to $K$ as the \emph{handle Hopf algebra} for the category \C.
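As a simple sanity check on the notion of coend (not needed in the sequel),
consider $\C \eq \Vectk$: dinaturality with respect to the linear maps
$\ko \,{\to}\, X$ determines every cowedge from its value on the object \ko,
so that the evaluation morphisms form a universal dinatural family and the
handle Hopf algebra is trivial,
\be
\HK \,\cong\, \ko \qquand \iHK_X = d_X^{} \,.
\ee
For the categories of interest here, \HMod\ and \HBimod, the coend is instead
the full dual space, as recalled below.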
The Hopf algebra \HK\ is also endowed with a Hopf pairing. We call the finite
sovereign braided tensor category \C\ a \emph{\fftc} iff this Hopf pairing is
non-degenerate. This terminology is motivated by the fact that in the case of
$\C \eq \HMod$, non-degeneracy of the Hopf pairing is equivalent to invertibility
of the Drinfeld map, i.e.\ to factorizability of $H$ \cite[Eq.\,(5.14)]{fuSs3}.
The Hopf algebra \HK\ has (two-sided) integrals and cointegrals if and only
if \C\ is factorizable in this sense \cite[Prop.\,5.2.9]{KEly}.
Let now $H$ be a factorizable ribbon Hopf algebra and $\C \eq \HMod$. In this case,
which was already considered in \cite{lyub8}, the coend $\HK\eq H^*\coa$ is the
dual space $\Hs$ endowed with the coadjoint action of $H$. Similarly, if \C\ is
the braided tensor category \HBimod, the bimodule $\HK \,{=:}\, \Haa$
can be described as follows: the underlying vector space is the tensor
product $\Hs\otik\Hs$, and introducing the morphisms
\eqpic{def_lads} {370} {49} {
\put(0,52) {$ \rho\coa ~:= $}
\put(42,0) { \includepichopf{48}
\put(6.5,-9.2){\sse$ H $}
\put(23.4,-9.2) {\sse$ \Hss $}
\put(32.3,65.6) {\sse$ \apo $}
\put(64.5,114) {\sse$ \Hss $}
}
\put(167,52) {and}
\put(228,52) {$ \ohr\coar ~:= $}
\put(278,0) { \includepichtft{48c}
\put(-4,-9.2) {\sse$ \Hss $}
\put(8.8,60.5) {\sse$ \apo $}
\put(37,114) {\sse$ \Hss $}
\put(53.4,-9.2) {\sse$ H $}
\put(61.4,10.5) {\sse$ \apoi $}
} }
the bimodule structure is given by
\be
\Haa = (\Hs \Otik\, \Hs,\rho\coa\oti\idHs,\idHs\oti\rho\coar) \,.
\labl{defHaa}
(For the dinatural family of the coend \Haa\ see \cite[Eq.\,(A.29)]{fuSs3}.)
\medskip
Below we will recall that representations of mapping class groups can be
constructed with the help of morphisms involving the Hopf algebra \HK\ in a
\fftc\ \C. Here we provide the main building blocks of those morphisms. It
is natural to use the universal property of \HK\ as a coend
to specify these building blocks in terms of dinatural families.
\\[-2.36em]
\def\leftmargini{1.57em}~\\[-1.45em]\begin{enumerate}\addtolength{\itemsep}{-3pt}
\item
Denote by $(\theta_X)_{X\in\C}$ the twist on the tensor category
$\C$. Then we define an endomorphism $\TK$ of $\HK$ in terms of dinatural
families by
\eqpic{p9} {92}{24} {
\put(0,0) {\Includepichtft{26b}
\put(-11.1,39.7){\small$\TK$}
\put(-5,-9.2) {\sse$ X^{\!\vee} $}
\put(17.4,25.7) {\sse$ \iHK_X $}
\put(15.3,-9.2) {\sse$ X $}
\put(6.8,63.3) {\sse$ \HK $}
}
\put(47,28) {$=$}
\put(83,0) {\Includepichtft{26a}
\put(-11.9,14) {\sse$ \theta_{\!X^{\!\vee}_{}}^{}$}
\put(-3,-9.2) {\sse$ X^{\!\vee} $}
\put(8.4,63.3) {\sse$ \HK $}
\put(18.9,37.7) {\sse$ \iHK_X $}
\put(17.4,-9.2) {\sse$ X $}
}
\put(133,46) {\catpic}
}
Here we indicate explicitly in the figure that the diagram has to be read in
the ribbon category \C\ (rather than, as in all pictures displayed so far, in
\Vectk), so that in particular over- and under-crossings must be carefully distinguished.
\item
Similarly, the monodromies
$c_{Y^\vee_{\phantom,}\!,X} \cir c_{X,Y^\vee_{\phantom,}}$
of \C\ allow us to deduce an endomorphism \QQ\ of $K\oti K$ from the equality
\eqpic{QHH} {135} {45} {
\put(0,0) {\Includepichtft{103e}
\put(-4.2,60.1){\small$ \QQ $}
\put(-9.8,21) {\sse$ \iHK_X $}
\put(-4,-9.2) {\sse$ X^{\!\vee} $}
\put(5.3,109) {\sse$ \HK $}
\put(7,-9.2) {\sse$ X $}
\put(26.5,-9.2){\sse$ Y^{\!\vee} $}
\put(28.8,109) {\sse$ \HK $}
\put(38,-9.2) {\sse$ Y $}
\put(42.4,21) {\sse$ \iHK_Y $}
}
\put(62,50) {$ = $}
\put(97,0) {\Includepichtft{103f}
\put(-6,-9.2) {\sse$ X^{\!\vee} $}
\put(-5.8,89) {\sse$ \iHK_X $}
\put(8,-9.2) {\sse$ X $}
\put(6.1,109) {\sse$ \HK $}
\put(22.8,27.3){\sse$ c $}
\put(22.8,58.4){\sse$ c $}
\put(32,-9.2) {\sse$ Y^{\!\vee} $}
\put(36,109) {\sse$ \HK $}
\put(46.2,89) {\sse$ \iHK_Y $}
\put(46,-9.2) {\sse$ Y $}
}
\put(181,91) {\catpic}
}
of dinatural transformations.
\item
We use the endomorphism \QQ\ to define an endomorphism $S_\HK\iN\EndC(K)$ by
\be
\SK := (\eps_\HK \oti \id_\HK) \circ \QQ \circ (\id_\HK \oti \Lambda_\HK) \,,
\labl{S-HK}
where $\eps_\HK$ and $\Lambda_\HK$ are the counit and the two-sided integral of
the Hopf algebra \HK, respectively.
\item
For any object $Y\iN\C$ we use the monodromies $c_{Y,-} \cir c_{-,Y}$ to
define an endomorphism $\QB_Y\iN\EndC(\HK\oti Y)$:
\eqpic{QHX} {120} {44} { \put(0,1){
\put(0,0) {\begin{picture}(0,0)(0,0)
\scalebox{.38}{\includegraphics{imgs/pic_htft_103a.eps}}\end{picture}
\put(-6.6,60.5){$ \QB_Y $}
\put(-4,-9.2) {\sse$ X^{\!\vee} $}
\put(6.8,109) {\sse$ \HK $}
\put(7,-9.2) {\sse$ X $}
\put(31.3,-9.2){\sse$ Y $}
\put(30.4,109) {\sse$ Y $}
\put(14.3,21.5){\sse$ \iHK_X $}
}
\put(60,50) {$ := $}
\put(94,0) {\begin{picture}(0,0)(0,0)
\scalebox{.38}{\includegraphics{imgs/pic_htft_103b.eps}}\end{picture}
\put(-6,-9.2) {\sse$ X^{\!\vee} $}
\put(7.4,109) {\sse$ \HK $}
\put(8,-9.2) {\sse$ X $}
\put(22.2,27.3){\sse$ c $}
\put(22.2,58.4){\sse$ c $}
\put(34,-9.2) {\sse$ Y $}
\put(34.3,109) {\sse$ Y $}
\put(17.8,90) {\sse$ \iHK_X $}
} }
\put(165,93) {\catpic}
}
\item
It can now be checked that for any object $Y\iN\C$ the morphism
$\rho_Y^\HK \iN \HomC(\HK\oti Y,Y)$ defined by
\be
\rho_Y^K := (\eps_\HK\oti \id_Y)\cir \QB_Y
\labl{def-rho}
endows $Y$ with the structure of a left \HK-module.
\end{enumerate}
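For orientation we mention (without using it below) how the action
\erf{def-rho} looks in the case $\C \eq \HMod$, for which $\HK \eq H^*\coa$:
up to the conventions chosen for the monodromy, it is implemented by the
Drinfeld map,
\be
\rho_Y^K(\phi \oti y) = f_Q^{}(\phi) \,.\, y
\qquad {\rm for}~ \phi \iN \Hs ~{\rm and}~ y \iN Y \,.
\ee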
\begin{rem}\label{rem:YD}
Via the duality, the dinaturality morphisms of the coend $K$ provide
a right coaction $(\id_X \oti \iHK_X) \cir (b_X \oti \id_X)$ of $K$ on any
object of \C. This coaction fits together with the action $\rho^K_X$ to
endow the object $X$ with the structure of a left-right Yetter-Drinfeld module in
the monoidal category \C. Moreover, any morphism in \C\ is compatible with
this structure, so that we indeed have a fully faithful embedding of \C\ into
the category of left-right Yetter-Drinfeld modules over $K$.
\end{rem}
\subsection{Representations of mapping class groups}\label{ssec:mpg}
One of the main results of \cite{lyub11} is the construction of representations
of mapping class groups for surfaces with holes (i.e., with open disks excised).
Denote by \Mapgn\ the mapping class group of compact oriented surfaces of genus
$g$ with $n$ boundary components.
Various finite presentations of \Mapgn\ have been discussed in the
literature. Since our purpose is to check invariance under the mapping class
group, for us it is sufficient to display a finite set of generators.
One such set of generators arises from the exact sequence
\be
1\rightarrow\BG gn\rightarrow\Mapgn\rightarrow \Map g0\rightarrow 1 \,,
\ee
(compare \cite[Thm.\,9.1]{FAma}), where $\BG gn$ is a central extension of the
surface braid group by $\zet^n$. Owing to this sequence, one can take as a
generating set the union of those for a presentation of \Map g0\ \cite{wajn}
and for a presentation of $\BG gn$ \cite{scotG3};
this particular presentation has been advocated and used in \cite{lyub6,lyub11}.
To describe these generators, we first introduce a collection of cycles
$a_m$, $b_m$, $d_m$, $e_m$ and $t_{j,k}$ on a genus-$g$ surface $\Surf gn$
with $n$ holes. The following picture indicates these cycles on $\Surf gn$
(see \cite[Figs.\,2\,\&\,6]{lyub6}):
\Eqpic{surf_gn_PIC} {420} {42} { \put(35,0){ \setlength\unitlength{1.3pt}
\put(-42,0) {\INcludepichtft{132f}{65}}
\put(-23,38) {\sse$b_1$}
\put(23,38.5) {\sse$b_{m{-}1}$}
\put(49,46) {\sse$a_m$}
\put(71,38) {\sse$b_m$}
\put(61,54) {\sse$d_m$}
\put(61,22) {\sse$e_m$}
\put(125,54) {\sse$S_l$}
\put(127,38) {\sse$b_{l}$}
\put(188,38) {\sse$b_{k}$}
\put(253,39) {\sse$b_{g}$}
\put(223,26) {\sse$t_{j,k}$}
\put(291,80) {\sse$U_1$}
\put(307,44) {\sse$U_j$}
\put(307,31) {\sse$U_{j+1}$}
\put(291,-1) {\sse$U_n$}
} }
For brevity we refer to the inverse Dehn twist about any of these
cycles by the same symbol as for the cycle itself. The shaded region in the
picture \eqref{surf_gn_PIC} is a neighborhood of the $l$th handle with the
topology of a one-holed torus; we denote this region by $F_l$, and by $F_l'$ the
slightly smaller neighborhood that is indicated by the dotted line inside $F_l$.
The generators considered in \cite{lyub6,lyub11} are then the following:
\def\leftmargini{1.53em}~\\[-2.65em]\begin{enumerate}\addtolength{\itemsep}{-3pt}
\item
Braidings $\wi$, for $i \eq 1,2,...\,,n{-}1$, which interchange the $i$th and
$(i{+}1)$st boundary circles.
\item
Dehn twists $\Ri$ about the $i$th boundary circle, for $i \eq 1,2,...\,,n$.
\item
Homeomorphisms $S_l$, for $l \eq 1,2,...\,,g$, which act as the identity
outside the region $F_l$ and as a modular S-transformation of the one-holed
torus $F_l' \,{\subset}\, F_l^{}$.
\item
Inverse Dehn twists in tubular neighborhoods of the cycles $a_m$
and $e_m$, for $m \eq 2,3,$ $...\,,g$.
\item
Inverse Dehn twists in tubular neighborhoods of the cycles $b_m$ and $d_m$,
for $m \eq 1,2,$ $...\,,g$.
\item
Inverse Dehn twists in tubular neighborhoods of the cycles $t_{j,k}$, for
$j\eq 1,2,...\,,n{-}1$ and $k \eq 1,2,...\,,g$.
\end{enumerate}
\noindent
This system of generators is not minimal. Specifically, the generators $\sk$
can be expressed in terms of the generators $\bk$ and $\dk$. Nevertheless
we keep the $\sk$ in the list, because in the case
$g \eq 1$ and $n \eq 0$, $S \eq \sk$ and $T \eq \dk$ are the usual S- and
T-transformations generating the modular group \slz. As already pointed out,
since the aim of the present article is to determine invariants, it is
sufficient to know the action of some set of generators; relations are not
needed.
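For the reader's convenience we nevertheless recall the standard presentation
of the modular group in terms of these two generators,
\be
\slz \,=\, \big\langle\, S, T \;\big|\; (ST)^3 = S^2 ,~~ S^4 = 1 \,\big\rangle \,,
\ee
which in particular shows that in the case $g \eq 1$, $n \eq 0$ invariance only
needs to be checked for the two maps representing $S$ and $T$.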
\medskip
The representations of our interest involve decorations of the boundary circles
of $\Surf gn$ by (not necessarily distinct) objects $X_1, X_2, ...\,, X_n$ of a
\fftc\ \C. Denote by $\mathfrak N \eq \mathfrak N(X_1{,}...{,} X_n)$ the
subgroup of the symmetric group $\mathfrak S_n$ that is generated by
those permutations $\sigma\iN \mathfrak S_n$ for which $X_i$ and $X_{\sigma(i)}$
are non-isomorphic for at least one value of the label $i$, and set
\be
X := \bigoplus_{\sigma\in \mathfrak N}
X_{\sigma(1)} \oti X_{\sigma(2)} \oti \cdots \oti X_{\sigma(n)} \,.
\labl{defX}
Then the representation space for \Mapgn\ relevant to us is the vector space
\be
V^{X}_{g:n} := \HomC(K^{\otimes g},X)
\labl{Vpqg}
of morphisms of \C. In line with the designation `handle Hopf algebra' for \HK,
this involves one copy of \HK\ for each handle of $\Surf gn$.
To describe the action of \Mapgn\ on the space $V^{X}_{g:n}$, we introduce the
following collections of morphisms: First, the endomorphisms \LAwi\ with $i \iN
\{1,2,...\,,n{-}1\}$ and \LARj\ with $j \iN \{1,2,...\,,n\}$ of $X$ that act as
\be
\bearl
\LAwi{\big|}_{X_1\otimes X_2\otimes\cdots\otimes X_n}
:= \id_{X_1\otimes\cdots\otimes X_{i-1}} \oti c_{X_i,X_{i+1}}
\oti\id_{X_{i+2}\otimes\cdots\otimes X_n}
\qquand \\{}\\[-.6em]
\LARj{\big|}_{X_1\otimes X_2\otimes\cdots\otimes X_n}
:= \id_{X_1\otimes\cdots\otimes X_{j-1}} \oti \theta_{X_j}
\oti\id_{X_{j+1}\otimes\cdots\otimes X_n}
\eear
\labl{LyubactC1}
on the direct summand $X_1\oti X_2\oti\cdots\oti X_n$ of $X$ and analogously on
the other summands in \erf{defX}, with $c$ and $\theta$ the braiding and the
twist of \C, respectively. Second, the endomorphisms
\be
\bearl
\LASk := \id_{\HK^{\otimes g-k}_{}}\oti\SK\oti\id_{\HK^{\otimes k-1}}
\,, \\{}\\[-.6em]
\LAal := \id_{\HK^{\otimes g-l}_{}}
\oti[\QQ\cir (\TK\oti\TK)]\oti \id_{\HK^{\otimes l-2}_{}}
\,, \\{}\\[-.6em]
\LAbk := \id_{\HK^{\otimes g-k}_{}}
\oti(\SK^{-1}\cir \TK\cir \SK)\oti\id_{\HK^{\otimes k-1}_{}}
\,, \\{}\\[-.6em]
\LAdk := \id_{\HK^{\otimes g-k}_{}}\oti\TK\oti\id_{\HK^{\otimes k-1}}
\,, \\{}\\[-.6em]
\LAel := \id_{\HK^{\otimes g-l}_{}}\oti \big[ (\TK\oti
\theta_{\HK^{\otimes l-1}}) \circ \QB_{K^{\otimes l-1}_{}} \big]
\eear
\labl{LyubactC2}
of $K^{\otimes g}_{}$, where $k \iN \{1,2,...\,,g\}$ and $l \iN \{2,3,...\,,g\}$,
and where $\TK$, $\SK$, \QQ\ and $\QB_Y$ are the morphisms introduced
in Section \ref{subsec:hHa}. And third, for any $j\iN \{1,2,...\,,n{-}1\}$ and
$k \iN \{1,2,...\,,g\}$ the linear map \LAtjk\ that maps any morphism
$f\iN \HomC(K^{\otimes g},X_1\oti\cdots\oti X_n)$ to
\be
\bearll
\LAtjk(f) := \!\!& \big(\, \big[\, ( \id_{X_1\otimes\cdots\otimes X_j}
\oti \tilde d_{X_{j+1}\otimes\cdots\otimes X_n} )
\circ
( f \oti \id_{\Vee X_n\otimes\cdots\otimes \Vee X_{j+1}} )
\\{}\\[-.6em]& \quad
\circ\, \{ \id_{K^{\otimes g-k}_{}} \oti [
\QB_{K^{\otimes k-1}\otimes \Vee X_n\otimes\cdots\otimes \Vee X_{j+1}} \cir
(\TK \oti \theta_{K^{\otimes k-1}\otimes \Vee X_n\otimes\cdots\otimes
\Vee X_{j+1}}) ] \} \,\big]
\\{}\\[-.8em]& \hspace*{23.9em}
\otimes\, \id_{X_{j+1}\otimes\cdots\otimes X_n} \,\big)
\\{}\\[-.6em]& \hspace*{21.8em}
\circ\, \big(
\id_{\HK^{\otimes g}_{}} \oti \tilde b_{X_{j+1}\otimes\cdots\otimes X_n} \big)
\eear
\labl{LyubactC3}
in $\HomC(K^{\otimes g},X_1\oti\cdots\oti X_n)$ and acts analogously on
morphisms in $\HomC(K^{\otimes g},X_{\sigma(1)} \oti X_{\sigma(2)} $
\linebreak[0]$
{\otimes}\, \cdots \oti X_{\sigma(n)})$ for $\sigma\iN \mathfrak N$. (A
graphical description of the map $\LAtjk$ will be given in picture
\erf{t_act} below.)
Then we can rephrase the results of \cite[Sect.\,4]{lyub6} and
\cite[Sect.\,3]{lyub11} as follows:
\begin{prop}\label{Lyubact_prop}
For any collection of objects $X_i$ of \C\ as above, the linear endomorphisms
of the space $V^{X}_{g:n}$ that act as
\be
\pi^{X}_{g:n}(\gamma) := \left\{
\bearll
\big( z_\gamma \big)_{*}^{}
& {\rm for}~~ \gamma \eq \omega_i,\,\Ri
~~(\,i \eq 1,2,...\,,n{-}1 ~{\rm resp.}~ i \eq 1,2,...\,,n\,)
\,, \\{}\\[-.6em]
\big( z_\gamma \big)^{*}_{}
& {\rm for}~~ \gamma \eq S_k ~(\,k \eq 1,2,...\,,g\,)
\\{}\\[-.8em]&
~{\rm or}~~ \gamma \eq a_m, b_m, d_m, e_m ~(\,m \eq 1,2,...\,,g
~{\rm resp.}~ m \eq 2,3,...\,,g\,)
\,, \\{}\\[-.6em]
\LAtjk & {\rm for}~~ \gamma \eq t_{j,k}
~(\,j\eq 1,2,...\,,n{-}1 ~{\rm and}~ k \eq 1,2,...\,,g\,)
\eear \right.
\labl{piXgn}
generate a projective representation $\pi^{X}_{g:n}$ of the
mapping class group \Mapgn\ on the vector space $V^{X}_{g:n}$.
\end{prop}
\begin{rem}
In \cite{lyub6,lyub11} the roles of source and target in the vector space
$V^{X}_{g:n}$ \erf{Vpqg}, and correspondingly those of pre- and
post-composition in the formulas \erf{piXgn}, are interchanged.
Accordingly, in our description the inverses of the morphisms $z_\gamma$
of \cite{lyub6,lyub11} appear.
\end{rem}
\begin{rem}\label{rem:pq}
In applications (for details see Remark \ref{rem:pq2} below) one is also
interested in the following variant. Partition the set of boundary circles
of the surface into two subsets of sizes
$p$ and $q$ and denote by \Mapgpq\ the subgroup of the mapping class group
\Mapgppq\ that leaves each of these two subsets separately invariant.
Further, denote the corresponding decorations by $X_1, X_2, ...\,, X_p$ and
by $Y_1, Y_2, ...\,, Y_q$, respectively, and define objects $X$ and $Y$
analogously as in \erf{defX}. Finally note that the right duality of \C\
provides a linear isomorphism
\be
\varphi:\quad \HomC(K^{\otimes g}{\otimes}\,Y,X)
\stackrel\cong\longrightarrow \HomC(K^{\otimes g},X\oti Y^\vee) \,.
\ee
Then
\be
\pi^{Y,X}_{g,p,q}(\gamma)
:= \varphi^{-1} \circ \pi^{X\otimes Y^\vee}_{g,p+q}(\gamma) \circ \varphi
\labl{piXYgpq}
defines a \rep\ $\pi^{Y,X}_{g,p,q}$ of the group \Mapgpq\ on the space
$\HomC(K^{\otimes g}{\otimes}\,Y,X)$.
\end{rem}
\section{Mapping class group invariants}\label{sec:thm}
We now consider the category $\C \eq \HBimod$ of
finite-dimensional bimodules over a factorizable ribbon Hopf algebra $H$.
Recall that there is a canonical Hopf algebra $K$ in \HBimod, obtainable as the
coend \erf{defK} for \HBimod. Its bimodule structure is given in \erf{defHaa};
for a detailed description of the structure morphisms of $K$ as a Hopf algebra,
see \cite[App.\,A.3]{fuSs3}. Also recall the action $\pi_{g:n}^X$ of \Mapgn\
described in Proposition \ref{Lyubact_prop}.
Our goal is to provide, given the Hopf algebra $H$ in \Vectk, and thus the
Hopf algebra $K$ in \HBimod, the following collection of data:
\begin{itemize}
\item
An object $F$ in the category \HBimod\ that carries a natural structure of
a commutative symmetric Frobenius algebra.
\item
For any choice of non-negative integers $g$ and $n$ a morphism
\be
\Cor gn \,\in\, \HomHH(K^{\otimes g}, F^{\otimes n})
\labl{taskCor}
that is invariant under the action $\pi_{g:n}^X$ of \Mapgn\ with
$X \eq F^{\otimes n}$ (which corresponds to taking
$X_1 \eq \cdots \eq X_n \eq F$ as objects in formula \erf{defX}).
\end{itemize}
\begin{rem}
Any such object $F$ is a candidate for a \emph{space of bulk states} in a
conformal quantum field theory whose chiral data are described by the category
\HMod\ of $H$-modules. The morphisms $\Cor gn$ are then candidates for modular
invariant bulk correlation functions of the conformal field theory, for world
sheets of any genus $g$ and for any number $n$ of bulk field insertions. The
classification of spaces of bulk states for given chiral data and
the construction of correlation functions for a given space of bulk states
are two of the central open problems in the study of conformal field theory.
\\
In order to allow for a consistent interpretation as a partition function,
$\Cor 10$, i.e.\ the zero-point correlator on a torus, should be non-zero.
\end{rem}
It is worth stressing that the object $F$ is by no means uniquely determined
by the existence of invariants \erf{taskCor}. Indeed, a whole family
$\{\Fomega\}$ of commutative symmetric Frobenius algebras in \HBimod\ that
are candidates for such an object has already been obtained in \cite{fuSs3},
together with the corresponding invariant morphisms for the case that
$g\eq 1$ and $n \,{\in}\, \{0,1\}$. Here $\omega$ is
any choice of a ribbon automorphism of the Hopf algebra $H$. Moreover,
for the case of the identity automorphism it was shown \cite{fuSs4} that
the resulting invariant for $g \eq 1$ and $n \eq 0$ is non-zero.
The main result of the present paper is that each of these $H$-bimodules
$\Fomega$ indeed has all the desired properties: for any choice of ribbon
automorphism $\omega$ we are able to provide the morphism $\Cor gn$ and
establish its \Mapgn-invariance for arbitrary integers $g,n \,{\ge}\, 0$.
On the other hand, generically the family $\{\Fomega\}$
can \emph{not} be expected to
exhaust all solutions to the problem posed above; our results
do not suggest any concrete approach to such a classification.
\medskip
As it turns out, the presence of a general ribbon automorphism $\omega$
only constitutes a minor modification of the issues that already arise
in the case of the identity automorphism. Accordingly the bimodule of central
importance for our discussion is $F^{\!\idsm_H}$, the one obtained when
$\omega \eq \id_H$. Henceforth we slightly abuse notation and simply use
the symbol $F$ for this object. For the family $\{\Fomega\}$ obtained in
\cite{fuSs3}, setting $\omega \eq \id_H$ yields the \emph{coregular bimodule}
in \HBimod. By this we mean the dual of the regular bimodule $(H,m,m)$, i.e.\
the vector space \Hs\ endowed with the dual of the regular left and right
actions of $H$ on itself. Explicitly,
\be
\Hb = (\Hs,\brho,\bohr)
\ee
with $\brho \iN \Hom(H\oti\Hs,\Hs)$ and $\bohr \iN \Hom(\Hs\oti H,\Hs)$ given by
\be
\bearl
\brho:= (d_H\oti\idHs) \cir (\idHs\oti m\oti\idHs) \cir (\idHs\oti\apo\oti b_H)
\cir\tauHHv \qquand
\nxl3
\bohr:= (d_H\oti\idHs)\cir(\idHs\oti m\oti\idHs)
\cir(\idHs\oti\id_H\oti\tauHvH)\cir(\idHs\oti b_H \oti\apoi) \,.
\eear
\labl{rhorho}
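Unwinding the evaluation and coevaluation morphisms in \erf{rhorho} (read in
the standard way; the graphical conventions in the text may differ in
inessential ways), these actions take the familiar element form
\be
(h \,.\, \phi)(x) = \phi\big(\apo(h)\, x\big)
\qquand
(\phi \,.\, h)(x) = \phi\big(x\, \apoi(h)\big)
\ee
for $h, x \iN H$ and $\phi \iN \Hs$. The antipode and its inverse enter so as
to turn the regular left and right multiplications of $H$ on itself into
genuine left and right actions on the dual space \Hs.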
It has been demonstrated \cite{fuSs3} that the coregular bimodule $F$ carries a
natural structure of a commutative symmetric Frobenius algebra in the ribbon
category \HBimod. Moreover, $F$ can be characterized as the coend of a suitable
functor from $H\Mod\op \Times H\Mod$ to \HBimod. This way the structural
morphisms endowing $F$ with the structure of an algebra in \HBimod\ can be
obtained with the help of the universal property of this coend. In contrast,
the coalgebra structure of $F$ involves the integral and cointegral of $H$.
Specifically, the product $m_F$ is the dual of the coproduct of $H$, and the
unit $\eta_F$ is the dual of the counit of $H$; the coproduct $\Delta_F$
involves the cointegral $\lambda\iN H^*$, while the counit $\eps_F$ is the dual
of the integral $\Lambda\iN H$. Explicitly, in graphical notation we have
\Eqpic{pic-Hb-Frobalgebra} {440} {47} { \put(0,-3){
\put(0,45) {$ m\bico~= $}
\put(48,0) {\Includepichtft{79a}
\put(-5.9,-8.8) {\sse$ \Hss $}
\put(6.5,-8.8) {\sse$ \Hss $}
\put(31.5,34.7) {\sse$ \Delta $}
\put(49.7,106.8){\sse$ \Hss $}
}
\put(142,45) {$ \eta\bico~= $}
\put(185,24) {\Includepichtft{81j}
\put(-5.9,23.6) {\sse$ \eps $}
\put(10.7,44.1) {\sse$ \Hss $}
}
\put(243,45) {$ \Delta\bico~= $}
\put(291,0) {\Includepichtft{82a}
\put(-4.3,-8.8) {\sse$ \Hss $}
\put(11.1,26.8) {\sse$ \Delta $}
\put(17.7,43.8) {\sse$ \apo $}
\put(21.5,71) {\sse$ \lambda $}
\put(32.8,59.3) {\sse$ m $}
\put(48.2,89.4) {\sse$ \Hss $}
\put(61.1,89.4) {\sse$ \Hss $}
}
\put(395,45) {$ \eps\bico~= $}
\put(438,24) {\Includepichtft{82b}
\put(-4.3,-8.8) {\sse$ \Hss $}
\put(15.8,16.3) {\sse$ \Lambda $}
} } }
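For three of the four structure morphisms the element-level description is
immediate (up to flips fixed by the graphical conventions above):
\be
m\bico(\phi \oti \psi) = (\phi \oti \psi) \cir \Delta \,, \qquad
\eta\bico = \eps \qquand \eps\bico(\phi) = \phi(\Lambda)
\ee
for $\phi,\psi \iN \Hs$; only the coproduct $\Delta\bico$ genuinely involves
the antipode and the cointegral $\lambda$.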
We are now in a position to give our result for the morphisms $\Cor gn$ in the
category $\C \eq \HBimod$. We first present $\Cor gn$ in purely categorical
terms: as an element of the morphism space
$\Hom_\C(K^{\otimes g},F^{\otimes n})$ of the category \C\ it is given by
\eqpic{Sk_morph} {180} {148} { \put(0,13){
\put(0,129) {$\Cor gn :=~$}
\put(60,0) {\Includepichtft{132a}
\put(-8.7,-12) {\sse$ \underbrace{\hspace*{7.1em}}_{g~ \rm factors} $}
\put(28.3,288) {\sse$ \overbrace{\hspace*{7.6em}}^{n~ \rm factors} $}
\put(-5,-9.2) {\sse$ K $}
\put(8.8,-9.2) {\sse$ K $}
\put(27,-9.2) {\sse$ \cdots $}
\put(49.2,-9.2) {\sse$ K $}
\put(27.2,278) {\sse$ F $}
\put(49.5,278) {\sse$ F $}
\put(66,278) {\sse$ \cdots $}
\put(86,278) {\sse$ F $}
\put(65,176) {\sse$ \rho^K_F $}
\put(78.6,198) {\sse$ m_F^{} $}
\put(71.3,219) {\sse$ \Delta_F $}
\put(79.8,12.2) {\sse$ \eta_F^{} $}
}
\put(186,256) {\catpic}
} }
for $n \,{>}\, 0$, and as $\Cor g0 \,{:=}\, \eps_F \cir \Cor g1$.
\medskip
Let us describe the rationale for arriving at this expression for $\Cor gn$.
\def\leftmargini{1.57em}~\\[-2.65em]\begin{enumerate}\addtolength{\itemsep}{-3pt}
\item
Draw a skeleton for the surface $\Surf g{n}$, including outward-oriented edges
attached to the boundary components, and label each edge of this skeleton
with the Frobenius algebra $F$ in \C. (A priori one may want to work with
ribbons instead of edges, but owing to $\theta_F \eq \id_F$ this makes no difference.)
\item
Orient the internal edges in such a way that each of the vertices of the
skeleton has either two incoming and one outgoing edge or vice versa.
Then label each vertex with the product $m_F$ or coproduct $\Delta_F$ of $F$,
depending on whether two or one of its edges are incoming.
\item
Further, for each handle of the surface, attach another edge
labeled by the handle Hopf algebra $K \iN \C$ to the corresponding
loop of the skeleton, and label the resulting new trivalent vertex
by the action $\rho^K_F$, as defined in \erf{def-rho}, of the Hopf algebra $K$
on the object in \C\ underlying the Frobenius algebra $F$.
\item
The resulting graph is interpreted as a morphism in \C. Using the fact that
the algebra $F$ is commutative symmetric Frobenius, the vertices can be
rearranged in such a way that one ends up with the morphism given in
\erf{Sk_morph}.
\end{enumerate}
\noindent
Our main result can now be stated as follows:
\begin{thm}\label{thm:main}
Let $\C \eq \HBimod$ for $H$ a finite-dimensional factorizable ribbon Hopf
algebra. Then for any pair of integers $g,n \,{\ge}\,0$ the morphism $\Cor gn$
is invariant under the action $\pi_{g:n}^{F^{\otimes n}}$ of the mapping class
group $\Mapgn$ described in Proposition \ref{Lyubact_prop}.
\end{thm}
\medskip
\begin{rem}\label{rem:pq2}
As already pointed out, a major motivation for our investigations are
applications to conformal field theory. In that context, the morphisms $\Cor gn$
describe correlation functions of bulk fields. As such they not only
have to be invariant under the relevant action of the mapping class group,
but must also satisfy so-called factorization constraints
(for a precise formulation in the semisimple case see \cite{fjfrs}). The latter
constraints relate correlators for surfaces of different topology and
necessarily involve correlators with (using quantum field theory terminology)
both incoming and outgoing field insertions. We do not have anything to say
about factorization constraints in this paper, but including the possibility
of having both incoming and outgoing insertions is
easy. Indeed, for any choice of non-negative integers $g$, $p$ and $q$,
according to Remark \ref{rem:pq} we have a \rep\ of \Mapgpq\ on the space
$\HomHH(K^{\otimes g}\oti F^{\otimes q}, F^{\otimes p})$. A morphism
\be
\Corr gpq\in \HomHH(K^{\otimes g}\oti F^{\otimes q}, F^{\otimes p})
\ee
that is invariant under this action is then obtained as follows. We have
$\Corr gp0 \eq \Cor gp$ as given in \erf{Sk_morph}, while for $q \,{>}\,0$ the
morphism $\Corr gpq$ is obtained from $\Cor gp$ by just replacing the unit
$\eta_F \iN \HomHH(\one,F) \,{\equiv}\, \HomHH(F^{\otimes 0},F)$
by the morphism in $\HomHH(F^{\otimes q},F)$ that is given by
the $(q{-}1)$-fold product of $F$.
\end{rem}
\begin{rem}
As already pointed out, we actually find a family of suitable objects $\Fomega$,
labeled by ribbon automorphisms $\omega$ of $H$, and provide invariant vectors
$\Corw g {p,q}$ in the corresponding morphism spaces $\HomHH\big(K^{\otimes g}
\oti (\Fomega)^{\otimes q}_{}, (\Fomega)^{\otimes p}_{}\big)$ for all values
of $g$, $p$ and $q$. Thus for any ribbon automorphism $\omega$ we obtain
candidates for the bulk state space and for bulk correlation functions in
conformal field theory. For brevity we have
concentrated above on the case of the identity automorphism.
Sections \ref{sec:lemmata} and \ref{sec:proofmain} below contain
the proof of our main result for $\omega \eq \id_H$, while the proof for
the general case will be accomplished in Section \ref{sec:omega}.
\end{rem}
The picture \erf{Sk_morph} represents $\Cor gn$ as a morphism of
the braided tensor category of $H$-bi\-modules. We conclude this section
by expressing $\Corr gpq$ as a \ko-linear map, thereby obtaining a pictorial
description in the category of vector spaces. To this end we insert
explicit expressions for all the structural morphisms appearing in
\erf{Sk_morph}. The expressions for the structural morphisms of $F$
have already been displayed in the picture \erf{pic-Hb-Frobalgebra}.
It therefore suffices to describe in addition the morphisms $\Corr g11$
with one incoming and one outgoing insertion of $F$.
Let us first consider the case $g \eq p \eq 1$ and $q \eq 0$:
\begin{lem}
The morphism $\Corr 110 \,{\equiv}\, \Cor 11$ satisfies the following chain
of equalities of linear maps:
\Eqpic{PF_1} {440} {57} {\setulen95
\put(-3,59) { $ \Cor 11 ~= $ }
\put(63,0) {\INcludepichtft{123f_2}{361}
\put(-5.5,-9.2) {\sse$ \Hss $}
\put(14.4,-9.2) {\sse$ \Hss $}
\put(25.2,41) {\sse$ Q^{-1} $}
\put(33.5,15) {\sse$ Q $}
\put(61,118.7) {\sse$ \lambda $}
\put(73,139) {\sse$ \Hss $}
}
\put(163,59) {$=$}
\put(199,0) {\INcludepichtft{123h_1A}{361}
\put(-4.5,-9.2) {\sse$ \Hss $}
\put(28,-9.2) {\sse$ \Hss $}
\put(14.5,33) {\sse$ Q^{-1} $}
\put(47.5,10.2) {\sse$ Q $}
\put(45.1,118.5) {\sse$ \lambda $}
\put(62,139) {\sse$ \Hss $}
}
\put(271,59) {$=$}
\put(307,0) {\INcludepichtft{123i_1}{361}
\put(-5,-9.2) {\sse$ \Hss $}
\put(36,-9.2) {\sse$ \Hss $}
\put(20,19) {\sse$ Q^{-1} $}
\put(55.6,19) {\sse$ Q $}
\put(59.7,78.5) {\sse$ \lambda $}
\put(32,139) {\sse$ \Hss $}
}
\put(390,59) {$=$}
\put(422,0) {\INcludepichtft{123k}{361}
\put(-5,-9.2) {\sse$ \Hss $}
\put(8.2,-9.2) {\sse$ \Hss $}
\put(36,10.5) {\sse$ \Lambda $}
\put(49.5,139) {\sse$ \Hss $}
} }
Here $Q$, $\lambda$ and $\Lambda$ are the monodromy matrix, the cointegral and
the integral of $H$, respectively.
\end{lem}
\begin{proof}
Insert the expressions for $m_F$, $\eta_F$ and $\Delta_F$ as well as for the
\HK-action $\rho^K_F$ (with braidings according to \erf{bibraid} and with the
formula for $\rho^K_X$ that we will present for a general $H$-bimodule $X$ in
\erf{Lyubact_HKH} below) into \erf{Sk_morph} with $g\eq n\eq 1$. Then by using
associativity of the product $m$ of $H$, we arrive at the first picture in the
chain \erf{PF_1} of equalities. The second equality follows by using several
times the anti-(co)algebra property of the antipode $\apo$ of $H$.
In the resulting morphism we can recognize the left-adjoint $H$-action
on the right leg of the inverse monodromy matrix $Q^{-1}$. The third
equality is then just the statement that the morphism $f_{Q^{-1}}$
intertwines the left-adjoint and left-coadjoint actions. The last
equality follows with the help of the identity \erf{fQS_Psi} together with
$(\apo\oti\apo)\cir Q \eq \tauHH\cir Q$, the anti-coalgebra property
of the inverse antipode and the fact that, by unimodularity of $H$,
$\apo\cir\Lambda \eq \Lambda$.
\end{proof}
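For later use we also note that the identity $(\apo\oti\apo)\cir Q \eq \tauHH\cir Q$
invoked in the last step of the proof above follows by a standard computation.
Writing the monodromy matrix as $Q \eq R_{21}^{}\,R$ with $R$ the R-matrix of $H$
(a common convention, which we assume here), using $(\apo\oti\apo)(R) \eq R$, and
using that $\apo\oti\apo$ is an algebra anti-endomorphism of $H\oti H$ while the
flip $\tauHH$ is an algebra automorphism, one finds
\be
(\apo\oti\apo)(Q) = (\apo\oti\apo)(R_{21}^{}\,R)
= (\apo\oti\apo)(R) \cdot (\apo\oti\apo)(R_{21}^{})
= R\,R_{21}^{} = \tauHH(R_{21}^{}\,R) = \tauHH(Q) \,.
\ee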
To proceed to $\Corr 111$ we simply note that, using the unit property
of $\eta_F$ and the Frobenius property and associativity of $F$, we have
\be
\Corr 111 = m_F \circ \big[\,\big(\Corr 111 \cir (\id_K\oti\eta_F) \big)
\oti \id_F \,\big] = m_F \circ ( \Corr 110 \oti \id_F) \,.
\labl{111from110}
With the result \erf{PF_1} of Lemma 2.6 and the explicit form of the
multiplication $m_F$ from formula \erf{pic-Hb-Frobalgebra}, this amounts to
\eqpic{Corr12H} {300} {48} {\setulen80
\put(5,60) {$\Corr 111 ~=$}
\put(101,0) {\includepichtft{131aA}
\put(-5,-9.2) {\sse$ \Hss $}
\put(8.2,-9.2) {\sse$ \Hss $}
\put(47.2,-9.2) {\sse$ \Hss $}
\put(36,19.5) {\sse$ \Lambda $}
\put(86,139) {\sse$ \Hss $}
}
\put(219,60) {$ \equiv $}
\put(261,0) {\includepichtft{131bA}
\put(-5,-9.2) {\sse$ \Hss $}
\put(8.2,-9.2) {\sse$ \Hss $}
\put(51.2,-9.2) {\sse$ \Hss $}
\put(36,15.5) {\sse$ \Lambda $}
\put(43,87.5) {\sse$ \ohrad $}
\put(89,139) {\sse$ \Hss $}
} }
with $\ohrad$ the right-adjoint action of $H$ on itself.
This expression extends in a straightforward manner to
\eqpic{CorrgnH} {300} {117} { \put(0,18){ \setulen90
\put(0,120) {$\Corr g11=~$}
\put(81,0) {\INcludepichtft{131c}{342}
\put(-3.9,-13) {\sse$ \underbrace{\hspace*{14.5em}}
_{g~ {\rm factors~of}~\Hs{\otimes}\Hs} $}
\put(-5,-9.2) {\sse$ \Hss $}
\put(8.2,-9.2) {\sse$ \Hss $}
\put(44.5,-9.2) {\sse$ \Hss $}
\put(56.5,-9.2) {\sse$ \Hss $}
\put(76.5,-9.2) {\sse$ \dots\dots$}
\put(111.5,-9.2) {\sse$ \Hss $}
\put(123.5,-9.2) {\sse$ \Hss $}
\put(154.5,-9.2) {\sse$ \Hss $}
\put(234,260) {\sse$ \Hss $}
\put(36,180.5) {\sse$ \Lambda$}
\put(70,130.5) {\sse$ \Lambda$}
\put(137,69.5) {\sse$ \Lambda$}
\put(43,220) {\sse$ \ohrad $}
\put(91.4,169.4) {\sse$ \ohrad $}
\put(158.2,108.5){\sse$ \ohrad $}
} } }
Finally, the morphisms with $q$ or $p$ larger than 1 are obtained by pre- and
post-composition of \erf{CorrgnH} with an appropriate multiple product and
multiple coproduct of $F$, respectively.
Explicitly, writing $m_F^{(2)}\eq m_F$ and $m_F^{(l)}\eq m_F
\cir (m_F^{(l-1)} \oti \id_F)$ for $l\,{>}\,2$ as well as $m_F^{(0)} \eq \eta_F$
and $m_F^{(1)} \eq \id_F$, and analogously for $\Delta_F^{(l)}$, we have
\be
\Corr gpq = \Delta_F^{(p)} \circ \Corr g11 \circ
\big( \id^{}_{K^{\otimes g}_{}} \oti m_F^{(q)} \big)
\labl{gpqfromg11}
for all $p,q$ and $g$.
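For instance, unwinding these recursive definitions for $l \eq 3$ (with the
analogous convention $\Delta_F^{(l)} \eq (\Delta_F^{(l-1)} \oti \id_F) \cir \Delta_F$
for the iterated coproduct) gives
\be
m_F^{(3)} = m_F \cir \big( m_F \oti \id_F \big) \qquand
\Delta_F^{(3)} = \big( \Delta_F \oti \id_F \big) \cir \Delta_F \,.
\ee
By associativity and coassociativity of $F$, any other bracketing of the iterated
(co)product yields the same morphism, so that \erf{gpqfromg11} is unambiguous.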
\section{Useful identities}\label{sec:lemmata}
In this section we present a few preliminary results that will be instrumental
in the proof of Theorem \ref{thm:main}. Our first task is to specialize the
half-monodromies and related morphisms that we introduced in Section
\ref{subsec:hHa} for a general \fftc\ \C\ to the specific case of
$\C \eq \HBimod$. To do so we use the specific form \erf{bibraid} of the
braiding in \HBimod\ and the explicit expressions \cite{lyub6,vire4} for the
dinatural family and for the Hopf algebra
structure morphisms of $K$. For the morphism $\QB_X$ defined in formula
\erf{QHX} we then obtain, for any bimodule $X \eq (X,\rho_X,\ohr_X)$,
\eqpic{Qq_X} {150} {49} {\setulen80
\put(9,57) {$ \QB_X =~$}
\put(80,0) {\includepichtft{133c}
\put(-5.4,-9.2) {\sse$ \Hss $}
\put(19.3,-9.2) {\sse$ \Hss $}
\put(27.4,140.4) {\sse$ \Hss $}
\put(52.8,140.4) {\sse$ \Hss $}
\put(92.5,-9.2) {\sse$ X $}
\put(92.9,140.4) {\sse$ X $}
\put(53,4.5) {\sse$ Q $}
\put(72,50.7) {\sse$ Q^{-1} $}
\put(103.1,68.7) {\sse$ \ohr_X^{} $}
\put(80.9,101.8) {\sse$ \rho_X^{} $}
} }
where $Q$ is the monodromy matrix of $H$,
while the endomorphism \erf{QHH} of $K\oti K$ becomes
\eqpic{QQ_Haa} {240}{63} {
\put(22,57) {$ \QQ ~= $}
\put(81,0) { \Includepichtft{121b}
\put(-4.3,-9.2) {\sse$ \Hss $}
\put(20.7,-9.2) {\sse$ \Hss $}
\put(29.4,139) {\sse$ \Hss $}
\put(44.8,3.9) {\sse$ Q $}
\put(54.8,139) {\sse$ \Hss $}
\put(62.8,12.8) {\sse$ \apoi $}
\put(78.7,-9.2) {\sse$ \Hss $}
\put(86.6,53.1) {\sse$ Q^{-1} $}
\put(87.6,77.9) {\sse$ \apo $}
\put(102.9,-9.2) {\sse$ \Hss $}
\put(112.2,139) {\sse$ \Hss $}
\put(137.7,139) {\sse$ \Hss $}
} }
The S- and T-transformations \erf{S-HK} and \erf{p9} involve the monodromy
matrix and cointegral, and the ribbon element $v$, respectively:
\eqpic{S_KH-TKH} {410}{45} { \put(0,4){
\put(0,39) {$ \SK ~= $}
\put(50,-6) { \Includepichtft{121cA}
\put(-4.5,-9.2) {\sse$ \Hss $}
\put(13,51.8) {\sse$ Q^{-1} $}
\put(36.5,-9.2) {\sse$ \Hss $}
\put(32.5,87) {\sse$ \lambda $}
\put(44.6,106) {\sse$ \Hss $}
\put(57.4,12.1) {\sse$ Q $}
\put(73.5,56) {\sse$ \lambda $}
\put(85.6,106) {\sse$ \Hss $}
}
\put(180,39) {and}
\put(230,39) {$ \TK ~= $}
\put(278,0) { \Includepichtft{121a}
\put(-4.4,-9.2) {\sse$ \Hss $}
\put(12,15) {\sse$ v $}
\put(39.6,94) {\sse$ \Hss $}
\put(61.2,-9.2) {\sse$ \Hss $}
\put(89.7,34.5) {\sse$ v^{-1} $}
\put(105.3,94) {\sse$ \Hss $}
} } }
The partial monodromy action \erf{def-rho} of $K$ on an $H$-bimodule $X$,
which is obtained by composing $\QB_X$ in \erf{Qq_X} with
$\eps_K \oti \id_X \eq \eta^\vee \oti \eta^\vee \oti\id_X$ (and which, as noted
in Remark \ref{rem:YD}, fits together with the natural $K$-coaction into a
Yetter-Drinfeld structure) is
\eqpic{Lyubact_HKH} {120} {48} {
\put(0,47) {$ \rho^K_X=~ $}
\put(50,0) { \Includepichtft{133f}
\put(-5.4,-8.5) {\sse$ \Hss $}
\put(10,-8.5) {\sse$ \Hss $}
\put(46.2,-8.5) {\sse$ X $}
\put(31,6.8) {\sse$ Q $}
\put(13,46.8) {\sse$ Q^{-1} $}
\put(54.7,68.8) {\sse$ \ohr_X^{} $}
\put(35.2,90.5) {\sse$ \rho_X^{} $}
\put(46.6,108.6) {\sse$ X $}
} }
i.e.\ the natural $K$-action is nothing but the $H$-bimodule action composed
with variants of the Drinfeld map.
We will also need the inverse of the isomorphism $\SK$; it is given by
\eqpic{S_KHinv} {155}{45} { \put(0,4){
\put(0,42) {$ \SK^{-1} ~= $}
\put(50,-6) { \Includepichtft{121gA}
\put(-4.5,-8.1) {\sse$ \Hss $}
\put(16.4,43.8) {\sse$ Q $}
\put(41.5,-8.1) {\sse$ \Hss $}
\put(32.4,87) {\sse$ \lambda $}
\put(44.1,106) {\sse$ \Hss $}
\put(59.3,4.1) {\sse$ Q^{-1} $}
\put(78.7,39) {\sse$ \lambda $}
\put(90.6,106) {\sse$ \Hss $}
} } }
Next, for further use we note that, since the R-matrix intertwines the coproduct
and opposite coproduct of $H$, conjugating by the monodromy matrix preserves
the coproduct:
\eqpic{removeQs} {205} {31} {
\put(0,0) {\Includepichtft{133d}
\put(23,-9.2) {\sse$ H $}
\put(1,8.2) {\sse$ Q $}
\put(47,8.6) {\sse$ Q^{-1}$}
\put(16.5,74) {\sse$ H $}
\put(33,74) {\sse$ H $}
}
\put(85,33) {$=~\Delta_H~=$}
\put(159,0) {\Includepichtft{133e}
\put(23,-9.2) {\sse$ H $}
\put(-1.9,8.6) {\sse$ Q^{-1}$}
\put(50.7,8.6) {\sse$ Q$}
\put(16.5,74) {\sse$ H $}
\put(33,74) {\sse$ H $}
} }
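In conventional Hopf-algebra notation, \erf{removeQs} amounts to the following
standard computation. Writing $Q \eq R_{21}^{}\,R$ (a common convention, which we
assume here) and denoting by $\Delta^{\rm op} \eq \tauHH \cir \Delta$ the opposite
coproduct, the intertwining property $R\,\Delta(h) \eq \Delta^{\rm op}(h)\,R$ of the
R-matrix, together with its flipped version
$R_{21}^{}\,\Delta^{\rm op}(h) \eq \Delta(h)\,R_{21}^{}$, yields
\be
Q\,\Delta(h)\,Q^{-1} = R_{21}^{}\,R\,\Delta(h)\,R^{-1}R_{21}^{-1}
= R_{21}^{}\,\Delta^{\rm op}(h)\,R_{21}^{-1} = \Delta(h)
\ee
for all $h \iN H$.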
We will also make use of the following result that can be formulated in the
general setting of ribbon categories:
\begin{lemma}\label{lem:2prod_mon}
For any commutative Frobenius algebra $A$ in a ribbon category \C\ the equalities
\eqpic{2prod_mon} {250} {38} {\setulen90
\put(0,0) { \INcludepichtft{139a}{342}
\put(-4.3,-9.2) {\sse$ A $}
\put(8.3,-9.2) {\sse$ A $}
\put(20,47.6) {\sse$ c $}
\put(21.5,-9.2) {\sse$ A $}
\put(28,30.7) {\sse$ c $}
\put(44.5,101) {\sse$ A $}
}
\put(82,43) {$=$}
\put(120,0) { \INcludepichtft{139c}{342}
\put(-4.3,-9.2) {\sse$ A $}
\put(8.3,-9.2) {\sse$ A $}
\put(20,47.6) {\sse$ c $}
\put(21.5,-9.2) {\sse$ A $}
\put(28,30.7) {\sse$ c $}
\put(44.2,101) {\sse$ A $}
}
\put(203,43) {$=$}
\put(240,0) { \INcludepichtft{139b}{342}
\put(-4.3,-9.2) {\sse$ A $}
\put(8.3,-9.2) {\sse$ A $}
\put(12.8,101) {\sse$ A $}
\put(21.8,-9.2) {\sse$ A $}
}
\put(297,80) {\catpic}
}
hold.
\end{lemma}
\begin{proof}
The first equality is a direct consequence of the Frobenius property of $A$.
The second equality follows by consecutively using associativity, commutativity,
and associativity combined with the Frobenius property.
\end{proof}
We are now in a position to establish various convenient identities that are
satisfied for any factorizable ribbon Hopf algebra $H$.
First we obtain various equalities involving the coproduct and monodromy matrix.
\begin{lemma}
The following identities hold:
\eqpic{Q2Delta} {100} {38} {\setulen80
\put(0,0) {\includepichtft{138a}}
\put(21.2,-9.5) {\sse$ H $}
\put(6.6,112) {\sse$ H $}
\put(19.7,112) {\sse$ H $}
\put(32.2,112) {\sse$ H $}
\put(-2.8,3.2) {\sse$Q^{-1}$}
\put(41.5,1.8) {\sse$Q$}
\put(78,44) {$=$}
\put(120,0) {
\put(0,0) {\includepichtft{138b}}
\put(11.7,-9.2) {\sse$ H $}
\put(-3.6,112) {\sse$ H $}
\put( 9.5,112) {\sse$ H $}
\put(22.2,112) {\sse$ H $}
} }
and
\eqpic{Q3Delta} {150} {55} {\setulen80
\put(0,-9) {
\put(0,0) {\includepichtft{141aA}}
\put(9.5,166.1) {\sse$ H $}
\put(33.1,166.1) {\sse$ H $}
\put(39.2,-9.8) {\sse$ H $}
\put(48.2,166.1) {\sse$ H $}
\put(63.7,166.1) {\sse$ H $}
\put(9.2,16.4) {\sse$Q^{-1}$}
\put(80.3,2.5) {\sse$Q$}
\put(116,70) {$=$}
\put(146,0) {
\put(0,0) {\includepichtft{141eA}}
\put(-4.1,166.1) {\sse$ H $}
\put(18.9,166.1) {\sse$ H $}
\put(25.4,-9.8) {\sse$ H $}
\put(35.1,166.1) {\sse$ H $}
\put(50.7,166.1) {\sse$ H $}
} } }
and
\eqpic{Pic_QQ3} {370} {38} {\setulen80
\put(0,0) { \includepichtft{134dA}
\put(24,114.1) {\sse$ H $}
\put(37.1,114.1) {\sse$ H $}
\put(45,-11.2) {\sse$ H $}
\put(54,114.1) {\sse$ H $}
\put(67.5,114.1) {\sse$ H $}
\put(0,8) {\sse$Q$}
\put(78.6,8.5) {\sse$Q^{-1}$}
}
\put(116,52) {$=$}
\put(143,0) { \includepichtft{134eA}
\put(23.3,114.1) {\sse$ H $}
\put(36.7,114.1) {\sse$ H $}
\put(44.2,-11.2) {\sse$ H $}
\put(54,114.1) {\sse$ H $}
\put(67.5,114.1) {\sse$ H $}
\put(0,8) {\sse$Q$}
\put(78.6,6.9) {\sse$Q^{-1}$}
}
\put(264,52) {$=$}
\put(300,0) { \includepichtft{134fA}
\put(-2.5,114.1) {\sse$ H $}
\put(11.5,114.1) {\sse$ H $}
\put(19,-11.2) {\sse$ H $}
\put(28,114.1) {\sse$ H $}
\put(40.7,114.1) {\sse$ H $}
}
\put(378,52) {$=$}
\put(409,0) { \includepichtft{134gA}
\put(-3.7,114.1) {\sse$ H $}
\put(9.3,114.1) {\sse$ H $}
\put(17.5,-11.2) {\sse$ H $}
\put(25.6,114.1) {\sse$ H $}
\put(39.8,114.1) {\sse$ H $}
} }
and
\eqpic{inv_Qq3} {410} {47} { \put(0,2) { \setulen80
\put(-2,0) { \includepichtft{135dA}
\put(-6.2,-11.2) {\sse$ \Hss $}
\put(21.2,-11.2) {\sse$ H $}
\put(22.8,135.2) {\sse$ H $}
\put(32.6,60.9) {\sse$Q$}
\put(37.9,135.2) {\sse$ H $}
\put(51.2,13.2) {\sse$Q^{-1}$}
\put(54.2,135.2) {\sse$ H $}
}
\put(103,57) {$=$}
\put(144,0) { \includepichtft{135e}
\put(-11.5,53) {\sse$Q^{-1}$}
\put(-6.5,-11.2) {\sse$ \Hss $}
\put(21.2,-11.2) {\sse$ H $}
\put(22.7,135.2) {\sse$ H $}
\put(39.2,135.2) {\sse$ H $}
\put(55.2,135.2) {\sse$ H $}
\put(64.6,21.3) {\sse$Q$}
}
\put(254,57) {$=$}
\put(292,0) { \includepichtft{135f}
\put(-3,45.4) {\sse$Q^{-1}$}
\put(-6.2,-11.2) {\sse$ \Hss $}
\put(15.2,-11.2) {\sse$ H $}
\put(15.7,135.2) {\sse$ H $}
\put(32.1,135.2) {\sse$ H $}
\put(48.3,135.2) {\sse$ H $}
\put(58.5,20.7) {\sse$Q$}
}
\put(395,57) {$=$}
\put(431,0) { \includepichtft{135gA}
\put(-6.2,-11.2) {\sse$ \Hss $}
\put(21.4,-11.2) {\sse$ H $}
\put(21.6,135.2) {\sse$ H $}
\put(37.8,135.2) {\sse$ H $}
\put(54.3,135.2) {\sse$ H $}
} } }
\end{lemma}
\begin{proof}
(i)\, Consider the equality between the left- and right-most expressions in
\erf{2prod_mon} for the case of interest to us, i.e.\ with $A \eq F$ and with $c$
the braiding in \HBimod. This involves the left-dual actions $\rhoV^H$ and
$\ohrV^H$ of $H$ on the bimodule ${}_{}^\vee\! F$. The latter obey
\eqpic{Fv_act} {160} {33} {
\put(-4,0) {\Includepichtft{137a}
\put(-3,82) {\sse$ H $}
\put(16.5,-9.2) {\sse$ H $}
\put(31,-9.2) {\sse$ ^\vee\!F $}
\put(47.7,-9.2) {\sse$ H$}
\put(41.5,20.5) {\sse$ \ohrV^H $}
\put(39.2,35.5) {\sse$ \rhoV^H $}
}
\put(73,34) {$=$}
\put(104,0) {\Includepichtft{137b}
\put(-4.3,-9.2) {\sse$ H $}
\put(51,-9.2) {\sse$ ^\vee\! F $}
\put(73,-9.2) {\sse$ H$}
\put(17.5,82) {\sse$ H $}
} }
Implementing this relation and composing the resulting equality with suitable
duality morphisms yields \erf{Q2Delta}.
\\[2pt]
(ii)\,
On the left hand side of \erf{Q3Delta}, push the inverse antipode that
is located highest up in the picture to the top of the picture, i.e.\ through
a coproduct and two products, and push the two `lowest' products upwards by
invoking the connecting axiom of the bialgebra $H$. This results in the left
hand side of the following chain of equalities:
\eqpic{Q3Delta_proof} {420} {60} { \put(-17,0) { \setulen80
\put(0,0) { \includepichtft{141bB}
\put(1.1,0) {\sse$ Q^{-1} $}
\put(28.7,166.2) {\sse$ H $}
\put(50.7,166.2) {\sse$ H $}
\put(61.5,-11.2) {\sse$ H $}
\put(66.7,166.2) {\sse$ H $}
\put(82.7,166.2) {\sse$ H $}
\put(125.5,-.8) {\sse$ Q $}
}
\put(169,74) {$=$}
\put(204,0) { \includepichtft{141cB}
\put(1.5,.2) {\sse$ Q^{-1} $}
\put(26.7,166.2) {\sse$ H $}
\put(50.7,166.2) {\sse$ H $}
\put(61.8,-11.2) {\sse$ H $}
\put(66.7,166.2) {\sse$ H $}
\put(82.7,166.2) {\sse$ H $}
\put(125.2,-.8) {\sse$ Q $}
}
\put(373,74) {$=$}
\put(409,0) { \includepichtft{141dB}
\put(2.1,6.1) {\sse$ Q^{-1} $}
\put(9.7,166.2) {\sse$ H $}
\put(32.9,166.2) {\sse$ H $}
\put(33.2,-11.2) {\sse$ H $}
\put(50.1,166.2) {\sse$ H $}
\put(66.2,166.2) {\sse$ H $}
\put(95.2,9.5) {\sse$ Q $}
} } }
The first of these equalities follows easily by repeated use of associativity
and coassociativity, while the second one is obtained with the help of the
defining property of the antipode and of coassociativity.
Now the right hand side of \erf{Q3Delta_proof} equals the right hand side of
\erf{Q3Delta}, as is seen by first applying the connecting axiom to the two
coproducts that come directly after the monodromy matrices and then invoking
the identity \erf{Q2Delta}.
\\[2pt]
(iii)\,
The first equality in \erf{Pic_QQ3} is obtained by pushing the inverse antipodes down
through the coproduct (together with a deformation compatible with the braid
relations). The second of the equalities in \erf{Pic_QQ3} follows from
coassociativity combined with \erf{removeQs}, while the third equality is
immediate from coassociativity and properties of the inverse antipode.
\\[2pt]
(iv)\,
The first equality in \erf{inv_Qq3} follows by pushing $\apoi$ through the upper
coproduct and products. The second equality uses coassociativity and
$(\apo\oti\apo)\cir Q \eq \tauHH\cir Q$. Applying first \erf{removeQs} and then
undoing the manipulations performed in the first two equalities yields the third one.
\end{proof}
The next two identities involve the two-sided integral $\Lambda$ and the right
cointegral $\lambda$ of $H$.
\begin{lemma}\label{lem:reppastint}
We have
\eqpic{reppastint} {100} {25} {
\put(0,0) {\Includepichtft{133a}
\put(-4,-8.5) {\sse$ H $}
\put(19.2,12.9) {\sse$ \Lambda $}
\put(-1.8,49.3) {\sse$ \ohrad $}
\put(6,68.5) {\sse$ H $}
\put(19,68.5) {\sse$ H $}
}
\put(50,30) {$=$}
\put(85,0) {\Includepichtft{133bA}
\put(18.5,-8.5) {\sse$ H $}
\put(-4.5,12.8) {\sse$ \Lambda $}
\put(17.6,52.8) {\sse$ \ohrad $}
\put(-3,68.5) {\sse$ H $}
\put(10,68.5) {\sse$ H $}
} }
where $\ohrad$ is the right-adjoint action of $H$ on itself
{\rm(}as defined in {\rm\erf{Corr12H})}.
\end{lemma}
\begin{proof}
This follows by combining the explicit form
of $\ohrad$ with the identities \cite[(2.18)]{fuSs3}
\eqpic{Hopf_Frob_trick2} {290} {29} {\setulen80
\put(0,0) {\includepichtft{84f} \put(2.2,0){
\put(-4.2,-11.2){\sse$ H $}
\put(2.2,92) {\sse$ H $}
\put(27.7,10) {\sse$ \Lambda $}
\put(31,92) {\sse$ H $}
} }
\put(60,41) {$ = $}
\put(86,0) { \includepichtft{84g}
\put(-4.2,-11.2){\sse$ H $}
\put(6.6,92) {\sse$ H $}
\put(26.6,92) {\sse$ H $}
\put(27.7,10) {\sse$ \Lambda $}
}
\put(174,41) {and}
\put(240,0) { \includepichtft{84h}
\put(-3.2,92) {\sse$ H $}
\put(15.7,10) {\sse$ \Lambda $}
\put(26.1,92) {\sse$ H $}
\put(29.8,-11.2){\sse$ H $}
}
\put(300,41) {$ = $}
\put(326,0) { \includepichtft{84i}
\put(1.2,92) {\sse$ H $}
\put(15.7,10) {\sse$ \Lambda $}
\put(21.7,92) {\sse$ H $}
\put(29.8,-11.2){\sse$ H $}
} }
(which, in turn, are obtained by combining various defining properties of the Hopf
algebra structure of $H$ with unimodularity).
\end{proof}
\begin{lemma}\label{lem:Sinv-7}
We have the chain
\eqpic{Sinv-7} {410} {47} {
\put(0,0) {\Includepichtft{124c}
\put(18,-8) {\sse$ \Hss $}
\put(30,-8) {\sse$ \Hss $}
\put(11,12) {\sse$ \Lambda $}
\put(31.5,86) {\sse$ \lambda $}
\put(-3,106) {\sse$ \Hss $}
}
\put(58,47) {$ = $}
\put(95,0) {\Includepichtft{124d}
\put(18,-9.2) {\sse$ \Hss $}
\put(30,-9.2) {\sse$ \Hss $}
\put(11,12) {\sse$ \Lambda $}
\put(31.5,86) {\sse$ \lambda $}
\put(-3,106) {\sse$ \Hss $}
}
\put(153,47) {$=$}
\put(190,0) {\Includepichtft{124e}
\put(18,-9.2) {\sse$ \Hss $}
\put(30,-9.2) {\sse$ \Hss $}
\put(11,12) {\sse$ \Lambda $}
\put(31.5,86) {\sse$ \lambda $}
\put(-3,106) {\sse$ \Hss $}
}
\put(249,47) {$=$}
\put(282,0) {\Includepichtft{124f}
\put(17.5,-9.2) {\sse$ \Hss $}
\put(30,-9.2) {\sse$ \Hss $}
\put(11.4,11) {\sse$ \Lambda $}
\put(26.5,74) {\sse$ \lambda $}
\put(-1.4,106) {\sse$ \Hss $}
}
\put(339,47) {$ = $}
\put(375,16) {\Includepichtft{124g}
\put(-5,-9.2) {\sse$ \Hss $}
\put(8,-8) {\sse$ \Hss $}
\put(3,75) {\sse$ \Hss $}
} }
of equalities.
\end{lemma}
\begin{proof}
The second equality uses the fact that, since $H$ is unimodular, the cointegral
satisfies \cite[Thm.\,3]{radf12} $\lambda \cir m \eq \lambda \cir m \cir \tauHH
\cir (\id_H\oti\apo^2)$. The third follows with the help of $\apo\cir\Lambda \eq
\Lambda$. The important ingredient in the last equality is the invertibility of
the Frobenius map, which is equivalent to the statement that $H$ has a natural
structure of a Frobenius algebra (see e.g.\ \cite[Eq.\,(8)]{coWe6}).
\end{proof}
Finally we have the following result involving somewhat more complicated
expressions, which arise in connection with the action \erf{LyubactC3} of the
generators $t_{j,k}$ of the mapping class group.
\begin{lemma}\label{lem:Hpastloops}
For any integer $p \,{\ge}\, 1$ we have
\eqpic{Hpastloops} {320} {101} {\setulen80
\put(0,0) {\includepichtft{132cA}
\put(-5,-11.2) {\sse$ H $}
\put(17.2,-11.2) {\sse$ K $}
\put(60.2,-11.2) {\sse$ K $}
\put(66.8,128.2) {\sse$ \rho^K_F $}
\put(66.8,218.9) {\sse$ \rho^K_F $}
\put(76.2,-11.2) {\sse$ H $}
\put(92.2,-11.2) {\sse$ F $}
\put(92.9,273.3) {\sse$ F $}
\put(29,63.8) {\sse$ \rho_{\HK^{\!\otimes p}}^{} $}
\put(39,33.8) {\sse$ \ohr_{\HK^{\!\otimes p}}^{} $}
}
\put(152,115) {$=$}
\put(200,0) {\includepichtft{131dA}
\put(-5,-11.2) {\sse$ H $}
\put(8.2,-11.2) {\sse$ \Hss $}
\put(20.2,-11.2) {\sse$ \Hss $}
\put(76.2,-11.2) {\sse$ \Hss $}
\put(88.2,-11.2) {\sse$ \Hss $}
\put(121.2,-11.2){\sse$H$}
\put(140.2,-11.2){\sse$ \Hss $}
\put(192.2,272.5){\sse$ \Hss $}
\put(34,171.5) {\sse$ \Lambda $}
\put(100,103.5) {\sse$ \Lambda $}
\put(57,208) {\sse$ \ohrad $}
\put(124,142) {\sse$ \ohrad $}
} }
where $\rho^{}_{K^{\otimes p}_{}}$ and $\ohr^{}_{K^{\otimes p}_{}}$ are
the left and right actions of the factorizable ribbon Hopf algebra $H$ on
the $H$-bimodule $K^{\otimes p}_{}$, respectively.
\end{lemma}
\begin{proof}
Recalling the expression \erf{CorrgnH} for $\Corr g11$, writing out the actions
$\rho^{}_{K^{\otimes p}_{}}$ and $\ohr^{}_{K^{\otimes p}_{}}$
and invoking Lemma \ref{lem:reppastint},
the left hand side of \erf{Hpastloops} becomes
\eqpic{Hpastloops_1} {320} {103} {\setulen80
\put(0,0) {\includepichtft{131eA}
\put(16.4,-11.2) {\sse$ H $}
\put(40.8,-11.2) {\sse$ \Hss $}
\put(55.2,-11.2) {\sse$ \Hss $}
\put(98.2,-11.2) {\sse$ \Hss $}
\put(112.5,-11.2){\sse$ \Hss $}
\put(148,-11.2) {\sse$ \cdots\cdots $}
\put(199.2,-11.2){\sse$ \Hss $}
\put(214.2,-11.2){\sse$ \Hss $}
\put(310.9,-11.2){\sse$ H $}
\put(327.1,-11.2){\sse$ \Hss $}
\put(396.1,289) {\sse$ \Hss $}
\put(84,171.5) {\sse$ \Lambda$}
\put(141,100.5) {\sse$ \Lambda$}
\put(241.5,31.1) {\sse$ \Lambda$}
\put(92.2,217) {\sse$\ohrad$}
\put(92.2,234) {\sse$\ohrad$}
\put(92.2,254) {\sse$\ohrad$}
\put(149.7,137.5){\sse$\ohrad$}
\put(149.7,155.8){\sse$\ohrad$}
\put(149.7,175.3){\sse$\ohrad$}
\put(249.7,63) {\sse$\ohrad$}
\put(249.7,82) {\sse$\ohrad$}
\put(249.7,102.8){\sse$\ohrad$}
} }
Invoking the representation property of the right-adjoint action $\ohrad$
and the anti-(co)algebra
morphism property of the antipode several times, this morphism can be rewritten as
\eqpic{Hpastloops_2} {320} {109} {\setulen80
\put(0,0) {\includepichtft{131fA}
\put(2.2,-11.2) {\sse$ H $}
\put(36.8,-11.2) {\sse$ \Hss $}
\put(51.8,-11.2) {\sse$ \Hss $}
\put(69,-11.2) {\sse$\cdots $}
\put(83.5,-11.2) {\sse$ \Hss $}
\put(97.5,-11.2) {\sse$ \Hss $}
\put(157.2,-11.2){\sse$ \Hss $}
\put(171.2,-11.2){\sse$ \Hss $}
\put(222.2,-11.2){\sse$ H $}
\put(323.5,-11.2){\sse$ \Hss $}
\put(400.4,295) {\sse$ \Hss $}
\put(63.1,213.1) {\sse$ \Lambda$}
\put(108.8,169.7){\sse$ \Lambda$}
\put(182.1,107.1){\sse$ \Lambda$}
} }
We now observe that in \erf{Hpastloops_2} there appear two copies of the
$(p{-}1)$-fold coproduct of $H$. Using that these are bimodule morphisms from
$H$ to $H^{\otimes p}_{}$, we conclude that \erf{Hpastloops_2}, and thus the
left hand side of \erf{Hpastloops}, equals the right hand side of
\erf{Hpastloops}.
\end{proof}
\section{Proof of the main theorem}\label{sec:proofmain}
We are now in a position to establish invariance
of the correlators $\Cor gn$ under the action of the mapping class group \Map gn.
In the subsequent lemmas we treat separately the
various types of generators from the presentation given in Section \ref{ssec:mpg}.
We start with the generators affording S- and T-transformations of the
one-holed torus.
\begin{lemma}\label{lem:101S,101T}
We have
\be
\Corr 110 \circ \SK = \Corr 110 \qquand \Corr 110 \circ \TK = \Corr 110
\ee
for $\SK$ and $\TK$ as given in {\rm \erf{S_KH-TKH}}.
\end{lemma}
\begin{proof}
This has already been shown in Lemmas 5.8 and 5.9 of \cite{fuSs3}, but we
find it instructive to repeat the main steps here.
\\[2pt]
(i)\, Composing $\SK^{-1}$ as given in \erf{S_KHinv} with the first expression
for $\Corr 110$ in \erf{PF_1} results in the first equality in
\Eqpic{Sinv-6} {380} {65} {
\put(-31,65) {$ \Corr 110\circ\SK^{-1} ~= $}
\put(78,0) {\Includepichtft{124a_1}
\put(-5.4,-9.2) {\sse$ \Hss $}
\put(7,-9.2) {\sse$ \Hss $}
\put(16.3,52.7) {\sse$ Q $}
\put(31.9,53.1) {\sse$ Q^{-1} $}
\put(23.4,1.9) {\sse$ Q^{-1} $}
\put(46.6,1.7) {\sse$ Q $}
\put(32.3,39) {\sse$ \lambda $}
\put(34.5,91) {\sse$ \lambda $}
\put(72,126) {\sse$ \lambda $}
\put(81,147) {\sse$ \Hss $}
}
\put(189,65) {$=$}
\put(224,0) {\Includepichtft{124b}
\put(-5,-9.2) {\sse$ \Hss $}
\put(9,-9.2) {\sse$ \Hss $}
\put(26.5,48) {\sse$ \Lambda $}
\put(40,9) {\sse$ \Lambda $}
\put(60.4,126) {\sse$ \lambda $}
\put(70.5,147) {\sse$ \Hss $}
}
\put(329,65) {$=$}
\put(359,0) {\Includepichtft{124h}
\put(-5,-9.2) {\sse$ \Hss $}
\put(9,-9.2) {\sse$ \Hss $}
\put(40,9) {\sse$ \Lambda $}
\put(70.5,147) {\sse$ \Hss $}
} }
Here in the second step we get rid of the two monodromy matrices by implementing
the relations \erf{fQS_Psi} between the integral, cointegral and monodromy
matrix, combined with the identity $\lambda \cir m \eq \lambda \cir m \cir \tauHH
\cir (\id_H\oti\apo^2)$, while the third follows by Lemma \ref{lem:Sinv-7}.
Finally, after pushing the `upper' inverse antipode through the coproduct, the
final expression in \erf{Sinv-6} thus obtained can be recognized as the right-most
one in formula \erf{PF_1}. This shows $\Corr 110 \cir \SK^{-1} \eq \Corr 110$.
\\[4pt]
(ii)\, Composing $\TK$ as given in \erf{S_KH-TKH} with the last expression for
$\Corr 110$ in \erf{PF_1} yields
\eqpic{T-inv} {200} {47} {
\put(11,48) {$ \Corr 110 \circ \TK ~= $}
\put(118,0) {\Includepichtft{123lA}
\put(-5,-9.2) {\sse$ \Hss $}
\put(8.5,-9.2) {\sse$ \Hss $}
\put(23.5,13.5) {\sse$ v $}
\put(49,13.5) {\sse$ v^{-1} $}
\put(36,2.5) {\sse$ \Lambda $}
\put(69.2,112) {\sse$ \Hss $}
} }
After applying \erf{Hopf_Frob_trick2} and using that $\apo \cir v \eq v$,
the central elements $v$ and $v^{-1}$
appearing here cancel out. Thus we arrive at $\Corr 110 \cir \TK \eq \Corr 110$.
\end{proof}
In view of the relation \erf{gpqfromg11} between $\Corr g11$ and
$\Corr gpq$ for an arbitrary number of incoming and outgoing insertions, Lemma
\ref{lem:101S,101T} immediately generalizes as follows.
\begin{prop}\label{prop:triple}
For any triple of integers $g\,{\ge}\,1$ and $p,q\,{\ge}\,0$ we have
\be
\bearl
\Corr gpq \circ
\big( \id_K^{\otimes m} \oti \SK^{} \oti\id_K^{\otimes g-m-1} \oti \id_F^{\otimes q} \big)
= \Corr gpq \qquad{\rm and}
\\{}\\[-5pt]
\Corr gpq \circ
\big( \id_K^{\otimes m} \oti \TK^{} \oti\id_K^{\otimes g-m-1} \oti \id_F^{\otimes q} \big)
= \Corr gpq
\eear
\ee
for all $m \iN \{0,1,...\,,g{-}1 \}$.
\end{prop}
Next we show that the correlators do not change when applying a twist to
any tensor power of the handle Hopf algebra $K$:
\begin{lemma}\label{lem:inv_twist}
For any integer $g\,{\ge}\,1$ we have
\be
\Corr g11 \circ \big( \id_K^{\otimes r} \oti \theta^{}_{K^{\otimes s}_{}} \oti
\id_K^{\otimes t} \oti \id_F \big) = \Corr g11
\labl{0.40}
for all triples $r,s,t$ of non-negative integers with $r\,{+}\,s\,{+}\,t \eq g$.
\end{lemma}
\begin{proof}
Consider first the case $s \eq g$. In view of the formula \erf{deftwist} for
the twist in \HBimod, it follows immediately from Lemma \ref{lem:Hpastloops}
that the left hand side of \erf{0.40} differs from the right hand side only by
multiplications from the left with $\apo \cir v$ and from the right with $\apoi
$\cir v^{-1}$, both acting on the same $H$-line. Since $v$ is invariant
under the antipode and central, these two modifications cancel each other.
\\
The argument for $s \,{<}\, g$ is completely analogous.
\end{proof}
The next observations will allow us to establish invariance under
the action of Dehn twists around the cycles $a_k$.
\begin{lemma}
We have
\be
\Corr 211 \circ (\QQ \oti \id_F) = \Corr 211
\labl{QQ_inv}
with $\QQ$ as introduced in {\rm \erf{QHH}}.
\end{lemma}
\begin{proof}
According to \erf{CorrgnH} and \erf{QQ_Haa} we have
\eqpic{Pic_QQ1} {320} {71} {\setulen80
\put(-53,88) {$ \Corr 211 \circ (\QQ \oti\id_F) ~\equiv $}
\put(130,0) {\includepichtft{132d}
\put(2,-11.2) {\sse$ K $}
\put(8.8,33) {\sse$ \QQ $}
\put(20,-11.2) {\sse$ K $}
\put(47.6,-11.2) {\sse$ F $}
\put(48.8,197) {\sse$ F $}
\put(-7,130) {\begin{turn}{90}{\catpicH}\end{turn}}
}
\put(226,88) {$=$}
\put(272,0) {\includepichtft{134aA}
\put(-8.8,-11.2) {\sse$ \Hss $}
\put(11,-11.2) {\sse$ \Hss $}
\put(43.7,-11.2) {\sse$ \Hss $}
\put(60,-11.2) {\sse$ \Hss $}
\put(111.2,-11.2){\sse$ \Hss $}
\put(39.5,104.1) {\sse$ \Lambda $}
\put(88.5,41.1) {\sse$ \Lambda $}
\put(88,7.5) {\sse$ Q$}
\put(62,81.6) {\sse$ Q^{-1}$}
\put(96.4,82.9) {\sse$\ohrad$}
\put(46.7,143.8) {\sse$\ohrad$}
\put(164,193) {\sse$ \Hss $}
} }
Using the identities \erf{Hopf_Frob_trick2} and inserting the explicit form of
the right coadjoint action $\ohrad$, this can be rewritten as
\Eqpic{Pic_QQ2} {154} {81} { \put(0,5) { \setulen90
\put(-166,88) {$ \Corr 211 \circ (\QQ \oti\id_F) ~= $}
\put(-5,0) {\INcludepichtft{134bA}{342}
\put(-4,-10.2) {\sse$ \Hss $}
\put(8,-10.2) {\sse$ \Hss $}
\put(44.5,-10.2) {\sse$ \Hss $}
\put(56,-10.2) {\sse$ \Hss $}
\put(98.2,-10.2) {\sse$ \Hss $}
\put(21,102.5) {\sse$ \Lambda $}
\put(85,41.5) {\sse$ \Lambda $}
\put(74,14.5) {\sse$ Q $}
\put(57,76.7) {\sse$ Q^{-1} $}
\put(92.7,80.9) {\sse$ \ohrad $}
\put(42.7,143.5) {\sse$ \ohrad $}
\put(151,199) {\sse$ \Hss $}
}
\put(175,88) {$=$}
\put(212,0) {\INcludepichtft{134c}{342}
\put(-4,-10.2) {\sse$ \Hss $}
\put(8,-10.2) {\sse$ \Hss $}
\put(44.5,-10.2) {\sse$ \Hss $}
\put(56,-10.2) {\sse$ \Hss $}
\put(98.2,-10.2) {\sse$ \Hss $}
\put(20,123.3) {\sse$ \Lambda $}
\put(68.8,54.5) {\sse$ \Lambda $}
\put(65.7,26.5) {\sse$ Q $}
\put(116.5,57.4) {\sse$ Q^{-1} $}
\put(144,199) {\sse$ \Hss $}
} } }
Observing that $(\apo^2\oti\apo^2)\cir Q \eq Q$ and invoking the identity
\erf{Pic_QQ3}, comparison with \erf{CorrgnH} establishes \erf{QQ_inv}.
\end{proof}
Again this result easily generalizes:
\begin{prop}\label{prop:tripleQQ}
For any triple of integers $g\,{\ge}\,2$ and $p,q\,{\ge}\,0$ we have
\be
\Corr gpq \circ \big( \id_K^{\otimes m} \oti
\QQ \oti\id_K^{\otimes g-m-2} \oti \id_F^{\otimes q} \big)
= \Corr gpq
\ee
for all $m \iN \{0,1,...\,,g{-}2 \}$.
\end{prop}
Now we present relations that will help us to show invariance under
the action of the generators $t_{j,k}$ of the mapping class group.
\begin{lemma}\label{lem:removeQq_t}
For any integer $m \,{\ge}\,2$ we have
\be
(\id_F\oti\tilde b_F)\circ(\Corr m20\oti\id_{^\vee\!F})
\circ \QB_{K^{\otimes m-1}\otimes{}^\vee\!F}
= (\id_F\oti\tilde b_F)\circ(\Corr m20\oti\id_{^\vee\!F})
\labl{eq:removeQq_t}
in $\HomHH(K^{\otimes m}\oti{}^\vee\!F,F)$. That is, graphically:
\eqpic{removeQq_t} {290} {81} {
\put(0,0) {
\put(5,0) {\includepichtft{140gA}}
\put(23.3,26.3) {\sse$ \QB_{\HK^{\otimes m-1}_{}\otimes{}^\vee\!F} $}
\put(8,-9.2) {\sse$ K $}
\put(21,-9.2) {\sse$ \cdots $}
\put(36,-9.2) {\sse$ K $}
\put(75,-9.2) {\sse$ ^\vee\!F $}
\put(52.6,174) {\sse$ F $}
}
\put(126,82) {$=$}
\put(150,0) {
\put(13,0) {\includepichtft{140h}}
\put(8,-9.2) {\sse$ K $}
\put(21,-9.2) {\sse$ \cdots $}
\put(36.4,-9.2) {\sse$ K $}
\put(77,-9.2) {\sse$ ^\vee\!F $}
\put(52.7,174) {\sse$ F $}
}
\put(265,154) {\catpicH}
}
\end{lemma}
\begin{proof}
Denote the left hand side of \erf{eq:removeQq_t} by $\Phi$. We first
insert the structural morphisms for $F$ from \erf{pic-Hb-Frobalgebra} and
the expressions \erf{CorrgnH} for $\Corr m11$ and \erf{Qq_X} for $\QB_X$ and
invoke Lemma \ref{lem:Hpastloops}, and in a second step we insert the
explicit expression for $\ohrad$ and use the identities \erf{Hopf_Frob_trick2}
and \erf{Fv_act}. This yields
\Eqpic{inv_tjk_1} {420} {116} { \put(0,17){
\put(-15,94) {$ \Phi ~= $}
\put(37,0) {\includepichtft{140cA}\setulen80
\put(-7,-11.2) {\sse$ \Hss $}
\put(8.2,-11.2) {\sse$ \Hss $}
\put(60.5,-11.2) {\sse$ \Hss $}
\put(74.5,-11.2) {\sse$ \Hss $}
\put(28.2,-11.2) {\sse$ \cdots\cdots$}
\put(229.5,-11.2){\sse$ H $}
\put(229,297) {\sse$ \Hss $}
\put(30,195.5) {\sse$ \Lambda$}
\put(100,132) {\sse$ \Lambda$}
\put(216,188) {\sse$ \lambda$}
\put(173,3.5) {\sse$ Q$}
\put(120,49.2) {\sse$ Q^{-1}$}
}
\put(253,94) {$=$}
\put(293,0) {\includepichtft{140dA}\setulen80
\put(-7,-11.2) {\sse$ \Hss $}
\put(8.2,-11.2) {\sse$ \Hss $}
\put(60.5,-11.2) {\sse$ \Hss $}
\put(74.5,-11.2) {\sse$ \Hss $}
\put(28.2,-11.2) {\sse$ \cdots\cdots$}
\put(194.5,-11.2){\sse$ H $}
\put(193,297) {\sse$ \Hss $}
\put(19.8,207.5) {\sse$ \Lambda$}
\put(100,132) {\sse$ \Lambda$}
\put(179.5,117.2){\sse$ \lambda$}
\put(136.4,4) {\sse$ Q$}
\put(118.1,36.1) {\sse$ Q^{-1}$}
} } }
Application of the identity \erf{Q3Delta} to the right hand side of
\erf{inv_tjk_1} results in the right hand side of \erf{eq:removeQq_t}.
\end{proof}
Composition of \erf{removeQq_t} with $\id^{}_{K^{\otimes m}_{}} \oti
{}^\vee\!\eps_F$ gives
\begin{cor}\label{cor:BKKm-1}
For any integer $g \,{\ge}\,2$ we have
\be
\Corr g10 \circ \QB_{K^{\otimes g-1}_{}} = \Corr g10 \,.
\ee
\end{cor}
\medskip
We can combine the previous results to omit not only the twist of any tensor
power $K^{\otimes m}$ of $K$, but also the one of
$K^{\otimes m}\oti{}^{\vee\!}\!F$:
\begin{lemma}\label{lem:twist_S_remove}
For any integer $m\,{>}\,0$ the equalities
\eqpic{twist_S_remove} {420} {100} {\setulen80
\put(0,0) {\includepichtft{140e}
\put(-5,-11.2) {\sse$ K $}
\put(11,-11.2) {\sse$ \cdots$}
\put(28,-11.2) {\sse$ K $}
\put(50,265) {\sse$ F $}
\put(114,265) {\sse$ F $}
\put(1.5,39) {\sse$ \theta_{\!K^{\otimes m}_{}\otimes{}^{\vee\!}\!F} $}
}
\put(153,130) {$=$}
\put(180,0) {\includepichtft{140i}
\put(-5,-11.2) {\sse$ K $}
\put(11,-11.2) {\sse$ \cdots$}
\put(28,-11.2) {\sse$ K $}
\put(50,265) {\sse$ F $}
\put(114,265) {\sse$ F $}
}
\put(326,130) {$=$}
\put(357,0) {\includepichtft{140f}
\put(-5,-11.2) {\sse$ K $}
\put(11,-11.2) {\sse$ \cdots$}
\put(28,-11.2) {\sse$ K $}
\put(50,265) {\sse$ F $}
\put(73,265) {\sse$ F $}
\put(114,233) {\catpicH}
} }
hold in $\HomHH(K^{\otimes m}_{},F\oti F)$.
\end{lemma}
\begin{proof}
Using the compatibility between braiding and twist, as well as that $F$ has
trivial twist and that according to Lemma \ref{lem:inv_twist} the twist of
$K^{\otimes m}_{}$ can be omitted, we can replace $\theta_{\!K^{\otimes m}_{}
\otimes{}^\vee\!F}$ by the monodromy between $K^{\otimes m}_{}$ and ${}^\vee\!F$.
Using naturality of the braiding and thus of monodromy, we can
push this monodromy through the $F$-loops and thus arrive
at the middle picture. The second equality follows because $F$ is commutative
Frobenius, in the same way as in the proof of Lemma \ref{lem:2prod_mon}.
\end{proof}
\begin{lemma}\label{lem:t_act_Cor}
For any pair of integers $g,n\,{>}\,0$ we have
\eqpic{t_act_Cor} {380} {172} {
\put(36,170) {$ \Cor gn ~= $}
\put(105,10) {\includepichtft{140bA}
\put(49.3,339) {\sse$ \overbrace{\hspace*{6em}}^{j~ \rm factors} $}
\put(139.3,339) {\sse$ \overbrace{\hspace*{6em}}^{n-j~ \rm factors} $}
\put(51,64.7) {\sse$ \QB_{K^{\otimes m-1}\otimes{}^\vee\!\BF}$}
\put(95,30) {\sse\begin{turn}{40} $ \theta_{\!K^{\otimes m-1}_{}
\otimes{}^{\vee\!}\!F} $\end{turn}}
\put(31,38) {\sse$\TK$}
\put(-4,-8.5) {\sse$ K $}
\put(13,-8.5) {\sse$\cdots$}
\put(31,-8.5) {\sse$ K $}
\put(51,-8.5) {\sse$ K $}
\put(62,-8.5) {\sse$\cdots$}
\put(77,-8.5) {\sse$ K $}
\put(48,329) {\sse$ F $}
\put(66,329) {\sse$ F $}
\put(78,329) {\sse$\cdots$}
\put(95,329) {\sse$ F $}
\put(138,329) {\sse$ F $}
\put(156,329) {\sse$ F $}
\put(168,329) {\sse$\cdots$}
\put(185,329) {\sse$ F $}
\put(215,307) {\catpicH}
\put(-9.7,-12) {\sse$ \underbrace{\hspace*{5em}}_{g-m+1~\rm factors~} $}
\put(45,-12) {\sse$ \underbrace{\hspace*{3.7em}}_{~~m-1~ \rm factors} $}
} }
for all $m\eq 1,2,...\,,g$.
\end{lemma}
\begin{proof}
As compared to the expression \erf{Sk_morph} for $\Cor gn$, on the right
hand side three additional pieces are present:
the endomorphism $\TK$ applied to one copy of $K$,
a twist endomorphism of $K^{\otimes m}_{}\oti{}^{\vee\!}\!F$, and a
partial monodromy $\QB_{K^{\otimes m-1}\otimes{}^\vee\!\BF}$. Now the latter
acts trivially by Lemma \ref{lem:removeQq_t}. After omitting this part, we
can invoke Proposition \ref{prop:triple} and Lemma \ref{lem:twist_S_remove}
(combined with the Frobenius property of $F$) to omit $\TK$ and
$\theta_{\!K^{\otimes m}_{}\otimes{}^{\vee}\!F}$, respectively, as well.
\end{proof}
As a final piece of information we give the graphical description of the
morphisms $\LAtjk(f)$ that were introduced in formula
\erf{LyubactC3} for $f\iN \HomC(K^{\otimes g},X_1\oti\cdots\oti X_n)$:
\eqpic{t_act} {290} {141} {
\put(0,139) {$ \LAtjk(f) ~= $}
\put(81,0) {\Includepichtft{140aA}
\put(33,222.7) {$ f $}
\put(42.3,176.2) {\sse$\QB_{K^{\otimes k-1}\otimes
{}^\vee\!X_{n}\otimes\cdots\otimes{}^\vee\!X_{j+1}}$}
\put(38.5,85.5) {\sse$\TK $}
\put(1.2,-9.2) {\sse$ K $}
\put(12,-9.2) {\sse$\cdots$}
\put(25,-9.2) {\sse$ K $}
\put(37.8,-9.2) {\sse$ K $}
\put(46.9,-9.2) {\sse$ K $}
\put(59.8,-9.2) {\sse$\cdots$}
\put(75,-9.2) {\sse$ K $}
\put(-.2,296) {\sse$ X_1$}
\put(12,296) {\sse$ \cdots$}
\put(23.4,296) {\sse$ X_j$}
\put(147,296) {\sse$ X_{j+1}$}
\put(173,296) {\sse$ \cdots$}
\put(190,296) {\sse$ X_{n}$}
} }
\medskip
We have now all ingredients at our hands that are needed to finish the proof
of Theorem \ref{thm:main}.
\begin{proof}~\\[3pt]
Invoking the presentation of the mapping class group $\Mapgn$ described in
Section \ref{ssec:mpg}, invariance of the correlators $\Cor gn$ under the
action $\pi_{g:n}^{F^{\otimes n}}$ of $\Mapgn$ amounts to invariance under
$\pi_{g:n}^{F^{\otimes n}}(\gamma)$ for $\gamma\iN\Mapgn$ any of the
generators listed in Proposition \ref{Lyubact_prop}.
\\[4pt]
(i)\, $\gamma \eq \Ri$ ($i \eq 1,2,...\,,n$):
Invariance follows directly from the fact that $F$ has trivial twist.
\\[4pt]
(ii)\, $\gamma \eq \omega_i$ ($i \eq 1,2,...\,,n{-}1$):
Invariance follows directly from the fact that $F$ is cocommutative.
\\[4pt]
(iii)\, $\gamma \eq S_k$ or $b_k$ or $d_k$ ($k\eq 1,2,...\,,g$):
In view of the explicit expressions \erf{LyubactC2} for $\LASk$, $\LAbk$
and $\LAdk$, invariance is implied by Proposition \ref{prop:triple}.
\\[4pt]
(iv)\, $\gamma \eq a_k$ ($k\eq 2,3,...\,,g$):
By using Proposition \ref{prop:triple} twice we have
\be
\Cor gn \circ
\big( \id_K^{\otimes m} \oti \TK^{} \oti \TK^{} \oti\id_K^{\otimes g-m-2} \big)
= \Cor gn
\ee
for all $m \eq 0,1,...\,,g{-}2$. Thus in view of the explicit expression
\erf{LyubactC2} for $\LAak$, Proposition \ref{prop:tripleQQ}
establishes the invariance.
\\[4pt]
(v)\, $\gamma \eq e_k$ ($k\eq 2,3,...\,,g$):
Combining Proposition \ref{prop:triple} and Lemma \ref{lem:inv_twist} we obtain
the equality $\Cor g1 \cir (\id_K^{\otimes g-k}\oti \TK \oti
\theta^{}_{K^{\otimes k-1}_{}}) \eq \Cor g1$
which, in turn, together with Corollary \ref{cor:BKKm-1} yields
\be
\Cor g1 \circ \big[ (\id_K^{\otimes g-k}\oti \TK \oti \theta_{K^{\otimes k-1}})
\circ \QB_{K^{\otimes k-1}} \big] = \Cor g1 \,.
\ee
This obviously generalizes to any number $n \,{\ge}\, 0$ of insertions and
thus in view of the explicit expression
\erf{LyubactC2} for $\LAek$ establishes invariance.
\\[4pt]
(vi)\, $\gamma \eq t_{j,k}$ ($j\eq 1,2,...\,,n{-}1$ and $k \eq 1,2,...\,,g$):
We first note that composing the dinatural family of the coend $K$ with the
partial monodromy $\QB_{}$ results in an ordinary monodromy. This implies that
\be
\QB_Y \circ (\id_K \oti f) = (\id_K \oti f) \circ \QB_X
\ee
for any morphism $f\iN\Hom(X,Y)$. With the help of this relation (as well as
the coassociativity and Frobenius property of $F$) one can in particular push
partial monodromies through coproducts $\Delta_F$, and by functoriality of the
twist the same can be done with twist endomorphisms. As a consequence, the
morphism obtained by acting according to formula \erf{LyubactC3} and picture
\erf{t_act} with $t_{j,k}$ on $\Cor g n$ can be rewritten as the one on the
right hand side of \erf{t_act_Cor}. Thus invariance under $t_{j,k}$ reduces to
the assertion of Lemma \ref{lem:t_act_Cor}.
\end{proof}
\section{Invariants from ribbon automorphisms}\label{sec:omega}
We finally extend our result from $F$ to similar $H$-bimodules for which the
action of the Hopf algebra is twisted by a suitable automorphism.
\begin{defi}
A \emph{ribbon} automorphism of a ribbon Hopf algebra $H$ is a Hopf algebra
automorphism $\omega$ of $H$ that preserves the ribbon element and the
$R$-matrix of $H$,
\be
\omega\circ v= v \qquad\text{and}\qquad (\omega\oti\omega)\circ R=R\,.
\ee
\end{defi}
For any Hopf algebra automorphism $\omega$ of a Hopf algebra $H$ we denote
by $\Fomega$ the bimodule obtained from $F$ by twisting the right $H$-action
by $\omega$, i.e.
\be
\Fomega:=(\Hs,\,\rho_F,\,\ohr_F\cir(\id_{\Hs}\oti\omega))\,.
\labl{F_omega}
(An isomorphic bimodule is obtained when twisting instead the left $H$-action
by $\omega^{-1}$.) In Section 6 of \cite{fuSs3} the following is shown:
\begin{lemma}
Let $H$ be a factorizable ribbon Hopf algebra and $\omega$ a ribbon
automorphism of $H$.
\\[3pt]
{\rm (i)}\, The $H$-bimodule $\Fomega$ together with the
dinatural family of morphisms
\be
\imath^{\Fomega_{}}_X := (\omega^{-1})^* \circ \iHb_X
\ee
is the coend of the functor from $\HMod\op{\times}\, \HMod$ to \HBimod\ that acts
on objects by assigning to a pair $\big((U,\rho_U), (V,\rho_V)\big)$ of left
$H$-modules the vector space $U^\vee{\otimes_\ko}\,V$ endowed with left $H$-ac\-tion
$[(\rho_U)_\vee^{}\cir(\omega^{-1}\oti\id_{U^*_{}})] \oti \id_V$ and right
$H$-action $\id_{U^*_{}} \oti [\rho_V \cir \tau_{V,H}\cir (\id_V\oti\apoi)]$.
\\[3pt]
{\rm (ii)}\, The linear maps defined in
\eqref{pic-Hb-Frobalgebra} equip the object $\Fomega$ in \HBimod\ with the
structure of a commutative symmetric Frobenius algebra, with trivial twist,
in \HBimod. Furthermore, $\Fomega$ is special iff $H$ is semisimple.
\end{lemma}
To proceed we note the following identity:
\begin{lemma}
For any factorizable ribbon Hopf algebra $H$ the relation
\be
f_{Q^{-1}} \big( \lambda \cir m \cir (v \oti \id_H) \big)
= (\lambda \cir v)\, v^{-1}
\labl{v-v-inv}
involving the ribbon element $v$, the inverse of the monodromy matrix $Q$ and
the cointegral $\lambda$ holds.
\end{lemma}
\begin{proof}
Just use the fact (see formula \erf{def-ribbon}) that $(v \oti v) \,{\cdot}\,
Q^{-1} \eq \Delta \cir v$ and afterwards the defining property of the cointegral.
\end{proof}
As a direct consequence we have
\begin{lemma}\label{lambda-omega}
Every ribbon automorphism $\omega$ of $H$ preserves the integral and
cointegral of $H$, i.e.
\be
\lambda \cir \omega = \lambda \qquand \omega \cir \Lambda = \Lambda \,.
\labl{lo=l,oL=L}
\end{lemma}
\begin{proof}
Consider the equality obtained by composing \erf{v-v-inv} with
$\omega$. Using on the left hand side of this equality
that $\omega$ is an algebra morphism and that it preserves $v$
(and thus $v^{-1}$) as well as $Q^{-1}$, one arrives at an equality that
differs from \erf{v-v-inv} only by replacing $\lambda$ on the left hand side
by $\lambda \cir \omega^{-1}$. Using further that the morphism $f_{Q^{-1}}$ as
well as the element $v$ of $H$ are invertible, the first of the equalities
\erf{lo=l,oL=L} follows.
\\
Further note that $\omega\cir\Lambda$ is again a non-zero
integral and is thus proportional to $\Lambda$. Since
$\lambda\cir\omega\cir\Lambda \eq \lambda\cir\Lambda \iN \ko$
is non-zero, this implies the second equality in \erf{lo=l,oL=L}.
\end{proof}
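As a toy consistency check of Lemma \ref{lambda-omega}, one can verify its conclusion numerically for the group algebra $k[\mathbb{Z}_4]$, where the integral is $\Lambda = \sum_g g$ and the cointegral $\lambda$ picks out the coefficient of the identity. Note that a group algebra does not satisfy the hypotheses of the lemma (it is in general neither factorizable nor equipped with a nontrivial ribbon structure), so this sketch only illustrates the conclusion; all names in the code are ad hoc.

```python
# Toy check of Lemma lambda-omega for k[Z/4]: the Hopf algebra automorphism
# induced by the group automorphism g -> g^3 preserves the integral
# Lambda = sum of all group elements and the cointegral lambda = delta_e.
# (k[Z/4] does NOT satisfy the lemma's hypotheses; this is an illustration only.)
import random

n = 4                                   # group Z/n, basis indexed by 0..n-1

def omega(x):                           # automorphism: basis element i -> 3*i mod n
    y = [0.0] * n
    for i, c in enumerate(x):
        y[(3 * i) % n] += c
    return y

Lambda = [1.0] * n                      # two-sided integral: sum of group elements
lam = lambda x: x[0]                    # cointegral: coefficient of the identity e

assert omega(Lambda) == Lambda          # omega o Lambda = Lambda
for _ in range(100):                    # lambda o omega = lambda on random elements
    x = [random.uniform(-1, 1) for _ in range(n)]
    assert abs(lam(omega(x)) - lam(x)) < 1e-12
```

Both identities hold trivially here because every group automorphism permutes the basis of group elements while fixing the identity; the content of the lemma is that the same conclusion follows abstractly from the ribbon and factorizability assumptions.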
We can now generalize the morphisms $\Cor gn$ defined in \eqref{Sk_morph} by
simply replacing every occurrence of the Frobenius algebra $F$ with $\Fomega$.
We denote the so obtained morphisms by $\Corw gn$.
\begin{prop}\label{prop:Corw}
For every ribbon automorphism $\omega$ of $H$ we have
\be
\Corw gn = \Cor gn \circ \big(\id_{\Hs}\oti (\omega^{-1})^*\big)^{\otimes g}_{}
\labl{eq:Corw}
as linear maps, for all pairs of integers $g,n \,{\ge}\,0$.
\end{prop}
\begin{proof}
Inserting the $H$-bimodule structure \erf{F_omega} of $\Fomega$ into the
expression \erf{Lyubact_HKH} for the action of $K$ we have
\eqpic{Lyubact_HKH_Fomega} {125} {44} {
\put(0,45) {$ \rho^K_{\Fomega} =~ $}
\put(50,0) { \Includepichtft{133g}
\put(-4.4,-7.7) {\sse$ \Hss $}
\put(10,-7.7) {\sse$ \Hss $}
\put(44.7,-7.7) {\sse$ \Hss $}
\put(31,2.8) {\sse$ Q $}
\put(13,46.8) {\sse$ Q^{-1} $}
\put(54.7,68.8) {\sse$ \ohr_F^{} $}
\put(35.2,90.5) {\sse$ \rho_F^{} $}
\put(46.5,108.3) {\sse$ \Hss $}
\put(69.3,41.3) {\sse$\omega$}
} }
Using $(\id_H \oti \omega) \cir Q \eq (\omega^{-1} \oti \id_H) \cir Q$, it
follows immediately that $\rho^K_{\Fomega} \eq \rho^K_F \cir (\id_{\Hs}
\oti (\omega^{-1})^*)$ which, in turn, implies \erf{eq:Corw}.
\end{proof}
Next we note the following $\omega$-twisted version of Lemma \ref{lem:Hpastloops}:
\begin{lemma}\label{lem:Hpastloopsw}
Denoting, as in picture {\rm \erf{Hpastloops}}, by $\rho^{}_{K^{\otimes p}_{}}$
and $\ohr^{}_{K^{\otimes p}_{}}$ the left and right $H$-actions on
$K^{\otimes p}_{}$, we have
\eqpic{Hpastloopsw} {360} {107} {\setulen80
\put(0,0) {\includepichtft{132cB}
\put(-5,-10.2) {\sse$ H $}
\put(16.1,-10.2) {\sse$ K $}
\put(59.4,-10.2) {\sse$ K $}
\put(76.2,-10.2) {\sse$ H $}
\put(92.2,-10.2) {\sse$ \Fomega $}
\put(92.9,293.3) {\sse$ \Fomega $}
\put(28,73.2) {\sse$ \rho_{\HK^{\!\otimes p}}^{} $}
\put(39,43.2) {\sse$ \ohr_{\HK^{\!\otimes p}}^{} $}
}
\put(152,129) {$=$}
\put(230,0) { \put(-37.5,0) {\includepichtft{131g}}
\put(-42.5,-10.2) {\sse$ H $}
\put(-19,-10.2) {\sse$ \Hss $}
\put(10.2,-10.2) {\sse$ \Hss $}
\put(58.2,-10.2) {\sse$ \Hss $}
\put(87.4,-10.2) {\sse$ \Hss $}
\put(121.2,-10.2){\sse$ H $}
\put(139.2,-10.2){\sse$ \Hss $}
\put(192.2,292.5){\sse$ \Hss $}
\put(23.6,205.5) {\sse$ \Lambda $}
\put(99.6,125.1) {\sse$ \Lambda $}
\put(47.9,244.5) {\sse$ \ohrad $}
\put(124.5,165) {\sse$ \ohrad $}
\put(122.2,13.6) {$\sse\omega$}
\put(71.5,87.5) {$ {(\omega^{-1})}^* $}
\put(-5,87.5) {$ {(\omega^{-1})}^* $}
} }
\end{lemma}
\begin{proof}
This follows by the same line of arguments as in Lemma \ref{lem:Hpastloops},
combined with the identity $\ohrad \cir (\omega^{-1}\oti\id_{H})
\eq \omega^{-1}\cir\ohrad \cir (\id_H\oti\omega)$.
\end{proof}
\begin{thm}
Let $H$ be a finite-dimensional factorizable ribbon Hopf algebra and $\omega$
a ribbon automorphism of $H$. Then for any pair of integers $g,n \,{\ge}\,0$
the morphism $\Corw gn$ is invariant under the action
$\pi_{g:n}^{(\Fomega)^{\otimes n}_{}}$ of the mapping class group $\Mapgn$.
\end{thm}
\begin{proof}
Just like in the case $\omega \eq \id_\Hs$, invariance under the action of the
generators $\omega_i$ and $\Ri$ is an immediate consequence of the fact that
$\Fomega$ is cocommutative and has trivial twist.
\\[3pt]
Next consider the generators $\sk,\ak,\bk,\dk$ and $\ek$. Proposition
\ref{prop:Corw} reduces invariance to the statement that the morphism
$\pi_{g:n}^{(\Fomega)^{\otimes n}_{}\!}(\gamma)$ commutes with
$(\id_{\Hs}\oti(\omega^{-1})^*)^{\otimes g}$ for $\gamma \eq S_k, a_m, b_m, d_m$
or $e_m$. In particular, for the case of $\gamma\eq S_1$ and
$g\eq 1$, the following chain of equalities establishes invariance:
\Eqpic{S_KH_w} {420}{44} { \put(-7,3){
\put(0,-6) { \Includepichtft{121dA}
\put(-4.5,-9.2) {\sse$ \Hss $}
\put(12.7,51.1) {\sse$ Q^{-1} $}
\put(37.7,-9.2) {\sse$ \Hss $}
\put(32.5,87) {\sse$ \lambda $}
\put(44.6,106) {\sse$ \Hss $}
\put(58.4,11.8) {\sse$ Q $}
\put(74.5,69) {\sse$ \lambda $}
\put(93.6,106) {\sse$ \Hss $}
\put(74,24) {\sse$ \omega^{-1}$}
}
\put(127,42) {$=$}
\put(160,-6) { \Includepichtft{121eA}
\put(-4.5,-9.2) {\sse$ \Hss $}
\put(12.7,51.1) {\sse$ Q^{-1} $}
\put(38.9,-9.2) {\sse$ \Hss $}
\put(32.5,87) {\sse$ \lambda $}
\put(44.6,106) {\sse$ \Hss $}
\put(60.4,2.1) {\sse$ Q $}
\put(77.5,78) {\sse$ \lambda $}
\put(93,106) {\sse$ \Hss $}
\put(65.3,62.5) {\sse$ \omega^{-1}$}
\put(65,20.1) {\sse$ \omega$}
}
\put(285,42) {$=$}
\put(320,-6) { \Includepichtft{121fA}
\put(-4.5,-9.2) {\sse$ \Hss $}
\put(13,51.1) {\sse$ Q^{-1} $}
\put(28.9,-9.2) {\sse$ \Hss $}
\put(32.5,87) {\sse$ \lambda $}
\put(43.6,106) {\sse$ \Hss $}
\put(61.1,2.1) {\sse$ Q $}
\put(80.5,56) {\sse$ \lambda $}
\put(95.6,106) {\sse$ \Hss $}
\put(43.5,30.1) {\sse$ \omega^{-1}$}
} } }
The first of these equalities follows by pushing the automorphism $\omega$
through the product and using that $\omega$ commutes with the antipode of $H$,
while the second equality follows by using $\lambda \cir \omega \eq \lambda$
from Lemma \ref{lambda-omega} and $(\omega\oti\omega)\circ Q\eq Q$.
\\
That $[\pi_{g:n}^{(\Fomega)^{\otimes n}_{}\!}(\gamma),
(\id_{\Hs}\oti(\omega^{-1})^*)^{\otimes g}] \eq 0$ holds as well for any genus
$g$ and any of the generators $\gamma \eq S_k, a_m, b_m, d_m, e_m$
is shown in a completely analogous manner.
\\[3pt]
It remains to consider the action of the generators $t_{j,k}$.
To this end we just observe that because of Proposition \ref{prop:Corw},
a version of Proposition \ref{prop:triple} holds in which $F$ is replaced by
$\Fomega$, while Lemma \ref{lem:Hpastloopsw} implies that there are versions
of Lemma \ref{lem:removeQq_t} and Lemma \ref{lem:twist_S_remove} in which $F$
is replaced by $\Fomega$. Combining these versions of Proposition
\ref{prop:triple} and Lemmas \ref{lem:removeQq_t} and \ref{lem:twist_S_remove},
one arrives at a corresponding $\omega$-twisted version of Lemma
\ref{lem:t_act_Cor}. Now notice that in the case of $\omega \eq \id_H$,
invariance under the action of $t_{j,k}$ follows from Lemma \ref{lem:t_act_Cor}
by just invoking that $F$ is Frobenius. As a consequence, the twisted version
of Lemma \ref{lem:t_act_Cor} allows us to deduce invariance in precisely the
same manner as for $\omega \eq \id_H$.
\end{proof}
\vskip 5.5em
\noindent{\sc Acknowledgments:}
We thank Benson Farb for a helpful correspondence.
JF is grateful to Hamburg University, and in particular to CSc and Astrid
D\"orh\"ofer, for their hospitality during the time when this study was initiated.
\\
JF is largely supported by VR under project no.\ 621-2009-3993.
CSc is partially supported by the Collaborative Research Centre 676 ``Particles,
Strings and the Early Universe - the Structure of Matter and Space-Time'' and
by the DFG Priority Programme 1388 ``Representation Theory''.
\newpage
\newcommand\wb{\,\linebreak[0]} \def\wB {$\,$\wb}
\newcommand\Bi[2] {\bibitem[#2]{#1}}
\newcommand\inBO[9] {{\em #9}, in:\ {\em #1}, {#2}\ ({#3}, {#4} {#5}),
p.\ {#6--#7} {{\tt [#8]}}}
\renewcommand\J[7] {{\em #7}, {#1} {#2} ({#3}) {#4--#5} {{\tt [#6]}}}
\newcommand\JO[6] {{\em #6}, {#1} {#2} ({#3}) {#4--#5} }
\newcommand\BOOK[4] {{\em #1\/} ({#2}, {#3} {#4})}
\newcommand\prep[2] {{\em #2}, preprint {\tt #1}}
\def\aspm {Adv.\wb Stu\-dies\wB in\wB Pure\wB Math.}
\def\coma {Con\-temp.\wb Math.}
\def\comp {Com\-mun.\wb Math.\wb Phys.}
\def\isjm {Israel\wB J.\wb Math.}
\def\joal {J.\wB Al\-ge\-bra}
\def\jktr {J.\wB Knot\wB Theory\wB and\wB its\wB Ramif.}
\def\jpaa {J.\wB Pure\wB Appl.\wb Alg.}
\def\momj {Mos\-cow\wB Math.\wb J.}
\def\nupb {Nucl.\wb Phys.\ B}
\def\plms {Proc.\wB Lon\-don\wB Math.\wb Soc.}
\def\pcps {Proc.\wB Cam\-bridge\wB Philos.\wb Soc.}
\def\slnm {Sprin\-ger\wB Lecture\wB Notes\wB in\wB Mathematics}
\def\taac {Theo\-ry\wB and\wB Appl.\wb Cat.}
\def\thmp {Theor.\wb Math.\wb Phys.}
\small
\section{Introduction}
Let $\omega$ be a modulus of continuity, i.~e. a non-decreasing continuous semi-additive function such that $\omega(0) = 0$. For a segment $[a,b]\subset \mathbb R$ denote by $H^\omega([a,b],\mathbb{R})$ the class of functions $f\colon [a,b]\to \mathbb R$ such that $|f(t)-f(s) |\leq \omega(|t-s|)$ for all $t,s\in [a,b]$.
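For a concrete handle on the class $H^\omega$, its defining condition can be verified numerically on a grid. The sketch below is an illustration only; the function and helper names are ad hoc.

```python
# Check the defining condition of H^omega([a,b],R) on a uniform grid:
# |f(t) - f(s)| <= omega(|t - s|) for all grid points t, s.
import math

def in_H_omega(f, omega, a, b, n=200):
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    return all(abs(f(t) - f(s)) <= omega(abs(t - s)) + 1e-12
               for t in ts for s in ts)

# omega(t) = t is a concave modulus of continuity; H^omega is then the
# class of 1-Lipschitz functions.
assert in_H_omega(math.sin, lambda t: t, 0.0, math.pi)        # |sin'| <= 1
assert not in_H_omega(lambda t: 2 * t, lambda t: t, 0.0, 1.0)  # slope 2 > 1
```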
The moduli of continuity $\omega(\cdot)$, considered as independent functions with the properties mentioned above, the classes $H^\omega([a,b],\mathbb{R})$, as well as the classes $W^rH^\omega([a,b],\mathbb{R})$, were introduced by Nikol'skii in~\cite{Nikolsky46}.
For two {positive almost everywhere} and integrable functions $\psi_1\colon [a,a']\to\mathbb R_+$ and $\psi_2\colon [b',b]\to\mathbb R_+, \;(a<a'\le b'<b)$, the Korneichuk--Stechkin lemma, see~\cite[\S~7.1]{ExactConstants}, gives an estimate for the functional
\begin{equation}\label{numericFunct}
(\psi_1,\psi_2)\to\sup\limits_{f\in H^\omega ([a,b],\mathbb R)}\left|\int_a^{a'}f(t)\psi_1(t)dt-\int_{b'}^{b}f(t)\psi_2(t)dt\right|,
\end{equation}
which is sharp in the case of concave modulus of continuity $\omega$.
This lemma was published in~\cite{Korneichuk59, Korneichuk61} (see also a remark in~\cite{Korneichuk59}) for the classes
$H^\omega([a,b],\mathbb R)$ with $\omega(t)=t^\alpha$, $0<\alpha\le 1$, and was generalized to the case of arbitrary modulus of continuity in~\cite{Korneichuk62}. The Korneichuk--Stechkin lemma played an important role in the solution of many extremal problems of approximation theory, see~\cite[Chapter~7]{ExactConstants}
and references therein. Some of its generalizations and more applications can be found in~\cite{bagdasarov2012,stepanets2018}.
The theory of Banach space valued, multi-valued and fuzzy-valued functions was actively developed over the last several decades (see~\cite{Aubin,Borisovich,Diamond2}), in particular, due to its applications in optimization theory, approximation theory, mathematical economics, numerical analysis and other branches of applied mathematics. Some results on approximation of multi- and fuzzy-valued functions can be found in~\cite{nira2014,anastassiou2010}.
Banach spaces, spaces of sets and spaces of fuzzy sets belong to the class of so-called $L$ spaces (i.~e. semi-linear metric spaces with two additional axioms, which connect the metric with the algebraic operations). The notion of an $L$-space was introduced in~\cite{Vahrameev}, see also~\cite{Aseev}. In Section~\ref{s::LSpace} we present necessary definitions and facts related to $L$-spaces. In particular, for the sake of completeness, we present the definition and some properties of the Lebesgue integral for bounded $L$-space valued functions.
In Section~\ref{s::KSLemma} we generalize the Korneichuk--Stechkin lemma to the case of $L$-space valued functions.
Let $(X,h_X)$ be an $L$-space and $H^\omega([a,b],X)$ be the class of functions $f\colon [a,b]\to X$
such that $h_X(f(t'),f(t''))\le \omega(|t'-t''|)$ for all $t',t''\in [a,b]$. Let also $\psi_1\colon[a,a']\to \mathbb R$ and $\psi_2\colon [b',b]\to \mathbb R$, $a<a'\le b'<b,$ be positive almost everywhere, measurable, and bounded functions. We obtain, see Lemma~\ref{l::KSLemma}, an estimate for the functional
\begin{equation}\label{LspacevaluedFunct}
S(\psi_1,\psi_2):=\sup\limits_{f\in H^\omega ([a,b],X)}h_X\left(\int_a^{a'}f(t)\psi_1(t)dt,\int_{b'}^{b}f(t)\psi_2(t)dt\right),
\end{equation}
which is sharp in the case of concave modulus of continuity $\omega$.
In a series of applications that we consider in this article, we show that our generalization may be an important tool for the solution of extremal problems involving $L$-space valued functions.
In Section 4 we obtain a general estimate of the functional~\eqref{LspacevaluedFunct} for rather arbitrary functions $\psi_1$ and $\psi_2$ in terms of the Korneichuk $\Sigma$-rearrangement of the function $\Psi(t)=\int_a^t(\psi_1(u)-\psi_2(u))du$. This estimate generalizes the estimate for the functional~\eqref{numericFunct}, obtained by Korneichuk in~\cite{Korneichuk71}, see also~\cite[Theorem~7.1.9]{ExactConstants}.
In 1938 Ostrowski~\cite{Ostrowski38} proved a sharp inequality that estimates the deviation of a value of a function from its mean value using the uniform norm of the function's derivative.
Such inequalities
have been intensively studied, see~\cite{Dragomir17} for a survey of the obtained results.
It is worth noting that the general estimate for the functional~\eqref{numericFunct}, which was obtained by Korneichuk some~50 years ago, essentially contains a series of results on Ostrowski type inequalities that were obtained much later. In particular, from this estimate one can easily obtain the main result from~\cite{barnet} and one of the results in~\cite{Guessab02}. In~\cite{Barnett02,Chalco12, Chalco15, Anastassiou03,Roman18,Zhao19,Barnett01,Dragomir03,Anastassiou12} inequalities of this type
are investigated for non-real-valued functions. The estimate for the functional~\eqref{LspacevaluedFunct} obtained in this article implies some of the main results of~\cite{Barnett02,Anastassiou03, Anastassiou12, Chalco12}.
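For reference, Ostrowski's classical inequality states that $\big|f(x)-\frac1{b-a}\int_a^b f(t)\,dt\big| \le \big[\frac14+\frac{(x-(a+b)/2)^2}{(b-a)^2}\big](b-a)\,\|f'\|_\infty$. The following sketch (an illustration only, for the real-valued case) verifies it numerically for $f=\sin$ on $[0,1]$, where $\|f'\|_\infty = 1$.

```python
# Numerical sanity check of Ostrowski's inequality on [0,1] for f = sin:
# |f(x) - int_0^1 f(t) dt| <= (1/4 + (x - 1/2)**2) * ||f'||_inf.
import math

mean = 1.0 - math.cos(1.0)              # exact value of int_0^1 sin(t) dt
for k in range(101):
    x = k / 100
    bound = (0.25 + (x - 0.5) ** 2) * 1.0
    assert abs(math.sin(x) - mean) <= bound + 1e-12
```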
An important part of approximation theory and optimal algorithms theory is the theory of optimal recovery of operators.
Statements of the problems of this theory, many results and further references can be found in monographs~\cite{TraubWozhniakowski, Zhensikbaev03}.
We consider the optimal recovery problem in the following formulation.
Let a metric space $(X, h_X)$, sets $Z$, $Y$, a class of elements $W\subset Z$, as well as mappings $\Lambda \colon Z\to X$ and $I\colon W\to Y$ be given. We call an arbitrary mapping $\Phi\colon Y\to X$ a method of recovery of the mapping $\Lambda$ on the class $W$ based on the information given by the mapping $I$. The error of recovery of the mapping $\Lambda$ on the class $W$ by the method $\Phi$ based on the information given by the mapping $I$ is given by the formula
$$
{\mathcal E}(\Lambda,W,I,\Phi, X)={\sup\nolimits_{z\in W}h_X(\Lambda(z), \Phi(I(z)))}.
$$
The quantity
\begin{equation}\label{errorOfRecovery}
{\mathcal E}(\Lambda,W, I, X)=\inf\nolimits_{\Phi} {\mathcal E}(\Lambda,W,I,\Phi, X)
\end{equation}
is called the optimal error of recovery of the mapping $\Lambda$ on the class $W$ based on the information given by the mapping $I$.
The problem of optimal recovery of the mapping $\Lambda$ on the class $W$ with the information given by $I$ in the metric of the space $X$ is to find the quantity \eqref{errorOfRecovery}
and a method $\Phi^*$ (if such a method exists) on which the infimum in the right-hand side of \eqref{errorOfRecovery} is attained.
If $\mathcal{I}$ is some class of information operators, then it is also of interest to find the quantity
$$
{\mathcal E}(\Lambda,W, {\mathcal{I}}, X)=\inf\nolimits_{I\in\mathcal{I}} {\mathcal E}(\Lambda,W,I, X)
$$
and the best information operator.
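A minimal instance of this framework (with hypothetical names, using point-value rather than interval information) takes $\Lambda$ to be the integral over $[0,1]$, $W$ the class of 1-Lipschitz functions, $I$ the values at $n$ midpoints, and $\Phi$ the midpoint rule. The sawtooth function equal to the distance to the nearest node vanishes at all nodes while its integral equals $1/(4n)$, exhibiting the classical worst-case error of this method.

```python
# Toy instance of the recovery framework: Lambda = integral over [0,1],
# W = 1-Lipschitz functions, I(f) = values at n midpoints, Phi = midpoint rule.
# The sawtooth f(t) = dist(t, nearest node) vanishes at every node, so
# Phi(I(f)) = 0 while Lambda(f) = 1/(4n).
n = 4
nodes = [(2 * i + 1) / (2 * n) for i in range(n)]
f = lambda t: min(abs(t - x) for x in nodes)

midpoint_rule = sum(f(x) for x in nodes) / n            # Phi(I(f)) = 0
m = 100000                                              # fine Riemann sum for Lambda(f)
integral = sum(f((j + 0.5) / m) for j in range(m)) / m

assert midpoint_rule == 0.0
assert abs(integral - 1 / (4 * n)) < 1e-6               # Lambda(f) = 1/(4n)
```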
In Section~\ref{s::HOmegaRecovery} we consider the problem of optimal recovery of the convexifying operator (see Section~\ref{s::LSpace} for definitions)
and of the integral on the class $H^\omega([a,b],X)$. Under some additional assumptions (in particular in the case of Banach space valued functions), the convexifying operator turns into the identity operator.
For real-valued functions, these problems are well studied when the information operator maps a function from the class to its values at $n$ points of the segment $[a,b]$. Regarding recovery of a function we refer to~\cite[Chapter~5,6]{SplinesInApproxTh}; regarding the recovery of the integral we refer to~\cite{Korneichuk68}. In~\cite{Babenko15} the problem of optimization of approximate integration was solved for the class of multi-valued functions.
Here, as information operators, we use the ones that map functions from the class to their mean values on $n\in\mathbb N$ intervals contained in $[a,b]$. This kind of information operator is of interest, since analog measuring devices output such mean values of the measured functions. Moreover, the results on optimal recovery from this type of information easily imply corresponding results on optimal recovery for the case when the information operators map functions to their values at $n$ points of the interval $[a,b]$.
The problem of optimization of approximate integration given the ``interval'' information for the functions from the class $H^\omega([a,b],\mathbb R)$ was solved in~\cite{Borodachov98}.
Since a random process can be viewed as a function into a Banach space of random variables, our results can be applied to recovery problems for random processes. Some results in this direction can be found in~\cite{Drozhina, Kumar, Kovalenko20}.
In Section~\ref{s::PolylineApproximation} we consider the problem of optimal recovery of the identity operator and the operator $\mathcal{D}_H$ of Hukuhara type derivative on the class $W^1H^\omega([a,b],X)$ (see Section~\ref{s::PolylineApproximation} for the definitions).
In these problems the recovery is done based on the information operator that maps a function to its values at $n$ points of the interval $[a,b]$.
We again refer to~\cite[Chapter~5,6]{SplinesInApproxTh} for the results on optimal recovery of functions on the class $W^1H^\omega([a,b],\mathbb R)$ and its periodic analog, as well as on optimal recovery of the derivative of the functions on these classes.
In Section~\ref{s::StechkinPr}, we obtain sharp inequalities of Landau type for divided differences of Hukuhara type as well as for derivatives of Hukuhara type of the functions from the classes $\overline{W}^1H^\omega([a,b],X)=\bigcup_{k>0}k\cdot W^1H^\omega([a,b],X)$.
Many known results on Landau and Landau--Kolmogorov type inequalities can be found in~\cite{BKKP, Mitrinovich, Arestov}. For functions with values in Banach spaces, some inequalities were obtained in~\cite{ANASTASSIOU2012312,XIAO};
for $L$-space valued functions defined on $\mathbb R$ or $\mathbb R_+$, in~\cite{VeraBabenko_JANO}.
Inequalities of this type are intimately connected to the Stechkin problem about approximation of an operator by operators with smaller norm, in particular approximation of unbounded operators by bounded ones. The problem was first stated in~\cite{Stechkin}, where the first results on its solution were obtained. Information on further results can be found in~\cite{Arestov,BKKP}. In~\cite{VeraBabenko_JANO}, a generalization of the Stechkin problem to the case of unbounded operators acting in $L$-spaces was proposed, and some results about approximation of Hukuhara type derivatives by Lipschitz operators on the classes $W^1H^\omega(J,X)$, where $J=\mathbb R$ or $J=\mathbb R_+$, were obtained. Here we consider this problem for the operator of Hukuhara type divided difference and the Hukuhara type derivative. We also consider the problem of optimal recovery of the operator $\mathcal{D}_H$ on the class $W^1H^\omega([a,b],X)$ in the case when the elements of the class are known with error. Known results and further references can be found in \cite{Arestov,BKKP,VeraBabenko_JANO}.
\section{$L$-spaces}\label{s::LSpace}
\subsection{Definitions}
\begin{definition}
A set $X$ is called a semilinear space, if operations of addition of elements and their multiplication on real numbers are defined in $X$, and the following conditions are satisfied for all $x,y,z\in X$ and $\alpha,\beta\in\mathbb{R}$:
\begin{gather*}
x+y=y+x;
\\ x+(y+z)=(x+y)+z;
\\ \exists\; \theta\in X\colon x+\theta=x;
\\ \alpha(x+y)=\alpha x +\alpha y;
\\ \alpha(\beta x)=\left(\alpha\beta\right)x;
\\ 1\cdot x=x,\; 0\cdot x=\theta.
\end{gather*}
\end{definition}
\begin{definition}
We call an element $x\in X$ convex, if for all $\alpha,\beta\geq 0$, $(\alpha + \beta)x=\alpha x+ \beta x.$
Denote by $ X^{\rm c}$ the subspace of all convex elements of the space $X$.
\end{definition}
\begin{remark}
Some authors (see e.~g.~\cite{Borisovich}) include into the axioms of a semi-linear space the requirement $X=X^{\rm c}$.
\end{remark}
\begin{definition}
A semilinear space $X$, endowed with a metric $h_X$, is called an $L$-space, if it is complete and separable and for all $x,y,z\in X$, and $\alpha\in\mathbb{R}$
$$
h_X(\alpha x,\alpha y)=|\alpha|h_X(x,y);
$$
\begin{equation}\label{ax::LSemiIsotropic}
h_X(x+z,y+z)\leq h_X(x,y).
\end{equation}
\end{definition}
\begin{remark}
It follows from the triangle inequality and~\eqref{ax::LSemiIsotropic}, that
$$
\forall x,y,z,w\in X\;\;\; h_X(x+z,y+w)\leq h_X(x,y)+h_X(z,w).
$$
\end{remark}
\begin{definition}
An $L$-space $X$ is called isotropic, if inequality~\eqref{ax::LSemiIsotropic} turns into equality for all $x,y,z\in X$.
\end{definition}
Next we list some examples of $L$-spaces. More details can be found in~\cite{babenko19}.
Any separable Banach space and any complete separable quasilinear normed space (see~\cite{Aseev}) are $L$-spaces. The space $\Omega(X)$ of non-empty compact subsets of a separable Banach space $X$ endowed with the usual Hausdorff metric, the space $\Omega_{\rm conv}(X)$ of convex elements from $\Omega(X)$, and spaces of fuzzy sets (see e.~g.~\cite{Diamond2}) are also examples of $L$-spaces. All $L$-spaces mentioned above are isotropic.
An example of a non-isotropic $L$-space can be built as follows. Let $X = [0,\infty)$ and, for $\lambda\in\mathbb R$, $x,y\in X$, set $x\oplus y = \max\{x,y\}$ and $\lambda\odot x = |\lambda|x$. Then $(X,\oplus,\odot)$ with the metric $h_X(x,y) = |x-y|$, $x,y\in X$, is a non-isotropic $L$-space.
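This example is easy to check numerically. The following Python sketch (an illustration only; the helper names `add`, `smul`, `h` are ours) verifies homogeneity and semi-isotropy of the metric and shows that inequality~\eqref{ax::LSemiIsotropic} can be strict, so the space is not isotropic.

```python
# Sketch (not part of the formal development): numerically verify the
# axioms for X = [0, inf) with x (+) y = max{x, y}, lam (.) x = |lam| x,
# and h(x, y) = |x - y|, and exhibit why the space is non-isotropic.

def add(x, y):          # semilinear addition
    return max(x, y)

def smul(lam, x):       # multiplication by a real number
    return abs(lam) * x

def h(x, y):            # the metric
    return abs(x - y)

# Homogeneity: h(lam x, lam y) = |lam| h(x, y)
for lam in (-2.0, 0.5, 3.0):
    for x, y in ((1.0, 4.0), (0.0, 2.5)):
        assert h(smul(lam, x), smul(lam, y)) == abs(lam) * h(x, y)

# Semi-isotropy holds, but can be strict (hence non-isotropy):
assert h(add(1.0, 5.0), add(2.0, 5.0)) == 0.0 < h(1.0, 2.0)   # strict
assert h(add(1.0, 0.0), add(2.0, 0.0)) == h(1.0, 2.0)         # equality

# Consequently the Hukuhara difference x -_H x is non-unique for x != 0:
x = 3.0
candidates = [z for z in (0.0, 1.0, 3.0) if add(x, z) == x]
assert len(candidates) > 1
```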
A function $f\colon [a,b]\to X$ is said to be measurable, if
for any element $x\in X$ the real-valued function $h_X(f(t),x)$ is measurable.
For $[a,b]\subset \mathbb R$ and an $L$-space $(X,h_X)$, denote by $C([a,b],X)$ and $B([a,b],X)$ the spaces of continuous (resp. bounded and measurable) functions $f\colon [a,b]\to X$ with the metrics
$$h_{C([a,b],X)}(f,g):= \max\limits_{t\in [a,b]} h_X(f(t),g(t)) \;\;\text{and}\;\;
h_{B([a,b],X)}(f,g):= \sup\limits_{t\in [a,b]} h_X(f(t),g(t)).$$
\subsection{Hukuhara type derivative}
The notion of the Hukuhara difference of two sets was introduced in~\cite{Hukuhara}.
\begin{definition}
Let $X$ be an $L$-space. We say that $z\in X$ is the Hukuhara type difference of $x,y\in X$, if $x=y+z$.
We denote this difference by $z=x-_H y.$
\end{definition}
Note that in an isotropic $L$-space the Hukuhara difference $x-_H y$ is unique, provided it exists. On the other hand, in a non-isotropic $L$-space, uniqueness is not guaranteed. For example, in the space $(X,\oplus,\odot)$ defined above, the difference $x-_H x$ exists for every $x\in X$ and is not unique for each $x\neq 0$.
Everywhere below, when we consider Hukuhara differences, we assume that the $L$-space is isotropic.
\begin{definition}
If $t\in(a,b)$, and for all small enough $\gamma>0$ there exist differences $f(t+\gamma)-_H f(t)$ and $f(t)-_H f(t-\gamma)$, and both limits $\lim\limits_{\gamma\to +0} \gamma^{-1}(f(t+\gamma)-_H f(t))$ and $\lim\limits_{\gamma\to +0} \gamma^{-1}(f(t)-_H f(t-\gamma))$ exist and are equal to each other, then the function $f$ has a Hukuhara type derivative $\mathcal{D}_H f(t)$ at the point $t$ (if $t=a$ or $t=b$, then only the corresponding one-sided limit is required) and
$$
\mathcal{D}_H f(t):=\lim\limits_{\gamma\to +0} \gamma^{-1}(f(t+\gamma)-_H f(t)).
$$
\end{definition}
One can find properties of Hukuhara type differences and elements of calculus based on Hukuhara type difference and derivative in $L$-spaces in~\cite{babenko19}.
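In the interval space $\Omega_{\rm conv}(\mathbb R)$ this derivative can be illustrated numerically: for intervals, $[a,b]-_H[c,d]=[a-c,\,b-d]$ whenever the latter is a valid interval. The Python sketch below (helper names are ours; $\mathcal D_H$ is approximated by the difference quotient from the definition) is an illustration, not a proof.

```python
# Sketch: Hukuhara difference and derivative for interval values
# (the L-space of compact intervals of R with the Hausdorff metric).

def h_diff(x, y):
    """Hukuhara difference x -_H y: the interval z with y + z = x."""
    z = (x[0] - y[0], x[1] - y[1])
    return z if z[0] <= z[1] else None  # exists only if z is an interval

def hausdorff(x, y):
    """Hausdorff distance between intervals x and y."""
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

def f(t):
    """Interval-valued function f(t) = [0, t^2]."""
    return (0.0, t * t)

def d_h(f, t, gamma=1e-6):
    """Finite-difference approximation of the Hukuhara type derivative."""
    z = h_diff(f(t + gamma), f(t))
    return (z[0] / gamma, z[1] / gamma)

# The difference may fail to exist:
assert h_diff((0.0, 1.0), (0.0, 2.0)) is None
# For f(t) = [0, t^2] one expects D_H f(t) = [0, 2t]:
assert hausdorff(d_h(f, 2.0), (0.0, 4.0)) < 1e-3
```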
\subsection{Integration in $L$-spaces}
For completeness we present the definition and some of the properties of the Lebesgue integral for the functions $f\in B([a,b],X)$, where $X$ is an $L$-space (see~\cite{Vahrameev} and~\cite[\S 5]{Aseev}).
First, we recall the definition of a convexifying operator.
\begin{definition}\label{def::convexifyingOperator}
A surjective operator $P\colon X\to X^{\rm c}$ is called convexifying, if
\begin{gather*}
h_X(P(x),P(y))\le h_X (x,y) \text{ for all } x,y\in X; \\
P\circ P = P\notag; \\
P(\alpha x + \beta y) = \alpha P(x)+ \beta P(y)\text{ for all }x,y\in X \text{ and }\alpha,\beta\in\mathbb R.
\end{gather*}
\end{definition}
The operator ${\rm conv}\colon\Omega(\mathbb{R}^m)\to \Omega(\mathbb{R}^m)$ that maps each $x\in \Omega(\mathbb{R}^m)$ to its convex hull ${\rm conv}\,x$ is an example of a convexifying operator.
A mapping $f$ is called simple, if it has a finite number of values $\left\{f_k\right\}_{k=1}^n$ on pairwise disjoint measurable sets $\left\{T_k\right\}_{k=1}^n$, $n\in\mathbb N$. The Lebesgue integral of a simple mapping $f$ is by definition
$$
\int_a^b f(s)ds :=\sum\nolimits_{i=1}^n P(f_i)\mu(T_i),
$$
where $\mu$ is the Lebesgue measure.
The following properties hold for simple $f,g$.
\begin{enumerate}
\item For all $\alpha,\beta\in \mathbb R$
\begin{equation}\nonumber
\int_a^b \left(\alpha f(t)+\beta g(t)\right) dt=\alpha \int_a^b f(t) dt+\beta \int_a^b g(t) dt.
\end{equation}
\item The function $t\to h_X \left(f(t), g(t)\right)$ is integrable and
\begin{equation}\nonumber
h_X\left(\int_a^b f(t) dt, \int_a^b g(t) dt\right)\leq \int_a^b h_X \left(f(t),g(t)\right)dt.
\end{equation}
\item The function $P(f(\cdot))$ is integrable and
\begin{equation}\nonumber
\int_a^b f(t) dt = P\left( \int_a^b f(t) dt\right) = \int_a^b P(f(t)) dt.
\end{equation}
\item For disjoint measurable sets $T_1$ and $T_2$ such that $[a,b]=T_1\cup T_2$
$$
\int_a^b f(t) dt=\int_{T_1} f(t) dt+\int_{T_2} f(t) dt.
$$
\end{enumerate}
Any function $f\in B([a,b],X)$ is a uniform limit of a sequence $\{f^k\}$ of simple functions. Using standard arguments, one can prove that the sequence $\left\{\int_a^bf^k(t)dt\right\}_{k\in\mathbb N}$ is fundamental (i.e., Cauchy). By definition, the integral $\int_a^bf(t)dt$ of the function $f$ is the limit of this sequence.
It is clear that properties 1--4 of the Lebesgue integral for simple functions hold for arbitrary functions from $B([a,b],X)$.
Moreover, if $\rho$ is an absolutely continuous strictly monotone function from $[c,d]\subset \mathbb R$ onto $[a,b]\subset\mathbb R$, then for all $f\in B([a,b], X)$,
$$\int_{a}^bf(t)dt = \int_{c}^df(\rho(s))\rho'(s)ds.
$$
Indeed, from Properties~1--4 and the possibility to change variables in the integral for real-valued functions, it follows, that this property holds for the case, when $f$ is simple. The general case can be obtained using the limiting procedure.
Note that in the case of $X$ being a Banach space, the integral becomes the Bochner integral, see~\cite[Sections~3.7--3.8]{hille}; in the case $X=\Omega( {\mathbb R^m})$, the integral coincides with the Aumann integral, see~\cite[Theorem~12]{Aseev}.
\subsection{Some properties of $L$-spaces}
\begin{definition}
We say that an element $x\in X$ is invertible, if there exists an element $x'\in X$ such that $x+x'=\theta$. In this case the element $x'$ is called the inverse to $x$. Denote by $X^{\rm inv}$ the set of all invertible elements of the space $X$.
\end{definition}
\begin{assumption}
In what follows we assume that $X^{\rm inv}\cap X^{\rm c}\neq\{\theta\}$.
\end{assumption}
In the space $\Omega(X)$ any element of the form $\{x\},$ $x\in X$, is convex and invertible.
We need the following lemmas, see~\cite{VeraBabenko_JANO}.
\begin{lemma}
If $x\in X^{\rm inv}$, then its inverse element $x'$ is unique.
\end{lemma}
\begin{lemma}
If $x\in X^{\rm inv}\cap X^{\rm c}$, then $x'\in X^{\rm c}$.
\end{lemma}
\begin{lemma}\label{l::distBetweenConvexElemMultipliers}
For all $x\in X^{\rm c}$ and $\alpha, \beta\in \mathbb{R}$, $
h_X(\alpha x, \beta x)\leq |\alpha -\beta |\cdot h_X (x,\theta).
$
If $X$ is isotropic and $\alpha\cdot\beta\geq 0$, then the inequality becomes equality.
\end{lemma}
In addition, we need the following lemmas; we omit their elementary proofs.
\begin{lemma}\label{l::distBetweenInverseElems}
Let $X$ be an isotropic $L$-space. Then for any $ x \in X^c\cap X^{\rm inv}$,
$$
h_X(x,x')=h_X(x+x,\theta)=2h_X(x,\theta).
$$
\end{lemma}
\begin{lemma}\label{l::inversElementNorm}
For any $x \in X^{\rm inv}\cap X^{\rm c}$, $h_X(x',\theta) = h_X(x,\theta)$.
\end{lemma}
\subsection{Auxiliary results}
\begin{definition}
For $f\colon [a,b]\to\mathbb R$ and $x\in X^{\rm inv}$ define the function $f_x\colon [a,b]\to X$,
\begin{equation}\label{simpleFunc}
f_x(t) = f_+(t)\cdot x + f_-(t)\cdot x',
\end{equation}
where for real $\xi$, $\xi_\pm :=\max\{\pm \xi ,0\}$.
\end{definition}
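The construction $f_x$ can be made concrete in $\Omega_{\rm conv}(\mathbb R)$ with $x=\{1\}$ and $x'=\{-1\}$: then $f_x(t)$ is the singleton $\{f(t)\}$, so real-valued functions embed into the $L$-space. A Python sketch under these assumptions (helper names are ours):

```python
# Sketch: the function f_x from (simpleFunc), realized in the interval
# space with x = {1}, x' = {-1}; intervals are pairs (lo, hi), with
# Minkowski sum A + B and scalar multiple lam * A.

def pos(xi):
    return max(xi, 0.0)

def neg(xi):
    return max(-xi, 0.0)

def scale(lam, A):            # lam * A for an interval A = (lo, hi)
    lo, hi = lam * A[0], lam * A[1]
    return (min(lo, hi), max(lo, hi))

def add(A, B):                # Minkowski sum of intervals
    return (A[0] + B[0], A[1] + B[1])

x = (1.0, 1.0)                # the singleton {1}: convex and invertible
x_inv = (-1.0, -1.0)          # its inverse {-1}: x + x' = {0}

def f_x(f, t):
    """f_x(t) = f_+(t) * x + f_-(t) * x'."""
    return add(scale(pos(f(t)), x), scale(neg(f(t)), x_inv))

# For real-valued f, f_x(t) is the singleton {f(t)}:
f = lambda t: t - 0.5
assert abs(f_x(f, 0.2)[0] + 0.3) < 1e-12 and abs(f_x(f, 0.2)[1] + 0.3) < 1e-12
assert abs(f_x(f, 0.9)[0] - 0.4) < 1e-12 and abs(f_x(f, 0.9)[1] - 0.4) < 1e-12
```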
\begin{lemma}\label{l::LspaceValuedFunc}
Let $X$ be an isotropic $L$-space and $f\in H^\omega ([a,b],\mathbb R)$. If $x\in X^{\rm inv}\cap X^{\rm c}$ is such that $h_X(x,\theta) = 1$, then $f_x\in H^\omega([a,b],X)$ and
\begin{equation}\label{integralOfSimpleFunc}
\int_a^b f_x(t)dt = \int_a^b f_+(t)dt\cdot x + \int_a^b f_-(t)dt\cdot x'= \left(\int_a^b f(t)dt\right)_+\cdot x + \left(\int_a^b f(t)dt\right)_-\cdot x'.
\end{equation}
\end{lemma}
\begin{proof}
If $s,t\in [a,b]$ are such that $f(s),f(t)\geq 0$, then due to Lemma~\ref{l::distBetweenConvexElemMultipliers},
$$
h_X(f_x(s), f_x(t))
=
h_X(f(s)\cdot x, f(t)\cdot x)
\leq \omega(|s-t|).
$$
The case $f(s),f(t)\leq 0$ is treated analogously, using Lemma~\ref{l::inversElementNorm}. When $f(s)\geq 0\geq f(t)$,
\begin{multline*}
h_X(f_x(s), f_x(t))
=
h_X(f_+(s)\cdot x, f_-(t)\cdot x')
=
h_X(f_+(s)\cdot x +f_-(t)\cdot x, \theta)
\\=
|f(s)-f(t)|h_X(x,\theta)
\leq \omega(|t-s|).
\end{multline*}
Hence $f_x\in H^\omega([a,b],X)$. Equality~\eqref{integralOfSimpleFunc} follows from~\eqref{simpleFunc} and convexity of $x$ and $x'$.
Indeed, let for definiteness $\int_a^b f_+(t)dt \geq \int_a^b f_-(t)dt$. Then
\begin{gather*}
\int_a^b f_x(t)dt
=
\left(\int_a^b f_+(t)dt\right)\cdot x + \left(\int_a^b f_-(t)dt\right)\cdot x'
\\=
\left(\int_a^b f_-(t)dt\right)\cdot x +
\left(\int_a^b (f_+(t)- f_-(t))dt\right)\cdot x +
\left(\int_a^b f_-(t)dt\right)\cdot x'
\\=
\left(\int_a^b f(t)dt\right)\cdot x
=
\left(\int_a^b f(t)dt\right)_+\cdot x
=
\left(\int_a^b f(t)dt\right)_+\cdot x + \left(\int_a^b f(t)dt\right)_-\cdot x'.
\end{gather*}
\end{proof}
\begin{lemma}\label{l::derivativeOfSimpleFunc}
Let $X$ be an isotropic $L$-space, $x\in X^{\rm c}\cap X^{\rm inv}$, and $f\colon [a,b]\to \mathbb R$ be a continuously differentiable function. Then the derivative $\mathcal{D}_H f_x(t)$ exists at each point $t\in [a,b]$ and
$\mathcal{D}_H f_x(t) = (f'(t))_x.$
\end{lemma}
\begin{proof} First of all, note that if $t,t+\gamma\in [a,b]$, then
$
f_x(t+\gamma)-_H f_x(t) = (f(t+\gamma) - f(t))_x$.
Let $\gamma>0$. If $f'(t)>0$, then for all small enough $\gamma$, $f(t+\gamma) > f(t)$, hence
\begin{gather*}
\lim\limits_{\gamma\to +0}\gamma^{-1}( f_x(t+\gamma)-_H f_x(t))
=
\lim\limits_{\gamma\to +0}\gamma^{-1}(f(t+\gamma)- f(t))x
= f'(t)\cdot x = (f'(t))_x.
\end{gather*}
Analogously in the case $f'(t) <0$.
Finally, if $f'(t) = 0$, then, due to Lemma~\ref{l::inversElementNorm},
$$h_X\left(\gamma^{-1}(f_x(t+\gamma)-_H f_x(t)) ,\theta\right) =
\left|\gamma^{-1}(f(t+\gamma)- f(t))\right|h_X(x,\theta)
\to 0, \gamma\to+0.
$$
Analogously for the quantity $\lim\limits_{\gamma\to +0}\gamma^{-1}(f_x(t)-_H f_x(t-\gamma))$. The lemma is proved.\end{proof}
\section{On the Korneichuk--Stechkin lemma}\label{s::KSLemma}
\begin{lemma}\label{l::KSLemma}
Let functions $\psi_1\in B([a,a'],\mathbb R)$ and $\psi_2\in B([b',b],\mathbb R)$, $a<a'\le b'<b$, be positive almost everywhere and such that
$
\int _ {a} ^ {a '} \psi_1 (t) dt =
\int _ {b '} ^ {b} \psi_2 (t) dt.
$
Let also $\omega$ be a modulus of continuity and the function $\rho\colon [a, c] \to [c, b]$, $c=(a'+b')/2$, be defined by the relations
\begin{equation}\label{rhoDefinition}
\int_{a}^{s} \psi_1(t) dt =\int_{\rho(s)}^{b} \psi_2(t) dt,\;\text{if}\; s\in [a,a'],\;\;\;\text {and}\;\;\; \rho(s)=a'+b'-s,\; \text{if}\; s\in [a',c].
\end{equation}
Then for an $L$-space $X$ and the functional $S(\psi_1,\psi_2)$ defined in~\eqref{LspacevaluedFunct}
\begin{equation}\label{mainInequality}
S(\psi_1,\psi_2)
\leq
\int_{a}^{a'} \psi_1(s)\omega(\rho(s)-s)ds = \int_{b'}^{b} \psi_2(t)\omega(t-\rho^{-1}(t))dt.
\end{equation}
If $\omega$ is a concave modulus of continuity and $X$ is isotropic, then~\eqref{mainInequality} turns into equality.
In this case, the supremum is attained on the functions $(\pm g+\alpha)_x(\cdot) + y\in H^{\omega}([a,b],X)$, where $\alpha\in \mathbb R$, $y\in X$, $x\in X^{\rm c}\cap X^{\rm inv}$, $h_X(x,\theta) = 1$, and
\begin{equation}\label{extremalFunctions}
g(t) =
\begin{cases}
-\int_t^c\omega'(\rho(s) - s)ds,& a\leq t \leq c, \\
\int_c^t\omega'(s - \rho^{-1}(s))ds,& c\leq t \leq b.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof} Differentiating equality~\eqref{rhoDefinition}, we get $\psi_1(s) = -\psi_2(\rho(s))\rho'(s)$ for all $s\in [a,a']$. After a substitution $t=\rho(s)$, we obtain that
\begin{equation*}
\int_{b'}^{b}\psi_2(t)f(t)dt
=
-\int_{a}^{a'}\psi_2(\rho(s))\rho'(s)f(\rho(s))ds
=
\int_{a}^{a'}\psi_1(s)f(\rho(s))ds.
\end{equation*}
Hence
\begin{gather*}
h_X\left(\int_{a}^{a'} \psi_1(t)f(t) dt, \int_{b'}^{b}\psi_2(t) f(t) dt\right)
=
h_X\left(\int_{a}^{a'} \psi_1(t)f(t) dt,
\int_{a}^{a'}\psi_1(t)f(\rho(t)) dt\right)
\\ \leq
\int_{a}^{a'}h_X\left(\psi_1(t)f(t), \psi_1(t)f(\rho(t)) \right)dt
\\=
\int_{a}^{a'}\psi_1(t)h_X(f(t),f(\rho(t)))dt
\leq
\int_{a}^{a'}\psi_1(t)\omega(\rho(t) - t)dt
\end{gather*}
and the inequality in~\eqref{mainInequality} is proved. The
equality in~\eqref{mainInequality} can be obtained after the substitution $s = \rho^{-1}(t)$.
Let now $\omega$ be concave and $X$ be isotropic. For $y\in X$,
$$\int_{a}^{a'} \psi_1(t)y dt
=
\left(\int_{a}^{a'} \psi_1(t) dt\right)P(y)
=
\left(\int_{b'}^{b} \psi_2(t) dt\right)P(y)
=
\int_{b'}^{b} \psi_2(t) ydt,
$$
and since $X$ is isotropic, we obtain
\begin{gather*}
h_X\left(\int_{a}^{a'} \psi_1(t)f(t) dt , \int_{b'}^{b}\psi_2(t) f(t) dt\right)
=
h_X\left(\int_{a}^{a'} \psi_1(t)f(t) dt +\int_{a}^{a'} \psi_1(t)ydt, \right.
\\ \left.
\int_{b'}^{b}\psi_2(t) f(t) dt + \int_{b'}^{b} \psi_2(t)y dt\right)
=
h_X\left(\int_{a}^{a'} \psi_1(t)(f(t) +y) dt, \int_{b'}^{b}\psi_2(t) (f(t)+y) dt\right)
\end{gather*}
and hence if the supremum in~\eqref{mainInequality} is attained on some function $f$, then it is attained on all functions $f(\cdot)+y$, $y\in X$.
Let $G(\cdot) = g(\cdot) + \alpha$, where $g$ is defined in~\eqref{extremalFunctions} and $\alpha\in \mathbb R$. It is known (see~\cite[\S~7.1]{ExactConstants}), that the function $G$ belongs to $H^\omega([a,b],\mathbb R)$ and is extremal in~\eqref{mainInequality} for real-valued functions.
Let $I=\int_a^{a'} \psi_1(t)G(t) dt$ and $J=\int_{b'}^b\psi_2(t) G(t) dt$. By~\eqref{extremalFunctions}, the function $g$ is non-decreasing. Hence there are three possibilities: 1) $I\geq 0$, $J\geq 0$; 2) $I\leq 0$, $J\geq 0$; 3) $I\leq 0$, $J\leq 0$. Considering each of them and taking into account Lemmas~\ref{l::distBetweenConvexElemMultipliers}, \ref{l::inversElementNorm} and~\ref{l::LspaceValuedFunc}, we obtain
\begin{gather*}
h_X\left(\int_a^{a'} \psi_1(t)G_x(t) dt, \int_{b'}^b\psi_2(t) G_x(t) dt\right)
= |I-J|h_X(x,\theta) = \int_a^{a'}\psi_1(t)\omega(\rho(t) - t)dt.
\end{gather*}
We elaborate the proof in the case $I \leq 0$ and $J \geq 0$:
$$
h_X\left(\int_a^{a'} \psi_1(t)G_x(t) dt, \int_{b'}^b\psi_2(t) G_x(t) dt\right)=h_X(I_-\cdot x',J_+\cdot x)
$$
$$
=h_X(I_-\cdot x'+I_-\cdot x, I_-\cdot x+J_+\cdot x)=h_X(\theta, (-I+J)\cdot x)=|I-J|h_X(x,\theta).
$$
The case $G(\cdot)=-g(\cdot)+\alpha$, can be considered analogously.
The lemma is proved. \end{proof}
Recall that for a measurable non-negative function $f\colon [a,b]\to\mathbb R$ the function
$$
m(f,y):={\rm mes}\{ t\in [a,b]\colon f(t)>y\},\;y\in\mathbb R
$$
is called the distribution function of $f$. The function
$$
r(f,t):=\inf\{ y\colon m(f,y)\le t\},\;t\in [0,b-a]
$$
is called the non-increasing (or Hardy's) rearrangement of $f$. The function $r(f,\cdot)$ is non-increasing on $[0,b-a]$ and equimeasurable with $f$.
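Both objects are straightforward to approximate numerically: sampling $f$ on a grid and sorting the values in decreasing order yields the rearrangement. A Python sketch (helper names are ours):

```python
# Sketch: grid approximations of the distribution function m(f, y) and
# the non-increasing (Hardy) rearrangement r(f, t); illustration only.

def distribution(f, y, a, b, n=4000):
    """mes{t in [a,b] : f(t) > y}, approximated by a Riemann count."""
    dt = (b - a) / n
    return dt * sum(1 for k in range(n) if f(a + (k + 0.5) * dt) > y)

def rearrangement(f, a, b, n=4000):
    """Sorting the sampled values gives the rearrangement on a grid."""
    dt = (b - a) / n
    vals = sorted((f(a + (k + 0.5) * dt) for k in range(n)), reverse=True)
    return lambda t: vals[min(int(t / dt), n - 1)]

# Example: f(t) = |t| on [-1, 1] has m(f, y) = 2(1 - y), 0 <= y <= 1,
assert abs(distribution(lambda t: abs(t), 0.25, -1.0, 1.0) - 1.5) < 0.01

# and r(f, t) = 1 - t/2 on [0, 2]:
r = rearrangement(lambda t: abs(t), -1.0, 1.0)
for t in (0.2, 1.0, 1.6):
    assert abs(r(t) - (1 - t / 2)) < 0.01
```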
\begin{remark}
For a concave modulus of continuity $\omega$, and an isotropic $L$-space $X$, the statement of Lemma~\ref{l::KSLemma} can be rewritten as follows:
\begin{equation}\label{rearrangement}
S(\psi_1,\psi_2)= \left|\int_{0}^{b-a}r' (\Psi,s)\omega(s)ds\right| = \int_{0}^{b-a}r (\Psi,s)\omega'(s)ds,
\end{equation}
where
$ \Psi (s)=\int_a^s(\psi_1(u)-\psi_2(u))du,\;s\in[a,b].$
\end{remark}
The equality of the quantities in the right-hand sides of inequalities~\eqref{mainInequality} and~\eqref{rearrangement} for concave $\omega$ was proved by Korneichuk, see e.~g.~\cite[Lemma~7.1.2]{ExactConstants}.
\section{Estimate for the functional $S(\psi_1,\psi_2)$ and Ostrowski type inequalities}
\subsection{General estimate for the functional $S(\psi_1,\psi_2)$}
\begin{definition}
A function $\varphi\in C([a,b],\mathbb R)$
is called a hat-function, if
\begin{enumerate}
\item $\varphi(a) =\varphi(b)=0$, $|\varphi(t) |>0$ for $a<t<b$, and
\item $\forall y\in (0,\max\limits_{t\in[a,b]}|\varphi(t)|)$ the equation $|\varphi(t)|=y$ has exactly two roots on $(a,b)$.
\end{enumerate}
Each hat-function $\varphi$ is extended to the whole line by $\varphi(t) = 0$, $t\notin (a,b)$.
\end{definition}
Denote by $D[a,b]$ the set of functions $\psi\colon [a,b]\to\mathbb R$ that have finite one-sided limits $\psi(t+0)$ and $\psi(t-0)$ for all $t\in (a,b)$, and finite limits $\psi(a+0)$ and $\psi(b-0)$. Set
$
D_0[a,b]=\{ \psi\in D[a,b]\colon \int_a^b\psi(t)dt=0 \}$ and
$
D^1_0[a,b]=\{ f(t)=\int_a^t\psi(u)du\colon \psi \in D_0[a,b] \}.
$
As is known (see e.~g.~\cite{Korneichuk71} and~\cite[Chapter~7]{ExactConstants}), each function $f\in D^1_0[a,b]$ can be represented as a finite or countable sum of hat-functions
\begin{equation}\label{sigma}
f(t)=\sum\nolimits_k\varphi_k(t).
\end{equation}
This equality is called the $\Sigma$-representation of $f$. The following properties are satisfied (see~\cite[Chapter~7]{ExactConstants}).
1) $|f(t)|=\sum\nolimits_k|\varphi_k(t)|,\; a\le t\le b;$
2) The intervals $(\alpha_k,\alpha_k')$, $(\beta_k',\beta_k)$, on which the functions $\varphi_k(t)$ are strictly monotone, are pairwise disjoint, and on each of them $\varphi_k(t)=f(t)+c_k$, $c_k\in\mathbb R$, and hence $f'(t)=\sum\nolimits_k\varphi_k'(t)$ almost everywhere on $[a,b]$;
3) $\int_a^b|f(t)|dt=\sum\nolimits_k\int_{\alpha_k}^{\beta_k}|\varphi_k(t)|dt$;
4) $ \bigvee_a^b f =\sum\nolimits_k\bigvee_{\alpha_k}^{\beta_k}(\varphi_k).$
\begin{definition}
For a function $f\in D^1_0[a,b]$ with $\Sigma$-representation~\eqref{sigma}, the Korneichuk $\Sigma$-rearrangement of $f$ is defined by the equality
$$
R(f;t)=\sum\nolimits_kr(|\varphi_k|,t), \; 0\le t\le b-a.
$$
\end{definition}
\begin{theorem}\label{th::KSLemmaGeneralEstimate}
Let $\omega$ be a concave modulus of continuity and $\psi_1, \psi_2 \in B([a,b],\mathbb R)$ be such that $\psi_1- \psi_2\in D_0[a,b]$. Set $\Psi(t)=\int_a^t[\psi_1(u)- \psi_2(u)]du$. Then
\begin{equation}\label{estS}
S(\psi_1, \psi_2)\le \int_0^{b-a}|R'(\Psi;t)|\omega(t)dt.
\end{equation}
\end{theorem}
\begin{proof} Set $E_\pm =\{ t\in [a,b]\colon \pm \psi_1(t)\ge \pm \psi_2(t)\}$. If $P$ is the convexifying operator (see Definition~\ref{def::convexifyingOperator}), then $\psi_1(s)P(f(s)) = (\psi_1(s)-\psi_2(s))P(f(s)) + \psi_2(s)P(f(s))$ for all $s\in E_+$ and $\psi_2(s)P(f(s)) = (\psi_2(s)-\psi_1(s))P(f(s))+ \psi_1(s)P(f(s))$ for all $s\in E_-$. Hence for any function
$f\in H^\omega([a,b],X)$ we obtain
\begin{gather*}
h_X\left( \int_a^bf(t)\psi_1(t)dt,\int_a^bf(t)\psi_2(t)dt\right)
\\
=
h_X\left( \int_{E_+}f(t)(\psi_1(t)-\psi_2(t))dt+\int_{E_-}f(t)\psi_1(t)dt +\int_{E_+}f(t)\psi_2(t)dt,\right.
\\
\left.\int_{E_-}f(t)(\psi_2(t)-\psi_1(t))dt
+\int_{E_-}f(t)\psi_1(t)dt
+\int_{E_+}f(t)\psi_2(t)dt\right)
\\
\leq
h_X\left( \int_{E_+}f(t)(\psi_1(t)-\psi_2(t))dt,\int_{E_-}f(t)(\psi_2(t)-\psi_1(t))dt\right)
\\
=h_X\left( \int_{a}^bf(t)(\psi_1(t)-\psi_2(t))_+dt,\int_{a}^bf(t)(\psi_1(t)-\psi_2(t))_-dt\right).
\end{gather*}
Moreover, the inequality in the above chain becomes an equality in the case of an isotropic space $X$. Let $\Psi(t)=\sum\nolimits_k\varphi_k(t)$ be the $\Sigma$-representation of the function $\Psi.$ Since $\Psi'=\psi_1-\psi_2=\sum\nolimits_k\varphi_k'$, due to the properties of $\Sigma$-representations mentioned above,
$$
(\psi_1-\psi_2)_+=\sum\nolimits_k(\varphi_k')_+,\;\;\; (\psi_1-\psi_2)_-=\sum\nolimits_k(\varphi_k')_-.
$$
Moreover, the functions $(\varphi_k')_+$ and $(\varphi_k')_-$ satisfy the conditions of Lemma~\ref{l::KSLemma} for each $k$. Applying Lemma~\ref{l::KSLemma} (more precisely, equality~\eqref{rearrangement}), we obtain
\begin{gather*}
h_X\left( \int_{a}^bf(t)(\psi_1(t)-\psi_2(t))_+dt,\int_{a}^bf(t)(\psi_1(t)-\psi_2(t))_-dt\right)
\\
=h_X\left( \int_{a}^bf(t)\sum\nolimits_k(\varphi_k')_+dt,\int_{a}^bf(t)\sum\nolimits_k(\varphi_k')_-dt\right)\\
\le \sum\nolimits_kh_X\left( \int_{a}^bf(t)(\varphi_k')_+dt,
\int_{a}^bf(t)(\varphi_k')_-dt\right)
\\
\le \sum\nolimits_k\int_{0}^{b-a}r(|\varphi_k|,t)\omega'(t)dt=\int_{0}^{b-a}R(\Psi;t)\omega'(t)dt.
\end{gather*}
Since $R(\Psi;\cdot)$ is non-increasing, $R(\Psi;b-a)=0$ and $\omega(0)=0$, integration by parts gives $\int_{0}^{b-a}R(\Psi;t)\omega'(t)dt=\int_0^{b-a}|R'(\Psi;t)|\omega(t)dt$. The theorem is proved.\end{proof}
\begin{remark}\label{r::Sharpness}
In the case of isotropic $X$, estimate~\eqref{estS} is sharp, provided the functions that are extremal in Lemma~\ref{l::KSLemma} for $\psi_1=(\varphi_k')_+$ and $\psi_2=(\varphi_k')_-$ can be ``glued'' so that the obtained function belongs to $H^\omega([a,b],X)$.
\end{remark}
Assume that the $\Sigma$-representation of the function $\Psi$ is
$
\Psi(t)=\sum\nolimits_{k=1}^n\varphi_k(t).
$
If $[\alpha_k,\beta_k]$ is the support of the hat-function $\varphi_k$, $k=1,\ldots,n$, then
$$
\alpha_1<\beta_1\le\alpha_2<\beta_2\le\ldots\le \alpha_n<\beta_n,
$$
and on the consecutive segments $[\alpha_k,\beta_k]$ and $[\alpha_{k+1},\beta_{k+1}]$ the functions $\varphi_k$ and $\varphi_{k+1}$ have opposite signs, $k=1,\ldots, n-1$.
Below we sketch the gluing procedure. We start with the case $X = \mathbb R$. Let $g_k\in H^\omega([\alpha_k,\beta_k],\mathbb R)$ be the extremal function for the functional $S((\varphi_k')_+,(\varphi_k')_-)$. On the set $\bigcup_{k=1}^n[\alpha_k,\beta_k]$ define the function $g$, setting $g(t) = g_k(t) +c_k$, $t\in [\alpha_k,\beta_k]$, where the constants $c_k$ are such that $g(\beta_k) = g(\alpha_{k+1})$, $k=1,\ldots, n-1$. Next, we extend $g$ to the whole segment $[a,b]$, setting $g(t) = g(\alpha_1)$, if $t\leq \alpha_1$; $g(t) =g(\beta_k)$, if $t\in (\beta_k,\alpha_{k+1})$, $k=1,\ldots, n-1$; and $g(t) =g(\beta_n)$, if $t\geq \beta_n$.
Lemma~4.1 from~\cite{stepanets2018} contains a criterion for $g$ to belong to $H^\omega([a,b],\mathbb R)$. In particular, this is true, if
for some $m$, $1\le m\le n$,
$$
\beta_1-\alpha_1\le\beta_2-\alpha_2\le\ldots\le\beta_m-\alpha_m \text{ and } \beta_m-\alpha_m\ge\beta_{m+1}-\alpha_{m+1}\ge\ldots\ge \beta_n-\alpha_n.
$$
If $g\in H^\omega([a,b],\mathbb R)$, then the function $(g+\gamma)_x(\cdot) + y$ with $\gamma\in \mathbb R$, $y\in X$ and $x\in X^{\rm c}\cap X^{\rm inv}$, $h_X(x,\theta) = 1$, is a glued extremal for~\eqref{estS}.
\subsection{Ostrowski type inequalities.}
The following theorem is a broad generalization of Theorem~2 from~\cite{barnet}, in which $X = \mathbb R$, $\omega(t) = t$, and $[c,d]\subset [a,b]$.
\begin{theorem}\label{th::ostrowskiInequality}
Let two segments $[a,b]$ and $[c,d]$ be given. Set $M = \max\{b-a,d-c\}$, $m = \min\{b-a,d-c\}$, for $\alpha,\beta\geq 0$ set $I(\alpha,\beta) = \int_{\alpha}^{\beta}\omega(s)ds$ and assume for definiteness that $a\leq c$. Then for all $f\in H^\omega([a,\max\{b,d\}],X)$
\begin{multline*}
h_X\left(\frac{1}{b-a}\int_a^b{f(t)dt}, \frac {1}{d-c}\int_c^d{f(t)dt}\right)
\\ \leq
\begin{cases}
\frac{M-m}{M^2}\left\{I\left(0,\frac{M(c-a)}{M-m}\right) + I\left(0,\frac{M(b-d)}{M-m}\right)\right\},& [c,d]\subset [a,b],\\
\frac{1}{M+m}I\left(\frac{M(b-c)}{m}, d-a\right) +
\frac{M-m}{M^2}I\left(0,\frac{M(b-c)}{m}\right), & b\in [c,d],\\
\frac{1}{M+m}I(c-b, d-a), & c\geq b.
\end{cases}
\end{multline*}
If $X$ is isotropic and $\omega$ is concave, then the inequality is sharp.
\end{theorem}
The theorem follows from Theorem~\ref{th::KSLemmaGeneralEstimate} applied to the functions $\psi_1=\frac 1{b-a}\chi_{[a,b]}$ and $\psi_2=\frac 1{d-c}\chi_{[c,d]}$. We omit the technical details of the proof. The extremal function can be obtained using the gluing procedure described above.
Direct computations show that Theorem~\ref{th::ostrowskiInequality} implies the following result.
\begin{corollary}\label{c::symmetricalInterval}
If in Theorem~\ref{th::ostrowskiInequality} additionally $c+d = a+b$, i.~e. the midpoints of the intervals $(a,b)$ and $(c,d)$ coincide, then
$$
h_X\left(\int_a^b{f(t)dt}, \frac {b-a}{d-c}\int_c^d{f(t)dt}\right)
\leq
\frac {4(c-a)}{b-a}\int_0^{(b-a)/2}{\omega(t)dt}.
$$
If $X$ is isotropic and $\omega$ is concave, then the inequality is sharp.
\end{corollary}
Applying Theorem~\ref{th::ostrowskiInequality} to the segment that contains $t$ and the segment $[c,d]$ (while both are contained in $[a,b]$) and then shrinking the first one into a point, we obtain
\begin{corollary}\label{c::valueIntegralDeviation}
Let $t\in [a,b]$, $[c,d]\subset [a,b]$ and $\omega (\cdot)$ be an arbitrary modulus of continuity. If $f\in H^\omega([a,b], X)$ and $P$ is the convexifying operator, then
\begin{equation}\label{ostr}
h_X\left( P(f(t)),\frac 1{d-c}\int_c^d f(u)du\right)
\leq \frac{1}{d-c}\int_c^d\omega(|s-t|)ds.
\end{equation}
If $X$ is isotropic, then the inequality is sharp. An extremal function is $(\omega(|\cdot -t|))_x$, where $x\in X^{\rm c}, h_X(x,\theta)=1$.
\end{corollary}
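For $X=\mathbb R$ (where $P$ is the identity) and $\omega(t)=t$, inequality~\eqref{ostr} and the fact that the extremal function attains it can be checked numerically. A Python sketch with our own helper names:

```python
# Sketch: numeric check of the Ostrowski type inequality (X = R, P = id,
# omega(t) = t); illustration only.

def average(f, c, d, n=10000):
    """Midpoint-rule approximation of (1/(d-c)) * int_c^d f(s) ds."""
    dt = (d - c) / n
    return sum(f(c + (k + 0.5) * dt) for k in range(n)) * dt / (d - c)

a, b = 0.0, 1.0
c, d, t = 0.2, 0.8, 0.5
f = lambda u: abs(u - t)        # the extremal function omega(|. - t|)

lhs = abs(f(t) - average(f, c, d))
rhs = average(lambda s: abs(s - t), c, d)   # (1/(d-c)) int_c^d omega(|s-t|) ds

assert lhs <= rhs + 1e-12       # inequality (ostr)
assert abs(lhs - rhs) < 1e-12   # the extremal function attains the bound
```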
Let a segment $[a,b]$ and numbers $t,h$ such that $a\le t<t+h\le (a+b)/2$ be given. Applying Theorem~\ref{th::KSLemmaGeneralEstimate} to $\psi_1=\frac 1{b-a}\chi_{[a,b]}$ and $\psi_2=\frac 1{2h}\left(\chi_{[t,t+h]}+\chi_{[a+b-t-h,a+b-t]}\right)$, and passing to the limit as $h\to 0$, we obtain a generalization of \cite[Theorem~2]{Guessab02}.
\begin{corollary}
For arbitrary $f\in H^\omega([a,b], X)$ and $t\in [a,(a+b)/2)$
\begin{multline}\label{guessab}
h_X\left(\frac 12 (P(f(t))+P(f(a+b-t))),\frac 1{b-a}\int_a^bf(u)du\right)
\\ \le \frac 2{b-a}\left(\int_0^{t-a}\omega(u)du+\int_0^{({a+b-2t})/2}\omega(u)du\right).
\end{multline}
Inequality~\eqref{guessab} becomes equality for $f(u)=\min\{\omega(|u-t|),\omega(|u+t-a-b|)\}\cdot x$, $x\in X^{\rm c}$.
\end{corollary}
\begin{remark}
Inequalities~\eqref{ostr} and~\eqref{guessab} can easily be proved directly.
\end{remark}
\section{On optimal recovery problems on the class $H^\omega([a,b],X)$}\label{s::HOmegaRecovery}
In this section we consider the problems of recovery of the convexifying operator $P$ and the integral
$
\Lambda(f) = \int_a^b f(t) dt
$
on the class $H^\omega([a,b],X)$, given the information operator
$
I_{\bf t}(f)
=
\left(\frac 1 {2h}\int_{t_1 - h}^{t_1 + h}f(t)dt,\dots,\frac 1 {2h}\int_{t_n - h}^{t_n + h}f(t)dt\right),
$
where $n\in\mathbb N$, $h > 0$ and ${\bf t} := (t_1,\dots,t_n)$ are such that
\begin{equation}\label{knots}
a\leq t_1-h<t_1+h<t_2-h<\ldots<t_n +h \leq b,
\end{equation}
using arbitrary methods of recovery $\Phi\colon X^n\to B([a,b],X)$ and $\Phi\colon X^n\to X$, respectively.
Define a vector $\tau = \tau({\bf t})$ with components
\begin{equation}\label{tau}
\tau_1 = a,\quad \tau_i = \frac 12(t_{i-1} + t_{i}),\; i = 2,\ldots, n,\quad \tau_{n+1} = b
\end{equation}
and set
\begin{equation}\label{t-star}
{\bf t}^* = (t_1^*,t_2^*,\ldots, t_n^*) = \left(a+\frac{b-a}{2n}, a+\frac{3(b-a)}{2n},\ldots, a+\frac{(2n-1)(b-a)}{2n}\right).
\end{equation}
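The knots~\eqref{tau} and~\eqref{t-star} are easy to compute; the sketch below (with our own helper names, and with ${\bf t}^*$ written for a generic segment $[a,b]$) checks that for ${\bf t}^*$ the points $\tau_i$ are equally spaced:

```python
# Sketch: the knots t* from (t-star) and the induced vector tau from
# (tau); helper names are ours.

def t_star(a, b, n):
    """t_i* = a + (2i - 1)(b - a)/(2n), i = 1, ..., n."""
    return [a + (2 * i - 1) * (b - a) / (2 * n) for i in range(1, n + 1)]

def tau(knots, a, b):
    """tau_1 = a, tau_i = (t_{i-1} + t_i)/2, tau_{n+1} = b."""
    n = len(knots)
    return [a] + [(knots[i - 1] + knots[i]) / 2 for i in range(1, n)] + [b]

a, b, n = 0.0, 1.0, 4
ts = t_star(a, b, n)
assert ts == [0.125, 0.375, 0.625, 0.875]
# For t* the points tau are equally spaced: a, a + (b-a)/n, ..., b
assert tau(ts, a, b) == [0.0, 0.25, 0.5, 0.75, 1.0]
```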
We need the following well known estimate for the value of the optimal recovery~\eqref{errorOfRecovery}.
\begin{lemma}\label{l::errorOfRecoveryFromBelow}
If $f,g\in W$ are such that $I(f) = I(g)$, then
$$ {\mathcal E}(\Lambda,W, I, X)\geq \frac 12 h_X(\Lambda(f),\Lambda(g)).$$
\end{lemma}
\begin{proof} We have
\begin{gather*}
\sup_{z\in W} h_X(\Lambda(z), \Phi(I(z)))
\geq
\max\left\{h_X(\Lambda(f), \Phi(I(f))), h_X(\Lambda(g), \Phi(I(g)))\right\}
\\ \geq
\frac12\left(h_X(\Lambda(f), \Phi(I(f))) + h_X(\Lambda(g), \Phi(I(f)))\right)
\geq
\frac12 h_X(\Lambda(f), \Lambda(g)).
\end{gather*}
\end{proof}
\subsection{Real-valued extremal functions}
For given $n\in\mathbb N$, $h>0$ and ${\bf t}$ that satisfy~\eqref{knots}, denote by $H_{\bf t}^h$ the class of functions $y\in H^\omega([a,b],\mathbb R)$ such that $\int_{t_i-h}^{t_i+h}y(t)dt = 0$ for all $i = 1,\ldots, n$. Note that for arbitrary $f\in H_{\bf t}^h$ and $x\in X^{\rm c}\cap X^{\rm inv}$, due to Lemma~\ref{l::LspaceValuedFunc}, $\int_{t_i-h}^{t_i+h}f_x(t)dt =\theta$, $i = 1,\ldots, n$.
\begin{lemma}\label{l::IdRecoveryLowerBound}
Let numbers $n\in\mathbb N$, $h>0$ and ${\bf t} := (t_1,\dots,t_n)$ that satisfy~\eqref{knots} be given. For arbitrary modulus of continuity $\omega$ there exists a function $f_{\bf t}\in H_{\bf t}^h$ such that
$$
\max\limits_{t\in[a,b]}|f_{\bf t}(t)|
\geq
\frac 1{2h}\int_{({b-a})/({2n})-h}^{({b-a})/({2n})+h}\omega(u)du.
$$
\end{lemma}
\begin{proof}
Among the $2n$ segments $[\tau_i,t_i]$ and $[t_i,\tau_{i+1}]$, $i = 1,\ldots, n$, there exists at least one with length at least $\frac{b-a}{2n}$. Let for definiteness it be the segment $[\tau_{i^*},t_{i^*}]$, $i^*\in \{1,\ldots, n\}$. We define a function $f_{\bf t}$ on the segment $[\tau_{i^*},t_{i^*}+h] = [a,t_{1}+h]$, if $i^* = 1$, or on the segment $[t_{i^*-1}-h,t_{i^*}+h]$, if $i^* > 1$, by the formula
$$
f_{\bf t}(u)
=
\frac 1{2h}\int_{t_{i^*}-h}^{t_{i^*}+h}\omega(|s-\tau_{i^*}|)ds-\omega (|u-\tau_{i^*}|).
$$
Next we continue this function to the whole segment $[a,b]$ as follows. We set $f_{\bf t}(u) = f_{\bf t}(t_{i^*}+h)$ on $[t_{i^*}+h,t_{i^*+1}-h]$; $f_{\bf t}(u) = f_{\bf t}(t_{i^*}+t_{i^*+1}-u)$ on $[t_{i^*+1}-h,t_{i^*+1}+h]$; $f_{\bf t}(u) = f_{\bf t}(t_{i^*+1}+h)$ on $[t_{i^*+1}+h,t_{i^*+2}-h]$; $f_{\bf t}(u) = f_{\bf t}(t_{i^*+1}+t_{i^*+2}-u)$ on $[t_{i^*+2}-h,t_{i^*+2}+h]$ and so on. The process goes analogously for $u<t_{i^*}-h$.
From the definition it follows that $f_{\bf t}\in H^h_{\bf t}$ and
$$
\max\limits_{t\in[a,b]}|f_{\bf t}(t)|
\geq
f_{\bf t}(\tau_{i^*})
=
\frac 1{2h}\int_{t_{i^*}-h}^{t_{i^*}+h}\omega(|u-\tau_{i^*}|)du
\geq
\frac 1{2h}\int_{({b-a})/({2n})-h}^{({b-a})/({2n})+h}\omega(u)du.
$$
\end{proof}
\begin{lemma}\label{l::lowerBound}
Let numbers $n\in\mathbb N$, $h>0$ and ${\bf t} := (t_1,\dots,t_n)$ that satisfy~\eqref{knots} be given. Let $\omega$ be a concave modulus of continuity. Then there exists a function $f_{\bf t}\in H_{\bf t}^h$ such that
$$\int_a^bf_{\bf t}(t)dt\geq 2n\left(1-\frac{2nh}{b-a}\right)\int_0^{(b-a)/ (2n)}\omega(t)dt.$$
\end{lemma}
\begin{proof}
Consider the even function $y_0$, defined on $[0,\infty)$ by the following formula.
\begin{equation}\label{lowerBound.1}
y_0(t) =
\begin{cases}
-\frac{2nh}{b-a}\omega\left(\frac {b-a} {2nh}(h-t)\right), & t\in [0,h], \\
\frac{b-a-2nh}{b-a}\omega\left(\frac {b-a}{{b-a}-2nh}(t-h)\right), & t\in \left[h, \frac {b-a} {2n}\right], \\
y_0\left(\frac {b-a} {2n}\right), & t > \frac {b-a} {2n}.
\end{cases}
\end{equation}
Note that the restriction of the function $y_0$ to the segment $\left[0,({b-a})/({2n})\right]$ is the function built according to formula~\eqref{extremalFunctions} with $\psi_1 =\frac 1h\chi_{[0,h]}$ and $\psi_2 = \frac {2n}{b-a-2nh} \chi_{\left[h,({b-a})/({2n})\right]}$.
Hence $y_0(t)\in H^\omega([0,(b-a)/(2n)],\mathbb R)$, since $\omega$ is a concave modulus of continuity.
Set $$y_1(t) := \min\{y_0(t-t_1), y_0(t-t_2),\dots,y_0(t-t_n)\},\; t\in\mathbb R.$$
Set $s_0:=a$, $s_i = (t_i+t_{i+1})/2$, $i=1,\ldots, n-1$, $s_n:=b$.
Then $y_1(t) = y_0(t-t_k)$, $t\in [s_{k-1}, s_k]$, $k=1,\dots,n$. Note that $y_1\in H^\omega([a,b],\mathbb R)$. Set $y(t):=y_1(t) + C$, where the constant $C$ is chosen in such a way that $\int_{-h}^h (y_0(t)+C)dt=0$. This implies
\begin{equation}\label{lowerBound.2}
\int_{t_k-h}^{t_k+h} y(t)dt=0,\;k=1,\dots,n.
\end{equation}
Hence $y\in H_{\bf t}^h$. We estimate the integral $\int_a^by(t)dt$ from below. The function $y_0$ is even and $J(t) := \int_{0}^{t}y_0(s)ds$ is convex, since $y_0$ is non-decreasing on $[0,\infty)$. Hence
\begin{gather*}
\int_{a}^{b} y(t)dt=C(b-a)+\int_{a}^{b} y_1(t)dt=C(b-a)+\sum\nolimits_{k=1}^n\int_{s_{k-1}}^{s_k}y_0(t-t_k)dt=
C(b-a)
\\+\sum\nolimits_{k=1}^n\int_{s_{k-1}-t_k}^{s_k-t_k}y_0(t)dt
=
C(b-a)+\sum\nolimits_{k=1}^n J(s_k-t_k) +\sum\nolimits_{k=1}^n J(t_k - s_{k-1}) \geq C(b-a)
\\
+
2n J\left(\frac 1 {2n}\sum\nolimits_{k=1}^n (s_k-t_k) + \frac 1 {2n}\sum\nolimits_{k=1}^n (t_k - s_{k-1})\right)=C(b-a)+2nJ\left(\frac {b-a} {2n}\right).
\end{gather*}
Using~\eqref{lowerBound.1} and~\eqref{lowerBound.2} to compute the right-hand side of the latter inequality, we obtain
$$C(b-a)+2nJ\left(\frac {b-a} {2n}\right) = 2n\left(1-\frac{2nh}{b-a}\right)\int_0^{(b-a)/ (2n)}\omega(t)dt$$
and the lemma is proved.\end{proof}
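The final identity of the proof can be verified numerically. The sketch below (illustrative only; the parameters $a,b,n,h$ and the modulus $\omega(t)=\sqrt t$ are arbitrary choices) builds $y_0$ from~\eqref{lowerBound.1}, computes the constant $C$ from~\eqref{lowerBound.2}, and checks that $C(b-a)+2nJ\left(\frac{b-a}{2n}\right)$ equals the claimed lower bound:

```python
import numpy as np

a, b, n, h = 0.0, 1.0, 2, 0.05
c = (b - a) / (2 * n)
omega = np.sqrt          # a concave modulus of continuity

def y0(t):
    t = abs(t)
    if t <= h:
        return -(2*n*h/(b-a)) * omega((b-a)/(2*n*h) * (h - t))
    if t <= c:
        return (b-a-2*n*h)/(b-a) * omega((b-a)/((b-a)-2*n*h) * (t - h))
    return y0(c)

# C chosen so that the mean of y0 + C over [-h, h] is zero
ts = np.linspace(-h, h, 20001)
C = -float(np.mean([y0(t) for t in ts]))

# J(c) = \int_0^c y0(s) ds, computed numerically
s = np.linspace(0.0, c, 20001)
J = c * float(np.mean([y0(t) for t in s]))

lhs = C * (b - a) + 2 * n * J
u = np.linspace(0.0, c, 20001)
rhs = 2*n*(1 - 2*n*h/(b-a)) * c * float(np.mean(omega(u)))
assert abs(lhs - rhs) < 1e-3
```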
\subsection{Optimal recovery of the convexifying operator.}
\begin{theorem}
Let numbers $n\in\mathbb N$, $h>0$ and ${\bf t} := (t_1,\dots,t_n)$ that satisfy~\eqref{knots} be given. For the convexifying operator $P$ and arbitrary modulus of continuity $\omega$
$$
\inf\limits_{\bf t}{\mathcal E}(P,H^\omega([a,b],X), I_{\bf t}, B([a,b],X))
=
\frac 1{2h}\int_{({b-a})/({2n})-h}^{({b-a})/({2n})+h}\omega(u)du.
$$
The optimal informational operator is $I_{{\bf t}^*}$ and the optimal recovery method is
$$\Phi^*(I_{{\bf t}^*}(f))(u) = \frac{1}{2h}\int_{t_k^* - h}^{t_k^* + h}f(t)dt,\;u\in [\tau_{k}({\bf t}^*),\tau_{k+1}({\bf t}^*)],$$
where the vectors ${\bf t}^*$ and $\tau = \tau({\bf t}^*)$ are defined in~\eqref{t-star} and~\eqref{tau} respectively.
\end{theorem}
\begin{proof}
For $f\in H^\omega([a,b],X)$, $t\in [\tau_i({\bf t}^*),\tau_{i+1}({\bf t}^*)]$, and $i\in\{1,\ldots, n\}$, Corollary~\ref{c::valueIntegralDeviation} implies
\begin{multline*}
h_X(P(f(t)),\Phi^*(I_{{\bf t}^*}(f))(t))
=
h_X\left(P(f(t)),\frac 1{2h}\int_{t_i^*-h}^{t_i^*+h}f(u)du\right)
\le
\frac 1{2h}\int_{t_i^*-h}^{t_i^*+h}\omega(|u-t|)du
\\ \le
\frac 1{2h}\int_{t_i^*-h}^{t_i^*+h}\omega(|u-\tau_i({\bf t}^*)|)du
=
\frac 1{2h}\int_{({b-a})/({2n})-h}^{({b-a})/({2n})+h}\omega(u)du.
\end{multline*}
Hence
$$
{\mathcal E}(P,H^\omega([a,b],X), I_{{\bf t}^*}, B([a,b],X))
\le
\frac 1{2h}\int_{({b-a})/{(2n)}-h}^{({b-a})/({2n})+h}\omega(u)du.
$$
Choose $x\in X^{\rm c}\cap X^{\rm {inv}}$ with $h_X(x,\theta)=1$, and for the function $f_{\bf t}$ from Lemma~\ref{l::IdRecoveryLowerBound} set
\begin{equation}\label{LValuedExtremalFunc}
\underline{F}_n=(f_{\bf t})_{x'} \text{ and }
\overline{F}_n=(f_{\bf t})_x.
\end{equation}
Note that the functions $\underline{F}_n$ and $\overline{F}_n$ are convex-valued. Using Lemma~\ref{l::errorOfRecoveryFromBelow}, we obtain
\begin{multline*}
{\mathcal E}(P,H^\omega([a,b],X), I_{\bf t}, B([a,b],X))
\ge
\frac 12\max\limits_{t\in[a,b]}h_X(\overline{F}_n(t),\underline{F}_n(t))
\\ =
\max\limits_{t\in[a,b]}|f_{\bf t}(t)|
\geq
\frac 1{2h}\int_{({b-a})/({2n})-h}^{({b-a})/({2n})+h}\omega(u)du.
\end{multline*}
The theorem is proved. \end{proof}
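For the linear modulus $\omega(u)=u$ the optimal error of the theorem equals $\frac 1{2h}\int_{c-h}^{c+h}u\,du = c$ with $c=(b-a)/(2n)$, i.e.\ it is independent of $h$ (for $h\le c$). A minimal numerical illustration of this observation (the parameter values are arbitrary):

```python
import numpy as np

a, b, n = 0.0, 1.0, 4
c = (b - a) / (2 * n)
for h in [0.01, 0.05, 0.1]:
    u = np.linspace(c - h, c + h, 10001)
    err = float(np.mean(u))   # (1/2h) * integral of omega(u) = u over the window
    assert abs(err - c) < 1e-9
```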
\subsection{Optimal recovery of the integral}
\begin{theorem}
Let numbers $n\in\mathbb N$, $h>0$, and ${\bf t} := (t_1,\dots,t_n)$ that satisfy~\eqref{knots} be given. For a concave modulus of continuity $\omega$,
$$
\inf\limits_{\bf t}{\mathcal E}(\Lambda,H^\omega([a,b],X), I_{\bf t}, X)
=
2n\left(1-\frac{2nh}{b-a}\right)\int_0^{(b-a)/ (2n)}\omega(t)dt.
$$
The optimal informational operator is $I_{{\bf t}^*}$ and the optimal recovery method is
$$\Phi^*(I_{{\bf t}^*}(f)) = \frac {b-a}{n}\sum\nolimits_{k=1}^n\frac{1}{2h}\int_{t_k^* - h}^{t_k^* + h}f(t)dt,$$
where the vector ${\bf t}^*$ is defined in~\eqref{t-star}.
\end{theorem}
\begin{proof}
For each $f\in H^\omega([a,b],X)$ we obtain, using Corollary~\ref{c::symmetricalInterval},
\begin{multline*}
h_X\left(
\int_{a}^bf(t)dt, \Phi^*(I_{{\bf t}^*}(f))\right)
\leq
\sum_{k=1}^nh_X\left(
{\int_{ \frac{(k-1)(b-a)}{n}}^{ \frac{k(b-a)}{n}}f(t)dt},
\frac {b-a}{2nh}
{\int_{ \frac{(2k-1)(b-a)}{2n}- h}^{ \frac{(2k-1)(b-a)}{2n} + h}f(t)dt}\right)
\\ \leq
\sum_{k=1}^n 2\left(1-\frac{2nh}{b-a}\right)
\int_0^{(b-a)/ (2n)}\omega(t)dt
=
2n\left(1-\frac{2nh}{b-a}\right)\int_0^{(b-a)/(2n)}\omega(t)dt.
\end{multline*}
Hence
$
\inf\limits_{\bf t}{\mathcal E}(\Lambda,H^\omega([a,b],X), I_{\bf t}, X)
\leq
2n\left(1-\frac{2nh}{b-a}\right)\int_0^{(b-a)/ (2n)}\omega(t)dt.
$
Now let an information set ${\bf t}$ be fixed and let $f_{\bf t}$ be the function from Lemma~\ref{l::lowerBound}. Using the functions~\eqref{LValuedExtremalFunc} built from this $f_{\bf t}$, together with Lemmas~\ref{l::errorOfRecoveryFromBelow} and~\ref{l::lowerBound}, we obtain
\begin{gather*}
{\mathcal E}(\Lambda,H^\omega([a,b],X), I_{\bf t}, X)
\\ \geq
\frac 12h_X\left(\int_a^b \overline{F}_n(t)dt , \int_a^b \underline{F}_n(t)dt\right)
=
\int_a^b f_{\bf t}(t)dt
\geq
2n\left(1 - \frac{2nh}{b-a}\right)\int_0^{ (b-a)/(2n)}\omega(t)dt.
\end{gather*}
\end{proof}
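The quadrature bound of the theorem can be sanity-checked numerically. The sketch below takes $\omega(t)=t$ (so $\int_0^c\omega = c^2/2$) and the Lipschitz-1 real function $f(t)=|t-0.3|$, and assumes, as in the proof, that the optimal knots $t_k^*$ are the window centres $(2k-1)(b-a)/(2n)$; it checks that the recovery method's error does not exceed the stated bound:

```python
import numpy as np

a, b, n, h = 0.0, 1.0, 3, 0.04
f = lambda t: np.abs(t - 0.3)   # Lipschitz 1, i.e. f is in H^omega for omega(t) = t
t_star = [a + (2*k - 1)*(b - a)/(2*n) for k in range(1, n + 1)]
# recovery method: scaled sum of window averages around the knots
Phi = (b - a)/n * sum(
    float(np.mean(f(np.linspace(tk - h, tk + h, 10001)))) for tk in t_star)
grid = np.linspace(a, b, 200001)
integral = (b - a) * float(np.mean(f(grid)))
c = (b - a) / (2 * n)
bound = 2*n*(1 - 2*n*h/(b - a)) * c**2 / 2   # \int_0^c omega(t) dt = c^2/2
assert abs(integral - Phi) <= bound + 1e-6
```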
\section{On optimal recovery problems on the class $W^1H^\omega ([a,b],X)$}\label{s::PolylineApproximation}
\begin{definition}\label{W1Homega}
Given a modulus of continuity $\omega(\cdot)$, denote by $W^1H^\omega([a,b],X)$ the class of functions $f$ of the form $$f(t)=x+\int_a^t\phi(s)ds, \text{ where } \phi\in H^\omega([a,b],X),\, x\in X^c.$$
\end{definition}
Note that every such function $f$ is convex-valued, has a Hukuhara type derivative $\mathcal{D}_H f$, and $\mathcal{D}_H f=P(\phi)$, see \cite[Lemma~2.24]{babenko19}, where $P$ is the convexifying operator.
\subsection{Real-valued extremal functions}
Let a partition ${\bf t} = (t_0,\ldots, t_n)$ of the segment $[a,b]$ be given,
\begin{equation}\label{partition}
a = t_0<t_1<\ldots <t_{n-1}<t_n = b.
\end{equation}
\begin{lemma}\label{l::omegaSpline}
Let $\omega$ be a concave modulus of continuity and a partition ${\bf t}$ be given. Then there exists a function $f_{\bf t}\in W^1H^\omega ([a,b],\mathbb R)$ such that $f_{\bf t}(t_i) = 0$, $i = 0,1,\ldots, n$, and
\begin{equation}\label{splineNorm}
\max\limits_{t\in [a,b]}|f_{\bf t}(t)|\geq \frac 14\int_0^{(b-a)/n}\omega(u)du.
\end{equation}
If the partition ${\bf t}$ is uniform, then inequality~\eqref{splineNorm} becomes equality.
\end{lemma}
\begin{proof}
In the space $\mathbb R^{n+1}$ consider the sphere
$$
\mathbb{S}^n=\left\{ \xi=(\xi_1,\ldots ,\xi_{n+1})\in \mathbb R^{n+1}\colon \sum\nolimits_{i=1}^{n+1}|\xi_i|=b-a\right\}.
$$
Each $\xi\in \mathbb{S}^n$ generates a set of points on the segment $[a,b]$
$$
\eta_0(\xi) = a,\quad \eta_i(\xi)=\eta_{i-1}(\xi)+|\xi_i|,\; i = 1,\ldots, n,\quad \eta_{n+1}(\xi) = b.
$$
Let $h(t) = \frac 12 \omega(2|t|)$, $t\in \mathbb R$. For $\xi\in \mathbb{S}^n$, set $h_\xi(t) = \min\limits_{k = \overline{1,n}} h(t-\eta_k(\xi))$ and
$g_\xi(t) = h_\xi(t)\cdot {\rm sgn\,}\xi_i$ for $t\in [\eta_{i-1}(\xi),\eta_{i}(\xi)]$, $i = 1, \ldots, n+1$. Then, due to concavity of $\omega$, $g_\xi\in H^\omega([a,b],\mathbb R)$.
Set
$$
G_\xi(t)=\int_a^tg_\xi(u)du
$$
and define the vector field on $\mathbb{S}^n$, by the formula
$
\xi \to (G_\xi(t_1),\ldots ,G_\xi(t_n)).
$
It is easy to see that this field is continuous and odd. The Borsuk theorem implies that there exists $\xi^*=\xi^*({\bf t})=(\xi^*_1,\ldots ,\xi^*_{n+1})\in \mathbb{S}^n$ such that
$
G_{\xi^*}(t_1)=G_{\xi^*}(t_2)=\ldots =G_{\xi^*}(t_n)=0.
$
Moreover, $G_{\xi^*}(a)=0$. Hence the function $ G_{\xi^*}$ has at least $n+1$ zeros on $[a,b]$ and thus $g_{\xi^*}=G'_{\xi^*}$ has at least $n$ changes of sign. Since $g_{\xi^*}$ can change its sign only at the points $\eta_1(\xi^*),\ldots ,\eta_n(\xi^*)$, all these points are distinct, $g_{\xi^*}$ has exactly $n$ sign changes on $[a,b]$, and $\eta_i(\xi^*)$ is the unique point of local extremum of $G_{\xi^*}$ inside the segment $[t_{i-1},t_i]$, $i=1,\ldots, n$.
Since $\omega$ is non-decreasing, the function $u\to\int_0^{u}\omega(t)dt$ is convex, hence applying the Jensen inequality we obtain
\begin{gather*}
\bigvee\nolimits_a^bG_{\xi^*}=\int_a^b\left|g_{\xi^*}(u)\right|du
=
\frac 12\int_0^{|\xi^*_1|}\omega(2u)du+
2\frac 12\sum\nolimits_{i=2}^{n}\int_0^{|\xi^*_i|/2}\omega(2u)du
\\ +\frac 12\int_0^{|\xi^*_{n+1}|}\omega(2u)du
\ge
\int_0^{(|\xi^*_1|+|\xi^*_{n+1}|)/2}\omega(2u)du
+
\sum\nolimits_{i=2}^{n}\int_0^{|\xi^*_i|/2}\omega(2u)du
\\ =
\frac{1}2\int_0^{|\xi^*_1|+|\xi^*_{n+1}|}\omega(u)du+\frac{1}2\sum\nolimits_{i=2}^{n}\int_0^{|\xi^*_i|}\omega(u)du
\ge \frac n2\int_0^{(b-a)/n}\omega(u)du.
\end{gather*}
The function $G_{\xi^*}$ is monotone on the segments $[a,\eta_1(\xi^*)], [\eta_i(\xi^*),\eta_{i+1}(\xi^*)]$, $i = 1,\ldots, n-1$, $[\eta_n(\xi^*),b]$, and $G_{\xi^*}(a) = G_{\xi^*}(b) = 0$. Hence
$$
2n\max\limits_i |G_{\xi^*}(\eta_i(\xi^*))|
\ge
2\sum\nolimits_{i=1}^n |G_{\xi^*}(\eta_i(\xi^*))|
=\bigvee\nolimits_a^bG_{\xi^*}
\geq
\frac n2\int_0^{(b-a)/n}\omega(u)du,
$$
which implies~\eqref{splineNorm} for $f_{\bf t}=G_{\xi^*({\bf t})}$. If ${\bf t}$ is the uniform partition, all inequalities above become equalities, and hence~\eqref{splineNorm} also becomes equality.
The lemma is proved.\end{proof}
\subsection{Optimal recovery of the identity operator.}
Using the definition of the class $W^1H^\omega ([a,b],X)$ and applying Theorem~\ref{th::ostrowskiInequality} to $\mathcal{D}_H f\in H^\omega ([a,b],X)$, for an isotropic $L$-space $X$, $f\in W^1H^\omega ([a,b],X)$, and $t\in [a,b]$ one has
\begin{multline}\label{*}
h_X\left( f(t),\frac{b-t}{b-a}f(a)+\frac{t-a}{b-a}f(b)\right)
=
h_X\left( f(a)+\int_a^t\mathcal{D}_H f(u)du,\right.
\\
\left.\frac{b-t}{b-a}f(a)+\frac{t-a}{b-a}f(a)
+
\frac{t-a}{b-a}\int_a^b\mathcal{D}_H f(u)du\right)
\\ =
h_X\left(\int_a^t\mathcal{D}_H f(u)du,\right.
\left.\frac{t-a}{b-a}\int_a^b\mathcal{D}_H f(u)du\right)
\le
\frac{(b-t)(t-a)}{(b-a)^2}\int_0^{b-a}\omega(u)du.
\end{multline}
Next we apply the obtained inequality to prove an estimate of the deviation of a function $f\in W^1H^\omega ([a,b],X)$ at a fixed point $t\in [a,b]$ from the interpolation polygonal function. Let a partition ${\bf t}$ as in~\eqref{partition} be given.
The interpolation polygonal function is
\begin{equation}\label{interpolationPolyline}
l_f({\bf t};t)=\frac{t_{k+1}-t}{t_{k+1}-t_k}f(t_k)+\frac{t-t_k}{t_{k+1}-t_k}f(t_{k+1}),\; t\in[t_k,t_{k+1}].
\end{equation}
Applying~\eqref{*}, we obtain that for $t\in [t_k, t_{k+1}]$
\begin{equation}\label{polylineUpperEstimate}
h_X(f(t),l_f({\bf t};t))\le \frac{(t_{k+1}-t)(t-t_k)}{(t_{k+1}-t_k)^2}\int_0^{t_{k+1}-t_k}\omega(u)du.
\end{equation}
Therefore for a uniform partition ${\bf t}^*$ of the segment $[a,b]$ the following generalization of a result by Malozemov~\cite{Malozemov66} holds: for each $f\in W^1H^\omega ([a,b],X)$
\begin{equation}\label{polylineDeviation}
\max\limits_{t\in [a,b]}h_X(f(t),l_f({\bf t}^*;t))\le \frac 1{4}\int_0^{(b-a)/n}\omega(u)du.
\end{equation}
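Estimate~\eqref{polylineDeviation} can be illustrated numerically in the real-valued case. Taking $\omega(u)=u$, the right-hand side equals $\frac 14\cdot\frac 12\left(\frac{b-a}n\right)^2 = \frac{(b-a)^2}{8n^2}$, the classical linear-interpolation bound. The sketch below (parameters chosen for illustration) checks it for $f=\sin$ on $[0,\pi]$, whose derivative $\cos$ is Lipschitz with constant $1$:

```python
import numpy as np

a, b, n = 0.0, np.pi, 5
f = np.sin                       # f' = cos lies in H^omega for omega(u) = u
knots = np.linspace(a, b, n + 1)
t = np.linspace(a, b, 200001)
poly = np.interp(t, knots, f(knots))      # interpolation polygonal function
err = float(np.max(np.abs(f(t) - poly)))
bound = ((b - a)/n)**2 / 8       # (1/4) * \int_0^{(b-a)/n} u du
assert err <= bound + 1e-9
```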
\begin{theorem} If ${\bf t}$ is a partition of $[a,b]$,
$
I_{\bf t}(f)=(f(t_0),f(t_1),\ldots , f(t_n))
$
is the information operator and ${\rm Id}$ is the identity operator, then for an isotropic $L$-space $X$
$$
\inf\limits_{\bf t}\mathcal{E}({\rm Id}, W^1H^\omega([a,b],X),I_{\bf t}, C([a,b],X))
=
\frac 14\int_0^{(b-a)/n}\omega(t)dt.
$$
The optimal information operator is $I_{{\bf t}^*}$ where ${\bf t}^*$ is the uniform partition, and the optimal method of recovery is $\Phi(I_{{\bf t}^*}(f)) =l_f({\bf t}^*)$, where $l_f({\bf t})$ is defined by~\eqref{interpolationPolyline}.
\end{theorem}
\begin{proof}
For an arbitrary partition ${\bf t}$, let $f_{\bf t}$ be the function from Lemma~\ref{l::omegaSpline}, and let $x\in X^c\cap X^{\rm inv}$ with $h_X(x,\theta) = 1$.
Using Lemmas~\ref{l::errorOfRecoveryFromBelow} and~\ref{l::distBetweenInverseElems}, and the isotropy of $X$, we obtain
$$
\mathcal{E}({\rm Id}, W^1H^\omega([a,b],X),I_{\bf t}, C([a,b],X))
\ge
{ \frac 12\max\limits_{t\in [a,b]}h_X\left((f_{\bf t})_x(t), (f_{\bf t})_{x'}(t)\right)}
$$
$$
= \frac 12 \max\limits_{t\in [a,b]}h_X(2(f_{\bf t})_+(t)\cdot x,2(f_{\bf t})_-(t)\cdot x) =
\max\limits_{t\in [a,b]}|f_{\bf t}\left(t\right)|
\geq
\frac 14\int_0^{(b-a)/n}\omega(t)dt.
$$
It follows from~\eqref{polylineDeviation} that, in the case of the uniform partition, all the above inequalities become equalities.
The theorem is proved.\end{proof}
\subsection{Recovery of the derivative}
Consider the problem of the deviation of the Hukuhara type derivative of a function $f\in W^1H^\omega ([a,b], X)$ from the derivative of its interpolation polygonal function.
For $t\in (t_k,t_{k+1})$, the Hukuhara type derivative of the polygonal function $l_f({\bf t})$ that interpolates $f$ at the points of the partition ${\bf t}$ equals
$$
\mathcal{D}_H l_f({\bf t};t)=\frac{f(t_{k+1})-_H f(t_k)}{t_{k+1}-t_k}=\frac1{t_{k+1}-t_k}\int_{t_k}^{t_{k+1}}\mathcal{D}_H f(u)du, \;k = 0,\ldots, n-1.
$$
We define it at the points $t_k$, setting
$$
\mathcal{D}_H l_f({\bf t};t_k)=
\begin{cases}
(f(t_{k+1})-_H f(t_k))/(t_{k+1}-t_k), & \text{ if } k=0,1,\ldots,n-1,\\
(f(t_n)-_H f(t_{n-1}))/(t_{n}-t_{n-1}), & \text{ if } k=n.
\end{cases}
$$
For $t\in [t_k, t_{k+1}]$ we obtain, using Corollary~\ref{c::valueIntegralDeviation},
\begin{equation}\label{derivativeDeviation}
h_X\left(\mathcal{D}_H f(t), \mathcal{D}_H l_f({\bf t};t)\right)\le \frac 1{t_{k+1}-t_k}\int_{t_k}^{t_{k+1}}\omega (|s-t|)ds.
\end{equation}
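For $\omega(u)=u$ the right-hand side of~\eqref{derivativeDeviation} evaluates to $\frac{(t-t_k)^2+(t_{k+1}-t)^2}{2(t_{k+1}-t_k)}$. A small numerical check of the estimate in the real-valued case (illustrative choices: $f=\sin$, uniform partition of $[0,\pi]$):

```python
import numpy as np

a, b, n = 0.0, np.pi, 4
knots = np.linspace(a, b, n + 1)
d = (b - a) / n
for k in range(n):
    # derivative of the interpolation polygonal function on (t_k, t_{k+1})
    slope = (np.sin(knots[k+1]) - np.sin(knots[k])) / d
    for t in np.linspace(knots[k], knots[k+1], 101):
        # (1/d) * \int_{t_k}^{t_{k+1}} |s - t| ds for omega(u) = u
        bound = ((t - knots[k])**2 + (knots[k+1] - t)**2) / (2 * d)
        assert abs(np.cos(t) - slope) <= bound + 1e-12
```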
The following theorem generalizes the results from~\cite{Malozemov67}.
\begin{theorem}
Let $\omega$ be an arbitrary modulus of continuity and ${\bf t^*}=(t^*_0,\ldots ,t^*_n)$ be the uniform partition of the segment $[a,b]$. Then
$$
\mathcal{E}(\mathcal{D}_H, W^1H^\omega([a,b],X),I_{{\rm\bf t}^*}, B([a,b],X))=\frac n{b-a}\int_{0}^{(b-a)/n}\omega (u)du.
$$
The optimal method of recovery is
$$
\Phi (f(t_0^*),f(t_1^*),\ldots ,f(t_n^*))=\mathcal{D}_H l_f({\bf t}^*).
$$
\end{theorem}
\begin{proof}
From~\eqref{derivativeDeviation} it follows that
$$
\sup\limits_{f\in W^1H^\omega ([a,b], X)}\sup\limits_{t\in [a,b]}h_X\left(\mathcal{D}_H f(t), \mathcal{D}_H l_f({\bf t}^*,t)\right)
\leq
\frac n{b-a}\int_{0}^{(b-a)/n}\omega (u)du.
$$
An extremal function is built as follows. Set
$
g_0(t)=\min\limits_{k\colon 2k\le n}\omega(|t-t_{2k}^*|)
$
and
$$
g(t)=g_0(t)-\frac{1}{b-a}\int_a^bg_0(u)du.
$$
The function $f_{{\bf t}^*}(t):=\int_a^tg(u)du$ belongs to $W^1H^\omega([a,b],\mathbb R)$. Moreover, since ${\bf t}^*$ is the uniform partition, $f_{{\bf t}^*}(t_k) = 0$, $k = 0,\ldots, n$, and hence $l_f({\bf t}^*)\equiv 0$. Finally, applying Lemma~\ref{l::errorOfRecoveryFromBelow} to functions
$(f_{{\bf t}^*})_x$ and $(f_{{\bf t}^*})_{x'}$ ($x\in X^c\cap X^{\rm inv}$, $h_X(x,\theta)=1$)
we obtain
\begin{gather*}
\mathcal{E}(\mathcal{D}_H, W^1H^\omega([a,b],X),I_{{\rm\bf t}^{*}}, B([a,b],X))
\geq
\frac 12h_X\left(\mathcal{D}_H(f_{{\bf t}^*})_x(a), \mathcal{D}_H(f_{{\bf t}^*})_{x'}(a)\right)
\\ =
|f'_{{\bf t}^*}(a)|
=
\frac{1}{b-a}\int_a^bg_0(u)du
=
\frac n{b-a}\int_{0}^{(b-a)/n}\omega (u)du.
\end{gather*}
The theorem is proved.\end{proof}
\section{On Inequalities of Landau Type and Stechkin's Problem for Hukuhara Type Divided Differences and Derivatives}\label{s::StechkinPr}
\subsection{Deviation of Hukuhara type divided differences and derivatives}
Let $t\in [a,b]$ and non-negative numbers $\gamma_1,\gamma_2,h_1,h_2$ such that
\begin{equation}\label{h,gamma}
\gamma_1+\gamma_2>0,\quad h_1+h_2>0, \quad\text{and}\quad
[t-\gamma_1,t+\gamma_2]\subset [t-h_1,t+h_2]\subset [a,b]
\end{equation}
be given. For a function $f\in W^1H^\omega([a,b],X)$ set
$$
\Delta^H_{\gamma_1,\gamma_2}f(t)=\frac{f(t+\gamma_2)-_H f(t-\gamma_1)}{\gamma_1+\gamma_2}.
$$
Applying Theorem~\ref{th::ostrowskiInequality} to the segments $[t-\gamma_1,t+\gamma_2]$ and $[t-h_1,t+h_2]$, and writing
$I(\alpha)$ instead of $I(0,\alpha)$, we obtain
$$
h_X(\Delta^H_{\gamma_1,\gamma_2}f(t),\Delta^H_{h_1,h_2}f(t))
=h_X\left(\frac 1{\gamma_1+\gamma_2}\int_{t-\gamma_1}^{t+\gamma_2}\mathcal{D}_H f(u)du,\frac 1{h_1+h_2} \int_{t-h_1}^{t+h_2}\mathcal{D}_H f(u)du\right)
$$
$$
\le \frac{(h_1-\gamma_1)+(h_2-\gamma_2)}{(h_1+h_2)^2}\left\{ I\left(\frac{(h_1+h_2)(h_1-\gamma_1)}{(h_1-\gamma_1)+(h_2-\gamma_2)}\right)\right.
+\left.I\left(\frac{(h_1+h_2)(h_2-\gamma_2)}{(h_1-\gamma_1)+(h_2-\gamma_2)}\right)\right\}
$$
$$
=:K(\gamma_1,\gamma_2;h_1,h_2).
$$
If $\omega$ is a concave modulus of continuity, then the estimate
\begin{equation}\label{a}
h_X(\Delta^H_{\gamma_1,\gamma_2}f(t),\Delta^H_{h_1,h_2}f(t))\le K(\gamma_1,\gamma_2;h_1,h_2)
\end{equation}
is sharp. Extremal functions can be built as follows. Start with the extremal function $g$ from Theorem~\ref{th::ostrowskiInequality} for the segments $[t-\gamma_1,t+\gamma_2]$ and $[t-h_1,t+h_2]$. Continue it setting \begin{equation}\label{continuation}
g(u)=g(t-h_1) \text{ for } u\le t-h_1 \text{ and } g(u)=g(t+h_2) \text{ for } u\ge t+h_2.
\end{equation}
Inequality~\eqref{a} becomes equality on the functions
$
f(u)=\int_a^ug(s)ds+y, \; u\in [a,b]$, $y\in X^{\rm c}.
$
Shrinking the segment $[t-\gamma_1, t+\gamma_2]$ to the point $t$, we obtain
\begin{equation}\label{a'}
h_X(\mathcal{D}_H f(t),\Delta_{h_1,h_2}^Hf(t))\le\frac {I(h_1)+I(h_2)}{h_1+h_2}. \end{equation}
This inequality is sharp for an arbitrary modulus of continuity $\omega$. Extremal functions can be built analogously to those for~\eqref{a}, except that one starts from the extremal function of Corollary~\ref{c::valueIntegralDeviation}.
\subsection{Landau type inequalities}
Below, for brevity, we write $\overline{W}^1H^\omega([a,b],X):=\bigcup_{k>0}k\cdot {W}^1H^\omega([a,b],X)$ and
$$
\|x\|_X=h_X(x,\theta), \;
\| f\|_{\omega,X}=\sup\limits_{\substack{t',t''\in [a,b] \\ t'\neq t''}}\frac{h_X(f(t'),f(t''))}{\omega(|t'-t''|)},\;\;\;
\| f\|_{C([a,b],X)}=\sup\limits_{t\in [a,b]}\| f(t)\|_X.
$$
\begin{theorem}\label{1stLandauIn}
Let $\omega$ be a modulus of continuity, and $X$ be an isotropic $L$-space. For all $t\in [a,b]$, non-negative $\gamma_1,\gamma_2, h_1,h_2$ that satisfy~\eqref{h,gamma}, and $f\in \overline{W}^1H^\omega([a,b],X)$,
\begin{equation}\label{b}
\| \Delta_{\gamma_1,\gamma_2}^Hf(t)\|_X\le K(\gamma_1,\gamma_2; h_1,h_2)\| \mathcal{D}_H f\|_{\omega,X}+\| \Delta_{h_1,h_2}^Hf(t)\|_X,
\end{equation}
\begin{equation}\label{c}
\|\mathcal{D}_H f(t)\|_X\le \frac {I(h_1)+I(h_2)}{h_1+h_2}\| \mathcal{D}_H f\|_{\omega,X}+\| \Delta^H_{h_1,h_2}f(t)\|_X.
\end{equation}
Inequality~\eqref{b} is sharp for concave $\omega$. Inequality~\eqref{c} is sharp for arbitrary $\omega$.
\end{theorem}
\begin{proof} Inequalities~\eqref{b} and~\eqref{c} follow from~\eqref{a} and~\eqref{a'} respectively. An extremal function for~\eqref{b} can be built as follows. Let $g$ be a non-negative extremal function in Theorem~\ref{th::ostrowskiInequality} for the case $X=\mathbb R$ and the segments $[t-h_1,t+h_2]$, $[t-\gamma_1,t+\gamma_2]$. Continue it to the whole segment $[a,b]$ by~\eqref{continuation}.
Note that, due to the construction of $g$, we can assume that there exists $\gamma\in (t-\gamma_1,t+\gamma_2)$ such that $g$ increases on $(t-h_1,\gamma)$ and decreases on $(\gamma,t+h_2)$. Hence
$$
\frac{1}{\gamma_1+\gamma_2}\int_{t-\gamma_1}^{t+\gamma_2}g(u)du \geq \frac{1}{h_1+h_2}\int_{t-h_1}^{t+h_2}g(u)du,
$$
and the function $f(u) = \int_{a}^ug(s)ds$ turns inequality~\eqref{b} into equality in the case $X = \mathbb R$. Indeed,
$$
\Delta_{\gamma_1,\gamma_2}^Hf(t)=\left(\frac{1}{\gamma_1+\gamma_2}\int_{t-\gamma_1}^{t+\gamma_2}g(u)du - \frac{1}{h_1+h_2}\int_{t-h_1}^{t+h_2}g(u)du\right)+\frac{1}{h_1+h_2}\int_{t-h_1}^{t+h_2}g(u)du
$$
$$
=K(\gamma_1,\gamma_2;h_1,h_2)+\Delta_{h_1,h_2}^Hf(t).
$$
In the general case, the function $f_x$ with $x\in X^{\rm c}$, $\| x\|_X = 1$, is extremal for inequality~\eqref{b}.
An extremal function for~\eqref{c} can be built analogously to the one for~\eqref{b}, starting instead from a non-negative extremal function for Corollary~\ref{c::valueIntegralDeviation} for the point $t$ and the segment $[t-h_1,t+h_2]$.
\end{proof}
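Inequality~\eqref{c} of the theorem can be illustrated numerically in the scalar case $X=\mathbb R$. Taking $\omega(u)=u$, so that $I(\alpha)=\alpha^2/2$, and $f=\sin$ (for which $\|\mathcal{D}_H f\|_{\omega,X}=\sup|f''|=1$), the sketch below checks the inequality over a grid of points and step pairs (all values chosen for illustration):

```python
import numpy as np

# omega(u) = u, so I(alpha) = alpha**2 / 2; for f = sin, ||f'||_omega = 1
for t in np.linspace(0.5, 2.5, 21):
    for h1, h2 in [(0.1, 0.1), (0.05, 0.2), (0.3, 0.1)]:
        lhs = abs(np.cos(t))
        # divided difference (f(t + h2) - f(t - h1)) / (h1 + h2)
        dd = abs((np.sin(t + h2) - np.sin(t - h1)) / (h1 + h2))
        rhs = (h1**2 + h2**2) / (2 * (h1 + h2)) + dd
        assert lhs <= rhs + 1e-12
```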
\begin{theorem}\label{th::landauIneq}
Under the conditions of Theorem~\ref{1stLandauIn}
for any $f\in \overline{W}^1H^\omega([a,b],X)$,
\begin{equation}\label{d}
\| \Delta^H_{\gamma_1,\gamma_2}f(t)\|_X\le K(\gamma_1,\gamma_2; h_1,h_2)\| \mathcal{D}_H f\|_{\omega,X}+\frac 2{h_1+h_2}\| f\|_{C([a,b],X)}, \end{equation}
\begin{equation}\label{e}
\| \mathcal{D}_H f(t)\|_X\le \frac {I(h_1)+I(h_2)}{h_1+h_2}\|\mathcal{D}_H f\|_{\omega,X}+\frac 2{h_1+h_2}\| f\|_{C([a,b],X)}.
\end{equation}
If for given $t\in [a,b]$ and $h>\gamma>0$
\begin{equation}\label{restrictionsOnH}
\gamma_1=\min\{ \gamma, t-a\},\; \gamma_2=\min\{ \gamma, b-t\},
\; h_1=\min\{ h, t-a\},\; h_2=\min\{ h, b-t\}
\end{equation}
and $\omega$ is concave, then inequality~\eqref{d} is sharp. If for $t\in [a,b]$ and $h>0$
\begin{equation}\label{restrictionsOnh}
h_1=\min\{ h, t-a\},\; h_2=\min\{ h, b-t\},
\end{equation}
and $\omega$ is an arbitrary modulus of continuity, then inequality~\eqref{e} is sharp.
\end{theorem}
\begin{proof}
Inequalities~\eqref{d} and~\eqref{e} follow from~\eqref{b} and~\eqref{c}, since
$$
\|\Delta^H_{h_1,h_2}f(t)\|_X\le \frac 2{h_1+h_2}\| f\|_{C([a,b],X)}.
$$
We prove their sharpness under the above conditions on the numbers $t,\gamma_1,\gamma_2$, $h_1$ and $h_2$.
Let, for definiteness, $t\le (a+b)/2$, and hence $h_2\geq h_1$. For inequality~\eqref{d}, as the function $g$, we take the non-negative extremal function in Theorem~\ref{th::ostrowskiInequality} for the segments $[t-\gamma_1,t+\gamma_2]$ and $[t-h_1,t+h_2]$ and $X=\mathbb R$ such that $g(t+h_2)=0$. We continue it to the segment $[a,b]$ by setting $g(u) = 0$, $u\notin [t-h_1,t+h_2]$. For inequality~\eqref{e} we take $g(s)=(\omega(h_2)-\omega(|s-t|))_+$, $s\in [a,b]$. Both functions belong to $H^\omega([a,b],\mathbb R)$ (the first one in the case of a concave $\omega$) and are non-negative on $[a,b]$.
Choose $\xi\in [t-h_1,t+h_2]$ so that
$
\int^\xi_{t-h_1}g(u)du=\int_\xi^{t+h_2}g(u)du.
$
The function
\begin{equation}\label{segmentExtremalFunc}
f(u)=\left(\int_\xi^{u}g(s)ds\right)_x,\; x\in X^{\rm c}\cap X^{\rm inv}, \;\| x\|_X=1
\end{equation}
is extremal. The theorem is proved. \end{proof}
Note that for the function extremal in inequality~\eqref{e} one has
\begin{equation}\label{segmentExtremalFuncNorm}
\| f\|_{C([a,b],X)}=\frac 12\int_{t-h_1}^{t+h_2}[\omega(h_2)-\omega(|s-t|)]ds=\frac{h_1+h_2}2\omega(h_2)-\frac {I(h_1)+I(h_2)}{2}.
\end{equation}
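Identity~\eqref{segmentExtremalFuncNorm} follows from $\int_{t-h_1}^{t+h_2}\omega(h_2)\,ds = (h_1+h_2)\omega(h_2)$ and $\int_{t-h_1}^{t+h_2}\omega(|s-t|)\,ds = I(h_1)+I(h_2)$. A numerical confirmation with an arbitrary concave modulus ($\omega(u)=\sqrt u$) and illustrative parameters:

```python
import numpy as np

t, h1, h2 = 0.3, 0.2, 0.5            # h1 <= h2, as in the text
omega = np.sqrt
s = np.linspace(t - h1, t + h2, 200001)
lhs = 0.5 * (h1 + h2) * float(np.mean(omega(h2) - omega(np.abs(s - t))))
I = lambda alpha: alpha * float(np.mean(omega(np.linspace(0, alpha, 200001))))
rhs = (h1 + h2)/2 * omega(h2) - (I(h1) + I(h2)) / 2
assert abs(lhs - rhs) < 1e-4
```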
\subsection{Approximation of operators by the ones with smaller norms}
In the space $C([a,b],X)$ consider the cone $C^H([a,b],X)$ that consists of the functions $f$ such that for all $t\in [a,b]$ and all $\gamma_1,\gamma_2>0$ with $[t-\gamma_1,t+\gamma_2]\subset [a,b]$, the difference $\Delta^H_{\gamma_1,\gamma_2} f(t)$ exists. We call a positively homogeneous operator $T\colon C^H([a,b],X) \to X$ bounded if
$$
\| T\|=\sup\{\| Tf\|_X\; :\; f\in C^H([a,b],X),\;\| f\|_{C([a,b],X)}\le 1\}<\infty .
$$
Assume that an operator $A\colon \overline{W}^1H^\omega ([a,b],X)\to X$, a number $N>0$ and an operator $T\colon C^H([a,b],X)\to X$ such that $\| T\|\le N$ are given. Set
$$
U(A,T)=\sup\limits_{f\in W^1H^\omega([a,b],X)}h_X(Af,Tf).
$$
The quantity
$$
E(A,N)=\inf\limits_{\| T\|\le N}U(A,T)
$$
is called the best approximation of the operator $A$ by operators $T$ with $\|T\|\le N$. It is clear that if $A$ is defined on $C^H([a,b],X)$, is bounded, and $N\ge \| A\|$, then $E(A,N)=0.$
For $t\in [a,b]$ denote by $\Delta_{\gamma_1,\gamma_2}(t)$ and $\mathcal{D}_H(t)$ the operators that act by the formulae
$$
\Delta_{\gamma_1,\gamma_2}(t)f=\Delta^H_{\gamma_1,\gamma_2}f(t) \text{ and } \mathcal{D}_H(t)f=\mathcal{D}_H f(t).
$$
\begin{theorem}
Let $\omega$ be a modulus of continuity, $X$ be an isotropic $L$-space, $t\in [a,b]$, and numbers $h>\gamma>0$ be given. Let also numbers $\gamma_1,\gamma_2,h_1,h_2$ be defined by~\eqref{restrictionsOnH}. If $\omega$ is concave, then
\begin{equation}\label{k}
E\left(\Delta_{\gamma_1,\gamma_2}(t),\frac 2{h_1+h_2}\right)=U\left(\Delta_{\gamma_1,\gamma_2}(t),\Delta_{h_1,h_2}(t)\right)= K(\gamma_1,\gamma_2;h_1,h_2),
\end{equation}
and for arbitrary $\omega$
\begin{equation}\label{l}
E\left(\mathcal{D}_H(t),\frac 2{h_1+h_2}\right)=U\left(\mathcal{D}_H(t),\Delta_{h_1,h_2}(t)\right)= \frac {I(h_1)+I(h_2)}{h_1+h_2}.
\end{equation}
\end{theorem}
\begin{proof} It is clear that $\| \Delta^H_{h_1,h_2}\|\le \frac 2{h_1+h_2}$. Due to~\eqref{a} and~\eqref{a'} we have
$$
E\left( \Delta_{\gamma_1,\gamma_2}(t),\frac 2{h_1+h_2}\right)\le U(\Delta_{\gamma_1,\gamma_2}(t),\Delta_{h_1,h_2}(t))\le K(\gamma_1,\gamma_2;h_1,h_2)
$$
and
$$
E\left(\mathcal{D}_H(t),\frac 2{h_1+h_2}\right)\le U(\mathcal{D}_H(t),\Delta_{h_1,h_2}(t))\le \frac {I(h_1)+I(h_2)}{h_1+h_2}.
$$
We have also proved that there exist functions $f_1,f_2\in W^1H^\omega([a,b],X)$ such that
\begin{equation}\label{alpha}
\| \Delta^H_{\gamma_1,\gamma_2}f_1(t)\|_X= K(\gamma_1,\gamma_2; h_1,h_2)+\frac 2{h_1+h_2}\| f_1\|_{C([a,b],X)}
\end{equation}
and
$$
\| \mathcal{D}_H f_2(t)\|_X=\frac {I(h_1)+I(h_2)}{h_1+h_2} +\frac 2{h_1+h_2}\| f_2\|_{C([a,b],X)}.
$$
To prove~\eqref{k}, assume there exists an operator $T$, $\| T\|\le\frac 2{h_1+h_2}$ such that
$$
U(\Delta_{\gamma_1,\gamma_2}(t),T)< K(\gamma_1,\gamma_2;h_1,h_2).
$$
Then for the function $f_1$ we get a strict inequality
$$
\| \Delta^H_{\gamma_1,\gamma_2}f_1(t)\|_X< K(\gamma_1,\gamma_2; h_1,h_2)+\frac 2{h_1+h_2}\| f_1\|_{C([a,b],X)},
$$
which contradicts~\eqref{alpha}. Equality~\eqref{l} can be proved similarly. \end{proof}
\subsection{Recovery of an operator given inexact data}
Finally, we consider the problem of optimal recovery of an operator $A$ on the elements of the class $W^1H^\omega ([a,b],X)$ that are known only with an error. For an operator $A$, a bounded operator $T$, and a number $\delta>0$ set
$$
U_\delta (A,T)=\sup\{ h_X(Af,Tg)\colon f\in W^1H^\omega ([a,b],X), g\in C([a,b],X), h_{C([a,b],X)}(f,g)\le \delta \}.
$$
The problem is to find the quantity
$$
\mathcal{E}_\delta(A)=\inf\nolimits_TU_\delta(A,T)
$$
and an operator $T^*$ at which the infimum on the right-hand side is attained.
\begin{theorem}
Let $\omega$ be a modulus of continuity, $t\in [a,b]$, $h>0$, and $\mathcal{D}_H(t) f=\mathcal{D}_H f(t)$ for $f\in W^1H^\omega ([a,b],X)$. If the numbers $h_1,h_2$ are defined by~\eqref{restrictionsOnh}, and
$$ \delta =\frac {h_1+h_2}2\max\{\omega(h_1), \omega(h_2)\}-\frac {I(h_1)+I(h_2)}2,
$$
then for the operator $\Delta_{h_1,h_2}(t)f=\Delta^H_{h_1,h_2}f(t)$ we have
$$
\mathcal{E}_\delta(\mathcal{D}_H(t))=U_\delta(\mathcal{D}_H(t),\Delta_{h_1,h_2}(t)) = \max\{\omega(h_1), \omega(h_2)\}.
$$
\end{theorem}
\begin{proof}
For each $f\in W^1H^\omega ([a,b],X)$, $g\in C([a,b],X)$ such that $h_{C([a,b],X)}(f,g)\leq \delta$, due to~\eqref{a'},
$$
h_X(\mathcal{D}_H f(t) , \Delta^H_{h_1,h_2}g(t))\le h_X(\mathcal{D}_H f(t) , \Delta^H_{h_1,h_2}f(t))+h_X(\Delta^H_{h_1,h_2}f(t) , \Delta^H_{h_1,h_2}g(t))
$$
$$
\le \frac {I(h_1)+I(h_2)}{h_1+h_2} +\frac 2{h_1+h_2}\delta = \max\{\omega(h_1), \omega(h_2)\}.
$$
Hence,
$
\mathcal{E}_\delta(\mathcal{D}_H(t))\le\max\{\omega(h_1), \omega(h_2)\}.
$
On the other hand, for the function $f$ defined by~\eqref{segmentExtremalFunc}, due to~\eqref{segmentExtremalFuncNorm},
$$
\mathcal{E}_\delta(\mathcal{D}_H(t))\ge\| \mathcal{D}_H f(t)\|_X= \frac {I(h_1)+I(h_2) }{h_1+h_2}+\frac 2{h_1+h_2}\| f\|_{C([a,b],X)}=\max\{\omega(h_1), \omega(h_2)\}
$$
and the theorem is proved. \end{proof}
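The arithmetic in the last theorem, namely that the chosen $\delta$ makes $\frac{I(h_1)+I(h_2)}{h_1+h_2}+\frac{2\delta}{h_1+h_2}$ collapse to $\max\{\omega(h_1),\omega(h_2)\}$, can be checked directly. A small numerical sketch with an arbitrary modulus ($\omega(u)=\sqrt u$, illustrative parameters):

```python
import numpy as np

omega = np.sqrt
h1, h2 = 0.2, 0.5
I = lambda a: a * float(np.mean(omega(np.linspace(0, a, 200001))))
delta = (h1 + h2)/2 * max(omega(h1), omega(h2)) - (I(h1) + I(h2))/2
# the I-terms cancel, leaving exactly max{omega(h1), omega(h2)}
total = (I(h1) + I(h2))/(h1 + h2) + 2*delta/(h1 + h2)
assert abs(total - max(omega(h1), omega(h2))) < 1e-9
```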
\bibliographystyle{plain}
\section*{Introduction}
Given a graph $G$ and a positive integer $r$, let $\text{\rm tc}_r(G)$ denote the minimum
number $k$ such that in any $r$-edge-colouring of $G$, there are $k$
monochromatic trees $T_1,\ldots,T_k$ such that the union of their vertex sets covers $V(G)$,
i.e.,
\begin{align*}
V(G) = V(T_1)\cup \dots \cup V(T_k).
\end{align*}
We define $\text{\rm tp}_r(G)$ analogously by requiring the vertex sets in the union above to be pairwise disjoint.
It is easy to see that $\text{\rm tp}_2(K_n) = 1$ for all $n\geq 1$, and Erd\H{o}s,
Gy\'arf\'as~and Pyber~\cite{EGP91} proved that $\text{\rm tp}_3(K_n) = 2$ for all
$n\geq 1$, and conjectured that $\text{\rm tp}_r(K_n)=r-1$ for every $n$ and $r$. Haxell
and Kohayakawa~\cite{HaKo96} showed that $\text{\rm tp}_r(K_n) \leq r$ for all
sufficiently large $n \ge n_0(r)$. We remark that it is easy to see that
$\text{\rm tc}_r(K_n) \leq r$ (just pick any vertex $v \in V(K_n)$ and let~$T_i$, for
$i\in[r]$, be a maximal monochromatic tree of colour $i$ containing $v$), but it
is not even known whether or not $\text{\rm tc}_r(K_n) \leq r-1$ for every $n$ and $r$ (as
would be implied by the conjecture of Erd\H{o}s, Gy\'arf\'as~and Pyber).
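The greedy construction behind the bound $\text{\rm tc}_r(K_n)\le r$ mentioned above can be checked on small instances. The following Python sketch (function name hypothetical) colours the edges of $K_n$ at random, takes the $r$ monochromatic stars centred at a fixed vertex $v$, and verifies that they cover all vertices:

```python
import itertools, random

def star_cover(n, r, seed=0):
    rng = random.Random(seed)
    colour = {e: rng.randrange(r) for e in itertools.combinations(range(n), 2)}
    v = 0
    # T_i: the monochromatic star of colour i centred at v (a tree, possibly {v})
    return [{v} | {u for u in range(n) if u != v
                   and colour[tuple(sorted((v, u)))] == i}
            for i in range(r)]

trees = star_cover(12, 4)
# every vertex u != v lies in the star of the colour of the edge vu
assert set().union(*trees) == set(range(12))
```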
Concerning general graphs instead of complete graphs, Gy\'arf\'as~\cite{Gyarfas}
noted that a well-known conjecture of Ryser on matchings and transversal sets in
hypergraphs is equivalent to the statement that for every graph $G$ and
integer~$r\geq 2$, we have $\text{\rm tc}_r(G)\leq (r-1)\alpha(G)$. In particular, Ryser's
conjecture, if true, would imply that $\text{\rm tc}_r(K_n) \leq r-1$, for every $n\geq 1$
and $r \geq 2$. Ryser's conjecture was proved in the case $r = 3$ by
Aharoni~\cite{Aharoni}, but for $r\geq 4$ very little is known. For example,
Haxell and Scott~\cite{HaSc12} proved (in the context of Ryser's original
conjecture) that there exists $\epsilon >0$ such that for $r\in \{4,5\}$, we
have $\text{\rm tc}_r(G)\leq (r-\epsilon)\alpha(G)$, for any graph $G$.
Bal and DeBiasio~\cite{BaDe} initiated the study of covering and partitioning
random graphs by monochromatic trees. They proved that if $p\ll
{\left(\frac{\log n}{n}\right)}^{1/r}$, then with high probability\footnote{We
abbreviate \emph{with high probability} to \emph{w.h.p.}} we have
$\text{\rm tc}_r(G(n,p)) \to \infty$. They conjectured that for any $r\geq 2$, this was the
correct threshold for the event $\text{\rm tp}_r(G(n,p)) \leq r$. Kohayakawa, Mota and
Schacht~\cite{KoMoSc} proved that this conjecture holds for $r=2$, while Ebsen,
Mota and Schnitzer\footnote{A description of this construction can be found
in~\cite{KoMoSc}.} showed that it does not hold for more than two colours.
Buci\'c, Kor\'andi and Sudakov~\cite{BuKoSu} proved that if $p \ll
{\left(\frac{\log n}{n}\right)}^{\sqrt r/2^{r-2}}$, then w.h.p.\ we have
$\text{\rm tc}_r(G(n,p)) \geq r+1$, which implies that the threshold for the event
$\text{\rm tc}_r(G)\leq r$ is in fact significantly larger than the one conjectured by Bal
and DeBiasio when $r$ is large. Buci\'c, Kor\'andi and Sudakov also proved that
w.h.p.\ we have $\text{\rm tc}_r(G(n,p)) \leq r$ for $p \gg {\left(\frac{\log
n}{n}\right)}^{1/2^r}$. They were also able to roughly determine the
typical behaviour of $\text{\rm tc}_r(G(n,p))$ in terms of the range where $p$ lies in
(see~\cite[Theorem~1.3 and Theorem~1.4]{BuKoSu}).
Considering colourings with three colours, the results from~\cite{BuKoSu} imply
that if $p \gg {\left( \frac{\log n}{n} \right)}^{1/8}$, then w.h.p.\ we have
$\text{\rm tc}_3(G(n,p))\leq 3$, and if ${\left( \frac{\log n}{n} \right)}^{1/6} \ll p \ll
{\left(\frac{\log n}{n} \right)}^{1/7}$, then w.h.p.\ $\text{\rm tc}_3(G(n,p))\leq 88$. Our
main result improves these bounds for three colours.
\begin{theorem}\label{main_res}
If $p = p(n)$ satisfies $p \gg {\big(\frac{\log n}{n}\big)}^{1/6}$, then with high probability we
have
\[
\text{\rm tc}_3\big( G(n,p) \big) \leq 3.
\]
\end{theorem}
It is easy to see that if $p = 1 - \omega(n^{-1})$, then w.h.p.\ there is a
$3$-edge-colouring of~$G(n,p)$ for which three monochromatic trees are needed to
cover all vertices --- it suffices to consider three non-adjacent vertices
$x_1$, $x_2$ and $x_3$, and colour the edges incident to~$x_i$ with colour~$i$
and colour all the remaining edges with any colour. Therefore, the bound for
$\text{\rm tc}_3(G(n,p))$ in~\cref{main_res} is the best possible as long as $p$ is not too
close to $1$.
We remark that, from the example described in~\cite{KoMoSc}, we know that for $p
\ll {\left( \frac{\log{n}}{n} \right)}^{1/4}$, we have w.h.p.\ $\text{\rm tc}_3(G(n,p))
\geq 4$. It would be very interesting to describe the behaviour of $\text{\rm tc}_3(G(n,p))$
when ${\big(\frac{\log n}{n}\big)}^{1/4}\ll p \ll {\big(\frac{\log
n}{n}\big)}^{1/6}$.
This paper is organized as follows. In~\cref{sec:preliminaries} we present some
definitions and auxiliary results that we will use in the proof of
Theorem~\ref{main_res}, which is outlined in~\cref{sec:sketch}. The details of
the proof of~\cref{main_res} are given in Section~\ref{sec:proof}.
\section{Preliminaries}\label{sec:preliminaries}
Most of our notation is standard (see~\cites{Bo98,BM08,Di10}
and~\cites{Bo01,JLR00}). However, for completeness, we mention in the following
a few definitions regarding hypergraphs that will play a major role in our
proofs.
We say that a set $A$ of vertices in a hypergraph $\mathcal{H}$ is a
\emph{vertex cover} if every hyperedge of $\mathcal{H}$ contains at least one
element of $A$. The \emph{covering number} of $\mathcal{H}$, denoted by
$\tau(\mathcal{H})$, is the smallest size of a vertex cover in $\mathcal{H}$. A
\emph{matching} in $\mathcal{H}$ is a collection of disjoint hyperedges in
$\mathcal{H}$. The \emph{matching number} of $\mathcal{H}$, denoted
by~$\nu(\mathcal{H})$, is the largest size of a matching in $\mathcal{H}$. An
immediate relationship between $\tau(\mathcal{H})$ and $\nu(\mathcal{H})$ is the
inequality~$\nu(\mathcal{H}) \leq \tau(\mathcal{H}) $. If additionally
$\mathcal{H}$ is $r$-uniform, then we have~$\tau(\mathcal{H}) \leq r
\nu(\mathcal{H})$. A conjecture due to Ryser (which first appeared in the thesis
of his Ph.D. student, Henderson~\cite{Ryser}) states that for every $r$-uniform
$r$-partite hypergraph $\mathcal{H}$, we have $\tau(\mathcal{H})\leq
(r-1)\nu(\mathcal{H})$. Note that the K\"{o}nig--Egerv\'{a}ry theorem corresponds to
Ryser's conjecture for $r=2$. Aharoni~\cite{Aharoni} proved that Ryser's
conjecture holds for $r=3$, but the conjecture remains open for $r\geq 4$.
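For completeness, we note that the inequality $\tau(\mathcal{H}) \leq r \nu(\mathcal{H})$ for $r$-uniform $\mathcal{H}$ mentioned above has a one-line justification: the vertex set of a maximum matching~$M$ is a vertex cover, since any hyperedge disjoint from all edges of~$M$ could be added to~$M$, contradicting maximality. Hence
\[
  \tau(\mathcal{H}) \;\leq\; \Bigl|\,\bigcup_{e \in M} e\,\Bigr| \;\leq\; r\,|M| \;=\; r\,\nu(\mathcal{H}).
\]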
Given a vertex $v$ in a 3-uniform hypergraph $\mathcal{H}$, the \emph{link
graph} of $\mathcal{H}$ with respect to~$v$ is the graph $L_v = (V,E)$ with
vertex set $V = V(\mathcal{H})$ and edge set $E = \{xy : \{x,y,v\} \in
\mathcal{H}\}$.
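For instance, if $\mathcal{H}$ has the hyperedges $\{v,a,b\}$, $\{v,c,d\}$ and $\{a,c,d\}$, then
\[
  E(L_v) = \{ab,\, cd\},
\]
since only the first two hyperedges contain~$v$.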
We will use the following theorem due to Erd\H{o}s, Gy\'arf\'as and
Pyber~\cite{EGP91} in the proof of our main result.
\begin{theorem}[Erd\H{o}s, Gy\'arf\'as and
Pyber]\label{lemma:EGP91} For any
$3$-edge-colouring of a complete graph $K_n$, there exists a partition of
$V(K_n)$ into~$2$ monochromatic trees.
\end{theorem}
We will also use the following lemma, which is a simple application of
Chernoff's inequality. For a proof of the first item see~\cite[Lemma~3.8]{KMNSS}.
The second item is an immediate corollary of~\cite[Lemma~3.10]{KMNSS}.
\begin{lemma}\label{lem:gnp}
Let~$\eps > 0$. If~$p=p(n)\gg{\left(\frac{\log{n}}{n}\right)}^{1/6}$, then
w.h.p.\ $G\in G(n,p)$ has the following properties.
\begin{enumerate}[label=\upshape({\itshape \roman*\,})]
\item For any disjoint sets~$X,Y\subseteq V(G)$ with~$|X|,|Y|
\gg \frac{\log{n}}{p}$, we have
\begin{align*}
|E_G(X,Y)| = (1\pm \eps)p|X||Y|.
\end{align*}
\item Every vertex $v\in V(G)$ has degree $d_G(v)=(1\pm
\eps)pn$ and
every set of~$i\leq 6$ vertices has $(1\pm\eps) p^i n$ common neighbours.
\end{enumerate}
\end{lemma}
\section{A sketch of the proof}\label{sec:sketch}
In this section we will give an overview of the proof of~\cref{main_res}.
Let $G = G(n,p)$, with $p \gg {\left( \frac{\log{n}}{n} \right)}^{1/6}$, and let
$\phi: E(G) \to \{\text{\rm red},~\text{\rm green},~\text{\rm blue}\}$ be any 3-edge-colouring of~$G$. We consider an
auxiliary graph~$F$, with~$V(F)=V(G)$ and~$ij \in E(F)$ if and only if there is,
in the colouring~$\phi$, a monochromatic path in~$G$ connecting $i$ and $j$.
Then we define a 3-edge-colouring~$\phi'$ of~$F$ with~$\phi'(ij)$ being the colour of any
monochromatic path in~$G$ connecting~$i$ and~$j$. Note that any covering of~$F$
with monochromatic trees with respect to the colouring~$\phi'$ corresponds to a
covering of~$G$ with monochromatic trees with respect to the colouring~$\phi$ with
the same number of trees.
Next, we consider different cases depending on the value of~$\alpha(F)$.
If~$\alpha(F)=1$, then~$F$ is a complete $3$-edge-coloured graph and by a
theorem of Erd\H{o}s, Gy\'{a}rf\'{a}s and Pyber (see~\cref{lemma:EGP91}), there
exists a partition of~$V(F)$ into~$2$ monochromatic trees. The remainder of the
proof is divided into the cases $\alpha(F) \geq 3$ and $\alpha(F) = 2$.
\medskip
\noindent \textit{Case $\alpha(F) \geq 3$.} From the condition on the independence number of $F$, there
exist three vertices $r,b,g\in V(G)$ that pairwise do not have any monochromatic
path connecting them. With high probability, they have a common neighbourhood in
$G$ of size at least $np^3/2$.
Let~$X_{rbg}$ be the largest subset of this common
neighbourhood such that for each~$i\in\{r,b,g\}$, the edges from $i$ to $X_{rbg}$ in
$G$ are all coloured with one colour. Then, since there are no monochromatic
paths between any two of $r$, $b$, $g$, we have $|X_{rbg}| \geq np^3/12$ and moreover
we may assume that all edges between~$r$ and~$X_{rbg}$ are red, all between~$b$ and~$X_{rbg}$ are
blue and those between~$g$ and~$X_{rbg}$ are green. Now we notice that all vertices that
have a neighbour in~$X_{rbg}$ are covered by the union of the spanning trees of the
red component of~$r$, the blue component of~$b$ and the green component of~$g$.
We are done in the case where every vertex has a neighbour in $X_{rbg}$, as the vertices in $X_{rbg} \cup N_G(X_{rbg})$ are covered by the red, blue and green
component containing $r$, $b$ and $g$, respectively.
Otherwise, w.h.p.\ any vertex $y\in V \setminus \left( X_{rbg} \cup N_G(X_{rbg}) \right)$ has many common neighbours with~$r$,~$g$ and~$b$ in $G$ that are
also neighbours of some vertex in $X_{rbg}$.
An analysis of the possible colourings of the edges between $X_{rbg}$ and the common neighbourhood of the vertices $r$, $b$, $g$ and $y$ yields the following: for some $i \in \{r,g,b\}$, let us say $i=r$, every vertex $y \in V \setminus \left( X_{rbg} \cup N_G(X_{rbg}) \right)$
can be connected to~$r$ by a monochromatic path in colour~$\text{\rm red}$, or
to~$g$ or~$b$ by a monochromatic path in colour~$\text{\rm blue}$ or~$\text{\rm green}$,
respectively.
This already gives us that all vertices in~$G$ can be covered by~$5$
monochromatic trees, since all the vertices in~$N_G(X_{rbg})$ lie in the $\text{\rm red}$ component
of~$r$, or the $\text{\rm green}$ component of~$g$, or in the $\text{\rm blue}$ component of~$b$ and every
vertex in $V\setminus N_G(X_{rbg})$ lies in the $\text{\rm red}$ component of~$r$, in the $\text{\rm blue}$ component of $g$ or in the $\text{\rm green}$
component of $b$. By analysing the colours of edges to the common neighbourhood
of carefully chosen vertices, we are able to show that actually three of those
five trees already cover all the vertices of $G$.
\medskip
\noindent \textit{Case $\alpha(F) =2$.} Let us consider a $3$-uniform hypergraph
$\mathcal{H}$ defined as follows (this definition is inspired by a construction
of Gy\'{a}rf\'{a}s~\cite{Gyarfas}). The vertices of $\mathcal{H}$ are the
monochromatic components of~$F$ and three vertices form a hyperedge if the
corresponding three components have a vertex in common (in particular, those
three monochromatic components must be of different colours).
Hence~$\mathcal{H}$ is a~$3$-uniform $3$-partite hypergraph.
We observe that if~$A$ is a vertex cover of~$\mathcal{H}$, then the
monochromatic components associated with the vertices in~$A$ cover all the
vertices of~$G$. This implies that $\text{\rm tc}_{3}(G) \leq \tau(\mathcal{H})$. Also, it
is easy to see that $\nu(\mathcal{H}) \leq \alpha(F)=2$. Now, recall that
Aharoni's result~\cite{Aharoni} (which corresponds to Ryser's conjecture for
$r=3$) states that for every $3$-uniform $3$-partite hypergraph $\mathcal{H}$ we
have $\tau(\mathcal{H}) \leq 2\nu(\mathcal{H})$. Together with the previous
observation, this implies $\text{\rm tc}_3(G) \leq 4$. But our goal is to prove that
$\text{\rm tc}_3(G) \leq 3$. To this aim, we analyze the hypergraph $\mathcal{H}$ more
carefully, reducing the situation to a few possible settings of components
covering all vertices. In each of those cases, we can again analyse the possible
colourings of the edges at common neighbours of specific vertices, inferring that
indeed $3$ monochromatic components cover all the vertices.
\section{Proof of~\cref{main_res}}\label{sec:proof}
Instead of analysing the colouring of the graph $G=G(n,p)$, it will be helpful
to analyse the following auxiliary graph.
\begin{definition}[Shortcut graph]
Let $G$ be a graph and $\phi$ be a $3$-edge-colouring of $G$. The
\emph{shortcut graph} of $G$ (with respect to $\phi$) is the graph
$F=F(G,\phi)$ that has $V(G)$ as the vertex set and the following edge set:
\[\{uv : u,v \in V(G) \text{ and $u$ and $v$ are connected in $G$ by a path
monochromatic under $\phi$}\}.\]
\end{definition}
We can consider a natural edge colouring~$\phi'$ of $F(G,\phi)$ by assigning to
an edge~$uv\in E(F(G,\phi))$ the colour of any monochromatic path connecting~$u$
and~$v$ in~$G$ under the colouring~$\phi$. We will say that~$\phi'$ is an
\emph{inherited colouring} of $F(G,\phi)$. Let~$\text{\rm tc}(F,\phi')$ be the minimum
number of monochromatic components (under the colouring~$\phi'$) covering all
the vertices of~$F$. Note that any covering of~$F(G,\phi)$ with monochromatic
trees under~$\phi'$ corresponds to a covering of~$G$ with monochromatic trees
under the colouring~$\phi$. In particular, if we show that for every
$3$-edge-colouring $\phi$ of $G$ and every inherited colouring $\phi'$, we have
$\text{\rm tc}(F,\phi')\leq 3$, then we have shown that $\text{\rm tc}_3(G) \leq 3$.
Therefore,~\cref{main_res} follows from the following lemma.
\begin{lemma}\label{lemma:main}
Let $p\gg{\left(\frac{\log n}{n}\right)}^{1/6}$ and let $G =G(n,p)$. The
following holds with high probability. For any $3$-edge-colouring $\phi$ of
$G$ and any inherited colouring $\phi'$ of the shortcut graph $F = F(G,\phi)$,
we have $\text{\rm tc}(F,\phi')\leq 3$.
\end{lemma}
The proof of~\cref{lemma:main} is divided into two different cases,
depending on the independence number of $F$. Subsections~\ref{sec:alpha2}
and~\ref{sec:alpha3} are devoted, respectively, to the proof of
\cref{lemma:main} when $\alpha(F)\geq 3$ and $\alpha(F)\leq 2$.
From now on, we fix $\eps>0$ and assume that~$p\gg{\left(\frac{\log
n}{n}\right)}^{1/6}$ and $n$ is sufficiently large. Then, by~\cref{lem:gnp},
we may assume that the following holds w.h.p.:
\begin{enumerate}
\item There is an edge between any two disjoint sets of size~$\omega\left((\log n)/p\right)$.
\item Every vertex $v\in V(G)$ has degree $d_G(v)=(1\pm \eps)pn$.
\item Every set of~$i\leq 6$ vertices has $(1\pm\eps) p^i n$ common neighbours.
\end{enumerate}
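We remark that the first property above follows from item~(\emph{i}\,) of~\cref{lem:gnp}: for disjoint sets $X$, $Y$ with $|X|,|Y| = \omega\left((\log n)/p\right)$, we have
\[
  |E_G(X,Y)| \;\geq\; (1-\eps)p|X||Y| \;=\; \omega\!\left(\frac{(\log n)^2}{p}\right) \;\geq\; 1.
\]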
\subsection{Shortcut graphs with independence number at least three}\label{sec:alpha2}
\begin{proof}[Proof of~\cref{lemma:main} for $\alpha(F)\geq 3$]
Since $\alpha (F) \geq 3$, there exist three vertices~$r,b,g\in V(G)$ that
pairwise do not have any monochromatic path connecting them in~$G$. In
particular, if $v$ is a common neighbour of $r$, $b$ and $g$ in $G$, then the
edges $vr$, $vb$ and $vg$ all have different colours. The common neighbourhood
of $r$, $b$ and $g$ in $G$ has size at least~$np^3/2$. Let~$X_{rbg}$ be the
largest subset of this common neighbourhood such that for each~$i\in
\{r,b,g\}$, the edges between $i$ and the vertices of $X_{rbg}$ are all
coloured with the same colour in~$G$. Then~$\vert X_{rbg}\vert\geq np^3/12$.
Without loss of generality, assume that all edges between~$r$ and the vertices
of~$X_{rbg}$ are red, between~$b$ and the vertices of~$X_{rbg}$ are blue and
those between~$g$ and the vertices of~$X_{rbg}$ are green.
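The lower bound on~$\vert X_{rbg}\vert$ is a pigeonhole count: since the edges from any common neighbour of $r$, $b$ and $g$ to these three vertices receive three distinct colours, at most $3! = 6$ colour patterns occur, and hence
\[
  \vert X_{rbg}\vert \;\geq\; \frac{1}{6}\cdot\frac{np^3}{2} \;=\; \frac{np^3}{12}.
\]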
Let~$C_\text{\rm red}(r)$,~$C_\text{\rm blue}(b)$ and~$C_\text{\rm green}(g)$ be, respectively, the red, blue
and green components in $G$ containing~$r$,~$b$ and~$g$.
Notice that all vertices of~$F$ that have a neighbour in~$X_{rbg}$ are covered
by~$C_\text{\rm red}(r)$,~$C_\text{\rm blue}(b)$ or~$C_\text{\rm green}(g)$. Therefore, the proof would be
finished if every vertex had a neighbour in~$X_{rbg}$. If this is not the
case, we fix an arbitrary vertex~$y\in V\setminus \left(X_{rbg} \cup
N_G(X_{rbg}) \right)$. By our choice of~$p$, there are at least~$np^4/2$
common neighbours of~$y$,~$r$,~$b$ and~$g$. Let~$X_{yrbg}$ be the largest
subset of the common neighbourhood of~$y$,~$r$,~$b$ and~$g$ such that for
each~$i\in\{r,b,g\}$, the edges between~$i$ and~$X_{yrbg}$ are all coloured
the same. Then~$|X_{yrbg}|\geq np^4/12$. Note that since~$y\notin
N_G(X_{rbg})$, the sets~$X_{yrbg}$ and~$X_{rbg}$ are disjoint. Furthermore,
since~$|X_{yrbg}|,|X_{rbg}| \gg \frac{\log{n}}{p}$, we have
\begin{align*}
|E_G(X_{yrbg},X_{rbg})|\geq 1.
\end{align*}
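The bound $\vert X_{yrbg}\vert \geq np^4/12$ follows by the same pigeonhole count as before: the at least $np^4/2$ common neighbours of $y$, $r$, $b$ and $g$ split into at most $3! = 6$ classes according to the colour pattern of their edges towards $r$, $b$ and $g$, so
\[
  \vert X_{yrbg}\vert \;\geq\; \frac{1}{6}\cdot\frac{np^4}{2} \;=\; \frac{np^4}{12}.
\]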
We now analyse the colours between~$r$, $b$, $g$ and the set~$X_{yrbg}$.
Again, since there is no monochromatic path connecting any two of~$r$,~$b$
and~$g$, all~$i \in \{r,b,g\}$ have to connect to~$X_{yrbg}$ in different
colours. Since~$X_{yrbg}$ is disjoint from~$X_{rbg}$, we cannot have~$r$,~$b$
and~$g$ being simultaneously connected to~$X_{yrbg}$ by red, blue and green
edges, respectively. Assume first that for each~$i \in \{r,b,g\}$, the edges
between~$i$ and~$X_{yrbg}$ have different colours from the edges between~$i$
and~$X_{rbg}$. Then let~$uv$ be an edge between~$X_{yrbg}$ and~$X_{rbg}$ and
notice that whatever the colour of~$uv$ is, we will have a monochromatic path
connecting two of the vertices in~$\{r,g,b\}$. Therefore, we can assume that
for some~$i \in \{r,g,b\}$, all the edges between~$i$
and~$X_{rbg}$ and all the edges between~$i$ and~$X_{yrbg}$ are coloured the same.
Without loss of generality, we may say that such~$i$ is~$r$. In this case, the
edges between~$b$ and~$X_{yrbg}$ are green and the edges between~$g$
and~$X_{yrbg}$ are blue. Finally, all the edges between~$X_{yrbg}$ and~$X_{rbg}$ are
red, otherwise we would be able to connect~$b$ and~$g$ by some monochromatic
path. Figure~\ref{fig:rbgy} shows the colouring of the edges that we have
analysed so far.
\begin{figure}
\centering
\begin{tikzpicture} [scale=1, thick, auto, vertex/.style={circle, draw,
fill=black!50, inner sep=0pt, minimum width=4pt}]
\node [vertex, label=left:$r$](r) at (0,2) {};
\node [vertex, label=left:$b$](b) at (0,0) {};
\node [vertex, label=left:$g$](g) at (0,-2) {};
\node [vertex, label=right:$y$](y) at (6,0) {};
\node [vertex, label={[label distance=.4cm]-45:$X_{rbg}$}](j) at (4,-1) {};
\node [vertex, label={[label distance=.2cm]+45:$X_{yrbg}$}](i) at (4,1) {};
\draw (j) circle (0.6 cm);
\draw (i) circle (0.4 cm);
\draw [red!75!black] (r)--(j);
\draw [blue!75!black] (b)--(j);
\draw [green!75!black] (g)--(j);
\draw (y)--(i);
\draw [red!75!black](j)--(i);
\draw [red!75!black] (r)--(i);
\draw [green!75!black] (b)--(i);
\draw [blue!75!black] (g)--(i);
\end{tikzpicture}
\caption{Analysis of the colouring of the edges incident on $X_{rbg}$ and on
$X_{yrbg}$.}\label{fig:rbgy}
\end{figure}
Let us now consider any further vertex~$x\in V\setminus \left(X_{rbg} \cup
N_G(X_{rbg}) \right)$ with~$x\neq y$, if such a vertex exists. We
define~$X_{xrbg}$ analogously to~$X_{yrbg}$ and observe that the colour
pattern from~$r$,~$b$,~$g$ to~$X_{xrbg}$ must be the same as the one
to~$X_{yrbg}$. Indeed, if this is not the case, then a similar analysis of the
colours of the edges between~$\{r,b,g\}$ and~$X_{xrbg}$ yields that for
some~$i \in \{b,g\}$, the edges connecting~$i$ to~$X_{xrbg}$ are
of the same colour as the edges connecting~$i$ to~$X_{rbg}$. Without loss of
generality, let us say that~$i$ is~$g$. Then the edges between~$b$
and~$X_{xrbg}$ are red and the edges between~$r$ and~$X_{xrbg}$ are blue,
since otherwise~$X_{xrbg}$ and~$X_{rbg}$ would not be disjoint sets.
Figure~\ref{fig:rbgyx} shows the colouring of the edges incident to~$X_{yrbg}$
and~$X_{xrbg}$. Since~$|X_{yrbg}|,|X_{xrbg}| \gg \frac{\log{n}}{p}$, we have
that there is some edge~$uv$ between~$X_{yrbg}$ and~$X_{xrbg}$. But then
however we colour~$uv$, we will get a monochromatic path connecting two
vertices in~$\{r,b,g\}$, which is a contradiction. Thus, the colour pattern of
edges between~$\{r,b,g\}$ and~$X_{xrbg}$ is the same as the colour pattern of
the edges between~$\{r,b,g\}$ and~$X_{yrbg}$.
\begin{figure}
\centering
\begin{tikzpicture} [scale=1, thick, auto, vertex/.style={circle, draw,
fill=black!50, inner sep=0pt, minimum width=4pt}]
\node [vertex, label=above:$r$](r) at (0,2) {};
\node [vertex, label=above:$b$](b) at (0,0) {};
\node [vertex, label=above:$g$](g) at (0,-2) {};
\node [vertex, label=right:$y$](y) at (6,1) {};
\node [vertex, label=right:$x$](x) at (6,-1) {};
\node [vertex, label={[label distance=.4cm]left:$X_{rbg}$}](j) at (-2,0) {};
\node [vertex, label={[label distance=.2cm]+45:$X_{yrbg}$}](i) at (4,1) {};
\node [vertex, label={[label distance=.2cm]-45:$X_{xrbg}$}](k) at (4,-1) {};
\draw (j) circle (0.6 cm);
\draw (i) circle (0.4 cm);
\draw (k) circle (0.4 cm);
\draw [red!75!black] (r)--(j);
\draw [blue!75!black] (b)--(j);
\draw [green!75!black] (g)--(j);
\draw (y)--(i);
\draw [red!75!black] (r)--(i);
\draw [green!75!black] (b)--(i);
\draw [blue!75!black] (g)--(i);
\draw (x)--(k);
\draw [blue!75!black] (r)--(k);
\draw [red!75!black] (b)--(k);
\draw [green!75!black] (g)--(k);
\draw (i)--(k);
\end{tikzpicture}
\caption{Analysis of the colouring of the edges incident on $X_{yrbg}$ and on
$X_{xrbg}$.}\label{fig:rbgyx}
\end{figure}
Therefore, we have that each vertex in $X_{rbg} \cup N_G(X_{rbg})$ belongs to
one of the monochromatic components $C_\text{\rm red}(r)$, $C_\text{\rm blue}(b)$ or
$C_\text{\rm green}(g)$, while a vertex in $V(G)\setminus \left(X_{rbg} \cup
N_G(X_{rbg}) \right)$ belongs to one of the monochromatic components
$C_\text{\rm red}(r)$, $C_\text{\rm green}(b)$ or $C_\text{\rm blue}(g)$, where the latter two are the green
component containing~$b$ and the blue component containing~$g$, respectively.
This gives a covering of $G$ with five monochromatic trees. Next we will show
that actually three of those trees already cover all the vertices.
Suppose that at least~$4$ among the components
$C_{\text{\rm red}}(r)$,~$C_{\text{\rm blue}}(b)$,~$C_{\text{\rm green}}(b)$,~$C_{\text{\rm green}}(g)$,
and~$C_{\text{\rm blue}}(g)$ are needed to cover all vertices. Since there does not
exist any monochromatic path between any two of~$r$, $b$, $g$, we know that
for each~$i \in \{r,b,g\}$, any monochromatic component containing~$i$ does
not intersect~$\{r,g,b\}\setminus\{i\}$. Hence, among those at least~$4$
components, we have for each~$i\in\{r,b,g\}$ one component containing it and,
without loss of generality, two containing~$b$. That is, three components of
those at least $4$ components needed to cover all the vertices
are~$C_{\text{\rm red}}(r)$,~$C_{\text{\rm blue}}(b)$ and~$C_{\text{\rm green}}(b)$. Now there are two cases
regarding the fourth component: we need~$C_{\text{\rm green}}(g)$ as the fourth
component or we need~$C_{\text{\rm blue}}(g)$ (those two cases might intersect).
We begin with the first case, where we need the components $C_{\text{\rm red}}(r)$,
$C_{\text{\rm blue}}(b)$, $C_{\text{\rm green}}(b)$ and $C_{\text{\rm green}}(g)$ to cover all the vertices
of $G$. Let
\[
\tilde{b} \in C_{\text{\rm blue}}(b) \setminus \left( C_{\text{\rm red}}(r) \cup C_{\text{\rm green}}(b)
\cup C_{\text{\rm green}}(g) \right)
\]
and let
\[
\tilde{g} \in C_{\text{\rm green}}(b)\setminus \left( C_{\text{\rm red}}(r) \cup C_{\text{\rm blue}}(b)
\cup C_{\text{\rm green}}(g) \right).
\]
Then let~$X_{\tilde{b}\tilde{g}rbg}$ be the maximum set of common neighbours
of~$\tilde{b},\tilde{g},r,g,b$ such that for each~$i \in \{\tilde{b},
\tilde{g}, r, b, g\}$, the edges from~$i$ to~$X_{\tilde{b}\tilde{g}rbg}$ are
all coloured the same. Since~$\vert X_{\tilde{b}\tilde{g}rbg}\vert \geq
np^5/240 \gg \frac{\log{n}}{p} $, we have
\[
|E_G(X_{\tilde{b}\tilde{g}rbg},X_{yrbg})| \geq 1 \quad\text{and}\quad
|E_G(X_{\tilde{b}\tilde{g}rbg},X_{rbg})| \geq 1.
\]
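To justify the stated bound on~$\vert X_{\tilde{b}\tilde{g}rbg}\vert$ (whose constant $240$ is not optimized), one crude pigeonhole count suffices: the common neighbourhood of the five vertices has size at least $np^5/2$, the edges towards $r$, $b$ and $g$ realise one of at most $3! = 6$ patterns, and the edges towards $\tilde{b}$ and $\tilde{g}$ each use one of the $3$ colours, so
\[
  \vert X_{\tilde{b}\tilde{g}rbg}\vert \;\geq\; \frac{1}{6\cdot 3\cdot 3}\cdot\frac{np^5}{2} \;=\; \frac{np^5}{108} \;\geq\; \frac{np^5}{240}.
\]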
We will analyse the possible colours of the edges between the specified
vertices and~$X_{\tilde{b}\tilde{g}rbg}$. If for each of~$r$,~$b$,~$g$, the colour
it sends to~$X_{\tilde{b}\tilde{g}rbg}$ is different from the colour it sends
to~$X_{rbg}$, then any edge between~$X_{\tilde{b}\tilde{g}rbg}$ and~$X_{rbg}$
ensures a monochromatic path between two of~$r$,~$b$,~$g$ (in the colour of
that edge). Similarly, it cannot happen that for each of~$r$,~$b$,~$g$, the colour it sends
to~$X_{\tilde{b}\tilde{g}rbg}$ is different from the colour it sends
to~$X_{yrbg}$. Thus, since~$r$ sends red to both~$X_{rbg}$ and~$X_{yrbg}$ while the colours from~$b$ (and~$g$) to~$X_{rbg}$ and~$X_{yrbg}$ are switched, the colour of the edges between~$r$
and~$X_{\tilde{b}\tilde{g}rbg}$ is red.
Now note that, by the choice of~$\tilde{b}$ and~$\tilde{g}$, the edges between
each of them and~$X_{\tilde{b}\tilde{g}rbg}$ cannot be red. Further, the
choice implies that an edge between~$\tilde{b}$
and~$X_{\tilde{b}\tilde{g}rbg}$ cannot be of the same colour (green or blue)
as an edge between~$\tilde{g}$ and~$X_{\tilde{b}\tilde{g}rbg}$. If~$g$ sent
blue (and hence~$b$ sent green) edges to~$X_{\tilde{b}\tilde{g}rbg}$, there
would either be a blue path between~$b$ and~$g$ (if the edges
between~$\tilde{b}$ and~$X_{\tilde{b}\tilde{g}rbg}$ are blue)
or~$\tilde{b}$ would lie in~$C_\text{\rm green}(b)$ (if the edges
between~$\tilde{b}$ and~$X_{\tilde{b}\tilde{g}rbg}$ are green). Since both
those situations would mean a contradiction, we may assume that each
of~$r$,~$b$,~$g$ sends edges of the same colour to~$X_{\tilde{b}\tilde{g}rbg}$
as it does to~$X_{rbg}$. But then~$X_{\tilde{b}\tilde{g}rbg}$ is actually a
subset of~$X_{rbg}$ and therefore~$\tilde{g}$, having an edge to~$X_{rbg}$,
lies in one of~$C_{\text{\rm red}}(r)$, $C_\text{\rm blue}(b)$, or $C_\text{\rm green}(g)$, a contradiction.
In the case where the fourth component that we need is~$C_\text{\rm blue}(g)$, we repeat
the construction of~$X_{\tilde{b}\tilde{g}rbg}$ as before by letting
\[
\tilde{b} \in C_\text{\rm blue}(b) \setminus ( C_\text{\rm red}(r)\cup C_\text{\rm green}(b)\cup
C_\text{\rm blue}(g))
\]
and
\[
\tilde{g}\in C_\text{\rm green}(b) \setminus ( C_\text{\rm red}(r)\cup C_\text{\rm blue}(b)\cup
C_\text{\rm blue}(g)) .
\]
Also as before, we end up with~$X_{\tilde{b}\tilde{g}rbg}$ being a subset
of~$X_{rbg}$. From the choice of~$\tilde{g}$, the edges it sends
to~$X_{\tilde{b}\tilde{g}rbg}$ have to be green, since otherwise it would be
in~$C_{\text{\rm red}}(r)$ or~$C_\text{\rm blue}(b)$. But that gives a green path between~$b$
and~$g$, a contradiction.
Summarising, we infer that three components
among~$C_\text{\rm red}(r)$,~$C_\text{\rm blue}(b)$,~$C_\text{\rm green}(b)$,~$C_\text{\rm green}(g)$ and~$C_\text{\rm blue}(g)$
cover the vertex set of~$G$.
\end{proof}
\subsection{Shortcut graphs with independence number at most two}\label{sec:alpha3}
\begin{proof}[Proof of~\cref{lemma:main} for~$\alpha(F)\leq 2$]
We start by noticing that if~$\alpha (F)=1$, then the graph~$F$ together with
the colouring~$\phi'$ is a complete~$3$-edge-coloured graph and therefore,
by~\cref{lemma:EGP91}, there exists a partition of~$V(F)$ into~$2$
monochromatic trees. Thus, we may assume that~$\alpha(F)=2$.
Let $\mathcal{H}$ be the 3-uniform hypergraph with~$V(\mathcal{H})$ being the
collection of all the monochromatic components of~$F$ under the
colouring~$\phi'$, where three monochromatic components form a hyperedge
in~$\mathcal{H}$ if they share a vertex. Notice that~$\mathcal{H}$ is
3-partite, since distinct monochromatic components of the same colour do not
have a common vertex and therefore cannot belong to the same hyperedge.
In other words, the colours of the components give us a 3-partition of the
vertex set of~$\mathcal{H}$. We denote by~$V_{\text{\rm red}}$, $V_{\text{\rm blue}}$ and~$V_{\text{\rm green}}$ the sets of vertices of~$V(\mathcal{H})$ that correspond to,
respectively, red, blue and green components. This construction is inspired by
a construction of Gy\'arf\'as~\cite{Gyarfas}.
Note that every vertex~$v$ of~$F$ is contained in a monochromatic component
for each one of the colours (a monochromatic component could consist
only of $v$). Therefore, any vertex cover of~$\mathcal{H}$ corresponds to a
covering of the vertices of~$F$ with monochromatic trees. Indeed, if~$A$ is a
vertex cover of~$\mathcal{H}$, then consider the monochromatic components
corresponding to each vertex in~$A$. If any vertex~$v$ of~$F$ is not covered
by those components, then the vertices in $\mathcal{H}$ corresponding to the
red, green and blue components in $F$ containing $v$ do not belong to $A$ and
they form a hyperedge. But this contradicts the fact that $A$ is a vertex
cover of $\mathcal{H}$. Therefore,
\begin{align}\label{tctau}
\text{\rm tc}(F,\phi') \leq \tau(\mathcal{H}).
\end{align}
Let~$L = \bigcup_{s\in V_{\text{\rm red}}}L_s$ be the union of the link graphs~$L_s$ of all
vertices~$s\in V_{\text{\rm red}}$. Any vertex cover of this bipartite graph~$L$
corresponds to a vertex cover of~$\mathcal{H}$ of the same size. Therefore,
\begin{align}\label{tautau}
\tau(\mathcal{H}) \leq \tau(L).
\end{align}
Furthermore, by K\"{o}nig's theorem we know that $\tau(L) = \nu(L)$. Thus, if
$\nu(L) \leq 3$, then by~\eqref{tctau} and~\eqref{tautau}, we have
\begin{align*}
\text{\rm tc}(F,\phi') \leq \tau(\mathcal{H}) \leq \tau(L) = \nu(L) \leq 3.
\end{align*}
Therefore, we may assume that \(\nu(L) \geq 4\), and fix a matching~$M_L$ of
size at least~$4$ in~$L$. Let us say that $M_L$ consists of the edges
$G_1B_1$, $G_2B_2$, $G_3B_3$, and $G_4B_4$, where $\{G_1,G_2,G_3,G_4\}
\subseteq V_{\text{\rm green}}$ and $\{B_1,B_2,B_3,B_4\} \subseteq V_{\text{\rm blue}}$.
Now we give an upper bound for~$\nu(\mathcal{H})$. Note that any
matching~$M_{\mathcal{H}}$ in~$\mathcal{H}$ gives us an independent set~$I$ in~$F$.
Indeed, for each hyperedge~$e \in M_{\mathcal{H}}$, let~$v_e\in V(F)$ be any vertex in the
intersection of those monochromatic components associated to the vertices
in~$e$ and let~$I = \{v_e : e \in M_{\mathcal{H}}\}$. We claim that~$I$ is an independent
set in~$F$. Indeed, if~$v_e$ and $v_f$ were adjacent vertices in~$I$, then~$e$
and~$f$ would intersect, as the edge connecting~$v_e$ to~$v_f$
in~$F$ would connect the monochromatic components containing~$v_e$ and~$v_f$ of
the colour given to the edge~$v_e v_f$. Therefore,
since~$\alpha(F) = 2$, we have
\begin{align}\label{nual}
\nu(\mathcal{H}) \leq \alpha(F) = 2.
\end{align}
Now, if there are three different edges in~$M_L$ that are edges in the link
graphs of three different vertices of~$V_{\text{\rm red}}$, then there would be a
matching of size~$3$ in~$\mathcal{H}$, contradicting~\eqref{nual}. Therefore,
we may assume that~$M_L$ is contained in the union of at most two link graphs,
say~$L_{R_1}$ and~$L_{R_2}$, of vertices $R_1,R_2 \in V_\text{\rm red}$. Now we are left
with three cases: (Case~\ref{case_1}) two edges of~$M_L$ belong to~$L_{R_1}$
and two belong to~$L_{R_2}$; (Case~\ref{case_2}) three edges of~$M_L$ belong
to~$L_{R_1}$ and one to~$L_{R_2}$; (Case~\ref{case_3}) the four edges of $M_L$
belong to~$L_{R_{1}}$. Without loss of generality, we can describe each of
those three cases as follows (see
Figures~\ref{fig:case1},~\ref{fig:case2} and~\ref{fig:case3}):
\begin{case}\label{case_1}
The edges $G_1B_1$ and $G_2B_2$ belong to~$L_{R_1}$ and the edges $G_3B_3$
and $G_4B_4$ belong to~$L_{R_2}$. That means that all the following four
sets are non-empty:
\begin{align*}
J_1&:=R_1\cap G_1\cap B_1,\\
J_2&:=R_1\cap G_2\cap B_2,\\
J_3&:=R_2\cap G_3\cap B_3,\\
J_4&:=R_2\cap G_4\cap B_4.
\end{align*}
\end{case}
\begin{case}\label{case_2}
The edges $G_1B_1$, $G_2B_2$ and $G_3B_3$ belong to~$L_{R_1}$ and the edge
$G_4B_4$ belongs to~$L_{R_2}$. That means that all the following four sets
are non-empty:
\begin{align*}
J_1&:=R_1\cap G_1\cap B_1,\\
J_2&:=R_1\cap G_2\cap B_2,\\
J_3&:=R_1\cap G_3\cap B_3,\\
J_4&:=R_2\cap G_4\cap B_4.
\end{align*}
\end{case}
\begin{case}\label{case_3}
The edges $G_1B_1$, $G_2B_2$, $G_3B_3$ and $G_4B_4$ belong to~$L_{R_1}$.
That means that all the following four sets are non-empty:
\begin{align*}
J_1&:=R_1\cap G_1\cap B_1,\\
J_2&:=R_1\cap G_2\cap B_2,\\
J_3&:=R_1\cap G_3\cap B_3,\\
J_4&:=R_1\cap G_4\cap B_4.
\end{align*}
In this case, let~$R_2$ be any other red component different from~$R_1$ and
let~$B$ and~$G$ be, respectively, a blue and a green component with~$R_2\cap
B \cap G\neq \emptyset$. Suppose that $G \notin \{G_1,G_2,G_3,G_4\}$. Then
at least three of the edges $G_1B_1$, $G_2B_2$, $G_3B_3$ and $G_4B_4$ are not
adjacent to $GB$ (because $B$ must be different from at least three of the
sets $B_1$, $B_2$, $B_3$ and $B_4$) and those three edges together with $GB$
may be analysed just as in Case~\ref{case_2}. Therefore, we may suppose that
$G \in \{G_1,G_2,G_3,G_4\}$. Let us say, without loss of generality, that $G
= G_4$. If $B \notin\{B_1,B_2,B_3\}$, then the edges $G_1B_1$, $G_2B_2$ and
$G_3B_3$ belong to $L_{R_1}$, the edge $GB$ belongs to $L_{R_2}$ and this
case may be analysed, again, just as in Case~\ref{case_2}. Therefore, we may
assume that $B \in \{B_1,B_2,B_3\}$. Let us say, without loss of generality,
that $B = B_3$. Then let $J_5$ be the following non-empty set:
\begin{align}\label{eq:case3-extra}
J_5:=R_2\cap G_4\cap B_3.
\end{align}
\end{case}
Let us further remark that, since~$\nu (\mathcal{H})\leq 2$, in each of the
three cases above, we have
\begin{align*}
V(F) = R_1 \cup R_2 \cup G_1 \cup G_2 \cup G_3 \cup G_4 \cup B_1 \cup B_2
\cup B_3 \cup B_4.
\end{align*}
Otherwise, for any uncovered vertex~$v \in V(F)$, the hyperedge given by the
red, blue and green components containing $v$ together with the hyperedges
$R_1B_1G_1$ and $R_2B_3G_3$ (in Case~\ref{case_1}), $R_1B_1G_1$ and $R_2B_4G_4$ (in Case~\ref{case_2}) or $R_1B_1G_1$ and $R_2B_3G_4$ (in Case~\ref{case_3}) would give a matching of size~$3$ in~$\mathcal{H}$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1, thick, auto, vertex/.style={circle, draw,
fill=black!50, inner sep=0pt, minimum width=4pt}]
\node [label={[label distance=.8cm]left:$R_1$}](r1) at (0,0) {};
\node [label={[label distance=.8cm]right:$R_2$}](r2) at (4,0) {};
\node [label={[label distance=.1cm]-45:$B_1$}](b1) at (0.25,-1) {};
\node [label={[label distance=.1cm]-135:$G_1$}](g1) at (-0.25,-1) {};
\node [vertex, label={[label distance=.1cm]above:$j_1$}](j1) at (0,-0.85) {};
\node [label={[label distance=.1cm]45:$B_2$}](b2) at (0.25,1) {};
\node [label={[label distance=.1cm]135:$G_2$}](g2) at (-0.25,1) {};
\node [vertex, label={[label distance=.1cm]below:$j_2$}](j2) at (0,0.85) {};
\node [label={[label distance=.1cm]-135:$B_3$}](b3) at (3.75,-1) {};
\node [label={[label distance=.1cm]-45:$G_3$}](g3) at (4.25,-1) {};
\node [vertex, label={[label distance=.1cm]above:$j_3$}](j3) at (4,-0.85) {};
\node [label={[label distance=.1cm]135:$B_4$}](b4) at (3.75,1) {};
\node [label={[label distance=.1cm]45:$G_4$}](g4) at (4.25,1) {};
\node [vertex, label={[label distance=.1cm]below:$j_4$}](j4) at (4,0.85) {};
\draw[red!75!black, line width=1pt] (r1) circle (1 cm);
\draw[red!75!black, line width=1pt] (r2) circle (1 cm);
\draw[blue!75!black, line width=1pt] (b1) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g1) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b2) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g2) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b3) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g3) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b4) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g4) circle (0.4 cm);
\end{tikzpicture}
\caption{Case 1}\label{fig:case1}
\end{figure}
Let us start with Case~\ref{case_1}.
\medskip
\noindent\textit{Proof in Case~\ref{case_1}}:
We will prove that~$R_1$ and~$R_2$ together with possibly one further
monochromatic component cover $V(F)$. For each $i \in \{1,2,3,4\}$, let $\tilde{B}_i =
B_i\setminus(R_1\cup R_2)$ and $\tilde{G}_i = G_i\setminus (R_1 \cup R_2)$.
Pick vertices~$j_i\in J_i$, with~$i\in\{1,2,3,4\}$, arbitrarily. Consider a
vertex~$o\in \tilde{B}_1$ (if such a vertex exists). Since~$\alpha (F) = 2$,
there is an edge connecting two of~$o$,~$j_2$,~$j_3$. Because $j_2$ and $j_3$
belong to different components of each colour, such an edge must be incident to
$o$. So let us say that such an edge is $oj_i$, for some $i\in\{2,3\}$. Since~$o
\notin R_1 \cup R_2$, the edge $oj_i$ cannot be red. And since $o \in B_1$,
$oj_i$ cannot be blue either, otherwise we would connect the blue
components $B_1$ and $B_i$. Now assume that $o$ and $j_2$ are not adjacent.
Then $oj_3$ is a green edge in $F$. By analogously analysing the edge between
$o$, $j_2$ and $j_4$ together with the supposition that $oj_2$ is not an edge
in $F$, we get that $oj_4$ must be a green edge in $F$. But then we have a
green path $j_3oj_4$ connecting $j_3$ to $j_4$, a contradiction. Therefore
$oj_2$ is an edge in $F$ and it is green. That implies that $o \in G_2$.
Therefore $\tilde{B}_1 \subseteq G_2$. Analogously, we can conclude the
following:
\begin{equation}
\begin{aligned}\label{conn_comp}
& \tilde{B}_1 \subseteq G_2, & \tilde{G}_1 \subseteq B_2, \\
& \tilde{B}_2 \subseteq G_1, & \tilde{G}_2 \subseteq B_1, \\
& \tilde{B}_3 \subseteq G_4, & \tilde{G}_3 \subseteq B_4, \\
& \tilde{B}_4 \subseteq G_3, & \tilde{G}_4 \subseteq B_3. \\
\end{aligned}
\end{equation}
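In other words, \eqref{conn_comp} pairs up the blue and green components. Since $V(F)$ is covered by the components $R_1,R_2,G_1,\dots,G_4,B_1,\dots,B_4$, let us record the following explicit consequence of~\eqref{conn_comp}:

```latex
\begin{align*}
V(F)\setminus (R_1\cup R_2)
\subseteq (B_1\cap G_2)\cup (B_2\cap G_1)\cup (B_3\cap G_4)\cup (B_4\cap G_3).
\end{align*}
```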
\begin{claim}
We have $\tilde{B}_1 \cup \tilde{G}_1 \cup \tilde{B}_2 \cup \tilde{G}_2 =
\emptyset$ or $\tilde{B}_3 \cup \tilde{G}_3 \cup \tilde{B}_4 \cup
\tilde{G}_4 = \emptyset$.
\end{claim}
\begin{claimproof}
Suppose for a contradiction that there exist~$o_1\in \tilde{B}_1 \cup
\tilde{G}_1 \cup \tilde{B}_2 \cup \tilde{G}_2$ and~$o_2\in \tilde{B}_3 \cup
\tilde{G}_3 \cup \tilde{B}_4 \cup \tilde{G}_4$. Recall that from our choice
of $p$, there is some~$z\in N(j_1,j_2,j_3,j_4,o_1,o_2)$. Two of the edges
$zj_i$, for $i \in \{1,2,3,4\}$, have the same colour. Since the vertices $j_i$ belong to
pairwise different green and blue components, those two edges are red. Since
$\{j_1,j_2\} \subseteq R_1$ and $\{j_3,j_4\}\subseteq R_2$, those two red edges are
either $zj_1$ and $zj_2$ or $zj_3$ and $zj_4$. Let us say that $zj_1$ and
$zj_2$ are red (the other case is similar). Then one of the edges $zj_3$ and
$zj_4$ has to be green and the other blue. Now, since~$o_1 \notin R_1$, the
edge $zo_1$ is either green or blue. Then one of the paths~$o_1zj_3$ or
$o_1zj_4$ is green or blue. This implies that $o_1 \in B_3\cup G_3\cup
B_4\cup G_4$. On the other hand,~\eqref{conn_comp} implies that $o_1 \in
\left(B_1\cup B_2\right) \cap \left(G_1 \cup G_2\right)$. But then we
reached a contradiction, since that would mean that $o_1$ belongs to two
different components of the same colour.
\end{claimproof}
We may assume without loss of generality that~$\tilde{B}_3 \cup \tilde{G}_3
\cup \tilde{B}_4 \cup \tilde{G}_4$ is empty. Then, recalling
that~$\nu(\mathcal{H})\leq 2$ and in view of~\eqref{conn_comp}, the union of
the components~$R_1$, $B_1$, $G_1$ and $R_2$ covers every vertex of $F$. If we
show that ${B}_1\subseteq G_1 \cup R_1 \cup R_2$ or that ${G}_1\subseteq B_1
\cup R_1 \cup R_2$, then we get three monochromatic components covering the
vertices of $F$. Our next claim states precisely that.
\begin{claim}
We have $\tilde{B}_1\setminus G_1 = \emptyset$ or $\tilde{G}_1 \setminus
B_1= \emptyset$.
\end{claim}
\begin{claimproof}
Suppose that there exist two distinct vertices~$b\in \tilde{B}_1\setminus
G_1$ and~$g\in \tilde{G}_1\setminus B_1$. Let~$z\in N(j_1,j_2,j_3,j_4,b,g)$.
As before, either~$zj_1$ and~$zj_2$ or~$zj_3$ and~$zj_4$ are red edges. First assume that $zj_1$ and $zj_2$ are red. Then one of the edges $zj_3$ and
$zj_4$ has to be green and the other blue. Now, since $b \notin R_1$, the
edge $zb$ is either green or blue. Then one of the paths $bzj_3$ or $bzj_4$
is green or blue. This implies that $b \in B_3\cup G_3\cup B_4\cup G_4$. On
the other hand,~\eqref{conn_comp} implies that $b \in B_1 \cap G_2$. Then we
reached a contradiction, since that would mean that $b$ belongs to two
different components of the same colour.
Therefore, the edges $zj_3$ and $zj_4$ are red and one of the edges $zj_1$
and $zj_2$ is green and the other is blue. First let us say that $zj_1$ is
green and $zj_2$ is blue. Since $b\notin (R_1\cup R_2)$, the edge $zb$
cannot be red. Also the edge $zb$ cannot be blue otherwise the path $bzj_2$
would connect the components $B_1$ and $B_2$. Finally, $zb$ cannot be green,
otherwise the path~$bzj_1$ would give us that $b\in G_1$. Therefore $zj_1$
is blue and $zj_2$ is green. But this case analogously leads to a
contradiction (with~$g$ and~$G_i$ instead of~$b$ and~$B_i$ and green and
blue switched).
\end{claimproof}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1, thick, auto, vertex/.style={circle, draw,
fill=black!50, inner sep=0pt, minimum width=4pt}]
\node [label={[label distance=.8cm]left:$R_1$}](r1) at (0,0) {};
\node [label={[label distance=.8cm]right:$R_2$}](r2) at (4,0) {};
\node [label={[label distance=.1cm]-45:$B_1$}](b1) at (0.25,-1) {};
\node [label={[label distance=.1cm]-135:$G_1$}](g1) at (-0.25,-1) {};
\node [vertex, label={[label distance=.1cm]above:$j_1$}](j1) at (0,-0.85) {};
\node [label={[label distance=.1cm]45:$B_2$}](b2) at (0.25,1) {};
\node [label={[label distance=.1cm]135:$G_2$}](g2) at (-0.25,1) {};
\node [vertex, label={[label distance=.1cm]below:$j_2$}](j2) at (0,0.85) {};
\node [label={[label distance=.1cm]15:$B_3$}](b3) at (1,0.25) {};
\node [label={[label distance=.1cm]-15:$G_3$}](g3) at (1,-0.25) {};
\node [vertex, label={[label distance=.1cm]left:$j_3$}](j3) at (0.9,0) {};
\node [label={[label distance=.1cm]135:$B_4$}](b4) at (3.75,1) {};
\node [label={[label distance=.1cm]45:$G_4$}](g4) at (4.25,1) {};
\node [vertex, label={[label distance=.1cm]below:$j_4$}](j4) at (4,0.85) {};
\draw[red!75!black, line width=1pt] (r1) circle (1 cm);
\draw[red!75!black, line width=1pt] (r2) circle (1 cm);
\draw[blue!75!black, line width=1pt] (b1) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g1) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b2) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g2) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b3) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g3) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b4) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g4) circle (0.4 cm);
\end{tikzpicture}
\caption{Case 2}\label{fig:case2}
\end{figure}
We proceed to the proof of Case~\ref{case_2}.
\medskip
\noindent\textit{Proof in Case~\ref{case_2}}:
As in Case~\ref{case_1}, pick vertices~$j_i\in J_i$, with~$i\in\{1,2,3,4\}$
arbitrarily. We claim that~$V(F)\subseteq R_1 \cup R_2 \cup B_4 \cup G_4$.
Indeed, let~$o\in V(F)\setminus (R_1\cup R_2)$. Notice that
since~$\alpha(F)=2$, there is an edge in each of the following sets of three
vertices:~$\{o, j_4, j_1\}$,~$\{o, j_4, j_2\}$, and~$\{o, j_4, j_3\}$. We
claim that~$oj_4$ is an edge of $F$. Indeed, if this was not the case, then
since there cannot be an edge between~$j_4$ and~$j_i$ for~$i=1,2,3$, we would
have the edges~$oj_1$,~$oj_2$ and~$oj_3$ and all of them would be coloured
green or blue. Thus, two of them would have the same colour, connecting two
distinct components of that colour, a contradiction. So~$oj_4\in
E(F)$ and since~$oj_4$ cannot be red, we conclude that~$o\in (B_4\cup G_4)$.
Therefore,~$R_1$,~$R_2$,~$B_4$ and~$G_4$ cover all vertices of~$F$.
If $B_4\setminus (R_1\cup R_2\cup G_4)=\emptyset$ or $G_4\setminus (R_1\cup
R_2\cup B_4)=\emptyset$, then we get three monochromatic components covering
$V(F)$. So let us assume that there exist~$b\in B_4\setminus (R_1\cup R_2\cup
G_4)$ and~$g\in G_4\setminus (R_1\cup R_2\cup B_4)$. If $b$ and~$g$ are not
adjacent, then since each of the sets~$\{b, g, j_i\}$, for~$i=1,2,3$, has to
induce at least one edge, there are two edges between $b$
and~$\{j_1,j_2,j_3\}$ or two edges between $g$ and~$\{j_1,j_2,j_3\}$. However,
from the choice of $b$, we know that all the edges between $b$
and~$\{j_1,j_2,j_3\}$ are green, and therefore two such edges would give us
a green connection between two different green components, a contradiction.
Similarly, from the choice of $g$, we know that all the edges between $g$ and
$\{j_1,j_2,j_3\}$ are blue, and two such edges would give us a blue
connection between two different blue components, again a contradiction.
Hence, we conclude that $bg\in E(F)$ for any $b\in B_4\setminus (R_1\cup R_2\cup
G_4)$ and any~$g\in G_4\setminus (R_1\cup R_2\cup B_4)$ and any such edge $bg$
is red. Therefore, there is a red component~$R_3$ covering~$(B_4\triangle
G_4)\setminus (R_1\cup R_2)$, where~$B_4\triangle G_4$ denotes the symmetric
difference. If~$(B_4\cap G_4) \setminus (R_1\cup R_2) =\emptyset$,
then~$R_1$,~$R_2$ and~$R_3$ cover~$V(F)$ and we are done. Therefore, suppose
there is a vertex~$x\in (B_4\cap G_4) \setminus (R_1\cup R_2)$.
If~$R_2\setminus (B_4\cup G_4)=\emptyset$, then~$R_1$,~$B_4$,~$G_4$
cover~$V(F)$ and we are done. Therefore, suppose there is a vertex $y\in
R_2\setminus (B_4\cup G_4)$. Note that~$xy\notin E(F)$, since $x$ and $y$
belong to different components in each of the colours. Also, $xj_i\notin
E(F)$, for~$i\in\{1,2,3\}$, since otherwise two different components of the
same colour would be connected in that colour by the edge $xj_i$. Now~$\alpha
(F)=2$ implies that~$yj_i\in E(F)$, for~$i\in\{1,2,3\}$ (otherwise,
$\{x,y,j_i\}$ would be an independent set). But these edges must all be green
or blue, hence two of them have the same colour, connecting two different
components of that colour, a contradiction.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1, thick, auto, vertex/.style={circle, draw,
fill=black!50, inner sep=0pt, minimum width=4pt}]
\node [label={[label distance=.8cm]left:$R_1$}](r1) at (0,0) {};
\node [label={[label distance=.8cm]right:$R_2$}](r2) at (4,0) {};
\node [label={[label distance=.2cm]below:$B_1$}](b1) at (-0.25,-0.9) {};
\node [label={[label distance=.1cm]-135:$G_1$}](g1) at (-0.75,-0.7) {};
\node [vertex, label={[label distance=.1cm]above:$j_1$}](j1) at (-0.5,-0.75) {};
\node [label={[label distance=.2cm]above:$B_2$}](b2) at (-0.25,0.9) {};
\node [label={[label distance=.1cm]135:$G_2$}](g2) at (-0.75,0.7) {};
\node [vertex, label={[label distance=.1cm]below:$j_2$}](j2) at (-0.5,0.75) {};
\node [label={[label distance=.1cm]below:$B_3$}](b3) at (2,-0.25) {};
\node [label={[label distance=.1cm]below:$G_3$}](g3) at (0.75,-0.7) {};
\node [vertex, label={[label distance=.1cm]left:$j_3$}](j3) at (0.7,-0.5) {};
\node [label={[label distance=.1cm]above:$B_4$}](b4) at (0.75,0.7) {};
\node [label={[label distance=.1cm]above:$G_4$}](g4) at (2,0.25) {};
\node [vertex, label={[label distance=.1cm]left:$j_4$}](j4) at (0.7,0.5) {};
\node [vertex, label={[label distance=.2cm]right:$j_5$}](j5) at (3.2,0) {};
\draw[red!75!black, line width=1pt] (r1) circle (1 cm);
\draw[red!75!black, line width=1pt] (r2) circle (1 cm);
\draw[blue!75!black, line width=1pt] (b1) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g1) circle (0.4 cm);
\draw[blue!75!black, line width=1pt] (b2) circle (0.4 cm);
\draw[green!75!black, line width=1pt] (g2) circle (0.4 cm);
\draw[rotate around={-80:(b3)},blue!75!black, line width=1pt] (b3) ellipse (8pt and 1.6cm);
\draw[green!75!black, line width=1pt] (g3) circle (0.4 cm);
\draw[rotate around={80:(g4)},green!75!black, line width=1pt] (g4) ellipse (8pt and 1.6cm);
\draw[blue!75!black, line width=1pt] (b4) circle (0.4 cm);
\end{tikzpicture}
\caption{Case 3}\label{fig:case3}
\end{figure}
We arrived at the last case, Case~\ref{case_3}.
\medskip
\noindent\textit{Proof in Case~\ref{case_3}}:
Similarly to the previous cases, let us pick vertices~$j_i\in J_i$,
with~$i\in\{1,2,3,4,5\}$ arbitrarily. We will show first that we can cover all vertices
of~$F$ with~$4$ monochromatic components. Let~$o_1,o_2 \in V(F)\setminus
(R_1\cup B_3\cup G_4)$ and let~$z\in N(j_1,j_2,j_3,o_1,o_2,j_5)$. At least one
of the edges~$zj_1$,~$zj_2$ and~$zj_3$ is red, as otherwise two of them would
have the same colour, green or blue, connecting two distinct components of
that colour. Therefore~$z \in R_1$. Since~$o_1,o_2,j_5 \notin R_1$, the
edges~$zo_1$,~$zo_2$ and~$zj_5$ cannot be red.
Furthermore,~$o_1z$ and~$o_2z$ are coloured with a colour different from the
colour of the edge~$j_5z$, as otherwise~$o_1$ or~$o_2$ would belong to~$B_3$ or~$G_4$.
Thus,~$o_1$ and~$o_2$ are connected by a monochromatic path in green or blue.
Hence, we showed that any two vertices of~$V(F)\setminus (R_1\cup B_3\cup
G_4)$ are connected by a monochromatic path in green or blue. We infer
that there is a green or blue component covering~$V(F)\setminus (R_1\cup
B_3\cup G_4)$. Therefore,~$R_1$,~$B_3$,~$G_4$ and one further blue or green
component~$C$ cover all vertices of~$F$. Let us assume that~$C$ is a green
component; the case where~$C$ is a blue component is analogous.
We claim that~$R_1 \cup B_3 \cup C$, or~$R_1 \cup G_4 \cup C$, or~$R_1 \cup
B_3 \cup G_4$ covers~$V(F)$. Indeed, suppose for the sake of contradiction that
there exist vertices~$g\in G_4\setminus (R_1\cup B_3\cup C)$,~$b\in
B_3\setminus (R_1\cup G_4\cup C)$ and~$c\in C\setminus (R_1\cup B_3\cup G_4)$.
Let~$z\in N(j_1,j_2,j_3,g,b,c)$ and note that one of~$zj_1$,~$zj_2$ and~$zj_3$
is red. Consequently~$gz$,~$cz$ and~$bz$ are not red. Notice, however, that~$gz$ and~$bz$ cannot both be green, nor both be blue. Now let us say~$cz$
is green. Since~$c \notin G_4$ and~$g \in G_4$, we would have~$gz$ blue in
this case. But then~$bz$ must be green and since~$c\in C$ and~$C$ is a green
component, we have~$b \in C$, which is a contradiction. Therefore~$cz$
must be blue. Then, since~$c \notin B_3$ and~$b \in B_3$, the edge~$bz$ should
be green. Thus the edge~$gz$ is blue. Since this argument holds for any~$g\in
G_4\setminus (R_1\cup B_3\cup C)$ and~$c\in C\setminus (R_1\cup B_3\cup G_4)$,
we conclude that~$V(F)\setminus (R_1\cup B_3)$ can be covered by one blue
tree. Hence,~$V(F)$ can be covered by three monochromatic trees.
This finishes the last case and thereby the proof of~\cref{lemma:main}.
\end{proof}
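The auxiliary hypergraph $\mathcal{H}$ behind the hypothesis $\nu(\mathcal{H})\leq 2$, whose hyperedges are the triples consisting of the red, green and blue components through a common vertex (as used in the remark preceding the three cases), can also be made concrete. The following brute-force Python sketch is ours and purely illustrative; the helper names are not from the paper.

```python
from itertools import combinations


def component_of(v, comps):
    # every vertex lies in exactly one component of each colour
    return next(c for c in comps if v in c)


def intersection_hypergraph(n, comps_by_colour):
    """Hyperedges of H: for each vertex v, the triple of the red, green and
    blue components containing v (each component tagged by its colour)."""
    edges = set()
    for v in range(n):
        edges.add(tuple((colour, component_of(v, comps_by_colour[colour]))
                        for colour in sorted(comps_by_colour)))
    return edges


def matching_number(hyperedges):
    """nu(H): the maximum number of pairwise disjoint hyperedges, found by
    exhaustive search over all subfamilies (fine for small examples)."""
    hyperedges = list(hyperedges)
    for k in range(len(hyperedges), 0, -1):
        for choice in combinations(hyperedges, k):
            tags = [t for e in choice for t in e]
            if len(tags) == len(set(tags)):  # no component is shared
                return k
    return 0
```

On a six-vertex example consisting of two vertex-disjoint coloured triangles (a red, a green and a blue edge each), each triangle contributes three pairwise intersecting hyperedges, so a matching can take at most one hyperedge per triangle and $\nu(\mathcal{H})=2$.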
\@ifstar{\origsection*}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}{\@ifstar{\origsection*}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}}
\def\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}
\makeatother
\usepackage{amsmath,amssymb,amsthm}
\usepackage{mathrsfs}
\usepackage{mathabx}\changenotsign
\usepackage{dsfont}
\usepackage{xcolor}
\usepackage[backref]{hyperref}
\hypersetup{
colorlinks,
linkcolor={red!60!black},
citecolor={green!60!black},
urlcolor={blue!60!black}
}
\usepackage[capitalise,noabbrev]{cleveref}
\usepackage[open,openlevel=2,atend]{bookmark}
\usepackage[abbrev,msc-links,backrefs]{amsrefs}
\usepackage{doi}
\renewcommand{\doitext}{DOI\,}
\renewcommand{\PrintDOI}[1]{\doi{#1}}
\renewcommand{\eprint}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\linespread{1.3}
\usepackage{geometry}
\geometry{left=25mm,right=25mm, top=25mm, bottom=25mm}
\numberwithin{equation}{section}
\numberwithin{figure}{section}
\usepackage{enumitem}
\def\upshape({\itshape \roman*\,}){\upshape({\itshape \roman*\,})}
\def\upshape(\Roman*){\upshape(\Roman*)}
\def\upshape({\itshape \alph*\,}){\upshape({\itshape \alph*\,})}
\def\upshape({\itshape \Alph*\,}){\upshape({\itshape \Alph*\,})}
\def\upshape({\itshape \arabic*\,}){\upshape({\itshape \arabic*\,})}
\let\logreg=\log
\let\log=\ln
\let\polishlcross=\ifmmode\ell\else\polishlcross\fi
\def\ifmmode\ell\else\polishlcross\fi{\ifmmode\ell\else\polishlcross\fi}
\def\ \text{and}\ {\ \text{and}\ }
\def\quad\text{and}\quad{\quad\text{and}\quad}
\def\qquad\text{and}\qquad{\qquad\text{and}\qquad}
\let\emptyset=\varnothing
\let\setminus=\smallsetminus
\let\backslash=\smallsetminus
\makeatletter
\def\mathpalette\mov@rlay{\mathpalette\mov@rlay}
\def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen
\ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}}
\newcommand{\charfusion}[3][\mathord]{
#1{\ifx#1\mathop\vphantom{#2}\fi
\mathpalette\mov@rlay{#2\cr#3}
}
\ifx#1\mathop\expandafter\displaylimits\fi}
\makeatother
\newcommand{\charfusion[\mathbin]{\cup}{\cdot}}{\charfusion[\mathbin]{\cup}{\cdot}}
\newcommand{\charfusion[\mathop]{\bigcup}{\cdot}}{\charfusion[\mathop]{\bigcup}{\cdot}}
\renewcommand{\eprint}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\DeclareFontFamily{U} {MnSymbolC}{}
\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\DeclareMathSymbol{\powerset}{\mathord}{MnSyC}{180}
\usepackage{tikz}
\usetikzlibrary{calc,decorations.pathmorphing}
\pgfdeclarelayer{background}
\pgfdeclarelayer{foreground}
\pgfdeclarelayer{front}
\pgfsetlayers{background,main,foreground,front}
\usepackage{subcaption}
\captionsetup[subfigure]{labelfont=rm}
\newcommand{\qedge}[7]{
\ifx\relax#4\relax
\def#4{0pt}
\else
\def#4{#4}
\fi
\def\qhedge{
($#1+#3!#4!-90:#2-#3$) --
($#2+#1!#4!-90:#3-#1$) --
($#3+#2!#4!-90:#1-#2$) -- cycle}
\coordinate (12) at ($#1!#4!90:#2$);
\coordinate (13) at ($#1!#4!-90:#3$);
\coordinate (23) at ($#2!#4!90:#3$);
\coordinate (21) at ($#2!#4!-90:#1$);
\coordinate (31) at ($#3!#4!90:#1$);
\coordinate (32) at ($#3!#4!-90:#2$);
\def\nqhedge{
(13) let \p1=($(13)-#1$), \p2=($(12)-#1$) in
arc[start angle={atan2(\y1,\x1)}, delta angle={atan2(\y2,\x2)-atan2(\y1,\x1)-360*(atan2(\y2,\x2)-atan2(\y1,\x1)>0)}, x radius=#4, y radius=#4] --
(21) let \p1=($(21)-#2$), \p2=($(23)-#2$) in
arc[start angle={atan2(\y1,\x1)}, delta angle={atan2(\y2,\x2)-atan2(\y1,\x1)-360*(atan2(\y2,\x2)-atan2(\y1,\x1)>0)}, x radius=#4, y radius=#4] --
(32) let \p1=($(32)-#3$), \p2=($(31)-#3$) in
arc[start angle={atan2(\y1,\x1)}, delta angle={atan2(\y2,\x2)-atan2(\y1,\x1)-360*(atan2(\y2,\x2)-atan2(\y1,\x1)>0)}, x radius=#4, y radius=#4] --
cycle}
\ifx\relax#5\relax
\def#5{1pt}
\else
\def#5{#5}
\fi
\ifx\relax#7\relax
\fill \nqhedge;
\else
\fill[#7]\nqhedge;
\fi
\ifx\relax#6\relax
\draw[line width=#5,rounded corners=#4]\nqhedge;
\else
\draw[line width=#5,#6]\nqhedge;
\fi
}
\def\longrightarrow{\longrightarrow}
\def\text{\rm red}{\text{\rm red}}
\def\text{\rm blue}{\text{\rm blue}}
\def\text{\rm green}{\text{\rm green}}
\def\text{\rm tc}{\text{\rm tc}}
\def\text{\rm tp}{\text{\rm tp}}
\let\epsilon=\varepsilon
\let\eps=\epsilon
\let\phi=\varphi
\let\rho=\varrho
\let\theta=\vartheta
\let\wh=\widehat
\def\mathds Q{\mathds Q}
\def{\mathds E}{{\mathds E}}
\def{\mathds N}{{\mathds N}}
\def{\mathds G}{{\mathds G}}
\def{\mathds Z}{{\mathds Z}}
\def{\mathds P}{{\mathds P}}
\def{\mathds R}{{\mathds R}}
\def{\mathds C}{{\mathds C}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathscr{B}}{\mathscr{B}}
\newcommand{\mathscr{S}}{\mathscr{S}}
\newcommand{\mathscr{C}}{\mathscr{C}}
\newcommand{\mathscr{P}}{\mathscr{P}}
\newcommand{\mathfrak{S}}{\mathfrak{S}}
\newcommand{\text{{red}}}{\text{{red}}}
\newcommand{\text{{blue}}}{\text{{blue}}}
\newcommand{\text{{green}}}{\text{{green}}}
\newcommand{\comment}[1]
{\par {\bfseries \color{red} #1 \par}}
\newtheoremstyle{note} {4pt} {4pt} {\sl} {} {\bfseries} {.} {.5em} {}
\newtheoremstyle{introthms} {3pt} {3pt} {\itshape} {} {\bfseries} {.} {.5em} {\thmnote{#3}}
\newtheoremstyle{remark} {2pt} {2pt} {\rm} {} {\bfseries} {.} {.3em} {}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{example}[theorem]{Example}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{constr}{Construction}
\newtheorem{conj}[theorem]{Conjecture}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{cor}[theorem]{Corollary}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{claim}[theorem]{Claim}
\theoremstyle{note}
\newtheorem{dfn}[theorem]{Definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{setup}[theorem]{Setup}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{exmpl}[theorem]{Example}
\usepackage{accents}
\newcommand{\seq}[1]{\accentset{\rightharpoonup}{#1}}
\usepackage{lineno}
\newcommand*\patchAmsMathEnvironmentForLineno[1]{
\expandafter\let\csname old#1\expandafter\endcsname\csname #1\endcsname
\expandafter\let\csname oldend#1\expandafter\endcsname\csname end#1\endcsname
\renewenvironment{#1}
{\linenomath\csname old#1\endcsname}
{\csname oldend#1\endcsname\endlinenomath}}
\newcommand*\patchBothAmsMathEnvironmentsForLineno[1]{
\patchAmsMathEnvironmentForLineno{#1}
\patchAmsMathEnvironmentForLineno{#1*}}
\AtBeginDocument{
\patchBothAmsMathEnvironmentsForLineno{equation}
\patchBothAmsMathEnvironmentsForLineno{align}
\patchBothAmsMathEnvironmentsForLineno{flalign}
\patchBothAmsMathEnvironmentsForLineno{alignat}
\patchBothAmsMathEnvironmentsForLineno{gather}
\patchBothAmsMathEnvironmentsForLineno{multline}
}
\def\text{\rm odd}{\text{\rm odd}}
\def\text{\rm even}{\text{\rm even}}
\def\text{\rm flex}{\text{\rm flex}}
\defU_\text{\rm bad}{U_\text{\rm bad}}
\newcommand{G(n,p)}{G(n,p)}
\newcommand{}{}
\def\hfill\scalebox{.6}{$\Box$}{\hfill\scalebox{.6}{$\Box$}}
\newenvironment{claimproof}[1][Proof]{
\renewcommand{}{\qedsymbol}
\renewcommand{\qedsymbol}{\hfill\scalebox{.6}{$\Box$}}
\begin{proof}[#1]
}{
\end{proof}
\renewcommand{\qedsymbol}{}
}
\def\ \text{and}\ {\ \text{and}\ }
\def\quad\text{and}\quad{\quad\text{and}\quad}
\def\qquad\text{and}\qquad{\qquad\text{and}\qquad}